nextflow-io / nextflow

A DSL for data-driven computational pipelines
http://nextflow.io
Apache License 2.0

Cleanup directive does not work when `bucket-dir` is specified. #5373

Open · siddharthab opened this issue 1 month ago

siddharthab commented 1 month ago

Bug report

Expected behavior and actual behavior

When `cleanup = true` is set for a workflow that specifies a `bucket-dir` and has files that need to be cleaned up, the cleanup operation fails.

Steps to reproduce the problem

main.nf:

process STAGE {
  container 'ubuntu'
  stageInMode 'copy'  // copy the input into the task dir so the staged file can be emitted as the output
  input:
  path x
  output:
  path "${x.name}"
  script:
  // no-op script: the copied-in input itself serves as the output
  """
  """
}

workflow {
  ch = Channel.fromPath('foo')  // 'foo' is an existing local file
  STAGE(ch).view()
}

nextflow.config:

cleanup = true
process {
    executor = 'google-batch'
}
google {
  ...
}
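
The work directory itself was given on the command line via the `-bucket-dir` option, roughly as follows (the bucket path is the one that appears in the log below; the exact command is approximate):

nextflow run main.nf -bucket-dir 'gs://moonwalkbio-scratch/nextflow-work'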

Program output

Relevant part of the log file:

Oct-04 20:31:53.079 [Task monitor] DEBUG n.processor.TaskPollingMonitor - Task completed > TaskHandler[id: 1; name: STAGE (1); status: COMPLETED; exit: 0; error: -; workDir: gs://moonwalkbio-scratch/nextflow-work/4d/c3a765b4166bafdb4c46ede2636384]
Oct-04 20:31:53.210 [main] DEBUG nextflow.Session - Session await > all processes finished
Oct-04 20:32:01.544 [Task monitor] DEBUG n.processor.TaskPollingMonitor - <<< barrier arrives (monitor: google-batch) - terminating tasks monitor poll loop
Oct-04 20:32:01.545 [main] DEBUG nextflow.Session - Session await > all barriers passed
Oct-04 20:32:01.552 [main] DEBUG nextflow.util.ThreadPoolManager - Thread pool 'TaskFinalizer' shutdown completed (hard=false)
Oct-04 20:32:01.564 [main] DEBUG n.trace.WorkflowStatsObserver - Workflow completed > WorkflowStats[succeededCount=1; failedCount=0; ignoredCount=0; cachedCount=0; pendingCount=0; submittedCount=0; runningCount=0; retriesCount=0; abortedCount=0; succeedDuration=1s; failedDuration=0ms; cachedDuration=0ms;loadCpus=0; loadMemory=0; peakRunning=1; peakCpus=1; peakMemory=0; ]
Oct-04 20:32:01.841 [main] DEBUG nextflow.cache.CacheDB - Closing CacheDB done
Oct-04 20:32:01.850 [main] INFO  org.pf4j.AbstractPluginManager - Stop plugin 'nf-google@1.13.2-patch1'
Oct-04 20:32:01.850 [main] DEBUG nextflow.plugin.BasePlugin - Plugin stopped nf-google
Oct-04 20:32:01.850 [main] DEBUG nextflow.plugin.PluginsFacade - Using Default plugin manager
Oct-04 20:32:01.852 [main] INFO  o.pf4j.DefaultPluginStatusProvider - Enabled plugins: []
Oct-04 20:32:01.852 [main] INFO  o.pf4j.DefaultPluginStatusProvider - Disabled plugins: []
Oct-04 20:32:01.852 [main] INFO  org.pf4j.DefaultPluginManager - PF4J version 3.12.0 in 'deployment' mode
Oct-04 20:32:01.853 [main] DEBUG nextflow.cache.CacheFactory - Using Nextflow cache factory: nextflow.cache.DefaultCacheFactory
Oct-04 20:32:01.878 [main] DEBUG nextflow.plugin.PluginsFacade - Using Default plugin manager
Oct-04 20:32:01.878 [main] WARN  nextflow.file.FileHelper - Unable to start plugin 'nf-google' required by gs://moonwalkbio-scratch/nextflow-work/4d/c3a765b4166bafdb4c46ede2636384
java.lang.NullPointerException: Cannot invoke "nextflow.plugin.CustomPluginManager.getPlugin(String)" because "this.manager" is null
        at nextflow.plugin.PluginsFacade.isStarted(PluginsFacade.groovy:345)
        at nextflow.plugin.PluginsFacade.startIfMissing(PluginsFacade.groovy:447)
        at nextflow.plugin.Plugins.startIfMissing(Plugins.groovy:87)
        at nextflow.file.FileHelper.autoStartMissingPlugin(FileHelper.groovy:367)
        at nextflow.file.FileHelper.asPath0(FileHelper.groovy:347)
        at nextflow.file.FileHelper.asPath(FileHelper.groovy:336)
        at nextflow.Session$_cleanup_closure23.doCall(Session.groovy:1178)
        at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
        at java.base/java.lang.reflect.Method.invoke(Method.java:580)
        at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:343)
        at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:328)
        at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:279)
        at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1007)
        at groovy.lang.Closure.call(Closure.java:433)
        at nextflow.cache.CacheDB.eachRecord(CacheDB.groovy:213)
        at nextflow.Session.cleanup(Session.groovy:1174)
        at nextflow.script.ScriptRunner.shutdown(ScriptRunner.groovy:262)
        at nextflow.script.ScriptRunner.execute(ScriptRunner.groovy:146)
        at nextflow.cli.CmdRun.run(CmdRun.groovy:372)
        at nextflow.cli.Launcher.run(Launcher.groovy:503)
        at nextflow.cli.Launcher.main(Launcher.groovy:657)
Oct-04 20:32:01.879 [main] DEBUG nextflow.plugin.PluginsFacade - Using Default plugin manager
Oct-04 20:32:01.879 [main] WARN  nextflow.Session - Failed to cleanup work dir: /Users/sid.bagaria/temp/work
Oct-04 20:32:01.880 [main] DEBUG nextflow.cache.CacheDB - Closing CacheDB done
Oct-04 20:32:01.881 [main] DEBUG nextflow.util.ThreadPoolManager - Thread pool 'FileTransfer' shutdown completed (hard=false)
Oct-04 20:32:01.881 [main] DEBUG nextflow.script.ScriptRunner - > Execution complete -- Goodbye

Environment

Additional context

This is not really critical, as users can always specify a separate path for each workflow and clean up manually after a successful run.

bentsherman commented 1 month ago

Duplicate of #2683

bentsherman commented 1 month ago

The current cleanup implementation doesn't work with remote storage.

Check out nf-boost for a more robust cleanup (something we also plan to merge into Nextflow eventually).

siddharthab commented 1 month ago

Thank you. I had not noticed the duplicate bug report.

I tried your suggestion by adding the following to nextflow.config, keeping everything else the same as in the previous example:

plugins {
    id 'nf-boost'
}
boost {
    cleanup = true
}

It does not seem to have worked fully: staged files and .command.* files were still present at the end. The log file is attached, but I don't think it contains any useful details.

Also, when I changed my larger pipeline to use boost.cleanup and reran it, I got this crash:

Oct-05 04:41:43.672 [Task monitor] DEBUG n.processor.TaskPollingMonitor - <<< barrier arrives (monitor: google-batch) - terminating tasks monitor poll loop
Oct-05 04:41:43.675 [main] ERROR nextflow.cli.Launcher - @unknown
java.lang.NullPointerException: Cannot invoke "java.util.Collection.toArray()" because "c" is null
        at java.base/java.util.ArrayList.addAll(ArrayList.java:752)
        at nextflow.boost.cleanup.CleanupObserver.onFlowBegin(CleanupObserver.groovy:151)
        at nextflow.Session.notifyFlowBegin(Session.groovy:1088)
        at nextflow.Session.fireDataflowNetwork(Session.groovy:500)
        at nextflow.script.ScriptRunner.run(ScriptRunner.groovy:247)
        at nextflow.script.ScriptRunner.execute(ScriptRunner.groovy:138)
        at nextflow.cli.CmdRun.run(CmdRun.groovy:372)
        at nextflow.cli.Launcher.run(Launcher.groovy:503)
        at nextflow.cli.Launcher.main(Launcher.groovy:657)

As noted in the summary, this is not critical; I just thought I would share my experience.

bentsherman commented 1 month ago

Good catch; the null pointer bug is easy to fix.

I'm still deciding how to handle the logs in the cleanup, but in general we recommend using a cleanup policy on the underlying filesystem or object storage to clean up things like logs and helper files.

The nf-boost cleanup is mainly intended to delete large intermediate files during the run, in order to prevent cost and storage overruns. But many users like to keep the logs, and deleting all of those little files is better handled by mechanisms like retention policies than by the pipeline run itself.
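
For example, on Google Cloud Storage (which this issue uses) a retention policy can be a one-rule lifecycle configuration. A minimal sketch, assuming the standard gsutil tooling; the 30-day window, the file name lifecycle.json, and the bucket name are placeholders:

{
  "rule": [
    {
      "action": {"type": "Delete"},
      "condition": {"age": 30}
    }
  ]
}

gsutil lifecycle set lifecycle.json gs://my-bucket

With this in place, any object in the bucket older than 30 days is deleted automatically, independently of any pipeline run.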

bentsherman commented 1 month ago

You also mentioned staged inputs. The problem is that it's difficult to know when a staged input is no longer needed. You might be able to do some DAG analysis to figure it out, but sometimes people use the same input file (e.g. a reference genome or AI model) across many tasks, and that complicates things. So this use case is also better covered by a retention policy for now.

siddharthab commented 1 month ago

Thank you. It actually makes sense that you may want to leave behind the command scripts and log files, and even the staged files. Cleaning up only the large intermediates makes it practical to reuse the same work dir for multiple workflows without having to worry about bloat over time, or to configure a TTL-like mechanism for cleanup, which is easy in the cloud but not so easy in local HPC environments. Maybe whenever the feature makes it in, this can be an explicit point in the documentation.

I will leave it to you to close the issue or keep it to track the null pointer bug. Thanks!

bentsherman commented 1 month ago

The HPC cluster at my university had a cleanup policy on the shared scratch storage: basically, a bot would periodically delete old files based on their last-modified timestamp. I don't know the particulars of how it was set up, but it worked well.
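
Something like the following is a plausible sketch of that kind of policy (hypothetical path and retention window; run periodically from cron or a scheduled job):

# delete scratch files that have not been modified in the last 30 days
find /scratch -type f -mtime +30 -delete

The same idea applies to a Nextflow work dir on shared scratch: old task directories age out on their own without the pipeline having to track them.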