siddharthab opened this issue 1 month ago
Duplicate of #2683
The current cleanup doesn't work with remote storage.
Check out nf-boost for a more robust cleanup (it's also something we plan to merge into Nextflow eventually).
Thank you. I did not notice the duplicate bug report.
I tried your suggestion by adding this to nextflow.config, keeping everything else the same as in the previous example:
```groovy
plugins {
    id 'nf-boost'
}

boost {
    cleanup = true
}
```
It does not seem to have worked fully. Staged files and .command.*
files were still there at the end. Log file attached, but I don't think it has any useful details.
Also, when I changed my larger pipeline to use boost.cleanup and reran it, I received this crash:
```
Oct-05 04:41:43.672 [Task monitor] DEBUG n.processor.TaskPollingMonitor - <<< barrier arrives (monitor: google-batch) - terminating tasks monitor poll loop
Oct-05 04:41:43.675 [main] ERROR nextflow.cli.Launcher - @unknown
java.lang.NullPointerException: Cannot invoke "java.util.Collection.toArray()" because "c" is null
	at java.base/java.util.ArrayList.addAll(ArrayList.java:752)
	at nextflow.boost.cleanup.CleanupObserver.onFlowBegin(CleanupObserver.groovy:151)
	at nextflow.Session.notifyFlowBegin(Session.groovy:1088)
	at nextflow.Session.fireDataflowNetwork(Session.groovy:500)
	at nextflow.script.ScriptRunner.run(ScriptRunner.groovy:247)
	at nextflow.script.ScriptRunner.execute(ScriptRunner.groovy:138)
	at nextflow.cli.CmdRun.run(CmdRun.groovy:372)
	at nextflow.cli.Launcher.run(Launcher.groovy:503)
	at nextflow.cli.Launcher.main(Launcher.groovy:657)
```
As noted in the summary, this is not important; I just thought I would share my experience.
Good catch; the null pointer bug is easy to fix.
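For reference, the trace shows ArrayList.addAll being handed a null collection inside CleanupObserver.onFlowBegin, so the fix is presumably just a null guard at that point. A minimal sketch of the pattern in Groovy (hypothetical names, not the actual nf-boost code):

```groovy
// Hypothetical sketch of the null-guard pattern, not the actual nf-boost code.
// The stack trace shows a null collection reaching addAll(), so fall back to
// an empty list before adding.
List<String> allOutputs = []
List<String> declaredOutputs = null      // stands in for whatever was null in onFlowBegin()

allOutputs.addAll(declaredOutputs ?: []) // Elvis operator avoids the NullPointerException
println allOutputs                       // prints []
```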
I'm still deciding how to handle the logs in the cleanup, but in general we recommend using a cleanup policy on the underlying filesystem or object storage to clean up things like logs and helper files.
The nf-boost cleanup is mainly intended to delete large intermediate files during the run, in order to prevent cost and storage overruns. But many users like to keep the logs, and deleting all of those little files is better handled by things like retention policies rather than the pipeline run.
You also mentioned staged inputs. The problem is that it's difficult to know when a staged input is no longer used. You might be able to do some DAG analysis to figure it out, but sometimes people use the same input file (e.g. a reference genome or AI model) across many tasks, which complicates things. So this use case is also better covered by a retention policy for now.
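For example, on Google Cloud Storage (which the google-batch run above implies), a bucket lifecycle rule can expire old objects automatically. A minimal sketch, assuming a hypothetical work bucket gs://my-nextflow-work and a 30-day window:

```json
{
  "rule": [
    {
      "action": {"type": "Delete"},
      "condition": {"age": 30}
    }
  ]
}
```

which could be applied with something like gsutil lifecycle set lifecycle.json gs://my-nextflow-work. S3 and Azure Blob offer equivalent lifecycle/expiration policies.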
Thank you. It actually makes sense that you may want to leave behind the command scripts and log files, and even the staged files. This would make it practical to reuse the same workdir for multiple workflows without worrying about bloat over time or having to configure a TTL-like mechanism for cleanup, which is easy in the cloud but not so easy in local HPC environments. Maybe whenever the feature makes it in, this can be an explicit point in the documentation.
I will leave it to you to close the issue or keep it to track the null pointer bug. Thanks!
The HPC cluster at my university had a cleanup policy on the shared scratch storage: basically, a bot would periodically delete old files based on their last-modified timestamp. I don't know the particulars of how it was set up, but it worked well.
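Presumably it was a scheduled job along these lines; a minimal sketch of what such a bot might run, with a placeholder path and a 30-day threshold (both hypothetical):

```bash
# Remove files under the scratch work directory that have not been modified in 30 days,
# then prune any directories left empty. Path and threshold are illustrative only.
find /scratch/$USER/nextflow-work -type f -mtime +30 -delete
find /scratch/$USER/nextflow-work -mindepth 1 -type d -empty -delete
```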
Bug report
Expected behavior and actual behavior
When `cleanup = true` for a workflow that has specified a `bucket-dir` and has files that need to be cleaned up, the cleanup operation fails.
Steps to reproduce the problem
main.nf:
nextflow.config:
Program output
Relevant part of the log file:
Environment
Additional context
This is not really important, as users can always specify separate paths for each workflow and clean up manually after a successful run.