Closed: robsyme closed this issue 1 month ago
Not sure this is worth supporting. Normally we detect Fusion symlinks by checking whether the output file is also an input, which is cheap because the list of inputs is in memory. Detecting this case, however, would require checking for `.fusion.symlinks` files for every task, including subdirectories, because even though this example is simple, there is no way to know in general whether a task will link inputs to outputs. So you end up making a bunch of extra S3 calls for every task, even if you never use this pattern.
Instead, I would recommend that you either copy the file or declare the input file as an output directly (e.g. `output: path("jsons/id.${name}.json")`). I understand this might be more verbose compared to e.g. a single output directory containing a bunch of different files. It might become more readable once we have static types (#4553) and can output a record type representing a directory of diverse output files.
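As a hedged illustration of that recommendation (the process name and input are assumptions, not taken from the reporter's pipeline), a task could write the file at its declared output path directly instead of symlinking it there:

```nextflow
process MakeJson {
    input:
    val name

    output:
    // declare the file at its final path rather than creating a
    // symlink from an output directory back to an input file
    path "jsons/id.${name}.json"

    script:
    """
    mkdir -p jsons
    echo '{"name":"${name}"}' > jsons/id.${name}.json
    """
}
```

With the file written in place, the output is a regular file and the Fusion symlink question never arises.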
it's not even running for me
```
ERROR ~ Error executing process > 'MakeJson'

Caused by:
  No signature of method: nextflow.script.ScriptBinding.echo '{() is applicable for argument types: () values: [] -- Check script 't.nf' at line: 10

Source block:
  "echo '{"name":"$name"}' > id.${name}.json"

Tip: when you have fixed the problem you can continue the execution adding the option `-resume` to the run command line

 -- Check '.nextflow.log' file for details
```
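For what it's worth, the parse error above looks like the embedded double quotes of the JSON terminating the single-line Groovy string that holds the script. A hedged sketch of a version that should parse (the process layout is assumed, since the original `t.nf` is not shown):

```nextflow
process MakeJson {
    input:
    val name

    output:
    path "id.${name}.json"

    script:
    // A triple-quoted block keeps the JSON's inner double quotes
    // from closing the Groovy string early.
    """
    echo '{"name":"${name}"}' > id.${name}.json
    """
}
```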
However, this is introducing an increasing inconsistency between Fusion and non-fusion usage.
Normally this is not a problem because the OS knows how to handle symlinks, which obviously is not the case for Fusion.
I'm starting to think it should be the role of Fusion to concretize symlinks when referenced in the publishDir.
Now that we are introducing the task meta-comment, it should be an easier task. Adding @jordeu for visibility.
> should be the role of Fusion to concretize symlinks when referenced in the publishDir
I assume you mean the process outputs? Because we are decoupling the publishing from the task execution with the workflow publish
So if a process output turns out to be a symlink -- whether because it was also an input or it was linked by the task, or some other reason -- Fusion should upload a copy to S3. Then Nextflow doesn't need to check for forwarded inputs when publishing files
This is actually necessary with the workflow publish, because files published by the workflow don't know the task they came from, so they can't check if they are a forwarded input
No, I mean only publishing, because non-published output files will be used by Fusion in downstream tasks and, I guess, Fusion is able to handle them properly.
Well, if we use the workflow publish instead of publishDir then the task doesn't know which outputs will be published
What is "the workflow publish"?
Workflow publish (i.e. output) definition: https://github.com/nextflow-io/nextflow/pull/4784
I was going to summarize the hybrid solution we discussed on the call, but thinking through it now I realize that it won't work. The workflow output definition makes it difficult for a task to know whether an output file will be published.
I think the best thing we can do instead is (1) give Fusion the output file patterns and (2) Fusion should always upload a copy if a symlink is an output file. This will only happen when an output file is a symlink, which is relatively uncommon. We can also remove the Fusion symlink logic from Nextflow.
> I think the best thing we can do instead is (1) give Fusion the output file patterns and (2) Fusion should always upload a copy if a symlink is an output file. This will only happen when an output file is a symlink, which is relatively uncommon. We can also remove the Fusion symlink logic from Nextflow.
it makes sense to me
Giving Fusion the output patterns is a huge win because it provides the opportunity for Fusion to prioritize uploading particular files and potentially abandon the upload of non-output files when the task is complete.
Distinguishing between input, intermediate, and output files in the working directory opens the door to many optimizations.
But still, I'm not convinced that it is a good idea to "materialize" all the outputs as objects when they are symbolic links. Imagine, for example, that the output is a symbolic link to a folder with thousands of small files. Materializing all these thousands of files in an object store is a costly operation.
Keep in mind that it only needs to happen when the user creates such a link or forwards an input as an output. These cases are uncommon and can easily be avoided by writing the pipeline differently.
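To make the scope concrete, here is a hedged shell sketch of what "materializing" a symlinked output before upload could amount to (the `out/` directory and file names are invented for illustration; this is not Fusion's actual implementation):

```shell
# Invented setup: one real output file and one symlinked output.
mkdir -p out
echo '{"name":"demo"}' > out/real.json
ln -s real.json out/link.json

# Before upload, replace any output file that is a symlink with a
# real copy of its target, so the object store receives concrete
# bytes instead of a link path.
for f in out/*; do
  if [ -L "$f" ]; then
    target=$(readlink -f "$f")  # resolve the full link chain
    rm "$f"                     # drop the link itself
    cp "$target" "$f"           # materialize a concrete copy
  fi
done
```

As the discussion notes, doing this for a symlink pointing at a directory of thousands of small files would mean thousands of object-store writes, which is why restricting it to actual output files matters.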
Bug report
Given a simple workflow that creates a file
... and configuration that sets fusion:
Expected behavior and actual behavior
I would expect the contents of the file published at `./results/symlink.json` to include the JSON document, but instead it is a text file containing:
Notes
The `.fusion.symlinks` document in the task working directory of the PublishIdCard task contains the text `symlink.json`, but Nextflow doesn't seem to follow the symlink when publishing.