Closed: l-modolo closed this issue 3 years ago.
We have also hit this issue on our cluster using Conda, where users outside of our group do not have write permissions on the shared cacheDir, so they are unable to create a lock file and run our workflow(s). Just wanted to add that we also see this issue using Singularity with a shared cacheDir. Happy to open a separate issue if needed.
Since this was closed, are there any known workarounds? We use shared environments as well and a typical user will not have access to the conda env directory. Is it strictly necessary to create this lock file?
Actually, this should be solved. Downloading the Singularity image manually should bypass the issue.
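For example, something along these lines should do it (Singularity 3.x syntax; the image URL and file name are placeholders, not from this issue, and Nextflow derives the cached file name from the container URL, so check an existing file in your cacheDir for the exact naming pattern it expects):

```bash
# Pre-pull the image into the shared cacheDir so Nextflow finds it on disk
# and skips pulling it itself. Image URL and file name are placeholders.
cd /path/to/singularity_cacheDir
singularity pull quay.io-biocontainers-fastqc-0.11.9--0.img \
    docker://quay.io/biocontainers/fastqc:0.11.9--0
```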
Ah, in my case I am not using Docker/Singularity. I installed Nextflow via conda and want to use it from its own environment; when running the pipeline, Nextflow should load the other conda environment that contains all the tools. I wanted to avoid installing Nextflow in the same environment in case of version conflicts with already installed packages. Should I open a new issue?
Sorry, I've missed the issue. This looks more like a Conda problem, though. Can you share the `.nextflow.log` file created by the failed execution?
Hi @pditommaso, I just thought I'd chime in to hopefully clear things up. The issue is simply this: when Nextflow first starts up, it tries to create a lockfile in the conda or singularity cacheDirs even if the condaEnv or imageUrl already exists on disk:
https://github.com/nextflow-io/nextflow/blob/v20.08.1-edge/modules/nextflow/src/main/groovy/nextflow/conda/CondaCache.groovy#L220
https://github.com/nextflow-io/nextflow/blob/v20.08.1-edge/modules/nextflow/src/main/groovy/nextflow/container/SingularityCache.groovy#L179
The problem, I think, is actually here:
https://github.com/nextflow-io/nextflow/blob/v20.08.1-edge/modules/nextflow/src/main/groovy/nextflow/conda/CondaCache.groovy#L303-L306
https://github.com/nextflow-io/nextflow/blob/v20.08.1-edge/modules/nextflow/src/main/groovy/nextflow/container/SingularityCache.groovy#L244-L272
When the workflow first starts, condaPrefixPaths and localImageNames are empty. I think these should instead be populated with any condaEnvs or imageUrls required to run the workflow. To be able to run multiple instances of Nextflow concurrently, this process will need to wait for all required lockfiles in the cacheDir to be cleaned up before adding them to the condaPrefixPaths and localImageNames HashMaps.
@yzhernand The workaround I have is to create a new directory and then create symbolic links in it to each condaEnv or imageUrl. For example (note the dot `.` at the end of the last line):
```bash
mkdir my_conda_cachedir && cd my_conda_cachedir
ln -s /path/to/conda_cacheDir/env-* .
```
The same for Singularity:
```bash
mkdir my_singularity_cachedir && cd my_singularity_cachedir
ln -s /path/to/singularity_cacheDir/*.img .
```
Then set your `conda.cacheDir` and `singularity.cacheDir` configuration to point to the new cache directories:
```groovy
conda {
    cacheDir = '/path/to/my_conda_cachedir'
}

singularity {
    cacheDir = '/path/to/my_singularity_cachedir'
}
```
https://www.nextflow.io/docs/latest/conda.html#advanced-settings
https://www.nextflow.io/docs/latest/singularity.html#advanced-settings
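Assuming the snippet above is saved in, say, shared-cache.config (the file name is arbitrary), each run can then pick it up explicitly:

```bash
# Nextflow can now write its lock files into my_*_cachedir, which you own,
# while the symlinks still resolve to the shared, read-only environments/images.
nextflow run <pipeline> -c shared-cache.config
```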
May be related to #1819
Bug report
Expected behavior and actual behavior
When running Nextflow with the conda option on a shared conda installation, read and execute permissions should be enough to load and run shared conda environments.
Instead, Nextflow tries to create a `.lock` file in the `conda/envs/` directory, where the user has no write permission.

Steps to reproduce the problem
Since you need a conda installation without write permission, here is a Docker setup to reproduce the bug:
Build the Docker image:
Launch it:
Run the faulty pipeline:
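The original Dockerfile and commands are not reproduced here, but the scenario the container sets up boils down to something like the following (all paths, names, and the pipeline are illustrative assumptions, not the actual reproduction):

```bash
# A shared conda installation owned by root; other users have only read/execute.
ls -ld /opt/miniconda3/envs
# drwxr-xr-x ... root root ... /opt/miniconda3/envs

# Running a pipeline whose processes declare a conda environment then fails at
# startup: Nextflow tries to create a .lock file in a directory the user cannot
# write to, even when the required environment already exists there.
nextflow run main.nf
```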
Program output
Environment
Additional context
The bug was originally encountered on a computing cluster with a shared miniconda3 installation, not in the Docker dummy context above.