pakiessling opened this issue 1 year ago

Hi, great work with the pipeline.
Unfortunately, my university does not allow Docker on the HPC, but they will whitelist and install Singularity/Apptainer images on request.
Can I just point them to https://hub.docker.com/u/labsyspharm and then change some lines in the Nextflow config to point to the container paths, or do I need to make further changes?
Thank you!
Hi @pakiessling,
Thank you for the kind words! We have a Singularity config available. You should be able to just do
nextflow run labsyspharm/mcmicro --in exemplar-001 -profile singularity
If your HPC team needs to whitelist individual Singularity images, the exact ones are listed under the `container:` and `version:` fields in https://github.com/labsyspharm/mcmicro/blob/master/config/defaults.yml
For example, the ASHLAR module lists `container: labsyspharm/ashlar` and `version: 1.17.0`, which corresponds to `singularity pull docker://labsyspharm/ashlar:1.17.0`.
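If it helps your HPC team to pre-fetch several modules at once, a small shell loop along these lines should do it; note that the image list below is only illustrative, and the authoritative container/version pairs are the ones in config/defaults.yml:

```bash
#!/usr/bin/env bash
# Pre-pull MCMICRO module images with Singularity.
set -euo pipefail

# Illustrative list only -- fill in the remaining container:version pairs
# from config/defaults.yml in the mcmicro repository.
images=(
  "labsyspharm/ashlar:1.17.0"   # from the example above
)

for img in "${images[@]}"; do
  singularity pull "docker://${img}"
done
```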
If the Singularity images are stored in a non-standard location, you can specify that by making a custom.config
file with the following line:
singularity.cacheDir = '/full/path/to/images'
and providing it to your run with
nextflow run labsyspharm/mcmicro --in exemplar-001 -profile singularity -c custom.config
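For reference, a minimal custom.config of that form might look like this (the path is a placeholder):

```
// custom.config
// Directory where Nextflow caches Singularity images converted from Docker.
// Images pulled by Nextflow are typically named after the Docker URI,
// e.g. labsyspharm-ashlar-1.17.0.img (exact naming may vary by Nextflow version).
singularity.cacheDir = '/full/path/to/images'
```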
Let me know if something doesn't make sense or if you run into problems.
@ArtemSokolov Perfect, I will get back to you if I run into trouble
Hi @ArtemSokolov, the university agreed to install the containers. They asked me if MCMICRO uses MPI for parallelization. I assume not?
Hi @pakiessling,
I suspect your cluster administrators are wondering how MCMICRO schedules its tasks. The pipeline is based on Nextflow, which supports a number of different executors. Most academic high-performance clusters use LSF, SGE or SLURM as job schedulers, so Nextflow should run without any issues. You can forward this page to them: https://www.nextflow.io/docs/latest/executor.html
-Artem
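If the cluster admins also want the individual pipeline tasks submitted through SLURM rather than run on the node where Nextflow was launched, the standard Nextflow way is to set the executor in the same custom.config; a minimal sketch (the partition name is a placeholder):

```
// custom.config (excerpt) -- submit each pipeline task as a SLURM job
process.executor = 'slurm'
process.queue    = 'your-partition'   // placeholder; use your cluster's partition name
```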
Thank you so much for discussing the above - this has been immensely helpful! I am running the pipeline on an HPC that supports SLURM, but it has no internet connection, so if I try to run MCMICRO I get "no route to host" errors at every step. We are also not allowed to run computationally heavy jobs from our login nodes (i.e., not submitted as SLURM jobs); such jobs are automatically killed.
I was trying to follow the above instructions and created a custom.config containing:
singularity.enabled = true
singularity.autoMounts = true
singularity.runOptions = '-C -H "$PWD"'
singularity.cacheDir = '/scicore/home/rueegg/boehm0002/miniconda3/envs/nextflow/work/singularity/'
However, I still receive the same error:

```
(nextflow) [boehm0002@sgi26 spatial_transcriptomics]$ nextflow run /scicore/home/rueegg/boehm0002/miniconda3/envs/nextflow/mcmicro-master --in ./data1 --params ./data1/myparams2.yml -profile singularity -c ./custom.config
N E X T F L O W ~ version 23.04.1
Launching `/scicore/home/rueegg/boehm0002/miniconda3/envs/nextflow/mcmicro-master/main.nf` [trusting_gutenberg] DSL2 - revision: 4408d4565a
executor > local (1)
[- ] process > illumination -
[- ] process > registration:ashlar -
[- ] process > background:backsub -
[- ] process > dearray:coreograph -
[- ] process > dearray:roadie:runTask -
[- ] process > segmentation:roadie:runTask -
[- ] process > segmentation:worker -
[bf/c72eba] process > segmentation:s3seg (1) [100%] 1 of 1, failed: 1 ✘
[- ] process > quantification:mcquant -
[- ] process > downstream:worker -
[- ] process > viz:autominerva -
ERROR ~ Error executing process > 'segmentation:s3seg (1)'
Caused by:
Process `segmentation:s3seg (1)` terminated with an error exit status (1)
Command executed:
python /app/S3segmenter.py --imagePath sect1z17cycle-01.tif --stackProbPath sect1z17cycle-01-pmap.tif --probMapChan 1 --probMapChan 1 --outputPath .
Command exit status:
1
Command output:
<urlopen error [Errno 113] No route to host>
Pixel size detection using ome-types failed
Command error:
/usr/local/lib/python3.10/site-packages/ome_types/_convenience.py:112: FutureWarning: The default XML parser will be changing from 'xmlschema' to 'lxml' in version 0.4.0. To silence this warning, please provide the `parser` argument, specifying either 'lxml' (to opt into the new behavior), or 'xmlschema' (to retain the old behavior).
d = to_dict(os.fspath(xml), parser=parser, validate=validate)
2023-08-09 14:51:40 | INFO | Resource 'XMLSchema.xsd' is already loaded (schemas.py:1234)
2023-08-09 14:51:40 | ERROR | Auto-detect pixel size failed, use `--pixelSize SIZE` to specify it (S3segmenter.py:72)
<urlopen error [Errno 113] No route to host>
Pixel size detection using ome-types failed
Work dir:
/scicore/home/rueegg/boehm0002/spatial_transcriptomics/work/bf/c72ebab11d2cecf269c28fba4f845f
Tip: you can try to figure out what's wrong by changing to the process work dir and showing the script file named `.command.sh`
-- Check '.nextflow.log' file for details
```
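Following the tip in the log, a quick way to inspect what actually ran in that task (`.command.sh` and `.command.log` are standard files Nextflow leaves in each task's work directory):

```bash
# Work directory path taken from the log above
cd /scicore/home/rueegg/boehm0002/spatial_transcriptomics/work/bf/c72ebab11d2cecf269c28fba4f845f
cat .command.sh    # the exact command Nextflow ran for this task
cat .command.log   # the task's captured output, if present
```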
When I do have internet access, I get an index error as discussed in the forum here. I certainly did something wrong when defining the custom.config - is there anything I need to change in the .yml file as well?
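Judging from the log, the failing step is the pixel-size auto-detection, which tries to go out to the network (hence the "No route to host"), and the error message itself suggests passing `--pixelSize` explicitly. A hedged sketch of how that might be forwarded to S3segmenter, assuming the parameter file uses MCMICRO's usual `options:` layout (the 0.325 value is a placeholder; substitute the actual pixel size of your images in microns):

```yaml
# myparams2.yml (relevant excerpt; layout assumed)
options:
  # Supply the pixel size directly so S3segmenter does not attempt
  # auto-detection, which appears to require network access.
  s3seg: --pixelSize 0.325   # placeholder value, in microns
```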