Closed: Midnighter closed this issue 1 year ago
Hi, I just ran it using the same setup and config files as you and could not recreate your error. Do you have the recommended amount of RAM available? You may need to adjust your Docker settings to allow the recommended 8 GB of memory; you can do this in Docker Desktop. Alternatively, try using the conda or singularity profile.
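If it helps, switching profiles only requires the `-profile` option. A minimal sketch, assuming the bundled test data (the input path is a placeholder, not your actual reads):

```
# Run with the Singularity profile instead of the Docker-based "standard" profile
nextflow run epi2me-labs/wf-metagenomics \
    -profile singularity \
    --fastq test_data    # input path is a placeholder
```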
I have 120 GB of RAM available, yes, that is not the issue. I'll try it some more.
Hello, was this problem solved? I encountered the same issue while running the test data (but also with any other data). See the attached file for more details about the error.
The error occurs whether EPI2ME is run from the command line or through the GUI. EPI2ME is running on a GridION with 64 GB of memory, and Docker has access to all the resources.
Could you please help solve this issue? Thank you.
Thanks for letting us know; I am looking into it.
@magandBE Would you be able to try it with the parameter --source ncbi_16s_18s, which has been tested with the test_data and is a smaller set? I am trying to rule out an issue with the data.
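For example, something along these lines (the input path is a placeholder, not your actual data):

```
# Same kind of invocation, but with the smaller ncbi_16s_18s reference set
nextflow run epi2me-labs/wf-metagenomics \
    --fastq test_data \
    --source ncbi_16s_18s \
    -profile standard
```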
@sarahjeeeze Thank you for your reply, I tried with the indicated parameter and this reproduced the same error.
Hi, I have tried but have been unable to recreate your error. I notice your error says "Your local project version looks outdated - a different revision is available in the remote repository", so perhaps run nextflow pull epi2me-labs/wf-metagenomics
to update it, and then try running it again.
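For reference, you can also check which revision is cached locally; nextflow pull, nextflow info and the -r option are standard Nextflow commands (the input path below is a placeholder):

```
# Update the locally cached copy of the workflow
nextflow pull epi2me-labs/wf-metagenomics

# Show project information, including the available and currently selected revisions
nextflow info epi2me-labs/wf-metagenomics

# A specific revision can also be pinned at run time with -r
nextflow run epi2me-labs/wf-metagenomics -r v1.1.4 --fastq test_data
```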
Hi, I have run nextflow pull epi2me-labs/wf-metagenomics
and run the workflow again, but the same error continues to appear. The revision I used is v1.1.4, which is the same one pulled by nextflow pull.
The same error is obtained every time, whatever dataset, database, or config file is used. For a run with the dataset /data/magand/gmstd_pure_Sciensano_HQ10-500.fastq
and the PlusPF-8 database,
we looked in more detail at the temporary files to find the cause of the error.
Apparently a file related to my dataset cannot be read:
grid@GXB03465:/data/scratch/magand/tmp/a5/b9c63770483a597f107c9d47fd138d$ cat .command.err
Processing gmstd_pure_Sciensano_HQ10-500/gmstd_pure_Sciensano_HQ10-500.fastq
Warning: file 'gmstd_pure_Sciensano_HQ10-500/gmstd_pure_Sciensano_HQ10-500.fastq' cannot be read.
This is strange because when I look at my file, it is not empty and has read permission for everyone (see the attached file). Moreover, I see that many links to my data were created.
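A rough sketch of how readability can be checked both on the host and from inside a container (the generic ubuntu image here is only used to test the bind mount; it is not the workflow's own container):

```
# On the host: permissions and size of the input file
ls -l /data/magand/gmstd_pure_Sciensano_HQ10-500.fastq

# From inside a container, with the data directory bind-mounted as the workflow would;
# if this fails, it points at a mount/permission problem rather than the workflow itself
docker run --rm -v /data/magand:/data/magand ubuntu:22.04 \
    head -c 100 /data/magand/gmstd_pure_Sciensano_HQ10-500.fastq
```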
Hi @magandBE, thanks for the information. It might be an issue with Docker permissions. I would try following these instructions (just the manage-docker-as-a-non-root-user section) and then running the workflow again.
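The relevant steps from that section of the Docker documentation are roughly:

```
# Add your user to the docker group (the group usually already exists)
sudo groupadd docker
sudo usermod -aG docker $USER

# Pick up the new group membership without logging out
newgrp docker

# Confirm Docker works without sudo
docker run hello-world
```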
Hi @sarahjeeeze Thanks for your reply and sorry for the late answer. I followed the instructions to adapt the Docker permissions (which I think were already correct), but this still leads to the same error.
Hi, Thank you for using the workflow. Could you confirm if this issue has been solved? We'll close this ticket on the assumption things are now resolved.
I cannot successfully run the pipeline on my own or the test data. Both kraken2 and minimap2 fail to classify any sequences.
I downloaded the test data and confirmed that it contains 1000 reads. Then I ran the pipeline with a local config and a .params.yml file. The local config just contains:
tower.enabled = false
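For reference, a minimal sketch of how this kind of setup can be wired together; apart from tower.enabled = false, the file contents and input path below are illustrative assumptions rather than my exact files:

```
# local.config: only disables Tower reporting
cat > local.config <<'EOF'
tower.enabled = false
EOF

# .params.yml: illustrative only; apart from the fastq input, the real contents
# are not reproduced here
cat > .params.yml <<'EOF'
fastq: test_data
EOF

# Run with both files; -c and -params-file are standard Nextflow options
nextflow run epi2me-labs/wf-metagenomics \
    -c local.config \
    -params-file .params.yml \
    -profile standard
```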
When I run like this, kraken2 is the first to fail.
When I set kraken2 to false, the minimap2 step completes but the report fails. This last error is a bit suspicious since the pipeline runs with the standard profile and thus Docker...
I also tried running it with revision v1.1.4 and Nextflow 22.04.3; same errors.