Closed jdiazsupsi closed 3 months ago
Hi @jdiazsupsi
Sorry that you are experiencing issues with the workflow. We'll get this sorted ASAP
Hit this issue too...
@nrhorner, I'm after your help and support on this one too.
Many thanks
Matt
Hi @mdhitch
Thanks for letting us know. We are working on getting a fix out for this issue. I'll let you know when it's ready.
Neil
Hello,
I have the same issue (12 samples to analyze...).
To solve this temporarily, you can edit the nextflow.config file of the in-error workflow with this:

```groovy
executor.$local.memory = "10 GB"

process {
    withName: 'makeReport' {
        memory = 10.GB
    }
}
```
Of course, the 10 GB value can be adapted to your available memory. Note that the memory set in the process block cannot exceed the value specified by the `local.memory` executor setting.
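As an alternative to editing the workflow's own config, the same override can be saved in a standalone file and layered onto the run with Nextflow's `-c` option (the file name `custom.config` below is just an example, not anything the workflow expects):

```shell
# Write the memory override to a standalone config file
# (the name "custom.config" is arbitrary)
cat > custom.config <<'EOF'
executor.$local.memory = "10 GB"

process {
    withName: 'makeReport' {
        memory = 10.GB
    }
}
EOF

# Layer it over the workflow's defaults with -c, e.g.
# (command shown for illustration, not executed here):
# nextflow run epi2me-labs/wf-transcriptomes ... -c custom.config

echo "config written"
```

Settings passed with `-c` are merged on top of the workflow's bundled configuration, so this avoids modifying files inside the cached pipeline directory.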
Thanks a lot @audrey-gibert!
Modifying the nextflow.config as you said worked and the report was generated properly. I monitored the docker container with `docker stats` and saw the RAM increase slowly to 4.2 GB. Then it exited successfully.
Hi @jdiazsupsi, thanks for reporting that; your feedback is useful to us.
This should be fixed in the next release.
Hi, this should now be fixed in the latest release v1.1.0
Closing as this is now released.
Operating System
Ubuntu 22.04
Other Linux
No response
Workflow Version
v1.0.0-gc66a485
Workflow Execution
Command line
EPI2ME Version
No response
CLI command run
nextflow run epi2me-labs/wf-transcriptomes --fastq myfiles/fastq --ref_genome $ref --transcriptome-source reference-guided --ref_annotation $annot_gtf --direct_rna --de_analysis --sample_sheet sample_sheet.csv --threads 20 -profile standard
Workflow Execution - CLI Execution Profile
standard (default)
What happened?
Hi, I have been trying to run the workflow on six bacterial samples from a novel species, and everything runs smoothly until the step that creates the report, where the docker container runs out of memory and gets killed. I have been monitoring resource usage with `docker stats`, and indeed memory goes up until it hits the 2 GB limit (as specified in the main.nf script: https://github.com/epi2me-labs/wf-transcriptomes/blob/c66a48525feb2ac4e8896776257715ba2b09a21a/main.nf#L381C1-L381C1). Is there a simple way to run the pipeline while changing the 2 GB value set in main.nf? I am not sure what exactly makes the memory go up for this process. The test data ran through with no issue, so it could be something specific to my dataset.
This issue could also be related to this closed one: https://github.com/epi2me-labs/wf-transcriptomes/issues/22. Would it make sense to increase the default value in the pipeline to a higher one, to cover more use cases? After all, the pipeline requirements in the README specify a minimum of 16 GB.
Thanks for developing the workflow and for the support! Juan.
Relevant log output
Application activity log entry
No response