Open AdrianZrm opened 1 year ago
Thank you for your report, Adrian. We will have a look at this at the earliest opportunity. I presume this was not a standard NCBI-provided example input, correct?
Thanks for your feedback.
Yes indeed, it's not a provided example input. However, it is a genome from a Helicobacter pylori strain that was previously annotated successfully multiple times with PGAP on different machines.
Thanks.

> Unfortunately, it doesn't matter whether we change the number of cpu and memory used or not.

Would you mind posting the range of CPU and memory parameters that you varied?
Sure. From what I remember, we tried:

- 24 CPUs with 6 GB of memory per CPU
- 16 CPUs with 6 GB of memory per CPU
- 12 CPUs with 6 GB of memory per CPU
- 1 CPU with 6 GB of memory per CPU

And since the options --mem-per-cpu and --mem are mutually exclusive, we also tried:

- 128 GB total memory with 12 CPUs
- 128 GB total memory with 1 CPU
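The combinations above can be sketched as Slurm batch headers like the following. This is a minimal illustration of the two mutually exclusive memory styles; the job name, paths, and the pgap.py invocation are assumptions, not the actual submission script from this thread:

```shell
#!/bin/bash
#SBATCH --job-name=pgap
#SBATCH --cpus-per-task=16
#SBATCH --mem-per-cpu=6G        # per-CPU style; must NOT be combined with --mem
##SBATCH --mem=128G             # alternative total-memory style (kept commented out)

# Hypothetical invocation; adjust paths to your installation.
./pgap.py --no-self-update --no-internet -c "$SLURM_CPUS_PER_TASK" \
    -o output_dir input.yaml -D singularity
```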
Thanks. Could you please confirm that in all cases the first occurrence of permanentFail was on "[job actual]"?
Yes, it always encounters a SIGBUS error. When the command line:
INFO [job actual] /pgap/output/debug/tmp-outdir/_xijgzoj$ gp_makeblastdb \
-nogenbank \
-asn-cache \
/pgap/output/debug/tmpdir/hvz6tkzg/stg14115606-2a1b-436e-b560-03bfa31712ea/sequence_cache \
-dbtype \
nucl \
-fasta \
/pgap/output/debug/tmpdir/hvz6tkzg/stg54c6e9e1-e346-48f0-a1a0-90ef11d7d728/adaptor_fasta.fna \
-found-ids-output \
found_ids.txt \
-found-ids-output-manifest \
found_ids.mft \
-db \
blastdb \
-output-manifest \
blastdb.mft \
-title \
'BLASTdb created by GPipe'
is executed, this error is thrown:
Bus error (Nonexisting physical address [0x7fe81040b090])
[2023-07-17 14:33:50] INFO [job actual] Max memory used: 49MiB
[2023-07-17 14:33:50] WARNING [job actual] was terminated by signal: SIGBUS
[2023-07-17 14:33:50] ERROR [job actual] Job error:
("Error collecting output for parameter 'found_ids': pgap/progs/gp_makeblastdb.cwl:105:15: Did not find output file with glob pattern: ['found_ids.txt'].", {})
[2023-07-17 14:33:50] WARNING [job actual] completed permanentFail
[2023-07-17 14:33:50] DEBUG [job actual] outputs {}
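For context, a SIGBUS like this is the classic symptom of touching a memory-mapped page whose backing file has shrunk or whose filesystem (e.g. a full tmpfs, or flaky shared storage under a container) can no longer supply the page. This is only one plausible mechanism, not a confirmed diagnosis of the PGAP failure; the sketch below reproduces the general effect in a throwaway child process, not PGAP itself:

```python
import os
import signal
import subprocess
import sys
import tempfile

# Child process: maps one page of a file, shrinks the file, then touches the page.
CHILD = r"""
import mmap, os, sys
path = sys.argv[1]
with open(path, "r+b") as f:
    mm = mmap.mmap(f.fileno(), 4096)   # map one page backed by the file
    os.ftruncate(f.fileno(), 0)        # file shrinks underneath the live mapping
    x = mm[0]                          # touching the vanished page raises SIGBUS
"""

def demo():
    fd, path = tempfile.mkstemp()
    os.write(fd, b"\0" * 4096)         # one page of backing data
    os.close(fd)
    proc = subprocess.run([sys.executable, "-c", CHILD, path])
    os.unlink(path)
    return proc.returncode             # on Unix, -N means "killed by signal N"

if __name__ == "__main__":
    print("child return code:", demo(), "(-SIGBUS is", -int(signal.SIGBUS), ")")
```

On Linux the child exits with return code `-signal.SIGBUS`, mirroring the "was terminated by signal: SIGBUS" line in the cwltool log.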
@AdrianZrm, are you able to share your input assembly, HP_otzi.fna?
Hello George @george-coulouris ,
I am able to share my input assembly, but I double-checked, and we can't even pass the test genome "ASM2732v1" (Mycoplasmoides genitalium G37) on the cluster. I'm afraid the problem is not related to our input assembly...
Either:

/pathto/pgap.py --no-self-update -n --no-internet -d -c 16 -o Mycoplasma -g ASM2732v1.annotation.nucleotide.1.fasta -s 'Mycoplasmoides genitalium' -D singularity

or:

/pathto/pgap.py --no-self-update -n --no-internet -d -c 16 -o /paththo/PGAP_RESULTS/Mycoplasma /pathto/PGAP_GENOMES/Mycoplasma/input.yaml -D singularity

gives the same output (cwltool.log):
Bus error (Nonexisting physical address [0x7fc357d4e090])
[2023-08-21 09:31:25] INFO [job actual] Max memory used: 44MiB
[2023-08-21 09:31:25] WARNING [job actual] was terminated by signal: SIGBUS
[2023-08-21 09:31:25] ERROR [job actual] Job error:
("Error collecting output for parameter 'found_ids': pgap/progs/gp_makeblastdb.cwl:105:15: Did not find output file with glob pattern: ['found_ids.txt'].", {})
[2023-08-21 09:31:25] WARNING [job actual] completed permanentFail
We're checking with some other labs that managed to get PGAP working on their clusters with Singularity to see what our issue could be. I'll follow up here if we find anything on our side.
Regards
Thanks for the update. We haven't tested on Debian 12 yet, so we'll try that on our end as well.
Hello,
I'm trying to run PGAP on an HPC cluster using Singularity + Slurm, and I'm running into trouble.
While PGAP installs and runs fine with our "test genome" on the main machine that dispatches Slurm jobs to the HPC nodes, it crashes when we submit our PGAP script with Slurm to any node via this machine...
The error I'm experiencing seems to be a memory-related issue. Here is the part of the cwltool.log file where the problem is described:

Bus error (Nonexisting physical address [0x7feb076e6090])

Here is the slurm script we use to submit our job to the nodes:

Unfortunately, it doesn't matter whether we change the number of cpu and memory used or not.
The HPC is running Debian GNU/Linux 12 (bookworm), Singularity via Apptainer version 1.1.9-1.el9, and slurm-wlm 22.05.8. Please find attached the cwltool.log along with the tmp-outdir folder: tmp-outdir.zip, cwltool.log
I can't use Podman or Docker on the cluster. Do you have any ideas or hints as to what I can do to make this work?
Best regards, Adrian
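For anyone narrowing down a SIGBUS like the one in this thread, a few quick sanity checks on a compute node can rule out the usual suspects such as exhausted scratch space or strict memory limits. The mount points and paths below are common Linux defaults, assumed here, and may differ on your cluster:

```shell
# Run on a compute node (e.g. via: srun --pty bash). Paths are assumptions.
df -P /tmp /dev/shm                  # scratch and shared-memory space not full?
ulimit -v                            # virtual-memory limit; "unlimited" is typical
cat /proc/sys/vm/overcommit_memory   # 2 = strict overcommit, can surface as faults
```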