Open GooLey1025 opened 4 days ago
2024-11-17 23:05:42.305327: Successfully ran: "bash -c set -eo pipefail && gaf2paf /public/home/cszx_huangxh/qiujie/collabrators/gulei/maize_graph_pangenome/graph_construction/MC/tmp/toilwf-e8f3fb951a82514fbbc18231b336ac71/ad86/job/tmpqpsakgvk/Zm_Ki3_REFERENCE_NAM_1_0.0.gaf.unstable -l /public/home/cszx_huangxh/qiujie/collabrators/gulei/maize_graph_pangenome/graph_construction/MC/tmp/toilwf-e8f3fb951a82514fbbc18231b336ac71/ad86/job/tmpqpsakgvk/mg.gfa.node_lengths.tsv | awk 'BEGIN{OFS=" "} {$6="id=_MINIGRAPH_|"$6; print}'" in 7.8689 seconds with job-memory 158.7 Gi
Issued job 'merge_pafs' kind-merge_pafs/instance-fw0pnqbk v1 with job batch system ID: 89 and disk: 2.0 Gi, memory: 2.0 Gi, cores: 1, accelerators: [], preemptible: False
Issued job 'merge_pafs' kind-merge_pafs/instance-rjy6w5we v1 with job batch system ID: 90 and disk: 2.0 Gi, memory: 2.0 Gi, cores: 1, accelerators: [], preemptible: False
2024-11-17 23:09:44.455001: Running the command: "bgzip /public/home/cszx_huangxh/qiujie/collabrators/gulei/maize_graph_pangenome/graph_construction/MC/tmp/toilwf-e8f3fb951a82514fbbc18231b336ac71/57fe/job/tmp8sv3xbob.tmp --threads 1"
Got message from job at time 11-17-2024 23:11:04: Job used more disk than requested. For CWL, consider increasing the outdirMin requirement, otherwise, consider increasing the disk requirement. Job 'merge_pafs' kind-merge_pafs/instance-rjy6w5we v1 used 601.53% disk (12.0 GiB [12917855744B] used, 2.0 GiB [2147483648B] requested).
2024-11-17 23:15:10.776809: Successfully ran: "bgzip /public/home/cszx_huangxh/qiujie/collabrators/gulei/maize_graph_pangenome/graph_construction/MC/tmp/toilwf-e8f3fb951a82514fbbc18231b336ac71/57fe/job/tmp8sv3xbob.tmp --threads 1" in 325.4528 seconds
Got message from job at time 11-17-2024 23:15:19: Job used more disk than requested. For CWL, consider increasing the outdirMin requirement, otherwise, consider increasing the disk requirement. Job 'merge_pafs' kind-merge_pafs/instance-fw0pnqbk v1 used 413.79% disk (8.3 GiB [8886103552B] used, 2.0 GiB [2147483648B] requested).
Issued job 'extract_paf_from_gfa' kind-extract_paf_from_gfa/instance-j0mlg27n v1 with job batch system ID: 91 and disk: 17.5 Gi, memory: 17.5 Gi, cores: 1, accelerators: [], preemptible: False
2024-11-17 23:18:57.600577: Running the command: "rgfa2paf /public/home/cszx_huangxh/qiujie/collabrators/gulei/maize_graph_pangenome/graph_construction/MC/tmp/toilwf-e8f3fb951a82514fbbc18231b336ac71/41c7/job/tmphdih1ngh/B73.pangenome.sv.gfa -T id=_MINIGRAPH_| -P id=Zmays_833_Zm_B73_REFERENCE_NAM_5_0| -i /public/home/cszx_huangxh/qiujie/collabrators/gulei/maize_graph_pangenome/graph_construction/MC/tmp/toilwf-e8f3fb951a82514fbbc18231b336ac71/41c7/job/tmphdih1ngh/B73.pangenome.sv.gfa.tofilter.paf"
2024-11-17 23:23:55.646828: Successfully ran: "rgfa2paf /public/home/cszx_huangxh/qiujie/collabrators/gulei/maize_graph_pangenome/graph_construction/MC/tmp/toilwf-e8f3fb951a82514fbbc18231b336ac71/41c7/job/tmphdih1ngh/B73.pangenome.sv.gfa -T id=_MINIGRAPH_| -P id=Zmays_833_Zm_B73_REFERENCE_NAM_5_0| -i /public/home/cszx_huangxh/qiujie/collabrators/gulei/maize_graph_pangenome/graph_construction/MC/tmp/toilwf-e8f3fb951a82514fbbc18231b336ac71/41c7/job/tmphdih1ngh/B73.pangenome.sv.gfa.tofilter.paf" in 297.5013 seconds
Issued job 'merge_pafs' kind-merge_pafs/instance-0rzveep5 v1 with job batch system ID: 92 and disk: 17.5 Gi, memory: 2.0 Gi, cores: 1, accelerators: [], preemptible: False
Issued job 'filter_paf_deletions' kind-filter_paf_deletions/instance-d573j9aa v1 with job batch system ID: 93 and disk: 140.2 Gi, memory: 525.8 Gi, cores: 6, accelerators: [], preemptible: False
Issued job 'zip_gz' kind-zip_gz/instance-ybejc955 v1 with job batch system ID: 94 and disk: 17.5 Gi, memory: 2.0 Gi, cores: 1, accelerators: [], preemptible: False
2 jobs are running, 0 jobs are issued and waiting to run
2024-11-17 23:31:21.568863: Running the command: "gzip -c mg.paf.unfiltered"
2024-11-17 23:33:26.262559: Running the command: "vg convert -r 0 -g /public/home/cszx_huangxh/qiujie/collabrators/gulei/maize_graph_pangenome/graph_construction/MC/tmp/toilwf-e8f3fb951a82514fbbc18231b336ac71/1469/job/tmp_usgv341/mg.gfa -p -T /public/home/cszx_huangxh/qiujie/collabrators/gulei/maize_graph_pangenome/graph_construction/MC/tmp/toilwf-e8f3fb951a82514fbbc18231b336ac71/1469/job/tmp_usgv341/mg.gfa.trans"
2024-11-17 23:40:04.769674: Successfully ran: "gzip -c mg.paf.unfiltered" in 522.6991 seconds
2024-11-17 23:51:59.286151: Successfully ran: "vg convert -r 0 -g /public/home/cszx_huangxh/qiujie/collabrators/gulei/maize_graph_pangenome/graph_construction/MC/tmp/toilwf-e8f3fb951a82514fbbc18231b336ac71/1469/job/tmp_usgv341/mg.gfa -p -T /public/home/cszx_huangxh/qiujie/collabrators/gulei/maize_graph_pangenome/graph_construction/MC/tmp/toilwf-e8f3fb951a82514fbbc18231b336ac71/1469/job/tmp_usgv341/mg.gfa.trans" in 1111.7808 seconds with job-memory 525.8 Gi
This error seems to stem from an invalid PAF file, and does not appear to be directly linked to system resources.
I'm not too sure how to debug this -- are you able to share your input data so I can try to reproduce it?
Thanks for your reply. I am not sure how to transfer the large input data to you. Could you give me some tips on the most convenient way to share it?
My command:
/usr/bin/time -v -o time.log cactus-pangenome ./js $ref.pangenome.list --outDir ./$ref.pangenome --outName $ref.pangenome --reference $reference --vcf --filter $N --haplo --giraffe clip full filter --gbz clip filter full --gfa clip full filter --xg --chrom-vg --odgi --chrom-og --viz --draw --logFile ./$ref.pangenome.log --workDir ./tmp --doubleMem true
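Since cactus-pangenome is a Toil workflow, Toil's generic resource options (`--defaultDisk`, `--defaultMemory`, `--maxMemory`) may let you raise the per-job requests directly instead of relying only on `--doubleMem` retries. This is a hedged sketch, not a tested recommendation: the sizes are illustrative, `ref=B73` is a hypothetical stand-in for the `$ref` variable above, and you should confirm the options against `cactus-pangenome --help` for your Cactus version:

```shell
# Sketch only: assumes cactus-pangenome passes through Toil's generic
# resource options (--defaultDisk, --defaultMemory, --maxMemory).
# The sizes below are illustrative, not tuned recommendations.
ref=B73   # hypothetical value for the $ref variable used above
cmd="cactus-pangenome ./js ${ref}.pangenome.list \
  --outDir ./${ref}.pangenome --outName ${ref}.pangenome \
  --defaultDisk 32Gi --defaultMemory 64Gi --maxMemory 1800Gi \
  --workDir ./tmp --logFile ./${ref}.pangenome.log"
echo "$cmd"
```

Keeping `--maxMemory` below the node's physical 2 TB leaves headroom for the batch system and avoids the scheduler issuing jobs it can never place.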
I have tested this on other, smaller pangenome FASTA inputs with no error, but the error appeared when I ran it on the maize pangenome. Perhaps different inputs have different resource demands, and memory is the issue here. Can someone give me suggestions on how to set the resource requests? (My single server node has 2 TB of memory in total.)
I would appreciate any reply! Detailed output log: