Open jiten-parmar opened 10 months ago
Hello, thank you for your feedback. Could you please provide detailed execution commands so that we can reproduce the issue?
I was performing haplotype calling with lush_hc after the lush alignment and lush BQSR steps. Below is the command I used:

mkdir -p ./outdir3
export LD_LIBRARY_PATH=./bin/LUSH_toolkit-HC:$LD_LIBRARY_PATH
./bin/LUSH_toolkit-HC/lush_hc \
  --pcr-indel-model NONE \
  --native-active-region-threads 3 \
  --native-main-spend-threads 16 \
  -I ./outdir3/SRR062634.sort.dup.BQSR.bam \
  -R ./data2/hg38.fa \
  -O ./outdir3/SRR062634.vcf
Hi, can you check how many CPU cores your machine has? The values --native-active-region-threads 3 --native-main-spend-threads 16 were chosen for our 56-core machine. "--native-active-region-threads" specifies the number of active-region threads, and "--native-main-spend-threads" specifies the number of worker (consumer) threads used by each active-region thread. The settings must satisfy: active-region threads * (consumer threads + 2) <= the total number of logical cores on your machine.
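The constraint above can be scripted as a quick sanity check. This is a sketch, not part of the LUSH toolkit: the variable names are illustrative, and it assumes nproc reports the machine's logical core count.

```shell
#!/bin/sh
# Check LUSH_hc thread settings against the maintainers' rule:
#   active-region threads * (consumer threads + 2) <= logical cores
# Variable names are illustrative; they are not lush_hc options.
ACTIVE_REGION=3   # value passed to --native-active-region-threads
CONSUMERS=16      # value passed to --native-main-spend-threads
CORES=$(nproc)    # logical cores on this machine
NEEDED=$((ACTIVE_REGION * (CONSUMERS + 2)))
if [ "$NEEDED" -le "$CORES" ]; then
  echo "OK: settings need $NEEDED cores, machine has $CORES"
else
  echo "Reduce threads: settings need $NEEDED cores, machine has only $CORES"
fi
```

With the values from the command above, 3 * (16 + 2) = 54, which fits the maintainers' 56-core machine but would exceed the budget on most workstations.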
I used this formula and changed the thread settings, but I still get the same error. If the settings were wrong, it shouldn't run at all, yet it randomly succeeds maybe once out of ten trials, and I can't figure out why.
Hi, don't worry. The following steps may help us troubleshoot the issue:
1) Please send a screenshot of the standard output and standard error logs from the run.
2) Check whether a core.* file was created in the execution directory. If so, either send us the core file generated by the segmentation fault (if convenient), or use gdb to inspect it and send us a screenshot. Example gdb usage:
Install gdb: apt-get install gdb (or yum install gdb)
Run: gdb ./bin/LUSH_toolkit-HC/lush_hc core.108   # core.108 is an example
Note: if no core file is generated, first run ulimit -c unlimited on your system and then execute your lush_hc command as above. This way, a core file will be generated if a segmentation fault occurs.
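For convenience, the steps above collected into one shell session. The core file name core.108 is illustrative; the actual name depends on the crashing process's PID, and the lush_hc arguments are the ones from the original report.

```shell
# Enable core dumps before reproducing the crash.
ulimit -c unlimited

# Rerun the failing command (same arguments as before).
./bin/LUSH_toolkit-HC/lush_hc \
  --pcr-indel-model NONE \
  --native-active-region-threads 3 \
  --native-main-spend-threads 16 \
  -I ./outdir3/SRR062634.sort.dup.BQSR.bam \
  -R ./data2/hg38.fa \
  -O ./outdir3/SRR062634.vcf

# On a segmentation fault, a core.* file appears in the working directory.
ls core.*

# Load the core in gdb and print the backtrace of the crashing thread.
gdb ./bin/LUSH_toolkit-HC/lush_hc core.108 -ex bt
```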
The same problem. I've used the Docker option, and it doesn't depend on the number of threads. Please check the uploaded files. lush_hc from the last version doesn't work: it crashes every time on chr_12.
Hi, we have updated LUSH_hc to version 2.1.2. Could you confirm whether the same issue still persists?
When I run LUSH_HC (haplotype calling), about 14 times out of 15 it gives a segmentation fault, and only once does it randomly run successfully. Any idea why this is happening? Why is the success rate so low?