Open zhangrengang opened 2 years ago
Similarly, kmc_tools transform k_31 dump -s k_31.dump does not work, but kmc_tools transform k_31 dump k_31.dump works.
This is not good :( Could you please provide your input data and how you counted k-mers (i.e. the exact kmc command line)? It might also be that something went wrong during k-mer counting (I hope not), so could you also provide your computed kmc database? If the files are too large, maybe you could find a smaller set that causes this behavior. I really want to fix this issue, but it may be hard without reproducing it. Also, how many cores does your node have?
My command is kmc -m58 -t60 -k31 -ci4 -cs1000000 -fq @test.list test_31 test_31 > test_31.stats && kmc_tools transform test_31 sort test_31.sorted
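For context, the -ciX option above sets the minimum number of occurrences a k-mer must have to be kept (-ci4 discards k-mers seen fewer than 4 times). A minimal Python sketch of canonical k-mer counting with such a cutoff (illustrative only, not KMC's actual implementation):

```python
from collections import Counter

def canonical(kmer: str) -> str:
    """Return the lexicographically smaller of a k-mer and its reverse complement."""
    comp = str.maketrans("ACGT", "TGCA")
    rc = kmer.translate(comp)[::-1]
    return min(kmer, rc)

def count_kmers(seq: str, k: int, ci: int) -> dict:
    """Count canonical k-mers, keeping only those seen at least `ci` times
    (roughly what -ciX controls in kmc)."""
    counts = Counter(canonical(seq[i:i + k]) for i in range(len(seq) - k + 1))
    return {km: c for km, c in counts.items() if c >= ci}

print(count_kmers("ACGTACGTACGT", 4, 2))
```

KMC itself works on the canonical form of each k-mer in the same spirit, which is why a k-mer and its reverse complement contribute to a single count.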
The files are too large for GitHub. Could you provide me an email address to send the files?
When I make a subset of the input data, I find that:
1) when the dataset is small (head of 100000-2000000 lines), it works;
2) when I head 4000000 or 4500000 lines, the status is stuck at in1: 1%;
3) when I head >= 5000000 lines, the status is stuck at in1: 0%.
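The head experiments above amount to a manual bisection over the input size. The same search can be sketched generically; the `fails` predicate below is hypothetical, standing in for "run the pipeline on the first n lines and check whether it hangs":

```python
def smallest_failing_size(lo: int, hi: int, fails) -> int:
    """Binary search for the smallest input size in [lo, hi] where `fails(n)` is True,
    assuming failures are monotone: once a size fails, all larger sizes fail."""
    while lo < hi:
        mid = (lo + hi) // 2
        if fails(mid):
            hi = mid       # failure seen: the threshold is at or below mid
        else:
            lo = mid + 1   # success: the threshold is above mid
    return lo

# Hypothetical predicate matching the observations above (hangs from ~4,000,000 lines on).
print(smallest_failing_size(100_000, 5_000_000, lambda n: n >= 4_000_000))
```

This is only useful for narrowing down a minimal reproducer; the -ci observations later in the thread suggest the trigger is the number of distinct k-mers rather than raw line count.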
The database generated on the normal node also does not work on the abnormal node.
The CPU information of the abnormal node:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
NUMA node(s): 2
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7642 48-Core Processor
Stepping: 0
CPU MHz: 2392.417
CPU max MHz: 2300.0000
CPU min MHz: 1500.0000
BogoMIPS: 4591.02
Virtualization: AMD-V
L1d cache: 32K
L1i cache: 32K
L2 cache: 512K
L3 cache: 16384K
NUMA node0 CPU(s): 0-47,96-143
NUMA node1 CPU(s): 48-95,144-191
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate sme ssbd mba sev ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca
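The node reports 192 logical CPUs, and kmc_tools apparently defaults its thread count to that number, which is where the hang appears. A trivial sketch of clamping such a default before passing it via -t (the cap of 32 is an arbitrary illustration, not a KMC recommendation):

```python
import os

def default_threads(cap: int = 32) -> int:
    """Pick a worker count: the machine's logical CPU count, clamped to `cap`.
    On the node above, os.cpu_count() would report 192."""
    return min(os.cpu_count() or 1, cap)

print(default_threads())       # never exceeds 32, whatever the machine reports
print(default_threads(cap=4))  # never exceeds 4
```

A clamp like this is a workaround, not a fix; the underlying bug at very high thread counts still needs to be addressed in kmc_tools itself.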
Hi,
I configured my e-mail to be displayed on GitHub, so that you can send me your files. Could you also try running kmc_tools with a small number of threads? Maybe the issue is related to the many threads used (of course it needs to be fixed anyway, but if it works with a smaller number of threads I will have some ideas about the causes).
Hi, I have sent the files to your email.
I have tested kmc with -t1 and -t4, but it does not work.
I find the size of the dataset also has an effect when I change the -ci option:
-ci5: in1: 1%
-ci6: in1: 3%
-ci7: in1: 4%
-ci8: in1: 6%
-ci9: in1: 100% (works)
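Raising -ci shrinks the number of distinct k-mers written to the database, which would explain why it shifts (and eventually avoids) the hang. A toy illustration of that monotone shrinkage (pure Python over made-up counts, not KMC internals):

```python
from collections import Counter

# Hypothetical k-mer counts for illustration only.
counts = Counter({"AAAA": 9, "ACGT": 6, "CCCC": 5, "GGGG": 4, "TTTT": 1})

def db_size(ci: int) -> int:
    """Number of distinct k-mers with count >= ci (what -ciX would keep)."""
    return sum(1 for c in counts.values() if c >= ci)

for ci in (1, 4, 5, 6, 10):
    print(ci, db_size(ci))
```

Each increment of -ci can only drop k-mers, never add them, so the database size is non-increasing in ci.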
Hi. Could you also test kmc_tools with a lower number of threads? kmc_tools -t16 transform test_31 sort test_31.sorted
I am able to reproduce this bug when I set -t190 (which I guess is the default on your node).
I will try to fix this as soon as possible, but I am currently quite busy - nevertheless, I will try to prioritize this.
Yes, it works! Thank you. It is resolved for me.
Is this problem solved?
Hi, I have two nodes in a cluster. On one node, running kmc_tools completes in a few minutes. But on the other node, running kmc_tools always stays at in1: 0% and never completes. The %CPU is 100%. I have tested versions 3.1.1 and the latest 3.2.1. Any way to fix it? My command is kmc_tools transform k_31 sort k_31.sorted