Stavrosnco opened this issue 5 years ago
I also get a hash table problem with Kraken2 installed via Homebrew.
Classification runs fine with custom databases I build myself, unlike in Stavrosnco's case, but with minikraken2_v1_8GB and _v2_8GB I get: dump_table: Error reading in hash table
The classification with minikraken2_v2_8GB was working just fine with Kraken2 installed from source code.
I would also appreciate any suggestion. Thank you!
Hello,
I have the same issue with kraken2-2.0.8-beta:
Unknown option: fastq-input
Loading database information...classify: Error reading in hash table
Have you fixed your issue?
Thanks a lot!
@jfourquet2 just a note that I came across this issue when troubleshooting a kraken2 analysis, so maybe this will help. First, the --fastq-input argument is no longer used in kraken2 it appears. Second, kraken2 'classify' step reads in the entire hash table, so you need to give it enough memory to do it or it will throw the 'Loading database information...classify: Error reading in hash table' error when it gets killed for running out of available memory.
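To make that concrete, here is a small sketch (not from this thread) that compares the size of hash.k2d against currently available memory before launching classify. It is Linux-specific (it reads /proc/meminfo), and the database path in the usage example is just an illustration:

```shell
# check_db_fit: warn if a Kraken2 hash table is larger than available RAM.
# classify reads all of hash.k2d into memory, so if it does not fit the
# process is typically OOM-killed mid-load, producing the
# "Error reading in hash table" message.
check_db_fit() {
    db="$1"
    # size of the hash table on disk, in MiB
    hash_mib=$(du -m "$db/hash.k2d" | cut -f1)
    # currently available memory (Linux), in MiB
    avail_mib=$(awk '/MemAvailable/ {print int($2/1024)}' /proc/meminfo)
    echo "hash table: ${hash_mib} MiB, available RAM: ${avail_mib} MiB"
    if [ "$hash_mib" -gt "$avail_mib" ]; then
        echo "WARNING: hash.k2d will not fit; classify may be OOM-killed" >&2
        return 1
    fi
}
```

Usage (path is hypothetical): `check_db_fit /srv/classifier_dbs/kraken2/kraken2_db`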
I am also experiencing this issue. Has there been any progress toward a resolution? I'm trying to load a ~64 GB hash file (hash.k2d) on a machine with >400 GB of memory. Kraken2 reports Error reading in hash table after taking up about 64 GB of RAM. I am running:
kraken2 --db /srv/classifier_dbs/kraken2/kraken2_db --threads 64 --report test.kreport2 --output test.tsv --gzip-compressed --paired reads_R1.fq.gz reads_R2.fq.gz
Kraken2 version 2.0.8-beta
Yes - I was able to solve this problem by running Kraken2 on a server with more RAM. Thank you for the feedback!
If I had used only 1 thread, might I have been able to run this locally (if no high-performance cluster were available)?
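Reducing the thread count does not reduce the memory footprint, since the whole hash table is loaded regardless of --threads. For low-memory machines, Kraken2 does provide a --memory-mapping option that mmaps the database instead of reading it into RAM, trading classification speed for memory. A sketch, reusing the command from above (paths and thread count are illustrative):

```shell
# --memory-mapping avoids loading hash.k2d into RAM by memory-mapping it;
# classification becomes disk-bound and slower, but can run on machines
# with less RAM than the database size.
kraken2 --db /srv/classifier_dbs/kraken2/kraken2_db \
        --memory-mapping \
        --threads 4 \
        --report test.kreport2 --output test.tsv \
        --gzip-compressed --paired reads_R1.fq.gz reads_R2.fq.gz
```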
How many of these issues are on PBS, SGE, or SLURM? I have the same problem (on SLURM) and found that the cause was the default memory allocation for the job: 50 GB, slightly smaller than my Kraken database. Adding a larger memory request to the submission script fixed the problem for me.
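For anyone hitting this on SLURM, a minimal submission-script sketch follows; the job name, CPU count, and memory value are illustrative and should be sized above your own database's hash.k2d:

```shell
#!/bin/bash
#SBATCH --job-name=kraken2
#SBATCH --cpus-per-task=8
# Request more memory than the database's hash.k2d; many clusters default
# to a per-job limit that is smaller than a full Kraken2 database.
#SBATCH --mem=64G

kraken2 --db /srv/classifier_dbs/kraken2/kraken2_db \
        --threads "$SLURM_CPUS_PER_TASK" \
        --report out.kreport2 --output out.tsv reads.fq
```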
Hello,
Does anyone have a solution to this issue yet? I'm running kraken2 with the MiniKraken2_v1_8GB database. I keep receiving this error:
Loading database information...Killed
Cheers, Danny
Hi, I'm using version 2.1.1 and running it on a server with a larger amount of RAM, but I still get the following error: Loading database information...classify: Error reading in hash table
Could anyone help solve this?
Best, Hongzhong
@Danny-Science I have the same problem. Were you able to solve this? Thanks!
I am using a 16 GB machine with the MiniKraken DB, which is 8 GB. Why am I getting this error if I have more than enough memory?
Hello,
I have been trying to follow the steps to generate a kraken2 db of all bacteria. I've run the commands
kraken2-build --threads 8 --download-taxonomy --db all_bac
kraken2-build --download-library "bacteria" --db all_bac/ --threads 8
kraken2-build --build --db all_bac/ --threads 8 --max-db-size 8589934592
This resulted in:
Creating sequence ID to taxonomy ID map (step 1)...
Sequence ID to taxonomy ID map already present, skipping map creation.
Estimating required capacity (step 2)...
Estimated hash table requirement: 31314263768 bytes
Specifying lower maximum hash table size of 8589934592 bytes
Capacity estimation complete. [9m42.202s]
Building database files (step 3)...
Taxonomy parsed and converted.
CHT created with 14 bits reserved for taxid.
Completed processing of 29226 sequences, 55998669238 bp
Writing data to disk... complete.
Database files completed. [13m54.035s]
Database construction complete. [Total: 23m36.338s]
However when I try to classify using:
kraken2 --db all_bac/ /mnt/c/Data/minION/e_coli/analysis/MS1/FAH73738_dc131eb88b18432f748a865caf3119fecb4e17d8_ms1_0.fastq
Or even run:
kraken2-inspect --db all_bac/
I get the error message:
dump_table: Error reading in hash table
This is using Kraken version 2.0.7-beta.
Any suggestions on how to resolve this? I also experienced the same error using the prebuilt MiniKraken V1 and V2 databases.
Thank you very much!