aaronkollasch opened 2 years ago
**Update:** I recreated the index on a different machine without `--split-memory-limit 128G`, and this error went away. Perhaps it was a one-off corruption of the index, an issue when specifying `--split-memory-limit`, or something specific to the cluster.
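For reference, a minimal sketch of the recreation step, assuming the ColabFoldDB name produced by `setup_databases.sh` (the paths here are placeholders, not taken from the log):

```bash
# Recreate the index with no memory cap; mmseqs then sizes the splits
# from the memory it detects itself (paths are placeholders).
mmseqs createindex db_folder/colabfold_envdb_202108_db db_folder/tmp
```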
#### Expected Behavior
Hello, I am trying to run batch searches against ColabFoldDB on a SLURM cluster, following the MSA instructions in the README.
#### Current Behavior
`colabfold_search` fails at the `expandaln` step with the error shown in the attached log.

Full log file: colabfold_search_output.txt
#### Steps to Reproduce (for bugs)
1. `bash setup_databases.sh [db_folder]`

   Note: `mmseqs createindex` was run with `--split-memory-limit 128G`, as mmseqs doesn't detect the SLURM job's memory limit otherwise (see the sketch after these steps).

2. `colabfold_search --db-load-mode 0 --mmseqs mmseqs_5185d3c/bin/mmseqs batch_1/input_sequences.fa [db_folder] batch_1/result_s8`

Input sequences: input_sequences.fa
It looks like `colabfold_search` uses `--split-memory-limit 0` in the prefilter steps and possibly in later steps. I don't think this caused the issue, since the job only reached 53 GB of memory usage before it errored, but it would be nice to be able to set this limit to prevent the job from being killed.
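To illustrate the kind of control being requested: the underlying MMseqs2 commands accept the flag directly, so a hypothetical manual run (the database and result names here are made up) could cap memory per step:

```bash
# Hypothetical manual equivalent with an explicit per-step memory cap;
# per the report, colabfold_search passes --split-memory-limit 0 instead.
mmseqs createdb batch_1/input_sequences.fa query_db
mmseqs search query_db db_folder/colabfold_envdb_202108_db result_db tmp_dir \
    --db-load-mode 0 --split-memory-limit 100G
```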
#### Context

I'm looking to perform a batch search, and the cluster jobs have a 250 GiB memory limit, so I'm using `--db-load-mode 0`, but let me know if that isn't the best option.
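For context, the jobs are submitted roughly like this; a minimal SLURM sketch where only the 250 GiB limit and the search command come from this report (the job name and CPU count are assumptions):

```bash
#!/bin/bash
#SBATCH --job-name=colabfold_search   # assumed job name
#SBATCH --mem=250G                    # cluster memory ceiling from the report
#SBATCH --cpus-per-task=16            # assumed; not stated in the report

# Search command as reported, with --db-load-mode 0.
colabfold_search --db-load-mode 0 \
    --mmseqs mmseqs_5185d3c/bin/mmseqs \
    batch_1/input_sequences.fa [db_folder] batch_1/result_s8
```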
#### Your Environment

@thomashopf