Open · gundalav opened 2 years ago
I assume your computer does not have enough RAM. How much RAM does your server have?
I am using AWS p3.2xlarge instance. It has around 61GB RAM.
Online searches: Our ColabFold server has ~760GB RAM and keeps the full database and index in memory.
Batch searches: To perform a batch search you need less memory, but it is still approximately 1 byte per residue, so I would assume you would need at least 90GB. We still need to figure out what the lower bound for this database is.
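As a rough sketch of that 1-byte-per-residue rule of thumb: assuming the MMseqs2 sequence database ships a tab-separated .index file (key, offset, size in bytes), summing the size column approximates the total residue count and therefore the minimum RAM. The exact filename below is an assumption based on the database name used in this thread.

# Rough batch-search RAM estimate: ~1 byte per residue (see above).
# Assumes <db>.index is tab-separated: key, offset, size; summing the size
# column approximates total data bytes ≈ residues.
awk -F'\t' '{ sum += $3 } END { printf "entries: %d, data: %.1f GB -> plan for at least that much RAM\n", NR, sum/1e9 }' colabfold_envdb_202108_db.index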
I have 128G RAM, but I get the same error.
The error:
Estimated memory consumption: 560G
Process needs more than 38G main memory.
Increase the size of --split or set it to 0 to automatically optimize target database split.
Write VERSION (0)
Write META (1)
Write SCOREMATRIX3MER (4)
Write SCOREMATRIX2MER (3)
Write SCOREMATRIXNAME (2)
Write SPACEDPATTERN (23)
Write GENERATOR (22)
Write DBR1INDEX (5)
Write DBR1DATA (6)
Write DBR2INDEX (7)
Write DBR2DATA (8)
Write HDR1INDEX (18)
Write HDR1DATA (19)
Write ALNINDEX (24)
Write ALNDATA (25)
Index table: counting k-mers
[=================================================================] 100.00% 209.34M 7m 34s 698ms
Index table: Masked residues: 1117805658
Can not allocate entries memory in IndexTable::initMemory
Error: indexdb died
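For what it's worth, the error text above already suggests one workaround. A minimal sketch (the command from the original post plus the suggested option; this assumes createindex passes --split through to the underlying indexdb step):

# Set --split to 0 so MMseqs2 optimizes the target database split automatically,
# as the error message suggests.
mmseqs createindex colabfold_envdb_202108_db tmp2 --remove-tmp-files 1 --split 0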
Hi,
I was trying to set up the database, but it breaks upon execution of this command:
mmseqs createindex colabfold_envdb_202108_db tmp2 --remove-tmp-files 1
The error message I get is this:

It works fine with the uniref30_2103.tar.gz file, though. How can I resolve the problem?
G.V.
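If the automatic split still estimates more memory than the machine has, a hedged alternative is to cap the split size explicitly. The --split-memory-limit flag and the 100G value below are assumptions to verify against mmseqs createindex --help, not something stated in this thread:

# Cap the memory used per index split well below the machine's physical RAM
# (example value only; adjust to your instance).
mmseqs createindex colabfold_envdb_202108_db tmp2 --remove-tmp-files 1 --split-memory-limit 100G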