Sharlekin opened 1 year ago
OK, based on #118 and https://adc-connect.org/v0.15.7/api/libadcc.AdcMemory.html?highlight I seem to have identified the syntax as adcc.memory_pool.initialise('/path_to_scratch/', 2000000000, allocator="libxm")
for limiting RAM to 2 GB (test calculation: benzyl radical in 6-31G, 77 orbitals).
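For reference, the second argument in the call above is a byte count (2000000000 bytes is a decimal 2 GB). A minimal sketch of that conversion — the helper name gb_to_bytes is my own, and the adcc call itself is left commented out since it requires adcc with the libxm allocator installed:

```python
def gb_to_bytes(gb: float) -> int:
    """Convert a decimal-gigabyte limit to the raw byte count
    that the initialise() call above takes as its second argument."""
    return int(gb * 10**9)

max_memory = gb_to_bytes(2)  # 2_000_000_000 bytes, as in the call above

# Requires adcc and the libxm allocator to be installed:
# import adcc
# adcc.memory_pool.initialise("/path_to_scratch/", max_memory, allocator="libxm")
```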
The growth in runtime is pretty dramatic:
UPD: It seems the amount of memory specified as the second argument affects neither runtime nor memory utilisation (checked with limits of 2 GB, 4 GB, and 50 GB).
I will report back if there are more interesting results with a big system and an allocation of 3 TB of RAM.
Regards.
Dear @Sharlekin
The short answer is that ADC calculations need a lot of memory. This can be reduced by tricks (caching data on disk, density fitting, etc.), but we don't have any of these properly implemented in adcc. libxm in principle allows you to cache tensors on disk, as you have reported, but it similarly increases runtimes a lot. This is known behaviour and basically the reason why libxm is not advertised more.
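To make "a lot of memory" concrete, here is a rough back-of-envelope sketch. This is my own estimate, not adcc's actual accounting: it counts only one dense o²v² doubles-amplitude block per state in float64, while the real code also holds ERI blocks and Davidson subspace vectors, so true usage is considerably higher:

```python
def adc2_doubles_bytes(n_occ: int, n_virt: int, n_states: int = 1) -> int:
    """Crude lower bound on ADC(2) memory: one dense o^2 v^2 doubles
    amplitude tensor per excited state, stored as float64 (8 bytes each)."""
    return 8 * n_states * (n_occ * n_virt) ** 2

# Hypothetical numbers roughly in the ballpark of the benzyl/6-31G test
# above (about 25 occupied and 52 virtual spatial orbitals out of 77):
print(adc2_doubles_bytes(25, 52) / 1e6, "MB per state")  # ~13.5 MB
```

Since this grows as o²v², and the per-state blocks multiply with the number of requested states, the Davidson subspace vectors, and the stored ERI classes, a 379-basis-function open-shell job can plausibly climb into the terabyte range.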
We currently have no way of hard-limiting memory usage. The max_block_size parameter does something else (it controls the blocks used for tensor contractions); we don't recommend altering its default.
Dear ADCC-Team,
I recently started to use ADCC, and already with a calculation of 379 basis functions at ADC(2) I am going over 3 TB of RAM, which is actually my limit (open-shell calculation, 10 states requested, 31 atoms; I can send the structure by email if needed). I saw that the AdcMemory class has a lot of interesting keys, as well as attributes like cached_eri_blocks and cached_fock_blocks, which perhaps should also help, but I didn't find any examples of their proper usage. Naively inserting them in the same way as the conv_tol attribute ended with:
Could you please show an example of how to manage the code's RAM demands?
Best regards