This looks great! But I ran into an out-of-memory error while running the code. The device I'm using is an RTX 3090 with 24 GB of VRAM. Could you share how much memory is needed to run this code successfully? Thanks!
The error message is as follows:
Hi, thanks for your question.
25 GB is safe, but 24 GB should be enough for most cases. Make sure your conda environment matches ours, and try running on our exemplar input first.
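If you want to see how close you are to the limit, something like this can report the peak allocation (a minimal sketch; `run_inference` is a hypothetical placeholder for however you invoke our model on the exemplar input, not a function in our repo):

```python
import torch

def run_inference():
    # Hypothetical stand-in for the repo's actual entry point
    # on the exemplar input; substitute your real call here.
    x = torch.randn(1024, 1024, device="cuda")
    _ = x @ x

# Reset the peak-memory counter, run the model, then read the peak.
torch.cuda.reset_peak_memory_stats()
run_inference()
peak_gb = torch.cuda.max_memory_allocated() / 1024**3
print(f"Peak GPU memory allocated: {peak_gb:.2f} GB")
```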
Hi, thank you very much for your reply! The input I'm using is the exemplar input provided with the code. Is there any way to reduce the memory usage? Running this code is very important to me, so thanks again for your help!
Upon testing, I've identified a simple way to reduce memory usage in the latest version: calling torch.cuda.empty_cache() more frequently, as pointed out in this section of the code.
During our evaluation, memory consumption peaked at 22 GB, so it should fit on your 24 GB card. Hope this helps you out!
I encourage you to clone the revised version of our repository and give it another shot.
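The pattern is roughly the following (a minimal sketch with a hypothetical `process_batches` loop, not the repo's exact code):

```python
import torch

@torch.no_grad()  # no gradients needed at inference time
def process_batches(model, batches):
    results = []
    for batch in batches:
        out = model(batch.to("cuda"))
        results.append(out.cpu())  # move outputs off the GPU promptly
        # Return cached blocks to the driver after each batch; this
        # lowers peak reserved memory at a small speed cost.
        torch.cuda.empty_cache()
    return results
```

Note that empty_cache() does not free tensors you still hold references to; it only releases PyTorch's cached, unused blocks, so dropping references (e.g., moving outputs to the CPU) matters just as much.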