Closed mkarikom closed 11 months ago
Hello,
You can reduce the number-of-samples parameter `-n` from 100000 to 10000 (or 1000). After that, you can run presample_noise.py several times if you want more samples. It generates several files; these files contain torch tensors saved with torch.save. You can combine them into one with torch.cat and save the result as a single file. As a result, you will have plenty of samples and no GPU memory problems.
I hope it helps.
Best, Pavel
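The merge step described above could look roughly like this. This is a hedged sketch: the file names (`noise_0.pt`, …), shapes, and sample dimension are assumptions for illustration, not the repo's actual output names; here dummy files stand in for the outputs of repeated presample_noise.py runs.

```python
import torch

# Stand-ins for the files produced by several presample_noise.py runs
# (file names and shapes are hypothetical): 1000 samples of dim 9 each.
for i in range(3):
    torch.save(torch.randn(1000, 9), f"noise_{i}.pt")

# Load every chunk, concatenate along the sample dimension (dim=0),
# and save the combined tensor back into a single file.
chunks = [torch.load(f"noise_{i}.pt") for i in range(3)]
combined = torch.cat(chunks, dim=0)
torch.save(combined, "noise_all.pt")
print(combined.shape)  # torch.Size([3000, 9])
```

Downstream code then only needs to `torch.load("noise_all.pt")` once, so GPU memory is only needed for the small per-run batches, not for one huge presampling pass.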
Ah, that makes sense, since we can still take the same number of steps in each batch.
Thanks!
Hi, how can I control how much GPU RAM is allocated during pre-sampling? I've noticed that pre-sampling categoricals with more than 4-5 dimensions needs a lot of memory. For instance, although the 2- and 4-dimensional examples (promoter and bernoulli) run fine, I get the following error when running the sudoku (9-dim) presampling on a 24GB GPU: