Closed MarcoG5 closed 2 years ago
You might be able to refactor `Rays` to lazy-load images in `__getitem__()`.

A quicker hack (that may work just as well) is to create a large swapfile (if you have an SSD): https://www.digitalocean.com/community/tutorials/how-to-add-swap-space-on-ubuntu-16-04. Some amount of `Rays` will still need to fit in GPU RAM, though.
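To illustrate the lazy-loading idea: a minimal sketch of a dataset that stores only file paths at construction time and reads each image inside `__getitem__()`. The class name `LazyImageDataset` and the raw-bytes loader are assumptions for illustration, not this repo's actual `Rays` API; in practice you would decode with whatever image library the project already uses and return the rays for that image.

```python
import os


class LazyImageDataset:
    """Loads images on demand instead of caching them all in RAM.

    Hypothetical sketch: reading raw bytes stands in for whatever
    decoder the project actually uses (PIL, imageio, ...).
    """

    def __init__(self, image_dir):
        # Only the *paths* are kept in memory up front -- cheap.
        self.paths = sorted(
            os.path.join(image_dir, name) for name in os.listdir(image_dir)
        )

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        # The actual pixel data is read here, one item at a time,
        # so peak memory is roughly one image instead of the whole set.
        with open(self.paths[idx], "rb") as f:
            return f.read()
```

With a structure like this, a data loader touches only the items it needs per batch, so system RAM no longer has to hold the full ground-truth set.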
Thank you for your quick answer, I will look into it!
First I would like to thank you for your great work and for making it public! I noticed that all the ground-truth data needs to be read into memory at once. I understand that this can speed up training, but if I use my own dataset with a large resolution or a large number of images, my system memory is not sufficient for training. Is there any way to avoid reading all the data at once, or is the algorithm designed to work this way?