Describe the bug
I am using my own dataset in the mip-nerf-360 format, with a large image set of roughly 1,000 to 2,000 images. Because the image set is too large to keep fully in memory, I set the num_images_to_sample_from and num_times_to_repeat_images parameters to reduce memory usage. However, the dataloader appears to have a memory leak: every time the iteration count reaches num_times_to_repeat_images, memory usage grows by about 4-8 GB. Even on a server with 128 GB of RAM, the process eventually runs out of memory and is killed before training finishes.
Could you give some suggestions about this issue? Thanks!
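For anyone triaging this, here is a minimal, self-contained sketch of the kind of pattern that produces exactly this symptom (memory growing by one full cache each resample cycle). All class and method names below are hypothetical illustrations, not the actual dataloader code: the point is that if any stale reference to the previous image cache survives the resample, Python's garbage collector can never reclaim it.

```python
# Hypothetical sketch of a cache-based dataloader leak. Each resample
# allocates a fresh image cache; if stale references to the previous
# cache survive (here, via an accidental history list), memory grows by
# one full cache per resample cycle.
import gc


class CachedImageSampler:
    def __init__(self, num_images_to_sample_from, image_bytes=10_000):
        self.num_images = num_images_to_sample_from
        self.image_bytes = image_bytes
        self.cache = []        # images currently being trained on
        self._history = []     # leak: old caches kept alive by accident

    def resample(self, leak=True):
        """Rebuild the image cache, as if num_times_to_repeat_images
        training iterations had elapsed."""
        new_cache = [bytearray(self.image_bytes) for _ in range(self.num_images)]
        if leak:
            # Buggy pattern: the old cache stays referenced, so its
            # memory can never be reclaimed.
            self._history.append(self.cache)
        else:
            # Fixed pattern: drop every stale reference when installing
            # the new cache, then let the GC reclaim the old one.
            self._history.clear()
            gc.collect()
        self.cache = new_cache

    def live_caches(self):
        # Number of full caches still held in memory (1 == no leak).
        return len(self._history) + 1
```

A practical way to confirm whether the real dataloader behaves like the buggy branch is to snapshot allocations with the standard-library `tracemalloc` module just before and after the resample point and diff the two snapshots.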