Closed — zfkdsg closed this issue 4 months ago
By the way, my GPU memory is 4 GB.
Reducing the batch size in the config file may help.
Thanks, I will try it out.
Additionally, you may decrease the size of the voxel hashing buffer: https://github.com/PRBonn/PIN_SLAM/blob/1c2fc2fcc910520e0aa9d7e8da2d889f456f0c4b/utils/config.py#L94
This buffer takes up self.buffer_size * 8 bytes, which equals 400 MB here.
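The 400 MB figure follows directly from the buffer size: with 8 bytes per entry, a 400 MB footprint implies a buffer of 5e7 entries (the entry count below is inferred from that arithmetic, not read from the code):

```python
# Back-of-envelope memory for the voxel hashing buffer.
# buffer_size is inferred from "buffer_size * 8 bytes == 400 MB".
buffer_size = int(5e7)       # inferred number of hash-table entries
bytes_per_entry = 8          # one 64-bit value per entry
total_mb = buffer_size * bytes_per_entry / 1e6
print(f"{total_mb:.0f} MB")  # 400 MB
```

Halving the buffer size therefore halves this allocation, at the cost of a higher chance of hash collisions.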
You can also decrease the training data pool capacity (default value 1e7):
This can be directly set in the config file, add:
continual:
  pool_capacity: 1e6
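To see why this helps, here is a rough estimate of the pool's memory, under the assumption (not taken from the code) that each pooled sample stores a 3-D point plus a few float32 scalars; adjust `floats_per_sample` to match the actual data pool layout:

```python
# Hypothetical memory estimate for the training data pool.
# floats_per_sample is an assumption, not the real PIN_SLAM layout.
def pool_mb(capacity: float, floats_per_sample: int = 8) -> float:
    """Approximate pool size in MB at float32 (4 bytes) per value."""
    return capacity * floats_per_sample * 4 / 1e6

print(pool_mb(1e7))  # default capacity -> 320.0 MB under this assumption
print(pool_mb(1e6))  # reduced capacity -> 32.0 MB, a 10x saving
```

Whatever the exact per-sample layout, cutting the capacity from 1e7 to 1e6 shrinks the pool's footprint by a factor of ten.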
Hello, when I try to run your code with run_demo.yaml, the following error occurs: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 396.00 MiB. GPU 0 has a total capacity of 3.81 GiB of which 348.00 MiB is free. My GPU is an RTX 3050. How can I modify this program to make it run properly? Thank you!