Closed Jackiezhou233 closed 7 months ago
Solved by releasing zoe's GPU memory after preprocessing; it now works on an RTX 3060 with 12GB. Modified `demo.py`:
```python
import gc

import torch


def run(self):
    # load features
    self.load_data()
    # densify and run zoe
    self.preprocess()
    del self.zoe
    gc.collect()              # trigger garbage collection (to free zoe)
    torch.cuda.empty_cache()  # release zoe's GPU memory
    # extract features
    self.extraction()
    del self.extractor
    gc.collect()
    torch.cuda.empty_cache()
    # registration
    self.match()
    # evaluation
    self.eval()
```
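The fix works because `torch.cuda.empty_cache()` can only return memory to the driver once Python has actually reclaimed the model object. A minimal CPU-only sketch (using a hypothetical `FakeModel` stand-in for zoe) shows the `del` + `gc.collect()` half of the recipe, with a `weakref` to confirm the object is gone:

```python
import gc
import weakref


class FakeModel:
    """Stand-in for a large model object such as zoe (hypothetical)."""
    pass


model = FakeModel()
ref = weakref.ref(model)  # track the object without keeping it alive

del model                 # drop the last strong reference
gc.collect()              # collect any lingering reference cycles

print(ref() is None)      # True: the model object has been reclaimed
```

Only after the object is reclaimed like this does `torch.cuda.empty_cache()` have anything to hand back; calling it while `self.zoe` still holds the weights frees nothing.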
Hi @Jackiezhou233,
Well done!
Yours,
```
RuntimeError: CUDA out of memory. Tried to allocate 968.00 MiB (GPU 0; 11.76 GiB total capacity; 8.87 GiB already allocated; 484.12 MiB free; 9.95 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
My device: RTX 3060 with 12GB capacity.
Is there some way to run this? Thanks!