I am running the demo on an RTX 3090, but it reports `RuntimeError: CUDA error: out of memory`, even though I am sure the GPU has enough free memory. Could anyone give me an answer?
Here is the full error traceback:

```
Traceback (most recent call last):
  File "/home/chenghuayuan/SLAM/DROID-SLAM/demo.py", line 127, in <module>
    droid = Droid(args)
  File "/home/chenghuayuan/SLAM/DROID-SLAM/droid_slam/droid.py", line 19, in __init__
    self.load_weights(args.weights)
  File "/home/chenghuayuan/SLAM/DROID-SLAM/droid_slam/droid.py", line 51, in load_weights
    (k.replace("module.", ""), v) for (k, v) in torch.load(weights).items()])
  File "/home/chenghuayuan/anaconda3/envs/droidenv/lib/python3.9/site-packages/torch/serialization.py", line 607, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "/home/chenghuayuan/anaconda3/envs/droidenv/lib/python3.9/site-packages/torch/serialization.py", line 882, in _load
    result = unpickler.load()
  File "/home/chenghuayuan/anaconda3/envs/droidenv/lib/python3.9/site-packages/torch/serialization.py", line 857, in persistent_load
    load_tensor(data_type, size, key, _maybe_decode_ascii(location))
  File "/home/chenghuayuan/anaconda3/envs/droidenv/lib/python3.9/site-packages/torch/serialization.py", line 846, in load_tensor
    loaded_storages[key] = restore_location(storage, location)
  File "/home/chenghuayuan/anaconda3/envs/droidenv/lib/python3.9/site-packages/torch/serialization.py", line 175, in default_restore_location
    result = fn(storage, location)
  File "/home/chenghuayuan/anaconda3/envs/droidenv/lib/python3.9/site-packages/torch/serialization.py", line 157, in _cuda_deserialize
    return obj.cuda(device)
  File "/home/chenghuayuan/anaconda3/envs/droidenv/lib/python3.9/site-packages/torch/_utils.py", line 79, in _cuda
    return new_type(self.size()).copy_(self, non_blocking)
  File "/home/chenghuayuan/anaconda3/envs/droidenv/lib/python3.9/site-packages/torch/cuda/__init__.py", line 606, in _lazy_new
    return super(_CudaBase, cls).__new__(cls, *args, **kwargs)
RuntimeError: CUDA error: out of memory
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
```
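For context, the failure happens inside `torch.load`, which restores each saved CUDA tensor directly onto the GPU. One way to check whether the checkpoint itself is at fault (rather than the demo's later allocations) is to load it to CPU first with `map_location="cpu"` and measure its size. A minimal sketch of that check; the `/tmp/droid_demo.pth` path and the tiny stand-in state dict are illustrative, not the real DROID-SLAM weights:

```python
import torch

# Stand-in for the real checkpoint: a small DataParallel-style state dict on disk.
torch.save({"module.weight": torch.zeros(4, 4)}, "/tmp/droid_demo.pth")

# Load onto the CPU so deserialization cannot itself trigger a CUDA OOM.
# map_location="cpu" redirects every tensor saved from a CUDA device to host memory.
state_dict = torch.load("/tmp/droid_demo.pth", map_location="cpu")

# Strip the "module." prefix, as droid.py's load_weights does.
state_dict = {k.replace("module.", ""): v for k, v in state_dict.items()}

# Report how much memory the weights alone would need on the GPU.
total_bytes = sum(v.numel() * v.element_size() for v in state_dict.values())
print(f"checkpoint size: {total_bytes / 1e6:.3f} MB")
```

If this succeeds while the demo fails, the OOM is likely caused by something else already occupying the GPU (check `nvidia-smi` for other processes) rather than by the checkpoint size.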