JunyuanDeng / NeRF-LOAM

[ICCV2023] NeRF-LOAM: Neural Implicit Representation for Large-Scale Incremental LiDAR Odometry and Mapping

AttributeError: 'NoneType' object has no attribute 'cuda' #21

Closed · hlldy6858 closed this issue 5 months ago

hlldy6858 commented 5 months ago

Hello, could you tell me what is causing this problem? My GPU is an RTX 3060 with 12 GB, torch 1.10, CUDA 11.1, and the error is not caused by running out of GPU memory.

```
PatchWorkpp::PatchWorkpp() - INITIALIZATION COMPLETE
Decoder(
  (pe): Same()
  (pts_linears): ModuleList(
    (0): Linear(in_features=16, out_features=256, bias=True)
    (1): Linear(in_features=256, out_features=256, bias=True)
  )
  (sdf_out): Linear(in_features=256, out_features=1, bias=True)
)
initializing first_frame: 0
initializing the first frame ...
PatchWorkpp::PatchWorkpp() - INITIALIZATION COMPLETE
tracking process started!
**** tracking frame:   0%|          | 0/99 [00:02<?, ?it/s]
Process Process-3:
Traceback (most recent call last):
  File "/home/hlldy/anaconda3/envs/nerf/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/home/hlldy/anaconda3/envs/nerf/lib/python3.8/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/hlldy/NeRF-LOAM/src/tracking.py", line 83, in spin
    self.do_tracking(share_data, current_frame, kf_buffer)
  File "/home/hlldy/NeRF-LOAM/src/tracking.py", line 101, in do_tracking
    decoder = share_data.decoder.cuda()
AttributeError: 'NoneType' object has no attribute 'cuda'
^C^CTraceback (most recent call last):
```

JunyuanDeng commented 5 months ago

Hello. Because we need the relative pose between the t-1 and t-2 frames, which is unavailable for the first frame, we run more tracking iterations on the first frame.

In your case, it seems that tracking of the first frame is still running while the mapping process already needs the tracked result.

So you may need to wait longer for the first frame. You can adjust the code here to wait longer, e.g., 60 s.
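For illustration only, here is a minimal sketch of the idea rather than the actual NeRF-LOAM code: instead of sleeping for a fixed time, the tracking side could poll until the mapping process has published the decoder into `share_data`. The helper name `wait_for_decoder` and its parameters are hypothetical.

```python
import time

def wait_for_decoder(share_data, timeout=300.0, poll_interval=1.0):
    """Block until the mapping process has set share_data.decoder, or give up.

    Hypothetical helper: share_data is the shared object seen in the
    traceback above; its decoder stays None until the first frame is mapped.
    """
    waited = 0.0
    while share_data.decoder is None:
        if waited >= timeout:
            raise RuntimeError(
                "Timed out waiting for the first-frame mapping "
                "(share_data.decoder is still None)")
        time.sleep(poll_interval)
        waited += poll_interval
    # Safe to move to the GPU only once the decoder actually exists.
    return share_data.decoder.cuda()
```

A polling loop like this avoids picking a single magic sleep value: on a slower GPU the first frame simply takes a few more iterations of the loop.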

Hope this can solve your problem!

hlldy6858 commented 5 months ago

Thank you for your help! I increased the wait in the code to 300 s, and the problem has now been resolved.

JunyuanDeng commented 5 months ago

Great! Happy to hear that!

JunyuanDeng commented 5 months ago

By the way, you can comment out all the torch.cuda.empty_cache() calls in "src/variations/render_helpers.py" and "src/variations/voxel_helpers.py" to gain some speed, though this will certainly use some extra GPU memory.
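A possible alternative, sketched here only as an illustration (this wrapper does not exist in the repository): route the calls through a small helper gated by a flag, so the memory/speed trade-off can be flipped without editing every call site. `EMPTY_CACHE` and `maybe_empty_cache` are hypothetical names.

```python
import torch

# Hypothetical flag: True keeps GPU memory usage low,
# False skips the flush for extra speed at the cost of more cached memory.
EMPTY_CACHE = True

def maybe_empty_cache():
    # torch.cuda.empty_cache() releases PyTorch's cached blocks back to the
    # driver; skipping it is faster but lets the allocator hold more memory.
    if EMPTY_CACHE and torch.cuda.is_available():
        torch.cuda.empty_cache()
```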

hlldy6858 commented 5 months ago

Thanks for your advice. If I have enough GPU memory, I will try commenting them out. :)