dunbar12138 / DSNeRF

Code release for DS-NeRF (Depth-supervised Neural Radiance Fields)
https://www.cs.cmu.edu/~dsnerf/
MIT License

DefaultCPUAllocator: not enough memory #50

Open Seagullflysxt opened 2 years ago

Seagullflysxt commented 2 years ago

I decreased N_rand, chunk, and N_iters to small values so training would fit on my computer. Even so, after 4999 iters this happened: `[enforce fail at C:\cb\pytorch_1000000000000\work\c10\core\impl\alloc_cpu.cpp:81] data. DefaultCPUAllocator: not enough memory: you tried to allocate 48771072 bytes.`

Using an RTX 3060, on Windows 10.

dunbar12138 commented 2 years ago

Thanks for your interest!

Honestly, I didn't test the code on Windows. But it seems you are allocating RAM instead of GPU memory. Could you make sure you're using CUDA?
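A quick way to check this is to confirm that PyTorch sees the GPU before training, and fall back (or fail loudly) otherwise. This is a minimal sketch, not part of the DS-NeRF code; the `pick_device` helper is hypothetical:

```python
import torch

def pick_device():
    # If CUDA is unavailable, allocations go through the CPU allocator
    # (system RAM), which is exactly where the OOM above comes from.
    if torch.cuda.is_available():
        print("Using GPU:", torch.cuda.get_device_name(0))
        return torch.device("cuda")
    print("CUDA not available, falling back to CPU")
    return torch.device("cpu")

device = pick_device()
x = torch.zeros(4, device=device)  # tensor lands on the chosen device
print(x.device.type)
```

If this prints `cpu` on a machine with an RTX 3060, the installed PyTorch build likely lacks CUDA support.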

Also, do you have the full error log? That might also help.

Seagullflysxt commented 2 years ago

Thanks for your reply!

I switched to another device with more GPU memory (24 GB), which solved the problem above. But I encountered another problem: at iteration 4999, a TypeError happened:

```
Traceback (most recent call last):
  File "run_nerf.py", line 1134, in <module>
    train()
  File "run_nerf.py", line 1075, in train
    rgbs, disps = render_path(torch.Tensor(poses[i_test]).to(device), hwf, args.chunk, render_kwargs_test, gt_imgs=images[i_test], savedir=testsavedir)
TypeError: expected CPU (got CUDA)
```

Using an A5000 (24 GB), on Linux, PyTorch 1.7.0.

dunbar12138 commented 2 years ago

I cannot reproduce this error on my end. Could you make sure you have pulled the latest version?

yyzzyy78 commented 2 years ago

Thanks for your work! I hit the same error with the latest version of the project, with CUDA 11.1 and PyTorch 1.7.0.

yyzzyy78 commented 2 years ago

Thanks for your work! I solved the issue by changing `torch.Tensor(poses[i_test]).to(device)` to `poses[i_test]`.
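For anyone hitting this later, the likely reason the fix works: `torch.Tensor(...)` is the legacy CPU-tensor constructor, and handing it a tensor that already lives on the GPU raises `TypeError: expected CPU (got CUDA)`. If `poses` is already a tensor on the right device, indexing it is enough. A CPU-only sketch (the `poses` array here is a stand-in, not real camera data):

```python
import torch

poses = torch.rand(3, 4, 4)  # stand-in for camera poses
i_test = [0, 2]

# If poses already lives on the target device, just index it:
render_poses = poses[i_test]

# If a device move is genuinely needed, .to(device) is the idiomatic way
# (it avoids the legacy torch.Tensor(...) constructor entirely):
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
render_poses = poses[i_test].to(device)
print(render_poses.shape)  # torch.Size([2, 4, 4])
```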