When I run this code, I get the following error:
Traceback (most recent call last):
File "D:\软件\pycharm专业版\PyCharm 2022.1.3\plugins\python\helpers\pydev\pydevd.py", line 1491, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "D:\软件\pycharm专业版\PyCharm 2022.1.3\plugins\python\helpers\pydev_pydev_imps_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "D:/科研/Soblev_INrs/Sobolev_INRs-main/Experiments/inverse_rendering/train.py", line 696, in
train()
File "D:/科研/Soblev_INrs/Sobolev_INRs-main/Experiments/inverse_rendering/train.py", line 613, in train
der_loss = der_mse(rgb, coordinate_s, target_grad_s)
File "D:\科研\Soblev_INrs\Sobolev_INRs-main\Experiments\inverse_rendering\loss.py", line 15, in der_mse
pred_grad_r = diff_operators.gradient(
File "D:\科研\Soblev_INrs\Sobolev_INRs-main\Experiments\inverse_rendering\diff_operators.py", line 7, in gradient
grad = torch.autograd.grad(y, [x], grad_outputs=grad_outputs, create_graph=True)[0]
File "D:\Anaconda\envs\fengxiangNerf\lib\site-packages\torch\autograd__init__.py", line 276, in grad
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: CUDA out of memory. Tried to allocate 64.00 MiB (GPU 0; 6.00 GiB total capacity; 5.17 GiB already allocated; 0 bytes free; 5.31 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.
I don't know what is causing this error.
Sorry for the late reply.
The reason for this RuntimeError is that your GPU does not have enough memory. You can try running this code on a GPU with more memory.
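If a larger GPU is not available, two things that sometimes help are the allocator setting mentioned in the error message itself (max_split_size_mb via PYTORCH_CUDA_ALLOC_CONF) and evaluating the derivative loss over smaller chunks of sampled coordinates with a backward pass per chunk, so only one chunk's autograd graph is alive at a time. Below is a minimal sketch of that idea; the helper name, its parameters, and the chunk size are hypothetical and not taken from the Sobolev_INRs code base, so it would need to be adapted to the actual training loop in train.py.

import os
# Assumption: max_split_size_mb is the documented PYTORCH_CUDA_ALLOC_CONF option
# referenced by the error message; it must be set before CUDA is initialized.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch

def accumulate_derivative_loss(model, coords, target_grad, optimizer, chunk=2048):
    # Hypothetical helper (not part of Sobolev_INRs): process the sampled
    # coordinates in chunks and call backward() per chunk, so peak GPU memory
    # is bounded by one chunk's graph instead of the full batch.
    optimizer.zero_grad()
    n_chunks = (coords.shape[0] + chunk - 1) // chunk
    for start in range(0, coords.shape[0], chunk):
        c = coords[start:start + chunk].detach().clone().requires_grad_(True)
        y = model(c)
        # Gradient of the network output w.r.t. the input coordinates;
        # create_graph=True keeps it differentiable so it can enter the loss.
        g = torch.autograd.grad(y, c, grad_outputs=torch.ones_like(y),
                                create_graph=True)[0]
        loss = ((g - target_grad[start:start + chunk]) ** 2).mean() / n_chunks
        loss.backward()  # frees this chunk's graph before the next iteration
    optimizer.step()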