zhengqili / CGIntrinsics

This is the CGIntrinsics implementation described in the paper "CGIntrinsics: Better Intrinsic Image Decomposition through Physically-Based Rendering", Z. Li and N. Snavely, ECCV 2018.
MIT License

RuntimeError #7

Open AlannnZzz opened 5 years ago

AlannnZzz commented 5 years ago

I first got an "[Errno 32] Broken pipe" error; setting the number of workers in the DataLoader to 0 solved that. Now, when I run train.py, I get the RuntimeError below:

Traceback (most recent call last):
  File "", line 1, in
    runfile('E:/Research/img_decomp/CGIntrinsics/train.py', wdir='E:/Research/img_decomp/CGIntrinsics')
  File "E:\Anaconda\lib\site-packages\spyder\utils\site\sitecustomize.py", line 710, in runfile
    execfile(filename, namespace)
  File "E:\Anaconda\lib\site-packages\spyder\utils\site\sitecustomize.py", line 101, in execfile
    exec(compile(f.read(), filename, 'exec'), namespace)
  File "E:/Research/img_decomp/CGIntrinsics/train.py", line 105, in
    model.optimize_intrinsics(epoch, data_set_name)
  File "E:\Research\img_decomp\CGIntrinsics\models\intrinsic_model.py", line 98, in optimize_intrinsics
    self.forward_both()
  File "E:\Research\img_decomp\CGIntrinsics\models\intrinsic_model.py", line 81, in forward_both
    self.prediction_R, self.prediction_S = self.netG.forward(self.input_images)
  File "E:\Research\img_decomp\CGIntrinsics\models\networks.py", line 1819, in forward
    return nn.parallel.data_parallel(self.model, input, self.gpu_ids)
  File "E:\Anaconda\lib\site-packages\torch\nn\parallel\data_parallel.py", line 183, in data_parallel
    inputs, module_kwargs = scatter_kwargs(inputs, module_kwargs, device_ids, dim)
  File "E:\Anaconda\lib\site-packages\torch\nn\parallel\scatter_gather.py", line 35, in scatter_kwargs
    inputs = scatter(inputs, target_gpus, dim) if inputs else []
  File "E:\Anaconda\lib\site-packages\torch\nn\parallel\scatter_gather.py", line 28, in scatter
    return scatter_map(inputs)
  File "E:\Anaconda\lib\site-packages\torch\nn\parallel\scatter_gather.py", line 15, in scatter_map
    return list(zip(*map(scatter_map, obj)))
  File "E:\Anaconda\lib\site-packages\torch\nn\parallel\scatter_gather.py", line 13, in scatter_map
    return Scatter.apply(target_gpus, None, dim, obj)
  File "E:\Anaconda\lib\site-packages\torch\nn\parallel\_functions.py", line 89, in forward
    outputs = comm.scatter(input, target_gpus, chunk_sizes, ctx.dim, streams)
  File "E:\Anaconda\lib\site-packages\torch\cuda\comm.py", line 148, in scatter
    return tuple(torch._C._scatter(tensor, devices, chunk_sizes, dim, streams))
RuntimeError: CUDA error: invalid device ordinal (exchangeDevice at C:/a/w/1/s/tmp_conda_3.6_090826/conda/conda-bld/pytorch_1550394668685/work/aten/src\ATen/cuda/detail/CUDAGuardImpl.h:28) (no backtrace available)

I did not change --gpu_ids. What value should I set it to?
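For context, my understanding is that "invalid device ordinal" means one of the ids handed to nn.parallel.data_parallel (i.e. the --gpu_ids option) refers to a GPU that does not exist on this machine. The minimal sketch below is how I am checking which device ids are actually visible; the option name --gpu_ids is taken from this repository's training options, everything else is plain PyTorch:

import torch

# List the CUDA devices PyTorch can actually see on this machine.
n_gpus = torch.cuda.device_count()
print("visible CUDA devices:", n_gpus)
for i in range(n_gpus):
    print(f"  id {i}: {torch.cuda.get_device_name(i)}")

# Valid device ordinals are 0 .. n_gpus - 1, so on a single-GPU machine
# only id 0 exists and any other id triggers "invalid device ordinal".
print("usable --gpu_ids value:", ",".join(str(i) for i in range(n_gpus)))

If this prints only one device, I would expect passing --gpu_ids 0 when launching train.py (or editing the default in the options file) to keep data_parallel from scattering onto a GPU that does not exist, but I have not confirmed that against this repository's option parsing.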