xinntao / EDVR

Winning Solution in NTIRE19 Challenges on Video Restoration and Enhancement (CVPR19 Workshops) - Video Restoration with Enhanced Deformable Convolutional Networks. EDVR has been merged into BasicSR and this repo is a mirror of BasicSR.
https://github.com/xinntao/BasicSR

Cannot run setup.py develop anymore #143


nerdogram commented 4 years ago

Not sure what changed but I am getting this error in Colab now:

```
running develop
running egg_info
writing deform_conv.egg-info/PKG-INFO
writing dependency_links to deform_conv.egg-info/dependency_links.txt
writing top-level names to deform_conv.egg-info/top_level.txt
/usr/local/lib/python3.6/dist-packages/torch/utils/cpp_extension.py:304: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja.. Falling back to using the slow distutils backend.
  warnings.warn(msg.format('we could not find ninja.'))
writing manifest file 'deform_conv.egg-info/SOURCES.txt'
running build_ext
building 'deform_conv_cuda' extension
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/local/lib/python3.6/dist-packages/torch/include -I/usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.6/dist-packages/torch/include/TH -I/usr/local/lib/python3.6/dist-packages/torch/include/THC -I/usr/local/cuda/include -I/usr/include/python3.6m -c src/deform_conv_cuda.cpp -o build/temp.linux-x86_64-3.6/src/deform_conv_cuda.o -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=deform_conv_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
src/deform_conv_cuda.cpp: In function ‘void shape_check(at::Tensor, at::Tensor, at::Tensor*, at::Tensor, int, int, int, int, int, int, int, int, int, int)’:
src/deform_conv_cuda.cpp:65:3: error: ‘AT_CHECK’ was not declared in this scope
   AT_CHECK(weight.ndimension() == 4,
   ^~~~~~~~
src/deform_conv_cuda.cpp:65:3: note: suggested alternative: ‘DCHECK’
   AT_CHECK(weight.ndimension() == 4,
   ^~~~~~~~
   DCHECK
src/deform_conv_cuda.cpp: In function ‘int deform_conv_forward_cuda(at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor, int, int, int, int, int, int, int, int, int, int, int)’:
src/deform_conv_cuda.cpp:192:3: error: ‘AT_CHECK’ was not declared in this scope
   AT_CHECK((offset.size(0) == batchSize), "invalid batch size of offset");
   ^~~~~~~~
src/deform_conv_cuda.cpp:192:3: note: suggested alternative: ‘DCHECK’
   AT_CHECK((offset.size(0) == batchSize), "invalid batch size of offset");
   ^~~~~~~~
   DCHECK
src/deform_conv_cuda.cpp: In function ‘int deform_conv_backward_input_cuda(at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor, int, int, int, int, int, int, int, int, int, int, int)’:
src/deform_conv_cuda.cpp:298:3: error: ‘AT_CHECK’ was not declared in this scope
   AT_CHECK((offset.size(0) == batchSize), 3, "invalid batch size of offset");
   ^~~~~~~~
src/deform_conv_cuda.cpp:298:3: note: suggested alternative: ‘DCHECK’
   AT_CHECK((offset.size(0) == batchSize), 3, "invalid batch size of offset");
   ^~~~~~~~
   DCHECK
src/deform_conv_cuda.cpp: In function ‘int deform_conv_backward_parameters_cuda(at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor, int, int, int, int, int, int, int, int, int, int, float, int)’:
src/deform_conv_cuda.cpp:413:3: error: ‘AT_CHECK’ was not declared in this scope
   AT_CHECK((offset.size(0) == batchSize), "invalid batch size of offset");
   ^~~~~~~~
src/deform_conv_cuda.cpp:413:3: note: suggested alternative: ‘DCHECK’
   AT_CHECK((offset.size(0) == batchSize), "invalid batch size of offset");
   ^~~~~~~~
   DCHECK
src/deform_conv_cuda.cpp: In function ‘void modulated_deform_conv_cuda_forward(at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor, int, int, int, int, int, int, int, int, int, int, bool)’:
src/deform_conv_cuda.cpp:493:3: error: ‘AT_CHECK’ was not declared in this scope
   AT_CHECK(input.is_contiguous(), "input tensor has to be contiguous");
   ^~~~~~~~
src/deform_conv_cuda.cpp:493:3: note: suggested alternative: ‘DCHECK’
   AT_CHECK(input.is_contiguous(), "input tensor has to be contiguous");
   ^~~~~~~~
   DCHECK
src/deform_conv_cuda.cpp: In function ‘void modulated_deform_conv_cuda_backward(at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor, int, int, int, int, int, int, int, int, int, int, bool)’:
src/deform_conv_cuda.cpp:574:3: error: ‘AT_CHECK’ was not declared in this scope
   AT_CHECK(input.is_contiguous(), "input tensor has to be contiguous");
   ^~~~~~~~
src/deform_conv_cuda.cpp:574:3: note: suggested alternative: ‘DCHECK’
   AT_CHECK(input.is_contiguous(), "input tensor has to be contiguous");
   ^~~~~~~~
   DCHECK
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
```

Anything I need to install/update beforehand?

YourSonnet18 commented 4 years ago

It seems the error is caused by a newer version of torch: the `AT_CHECK` macro was removed from PyTorch's C++ API in favor of `TORCH_CHECK`, so the extension no longer compiles against recent releases. I downgraded torch to 1.1 and it worked.
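For Colab users, a minimal sketch of that downgrade path (the `1.1.0` version is taken from the comment above; whether that wheel is still installable in a given Colab image may vary):

```shell
# Pin torch to a pre-TORCH_CHECK-only release where AT_CHECK still exists.
pip install "torch==1.1.0"
# Confirm the interpreter now sees the pinned version.
python -c "import torch; print(torch.__version__)"
# Then rerun the build from the extension's directory:
python setup.py develop
```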