dusty-nv / jetson-inference

Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
https://developer.nvidia.com/embedded/twodaystoademo
MIT License

install-pytorch.sh does not work. #521

Closed. ZiCog closed this issue 1 year ago.

ZiCog commented 4 years ago

I have a Jetson Nano with the OS installed from nv-jetson-nano-sd-card-image-r32.3.1.zip

As a newbie to Jetson and Python, I started working through the Hello AI World pages and all was going well until I came to "Transfer Learning with PyTorch". Running "install-pytorch.sh" and selecting "PyTorch v1.1.0 for Python 3.6" seemed to work OK, but the installed torchvision does not work.
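For reference, the steps up to that point were roughly the following (assuming the standard Hello AI World build layout; exact paths may differ):

$ cd jetson-inference/build
$ ./install-pytorch.sh        # selected "PyTorch v1.1.0 for Python 3.6" from the menu

After that, torch itself works but torchvision does not: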

$ python3
Python 3.6.9 (default, Nov  7 2019, 10:44:02) 
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> print(torch.__version__)
1.1.0
>>> print('CUDA available: ' + str(torch.cuda.is_available()))
CUDA available: True
>>> a = torch.cuda.FloatTensor(2).zero_()
>>> print('Tensor a = ' + str(a))
Tensor a = tensor([0., 0.], device='cuda:0')
>>> b = torch.randn(2).cuda()
>>> print('Tensor b = ' + str(b))
Tensor b = tensor([-0.0116, -0.2358], device='cuda:0')
>>> c = a + b
>>> print('Tensor c = ' + str(c))
Tensor c = tensor([-0.0116, -0.2358], device='cuda:0')
>>> import torchvision
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.6/dist-packages/torchvision-0.3.0-py3.6-linux-aarch64.egg/torchvision/__init__.py", line 2, in <module>
  File "/usr/local/lib/python3.6/dist-packages/torchvision-0.3.0-py3.6-linux-aarch64.egg/torchvision/datasets/__init__.py", line 9, in <module>
  File "/usr/local/lib/python3.6/dist-packages/torchvision-0.3.0-py3.6-linux-aarch64.egg/torchvision/datasets/fakedata.py", line 3, in <module>
  File "/usr/local/lib/python3.6/dist-packages/torchvision-0.3.0-py3.6-linux-aarch64.egg/torchvision/transforms/__init__.py", line 1, in <module>
  File "/usr/local/lib/python3.6/dist-packages/torchvision-0.3.0-py3.6-linux-aarch64.egg/torchvision/transforms/transforms.py", line 17, in <module>
  File "/usr/local/lib/python3.6/dist-packages/torchvision-0.3.0-py3.6-linux-aarch64.egg/torchvision/transforms/functional.py", line 5, in <module>
ImportError: cannot import name 'PILLOW_VERSION'
>>> 

I subsequently tried to install PyTorch v1.4.0 from the announcement post on the forum here: https://devtalk.nvidia.com/default/topic/1049071/jetson-nano/pytorch-for-jetson-nano-version-1-4-0-now-available/1

That also failed; sadly, I did not record the error.

So how does one install PyTorch on Nano with the current OS such that one can complete the inference exercises?

Can the document be updated to describe whatever installation process does work?

dusty-nv commented 4 years ago

Hi @ZiCog, can you try running pip3 install 'pillow<7' and then see if you can import torchvision?

Here is the PyTorch bug about this: https://github.com/pytorch/vision/issues/1712
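For context, the torchvision 0.3.0 build imports the PILLOW_VERSION constant, which Pillow 7.0 removed (that is what the linked bug is about), so any Pillow release below 7 still provides it. A quick way to check and fix (assuming pip3 is available):

$ python3 -c "from PIL import PILLOW_VERSION"    # raises ImportError on Pillow >= 7.0, succeeds on 6.x
$ pip3 install "pillow<7"                        # downgrade so the existing torchvision egg imports cleanly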

ZiCog commented 4 years ago

Bingo! That did it.

$ pip3 install "pillow<7"
Defaulting to user installation because normal site-packages is not writeable
Collecting pillow<7
  Downloading Pillow-6.2.2.tar.gz (37.8 MB)
     |████████████████████████████████| 37.8 MB 6.4 kB/s 
Building wheels for collected packages: pillow
  Building wheel for pillow (setup.py) ... done
  Created wheel for pillow: filename=Pillow-6.2.2-cp36-cp36m-linux_aarch64.whl size=966497 sha256=d06570fe4c6adf508f1c4a3766f545ff62d77f7628e164733235e37faaf5d87e
  Stored in directory: /home/zicog/.cache/pip/wheels/19/4f/2e/b77ea60ebf837f5b06ed497d4b6798c3e4da683def2ad65ca2
Successfully built pillow
Installing collected packages: pillow
Successfully installed pillow-6.2.2
$ python3
Python 3.6.9 (default, Nov  7 2019, 10:44:02) 
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torchvision
>>> print(torchvision.__version__)
0.3.0
>>> 

Thanks.

dusty-nv commented 4 years ago

OK thanks, I just patched the install-pytorch.sh script in master for this - see commit 83f2ae

I also updated the manual install instructions for torchvision here.

ZiCog commented 4 years ago

Great.

As I said, the instructions for installing PyTorch v1.4.0 on that forum page do not work on the Nano. But I guess that is another story.

Anyway, the cat/dog training is now underway here.

Thanks again.

flurpo commented 4 years ago

Dusty-nv,

Thanks for the update. I'm running JetPack 4.3 on my TX2. I installed the wheels to get PyTorch working on Python 2.7 and then on Python 3.6.9. I can import torch and torchvision fine under Python 2.7, but on Python 3.6.9 I can only import torch, not torchvision.

JetPack 4.3, Ubuntu 18.04, Torch 1.4.0 (+ libopenblas-base)

>>> import torchvision
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'torchvision'

When trying to install torchvision per the link in your message above, my installation exits with:

error: command 'aarch64-linux-gnu-gcc' failed with exit status 1

Anything jump out here? I'm so close. Hate to have to rebuild this again. Best,

dusty-nv commented 4 years ago

When trying to install torchvision per the link in your message above, my installation exits with:

error: command 'aarch64-linux-gnu-gcc' failed with exit status 1

Hmm, were there any other errors listed while building torchvision? Typically you would see a fuller error log above a message like this.

If you are building torchvision for Python 2 and then Python 3, you may need a sudo python setup.py clean in between. And then for Python 3, remember to use sudo python3 setup.py install (instead of python2).
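A rough sketch of what that sequence looks like (the torchvision checkout directory is just an example):

$ cd torchvision
$ sudo python setup.py clean       # clear out the Python 2 build artifacts
$ sudo python3 setup.py install    # then build and install for Python 3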

flurpo commented 4 years ago

Wow... thanks so much for the quick response, dusty-nv. I've been stepping through setup.py all morning. I suspect something in my branch.

I used your setup string:

$ sudo apt-get install libjpeg-dev zlib1g-dev
$ git clone --branch build/v0.5.0 https://github.com/pytorch/vision torchvision
$ cd torchvision
$ sudo python3 setup.py install

Unfortunately, the log is several thousand lines long, so I abbreviated it a bit to show the recurring error:

tx2:/jetson-inference/torchvision$ sudo python setup.py clean Building wheel torchvision-0.5.0 running clean tx2:/jetson-inference/torchvision$ sudo python3 setup.py install Building wheel torchvision-0.5.0 running install running bdist_egg running egg_info creating torchvision.egg-info writing torchvision.egg-info/PKG-INFO writing dependency_links to torchvision.egg-info/dependency_links.txt writing requirements to torchvision.egg-info/requires.txt writing top-level names to torchvision.egg-info/top_level.txt writing manifest file 'torchvision.egg-info/SOURCES.txt' reading manifest file 'torchvision.egg-info/SOURCES.txt' reading manifest template 'MANIFEST.in' warning: no previously-included files matching 'pycache' found under directory '' warning: no previously-included files matching '.py[co]' found under directory '*' writing manifest file 'torchvision.egg-info/SOURCES.txt' installing library code to build/bdist.linux-aarch64/egg running install_lib running build_py creating build creating build/lib.linux-aarch64-3.6 creating build/lib.linux-aarch64-3.6/torchvision copying torchvision/extension.py -> build/lib.linux-aarch64-3.6/torchvision copying torchvision/init.py -> build/lib.linux-aarch64-3.6/torchvision copying torchvision/utils.py -> build/lib.linux-aarch64-3.6/torchvision copying torchvision/version.py -> build/lib.linux-aarch64-3.6/torchvision creating build/lib.linux-aarch64-3.6/torchvision/models copying torchvision/models/alexnet.py -> build/lib.linux-aarch64-3.6/torchvision/models copying torchvision/models/googlenet.py -> build/lib.linux-aarch64-3.6/torchvision/models copying torchvision/models/densenet.py -> build/lib.linux-aarch64-3.6/torchvision/models copying torchvision/models/shufflenetv2.py -> build/lib.linux-aarch64-3.6/torchvision/models copying torchvision/models/init.py -> build/lib.linux-aarch64-3.6/torchvision/models copying torchvision/models/utils.py -> build/lib.linux-aarch64-3.6/torchvision/models copying torchvision/models/resnet.py -> build/lib.linux-aarch64-3.6/torchvision/models copying torchvision/models/inception.py -> build/lib.linux-aarch64-3.6/torchvision/models copying torchvision/models/mnasnet.py -> build/lib.linux-aarch64-3.6/torchvision/models copying torchvision/models/_utils.py -> build/lib.linux-aarch64-3.6/torchvision/models copying torchvision/models/mobilenet.py -> build/lib.linux-aarch64-3.6/torchvision/models copying torchvision/models/squeezenet.py -> build/lib.linux-aarch64-3.6/torchvision/models copying torchvision/models/vgg.py -> build/lib.linux-aarch64-3.6/torchvision/models creating build/lib.linux-aarch64-3.6/torchvision/ops copying torchvision/ops/boxes.py -> build/lib.linux-aarch64-3.6/torchvision/ops copying torchvision/ops/deform_conv.py -> build/lib.linux-aarch64-3.6/torchvision/ops copying torchvision/ops/_register_onnx_ops.py -> build/lib.linux-aarch64-3.6/torchvision/ops copying torchvision/ops/ps_roi_align.py -> build/lib.linux-aarch64-3.6/torchvision/ops copying torchvision/ops/roi_align.py -> build/lib.linux-aarch64-3.6/torchvision/ops copying torchvision/ops/init.py -> build/lib.linux-aarch64-3.6/torchvision/ops copying torchvision/ops/roi_pool.py -> build/lib.linux-aarch64-3.6/torchvision/ops copying torchvision/ops/new_empty_tensor.py -> build/lib.linux-aarch64-3.6/torchvision/ops copying torchvision/ops/_utils.py -> build/lib.linux-aarch64-3.6/torchvision/ops copying torchvision/ops/feature_pyramid_network.py -> build/lib.linux-aarch64-3.6/torchvision/ops copying torchvision/ops/misc.py -> 
build/lib.linux-aarch64-3.6/torchvision/ops copying torchvision/ops/ps_roi_pool.py -> build/lib.linux-aarch64-3.6/torchvision/ops copying torchvision/ops/poolers.py -> build/lib.linux-aarch64-3.6/torchvision/ops creating build/lib.linux-aarch64-3.6/torchvision/datasets copying torchvision/datasets/cifar.py -> build/lib.linux-aarch64-3.6/torchvision/datasets copying torchvision/datasets/folder.py -> build/lib.linux-aarch64-3.6/torchvision/datasets copying torchvision/datasets/imagenet.py -> build/lib.linux-aarch64-3.6/torchvision/datasets copying torchvision/datasets/caltech.py -> build/lib.linux-aarch64-3.6/torchvision/datasets copying torchvision/datasets/sbu.py -> build/lib.linux-aarch64-3.6/torchvision/datasets copying torchvision/datasets/kinetics.py -> build/lib.linux-aarch64-3.6/torchvision/datasets copying torchvision/datasets/lsun.py -> build/lib.linux-aarch64-3.6/torchvision/datasets copying torchvision/datasets/voc.py -> build/lib.linux-aarch64-3.6/torchvision/datasets copying torchvision/datasets/mnist.py -> build/lib.linux-aarch64-3.6/torchvision/datasets copying torchvision/datasets/init.py -> build/lib.linux-aarch64-3.6/torchvision/datasets copying torchvision/datasets/sbd.py -> build/lib.linux-aarch64-3.6/torchvision/datasets copying torchvision/datasets/utils.py -> build/lib.linux-aarch64-3.6/torchvision/datasets copying torchvision/datasets/usps.py -> build/lib.linux-aarch64-3.6/torchvision/datasets copying torchvision/datasets/hmdb51.py -> build/lib.linux-aarch64-3.6/torchvision/datasets copying torchvision/datasets/phototour.py -> build/lib.linux-aarch64-3.6/torchvision/datasets copying torchvision/datasets/coco.py -> build/lib.linux-aarch64-3.6/torchvision/datasets copying torchvision/datasets/omniglot.py -> build/lib.linux-aarch64-3.6/torchvision/datasets copying torchvision/datasets/cityscapes.py -> build/lib.linux-aarch64-3.6/torchvision/datasets copying torchvision/datasets/fakedata.py -> build/lib.linux-aarch64-3.6/torchvision/datasets copying torchvision/datasets/flickr.py -> build/lib.linux-aarch64-3.6/torchvision/datasets copying torchvision/datasets/stl10.py -> build/lib.linux-aarch64-3.6/torchvision/datasets copying torchvision/datasets/vision.py -> build/lib.linux-aarch64-3.6/torchvision/datasets copying torchvision/datasets/ucf101.py -> build/lib.linux-aarch64-3.6/torchvision/datasets copying torchvision/datasets/semeion.py -> build/lib.linux-aarch64-3.6/torchvision/datasets copying torchvision/datasets/celeba.py -> build/lib.linux-aarch64-3.6/torchvision/datasets copying torchvision/datasets/svhn.py -> build/lib.linux-aarch64-3.6/torchvision/datasets copying torchvision/datasets/video_utils.py -> build/lib.linux-aarch64-3.6/torchvision/datasets creating build/lib.linux-aarch64-3.6/torchvision/io copying torchvision/io/init.py -> build/lib.linux-aarch64-3.6/torchvision/io copying torchvision/io/_video_opt.py -> build/lib.linux-aarch64-3.6/torchvision/io copying torchvision/io/video.py -> build/lib.linux-aarch64-3.6/torchvision/io creating build/lib.linux-aarch64-3.6/torchvision/transforms copying torchvision/transforms/functional.py -> build/lib.linux-aarch64-3.6/torchvision/transforms copying torchvision/transforms/transforms.py -> build/lib.linux-aarch64-3.6/torchvision/transforms copying torchvision/transforms/init.py -> build/lib.linux-aarch64-3.6/torchvision/transforms copying torchvision/transforms/functional_tensor.py -> build/lib.linux-aarch64-3.6/torchvision/transforms copying torchvision/transforms/_transforms_video.py -> 
build/lib.linux-aarch64-3.6/torchvision/transforms copying torchvision/transforms/_functional_video.py -> build/lib.linux-aarch64-3.6/torchvision/transforms creating build/lib.linux-aarch64-3.6/torchvision/models/segmentation copying torchvision/models/segmentation/init.py -> build/lib.linux-aarch64-3.6/torchvision/models/segmentation copying torchvision/models/segmentation/fcn.py -> build/lib.linux-aarch64-3.6/torchvision/models/segmentation copying torchvision/models/segmentation/_utils.py -> build/lib.linux-aarch64-3.6/torchvision/models/segmentation copying torchvision/models/segmentation/segmentation.py -> build/lib.linux-aarch64-3.6/torchvision/models/segmentation copying torchvision/models/segmentation/deeplabv3.py -> build/lib.linux-aarch64-3.6/torchvision/models/segmentation creating build/lib.linux-aarch64-3.6/torchvision/models/quantization copying torchvision/models/quantization/googlenet.py -> build/lib.linux-aarch64-3.6/torchvision/models/quantization copying torchvision/models/quantization/shufflenetv2.py -> build/lib.linux-aarch64-3.6/torchvision/models/quantization copying torchvision/models/quantization/init.py -> build/lib.linux-aarch64-3.6/torchvision/models/quantization copying torchvision/models/quantization/utils.py -> build/lib.linux-aarch64-3.6/torchvision/models/quantization copying torchvision/models/quantization/resnet.py -> build/lib.linux-aarch64-3.6/torchvision/models/quantization copying torchvision/models/quantization/inception.py -> build/lib.linux-aarch64-3.6/torchvision/models/quantization copying torchvision/models/quantization/mobilenet.py -> build/lib.linux-aarch64-3.6/torchvision/models/quantization creating build/lib.linux-aarch64-3.6/torchvision/models/video copying torchvision/models/video/init.py -> build/lib.linux-aarch64-3.6/torchvision/models/video copying torchvision/models/video/resnet.py -> build/lib.linux-aarch64-3.6/torchvision/models/video creating build/lib.linux-aarch64-3.6/torchvision/models/detection copying torchvision/models/detection/image_list.py -> build/lib.linux-aarch64-3.6/torchvision/models/detection copying torchvision/models/detection/rpn.py -> build/lib.linux-aarch64-3.6/torchvision/models/detection copying torchvision/models/detection/faster_rcnn.py -> build/lib.linux-aarch64-3.6/torchvision/models/detection copying torchvision/models/detection/roi_heads.py -> build/lib.linux-aarch64-3.6/torchvision/models/detection copying torchvision/models/detection/backbone_utils.py -> build/lib.linux-aarch64-3.6/torchvision/models/detection copying torchvision/models/detection/init.py -> build/lib.linux-aarch64-3.6/torchvision/models/detection copying torchvision/models/detection/_utils.py -> build/lib.linux-aarch64-3.6/torchvision/models/detection copying torchvision/models/detection/mask_rcnn.py -> build/lib.linux-aarch64-3.6/torchvision/models/detection copying torchvision/models/detection/generalized_rcnn.py -> build/lib.linux-aarch64-3.6/torchvision/models/detection copying torchvision/models/detection/keypoint_rcnn.py -> build/lib.linux-aarch64-3.6/torchvision/models/detection copying torchvision/models/detection/transform.py -> build/lib.linux-aarch64-3.6/torchvision/models/detection creating build/lib.linux-aarch64-3.6/torchvision/datasets/samplers copying torchvision/datasets/samplers/init.py -> build/lib.linux-aarch64-3.6/torchvision/datasets/samplers copying torchvision/datasets/samplers/clip_sampler.py -> build/lib.linux-aarch64-3.6/torchvision/datasets/samplers running build_ext building 'torchvision._C' extension 
creating build/temp.linux-aarch64-3.6 creating build/temp.linux-aarch64-3.6/jetson-inference creating build/temp.linux-aarch64-3.6/jetson-inference/torchvision creating build/temp.linux-aarch64-3.6/jetson-inference/torchvision/torchvision creating build/temp.linux-aarch64-3.6/jetson-inference/torchvision/torchvision/csrc creating build/temp.linux-aarch64-3.6/jetson-inference/torchvision/torchvision/csrc/cpu creating build/temp.linux-aarch64-3.6/jetson-inference/torchvision/torchvision/csrc/cuda aarch64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DWITH_CUDA -I/jetson-inference/torchvision/torchvision/csrc -I/home/markw/.local/lib/python3.6/site-packages/torch/include -I/home/markw/.local/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/markw/.local/lib/python3.6/site-packages/torch/include/TH -I/home/markw/.local/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda/include -I/usr/include/python3.6m -c /jetson-inference/torchvision/torchvision/csrc/vision.cpp -o build/temp.linux-aarch64-3.6/jetson-inference/torchvision/torchvision/csrc/vision.o -O0 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++11 In file included from /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/core/boxing/KernelFunction.h:5:0, from /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/core/dispatch/DispatchTable.h:10, from /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/core/dispatch/OperatorEntry.h:3, from /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/core/dispatch/Dispatcher.h:3, from /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorMethods.h:10, from /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/Tensor.h:12, from /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/Context.h:4, from /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/ATen.h:5, from /home/markw/.local/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3, from /home/markw/.local/lib/python3.6/site-packages/torch/include/torch/script.h:3, from /jetson-inference/torchvision/torchvision/csrc/vision.cpp:2: /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/core/boxing/kernel_functor.h: In instantiation of ‘typename c10::guts::infer_function_traits::type::return_type c10::detail::call_functor_with_args_fromstack(Functor, c10::Stack, c10::guts::indexsequence<INDEX ...>) [with Functor = c10::detail::WrapRuntimeKernelFunctor<long int ()(), long int, c10::guts::typelist::typelist<> >; bool AllowDeprecatedTypes = true; long unsigned int ...ivalue_arg_indices = {}; typename c10::guts::infer_function_traits::type::return_type = long int; c10::Stack = std::vector<c10::IValue, std::allocator >; c10::guts::index_sequence<INDEX ...> = c10::guts::integer_sequence]’: /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/core/boxing/kernel_functor.h:202:77: required from ‘typename c10::guts::infer_function_traits::type::return_type c10::detail::call_functor_with_args_from_stack(Functor, c10::Stack) [with Functor = c10::detail::WrapRuntimeKernelFunctor_<long int ()(), long int, c10::guts::typelist::typelist<> >; bool AllowDeprecatedTypes = true; typename c10::guts::infer_function_traits::type::return_type = long int; c10::Stack = std::vector<c10::IValue, std::allocator >]’ 
/home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/core/boxing/kernel_functor.h:234:91: required from ‘static void c10::detail::wrap_kernel_functor_boxed<KernelFunctor, AllowDeprecatedTypes, typename std::enable_if<(! std::is_same<void, typename c10::guts::infer_function_traits::type::returntype>::value), void>::type>::call(c10::OperatorKernel, c10::Stack) [with KernelFunctor = c10::detail::WrapRuntimeKernelFunctor<long int ()(), long int, c10::guts::typelist::typelist<> >; bool AllowDeprecatedTypes = true; c10::Stack = std::vector<c10::IValue, std::allocator >]’ /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/core/boxing/KernelFunction.h:172:7: required from ‘static c10::KernelFunction c10::KernelFunction::makeFromUnboxedFunctor(std::uniqueptr) [with bool AllowLegacyTypes = true; KernelFunctor = c10::detail::WrapRuntimeKernelFunctor<long int ()(), long int, c10::guts::typelist::typelist<> >]’ /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/core/boxing/KernelFunction.h:313:111: required from ‘static c10::KernelFunction c10::KernelFunction::makeFromUnboxedRuntimeFunction(FuncType) [with bool AllowLegacyTypes = true; FuncType = long int()]’ /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/core/op_registration/op_registration.h:517:72: required from ‘c10::guts::enable_if_t<(c10::guts::is_function_type::value && (! std::is_same<FuncType, void(c10::OperatorKernel, std::vector<c10::IValue, std::allocator >)>::value)), c10::RegisterOperators&&> c10::RegisterOperators::op(const string&, FuncType, c10::RegisterOperators::Options&&) && [with FuncType = long int(); c10::guts::enable_if_t<(c10::guts::is_function_type::value && (! std::is_same<FuncType, void(c10::OperatorKernel, std::vector<c10::IValue, std::allocator >)>::value)), c10::RegisterOperators&&> = c10::RegisterOperators&&; std::cxx11::string = std::cxx11::basic_string]’ /jetson-inference/torchvision/torchvision/csrc/vision.cpp:52:57: required from here /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/core/boxing/kernel_functor.h:191:22: warning: variable ‘num_ivalue_args’ set but not used [-Wunused-but-set-variable] constexpr size_t num_ivalue_args = sizeof...(ivalue_arg_indices); ^~~~~~~ aarch64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DWITH_CUDA -I/jetson-inference/torchvision/torchvision/csrc -I/home/markw/.local/lib/python3.6/site-packages/torch/include -I/home/markw/.local/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/markw/.local/lib/python3.6/site-packages/torch/include/TH -I/home/markw/.local/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda/include -I/usr/include/python3.6m -c /jetson-inference/torchvision/torchvision/csrc/cpu/DeformConv_cpu.cpp -o build/temp.linux-aarch64-3.6/jetson-inference/torchvision/torchvision/csrc/cpu/DeformConv_cpu.o -O0 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++11 aarch64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DWITH_CUDA -I/jetson-inference/torchvision/torchvision/csrc -I/home/markw/.local/lib/python3.6/site-packages/torch/include -I/home/markw/.local/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/markw/.local/lib/python3.6/site-packages/torch/include/TH 
-I/home/markw/.local/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda/include -I/usr/include/python3.6m -c /jetson-inference/torchvision/torchvision/csrc/cpu/PSROIPool_cpu.cpp -o build/temp.linux-aarch64-3.6/jetson-inference/torchvision/torchvision/csrc/cpu/PSROIPool_cpu.o -O0 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++11

dusty-nv commented 4 years ago

Hmm, I don't actually see a build error in there. Are you sure the board isn't running out of memory during compilation?

You could keep an eye on sudo tegrastats to monitor the memory usage, or mount additional swap.
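If memory does turn out to be the problem, mounting a swap file is one way around it; a minimal sketch (the size and path are just examples):

$ sudo tegrastats                        # watch RAM usage while the build runs
$ sudo fallocate -l 4G /mnt/4GB.swap     # create a 4 GB swap file
$ sudo mkswap /mnt/4GB.swap
$ sudo swapon /mnt/4GB.swap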

flurpo commented 4 years ago

Thanks for the quick replies, dusty-nv... I'll go use tegrastats to make sure I'm not running out of room.

I added the end of the installation log below:

                                                                                                                                                                                                                                                                                                                                                                                                                                                     ^

/home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:322:1: note: declared here T data() const { ^ ~~ /jetson-inference/torchvision/torchvision/csrc/cuda/PSROIPool_cuda.cu:249:795: warning: ‘T at::Tensor::data() const [with T = int]’ is deprecated [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:322:1: note: declared here T data() const { ^ ~~ /jetson-inference/torchvision/torchvision/csrc/cuda/PSROIPool_cuda.cu:249:918: warning: ‘T at::Tensor::data() const [with T = float]’ is deprecated [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:322:1: note: declared here T data() const { ^ ~~ /jetson-inference/torchvision/torchvision/csrc/cuda/PSROIPool_cuda.cu:249:957: warning: ‘T at::Tensor::data() const [with T = float]’ is deprecated [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:322:1: note: declared here T data() const { ^ ~~ /jetson-inference/torchvision/torchvision/csrc/cuda/PSROIPool_cuda.cu: In lambda function: /jetson-inference/torchvision/torchvision/csrc/cuda/PSROIPool_cuda.cu:249:1194: warning: ‘T at::Tensor::data() const [with T = c10::Half]’ is deprecated [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:322:1: note: declared here T data() const { ^ ~~ /jetson-inference/torchvision/torchvision/csrc/cuda/PSROIPool_cuda.cu:249:1227: warning: ‘T at::Tensor::data() const [with T = int]’ is deprecated [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:322:1: note: declared here T data() const { ^ ~~ /jetson-inference/torchvision/torchvision/csrc/cuda/PSROIPool_cuda.cu:249:1354: warning: ‘T at::Tensor::data() const [with T = c10::Half]’ is deprecated [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:322:1: note: declared here T data() const { ^ ~~ /jetson-inference/torchvision/torchvision/csrc/cuda/PSROIPool_cuda.cu:249:1397: warning: ‘T at::Tensor::data() const [with T = c10::Half]’ is deprecated [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:322:1: note: declared here T * data() const { ^ ~~ /usr/local/cuda/bin/nvcc -DWITH_CUDA -I/jetson-inference/torchvision/torchvision/csrc -I/home/markw/.local/lib/python3.6/site-packages/torch/include -I/home/markw/.local/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/markw/.local/lib/python3.6/site-packages/torch/include/TH -I/home/markw/.local/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda/include -I/usr/include/python3.6m -c /jetson-inference/torchvision/torchvision/csrc/cuda/nms_cuda.cu -o build/temp.linux-aarch64-3.6/jetson-inference/torchvision/torchvision/csrc/cuda/nms_cuda.o -DCUDA_NO_HALF_OPERATORS -DCUDA_NO_HALF_CONVERSIONS -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options '-fPIC' -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=1 -gencode=arch=compute_62,code=sm_62 -std=c++11 
/home/markw/.local/lib/python3.6/site-packages/torch/include/c10/core/TensorTypeSet.h(44): warning: integer conversion resulted in a change of sign

/home/markw/.local/lib/python3.6/site-packages/torch/include/c10/core/TensorTypeSet.h(44): warning: integer conversion resulted in a change of sign

/jetson-inference/torchvision/torchvision/csrc/cuda/nms_cuda.cu: In lambda function: /jetson-inference/torchvision/torchvision/csrc/cuda/nms_cuda.cu:93:104: warning: ‘c10::ScalarType detail::scalar_type(const at::DeprecatedTypeProperties&)’ is deprecated [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/Dispatch.h:31:1: note: declared here inline at::ScalarType scalar_type(const at::DeprecatedTypeProperties &t) { ^~~ /usr/local/cuda/bin/nvcc -DWITH_CUDA -I/jetson-inference/torchvision/torchvision/csrc -I/home/markw/.local/lib/python3.6/site-packages/torch/include -I/home/markw/.local/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/markw/.local/lib/python3.6/site-packages/torch/include/TH -I/home/markw/.local/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda/include -I/usr/include/python3.6m -c /jetson-inference/torchvision/torchvision/csrc/cuda/ROIAlign_cuda.cu -o build/temp.linux-aarch64-3.6/jetson-inference/torchvision/torchvision/csrc/cuda/ROIAlign_cuda.o -DCUDA_NO_HALF_OPERATORS -DCUDA_NO_HALF_CONVERSIONS -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options '-fPIC' -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=1 -gencode=arch=compute_62,code=sm_62 -std=c++11 /home/markw/.local/lib/python3.6/site-packages/torch/include/c10/core/TensorTypeSet.h(44): warning: integer conversion resulted in a change of sign

/home/markw/.local/lib/python3.6/site-packages/torch/include/c10/core/TensorTypeSet.h(44): warning: integer conversion resulted in a change of sign

/jetson-inference/torchvision/torchvision/csrc/cuda/ROIAlign_cuda.cu: In lambda function: /jetson-inference/torchvision/torchvision/csrc/cuda/ROIAlign_cuda.cu:340:98: warning: ‘c10::ScalarType detail::scalar_type(const at::DeprecatedTypeProperties&)’ is deprecated [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF(input.type(), "ROIAlign_forward", [&] { ^ /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/Dispatch.h:31:1: note: declared here inline at::ScalarType scalar_type(const at::DeprecatedTypeProperties &t) { ^~~ /jetson-inference/torchvision/torchvision/csrc/cuda/ROIAlign_cuda.cu: In lambda function: /jetson-inference/torchvision/torchvision/csrc/cuda/ROIAlign_cuda.cu:402:97: warning: ‘c10::ScalarType detail::scalar_type(const at::DeprecatedTypeProperties&)’ is deprecated [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF(grad.type(), "ROIAlign_backward", [&] { ^ /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/Dispatch.h:31:1: note: declared here inline at::ScalarType scalar_type(const at::DeprecatedTypeProperties &t) { ^~~ /usr/local/cuda/bin/nvcc -DWITH_CUDA -I/jetson-inference/torchvision/torchvision/csrc -I/home/markw/.local/lib/python3.6/site-packages/torch/include -I/home/markw/.local/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/markw/.local/lib/python3.6/site-packages/torch/include/TH -I/home/markw/.local/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda/include -I/usr/include/python3.6m -c /jetson-inference/torchvision/torchvision/csrc/cuda/PSROIAlign_cuda.cu -o build/temp.linux-aarch64-3.6/jetson-inference/torchvision/torchvision/csrc/cuda/PSROIAlign_cuda.o -DCUDA_NO_HALF_OPERATORS -DCUDA_NO_HALF_CONVERSIONS -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options '-fPIC' -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=1 -gencode=arch=compute_62,code=sm_62 -std=c++11 /home/markw/.local/lib/python3.6/site-packages/torch/include/c10/core/TensorTypeSet.h(44): warning: integer conversion resulted in a change of sign

/home/markw/.local/lib/python3.6/site-packages/torch/include/c10/core/TensorTypeSet.h(44): warning: integer conversion resulted in a change of sign

/jetson-inference/torchvision/torchvision/csrc/cuda/PSROIAlign_cuda.cu: In lambda function: /jetson-inference/torchvision/torchvision/csrc/cuda/PSROIAlign_cuda.cu:345:342: warning: ‘T at::Tensor::data() const [with T = double]’ is deprecated [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:322:1: note: declared here T data() const { ^ ~~ /jetson-inference/torchvision/torchvision/csrc/cuda/PSROIAlign_cuda.cu:345:467: warning: ‘T at::Tensor::data() const [with T = double]’ is deprecated [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:322:1: note: declared here T data() const { ^ ~~ /jetson-inference/torchvision/torchvision/csrc/cuda/PSROIAlign_cuda.cu:345:508: warning: ‘T at::Tensor::data() const [with T = double]’ is deprecated [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:322:1: note: declared here T data() const { ^ ~~ /jetson-inference/torchvision/torchvision/csrc/cuda/PSROIAlign_cuda.cu:345:541: warning: ‘T at::Tensor::data() const [with T = int]’ is deprecated [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:322:1: note: declared here T data() const { ^ ~~ /jetson-inference/torchvision/torchvision/csrc/cuda/PSROIAlign_cuda.cu: In lambda function: /jetson-inference/torchvision/torchvision/csrc/cuda/PSROIAlign_cuda.cu:345:769: warning: ‘T at::Tensor::data() const [with T = float]’ is deprecated [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:322:1: note: declared here T data() const { ^ ~~ /jetson-inference/torchvision/torchvision/csrc/cuda/PSROIAlign_cuda.cu:345:893: warning: ‘T at::Tensor::data() const [with T = float]’ is deprecated [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:322:1: note: declared here T data() const { ^ ~~ /jetson-inference/torchvision/torchvision/csrc/cuda/PSROIAlign_cuda.cu:345:933: warning: ‘T at::Tensor::data() const [with T = float]’ is deprecated [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:322:1: note: declared here T data() const { ^ ~~ /jetson-inference/torchvision/torchvision/csrc/cuda/PSROIAlign_cuda.cu:345:966: warning: ‘T at::Tensor::data() const [with T = int]’ is deprecated [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:322:1: note: declared here T data() const { ^ ~~ /jetson-inference/torchvision/torchvision/csrc/cuda/PSROIAlign_cuda.cu: In lambda function: /jetson-inference/torchvision/torchvision/csrc/cuda/PSROIAlign_cuda.cu:345:1205: warning: ‘T at::Tensor::data() const [with T = c10::Half]’ is deprecated [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:322:1: note: declared here T data() const { ^ ~~ /jetson-inference/torchvision/torchvision/csrc/cuda/PSROIAlign_cuda.cu:345:1333: warning: ‘T at::Tensor::data() const [with T = 
c10::Half]’ is deprecated [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:322:1: note: declared here T data() const { ^ ~~ /jetson-inference/torchvision/torchvision/csrc/cuda/PSROIAlign_cuda.cu:345:1377: warning: ‘T at::Tensor::data() const [with T = c10::Half]’ is deprecated [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:322:1: note: declared here T data() const { ^ ~~ /jetson-inference/torchvision/torchvision/csrc/cuda/PSROIAlign_cuda.cu:345:1410: warning: ‘T at::Tensor::data() const [with T = int]’ is deprecated [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:322:1: note: declared here T data() const { ^ ~~ /jetson-inference/torchvision/torchvision/csrc/cuda/PSROIAlign_cuda.cu: In lambda function: /jetson-inference/torchvision/torchvision/csrc/cuda/PSROIAlign_cuda.cu:415:344: warning: ‘T at::Tensor::data() const [with T = double]’ is deprecated [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:322:1: note: declared here T data() const { ^ ~~ /jetson-inference/torchvision/torchvision/csrc/cuda/PSROIAlign_cuda.cu:415:377: warning: ‘T at::Tensor::data() const [with T = int]’ is deprecated [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:322:1: note: declared here T data() const { ^ ~~ /jetson-inference/torchvision/torchvision/csrc/cuda/PSROIAlign_cuda.cu:415:517: warning: ‘T at::Tensor::data() const [with T = double]’ is deprecated [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:322:1: note: declared here T data() const { ^ ~~ /jetson-inference/torchvision/torchvision/csrc/cuda/PSROIAlign_cuda.cu:415:557: warning: ‘T at::Tensor::data() const [with T = double]’ is deprecated [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:322:1: note: declared here T data() const { ^ ~~ /jetson-inference/torchvision/torchvision/csrc/cuda/PSROIAlign_cuda.cu: In lambda function: /jetson-inference/torchvision/torchvision/csrc/cuda/PSROIAlign_cuda.cu:415:788: warning: ‘T at::Tensor::data() const [with T = float]’ is deprecated [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:322:1: note: declared here T data() const { ^ ~~ /jetson-inference/torchvision/torchvision/csrc/cuda/PSROIAlign_cuda.cu:415:821: warning: ‘T at::Tensor::data() const [with T = int]’ is deprecated [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:322:1: note: declared here T data() const { ^ ~~ /jetson-inference/torchvision/torchvision/csrc/cuda/PSROIAlign_cuda.cu:415:960: warning: ‘T at::Tensor::data() const [with T = float]’ is deprecated [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:322:1: note: declared here T data() const { ^ ~~ 
/jetson-inference/torchvision/torchvision/csrc/cuda/PSROIAlign_cuda.cu:415:999: warning: ‘T at::Tensor::data() const [with T = float]’ is deprecated [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:322:1: note: declared here T data() const { ^ ~~ /jetson-inference/torchvision/torchvision/csrc/cuda/PSROIAlign_cuda.cu: In lambda function: /jetson-inference/torchvision/torchvision/csrc/cuda/PSROIAlign_cuda.cu:415:1241: warning: ‘T at::Tensor::data() const [with T = c10::Half]’ is deprecated [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:322:1: note: declared here T data() const { ^ ~~ /jetson-inference/torchvision/torchvision/csrc/cuda/PSROIAlign_cuda.cu:415:1274: warning: ‘T at::Tensor::data() const [with T = int]’ is deprecated [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:322:1: note: declared here T data() const { ^ ~~ /jetson-inference/torchvision/torchvision/csrc/cuda/PSROIAlign_cuda.cu:415:1417: warning: ‘T at::Tensor::data() const [with T = c10::Half]’ is deprecated [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:322:1: note: declared here T data() const { ^ ~~ /jetson-inference/torchvision/torchvision/csrc/cuda/PSROIAlign_cuda.cu:415:1460: warning: ‘T at::Tensor::data() const [with T = c10::Half]’ is deprecated [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:322:1: note: declared here T data() const { ^ ~~ aarch64-linux-gnu-g++ -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 build/temp.linux-aarch64-3.6/jetson-inference/torchvision/torchvision/csrc/vision.o build/temp.linux-aarch64-3.6/jetson-inference/torchvision/torchvision/csrc/cpu/DeformConv_cpu.o build/temp.linux-aarch64-3.6/jetson-inference/torchvision/torchvision/csrc/cpu/PSROIPool_cpu.o build/temp.linux-aarch64-3.6/jetson-inference/torchvision/torchvision/csrc/cpu/ROIAlign_cpu.o build/temp.linux-aarch64-3.6/jetson-inference/torchvision/torchvision/csrc/cpu/PSROIAlign_cpu.o build/temp.linux-aarch64-3.6/jetson-inference/torchvision/torchvision/csrc/cpu/ROIPool_cpu.o build/temp.linux-aarch64-3.6/jetson-inference/torchvision/torchvision/csrc/cpu/nms_cpu.o build/temp.linux-aarch64-3.6/jetson-inference/torchvision/torchvision/csrc/cuda/ROIPool_cuda.o build/temp.linux-aarch64-3.6/jetson-inference/torchvision/torchvision/csrc/cuda/DeformConv_cuda.o build/temp.linux-aarch64-3.6/jetson-inference/torchvision/torchvision/csrc/cuda/PSROIPool_cuda.o build/temp.linux-aarch64-3.6/jetson-inference/torchvision/torchvision/csrc/cuda/nms_cuda.o build/temp.linux-aarch64-3.6/jetson-inference/torchvision/torchvision/csrc/cuda/ROIAlign_cuda.o build/temp.linux-aarch64-3.6/jetson-inference/torchvision/torchvision/csrc/cuda/PSROIAlign_cuda.o -L/usr/local/cuda/lib64 -lcudart -o build/lib.linux-aarch64-3.6/torchvision/_C.so building 'torchvision.video_reader' extension creating build/temp.linux-aarch64-3.6/jetson-inference/torchvision/torchvision/csrc/cpu/video_reader 
aarch64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/jetson-inference/torchvision/torchvision/csrc/cpu/video_reader -I/usr/include -I/jetson-inference/torchvision/torchvision/csrc -I/home/markw/.local/lib/python3.6/site-packages/torch/include -I/home/markw/.local/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/markw/.local/lib/python3.6/site-packages/torch/include/TH -I/home/markw/.local/lib/python3.6/site-packages/torch/include/THC -I/usr/include/python3.6m -c /jetson-inference/torchvision/torchvision/csrc/cpu/video_reader/VideoReader.cpp -o build/temp.linux-aarch64-3.6/jetson-inference/torchvision/torchvision/csrc/cpu/video_reader/VideoReader.o -std=c++14 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=video_reader -D_GLIBCXX_USE_CXX11_ABI=1 In file included from /jetson-inference/torchvision/torchvision/csrc/cpu/video_reader/FfmpegDecoder.h:6:0, from /jetson-inference/torchvision/torchvision/csrc/cpu/video_reader/VideoReader.cpp:6: /jetson-inference/torchvision/torchvision/csrc/cpu/video_reader/FfmpegHeaders.h:4:10: fatal error: libavcodec/avcodec.h: No such file or directory

#include <libavcodec/avcodec.h>

      ^~~~~~~~~~~~~~~~~~~~~~

compilation terminated. error: command 'aarch64-linux-gnu-gcc' failed with exit status 1

dusty-nv commented 4 years ago

/jetson-inference/torchvision/torchvision/csrc/cpu/video_reader/FfmpegHeaders.h:4:10: fatal error: libavcodec/avcodec.h: No such file or directory

#include <libavcodec/avcodec.h>

I see, you must have the ffmpeg executable installed on your Jetson, but not the libraries.

It looks like if torchvision finds the ffmpeg executable, it assumes the libraries are also installed: https://github.com/pytorch/vision/blob/cca0c77a9ac5aa782b0811d850f246d73b0b4a1b/setup.py#L130

Try running sudo apt-get install libavcodec-dev first.
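A quick way to see the mismatch on the board (standard Ubuntu package tools assumed):

$ which ffmpeg                  # setup.py finds this executable and enables the video_reader extension
$ dpkg -s libavcodec-dev        # ...but building that extension also needs the FFmpeg dev headers
$ sudo apt-get install libavcodec-dev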

flurpo commented 4 years ago

Thanks so much dusty-nv....

I ran the install above and ran clean... still getting the same kind of error below (now on libavformat/avformat.h rather than libavcodec). I'm looking into the tag you gave me above. Best,

                                                                                                    ^

/home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:322:1: note: declared here T data() const { ^ ~~ /jetson-inference/torchvision/torchvision/csrc/cuda/PSROIAlign_cuda.cu:415:999: warning: ‘T at::Tensor::data() const [with T = float]’ is deprecated [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:322:1: note: declared here T data() const { ^ ~~ /jetson-inference/torchvision/torchvision/csrc/cuda/PSROIAlign_cuda.cu: In lambda function: /jetson-inference/torchvision/torchvision/csrc/cuda/PSROIAlign_cuda.cu:415:1241: warning: ‘T at::Tensor::data() const [with T = c10::Half]’ is deprecated [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:322:1: note: declared here T data() const { ^ ~~ /jetson-inference/torchvision/torchvision/csrc/cuda/PSROIAlign_cuda.cu:415:1274: warning: ‘T at::Tensor::data() const [with T = int]’ is deprecated [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:322:1: note: declared here T data() const { ^ ~~ /jetson-inference/torchvision/torchvision/csrc/cuda/PSROIAlign_cuda.cu:415:1417: warning: ‘T at::Tensor::data() const [with T = c10::Half]’ is deprecated [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:322:1: note: declared here T data() const { ^ ~~ /jetson-inference/torchvision/torchvision/csrc/cuda/PSROIAlign_cuda.cu:415:1460: warning: ‘T at::Tensor::data() const [with T = c10::Half]’ is deprecated [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/markw/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:322:1: note: declared here T * data() const { ^ ~~ aarch64-linux-gnu-g++ -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 build/temp.linux-aarch64-3.6/jetson-inference/torchvision/torchvision/csrc/vision.o build/temp.linux-aarch64-3.6/jetson-inference/torchvision/torchvision/csrc/cpu/DeformConv_cpu.o build/temp.linux-aarch64-3.6/jetson-inference/torchvision/torchvision/csrc/cpu/PSROIPool_cpu.o build/temp.linux-aarch64-3.6/jetson-inference/torchvision/torchvision/csrc/cpu/ROIAlign_cpu.o build/temp.linux-aarch64-3.6/jetson-inference/torchvision/torchvision/csrc/cpu/PSROIAlign_cpu.o build/temp.linux-aarch64-3.6/jetson-inference/torchvision/torchvision/csrc/cpu/ROIPool_cpu.o build/temp.linux-aarch64-3.6/jetson-inference/torchvision/torchvision/csrc/cpu/nms_cpu.o build/temp.linux-aarch64-3.6/jetson-inference/torchvision/torchvision/csrc/cuda/ROIPool_cuda.o build/temp.linux-aarch64-3.6/jetson-inference/torchvision/torchvision/csrc/cuda/DeformConv_cuda.o build/temp.linux-aarch64-3.6/jetson-inference/torchvision/torchvision/csrc/cuda/PSROIPool_cuda.o build/temp.linux-aarch64-3.6/jetson-inference/torchvision/torchvision/csrc/cuda/nms_cuda.o build/temp.linux-aarch64-3.6/jetson-inference/torchvision/torchvision/csrc/cuda/ROIAlign_cuda.o build/temp.linux-aarch64-3.6/jetson-inference/torchvision/torchvision/csrc/cuda/PSROIAlign_cuda.o -L/usr/local/cuda/lib64 -lcudart -o build/lib.linux-aarch64-3.6/torchvision/_C.so building 
'torchvision.video_reader' extension creating build/temp.linux-aarch64-3.6/jetson-inference/torchvision/torchvision/csrc/cpu/video_reader aarch64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/jetson-inference/torchvision/torchvision/csrc/cpu/video_reader -I/usr/include -I/jetson-inference/torchvision/torchvision/csrc -I/home/markw/.local/lib/python3.6/site-packages/torch/include -I/home/markw/.local/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/markw/.local/lib/python3.6/site-packages/torch/include/TH -I/home/markw/.local/lib/python3.6/site-packages/torch/include/THC -I/usr/include/python3.6m -c /jetson-inference/torchvision/torchvision/csrc/cpu/video_reader/VideoReader.cpp -o build/temp.linux-aarch64-3.6/jetson-inference/torchvision/torchvision/csrc/cpu/video_reader/VideoReader.o -std=c++14 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=video_reader -D_GLIBCXX_USE_CXX11_ABI=1 In file included from /jetson-inference/torchvision/torchvision/csrc/cpu/video_reader/FfmpegDecoder.h:6:0, from /jetson-inference/torchvision/torchvision/csrc/cpu/video_reader/VideoReader.cpp:6: /jetson-inference/torchvision/torchvision/csrc/cpu/video_reader/FfmpegHeaders.h:5:10: fatal error: libavformat/avformat.h: No such file or directory

#include <libavformat/avformat.h>

      ^~~~~~~~~~~~~~~~~~~~~~~~

compilation terminated. error: command 'aarch64-linux-gnu-gcc' failed with exit status 1
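Looking at the error, libavformat is the next missing header, so presumably each FFmpeg component the video_reader extension uses needs its own -dev package; if I keep going down this path, something like the following would be needed too (not tried yet):

$ sudo apt-get install libavformat-dev libswscale-dev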

flurpo commented 4 years ago

1000 thanks, dusty-nv... not sure why, but I'm successfully importing torchvision with python3... I believe problem solved!

flurpo commented 4 years ago

Ah... looks like torchvision is importing successfully, but the torchvision sub-calls are not working. I'll continue with the FFmpeg libraries.

flurpo commented 4 years ago

dusty-nv, thanks so much... it's working fine... I'm able to get to all the libraries, and everything is working well.

I simply removed ffmpeg from my system, ran the clean, and the torchvision installation worked perfectly as described in your string.
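For anyone hitting the same thing, the sequence that worked was roughly as follows (reconstructed from memory, so treat it as a sketch rather than the exact commands):

$ sudo apt-get remove ffmpeg       # without the ffmpeg executable, setup.py skips the video_reader extension
$ cd torchvision
$ sudo python3 setup.py clean
$ sudo python3 setup.py install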

I hope this will help someone else. Maybe bypass the ffmpeg loop in that installation call?

Best wishes, MW