dusty-nv / jetson-inference

Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
https://developer.nvidia.com/embedded/twodaystoademo
MIT License

Building it from docker #735

Closed pradhunmya01 closed 1 year ago

pradhunmya01 commented 4 years ago

Hello, I am facing an issue while building this repo from source inside Docker. I got this error:

```
CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
CUDA_nppicc_LIBRARY (ADVANCED)
    linked by target "jetson-utils" in directory /jetson-inference/utils

-- Configuring incomplete, errors occurred!
See also "/jetson-inference/build/CMakeFiles/CMakeOutput.log".
See also "/jetson-inference/build/CMakeFiles/CMakeError.log"
```

The base image for my Dockerfile is: https://ngc.nvidia.com/catalog/containers/nvidia:l4t-pytorch

Any suggestions or advice?

Thank you in advance

dusty-nv commented 4 years ago

Hi @pradhunmya01, did you set your default docker-runtime to nvidia as shown here?

https://github.com/dusty-nv/jetson-containers#docker-default-runtime
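For reference, the linked instructions boil down to adding a `"default-runtime": "nvidia"` entry to `/etc/docker/daemon.json` and restarting docker; a minimal sketch (see the link above for the authoritative version):

```bash
# Set nvidia as the default docker runtime (per the jetson-containers docs),
# then restart the docker service so the change takes effect.
sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}
EOF
sudo systemctl restart docker
```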

pradhunmya01 commented 4 years ago

Thank you for your quick reply @dusty-nv. I hadn't set the default docker-runtime to "nvidia" earlier; after setting it, the build works and compiles further, but at one point it stops with this error:

```
[ 65%] Building CXX object CMakeFiles/jetson-inference.dir/calibration/randInt8Calibrator.cpp.o
[ 67%] Building CXX object CMakeFiles/jetson-inference.dir/c/segNet.cpp.o
[ 67%] Building CXX object CMakeFiles/jetson-inference.dir/c/tensorNet.cpp.o
[ 69%] Building CXX object CMakeFiles/jetson-inference.dir/c/imageNet.cpp.o
[ 69%] Building CXX object CMakeFiles/jetson-inference.dir/c/detectNet.cpp.o
[ 70%] Building CXX object CMakeFiles/jetson-inference.dir/plugins/FlattenConcat.cpp.o
[ 71%] Linking CXX shared library aarch64/lib/libjetson-inference.so
/usr/bin/ld: cannot find -lnvcaffe_parser
collect2: error: ld returned 1 exit status
CMakeFiles/jetson-inference.dir/build.make:257: recipe for target 'aarch64/lib/libjetson-inference.so' failed
make[2]: *** [aarch64/lib/libjetson-inference.so] Error 1
CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/jetson-inference.dir/all' failed
make[1]: *** [CMakeFiles/jetson-inference.dir/all] Error 2
Makefile:129: recipe for target 'all' failed
make: *** [all] Error 2
```

Is this related to the base image?

dusty-nv commented 4 years ago

It's odd that you would be getting the -lnvcaffe_parser error from the docker build, because when building the docker container, I remove that from the CMakeLists with sed in the Dockerfile:

https://github.com/dusty-nv/jetson-inference/blob/b689bb353a9f7437f3899068b6b33af6608d8ae9/Dockerfile#L77

However, you could try manually replacing nvcaffe_parser with nvparsers in your CMakeLists here:

https://github.com/dusty-nv/jetson-inference/blob/3ff544d81891a92050f445c457459db098f4ea0a/CMakeLists.txt#L169
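As a concrete sketch, the same substitution the Dockerfile performs can be applied manually (assuming you cloned the repo into the current directory):

```bash
# Swap the legacy nvcaffe_parser link library for nvparsers,
# which newer TensorRT releases ship instead (libnvparsers.so).
cd jetson-inference
sed -i 's/nvcaffe_parser/nvparsers/g' CMakeLists.txt
```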

pradhunmya01 commented 4 years ago

Thank you for the help @dusty-nv, it works absolutely fine now.

Is there any way we can download all the models from the Dockerfile itself, instead of manually running ./download-models.sh?

When cmake runs during the docker build, the models are not downloaded automatically.

cognitiveRobot commented 4 years ago

@pradhunmya01 can you please share your Dockerfile? Thanks.

dusty-nv commented 4 years ago

> Is there any way we can download all the models from the Dockerfile itself, instead of manually running ./download-models.sh?
>
> When cmake runs during the docker build, the models are not downloaded automatically.

The jetson-inference/data/networks directory is mounted from the host so that when the TensorRT engine is generated the first time you use the model, that TensorRT engine gets saved and is not lost once you shut down the container.

The models get downloaded the first time you run the docker/run.sh script.
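For context, the persistence described above comes from a bind mount; a rough sketch of what docker/run.sh does (the image tag and host path here are illustrative, and the real script adds more mounts and options):

```bash
# Bind-mount the host's data directory into the container so that
# downloaded models and cached TensorRT engines survive container restarts.
sudo docker run --runtime nvidia -it --rm \
    --volume ~/jetson-inference/data:/jetson-inference/data \
    dustynv/jetson-inference:r32.4.3
```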

pradhunmya01 commented 4 years ago

Hello @dusty-nv, after running docker/run.sh I got this error:

```
head: cannot open '/etc/nv_tegra_release' for reading: No such file or directory
reading L4T version from "dpkg-query --show nvidia-l4t-core"
dpkg-query: no packages found matching nvidia-l4t-core
L4T BSP Version: L4T R.
docker/tag.sh: line 18: [: -eq: unary operator expected
cannot find compatible jetson-inference docker container for L4T R.
please upgrade to the latest JetPack, or build jetson-inference natively from source
```

Which base image should I use for this?

Right now I am using nvcr.io/nvidia/l4t-ml:r32.4.3-py3.

pradhunmya01 commented 4 years ago

> @pradhunmya01 can you please share your Dockerfile? Thanks.

Sure @cognitiveRobot, here it is:

```dockerfile
FROM nvcr.io/nvidia/l4t-ml:r32.4.3-py3

COPY rootrequirements.txt ./

RUN apt-get update && \
    apt-get install -y --fix-missing make g++ && \
    apt install -y --fix-missing python3-pip libhdf5-serial-dev hdf5-tools python3-h5py && \
    apt install -y --fix-missing libjpeg-dev libfreetype6-dev pkg-config libpng-dev && \
    apt install -y --fix-missing vim v4l-utils && \
    apt install -y --fix-missing libatlas-base-dev gfortran

RUN pip3 install --upgrade pip
RUN pip install --upgrade setuptools wheel
RUN pip install -r rootrequirements.txt

# Compiling and installing Jetson Inference
RUN apt-get update && \
    apt-get -y --fix-missing install git cmake libpython3-dev python3-numpy

RUN git clone --recursive https://github.com/dusty-nv/jetson-inference

RUN cd jetson-inference && \
    sed -i 's/nvcaffe_parser/nvparsers/g' CMakeLists.txt && \
    mkdir build && \
    cd build && \
    cmake ../ && \
    make -j$(nproc) && \
    make install && \
    ldconfig && \
    /bin/bash -O extglob -c "cd /jetson-inference/build; rm -rf -v !(aarch64|download-models.*)" && \
    rm -rf /var/lib/apt/lists/*
```

kk52099 commented 3 years ago

> Hi @pradhunmya01, did you set your default docker-runtime to nvidia as shown here?
>
> https://github.com/dusty-nv/jetson-containers#docker-default-runtime

@dusty-nv I set the default docker-runtime to nvidia, but the error still happened.

dusty-nv commented 3 years ago

> @dusty-nv I set the default docker-runtime to nvidia, but the error still happened.

Hi @kk52099, did you reboot your system or restart the docker service after you made the change?

Does docker show the default runtime as nvidia?

```
$ sudo docker info | grep Default
 Default Runtime: nvidia
```

bastianhjaeger commented 2 years ago

@dusty-nv I am facing the same issue.

```
$ sudo docker info | grep Default
 Default Runtime: nvidia
```

but still

```
#16 64.55 -- Copying examples/detectnet.py -> detectnet-camera.py
#16 64.55 -- Copying examples/segnet.py -> segnet-console.py
#16 64.55 -- Copying examples/segnet.py -> segnet-camera.py
#16 64.58 CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
#16 64.58 Please set them or make sure they are set and tested correctly in the CMake files:
#16 64.58 CUDA_nppicc_LIBRARY (ADVANCED)
#16 64.58     linked by target "jetson-utils" in directory /jetson_inference_ros/dependencies/jetson-inference/utils
#16 64.58
#16 64.58 -- Configuring incomplete, errors occurred!
```

any other suggestions?

dusty-nv commented 2 years ago

Do you have the CUDA toolkit installed on your device under /usr/local/cuda? It should have been installed with JetPack.

You should see libnppicc.so under /usr/local/cuda/lib64.
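As a quick check (a minimal sketch, nothing jetson-inference-specific):

```bash
# Verify the CUDA toolkit location and that the NPP color-conversion
# library (the one behind CUDA_nppicc_LIBRARY) is present.
ls /usr/local/cuda
ls /usr/local/cuda/lib64/libnppicc.so*
```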

bastianhjaeger commented 2 years ago

Thanks for the reply.

No, I do not see libnppicc.so. What is JetPack? And when/where should I have installed this?

dusty-nv commented 2 years ago

It should already be installed if you are using the SD card image for Nano or NX.

If you used SDK Manager, it should have installed it for you after flashing. Can you try running sudo apt-get install nvidia-jetpack?

markusachtelik commented 2 years ago

@dusty-nv I'm working on this with @bastianhjaeger.
JetPack is installed, but under /usr/local/cuda-10.2. libnppicc.so is there, and the runtime seems to be fine too:

```
$ sudo docker info | grep Default
 Default Runtime: nvidia
```

We're using this base image: dustynv/ros:foxy-pytorch-l4t-r32.5.0. Any more good ideas?