OE4T / meta-tegra

BSP layer for NVIDIA Jetson platforms, based on L4T
MIT License

Jetson and docker #230

Closed: triblex closed this issue 4 years ago

triblex commented 4 years ago

I've been successful in using Docker on my Jetson Nano production module, but it appears that the Nvidia Container Runtime allows some GPU access that is otherwise unavailable. I can see the Nvidia Container Runtime is available in the binaries from the Nvidia SDK Manager, but how do I get the binary installed into the image? Is there a recipe I can bitbake, or is this not officially available from the meta-tegra layer?

Thanks!

madisongh commented 4 years ago

There are no recipes currently. I see the various container-related deb packages in the SDK Manager, but I don't know if it's just a matter of unpacking them and installing the relevant files, or whether some tweaks might be needed to deal with environment differences (e.g., Debian/Ubuntu rootfs layout vs. the plainer layout we use with OE/Yocto builds). If someone were willing to take a stab at putting some recipes together for this, that would be great.

triblex commented 4 years ago

I have (successfully) installed Nvidia Container Runtime manually by extracting and copying the necessary files from the following .deb packages from the Nvidia SDK-manager:

Then I've added a JSON file at /etc/daemon.json with the following content:

{
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
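
Note that the Docker daemon has to be restarted to pick up the new runtime definition (a minimal sketch, assuming a systemd-based image; use your init system's equivalent otherwise):

systemctl restart docker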

After which the docker info command shows

...
Runtimes: nvidia runc
...

Which indicates the Nvidia Container Runtime is available. Running a container with --runtime=nvidia works without problems. However, when I try to run CUDA samples like the nbody example (based on this Nvidia base image with tag 32.2, as I am running a warrior build which supports JetPack 4.2.1), it fails with the error Error: only 0 Devices available, 1 requested. Exiting., which indicates that the container doesn't have access to GPU acceleration via the Nvidia Container Runtime.

Running deviceQuery returns the error:

cudaGetDeviceCount returned 35
-> CUDA driver version is insufficient for CUDA runtime version
Result = FAIL

Which further agrees with the suspicion that the Nvidia Container Runtime isn't working as intended.

When I run deviceQuery outside of Docker, I get CUDA Driver Version / Runtime Version 10.0 / 10.0

I've tried a lot of ways to solve this, but I seem to be pretty stuck at this point.

Edit: It's working on the official Nvidia SDK Manager-flashed image, so I know it's doable to get it working via Yocto.

triblex commented 4 years ago

I've found .csv files in /etc/nvidia-container-runtime/host-files-for-container.d/ that list files to be merged into the container when running a Docker image with nvidia as the runtime. I've tried providing all of the files stated in the list, but no matter what I've tried, an error occurs stating nvidia-container-cli: mount error: (null).

More on this in this post on the Nvidia DevTalk forum.

triblex commented 4 years ago

I've managed to get CUDA-ready GPU acceleration in the container... FINALLY! A number of lib/dir/sym entries need to be merged into the container. This can be done by creating .csv files in the directory /etc/nvidia-container-runtime/host-files-for-container.d/ (though at this point I am not sure exactly which entries are needed). Once the needed files have been listed, the Docker image can be run with the nvidia runtime and the access should be available in the container.
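
For reference, each line in these .csv files is just a type and a path; the types used are dev, dir, lib and sym. A minimal illustrative example (the entries below are only a sketch; the paths have to match what is actually in your rootfs):

dev, /dev/nvhost-ctrl-gpu
dir, /usr/local/cuda-10.0
lib, /usr/lib/libcuda.so.1.1
sym, /usr/lib/libcuda.so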

madisongh commented 4 years ago

Great! Any chance you could turn this into a recipe?

paroque28 commented 4 years ago

Hi @triblex, I need to get containers running with CUDA support and turn that into a recipe. How did you manage to make it work? Is it necessary to compile with GCC 7?

triblex commented 4 years ago

Great! Any chance you could turn this into a recipe?

Sure, I'll take a look at it.

I need to get containers running with CUDA support and turn it into a recipe. How did you manage to make it work?

You need some .csv files that merge the listed files into a container running with the Nvidia Container Runtime. You can find them in the default Nvidia SDK-manager Ubuntu rootfs under /etc/nvidia-container-runtime/host-files-for-container.d/. You can look at them and then make your own .csv files, or you can copy-paste the default ones, but you need to change a lot of the paths, as they are not the same as in a standard Yocto build rootfs.

Is it needed to compile with GCC7?

You can read about this in the wiki section.

paroque28 commented 4 years ago

@madisongh do you know which license one should use for these recipes? LICENSE or COPYING from https://github.com/NVIDIA/libnvidia-container.git?

madisongh commented 4 years ago

Looks like multiple licenses are potentially involved. See what it says in the README.md and NOTICE files - the GPL license in COPYING applies if you link against libelf.

paroque28 commented 4 years ago

So, I got the libnvidia-container recipe working. I will now do the recipe for nvidia-container-runtime.

mclayton7 commented 4 years ago

@paroque28 thanks for working on this, I'm interested in it as well. Have you made this recipe available on GitHub, and will libnvidia-container/nvidia-container-runtime replace something like meta-virtualization from OpenEmbedded? Thanks!

triblex commented 4 years ago

@paroque28 Will you make these recipes available? If yes, then I won't start spending time on making them from scratch myself ;)

paroque28 commented 4 years ago

@mclayton7 @triblex https://github.com/madisongh/meta-tegra/pull/243

paroque28 commented 4 years ago

I got the nvidia-container-toolkit recipe building:

rpm -ql tmp-glibc/work/aarch64-wrs-linux/nvidia-container-toolkit/1.0.5-r0/deploy-rpms/aarch64/nvidia-container-toolkit-1.0.5-r0.aarch64.rpm
error: cannot open Packages database in /var/lib/rpm
error: cannot open Packages database in /var/lib/rpm
/etc
/etc/nvidia-container-runtime
/etc/nvidia-container-runtime/config.toml
/usr
/usr/bin
/usr/bin/nvidia-container-toolkit
/usr/libexec
/usr/libexec/oci
/usr/libexec/oci/hooks.d
/usr/libexec/oci/hooks.d/oci-nvidia-hook
/usr/share
/usr/share/licenses
/usr/share/licenses/nvidia-container-toolkit-1.0.5
/usr/share/licenses/nvidia-container-toolkit-1.0.5/LICENSE
/usr/share/oci
/usr/share/oci/hooks.d
/usr/share/oci/hooks.d/oci-nvidia-hook.json

I will try it tomorrow and upload it if it works.

triblex commented 4 years ago

@paroque28 I can see you are building the recipes from source, which I am 100% for. But using the pre-configured, pre-built nvidia-container Debian packages for the Jetson Nano that are downloaded alongside all the CUDA stuff is also an option. That is what the CUDA recipes are doing (correct me if I'm wrong). These packages and their versions are pretty much guaranteed to work with one another.

I made some quick recipes that use these pre-built Debian packages from the NVIDIA SDK Manager, and they work just fine. But if you can get it all to work from source, then by all means, I would actually prefer that. It seems like you've almost got it done; I just wanted to mention this.
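
For what it's worth, a rough sketch of what such a deb-repacking recipe could look like (the file name, version, license details and installed paths below are placeholders, not the real package contents):

SUMMARY = "NVIDIA container toolkit, repacked from the SDK Manager .deb"
LICENSE = "Proprietary"
LIC_FILES_CHKSUM = "file://usr/share/doc/nvidia-container-toolkit/copyright;md5=<fill-in>"

# Placeholder name: drop the .deb downloaded by the SDK Manager next to the recipe
SRC_URI = "file://nvidia-container-toolkit_1.0.5-1_arm64.deb;subdir=${BP}"

do_install() {
    # BitBake's unpack step extracts the .deb payload into ${S}
    install -d ${D}${bindir} ${D}${sysconfdir}/nvidia-container-runtime
    install -m 0755 ${S}/usr/bin/nvidia-container-toolkit ${D}${bindir}/
    install -m 0644 ${S}/etc/nvidia-container-runtime/config.toml ${D}${sysconfdir}/nvidia-container-runtime/
}

INSANE_SKIP_${PN} += "already-stripped ldflags"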

paroque28 commented 4 years ago

Thanks, @triblex I actually based my work on the original .deb files provided by Nvidia so I didn't miss any file.

What Docker image did you use for testing? The Nvidia ones at https://hub.docker.com/r/nvidia/cuda are only for amd64, and I am using aarch64.

paroque28 commented 4 years ago

Hi all, I am getting this error:

docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]].

I am not sure if I installed the CUDA driver correctly.

Can you confirm that the device driver is cuda-driver? I also added cuda-toolkit just in case, but I still cannot see the nvidia-smi command.

In fact, if I run cat /proc/driver/nvidia/version I get:

cat: /proc/driver/nvidia/version: No such file or directory

triblex commented 4 years ago

@paroque28

What docker image did you use for testing?

I am using this docker image to test with. It's an official base L4T docker image from NVIDIA.

I am getting error

Try running deviceQuery to see if CUDA is working on the host. You can get the sample by bitbaking cuda-samples.

... but still I cannot see the command nvidia-smi

Unfortunately, the nvidia-smi command isn't supported on Tegra-based platforms at this time.

docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]].

I had the same issue. The debug file showed me that /usr/bin/nvidia-container-runtime-hook was missing, but since nvidia-container-toolkit replaces it, you can create a symbolic link. You can do something like this in the recipe under do_install: ln -s nvidia-container-toolkit ${D}${execdir}/nvidia-container-runtime-hook
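
In recipe context that would be something like the following (just a sketch; I'm assuming the toolkit binary lands in ${bindir}, i.e. /usr/bin, since that is where the hook is expected; adjust if your recipe installs it elsewhere):

do_install_append() {
    # Docker's nvidia runtime still looks for the old hook name, so point it
    # at the nvidia-container-toolkit binary installed by this recipe
    ln -sf nvidia-container-toolkit ${D}${bindir}/nvidia-container-runtime-hook
}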

That should fix your problem, otherwise let me know.

paroque28 commented 4 years ago

Hi @triblex ,

Thanks for your reply, it was very helpful! I got almost everything working; the only thing I need now is the host-files-for-container.d files. This folder is empty for me at the moment. Can you please do an ls so that I know which files to include?

Thanks, Pablo

paroque28 commented 4 years ago

I cannot seem to find l4t.csv; where did you get this csv?

I am generating these files from scratch, since the final path might change depending on the Yocto installation.

The thing is that I only have four of them:

If you could post the contents of l4t.csv, that would be great.

So far I have:


tmp/etc/nvidia-container-runtime/host-files-for-container.d/cuda.csv:dir, /usr/local/cuda-10.0
tmp/etc/nvidia-container-runtime/host-files-for-container.d/cuda.csv:sym, /usr/loca/cuda
tmp/etc/nvidia-container-runtime/host-files-for-container.d/cudnn.csv:lib, /usr/lib/aarch64-linux-gnu/libcudnn.so.7.6.3
tmp/etc/nvidia-container-runtime/host-files-for-container.d/cudnn.csv:sym, /usr/lib/aarch64-linux-gnu/libcudnn.so.7
tmp/etc/nvidia-container-runtime/host-files-for-container.d/tensorrt.csv:lib, /usr/lib/aarch64-linux-gnu/libnvinfer.so.6.0.1
tmp/etc/nvidia-container-runtime/host-files-for-container.d/tensorrt.csv:lib, /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.6.0.1
tmp/etc/nvidia-container-runtime/host-files-for-container.d/tensorrt.csv:lib, /usr/lib/aarch64-linux-gnu/libnvonnxparser.so.6.0.1
tmp/etc/nvidia-container-runtime/host-files-for-container.d/tensorrt.csv:lib, /usr/lib/aarch64-linux-gnu/libnvonnxparser_runtime.so.6.0.1
tmp/etc/nvidia-container-runtime/host-files-for-container.d/tensorrt.csv:lib, /usr/lib/aarch64-linux-gnu/libnvparsers.so.6.0.1
tmp/etc/nvidia-container-runtime/host-files-for-container.d/tensorrt.csv:sym, /usr/lib/aarch64-linux-gnu/libnvcaffe_parser.so.6
tmp/etc/nvidia-container-runtime/host-files-for-container.d/tensorrt.csv:sym, /usr/lib/aarch64-linux-gnu/libnvcaffe_parser.so.6.0.1
tmp/etc/nvidia-container-runtime/host-files-for-container.d/tensorrt.csv:sym, /usr/lib/aarch64-linux-gnu/libnvinfer.so.6
tmp/etc/nvidia-container-runtime/host-files-for-container.d/tensorrt.csv:sym, /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.6
tmp/etc/nvidia-container-runtime/host-files-for-container.d/tensorrt.csv:sym, /usr/lib/aarch64-linux-gnu/libnvonnxparser.so.6
tmp/etc/nvidia-container-runtime/host-files-for-container.d/tensorrt.csv:sym, /usr/lib/aarch64-linux-gnu/libnvonnxparser_runtime.so.6
tmp/etc/nvidia-container-runtime/host-files-for-container.d/tensorrt.csv:sym, /usr/lib/aarch64-linux-gnu/libnvparsers.so.6
tmp/etc/nvidia-container-runtime/host-files-for-container.d/tensorrt.csv:dir, /usr/src/tensorrt
tmp/etc/nvidia-container-runtime/host-files-for-container.d/visionworks.csv:sym, /usr/lib/libvisionworks_sfm.so
tmp/etc/nvidia-container-runtime/host-files-for-container.d/visionworks.csv:sym, /usr/lib/libvisionworks_sfm.so.0.90
tmp/etc/nvidia-container-runtime/host-files-for-container.d/visionworks.csv:lib, /usr/lib/libvisionworks_sfm.so.0.90.4
tmp/etc/nvidia-container-runtime/host-files-for-container.d/visionworks.csv:lib, /usr/lib/libvisionworks.so
tmp/etc/nvidia-container-runtime/host-files-for-container.d/visionworks.csv:sym, /usr/lib/libvisionworks_tracking.so
tmp/etc/nvidia-container-runtime/host-files-for-container.d/visionworks.csv:sym, /usr/lib/libvisionworks_tracking.so.0.88
tmp/etc/nvidia-container-runtime/host-files-for-container.d/visionworks.csv:lib, /usr/lib/libvisionworks_tracking.so.0.88.2
madisongh commented 4 years ago

@paroque28 If you look in the L4T BSP, in nv_tegra/config.tbz2, you'll see an etc/nvidia-container-runtime/host-files-for-container.d/l4t.csv. Is that the file you're looking for?

triblex commented 4 years ago

@paroque28 Yes, it IS the file that @madisongh mentioned. I took this file and modified it.

This is the l4t.csv file I got to work with nvidia-container-runtime:

dev, /dev/fb0
dev, /dev/fb1
dev, /dev/nvhost-as-gpu
dev, /dev/nvhost-ctrl
dev, /dev/nvhost-ctrl-gpu
dev, /dev/nvhost-dbg-gpu
dev, /dev/nvhost-gpu
dev, /dev/nvhost-nvdec
dev, /dev/nvhost-nvdec1
dev, /dev/nvhost-prof-gpu
dev, /dev/nvhost-vic
dev, /dev/nvhost-ctrl-nvdla0
dev, /dev/nvhost-ctrl-nvdla1
dev, /dev/nvhost-nvdla0
dev, /dev/nvhost-nvdla1
dev, /dev/nvidiactl
dev, /dev/nvmap
dev, /dev_dc_0
dev, /dev_dc_1
dev, /dev_dc_ctrl
dev, /dev/nvhost-msenc
dev, /dev/nvhost-nvenc1
dev, /dev/nvhost-nvjpg
dir, /lib/firmware21x
lib, /usr/lib/libv4l2.so.0
lib, /usr/lib/weston/desktop-shell.so
lib, /usr/lib/weston/drm-backend.so
lib, /usr/lib/weston/EGLWLInputEventExample
lib, /usr/lib/weston/EGLWLMockNavigation
lib, /usr/lib/weston/gl-renderer.so
lib, /usr/lib/weston/hmi-controller.so
lib, /usr/lib/weston/ivi-controller.so
lib, /usr/lib/weston/ivi-shell.so
lib, /usr/lib/weston/LayerManagerControl
lib, /usr/lib/weston/libilmClient.so.2.2.0
lib, /usr/lib/weston/libilmCommon.so.2.2.0
lib, /usr/lib/weston/libilmControl.so.2.2.0
lib, /usr/lib/weston/libilmInput.so.2.2.0
lib, /usr/lib/weston/libweston-6.so.0
lib, /usr/lib/weston/libweston-desktop-6.so.0
lib, /usr/lib/weston/simple-weston-client
lib, /usr/lib/weston/spring-tool
lib, /usr/lib/weston/wayland-backend.so
lib, /usr/lib/weston/weston
lib, /usr/lib/weston/weston-calibrator
lib, /usr/lib/weston/weston-clickdot
lib, /usr/lib/weston/weston-cliptest
lib, /usr/lib/weston/weston-debug
lib, /usr/lib/weston/weston-desktop-shell
lib, /usr/lib/weston/weston-dnd
lib, /usr/lib/weston/weston-eventdemo
lib, /usr/lib/weston/weston-flower
lib, /usr/lib/weston/weston-fullscreen
lib, /usr/lib/weston/weston-image
lib, /usr/lib/weston/weston-info
lib, /usr/lib/weston/weston-keyboard
lib, /usr/lib/weston/weston-launch
lib, /usr/lib/weston/weston-multi-resource
lib, /usr/lib/weston/weston-resizor
lib, /usr/lib/weston/weston-scaler
lib, /usr/lib/weston/weston-screenshooter
lib, /usr/lib/weston/weston-simple-dmabuf-egldevice
lib, /usr/lib/weston/weston-simple-egl
lib, /usr/lib/weston/weston-simple-shm
lib, /usr/lib/weston/weston-simple-touch
lib, /usr/lib/weston/weston-smoke
lib, /usr/lib/weston/weston-stacking
lib, /usr/lib/weston/weston-subsurfaces
lib, /usr/lib/weston/weston-terminal
lib, /usr/lib/weston/weston-transformed
lib, /usr/lib/gstreamer-1.0/libgstnvarguscamerasrc.so
lib, /usr/lib/gstreamer-1.0/libgstnvcompositor.so
lib, /usr/lib/gstreamer-1.0/libgstnvdrmvideosink.so
lib, /usr/lib/gstreamer-1.0/libgstnveglglessink.so
lib, /usr/lib/gstreamer-1.0/libgstnveglstreamsrc.so
lib, /usr/lib/gstreamer-1.0/libgstnvegltransform.so
lib, /usr/lib/gstreamer-1.0/libgstnvivafilter.so
lib, /usr/lib/gstreamer-1.0/libgstnvjpeg.so
lib, /usr/lib/gstreamer-1.0/libgstnvtee.so
lib, /usr/lib/gstreamer-1.0/libgstnvvidconv.so
lib, /usr/lib/gstreamer-1.0/libgstnvvideo4linux2.so
lib, /usr/lib/gstreamer-1.0/libgstnvvideocuda.so
lib, /usr/lib/gstreamer-1.0/libgstnvvideosink.so
lib, /usr/lib/gstreamer-1.0/libgstnvvideosinks.so
lib, /usr/lib/gstreamer-1.0/libgstomx.so
lib, /usr/lib/libgstnvegl-1.0.so.0
lib, /usr/lib/libgstnvexifmeta.so
lib, /usr/lib/libgstnvivameta.so
lib, /usr/lib/libnvsample_cudaprocess.so
lib, /usr/lib/aarch64-linux-gnu/tegra-egl/ld.so.conf
lib, /usr/lib/aarch64-linux-gnu/tegra-egl/libEGL_nvidia.so.0
lib, /usr/lib/aarch64-linux-gnu/tegra-egl/libGLESv1_CM_nvidia.so.1
lib, /usr/lib/aarch64-linux-gnu/tegra-egl/libGLESv2_nvidia.so.2
lib, /usr/lib/aarch64-linux-gnu/tegra-egl/nvidia.json
lib, /usr/lib/libcuda.so.1.1
lib, /usr/lib/libdrm.so.2
lib, /usr/lib/libGLX_nvidia.so.0
lib, /usr/lib/libnvapputil.so
lib, /usr/lib/libnvargus.so
lib, /usr/lib/libnvargus_socketclient.so
lib, /usr/lib/libnvargus_socketserver.so
lib, /usr/lib/libnvavp.so
lib, /usr/lib/libnvbuf_fdmap.so.1.0.0
lib, /usr/lib/libnvbufsurface.so.1.0.0
lib, /usr/lib/libnvbufsurftransform.so.1.0.0
lib, /usr/lib/libnvbuf_utils.so.1.0.0
lib, /usr/lib/libnvcameratools.so
lib, /usr/lib/libnvcamerautils.so
lib, /usr/lib/libnvcam_imageencoder.so
lib, /usr/lib/libnvcamlog.so
lib, /usr/lib/libnvcamv4l2.so
lib, /usr/lib/libnvcolorutil.so
lib, /usr/lib/libnvdc.so
lib, /usr/lib/libnvddk_2d_v2.so
lib, /usr/lib/libnvddk_vic.so
lib, /usr/lib/libnvdsbufferpool.so.1.0.0
lib, /usr/lib/libnveglstream_camconsumer.so
lib, /usr/lib/libnveglstreamproducer.so
lib, /usr/lib/libnveventlib.so
lib, /usr/lib/libnvexif.so
lib, /usr/lib/libnvfnet.so
lib, /usr/lib/libnvfnetstoredefog.so
lib, /usr/lib/libnvfnetstorehdfx.so
lib, /usr/lib/libnvgbm.so
lib, /usr/lib/libnvgov_boot.so
lib, /usr/lib/libnvgov_camera.so
lib, /usr/lib/libnvgov_force.so
lib, /usr/lib/libnvgov_generic.so
lib, /usr/lib/libnvgov_gpucompute.so
lib, /usr/lib/libnvgov_graphics.so
lib, /usr/lib/libnvgov_il.so
lib, /usr/lib/libnvgov_spincircle.so
lib, /usr/lib/libnvgov_tbc.so
lib, /usr/lib/libnvgov_ui.so
lib, /usr/lib/libnvidia-eglcore.so.32.3.1
lib, /usr/lib/libnvidia-egl-wayland.so
lib, /usr/lib/libnvidia-fatbinaryloader.so.32.3.1
lib, /usr/lib/libnvidia-glcore.so.32.3.1
lib, /usr/lib/libnvidia-glsi.so.32.3.1
lib, /usr/lib/libnvidia-glvkspirv.so.32.3.1
lib, /usr/lib/libnvidia-ptxjitcompiler.so.32.3.1
lib, /usr/lib/libnvidia-rmapi-tegra.so.32.3.1
lib, /usr/lib/libnvidia-tls.so.32.3.1
lib, /usr/lib/libnvid_mapper.so.1.0.0
lib, /usr/lib/libnvimp.so
lib, /usr/lib/libnvjpeg.so
lib, /usr/lib/libnvll.so
lib, /usr/lib/libnvmedia.so
lib, /usr/lib/libnvmm_contentpipe.so
lib, /usr/lib/libnvmmlite_image.so
lib, /usr/lib/libnvmmlite.so
lib, /usr/lib/libnvmmlite_utils.so
lib, /usr/lib/libnvmmlite_video.so
lib, /usr/lib/libnvmm_parser.so
lib, /usr/lib/libnvmm.so
lib, /usr/lib/libnvmm_utils.so
lib, /usr/lib/libnvodm_imager.so
lib, /usr/lib/libnvofsdk.so
lib, /usr/lib/libnvomxilclient.so
lib, /usr/lib/libnvomx.so
lib, /usr/lib/libnvosd.so
lib, /usr/lib/libnvos.so
lib, /usr/lib/libnvparser.so
lib, /usr/lib/libnvphsd.so
lib, /usr/lib/libnvphs.so
lib, /usr/lib/libnvrm_gpu.so
lib, /usr/lib/libnvrm_graphics.so
lib, /usr/lib/libnvrm.so
lib, /usr/lib/libnvscf.so
lib, /usr/lib/libnvtestresults.so
lib, /usr/lib/libnvtnr.so
lib, /usr/lib/libnvtracebuf.so
lib, /usr/lib/libnvtvmr.so
lib, /usr/lib/libnvv4l2.so
lib, /usr/lib/libnvv4lconvert.so
lib, /usr/lib/libnvvulkan-producer.so
lib, /usr/lib/libnvwinsys.so
lib, /usr/lib/libsensors.hal-client.nvs.so
lib, /usr/lib/libsensors_hal.nvs.so
lib, /usr/lib/libsensors.l4t.no_fusion.nvs.so
lib, /usr/lib/libtegrav4l2.so
lib, /usr/lib/libv4l2_nvvidconv.so
lib, /usr/lib/libv4l2_nvvideocodec.so
lib, /usr/lib/nvidia_icd.json
lib, /etc/vulkan/icd.d/nvidia_icd.json
sym, /usr/lib/libdrm_nvdc.so
sym, /usr/lib/aarch64-linux-gnu/libv4l2.so.0.0.999999
sym, /usr/lib/aarch64-linux-gnu/libv4lconvert.so.0.0.999999
sym, /usr/lib/aarch64-linux-gnu/libv4l/plugins/nv/libv4l2_nvvidconv.so
sym, /usr/lib/aarch64-linux-gnu/libv4l/plugins/nv/libv4l2_nvvideocodec.so
sym, /usr/lib/libcuda.so
sym, /usr/lib/libnvbufsurface.so
sym, /usr/lib/libnvbufsurftransform.so
sym, /usr/lib/libnvbuf_utils.so
sym, /usr/lib/libnvid_mapper.so
lib, /usr/share/glvnd/egl_vendor.d/10-nvidia.json
lib, /lib/firmware/tegra21x/nvhost_nvdec020_ns.fw

Some items on the list are not needed and are not provided by any Yocto recipes, but they will simply be ignored if they are missing. For example, /dev/nvhost-ctrl-nvdla0, /dev/nvhost-ctrl-nvdla1, and a few other listed items from /dev aren't created even in the default NVIDIA SDK Manager image. I haven't gone into depth on what is needed or not, but the debug log can maybe help. Otherwise, since listed items are just ignored when missing, I don't see a reason to remove them, as the listings come from NVIDIA themselves.

triblex commented 4 years ago

Btw @paroque28, the tensorrt and cudnn recipes install the libraries directly in /usr/lib/, not in /usr/lib/aarch64-linux-gnu/, so you have to change those .csv files as well.

paroque28 commented 4 years ago

Thanks @triblex, I finally made it work :) Your help was so useful.

root@jetson-nano-qspi-sd:~# docker run -v /usr/bin/cuda-samples/:/usr/bin/cuda-samples/ -it --runtime nvidia nvcr.io/nvidia/l4t-base:r32.3
root@85f6ce9ce7d5:/# /usr/bin/cuda-samples/UnifiedMemoryStreams
GPU Device 0: "NVIDIA Tegra X1" with compute capability 5.3

Executing tasks on host / device
Task [0], thread [0] executing on device (363)
Task [1], thread [1] executing on device (791)
Task [3], thread [2] executing on device (414)
Task [2], thread [3] executing on device (753)
Task [4], thread [0] executing on device (762)
Task [5], thread [2] executing on device (756)
Task [6], thread [1] executing on device (952)
Task [7], thread [3] executing on device (763)
Task [8], thread [0] executing on device (942)
Task [9], thread [2] executing on device (853)
Task [10], thread [1] executing on device (463)
Task [11], thread [3] executing on device (375)
Task [12], thread [0] executing on device (102)
Task [13], thread [2] executing on device (937)
Task [14], thread [1] executing on host (64)
Task [15], thread [3] executing on device (538)
Task [16], thread [0] executing on device (922)
Task [17], thread [1] executing on host (86)
Task [18], thread [2] executing on device (298)
Task [19], thread [3] executing on device (693)
Task [20], thread [1] executing on device (686)
Task [21], thread [0] executing on device (673)
Task [22], thread [2] executing on device (758)
Task [23], thread [3] executing on device (911)
Task [25], thread [0] executing on device (899)
Task [24], thread [1] executing on device (272)
Task [26], thread [2] executing on device (918)
Task [27], thread [3] executing on host (64)
Task [28], thread [0] executing on device (697)
Task [29], thread [1] executing on device (321)
Task [30], thread [2] executing on device (923)
Task [31], thread [3] executing on device (591)
Task [32], thread [0] executing on device (248)
Task [33], thread [1] executing on device (948)
Task [34], thread [2] executing on device (853)
Task [35], thread [3] executing on device (570)
Task [36], thread [0] executing on device (680)
Task [37], thread [1] executing on device (165)
Task [38], thread [2] executing on device (225)
Task [39], thread [3] executing on device (970)
All Done!

I will be uploading this soon

paroque28 commented 4 years ago

https://github.com/madisongh/meta-tegra/pull/248

triblex commented 4 years ago

There seems to be a problem getting access to the CSI camera from inside the Docker containers. Have you tried this? I'm using the Raspberry Pi camera and running this pipeline:

gst-launch-1.0 -e nvarguscamerasrc num-buffers=-1 ! 'video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)NV12, framerate=(fraction)60/1' ! nvvidconv flip-method=0 ! nvvidconv ! nvegltransform ! udpsink host=$HOST_IP port=$GST_PORT

The result is this error message:

Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
=== gst-launch-1.0[65]: Connection established (7FB09771D0)SCF: Error NotSupported: Failed to load EGL library (in src/services/gl/GLService.cpp, function initializeEGLExportFunctions(), line 190)
SCF: Error NotSupported:  (propagating from src/services/gl/GLService.cpp, function initialize(), line 147)
SCF: Error NotSupported:  (propagating from src/services/gl/GLService.cpp, function startService(), line 46)
SCF: Error NotSupported:  (propagating from src/components/ServiceHost.cpp, function startServices(), line 138)
SCF: Error NotSupported:  (propagating from src/api/CameraDriver.cpp, function initialize(), line 168)
SCF: Error InvalidState: Services are already stopped (in src/components/ServiceHost.cpp, function stopServicesInternal(), line 188)
SCF: Error NotSupported:  (propagating from src/api/CameraDriver.cpp, function getCameraDriver(), line 109)
(Argus) Error NotSupported:  (propagating from src/api/GlobalProcessState.cpp, function createCameraProvider(), line 204)
=== gst-launch-1.0[65]: CameraProvider initialized (0x7fac014ec0)Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute:532 No cameras available
Got EOS from element "pipeline0".
Execution ended after 0:00:00.032214229
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
=== gst-launch-1.0[65]: CameraProvider destroyed (0x7fac014ec0)(Argus) Error EndOfFile: Unexpected error in reading socket (in src/rpc/socket/client/ClientSocketManager.cpp, function recvThreadCore(), line 266)
(Argus) Error EndOfFile: Receive worker failure, notifying 1 waiting threads (in src/rpc/socket/client/ClientSocketManager.cpp, function recvThreadCore(), line 340)
(Argus) Error InvalidState: Argus client is exiting with 1 outstanding client threads (in src/rpc/socket/client/ClientSocketManager.cpp, function recvThreadCore(), line 357)
(Argus) Error EndOfFile: Receiving thread terminated with error (in src/rpc/socket/client/ClientSocketManager.cpp, function recvThreadWrapper(), line 368)
(Argus) Error EndOfFile: Client thread received an error from socket (in src/rpc/socket/client/ClientSocketManager.cpp, function send(), line 145)
(Argus) Error EndOfFile:  (propagating from src/rpc/socket/client/SocketClientDispatch.cpp, function dispatch(), line 87)

Using the flag -e DISPLAY=0 just gives a new error:

nvbuf_utils: Could not get EGL display connection

I'm running a headless Yocto/Poky-Zeus build.

triblex commented 4 years ago

The above problem with using GStreamer with a CSI-connected camera inside Docker can be solved by setting the correct arguments when running the container. Note the --ipc=host flag and the volume mount -v /tmp/argus_socket:/tmp/argus_socket. The following command worked for me on the official Nvidia SDK-Manager image:

docker run --net=host --runtime nvidia --rm --ipc=host -v /tmp/.X11-unix/:/tmp/.X11-unix/ -v /tmp/argus_socket:/tmp/argus_socket --cap-add SYS_PTRACE -e DISPLAY=$DISPLAY -it nvcr.io/nvidia/l4t-base:r32.3.1

However, running the l4t-base:r32.3.1 image on a Yocto build still poses an issue. I believe it is because the Docker image is looking for libraries in the wrong places, as the official csv files have different paths. I'm trying to link all the files added from the csv files to the "correct" places.

There could also very well be a problem with the GStreamer versions, as the Docker image has GStreamer 1.14 but Yocto zeus has GStreamer 1.16.

paroque28 commented 4 years ago

FYI: for some reason, only l4t.csv is generated when I use the wip-container-32.3.1 branch. I will stay on top of that.

dremsol commented 4 years ago

@paroque28 I'm currently evaluating the suggestions made by @triblex in #248. So far I've been able to verify the following:

-sym, ${base_libdir}/aarch64-linux-gnu/libv4l2.so.0.0.999999
-sym, ${base_libdir}/aarch64-linux-gnu/libv4lconvert.so.0.0.999999
-sym, ${base_libdir}/aarch64-linux-gnu/libv4l/plugins/nv/libv4l2_nvvidconv.so
-sym, ${base_libdir}/aarch64-linux-gnu/libv4l/plugins/nv/libv4l2_nvvideocodec.so
+sym, ${libdir}/libv4l2.so.0
+sym, ${libdir}/libv4lconvert.so.0
+lib, ${libdir}/libv4l2_nvvidconv.so
+lib, ${libdir}/libv4l2_nvvideocodec.so

I'm currently still verifying how to deal with the following:

root@jetson-nano-qspi-sd:~# find / -name "libdrm*"
/usr/lib/libdrm.so.2.4.0
/usr/lib/tegra/libdrm.so.2
/usr/lib/libdrm.so.2
root@jetson-nano-qspi-sd:~# ls -al /usr/lib/libdrm.so.2.4.0
-rwxr-xr-x    1 root     root         67416 Feb 28 14:21 /usr/lib/libdrm.so.2.4.0
root@jetson-nano-qspi-sd:~# ls -al /usr/lib/libdrm.so.2    
lrwxrwxrwx    1 root     root            15 Feb 28 14:21 /usr/lib/libdrm.so.2 -> libdrm.so.2.4.0
root@jetson-nano-qspi-sd:~# ls -al /usr/lib/tegra/libdrm.so.2
-rw-r--r--    1 root     root        120256 Feb 28 14:21 /usr/lib/tegra/libdrm.so.2 

The best approach would be to tie each entry of the .csv file to the package that provides it in the build configuration. Do you have any thoughts on that?

dremsol commented 4 years ago

@triblex I do agree with you that there are still some bits missing. I'm running a Docker container with the following command (nvargus-daemon is running and /dev/video0 is mounted with the Raspberry Pi camera):

docker run -it --rm --net=host --runtime nvidia -v /tmp/argus_socket:/tmp/argus_socket nvcr.io/nvidia/deepstream-l4t:4.0.2-19.12-base

volume-mapping the argus_socket, and running the following GStreamer pipeline:

~# gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), format=NV12, width=1920, height=1080' ! nvv4l2h264enc insert-sps-pps=true ! h264parse ! rtph264pay pt=96 ! udpsink host=127.0.0.1 port=8001 sync=false -e

(gst-plugin-scanner:20): GStreamer-WARNING **: 15:16:54.547: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_dewarper.so': libnppig.so.10.0: cannot open shared object file: No such file or directory

(gst-plugin-scanner:20): GStreamer-WARNING **: 15:16:54.564: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_infer.so': libnvparsers.so.6: cannot open shared object file: No such file or directory
WARNING: erroneous pipeline: no element "nvarguscamerasrc"

it still complains about missing libraries. Both are present on the host at:

/usr/lib/libnvparsers.so.6
/usr/local/cuda-10.0/lib/libnppig.so.10.0

But they are not present in l4t.csv. Last but not least, it seems the pipeline is not able to communicate with the Argus API, judging by the missing nvarguscamerasrc element.
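
One thing that might be worth trying (untested on my side, just a sketch) is adding those host paths to one of the .csv files so the runtime mounts them into the container, using lib or sym depending on whether the host file is the real library or a symlink:

lib, /usr/lib/libnvparsers.so.6
lib, /usr/local/cuda-10.0/lib/libnppig.so.10.0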

All of the above is not very satisfying, but I wanted to share it before the start of the weekend.

paroque28 commented 4 years ago

Oh, I see it now: it's because you need to add each of the *-container-csv packages to IMAGE_INSTALL. I will see if there's an automated way of doing this with Yocto.
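
In other words, something along these lines in local.conf or the image recipe (the package names here are only examples; use whatever the *-container-csv packages are actually called):

IMAGE_INSTALL_append = " cuda-container-csv cudnn-container-csv tensorrt-container-csv"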

@wremie I will look into this and share my thoughts. I currently don't have a Raspberry Pi camera, but I will find one.

dremsol commented 4 years ago

@paroque28 I just stumbled upon the issue created by @triblex in nvidia-docker. I think you'll appreciate the post, as I'm experiencing similar issues.

triblex commented 4 years ago

@wremie

I found that linking or copying all the files from /usr/lib/gstreamer-1.0/ to /usr/lib/aarch64-linux-gnu/gstreamer-1.0/ inside the Docker image fixes the missing nvarguscamerasrc issue (along with some other possible issues).

But even if all the same files are present on the host as on the official NVIDIA SDK-Manager Ubuntu image, and are correctly listed in the l4t.csv file to be mounted into the Docker image, there are still problems. The paths that differ from the original l4t.csv need to be linked or copied to the "correct" places inside the Docker image, or the Docker image needs to be changed to look for them elsewhere.

Maybe a startup script could do the linking of all the l4t.csv-listed files that are in the "wrong" path.

That, and then GStreamer probably needs to be upgraded to 1.16 in the Docker image or downgraded to 1.14 on the host.

triblex commented 4 years ago

I've downgraded GStreamer to 1.14 and attempted to link all the files to the places where the official Nvidia Docker image assumes they're located.

I ran this pipeline as before (it works outside the Docker container):

gst-launch-1.0 -e nvarguscamerasrc num-buffers=-1 ! 'video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)NV12, framerate=(fraction)60/1' ! nvvidconv flip-method=0 ! nvvidconv ! nvegltransform ! udpsink host=$HOST_IP port=$GST_PORT

I got this error:

Setting pipeline to PAUSED ...
[ 2415.341857] (NULL device *): nvhost_channelctl: invalid cmd 0x80685600
[ 2415.350034] (NULL device *): nvhost_channelctl: invalid cmd 0x80685600
[ 2415.357153] (NULL device *): nvhost_channelctl: invalid cmd 0x80685600
Failed to query video capabilities: Inappropriate ioctl for device
libv4l2: error getting capabilities: Inappropriate ioctl for device
ERROR: Pipeline doesn't want to pause.
ERROR: from element /GstPipeline:pipeline0/nvv4l2h264enc:nvv4l2h264enc0: Error getting capabilities for device '/dev/nvhost-msenc': It isn't a v4l2 driver. Check if it is a v4l1 driver.
Additional debug info:
v4l2_calls.c(98): gst_v4l2_get_capabilities (): /GstPipeline:pipeline0/nvv4l2h264enc:nvv4l2h264enc0:
system error: Inappropriate ioctl for device
Setting pipeline to NULL ...
Freeing pipeline ...

dremsol commented 4 years ago

@triblex thanks for the work. If I remember correctly, you mount the device node as -v /dev/video0:/dev/video0.

However, according to deepstream-l4t, the CSI camera should be mounted as -v /tmp/argus_socket:/tmp/argus_socket.

Maybe this is worth trying?

triblex commented 4 years ago

@wremie I've actually tried running the container with both. I am using this docker run command:

docker run --net=host --runtime nvidia --rm --ipc=host -v /tmp/.X11-unix/:/tmp/.X11-unix/ -v /tmp/argus_socket:/tmp/argus_socket --device=/dev/video0:/dev/video0 --cap-add SYS_PTRACE -e DISPLAY=$DISPLAY -it nvcr.io/nvidia/l4t-base:r32.3.1

And the camera access is there, as v4l2-ctl --all shows all the details of the Pi camera.

The copy script I've made:

mkdir -p /usr/lib/aarch64-linux-gnu/libv4l/plugins/nv/
mkdir -p /usr/lib/aarch64-linux-gnu/tegra
cp -avr /usr/lib/libv4l2.so.0 /usr/lib/aarch64-linux-gnu/libv4l2.so.0
cp -avr /usr/lib/gstreamer-1.0/libgstnvarguscamerasrc.so /usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvarguscamerasrc.so
cp -avr /usr/lib/gstreamer-1.0/libgstnvcompositor.so /usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvcompositor.so
cp -avr /usr/lib/gstreamer-1.0/libgstnvdrmvideosink.so /usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvdrmvideosink.so
cp -avr /usr/lib/gstreamer-1.0/libgstnveglglessink.so /usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnveglglessink.so
cp -avr /usr/lib/gstreamer-1.0/libgstnveglstreamsrc.so /usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnveglstreamsrc.so
cp -avr /usr/lib/gstreamer-1.0/libgstnvegltransform.so /usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvegltransform.so
cp -avr /usr/lib/gstreamer-1.0/libgstnvivafilter.so /usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvivafilter.so
cp -avr /usr/lib/gstreamer-1.0/libgstnvjpeg.so /usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvjpeg.so
cp -avr /usr/lib/gstreamer-1.0/libgstnvtee.so /usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvtee.so
cp -avr /usr/lib/gstreamer-1.0/libgstnvvidconv.so /usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvvidconv.so
cp -avr /usr/lib/gstreamer-1.0/libgstnvvideo4linux2.so /usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvvideo4linux2.so
cp -avr /usr/lib/gstreamer-1.0/libgstnvvideoconv.so /usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvvideoconv.so
cp -avr /usr/lib/gstreamer-1.0/libgstnvvideocuda.so /usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvvideocuda.so
cp -avr /usr/lib/gstreamer-1.0/libgstnvvideosink.so /usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvvideosink.so
cp -avr /usr/lib/gstreamer-1.0/libgstnvvideosinks.so /usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvvideosinks.so
cp -avr /usr/lib/gstreamer-1.0/libgstomx.so /usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstomx.so
cp -avr /usr/lib/libgstnvegl-1.0.so.0 /usr/lib/aarch64-linux-gnu/libgstnvegl-1.0.so.0
cp -avr /usr/lib/libgstnvexifmeta.so /usr/lib/aarch64-linux-gnu/libgstnvexifmeta.so
cp -avr /usr/lib/libgstnvivameta.so /usr/lib/aarch64-linux-gnu/libgstnvivameta.so
cp -avr /usr/lib/libnvsample_cudaprocess.so /usr/lib/aarch64-linux-gnu/libnvsample_cudaprocess.so
cp -avr /usr/lib/ld.so.conf /usr/lib/aarch64-linux-gnu/tegra-egl/ld.so.conf
cp -avr /usr/lib/libEGL_nvidia.so.0 /usr/lib/aarch64-linux-gnu/tegra-egl/libEGL_nvidia.so.0
cp -avr /usr/lib/libGLESv1_CM_nvidia.so.1 /usr/lib/aarch64-linux-gnu/tegra-egl/libGLESv1_CM_nvidia.so.1
cp -avr /usr/lib/libGLESv2_nvidia.so.2 /usr/lib/aarch64-linux-gnu/tegra-egl/libGLESv2_nvidia.so.2
cp -avr /usr/lib/nvidia.json /usr/lib/aarch64-linux-gnu/tegra-egl/nvidia.json
cp -avr /usr/lib/libcuda.so.1.1 /usr/lib/aarch64-linux-gnu/tegra/libcuda.so.1.1
cp -avr /usr/lib/libdrm.so.2 /usr/lib/aarch64-linux-gnu/tegra/libdrm.so.2
cp -avr /usr/lib/libGLX_nvidia.so.0 /usr/lib/aarch64-linux-gnu/tegra/libGLX_nvidia.so.0
cp -avr /usr/lib/libnvapputil.so /usr/lib/aarch64-linux-gnu/tegra/libnvapputil.so
cp -avr /usr/lib/libnvargus.so /usr/lib/aarch64-linux-gnu/tegra/libnvargus.so
cp -avr /usr/lib/libnvargus_socketclient.so /usr/lib/aarch64-linux-gnu/tegra/libnvargus_socketclient.so
cp -avr /usr/lib/libnvargus_socketserver.so /usr/lib/aarch64-linux-gnu/tegra/libnvargus_socketserver.so
cp -avr /usr/lib/libnvavp.so /usr/lib/aarch64-linux-gnu/tegra/libnvavp.so
cp -avr /usr/lib/libnvbuf_fdmap.so.1.0.0 /usr/lib/aarch64-linux-gnu/tegra/libnvbuf_fdmap.so.1.0.0
cp -avr /usr/lib/libnvbufsurface.so.1.0.0 /usr/lib/aarch64-linux-gnu/tegra/libnvbufsurface.so.1.0.0
cp -avr /usr/lib/libnvbufsurftransform.so.1.0.0 /usr/lib/aarch64-linux-gnu/tegra/libnvbufsurftransform.so.1.0.0
cp -avr /usr/lib/libnvbuf_utils.so.1.0.0 /usr/lib/aarch64-linux-gnu/tegra/libnvbuf_utils.so.1.0.0
cp -avr /usr/lib/libnvcameratools.so /usr/lib/aarch64-linux-gnu/tegra/libnvcameratools.so
cp -avr /usr/lib/libnvcamerautils.so /usr/lib/aarch64-linux-gnu/tegra/libnvcamerautils.so
cp -avr /usr/lib/libnvcam_imageencoder.so /usr/lib/aarch64-linux-gnu/tegra/libnvcam_imageencoder.so
cp -avr /usr/lib/libnvcamlog.so /usr/lib/aarch64-linux-gnu/tegra/libnvcamlog.so
cp -avr /usr/lib/libnvcamv4l2.so /usr/lib/aarch64-linux-gnu/tegra/libnvcamv4l2.so
cp -avr /usr/lib/libnvcolorutil.so /usr/lib/aarch64-linux-gnu/tegra/libnvcolorutil.so
cp -avr /usr/lib/libnvdc.so /usr/lib/aarch64-linux-gnu/tegra/libnvdc.so
cp -avr /usr/lib/libnvddk_2d_v2.so /usr/lib/aarch64-linux-gnu/tegra/libnvddk_2d_v2.so
cp -avr /usr/lib/libnvddk_vic.so /usr/lib/aarch64-linux-gnu/tegra/libnvddk_vic.so
cp -avr /usr/lib/libnvdsbufferpool.so.1.0.0 /usr/lib/aarch64-linux-gnu/tegra/libnvdsbufferpool.so.1.0.0
cp -avr /usr/lib/libnveglstream_camconsumer.so /usr/lib/aarch64-linux-gnu/tegra/libnveglstream_camconsumer.so
cp -avr /usr/lib/libnveglstreamproducer.so /usr/lib/aarch64-linux-gnu/tegra/libnveglstreamproducer.so
cp -avr /usr/lib/libnveventlib.so /usr/lib/aarch64-linux-gnu/tegra/libnveventlib.so
cp -avr /usr/lib/libnvexif.so /usr/lib/aarch64-linux-gnu/tegra/libnvexif.so
cp -avr /usr/lib/libnvfnet.so /usr/lib/aarch64-linux-gnu/tegra/libnvfnet.so
cp -avr /usr/lib/libnvfnetstoredefog.so /usr/lib/aarch64-linux-gnu/tegra/libnvfnetstoredefog.so
cp -avr /usr/lib/libnvfnetstorehdfx.so /usr/lib/aarch64-linux-gnu/tegra/libnvfnetstorehdfx.so
cp -avr /usr/lib/libnvgov_boot.so /usr/lib/aarch64-linux-gnu/tegra/libnvgov_boot.so
cp -avr /usr/lib/libnvgov_camera.so /usr/lib/aarch64-linux-gnu/tegra/libnvgov_camera.so
cp -avr /usr/lib/libnvgov_force.so /usr/lib/aarch64-linux-gnu/tegra/libnvgov_force.so
cp -avr /usr/lib/libnvgov_generic.so /usr/lib/aarch64-linux-gnu/tegra/libnvgov_generic.so
cp -avr /usr/lib/libnvgov_gpucompute.so /usr/lib/aarch64-linux-gnu/tegra/libnvgov_gpucompute.so
cp -avr /usr/lib/libnvgov_graphics.so /usr/lib/aarch64-linux-gnu/tegra/libnvgov_graphics.so
cp -avr /usr/lib/libnvgov_il.so /usr/lib/aarch64-linux-gnu/tegra/libnvgov_il.so
cp -avr /usr/lib/libnvgov_spincircle.so /usr/lib/aarch64-linux-gnu/tegra/libnvgov_spincircle.so
cp -avr /usr/lib/libnvgov_tbc.so /usr/lib/aarch64-linux-gnu/tegra/libnvgov_tbc.so
cp -avr /usr/lib/libnvgov_ui.so /usr/lib/aarch64-linux-gnu/tegra/libnvgov_ui.so
cp -avr /usr/lib/libnvidia-eglcore.so.32.2.0 /usr/lib/aarch64-linux-gnu/tegra/libnvidia-eglcore.so.32.2.0
cp -avr /usr/lib/libnvidia-egl-wayland.so /usr/lib/aarch64-linux-gnu/tegra/libnvidia-egl-wayland.so
cp -avr /usr/lib/libnvidia-fatbinaryloader.so.32.2.0 /usr/lib/aarch64-linux-gnu/tegra/libnvidia-fatbinaryloader.so.32.2.0
cp -avr /usr/lib/libnvidia-glcore.so.32.2.0 /usr/lib/aarch64-linux-gnu/tegra/libnvidia-glcore.so.32.2.0
cp -avr /usr/lib/libnvidia-glsi.so.32.2.0 /usr/lib/aarch64-linux-gnu/tegra/libnvidia-glsi.so.32.2.0
cp -avr /usr/lib/libnvidia-glvkspirv.so.32.2.0 /usr/lib/aarch64-linux-gnu/tegra/libnvidia-glvkspirv.so.32.2.0
cp -avr /usr/lib/libnvidia-ptxjitcompiler.so.32.2.0 /usr/lib/aarch64-linux-gnu/tegra/libnvidia-ptxjitcompiler.so.32.2.0
cp -avr /usr/lib/libnvidia-rmapi-tegra.so.32.2.0 /usr/lib/aarch64-linux-gnu/tegra/libnvidia-rmapi-tegra.so.32.2.0
cp -avr /usr/lib/libnvidia-tls.so.32.2.0 /usr/lib/aarch64-linux-gnu/tegra/libnvidia-tls.so.32.2.0
cp -avr /usr/lib/libnvid_mapper.so.1.0.0 /usr/lib/aarch64-linux-gnu/tegra/libnvid_mapper.so.1.0.0
cp -avr /usr/lib/libnvimp.so /usr/lib/aarch64-linux-gnu/tegra/libnvimp.so
cp -avr /usr/lib/libnvjpeg.so /usr/lib/aarch64-linux-gnu/tegra/libnvjpeg.so
cp -avr /usr/lib/libnvll.so /usr/lib/aarch64-linux-gnu/tegra/libnvll.so
cp -avr /usr/lib/libnvmedia.so /usr/lib/aarch64-linux-gnu/tegra/libnvmedia.so
cp -avr /usr/lib/libnvmm_contentpipe.so /usr/lib/aarch64-linux-gnu/tegra/libnvmm_contentpipe.so
cp -avr /usr/lib/libnvmmlite_image.so /usr/lib/aarch64-linux-gnu/tegra/libnvmmlite_image.so
cp -avr /usr/lib/libnvmmlite.so /usr/lib/aarch64-linux-gnu/tegra/libnvmmlite.so
cp -avr /usr/lib/libnvmmlite_utils.so /usr/lib/aarch64-linux-gnu/tegra/libnvmmlite_utils.so
cp -avr /usr/lib/libnvmmlite_video.so /usr/lib/aarch64-linux-gnu/tegra/libnvmmlite_video.so
cp -avr /usr/lib/libnvmm_parser.so /usr/lib/aarch64-linux-gnu/tegra/libnvmm_parser.so
cp -avr /usr/lib/libnvmm.so /usr/lib/aarch64-linux-gnu/tegra/libnvmm.so
cp -avr /usr/lib/libnvmm_utils.so /usr/lib/aarch64-linux-gnu/tegra/libnvmm_utils.so
cp -avr /usr/lib/libnvodm_imager.so /usr/lib/aarch64-linux-gnu/tegra/libnvodm_imager.so
cp -avr /usr/lib/libnvofsdk.so /usr/lib/aarch64-linux-gnu/tegra/libnvofsdk.so
cp -avr /usr/lib/libnvomxilclient.so /usr/lib/aarch64-linux-gnu/tegra/libnvomxilclient.so
cp -avr /usr/lib/libnvomx.so /usr/lib/aarch64-linux-gnu/tegra/libnvomx.so
cp -avr /usr/lib/libnvosd.so /usr/lib/aarch64-linux-gnu/tegra/libnvosd.so
cp -avr /usr/lib/libnvos.so /usr/lib/aarch64-linux-gnu/tegra/libnvos.so
cp -avr /usr/lib/libnvparser.so /usr/lib/aarch64-linux-gnu/tegra/libnvparser.so
cp -avr /usr/lib/libnvphsd.so /usr/lib/aarch64-linux-gnu/tegra/libnvphsd.so
cp -avr /usr/lib/libnvphs.so /usr/lib/aarch64-linux-gnu/tegra/libnvphs.so
cp -avr /usr/lib/libnvrm_gpu.so /usr/lib/aarch64-linux-gnu/tegra/libnvrm_gpu.so
cp -avr /usr/lib/libnvrm_graphics.so /usr/lib/aarch64-linux-gnu/tegra/libnvrm_graphics.so
cp -avr /usr/lib/libnvrm.so /usr/lib/aarch64-linux-gnu/tegra/libnvrm.so
cp -avr /usr/lib/libnvscf.so /usr/lib/aarch64-linux-gnu/tegra/libnvscf.so
cp -avr /usr/lib/libnvtestresults.so /usr/lib/aarch64-linux-gnu/tegra/libnvtestresults.so
cp -avr /usr/lib/libnvtnr.so /usr/lib/aarch64-linux-gnu/tegra/libnvtnr.so
cp -avr /usr/lib/libnvtracebuf.so /usr/lib/aarch64-linux-gnu/tegra/libnvtracebuf.so
cp -avr /usr/lib/libnvtvmr.so /usr/lib/aarch64-linux-gnu/tegra/libnvtvmr.so
cp -avr /usr/lib/libnvv4l2.so /usr/lib/aarch64-linux-gnu/tegra/libnvv4l2.so
cp -avr /usr/lib/libnvv4lconvert.so /usr/lib/aarch64-linux-gnu/tegra/libnvv4lconvert.so
cp -avr /usr/lib/libnvvulkan-producer.so /usr/lib/aarch64-linux-gnu/tegra/libnvvulkan-producer.so
cp -avr /usr/lib/libnvwinsys.so /usr/lib/aarch64-linux-gnu/tegra/libnvwinsys.so
cp -avr /usr/lib/libsensors.hal-client.nvs.so /usr/lib/aarch64-linux-gnu/tegra/libsensors.hal-client.nvs.so
cp -avr /usr/lib/libsensors_hal.nvs.so /usr/lib/aarch64-linux-gnu/tegra/libsensors_hal.nvs.so
cp -avr /usr/lib/libsensors.l4t.no_fusion.nvs.so /usr/lib/aarch64-linux-gnu/tegra/libsensors.l4t.no_fusion.nvs.so
cp -avr /usr/lib/libtegrav4l2.so /usr/lib/aarch64-linux-gnu/tegra/libtegrav4l2.so
cp -avr /usr/lib/libv4l2_nvvidconv.so /usr/lib/aarch64-linux-gnu/tegra/libv4l2_nvvidconv.so
cp -avr /usr/lib/libv4l2_nvvideocodec.so /usr/lib/aarch64-linux-gnu/tegra/libv4l2_nvvideocodec.so
cp -avr /usr/lib/nvidia_icd.json /usr/lib/aarch64-linux-gnu/tegra/nvidia_icd.json
cp -avr /usr/lib/libv4l2.so.0 /usr/lib/aarch64-linux-gnu/libv4l2.so.0.0.999999
cp -avr /usr/lib/libv4lconvert.so.0 /usr/lib/aarch64-linux-gnu/libv4lconvert.so.0.0.999999
cp -avr /usr/lib/libv4l/plugins/libv4l2_nvvidconv.so /usr/lib/aarch64-linux-gnu/libv4l/plugins/nv/libv4l2_nvvidconv.so
cp -avr /usr/lib/libv4l/plugins/libv4l2_nvvideocodec.so /usr/lib/aarch64-linux-gnu/libv4l/plugins/nv/libv4l2_nvvideocodec.so

I'm stuck at this point, as I don't really know what else I could try.

dremsol commented 4 years ago

@triblex I've been digging through all the issues and PRs in the several repos out there, but I didn't find extra clues on top of what you have done so far. At least, it doesn't seem that NVIDIA is giving Jetson priority on this matter.

@paroque28 did you get a chance to test the containerised CSI camera pipeline?