dusty-nv / jetson-inference

Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
https://developer.nvidia.com/embedded/twodaystoademo
MIT License
7.78k stars 2.98k forks

AMD64 container does not work as expected. #1609

Open CapBarbossa opened 1 year ago

CapBarbossa commented 1 year ago

I pulled the AMD64 container from https://hub.docker.com/layers/dustynv/jetson-inference/22.06/images/sha256-9cb5cd3fe78da96faa013414190e1603b617969ed40b6dd456650fee55f46c0e?context=explore

Then I ran `video-viewer /dev/video0`, and it fails with:

```
[cuda]   cudaGraphicsGLRegisterBuffer(&interop, allocDMA(type), cudaGraphicsRegisterFlagsFromGL(flags))
[cuda]   OS call failed or operation not supported on this OS (error 304) (hex 0x130)
[cuda]   /work/utils/display/glTexture.cpp:360
X Error of failed request:  BadValue (integer parameter out of range for operation)
  Major opcode of failed request:  130 (MIT-SHM)
  Minor opcode of failed request:  3 (X_ShmPutImage)
  Value in failed request:  0x640
  Serial number of failed request:  59
  Current serial number in output stream:  60
```

The GPU I am using is GPU 0: NVIDIA GeForce RTX 3090. The host machine has CUDA 12.0 installed, while this Docker image uses CUDA 11.4. From what I have read, a CUDA version mismatch like this shouldn't matter, and I have already researched this topic for quite a while. I found a forum post (https://forums.developer.nvidia.com/t/how-to-build-jetson-inference-in-host-pc/53522/2) where someone modified something in CMakeLists.txt, but I am not sure where to make that modification. It seems like this is related to the CUDA version mismatch. Could you help me with this? Thanks in advance.

dusty-nv commented 1 year ago

@CapBarbossa the jetson-inference container for x86_64 is a beta feature and not officially supported, so you may find some things that don't totally work, the OpenGL display stuff in particular. That said, others (myself included) have indicated it worked for them, although we had older GPU cards than you, so you may need to rebuild the container against a newer version of the NGC PyTorch container: https://github.com/dusty-nv/jetson-inference/blob/56604b07da34bdf32c02e1201ab99c413b47bc85/docker/build.sh#L48
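A minimal sketch of that rebuild, assuming the base-image tag is set near the linked line in `docker/build.sh` and that `nvcr.io/nvidia/pytorch:23.10-py3` is a valid NGC tag for your driver (both are assumptions; check the script and the NGC catalog for the actual variable name and a tag matching your CUDA 12 driver):

```shell
# Sketch: rebuild the x86_64 jetson-inference container against a newer NGC PyTorch base.
# The sed pattern and the 23.10-py3 tag are assumptions; adjust to what docker/build.sh
# actually contains and to a tag compatible with your host driver.
git clone --recursive https://github.com/dusty-nv/jetson-inference
cd jetson-inference

# Point the build at a newer PyTorch base image
sed -i 's|nvcr.io/nvidia/pytorch:[0-9.]*-py3|nvcr.io/nvidia/pytorch:23.10-py3|' docker/build.sh

# Rebuild the container locally
docker/build.sh
```

A newer base brings a newer CUDA user-space runtime into the container, which is more likely to line up with the Ampere-era RTX 3090 and a CUDA 12 host driver.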

Also, did you start the container with the docker/run.sh script from jetson-inference or with your own command? Does the non-display stuff work like running imagenet/detectnet on a file?
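For reference, testing the non-display path might look like the following (a sketch: the sample image filenames are assumptions based on the repo's `images/` directory; substitute any image you have):

```shell
# Start the container with the repo's run script, which sets up device
# and volume mounts the samples expect
cd jetson-inference
docker/run.sh

# Inside the container: run inference on files instead of the camera/display,
# so no OpenGL/CUDA interop is needed
imagenet  images/orange_0.jpg images/test/output_imagenet.jpg
detectnet images/peds_0.jpg   images/test/output_detectnet.jpg
```

If these produce output images but `video-viewer /dev/video0` still fails, the problem is isolated to the OpenGL display path rather than TensorRT/CUDA inference itself.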

dusty-nv commented 1 year ago

It may also be worth pointing out that OpenGL/CUDA interoperability isn't going to work with X11 forwarding/tunneling if you are viewing this PC's display remotely. For that, I recommend just streaming the video over RTP/RTSP/WebRTC.
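Streaming the camera over RTP instead of opening a local OpenGL window might look like this (a sketch following the project's video-streaming conventions; `<remote-ip>` and the port are placeholders you would fill in):

```shell
# On the machine with the camera: capture and send the stream over RTP
video-viewer /dev/video0 rtp://<remote-ip>:1234

# On the remote machine: receive and display the stream
video-viewer rtp://@:1234
```

This sidesteps X11 forwarding entirely, so the OpenGL/CUDA interop failure above never comes into play.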