dusty-nv / jetson-containers

Machine Learning Containers for NVIDIA Jetson and JetPack-L4T
MIT License

An Issue when running an app using TensorRT model on docker container #19

Open rose-jinyang opened 4 years ago

rose-jinyang commented 4 years ago

Hello. Thanks for contributing this project. I built a Jetson Docker image from the L4T image nvcr.io/nvidia/l4t-tensorflow:r32.4.3-tf1.15-py3. I am using a Jetson Xavier NX, flashed with the JetPack 4.4 SD card image (CUDA 10.2, cuDNN 8.0, TensorRT 7.1.3).

I built a TensorRT engine for a deep learning model, and that engine works well on the host (Jetson). I also built another TensorRT engine, and an app that uses it, inside the Docker container. I ran the container on the host as follows: sudo docker run -it --runtime nvidia myimage. When I start the app with that TensorRT engine inside the container, I get the following error:

[E] [TRT] coreReadArchive.cpp (38) - Serialization Error in verifyHeader: 0 (Version tag does not match)
[E] [TRT] INVALID_STATE: std::exception
[E] [TRT] INVALID_CONFIG: Deserialize the cuda engine failed.

I thought this might be caused by a TensorRT version mismatch, so I compared the two versions, but they are the same. How can I fix this issue? Thanks
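A minimal sketch for narrowing this down, assuming the serialized engine is at a placeholder path model.engine: print the TensorRT version and attempt to deserialize the engine, once on the host and once inside the container, so any mismatch shows up explicitly.

```python
# Minimal check: print the TensorRT version and try to deserialize the engine.
# "model.engine" is a placeholder path; replace it with the actual engine file.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
print("TensorRT version:", trt.__version__)

with open("model.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
    print("Deserialization", "succeeded" if engine is not None else "failed")
```

If the engine is expected to be portable between host and container, running this snippet in both places should print identical version strings and succeed in both.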

dusty-nv commented 4 years ago

Is the L4T version the same? If not, the TensorRT version may differ in a minor revision even though it looks equal - try regenerating the engine on the same L4T version.
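A sketch of what regenerating the engine inside the target container could look like, assuming the model is available as an ONNX file (model.onnx and model.engine are placeholder names) and using the TensorRT 7.x Python API. The /etc/nv_tegra_release read is only there to confirm which L4T release the rebuild happens on and may be absent in some containers.

```python
# Rebuild the TensorRT engine in the same environment that will run it,
# so the serialized engine matches that environment's TensorRT build.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
ONNX_PATH = "model.onnx"      # placeholder input model
ENGINE_PATH = "model.engine"  # placeholder output engine

# Report which L4T release this rebuild runs on (file may not exist in every container).
try:
    with open("/etc/nv_tegra_release") as f:
        print("L4T:", f.readline().strip())
except FileNotFoundError:
    print("L4T release file not found")

builder = trt.Builder(TRT_LOGGER)
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, TRT_LOGGER)

with open(ONNX_PATH, "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parse failed")

config = builder.create_builder_config()
config.max_workspace_size = 1 << 28  # 256 MiB workspace

engine = builder.build_engine(network, config)  # TensorRT 7.x API
assert engine is not None, "engine build failed"
with open(ENGINE_PATH, "wb") as f:
    f.write(engine.serialize())
print("Wrote", ENGINE_PATH)
```

The point of the reply above is that a serialized engine is tied to the exact TensorRT build that produced it, so building the engine inside the same container image (and same L4T release) that runs the app avoids the verifyHeader version-tag mismatch.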



rose-jinyang commented 4 years ago

Thanks