dusty-nv / jetson-inference

Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
https://developer.nvidia.com/embedded/twodaystoademo
MIT License

‘Dims3’ in namespace ‘nvinfer1’ does not name a type!! #98

Closed. ShervinAr closed this issue 6 years ago.

ShervinAr commented 7 years ago

Hello, I got the following error while trying to compile the jetson-inference code:

/home/shervin/Desktop/jetson-inference/tensorNet.h:159:12: error: ‘Dims3’ in namespace ‘nvinfer1’ does not name a type
  nvinfer1::Dims3 mInputDims;

I would be thankful if you could provide some comments on how to resolve the issue.

dusty-nv commented 7 years ago

Which JetPack version are you using? Is this compiling onboard the Jetson?

ShervinAr commented 7 years ago

@dusty-nv Thanks for your response. It is JetPack 3.0 with TensorRT 2.1. I am compiling the code on my host machine (Ubuntu 16.04), not on the Jetson board.

Is it possible that "Dims3" was renamed in the new TensorRT release?

dusty-nv commented 7 years ago

Shervin, Jetson does not support TensorRT 2.1 yet. It will be available in an upcoming release, at which time the GitHub repo will be fixed too.

ShervinAr commented 7 years ago

@dusty-nv Dustin, so I should switch back to TensorRT 1.0?

dusty-nv commented 7 years ago

For the time being, yes.

ShervinAr commented 7 years ago

@dusty-nv Many thanks, but it seems TensorRT 1.0 has no version available for Ubuntu 16.04 hosts! Is it possible to install and run TensorRT 1.0 (which only supports Ubuntu 14.04 on a host machine) on an Ubuntu 16.04 host?

dusty-nv commented 7 years ago

You don't need TensorRT on the host; it compiles and runs onboard the Jetson.

dusty-nv commented 7 years ago

Also, I bet that if you tried the 14.04 version on 16.04, it might still work.

ShervinAr commented 7 years ago

@dusty-nv I would like to compile the code on the host too. Is there any way to do that with the code in the GitHub repo?

I was able to do that previously, when I used an Ubuntu 14.04 host and the previous release of the repo you had provided.

ShervinAr commented 7 years ago

@dusty-nv Now I have switched back to TensorRT 1.0 on the host (Ubuntu 16.04) and compiled the code there, but got errors again:

/home/shervin/Desktop/jetson-inference/tensorNet.h:159:12: error: ‘Dims3’ in namespace ‘nvinfer1’ does not name a type
  nvinfer1::Dims3 mInputDims;

dusty-nv commented 7 years ago

Some people have gotten it to work on desktop with minor changes, but I don't officially support it, because the gstreamer/camera code also differs between Jetson and desktop.

It looks like you need to dig into the NvInfer.h header on the desktop and see how its Dims3 declaration differs from the Jetson version.
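
For context, a rough sketch of how the declarations differ between major versions (paraphrased, not copied from the headers; verify against your local NvInfer.h):

// TensorRT 1.x (GIE): Dims3 is a simple struct with named members,
// accessed as dims.c, dims.h, dims.w.
// TensorRT 2.x: Dims3 no longer exists; typed classes such as DimsCHW
// (built on the generic Dims base, with a d[] array) take its place.
nvinfer1::DimsCHW dims(3, 224, 224);
int channels = dims.c();   // equivalently dims.d[0]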

aprentis commented 7 years ago

@ShervinAr Have you solved this issue? I'm having the same problem.

ShervinAr commented 7 years ago

@aprentis I have switched back to an earlier version of JetPack and the errors are gone.

milinddeore commented 7 years ago

Today I upgraded to JetPack 3.1 on my Jetson TX2 and got the same issue, in fact quite a few other errors:

Earlier the same codebase was compiling and working fine on JetPack 3.0. I guess this is due to GIE (TensorRT) updates.

Is there a quick fix that can unblock me? Thanks in advance.

[ 35%] Building CXX object CMakeFiles/jetson-inference.dir/tensorNet.cpp.o
In file included from /home/nvidia/boxer/jetson-inference/tensorNet.cpp:5:0:
/home/nvidia/boxer/jetson-inference/tensorNet.h:159:12: error: ‘Dims3’ in namespace ‘nvinfer1’ does not name a type
  nvinfer1::Dims3 mInputDims;
            ^
/home/nvidia/boxer/jetson-inference/tensorNet.h:164:13: error: ‘Dims3’ in namespace ‘nvinfer1’ does not name a type
   nvinfer1::Dims3 dims;
             ^
/home/nvidia/boxer/jetson-inference/tensorNet.cpp: In constructor ‘tensorNet::tensorNet()’:
/home/nvidia/boxer/jetson-inference/tensorNet.cpp:32:10: error: ‘mInputDims’ was not declared in this scope
  memset(&mInputDims, 0, sizeof(nvinfer1::Dims3));
          ^
/home/nvidia/boxer/jetson-inference/tensorNet.cpp:32:32: error: ‘Dims3’ is not a member of ‘nvinfer1’
  memset(&mInputDims, 0, sizeof(nvinfer1::Dims3));
                                ^
/home/nvidia/boxer/jetson-inference/tensorNet.cpp: In member function ‘bool tensorNet::ProfileModel(const string&, const string&, const std::vector<std::__cxx11::basic_string<char> >&, unsigned int, std::ostream&)’:
/home/nvidia/boxer/jetson-inference/tensorNet.cpp:85:58: error: ‘createInferBuilder’ was not declared in this scope
  nvinfer1::IBuilder* builder = createInferBuilder(gLogger);
                                                          ^
/home/nvidia/boxer/jetson-inference/tensorNet.cpp:85:58: note: suggested alternatives:
In file included from /home/nvidia/boxer/jetson-inference/tensorNet.h:9:0,
                 from /home/nvidia/boxer/jetson-inference/tensorNet.cpp:5:
/usr/include/aarch64-linux-gnu/NvInfer.h:2742:18: note:   ‘nvinfer1::{anonymous}::createInferBuilder’
 inline IBuilder* createInferBuilder(ILogger& logger)
                  ^
/usr/include/aarch64-linux-gnu/NvInfer.h:2742:18: note:   ‘nvinfer1::{anonymous}::createInferBuilder’
/home/nvidia/boxer/jetson-inference/tensorNet.cpp:153:34: error: no matching function for call to ‘nvinfer1::ICudaEngine::serialize(std::ostream&)’
  engine->serialize(gieModelStream);
                                  ^
In file included from /home/nvidia/boxer/jetson-inference/tensorNet.h:9:0,
                 from /home/nvidia/boxer/jetson-inference/tensorNet.cpp:5:
/usr/include/aarch64-linux-gnu/NvInfer.h:2322:23: note: candidate: virtual nvinfer1::IHostMemory* nvinfer1::ICudaEngine::serialize() const
  virtual IHostMemory* serialize() const = 0;
                       ^
/usr/include/aarch64-linux-gnu/NvInfer.h:2322:23: note:   candidate expects 0 arguments, 1 provided
/home/nvidia/boxer/jetson-inference/tensorNet.cpp: In member function ‘bool tensorNet::LoadNetwork(const char*, const char*, const char*, const char*, const std::vector<std::__cxx11::basic_string<char> >&, uint32_t)’:
/home/nvidia/boxer/jetson-inference/tensorNet.cpp:217:59: error: ‘createInferBuilder’ was not declared in this scope
   nvinfer1::IBuilder* builder = createInferBuilder(gLogger);
                                                           ^
/home/nvidia/boxer/jetson-inference/tensorNet.cpp:217:59: note: suggested alternatives:
In file included from /home/nvidia/boxer/jetson-inference/tensorNet.h:9:0,
                 from /home/nvidia/boxer/jetson-inference/tensorNet.cpp:5:
/usr/include/aarch64-linux-gnu/NvInfer.h:2742:18: note:   ‘nvinfer1::{anonymous}::createInferBuilder’
 inline IBuilder* createInferBuilder(ILogger& logger)
                  ^
/usr/include/aarch64-linux-gnu/NvInfer.h:2742:18: note:   ‘nvinfer1::{anonymous}::createInferBuilder’
/home/nvidia/boxer/jetson-inference/tensorNet.cpp:234:56: error: ‘createInferRuntime’ was not declared in this scope
  nvinfer1::IRuntime* infer = createInferRuntime(gLogger);
                                                        ^
/home/nvidia/boxer/jetson-inference/tensorNet.cpp:234:56: note: suggested alternatives:
In file included from /home/nvidia/boxer/jetson-inference/tensorNet.h:9:0,
                 from /home/nvidia/boxer/jetson-inference/tensorNet.cpp:5:
/usr/include/aarch64-linux-gnu/NvInfer.h:2755:18: note:   ‘nvinfer1::{anonymous}::createInferRuntime’
 inline IRuntime* createInferRuntime(ILogger& logger)
                  ^
/usr/include/aarch64-linux-gnu/NvInfer.h:2755:18: note:   ‘nvinfer1::{anonymous}::createInferRuntime’
/home/nvidia/boxer/jetson-inference/tensorNet.cpp:242:77: error: no matching function for call to ‘nvinfer1::IRuntime::deserializeCudaEngine(std::stringstream&)’
  nvinfer1::ICudaEngine* engine = infer->deserializeCudaEngine(gieModelStream);
                                                                             ^
In file included from /home/nvidia/boxer/jetson-inference/tensorNet.h:9:0,
                 from /home/nvidia/boxer/jetson-inference/tensorNet.cpp:5:
/usr/include/aarch64-linux-gnu/NvInfer.h:2667:33: note: candidate: virtual nvinfer1::ICudaEngine* nvinfer1::IRuntime::deserializeCudaEngine(const void*, std::size_t, nvinfer1::IPluginFactory*)
  virtual nvinfer1::ICudaEngine* deserializeCudaEngine(const void *blob, std::size_t size, IPluginFactory* pluginF
                                 ^
/usr/include/aarch64-linux-gnu/NvInfer.h:2667:33: note:   candidate expects 3 arguments, 1 provided
/home/nvidia/boxer/jetson-inference/tensorNet.cpp:281:2: error: ‘Dims3’ is not a member of ‘nvinfer1’
  nvinfer1::Dims3 inputDims  = engine->getBindingDimensions(inputIndex);
  ^
/home/nvidia/boxer/jetson-inference/tensorNet.cpp:282:37: error: ‘inputDims’ was not declared in this scope
  size_t inputSize  = maxBatchSize * inputDims.c * inputDims.h * inputDims.w * sizeof(float);
                                     ^
/home/nvidia/boxer/jetson-inference/tensorNet.cpp:309:3: error: ‘Dims3’ is not a member of ‘nvinfer1’
   nvinfer1::Dims3 outputDims = engine->getBindingDimensions(outputIndex);
   ^
/home/nvidia/boxer/jetson-inference/tensorNet.cpp:310:38: error: ‘outputDims’ was not declared in this scope
   size_t outputSize = maxBatchSize * outputDims.c * outputDims.h * outputDims.w * sizeof(float);
                                      ^
/home/nvidia/boxer/jetson-inference/tensorNet.cpp:328:5: error: ‘struct tensorNet::outputLayer’ has no member named ‘dims’
   l.dims = outputDims;
     ^
/home/nvidia/boxer/jetson-inference/tensorNet.cpp:335:2: error: ‘mInputDims’ was not declared in this scope
  mInputDims      = inputDims;
  ^
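
The errors above all stem from the TensorRT 1 to 2 API break: Dims3 is gone, and the engine serialization calls changed shape. A minimal sketch of the newer calls, inferred from the candidate signatures shown in the log (an assumed illustration, not the repo's actual code):

#include <NvInfer.h>
#include <fstream>
#include <vector>

// TensorRT 2.x: serialize() returns an IHostMemory blob rather than
// writing into a std::ostream.
void saveEngine( nvinfer1::ICudaEngine* engine, const char* path )
{
    nvinfer1::IHostMemory* blob = engine->serialize();
    std::ofstream file(path, std::ios::binary);
    file.write((const char*)blob->data(), blob->size());
    blob->destroy();
}

// TensorRT 2.x: deserializeCudaEngine() takes a raw pointer and size
// (plus an optional IPluginFactory) rather than a std::stringstream.
nvinfer1::ICudaEngine* loadEngine( nvinfer1::IRuntime* runtime,
                                   const std::vector<char>& blob )
{
    return runtime->deserializeCudaEngine(blob.data(), blob.size(), NULL);
}
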
AndrewVGr commented 7 years ago

> Today I upgraded to JetPack 3.1 on my Jetson TX2 and got the same issue, in fact quite a few other errors. Earlier the same codebase was compiling and working fine on JetPack 3.0. I guess this is due to GIE (TensorRT) updates.

I've got the same situation.

It looks like some changes in the update broke jetson-inference...

dusty-nv commented 7 years ago

Hi guys, now that JetPack 3.1 is public, give me a day or two and I will update the repo. I'm trying to make it backwards compatible with the previous version (i.e. with #ifdefs), but we'll see.

milinddeore commented 7 years ago

@dusty-nv You have always been very helpful. Thanks for all the kind help.

Looking forward to your updated repo.

clement-masson commented 7 years ago

@dusty-nv Thanks! I'm also stuck with JetPack 3.1 on a TX2!

shaktidhar commented 7 years ago

In the same boat: JetPack 3.1, TX1.

dusty-nv commented 7 years ago

Hi guys, support for TensorRT 2 is now working in master with commit e40bd6.

For backwards compatibility, it still builds for previous JetPacks that are on TensorRT 1, using some macros I added. There were no outward-facing changes to the vision primitive APIs themselves.
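
A minimal sketch of the kind of version macros this refers to (the actual commit may differ in detail; NV_TENSORRT_MAJOR is assumed to come from the TensorRT 2.x headers and to be undefined, hence 0, under TensorRT 1.x):

#if NV_TENSORRT_MAJOR > 1
// TensorRT 2.x: alias the typed DimsCHW in place of the removed Dims3,
// and access dimensions through its d[] array.
typedef nvinfer1::DimsCHW Dims3;
#define DIMS_C(x) x.d[0]
#define DIMS_H(x) x.d[1]
#define DIMS_W(x) x.d[2]
#else
// TensorRT 1.x: keep the original struct with c/h/w members.
typedef nvinfer1::Dims3 Dims3;
#define DIMS_C(x) x.c
#define DIMS_H(x) x.h
#define DIMS_W(x) x.w
#endif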

kent-anderson commented 7 years ago

Missed this in my pull by minutes.
Thanks Dusty, this compile went cleanly.

Latest JetPack and a TX2, as I just started this morning.

Kent

clement-masson commented 7 years ago

@dusty-nv thanks. No problem compiling now.

However, when I run imagenet-console specifying a custom net (which worked under TX1/TensorRT 1 with jetson-inference):

~/jetson-inference_trt-2.1/build/aarch64/bin/imagenet-console $input output.jpg \
--prototxt=$caffeModel/prototxt \
--model=$caffeModel/caffemodel \
--labels=$caffeModel/labels.txt \
--input_blob=data \
--output_blob=$(cat $caffeModel/output_blob.txt) \
--batch_size=2

I get the following error:

imagenet-console: cudnnEngine.cpp:605: bool nvinfer1::cudnn::Engine::deserialize(const void*, std::size_t, nvinfer1::IPluginFactory*): Assertion `size >= bsize && "Mismatch between allocated memory size and expected size of serialized engine."' failed.

Do you know if the problem comes from TensorRT or from jetson-inference?

Thanks a lot!

shaktidhar commented 7 years ago

In my case, compilation works and I can run imagenet-console... But running imagenet-camera runs into an error:

ubuntu@tegra-ubuntu:~/jetson-inference/build/aarch64/bin$ ./imagenet-camera googlenet
imagenet-camera
  args (2):  0 [./imagenet-camera]  1 [googlenet]

[gstreamer] initialized gstreamer, version 1.8.3.0
[gstreamer] gstreamer decoder pipeline string:
nvcamerasrc fpsRange="30.0 30.0" ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)NV12 ! nvvidconv flip-method=2 ! video/x-raw ! appsink name=mysink

imagenet-camera: successfully initialized video device
  width: 1280  height: 720  depth: 12 (bpp)

imageNet -- loading classification network model from:
  -- prototxt     networks/googlenet.prototxt
  -- model        networks/bvlc_googlenet.caffemodel
  -- class_labels networks/ilsvrc12_synset_words.txt
  -- input_blob   'data'
  -- output_blob  'prob'
  -- batch_size   2

[GIE] attempting to open cache file networks/bvlc_googlenet.caffemodel.2.tensorcache
[GIE] loading network profile from cache... networks/bvlc_googlenet.caffemodel.2.tensorcache
[GIE] platform has FP16 support.
[GIE] networks/bvlc_googlenet.caffemodel loaded
[GIE] CUDA engine context initialized with 2 bindings
[GIE] networks/bvlc_googlenet.caffemodel input binding index: 0
[GIE] networks/bvlc_googlenet.caffemodel input dims (b=2 c=3 h=224 w=224) size=1204224
[cuda] cudaAllocMapped 1204224 bytes, CPU 0x100ce0000 GPU 0x100ce0000
[GIE] networks/bvlc_googlenet.caffemodel output 0 prob binding index: 1
[GIE] networks/bvlc_googlenet.caffemodel output 0 prob dims (b=2 c=1000 h=1 w=1) size=8000
[cuda] cudaAllocMapped 8000 bytes, CPU 0x100e20000 GPU 0x100e20000
networks/bvlc_googlenet.caffemodel initialized.
[GIE] networks/bvlc_googlenet.caffemodel loaded
imageNet -- loaded 1000 class info entries
networks/bvlc_googlenet.caffemodel initialized.
default X screen 0: 1920 x 1080
[OpenGL] glDisplay display window initialized
[OpenGL] creating 1280x720 texture
loaded image fontmapA.png (256 x 512) 2097152 bytes
[cuda] cudaAllocMapped 2097152 bytes, CPU 0x100f20000 GPU 0x100f20000
[cuda] cudaAllocMapped 8192 bytes, CPU 0x100e22000 GPU 0x100e22000
[gstreamer] gstreamer transitioning pipeline to GST_STATE_PLAYING
Socket read error. Camera Daemon stopped functioning.....
gst_nvcamera_open() failed ret=0
[gstreamer] gstreamer failed to set pipeline state to PLAYING (error 0)

imagenet-camera: failed to open camera for streaming

dusty-nv commented 7 years ago

Hmm, the camera works over here; it looks like your gstreamer CSI camera is having difficulty opening the stream. Can you recompile the program with a different USE_CAMERA define near the top of imagenet-camera (>= 0 for a V4L2 USB webcam)? Also, does imagenet-console work for you?
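
Illustrative only (the exact identifier and values come from the imagenet-camera source in your checkout):

// Near the top of imagenet-camera.cpp -- hypothetical values.
// Per the comment above, >= 0 selects a V4L2 USB webcam at /dev/video<N>
// instead of the onboard CSI camera.
#define USE_CAMERA 0    // e.g. 0 -> /dev/video0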

shaktidhar commented 7 years ago

@dusty-nv
Yes, the imagenet-console works fine.

clement-masson commented 7 years ago

@dusty-nv Never mind. My problem was due to a pre-existing .tensorcache left over from TensorRT v1! After deleting the stale cache file, imagenet-console works smoothly with the provided networks.

nicks165 commented 7 years ago

I am not able to run it on an NVIDIA discrete GPU and face a similar issue. When I run on a 1060, I get:

error: ‘Dims3’ in namespace ‘nvinfer1’ does not name a type
  nvinfer1::Dims3 dims;

I am trying to build a YOLO network on TensorRT.

dusty-nv commented 6 years ago

Hi, the repo is verified against TensorRT 1.0, 2.1, and 3.0 RC using JetPack on TX1/TX2.

You may need to make changes to the includes or paths in CMakeLists.txt to get it building for desktop.