dusty-nv / jetson-inference

Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
https://developer.nvidia.com/embedded/twodaystoademo
MIT License
7.77k stars · 2.97k forks

DetectNet crashing Jetson TX1 on JetPack 2.3.1, 3.0, 3.1 #141

Closed raaka1 closed 7 years ago

raaka1 commented 7 years ago

Dear all,

DetectNet is crashing the Jetson TX1 module; I have tried JetPack 2.3.1, 3.0, and 3.1.

ImageNet is working fine, though.

```
./detectnet-console drone_0427.png result.png coco-airplane
```

```
detectnet-console args (4):  0 [./detectnet-console]  1 [drone_0427.png]  2 [result.png]  3 [coco-airplane]

detectNet -- loading detection network model from:
          -- prototxt    networks/DetectNet-COCO-Airplane/deploy.prototxt
          -- model       networks/DetectNet-COCO-Airplane/snapshot_iter_22500.caffemodel
          -- input_blob  'data'
          -- output_cvg  'coverage'
          -- output_bbox 'bboxes'
          -- mean_pixel  0.000000
          -- threshold   0.500000
          -- batch_size  2

[GIE]  attempting to open cache file networks/DetectNet-COCO-Airplane/snapshot_iter_22500.caffemodel.2.tensorcache
[GIE]  cache file not found, profiling network model
[GIE]  platform has FP16 support.
[GIE]  loading networks/DetectNet-COCO-Airplane/deploy.prototxt networks/DetectNet-COCO-Airplane/snapshot_iter_22500.caffemodel
[GIE]  retrieved output tensor 'coverage'
[GIE]  retrieved output tensor 'bboxes'
[GIE]  configuring CUDA engine
[GIE]  building CUDA engine
```

Can anyone confirm this? I have tried 4 different modules.

dusty-nv commented 7 years ago

Hi Ravi, can you provide the command line you are running? Thanks.


raaka1 commented 7 years ago

Hi Dustin, I have tried 9 modules in total so far; it crashes on all of them.

```
./detectnet-console drone_0427.png result.png coco-airplane
```

dusty-nv commented 7 years ago

Hi Ravi, I was able to run that command without incident on TX1/TX2. Where is it crashing?


dusty-nv commented 7 years ago

BTW you may want to try re-downloading models if you haven’t already...

From: Dustin Franklin Sent: Friday, September 15, 2017 10:00 AM To: 'dusty-nv/jetson-inference' reply@reply.github.com; dusty-nv/jetson-inference jetson-inference@noreply.github.com Cc: Comment comment@noreply.github.com Subject: RE: [dusty-nv/jetson-inference] DetectNet crashing jetson TX1 on jepack 2.3.1, 3.0, 3.1 (#141)

Hi Ravi, I was able to run that command without incident on TX1/TX2. Where is it crashing?

From: Ravi Kiran [mailto:notifications@github.com] Sent: Friday, September 15, 2017 9:52 AM To: dusty-nv/jetson-inference jetson-inference@noreply.github.com<mailto:jetson-inference@noreply.github.com> Cc: Dustin Franklin dustinf@nvidia.com<mailto:dustinf@nvidia.com>; Comment comment@noreply.github.com<mailto:comment@noreply.github.com> Subject: Re: [dusty-nv/jetson-inference] DetectNet crashing jetson TX1 on jepack 2.3.1, 3.0, 3.1 (#141)

Hi Dustin, I have totally tried on 9 modules so far it crashes.

./detectnet-console drone_0427.png result.png coco-airplane

— You are receiving this because you commented. Reply to this email directly, view it on GitHubhttps://github.com/dusty-nv/jetson-inference/issues/141#issuecomment-329788767, or mute the threadhttps://github.com/notifications/unsubscribe-auth/AOpDKxm43P1Zphpzquga_1tv5ZAch97Jks5sioDogaJpZM4PYwNw.


This email message is for the sole use of the intended recipient(s) and may contain confidential information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message.

raaka1 commented 7 years ago

Hi Dustin,

I have tried that, although we train our own models and they work fine with ImageNet. BTW, we develop our own custom carrier boards. This is the first time we are experiencing this problem.

```
-rw-rw-r-- 1 nvidia nvidia      3629 Sep 14 10:06 alexnet.prototxt
-rw-rw-r-- 1 nvidia nvidia 243862414 Sep 14 10:06 bvlc_alexnet.caffemodel
-rw-rw-r-- 1 nvidia nvidia  53533754 Sep 14 10:07 bvlc_googlenet.caffemodel
-rw-rw-r-- 1 nvidia nvidia  14148724 Sep 15 10:02 bvlc_googlenet.caffemodel.2.tensorcache
drwxrwxr-x 2 nvidia nvidia      4096 May  6 14:26 DetectNet-COCO-Airplane
drwxrwxr-x 2 nvidia nvidia      4096 May  5 13:29 DetectNet-COCO-Bottle
drwxrwxr-x 2 nvidia nvidia      4096 May  5 13:29 DetectNet-COCO-Chair
drwxrwxr-x 2 nvidia nvidia      4096 May  5 12:57 DetectNet-COCO-Dog
-rw-rw-r-- 1 nvidia nvidia     42924 Sep 14 09:56 detectnet.prototxt
drwxrwxr-x 2 nvidia nvidia      4096 Sep 26  2016 facenet-120
drwxrwxr-x 2 nvidia nvidia      4096 Apr 11 11:56 FCN-Alexnet-Aerial-FPV-720p
drwxrwxr-x 2 nvidia nvidia      4096 Nov 30  2016 FCN-Alexnet-Cityscapes-HD
drwxrwxr-x 2 nvidia nvidia      4096 Apr 11 17:03 FCN-Alexnet-Pascal-VOC
-rw-rw-r-- 1 nvidia nvidia  38223495 Sep 14 10:08 GoogleNet-ILSVRC12-subset.tar
-rw-rw-r-- 1 nvidia nvidia     35861 Sep 14 10:07 googlenet.prototxt
-rw-rw-r-- 1 nvidia nvidia     31675 Sep 14 09:56 ilsvrc12_synset_words.txt
drwxrwxr-x 2 nvidia nvidia      4096 Sep 24  2016 multiped-500
drwxrwxr-x 2 nvidia nvidia      4096 Sep 22  2016 ped-100
```

```
./detectnet-console drone_0427.png result.png coco-airplane
detectnet-console args (4):  0 [./detectnet-console]  1 [drone_0427.png]  2 [result.png]  3 [coco-airplane]

detectNet -- loading detection network model from:
          -- prototxt    networks/DetectNet-COCO-Airplane/deploy.prototxt
          -- model       networks/DetectNet-COCO-Airplane/snapshot_iter_22500.caffemodel
          -- input_blob  'data'
          -- output_cvg  'coverage'
          -- output_bbox 'bboxes'
          -- mean_pixel  0.000000
          -- threshold   0.500000
          -- batch_size  2

[GIE]  attempting to open cache file networks/DetectNet-COCO-Airplane/snapshot_iter_22500.caffemodel.2.tensorcache
[GIE]  cache file not found, profiling network model
[GIE]  platform has FP16 support.
[GIE]  loading networks/DetectNet-COCO-Airplane/deploy.prototxt networks/DetectNet-COCO-Airplane/snapshot_iter_22500.caffemodel
[GIE]  retrieved output tensor 'coverage'
[GIE]  retrieved output tensor 'bboxes'
[GIE]  configuring CUDA engine
[GIE]  building CUDA engine
```

dusty-nv commented 7 years ago

OK, can you run the other DetectNet models? What about imagenet-console or segnet-console?
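That per-model check can be scripted. A minimal sketch, assuming the default build layout and a test image already in the binary directory (the output filenames here are made up for illustration):

```shell
# Try each stock DetectNet model in turn to see whether the hang is
# specific to one network or hits every engine build.
cd ~/jetson-inference/build/aarch64/bin

for net in coco-airplane coco-bottle coco-chair coco-dog; do
    echo "=== testing $net ==="
    ./detectnet-console drone_0427.png "out_${net}.png" "$net" \
        || echo "$net failed or crashed"
done
```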

If it crashes during the ‘building CUDA engine’ stage, the failure is inside TensorRT, so the model could be corrupted. Try removing the build directory, re-creating it, and running cmake again; it will re-download the models.
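A sketch of that clean rebuild, assuming the repository was cloned into the home directory (adjust the path for your checkout):

```shell
# Wipe the build tree and reconfigure; the cmake configure step
# re-downloads the pretrained models, as noted above.
cd ~/jetson-inference
rm -rf build
mkdir build && cd build
cmake ../
make -j"$(nproc)"
```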


raaka1 commented 7 years ago

It might be a filesystem error; I will confirm once we test. coco-dog also crashed:

```
nvidia@tegra-ubuntu:~/Desktop/jetson-inference/aarch64/bin$ ./detectnet-console dog_1.jpg output_1.jpg coco-dog
detectnet-console args (4):  0 [./detectnet-console]  1 [dog_1.jpg]  2 [output_1.jpg]  3 [coco-dog]

detectNet -- loading detection network model from:
          -- prototxt    networks/DetectNet-COCO-Dog/deploy.prototxt
          -- model       networks/DetectNet-COCO-Dog/snapshot_iter_38600.caffemodel
          -- input_blob  'data'
          -- output_cvg  'coverage'
          -- output_bbox 'bboxes'
          -- mean_pixel  0.000000
          -- threshold   0.500000
          -- batch_size  2

[GIE]  attempting to open cache file networks/DetectNet-COCO-Dog/snapshot_iter_38600.caffemodel.2.tensorcache
[GIE]  cache file not found, profiling network model
[GIE]  platform has FP16 support.
[GIE]  loading networks/DetectNet-COCO-Dog/deploy.prototxt networks/DetectNet-COCO-Dog/snapshot_iter_38600.caffemodel
[GIE]  retrieved output tensor 'coverage'
[GIE]  retrieved output tensor 'bboxes'
[GIE]  configuring CUDA engine
[GIE]  building CUDA engine
```

raaka1 commented 7 years ago

ImageNet output:

```
nvidia@tegra-ubuntu:~/Desktop/jetson-inference/aarch64/bin$ ./imagenet-console granny_smith_0.jpg result
imagenet-console args (3):  0 [./imagenet-console]  1 [granny_smith_0.jpg]  2 [result]

imageNet -- loading classification network model from:
         -- prototxt     networks/googlenet.prototxt
         -- model        networks/bvlc_googlenet.caffemodel
         -- class_labels networks/ilsvrc12_synset_words.txt
         -- input_blob   'data'
         -- output_blob  'prob'
         -- batch_size   2

[GIE]  attempting to open cache file networks/bvlc_googlenet.caffemodel.2.tensorcache
[GIE]  cache file not found, profiling network model
[GIE]  platform has FP16 support.
[GIE]  loading networks/googlenet.prototxt networks/bvlc_googlenet.caffemodel
[GIE]  retrieved output tensor 'prob'
[GIE]  configuring CUDA engine
[GIE]  building CUDA engine
[GIE]  completed building CUDA engine
[GIE]  network profiling complete, writing cache to networks/bvlc_googlenet.caffemodel.2.tensorcache
[GIE]  completed writing cache to networks/bvlc_googlenet.caffemodel.2.tensorcache
[GIE]  networks/bvlc_googlenet.caffemodel loaded
[GIE]  CUDA engine context initialized with 2 bindings
[GIE]  networks/bvlc_googlenet.caffemodel input binding index: 0
[GIE]  networks/bvlc_googlenet.caffemodel input dims (b=2 c=3 h=224 w=224) size=1204224
[cuda]  cudaAllocMapped 1204224 bytes, CPU 0x100ce0000 GPU 0x100ce0000
[GIE]  networks/bvlc_googlenet.caffemodel output 0 prob binding index: 1
[GIE]  networks/bvlc_googlenet.caffemodel output 0 prob dims (b=2 c=1000 h=1 w=1) size=8000
[cuda]  cudaAllocMapped 8000 bytes, CPU 0x100e20000 GPU 0x100e20000
networks/bvlc_googlenet.caffemodel initialized.
[GIE]  networks/bvlc_googlenet.caffemodel loaded
imageNet -- loaded 1000 class info entries
networks/bvlc_googlenet.caffemodel initialized.
loaded image granny_smith_0.jpg (400 x 400) 2560000 bytes
[cuda]  cudaAllocMapped 2560000 bytes, CPU 0x100f20000 GPU 0x100f20000
[GIE]  layer conv1/7x7_s2 + conv1/relu_7x7 input reformatter 0 - 1.212136 ms
[GIE]  layer conv1/7x7_s2 + conv1/relu_7x7 - 6.156458 ms
[GIE]  layer pool1/3x3_s2 - 1.308177 ms
[GIE]  layer pool1/norm1 - 0.408281 ms
[GIE]  layer conv2/3x3_reduce + conv2/relu_3x3_reduce - 0.730313 ms
[GIE]  layer conv2/3x3 + conv2/relu_3x3 - 8.533854 ms
[GIE]  layer conv2/norm2 - 1.090625 ms
[GIE]  layer pool2/3x3_s2 - 0.813073 ms
[GIE]  layer inception_3a/1x1 + inception_3a/relu_1x1 || inception_3a/3x3_reduce + inception_3a/relu_3x3_reduce || inception_3a/5x5_reduce + inception_3a/relu_5x5_reduce - 1.304167 ms
[GIE]  layer inception_3a/3x3 + inception_3a/relu_3x3 - 3.426562 ms
[GIE]  layer inception_3a/5x5 + inception_3a/relu_5x5 - 0.634375 ms
[GIE]  layer inception_3a/pool - 0.451511 ms
[GIE]  layer inception_3a/pool_proj + inception_3a/relu_pool_proj - 0.532447 ms
[GIE]  layer inception_3a/1x1 copy - 0.072240 ms
[GIE]  layer inception_3b/1x1 + inception_3b/relu_1x1 || inception_3b/3x3_reduce + inception_3b/relu_3x3_reduce || inception_3b/5x5_reduce + inception_3b/relu_5x5_reduce - 2.532188 ms
[GIE]  layer inception_3b/3x3 + inception_3b/relu_3x3 - 4.157552 ms
[GIE]  layer inception_3b/5x5 + inception_3b/relu_5x5 - 3.283906 ms
[GIE]  layer inception_3b/pool - 0.586875 ms
[GIE]  layer inception_3b/pool_proj + inception_3b/relu_pool_proj - 0.670312 ms
[GIE]  layer inception_3b/1x1 copy - 0.114115 ms
[GIE]  layer pool3/3x3_s2 - 0.618437 ms
[GIE]  layer inception_4a/1x1 + inception_4a/relu_1x1 || inception_4a/3x3_reduce + inception_4a/relu_3x3_reduce || inception_4a/5x5_reduce + inception_4a/relu_5x5_reduce - 1.383750 ms
[GIE]  layer inception_4a/3x3 + inception_4a/relu_3x3 - 1.018438 ms
[GIE]  layer inception_4a/5x5 + inception_4a/relu_5x5 - 0.417656 ms
[GIE]  layer inception_4a/pool - 0.304323 ms
[GIE]  layer inception_4a/pool_proj + inception_4a/relu_pool_proj - 0.463281 ms
[GIE]  layer inception_4a/1x1 copy - 0.062657 ms
[GIE]  layer inception_4b/1x1 + inception_4b/relu_1x1 || inception_4b/3x3_reduce + inception_4b/relu_3x3_reduce || inception_4b/5x5_reduce + inception_4b/relu_5x5_reduce - 1.456979 ms
[GIE]  layer inception_4b/3x3 + inception_4b/relu_3x3 - 1.147656 ms
[GIE]  layer inception_4b/5x5 + inception_4b/relu_5x5 - 0.556250 ms
[GIE]  layer inception_4b/pool - 0.327500 ms
[GIE]  layer inception_4b/pool_proj + inception_4b/relu_pool_proj - 0.483906 ms
[GIE]  layer inception_4b/1x1 copy - 0.055625 ms
[GIE]  layer inception_4c/1x1 + inception_4c/relu_1x1 || inception_4c/3x3_reduce + inception_4c/relu_3x3_reduce || inception_4c/5x5_reduce + inception_4c/relu_5x5_reduce - 1.453229 ms
[GIE]  layer inception_4c/3x3 + inception_4c/relu_3x3 - 1.448438 ms
[GIE]  layer inception_4c/5x5 + inception_4c/relu_5x5 - 0.556198 ms
[GIE]  layer inception_4c/pool - 0.328385 ms
[GIE]  layer inception_4c/pool_proj + inception_4c/relu_pool_proj - 0.483542 ms
[GIE]  layer inception_4c/1x1 copy - 0.051562 ms
[GIE]  layer inception_4d/1x1 + inception_4d/relu_1x1 || inception_4d/3x3_reduce + inception_4d/relu_3x3_reduce || inception_4d/5x5_reduce + inception_4d/relu_5x5_reduce - 1.459948 ms
[GIE]  layer inception_4d/3x3 + inception_4d/relu_3x3 - 1.778282 ms
[GIE]  layer inception_4d/5x5 + inception_4d/relu_5x5 - 0.681250 ms
[GIE]  layer inception_4d/pool - 0.328125 ms
[GIE]  layer inception_4d/pool_proj + inception_4d/relu_pool_proj - 0.483645 ms
[GIE]  layer inception_4d/1x1 copy - 0.047813 ms
[GIE]  layer inception_4e/1x1 + inception_4e/relu_1x1 || inception_4e/3x3_reduce + inception_4e/relu_3x3_reduce || inception_4e/5x5_reduce + inception_4e/relu_5x5_reduce - 2.202708 ms
[GIE]  layer inception_4e/3x3 + inception_4e/relu_3x3 - 3.754271 ms
[GIE]  layer inception_4e/5x5 + inception_4e/relu_5x5 - 0.966511 ms
[GIE]  layer inception_4e/pool - 0.329062 ms
[GIE]  layer inception_4e/pool_proj + inception_4e/relu_pool_proj - 0.668802 ms
[GIE]  layer inception_4e/1x1 copy - 0.071458 ms
[GIE]  layer pool4/3x3_s2 - 0.267188 ms
[GIE]  layer inception_5a/1x1 + inception_5a/relu_1x1 || inception_5a/3x3_reduce + inception_5a/relu_3x3_reduce || inception_5a/5x5_reduce + inception_5a/relu_5x5_reduce - 1.691615 ms
[GIE]  layer inception_5a/3x3 + inception_5a/relu_3x3 - 1.218906 ms
[GIE]  layer inception_5a/5x5 + inception_5a/relu_5x5 - 0.958125 ms
[GIE]  layer inception_5a/pool - 0.187812 ms
[GIE]  layer inception_5a/pool_proj + inception_5a/relu_pool_proj - 0.454115 ms
[GIE]  layer inception_5a/1x1 copy - 0.040781 ms
[GIE]  layer inception_5b/1x1 + inception_5b/relu_1x1 || inception_5b/3x3_reduce + inception_5b/relu_3x3_reduce || inception_5b/5x5_reduce + inception_5b/relu_5x5_reduce - 1.677604 ms
[GIE]  layer inception_5b/3x3 + inception_5b/relu_3x3 - 2.592552 ms
[GIE]  layer inception_5b/5x5 + inception_5b/relu_5x5 - 1.373282 ms
[GIE]  layer inception_5b/pool - 0.188229 ms
[GIE]  layer inception_5b/pool_proj + inception_5b/relu_pool_proj - 0.424844 ms
[GIE]  layer inception_5b/1x1 copy - 0.046145 ms
[GIE]  layer pool5/7x7_s1 - 0.724427 ms
[GIE]  layer loss3/classifier input reformatter 0 - 0.043855 ms
[GIE]  layer loss3/classifier - 0.850989 ms
[GIE]  layer loss3/classifier output reformatter 0 - 0.045000 ms
[GIE]  layer prob - 0.092813 ms
[GIE]  layer prob output reformatter 0 - 0.043073 ms
[GIE]  layer network time - 76.330208 ms
class 0948 - 0.999512  (Granny Smith)
imagenet-console: 'granny_smith_0.jpg' -> 99.95117% class #948 (Granny Smith)
loaded image fontmapA.png (256 x 512) 2097152 bytes
[cuda]  cudaAllocMapped 2097152 bytes, CPU 0x1012c0000 GPU 0x1012c0000
[cuda]  cudaAllocMapped 8192 bytes, CPU 0x100e22000 GPU 0x100e22000
imagenet-console: attempting to save output image to 'result'
failed to save 400x400 output image to result
imagenet-console: failed to save output image to 'result'

shutting down...
```
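Aside from the crash question: the failed save at the end of that log looks like it happens because the output argument `result` has no image extension, which the saver needs to pick an encoder (my reading of the error message, not confirmed against the source). Retrying with a suffixed filename should write the annotated image:

```shell
# Same command, but with a .jpg extension on the output path so the
# image writer knows which format to use.
./imagenet-console granny_smith_0.jpg result.jpg
```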

raaka1 commented 7 years ago

SegNet is crashing too. I will flash the hardware again with JetPack 3.0 and retry.

```
nvidia@tegra-ubuntu:~/Desktop/jetson-inference/aarch64/bin$ ./segnet-console drone_0428.png output_0428.png
segnet-console args (3):  0 [./segnet-console]  1 [drone_0428.png]  2 [output_0428.png]

segNet -- loading segmentation network model from:
       -- prototxt:   networks/FCN-Alexnet-Cityscapes-HD/deploy.prototxt
       -- model:      networks/FCN-Alexnet-Cityscapes-HD/snapshot_iter_367568.caffemodel
       -- labels:     networks/FCN-Alexnet-Cityscapes-HD/cityscapes-labels.txt
       -- colors:     networks/FCN-Alexnet-Cityscapes-HD/cityscapes-deploy-colors.txt
       -- input_blob  'data'
       -- output_blob 'score_fr_21classes'
       -- batch_size  2

[GIE]  attempting to open cache file networks/FCN-Alexnet-Cityscapes-HD/snapshot_iter_367568.caffemodel.2.tensorcache
[GIE]  cache file not found, profiling network model
[GIE]  platform has FP16 support.
[GIE]  loading networks/FCN-Alexnet-Cityscapes-HD/deploy.prototxt networks/FCN-Alexnet-Cityscapes-HD/snapshot_iter_367568.caffemodel
[GIE]  retrieved output tensor 'score_fr_21classes'
[GIE]  configuring CUDA engine
[GIE]  building CUDA engine
```

raaka1 commented 7 years ago

It seems to be an issue with our custom carrier board. I copied the tensorcache from a Jetson devkit, and it works fine now.

When generating the tensorcache on our custom carrier board, CPU usage goes to 100% on a single core and it crashes.

Thanks
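For anyone wanting to reproduce that workaround, a sketch of copying a devkit-built engine cache over to the custom board (the hostname and remote path are placeholders; note that a tensorcache is tied to the GPU and TensorRT version, so both modules should run the same JetPack):

```shell
# Run detectnet-console once on the devkit so the .tensorcache is
# generated, then copy it into the same networks/ directory on the
# custom carrier board ("custom-board" is a placeholder hostname).
scp networks/DetectNet-COCO-Airplane/snapshot_iter_22500.caffemodel.2.tensorcache \
    nvidia@custom-board:jetson-inference/build/aarch64/bin/networks/DetectNet-COCO-Airplane/
```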

dusty-nv commented 7 years ago

While generating the tensorcache, TensorRT runs active kernel profiling on the module. Does your carrier board supply enough current to the module for the GPU? Can it run samples like nbody?
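One way to run that nbody check, assuming the CUDA samples that JetPack installs (the samples path varies with the CUDA version, so treat it as a placeholder):

```shell
# Put sustained load on the GPU independently of TensorRT; if the board
# browns out here too, the problem is power delivery, not the model.
cd /usr/local/cuda/samples/5_Simulations/nbody
sudo make
./nbody -benchmark -numbodies=$((64*1024))   # 64K bodies
```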


raaka1 commented 7 years ago

Thanks Dusty, it is working with a higher-voltage power supply.

emiliol0pez commented 6 years ago

Same problem here. Jetson TX2 development kit with Jetpack 3.2 Developer Preview.

ImageNet works; DetectNet and SegNet don't. They get stuck at the "building CUDA engine" step.

raaka1 commented 6 years ago

Hi Emilio, use the original power supply from NVIDIA.

emiliol0pez commented 6 years ago

Hi Ravi,

Sorry, I forgot to mention that I'm using the Nvidia power supply for the development kit.