dusty-nv / jetson-inference

Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
https://developer.nvidia.com/embedded/twodaystoademo
MIT License

Running your own model on the Jetson TX1 #71

Closed curiouser001 closed 1 year ago

curiouser001 commented 7 years ago

I've trained a network with the help of NVIDIA DIGITS. When I downloaded the model, the folder contained the following:

deploy.prototxt info.json labels.txt mean.binaryproto original.prototxt snapshot_iter_630.caffemodel solver.prototxt train_val.prototxt

I pasted this folder into /jetson-inference/data/networks/. Then I opened imagenet-console.cpp and changed it to:

imageNet* net = imageNet::Create( "networks/samplemodel/deploy.prototxt", "networks/samplemodel/snapshot_iter_2430.caffemodel", "networks/samplemodel/mean.binaryproto", "networks/samplemodel/labels.txt" );

I saved and recompiled it. This is the result when I run the program:

/jetson-inference/build/x86_64/bin$ sudo ./imagenet-console orange_0.jpg o1.jpg
[sudo] password for monster: 
imagenet-console
  args (3):  0 [./imagenet-console]  1 [orange_0.jpg]  2 [o1.jpg]  

[GIE]  attempting to open cache file networks/samplemodel/snapshot_iter_2430.caffemodel.2.tensorcache
[GIE]  cache file not found, profiling network model
[GIE]  platform does not have FP16 support.
[GIE]  loading networks/samplemodel/deploy.prototxt networks/samplemodel/snapshot_iter_2430.caffemodel
[GIE]  failed to retrieve tensor for output 'prob'
[GIE]  configuring CUDA engine
[GIE]  building CUDA engine

I'd be very happy if you could help...

dusty-nv commented 7 years ago

Does the output layer of your prototxt have a different name than 'prob'? If so, you will need to specify the layer name. Check the imageNet header for the full Create() function parameters, or the reference docs here: https://rawgit.com/dusty-nv/jetson-inference/master/docs/html/index.html


curiouser001 commented 7 years ago

Thank you for your answer. In the deploy.prototxt file, the last layer:

layer { name: "softmax" type: "Softmax" bottom: "fc8" top: "softmax" }

I changed it to:

layer { name: "prob" type: "Softmax" bottom: "fc8" top: "softmax" }

/jetson-inference/build/x86_64/bin$ sudo ./imagenet-console orange_0.jpg o1.jpg imagenet-console args (3): 0 [./imagenet-console] 1 [orange_0.jpg] 2 [o1.jpg]

[GIE]  attempting to open cache file networks/samplemodel/snapshot_iter_630.caffemodel.2.tensorcache
[GIE]  cache file not found, profiling network model
[GIE]  platform does not have FP16 support.
[GIE]  loading networks/samplemodel/deploy.prototxt networks/samplemodel/snapshot_iter_630.caffemodel
[GIE]  failed to retrieve tensor for output 'prob'
[GIE]  configuring CUDA engine
[GIE]  building CUDA engine

But the same problem continues.

curiouser001 commented 7 years ago

Hi, I changed: imageNet* net = imageNet::Create( "networks/samplemodel/deploy.prototxt", "networks/samplemodel/snapshot_iter_2430.caffemodel","networks/samplemodel/mean.binaryproto","networks/samplemodel/labels.txt","data","softmax");

The error has changed.

/jetson-inference/build/x86_64/bin$ sudo ./imagenet-console orange_0.jpg o8.jpg
imagenet-console
  args (3):  0 [./imagenet-console]  1 [orange_0.jpg]  2 [o8.jpg]  

[GIE]  attempting to open cache file networks/samplemodel/snapshot_iter_630.caffemodel.2.tensorcache
[GIE]  loading network profile from cache... networks/samplemodel/snapshot_iter_630.caffemodel.2.tensorcache
[GIE]  platform does not have FP16 support.
[GIE]  networks/samplemodel/snapshot_iter_630.caffemodel loaded
[GIE]  CUDA engine context initialized with 2 bindings
[GIE]  networks/samplemodel/snapshot_iter_630.caffemodel input  binding index:  0
[GIE]  networks/samplemodel/snapshot_iter_630.caffemodel input  dims (b=2 c=3 h=227 w=227) size=1236696
[cuda]  cudaAllocMapped 1236696 bytes, CPU 0x10218000000 GPU 0x10218000000
[GIE]  networks/samplemodel/snapshot_iter_630.caffemodel output 0 softmax  binding index:  1
[GIE]  networks/samplemodel/snapshot_iter_630.caffemodel output 0 softmax  dims (b=2 c=2 h=1 w=1) size=16
[cuda]  cudaAllocMapped 16 bytes, CPU 0x10218200000 GPU 0x10218200000
networks/samplemodel/snapshot_iter_630.caffemodel initialized.
[GIE]  networks/samplemodel/snapshot_iter_630.caffemodel loaded
imageNet -- loaded 0 class info entries
imageNet -- failed to load synset class descriptions  (0 / 0 of 2)
imageNet -- failed to initialize.
imagenet-console:   failed to initialize imageNet

dusty-nv commented 7 years ago

That's because in your .caffemodel, the tensor blob is still named 'softmax'. You need to change the imagenet-console application to use 'softmax' instead of 'prob':

imageNet* net = imageNet::Create("my_prototxt.txt", "my.caffemodel", NULL, "my_labels.txt", "data", "softmax");
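For background on why renaming the layer alone didn't help: in a Caffe prototxt, the output tensor (blob) takes its name from the layer's top: field, while name: only labels the layer itself. A minimal illustration, based on the layer quoted earlier in this thread:

```
layer {
  name: "prob"     # layer name -- changing only this does not rename the blob
  type: "Softmax"
  bottom: "fc8"
  top: "softmax"   # <-- the blob (tensor) name that TensorRT actually looks up
}
```

So with only name: changed, the output blob is still called "softmax", which is why passing "softmax" as the output blob to Create() is what resolves the lookup error.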


dusty-nv commented 7 years ago

If you look at the bottom of the error log, it's failing to load the class labels. It looks like there should be 2 labels in there, but it isn't getting any…


curiouser001 commented 7 years ago

I do not understand the problem. I have retrained the network, but the same problem continues. There are 2 labels in labels.txt:

cat
dog

dusty-nv commented 7 years ago

I remember now why your labels probably aren’t loading correctly. By default, the labels are configured for imageNet, which includes a 9-character synset-ID prefix to each label. For example:

n01443537 goldfish, Carassius auratus

n01484850 great white shark, white shark, man-eater, man-eating shark, Carcharodon carcharias

Here n01443537 and n01484850 are the synset-IDs. These are discarded by the imagenet-console application, but need to be included in the file to parse correctly.

Can you modify your labels.txt file to the following:

n00000001 cat
n00000002 dog


curiouser001 commented 7 years ago

The program worked. Thank you very much, I am grateful to you...

dusty-nv commented 7 years ago

That’s good, thanks for your persistence and good luck!!


curiouser001 commented 7 years ago

I did it with your help. I thank you so much...

curiouser001 commented 7 years ago

Hi, I'm trying to run a DetectNet model that I trained on the Jetson TX1. This is the method I use:

  Create("networks/demo/deploy.prototxt", "networks/demo/snapshot_iter_5104.caffemodel", NULL, threshold, DETECTNET_DEFAULT_INPUT, DETECTNET_DEFAULT_COVERAGE, DETECTNET_DEFAULT_BBOX, maxBatchSize);

I get an error like this:


detectnet-camera
  args (2):  0 [./detectnet-camera]  1 [other]  

[gstreamer] initialized gstreamer, version 1.2.4.0
[gstreamer] gstreamer decoder pipeline string:
v4l2src device=/dev/video0 ! video/x-raw, width=(int)1280, height=(int)720, format=RGB ! videoconvert ! video/x-raw, format=RGB ! videoconvert !appsink name=mysink

detectnet-camera:  successfully initialized video device
    width:  1280
   height:  720
    depth:  24 (bpp)

detectNet -- loading segmentation network model from:
          -- prototxt:   networks/demo/deploy.prototxt
          -- model:      networks/demo/snapshot_iter_5104.caffemodel
          -- input_blob  'data'
          -- output_cvg  'coverage'
          -- output_bbox 'bboxes'
          -- threshold   0.500000
          -- batch_size  2

[GIE]  attempting to open cache file networks/demo/snapshot_iter_5104.caffemodel.2.tensorcache
[GIE]  cache file not found, profiling network model
[GIE]  platform does not have FP16 support.
[GIE]  loading networks/demo/deploy.prototxt networks/demo/snapshot_iter_5104.caffemodel
could not parse layer type Python
[GIE]  failed to parse caffe network
failed to load networks/demo/snapshot_iter_5104.caffemodel
detectNet -- failed to initialize.
detectnet-camera:   failed to initialize imageNet

Where am I making a mistake? I'd be very happy if you could help...

dusty-nv commented 7 years ago

Hi, you have to remove the Python layer(s) from the end of the network.
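For reference, the Python layer(s) in question are the post-processing layers DIGITS appends to the end of DetectNet's deploy.prototxt; TensorRT's caffe parser cannot handle type "Python", hence the "could not parse layer type Python" error. As an illustration (exact layer and module names may differ in your file), the block to delete looks roughly like this:

```
layer {
  name: "cluster"
  type: "Python"          # TensorRT's caffe parser cannot parse Python layers
  bottom: "coverage"
  bottom: "bboxes"
  python_param {
    module: "caffe.layers.detectnet.clustering"
    layer: "ClusterDetections"
  }
}
```

After removing it, the network's outputs are the 'coverage' and 'bboxes' blobs that detectNet already expects, as shown in the log above.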


curiouser001 commented 7 years ago

Thank you so much, it worked...

curiouser001 commented 7 years ago

Hi, is this the best way to get good detection results on the Jetson TX1?

juanluisrosaramos commented 6 years ago

I just want to add that I had the same labels problem, and the solution dusty-nv proposed of prefixing the labels worked perfectly for me:

n00000001 sad
n00000002 anger
n00000003 happy
etc...

My model is trained from a VGG16 network and used for face analysis

Thank you very much

AndreV84 commented 6 years ago

If my .prototxt has the content below, what should I specify in the command line: ./imagenet-console bottle_0.jpg out_b.jpg --prototxt=/path/name.prototxt --model=/path/name.caffemodel --labels=/path/labels.txt --input_blob=data --output_blob=softmax

name: "Oxford102_VGG16" input: "data" input_dim: 1 input_dim: 3 input_dim: 224 input_dim: 224 layers { bottom: "data" top: "conv1_1" name: "conv1_1" type: CONVOLUTION convolution_param { num_output: 64 pad: 1 kernel_size: 3 } } layers { bottom: "conv1_1" top: "conv1_1" name: "relu1_1" type: RELU } layers { bottom: "conv1_1" top: "conv1_2" name: "conv1_2" type: CONVOLUTION convolution_param { num_output: 64 pad: 1 kernel_size: 3 } } layers { bottom: "conv1_2" top: "conv1_2" name: "relu1_2" type: RELU } layers { bottom: "conv1_2" top: "pool1" name: "pool1" type: POOLING pooling_param { pool: MAX kernel_size: 2 stride: 2 } } layers { bottom: "pool1" top: "conv2_1" name: "conv2_1" type: CONVOLUTION convolution_param { num_output: 128 pad: 1 kernel_size: 3 } } layers { bottom: "conv2_1" top: "conv2_1" name: "relu2_1" type: RELU } layers { bottom: "conv2_1" top: "conv2_2" name: "conv2_2" type: CONVOLUTION convolution_param { num_output: 128 pad: 1 kernel_size: 3 } } layers { bottom: "conv2_2" top: "conv2_2" name: "relu2_2" type: RELU } layers { bottom: "conv2_2" top: "pool2" name: "pool2" type: POOLING pooling_param { pool: MAX kernel_size: 2 stride: 2 } } layers { bottom: "pool2" top: "conv3_1" name: "conv3_1" type: CONVOLUTION convolution_param { num_output: 256 pad: 1 kernel_size: 3 } } layers { bottom: "conv3_1" top: "conv3_1" name: "relu3_1" type: RELU } layers { bottom: "conv3_1" top: "conv3_2" name: "conv3_2" type: CONVOLUTION convolution_param { num_output: 256 pad: 1 kernel_size: 3 } } layers { bottom: "conv3_2" top: "conv3_2" name: "relu3_2" type: RELU } layers { bottom: "conv3_2" top: "conv3_3" name: "conv3_3" type: CONVOLUTION convolution_param { num_output: 256 pad: 1 kernel_size: 3 } } layers { bottom: "conv3_3" top: "conv3_3" name: "relu3_3" type: RELU } layers { bottom: "conv3_3" top: "pool3" name: "pool3" type: POOLING pooling_param { pool: MAX kernel_size: 2 stride: 2 } } layers { bottom: "pool3" top: "conv4_1" name: "conv4_1" type: CONVOLUTION 
convolution_param { num_output: 512 pad: 1 kernel_size: 3 } } layers { bottom: "conv4_1" top: "conv4_1" name: "relu4_1" type: RELU } layers { bottom: "conv4_1" top: "conv4_2" name: "conv4_2" type: CONVOLUTION convolution_param { num_output: 512 pad: 1 kernel_size: 3 } } layers { bottom: "conv4_2" top: "conv4_2" name: "relu4_2" type: RELU } layers { bottom: "conv4_2" top: "conv4_3" name: "conv4_3" type: CONVOLUTION convolution_param { num_output: 512 pad: 1 kernel_size: 3 } } layers { bottom: "conv4_3" top: "conv4_3" name: "relu4_3" type: RELU } layers { bottom: "conv4_3" top: "pool4" name: "pool4" type: POOLING pooling_param { pool: MAX kernel_size: 2 stride: 2 } } layers { bottom: "pool4" top: "conv5_1" name: "conv5_1" type: CONVOLUTION convolution_param { num_output: 512 pad: 1 kernel_size: 3 } } layers { bottom: "conv5_1" top: "conv5_1" name: "relu5_1" type: RELU } layers { bottom: "conv5_1" top: "conv5_2" name: "conv5_2" type: CONVOLUTION convolution_param { num_output: 512 pad: 1 kernel_size: 3 } } layers { bottom: "conv5_2" top: "conv5_2" name: "relu5_2" type: RELU } layers { bottom: "conv5_2" top: "conv5_3" name: "conv5_3" type: CONVOLUTION convolution_param { num_output: 512 pad: 1 kernel_size: 3 } } layers { bottom: "conv5_3" top: "conv5_3" name: "relu5_3" type: RELU } layers { bottom: "conv5_3" top: "pool5" name: "pool5" type: POOLING pooling_param { pool: MAX kernel_size: 2 stride: 2 } } layers { bottom: "pool5" top: "fc6" name: "fc6" type: INNER_PRODUCT inner_product_param { num_output: 4096 } } layers { bottom: "fc6" top: "fc6" name: "relu6" type: RELU } layers { bottom: "fc6" top: "fc6" name: "drop6" type: DROPOUT dropout_param { dropout_ratio: 0.5 } } layers { bottom: "fc6" top: "fc7" name: "fc7" type: INNER_PRODUCT inner_product_param { num_output: 4096 } } layers { bottom: "fc7" top: "fc7" name: "relu7" type: RELU } layers { bottom: "fc7" top: "fc7" name: "drop7" type: DROPOUT dropout_param { dropout_ratio: 0.5 } } layers { bottom: "fc7" top: 
"fc8_" name: "fc8_" type: INNER_PRODUCT inner_product_param { num_output: 6 } } layers { bottom: "fc8_" top: "prob" name: "prob" type: SOFTMAX }

dusty-nv commented 6 years ago

Hi @AndreV84 , the name of the SOFTMAX layer in your prototxt is prob, so --output_blob=prob

AndreV84 commented 6 years ago

screenshot from 2018-10-11 00-12-53

AndreV84 commented 6 years ago

screenshot from 2018-10-11 00-24-26

AndreV84 commented 6 years ago

~/jetson-inference/build/aarch64/bin$ ./imagenet-camera --prototxt=my.prototxt --model=my.caffemodel --labels=/labels.txt --input_blob=data --output_blob=prob

imagenet-camera
  args (6):  0 [./imagenet-camera]  1 [--prototxt=my.prototxt]  2 [--model=my.caffemodel]  3 [--labels=/labels.txt]  4 [--input_blob=data]  5 [--output_blob=prob]

[gstreamer] initialized gstreamer, version 1.14.1.0
[gstreamer] gstCamera attempting to initialize with GST_SOURCE_NVCAMERA
[gstreamer] gstCamera pipeline string:
nvcamerasrc fpsRange="30.0 30.0" ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)NV12 ! nvvidconv flip-method=2 ! video/x-raw ! appsink name=mysink
[gstreamer] gstCamera failed to create pipeline
[gstreamer] (no element "nvcamerasrc")
[gstreamer] failed to init gstCamera (GST_SOURCE_NVCAMERA)
[gstreamer] gstCamera attempting to initialize with GST_SOURCE_NVARGUS
[gstreamer] gstCamera pipeline string:
nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, framerate=30/1, format=(string)NV12 ! nvvidconv flip-method=2 ! video/x-raw ! appsink name=mysink
[gstreamer] gstCamera successfully initialized with GST_SOURCE_NVARGUS

imagenet-camera:  successfully initialized video device
    width:  1280
   height:  720
    depth:  12 (bpp)

imageNet -- loading classification network model from:
         -- prototxt     my.prototxt
         -- model        /my.caffemodel
         -- class_labels labels.txt
         -- input_blob   'data'
         -- output_blob  'prob'
         -- batch_size   2

[TRT]  TensorRT version 5.0.0
[TRT]  attempting to open cache file my.tensorcache
[TRT]  cache file not found, profiling network model
[TRT]  platform has FP16 support.
[TRT]  loading /my.caffemodel
[TRT]  failed to retrieve tensor for output 'prob'
[TRT]  configuring CUDA engine
[TRT]  building CUDA engine
[TRT]  Unused Input: data
[TRT]  failed to build CUDA engine
failed to load my.caffemodel
failed to load my.caffemodel
imageNet -- failed to initialize.
imagenet-console:  failed to initialize imageNet

dusty-nv commented 6 years ago

I'm not sure why it isn't finding the prob layer. It's possible that it's not actually in the binary caffemodel protobuf. For testing purposes, can you try using --output_blob=fc8_ to see if it can find that blob?

AndreV84 commented 6 years ago

[TRT] failed to retrieve tensor for output 'fc8_'

dusty-nv commented 6 years ago

It would seem that the caffemodel is corrupt, or of a different protobuf version such that TensorRT is not able to find the blobs? You might want to keep trying different blob names to see if any are found, or use nvcaffeparser1::IBlobNameToTensor interface to print out all the blobs that it loaded. You may also want to try disabling FP16 by changing this line to true: https://github.com/dusty-nv/jetson-inference/blob/6e16563f404af35be08993a44ad3650210f40351/tensorNet.cpp#L56

AndreV84 commented 6 years ago

It seems I should change the line nvcaffeparser1::ICaffeParser* parser = nvcaffeparser1::createCaffeParser(); in tensorNet.cpp to use the nvcaffeparser1::IBlobNameToTensor interface, and disable FP16. Thanks Dusty, I will.

AndreV84 commented 6 years ago

Now, with the three files below, I am trying the same procedure again:

http://www.robots.ox.ac.uk/%7Evgg/software/very_deep/caffe/VGG_ILSVRC_16_layers.caffemodel https://gist.githubusercontent.com/ksimonyan/211839e770f7b538e2d8/raw/ded9363bd93ec0c770134f4e387d8aaaaa2407ce/VGG_ILSVRC_16_layers_deploy.prototxt https://raw.githubusercontent.com/dusty-nv/jetson-inference/master/data/networks/ilsvrc12_synset_words.txt

The result is as follows for both prob and softmax:

~/jetson-inference/build/aarch64/bin$ ./imagenet-camera --prototxt=/home/nvidia/jetson-inference/data/networks/VGG_ILSVRC_16_layers_deploy.prototxt --model=/home/nvidia/jetson-inference/data/networks/VGG_ILSVRC_16_layers.caffemodel --labels=/home/nvidia/jetson-inference/data/networks/ilsvrc12_synset_words.txt --input_blob=data --output_blob=prob

imagenet-camera
  args (6):  0 [./imagenet-camera]  1 [--prototxt=/home/nvidia/jetson-inference/data/networks/VGG_ILSVRC_16_layers_deploy.prototxt]  2 [--model=/home/nvidia/jetson-inference/data/networks/VGG_ILSVRC_16_layers.caffemodel]  3 [--labels=/home/nvidia/jetson-inference/data/networks/ilsvrc12_synset_words.txt]  4 [--input_blob=data]  5 [--output_blob=prob]

[gstreamer] initialized gstreamer, version 1.14.1.0
[gstreamer] gstCamera attempting to initialize with GST_SOURCE_NVCAMERA
[gstreamer] gstCamera pipeline string:
nvcamerasrc fpsRange="30.0 30.0" ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)NV12 ! nvvidconv flip-method=2 ! video/x-raw ! appsink name=mysink
[gstreamer] gstCamera failed to create pipeline
[gstreamer] (no element "nvcamerasrc")
[gstreamer] failed to init gstCamera (GST_SOURCE_NVCAMERA)
[gstreamer] gstCamera attempting to initialize with GST_SOURCE_NVARGUS
[gstreamer] gstCamera pipeline string:
nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, framerate=30/1, format=(string)NV12 ! nvvidconv flip-method=2 ! video/x-raw ! appsink name=mysink
[gstreamer] gstCamera successfully initialized with GST_SOURCE_NVARGUS

imagenet-camera:  successfully initialized video device
    width:  1280
   height:  720
    depth:  12 (bpp)

imageNet -- loading classification network model from:
         -- prototxt     /home/nvidia/jetson-inference/data/networks/VGG_ILSVRC_16_layers_deploy.prototxt
         -- model        /home/nvidia/jetson-inference/data/networks/VGG_ILSVRC_16_layers.caffemodel
         -- class_labels /home/nvidia/jetson-inference/data/networks/ilsvrc12_synset_words.txt
         -- input_blob   'data'
         -- output_blob  'prob'
         -- batch_size   2

[TRT]  TensorRT version 5.0.0
[TRT]  attempting to open cache file /home/nvidia/jetson-inference/data/networks/VGG_ILSVRC_16_layers.caffemodel.2.tensorcache
[TRT]  cache file not found, profiling network model
[TRT]  platform does not have FP16 support.
[TRT]  loading /home/nvidia/jetson-inference/data/networks/VGG_ILSVRC_16_layers_deploy.prototxt /home/nvidia/jetson-inference/data/networks/VGG_ILSVRC_16_layers.caffemodel
[TRT]  failed to retrieve tensor for output 'prob'
[TRT]  configuring CUDA engine
[TRT]  building CUDA engine
[TRT]  Unused Input: data
[TRT]  failed to build CUDA engine
failed to load /home/nvidia/jetson-inference/data/networks/VGG_ILSVRC_16_layers.caffemodel
failed to load /home/nvidia/jetson-inference/data/networks/VGG_ILSVRC_16_layers.caffemodel
imageNet -- failed to initialize.
imagenet-console:  failed to initialize imageNet

~/jetson-inference/build/aarch64/bin$ ./imagenet-camera --prototxt=/home/nvidia/jetson-inference/data/networks/VGG_ILSVRC_16_layers_deploy.prototxt --model=/home/nvidia/jetson-inference/data/networks/VGG_ILSVRC_16_layers.caffemodel --labels=/home/nvidia/jetson-inference/data/networks/ilsvrc12_synset_words.txt --input_blob=data --output_blob=softmax

imagenet-camera
  args (6):  0 [./imagenet-camera]  1 [--prototxt=/home/nvidia/jetson-inference/data/networks/VGG_ILSVRC_16_layers_deploy.prototxt]  2 [--model=/home/nvidia/jetson-inference/data/networks/VGG_ILSVRC_16_layers.caffemodel]  3 [--labels=/home/nvidia/jetson-inference/data/networks/ilsvrc12_synset_words.txt]  4 [--input_blob=data]  5 [--output_blob=softmax]

[gstreamer] initialized gstreamer, version 1.14.1.0
[gstreamer] gstCamera attempting to initialize with GST_SOURCE_NVCAMERA
[gstreamer] gstCamera pipeline string:
nvcamerasrc fpsRange="30.0 30.0" ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)NV12 ! nvvidconv flip-method=2 ! video/x-raw ! appsink name=mysink
[gstreamer] gstCamera failed to create pipeline
[gstreamer] (no element "nvcamerasrc")
[gstreamer] failed to init gstCamera (GST_SOURCE_NVCAMERA)
[gstreamer] gstCamera attempting to initialize with GST_SOURCE_NVARGUS
[gstreamer] gstCamera pipeline string:
nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, framerate=30/1, format=(string)NV12 ! nvvidconv flip-method=2 ! video/x-raw ! appsink name=mysink
[gstreamer] gstCamera successfully initialized with GST_SOURCE_NVARGUS

imagenet-camera:  successfully initialized video device
    width:  1280
   height:  720
    depth:  12 (bpp)

imageNet -- loading classification network model from:
         -- prototxt     /home/nvidia/jetson-inference/data/networks/VGG_ILSVRC_16_layers_deploy.prototxt
         -- model        /home/nvidia/jetson-inference/data/networks/VGG_ILSVRC_16_layers.caffemodel
         -- class_labels /home/nvidia/jetson-inference/data/networks/ilsvrc12_synset_words.txt
         -- input_blob   'data'
         -- output_blob  'softmax'
         -- batch_size   2

[TRT] TensorRT version 5.0.0 [TRT] attempting to open cache file /home/nvidia/jetson-inference/data/networks/VGG_ILSVRC_16_layers.caffemodel.2.tensorcache [TRT] cache file not found, profiling network model [TRT] platform does not have FP16 support. [TRT] loading /home/nvidia/jetson-inference/data/networks/VGG_ILSVRC_16_layers_deploy.prototxt /home/nvidia/jetson-inference/data/networks/VGG_ILSVRC_16_layers.caffemodel [TRT] failed to retrieve tensor for output 'softmax' [TRT] configuring CUDA engine [TRT] building CUDA engine [TRT] Unused Input: data [TRT] failed to build CUDA engine failed to load /home/nvidia/jetson-inference/data/networks/VGG_ILSVRC_16_layers.caffemodel failed to load /home/nvidia/jetson-inference/data/networks/VGG_ILSVRC_16_layers.caffemodel imageNet -- failed to initialize. imagenet-console: failed to initialize imageNet `

dusty-nv commented 6 years ago

I am not sure if the issue is with this particular network topology; perhaps the parser has a problem with it and cannot find those layers. Are you able to manually load other networks, like the ones that come with the repo, similar to this step? https://github.com/dusty-nv/jetson-inference#loading-custom-models-on-jetson
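A common cause of the `failed to retrieve tensor for output` error above is passing an `--output_blob` name that does not match the last layer's `top` in the deploy.prototxt. One quick way to find the right name is to grep the prototxt for `top:` fields and take the last one. The sketch below is a crude text heuristic (not part of jetson-inference, and it assumes a well-formed single-output Caffe deploy.prototxt); the example snippet is hypothetical:

```python
import re

def last_top_blob(prototxt_text: str) -> str:
    """Heuristic: return the 'top' blob of the last layer in a Caffe
    deploy.prototxt, which is usually the name to pass as --output_blob
    (e.g. 'prob' or 'softmax')."""
    tops = re.findall(r'\btop\s*:\s*"([^"]+)"', prototxt_text)
    if not tops:
        raise ValueError("no 'top' fields found in prototxt")
    return tops[-1]

# Hypothetical DIGITS/Caffe-style snippet:
example = '''
layer { name: "fc8"  type: "InnerProduct" bottom: "fc7" top: "fc8" }
layer { name: "prob" type: "Softmax"      bottom: "fc8" top: "prob" }
'''
print(last_top_blob(example))  # → prob
```

In practice you would run this on the actual deploy.prototxt file and pass the result to `--output_blob`.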

AndreV84 commented 6 years ago

Thank you for your response. That one seems to work:

```
./imagenet-camera --prototxt=$NET/deploy.prototxt --model=$NET/snapshot_iter_184080.caffemodel --labels=$NET/labels.txt --input_blob=data --output_blob=softmax
imagenet-camera
  args (6):  0 [./imagenet-camera]  1 [--prototxt=networks/GoogleNet-ILSVRC12-subset/deploy.prototxt]  2 [--model=networks/GoogleNet-ILSVRC12-subset/snapshot_iter_184080.caffemodel]  3 [--labels=networks/GoogleNet-ILSVRC12-subset/labels.txt]  4 [--input_blob=data]  5 [--output_blob=softmax]

[gstreamer] initialized gstreamer, version 1.14.1.0
[gstreamer] gstCamera attempting to initialize with GST_SOURCE_NVCAMERA
[gstreamer] gstCamera pipeline string:
nvcamerasrc fpsRange="30.0 30.0" ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)NV12 ! nvvidconv flip-method=2 ! video/x-raw ! appsink name=mysink
[gstreamer] gstCamera failed to create pipeline
[gstreamer]    (no element "nvcamerasrc")
[gstreamer] failed to init gstCamera (GST_SOURCE_NVCAMERA)
[gstreamer] gstCamera attempting to initialize with GST_SOURCE_NVARGUS
[gstreamer] gstCamera pipeline string:
nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, framerate=30/1, format=(string)NV12 ! nvvidconv flip-method=2 ! video/x-raw ! appsink name=mysink
[gstreamer] gstCamera successfully initialized with GST_SOURCE_NVARGUS

imagenet-camera:  successfully initialized video device
    width:  1280
   height:  720
    depth:  12 (bpp)

imageNet -- loading classification network model from:
         -- prototxt     networks/GoogleNet-ILSVRC12-subset/deploy.prototxt
         -- model        networks/GoogleNet-ILSVRC12-subset/snapshot_iter_184080.caffemodel
         -- class_labels networks/GoogleNet-ILSVRC12-subset/labels.txt
         -- input_blob   'data'
         -- output_blob  'softmax'
         -- batch_size   2

[TRT]  TensorRT version 5.0.0
[TRT]  attempting to open cache file networks/GoogleNet-ILSVRC12-subset/snapshot_iter_184080.caffemodel.2.tensorcache
[TRT]  cache file not found, profiling network model
[TRT]  platform does not have FP16 support.
[TRT]  loading networks/GoogleNet-ILSVRC12-subset/deploy.prototxt networks/GoogleNet-ILSVRC12-subset/snapshot_iter_184080.caffemodel
[TRT]  retrieved output tensor 'softmax'
[TRT]  configuring CUDA engine
[TRT]  building CUDA engine
```

AndreV84 commented 6 years ago

But as my colleague points out, the only network that comes with the repo that seems to work is GoogleNet-ILSVRC12-subset. Could you point out other custom networks that are known to work with imagenet-console / imagenet-camera? Thank you

dusty-nv commented 6 years ago

Here are the paths to the networks that come with the repo that are tested against imagenet-console / camera: https://github.com/dusty-nv/jetson-inference/blob/6e16563f404af35be08993a44ad3650210f40351/imageNet.cpp#L108

I have not tried that particular variant of VGG16 before; it seems it is not loading correctly for some reason.


AndreV84 commented 6 years ago

thx

AndreV84 commented 6 years ago

May I know if there will be support for multiple models, or for the same model with multiple cameras, in jetson-inference for TX or Xavier?

dusty-nv commented 6 years ago

You can do multiple models and multiple cameras now; you just need to customize the example programs to load multiple models (i.e. create more imageNet object instances) and/or open multiple camera instances for streaming. There have also been forks from the community that process one camera with multiple models.

AndreV84 commented 6 years ago

Is imagenet-camera a good fit if my objective is to recognize 4 types of bottles [4 brands]? When I train a model in DIGITS and then classify a single image, the precision seems quite accurate. But when I download the model from DIGITS and pass it with parameters to imagenet-camera, it runs at a good FPS rate [30 fps] but always reports 100% confidence for a certain type, regardless of which object is in front of the camera or how it is positioned. The model is https://storage.googleapis.com/sample-input/20181012-171807-cdbf_epoch_30.0.tar.gz . Please advise what direction would be a good fit for these objectives. Thanks

AndreV84 commented 6 years ago

I upgraded the obsolete prototxt file manually, and now I am getting a different error:

```
./imagenet-console bottle_0.jpg bbb.jpg
imagenet-console
  args (3):  0 [./imagenet-console]  1 [bottle_0.jpg]  2 [bbb.jpg]

imageNet -- loading classification network model from:
         -- prototxt     networks/productRecognizer.prototxt
         -- model        networks/productRecognizer.caffemodel
         -- class_labels networks/labels.txt
         -- input_blob   'data'
         -- output_blob  'prob'
         -- batch_size   2

[TRT]  TensorRT version 5.0.0
[TRT]  attempting to open cache file networks/productRecognizer.caffemodel.2.tensorcache
[TRT]  cache file not found, profiling network model
[TRT]  platform does not have FP16 support.
[TRT]  loading networks/productRecognizer.prototxt networks/productRecognizer.caffemodel
[TRT]  retrieved output tensor 'prob'
[TRT]  configuring CUDA engine
[TRT]  building CUDA engine
[TRT]  completed building CUDA engine
[TRT]  network profiling complete, writing cache to networks/productRecognizer.caffemodel.2.tensorcache
[TRT]  completed writing cache to networks/productRecognizer.caffemodel.2.tensorcache
[TRT]  networks/productRecognizer.caffemodel loaded
[TRT]  CUDA engine context initialized with 2 bindings
[TRT]  networks/productRecognizer.caffemodel input binding index: 0
[TRT]  networks/productRecognizer.caffemodel input dims (b=2 c=3 h=224 w=224) size=1204224
[cuda]  cudaAllocMapped 1204224 bytes, CPU 0x243bf5000 GPU 0x243bf5000
[TRT]  networks/productRecognizer.caffemodel output 0 prob binding index: 1
[TRT]  networks/productRecognizer.caffemodel output 0 prob dims (b=2 c=6 h=1 w=1) size=48
[cuda]  cudaAllocMapped 48 bytes, CPU 0x216b89200 GPU 0x216b89200
networks/productRecognizer.caffemodel initialized.
[TRT]  networks/productRecognizer.caffemodel loaded
imageNet -- loaded 39 class info entries
imageNet -- failed to load synset class descriptions  (39 / 39 of 6)
imageNet -- failed to initialize.
imagenet-console:  failed to initialize imageNet
```

dusty-nv commented 6 years ago

Hi Andre, it appears that the network output has 6 classes, but the class description file has 39 classes?
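This kind of mismatch can be caught before the model is even loaded: compare the number of entries in the labels file against the class count in the network's output dims (the `c=6` in the log). The sketch below is a standalone sanity check, not part of jetson-inference; the file name and labels are illustrative:

```python
def check_labels(labels_path: str, num_classes: int) -> bool:
    """Return True if the labels file has exactly one non-empty line
    per output class; imageNet expects these counts to match."""
    with open(labels_path) as f:
        labels = [line.strip() for line in f if line.strip()]
    if len(labels) != num_classes:
        print(f"label count {len(labels)} != network classes {num_classes}")
        return False
    return True

# Hypothetical example: 6 labels for a 6-class bottle classifier
with open("labels.txt", "w") as f:
    f.write("\n".join(["brandA", "brandB", "brandC", "brandD", "other", "background"]))
print(check_labels("labels.txt", 6))  # → True
```

Running this against the labels file from the log (39 entries vs. 6 classes) would flag the problem immediately.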


AndreV84 commented 6 years ago

Exactly. After I trimmed it to 4 lines [by some tricky puzzle, when I left 6 lines it counted them as 8], it loaded 6/6 fine and everything worked. But now, due to peculiarities of the dataset, the model recognizes a hand rather than a bottle :)

That reminds me of an anecdote. A dozen years ago, military researchers tried to recognize tanks with this sort of algorithm, and they succeeded, after spending years on research and on preparing and analyzing datasets. But in time it turned out that the network had learned to recognize weather rather than tanks, because all of the dataset images had been taken in cloudy weather. As a result, the trained network mostly distinguished cloudy days from sunny ones, with a high degree of probability, instead of detecting tanks.

AndreV84 commented 6 years ago

Could you provide some hint on how to get imagenet-camera to detect 4 types of bottles with the given dataset? Or maybe this will never reach production without a huge dataset? [The given dataset is only about 34 MB; I can share it if you would like to take a look and give your expert advice on improving it.]

AndreV84 commented 6 years ago

source: https://www.gwern.net/Tanks

AndreV84 commented 6 years ago

"The neural network had learned to distinguish cloudy days from sunny days, instead of distinguishing camouflaged tanks from empty forest" source : https://arxiv.org/pdf/1705.05627.pdf

AndreV84 commented 6 years ago

Maybe you know how to feed DIGITS a list of >100,000 images for multiple-image classification with a trained model? When I start uploading the list of files, a new browser tab opens but never shows anything; it seems to be processing or hung forever.
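One workaround for a browser tab that hangs on a very large image list (this is a suggestion, not a documented DIGITS limit) is to split the list into smaller chunk files and submit them in batches. A minimal sketch; the paths and chunk size are hypothetical:

```python
def chunk_list(paths, chunk_size=10000):
    """Yield successive slices of a long list of image paths, so each
    slice can be submitted to DIGITS (or any batch classifier) on its own."""
    for i in range(0, len(paths), chunk_size):
        yield paths[i:i + chunk_size]

# Hypothetical usage: write image_list_000.txt .. image_list_009.txt
paths = [f"/data/images/img_{n:06d}.jpg" for n in range(100000)]
for idx, chunk in enumerate(chunk_list(paths)):
    with open(f"image_list_{idx:03d}.txt", "w") as f:
        f.write("\n".join(chunk))
print(idx + 1)  # → 10
```

Each chunk file then stays small enough for the upload form to handle, and the per-chunk results can be concatenated afterwards.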