dusty-nv / jetson-inference

Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
https://developer.nvidia.com/embedded/twodaystoademo
MIT License

How can I run ped-100 detection on my monitor and simultaneously save the detection images to a folder without the mask? #1626

Closed mmhzlrj closed 1 year ago

mmhzlrj commented 1 year ago
#!/usr/bin/python3

import argparse

parser = argparse.ArgumentParser()
parser.add_argument("input", type=str)
parser.add_argument("--output", type=str, default="display://0")
parser.add_argument("--network", type=str, default="ssd-mobilenet-v2")
parser.add_argument("--threshold", type=float, default=0.5)
opt = parser.parse_known_args()[0]

import jetson.utils
import jetson.inference

input = jetson.utils.videoSource(opt.input)
output = jetson.utils.videoOutput(opt.output)
net = jetson.inference.detectNet(opt.network, threshold=opt.threshold)

while output.IsStreaming():
    img = input.Capture()
    detections = net.Detect(img)
    output.Render(img)   # render once per frame, not once per detection

$ ./testArgv.py <inputsource> --network=ped-100 --output=<outputDirPath>

Running this script, I can only get detection images with the color mask overlay. Any suggestions? Thank you.

dusty-nv commented 1 year ago

@mmhzlrj, pass the overlay='none' argument to net.Detect(), like this:

detections = net.Detect(img, overlay='none')

That will disable the color mask. Also, you might be interested in the detectnet-snap.py sample, which saves cropped images of the detected objects: https://github.com/dusty-nv/jetson-inference/blob/master/python/examples/detectnet-snap.py
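Put together, a minimal sketch of saving clean full-frame snapshots whenever something is detected (the snapshots/ directory, the frame-numbered filenames, and the camera URI are just illustrative, not from the sample):

#!/usr/bin/python3
import os
import jetson.utils
import jetson.inference

net = jetson.inference.detectNet("ped-100", threshold=0.5)
input = jetson.utils.videoSource("/dev/video0")     # hypothetical camera source
output = jetson.utils.videoOutput("display://0")
os.makedirs("snapshots", exist_ok=True)

frame = 0
while output.IsStreaming():
    img = input.Capture()
    # overlay='none' keeps the frame free of the color mask and boxes
    detections = net.Detect(img, overlay='none')
    if len(detections) > 0:
        jetson.utils.saveImage("snapshots/frame-%06d.jpg" % frame, img)
    output.Render(img)
    frame += 1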

mmhzlrj commented 1 year ago

@dusty-nv Thank you for the recommendation about detectnet-snap.py. It works great.

What if I want the live camera stream to show detected objects with the overlay on the monitor, but save the snapshots without the overlay? The overlay holds my attention when I sit in front of the monitor, while the clean snapshots let me see clearly what was there. If I simply pass 'none' to overlay, both the live stream and the snapshots lose the overlay.

Besides, for the snapshots, how can I set more options, such as the size of the overlay box, the full frame size, or both? I noticed that some snapshots I captured were not fully boxed. For example, using ped-100 with detectnet on a moving person, the network sometimes does not include the person's head inside the box.

Looking forward to your suggestions.

dusty-nv commented 1 year ago

What if I want the live camera stream to show detected objects with the overlay on the monitor, but save the snapshots without the overlay?

You can continue to pass overlay='none' to net.Detect(), and then call net.Overlay() on the image after you have saved the snapshots:

# after net.Detect() and snapshots are saved
net.Overlay(img, detections, overlay='box,labels,conf')
output.Render(img)
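For reference, the full order of operations in the capture loop would look something like this (a sketch, with the snapshot-saving step in the middle standing in for whatever detectnet-snap.py or your own code does):

while output.IsStreaming():
    img = input.Capture()
    detections = net.Detect(img, overlay='none')    # frame stays clean
    # ... save the clean snapshots of img (or its crops) here ...
    net.Overlay(img, detections, overlay='box,labels,conf')
    output.Render(img)                              # monitor shows the overlay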

I noticed that some snapshots I captured were not fully boxed. For example, using ped-100 with detectnet on a moving person, the network sometimes does not include the person's head inside the box.

You could try manually expanding the bounding boxes to give yourself some extra room when saving the snapshots, or try a better model for detecting people (the ped-100 model is old). For example, the TAO PeopleNet model is quite good: https://github.com/dusty-nv/jetson-inference/blob/master/docs/detectnet-tao.md
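As a sketch of the first suggestion, here is one way to pad the box before cropping, loosely following how detectnet-snap.py crops detections (the expand_roi helper and the padding value are made up for illustration):

import jetson.utils

def expand_roi(detection, img, padding=20):
    # hypothetical helper: grow the detection box by `padding` pixels,
    # clamped to the image borders
    left   = max(0, int(detection.Left)  - padding)
    top    = max(0, int(detection.Top)   - padding)
    right  = min(img.width,  int(detection.Right)  + padding)
    bottom = min(img.height, int(detection.Bottom) + padding)
    return (left, top, right, bottom)

# inside the per-detection loop: crop the expanded ROI before saving it
roi = expand_roi(detection, img)
snapshot = jetson.utils.cudaAllocMapped(width=roi[2] - roi[0],
                                        height=roi[3] - roi[1],
                                        format=img.format)
jetson.utils.cudaCrop(img, snapshot, roi)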

mmhzlrj commented 1 year ago


Thanks a lot, @dusty-nv, this is just what I want. I ran into another issue when trying to use PeopleNet. Here is the log:

jet@jet-nano:~/Desktop$ /home/jet/Desktop/detectnet-snap.py --model=peoplenet
[OpenGL] glDisplay -- X screen 0 resolution:  1280x720
[OpenGL] glDisplay -- X window resolution:    1280x720
[OpenGL] glDisplay -- display device initialized (1280x720)
[video]  created glDisplay from display://0
------------------------------------------------
glDisplay video options:
------------------------------------------------
  -- URI: display://0
     - protocol:  display
     - location:  0
  -- deviceType: display
  -- ioType:     output
  -- width:      1280
  -- height:     720
  -- frameRate:  0
  -- numBuffers: 4
  -- zeroCopy:   true
------------------------------------------------

detectNet -- loading detection network model from:
          -- prototxt     
          -- model        networks/peoplenet_deployable_quantized_v2.6.1/resnet34_peoplenet_int8.etlt.engine
          -- input_blob   'input_1'
          -- output_cvg   'output_cov/Sigmoid'
          -- output_bbox  'output_bbox/BiasAdd'
          -- mean_pixel   0.000000
          -- class_labels networks/peoplenet_deployable_quantized_v2.6.1/labels.txt
          -- class_colors networks/peoplenet_deployable_quantized_v2.6.1/colors.txt
          -- threshold    0.500000
          -- batch_size   1

[TRT]    TensorRT version 8.2.1
[TRT]    loading NVIDIA plugins...
[TRT]    Registered plugin creator - ::GridAnchor_TRT version 1
[TRT]    Registered plugin creator - ::GridAnchorRect_TRT version 1
[TRT]    Registered plugin creator - ::NMS_TRT version 1
[TRT]    Registered plugin creator - ::Reorg_TRT version 1
[TRT]    Registered plugin creator - ::Region_TRT version 1
[TRT]    Registered plugin creator - ::Clip_TRT version 1
[TRT]    Registered plugin creator - ::LReLU_TRT version 1
[TRT]    Registered plugin creator - ::PriorBox_TRT version 1
[TRT]    Registered plugin creator - ::Normalize_TRT version 1
[TRT]    Registered plugin creator - ::ScatterND version 1
[TRT]    Registered plugin creator - ::RPROI_TRT version 1
[TRT]    Registered plugin creator - ::BatchedNMS_TRT version 1
[TRT]    Registered plugin creator - ::BatchedNMSDynamic_TRT version 1
[TRT]    Could not register plugin creator -  ::FlattenConcat_TRT version 1
[TRT]    Registered plugin creator - ::CropAndResize version 1
[TRT]    Registered plugin creator - ::DetectionLayer_TRT version 1
[TRT]    Registered plugin creator - ::EfficientNMS_TRT version 1
[TRT]    Registered plugin creator - ::EfficientNMS_ONNX_TRT version 1
[TRT]    Registered plugin creator - ::EfficientNMS_TFTRT_TRT version 1
[TRT]    Registered plugin creator - ::Proposal version 1
[TRT]    Registered plugin creator - ::ProposalLayer_TRT version 1
[TRT]    Registered plugin creator - ::PyramidROIAlign_TRT version 1
[TRT]    Registered plugin creator - ::ResizeNearest_TRT version 1
[TRT]    Registered plugin creator - ::Split version 1
[TRT]    Registered plugin creator - ::SpecialSlice_TRT version 1
[TRT]    Registered plugin creator - ::InstanceNormalization_TRT version 1
[TRT]    detected model format - engine  (extension '.engine')
[TRT]    loading network plan from engine cache... 
[TRT]    failed to load engine cache from 
[TRT]    failed to load 
[TRT]    detectNet -- failed to initialize.
Traceback (most recent call last):
  File "/home/jet/Desktop/detectnet-snap.py", line 39, in <module>
    net = detectNet(args.network, sys.argv, args.threshold)
Exception: jetson.inference -- detectNet failed to load network
jet@jet-nano:~/Desktop$ 

This issue is fixed. And a small suggestion: I kept removing the networks/peoplenet_deployable_quantized_v2.6.1/ folder and re-running $ tao-model-downloader.sh peoplenet_deployable_quantized_v2.6.1 again and again until colors.txt could be downloaded. I found that archives on https://api.ngc.nvidia.com/ download much more easily than the ones on https://nvidia.box.com/. Could those archives be hosted together on https://api.ngc.nvidia.com/ so that users like me can download them more easily? Right now I am trying to download peoplenet_pruned_quantized_v2.3.2, and it keeps retrying the download of https://nvidia.box.com/shared/static/s5ok5wgf2rn38jhj7zi0x9e8fw0wqnyr.txt, which I think is colors.txt.

I can download the peoplenet_pruned archives from https://catalog.ngc.nvidia.com/orgs/nvidia/teams/tao/models/peoplenet, but I don't know how to use resnet34_peoplenet_pruned_int8.etlt and resnet34_peoplenet_pruned_int8.txt to build resnet34_peoplenet_pruned_int8.etlt.engine. I already have tao-converter and colors.txt inside /networks/peoplenet_deployable_quantized_v2.6.1/.

Can I download the peoplenet_pruned archives from https://github.com/dusty-nv/jetson-inference/releases?

dusty-nv commented 1 year ago

Could those archives be hosted together on https://api.ngc.nvidia.com/ so that users like me can download them more easily?

colors.txt is a file that I added and cannot add to NGC, which is why it's hosted on nvidia.box.com instead. However, you should be able to just skip colors.txt if it's giving you problems (you can remove it from here)

but I don't know how to use resnet34_peoplenet_pruned_int8.etlt and resnet34_peoplenet_pruned_int8.txt to build resnet34_peoplenet_pruned_int8.etlt.engine.

The tao-model-downloader.sh script should automatically do this for you. If it has problems downloading the files from nvidia.box.com, I would comment those out so it can still build the TensorRT engine for you from the ETLT. Or you can look in the script to see how it does it.

mmhzlrj commented 1 year ago


Hi, @dusty-nv. I did what you suggested and removed download_file "colors.txt" "https://nvidia.box.com/shared/static/s5ok5wgf2rn38jhj7zi0x9e8fw0wqnyr.txt" from tao-model-downloader.sh, on both Line 234 and Line 250, and saved it. Then I ran tao-model-downloader.sh peoplenet_pruned_quantized_v2.3.2 to see what would happen. Unless I read the log wrong, it was still trying to download colors.txt. Luckily, the download succeeded this time. So I wondered whether the script was not running from my local file, or whether it always downloads the latest script from somewhere and runs that, which still has the download_file "colors.txt" line in it.

Anyway, I finally got resnet34_peoplenet_pruned_int8.etlt.engine. There were many times when I could not download colors.txt; after 10 retries the script just breaks, and I could not skip the colors.txt download to build the .engine file. I tried putting # before both Line 234 and Line 250, but it still attempted to download colors.txt. If you don't mind, would you explain why this happened?

This is the log from when I successfully built the TensorRT engine resnet34_peoplenet_pruned_int8.etlt.engine: log.txt

dusty-nv commented 1 year ago

I removed it from both Line 234 and Line 250 and saved it. Then I ran tao-model-downloader.sh peoplenet_pruned_quantized_v2.3.2 to see what would happen. Unless I read the log wrong, it was still trying to download colors.txt.

Ah, okay - after you made changes to tools/tao-model-downloader.sh, try re-running this:

cd jetson-inference/build
cmake ../
make
sudo make install

This will copy your updated tools/tao-model-downloader.sh into jetson-inference/build/aarch64/bin and /usr/local/bin. When you run it, it should then have your changes in it.

mmhzlrj commented 1 year ago


Thank you, @dusty-nv. I tried what you said and everything works great. I have another question: I am running the peoplenet_pruned_quantized_v2.3.2 model with detectnet-snap.py. Is it possible to control the number of output pictures? It seems to generate detection images every frame. What if I want it to save images only when it detects a new person or face within a 10-second window? If it detects the same person or face again within 10 seconds, it should just skip it. How can I make that happen?

dusty-nv commented 1 year ago

What if I want it to save images only when it detects a new person or face within a 10-second window?

For that you would need some type of tracking or re-identification, plus additional logic. There is basic tracking that was added to detectNet that you could try: https://github.com/dusty-nv/jetson-inference/blob/master/docs/detectnet-tracking.md

Then, when a detection has a new TrackID (or the detection's TrackFrames == minFrames), that is a new object and you can save the snapshot. DeepStream has much more robust tracking algorithms available.
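A rough sketch of that logic with a 10-second window per track (the last_saved dict and the window length are illustrative; see the tracking docs above for the actual parameters):

import time

net.SetTrackingEnabled(True)
net.SetTrackingParams(minFrames=3, dropFrames=15, overlapThreshold=0.5)

last_saved = {}   # TrackID -> timestamp of the last snapshot

# inside the capture loop, after net.Detect():
for detection in detections:
    now = time.time()
    track_id = detection.TrackID
    # TrackID stays negative until the tracker has latched onto the object
    if track_id >= 0 and now - last_saved.get(track_id, 0.0) > 10.0:
        last_saved[track_id] = now
        # ... save the snapshot for this new (or re-seen) track here ...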

e-velin commented 1 year ago

Hi! How is it possible to save two video outputs - one with detections and one without? Also, is it possible to disable the window that shows the video in realtime? I would like to get only the video output files.

dusty-nv commented 1 year ago

@e-velin if you use the --headless flag, it will skip creating the GUI window. To output two separate video files with different content, you would need to create two videoOutput interfaces.
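Something like this sketch (the filenames and camera source are placeholders), which renders the clean frame to one file before drawing the overlay and rendering to the other:

import jetson.utils
import jetson.inference

net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
input = jetson.utils.videoSource("/dev/video0")           # hypothetical source
out_clean   = jetson.utils.videoOutput("clean.mp4")       # no detections drawn
out_overlay = jetson.utils.videoOutput("detections.mp4")  # boxes/labels drawn

while input.IsStreaming():
    img = input.Capture()
    detections = net.Detect(img, overlay='none')
    out_clean.Render(img)        # save the untouched frame first
    net.Overlay(img, detections, overlay='box,labels,conf')
    out_overlay.Render(img)      # then save the annotated frame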