naisy / train_ssd_mobilenet

Train ssd_mobilenet of the Tensorflow Object Detection API with your own data.
MIT License

nms version for roadsign example #9

Closed · davideCremona closed 4 years ago

davideCremona commented 4 years ago

Hi, I'm experimenting with your repository and I started by following the tutorial. I've successfully trained an ssd_mobilenet model with the steps in readme.md and exported the frozen graph, but now I want to test it on a video. To test the model I'm using your linked repository for realtime object detection, and I'm confused about the nms_version. Which version should I use? I'm copying the .yml file for clarity:

---
image_input: 'images'       # input image dir
movie_input: '/home/DATI/insulators/SSD_mobilenet/test_videos/route226_argentina.mp4'    # mp4 or avi. Movie file.
#camera_input: 0            # USB Webcam on PC
camera_input: 1             # USB Webcam on TX2
## Input Must be OpenCV readable
## Onboard camera on Xavier (with TX2 onboard camera)
#camera_input: "nvarguscamerasrc ! video/x-raw(memory:NVMM), width=1280, height=720,format=NV12, framerate=120/1 ! nvvidconv ! video/x-raw,format=I420 ! videoflip method=rotate-180 ! appsink"
## Onboard camera on TX2 ### (need: apt-get install libxine2)
#camera_input: "nvcamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)I420, framerate=(fraction)30/1 ! nvvidconv flip-method=0 ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink"
## Onboard or external RTSP feed
#camera_input: "rtspsrc location=rtsp://127.0.0.1:8554/test latency=500 ! rtph264depay ! h264parse ! omxh264dec ! nvvidconv ! video/x-raw, width=(int)1280, height=(int)720, format=(string)BGRx ! videoconvert ! appsink"

force_gpu_compatible: False # If True with visualize False, speed up. Forces all CPU tensors to be allocated with Cuda pinned memory.
save_to_file: True         # movie or camera: ./output_movie/output_unixtime.avi. Save it in avi format to prevent compression degradation. Requires a lot of disk space.
                            # image: ./output_image/PATH_TO_FILE. Save it in image file.
visualize: True             # True: Show result image. False: Without image show.
vis_worker: False           # True: Visualization runs in its own process. (With visualize:True)
max_vis_fps: 0              # >=1: Limit of show fps. 0: No limit - means try to spend full machine power for visualization. (With visualize:True.)
vis_text: True              # Display fps on result image. (With visualize:True.)
max_frames: 5000            # >=1: Quit when frames done. 0: no exit. (With visualize:False)
width: 600                  # Camera width.
height: 600                 # Camera height.
fps_interval: 5             # FPS console out interval and FPS stream length.
det_interval: 100           # interval [frames] to print detections to console
det_th: 0.5                 # detection threshold for det_interval
worker_threads: 4           # parallel detection for Mask R-CNN.
split_model: True           # Splits Model into a GPU and CPU session for SSD/Faster R-CNN.
log_device: False           # Logs GPU / CPU device placement
allow_memory_growth: True   # limits memory allocation to the actual needs
debug_mode: False           # Show FPS spike value
split_shape: 1917           # 1917, 3000, 3309, 5118, 7326, 51150. ExpandDims_1's shape.

model_type: 'nms_v2'
model_path: '/home/DATI/insulators/SSD_mobilenet/output_models/frozen_inference_graph.pb'
label_path: '/home/davidecremona/PycharmProjects/train_ssd_mobilenet/roadsign_data/roadsign_label_map.pbtxt'
num_classes: 4

Thank you in advance, Davide.

naisy commented 4 years ago

Hi @davideCremona, probably nms_v2, but if that doesn't work, analyze the model structure. About split_model: the ssd_mobilenet graph in realtime_object_detection is split at the non-maximum suppression node, so an error will occur if your exported model's structure is different at that point.
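If you want to analyze the structure, a minimal sketch along these lines (TF 1.x API; the model path is the one from your .yml) can list the post-processing nodes and the ExpandDims_1 tensor that the split and split_shape depend on:

```python
# Minimal sketch (TF 1.x): list the nodes relevant to the NMS split so you
# can see whether your exported graph matches the expected structure.
import tensorflow as tf

PB_PATH = '/home/DATI/insulators/SSD_mobilenet/output_models/frozen_inference_graph.pb'

graph_def = tf.GraphDef()
with tf.gfile.GFile(PB_PATH, 'rb') as f:
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    # Post-processing ops and the ExpandDims_1 tensor (its shape should
    # correspond to the split_shape value in the .yml, e.g. 1917).
    if ('NonMaxSuppression' in node.name or 'Postprocessor' in node.name
            or node.name.endswith('ExpandDims_1')):
        print(node.op, node.name)
```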

You can also set split_model: False. This setting is slower, but it may work.
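For reference, the relevant lines of the .yml for that fallback would look like this (assuming the rest of your config stays as posted):

```yaml
split_model: False          # no GPU/CPU split; slower, but avoids the split-point mismatch
model_type: 'nms_v2'
model_path: '/home/DATI/insulators/SSD_mobilenet/output_models/frozen_inference_graph.pb'
```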

davideCremona commented 4 years ago

Setting split_model: False works. I'm getting around 60 fps on an NVIDIA RTX 2080, but I see no detections, either in realtime or in the saved video, probably because the video does not contain any speed_20, speed_10, speed_30 or stop signs.
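As a sanity check, I could also run the frozen graph directly on one of the roadsign images and print the raw scores. A rough sketch (TF 1.x, assuming the standard Tensorflow Object Detection API output tensor names and a hypothetical image path) would be:

```python
# Rough sketch (TF 1.x): run the frozen graph on a single image and print
# the top detection scores/classes to confirm the model detects anything.
import numpy as np
import tensorflow as tf
from PIL import Image

PB_PATH = '/home/DATI/insulators/SSD_mobilenet/output_models/frozen_inference_graph.pb'
IMAGE_PATH = 'roadsign_data/PascalVOC/JPEGImages/some_image.jpg'  # hypothetical example image

graph_def = tf.GraphDef()
with tf.gfile.GFile(PB_PATH, 'rb') as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')
    image = np.expand_dims(np.array(Image.open(IMAGE_PATH).convert('RGB')), axis=0)
    with tf.Session(graph=graph) as sess:
        scores, classes = sess.run(
            ['detection_scores:0', 'detection_classes:0'],
            feed_dict={'image_tensor:0': image})

# Detections above det_th (0.5 in the config) should show up here.
print(scores[0][:5], classes[0][:5])
```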

Can you please share your test video? I will run realtime object detection on it to see if everything works as expected. Thank you again, Davide.

naisy commented 4 years ago

I don't have a sample video, as I only use it for real-time detection with a USB camera. Please see my YouTube video (Realtime Object Detection on TX2) and try it yourself with a USB camera:

cd ~/github/realtime_object_detection
python run_stream.py

Or use the train_ssd_mobilenet/roadsign_data/PascalVOC/JPEGImages data:

cp -r ~/github/train_ssd_mobilenet/roadsign_data/PascalVOC/JPEGImages ~/github/realtime_object_detection/images
cd ~/github/realtime_object_detection
python run_image.py

davideCremona commented 4 years ago

Hi, I've used some code from your realtime object detection repository and everything works perfectly. Thanks for the support!