Closed: github2016-yuan closed this issue 3 years ago.
Thanks. What do you mean by real-world application? This already works for individual videos. If you mean live-streaming video, the way to do it is to chop the stream into small chunks (e.g. 30 seconds), then run the detection system and the visualization in parallel (currently they run as separate steps). This way you get at least a 30-second delay. You can use smaller chunks, but accuracy may decrease.
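A minimal sketch of the parallel chunked pipeline described above, assuming the stream is already split into chunks. All function names here (`detect_and_track`, `visualize`) are hypothetical placeholders, not this repo's actual API; the point is only the producer/consumer structure that lets detection and visualization overlap:

```python
import queue
import threading

CHUNK_SECONDS = 30  # chunk length; smaller chunks cut latency but may hurt accuracy

def detect_and_track(chunk):
    """Placeholder for the detection + tracking step on one chunk."""
    return f"tracks({chunk})"

def visualize(tracks):
    """Placeholder for rendering the tracking results onto the chunk."""
    return f"vis({tracks})"

def pipeline(chunks):
    q = queue.Queue()
    results = []

    def producer():
        # Detection runs roughly CHUNK_SECONDS behind the live stream.
        for c in chunks:
            q.put(detect_and_track(c))
        q.put(None)  # sentinel: no more chunks

    def consumer():
        # Visualization consumes finished chunks while detection continues.
        while True:
            tracks = q.get()
            if tracks is None:
                break
            results.append(visualize(tracks))

    t_det = threading.Thread(target=producer)
    t_vis = threading.Thread(target=consumer)
    t_det.start(); t_vis.start()
    t_det.join(); t_vis.join()
    return results

print(pipeline(["chunk0", "chunk1"]))
```

With 30-second chunks, the first visualized chunk appears about one chunk length after the stream starts, which is where the "at least a 30-second delay" comes from.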
Great work, and thanks for sharing. I followed your instructions and finally got vis_video.mp4. To get it, I had to do several steps: (1) run object detection & tracking on the test videos; (2) visualize the tracking results.
You get the detection result in (1) and then visualize it on the video; after that you just use ffmpeg to build an .mp4 from the many frames extracted from the source .mp4 file. I wonder whether it is possible to run detection and tracking on the video directly and visualize it at the same time? I am concerned about real-world application. Thanks.
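For the frames-to-video step mentioned above, a small sketch of the two ffmpeg invocations involved (splitting a stream into chunks, and reassembling visualized frames into an .mp4). The helper names and file patterns are my own assumptions, not from this repo; the commands are built as argument lists rather than executed, since they need ffmpeg and real input files:

```python
def segment_cmd(src, seconds=30):
    """Split src into fixed-length chunks without re-encoding (ffmpeg segment muxer)."""
    return ["ffmpeg", "-i", src, "-c", "copy", "-f", "segment",
            "-segment_time", str(seconds), "-reset_timestamps", "1",
            "chunk_%03d.mp4"]

def frames_to_video_cmd(pattern, out, fps=30):
    """Assemble numbered frames (e.g. frame_000001.jpg) back into an .mp4."""
    return ["ffmpeg", "-framerate", str(fps), "-i", pattern,
            "-c:v", "libx264", "-pix_fmt", "yuv420p", out]

print(" ".join(frames_to_video_cmd("frame_%06d.jpg", "vis_video.mp4")))
```

These lists can be passed directly to `subprocess.run(...)` once ffmpeg is installed and the frames exist.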