earcz opened this issue 5 years ago
For the object detection part, visualization can be done in real time. For every input frame, the DNN (which is based on YOLO v1) outputs an N×6 matrix, where N is the number of detected objects. Each object is described by 6 numbers: the class (label), the probability, and the bounding-box coordinates (x, y, width, height). See this code for more information.
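For reference, below is a minimal sketch (not from the repo) of how such an N×6 output could be overlaid on a frame with OpenCV. The column order, pixel-space top-left coordinates, and class names are assumptions and may need adjusting to the network's actual output.

```python
# Hypothetical sketch: overlay an N x 6 detection matrix on a frame with OpenCV.
# Assumed column order (not confirmed by the repo): class id, probability, x, y, width, height,
# with x/y as the top-left corner in pixels.
import cv2
import numpy as np

CLASS_NAMES = ["person", "car"]  # placeholder labels; replace with the real class list

def draw_detections(frame, detections, prob_threshold=0.5):
    """Draw one rectangle and label per row of the N x 6 detection matrix."""
    for class_id, prob, x, y, w, h in detections:
        if prob < prob_threshold:
            continue
        x, y, w, h = int(x), int(y), int(w), int(h)
        label = f"{CLASS_NAMES[int(class_id)]} {prob:.2f}"
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, max(y - 5, 0)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    return frame

if __name__ == "__main__":
    # Example with a fake 2 x 6 matrix (two detections) on a blank frame
    frame = np.zeros((480, 640, 3), dtype=np.uint8)
    detections = np.array([[0, 0.92, 100, 80, 60, 120],
                           [1, 0.71, 300, 200, 150, 90]])
    cv2.imshow("detections", draw_detections(frame, detections))
    cv2.waitKey(0)
```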
Hi,
Similar to this issue, did you visualize the detected objects in Fig. 5 (in your paper) in real time or after the flight? I want to visualize real-time object detection from the Jetson in QGC via telemetry. Have you ever done this, and is there a simple way to accomplish it?
Thanks.
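One simple, if crude, option is sketched below under the assumption that a MAVLink telemetry link to QGC is available: send short detection summaries as STATUSTEXT messages with pymavlink, which QGC displays in its message panel. The connection string, port, component id, and message format are illustrative assumptions, not the repo's method.

```python
# Hypothetical sketch: push detection summaries to QGroundControl over a MAVLink
# telemetry link as STATUSTEXT messages. Endpoint and formatting are assumptions.
from pymavlink import mavutil

# e.g. a UDP endpoint QGC listens on (14550 is QGC's default), or the serial telemetry radio
master = mavutil.mavlink_connection("udpout:192.168.1.10:14550",
                                    source_system=1, source_component=191)

def report_detection(class_name, prob):
    # STATUSTEXT payloads are limited to 50 characters, so keep the summary short
    text = f"DET {class_name} {prob:.2f}"[:50]
    master.mav.statustext_send(mavutil.mavlink.MAV_SEVERITY_INFO, text.encode())

report_detection("car", 0.87)
```

Note that this only shows detections as text in QGC; drawing the bounding boxes on the live video inside QGC would instead require streaming the annotated frames, which this sketch does not cover.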