dusty-nv / ros_deep_learning

Deep learning inference nodes for ROS / ROS2 with support for NVIDIA Jetson and TensorRT

In detections message: source_img is empty #59

Closed · WaldoPepper closed this issue 3 years ago

WaldoPepper commented 3 years ago

Hi,

I'm currently trying to evaluate the performance of my detection node against a recorded rosbag to which I added ground truth for every image in the sequence.

Unfortunately, the source_img information is not populated in the detections message:

 ...
 bbox:
      center:
        x: 132.41796875
        y: 419.493591309
        theta: 0.0
      size_x: 133.533309937
      size_y: 229.306762695
    source_img:
      header:
        seq: 0
        stamp:
          secs: 0
          nsecs: 0
        frame_id: ''
      height: 0
      width: 0
      encoding: ''
      is_bigendian: 0
      step: 0
      data: []

Is there a possibility to determine what exact image a certain detection is derived from?

dusty-nv commented 3 years ago

Hmm, since another node sends detectNet the images, I am not sure how you would determine the filename of an image that a subscriber receives. Do you know if that is possible in ROS?

WaldoPepper commented 3 years ago

I think taking the header (essentially the time stamp) of the image used for detection and publishing it in the detections message as the detection's source_img/header, together with the detection, should be sufficient to identify the image in a rosbag stream of images.

k3street commented 3 years ago

There isn't a defined message in ROS that captures the image file and its metadata. What I do is timestamp each frame in a data store like Postgres, together with all the metadata about the bounding box.

dusty-nv commented 3 years ago

OK, gotcha - I timestamp the detection message here: https://github.com/dusty-nv/ros_deep_learning/blob/ac40e93413f4b7cb911a18c0e4d5daac479234d4/src/node_detectnet.cpp#L148

Although it seems there is another header in the message that could carry a second timestamp. You could try setting that other header if you want.
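
For reference, the distinction between those two headers looks roughly like this (a sketch using the ROS1 vision_msgs field names, not the node's actual code):

    #include <ros/console.h>
    #include <vision_msgs/Detection2D.h>

    // Sketch only: the "other header" mentioned above is the one nested inside
    // each detection's source_img field, separate from the array-level header
    // that node_detectnet.cpp stamps at publish time.
    void print_source_header(const vision_msgs::Detection2D& det)
    {
        // Unless the node fills it in, this header keeps its defaults
        // (stamp 0, empty frame_id), which is exactly what the dump above shows.
        ROS_INFO("source stamp: %u.%u  frame_id: '%s'",
                 det.source_img.header.stamp.sec,
                 det.source_img.header.stamp.nsec,
                 det.source_img.header.frame_id.c_str());
    }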

WaldoPepper commented 3 years ago

msg.header.stamp is the entire message's time stamp, which I would like to keep at ROS_TIME_NOW().

I'm thinking more along the lines of adding something like

detMsg.source_img.header = input->header;

to add a source image-related time stamp to each individual set of detections.
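
For illustration, a minimal sketch of where that line could sit in the node's image callback (simplified; the publisher handle and the detection-count placeholder are hypothetical stand-ins, not the node's actual code):

    #include <ros/ros.h>
    #include <sensor_msgs/Image.h>
    #include <vision_msgs/Detection2DArray.h>

    ros::Publisher detection_pub;   // hypothetical handle for /detectnet/detections

    void img_callback( const sensor_msgs::ImageConstPtr& input )
    {
        vision_msgs::Detection2DArray msg;

        // ... convert 'input' to GPU memory and run the detector here ...
        const int numDetections = 0;   // placeholder for the real detection count

        for( int n = 0; n < numDetections; n++ )
        {
            vision_msgs::Detection2D detMsg;

            // ... fill detMsg.bbox and detMsg.results for the n-th detection ...

            // proposed change: copy the source image's header (stamp + frame_id)
            // so each detection can be traced back to the frame it came from
            detMsg.source_img.header = input->header;

            msg.detections.push_back(detMsg);
        }

        // message-level stamp stays at "now", as in the existing node
        msg.header.stamp = ros::Time::now();
        detection_pub.publish(msg);
    }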

I will try this and let you know how it works...

WaldoPepper commented 3 years ago

OK, I did a (very) quick test and it works so far:

   bbox: 
      center: 
        x: 589.929748535
        y: 646.913085938
        theta: 0.0
      size_x: 202.356445312
      size_y: 240.17388916
    source_img: 
      header: 
        seq: 2339
        stamp: 
          secs: 1607068054
          nsecs: 348318751
        frame_id: "//detectnet_link"
      height: 0
      width: 0
      encoding: ''
      is_bigendian: 0
      step: 0
      data: []

A nice side effect is that the frame_id of the camera which took the image is now published too, which may help with calculating the apparent position of the identified object relative to a /tf frame of choice.

Using this node has been, and still is, a very pleasant experience. Thanks again for providing it... :-)

WaldoPepper commented 3 years ago

Looking deeper into it, I found the correlation I was looking for.

I recorded a rosbag from the topics of interest, /video_source/raw and /detectnet/detections:

rosbag record /video_source/raw /detectnet/detections

When viewing them in rqt_bag, I can identify exactly which image a particular detection was derived from:

[screenshot: rqt_bag view, "Screenshot from 2020-12-04 11-30-20"]
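
In case somebody wants to do that correlation programmatically instead of in rqt_bag, a rough sketch with the rosbag C++ API could look like this (the bag filename is hypothetical, error handling is omitted, and it assumes the node sets source_img.header as discussed above):

    #include <rosbag/bag.h>
    #include <rosbag/view.h>
    #include <sensor_msgs/Image.h>
    #include <vision_msgs/Detection2DArray.h>
    #include <map>
    #include <string>
    #include <vector>

    int main()
    {
        rosbag::Bag bag;
        bag.open("recording.bag", rosbag::bagmode::Read);   // hypothetical filename

        // index the recorded images by their header stamp
        std::map<ros::Time, sensor_msgs::Image::ConstPtr> images;

        std::vector<std::string> topics = {"/video_source/raw", "/detectnet/detections"};
        rosbag::View view(bag, rosbag::TopicQuery(topics));

        for (const rosbag::MessageInstance& m : view)
        {
            if (auto img = m.instantiate<sensor_msgs::Image>())
                images[img->header.stamp] = img;

            if (auto dets = m.instantiate<vision_msgs::Detection2DArray>())
            {
                for (const auto& det : dets->detections)
                {
                    // the copied header stamp points back to the exact source frame
                    auto it = images.find(det.source_img.header.stamp);
                    if (it != images.end())
                    {
                        // it->second is the image this detection was computed from
                    }
                }
            }
        }

        bag.close();
        return 0;
    }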

WaldoPepper commented 3 years ago

Problem solved. I will try to make a PR once I've learned more about the implications of the code change...