amuamushu / adv_avod_ssn

Deep Sensor Fusion for Single Source Robustness
https://ayushmore.github.io/2022-03-07-improving-robustness-via-adversarial-training/

Generate Object Detection Figures. #9

Closed amuamushu closed 2 years ago

amuamushu commented 2 years ago

For the checkpoint report, we should display the object detection results like the example image linked below (see Image Source).

In the image, we should have the bounding boxes, confidence scores, and labels.

Currently, the code for generating the plot does not run, so we need to figure out how to get it running.

Image Source: https://towardsdatascience.com/object-detection-with-10-lines-of-code-d6cb4d86f606

amuamushu commented 2 years ago

Tried

python3 utils_sin/viz_sample_ex.py --data_dir=../teams/DSC180A_FA21_A00/a15/avod_data/Kitti/object/training --out_dir=./outputs/pyramid_cars_with_aug_simple/ --img_idx=0

but this just generates the corrupted sources.

amuamushu commented 2 years ago

Ran demos/show_predictions_2d.py after tweaking per https://github.com/kujason/avod#viewing-results. Was able to get the figures! Need to figure out what the numbers mean though.

(attached figure: 2D detection results for sample 000152)

amuamushu commented 2 years ago

From comparing the detection results and the images, it seems the left value is the confidence score. I am still unsure of the right value.

outputs/pyramid_cars_with_aug_simple/predictions/images_2d/predictions/val/120000/0.1/000630.png (attached figure for sample 000630)

outputs/pyramid_cars_with_aug_simple/predictions/kitti_native_eval/0.1_val/120000/data/000630.txt

    Car -1 -1 -10.0 590.841 170.283 623.885 202.207 1.679 1.714 4.036 -0.136 1.544 39.985 -1.544 0.939
    Car -1 -1 -10.0 432.119 179.829 460.399 197.977 1.44 1.503 3.893 -13.759 2.046 60.694 1.569 0.168
    Car -1 -1 -10.0 328.322 172.618 405.234 204.358 1.71 1.658 3.669 -13.394 1.697 39.747 -3.112 0.127

Description of data format: https://github.com/kujason/avod/wiki/Data-Formats
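For reference, here is a minimal sketch of how the prediction lines above could be parsed, assuming the 16-value KITTI detection format from that wiki page (Detection and parse_kitti_line are hypothetical helpers, not part of avod):

    # Minimal sketch: parse one line of a KITTI-style prediction .txt file.
    # Assumes the 16-value detection format: type, truncation, occlusion,
    # alpha, 2D bbox, 3D dimensions, 3D location, rotation_y, score.
    from collections import namedtuple

    Detection = namedtuple(
        "Detection",
        ["type", "truncation", "occlusion", "alpha",
         "bbox", "dimensions", "location", "rotation_y", "score"])

    def parse_kitti_line(line):
        fields = line.split()
        return Detection(
            type=fields[0],
            truncation=float(fields[1]),
            occlusion=float(fields[2]),
            alpha=float(fields[3]),
            bbox=tuple(map(float, fields[4:8])),         # left, top, right, bottom (px)
            dimensions=tuple(map(float, fields[8:11])),  # height, width, length (m)
            location=tuple(map(float, fields[11:14])),   # x, y, z in camera coords (m)
            rotation_y=float(fields[14]),
            score=float(fields[15]))

    line = ("Car -1 -1 -10.0 590.841 170.283 623.885 202.207 "
            "1.679 1.714 4.036 -0.136 1.544 39.985 -1.544 0.939")
    det = parse_kitti_line(line)
    print(det.bbox, det.score)  # (590.841, 170.283, 623.885, 202.207) 0.939

The last value on each line is the score field, which lines up with the left number drawn in the figure labels.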

amuamushu commented 2 years ago

Score is also mentioned in the docstring of wavedata.wavedata.tools.obj_detection.obj_utils.ObjectLabel (attached screenshot).

amuamushu commented 2 years ago

Second Value

After taking a deep dive into show_predictions_2d.py, I found out that the second score is the IoU score, an evaluation metric that measures the accuracy of an object detector. For IoU, a score closer to 1 is better.

I found this out by looking at draw_prediction_info(), where the IoU score is appended as the second part of the label:

    if draw_iou and len(ground_truth) > 0:
        if draw_score:
            # Separate from the confidence score that was added first
            label += ', '
        # IoU of the predicted box against every ground-truth box
        iou = evaluation.two_d_iou(pred_box_2d, ground_truth)
        # Keep only the best match, formatted to 3 decimal places
        label += "{:.3f}".format(max(iou))

First Value

I understand the first value to be the confidence score because of the docstring in ObjectLabel mentioned earlier (https://github.com/amuamushu/adv_avod_ssn/issues/9#issuecomment-1029541935). I know it is ObjectLabel.score because of this line in draw_prediction_info():

    if draw_score:
        # The confidence score comes first in the label, to 2 decimal places
        label += "{:.2f}".format(pred_obj.score)

TL;DR

- first score: confidence score
- second score: IoU score