kujason / avod

Code for 3D object detection for autonomous driving
MIT License

How to generate testing results? #6

Closed kminemur closed 6 years ago

kminemur commented 6 years ago

I can run all of the instructions; however, I'm not sure how to generate results on the testing dataset for KITTI benchmark submission. Could you tell me how to do it?

AbhinavDS commented 6 years ago

Yes, I have the same query. The issue seems to be the lack of ground plane data for the testing dataset. How can the ground planes be generated for the testing dataset (or any other new data)?

P.S. For now I have fixed the ground plane coefficients to a, b, c, d = [0, -1, 0, 1.73] (the height of the LiDAR in the KITTI data).

kujason commented 6 years ago

The planes are generated with an in-house plane estimation algorithm, and we include the output here for easier training on labelled data. We recommend looking into one of the many other available methods for ground plane estimation to generate these files if needed.

Using a constant ground plane for training is also a viable option to try, although we would recommend using a, b, c, d = [0, -1, 0, 1.65] instead, as the ground plane is in the camera co-ordinate frame, and the camera is 1.65m above the ground (http://www.cvlibs.net/datasets/kitti/setup.php).

yzhou-saic commented 6 years ago

In order to reproduce the results reported in the paper, should we fix the ground plane to a, b, c, d = [0, -1, 0, 1.65] during testing? I think most people care about this.

asharakeh commented 6 years ago

@yzhou-saic you will not get the same results unless you use our ground plane estimation algorithm, which we will not be releasing any time soon.

However, any ground plane estimation algorithm should work in theory.
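
As an illustration, here is a minimal sketch of a generic RANSAC plane fit using Open3D (not the authors' in-house method). It assumes the point cloud has already been transformed into the rectified camera frame, where the ground normal should be close to [0, -1, 0]:

```python
# Generic RANSAC ground-plane fit (not the in-house AVOD method).
# Assumes `points_cam` is an Nx3 numpy array already in the rectified camera
# frame (x right, y down, z forward), e.g. after applying Tr_velo_to_cam and R0_rect.
import numpy as np
import open3d as o3d

def estimate_ground_plane(points_cam, distance_threshold=0.05):
    """Fit a plane a*x + b*y + c*z + d = 0 to camera-frame points with RANSAC."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points_cam)
    (a, b, c, d), _inliers = pcd.segment_plane(
        distance_threshold=distance_threshold, ransac_n=3, num_iterations=1000)
    # Flip the normal so it points upward (-y in camera coordinates),
    # matching the [0, -1, 0, 1.65] convention mentioned above.
    if b > 0:
        a, b, c, d = -a, -b, -c, -d
    return a, b, c, d
```

In practice you would restrict the fit to points near the expected ground region (e.g. in front of the car and below the camera) so that the largest plane found is actually the road.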

kminemur commented 6 years ago

@kujason @asharakeh thank you for the responses.

I manually created test.txt and a planes directory, filling each plane txt with

Plane
Width 4
Height 1
0.0e+00 -1.0e+00 0.0e+00 1.65e+00

in /Kitti/object, then ran

python avod/experiments/run_inference.py --checkpoint_name='avod_cars_example' --data_split='test' --ckpt_indices=120 --device='0'

This way works!!
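
For reference, a minimal sketch that writes such a constant-plane file for every frame listed in test.txt; the testing/planes/<frame_id>.txt layout is an assumption, mirroring the planes provided for the training split:

```python
# Minimal sketch: write a constant ground plane file for every frame in test.txt,
# using the [0, -1, 0, 1.65] coefficients suggested above.
# Assumption: testing planes live in <kitti_dir>/testing/planes/<frame_id>.txt,
# mirroring the layout of the planes provided for the training split.
import os

kitti_dir = os.path.expanduser('~/Kitti/object')   # adjust to your setup
planes_dir = os.path.join(kitti_dir, 'testing', 'planes')
os.makedirs(planes_dir, exist_ok=True)

# Three header lines, then the plane coefficients on the fourth line,
# matching the file contents shown above.
plane_text = 'Plane\nWidth 4\nHeight 1\n0.0e+00 -1.0e+00 0.0e+00 1.65e+00\n'

with open(os.path.join(kitti_dir, 'test.txt')) as f:
    frame_ids = [line.strip() for line in f if line.strip()]

for frame_id in frame_ids:
    with open(os.path.join(planes_dir, frame_id + '.txt'), 'w') as out:
        out.write(plane_text)
```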

I have another question: how should I interpret the output values in /avod/data/outputs/avod_cars_example/predictions/final_predictions_and_scores?

I put an example here: 3.91767 1.73724 6.20433 3.30604 1.53459 1.41908 -1.33451 0.99933 0.00000

I guess these output values are: x, y, z, l, w, h, ry, score, ???. Am I correct?

Thanks in advance.

kujason commented 6 years ago

The output is in box_3d + score detection format (more info here). To convert to KITTI label format, you can use the provided scripts/offline_eval/save_kitti_predictions.py script.
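
For quick inspection (before converting to the KITTI label format), here is a minimal sketch that parses such a file; the interpretation of the last column as a class index is an assumption, not something confirmed in this thread:

```python
# Minimal sketch: read a final_predictions_and_scores file into named fields.
# Assumed column layout per row: x, y, z, l, w, h, ry (box_3d), score, class index.
# The meaning of the last column is an assumption.
import numpy as np

def load_predictions(pred_file):
    preds = np.loadtxt(pred_file).reshape(-1, 9)
    boxes_3d = preds[:, 0:7]               # x, y, z (camera frame), l, w, h, ry
    scores = preds[:, 7]
    class_indices = preds[:, 8].astype(int)
    return boxes_3d, scores, class_indices
```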

kminemur commented 6 years ago

My issue is solved. Thanks.

kargarisaac commented 5 years ago

> @yzhou-saic you will not get the same results unless you use our ground plane estimation algorithm, which we will not be releasing any time soon.
>
> However, any ground plane estimation algorithm should work in theory.

Hi, I want to use your algorithm on my own dataset. I have developed an algorithm for ground plane estimation, but I cannot find the corresponding part of the code. Is it possible to feed in my point cloud data with the ground points already separated (I mean, extract the points above the ground and feed them directly into the network)? Where is the ground plane estimation (ground plane usage) in your pipeline?

Thanks