Closed: JunrQ closed this issue 3 years ago
Thank you for your interest in our project and for your questions!
As for the training set, you are correct. We only specify the test set in our benchmark (see paragraph 2 of Section Dataset Construction in the paper). Researchers are free to use all the data and annotations in the original Waymo training set.
Thank you for asking about the evaluation protocol! We will update it soon to make the evaluation clearer and more flexible. In the meantime, here are some quick steps in response to your question:
The results are stored under `${result_folder}/${NAME}/summary/`, and the result for each tracklet ID is stored in `${ID}.json`. `result_folder` holds all the tracking results, and `NAME` is your algorithm name. Please refer to Section 4.2 in the README.md for further details.
-- result_folder
   -- NAME
      -- summary
         -- ID.json
{
  frame_index0: {'bbox0': ..., 'bbox1': ..., 'motion': ...},
  frame_index1: ...,
  frame_indexN: ...
}
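For concreteness, here is a minimal sketch of how a result file in this layout could be written. `save_tracklet_result` and its arguments are hypothetical names, not part of the repository's API, and the per-frame payload is only a placeholder for whatever bounding boxes and motion your tracker produces:

```python
import json
import os

# Hypothetical helper (not part of this repository): dump one tracklet's
# per-frame results into ${result_folder}/${NAME}/summary/${ID}.json.
def save_tracklet_result(result_folder, name, tracklet_id, per_frame_results):
    """per_frame_results maps frame_index -> {'bbox0': ..., 'bbox1': ..., 'motion': ...}."""
    summary_dir = os.path.join(result_folder, name, 'summary')
    os.makedirs(summary_dir, exist_ok=True)  # create result_folder/NAME/summary/
    out_path = os.path.join(summary_dir, f'{tracklet_id}.json')
    with open(out_path, 'w') as f:
        json.dump(per_frame_results, f)      # one JSON file per tracklet ID
    return out_path
```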
Please let me know if you have further questions!
@JunrQ I’ll close this issue. Feel free to contact us for any further communication.
Thank you for sharing the code.
I am not sure whether I understand correctly: do you only specify the test dataset (in the `./benchmark/` directory) and not the training set (because you don't need training)? Another question: how can I test other algorithms on the benchmark? Do you plan to provide the related APIs?