AICPS / RS2G

This repository contains the code for our paper titled "RS2G: Data-Driven Scene-Graph Extraction and Embedding for Robust Autonomous Perception and Scenario Understanding"
https://arxiv.org/abs/2304.08600
MIT License

Traffic-Anomaly dataset preprocessing #1

Status: Open. orsveri opened this issue 6 days ago.

orsveri commented 6 days ago

Hello, thank you for sharing this great work!

I was able to run training on the CARLA dataset and reproduce approximately the same results as in the paper. However, I am struggling to run it on the Traffic-Anomaly (DoTA) dataset. How did you split the data? The full dataset contains 4677 videos, while the paper says you used only 620.

As I understand it, I need to generate a .pkl file with scene graphs as described here: instruction from roadscene2vec repo. To do this, I need the corresponding bev.json file with the correct parameters. Could you provide it? I am also unsure about the config: do I only need to set the data path and the bev.json path and keep all the other parameters as they are?

orsveri commented 5 days ago

The `data/dataset.py` file also says:

# Datasets must be structured as follows:
# dataset_path / <sequence_id> / raw_images / <image files> (sorted in ascending filename order)
# dataset_path / <sequence_id> / gt_data / <ground truth data files> (sorted in ascending filename order)
# dataset_path / <sequence_id> / label.txt (sorted in ascending filename order, or simply one for the entire sequence)
# dataset_path / <sequence_id> / metadata.txt (sorted in ascending filename order, or one for the entire sequence)
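To make sure I read that comment correctly, here is a small sketch of how I would validate a dataset directory against that layout before running preprocessing. The function names (`check_sequence_layout`, `sorted_frames`) are my own, and the expected entries are only those listed in the comment above; I do not know what `gt_data` or `label.txt` should actually contain for DoTA.

```python
from pathlib import Path

# Expected per-sequence entries, taken verbatim from the dataset.py comment.
REQUIRED_DIRS = ("raw_images", "gt_data")
REQUIRED_FILES = ("label.txt", "metadata.txt")

def check_sequence_layout(dataset_path):
    """Return {sequence_id: [missing entries]} for sequences that
    do not match the documented directory structure."""
    problems = {}
    for seq in sorted(Path(dataset_path).iterdir()):
        if not seq.is_dir():
            continue
        missing = [d for d in REQUIRED_DIRS if not (seq / d).is_dir()]
        missing += [f for f in REQUIRED_FILES if not (seq / f).is_file()]
        if missing:
            problems[seq.name] = missing
    return problems

def sorted_frames(seq_dir):
    """Image files are consumed in ascending filename order,
    per the comment above, so list them the same way."""
    return sorted(p.name for p in (Path(seq_dir) / "raw_images").iterdir())
```

Running `check_sequence_layout` on my converted DoTA folder at least tells me which sequences are structurally incomplete, even though the semantics of the label and ground-truth files remain unclear to me.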

What should be inside the label.txt and gt_data files for DoTA?