xiaoran-roosevelt closed this issue 6 months ago
Hi, both of these datasets have annotations available for road, anomaly, and ignore regions. This is sufficient for evaluating anomaly detection methods. These datasets are not used for training, so full semantic annotation is neither required nor provided, as far as I know. For further questions and details, I would suggest contacting the authors of the respective datasets.
Originally posted by @vojirt in #1 (comment) But the annotation files of these datasets contain only three values: 0 for normal pixels, 1 for anomalous pixels, and 255 for ignored regions. However, the normal region includes both road and non-road areas, so how do we differentiate between road and non-road pixels?
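For reference, decoding the three-value annotation described above is straightforward. The sketch below assumes the 0/1/255 encoding stated in this thread and uses a tiny synthetic array in place of a loaded annotation image; the helper name `split_mask` is illustrative, not part of either dataset's tooling.

```python
import numpy as np

# Assumed encoding from the annotation files discussed above:
# 0 = normal, 1 = anomaly, 255 = ignore.
NORMAL, ANOMALY, IGNORE = 0, 1, 255

def split_mask(label_mask: np.ndarray) -> dict:
    """Return boolean masks for the three annotated regions."""
    return {
        "normal": label_mask == NORMAL,
        "anomaly": label_mask == ANOMALY,
        "ignore": label_mask == IGNORE,
    }

# Tiny synthetic example standing in for a real annotation image.
mask = np.array([[0, 0, 1],
                 [1, 255, 0]], dtype=np.uint8)
regions = split_mask(mask)
print(regions["anomaly"].sum())  # 2 anomalous pixels
```

Note that, exactly as the question points out, nothing in these three values separates road from non-road within the "normal" region.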
This is an anomaly ground-truth image from the Road Anomaly dataset, which annotates only anomalous versus non-anomalous pixels without distinguishing road from non-road areas. By contrast, an image from the Lost and Found dataset annotates both the road and the anomalies on it. Therefore, I would like to know how your paper evaluates anomalies only on the road for the Road Anomaly dataset.
Now I understand your question. I have manually annotated the road region (in the same coarse fashion as in the FS LaF dataset) for the RoadAnomaly dataset and merged it with the original annotation, so that I have 3 labels (road, non-road, anomaly). Here is the link to the annotation: https://drive.google.com/file/d/1SaRT_c-AfCzEBUBXZrKBccHwz5Wl6W94/view?usp=sharing
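With such a merged 3-label mask, road-restricted evaluation amounts to keeping only road and anomaly pixels (anomalies as positives, road as negatives) and discarding non-road pixels entirely. The sketch below illustrates that selection; the concrete label values (0/1/2) and the helper name are assumptions for illustration and may differ from the encoding in the shared file.

```python
import numpy as np

# Hypothetical label values for the merged 3-label annotation
# (the actual encoding in the linked file may differ).
ROAD, NON_ROAD, ANOMALY = 0, 1, 2

def road_only_eval_pixels(labels: np.ndarray):
    """Select pixels for road-restricted evaluation:
    road pixels are negatives, anomaly pixels are positives,
    and non-road pixels are excluded entirely."""
    eval_mask = (labels == ROAD) | (labels == ANOMALY)
    targets = (labels == ANOMALY)[eval_mask]  # True = anomalous
    return eval_mask, targets

# Tiny synthetic merged-label mask.
labels = np.array([[0, 1, 2],
                   [2, 0, 1]], dtype=np.uint8)
eval_mask, targets = road_only_eval_pixels(labels)
print(eval_mask.sum(), targets.sum())  # 4 evaluated pixels, 2 positives
```

The per-pixel anomaly scores of a detector, restricted to `eval_mask`, can then be compared against `targets` with standard metrics (e.g. AUROC or FPR95).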
Cheers!
Thanks
Originally posted by @vojirt in https://github.com/vojirt/DaCUP/issues/1#issuecomment-2082541755