zhengthomastang / 2018AICity_TeamUW

The winning method in Track 1 and Track 3 at the 2nd AI City Challenge Workshop in CVPR 2018 - Official Implementation
http://openaccess.thecvf.com/content_cvpr_2018_workshops/w3/html/Tang_Single-Camera_and_Inter-Camera_CVPR_2018_paper.html

Manually labeling - how to input? #11

Open koenie64 opened 5 years ago

koenie64 commented 5 years ago

Hi! We are trying to reproduce / reuse the scripts, but we get stuck at the manual labeling step. In the Darknet aicity.data file, a reference is made to:

train = /home/ipl_gpu/Aotian/darknet2/data/aicity/train.txt
valid = /home/ipl_gpu/Aotian/darknet2/data/aicity/validation.txt

Are these the class (sedan, etc.) labels? How should we format those files? Thanks
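For context, in the standard Darknet/YOLO convention (which aicity.data appears to follow), train.txt and validation.txt simply list one image path per line, and each image has a sibling .txt label file containing one normalized box per line. The sketch below illustrates only that general convention; the resolution, class list, and paths are placeholder assumptions, not the authors' actual setup.

```python
# Sketch of the standard Darknet/YOLO labeling convention; the resolution,
# class list, and paths below are placeholders, not the authors' actual setup.
from pathlib import Path

IMG_W, IMG_H = 1920, 1080            # frame resolution (assumed)
CLASSES = ["sedan", "truck"]         # hypothetical class list

def write_darknet_labels(annotations, list_file):
    """annotations: {image_path: [(class_name, xmin, ymin, xmax, ymax), ...]}
    with box corners given in pixels."""
    image_paths = []
    for img_path, boxes in annotations.items():
        lines = []
        for cls, xmin, ymin, xmax, ymax in boxes:
            # Darknet label files expect one box per line:
            # <class_id> <x_center> <y_center> <width> <height>,
            # all normalized to [0, 1] by the image size.
            cx = (xmin + xmax) / 2.0 / IMG_W
            cy = (ymin + ymax) / 2.0 / IMG_H
            w = (xmax - xmin) / IMG_W
            h = (ymax - ymin) / IMG_H
            lines.append(f"{CLASSES.index(cls)} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}")
        # The label file sits next to the image: same name, .txt extension.
        Path(img_path).with_suffix(".txt").write_text("\n".join(lines) + "\n")
        image_paths.append(img_path)
    # train.txt / validation.txt are simply one absolute image path per line.
    Path(list_file).write_text("\n".join(image_paths) + "\n")
```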

zhengthomastang commented 5 years ago

Hi @koenie64. Thank you for using our software package. Since the datasets of the 2018 AI City Challenge belong to NVIDIA and are not currently available, you cannot train the model. However, our pretrained model is provided for testing; note that it may not perform well on other datasets. Feel free to use other state-of-the-art detectors such as Faster R-CNN, SSD, YOLOv3, etc.

koenie64 commented 5 years ago

Hi! We understand that - we will use the new 2019 dataset. I was more interested in the structure of the manual labeling (format of the text file) so that we can reuse the script with new data.

Thanks! Koen

zhengthomastang commented 5 years ago

Hi @koenie64! For the 2019 AI City Challenge, I do not suggest training a detector on the provided ground truth for MTMC tracking, because only vehicles passing through multiple cameras are annotated. In other words, many cars, especially those parked along the streets, are not labeled, so you will not get good performance if you train on that data. However, the performance of the provided detection baselines should be good enough, especially SSD512. Let me know if you still have any questions.

Ujang24 commented 4 years ago

> Hi @koenie64. Thank you for using our software package. Since the datasets of the 2018 AI City Challenge belong to NVIDIA and are not currently available, you cannot train the model. However, our pretrained model is provided for testing; note that it may not perform well on other datasets. Feel free to use other state-of-the-art detectors such as Faster R-CNN, SSD, YOLOv3, etc.

Could you please describe the steps for using YOLOv3 as the object detector?

zhengthomastang commented 4 years ago

@Ujang24 The easiest way is to run the pretrained models (ImageNet or COCO) and change the output format to match our definition. Alternatively, you can follow the YOLOv3 tutorial to train your own model.
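A minimal sketch of that first option: YOLOv3 reports boxes as normalized center coordinates, so converting its output to pixel-coordinate rows is mostly arithmetic. The output column order used below is an assumption for illustration; check this repository's README for the exact detection format it expects.

```python
# Sketch of converting YOLOv3-style detections (normalized center-x, center-y,
# width, height per box) into pixel-coordinate rows. The output column order
# used here (frame, class, left, top, width, height, confidence) is an
# assumption -- check this repository's README for the exact format it expects.
def yolo_to_pixel_rows(frame_id, detections, img_w, img_h):
    """detections: iterable of (class_id, cx, cy, w, h, conf), normalized to [0, 1]."""
    rows = []
    for class_id, cx, cy, w, h, conf in detections:
        left = (cx - w / 2.0) * img_w   # convert center-based box to top-left corner
        top = (cy - h / 2.0) * img_h
        rows.append(f"{frame_id} {class_id} {left:.1f} {top:.1f} "
                    f"{w * img_w:.1f} {h * img_h:.1f} {conf:.4f}")
    return rows
```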

Ujang24 commented 4 years ago

> @Ujang24 The easiest way is to run the pretrained models (ImageNet or COCO) and change the output format to match our definition. Alternatively, you can follow the YOLOv3 tutorial to train your own model.

Thanks, that helps. Sorry for the late reply.