NVlabs / mask-auto-labeler

Problems about datasets and labels #17

Closed liusurufeng closed 1 year ago

liusurufeng commented 1 year ago

I apologize for bothering you, but I have a question about the datasets and labels in coco.py. The code is as follows:

```python
training_config = {
    'train_img_data_dir': 'data/coco/train2017',
    'val_img_data_dir': 'data/coco/val2017',
    'test_img_data_dir': 'data/coco/test2017',
    'dataset_type': 'coco',
    'train_ann_path': "/saccadenet/Saccadenet/data/coco/annotations/boxes_train2017.json",
    'val_ann_path': "data/coco/annotations/instances_val2017.json",  # ----(1)
}

generating_pseudo_label_config = {
    'train_img_data_dir': 'data/coco/train2017',
    'train_ann_path': "/saccadenet/Saccadenet/data/coco/annotations/boxes_train2017.json",
    'val_img_data_dir': 'data/coco/train2017',  # ----(2)
    'dataset_type': 'coco',
    'val_ann_path': "/saccadenet/Saccadenet/data/coco/annotations/boxes_train2017.json",  # ----(3)
}
```

At (1): Is instances_val2017.json an annotation file with segmentation information, or does it contain only bounding-box annotations? If my annotation file contains segmentation information but I want box-supervised (weak) segmentation, can I use it here directly, or do I need to reprocess the dataset annotations so that they contain only bounding-box information?

At (2): Why are training-set images used for the validation set?

At (3): Why are training-set annotations used here? Shouldn't they be the validation-set annotations?
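Regarding (1), if the answer turns out to be that a boxes-only file is needed, here is a minimal sketch of how one could derive it from a standard COCO instances file by dropping the `segmentation` field from each annotation. This is only an illustration of the standard COCO instances layout, not the repo's own preprocessing; the output file name matching `boxes_train2017.json` is just an assumption from the config above.

```python
import json

def strip_segmentation(coco_ann):
    """Return a copy of a COCO-style annotation dict with box labels only.

    Each entry in coco_ann["annotations"] keeps its "bbox" (and other
    fields) but loses "segmentation", so the file carries only
    bounding-box supervision.
    """
    out = {k: v for k, v in coco_ann.items() if k != "annotations"}
    out["annotations"] = [
        {k: v for k, v in ann.items() if k != "segmentation"}
        for ann in coco_ann["annotations"]
    ]
    return out

# Tiny in-memory example in the standard COCO instances layout.
sample = {
    "images": [{"id": 1, "file_name": "000000000001.jpg"}],
    "annotations": [
        {"id": 10, "image_id": 1, "category_id": 3,
         "bbox": [10.0, 20.0, 50.0, 40.0],
         "segmentation": [[10, 20, 60, 20, 60, 60, 10, 60]]},
    ],
    "categories": [{"id": 3, "name": "car"}],
}

boxes_only = strip_segmentation(sample)
# To write a boxes-only file, one could then do:
#   with open("boxes_train2017.json", "w") as f:
#       json.dump(boxes_only, f)
```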

Apologies once again for the trouble, and I hope to receive your response!