Closed: pra-dan closed this issue 3 years ago
I find that the training process lacks clarity. Additional info on the training dataset format is missing from `toolkits/label_conversion/README.md`; I understand that it will be updated sometime soon. The docs specify the training data to be formatted as:
```
# The id represent the correspondence relation
├─dataset root
│ ├─images/              id.jpg
│ ├─det_annotations/     id.json
│ ├─da_seg_annotations/  id.png
│ ├─ll_seg_annotations/  id.png
```
But the dataset downloaded from the bdd100k site has the following structure.
```
.
└── segmentation
    ├── __MACOSX
    │   └── test
    ├── test
    │   ├── __MACOSX
    │   │   └── test
    │   └── test
    │       └── raw_images
    ├── train
    │   ├── __MACOSX
    │   │   └── train
    │   └── train
    │       ├── class_color
    │       ├── class_id
    │       ├── instance_color
    │       ├── instance_id
    │       └── raw_images
    └── val
        ├── __MACOSX
        │   └── val
        └── val
            ├── class_color
            ├── class_id
            ├── instance_color
            ├── instance_id
            └── raw_images
```
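For illustration, this is the kind of re-arrangement I would expect to script once the correspondence is confirmed; every source-folder choice below is a guess on my part (it is exactly what I am asking about), not a confirmed mapping:

```python
# Hypothetical re-arrangement sketch; the source folders chosen here are
# guesses, not the confirmed correspondence asked about in this issue.
import os

BDD_SEG_TRAIN = '/path/to/segmentation/train/train'  # downloaded bdd100k segmentation split
DATASET_ROOT = '/path/to/dataset_root'               # target layout from the docs

# Guess: raw_images -> images, class_id masks -> drivable-area annotations.
mapping = {
    'raw_images': 'images',
    'class_id': 'da_seg_annotations',
    # ll_seg_annotations would need lane-line masks, which this download
    # does not appear to contain.
}

os.makedirs(DATASET_ROOT, exist_ok=True)
for src, dst in mapping.items():
    src_path = os.path.join(BDD_SEG_TRAIN, src)
    dst_path = os.path.join(DATASET_ROOT, dst)
    if not os.path.exists(dst_path):
        os.symlink(src_path, dst_path)
```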
- It's unclear which of `instance_color`, `class_id`, and `instance_id` correspond to `det_annotations`, `da_seg_annotations`, and `ll_seg_annotations`. All of them are masks. I don't intend to use the object detection part, so the JSON conversion shouldn't be necessary for now.
- `lib/config/default.py` contains params such as:

```python
_C.DATASET.DATAROOT = '/home/zwt/bdd/bdd100k/images/100k'   # the path of images folder
_C.DATASET.LABELROOT = '/home/zwt/bdd/bdd100k/labels/100k'  # the path of det_annotations folder
_C.DATASET.MASKROOT = '/home/zwt/bdd/bdd_seg_gt'            # the path of da_seg_annotations folder
_C.DATASET.LANEROOT = '/home/zwt/bdd/bdd_lane_gt'
```

It would be better if more info could be provided for these paths so that they can be generalised (a rough sketch of what I mean follows below).
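Concretely, something along these lines is what I have in mind. This is only a rough sketch on my part, assuming `_C` in `lib/config/default.py` is a yacs `CfgNode` (the `_C.DATASET.*` pattern suggests it is); `DATASET_ROOT` and the sub-folder names are taken from the layout in the docs:

```python
# Rough sketch (my suggestion, not code from the repo): drive the four dataset
# paths from a single user-defined dataset root laid out as in the docs.
import os
from yacs.config import CfgNode as CN

DATASET_ROOT = '/path/to/dataset_root'  # user-defined download location

_C = CN()
_C.DATASET = CN()
_C.DATASET.DATAROOT = os.path.join(DATASET_ROOT, 'images')              # id.jpg
_C.DATASET.LABELROOT = os.path.join(DATASET_ROOT, 'det_annotations')    # id.json
_C.DATASET.MASKROOT = os.path.join(DATASET_ROOT, 'da_seg_annotations')  # id.png
_C.DATASET.LANEROOT = os.path.join(DATASET_ROOT, 'll_seg_annotations')  # id.png
```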
Thanks for your suggestion for our project; we will make improvements as soon as possible! We hope you will continue to follow the project!
I have uploaded our training labels to Google Drive; they can be downloaded through the link in README.md. Thank you for your attention and suggestions for our project.