Open QuangDucc opened 7 months ago
Hi,
Generally speaking, if you want to generate the masks with object detection labels (bounding boxes), you need to stack all boxes together for each image (object=1, background=0). If you want to generate the masks with segmentation labels, you can simply set all nonzero values to 1.
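The two cases above can be sketched in a few lines of NumPy. This is a minimal illustration, not the repo's actual code; the function names and box format `(x1, y1, x2, y2)` are my own assumptions:

```python
import numpy as np

def boxes_to_mask(boxes, height, width):
    """Stack detection boxes (x1, y1, x2, y2) into one binary mask:
    object = 1, background = 0."""
    mask = np.zeros((height, width), dtype=np.uint8)
    for x1, y1, x2, y2 in boxes:
        mask[y1:y2, x1:x2] = 1  # overlapping boxes simply stay 1
    return mask

def binarize_label(label):
    """Instance-segmentation colormap label -> 0/1 mask
    (set all nonzero values to 1)."""
    return (np.asarray(label) != 0).astype(np.uint8)

mask = boxes_to_mask([(2, 2, 5, 5), (4, 1, 8, 3)], 10, 10)
print(mask.sum())  # 16
```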
Btw, for the segmentation label, I mean the colormap labels for instance segmentation.
If you have more questions, please let me know.
I used the json_to_dataset.py script that comes with labelme to convert each .json file into a folder containing three .png files. Which .png file should be used for training? Or should I generate the .png images some other way? I'm really looking forward to your help answering my questions. (The following three pictures are the three .png images generated by labelme.)
The second one looks reasonable. But the label should not be an RGB image; you need to convert it to grayscale (i.e. extract the red channel of the image).
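The conversion suggested above can be done with plain NumPy slicing. A minimal sketch, assuming the label is loaded as an (H, W, 3) RGB array (the function name is hypothetical):

```python
import numpy as np

def rgb_label_to_gray(label_rgb):
    """(H, W, 3) RGB label image -> (H, W) single-channel label,
    by keeping only the red channel (channel 0)."""
    return np.asarray(label_rgb)[..., 0]

# Tiny demo array standing in for a labelme PNG:
rgb = np.zeros((4, 4, 3), dtype=np.uint8)
rgb[1:3, 1:3, 0] = 1  # class index stored in the red channel
gray = rgb_label_to_gray(rgb)
print(gray.sum())  # 4
```

With a real file you would load the PNG (e.g. via PIL's `Image.open`) into an array first, then save the returned single-channel array as the training mask.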
OK! I will do some processing on the dataset again.
Hello, Thank you for sharing the code.
I have a question about the BDD100K dataset you used in your experiment. As you describe, the training and validation data should follow this format:
root/
├── images/
│   ├── train/
│   │   ├── 000001.jpg
│   │   ├── 000002.jpg
│   │   ├── ......
├── masks/
│   ├── val/
│   │   ├── 000001.png
│   │   ├── 000002.png
│   │   ├── ......
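Once the files are arranged this way, it's easy to sanity-check that every image has a matching mask. A small sketch under the layout above (the helper name and the throwaway demo directory are my own, not part of the repo):

```python
from pathlib import Path
import os
import tempfile

def check_pairs(root):
    """Return image stems under images/train that have no matching
    mask under masks/train."""
    images = {p.stem for p in (Path(root) / "images" / "train").glob("*.jpg")}
    masks = {p.stem for p in (Path(root) / "masks" / "train").glob("*.png")}
    return sorted(images - masks)

# Demo on a throwaway layout with one mask deliberately missing:
root = tempfile.mkdtemp()
for d in ("images/train", "masks/train"):
    os.makedirs(os.path.join(root, d))
for stem in ("000001", "000002"):
    Path(root, "images", "train", stem + ".jpg").touch()
Path(root, "masks", "train", "000001.png").touch()
print(check_pairs(root))  # ['000002']
```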
I downloaded the dataset, but no part of it follows this format. Could you explain in more detail which part of BDD100K was used and how to set up the dataset for training?
Thank you in advance for your support.