andreasalamanos opened this issue 5 years ago
Dear rasputin1917,
I want to use Mask-RCNN to train on my custom dataset (just one class).
First of all, I use Labelme to create groundtruth for each image. Each image has a .json file as its ground-truth.
After that, I don't know what I should do...
I want to know how to convert my ground-truth file to a format that Mask-RCNN can use. Do you have any tools?
Looking forward to hearing from you, thanks in advance.
With best regards, Ming
@May-forever
I want to know how to convert my ground-truth file to a format that Mask-RCNN can use. Do you have any tools?
Yes. What format are your annotations in currently? The Matterport Mask RCNN implementation supports the VIA region JSON format. I have a converter tool, though I need to know your current format (like Pascal VOC XML or COCO JSON) to see if it's supported.
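For anyone unsure what that format looks like: below is a minimal sketch of a VIA 2.x region entry, written out in Python, mirroring the layout the Matterport balloon sample reads from via_region_data.json. The filename, size, and polygon points here are placeholders, not from this thread.
# Hand-written example of the VIA region JSON layout used by the Matterport
# balloon sample; filename, size and polygon points below are placeholders.
import json

via_annotations = {
    "example.jpg123456": {                  # VIA keys are filename + file size
        "filename": "example.jpg",
        "size": 123456,
        "regions": [
            {
                "shape_attributes": {
                    "name": "polygon",
                    "all_points_x": [120, 180, 160, 110],
                    "all_points_y": [90, 95, 150, 140],
                },
                "region_attributes": {},    # per-region class labels can go here
            }
        ],
        "file_attributes": {},
    }
}

with open("via_region_data.json", "w") as f:
    json.dump(via_annotations, f)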
My dataset is in VIA JSON format; now I want to convert it into COCO segmentation format.
@RiyazAina-DeepML You should use the labelme2coco package to convert your annotations to COCO format. It is really easy.
Installation:
pip install labelme2coco
Usage:
# import package
import labelme2coco
# set directory that contains labelme annotations and image files
labelme_folder = "tests/data/labelme_annot"
# set path for coco json to be saved
save_json_path = "tests/data/test_coco.json"
# convert labelme annotations to coco
labelme2coco.convert(labelme_folder, save_json_path)
Hi, I have binary masks in .png format, but to train the Mask RCNN model on custom data I need the masks as a .json file. So is there any way to convert binary masks into a .json file directly, with coordinates?
@jeyatharani You can use the cv2.findContours() function to find the polygon points, then write them to a JSON file.
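A minimal sketch of that approach, assuming OpenCV 4.x and a single-class binary mask saved as mask.png; the input/output paths and the JSON layout below are placeholders, not an official COCO or VIA file:
# Sketch: extract polygon points from a binary mask with cv2.findContours()
# and dump them to JSON; paths and the output layout are placeholders.
import json
import cv2

mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)   # force strict 0/255

# One contour per connected blob (OpenCV 4.x returns contours, hierarchy)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

annotations = []
for i, contour in enumerate(contours):
    points = contour.reshape(-1, 2).tolist()         # [[x1, y1], [x2, y2], ...]
    x, y, w, h = cv2.boundingRect(contour)
    annotations.append({
        "id": i,
        "category_id": 1,
        "segmentation": [sum(points, [])],           # flatten to [x1, y1, x2, y2, ...]
        "bbox": [x, y, w, h],
        "area": float(cv2.contourArea(contour)),
    })

with open("mask_annotations.json", "w") as f:
    json.dump(annotations, f)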
I have COCO JSON format and I want to convert it to the format supported by Mask RCNN, that is, the VIA region JSON format. Please guide me on how I can do this.
Did you find the answer to your question? I have the same problem: my annotations are in COCO JSON format and I want to convert them to VGG (VIA) JSON format.
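This step never gets a direct answer in the thread, so here is a rough sketch of one way to turn COCO polygon annotations into the VIA-style dict shown earlier. The file names are placeholders, RLE (iscrowd) segmentations are not handled, and only the polygon geometry is copied, not class labels:
# Rough sketch: COCO polygon annotations -> VIA-style region dict.
# File names are placeholders; RLE segmentations are not handled.
import json
from collections import defaultdict

with open("annotations_coco.json") as f:
    coco = json.load(f)

images = {img["id"]: img for img in coco["images"]}
regions_per_image = defaultdict(list)

for ann in coco["annotations"]:
    for poly in ann.get("segmentation", []):          # [x1, y1, x2, y2, ...]
        regions_per_image[ann["image_id"]].append({
            "shape_attributes": {
                "name": "polygon",
                "all_points_x": [int(x) for x in poly[0::2]],
                "all_points_y": [int(y) for y in poly[1::2]],
            },
            "region_attributes": {},
        })

via = {}
for image_id, regions in regions_per_image.items():
    filename = images[image_id]["file_name"]
    via[filename] = {
        "filename": filename,
        "size": 0,                # real VIA exports store the file size here
        "regions": regions,
        "file_attributes": {},
    }

with open("via_region_data.json", "w") as f:
    json.dump(via, f)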
Hello, thanks for Mask-RCNN!
I have some questions. Perhaps it's basic, but I am new to the field. I have created .png binary masks for every image of my training dataset, and then a .json file using these. My images look like the nucleus ones: small, randomly distributed sources, sometimes only a few pixels.
So my repository with the training dataset consists of (names and directory names not accurate):
So the format of my json file is:
[{"id": 1, "category_id": 1, "bbox": [134.0, 85.0, 1.0, 1.0], "width": 227, "area": 1, "height": 227, "iscrowd": 0, "segmentation": [[134.0, 85.5, 133.5, 85.0, 134.0, 84.5, 134.5, 85.0, 134.0, 85.5]], "image_id": 1}, {"id": 2, "category_id": 1, "bbox": [155.0, 92.0, 19.0, 32.0], "width": 227, "area": 434, "height": 227, "iscrowd": 0, "segmentation": [[162.0, 123.5, 158.0, 123.5, 155.5, 121.0, 155.5, 119.0, 154.5, 118.0, 154.5, 110.0, 155.5, 109.0, 155.5, 106.0, 156.5, 105.0, 156.5, 103.0, 157.5, 102.0, 157.5, 101.0, 158.5, 100.0, 158.5, 99.0, 160.5, 97.0, 160.5, 96.0, 164.0, 92.5, 165.0, 92.5, 166.0, 91.5, 170.0, 91.5, 172.5, 94.0, 172.5, 96.0, 173.5, 97.0, 173.5, 104.0, 172.5, 105.0, 172.5, 108.0, 171.5, 109.0, 171.5, 111.0, 170.5, 112.0, 170.5, 114.0, 168.5, 116.0, 168.5, 117.0, 166.5, 119.0, 166.5, 120.0, 165.0, 121.5, 164.0, 121.5, 162.0, 123.5]], "image_id": 1}, {"id": 3, "category_id": 1, "bbox": [164.0, 136.0, 4.0, 2.0], "width": 227, "area": 7, "height": 227, "iscrowd": 0, "segmentation": [[167.0, 137.5, 165.0, 137.5, 163.5, 136.0, 164.0, 135.5, 167.0, 135.5, 167.5, 136.0, 167.5, 137.0, 167.0, 137.5]], "image_id": 1},....etc......
I have a few questions:
1. Is this a format that this specific implementation of Mask-RCNN accepts for training?
2. Which is the right format? Can anyone please explain? As I saw from the balloon JSON format, it doesn't look like the official COCO format, right?
3. Is there a clever way to transform my binary masks into the right .json training format?
I need to train the model exclusively on my own custom data, so I have to create my own training dataset. I don't think I could do the segmentation manually with an annotation tool, because I have a few hundred thousand images, with some hundreds of objects in each. So converting my binary masks into a training dataset is the best option.
Please, somebody help!
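One option for question 3 above, in the spirit of the nucleus sample the poster mentions: skip the JSON conversion entirely and have the Dataset subclass read the binary .png masks directly in load_mask(). A minimal sketch, assuming a nucleus-style layout of dataset_dir/subset/image_id/images/image_id.png plus one mask file per object under .../masks/; the "custom" class name and the directory layout are assumptions, not something given in the thread:
# Minimal sketch of a Matterport Mask R-CNN dataset that reads binary .png
# masks directly (nucleus-sample style); paths, the "custom" class name and
# the masks/ layout are assumptions.
import os
import numpy as np
import skimage.io
from mrcnn import utils

class BinaryMaskDataset(utils.Dataset):
    def load_custom(self, dataset_dir, subset):
        self.add_class("custom", 1, "custom")           # single foreground class
        subset_dir = os.path.join(dataset_dir, subset)
        for image_id in os.listdir(subset_dir):
            self.add_image(
                "custom",
                image_id=image_id,
                path=os.path.join(subset_dir, image_id, "images", image_id + ".png"),
            )

    def load_mask(self, image_id):
        info = self.image_info[image_id]
        # .../image_id/images/image_id.png -> .../image_id/masks/
        mask_dir = os.path.join(os.path.dirname(os.path.dirname(info["path"])), "masks")
        masks = []
        for fname in os.listdir(mask_dir):              # one binary .png per object
            m = skimage.io.imread(os.path.join(mask_dir, fname)).astype(bool)
            masks.append(m)
        mask = np.stack(masks, axis=-1)                 # [height, width, instances]
        class_ids = np.ones(mask.shape[-1], dtype=np.int32)
        return mask, class_ids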