uber-research / UPSNet

UPSNet: A Unified Panoptic Segmentation Network

Training on own dataset #104

Open discretecoder opened 4 years ago

discretecoder commented 4 years ago

I am trying to train the network on a custom dataset where I have the RGB images and panoptic labels in PNG format. Following your tip, I am trying to get the labels into COCO format, and I have the following questions. It would be a great help if you could answer them.

  1. I created COCO-format instances_train/val2017.json files following

http://www.immersivelimit.com/tutorials/create-coco-annotations-from-scratch

(but since I have panoptic labels, I included both semantic and instance annotations in the json; would that be a problem?) and I have created the panoptic_coco_categories.json file as well.
So what I have is: the original RGB images, the panoptic label images in PNG format, the categories json file, and the instances json file. Would that be sufficient for training?

  2. How do I run inference on my own images with Cityscapes or COCO weights, without providing ground truth, i.e., panoptic inference based only on an RGB image input?

  3. I was wondering whether you used a COCO annotator tool to get the annotations for your dataset into COCO format.

  4. How can I train on images with more or fewer than 3 channels?

  5. Is it possible to train only with RGB images and labels in PNG format? How would that work? Thanks :)
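For context on the PNG-label question: in the COCO panoptic format, the label PNGs are not class-color images; each pixel packs a segment id into the RGB channels as id = R + 256*G + 256²*B. The two helpers below are a self-contained sketch mirroring panopticapi's `rgb2id`/`id2rgb` utilities:

```python
import numpy as np

# COCO panoptic label PNGs store a segment id per pixel, packed into the
# RGB channels as: id = R + 256 * G + 256**2 * B.
# These helpers mirror panopticapi.utils.rgb2id / id2rgb.

def rgb2id(color):
    """Decode an (..., 3) uint8 RGB array into an (...,) segment-id array."""
    color = np.asarray(color, dtype=np.uint32)
    return color[..., 0] + 256 * color[..., 1] + 256 ** 2 * color[..., 2]

def id2rgb(id_map):
    """Encode an (...,) segment-id array into an (..., 3) uint8 RGB array."""
    id_map = np.asarray(id_map, dtype=np.uint32)
    rgb = np.zeros(id_map.shape + (3,), dtype=np.uint8)
    for i in range(3):
        rgb[..., i] = id_map % 256
        id_map = id_map // 256
    return rgb

# Round trip for one segment id:
print(int(rgb2id(id2rgb(np.array([3937500]))[0])))  # 3937500
```

So training from "RGB images plus PNG labels" amounts to making sure the PNGs use this id encoding and that every id appears in the accompanying json.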

wshilaji commented 3 years ago

Have you solved your problem? I am trying to train the network on a custom dataset too. How do I create Panoptic_{}2017.json?
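One way to build that file (a hedged sketch, not the authors' pipeline; the official panopticapi repo also ships format converters): decode each panoptic PNG into a segment-id map, collect per-segment stats, and write them into the COCO panoptic json layout. Field names below follow the COCO panoptic format; the file names, category ids, and the toy id map are made up for illustration:

```python
import json
import numpy as np

def segments_from_id_map(id_map, id_to_category, ignore_id=0):
    """Turn a decoded segment-id map into a COCO panoptic segments_info list."""
    segments = []
    for seg_id in np.unique(id_map):
        if seg_id == ignore_id:  # 0 conventionally means "unlabeled"
            continue
        mask = id_map == seg_id
        ys, xs = np.nonzero(mask)
        x0, y0 = int(xs.min()), int(ys.min())
        segments.append({
            "id": int(seg_id),
            "category_id": id_to_category[int(seg_id)],
            "area": int(mask.sum()),
            "bbox": [x0, y0, int(xs.max()) - x0 + 1, int(ys.max()) - y0 + 1],
            "iscrowd": 0,
        })
    return segments

# Toy 4x4 id map with two segments (ids 7 and 9); in practice this comes
# from decoding a panoptic PNG with id = R + 256*G + 256**2*B.
id_map = np.array([
    [0, 0, 7, 7],
    [0, 0, 7, 7],
    [9, 9, 0, 0],
    [9, 9, 0, 0],
])

panoptic_json = {
    "images": [{"id": 1, "file_name": "img_000001.jpg",
                "width": 4, "height": 4}],
    "annotations": [{
        "image_id": 1,
        "file_name": "img_000001.png",  # the panoptic label PNG
        "segments_info": segments_from_id_map(id_map, {7: 1, 9: 2}),
    }],
    "categories": [
        {"id": 1, "name": "person", "supercategory": "person", "isthing": 1},
        {"id": 2, "name": "grass", "supercategory": "plant", "isthing": 0},
    ],
}
print(json.dumps(panoptic_json)[:60])
```

Dumping `panoptic_json` for each split (train/val) gives a file with the same top-level shape as the official panoptic_{train,val}2017.json.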