uncbiag / SimpleClick

SimpleClick: Interactive Image Segmentation with Simple Vision Transformers (ICCV 2023)
MIT License

Can I train with my own dataset? #31

Closed · Zarxrax closed this issue 7 months ago

Zarxrax commented 7 months ago

How would I need to set up my dataset to be able to train with it? Do I need to modify some code? Currently my dataset consists of images + alpha masks. What format would I need to convert it to?

qinliuliuqin commented 7 months ago

@Zarxrax Hi, yes, you can train on your own dataset. You need to implement a dataset class for it. Please refer to the existing dataset classes in https://github.com/uncbiag/SimpleClick/tree/v1.0/isegm/data/datasets
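
For reference, here is a minimal sketch of what such a class might look like, assuming the `ISDataset`/`DSample` pattern that the existing classes in that folder follow. The class name, the file layout (`images/*.jpg` with matching `masks/*.png`), and the alpha threshold are all made up for illustration:

```python
from pathlib import Path

import cv2
import numpy as np

from isegm.data.base import ISDataset
from isegm.data.sample import DSample


class MyDataset(ISDataset):
    def __init__(self, dataset_path, images_dir_name='images',
                 masks_dir_name='masks', **kwargs):
        super(MyDataset, self).__init__(**kwargs)
        self.dataset_path = Path(dataset_path)
        self._images_path = self.dataset_path / images_dir_name
        self._masks_path = self.dataset_path / masks_dir_name

        # One sample per image; the mask is matched by file stem.
        self.dataset_samples = [p.name for p in sorted(self._images_path.glob('*.jpg'))]

    def get_sample(self, index) -> DSample:
        image_name = self.dataset_samples[index]
        image = cv2.imread(str(self._images_path / image_name))
        image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

        # Alpha mattes are continuous in [0, 255]; binarize them into a
        # single foreground instance (object id 1) for training.
        mask_path = str(self._masks_path / (Path(image_name).stem + '.png'))
        alpha = cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE)
        instances_mask = (alpha > 127).astype(np.int32)

        return DSample(image, instances_mask, objects_ids=[1], sample_id=index)
```

You would likely also need to export the new class from `isegm/data/datasets/__init__.py` so the training scripts can import it.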

Zarxrax commented 7 months ago

Okay, I think I see. I downloaded several of the datasets, and a couple of them look similar to mine, so I think it should be easy enough.

My next question is: when I run the training script, how do I specify which dataset to train on? I see in the config.yml I can put the path to my dataset. Would I simply delete any datasets from that file which I don't want to use, and then it will automatically choose mine?

qinliuliuqin commented 7 months ago

No, you need to specify your training dataset in the model script itself. For example, if you use COCO+LVIS as the training set, it is instantiated directly in the code, as shown here: https://github.com/uncbiag/SimpleClick/blob/f33112e5a37ac97b9adfb9cda0106830e77b5b7d/models/iter_mask/plainvit_base448_cocolvis_itermask.py#L87

The config.yml only tells the training script where to find the data, for example, the path to the COCO+LVIS dataset in the above case.
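
For concreteness, a sketch of that swap in a copy of the model script. `MyDataset` is the hypothetical class from above, `MY_DATASET_PATH` is a made-up config key, and the import location is an assumption; only `CocoLvisDataset` and the surrounding script are from the repo:

```python
# In your copy of the model script, replace the CocoLvisDataset
# instantiation with your own dataset class.
from isegm.data.datasets import MyDataset  # assumes the class is exported there

trainset = MyDataset(
    cfg.MY_DATASET_PATH,          # resolved from your config.yml entry
    augmentator=train_augmentator,
    min_object_area=1000,
    points_sampler=points_sampler,
    epoch_len=30000
)

valset = MyDataset(
    cfg.MY_DATASET_PATH,
    augmentator=val_augmentator,
    min_object_area=1000,
    points_sampler=points_sampler,
    epoch_len=2000
)
```

The matching config.yml entry would then just be the path, e.g. `MY_DATASET_PATH: /path/to/my_dataset`.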

Zarxrax commented 7 months ago

Ah, perfect! Thank you.