FZfangzheng / Swin-Transformer-Semantic-Segmentation-Without-mmsegmentation

Unofficial implementation of Swin-Transformer semantic segmentation; the code is relatively self-contained and easy to add to other models.
Apache License 2.0

Minimal code for training a specific segmentation architecture with a particular backbone on a custom data set? #2

Open deshwalmahesh opened 2 years ago

deshwalmahesh commented 2 years ago

Hi, if you have experience with this, could you add minimal code for training on a custom data set, say on just 1 dummy image for 1 epoch?

For example, say I want to train Swin-B (or any other Swin model) with DeepLabv3+ (or another head such as FPN) using focal/dice loss on a data set where I have images -> masks. Just like your demo.py file, could you write something for training?

When I pulled the mmseg repo to do that, I ran into lots of errors, and get_started.md, which had the info about preparing a dataset for training, no longer exists.
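
Something like this rough sketch is what I have in mind. Every name in it (the toy `model`, `ImageMaskDataset`, the dummy tensors) is just a placeholder and not this repo's API; the idea would be to swap the toy `model` for a Swin backbone plus decode head:

```python
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader

class ImageMaskDataset(Dataset):
    """Placeholder dataset: pairs of image tensors and integer mask tensors."""
    def __init__(self, images, masks):
        self.images = images   # list of (3, H, W) float tensors
        self.masks = masks     # list of (H, W) long tensors holding class indices
    def __len__(self):
        return len(self.images)
    def __getitem__(self, idx):
        return self.images[idx], self.masks[idx]

def dice_loss(logits, target, num_classes, eps=1e-6):
    """Soft dice loss over one-hot encoded targets."""
    probs = torch.softmax(logits, dim=1)
    one_hot = torch.nn.functional.one_hot(target, num_classes).permute(0, 3, 1, 2).float()
    inter = (probs * one_hot).sum(dim=(2, 3))
    union = probs.sum(dim=(2, 3)) + one_hot.sum(dim=(2, 3))
    return 1.0 - ((2 * inter + eps) / (union + eps)).mean()

# `model` is only a stand-in for "Swin backbone + decode head": anything that
# maps (B, 3, H, W) images to (B, num_classes, H, W) logits would slot in here.
num_classes = 3
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, num_classes, 1))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# 1 dummy image and mask, trained for 1 epoch, as described above.
dummy = ImageMaskDataset([torch.rand(3, 64, 64)], [torch.randint(0, num_classes, (64, 64))])
loader = DataLoader(dummy, batch_size=1)

model.train()
for epoch in range(1):
    for img, mask in loader:
        logits = model(img)
        loss = dice_loss(logits, mask, num_classes)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        print(f"epoch {epoch}  dice loss {loss.item():.4f}")
```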

FZfangzheng commented 2 years ago

Hi, sorry, I may not have time to update this code in the near future. What this code is missing is the data set, so you may consider customizing the data input yourself. One thing to note is to modify the corresponding num_classes to match the number of classes in your dataset:

def __init__(self, in_channels=512, in_index=2, channels=256, num_convs=1, concat_input=False, dropout_ratio=0.1, num_classes=3,

https://github.com/FZfangzheng/Swin-Transformer-Semantic-Segmentation-Without-mmsegmentation/blob/becd4c412946f571aad0a2a505226274b59c9001/swin_transformer/decode_heads/uper_head.py#L24
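
For example, something along these lines (a sketch only, not tested against this repo; the class name and import path are assumed from the file linked above, and the keyword arguments are just the ones quoted from its constructor, so check them against the actual signature at uper_head.py line 24):

```python
# Assumed import: a UPerHead-style class defined in the linked file.
from swin_transformer.decode_heads.uper_head import UPerHead

NUM_CLASSES = 5  # set this to the number of classes in your own dataset

head = UPerHead(
    in_channels=512,
    in_index=2,
    channels=256,
    num_convs=1,
    concat_input=False,
    dropout_ratio=0.1,
    num_classes=NUM_CLASSES,  # override the default of 3 quoted above
)
```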