lkevinzc / dance

Code for "DANCE: A Deep Attentive Contour Model for Efficient Instance Segmentation", WACV 2021

Train on or apply to a different dataset #25

Closed Nal44 closed 9 months ago

Nal44 commented 9 months ago

Hi,

I really like your work! I would like to apply DANCE to another dataset, so I need to train from scratch. Is this possible?

Also, do I need to apply any filters or normalization techniques to the images before training?

Alternatively, I would like to use your algorithm to refine another segmentation model, i.e. using the masks generated by the first algorithm as seeds for DANCE, so it can make refinements by finding the "real" contours.

Is option 1 or 2 (or both) possible, and how would I do so?

Thanks a lot,

lkevinzc commented 9 months ago

Thanks for your interest!

1) Yes, it is possible to train on a new dataset; you can use a pre-trained ResNet as initialisation to train the segmentation model.
2) You can use the same normalization as in the pre-training phase (usually `(image - mean) / std`).
3) To refine another segmentation model, you can use the idea of iterative refinement: first generate the initial contour based on the other segmentation model's result, then start from there to train DANCE. Basically, this replaces the bounding-box detector with an initial mask segmentor.
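For point 2), a minimal sketch of the per-channel `(image - mean) / std` normalization, assuming the standard ImageNet statistics commonly used with pre-trained ResNet backbones (check the exact values your pre-training used):

```python
import numpy as np

# ImageNet per-channel statistics, the usual choice for pre-trained ResNets.
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406])
IMAGENET_STD = np.array([0.229, 0.224, 0.225])

def normalize(image: np.ndarray) -> np.ndarray:
    """Normalize an HxWx3 image with values in [0, 1]:
    subtract the per-channel mean, divide by the per-channel std."""
    return (image - IMAGENET_MEAN) / IMAGENET_STD

# Example: a mid-gray image maps to values near zero after normalization.
gray = np.full((4, 4, 3), 0.5)
out = normalize(gray)
```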

Nal44 commented 9 months ago

Hi, thanks a lot!

For 3), how do I start using your code? I checked, but I don't understand how to run it. Could you give me some instructions, or the general concept of using your code? That would make it easier to implement.

Perhaps an example would be ideal: load the model, load the mask segmentation (replacing the bounding box as you suggested), then run the model?

Thanks a lot,

Nal44 commented 9 months ago

Hi,

Could you explain, as a start, how to run inference on a new image (or a new dataset)? That would help me get started; then I can dig into the code and find out how to load the "seeds" as the initial mask segmentor.

That would be greatly appreciated, thanks!

lkevinzc commented 9 months ago

Hi, I think you can use the command here (https://github.com/lkevinzc/dance?tab=readme-ov-file#evaluation) to do inference. Then you can trace the code to see the implementation of the iterative refinement process, and see how to apply this idea to your work.

Nal44 commented 9 months ago

Hi, OK, thanks, this is a start :)

If I understand correctly, the boxes are generated using the file /core/structures/points_set.py, correct?

The only thing I am not sure about is:
- do I need the boxes (bboxes) first (in my case, the seeds as the initial segmentation), and then calculate the extreme points, OR
- are the extreme points found first, and then the bboxes generated?

From there, the polygons will be created and the algorithm will do the next steps.

Can you please clarify? That will be enough for me to start :)

Thanks a lot,

lkevinzc commented 9 months ago

Hi @Nal44, yes, I use the structures in https://github.com/lkevinzc/dance/blob/master/core/structures/pointset.py for postprocessing (https://github.com/lkevinzc/dance/blob/master/core/modeling/postprocessing.py).

I think if you want to use a segmentation model to output a seed contour and then do iterative refinement, the first thing is to generate contour vertices from your initial mask. The extreme points may not be important in your case, because they are an even coarser representation than your initial mask (a polygon constructed from 4 extreme points vs. a polygon from an initial mask).

In my opinion, you could convert your initial (inaccurate) mask to a set of well-distributed vertices along the contour to form the seed polygon.
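One way to sketch this "well-distributed vertices along the contour" step: given a contour already extracted from the initial mask (e.g. with `cv2.findContours`; the extraction itself is assumed here), resample it to N vertices spaced uniformly by arc length. This is an illustrative sketch, not the repo's own implementation:

```python
import numpy as np

def resample_contour(vertices: np.ndarray, n_points: int) -> np.ndarray:
    """Resample a closed polygon (Kx2 array of x,y vertices) to
    n_points vertices spaced uniformly by arc length."""
    # Close the polygon by appending the first vertex at the end.
    closed = np.vstack([vertices, vertices[:1]])
    # Cumulative arc length at each vertex, starting from 0.
    seg_lengths = np.linalg.norm(np.diff(closed, axis=0), axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seg_lengths)])
    perimeter = cum[-1]
    # Evenly spaced target positions along the perimeter.
    targets = np.linspace(0.0, perimeter, n_points, endpoint=False)
    # Interpolate x and y as piecewise-linear functions of arc length.
    x = np.interp(targets, cum, closed[:, 0])
    y = np.interp(targets, cum, closed[:, 1])
    return np.stack([x, y], axis=1)

# Example: a unit-square contour resampled to 8 evenly spaced seed vertices.
square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
poly = resample_contour(square, 8)
```

The resulting `poly` could then serve as the seed polygon that replaces the box-derived initial contour.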

Nal44 commented 9 months ago

Hi,

OK, that is helpful :) I will try over the weekend :D

One thing, though: my masks are (most of the time) slightly smaller than the objects, so the active contour needs to inflate (like a balloon). Can the algorithm both inflate and deflate, or only deflate to find the objects?

Thanks a lot,

lkevinzc commented 9 months ago

Hi, it should handle both the inflate and deflate cases, as long as the ground-truth labels are correctly provided for training.

Nal44 commented 9 months ago

Hi,

OK, great! I also need to either convert my files into the COCO format or adapt the code to run with .tif files.
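For the COCO conversion route, a minimal sketch of the annotation structure (the file name `frame_0001.tif` and the single `object` category are hypothetical; the rectangular `segmentation` polygon is a placeholder -- in practice you would use real contour vertices from the mask):

```python
import json
import numpy as np

def mask_to_coco_annotation(mask, image_id, ann_id, category_id=1):
    """Build one COCO-style annotation dict from a binary mask.
    bbox is [x, y, width, height]; segmentation is a flat polygon
    [x1, y1, x2, y2, ...] (here just the bbox corners as a stand-in)."""
    ys, xs = np.nonzero(mask)
    x0, y0, x1, y1 = xs.min(), ys.min(), xs.max(), ys.max()
    w, h = int(x1 - x0 + 1), int(y1 - y0 + 1)
    polygon = [float(v) for v in
               (x0, y0, x1 + 1, y0, x1 + 1, y1 + 1, x0, y1 + 1)]
    return {
        "id": ann_id,
        "image_id": image_id,
        "category_id": category_id,
        "bbox": [int(x0), int(y0), w, h],
        "area": int(mask.sum()),
        "segmentation": [polygon],
        "iscrowd": 0,
    }

# Minimal dataset skeleton: one image, one annotation, one category.
mask = np.zeros((16, 16), dtype=np.uint8)
mask[4:10, 2:8] = 1
coco = {
    "images": [{"id": 1, "file_name": "frame_0001.tif",
                "height": 16, "width": 16}],
    "annotations": [mask_to_coco_annotation(mask, image_id=1, ann_id=1)],
    "categories": [{"id": 1, "name": "object"}],
}
coco_json = json.dumps(coco)  # serializes cleanly for a COCO-style loader
```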

Now let's code! Thanks a lot for the answers; hopefully I will manage to make it work.