google-research / deeplab2

DeepLab2 is a TensorFlow library for deep labeling, aiming to provide a unified and state-of-the-art TensorFlow codebase for dense pixel labeling tasks.
Apache License 2.0

About `window_size` in MOAT #146

Open edwardyehuang opened 1 year ago

edwardyehuang commented 1 year ago

I found that `window_size` is None in MOAT: https://github.com/google-research/deeplab2/blob/eb66c852f86c2add70cf7067dd5430ddb2df3b5f/model/pixel_encoder/moat.py#L347-L360 https://github.com/google-research/deeplab2/blob/eb66c852f86c2add70cf7067dd5430ddb2df3b5f/model/pixel_encoder/moat.py#L405

Is only global attention used for segmentation tasks?

Chenglin-Yang commented 1 year ago

Thanks for your interest!

Please see https://github.com/google-research/deeplab2/blob/7a01a7165e97b3325ad7ea9b6bcc02d67fecd07a/model/layers/moat_blocks.py#L329 for how to specify the desired window size for your use case.

Our settings can be found in the experimental sections of the paper, but I can provide the information here: For COCO object detection, we use window-based attention with window size 14x14 for the third stage and global attention for the fourth stage. For ADE20K semantic segmentation, we use global attention for both the third and fourth stages.
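The difference between the two settings can be sketched with a small NumPy helper (`partition_windows` is hypothetical, for illustration only, not the DeepLab2 API): with a concrete window size the feature map is cut into non-overlapping tiles that attend internally, while `window_size=None` treats the whole map as one "global" window.

```python
import numpy as np

def partition_windows(features, window_size=None):
    """Split a (H, W, C) feature map into attention windows.

    If window_size is None, the whole map forms a single global window,
    mirroring global attention. Otherwise the map is cut into
    non-overlapping window_size x window_size tiles.
    Hypothetical helper for illustration; not the DeepLab2 API.
    """
    h, w, c = features.shape
    if window_size is None:  # global attention: one window over all tokens
        return features.reshape(1, h * w, c)
    ws = window_size
    assert h % ws == 0 and w % ws == 0, "input must be divisible by window size"
    x = features.reshape(h // ws, ws, w // ws, ws, c)
    x = x.transpose(0, 2, 1, 3, 4)       # (nH, nW, ws, ws, c)
    return x.reshape(-1, ws * ws, c)     # (num_windows, tokens_per_window, c)

# Third stage, COCO setting: 14x14 windows on a 28x28 map -> 4 windows.
stage3 = partition_windows(np.zeros((28, 28, 8)), window_size=14)
# Fourth stage: global attention -> a single window of all 49 tokens.
stage4 = partition_windows(np.zeros((7, 7, 8)), window_size=None)
print(stage3.shape, stage4.shape)  # (4, 196, 8) (1, 49, 8)
```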

edwardyehuang commented 1 year ago

> Thanks for your interest!
>
> Please see
>
> https://github.com/google-research/deeplab2/blob/7a01a7165e97b3325ad7ea9b6bcc02d67fecd07a/model/layers/moat_blocks.py#L329
>
> for how to specify the desired window size for the use case. Our setting can be found in the experimental sections on paper, but I can provide the information here: For COCO object detection, we use window based attention for the third stage with size 14x14 and global attention for the fourth stage. For ADE20K semantic segmentation, we use global attention for both third and fourth stages.

Thanks for pointing that out.

I also noticed that the implementation of the global window is flawed.

When `window_size` is None, the current implementation still records a fixed window size derived from the input size at build time. Therefore, if a later input size differs from the recorded size, the "global" attention is either limited to the recorded window or raises an error directly (e.g., when the input size is smaller than the recorded window size).

Chenglin-Yang commented 1 year ago

Thank you for finding this issue.

If you want to evaluate the model with an input size different from the one used in training, you will need to create another model built with that input size and load the trained weights into it. This is how the current TensorFlow model works.
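The rebuild-and-reload pattern can be sketched with a toy stand-in (`build_model` and `load_weights` are hypothetical, not the DeepLab2 API): the architecture is reconstructed at the new input size, and only the learned parameters, which do not depend on the spatial size, are copied over.

```python
import numpy as np

def build_model(input_hw):
    """Hypothetical model factory: the attention window is frozen to the
    build-time input size, so a new input size requires a fresh build."""
    return {"window": input_hw,              # fixed at build time
            "kernel": np.zeros((3, 3, 8))}   # learned, size-independent

def load_weights(target, source):
    # Only the learned parameters transfer; the window stays as built.
    target["kernel"] = source["kernel"].copy()

train_model = build_model((512, 512))
train_model["kernel"] += 1.0                 # stand-in for training

eval_model = build_model((1024, 1024))       # rebuild at the eval size
load_weights(eval_model, train_model)
print(eval_model["window"])                  # (1024, 1024)
```

The same idea applies to real Keras models: build a second model with the evaluation input shape and transfer the weights from the trained one.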