pUmpKin-Co / offical-IndexNet

This is the official implementation of IndexNet.

Training process #3

Closed: yeelinsen closed this issue 1 year ago

yeelinsen commented 1 year ago

@pUmpKin-Co Hi, I've read the code but couldn't find the self-supervised pretraining and fine-tuning process in it. Could you please provide the self-supervised pretraining and fine-tuning code? In addition, I noticed that in IndexNet the ConvMLP uses linear layers rather than 1x1 conv layers, which is inconsistent with the paper.

pUmpKin-Co commented 1 year ago

Hi~! I have updated the repository with the pretraining code trainPL.py and the DeepLabV3+ fine-tuning code main.py.
During implementation, we found that BatchNorm1d + Linear runs faster than BatchNorm2d + 1x1 Conv and produces essentially the same results, so we chose BatchNorm1d + Linear. The two are equivalent; it is simply a training-speed trick.
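
For anyone wondering why the two are interchangeable: a 1x1 convolution applies the same weight matrix independently at every spatial position, which is exactly what a Linear layer does over flattened H*W tokens, and BatchNorm1d over a (B, C, H*W) tensor computes the same per-channel statistics as BatchNorm2d over (B, C, H, W). Below is a minimal PyTorch sketch of this equivalence; it is not code from this repository, and the tensor shapes and variable names are made up purely for illustration.

```python
import torch
import torch.nn as nn

# Hypothetical shapes, used only for this demonstration (not from the repo).
B, C_in, C_out, H, W = 2, 64, 128, 8, 8
x = torch.randn(B, C_in, H, W)

# --- 2D path: 1x1 Conv + BatchNorm2d, as described in the paper ---
conv = nn.Conv2d(C_in, C_out, kernel_size=1, bias=True)
bn2d = nn.BatchNorm2d(C_out)

# --- 1D path: Linear + BatchNorm1d, as used in the released code ---
linear = nn.Linear(C_in, C_out, bias=True)
bn1d = nn.BatchNorm1d(C_out)

# Share parameters so both paths compute the same function.
with torch.no_grad():
    linear.weight.copy_(conv.weight.view(C_out, C_in))
    linear.bias.copy_(conv.bias)
bn1d.load_state_dict(bn2d.state_dict())

# 2D path operates directly on (B, C, H, W).
y2d = bn2d(conv(x))

# 1D path operates on flattened spatial positions: (B, H*W, C) for Linear,
# then (B, C, H*W) for BatchNorm1d, which normalizes per channel like BatchNorm2d.
tokens = x.flatten(2).transpose(1, 2)            # (B, H*W, C_in)
y1d = bn1d(linear(tokens).transpose(1, 2))       # (B, C_out, H*W)
y1d = y1d.view(B, C_out, H, W)

# Identical output up to floating-point error.
print(torch.allclose(y2d, y1d, atol=1e-5))       # True
```

The only practical difference is memory layout: the 1D path avoids the per-position overhead of a spatial convolution kernel, which is where the speedup comes from.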

yeelinsen commented 1 year ago

Thank you for your patient reply. Respect