qubvel / segmentation_models.pytorch

Semantic segmentation models with 500+ pretrained convolutional and transformer-based backbones.
https://smp.readthedocs.io/
MIT License

Are SMP pretrained encoders' weights frozen by default? (since there's no parameter for that) #793

Closed. neotod closed this issue 11 months ago.

neotod commented 12 months ago

For SM Keras, there's a parameter for the encoder that you can set when you build the model class; it's documented here -> https://segmentation-models.readthedocs.io/en/latest/tutorial.html#fine-tuning

I wonder if such a thing exists in SMP (the PyTorch version) too, because not freezing the encoder's weights will ruin them during training. I want to train only my decoder while the encoder's weights stay frozen (i.e., fine-tuning). Are encoders' weights frozen by default?
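
For reference, the default can be inspected directly; a minimal sketch, where the resnet34 encoder is just an arbitrary example:

import segmentation_models_pytorch as smp

model = smp.Unet(encoder_name="resnet34", encoder_weights="imagenet")
# Prints True: all encoder parameters are trainable by default.
print(all(p.requires_grad for p in model.encoder.parameters()))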

shmak2000 commented 12 months ago

I used this code to freeze the weights explicitly, and judging by GPU memory usage, they are not frozen by default.

for param in model.encoder.parameters():
    param.requires_grad = False
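
Those two lines are the standard PyTorch way to freeze a submodule. A slightly fuller sketch, assuming a resnet34 Unet and an Adam optimizer purely for illustration: note that requires_grad = False stops gradient updates but does not stop the encoder's BatchNorm layers from updating their running statistics in train mode, so the encoder is usually kept in eval mode as well.

import segmentation_models_pytorch as smp
import torch

# Illustrative model; any SMP architecture/encoder works the same way.
model = smp.Unet(encoder_name="resnet34", encoder_weights="imagenet", classes=2)

# Freeze the encoder: its parameters receive no gradients.
for param in model.encoder.parameters():
    param.requires_grad = False

# requires_grad=False alone does not stop BatchNorm layers from updating
# their running mean/var in train mode, so keep the encoder in eval mode.
# (Re-apply encoder.eval() after any later call to model.train().)
model.train()
model.encoder.eval()

# Pass only the trainable (decoder/head) parameters to the optimizer.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)

With this setup only the decoder and segmentation head are updated, which matches the fine-tuning behaviour the Keras parameter provides.
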
ocramz-mhc commented 3 months ago

Just ran into this :D Would be great to have a line or two in the API docs about this.