Closed. kuldeepbrd1 closed this issue 7 months ago.
I think I figured out how to freeze stages now, and it runs! But how do I verify formally that the weights are actually being frozen?
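One way to verify this (a minimal, self-contained sketch using a toy model in place of LiteHRNet; the checks carry over unchanged) is to confirm that the frozen parameters report `requires_grad == False`, and that their values are bit-identical after an optimizer step:

```python
import torch
import torch.nn as nn

# Toy two-"stage" model standing in for the backbone.
model = nn.Sequential(nn.Linear(4, 4), nn.Linear(4, 2))

# Freeze the first stage the same way _freeze_stages does.
model[0].eval()
for param in model[0].parameters():
    param.requires_grad = False

# Check 1: frozen parameters must report requires_grad == False.
for name, param in model.named_parameters():
    print(name, param.requires_grad)

# Check 2: frozen weights must be unchanged after an optimizer step.
before = {name: param.detach().clone()
          for name, param in model.named_parameters()
          if not param.requires_grad}

optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=0.1)
loss = model(torch.randn(8, 4)).sum()
loss.backward()
optimizer.step()

for name, param in model.named_parameters():
    if name in before:
        assert torch.equal(before[name], param.detach()), f'{name} changed!'
print('All frozen weights are unchanged.')
```

Filtering the frozen parameters out of the optimizer is not strictly required (parameters whose `.grad` stays `None` are skipped during the step), but it makes the intent explicit.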
FYI, I've added a method to freeze weights in `models.backbones.LiteHRNet`:
```python
def _freeze_stages(self):
    """Freeze parameters."""
    if self.frozen_stages >= 0:
        if self.stem:
            self.stem.eval()
            for param in self.stem.parameters():
                param.requires_grad = False
        else:
            self.norm1.eval()
            for m in [self.conv1, self.norm1]:
                for param in m.parameters():
                    param.requires_grad = False

        for i in range(1, self.frozen_stages + 1):
            m = getattr(self, f'stage{i}')
            m.eval()
            for param in m.parameters():
                param.requires_grad = False
```
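One caveat, assuming LiteHRNet follows the same convention as other mm-series backbones (e.g. ResNet in mmcv/mmpose), which I haven't verified against this repo: `_freeze_stages` is typically also re-applied from an overridden `train()` in the same class. Otherwise the runner's per-epoch `model.train()` call puts the frozen modules back into training mode (their `requires_grad` flags stay `False`, but BatchNorm layers would resume updating their running statistics):

```python
def train(self, mode=True):
    """Switch training mode, then re-apply freezing so frozen stages
    stay in eval mode (e.g. BatchNorm running stats remain fixed)."""
    super().train(mode)
    self._freeze_stages()
```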
Thanks a lot for the work and for making the code available. It's very easy to get started with for custom use. I could easily train and test, but I wanted to ask the following:
How do I freeze a layer or a stage during training for fine-tuning? During fine-tuning, I don't want to update the weights in all the stages/layers of the base model.
I was thinking about how best to do this, but I'm not sure of the right approach. How do I accomplish it?
Any help or suggestion is greatly appreciated :)
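To close the loop on the question above: once `frozen_stages` is exposed as a constructor argument (an assumption here; the method above suggests it is), freezing can be selected directly from the training config. A hypothetical mmpose config fragment, with all other keys left as in your existing config:

```python
# Hypothetical config fragment; only `frozen_stages` is new here.
model = dict(
    type='TopDown',
    backbone=dict(
        type='LiteHRNet',
        frozen_stages=1,  # freezes the stem and stage1 via _freeze_stages
        # ... keep the rest of your existing backbone settings ...
    ),
    # ... keypoint_head, train_cfg, test_cfg unchanged ...
)
```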