HRNet / Lite-HRNet

This is an official PyTorch implementation of Lite-HRNet: A Lightweight High-Resolution Network.

Fine-tuning with frozen layers/stages #34

Closed (kuldeepbrd1 closed this issue 7 months ago)

kuldeepbrd1 commented 3 years ago

Thanks a lot for this work and for making the code available. It's very easy to get started with for custom use. I was able to train and test without trouble, but I have a question:

How do I freeze a layer or a stage during training for fine-tuning? When fine-tuning, I don't want to update the weights of every stage/layer that comes from the base model.

I was thinking about:

  1. freezing everything but the final keypoint detection head;
  2. freezing everything but the last stage and the keypoint detection head.

How do I accomplish this?

Any help or suggestion is greatly appreciated :)
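
For reference, both options come down to setting `requires_grad = False` on the parameters you want to keep fixed. Below is a minimal sketch, assuming an mmpose-style top-down model; the attribute names `backbone`, `keypoint_head`, and `stage2` are assumptions and may differ for your config:

    import torch.nn as nn

    def freeze_all_but_head(model: nn.Module) -> None:
        # Option 1: freeze every parameter, then re-enable only the head.
        for param in model.parameters():
            param.requires_grad = False
        for param in model.keypoint_head.parameters():  # assumed attribute name
            param.requires_grad = True

    def freeze_all_but_last_stage_and_head(model: nn.Module) -> None:
        # Option 2: additionally leave the final backbone stage trainable.
        freeze_all_but_head(model)
        for param in model.backbone.stage2.parameters():  # assumed stage name
            param.requires_grad = True

    # After freezing, it also makes sense to hand the optimizer only the
    # trainable parameters, e.g.:
    #     optimizer = torch.optim.Adam(
    #         (p for p in model.parameters() if p.requires_grad), lr=1e-4)

Note that `requires_grad = False` alone does not stop BatchNorm running statistics from updating while the model is in training mode; the `.eval()` calls in the method posted below handle that part.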

kuldeepbrd1 commented 3 years ago

I think I've figured out how to freeze stages now, and it runs! But how do I formally verify that the weights are actually frozen?
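
One quick sanity check, as a sketch: list which parameters still require gradients, and snapshot the frozen weights before a training step to confirm they are bit-identical afterwards. Here `train_one_step` is a hypothetical stand-in for one forward/backward/optimizer update:

    import torch

    def report_trainable(model):
        # Show which parameters will (and won't) receive gradient updates.
        for name, param in model.named_parameters():
            print(f'{name}: requires_grad={param.requires_grad}')

    def check_frozen_unchanged(model, train_one_step):
        # Snapshot every frozen parameter before one update...
        frozen = {name: param.detach().clone()
                  for name, param in model.named_parameters()
                  if not param.requires_grad}
        train_one_step()  # hypothetical: one forward/backward/step
        # ...and verify the stored values are bit-identical afterwards.
        for name, param in model.named_parameters():
            if name in frozen:
                assert torch.equal(frozen[name], param), f'{name} changed'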

FYI, I've added a method to freeze weights in models.backbones.LiteHRNet:

    def _freeze_stages(self):
        """Freeze the stem and the first `frozen_stages` stages."""
        if self.frozen_stages >= 0:
            if self.stem:
                # Switch the stem to eval mode so BatchNorm running
                # statistics stay fixed, then stop gradients to its weights.
                self.stem.eval()
                for param in self.stem.parameters():
                    param.requires_grad = False
            else:
                # Fallback for configs without a stem module.
                self.norm1.eval()
                for m in [self.conv1, self.norm1]:
                    for param in m.parameters():
                        param.requires_grad = False

        # Freeze stages 1..frozen_stages (a no-op when frozen_stages < 1).
        for i in range(1, self.frozen_stages + 1):
            m = getattr(self, f'stage{i}')
            m.eval()
            for param in m.parameters():
                param.requires_grad = False

See: https://github.com/kuldeepbrd1/Lite-HRNet/blob/979c0faefcf637c81f8815663f254c90f72acfda/models/backbones/litehrnet.py#L860
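
One caveat with this approach: `model.train()` is typically called at the start of every epoch and puts the frozen modules back into training mode, so their BatchNorm running statistics resume updating even though `requires_grad` stays False. The mmcv-style backbones guard against this by re-applying the freeze inside an overridden `train()`; a minimal sketch of the same pattern for LiteHRNet:

    def train(self, mode=True):
        """Keep frozen submodules in eval mode across model.train() calls."""
        super().train(mode)
        self._freeze_stages()
        return self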