Adam_onecycle is just a wrapper around Adam, but it calls a different optimizer.step().
I remember fixing this error a long time ago, as shown here: https://github.com/open-mmlab/OpenPCDet/blob/9c8e583e28e45582565175f848b7a2538eb6acf6/tools/train_utils/optimization/fastai_optim.py#L141-L143
Maybe you need to check whether you are using the latest code.
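For context, here is a minimal sketch of the failure mode being discussed. This is an illustration only, not the actual `fastai_optim.py` code: a wrapper-style `step()` that applies weight decay directly to the parameter tensors keeps shrinking "frozen" weights unless it explicitly skips parameters whose `requires_grad` is False, which is the kind of guard the linked lines appear to add.

```python
import torch
import torch.nn as nn

# Illustrative sketch only -- the class below is NOT the actual
# OpenPCDet fastai_optim.OptimWrapper, just the failure mode it illustrates.
class WrappedStep:
    def __init__(self, params, inner_opt, lr=1e-3, wd=1e-2):
        self.params = list(params)
        self.inner_opt = inner_opt
        self.lr, self.wd = lr, wd

    def step(self):
        # "True" weight decay applied directly to p.data, outside the inner
        # optimizer. Without the requires_grad guard, even frozen parameters
        # (requires_grad == False, grad is None) get scaled a little on every
        # step, which matches "the weights still change, although not so quickly".
        for p in self.params:
            if p.requires_grad:  # the kind of guard the linked fix adds
                p.data.mul_(1 - self.wd * self.lr)
        self.inner_opt.step()


layer = nn.Linear(4, 4)
layer.weight.requires_grad_(False)  # "freeze" the weight
opt = WrappedStep(layer.parameters(),
                  torch.optim.Adam(layer.parameters(), lr=1e-3))

before = layer.weight.detach().clone()
opt.step()
print(torch.equal(before, layer.weight))  # True with the guard, False without it
```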
Thanks for your reply. But we find that if we use `torch.no_grad()`, the weights of the layers still change, although not so quickly. After thousands of iterations, they differ a lot from the original weights. We added `torch.no_grad()` to the forward function of VoxelBackBone8x (under the backbones_3d folder). We have checked the fastai_optim.py file and are sure it is the same as your newest version.
If we use the adam optimizer, there is no problem, but the accuracy of the model decreases. So is there any solution to successfully freeze some layers if we have to use the adam_onecycle optimizer?
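For reference (not from this thread): in plain PyTorch, a common way to freeze part of a network regardless of the optimizer's step logic is to set `requires_grad` to False and also hand the optimizer only the trainable parameters. Whether this slots directly into OpenPCDet's adam_onecycle builder is an assumption; the sketch below uses `torch.optim.Adam` only to illustrate the idea.

```python
import torch
import torch.nn as nn

# Minimal general-PyTorch sketch (adapting it to OpenPCDet's adam_onecycle
# builder is an assumption, not something confirmed in this thread).
model = nn.Sequential(nn.Linear(8, 8), nn.Linear(8, 2))

# Freeze the first layer.
for p in model[0].parameters():
    p.requires_grad_(False)

# Hand the optimizer only the parameters that should actually be trained.
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-3, weight_decay=1e-2)

x, y = torch.randn(4, 8), torch.randn(4, 2)
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
optimizer.step()  # frozen parameters never appear in the optimizer's groups
```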
Hello, I want to know how you freeze some layers. Is it like this?

```python
for key, value in dict(model.named_parameters()).items():
    value.requires_grad = True
    if key.split(".")[0] == 'VoxelBackBone8x':
        value.requires_grad = False
```
Hello, like this:
```python
class VoxelBackBone8x(nn.Module):
    @torch.no_grad()
    def forward(self, x):
        ...  # original forward body unchanged
```
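A small standalone illustration of what that decorator does and does not do (a sketch, not OpenPCDet code): wrapping `forward` in `torch.no_grad()` stops the forward pass from recording a graph, so these parameters receive no gradients, but their `requires_grad` flags remain True, which matters for optimizer code that keys off `requires_grad` rather than `grad`.

```python
import torch
import torch.nn as nn

# Standalone sketch (not OpenPCDet code) of what the decorator changes.
class Frozen(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 4)

    @torch.no_grad()
    def forward(self, x):
        return self.fc(x)

m = Frozen()
out = m(torch.randn(2, 4))
print(out.requires_grad)          # False: no graph was recorded in forward
print(m.fc.weight.requires_grad)  # True: the flag itself is untouched
# m.fc.weight.grad stays None, so a plain torch.optim.Adam step skips it,
# but a custom step() that ignores grad/requires_grad can still modify it.
```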
I print the weight like this:
```python
for name, parameters in model.named_parameters():
    if name == 'backbone_3d.conv_input.0.weight':
        print(name, ':', parameters[0, 0, 0, :, :])
```
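For what it's worth, a slightly more robust check than printing a slice is to snapshot the tensor and compare it numerically after some iterations. A minimal sketch, assuming `model` is the network built by the training script and reusing the parameter name from the snippet above:

```python
import torch

# Snapshot the weight before training (assumes `model` is already built).
ref = {name: p.detach().clone()
       for name, p in model.named_parameters()
       if name == 'backbone_3d.conv_input.0.weight'}

# ... run some training iterations ...

# Compare numerically instead of eyeballing a printed slice.
for name, p in model.named_parameters():
    if name in ref:
        diff = (p.detach() - ref[name]).abs().max().item()
        print(name, 'max abs change:', diff)  # should stay ~0.0 if truly frozen
```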
After 100 or more iterations, the weights change slowly (after xx epochs, the change becomes large).
@sshaoshuai Have you encountered this problem before?
@zyl1336110861 Have you tried setting it with requires_grad = False?
Yes! We have tried it!
This issue is stale because it has been open for 30 days with no activity.
This issue was closed because it has been inactive for 14 days since being marked as stale.
@sshaoshuai Hi, thank you for your sharing! I want to freeze some modules using `with torch.no_grad()`, and requires_grad is indeed False. However, the weights of these modules are still updated (the default optimizer is adam_onecycle). If I replace the adam_onecycle optimizer with adam, the weights actually stop updating. So I just want to know: how can I freeze a module with the adam_onecycle optimizer?