Yes, just change the layer index of YOLOR. Use cfg/training/*.yaml for training and cfg/deploy/*.yaml for reparameterization. For your custom models you need to change 255 to (nc+5)*anchors.
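For example (a sketch for illustration only; nc=4 is a made-up number of custom classes, and 3 anchors per detection layer is the YOLOv7 default):
nc = 4                          # example: 4 custom classes (assumed for illustration)
anchors = 3                     # anchors per detection layer (YOLOv7 default)
channels = (nc + 5) * anchors   # 5 = 4 box coordinates + 1 objectness score
print(channels)                 # 27 -- use this in place of 255 (= (80 + 5) * 3 for COCO)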
Hi, thank you. But I don't understand what "255 to (nc+5)*anchors" means, or where to change it. (nc+5)*anchors sounds like the darknet cfg filters parameter.
model.load_state_dict(intersect_state_dict, strict=False)
model.names = ckpt['model'].names
model.nc = ckpt['model'].nc
# Reparameterize YOLOR: fold the learned implicit multipliers into the
# detection-head conv weights. 255 = (80 + 5) * 3, where 80 is the number
# of COCO classes, 5 = 4 box coordinates + 1 objectness, 3 = anchors per layer.
for i in range(255):
    model.state_dict()['model.105.m.0.weight'].data[i, :, :, :] *= state_dict['model.105.im.0.implicit'].data[:, i, :, :].squeeze()
    model.state_dict()['model.105.m.1.weight'].data[i, :, :, :] *= state_dict['model.105.im.1.implicit'].data[:, i, :, :].squeeze()
    model.state_dict()['model.105.m.2.weight'].data[i, :, :, :] *= state_dict['model.105.im.2.implicit'].data[:, i, :, :].squeeze()
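The same loop without the hard-coded COCO values might look like this (a sketch, not from the notebook; it assumes the default 3 anchors per detection layer, and last must be set to the index of your model's Detect layer):
last = 105                                   # 105 for YOLOv7; check your own state dict
for i in range((model.nc + 5) * 3):          # 3 = anchors per detection layer (default)
    for j in range(3):                       # the three detection scales m.0, m.1, m.2
        im = state_dict[f'model.{last}.im.{j}.implicit'].data[:, i, :, :].squeeze()
        model.state_dict()[f'model.{last}.m.{j}.weight'].data[i, :, :, :] *= im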
Thanks!
For anyone encountering issues with this, you also need to change the layer number from 105 to 77 for YoloV7-Tiny. Basically, you need to run the reparameterization ops on the last layer of your model.
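With the generalized sketch above, that just means setting the layer index accordingly (assuming the standard YOLOv7-Tiny layout, where model.77 is the Detect layer):
last = 77   # YOLOv7-Tiny: the implicit keys live under model.77.im.* instead of model.105.im.*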
Even with these changes I still get the key error:
KeyError: 'model.77.im.0.implicit'
I would start by printing your state dict to see which layer is the last one for your specific model. It's possible that something changed in the YoloV7-Tiny architecture since I wrote that comment a year ago, or you're using a custom backbone scale, so the number needs to be different.
Something like this should give you a hint:
print(model.state_dict().keys())
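To go straight to the relevant keys, something along these lines should also work (a sketch; '.im.' is the implicit-multiplication module naming used in the snippet above):
keys = list(model.state_dict().keys())
print(keys[-1])                          # last parameter name, e.g. 'model.77.m.2.bias'
print([k for k in keys if '.im.' in k])  # implicit-multiplication params; empty if none exist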
This is what gets printed:
------- odict_keys(['model.0.conv.weight', 'model.0.bn.weight', 'model.0.bn.bias', 'model.0.bn.running_mean', 'model.0.bn.running_var', 'model.0.bn.num_batches_tracked', 'model.1.conv.weight', 'model.1.bn.weight', 'model.1.bn.bias', 'model.1.bn.running_mean', 'model.1.bn.running_var', 'model.1.bn.num_batches_tracked', 'model.2.conv.weight', 'model.2.bn.weight', 'model.2.bn.bias', 'model.2.bn.running_mean', 'model.2.bn.running_var', 'model.2.bn.num_batches_tracked', 'model.3.conv.weight', 'model.3.bn.weight', 'model.3.bn.bias', 'model.3.bn.running_mean', 'model.3.bn.running_var', 'model.3.bn.num_batches_tracked', 'model.4.conv.weight', 'model.4.bn.weight', 'model.4.bn.bias', 'model.4.bn.running_mean', 'model.4.bn.running_var', 'model.4.bn.num_batches_tracked', 'model.5.conv.weight', 'model.5.bn.weight', 'model.5.bn.bias', 'model.5.bn.running_mean', 'model.5.bn.running_var', 'model.5.bn.num_batches_tracked', 'model.7.conv.weight', 'model.7.bn.weight', 'model.7.bn.bias', 'model.7.bn.running_mean', 'model.7.bn.running_var', 'model.7.bn.num_batches_tracked', 'model.9.conv.weight', 'model.9.bn.weight', 'model.9.bn.bias', 'model.9.bn.running_mean', 'model.9.bn.running_var', 'model.9.bn.num_batches_tracked', 'model.10.conv.weight', 'model.10.bn.weight', 'model.10.bn.bias', 'model.10.bn.running_mean', 'model.10.bn.running_var', 'model.10.bn.num_batches_tracked', 'model.11.conv.weight', 'model.11.bn.weight', 'model.11.bn.bias', 'model.11.bn.running_mean', 'model.11.bn.running_var', 'model.11.bn.num_batches_tracked', 'model.12.conv.weight', 'model.12.bn.weight', 'model.12.bn.bias', 'model.12.bn.running_mean', 'model.12.bn.running_var', 'model.12.bn.num_batches_tracked', 'model.14.conv.weight', 'model.14.bn.weight', 'model.14.bn.bias', 'model.14.bn.running_mean', 'model.14.bn.running_var', 'model.14.bn.num_batches_tracked', 'model.16.conv.weight', 'model.16.bn.weight', 'model.16.bn.bias', 'model.16.bn.running_mean', 'model.16.bn.running_var', 'model.16.bn.num_batches_tracked', 'model.17.conv.weight', 'model.17.bn.weight', 'model.17.bn.bias', 'model.17.bn.running_mean', 'model.17.bn.running_var', 'model.17.bn.num_batches_tracked', 'model.18.conv.weight', 'model.18.bn.weight', 'model.18.bn.bias', 'model.18.bn.running_mean', 'model.18.bn.running_var', 'model.18.bn.num_batches_tracked', 'model.19.conv.weight', 'model.19.bn.weight', 'model.19.bn.bias', 'model.19.bn.running_mean', 'model.19.bn.running_var', 'model.19.bn.num_batches_tracked', 'model.21.conv.weight', 'model.21.bn.weight', 'model.21.bn.bias', 'model.21.bn.running_mean', 'model.21.bn.running_var', 'model.21.bn.num_batches_tracked', 'model.23.conv.weight', 'model.23.bn.weight', 'model.23.bn.bias', 'model.23.bn.running_mean', 'model.23.bn.running_var', 'model.23.bn.num_batches_tracked', 'model.24.conv.weight', 'model.24.bn.weight', 'model.24.bn.bias', 'model.24.bn.running_mean', 'model.24.bn.running_var', 'model.24.bn.num_batches_tracked', 'model.25.conv.weight', 'model.25.bn.weight', 'model.25.bn.bias', 'model.25.bn.running_mean', 'model.25.bn.running_var', 'model.25.bn.num_batches_tracked', 'model.26.conv.weight', 'model.26.bn.weight', 'model.26.bn.bias', 'model.26.bn.running_mean', 'model.26.bn.running_var', 'model.26.bn.num_batches_tracked', 'model.28.conv.weight', 'model.28.bn.weight', 'model.28.bn.bias', 'model.28.bn.running_mean', 'model.28.bn.running_var', 'model.28.bn.num_batches_tracked', 'model.29.conv.weight', 'model.29.bn.weight', 'model.29.bn.bias', 'model.29.bn.running_mean', 'model.29.bn.running_var', 
'model.29.bn.num_batches_tracked', 'model.30.conv.weight', 'model.30.bn.weight', 'model.30.bn.bias', 'model.30.bn.running_mean', 'model.30.bn.running_var', 'model.30.bn.num_batches_tracked', 'model.35.conv.weight', 'model.35.bn.weight', 'model.35.bn.bias', 'model.35.bn.running_mean', 'model.35.bn.running_var', 'model.35.bn.num_batches_tracked', 'model.37.conv.weight', 'model.37.bn.weight', 'model.37.bn.bias', 'model.37.bn.running_mean', 'model.37.bn.running_var', 'model.37.bn.num_batches_tracked', 'model.38.conv.weight', 'model.38.bn.weight', 'model.38.bn.bias', 'model.38.bn.running_mean', 'model.38.bn.running_var', 'model.38.bn.num_batches_tracked', 'model.40.conv.weight', 'model.40.bn.weight', 'model.40.bn.bias', 'model.40.bn.running_mean', 'model.40.bn.running_var', 'model.40.bn.num_batches_tracked', 'model.42.conv.weight', 'model.42.bn.weight', 'model.42.bn.bias', 'model.42.bn.running_mean', 'model.42.bn.running_var', 'model.42.bn.num_batches_tracked', 'model.43.conv.weight', 'model.43.bn.weight', 'model.43.bn.bias', 'model.43.bn.running_mean', 'model.43.bn.running_var', 'model.43.bn.num_batches_tracked', 'model.44.conv.weight', 'model.44.bn.weight', 'model.44.bn.bias', 'model.44.bn.running_mean', 'model.44.bn.running_var', 'model.44.bn.num_batches_tracked', 'model.45.conv.weight', 'model.45.bn.weight', 'model.45.bn.bias', 'model.45.bn.running_mean', 'model.45.bn.running_var', 'model.45.bn.num_batches_tracked', 'model.47.conv.weight', 'model.47.bn.weight', 'model.47.bn.bias', 'model.47.bn.running_mean', 'model.47.bn.running_var', 'model.47.bn.num_batches_tracked', 'model.48.conv.weight', 'model.48.bn.weight', 'model.48.bn.bias', 'model.48.bn.running_mean', 'model.48.bn.running_var', 'model.48.bn.num_batches_tracked', 'model.50.conv.weight', 'model.50.bn.weight', 'model.50.bn.bias', 'model.50.bn.running_mean', 'model.50.bn.running_var', 'model.50.bn.num_batches_tracked', 'model.52.conv.weight', 'model.52.bn.weight', 'model.52.bn.bias', 'model.52.bn.running_mean', 'model.52.bn.running_var', 'model.52.bn.num_batches_tracked', 'model.53.conv.weight', 'model.53.bn.weight', 'model.53.bn.bias', 'model.53.bn.running_mean', 'model.53.bn.running_var', 'model.53.bn.num_batches_tracked', 'model.54.conv.weight', 'model.54.bn.weight', 'model.54.bn.bias', 'model.54.bn.running_mean', 'model.54.bn.running_var', 'model.54.bn.num_batches_tracked', 'model.55.conv.weight', 'model.55.bn.weight', 'model.55.bn.bias', 'model.55.bn.running_mean', 'model.55.bn.running_var', 'model.55.bn.num_batches_tracked', 'model.57.conv.weight', 'model.57.bn.weight', 'model.57.bn.bias', 'model.57.bn.running_mean', 'model.57.bn.running_var', 'model.57.bn.num_batches_tracked', 'model.58.conv.weight', 'model.58.bn.weight', 'model.58.bn.bias', 'model.58.bn.running_mean', 'model.58.bn.running_var', 'model.58.bn.num_batches_tracked', 'model.60.conv.weight', 'model.60.bn.weight', 'model.60.bn.bias', 'model.60.bn.running_mean', 'model.60.bn.running_var', 'model.60.bn.num_batches_tracked', 'model.61.conv.weight', 'model.61.bn.weight', 'model.61.bn.bias', 'model.61.bn.running_mean', 'model.61.bn.running_var', 'model.61.bn.num_batches_tracked', 'model.62.conv.weight', 'model.62.bn.weight', 'model.62.bn.bias', 'model.62.bn.running_mean', 'model.62.bn.running_var', 'model.62.bn.num_batches_tracked', 'model.63.conv.weight', 'model.63.bn.weight', 'model.63.bn.bias', 'model.63.bn.running_mean', 'model.63.bn.running_var', 'model.63.bn.num_batches_tracked', 'model.65.conv.weight', 'model.65.bn.weight', 'model.65.bn.bias', 
'model.65.bn.running_mean', 'model.65.bn.running_var', 'model.65.bn.num_batches_tracked', 'model.66.conv.weight', 'model.66.bn.weight', 'model.66.bn.bias', 'model.66.bn.running_mean', 'model.66.bn.running_var', 'model.66.bn.num_batches_tracked', 'model.68.conv.weight', 'model.68.bn.weight', 'model.68.bn.bias', 'model.68.bn.running_mean', 'model.68.bn.running_var', 'model.68.bn.num_batches_tracked', 'model.69.conv.weight', 'model.69.bn.weight', 'model.69.bn.bias', 'model.69.bn.running_mean', 'model.69.bn.running_var', 'model.69.bn.num_batches_tracked', 'model.70.conv.weight', 'model.70.bn.weight', 'model.70.bn.bias', 'model.70.bn.running_mean', 'model.70.bn.running_var', 'model.70.bn.num_batches_tracked', 'model.71.conv.weight', 'model.71.bn.weight', 'model.71.bn.bias', 'model.71.bn.running_mean', 'model.71.bn.running_var', 'model.71.bn.num_batches_tracked', 'model.73.conv.weight', 'model.73.bn.weight', 'model.73.bn.bias', 'model.73.bn.running_mean', 'model.73.bn.running_var', 'model.73.bn.num_batches_tracked', 'model.74.conv.weight', 'model.74.bn.weight', 'model.74.bn.bias', 'model.74.bn.running_mean', 'model.74.bn.running_var', 'model.74.bn.num_batches_tracked', 'model.75.conv.weight', 'model.75.bn.weight', 'model.75.bn.bias', 'model.75.bn.running_mean', 'model.75.bn.running_var', 'model.75.bn.num_batches_tracked', 'model.76.conv.weight', 'model.76.bn.weight', 'model.76.bn.bias', 'model.76.bn.running_mean', 'model.76.bn.running_var', 'model.76.bn.num_batches_tracked', 'model.77.anchors', 'model.77.anchor_grid', 'model.77.m.0.weight', 'model.77.m.0.bias', 'model.77.m.1.weight', 'model.77.m.1.bias', 'model.77.m.2.weight', 'model.77.m.2.bias'])
Actually, I'm new here and trying to understand why this happens and how to solve it. Colab always gives errors...
Your model is missing the "implicit" field for each layer, so there is no point doing re-parametrization on it. I'm making a guess here, but it seems the training algorithm already did it for you and pruned the final model once training finished.
I haven't been using YoloV7 recently, so I might be wrong about this, but I vaguely recall there was a feature like that.
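One way to check whether a checkpoint still carries the implicit layers before running the notebook (a sketch; '.ia.'/'.im.' are the implicit-addition/implicit-multiplication module names from the snippets above):
sd = ckpt['model'].state_dict()
has_implicit = any('.ia.' in k or '.im.' in k for k in sd)
print('has implicit layers: run reparameterization' if has_implicit
      else 'no implicit layers: already in deploy form')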
Q1: Can YOLOv7-tiny use Re-parameterization.ipynb? Q2: I don't know how to use Re-parameterization.ipynb. Is the procedure to take the .pt model trained on a custom dataset and run Re-parameterization.ipynb on it?