Closed: DiegoFernandezC closed this issue 1 year ago
I encountered the same problem. The cosine similarity is very close to 1 even for different people, which makes it difficult to do person ReID based on this similarity score.
@turinaf, I found my bug. In my case, I did not load the model weights properly. I started by modifying the demo.py file; with this approach you can easily generate a folder with gallery and query images and compute distances between them to check the model's performance. You are probably experiencing a similar problem with the weights. For the BoT ResNet, I had to add the weights path in the Base-bagtricks.yml file under the MODEL.WEIGHTS property.
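For reference, the change amounts to something like the following fragment in Base-bagtricks.yml. The path below is a placeholder; point it at wherever you downloaded the checkpoint:

```yaml
MODEL:
  # Hypothetical path -- replace with the location of your checkpoint file
  WEIGHTS: "path/to/market_bot_R50.pth"
```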
Thank you @DiegoFernandezC. I am using market_bot_R50. I was able to load the model weights by passing its path on the command line like this:
python demo/demo.py --config-file configs/Market1501/bagtricks_R50.yml --input runs/detect/exp2/crops/person/*.jpg --opts MODEL.WEIGHTS tools/deploy/weights/market_bot_R50.pth
The result of running this was as follows:
It says some model parameters or buffers are not found in the checkpoint, and also that the checkpoint state_dict contains keys that are not used by the model. I don't know why this is happening, but it works: I can extract features and compare them, though, as we said before, its performance is not good.
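A quick way to see exactly which keys trigger those two warnings is to diff the model's state_dict keys against the checkpoint's. This is a generic sketch, not fast-reid code; the helper name and the example key names are made up for illustration:

```python
def diff_state_dict_keys(model_keys, checkpoint_keys):
    """Return (missing_in_checkpoint, unused_in_model) as sorted lists.

    model_keys:      e.g. model.state_dict().keys()
    checkpoint_keys: e.g. torch.load(path)["model"].keys()
    """
    model_keys = set(model_keys)
    checkpoint_keys = set(checkpoint_keys)
    missing = sorted(model_keys - checkpoint_keys)  # reported as "not found in the checkpoint"
    unused = sorted(checkpoint_keys - model_keys)   # reported as "not used by the model"
    return missing, unused


# Toy illustration with made-up key names:
missing, unused = diff_state_dict_keys(
    ["backbone.conv1.weight", "heads.weight"],
    ["backbone.conv1.weight", "heads.classifier.weight"],
)
print(missing)  # ['heads.weight']
print(unused)   # ['heads.classifier.weight']
```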
The Base-bagtricks.yml file doesn't have a MODEL.WEIGHTS property. Did you add it yourself? When I do that, it throws another error: KeyError: 'Non-existent config key: MODEL.WEIGHTS'.
@turinaf I noticed this warning today. I started to debug the code, and I needed to add a function that is called inside the load function of the checkpoint.py file. The function looks like this:
def _rename_params(self, checkpoint):
    # Rename old checkpoint keys so they match the current model structure
    new_checkpoint = {}
    print("heads.classifier.weight" in checkpoint["model"].keys())
    for key, value in checkpoint["model"].items():
        new_key = key
        if "heads.classifier.weight" in key:
            new_key = key.replace("heads.classifier.weight", "heads.weight")
        elif "heads.bnneck.weight" in key:
            new_key = key.replace("heads.bnneck.weight", "heads.bottleneck.0.weight")
        elif "heads.bnneck.bias" in key:
            new_key = key.replace("heads.bnneck.bias", "heads.bottleneck.0.bias")
        elif "heads.bnneck.running_mean" in key:
            new_key = key.replace("heads.bnneck.running_mean", "heads.bottleneck.0.running_mean")
        elif "heads.bnneck.running_var" in key:
            new_key = key.replace("heads.bnneck.running_var", "heads.bottleneck.0.running_var")
        new_checkpoint[new_key] = value
    checkpoint["model"] = new_checkpoint
    return checkpoint
Using this function you can rename the keys that the load function is not finding in the model weights. I did not worry about the keys that the model is not using; I suppose those keys are no longer useful. After renaming the not-found parameters, the model's performance improves a lot, mainly by separating the embeddings and giving you useful similarity scores.
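The same renaming can also be written as a standalone, table-driven helper, which is easier to extend if more keys turn out to need renaming. The mapping below just reproduces the substitutions from the function above; this is a sketch, not part of fast-reid:

```python
# Old checkpoint key fragment -> new model key fragment
KEY_RENAMES = {
    "heads.classifier.weight": "heads.weight",
    "heads.bnneck.weight": "heads.bottleneck.0.weight",
    "heads.bnneck.bias": "heads.bottleneck.0.bias",
    "heads.bnneck.running_mean": "heads.bottleneck.0.running_mean",
    "heads.bnneck.running_var": "heads.bottleneck.0.running_var",
}


def rename_checkpoint_keys(state_dict):
    """Return a copy of state_dict with old key fragments renamed."""
    renamed = {}
    for key, value in state_dict.items():
        for old, new in KEY_RENAMES.items():
            if old in key:
                key = key.replace(old, new)
                break  # each key matches at most one rename
        renamed[key] = value
    return renamed


# Toy illustration:
print(rename_checkpoint_keys({"heads.bnneck.bias": 0, "backbone.conv1.weight": 1}))
# {'heads.bottleneck.0.bias': 0, 'backbone.conv1.weight': 1}
```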
I cannot fix the heads.weight warning: the key is renamed correctly, but the model still reports that it cannot find it. Let me know if you can solve this one.
Hi, thank you for this amazing project.
At the moment, I'm studying different techniques to do re-identification, and the checkpoints available on the model zoo are really helpful for me.
I would like to evaluate the embeddings generated by the different architectures presented in the model zoo section.
Any suggestions, or a repo, for loading the checkpoints with the different backbones to get the embeddings easily?
More details: I tried to use the ResNet50 backbone to process an image and then pass the result to the embedding head to generate the features for each detection. When I measured the cosine similarity between different objects, the results were really close to 1 (for the correct object, but also for the different ones), which is unexpected. Currently, my primary objective is to load the weights and use them to process an image to obtain the final embedding.
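For reference, this is the cosine similarity being measured here, written as a minimal pure-Python sketch (in practice the vectors would be embeddings produced by the model's head; the vectors below are placeholders):

```python
import math


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


# Vectors pointing the same way score 1.0; orthogonal vectors score 0.0
print(cosine_similarity([1.0, 0.0], [2.0, 0.0]))  # 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0
```

If every pair of embeddings scores near 1.0 regardless of identity, the embeddings are nearly collinear, which usually points at weights that were never actually loaded.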