Closed bilal6414 closed 2 months ago
Thanks for the interest. The provided weights are pre-trained on ImageNet. So yes, you do need to fine-tune the model on your own dataset.
For customized datasets, you can create a new Python file similar to what exists now in the reid/data
folder. Then the training script in this repo can be used directly.
Let me know if there are any more problems with this.
Thank you so much for your prompt response; I will certainly train it on a custom dataset. Would it be possible for you to share the weights you trained on the VehicleID dataset? I want to test on images of vehicles and bikers.
I'm sorry, but I currently don't have any models trained on Re-ID datasets. If you want to use the model for vehicle and biker re-identification, it is better to construct a dataset closer to the actual scenarios. Although MSINet improves generalization by a large margin, direct cross-domain performance is still relatively poor. Another work of mine on continual Re-ID improves generalization with a color distribution shuffle operation, which might also be useful for you. Please refer to https://github.com/vimar-gu/ColorPromptReID/blob/57ed2ac17c5239542a426818051cb588defa4b42/reid/trainers.py#L41
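To illustrate the general idea behind such an augmentation, here is a minimal sketch that randomly permutes the RGB channels of each image in a batch. This is only an illustration of channel-level color shuffling; the actual operation in ColorPromptReID (linked above) may differ in detail.

```python
import numpy as np

def color_channel_shuffle(images, p=0.5, rng=None):
    """Sketch of a color-distribution shuffle augmentation.

    images: float array of shape [N, 3, H, W].
    With probability p, the three color channels of an image are
    randomly permuted, changing its color distribution while keeping
    all spatial structure intact.
    """
    rng = rng if rng is not None else np.random.default_rng()
    out = images.copy()
    for i in range(len(images)):
        if rng.random() < p:
            out[i] = images[i][rng.permutation(3)]
    return out
```

The intuition is that identity cues such as shape and texture survive the shuffle, while the network is discouraged from overfitting to camera- or domain-specific color statistics.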
Thank you so much! I have retrained the model on my own dataset, which has 851 classes; the pictures are of bikers. Currently I am using MSINet. These are the results:

Computing DistMat with euclidean distance
Validation Results - Epoch[349]
mAP: 82.4%
CMC curve, Rank-1 :75.0%
CMC curve, Rank-5 :100.0%
CMC curve, Rank-10 :100.0%
I just want to run inference to match embeddings. Could you please share a simple script that returns embeddings and the euclidean distance between two embeddings? Currently I am trying, but I get several errors, e.g. a size mismatch, which I think is because of a class mismatch, and so on.
You can refer to the code in reid/utils/metrics.py:
https://github.com/vimar-gu/MSINet/blob/2a8845b6b3d1a3b8baeb864b92f9423c2dc711ee/reid/utils/metrics.py#L131-L136
Here the distance between two groups of features is calculated. The distance calculation is not related to classes; the features should be in the shape [sample_number, feature_len].
The matrix calculation is indeed kind of tricky. Try figuring it out by experimenting with different operations :)
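For reference, the standard way to compute the pairwise euclidean distance matrix between two groups of features uses the expansion ||a - b||² = ||a||² + ||b||² - 2a·b. The sketch below is a common NumPy formulation; the exact code at the linked lines in reid/utils/metrics.py may differ in detail.

```python
import numpy as np

def euclidean_dist_matrix(qf, gf):
    """Pairwise euclidean distances between query features qf [m, d]
    and gallery features gf [n, d], returned as an [m, n] matrix.

    Uses the expansion ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b,
    with broadcasting instead of an explicit double loop.
    """
    dist_sq = ((qf ** 2).sum(1)[:, None]      # [m, 1] squared norms
               + (gf ** 2).sum(1)[None, :]    # [1, n] squared norms
               - 2.0 * qf @ gf.T)             # [m, n] cross terms
    # Clamp tiny negatives caused by floating-point error before sqrt.
    return np.sqrt(np.maximum(dist_sq, 0.0))
```

Note the shapes: both inputs must be 2-D, [sample_number, feature_len], which is why a single embedding of shape [feature_len] needs an extra leading dimension before being passed in.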
I am using the pre-trained weights to get embeddings and then calculating the distance between them for Re-ID of images, but I am not getting the results I was expecting and that are mentioned in the paper. Please let me know whether I need to train on my own dataset first. Secondly, please review the code I am using to get the embeddings.
```python
import os
import sys
import torch
import random
import numpy as np
import csv
import matplotlib.pyplot as plt
from PIL import Image
from torchvision import transforms
from torch.backends import cudnn

from reid.utils.logging import Logger
from reid.models.msinet import msinet_x1_0
from reid.utils.serialization import copy_state_dict


def count_parameters(model):
    return np.sum(np.fromiter(
        (np.prod(v.size()) for name, v in model.named_parameters()
         if 'classifier' not in name), dtype=np.float32)) / 1e6


def preprocess_image(image_path, height, width):
    transform = transforms.Compose([
        transforms.Resize((height, width)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])
    image = Image.open(image_path).convert('RGB')
    image = transform(image)
    image = image.unsqueeze(0)  # Add batch dimension
    return image


def extract_embedding(model, image_tensor):
    model.eval()
    with torch.no_grad():
        embedding = model(image_tensor.cuda())
    return embedding.cpu().numpy()


def euclidean_distance(embedding1, embedding2):
    return np.linalg.norm(embedding1 - embedding2)


class Args:
    def __init__(self):
        # (attribute list truncated in the original post; at least
        # `seed` and `logs_dir` are needed by main() below)
        ...


def main():
    args = Args()
    seed = args.seed
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    cudnn.deterministic = True
    cudnn.benchmark = True
    sys.stdout = Logger(os.path.join(args.logs_dir, 'log.txt'))
    print('Running with:\n{}'.format(args))
    num_classes = 751  # Set number of classes according to your dataset
    model = msinet_x1_0(args, num_classes)
    print('Model Params: {}'.format(count_parameters(model)))
    model = model.cuda()


if __name__ == '__main__':
    main()
```