Closed HouBiaoLiu closed 3 years ago
It's a research problem that is not solved yet. You can simply combine the two datasets for training, but there will be a small performance drop on both datasets.
[07/17 15:21:09] fastreid.evaluation.evaluator INFO: Total inference pure compute time: 0:00:16 (1.083325 s / img per device)
[07/17 15:21:32] fastreid.engine.defaults INFO: Evaluation results for LargeVehicleID in csv format:
[07/17 15:21:32] fastreid.evaluation.testing INFO: Task: Rank-1
[07/17 15:21:32] fastreid.evaluation.testing INFO: 77.7%
[07/17 15:21:32] fastreid.evaluation.testing INFO: Task: Rank-5
[07/17 15:21:32] fastreid.evaluation.testing INFO: 90.3%
[07/17 15:21:32] fastreid.evaluation.testing INFO: Task: Rank-10
[07/17 15:21:32] fastreid.evaluation.testing INFO: 95.0%
[07/17 15:21:32] fastreid.evaluation.testing INFO: Task: mAP
[07/17 15:21:32] fastreid.evaluation.testing INFO: 83.3%
[07/17 15:21:32] fastreid.evaluation.testing INFO: Task: mINP
[07/17 15:21:32] fastreid.evaluation.testing INFO: 83.3%
[07/17 15:21:32] fastreid.evaluation.testing INFO: Task: TPR@FPR=0.0001
[07/17 15:21:32] fastreid.evaluation.testing INFO: 8.8%
[07/17 15:21:32] fastreid.evaluation.testing INFO: Task: TPR@FPR=0.001
[07/17 15:21:32] fastreid.evaluation.testing INFO: 48.8%
[07/17 15:21:32] fastreid.evaluation.testing INFO: Task: TPR@FPR=0.01
[07/17 15:21:32] fastreid.evaluation.testing INFO: 98.4%
[07/17 15:21:32] fastreid.engine.defaults INFO: prepare test set
[07/17 15:21:35] fastreid.data.datasets.bases INFO: => Loaded SmallVeRiWild
[07/17 15:21:35] fastreid.data.datasets.bases INFO: ----------------------------------------
[07/17 15:21:35] fastreid.data.datasets.bases INFO: subset  | # ids | # images | # cameras
[07/17 15:21:35] fastreid.data.datasets.bases INFO: ----------------------------------------
[07/17 15:21:35] fastreid.data.datasets.bases INFO: query   | 3000  | 3000     | 105
[07/17 15:21:35] fastreid.data.datasets.bases INFO: gallery | 3000  | 38861    | 146
[07/17 15:21:35] fastreid.data.datasets.bases INFO: ----------------------------------------
[07/17 15:21:35] fastreid.evaluation.evaluator INFO: Start inference on 41861 images
[07/17 15:23:15] fastreid.evaluation.evaluator INFO: Inference done 11/41. 0.9825 s / img. ETA=0:00:32
[07/17 15:24:17] fastreid.evaluation.evaluator INFO: Inference done 21/41. 1.0777 s / img. ETA=0:01:26
[07/17 15:24:49] fastreid.evaluation.evaluator INFO: Inference done 37/41. 1.1272 s / img. ETA=0:00:12
[07/17 15:24:59] fastreid.evaluation.evaluator INFO: Total inference time: 0:01:50.250082 (3.062502 s / img per device)
[07/17 15:24:59] fastreid.evaluation.evaluator INFO: Total inference pure compute time: 0:00:41 (1.151221 s / img per device)
[07/17 15:26:12] fastreid.engine.defaults INFO: Evaluation results for SmallVeRiWild in csv format:
[07/17 15:26:12] fastreid.evaluation.testing INFO: Task: Rank-1
[07/17 15:26:12] fastreid.evaluation.testing INFO: 93.6%
[07/17 15:26:12] fastreid.evaluation.testing INFO: Task: Rank-5
[07/17 15:26:12] fastreid.evaluation.testing INFO: 97.3%
[07/17 15:26:12] fastreid.evaluation.testing INFO: Task: Rank-10
[07/17 15:26:12] fastreid.evaluation.testing INFO: 98.6%
[07/17 15:26:12] fastreid.evaluation.testing INFO: Task: mAP
[07/17 15:26:12] fastreid.evaluation.testing INFO: 74.1%
[07/17 15:26:12] fastreid.evaluation.testing INFO: Task: mINP
[07/17 15:26:12] fastreid.evaluation.testing INFO: 46.5%
[07/17 15:26:12] fastreid.evaluation.testing INFO: Task: TPR@FPR=0.0001
[07/17 15:26:12] fastreid.evaluation.testing INFO: 8.3%
[07/17 15:26:12] fastreid.evaluation.testing INFO: Task: TPR@FPR=0.001
[07/17 15:26:12] fastreid.evaluation.testing INFO: 57.3%
[07/17 15:26:12] fastreid.evaluation.testing INFO: Task: TPR@FPR=0.01
[07/17 15:26:12] fastreid.evaluation.testing INFO: 99.3%
From the log.txt we can see that the mAP is much lower than the separately trained SmallVeRiWild result, but Rank-1 is close to the separately trained SmallVeRiWild result.
It's an open question; maybe you can check this paper for some help.
We will conduct some experiments on this question. Maybe by this weekend we can draw some conclusions.
We recently conducted some experiments and found that increasing the number of domains boosts performance. If performance drops when you combine two domains, you can combine more domains and performance will improve. The training set is Market1501+CUHK03+MSMT17, and the DukeMTMC numbers are direct-transfer results.
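As a sketch of what "combine more domains" means in practice, one simple scheme is to merge the image lists of several datasets and prefix each person id with its dataset name, so raw ids from different domains never collide. This is an illustrative stdlib-only helper (`merge_domains` and the sample paths are hypothetical, not fastreid's actual API):

```python
# Hypothetical sketch: merge several ReID datasets for joint training by
# prefixing each raw pid with its domain name, keeping identities distinct
# across domains even when the raw integer ids overlap.

def merge_domains(datasets):
    """datasets: dict mapping a domain name to a list of (img_path, pid, camid)."""
    merged = []
    for name, items in datasets.items():
        for img_path, pid, camid in items:
            merged.append((img_path, f"{name}_{pid}", camid))
    return merged

market = [("m/0001_c1.jpg", 1, 1), ("m/0002_c2.jpg", 2, 2)]
duke = [("d/0001_c1.jpg", 1, 1)]  # same raw pid as Market, different identity
train_set = merge_domains({"market1501": market, "dukemtmc": duke})
print(len({pid for _, pid, _ in train_set}))  # 3 distinct identities
```

A final relabel step would then map these string ids to contiguous integers for the classification head, which is essentially what fastreid's relabel option does.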
Hi,
Thanks!
@L1aoXingyu Ok, thanks for your reply! I have read the paper you suggested. As we can see, it adopts OSNet, replaces the global average pooling operation with a global depthwise convolution, and uses the AM-Softmax loss. So what is your experiment setup? The other question: if we want to improve performance on VeRi-Wild, should we introduce a camera-aware model and a direction-aware model?
Does it perform better compared to your result?
- Which config did you use?
- Have you tried using GAN-generated data? How were the results?
- To address the cross-domain problem, have you tried the r50-ibn-b backbone? According to the paper, this backbone performs better in the cross-domain setting.
Thanks!
CircleSoftmax and triplet loss, using the SGD optimizer with lr=0.01, trained for 60 epochs with a cosine lr scheduler.

@HouBiaoLiu The result in the paper is from combine-all training, and if I combine all the datasets and add my private data, R50-ibn-a can get the results below.
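For reference, the cosine lr schedule mentioned above can be written down in closed form; a stdlib-only sketch (assuming base lr 0.01 decayed toward 0 over 60 epochs, which is what a cosine annealing scheduler such as PyTorch's CosineAnnealingLR computes):

```python
# Closed-form cosine learning-rate schedule: starts at base_lr, decays
# smoothly to eta_min over total_epochs following half a cosine wave.
import math

def cosine_lr(base_lr, epoch, total_epochs, eta_min=0.0):
    return eta_min + 0.5 * (base_lr - eta_min) * (1 + math.cos(math.pi * epoch / total_epochs))

print(round(cosine_lr(0.01, 0, 60), 4))   # 0.01 at the start
print(round(cosine_lr(0.01, 30, 60), 4))  # 0.005 halfway through
print(round(cosine_lr(0.01, 60, 60), 4))  # 0.0 at the end
```

In the actual training loop this value (or the equivalent scheduler object) is applied once per epoch after the optimizer step.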
Your result on the person ReID experiments is very appealing! As for vehicle ReID, the VeRi-Wild dataset has many viewing angles; when combining it with VehicleID for training, we should apply an affine transform.
Do you mean you use keypoints to apply the affine transform for training vehicle ReID?
When we combine both VehicleID and VeRi-Wild for training, we should use data augmentation such as a random affine transform on VehicleID to enhance the model's generalization ability on VeRi-Wild.
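To make the random-affine idea concrete, here is a stdlib-only sketch of sampling a small rotation/scale/translation per image; in a real pipeline this role would be played by something like torchvision's RandomAffine transform, and the parameter ranges below are illustrative assumptions:

```python
# Illustrative random affine sampling: a small rotation, isotropic scale,
# and translation, represented as a 2x3 matrix applied to image coordinates.
import math
import random

def random_affine_matrix(max_deg=10, max_shift=0.1, scale_range=(0.9, 1.1), rng=random):
    theta = math.radians(rng.uniform(-max_deg, max_deg))
    s = rng.uniform(*scale_range)
    tx = rng.uniform(-max_shift, max_shift)
    ty = rng.uniform(-max_shift, max_shift)
    # rotation * scale in the 2x2 part, translation in the last column
    return [[s * math.cos(theta), -s * math.sin(theta), tx],
            [s * math.sin(theta),  s * math.cos(theta), ty]]

def apply_affine(m, x, y):
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

random.seed(0)
m = random_affine_matrix()
print(apply_affine(m, 0.5, 0.5))  # the point (0.5, 0.5), slightly perturbed
```

Applying a fresh matrix per training image gives VehicleID crops some of the pose variation that VeRi-Wild naturally contains.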
@L1aoXingyu Today I checked the code in common.py. When I use multiple datasets to train, if dataset A and dataset B have the same ids, the relabel operation merges those ids; the get_pids function seems to have no effect. PS: these datasets are under very different paths.
""" @author: liaoxingyu @contact: sherlockliao01@gmail.com """
import torch from torch.utils.data import Dataset
from .data_utils import read_image
class CommDataset(Dataset): """Image Person ReID Dataset"""
def __init__(self, img_items, transform=None, relabel=True):
self.transform = transform
self.relabel = relabel
self.pid_dict = {}
if self.relabel:
self.img_items = []
pids = set()
for i, item in enumerate(img_items):
pid = self.get_pids(item[0], item[1])
self.img_items.append((item[0], pid, item[2])) # replace pid
pids.add(pid)
self.pids = pids
print (len(self.pids))
self.pid_dict = dict([(p, i) for i, p in enumerate(self.pids)])
else:
self.img_items = img_items
def __len__(self):
return len(self.img_items)
def __getitem__(self, index):
img_path, pid, camid = self.img_items[index]
img = read_image(img_path)
if self.transform is not None: img = self.transform(img)
if self.relabel: pid = self.pid_dict[pid]
return {
'images': img,
'targets': pid,
'camid': camid,
'img_path': img_path
}
@staticmethod
def get_pids(file_path, pid):
""" Suitable for muilti-dataset training """
if 'cuhk03' in file_path: prefix = 'cuhk'
else: prefix = file_path.split('/')[1]
return prefix + '_' + str(pid)
def update_pid_dict(self, pid_dict):
self.pid_dict = pid_dict
The training runs normally, and the total number of ids is set in the config file. Maybe this bug causes the mAP to be much lower than the separately trained SmallVeRiWild result? @L1aoXingyu
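One way to check whether get_pids is actually separating ids is to run the same logic standalone. Note that it takes the second path component as the prefix, so two datasets whose paths share that component (for example, both under an absolute path like /data/) would get the same prefix and their equal raw ids would collide. A quick check with hypothetical paths:

```python
# Standalone copy of CommDataset.get_pids to verify when pids from
# different datasets stay distinct and when they collide.

def get_pids(file_path, pid):
    if 'cuhk03' in file_path:
        prefix = 'cuhk'
    else:
        prefix = file_path.split('/')[1]
    return prefix + '_' + str(pid)

# Relative paths with the dataset name as the second component: no collision.
print(get_pids('datasets/market1501/0001_c1.jpg', 1))  # market1501_1
print(get_pids('datasets/dukemtmc/0001_c1.jpg', 1))    # dukemtmc_1

# Absolute paths: split('/')[1] is 'data' for both, so equal raw ids collide.
print(get_pids('/data/market1501/0001.jpg', 1) ==
      get_pids('/data/dukemtmc/0001.jpg', 1))  # True
```

If your dataset paths hit the colliding case, that would explain the relabel operation merging ids across datasets.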
@L1aoXingyu When I use vim to open the file events.out.tfevents.1594058219.ubuntu.19110.0, it shows garbled bytes! So what is the right way to open this file?
This should be opened with TensorBoard, using the command tensorboard --logdir dir-path/to/file
Your CommDataset is too old; you should check the latest version, as there have been many updates.
Ok, thanks for your reply! I use TensorBoard to observe the rank curve and the cls_accuracy curve. Another question: how can this project be used to train fine-grained classification? Especially when some classes have only four or fewer images, as in the ReID task.
This might be called long-tailed classification rather than fine-grained classification. Actually, it's very easy to make fastreid support classification problems; you only need to define a custom classifier head. We will support it very soon.
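On the long-tailed aspect (classes with only four or fewer images), one common idea is class-balanced sampling: rare classes are re-sampled with replacement so every class contributes equally per epoch. A stdlib-only sketch of the idea (`balanced_epoch` is a hypothetical helper, not a fastreid API):

```python
# Class-balanced epoch construction: each class is sampled the same number
# of times, re-using images from rare classes (sampling with replacement).
import random
from collections import defaultdict

def balanced_epoch(labels, per_class=4, rng=random):
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    epoch = []
    for y, idxs in by_class.items():
        epoch.extend(rng.choices(idxs, k=per_class))
    return epoch

labels = [0] * 100 + [1] * 3 + [2] * 2   # one head class, two tiny tail classes
idxs = balanced_epoch(labels, per_class=4)
print(len(idxs))  # 12: every class contributes exactly 4 samples
```

fastreid's identity samplers for ReID training follow a similar "K instances per identity" principle, which is why ReID-style sampling transfers reasonably well to long-tailed classification.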
It seems you were using the MGN arch. Actually, if you want to boost the performance on Duke, you'd better add more datasets.
Hi, can you share your training parameters? I combined Market, DukeMTMC, CUHK, and PRID, and got 73% rank-1.
Any update on this issue?
This issue is stale because it has been open for 30 days with no activity.
This issue was closed because it has been inactive for 14 days since being marked as stale.
How can I display the evaluation results for multiple datasets in a single table? When I evaluate on multiple datasets, the results are shown separately.
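Until such a feature exists, one workaround is to collect the per-dataset metrics that fastreid logs one dataset at a time and print a single aligned table yourself. A stdlib-only sketch (`results_table` is a hypothetical helper; the numbers are taken from the log earlier in this thread):

```python
# Merge per-dataset evaluation metrics into one aligned plain-text table.

def results_table(results, metrics):
    lines = ["dataset".ljust(16) + "".join(m.rjust(8) for m in metrics)]
    for name, vals in results.items():
        lines.append(name.ljust(16) + "".join(f"{vals[m]:8.1f}" for m in metrics))
    return "\n".join(lines)

results = {
    "LargeVehicleID": {"Rank-1": 77.7, "Rank-5": 90.3, "mAP": 83.3},
    "SmallVeRiWild": {"Rank-1": 93.6, "Rank-5": 97.3, "mAP": 74.1},
}
print(results_table(results, ["Rank-1", "Rank-5", "mAP"]))
```

In practice you would fill the `results` dict by parsing the "Evaluation results ... in csv format" blocks from log.txt, one entry per test dataset.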
We can get an excellent model by training on each dataset separately, but how do we get a more general model that supports multiple datasets?