dyhBUPT / BUPTCampus

[TIFS 2023] Video-based Visible-Infrared Person Re-Identification with Auxiliary Samples
Apache License 2.0

Dataset #2

Closed: Velpro-collab closed this issue 10 months ago

Velpro-collab commented 11 months ago

Hello, may I ask: does the MITML method use the auxiliary set?

dyhBUPT commented 11 months ago

Hi, in our experiments, the auxiliary set is not used by any of the other SOTA methods.

Velpro-collab commented 11 months ago

> Hi, in our experiments, the auxiliary set is not used by any of the other SOTA methods.

One more question: do the query and gallery sets contain both visible and infrared samples?

Velpro-collab commented 11 months ago

For the other SOTA methods, are the training and testing procedures the same as in your open-source code, or do you use each SOTA method's own training procedure?

dyhBUPT commented 11 months ago

> Hi, in our experiments, the auxiliary set is not used by any of the other SOTA methods. One more question: do the query and gallery sets contain both visible and infrared samples?

I don't quite understand your question. Perhaps you can refer to our code for the details.

dyhBUPT commented 11 months ago

> For the other SOTA methods, are the training and testing procedures the same as in your open-source code, or do you use each SOTA method's own training procedure?

We reimplemented all the SOTA methods using their own training code.

Velpro-collab commented 11 months ago

Hello, the dataset you built is very valuable for research on video-based visible-infrared person re-identification. The MITML paper is the work of a senior student in our lab, and we would also like to test it on your dataset so we can compare against it in our follow-up work. However, our reproduction results are very poor. Could you open-source your code for the MITML method? We would be very grateful if you could. My email: 211861921@qq.com

dyhBUPT commented 11 months ago

Thanks for your interest in our work. MITML is an excellent work that greatly inspired ours. I'm sorry, but I haven't cleaned up the code for these reimplemented SOTA methods. You can try converting BUPTCampus to the 'HITSZ-VCM' style first; then you can conveniently train MITML on BUPTCampus with minor modifications. Best wishes.
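In case it helps, below is a minimal, purely hypothetical sketch of such a conversion. The tuple layouts on both sides are assumptions for illustration only (they are not the actual item formats of either codebase), so treat it as a template rather than a working adapter:

```python
# Hypothetical adapter: wrap a BUPTCampus-style dataset so each item matches
# an assumed 'HITSZ-VCM'-style tuple layout. All field orders below are
# illustrative assumptions, not the real APIs of either codebase.
from torch.utils.data import Dataset

class VCMStyleAdapter(Dataset):
    def __init__(self, bupt_dataset):
        self.base = bupt_dataset

    def __len__(self):
        return len(self.base)

    def __getitem__(self, idx):
        # Assumed BUPTCampus item: (frames, pid, camid, modality),
        # where modality is 0 for RGB and 1 for IR.
        frames, pid, camid, modality = self.base[idx]
        # Assumed VCM-style item: (frames, pid, camid), with the modality
        # folded into the camera id (e.g. RGB cams even, IR cams odd).
        return frames, pid, 2 * camid + modality
```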

Velpro-collab commented 10 months ago

```python
# distance
if opt.distance == 'cosine':
    distance = 1 - query_feats @ gallery_feats.T
else:
    distance = euclidean_dist(query_feats, gallery_feats)

CMC, MAP = [], []

# evaluate (intra/inter-modality)
for q_modal in (0, 1):
    for g_modal in (0, 1):
        q_mask = query_modals == q_modal
        g_mask = gallery_modals == g_modal
        tmp_distance = distance[q_mask, :][:, g_mask]
        tmp_qid = query_pids[q_mask]
        tmp_gid = gallery_pids[g_mask]
        tmp_cmc, tmp_ap = evaluate(tmp_distance, tmp_qid, tmp_gid, opt)
        CMC.append(tmp_cmc * 100)
        MAP.append(tmp_ap * 100)
        if show:
            print_metrics(
                tmp_cmc, tmp_ap,
                prefix='{:<3}->{:<3}:  '.format(MODALITY_[q_modal], MODALITY_[g_modal])
            )

# evaluate (omni-modality)
cmc, ap = evaluate(distance, query_pids, gallery_pids, opt)
CMC.append(cmc * 100)
MAP.append(ap * 100)
```

Hello, in this code, why are cmc and ap multiplied by 100?

dyhBUPT commented 10 months ago

It's used to scale the scores from [0, 1] to [0, 100].
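In other words, the CMC and mAP values come out of evaluate as fractions in [0, 1] and are reported as percentages. A trivial illustration with made-up numbers:

```python
cmc_rank1, ap = 0.753, 0.621       # raw scores in [0, 1]
print(cmc_rank1 * 100, ap * 100)   # reported as 75.3 and 62.1 (percent)
```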

Velpro-collab commented 10 months ago

> Thanks for your interest in our work. MITML is an excellent work that greatly inspired ours. I'm sorry, but I haven't cleaned up the code for these reimplemented SOTA methods. You can try converting BUPTCampus to the 'HITSZ-VCM' style first; then you can conveniently train MITML on BUPTCampus with minor modifications. Best wishes.

Hello, starting from your open-source code, all I did was change the data format returned by the BUPTCampus dataloader to the 'HITSZ-VCM' format, but the model's training results are unsatisfactory. May I ask what data preprocessing you applied to BUPTCampus?

dyhBUPT commented 10 months ago

Hi, no special data processing is used. Please refer to these lines for the data processing: https://github.com/dyhBUPT/BUPTCampus/blob/c29bb4879c8bd958ac0cd924c426e3057981a4a4/utils.py#L48-L64
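For orientation, a typical training transform pipeline for video re-ID looks roughly like the hedged sketch below. The input resolution and augmentations here are assumptions; the authoritative pipeline is in the linked utils.py lines:

```python
# Hedged sketch of a common video re-ID transform pipeline; the actual
# pipeline lives in utils.py (L48-L64 of the linked commit) and may differ.
import torchvision.transforms as T

train_transform = T.Compose([
    T.Resize((256, 128)),    # a common person re-ID input resolution (assumed)
    T.RandomHorizontalFlip(),
    T.ToTensor(),            # HWC uint8 -> CHW float in [0, 1]
    T.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics
                std=[0.229, 0.224, 0.225]),
])
```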