Closed: woshazhule closed this issue 4 years ago.
You can follow the steps in "In-shop Clothes Retrieval" to split the data. Specifically, "consumer" clothes should be the "query", while "shop" should be the "gallery".
I'm sorry to bother you again. But after splitting the consumer2shop retrieval dataset, I found that my number of gallery images disagrees with an ICCV paper. Could you give me some help? The paper says: "Liu et al. [34] released DeepFashion dataset. It has 239,557 images, 195,540 customer-to-shop pairs, and 45,392 gallery images." https://arxiv.org/pdf/1908.11754.pdf (ICCV 2019 oral)
I got 35,163 gallery images (shop images as gallery, train + val + test).
I got 23,393 gallery images (shop images as gallery, val + test).
I got 11,663 gallery images (shop images as gallery, test only).
I can't get 45,392 gallery images with any split. The following is my split code:
def split_img():
    # list_eval_partition.txt: first line = image count, second line = header,
    # then rows of: <consumer_img> <shop_img> <item_id> <evaluation_status>
    fn = open('list_eval_partition.txt').readlines()
    train = open('train_img.txt', 'w')
    query = open('query_img.txt', 'w')
    gallery = open('gallery_img.txt', 'w')
    idpre = None       # item id of the previous row
    imgprelist = []    # shop images already written for the current item id
    for line in fn[2:]:
        img1, img2, ind, prefix = line.split()
        idnow = ind[3:]            # drop the 'id_' prefix
        if idnow != idpre:
            imgprelist = []        # new item id: reset the shop-image dedup list
            idpre = idnow
        idnow = str(int(idnow))    # strip leading zeros, e.g. '00000218' -> '218'
        if prefix == 'train':
            train.write(f'{img1} {idnow}\n')
            if img2 not in imgprelist:
                train.write(f'{img2} {idnow}\n')
                imgprelist.append(img2)
        else:
            # 'val'/'test' rows: consumer image -> query, shop image -> gallery
            query.write(f'{img1} {idnow}\n')
            if img2 not in imgprelist:
                gallery.write(f'{img2} {idnow}\n')
                imgprelist.append(img2)
    train.close()
    query.close()
    gallery.close()

if __name__ == '__main__':
    split_img()
@woshazhule Hello, I am also doing some research on the paper you mentioned, https://arxiv.org/pdf/1908.11754.pdf (ICCV 2019 oral). I think we can communicate with each other. My email is 1456909695@qq.com. Feel free to send me an email.
Could you tell me how you split train/query/gallery? Or point out anything wrong with my split code above?
@woshazhule Hi, this is my code; with it the gallery shape comes out as (45392,):

import pandas as pd

data = pd.read_table('list_eval_partition.txt', sep='\s+', header=0, skiprows=[0])
gallery = data['image_pair_name_2']
gallery = gallery.drop_duplicates()
gallery.shape
Thank you, now I see where I was wrong. But I am more confused about the train/query split in that paper: it uses all shop images (train + val + test) as the gallery, while README.txt of the DeepFashion dataset says the gallery set consists of all the shop images in the val + test sets. Could you give me some advice?
Well, I think the query consists of the consumer photos of the test set; the train set is just used to train the model. Sorry, I don't clearly understand what you mean. Could you state your question more explicitly?
I think it's still better to follow the benchmark split, or you could try both and compare the results; the only difference is that the gallery has a few more or fewer images, nothing else changes. Also, for the query you only need to use the consumer images of the test set during testing; val should just be for evaluating the loss. That's my understanding, hope it helps.
Let me spell it out!!! The DeepFashion dataset is divided into train/val/test. Its README.txt says that val + test are used as the gallery. Following README.txt and your answer above, the train images are used for training, the consumer images in val + test are the query, and the shop images in val + test are the gallery. But that ICCV paper's gallery is the shop images from train + val + test; I am very confused about how it splits its train and query images, since its gallery already deviates from the dataset's own recommendation.
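To make the two candidate splits concrete, here is a minimal pandas sketch that builds both gallery variants. The column names image_pair_name_1/image_pair_name_2 come from the snippet above; I am assuming the status column is named evaluation_status, following the usual DeepFashion header convention:

import pandas as pd

# Skip the count line; use the second line of the file as the header.
data = pd.read_table('list_eval_partition.txt', sep='\s+', header=0, skiprows=[0])

# README.txt (benchmark) split: gallery = unique shop images of val + test.
readme_gallery = data.loc[data['evaluation_status'].isin(['val', 'test']),
                          'image_pair_name_2'].drop_duplicates()

# The ICCV paper's split: gallery = unique shop images of all three partitions.
paper_gallery = data['image_pair_name_2'].drop_duplicates()

# Query in both cases: consumer images of the test set.
query = data.loc[data['evaluation_status'] == 'test', 'image_pair_name_1']

print(len(readme_gallery), len(paper_gallery))  # should match the 23,393 vs 45,392 counts above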
@woshazhule Hi, how do you compute the top-k retrieval accuracy for Consumer-to-Shop? Is it calculated the same way as tp / relevant_num in In-shop retrieval?
The same way as in other retrieval work. By the way, do you know whether that paper first crops out the clothing region with the bounding box before doing retrieval? Did you see that anywhere?
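For reference, here is a minimal sketch of the standard top-k retrieval accuracy (the fraction of queries whose ground-truth item id appears among the k nearest gallery items). The function and variable names are illustrative, not mmfashion's API; features and ids are assumed to be NumPy arrays:

import numpy as np

def topk_accuracy(query_feats, query_ids, gallery_feats, gallery_ids, k=20):
    # L2-normalize so the dot product is cosine similarity.
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sim = q @ g.T                           # (num_query, num_gallery)
    topk = np.argsort(-sim, axis=1)[:, :k]  # indices of the k most similar gallery items
    # A query counts as a hit if any of its top-k gallery items has the right id.
    hits = (gallery_ids[topk] == query_ids[:, None]).any(axis=1)
    return hits.mean()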
The bounding box is definitely used.
I found some annotation problems. Look at the following images: clearly the same clothes, yet labeled with different ids:
img/TOPS/T_Shirt/id_00000218/shop_01.jpg
img/TOPS/Summer_Wear/id_00000559/shop_01.jpg
img/TOPS/Summer_Wear/id_00000514/shop_01.jpg
img/TOPS/T_Shirt/id_00000218/shop_02.jpg
img/TOPS/T_Shirt/id_00000598/shop_01.jpg
img/TOPS/T_Shirt/id_00005749/shop_01.jpg
img/TOPS/T_Shirt/id_00005080/shop_01.jpg
It may be because of the train/validation/test split that the same clothes get different ids in different partitions. But that paper treats the shop images of train, validation, and test all as gallery, and in my experiments this trick scores noticeably higher than the benchmark split.
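If you want to hunt for such duplicates yourself, one simple (and admittedly crude) approach is to hash the raw image bytes and group identical shop files across item ids; the root directory and the shop_* filename pattern follow the paths above. Note this only catches byte-identical copies; re-encoded but visually identical images would need perceptual hashing instead:

import hashlib
import os
from collections import defaultdict

def find_duplicate_shop_images(root='img'):
    # Group shop images by the MD5 of their raw bytes.
    groups = defaultdict(list)
    for dirpath, _, files in os.walk(root):
        for name in files:
            if name.startswith('shop'):
                path = os.path.join(dirpath, name)
                with open(path, 'rb') as f:
                    groups[hashlib.md5(f.read()).hexdigest()].append(path)
    # Keep only hashes shared by more than one file, i.e. exact duplicates.
    return {h: paths for h, paths in groups.items() if len(paths) > 1}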
Hmm, seems so, but I'd count that as label noise. I want to ask about the paper "Fashion Retrieval via Graph Reasoning Networks on a Similarity Pyramid": how is the ground-truth s it mentions computed? Also, the projection matrix P and the matrices T_in and T_out are not clearly explained in the paper.
Actually I don't understand what its ground-truth s means either. The paper makes it sound like an embedding, but conventionally the ground truth for matching is just a scalar. The projection matrix P and T_in/T_out feel like trainable matrices to me.
Yes, thanks. I also emailed the co-first author about this yesterday; he told me s is just a scalar, 0 or 1, and T_in and T_out are learnable matrices. For s, I simply appended a fully connected layer that maps the 128-dim feature to 2 dimensions and then computed the loss; I'm not sure whether that's right. I asked the author again at noon today but haven't heard back yet.
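For what it's worth, here is a minimal PyTorch sketch of the head described above: an FC layer mapping the 128-dim pair feature to 2 logits, trained against the scalar label s with cross-entropy. The module and variable names are hypothetical; this mirrors the idea in the comment, not the paper's exact implementation:

import torch
import torch.nn as nn

class MatchHead(nn.Module):
    # Map a 128-dim pair feature to 2 logits: non-match vs. match.
    def __init__(self, feat_dim=128):
        super().__init__()
        self.fc = nn.Linear(feat_dim, 2)

    def forward(self, pair_feat):
        return self.fc(pair_feat)  # (batch, 2) logits

head = MatchHead()
criterion = nn.CrossEntropyLoss()

pair_feat = torch.randn(8, 128)    # dummy 128-dim pair embeddings
s = torch.randint(0, 2, (8,))      # ground-truth s: a scalar 0 or 1 per pair
loss = criterion(head(pair_feat), s)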
Hello, do you have the street2shop dataset? Many of the image links given on the official site are dead.
I don't have the street2shop dataset either. Have you found it, man? I plan to run some supplementary experiments on it as well.
No... By the way, were you able to reproduce the results of that paper?
The DeepFashion dataset is now officially released at http://mmlab.ie.cuhk.edu.hk/projects/DeepFashion.html, and can be downloaded from https://drive.google.com/open?id=0B7EVK8r0v71pQ2FuZ0k0QnhBQnc (Google Drive) or https://pan.baidu.com/s/1PwJq0U2UPBWKkZvOR2lefQ (Baidu Drive).
Sorry, I was a bit busy recently; no, I haven't reproduced that paper.
Thank you for your excellent work. But I want to ask how you split the dataset in the consumer-to-shop retrieval experiment (as in your DeepFashion paper). In your note you say that the gallery set consists of all the shop images in the val + test sets, but I don't know what the query set is at evaluation time. Could you give some details on how to make a standard train/query/gallery split for the consumer-to-shop dataset, as for the in-shop dataset? Thank you!