Lotayou opened this issue 4 years ago
Basically we compare the features of two clothing items and decide whether they match or not. Thus, if you'd like to access the features as the retrieval results, you can save the embeddings of the query and gallery sets. Please modify `mmfashion/apis/test_retriever.py` as follows:

```python
import numpy as np  # add at the top of the file if it is not already imported


def _non_dist_test(model, query_set, gallery_set, cfg, validate=False):
    model = MMDataParallel(model, device_ids=cfg.gpus.test).cuda()
    model.eval()

    # extract embeddings for the query and gallery sets
    query_embeds = _process_embeds(query_set, model, cfg)
    gallery_embeds = _process_embeds(gallery_set, model, cfg)

    query_embeds_np = np.array(query_embeds)
    gallery_embeds_np = np.array(gallery_embeds)

    # save embeddings
    np.save('query_embeds.npy', query_embeds_np)
    np.save('gallery_embeds.npy', gallery_embeds_np)

    e = Evaluator(
        cfg.data.query.id_file,
        cfg.data.gallery.id_file,
        extract_feature=cfg.extract_feature)
    e.evaluate(query_embeds_np, gallery_embeds_np)
```
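With the embeddings saved, one way to get the actual retrieval results (rather than only the top-K recall) is to load the two `.npy` files and rank gallery items for each query. A minimal sketch, assuming the arrays are 2-D (one embedding per image), that their row order matches the id lists, and that cosine similarity is an acceptable ranking metric:

```python
import numpy as np

# load the embeddings saved by the modified _non_dist_test
query_embeds = np.load('query_embeds.npy')      # shape: (num_query, dim)
gallery_embeds = np.load('gallery_embeds.npy')  # shape: (num_gallery, dim)

# L2-normalize so that the dot product equals cosine similarity
q = query_embeds / np.linalg.norm(query_embeds, axis=1, keepdims=True)
g = gallery_embeds / np.linalg.norm(gallery_embeds, axis=1, keepdims=True)

topk = 5
sims = q @ g.T                                  # (num_query, num_gallery)
# indices of the top-k most similar gallery items for every query
topk_idx = np.argsort(-sims, axis=1)[:, :topk]

print(topk_idx[0])  # gallery indices retrieved for the first query
```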
@veralauee Thanks for the help. By the way, are the embeddings stored in the same order as the query list (and the gallery list)?
Hi @veralauee, I'm thinking about checking the best matches suggested by the provided retriever. Is there a way for me to access the retrieval results of a given query instead of just getting the top-K recall? I think such a feature would help me analyze the visual and semantic consistency between the query and the top-K suggestions, and provide useful guidance for the design of the embedding extractor and similarity metric. Much appreciated!
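For example, what I have in mind is mapping the retrieved indices back to image names so I can inspect the matches directly. A rough sketch (the `show_matches` helper and the id-file names are hypothetical placeholders for whatever `cfg.data.query.id_file` / `cfg.data.gallery.id_file` point to, assumed to contain one image path per line; `topk_idx` is the index array from the ranking sketch above):

```python
def show_matches(query_idx, topk_idx, query_ids, gallery_ids):
    """Print the image names of a query and its retrieved gallery items."""
    print('query:', query_ids[query_idx])
    for rank, gal_idx in enumerate(topk_idx[query_idx], start=1):
        print(f'  rank {rank}: {gallery_ids[gal_idx]}')

# hypothetical filenames; substitute the actual id files from the config
with open('query_id.txt') as f:
    query_ids = [line.strip() for line in f]
with open('gallery_id.txt') as f:
    gallery_ids = [line.strip() for line in f]

show_matches(0, topk_idx, query_ids, gallery_ids)
```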