ymcidence / Zero-Shot-Sketch-Image-Hashing

Zero-shot Sketch-Image Hashing

How are word vectors generated. #2

Open 1069066484 opened 5 years ago

1069066484 commented 5 years ago

There are a few points I am not sure about after reading the paper and the code.

  1. Semantic labels, i.e. the class names, are used to generate word vectors in the training phase.
  2. Semantic labels are not necessary for new unseen samples, i.e. samples not included in the training set.
  3. So neither images nor sketches are required to generate word vectors; only the corresponding class names are needed. I'm new to the field of NLP, so please correct me if I made any mistakes. Many thanks!
ymcidence commented 5 years ago

Hi there

  1. Please only use the class names for word vector extraction. There are no attributes or other types of semantic labels.
  2. The setting isn't transductive, so please do not use the unseen class names during training. At test time the labels are not actually needed, but I still store them to keep the data-reading function simple and reusable.
  3. Please google 'word vector' or refer to the work mentioned in the main paper for word vector extraction. Any word vector model pre-trained on a large-scale corpus works for ZSIH. In general, you can find implementations in both C and Python.
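For reference, a minimal sketch of the class-name-to-vector step. The toy embedding table, vector size, and class names below are illustrative assumptions; a real setup would load a pre-trained model such as word2vec or GloVe instead:

```python
import numpy as np

# Toy stand-in for a pre-trained word-embedding table (assumption:
# a real setup would load word2vec/GloVe vectors, typically dim 300).
rng = np.random.default_rng(0)
DIM = 8
embedding = {w: rng.standard_normal(DIM)
             for w in ["cat", "dog", "alarm", "clock", "airplane"]}

def class_name_to_vector(name):
    """Look up each token of a class name and average the vectors,
    so multi-word names like 'alarm clock' still map to one vector."""
    tokens = name.lower().split()
    vecs = [embedding[t] for t in tokens if t in embedding]
    if not vecs:
        raise KeyError(f"no embedding for any token in {name!r}")
    return np.mean(vecs, axis=0)

# Only class names are needed -- no images or sketches involved.
sem = {c: class_name_to_vector(c) for c in ["cat", "alarm clock"]}
print(sem["alarm clock"].shape)  # (8,)
```

Averaging token vectors for multi-word class names is one common convention; the key point is that the semantic vectors depend only on the class names, not on any visual data.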
1069066484 commented 4 years ago

Thanks for your reply. I have figured it out. I also noticed that CMT is used as a baseline in your paper, but I'm not clear on how CMT is applied to SBIR, since CMT was originally proposed for zero-shot classification. To apply the model to SBIR, as I understand the paper, two networks are trained as embedding functions that map images and sketches respectively into the semantic space; then a KNN algorithm can be used for image retrieval. Am I correct? Thanks.
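The retrieval step described above can be sketched as follows. This is a minimal illustration with random stand-in embeddings; in a real CMT-style setup the query and gallery matrices would come from the two trained embedding networks:

```python
import numpy as np

rng = np.random.default_rng(1)
DIM = 16

# Stand-ins for the outputs of the two trained embedding networks
# (assumption: in a CMT-style SBIR pipeline these would be network
# embeddings of query sketches and gallery images, both mapped into
# the shared semantic space).
query_sketches = rng.standard_normal((3, DIM))    # 3 query sketches
gallery_images = rng.standard_normal((100, DIM))  # 100 gallery images

def knn_retrieve(queries, gallery, k=5):
    """Return indices of the k nearest gallery items per query,
    ranked by cosine similarity in the semantic space."""
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = q @ g.T                       # (n_queries, n_gallery)
    return np.argsort(-sims, axis=1)[:, :k]

ranks = knn_retrieve(query_sketches, gallery_images, k=5)
print(ranks.shape)  # (3, 5)
```

Cosine similarity is used here as the distance measure; Euclidean distance in the semantic space would work the same way, with only the ranking function changed.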