Hi
Thanks for your interest in our work.
What I meant was that I cannot share the datasets with you due to license issues. Do you need the data for the Wiki and NUS-WIDE datasets? If yes, please send me your email address and I will send you a Google Drive link.
For the LabelMe and Pascal datasets, kindly download the data from the sources indicated in the paper. If you are unable to do that, contact me again and I will upload them to Drive or something similar.
For the experiments involving CNN features, you have to extract the features yourself using MatConvNet, as the data is too large to be shared.
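For reference, here is a minimal sketch of that kind of feature extraction with MatConvNet. The model file, paths, and layer index below are illustrative assumptions; they are not specified in this thread and vary across MatConvNet versions and pretrained models.

```matlab
% Minimal sketch (not necessarily the authors' actual pipeline):
% extracting penultimate-layer CNN features with MatConvNet.
run('matconvnet/matlab/vl_setupnn.m');  % add MatConvNet to the MATLAB path

net = load('imagenet-vgg-f.mat');       % a pretrained model, downloaded separately

im  = imread('example.jpg');            % one image from the dataset
im_ = single(im);                       % the network expects single precision
im_ = imresize(im_, net.meta.normalization.imageSize(1:2));
im_ = im_ - net.meta.normalization.averageImage;  % mean subtraction

res  = vl_simplenn(net, im_);           % forward pass through all layers
feat = squeeze(res(end-2).x);           % activations before the final
                                        % classifier/softmax; the exact
                                        % index depends on the network
```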
Thanks again.
Thank you for your reply, and thank you very much for providing the data. My email is 178332747@qq.com.
Is the text data you retrieved the labels of the images?
In most cases the text data consists of the tags associated with the images. Please look at the original authors' papers to get an idea of the textual modality associated with each dataset.
I will read your paper carefully.
Hi, how did you get the text features for the Pascal dataset?
Please refer to S. J. Hwang and K. Grauman, "Accounting for the relative importance of objects in image retrieval," in BMVC, 2010. On their website they have provided the dataset and the extracted features. I believe the text features I used were the absrank features. Kindly also refer to V. Ranjan, N. Rasiwasia, and C. Jawahar, "Multi-label cross-modal retrieval," in ICCV, 2015, pp. 4094–4102.
Let me know if you face any further issues.
Thank you for your help.
" I am unable to give the codes for the normal cross-modal operations due to license issues. " Can not we experiment with the code you provided? PS:Can you share the .mat data used in the experiment?