PKU-ICST-MIPL / CM-GANS_TOMM2019

Source code of our TOMM 2019 paper "CM-GANs: Cross-modal Generative Adversarial Networks for Common Representation Learning".

Is the data the features generated by VGG19 and TextCNN? #2

Open Jingyilang opened 5 years ago

Jingyilang commented 5 years ago

In the paper, you say the image/text is first passed through VGG19/TextCNN. However, I cannot find VGG19 or TextCNN in train.lua. So I guess the data used here is not the original data but features generated by VGG19 and TextCNN offline. Is that true?

PKU-ICST-MIPL commented 4 years ago

Yes, we use pre-extracted features. You can easily extract VGG19 and TextCNN features for your own datasets, as there are multiple implementations of both on GitHub.
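For reference, here is a minimal sketch of how such image features could be extracted offline, assuming PyTorch/torchvision (the repository itself uses Torch/Lua, so this is not the authors' pipeline; the fc7 layer choice and file paths are assumptions):

```python
# Sketch: extracting 4096-d VGG19 fc7 features offline with torchvision.
# Not the authors' original code; layer choice and paths are illustrative.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

vgg19 = models.vgg19(pretrained=True).eval()
# Keep the classifier layers up to fc7 (with its ReLU); drop the final fc8 layer.
fc7_head = torch.nn.Sequential(*list(vgg19.classifier.children())[:-1])

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_vgg19_feature(image_path):
    img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        conv = vgg19.features(img)                  # (1, 512, 7, 7)
        conv = vgg19.avgpool(conv).flatten(1)       # (1, 25088)
        feat = fc7_head(conv)                       # (1, 4096)
    return feat.squeeze(0).numpy()
```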

rashidbaloch commented 4 years ago

> Yes, we use pre-extracted features. You can easily extract VGG19 and TextCNN features for your own datasets, as there are multiple implementations of both on GitHub.

I have the raw Pascal Sentence dataset, with images and text across 20 classes. I am having difficulty extracting the feature vectors for images (VGG19) and text (sentence CNN). Could you please share some insights or point me to implementations for that part? I would also like to extract image features with ResNet50 and text features with an LSTM.

Thank You
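One possible starting point for the ResNet50 case is sketched below, again assuming PyTorch/torchvision rather than the repository's Torch/Lua code; the 2048-d global-average-pooling output is a common choice, but it is an assumption here, not something specified by the authors. Text features could be obtained analogously by taking the penultimate layer of whatever sentence encoder (TextCNN, LSTM) you train.

```python
# Sketch: 2048-d ResNet50 image features (global-average-pooling output),
# as one possible substitute for VGG19 features. Paths are placeholders.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

resnet50 = models.resnet50(pretrained=True).eval()
# Drop the final classification layer; keep everything up to global average pooling.
backbone = torch.nn.Sequential(*list(resnet50.children())[:-1])

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_resnet50_feature(image_path):
    img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        feat = backbone(img).flatten(1)   # (1, 2048)
    return feat.squeeze(0).numpy()
```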

daisystar commented 4 years ago

> Yes, we use pre-extracted features. You can easily extract VGG19 and TextCNN features for your own datasets, as there are multiple implementations of both on GitHub.

Which layer of TextCNN did you extract the text features from? Do they pass through a fully connected (fc) layer? Or did you simply set the number of output classes to 300?
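The thread does not answer this, but the two options being asked about can be made concrete with a small Kim-style TextCNN sketch (PyTorch, not the authors' code; all dimensions and the vocabulary size are placeholders): either take the concatenated max-pooled convolution output as the feature, or add an fc layer whose output size is set to 300.

```python
# Sketch of a TextCNN showing the two candidate feature layers:
# (a) the concatenated max-pooled conv output, or (b) an fc layer with 300 outputs.
# Illustrative only; dimensions and vocabulary size are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextCNN(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=300, num_filters=100,
                 kernel_sizes=(3, 4, 5), feature_dim=300):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.convs = nn.ModuleList(
            [nn.Conv1d(embed_dim, num_filters, k) for k in kernel_sizes])
        # fc maps the pooled features (100 * 3 = 300 here) to feature_dim.
        self.fc = nn.Linear(num_filters * len(kernel_sizes), feature_dim)

    def forward(self, tokens):                         # tokens: (batch, seq_len)
        x = self.embed(tokens).transpose(1, 2)         # (batch, embed_dim, seq_len)
        pooled = [F.relu(conv(x)).max(dim=2).values for conv in self.convs]
        pooled = torch.cat(pooled, dim=1)              # option (a): (batch, 300)
        return self.fc(pooled)                         # option (b): (batch, 300)
```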