ruotianluo / ImageCaptioning.pytorch

I decided to sync up this repo with self-critical.pytorch. (The old master is archived in the old master branch.)
MIT License

where in the code did you extract the fake_region, conv_feat, conv_feat_embed from the image? #85

Open homelifes opened 5 years ago

homelifes commented 5 years ago

Hi, in the Adaptive Attention model, the inputs to the forward function are: `def forward(self, h_out, fake_region, conv_feat, conv_feat_embed)`. Where in the code have you extracted those features? In the final core model it is written `opts`, however I cannot see any variables with these names. Waiting for your reply.

ruotianluo commented 5 years ago

https://github.com/ruotianluo/ImageCaptioning.pytorch/blob/master/models/AttModel.py#L362

homelifes commented 5 years ago

@ruotianluo Thanks for your reply. So is `att_feats` extracted by the prepro_feats.py file, which gives features of size (7, 7, 2048)? And what about `p_att_feats`? Could you tell me where we originally get it?

homelifes commented 5 years ago

@ruotianluo can you kindly answer? Thanks

ruotianluo commented 5 years ago

Yes. `p_*` means projected; it's a function of `att_feats`, used to speed things up.
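To illustrate the idea behind the answer: projecting `att_feats` once per image (rather than at every decoding step) saves a repeated matrix multiply, since the projection does not depend on the decoder state. A minimal NumPy sketch of this precomputation, assuming a (7, 7, 2048) feature map and a toy additive-attention scorer (the names `W_proj` and `attention_step` are illustrative, not the repo's actual variables):

```python
import numpy as np

np.random.seed(0)

# att_feats: 7x7 = 49 spatial locations, each a 2048-dim CNN feature
# (the (7, 7, 2048) tensor from prepro_feats.py, flattened over space)
att_feats = np.random.randn(49, 2048)

# Illustrative projection matrix standing in for a learned linear layer
W_proj = np.random.randn(2048, 512) / np.sqrt(2048)

# p_att_feats: computed ONCE per image, before decoding starts ...
p_att_feats = att_feats @ W_proj  # shape (49, 512)

def attention_step(h, p_att_feats):
    # ... so every decoding step reuses the precomputed projection
    # instead of redoing the (49 x 2048) @ (2048 x 512) matmul.
    scores = np.tanh(p_att_feats + h).sum(axis=1)   # toy additive scoring
    e = np.exp(scores - scores.max())               # stable softmax
    return e / e.sum()                              # attention weights over 49 locations

h = np.random.randn(512)          # stand-in for the decoder hidden state
weights = attention_step(h, p_att_feats)
```

In the actual repo, the same trick means the attention module receives both the raw features and their projection as separate arguments, since the projection can be hoisted out of the per-timestep loop.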