jwyang / faster-rcnn.pytorch

A faster pytorch implementation of faster r-cnn
MIT License

Getting features for each image region #27

Closed claudiogreco closed 6 years ago

claudiogreco commented 6 years ago

Hello,

is it possible to use this implementation to get features for each image region, as done in the bottom-up attention model from "Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering"?

Thank you!

jwyang commented 6 years ago

@claudiogreco, yes, we have that locally. We will add the feature extraction part to this repo.

claudiogreco commented 6 years ago

Thank you! It would be very useful for my work! Do you roughly know when you will commit this feature?

jwyang commented 6 years ago

@claudiogreco, it will be very soon, tomorrow or the day after.

claudiogreco commented 6 years ago

Hi @jwyang ,

do you have news about this feature?

Thank you!

jiasenlu commented 6 years ago

Hi @claudiogreco I'll try to upload the feature extraction part in 1 or 2 days.

dotannn commented 6 years ago

@jiasenlu - any news on this?

ObadaAljabasini commented 6 years ago

@jiasenlu any new updates?

dotannn commented 6 years ago

Hey, I've been able to amend the code to extract the features pretty easily.

In the `_fasterRCNN` class's `forward` method, also return the `pooled_feat` tensor; it contains the feature vector of each region proposal.

Return statement of `_fasterRCNN.forward()`:

```python
return rois, cls_prob, bbox_pred, rpn_loss_cls, rpn_loss_bbox, RCNN_loss_cls, RCNN_loss_bbox, rois_label, pooled_feat
```
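A caller would then unpack the extra element from the tuple. The sketch below is a minimal illustration, not the repo's actual code: `fake_forward` is a hypothetical stand-in for `fasterRCNN(im_data, im_info, gt_boxes, num_boxes)` after the patch, and the shapes (5-column RoIs, 21 classes, 2048-dim features) are assumptions chosen to match common VOC/ResNet configurations.

```python
# Hypothetical sketch: unpacking the extended forward() return.
# fake_forward mimics the patched _fasterRCNN.forward() tuple layout;
# all shapes here are illustrative assumptions, not guarantees.

def fake_forward(num_rois=4, feat_dim=2048):
    """Stand-in for fasterRCNN(...) after appending pooled_feat."""
    rois = [[0, 10, 10, 50, 50]] * num_rois      # (num_rois, 5): batch idx + box coords
    cls_prob = [[0.05] * 21] * num_rois          # per-class scores (e.g. 21 VOC classes)
    bbox_pred = [[0.0] * 84] * num_rois          # per-class box regression deltas
    pooled_feat = [[0.0] * feat_dim] * num_rois  # one feature vector per region proposal
    # Losses and labels are zeroed placeholders in this sketch.
    return rois, cls_prob, bbox_pred, 0, 0, 0, 0, None, pooled_feat

out = fake_forward()
pooled_feat = out[-1]  # the new last element added by the patch
print(len(pooled_feat), len(pooled_feat[0]))  # → 4 2048
```

Each row of `pooled_feat` is the per-region feature one would feed to a downstream captioning or VQA model, matching the bottom-up attention use case asked about above.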

jwyang commented 6 years ago

Closing this issue for now. Thanks to @dotannn!