airsplay / py-bottom-up-attention

PyTorch bottom-up attention with Detectron2
Apache License 2.0

Integration with LXMERT #6

Open johntiger1 opened 4 years ago

johntiger1 commented 4 years ago

If I want to use this repo to extract RCNN image features to train LXMERT, how can I do that? Do I just dump the features from

# Show the boxes, labels, and features
pred = instances.to('cpu')
v = Visualizer(im[:, :, :], MetadataCatalog.get("vg"), scale=1.2)
v = v.draw_instance_predictions(pred)
showarray(v.get_image()[:, :, ::-1])
print('instances:\n', instances)
print()
print('boxes:\n', instances.pred_boxes)
print()
print('Shape of features:\n', features.shape)

(from https://github.com/airsplay/py-bottom-up-attention/blob/master/demo/demo_feature_extraction_attr.ipynb)

into a .tsv file?

By the way, what is the difference between the versions with and without attributes? Thanks!
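
(For reference, a minimal sketch of what such a dump could look like, assuming the bottom-up-attention style TSV fields that LXMERT's loaders commonly expect; the helper names and zero-filled attribute columns below are placeholders, and the repo's own demo/detectron2_mscoco_proposal_maxnms.py writer should be treated as the reference.)

# Minimal sketch (not the repo's official writer): serialize one image's boxes
# and features into a bottom-up-attention style TSV row. Field names and the
# base64 encoding are assumptions based on the commonly used format; confirm
# against demo/detectron2_mscoco_proposal_maxnms.py before relying on them.
import base64
import csv
import numpy as np

FIELDNAMES = ["img_id", "img_h", "img_w", "objects_id", "objects_conf",
              "attrs_id", "attrs_conf", "num_boxes", "boxes", "features"]

def encode(arr):
    # Base64-encode a numpy array so it fits in a single TSV cell.
    return base64.b64encode(np.ascontiguousarray(arr)).decode("utf-8")

def write_row(writer, img_id, im, instances, features):
    boxes = instances.pred_boxes.tensor.cpu().numpy().astype(np.float32)  # (N, 4)
    writer.writerow({
        "img_id": img_id,
        "img_h": im.shape[0],
        "img_w": im.shape[1],
        "objects_id": encode(instances.pred_classes.cpu().numpy().astype(np.int64)),
        "objects_conf": encode(instances.scores.cpu().numpy().astype(np.float32)),
        "attrs_id": encode(np.zeros(len(boxes), dtype=np.int64)),     # fill from the attribute head if used
        "attrs_conf": encode(np.zeros(len(boxes), dtype=np.float32)),
        "num_boxes": len(boxes),
        "boxes": encode(boxes),
        "features": encode(features.cpu().numpy().astype(np.float32)),  # (N, 2048) ROI features
    })

with open("features.tsv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDNAMES, delimiter="\t")
    # for each image: run the demo extraction, then
    # write_row(writer, img_id, im, instances, features)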

airsplay commented 4 years ago

Yes, it would work well (at least in my tests).

But the best approach is to use the NMS method here: https://github.com/airsplay/py-bottom-up-attention/blob/834fa8b8123657fe6fa6b27c069015b824e07646/demo/detectron2_mscoco_proposal_maxnms.py#L54-L65
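
(Roughly, the linked lines run class-aware NMS and then keep a fixed number of top-scoring regions; the sketch below only illustrates that idea, with made-up thresholds rather than the script's exact values.)

# Rough sketch of the "max NMS" selection the linked script performs: run
# per-class NMS, record each box's best surviving class score, then keep a
# fixed number of top-scoring regions. Thresholds here are illustrative only;
# see detectron2_mscoco_proposal_maxnms.py for the actual values.
import torch
from torchvision.ops import nms

def select_regions(boxes, class_scores, num_regions=36, nms_thresh=0.3):
    # boxes: (N, 4) proposal boxes; class_scores: (N, C) scores without background.
    max_conf = torch.zeros(boxes.size(0))
    for c in range(class_scores.size(1)):
        kept = nms(boxes, class_scores[:, c], nms_thresh)    # class-aware NMS
        max_conf[kept] = torch.maximum(max_conf[kept], class_scores[kept, c])
    order = max_conf.argsort(descending=True)
    return order[:num_regions]                               # indices of the selected regions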

johntiger1 commented 4 years ago

Thank you, I will try the non-maximum suppression approach. Just curious, does this mean that other SOTA vision models could also be used in the future? R-CNN is now several years old, and I was wondering if you have experimented with more modern vision models that might give better performance.

airsplay commented 4 years ago

Hmm... This repo does not provide training; it only provides the weights converted from the original Caffe model.

You could try this and switch the backbone: https://github.com/MILVLG/bottom-up-attention.pytorch

yezhengli-Mr9 commented 3 years ago


Hi @johntiger1, a question before I finish coding my project:

How long does it take to extract features for NLVR2's 107,292 images, given that LXMERT reports around 5 to 6 hours for the training split and 1 to 2 hours for the valid and test splits?

Would you mind sharing a time estimate? Thanks.
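
(One way to get a quick number, a back-of-the-envelope sketch only: time a small sample and extrapolate. The names extract_fn and sample_images below are placeholders, not part of this repo.)

# Back-of-the-envelope estimate only: time feature extraction on a small
# sample and extrapolate to the full image set. extract_fn and sample_images
# are placeholders for whatever extraction call and image list you use.
import time

def estimate_hours(extract_fn, sample_images, total_images=107292):
    start = time.time()
    for im in sample_images:
        extract_fn(im)                        # one forward pass per image
    per_image = (time.time() - start) / len(sample_images)
    return total_images * per_image / 3600.0  # projected wall-clock hours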

yezhengli-Mr9 commented 3 years ago


Hi @johntiger1, I have found a solution to my time-estimate question and summarized it here. Thanks anyway.