aimagelab / show-control-and-tell

Show, Control and Tell: A Framework for Generating Controllable and Grounded Captions. CVPR 2019
https://arxiv.org/abs/1811.10652
BSD 3-Clause "New" or "Revised" License

how to get detection features #10

Closed a1391651300 closed 5 years ago

a1391651300 commented 5 years ago

Could you please share the code for extracting the detection features? Currently we can only run on the test dataset by downloading the pre-extracted detection features.

marcellacornia commented 5 years ago

Hi @a1391651300, thanks for your interest in our work.

We extracted the detection features by using the code of the Bottom-up Top-down paper (i.e. https://github.com/peteanderson80/bottom-up-attention).
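For readers following along: the bottom-up-attention repository saves its extracted features in a TSV format where the boxes and features columns are base64-encoded float32 arrays. As a rough, hypothetical sketch (the field names and the 2048-dim feature size are assumptions based on that repository's conventions, not code from this project), decoding such a file might look like:

```python
import base64
import csv
import sys

import numpy as np

# Assumed column layout of a bottom-up-attention TSV file (hypothetical,
# check the actual extraction script you used for the exact field names).
FIELDNAMES = ["image_id", "image_w", "image_h", "num_boxes", "boxes", "features"]


def read_detection_tsv(path, feat_dim=2048):
    """Decode each TSV row into numpy arrays of boxes and region features."""
    csv.field_size_limit(sys.maxsize)  # base64 feature strings can be very long
    items = []
    with open(path, newline="") as f:
        reader = csv.DictReader(f, delimiter="\t", fieldnames=FIELDNAMES)
        for row in reader:
            n = int(row["num_boxes"])
            # boxes: (num_boxes, 4) float32; features: (num_boxes, feat_dim) float32
            boxes = np.frombuffer(
                base64.b64decode(row["boxes"]), dtype=np.float32
            ).reshape(n, 4)
            feats = np.frombuffer(
                base64.b64decode(row["features"]), dtype=np.float32
            ).reshape(n, feat_dim)
            items.append(
                {"image_id": row["image_id"], "boxes": boxes, "features": feats}
            )
    return items
```

This only covers reading features back; extracting them for new images still requires running the detector from the bottom-up-attention repository itself.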

a1391651300 commented 5 years ago

Thank you very much for your answer. Does that mean I can run inference on new images with the detection code?