Open ohwi opened 3 years ago
I noticed a small difference in the backbone: the paper uses a ViT, while this work uses a CNN.
Hey, thanks for the feedback.
This work is inspired by Facebook AI's DETR (Detection Transformer), which does object detection with transformers.
The paper you've linked is very recent work on a similar topic, but its authors have not provided an implementation.
Thank you for your reply.
I think I understand the structure of your work. Thank you!!
Hi @saahiluppal, I am trying to understand where the object detection part is occurring in the code, and what exact algorithm you're using.
Hey, The model is not doing Object Detection at any phase.
The image is fed to a ResNet, and this backbone gives us the feature embedding along with the corresponding mask for the image. These features and the mask are then fed to the transformer, and the rest is handled by attention.
That is the versatility of the attention mechanism.
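The pipeline described above can be sketched roughly as follows. This is an illustrative toy, not the repository's actual code: a tiny CNN stands in for the ResNet backbone, its feature map is flattened into a token sequence, and a standard transformer decoder attends over those tokens to produce caption logits, with no explicit object-detection stage. All class and parameter names here are hypothetical.

```python
import torch
import torch.nn as nn

class CaptionSketch(nn.Module):
    """Hedged sketch: backbone features + transformer, attention does the rest."""

    def __init__(self, d_model=64, vocab_size=1000):
        super().__init__()
        # Stand-in for the ResNet backbone: downsamples the image
        # to a small feature map of width d_model.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, d_model, 3, stride=4, padding=1), nn.ReLU(),
            nn.Conv2d(d_model, d_model, 3, stride=4, padding=1),
        )
        self.embed = nn.Embedding(vocab_size, d_model)
        self.transformer = nn.Transformer(d_model, nhead=4,
                                          num_encoder_layers=1,
                                          num_decoder_layers=1,
                                          batch_first=True)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, images, captions, pad_mask=None):
        feats = self.backbone(images)              # (B, d, H', W')
        feats = feats.flatten(2).transpose(1, 2)   # (B, H'*W', d) image tokens
        # The optional pad_mask plays the role of the image mask:
        # it tells attention which source positions are padding.
        out = self.transformer(feats, self.embed(captions),
                               src_key_padding_mask=pad_mask)
        return self.head(out)                      # (B, T, vocab) logits

model = CaptionSketch()
logits = model(torch.randn(2, 3, 64, 64), torch.randint(0, 1000, (2, 7)))
print(logits.shape)  # torch.Size([2, 7, 1000])
```

The point of the sketch is that nothing in it detects objects: the backbone just produces embeddings, and cross-attention in the decoder decides which image tokens matter for each output word.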
PS: Recent research shows that doing "Object Detection" prior to "Image Captioning" doesn't bring any additional improvement, instead it will just increase complexity.
Hi. Would you let me know what is the paper you referenced? Thank you.
I read it in the ablation studies of some paper; I'm not sure which one. I'll share the name of the paper as soon as I come across it again.
Have you found which paper the structure of this code is based on? Thanks.
Hi. Thank you for your impressive work.
I've read your work and want to understand your model clearly.
From #2, I know there is no paper, but I found a paper similar to your work.
Does the figure below explain your work?
Thank you!