harvardnlp / var-attn

Latent Alignment and Variational Attention
https://arxiv.org/abs/1807.03756
MIT License

Exact Marginalization?? #3

Open wlj6816 opened 5 years ago

wlj6816 commented 5 years ago

It confuses me that categorical attention with exact evidence still sets inference_network_type rather than None. In my opinion, categorical attention with exact evidence (the marginal likelihood) is as follows:

[image: equation for the exact marginal likelihood]

Yet the inference network still produces the parameters of q(z) for variational attention.
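For reference, exact marginalization over a categorical alignment z just sums p(z=i|x)·p(y|z=i,x) over source positions, with no inference network involved. A minimal NumPy sketch (the function name and toy values are my own, not from the repo), computed in log space for stability:

```python
import numpy as np

def log_marginal_likelihood(attn_logits, log_py_given_z):
    """Exact marginal likelihood for categorical attention:
    log p(y|x) = log sum_i p(z=i|x) * p(y|z=i,x)."""
    # log p(z=i|x): log-softmax over the attention logits
    log_pz = attn_logits - np.log(np.sum(np.exp(attn_logits)))
    # log-sum-exp over the latent alignment z
    scores = log_pz + log_py_given_z
    m = np.max(scores)
    return m + np.log(np.sum(np.exp(scores - m)))

# toy example with 3 source positions
attn_logits = np.array([0.5, 1.2, -0.3])
log_py = np.log(np.array([0.2, 0.6, 0.1]))
print(log_marginal_likelihood(attn_logits, log_py))
```

Since the sum is exact, there is nothing for q(z) to approximate in this mode, which is consistent with the placeholder answer below.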

da03 commented 5 years ago

Yeah, it was just a placeholder, q is actually not being used in that mode.

wlj6816 commented 5 years ago

> Yeah, it was just a placeholder, q is actually not being used in that mode.

Thank you very much. I am working on a task similar to the one in your paper and need to compare our performance against your method. However, you do not apply any of the commonly used techniques for improving VQA performance, such as "Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering". I need some details about the VQA experiments. Could I get the VQA code for this paper? Can I add you on WeChat?