jz462 / Large-Scale-VRD.pytorch

Implementation for the AAAI2019 paper "Large-scale Visual Relationship Understanding"
https://arxiv.org/abs/1804.10660
MIT License

Interpret object, subject and relationship embeddings #24

Open achireistefan opened 4 years ago

achireistefan commented 4 years ago

Hello all,

I have been struggling for a while to interpret the object, subject, and relationship embeddings extracted as described by @jz462 in issue 8.

I am trying to look up the nearest word with `similar_by_vector` from the gensim word2vec API, but the provided embeddings have len=1024 while the Google word2vec model uses vectors of len=300. I did not manage to adapt the Google word2vec model to interpret the provided embeddings, so maybe I have got this all wrong and that is not the right approach. Is there a model for interpreting these embeddings? If not, what is the correct way to interpret the results so I can visualize the labels as in the example images?
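For reference, what I am attempting is roughly the following nearest-neighbor lookup. This is only a sketch of my idea, not the repo's code: `label_embeddings` and `labels` are hypothetical placeholders for a matrix of per-class embeddings in the same 1024-dim space as the model outputs (which is exactly what I don't know how to obtain):

```python
import numpy as np

def nearest_labels(query, label_embeddings, labels, k=3):
    """Return the k labels whose embeddings are most cosine-similar to query.

    query: (D,) embedding extracted from the model (D=1024 here).
    label_embeddings: (N, D) matrix of per-class embeddings in the SAME
        space as query -- hypothetical; this is the part I am missing.
    labels: list of N class names aligned with label_embeddings rows.
    """
    q = query / np.linalg.norm(query)
    m = label_embeddings / np.linalg.norm(label_embeddings, axis=1, keepdims=True)
    sims = m @ q                      # cosine similarities, shape (N,)
    top = np.argsort(-sims)[:k]       # indices of the k most similar labels
    return [(labels[i], float(sims[i])) for i in top]

# Toy demo with D=4 instead of 1024, just to show the intended usage:
labels = ["person", "dog", "car"]
label_embeddings = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
])
query = np.array([0.1, 0.9, 0.0, 0.0])
print(nearest_labels(query, label_embeddings, labels, k=2))
```

With the Google word2vec vectors this fails because their dimensionality (300) does not match the model's embedding dimensionality (1024), which is why I suspect the lookup has to be done against embeddings taken from this model itself.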

All the best, Stefan.