KieranLitschel / XSWEM

A simple and explainable deep learning model for NLP.
MIT License

Implement global explainability for word embedding components #5

Closed KieranLitschel closed 3 years ago

KieranLitschel commented 3 years ago

In section 4.1.1 of the original paper, the authors proposed a method for interpreting the components of the embeddings learned by SWEM-max. We should implement this method in XSWEM.

To do this, we first need to implement a function that generates a histogram of a user's word embedding values, so they can confirm whether the embeddings learned by their model are also sparse. Second, we need to implement a function that returns the n words with the largest value for each embedding component (n=5 should be the default).
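A minimal sketch of the two functions described above, assuming the trained embeddings are available as a NumPy array of shape `(vocab_size, embedding_dim)` alongside a vocabulary list (the function names here are hypothetical, not the final XSWEM API):

```python
import numpy as np


def embedding_value_histogram(embeddings, bins=50):
    # Bin every embedding value so the user can inspect the
    # distribution and confirm sparsity (most values near zero).
    # Returns bin counts and bin edges, ready for plotting.
    counts, edges = np.histogram(embeddings.ravel(), bins=bins)
    return counts, edges


def top_n_words_per_component(embeddings, vocab, n=5):
    # For each embedding component (column), return the n words
    # with the largest values, as proposed in section 4.1.1 of
    # the original paper. Output is a list of n-word lists, one
    # per component, ordered largest value first.
    top_idx = np.argsort(-embeddings, axis=0)[:n]  # shape (n, dim)
    return [
        [vocab[i] for i in top_idx[:, component]]
        for component in range(embeddings.shape[1])
    ]
```

The histogram function deliberately returns raw counts rather than a figure, so plotting (e.g. with matplotlib) stays a thin layer on top and the core logic is easy to test.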

KieranLitschel commented 3 years ago

We have implemented most of the required functionality in branch #2-Implement-global-explainability-for-word-embedding-components. However, the results are not as expected. We suspect this may be because our implementation deviates from the original paper, as described in issues #6 and #7, so this issue is on hold until those issues are fixed.

KieranLitschel commented 3 years ago

Issue #7 added an option to adapt_embeddings. We need to update the method implemented for this issue to support that functionality.