Closed KieranLitschel closed 3 years ago
We have implemented most of the required functionality in branch #2-Implement-global-explainability-for-word-embedding-components, but the results are not as expected. We suspect this is because our implementation deviates from the original paper, as described in issues #6 and #7, so this issue is on hold until those are fixed.
In section 4.1.1 of the original paper, the authors proposed a method for interpreting the components of the embeddings learned by SWEM-max. We should implement this method in XSWEM.
To do this, we first need to implement a function that generates a histogram of the values in the word embedding matrix, so that users can confirm whether the embeddings learned by their model are also sparse. Second, we need to implement a function that returns the n words with the largest value for each embedding component (n=5 should be the default).
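A minimal sketch of the two functions described above, assuming the embeddings are available as a NumPy array of shape `(vocab_size, embedding_dim)` and the vocabulary as a list of words indexed consistently with the rows. The function names and signatures here are hypothetical, not the final XSWEM API:

```python
import numpy as np

def embedding_histogram(embeddings, bins=50):
    """Bin all values in the embedding matrix so sparsity shows up
    as a spike of counts near zero. Returns (counts, bin_edges),
    which can be passed to a plotting library for visualisation."""
    return np.histogram(np.asarray(embeddings).ravel(), bins=bins)

def top_n_words_per_component(embeddings, vocab, n=5):
    """For each embedding component (column), return the n vocabulary
    words with the largest values, in descending order of value."""
    embeddings = np.asarray(embeddings)
    top = {}
    for d in range(embeddings.shape[1]):
        # Indices of the rows sorted by this component, largest first.
        idx = np.argsort(embeddings[:, d])[::-1][:n]
        top[d] = [vocab[i] for i in idx]
    return top
```

For plotting, the `(counts, bin_edges)` pair from `embedding_histogram` could be rendered with e.g. `matplotlib.pyplot.stairs(counts, bin_edges)`, keeping the computation separate from the visualisation.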