Open lttsmn opened 3 years ago
Yeah! I tried the same sort of thing on XLM/XLM-Roberta, expecting that the embeddings of word-translation pairs would align, which is described as a universal embedding space in much of the literature. However, it turns out that intra-language clustering is much stronger than cross-lingual semantic clustering. I haven't tried the visualization on Oscar. The implementation details of the t-SNE visualization don't seem to be well explained in the paper. I'd appreciate it if anyone who manages to reproduce the result could share the code :P
Hello, I tried to reduce the dimensionality of the text and image features with t-SNE, but the resulting image and text points don't fall in the same range, and text and images of the same class don't cluster together. Did you apply any processing or dimensionality reduction to the features before visualization? Would you mind sharing the code for the 2D t-SNE visualization? Thanks!
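Not the paper's actual code, but a common cause of the "different ranges" symptom is running t-SNE on each modality separately. A minimal sketch of one way to do a joint visualization: concatenate both feature matrices and fit a single t-SNE, optionally L2-normalizing first so norm differences between modalities don't dominate. The random features here are placeholders for real model outputs.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Placeholder features standing in for model outputs (e.g. text and
# image-region embeddings of the same dimensionality).
text_feats = rng.normal(size=(50, 768))
image_feats = rng.normal(size=(50, 768))

# Stack both modalities so they are embedded into ONE shared 2-D space;
# fitting t-SNE per modality gives incomparable coordinate systems.
all_feats = np.vstack([text_feats, image_feats])

# L2-normalize so cosine-like geometry dominates; feature norms often
# differ systematically between text and image encoders.
all_feats /= np.linalg.norm(all_feats, axis=1, keepdims=True)

emb = TSNE(n_components=2, perplexity=30, init="pca",
           random_state=0).fit_transform(all_feats)

text_2d, image_2d = emb[: len(text_feats)], emb[len(text_feats):]
print(text_2d.shape, image_2d.shape)  # (50, 2) (50, 2)
```

The two halves of `emb` can then be scattered with different markers/colors; since they came from one fit, their axes are directly comparable.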