naver-ai / pcme

Official PyTorch implementation of "Probabilistic Cross-Modal Embedding" (CVPR 2021)

Visualization of representations #10

Open futakw opened 6 months ago

futakw commented 6 months ago

Hi, thanks for the valuable work!

I have a question about the visualization shown in Figure 5 of the paper.

How exactly did you visualize the distributional representations? What algorithm did you use, and how did you obtain the N% confidence regions?

SanghyukChun commented 6 months ago

Hi, I trained 2-D embeddings directly on the subsampled classes, because I do not fully trust visualization methods such as t-SNE. Instead of projecting high-dimensional embeddings, I trained the model with embedding dimension 2. Please check the details in Section C.3 of the paper.
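
For intuition only, here is a minimal sketch of what "training with embedding dimension 2" could look like. This is not the PCME code; the class name `ProbEmbeddingHead2D` and the feature dimension are hypothetical, and the real setup is described in Section C.3 and the repository.

```python
import torch
import torch.nn as nn

class ProbEmbeddingHead2D(nn.Module):
    """Illustrative head mapping backbone features to a 2-D Gaussian
    embedding (mu, sigma). Hypothetical sketch, not the PCME implementation."""

    def __init__(self, feat_dim: int, embed_dim: int = 2):
        super().__init__()
        self.mu = nn.Linear(feat_dim, embed_dim)         # Gaussian mean
        self.log_sigma = nn.Linear(feat_dim, embed_dim)  # log std for numerical stability

    def forward(self, feats: torch.Tensor):
        mu = self.mu(feats)
        sigma = self.log_sigma(feats).exp()
        return mu, sigma

# Hypothetical usage: 2-D embeddings can be plotted directly, no t-SNE needed.
head = ProbEmbeddingHead2D(feat_dim=512)
mu, sigma = head(torch.randn(8, 512))
print(mu.shape, sigma.shape)  # torch.Size([8, 2]) torch.Size([8, 2])
```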

The confidence regions are simply obtained from the definition of a Gaussian and its confidence intervals: https://en.wikipedia.org/wiki/Confidence_interval

You can find the z-scores for popular confidence levels: http://www.ltcconline.net/greenl/courses/201/estimation/smallConfLevelTable.htm

For example, for a confidence level of 0.99, I draw a circle with center $\mu$ and radius $2.58 \sigma$.
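
A minimal matplotlib sketch of that recipe, assuming one isotropic 2-D Gaussian per embedding; the function name `plot_confidence_circle` and the example means/sigmas are just for illustration:

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_confidence_circle(mu, sigma, z=2.58, ax=None, **kwargs):
    """Draw the confidence region of an isotropic 2-D Gaussian as a circle
    centered at mu with radius z * sigma (z = 2.58 for the 99% level)."""
    if ax is None:
        ax = plt.gca()
    ax.add_patch(plt.Circle(mu, z * sigma, fill=False, **kwargs))
    ax.plot(*mu, marker="o", **kwargs)  # mark the mean itself
    return ax

# Hypothetical example: two embeddings with different uncertainties.
fig, ax = plt.subplots()
plot_confidence_circle(np.array([0.0, 0.0]), sigma=0.3, color="tab:blue")
plot_confidence_circle(np.array([1.5, 0.5]), sigma=0.6, color="tab:orange")
ax.set_aspect("equal")
ax.autoscale_view()
plt.show()
```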

futakw commented 6 months ago

Thanks for the very quick and detailed reply! This helps me a lot.