psiz-org / psiz

A python package for inferring psychological embeddings.
https://psiz.org
Apache License 2.0

Exploiting low rank and triangle inequality #11

Open rgerkin opened 4 years ago

rgerkin commented 4 years ago

@roads I might be able to figure this out by exploring the code, but does this in any way exploit low dimensionality and the fact that the matrix is a distance matrix? For example, if I take a real distance matrix and add noise to it, then do MDS and make a new distance matrix from the embeddings, most of the noise goes away (for a sufficiently large number of points relative to the number of dimensions). This is MDS exploiting geometry and "fixing" noisy points so they don't violate the triangle inequality. Similarly, low-rank distance matrix completion schemes can denoise even the observed points, since most of the noise reflects irrelevant dimensions. So I'd like to figure out whether I am also getting those benefits for free in PsiZ, or whether those are things I would need to try during or after PsiZ has computed its embeddings.
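For concreteness, here is a toy sketch of the kind of denoising I mean, using scikit-learn's MDS rather than PsiZ (the matrix size, dimensionality, and noise level are arbitrary choices for the example):

```python
# Toy sketch (scikit-learn, not PsiZ): denoise a distance matrix by embedding
# it with metric MDS and recomputing distances from the embedding.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
n_points, n_dim = 100, 3

# Ground-truth configuration and its distance matrix.
x_true = rng.normal(size=(n_points, n_dim))
d_true = squareform(pdist(x_true))

# Observed matrix: add symmetric noise and keep it distance-like.
noise = rng.normal(scale=0.2, size=d_true.shape)
d_noisy = np.clip(d_true + (noise + noise.T) / 2.0, 0.0, None)
np.fill_diagonal(d_noisy, 0.0)

# Embed the noisy matrix, then rebuild a distance matrix from the embedding.
mds = MDS(n_components=n_dim, dissimilarity='precomputed', random_state=0)
x_hat = mds.fit_transform(d_noisy)
d_denoised = squareform(pdist(x_hat))

# The rebuilt distances are typically closer to the ground truth than the
# noisy observations are.
iu = np.triu_indices(n_points, k=1)
print('mean abs error, noisy   :', np.abs(d_noisy[iu] - d_true[iu]).mean())
print('mean abs error, denoised:', np.abs(d_denoised[iu] - d_true[iu]).mean())
```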

roads commented 4 years ago

With PsiZ, you get something similar for free at the level of similarity matrices.

In ranking judgments, the similarities are not directly observed and are implicitly captured by the model. There isn't a way to directly de-noise a matrix. What you can do is generate noisy rankings from a known ground-truth model and compare an inferred similarity matrix to a ground-truth similarity matrix. As you add more data, the two will become highly correlated. There's a script called examples/one_group_convergence.py that does just this.
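To sketch just the comparison step (the inferred embedding itself comes from PsiZ; see the example script for the real workflow), here is roughly what it looks like. The exponential kernel and parameter names below are illustrative assumptions, not PsiZ's API:

```python
# Sketch of the comparison step only; the inferred embedding would come from
# PsiZ (see examples/one_group_convergence.py). The exponential kernel and
# parameter names here are assumptions for illustration.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import pearsonr

def similarity_matrix(z, rho=2.0, tau=1.0, beta=1.0):
    """Pairwise similarity from an embedding z of shape (n_points, n_dim)."""
    d = squareform(pdist(z, metric='minkowski', p=rho))
    return np.exp(-beta * d**tau)

def matrix_correlation(s_true, s_inferred):
    """Pearson correlation over the upper triangles of two similarity matrices."""
    iu = np.triu_indices(s_true.shape[0], k=1)
    return pearsonr(s_true[iu], s_inferred[iu])[0]

# Usage: z_true is the ground-truth embedding, z_inferred is the estimate.
# r = matrix_correlation(similarity_matrix(z_true), similarity_matrix(z_inferred))
```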

I think the problem boils down to the strong assumption of classical MDS: that distance and dissimilarity are equivalent. The first issue is that PsiZ operates on similarities rather than dissimilarities. Second, PsiZ's predefined similarity functions are all non-linear.

Issue #1: With pairwise similarity rating judgments, it isn't difficult to convert from dissimilarity to similarity by simply subtracting the dissimilarity from a constant. When collecting ranking judgments, it doesn't make much sense to talk about dissimilarity, since similarity/dissimilarity is never directly observed.
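A trivial sketch of that conversion, assuming the rating scale's maximum plays the role of the constant:

```python
# Toy conversion for pairwise rating data: similarity = constant - dissimilarity.
# Using the scale maximum as the constant is an assumption about the rating scale.
import numpy as np

dissimilarity = np.array([[0.0, 2.0, 5.0],
                          [2.0, 0.0, 4.0],
                          [5.0, 4.0, 0.0]])
scale_max = 5.0
similarity = scale_max - dissimilarity  # identical items (dissimilarity 0) get the maximum similarity
```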

Issue #2: In my view, the triangle inequality exerts "pressure" on the similarity/dissimilarity relations. In classical MDS, this is equivalent to exerting pressure on the distance relations. When using a non-linear similarity function, the triangle inequality does not exert the same pressure on the distance matrix. With a non-linear function like an exponential, the triangle inequality will exert pressure on a local neighborhood of points (in distance space), but will exert less pressure on points that are far apart.
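A quick numeric illustration of that last point, assuming an exponential similarity s = exp(-beta * d) purely for the sake of the example:

```python
# With s = exp(-beta * d), the same perturbation of distance changes the
# similarity a lot for nearby points and almost not at all for distant ones.
import numpy as np

def sim(d, beta=1.0):
    return np.exp(-beta * d)

delta = 0.5  # identical distance perturbation in both cases
print(sim(1.0) - sim(1.0 + delta))    # nearby pair: ~0.14, a substantial change
print(sim(10.0) - sim(10.0 + delta))  # distant pair: ~2e-5, essentially no change
```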

I hope this helps. Please let me know if there is anything else I can clarify.