DataWaveAnalytics opened this issue 7 years ago
Mostly I've been using trustworthiness over varying neighborhood sizes, which, I agree, is not tractable for large data sets. With enough compute time (and some judicious pre-computation and code tuning with Cython or Numba) I can manage, for example, the full MNIST test set. That means that, for now at least, I am only really comparing on small datasets. It is notable, however, that this is true of almost all the literature on manifold-based dimension reduction techniques, so I am at least comparable with the rest of the literature.
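For concreteness, a minimal sketch of that kind of sweep using scikit-learn's trustworthiness function (the digits data and the neighborhood sizes below are placeholder choices of mine, not what was actually used):

```python
import umap
from sklearn.datasets import load_digits
from sklearn.manifold import trustworthiness

# Small stand-in dataset; the full MNIST test set needs far more compute.
X, _ = load_digits(return_X_y=True)

embedding = umap.UMAP(random_state=42).fit_transform(X)

# Trustworthiness over varying neighborhood sizes.
for k in (5, 10, 20, 40):
    t = trustworthiness(X, embedding, n_neighbors=k)
    print(f"k={k:3d}  trustworthiness={t:.3f}")
```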
On Tue, Oct 10, 2017 at 10:32 AM, Claudio Sanhueza wrote:
Hello Leland,
Thank you for sharing this new algorithm. I have a question regarding evaluation measures of dimensionality reduction methods. I'm aware of trustworthiness and continuity, but I'm looking for measures that can handle large datasets.
I found the paper "Scale-independent quality criteria for dimensionality reduction" (https://perso.uclouvain.be/michel.verleysen/papers/patreclet10jl.pdf), which is an alternative quality measure, but it is still for small datasets.
How are you evaluating umap against other approaches at the moment?
Thanks for the hints, Leland. I will try to implement my version of the metrics for large datasets or, at least, a methodology to do it with what is available.
BTW, there is a new MNIST-like dataset called "Fashion-MNIST" (https://github.com/zalandoresearch/fashion-mnist), released in August 2017, if you want to test on it. The authors argue we should move away from MNIST for testing new algorithms (see the link for details).
I have tried UMAP on fashion MNIST. It does not magically separate the 10 classes (at least not as tidily as it does with digits), but the classes that it mixes are reasonable under the circumstances. It is, certainly, a more interesting dataset on which to try such algorithms.
I got these visualizations of Fashion-MNIST with t-SNE and LargeVis (train + test = 70,000 points, using labels only for the colors). LargeVis looks better visually, but when I evaluate both embeddings with a kNN classifier (10 runs for each training percentage, reporting the mean), t-SNE is the better embedding for this task. Should I trust my eyes or the numbers?
Do you have a visualization using UMAP that you can share?
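For anyone who wants to run the same kind of check, here is a rough sketch of a kNN evaluation of a 2-D embedding; the split fraction, k, and number of runs below are placeholder choices, not the ones used for the numbers above:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def knn_score(embedding, labels, train_fraction=0.1, k=5, runs=10, seed=0):
    """Mean kNN accuracy on an embedding over several random splits."""
    rng = np.random.RandomState(seed)
    scores = []
    for _ in range(runs):
        X_tr, X_te, y_tr, y_te = train_test_split(
            embedding, labels, train_size=train_fraction,
            stratify=labels, random_state=rng.randint(2**31 - 1))
        clf = KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr)
        scores.append(clf.score(X_te, y_te))
    return np.mean(scores)

# Compare, e.g., knn_score(tsne_embedding, labels) against
# knn_score(largevis_embedding, labels) on the same label vector.
```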
I can dig up a visualization, I think. It is closer to LargeVis in appearance, but still a little different. As to what to trust: I think you have to trust both to some extent. t-SNE does do some things right, and those accuracy curves do matter, so despite the clearly better appearance of LargeVis there seems to be something deceptive going on underneath it all.
Here's what UMAP did:
As I said, it is more similar to LargeVis. It is worth noting that UMAP has kept together some of the groups that LargeVis split into multiple blobs (the royal blue category in your LargeVis plot, equivalent to the pale purple in the UMAP plot, for example). I wonder if that may affect the kNN-classifier accuracy?
I also find the banding of three classes quite interesting; the fact that all three algorithms reproduced it gives me confidence that it probably isn't an artifact of the reduction but an actual property of the data. If so ... that's quite intriguing.
I tweaked the min_dist parameter (which defines how closely the embedding should pack points together in the embedded space) to compress things less (and hence resemble the t-SNE result more) and got this:
Still very similar (up to rotation) but less aggressive in separating clusters and showing a little more of the interconnected structure. I believe this would almost certainly embed a whole lot better in 3 or 4 dimensions.
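For reference, the knob being discussed is the min_dist argument of the UMAP constructor. A small illustrative sketch follows; the digits data and the specific values are stand-ins of mine, not what was used for the plots above:

```python
import umap
from sklearn.datasets import load_digits

X, _ = load_digits(return_X_y=True)  # stand-in; the plots above use Fashion-MNIST

# Smaller min_dist packs points into tight clumps; larger values keep more of
# the interconnected structure visible (closer to the t-SNE look).
tight = umap.UMAP(min_dist=0.05, random_state=42).fit_transform(X)
loose = umap.UMAP(min_dist=0.5, random_state=42).fit_transform(X)

# Embedding into 3 (or 4) output dimensions instead of 2:
emb_3d = umap.UMAP(n_components=3, random_state=42).fit_transform(X)
```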
Thank you for sharing; UMAP is doing great (visually). I definitely need to study the details of your implementation. Are you planning to submit a preprint soon? (Just trying to decide whether I should wait for your write-up or jump into implementing it myself.)
I believe a less aggressive separation would lead to better k-NN classifier performance, but we should evaluate with trustworthiness and continuity anyway (or other measures, like the scale-independent criteria).
I'm struggling to find time to shore up all the math and get the preprint done (because I really want sound explanations of why things work, which means getting good explanations well hammered out). It will be a little while yet, unfortunately. The code may be a little hard to follow, but check the numba branch, as that has code that is perhaps easier to wrap one's head around. The preprint will probably help rather a lot, though. Thanks for the extra reminder that I really need to get to work on getting that done.
Is it too much to ask for a code example showing an implementation of "trustworthiness" and "continuity"? I'm trying to evaluate the quality of dimensionality reductions obtained from t-SNE.
Any help would be greatly appreciated!
There's something in https://github.com/lmcinnes/umap/blob/master/umap/validation.py and you can see https://github.com/scikit-learn/scikit-learn/blob/ccd3331f7eb3468ac96222dc5350e58c58ccba20/sklearn/manifold/t_sne.py#L394 for a (semi-canonical) implementation.
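As a usage note, the scikit-learn implementation linked above is also exposed as sklearn.manifold.trustworthiness; a short sketch is below. The argument-swap trick for continuity is a common convention rather than something taken from either codebase, so treat it as an assumption:

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.manifold import trustworthiness

X_high, _ = load_digits(return_X_y=True)
X_low = PCA(n_components=2).fit_transform(X_high)  # any 2-D embedding works here

k = 10
trust = trustworthiness(X_high, X_low, n_neighbors=k)

# Common convention: continuity is trustworthiness with the two spaces swapped,
# i.e. it penalizes original-space neighbors that are lost in the embedding.
continuity = trustworthiness(X_low, X_high, n_neighbors=k)

print(f"trustworthiness={trust:.3f}  continuity={continuity:.3f}")
```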
Are "Trustworthiness" and "continuity" still the two best measures for evaluating the embedding? In validation.py
, I see the parameter max_k
in the function trustworthiness_vector
. How do I choose this parameter?
Also, kind of related: you said there is some guidance on how many n_components we should choose. Any update on that? Without the metric above, I also don't know how to optimize for n_components and other parameters. TIA!
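There is no definitive recipe in this thread, but one hedged sketch of choosing n_components (or min_dist, n_neighbors) empirically is to score each candidate embedding with trustworthiness at a fixed neighborhood size; the grid, the evaluation k, and the digits data below are arbitrary choices of mine:

```python
import umap
from sklearn.datasets import load_digits
from sklearn.manifold import trustworthiness

X, _ = load_digits(return_X_y=True)

k = 15  # evaluation neighborhood size (an assumption, not a recommendation)
for n_components in (2, 3, 5, 10):
    emb = umap.UMAP(n_components=n_components, random_state=42).fit_transform(X)
    print(n_components, round(trustworthiness(X, emb, n_neighbors=k), 3))
```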
Is trustworthiness a good method for selecting UMAP parameters? It is mentioned above, but I have not seen it in any other resource.
Based on my experience, I recommend using the ZADU package for this task. It is based on the information provided in this paper.