pavlin-policar / openTSNE

Extensible, parallel implementations of t-SNE
https://opentsne.rtfd.io
BSD 3-Clause "New" or "Revised" License

t-SNE-Π #181

Closed dkobak closed 3 years ago

dkobak commented 3 years ago

Have you looked into the SG-t-SNE-Π paper? They accelerate FIt-SNE (both the repulsive and the attractive computations) using some neat tricks, so the algorithm runs ~5× faster in 2D, and in 3D it runs faster than FIt-SNE does in 2D.

The "SG" part is not relevant for openTSNE (or rather it's implemented here anyway), it's the Π part that does the acceleration.

http://t-sne-pi.cs.duke.edu/
https://github.com/fcdimitr/sgtsnepi
https://sci-hub.do/10.1109/hpec.2019.8916505
https://ieeexplore.ieee.org/document/8916505


Page 6 of the article PDF provides a concise summary of the acceleration tricks. Curiously, Π stands for "permutation": apparently, permuting the rows/columns of P to optimize the sparsity structure can by itself yield a large speed-up.

pavlin-policar commented 3 years ago

I saw it a while ago and looked at what they did; it doesn't seem straightforward. Or rather, it would require significant effort to reimplement here.

From what I can tell, the real speed-up comes from the attractive forces, and they speed these up using a special sparse format that they implemented in C++. The format is optimized for the cache, which leads to far fewer cache misses and is therefore much faster. It's implemented in a library that would be difficult to incorporate into openTSNE, and we'd likely have to reimplement it ourselves.
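To make the cache argument concrete, here is a minimal numpy/scipy sketch of the attractive-force traversal over a CSR matrix (this is an illustration, not SG-t-SNE-Π's actual format or openTSNE's implementation): each row walk reads `Y[j]` at the scattered column indices of `P`, so an ordering that clusters those indices together is what improves locality.

```python
import numpy as np
import scipy.sparse as sp


def attractive_forces(P: sp.csr_matrix, Y: np.ndarray) -> np.ndarray:
    """Naive CSR traversal of the attractive term sum_j p_ij q_ij (y_i - y_j),
    with unnormalized q_ij = 1 / (1 + ||y_i - y_j||^2).

    The reads of Y[j] follow P.indices; when a row's column indices are
    scattered, each read is a likely cache miss, which is why a permutation
    that concentrates nonzeros can pay off.
    """
    n, _ = Y.shape
    F = np.zeros_like(Y)
    indptr, indices, data = P.indptr, P.indices, P.data
    for i in range(n):
        for k in range(indptr[i], indptr[i + 1]):
            j = indices[k]
            diff = Y[i] - Y[j]          # memory access pattern set by indices
            F[i] += data[k] * diff / (1.0 + diff @ diff)
    return F
```

A real implementation would of course vectorize or move this loop to C/Cython; the sketch only shows where the cache behavior comes from.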

I'm not really sure how their QQ computation differs from FIt-SNE's, other than maybe a faster implementation. They talk about binning the data, but @linqiaozhi had actually implemented this in one of the first versions of FIt-SNE, and we threw it out because there really didn't seem to be any noticeable speed-up. Perhaps that was premature?
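For reference, "binning" here means assigning embedding points to cells of a regular grid before the interpolation step. A hypothetical sketch of what such a step could look like (this is not FIt-SNE's or SG-t-SNE-Π's actual code; `bin_points` and `n_bins` are made-up names):

```python
import numpy as np


def bin_points(Y: np.ndarray, n_bins: int):
    """Assign 2D embedding points to an n_bins x n_bins grid.

    Sorting points by their cell id groups spatially nearby points
    together in memory, which is the locality benefit binning aims for.
    """
    lo, hi = Y.min(axis=0), Y.max(axis=0)
    # Map coordinates to integer cell indices in [0, n_bins - 1]
    cells = np.floor((Y - lo) / (hi - lo + 1e-12) * n_bins).astype(int)
    cells = np.clip(cells, 0, n_bins - 1)
    # Stable sort by flattened cell id gives a locality-preserving order
    order = np.argsort(cells[:, 0] * n_bins + cells[:, 1], kind="stable")
    return order, cells
```

Whether reordering alone buys anything depends on the data layout and on how the downstream loops consume the points, which may be why the early FIt-SNE experiment showed no noticeable win.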

dkobak commented 3 years ago

> From what I can tell, the real speed up comes from the attractive forces and the way they speed this up is by using a special sparse format that they implemented in C++. This format is optimized for the cache, which leads to much fewer cache misses, and is therefore much faster. It's implemented in a library that would be difficult to incorporate into openTSNE, and we'd likely have to reimplement it ourselves.

That's definitely too much effort. I thought perhaps a speed-up could be had with scipy.sparse alone if one permutes P to make it as close to block-diagonal as possible. But I don't know.
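One cheap way to try that idea with scipy alone would be the reverse Cuthill–McKee ordering, which permutes a symmetric sparse matrix to concentrate its nonzeros near the diagonal. A sketch on a made-up random matrix (not something openTSNE does; on a kNN affinity graph with real cluster structure the effect should be more pronounced):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee

# Hypothetical sparse affinity matrix P built from random index pairs
rng = np.random.default_rng(0)
n, nnz = 1000, 5000
rows = rng.integers(0, n, nnz)
cols = rng.integers(0, n, nnz)
P = sp.csr_matrix((np.ones(nnz), (rows, cols)), shape=(n, n))
P = (P + P.T).tocsr()  # symmetrize, as RCM expects a symmetric structure

# Permutation that tends to pull nonzeros toward the diagonal
perm = reverse_cuthill_mckee(P, symmetric_mode=True)
P_perm = P[perm][:, perm]  # apply to rows and columns consistently
```

Whether the reordered matrix actually speeds up scipy.sparse matvecs in practice would need benchmarking; RCM only reorders memory accesses, it doesn't change the number of operations.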

pavlin-policar commented 3 years ago

I'm going to close this for the time being, since it's not really an issue. It would be nice to have, but it would require substantial effort, and it's non-urgent. I'll close issues like this and give them a wishlist label so they don't get lost.