Hello!
First of all, thank you for providing such a fantastic tool for fast tSNE :)
I was experimenting with large datasets using both FI-tSNE and BH-tSNE. With FI-tSNE, I am able to embed the GloVe dataset (2.2M points, 300d) on a GTX 1660 with 6GB of memory, and with a V100 32GB I can scale to more than 10 million points with 128 dimensions. With BH-tSNE, however, I saturate the 6GB of memory with only ~120K 50-dimensional points, and even on the V100 32GB I run out of memory before the GloVe dataset can be embedded.
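For reference, here is a minimal sketch of how I am invoking the embeddings through the Python bindings. The random matrix is only a stand-in for the real GloVe data, and the `tsnecuda.TSNE` constructor arguments shown are assumptions based on the current bindings rather than my exact settings (the older tags may expose a slightly different interface):

```python
import numpy as np
from tsnecuda import TSNE  # assuming the Python bindings shipped with this repo

# Stand-in for the real dataset (e.g. GloVe 2.2M x 300);
# sized here at the ~120K x 50d point where BH-tSNE saturates 6GB for me
X = np.random.randn(120_000, 50).astype(np.float32)

# Typical run; parameter names follow the sklearn-style constructor
embedding = TSNE(n_components=2, perplexity=50, learning_rate=200).fit_transform(X)
print(embedding.shape)  # (120000, 2)
```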
If I am not mistaken, the arXiv paper reports that up to 5 million 50-dimensional Gaussian points can be embedded on a 12GB GPU (NVIDIA Titan X Maxwell). However, a past issue in this repository (#28) notes that even 300,000 points with 6 dimensions is a fair size for 12GB, so I am a little confused.
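Just to frame my confusion, here is a back-of-envelope estimate of the float32 input storage alone (my own rough arithmetic, not numbers from the paper); it suggests the input matrix itself is nowhere near the limit, so the intermediate BH-tSNE buffers must be what dominates:

```python
# Rough memory estimate for the raw float32 input matrices alone
def input_gb(n_points, n_dims, bytes_per_value=4):
    return n_points * n_dims * bytes_per_value / 1024**3

print(f"GloVe 2.2M x 300d:      {input_gb(2_200_000, 300):.2f} GB")  # ~2.46 GB
print(f"Paper 5M x 50d:         {input_gb(5_000_000, 50):.2f} GB")   # ~0.93 GB
print(f"My BH case 120K x 50d:  {input_gb(120_000, 50):.3f} GB")     # ~0.022 GB
```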
I am trying to apply a custom modification I made to the BH-tSNE version of the code, and I need to experiment with very large datasets. Since the experiments reported in the paper contain millions of data points and were performed with only 12GB of memory, I was wondering whether I am using a different implementation than the paper's. Am I missing something in the paper or the repository, or do you have any tips on resolving this problem?
P.S.: I use tags 1.0 and 2.1 for the BH-tSNE and FI-tSNE source code, respectively.
Thank you very much, and have a great week! :)