elbamos / largeVis

An implementation of the largeVis algorithm for visualizing large, high-dimensional datasets, for R

randomProjectionTreeSearch gets stuck and never returns #36

Closed avanwouwe closed 7 years ago

avanwouwe commented 8 years ago

Apologies in advance if this is not a "bug" but just something I am doing wrong.

I have a data set of 423K rows and 225 dimensions. I am running the different largeVis steps separately to debug ("randomProjectionTreeSearch", "buildEdgeMatrix", "buildWijMatrix", "projectKNNs"). The first step runs at full speed for a couple of seconds, then settles into a single-threaded load (15% CPU on a 4-core machine) and never returns.

I have had the same behaviour with a similar dataset (423K rows, but 500 dimensions). In that case changing the "K" parameter prevented the issue. I have gone over the various hyperparameters but have not been able to find a setting that works for my set of 225 dimensions.

Is there any way I can debug this, so that I don't have to search the hyperparameter space at random? I have tried passing verbose = getOption("verbose", TRUE), but this does not output anything.

Any help would be appreciated. In any case, thanks for your wonderful package!

Spec: Windows 10 Pro, 16 GB RAM, Core i7-6700HQ (4 cores); largeVis 0.1.10 x64 (compiled from GitHub, though I have also tried the CRAN 32-bit version); R 3.3.2 x86_64-w64-mingw32

elbamos commented 8 years ago

There was a bug in 0.1.10 with 32-bit builds. The hotfix, version 0.1.10.2, was accepted by CRAN this past weekend. Let me know if that fixes it. If it doesn't, please cut and paste the precise code you're running.

Thanks for reporting!


avanwouwe commented 8 years ago

The 32-bit version of 0.1.10.2 currently on CRAN does not get stuck. However, I now get stuck further down the road, during the "buildWijMatrix" step:

wij = buildWijMatrix(edges, perplexity = 5)

error: SpMat::init(): requested size is too large
Error in referenceWij(is, x@i, x@x^2, as.integer(threads), perplexity) :
  SpMat::init(): requested size is too large

I imagine this is due to the 32-bit restriction? When I recompile against your current GitHub with devtools::install_github("elbamos/largeVis") to get 64-bit, the "randomProjectionTreeSearch" step gets stuck again. I figured perhaps the 64-bit version uses wider pointers and thus consumes more RAM than the 32-bit version. But at no point does the RAM used by the R process go past 12 GB, and I have 16 GB of RAM. Disk activity does not show excessive paging.

For reference, the code I am using:

library(largeVis)
index = t(index)
neighbors = randomProjectionTreeSearch(index,
                                       max_iter = 1,
                                       distance_method = "Cosine",
                                       verbose = getOption("verbose", TRUE),
                                       K = 100,
                                       n_trees = 50,
                                       tree_threshold = 100)
edges = buildEdgeMatrix(data = index, neighbors = neighbors, distance_method = "Cosine")
wij = buildWijMatrix(edges, perplexity = 5)

The "index" object is "Large matrix (211546500 elements, 839.9 Mb)", containing 423093 rows with 225 columns worth of doubles.

Is there some way to get more verbose status information, so that I can see what is going on behind the curtains?


elbamos commented 8 years ago

That's odd. Is it possible for you to share your data?

Where the 32/64-bit issue comes in is actually in allocating the matrix indices, and I wouldn't expect it to be problematic on that dataset in that function. What matters is not whether the OS is 32- or 64-bit; it's whether ARMA_64BIT_WORD is set, which controls how big a sparse matrix Armadillo is willing to make. Are you sure ARMA_64BIT_WORD was enabled when you compiled? (Just installing from GitHub won't do it; you have to add -DARMA_64BIT_WORD to your R Makevars.)
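[Editor's note: for readers setting this up, enabling the flag usually means adding a line like the following to the user-level Makevars file; this is a sketch, and the exact file name and variable depend on your platform and toolchain.]

```make
# Sketch: user-level R Makevars entry enabling Armadillo's 64-bit indices.
# On Windows this file is typically ~/.R/Makevars.win; on other platforms, ~/.R/Makevars.
PKG_CPPFLAGS += -DARMA_64BIT_WORD
```

After editing, reinstall the package from source so the flag is picked up at compile time.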

I really appreciate your help tracking this down - I haven't focused much on sparse matrices, since I seemed to be the only one interested in using that functionality.

avanwouwe commented 8 years ago

I can definitely share my data. It's 75 Mb compressed: mail me at largevis@mailinator.com so we can set up an exchange?

Though I am concerned we're talking at cross purposes: I do not believe I am using a sparse matrix as input? It is in fact the result of inference with an LDA model on pieces of text, so each row contains a combined probability distribution over 225 classes. Most of the columns are very close to zero (but not exactly) and a handful are closer to 1. The sum of the columns is 1. The "wij" matrix resulting from the "buildEdgeMatrix" step is sparse, of course.

As you can see in the code, I am reading a long vector from a binary file, and then re-dimension that to a m x n matrix with the "index = matrix(index, numRows, numCols, byrow = TRUE)".

In terms of bitness, yes I appreciate that it is the compilation that matters. I did put the make option in the Makevars file, I saw the option appear at compile time, and it stopped warning me at library loading time, so I'm reasonably sure that I coaxed it into 64 bit.

It is possible that it is just an intermittent issue, I suppose, largeVis being stochastic. I'll test a couple more times. Next step: data sharing?


elbamos commented 8 years ago

I've sent you an email about the data.

Given that these are unit vectors anyway, why not just use Euclidean?


avanwouwe commented 8 years ago

Perhaps I am misunderstanding, but I do not think these are unit vectors. Each vector consists of 225 doubles, of which most are close to 0 (but not exactly), and whose sum is always 1 (since they form a combined probability distribution, so the total is always 100%). Or are you saying it is a close enough approximation? I was afraid that due to the high dimensionality Euclidean would not work out.

In any case, I re-ran using "euclidean" and that does not seem to fix the issue.


elbamos commented 8 years ago

I'm saying the vectors for each document sum to 1. Look at the formula for cosine distance and compare it to Euclidean; you'll see what I mean.

I'll take a look at the data tonight.
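[Editor's note: the relationship being alluded to can be checked numerically. For L2-normalized vectors, squared Euclidean distance is a monotone function of cosine distance, since ||a - b||^2 = 2(1 - cos(a, b)); note, as the previous comment argues, that rows summing to 1 are L1-normalized rather than L2-normalized, which is the point of disagreement. A sketch:]

```r
# For vectors with unit L2 norm, squared Euclidean distance is an
# increasing function of cosine distance:
#   ||a - b||^2 = ||a||^2 + ||b||^2 - 2 * a.b = 2 * (1 - cos(a, b))
set.seed(42)
a <- runif(225); a <- a / sqrt(sum(a^2))  # L2-normalize
b <- runif(225); b <- b / sqrt(sum(b^2))
cosine_dist    <- 1 - sum(a * b)
euclidean_dist <- sqrt(sum((a - b)^2))
all.equal(euclidean_dist^2, 2 * cosine_dist)  # TRUE
```

So for L2-unit vectors the two metrics produce the same nearest-neighbor rankings.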


elbamos commented 7 years ago

I took a look at your data and I'm not able to reproduce the error. The 32- vs. 64-bit build should not matter at this data size. Peak RAM use was ~8 GB.

> file = file("../test-skillizr.bin", "rb")
> numRows = readBin(file, integer(), 1, endian = "big")
> numCols = readBin(file, integer(), 1, endian = "big")
> docs = readLines(file, numRows, ok = FALSE, skipNul =  TRUE)
> index = readBin(file, double(), numCols * numRows, endian = "big")
> index = matrix(index, numRows, numCols, byrow = TRUE)
> str(index)
 num [1:423093, 1:225] 7.77e-05 8.56e-05 1.95e-04 1.28e-04 5.97e-05 ...
Warning message:
closing unused connection 3 (../test-skillizr.bin) 
> index = t(index)
> library(largeVis)
> neighobrs <- randomProjectionTreeSearch(index, max_iter = 1, verbose = TRUE, K = 100, n_trees = 50, tree_threshold = 100)
Searching for neighbors.
0                                                                                                   %
|----|----|----|----|----|----|----|----|----|----|
**************************************************|
> str(neighobrs)
 num [1:100, 1:423093] 331837 42204 308194 184452 211760 ...
> edges = buildEdgeMatrix(data = index, neighbors = neighobrs, distance_method = "Cosine")
> str(edges)
Formal class 'dgCMatrix' [package "Matrix"] with 6 slots
  ..@ i       : int [1:42309300] 21237 21804 26710 42204 59873 60531 91230 93913 108898 122277 ...
  ..@ p       : int [1:423094] 0 28 52 157 202 432 462 597 935 1068 ...
  ..@ Dim     : int [1:2] 423093 423093
  ..@ Dimnames:List of 2
  .. ..$ : NULL
  .. ..$ : NULL
  ..@ x       : num [1:42309300] 0.00534 0.05356 0.00856 0.00369 0.00799 ...
  ..@ factors : list()
> wij = buildWijMatrix(edges)
> str(wij)
Formal class 'dgCMatrix' [package "Matrix"] with 6 slots
  ..@ i       : int [1:65098192] 921 6379 20259 21237 21804 26710 36382 36384 42204 59873 ...
  ..@ p       : int [1:423094] 0 101 201 340 437 681 784 959 1338 1547 ...
  ..@ Dim     : int [1:2] 423093 423093
  ..@ Dimnames:List of 2
  .. ..$ : NULL
  .. ..$ : NULL
  ..@ x       : num [1:65098192] 3.24e-251 5.26e-12 2.34e-193 2.85e-02 1.79e-02 ...
  ..@ factors : list()
> wij = buildWijMatrix(edges, perplexity = 5)
> str(wij)
Formal class 'dgCMatrix' [package "Matrix"] with 6 slots
  ..@ i       : int [1:52643056] 6379 21237 26710 42204 59873 60531 77421 84110 90655 93027 ...
  ..@ p       : int [1:423094] 0 47 140 249 335 516 573 746 1093 1293 ...
  ..@ Dim     : int [1:2] 423093 423093
  ..@ Dimnames:List of 2
  .. ..$ : NULL
  .. ..$ : NULL
  ..@ x       : num [1:52643056] 1.23e-267 2.04e-03 4.21e-17 2.33e-01 2.98e-13 ...
  ..@ factors : list()

Can you try from a clean install?

P.S.: Is there a reason you're setting perplexity to 5?

elbamos commented 7 years ago

Actually, I take that slightly back -- some of the numbers in the wij matrix are so small that, if you're using a 32-bit build of R (not Armadillo, but R itself), they might underflow. I don't know that this would cause the error you're reporting, however.
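[Editor's note: as an illustration of the underflow concern (a sketch; whether this is the path referenceWij actually hits is an assumption), values of the magnitude seen in the wij transcript above are representable as IEEE doubles, but products of two of them are not, regardless of the build:]

```r
# Sketch: a wij value of the magnitude seen in the transcript is representable,
# but its square falls below the smallest denormal double (~4.9e-324)
# and underflows to exactly zero.
x <- 3.24e-251
x > 0        # TRUE: representable as a double
x * x == 0   # TRUE: the product underflows to zero
```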

elbamos commented 7 years ago

Can you confirm this is working so I can close the Issue?

avanwouwe commented 7 years ago

Hi,

I installed R + RStudio + Rtools on the machine of a colleague who didn't have anything installed before, compiled a 64-bit version from master, and ran the same script.

I'm afraid it is the same result: the 32-bit version from CRAN works; the 64-bit version from GitHub settles down after a few minutes at 15% CPU usage and never returns.

His machine has 16 GB, just like mine. Where do you get the 8 GB peak from? In the Windows task manager I can see that the R process needs 12 GB at the beginning. How much memory does your machine have? Normally the size of the data set should not matter, as I have processed a set with the same number of rows and twice the number of dimensions.

Unfortunately I need the 64-bit build, since "buildWijMatrix" otherwise breaks with the "error: SpMat::init(): requested size is too large" error.

Any ideas? Is there a quick way to get debug statements, so that I can see what is going on?


elbamos commented 7 years ago

Try installing it without OpenMP.

Since this works on OS X and Linux, my ability to help diagnose what's either an issue in your configuration or a Windows OS/compilation error is limited.


elbamos commented 7 years ago

@avanwouwe You can try the version currently in the /develop branch here. The multithreading in the neighbor search was refactored, so that might help you.

avanwouwe commented 7 years ago

Just recompiled and retried; no dice, I'm afraid.

If there is a simple way to add debugging statements, let me know and I can have a look at what is going on. If not, I'll see if I can manually add some debugging statements myself, though while I have done C++ before, I have not done so within an R package.


elbamos commented 7 years ago

Debugging statements would have to be manual I'm afraid. Let me know if you find anything out.


elbamos commented 7 years ago

@avanwouwe I'm going to close this for now. The threading in the neighbor search has been almost completely refactored in the develop branch, so if you're still getting thread-lock issues, I think it may have to do with your OpenMP library. If you're able to find anything out or make any progress, please let me know, and please feel free to reopen.