qdread opened this issue 4 years ago
This is especially the case for the multivariate (hypervolume) calculations, which are extremely slow.
I just noticed a major reason why this might be slow: the code recomputes density functions redundantly. For each pairwise comparison, the density function for each member of the pair is calculated, which means a given species' density function is recalculated every time that species appears in a pair. The exact same computation is executed over and over, and the waste grows quickly when there are many species. In principle the fix isn't hard (compute each species' density once and reuse it), but the code is structured in a way that means a fair amount will have to be rewritten.
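A minimal sketch of the fix described above, in Python for illustration (the actual package code, function names, and density estimator are not shown here): estimate each species' density once up front, then look it up inside the pair loop instead of re-fitting it for every pair.

```python
# Illustrative sketch only -- density_estimate is a cheap stand-in for
# whatever expensive kernel density / hypervolume fit the real code does.
from itertools import combinations
import statistics

def density_estimate(values):
    # Placeholder for an expensive per-species fit; returns (mean, stdev).
    return (statistics.mean(values), statistics.stdev(values))

def pairwise_overlaps(species_data):
    # Key change: one expensive fit per species, not one per pair membership.
    densities = {name: density_estimate(vals)
                 for name, vals in species_data.items()}
    results = {}
    for a, b in combinations(species_data, 2):
        da, db = densities[a], densities[b]
        # Toy "overlap" statistic (absolute difference of means) standing in
        # for the real pairwise overlap computation.
        results[(a, b)] = abs(da[0] - db[0])
    return results
```

With S species, this does S density fits instead of S*(S-1), which is where most of the redundant work goes away.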
One additional thing to do is to add a sensible default for num.points.max in the hypervolume overlap function, while still letting the user specify their own value.
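A hypothetical sketch of that interface change, in Python for illustration (num.points.max is the R argument name; num_points_max, the default of 10000, and the function body here are all made up for the example): give the argument a sensible default so callers can omit it, but still accept an override.

```python
# Hypothetical signature sketch, not the package's actual function.
def hypervolume_overlap(hv1, hv2, num_points_max=10000):
    # Cap the number of points used in the overlap calculation at a sane
    # default, but let the caller raise or lower it explicitly.
    n = min(num_points_max, len(hv1), len(hv2))
    return n  # placeholder: the real function would compute overlap on n points
```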
Issue raised by Pat Bills, but it is a generally important one. We should profile the code to see where the bottleneck(s) are and try to speed it up; it is very slow even for small, not very complex datasets.
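A sketch of the profiling step, in Python for illustration (for the R code itself, Rprof() or the profvis package would be the analogous tools; slow_pairwise here is a made-up stand-in for the slow function): run the suspect function under a profiler and look at where the cumulative time goes.

```python
# Illustrative profiling sketch using Python's standard-library cProfile.
import cProfile
import io
import pstats

def slow_pairwise(n):
    # Dummy O(n^2) workload standing in for the real pairwise computation.
    total = 0
    for i in range(n):
        for j in range(n):
            total += (i * j) % 7
    return total

profiler = cProfile.Profile()
profiler.enable()
slow_pairwise(200)
profiler.disable()

# Print the five most expensive calls by cumulative time.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
print(report)
```

The report shows per-function call counts, which would make the repeated density computation described above stand out immediately (one entry with far more calls than the number of species).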