mhahsler / dbscan

Density Based Clustering of Applications with Noise (DBSCAN) and Related Algorithms - R package
GNU General Public License v3.0

Possible optimizations #17

Closed GearFear closed 2 months ago

GearFear commented 7 years ago

Hello,

I ran DBSCAN on my machine (OS: RedHat_7.2_x86_64, Memory: 504GB, CPUCount: 2, CoreCount: 48, HT: yes, CPU Model: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz).

I have a dataset with approx. 290k instances and 17 features, taken from https://www.kaggle.com/dalpozz/creditcardfraud. Some features are dropped from the dataset; I kept (V1, V2, V3, V4, V5, V6, V7, V9, V10, V11, V12, V14, V16, V17, V18, V19, V21). Your implementation is much faster than scikit-learn's, but it still takes several minutes to cluster this dataset. I'd like to run DBSCAN with different hyperparameters (eps and minPts) to choose optimal ones, but that takes hours. Could you add some optimizations, e.g. building the kd-tree in parallel? That would make it possible to use this algorithm on larger datasets.
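For reference, a rough sketch of the kind of sweep I have in mind (using iris as a stand-in for the actual data, with a placeholder eps/minPts grid):

library(dbscan)

X <- as.matrix(iris[, 1:4])   # stand-in for the actual credit card data

# heuristic for picking eps candidates: k-nearest-neighbor distance plot
kNNdistplot(X, k = 15)

grid <- expand.grid(eps = c(0.3, 0.5, 0.8), minPts = c(10, 15, 25))   # placeholder grid
grid$clusters <- NA
for (i in seq_len(nrow(grid))) {
  cl <- dbscan(X, eps = grid$eps[i], minPts = grid$minPts[i])
  grid$clusters[i] <- length(unique(cl$cluster[cl$cluster > 0]))   # clusters found (0 = noise)
}
grid

Each run rebuilds the index from scratch, which is why the full sweep takes so long.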

Thanks in advance!

mhahsler commented 7 years ago

Hi! Thanks for this request. Parallelizing the kd-tree building process is problematic since we use a fast but somewhat complicated library. Currently, the tree is rebuilt every time you cluster. A while ago, I looked into creating the kd-tree structure once and then performing clustering several times on the same tree. I did not get far with implementing this consistently (without memory leaks) in R. We plan to look into this more soon.

peekxc commented 7 years ago

I've added a preliminary branch that allows the use of a pre-built kd tree with dbscan. You could try using that, although it's worth noting there still seem to be some memory issues when I try to validate it, so it's worth saving your environment beforehand.

That being said, usage would look something like:

X_n <- iris[, 1:4]                                         # example data
xn_kdtree <- kd_tree(X_n)                                  # build the kd tree once
dbscan(X_n, eps = 0.8, minPts = 15, kd_tree = xn_kdtree)   # reuse the pre-built tree for clustering

You can also do summary(xn_kdtree), print(xn_kdtree), or str(xn_kdtree) to get more information on the actual structure of the tree.

It's worth noting that in my experience, building the tree is quite fast, and is often not the bottleneck of the search. It depends on the distribution of the data, so your mileage may vary (and I hope it does). I ran some benchmarks which seemed to indicate roughly a ~5-15% speedup in some cases.

The real bottleneck is the points that cause the traversal to repeatedly 'miss' the correct branch because they happen to lie close to a couple of the splitting hyperplanes. If the tree is particularly large, going back through the recursion stack gets pretty expensive.

You could try making a couple of trees using different splitting strategies, which could help shave off some additional time. For example, if you suspect the distribution of the data is quite 'clustered' (i.e., there are several very dense areas of the feature space surrounded by sparse, non-dense areas), the "FAIR" splitting rule might help, as it tries to make balanced partitions in the tree. A more direct way to optimize is to make sure that your bucket size isn't too small: it defaults to 10 if the user gives no input, which could be problematic for 290k instances (see the sketch below). What setting of minPts are you using?
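A minimal sketch of what I mean, assuming the released dbscan() interface that forwards search, splitRule and bucketSize to the ANN kd-tree search (the data and eps/minPts values are placeholders):

library(dbscan)

X <- as.matrix(iris[, 1:4])   # placeholder data

# try the "FAIR" splitting rule and a larger bucket size;
# both are forwarded to the kd-tree construction
dbscan(X, eps = 0.8, minPts = 15,
       search = "kdtree", splitRule = "FAIR", bucketSize = 50)

A larger bucket size means fewer internal nodes to traverse at the cost of more linear scanning within each leaf, so it's worth benchmarking a few values on your actual data.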

A parallel version of DBSCAN itself may come in the future, but that's perhaps too far off to be useful here.