arborx / ArborX

Performance-portable geometric search library
BSD 3-Clause "New" or "Revised" License

Improve performance for molecular dynamics simulations #359

Open aprokop opened 3 years ago

aprokop commented 3 years ago

Problem

Given a set of 3D points (particles) and a radius, for every particle find all particles within a sphere of the given radius.
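To fix terms, a brute-force restatement of the problem (illustrative only, not ArborX API; an O(N^2) baseline that tree- and grid-based methods accelerate):

```cpp
#include <array>
#include <cstddef>
#include <vector>

using Point = std::array<float, 3>;

// For every particle i, collect the indices of all other particles within
// distance r of it (fixed-radius neighbor search).
std::vector<std::vector<int>> neighborList(std::vector<Point> const &pts,
                                           float r)
{
  std::vector<std::vector<int>> lists(pts.size());
  float const r2 = r * r;
  for (std::size_t i = 0; i < pts.size(); ++i)
    for (std::size_t j = 0; j < pts.size(); ++j)
    {
      if (i == j)
        continue;
      float dx = pts[i][0] - pts[j][0];
      float dy = pts[i][1] - pts[j][1];
      float dz = pts[i][2] - pts[j][2];
      if (dx * dx + dy * dy + dz * dz <= r2)
        lists[i].push_back(static_cast<int>(j));
    }
  return lists;
}
```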

Of note:

Current status (CabanaMD)

Lagging behind the Verlet list for uniform particle distributions; a clear win for non-uniform ones

Things to try with the current ArborX implementation

Things to consider implementing in ArborX

Papers from HOOMD group:

[1] Howard, M. P., Anderson, J. A., Nikoubashman, A., Glotzer, S. C., & Panagiotopoulos, A. Z. (2016). Efficient neighbor list calculation for molecular simulation of colloidal systems using graphics processing units. Computer Physics Communications, 203, 45-52.

[2] Howard, M. P., Statt, A., Madutsa, F., Truskett, T. M., & Panagiotopoulos, A. Z. (2019). Quantized bounding volume hierarchies for neighbor search in molecular simulations on graphics processing units. Computational Materials Science, 164, 139-146.

The code is available here: https://github.com/mphowardlab/neighbor

Summary of [2] (as I understand it):

The main difference of [2] compared to [1] is quantization. The claim is that this is what allowed them to beat grid-based methods. Essentially, they snap box boundaries outward to the boundaries of Morton cells (2^10 in each direction).
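A minimal sketch of that snapping, as I read [2] (my reading, not their code; names are illustrative): each coordinate is mapped to a uniform 2^10 grid over the scene bounds, rounding minima down and maxima up so the quantized box always contains the original one.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

constexpr int BINS = 1 << 10; // 2^10 Morton cells per direction, as in [2]

// lo: scene lower bound along one axis, extent: scene size along that axis.
// Box minima round down, so they fit in 10 bits each.
std::uint32_t quantizeDown(float x, float lo, float extent)
{
  float t = (x - lo) / extent * BINS;
  int b = static_cast<int>(std::floor(t));
  return static_cast<std::uint32_t>(std::clamp(b, 0, BINS - 1));
}

// Box maxima round up, keeping the quantized box conservative.
std::uint32_t quantizeUp(float x, float lo, float extent)
{
  float t = (x - lo) / extent * BINS;
  int b = static_cast<int>(std::ceil(t));
  return static_cast<std::uint32_t>(std::clamp(b, 1, BINS));
}

// Grid line i maps back to lo + i * extent / BINS.
float dequantize(std::uint32_t b, float lo, float extent)
{
  return lo + static_cast<float>(b) * extent / BINS;
}
```

Rounding down/up (rather than to nearest) is what makes the compressed tree conservative: it can only produce false positives, never false negatives.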

aprokop commented 3 years ago

I think we can try the algorithms of [2] (or approximations of them) without too much trouble:

Update: rope traversal implemented in #364.
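For reference, the idea behind rope traversal (a hedged sketch of the general technique, not the actual #364 implementation; `Node` and the functions here are illustrative): each node stores the index of the node to visit next when its subtree is skipped (its "rope"), so traversal needs no stack.

```cpp
#include <vector>

struct Node
{
  float min[3], max[3];
  int left_child; // -1 for a leaf
  int rope;       // next node when this subtree is skipped; -1 ends traversal
  int object;     // leaf payload (unused for internal nodes)
};

template <typename Predicate, typename Callback>
void ropeTraverse(std::vector<Node> const &nodes, Predicate overlaps,
                  Callback onLeaf)
{
  int i = 0; // start at the root
  while (i >= 0)
  {
    Node const &n = nodes[i];
    if (!overlaps(n))
      i = n.rope;       // no overlap: skip the whole subtree via the rope
    else if (n.left_child < 0)
    {
      onLeaf(n.object); // leaf hit: report and move on
      i = n.rope;
    }
    else
      i = n.left_child; // descend; the right child is reached via ropes
  }
}
```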

aprokop commented 3 years ago

Quantization

In papers [1] and [2], they use the following node layout with (a) original and (b) quantized bounds (figure c/o [2]).

Here are some thoughts on quantization.

Can we live with false positives?

For MD, likely yes, as long as their number is comparatively small (in [2], they report ~5%). For other problems, like the halo finder, likely not. And we can't really postprocess/filter them out, as the information about the true boxes is lost after construction (unless we store the original full-precision boxes, which is a bad idea).

If we can't live with false positives, then what?

We would need separate internal and leaf node types. The internal nodes would be compressed, while the leaf nodes would not. This means a 25% memory reduction instead of 50%. I think we'd like to know the ratio of hit internal nodes to hit leaf nodes during traversal. It's at least 1:1, but may be better, and this 25% may go up.

Is quantization better than using low precision floating points?

~Not sure. It would be worth examining how it compares with an (unconventional) 10-bit floating-point number with a 6-bit mantissa and a 4-bit exponent, from both the error and the performance points of view.~ Independent of how they are constructed, 10 bits yield 2^10 distinct values. Thus, if the range is not partitioned uniformly, some bins will necessarily be larger than those of the uniform partition. Therefore, the original quantization seems preferable to minimize the worst-case error.
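Concretely (illustrative arithmetic, not from [2]): a float with a 6-bit mantissa has a gap of 2^(e-6) between consecutive values in the binade [2^e, 2^(e+1)), so near the top of a [0, 1] scene the gap is 2^-7, eight times wider than the constant 2^-10 bin of uniform 10-bit quantization.

```cpp
#include <cmath>

// Gap between consecutive representable values of a binary float with the
// given number of mantissa bits, at magnitude x (normal range, x > 0).
double floatGapAt(double x, int mantissa_bits)
{
  int e = static_cast<int>(std::floor(std::log2(x)));
  return std::ldexp(1.0, e - mantissa_bits);
}
```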

Is the information in the node sufficient to perform operations?

No. With quantized boundaries, computing intersections with user data requires first expanding the stored data using the scene bounds. Thus, the calls would need to pass this additional information. On the other hand, if the user's query is transformed to the quantized space prior to traversal, that is not necessary. But this approach results in false positives, so see the first point.
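A sketch of that "transform the query, not the nodes" option (my illustration, not ArborX API): the query sphere is converted to an AABB and snapped outward onto the same 2^10 grid as the tree, after which traversal compares integers only. The outward snap is exactly where the false positives come from.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

constexpr int BINS = 1 << 10;

struct QuantizedBox
{
  std::uint32_t lo[3], hi[3];
};

// Conservatively quantize a query sphere (center, radius) against the scene
// bounds: lower corners round down, upper corners round up.
QuantizedBox quantizeQuery(float const center[3], float radius,
                           float const scene_lo[3], float const scene_hi[3])
{
  QuantizedBox q;
  for (int d = 0; d < 3; ++d)
  {
    float extent = scene_hi[d] - scene_lo[d];
    float t_lo = (center[d] - radius - scene_lo[d]) / extent * BINS;
    float t_hi = (center[d] + radius - scene_lo[d]) / extent * BINS;
    q.lo[d] = static_cast<std::uint32_t>(
        std::clamp(static_cast<int>(std::floor(t_lo)), 0, BINS));
    q.hi[d] = static_cast<std::uint32_t>(
        std::clamp(static_cast<int>(std::ceil(t_hi)), 0, BINS));
  }
  return q;
}
```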