openmm / NNPOps

High-performance operations for neural network potentials

Neighbors for >128K particles #101

Open RaulPPelaez opened 1 year ago

RaulPPelaez commented 1 year ago

The index here overflows with 128K particles (a thread is launched per possible pair, N*(N-1)/2) https://github.com/openmm/NNPOps/blob/054d487d9fec8f98a111bc30ec6d3ec1ce423356/src/pytorch/neighbors/getNeighborPairsCUDA.cu#L43-L46

I know it sounds like an absurd regime for such a brute-force approach, but it turns out this kernel remains a competitive option in some situations, depending for instance on the average number of neighbors per particle.

I believe there should at least be a TORCH_CHECK guarding against this.

Switching to int64_t virtually eliminates this problem, but it carries a big performance penalty, and the maximum number of blocks a CUDA kernel can be launched with becomes a problem soon after.

OTOH, the way the row index is computed produces incorrect pairs due to floating-point error at ~100k particles, even when switching to int64 and double.

I suspect this kernel will be defeated by this other approach at some point: https://developer.nvidia.com/gpugems/gpugems3/part-v-physics-simulation/chapter-31-fast-n-body-simulation-cuda

That approach does not take much time to implement and does not suffer from the aforementioned issues, since it launches O(N) threads. Maybe it would be worth having both options and deciding on the algorithm at runtime based on some heuristic?

Additionally, this line: https://github.com/openmm/NNPOps/blob/054d487d9fec8f98a111bc30ec6d3ec1ce423356/src/pytorch/neighbors/getNeighborPairsCPU.cpp#L58 tries to allocate 8GB of memory when asked for 128k particles. This could be fixed using something like fancy iterators; however, I wonder if there is a reason the CPU implementation is written using torch operations exclusively.

peastman commented 1 year ago

This is one of many reasons you really don't want to use a thread per interaction. In https://github.com/torchmd/torchmd-net/pull/61#issuecomment-1108854574 I recommended using a smaller number of threads and having each thread loop over multiple interactions, but that didn't get implemented.