tum-pbs / DMCF

Guaranteed Conservation of Momentum for Learning Particle-based Fluid Dynamics (NeurIPS '22)
MIT License

Required memory for large numbers of particles #19

Open iSach opened 1 week ago

iSach commented 1 week ago

Hello,

We are trying to replicate the results of DMCF on large datasets with 100k to 1M particles. When running inference with 100k particles on a GPU with 24 GB of VRAM, the provided code quickly runs out of memory, even though our scene is 2D while the figure in the README (and paper) shows a 3D scene.

Did you use a larger GPU (e.g., an 80GB A100), or did you tweak specific parts of the code? We would like to know how to reach such high particle counts on standard hardware.

Thanks a lot for your help, Sacha Lewin

Prantl commented 4 days ago

Hello,

As far as I can remember, we didn't use more than 24GB of GPU memory, and I don't recall any particular code changes. The problem could be due to the density of the points: if too many points are sampled within a neighborhood, memory issues can arise, but that's just a guess. You could probably reduce the memory requirement by dividing the scene into subsets and running the computation only on the smaller subsets; it is important to reduce only the sample points, not the data points. Unfortunately, I'm currently traveling until the end of next week, so I have to rely on my memory.
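To illustrate the idea, here is a minimal sketch of such a chunked evaluation. The names (`model`, `data_pos`, `data_vel`, `query_pos`, `chunk_size`) are hypothetical placeholders rather than the actual DMCF API; the point is only that each chunk of sample points is evaluated against the full particle set, so neighborhoods stay complete while peak memory scales with the chunk size.

```python
import numpy as np

def chunked_inference(model, fluid_pos, fluid_vel, chunk_size=16384):
    """Evaluate the network on subsets of sample (query) points.

    The full particle set is always passed as data points so that every
    neighborhood stays complete; only the points we request outputs for
    are split into chunks. `model` is a placeholder for the trained
    network, not the actual DMCF interface.
    """
    outputs = []
    for start in range(0, len(fluid_pos), chunk_size):
        end = start + chunk_size
        # Query only this chunk of sample points, but keep all
        # particles available as data points (neighbors).
        out = model(
            data_pos=fluid_pos,
            data_vel=fluid_vel,
            query_pos=fluid_pos[start:end],
        )
        outputs.append(out)
    # Reassemble the per-chunk predictions in the original particle order.
    return np.concatenate(outputs, axis=0)
```

With a smaller `chunk_size`, peak memory drops roughly in proportion, at the cost of more forward passes per step.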

Cheers, Lukas Prantl