deepmodeling / deepmd-kit

A deep learning package for many-body potential energy representation and molecular dynamics
https://docs.deepmodeling.com/projects/deepmd/
GNU Lesser General Public License v3.0

[BUG] VRAM is wasted when running Lammps with multiple GPUs #4171

Open · Entropy-Enthalpy opened this issue 3 weeks ago

Entropy-Enthalpy commented 3 weeks ago

Bug summary

I have been using DP for a long time, and I have encountered this issue in every version I have used: when running a LAMMPS MD simulation on multiple GPUs via mpirun, each MPI rank allocates VRAM on every GPU, even though each rank's computation actually runs on only one GPU.

For example, in the screenshot below, I requested 4 V100-SXM2-16GB GPUs for a single MD job and started 4 MPI ranks. In reality, each GPU has (4 - 1) × 0.3 GiB = 0.9 GiB of VRAM "wasted". For an 8-GPU job, this would "waste" (8 - 1) × 0.3 GiB = 2.1 GiB of VRAM per GPU. If MPS is used, the "wasted" VRAM is doubled.

[Screenshot of GPU memory usage for the 4-GPU job]

On the surface, this seems to happen because the TensorFlow gpu_device runtime performs a "create device" operation for every GPU in every MPI rank (as can be seen in the log below), but I don't know how to avoid it. Notably, TensorFlow cannot "see" GPUs on other nodes, so when a LAMMPS MD run spans multiple nodes with only one GPU per node, the issue does not occur.
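My untested guess is that each process would have to restrict TensorFlow's visible devices to its own GPU, e.g. via the visible_device_list field of the session's GPUOptions, but I don't know where this belongs in the DeePMD-kit code. A rough sketch of the idea (reading the local rank from OMPI_COMM_WORLD_LOCAL_RANK is just an assumption about an Open MPI launch):

```cpp
// Untested sketch: make only this rank's GPU visible to the TensorFlow
// session, so the other GPUs are never initialized by this process.
#include <cstdlib>
#include <string>
#include "tensorflow/core/public/session_options.h"

tensorflow::SessionOptions options_for_this_rank() {
  tensorflow::SessionOptions options;
  // Assumption: launched with Open MPI, which exports the local rank index.
  const char* local_rank = std::getenv("OMPI_COMM_WORLD_LOCAL_RANK");
  if (local_rank != nullptr) {
    // visible_device_list takes a comma-separated list of GPU indices;
    // here each process keeps only the GPU matching its local rank.
    options.config.mutable_gpu_options()->set_visible_device_list(local_rank);
  }
  return options;
}
```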

DeePMD-kit Version

3.0.0b4

Backend and its version

TensorFlow v2.15.2; LAMMPS 29Aug2024

How did you download the software?

Offline packages

Input Files, Running Commands, Error Log, etc.

Running command: `mpirun -np 4 lmp_mpi -in input.lammps`

Part of Log:

...
2024-10-01 03:13:12.619343: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1886] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 14529 MB memory:  -> device: 0, name: Tesla V100-SXM2-16GB, pci bus id: 0000:84:00.0, compute capability: 7.0
2024-10-01 03:13:12.620016: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1886] Created device /job:localhost/replica:0/task:0/device:GPU:1 with 14529 MB memory:  -> device: 1, name: Tesla V100-SXM2-16GB, pci bus id: 0000:85:00.0, compute capability: 7.0
2024-10-01 03:13:12.620570: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1886] Created device /job:localhost/replica:0/task:0/device:GPU:2 with 14529 MB memory:  -> device: 2, name: Tesla V100-SXM2-16GB, pci bus id: 0000:c4:00.0, compute capability: 7.0
2024-10-01 03:13:12.621108: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1886] Created device /job:localhost/replica:0/task:0/device:GPU:3 with 14529 MB memory:  -> device: 3, name: Tesla V100-SXM2-16GB, pci bus id: 0000:c5:00.0, compute capability: 7.0
2024-10-01 03:13:12.640945: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1886] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 14529 MB memory:  -> device: 0, name: Tesla V100-SXM2-16GB, pci bus id: 0000:84:00.0, compute capability: 7.0
2024-10-01 03:13:12.641605: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1886] Created device /job:localhost/replica:0/task:0/device:GPU:1 with 14529 MB memory:  -> device: 1, name: Tesla V100-SXM2-16GB, pci bus id: 0000:85:00.0, compute capability: 7.0
2024-10-01 03:13:12.642124: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1886] Created device /job:localhost/replica:0/task:0/device:GPU:2 with 14529 MB memory:  -> device: 2, name: Tesla V100-SXM2-16GB, pci bus id: 0000:c4:00.0, compute capability: 7.0
2024-10-01 03:13:12.642635: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1886] Created device /job:localhost/replica:0/task:0/device:GPU:3 with 14529 MB memory:  -> device: 3, name: Tesla V100-SXM2-16GB, pci bus id: 0000:c5:00.0, compute capability: 7.0
2024-10-01 03:13:12.659556: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1886] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 14529 MB memory:  -> device: 0, name: Tesla V100-SXM2-16GB, pci bus id: 0000:84:00.0, compute capability: 7.0
2024-10-01 03:13:12.660457: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1886] Created device /job:localhost/replica:0/task:0/device:GPU:1 with 14529 MB memory:  -> device: 1, name: Tesla V100-SXM2-16GB, pci bus id: 0000:85:00.0, compute capability: 7.0
2024-10-01 03:13:12.661253: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1886] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 14529 MB memory:  -> device: 0, name: Tesla V100-SXM2-16GB, pci bus id: 0000:84:00.0, compute capability: 7.0
2024-10-01 03:13:12.661270: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1886] Created device /job:localhost/replica:0/task:0/device:GPU:2 with 14529 MB memory:  -> device: 2, name: Tesla V100-SXM2-16GB, pci bus id: 0000:c4:00.0, compute capability: 7.0
2024-10-01 03:13:12.662060: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1886] Created device /job:localhost/replica:0/task:0/device:GPU:1 with 14529 MB memory:  -> device: 1, name: Tesla V100-SXM2-16GB, pci bus id: 0000:85:00.0, compute capability: 7.0
2024-10-01 03:13:12.662095: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1886] Created device /job:localhost/replica:0/task:0/device:GPU:3 with 14529 MB memory:  -> device: 3, name: Tesla V100-SXM2-16GB, pci bus id: 0000:c5:00.0, compute capability: 7.0
2024-10-01 03:13:12.662639: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1886] Created device /job:localhost/replica:0/task:0/device:GPU:2 with 14529 MB memory:  -> device: 2, name: Tesla V100-SXM2-16GB, pci bus id: 0000:c4:00.0, compute capability: 7.0
2024-10-01 03:13:12.663289: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1886] Created device /job:localhost/replica:0/task:0/device:GPU:3 with 14529 MB memory:  -> device: 3, name: Tesla V100-SXM2-16GB, pci bus id: 0000:c5:00.0, compute capability: 7.0
...

Steps to Reproduce

N/A

Further Information, Files, and Links

No response

Entropy-Enthalpy commented 1 week ago

I found a similar issue with the PyTorch backend, but there only GPU 0's VRAM was "wasted".

For an 8-GPU job, it looks like this:

[Screenshot of GPU memory usage for the 8-GPU job]

DeePMD-kit Version

source:             v3.0.0b4-17-g8174cf11
source branch:      devel
source commit:      8174cf11
source commit at:   2024-10-11 03:20:55 +0000

LAMMPS version

LAMMPS 29Aug2024 update1

Backend stack

PyTorch 2.4.1; cuDNN 9.3.0; NVHPC 24.5 (nompi); OpenMPI 5.0.5 (CUDA-aware); UCX 1.17.0 (CUDA + GDRCopy)

njzjz commented 1 week ago

For PyTorch, I guess c10::cuda::set_device should work. This API is not documented, though.

related discussion: https://discuss.pytorch.org/t/cuda-extension-with-multiple-gpus/160053/6
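An untested sketch of how such a call could look (the helper name and the gpu_rank parameter are placeholders, not the actual DeepPotPT code):

```cpp
// Untested sketch: select this rank's GPU before libtorch allocates anything,
// so a CUDA context is not created on GPU 0 by every process.
#include <c10/cuda/CUDAFunctions.h>
#include <torch/torch.h>

// gpu_rank: the local MPI rank of this process (placeholder name).
void select_device_for_rank(int gpu_rank) {
  if (!torch::cuda::is_available()) {
    return;  // CPU-only build or no visible GPU.
  }
  const int num_gpus = static_cast<int>(torch::cuda::device_count());
  // Map the rank onto one of the visible GPUs and make it the current device.
  c10::cuda::set_device(static_cast<c10::DeviceIndex>(gpu_rank % num_gpus));
}
```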

Entropy-Enthalpy commented 1 week ago

> For PyTorch, I guess c10::cuda::set_device should work. This API is not documented, though.
>
> related discussion: https://discuss.pytorch.org/t/cuda-extension-with-multiple-gpus/160053/6

As a user, I just know that source/api_cc/src/DeepPotPT.cc might need to be modified, but I don't know how... 🥺