aiqm / torchani

Accurate Neural Network Potential on PyTorch
https://aiqm.github.io/torchani/
MIT License

cuaev benchmark file #564

Closed: yueyericardo closed this pull request 3 years ago

lgtm-com[bot] commented 3 years ago

This pull request introduces 1 alert when merging 32910c45a50d00cc780549b1533ac6267f5a85f6 into 23c9816c5d6490ac4e0fe80b98bf78b5be10cef5 - view on LGTM.com

new alerts:

yueyericardo commented 3 years ago

Current result on TITAN V:

python tools/aev-benchmark-size.py

File: small.pdb, Molecule size: 264

/home/richard/dev/torchani_cuaev/torchani/resources/
Downloading ANI model parameters ...
Original TorchANI:
  Duration: 1.26 s
  Speed: 2.53 ms/it

CUaev:
  Duration: 0.25 s
  Speed: 0.51 ms/it
  Speed up: 4.97 X

----------------------------------------------------------------------

File: 1hz5.pdb, Molecule size: 973

/home/richard/dev/torchani_cuaev/torchani/resources/
Original TorchANI:
  Duration: 1.21 s
  Speed: 2.43 ms/it

CUaev:
  Duration: 2.42 s
  Speed: 4.83 ms/it
  Speed up (slower): 0.50 X

----------------------------------------------------------------------

File: 6W8H.pdb, Molecule size: 3410

/home/richard/dev/torchani_cuaev/torchani/resources/
Original TorchANI:
  Duration: 1.86 s
  Speed: 3.72 ms/it

CUaev:
  Duration: 27.69 s
  Speed: 55.38 ms/it
  Speed up (slower): 0.07 X
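The "Speed" and "Speed up" figures above follow directly from the measured durations: ms/it is total duration divided by iteration count, and speedup is the baseline's per-iteration time over the candidate's. The sketch below shows a minimal timing harness of that shape; it is a hypothetical illustration, not the actual tools/aev-benchmark-size.py, and the function names are invented.

```python
import time

def benchmark(fn, n_iter=500, warmup=10):
    """Time fn over n_iter iterations; return (total seconds, ms per iteration).

    Hypothetical harness: on a GPU workload you would call
    torch.cuda.synchronize() before each perf_counter() read so that
    queued kernels are actually finished when the clock is sampled.
    """
    for _ in range(warmup):
        fn()  # warm caches / JIT before timing
    start = time.perf_counter()
    for _ in range(n_iter):
        fn()
    duration = time.perf_counter() - start
    return duration, duration / n_iter * 1000.0

def speedup(baseline_ms_per_it, candidate_ms_per_it):
    """The 'Speed up: X' ratio printed above (values below 1.0 mean slower)."""
    return baseline_ms_per_it / candidate_ms_per_it
```

For example, 2.53 ms/it for the original AEV against 0.51 ms/it for cuaev gives a ratio just under 5X, matching the small.pdb result; the same formula yields the "(slower)" ratios below 1.0 for the larger molecules.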

python tools/training-aev-benchmark.py /home/richard/dev/ANI-1x-wb97xdz.h5

=> loading dataset...
=> loading /home/richard/dev/ANI-1x-wb97xdz.h5, total molecules: 3114
3114/3114  [==============================] - 73.4s
=> Caching shuffled dataset...
=> loading /home/richard/dev/ANI-1x-wb97xdz.h5, total molecules: 3114
3114/3114  [==============================] - 66.5s
=> CUDA info:
Total devices: 1
0: TITAN V
   _CudaDeviceProperties(name='TITAN V', major=7, minor=0, total_memory=12036MB, multi_processor_count=80)
   GPU Memory Cached (pytorch) :     0.0MB / 12036.7MB (TITAN V)
   GPU Memory Used (nvidia-smi):    11.9MB / 12036.7MB (TITAN V)

=> Test 1: USE cuda extension, Energy training
=> start training
Epoch: 1/1
1935/1935 [========] - 32s 16ms/step - rmse: 55.5933
   GPU Memory Cached (pytorch) :   826.0MB / 12036.7MB (TITAN V)
   GPU Memory Used (nvidia-smi):  2088.9MB / 12036.7MB (TITAN V)
=> More detail about benchmark PER EPOCH
   Total AEV - 3.892 sec
   Forward - 7.081 sec
   Backward - 8.709 sec
   Force - 0.0 ms
   Optimizer - 8.386 sec
   Others - 3.525 sec
   Epoch time - 31.593 sec

=> Test 2: NO cuda extension, Energy training
=> start training
Epoch: 1/1
1935/1935 [========] - 50s 26ms/step - rmse: 61.1153
   GPU Memory Cached (pytorch) :  2278.0MB / 12036.7MB (TITAN V)
   GPU Memory Used (nvidia-smi):  3540.9MB / 12036.7MB (TITAN V)
=> More detail about benchmark PER EPOCH
   Total AEV - 23.749 sec
   Forward - 6.979 sec
   Backward - 8.809 sec
   Force - 0.0 ms
   Optimizer - 8.274 sec
   Others - 2.393 sec
   Epoch time - 50.204 sec

=> Test 3: USE cuda extension, Force and Energy inference
=> start training
Epoch: 1/1
1935/1935 [========] - 43s 22ms/step - rmse: 1316.6297
   GPU Memory Cached (pytorch) :  2972.0MB / 12036.7MB (TITAN V)
   GPU Memory Used (nvidia-smi):  4234.9MB / 12036.7MB (TITAN V)
=> More detail about benchmark PER EPOCH
   Total AEV - 3.926 sec
   Forward - 7.434 sec
   Backward - 0.0 ms
   Force - 22.671 sec
   Optimizer - 0.0 ms
   Others - 8.975 sec
   Epoch time - 43.006 sec

=> Test 4: NO cuda extension, Force and Energy inference
=> start training
Epoch: 1/1
1935/1935 [========] - 88s 46ms/step - rmse: 21.6047
   GPU Memory Cached (pytorch) : 10054.0MB / 12036.7MB (TITAN V)
   GPU Memory Used (nvidia-smi): 11316.9MB / 12036.7MB (TITAN V)
=> More detail about benchmark PER EPOCH
   Total AEV - 25.259 sec
   Forward - 7.305 sec
   Backward - 0.0 ms
   Force - 48.022 sec
   Optimizer - 0.0 ms
   Others - 7.652 sec
   Epoch time - 88.239 sec
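The PER EPOCH breakdown above (Total AEV / Forward / Backward / Force / Optimizer / Others) amounts to accumulating wall time per phase across all steps of an epoch, with "Others" as the remainder of the epoch time not covered by any timed phase. A minimal sketch of such an accumulator follows; the class and method names are hypothetical, not taken from tools/training-aev-benchmark.py.

```python
import time
from collections import defaultdict
from contextlib import contextmanager

class PhaseTimer:
    """Accumulate wall time per training phase across an epoch (hypothetical helper)."""

    def __init__(self):
        self.totals = defaultdict(float)

    @contextmanager
    def phase(self, name):
        # Wrap one phase of a step, e.g. `with timer.phase("Total AEV"): ...`
        start = time.perf_counter()
        try:
            yield
        finally:
            self.totals[name] += time.perf_counter() - start

    def report(self, epoch_time):
        # "Others" is whatever part of the epoch no phase accounted for.
        others = epoch_time - sum(self.totals.values())
        for name, total in self.totals.items():
            print(f"   {name} - {total:.3f} sec")
        print(f"   Others - {others:.3f} sec")
        print(f"   Epoch time - {epoch_time:.3f} sec")
```

Phases that never run in a given test (Backward during inference, Force during energy-only training) simply accumulate nothing, which is why they show up as 0.0 ms in the logs above.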
lgtm-com[bot] commented 3 years ago

This pull request introduces 1 alert when merging aa8bc0d43e84ee3ababe6916a9df176602261c64 into 23c9816c5d6490ac4e0fe80b98bf78b5be10cef5 - view on LGTM.com

new alerts:

lgtm-com[bot] commented 3 years ago

This pull request introduces 1 alert when merging 685cbe181c1da7934a6bef8750fe06b509cee16c into 23c9816c5d6490ac4e0fe80b98bf78b5be10cef5 - view on LGTM.com

new alerts: