keras-team / keras-applications

Reference implementations of popular deep learning models.

About power consumption of VGG keras deep learning models #130

Open csitaula opened 4 years ago

csitaula commented 4 years ago

I am wondering how to calculate the power consumption (watts) and memory (MB) of a deep learning model designed in Keras. Can anybody help me with this? Thanks in advance.
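
For the memory side, a rough lower bound is the storage for the weights alone: parameter count times bytes per parameter. A minimal sketch, assuming the tf.keras API and float32 weights, and ignoring activations, gradients, and optimizer state:

from tensorflow.keras.applications import VGG16

# Lower bound on model memory: number of weights x bytes per weight.
# Activations and optimizer state add to this at train time.
model = VGG16(weights=None)
size_mb = model.count_params() * 4 / 1024**2  # float32 = 4 bytes
print("~%.0f MB of weights" % size_mb)  # ~528 MB for VGG16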

PhilipMay commented 4 years ago

Both should be possible with nvidia-smi:

$ nvidia-smi
Mon Aug  5 08:33:31 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.67       Driver Version: 418.67       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 108...  On   | 00000000:65:00.0 Off |                  N/A |
| 48%   80C    P2   231W / 250W |   5413MiB / 11177MiB |     94%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0     11610      C   ...e/phmay/miniconda3/envs/py36/bin/python  2701MiB |
|    0     11852      C   ...e/phmay/miniconda3/envs/py36/bin/python  2701MiB |
+-----------------------------------------------------------------------------+
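
If you want those two numbers in a machine-readable form instead of the full table, nvidia-smi can also be polled from Python. A sketch using its standard --query-gpu fields (power.draw in watts, memory.used in MiB):

import subprocess

def gpu_snapshot():
    # Ask nvidia-smi for a one-line CSV reading instead of the table above.
    out = subprocess.check_output([
        "nvidia-smi",
        "--query-gpu=timestamp,power.draw,memory.used",
        "--format=csv,noheader",
    ])
    return out.decode().strip()

print(gpu_snapshot())  # e.g. "2019/08/05 08:33:31.000, 231.00 W, 5413 MiB"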
csitaula commented 4 years ago

I want to calculate the cost (computation cost and memory) per batch of my data on VGG, so I don't think this approach works for me.
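
Per-batch readings could be sampled from inside the training loop instead of eyeballing nvidia-smi. A minimal sketch, assuming the pynvml NVML bindings (e.g. pip install nvidia-ml-py3) and tf.keras; GPUStatsLogger is a made-up name, and older standalone Keras would use on_batch_end instead of on_train_batch_end:

import pynvml
from tensorflow import keras

class GPUStatsLogger(keras.callbacks.Callback):
    # Hypothetical helper (not part of Keras): samples GPU power draw and
    # memory usage after every training batch via NVIDIA's NVML bindings.
    def __init__(self, gpu_index=0):
        super(GPUStatsLogger, self).__init__()
        pynvml.nvmlInit()
        self.handle = pynvml.nvmlDeviceGetHandleByIndex(gpu_index)
        self.samples = []

    def on_train_batch_end(self, batch, logs=None):
        power_w = pynvml.nvmlDeviceGetPowerUsage(self.handle) / 1000.0  # mW -> W
        mem_mb = pynvml.nvmlDeviceGetMemoryInfo(self.handle).used / 1024**2
        self.samples.append((batch, power_w, mem_mb))

# usage:
# model.fit(x_train, y_train, callbacks=[GPUStatsLogger()])

Multiplying the sampled power draw by the measured batch duration then gives an approximate energy cost per batch in joules (watt-seconds).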
