I set `--per_device_train_batch_size 4` and it worked. However, the memory usage varies a lot from GPU to GPU, especially on GPU 6:
```
Every 1.0s: nvidia-smi node-0: Tue Apr 4 18:19:26 2023
Tue Apr 4 18:19:27 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.85.02 Driver Version: 510.85.02 CUDA Version: 11.6 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla V100-SXM2... On | 00000001:00:00.0 Off | 0 |
| N/A 44C P0 76W / 300W | 17145MiB / 32768MiB | 36% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 Tesla V100-SXM2... On | 00000002:00:00.0 Off | 0 |
| N/A 47C P0 77W / 300W | 17566MiB / 32768MiB | 57% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 2 Tesla V100-SXM2... On | 00000003:00:00.0 Off | 0 |
| N/A 43C P0 71W / 300W | 12458MiB / 32768MiB | 83% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 3 Tesla V100-SXM2... On | 00000004:00:00.0 Off | 0 |
| N/A 44C P0 72W / 300W | 11226MiB / 32768MiB | 33% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 4 Tesla V100-SXM2... On | 00000005:00:00.0 Off | 0 |
| N/A 42C P0 76W / 300W | 13112MiB / 32768MiB | 32% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 5 Tesla V100-SXM2... On | 00000006:00:00.0 Off | 0 |
| N/A 46C P0 68W / 300W | 10724MiB / 32768MiB | 62% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 6 Tesla V100-SXM2... On | 00000007:00:00.0 Off | 0 |
| N/A 44C P0 75W / 300W | 26108MiB / 32768MiB | 30% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 7 Tesla V100-SXM2... On | 00000008:00:00.0 Off | 0 |
| N/A 47C P0 69W / 300W | 20388MiB / 32768MiB | 57% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
```
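As an aside, per-GPU memory can also be logged from inside the training process with plain PyTorch (a minimal sketch, not code from this repo):

```python
import torch

# Minimal sketch: print allocated vs. total memory for each visible GPU.
# Note: memory_allocated() only counts tensors owned by this process,
# so the numbers will be lower than what nvidia-smi reports.
for i in range(torch.cuda.device_count()):
    total_mib = torch.cuda.get_device_properties(i).total_memory / 2**20
    used_mib = torch.cuda.memory_allocated(i) / 2**20
    print(f"GPU {i}: {used_mib:.0f} MiB allocated / {total_mib:.0f} MiB total")
```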
The log tells us that fine-tuning will take much more than one hour:
```
{'loss': 1.3567, 'learning_rate': 0.0, 'epoch': 0.0}
{'loss': 1.6182, 'learning_rate': 0.0, 'epoch': 0.0}
{'loss': 1.5182, 'learning_rate': 0.0, 'epoch': 0.0}
{'loss': 1.6655, 'learning_rate': 0.0, 'epoch': 0.0}
{'loss': 1.5583, 'learning_rate': 0.0, 'epoch': 0.01}
{'loss': 1.4035, 'learning_rate': 0.0, 'epoch': 0.01}
{'loss': 1.5064, 'learning_rate': 0.0, 'epoch': 0.01}
{'loss': 1.5115, 'learning_rate': 0.0, 'epoch': 0.01}
{'loss': 1.6298, 'learning_rate': 0.0, 'epoch': 0.01}
{'loss': 1.539, 'learning_rate': 1e-05, 'epoch': 0.01}
{'loss': 1.4479, 'learning_rate': 1.9999997924406317e-05, 'epoch': 0.01}
{'loss': 1.2644, 'learning_rate': 1.9999981319662e-05, 'epoch': 0.01}
{'loss': 1.2153, 'learning_rate': 1.9999948110200944e-05, 'epoch': 0.02}
{'loss': 1.1531, 'learning_rate': 1.9999898296078282e-05, 'epoch': 0.02}
{'loss': 1.2623, 'learning_rate': 1.999983187737674e-05, 'epoch': 0.02}
{'loss': 1.1446, 'learning_rate': 1.99997488542066e-05, 'epoch': 0.02}
{'loss': 1.1687, 'learning_rate': 1.999964922670572e-05, 'epoch': 0.02}
{'loss': 1.1339, 'learning_rate': 1.9999532995039525e-05, 'epoch': 0.02}
{'loss': 1.2593, 'learning_rate': 1.999940015940102e-05, 'epoch': 0.02}
{'loss': 1.0747, 'learning_rate': 1.9999250720010775e-05, 'epoch': 0.02}
{'loss': 1.184, 'learning_rate': 1.9999084677116928e-05, 'epoch': 0.03}
1%|          | 42/4878 [22:52<44:47:46, 33.35s/it]
```
Is it correct?
Training one epoch takes about an hour, and the total time scales with the number of epochs. The Alpaca dataset contains 52,000 training samples, so with a per-device batch size of 64 on 8 GPUs the effective batch size is 64 × 8 = 512, which gives roughly 52,000 / 512 ≈ 100 steps per epoch. Based on the nvidia-smi status you provided, the time taken to run 42 batches is consistent with our expectations.
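The same arithmetic as a quick Python check (values as stated above):

```python
samples = 52_000           # Alpaca training set size
effective_batch = 64 * 8   # per_device_train_batch_size * number of GPUs
print(samples / effective_batch)   # ~101.6, i.e. roughly 100 steps per epoch
```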
In fact, it hits OOM when I set batch size = 64. 😭😭😭 It works when I set batch size = 4, and it will take about 45 h to train 3 epochs.
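That estimate lines up with the progress bar above (4878 total steps at 33.35 s/it):

```python
samples, num_gpus, per_device_bs, epochs = 52_000, 8, 4, 3
total_steps = epochs * samples // (per_device_bs * num_gpus)  # 4875, vs. 4878 reported
print(total_steps * 33.35 / 3600)  # ~45.2 hours, matching the ~45 h estimate
```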
I fine-tune llama-7b on 8 V100 32G GPUs. However, it hits a CUDA out of memory error. The GPU status above was captured with `watch -n 1 nvidia-smi`.
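For anyone hitting the same OOM: a common pattern with the Hugging Face `Trainer` (a sketch with illustrative values, not the exact arguments used in this thread) is to lower `per_device_train_batch_size` and recover the effective batch with `gradient_accumulation_steps`:

```python
from transformers import TrainingArguments

# Illustrative values only. A per-device batch of 4 with 16 accumulation steps
# gives an effective batch of 4 * 16 * 8 GPUs = 512, the same as 64 * 8,
# without ever holding 64 samples per GPU in memory at once.
args = TrainingArguments(
    output_dir="./output",            # hypothetical path
    per_device_train_batch_size=4,
    gradient_accumulation_steps=16,
    num_train_epochs=3,
    learning_rate=2e-5,               # matches the peak LR in the log above
)
```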