Closed jcp31 closed 2 years ago
Thanks! Just to double check, were you able to run it with batch size 32 on the GPU? Ah, no worries, I just saw your GPU has 16 GB, so that was probably fine. There was another person with an RTX 3060 that only had 6 GB of RAM, which is why I was asking.
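(Not from the thread itself, but a back-of-the-envelope sketch of why batch size 32 can be tight on a 6 GB card while 16 GB is comfortable; the parameter count is the standard torchvision VGG-16 figure.)

```python
# Rough memory estimate for training VGG-16 in fp32.
vgg16_params = 138_357_544   # standard VGG-16 parameter count (torchvision)
bytes_per_float = 4          # fp32

weights_gb = vgg16_params * bytes_per_float / 1024**3

# Weights + gradients + SGD momentum buffer is roughly 3x the weight
# memory, and that is before activations, which dominate at batch size 32.
print(f"weights alone: {weights_gb:.2f} GB")
print(f"x3 for training state: {3 * weights_gb:.2f} GB")
```

With activations and CUDA overhead on top of the ~1.5 GB of training state, a 6 GB card leaves little headroom, which is presumably why the question came up.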
I didn't change anything in the original GitHub source code, so I suppose the answer is yes.
Regards
Jean-Charles
On Sun, May 22, 2022, 16:02, Sebastian Raschka @.***> wrote:
Thanks! Just to double check, were you able to run it with batch size 32 on the GPU?
Hah, yes, no worries! I am currently updating the results and will add yours! Thanks again!
Just updated and included it in the results at https://sebastianraschka.com/blog/2022/pytorch-m1-gpu.html
I ran your vgg16-cifar10 script and got the following results (2 runs).
My machine is an ASUS ROG Zephyrus Duo 15 SE (laptop):
First run: performance mode
torch 1.11.0+cu113
device cuda
Files already downloaded and verified
Using cache found in C:\Users\jcpou/.cache\torch\hub\pytorch_vision_v0.11.0
Epoch: 001/001 | Batch 0000/1406 | Loss: 2.3515
Epoch: 001/001 | Batch 0100/1406 | Loss: 2.2140
Epoch: 001/001 | Batch 0200/1406 | Loss: 2.3048
Epoch: 001/001 | Batch 0300/1406 | Loss: 2.2738
Epoch: 001/001 | Batch 0400/1406 | Loss: 2.3186
Epoch: 001/001 | Batch 0500/1406 | Loss: 1.8402
Epoch: 001/001 | Batch 0600/1406 | Loss: 1.8930
Epoch: 001/001 | Batch 0700/1406 | Loss: 2.4219
Epoch: 001/001 | Batch 0800/1406 | Loss: 1.8200
Epoch: 001/001 | Batch 0900/1406 | Loss: 1.8298
Epoch: 001/001 | Batch 1000/1406 | Loss: 1.8207
Epoch: 001/001 | Batch 1100/1406 | Loss: 1.8437
Epoch: 001/001 | Batch 1200/1406 | Loss: 1.4916
Epoch: 001/001 | Batch 1300/1406 | Loss: 1.6030
Epoch: 001/001 | Batch 1400/1406 | Loss: 1.9421
Time / epoch without evaluation: 7.04 min
Epoch: 001/001 | Train: 38.71% | Validation: 38.70% | Best Validation (Ep. 001): 38.70%
Time elapsed: 9.70 min
Total Training Time: 9.70 min
Test accuracy 39.42%
Total Time: 10.25 min
Second run: turbo mode
torch 1.11.0+cu113
device cuda
Files already downloaded and verified
Using cache found in C:\Users\jcpou/.cache\torch\hub\pytorch_vision_v0.11.0
Epoch: 001/001 | Batch 0000/1406 | Loss: 2.4374
Epoch: 001/001 | Batch 0100/1406 | Loss: 2.2763
Epoch: 001/001 | Batch 0200/1406 | Loss: 2.1181
Epoch: 001/001 | Batch 0300/1406 | Loss: 2.1129
Epoch: 001/001 | Batch 0400/1406 | Loss: 2.0286
Epoch: 001/001 | Batch 0500/1406 | Loss: 2.2182
Epoch: 001/001 | Batch 0600/1406 | Loss: 1.8516
Epoch: 001/001 | Batch 0700/1406 | Loss: 2.0691
Epoch: 001/001 | Batch 0800/1406 | Loss: 1.8235
Epoch: 001/001 | Batch 0900/1406 | Loss: 1.9100
Epoch: 001/001 | Batch 1000/1406 | Loss: 1.7260
Epoch: 001/001 | Batch 1100/1406 | Loss: 1.9619
Epoch: 001/001 | Batch 1200/1406 | Loss: 1.7266
Epoch: 001/001 | Batch 1300/1406 | Loss: 1.8404
Epoch: 001/001 | Batch 1400/1406 | Loss: 2.2174
Time / epoch without evaluation: 6.66 min
Epoch: 001/001 | Train: 34.86% | Validation: 35.98% | Best Validation (Ep. 001): 35.98%
Time elapsed: 9.19 min
Total Training Time: 9.19 min
Test accuracy 35.99%
Total Time: 9.71 min
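(A small sketch, not part of the benchmark script: the logged numbers can be turned into a throughput figure for comparing the two power modes. The helper below is hypothetical; it just divides images processed by epoch time, using the 1406 batches of 32 and the "Time / epoch without evaluation" values from the two runs.)

```python
def images_per_second(num_batches: int, batch_size: int, epoch_minutes: float) -> float:
    """Rough training throughput from logged batch counts and epoch time."""
    return num_batches * batch_size / (epoch_minutes * 60)

# Run 1 (performance mode): 1406 batches of 32 in 7.04 min
print(round(images_per_second(1406, 32, 7.04), 1))  # ~106.5 img/s
# Run 2 (turbo mode): same workload in 6.66 min
print(round(images_per_second(1406, 32, 6.66), 1))  # ~112.6 img/s
```

So turbo mode gives roughly a 6% throughput improvement over performance mode on this laptop.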