rasbt / machine-learning-notes

Collection of useful machine learning codes and snippets (originally intended for my personal use)
BSD 3-Clause "New" or "Revised" License

vgg16-cifar10 - RTX 3060 (Laptop) - Intel i7-10870H (Ubuntu20.04) #7

Closed sgerke-ST closed 2 years ago

sgerke-ST commented 2 years ago

Hi, I ran this on my Gigabyte Aero 15 with an RTX 3060 (6 GB VRAM) and an Intel i7-10870H. I had to set the batch size to 8 due to memory constraints on that GPU.

device cuda
Files already downloaded and verified
Using cache found in /home/gerke/.cache/torch/hub/pytorch_vision_v0.11.0
Epoch: 001/001 | Batch 0000/5625 | Loss: 2.9805
Epoch: 001/001 | Batch 0100/5625 | Loss: 2.4223
Epoch: 001/001 | Batch 0200/5625 | Loss: 2.2558
Epoch: 001/001 | Batch 0300/5625 | Loss: 2.3670
Epoch: 001/001 | Batch 0400/5625 | Loss: 2.4191
Epoch: 001/001 | Batch 0500/5625 | Loss: 2.2816
Epoch: 001/001 | Batch 0600/5625 | Loss: 2.4255
Epoch: 001/001 | Batch 0700/5625 | Loss: 2.2249
Epoch: 001/001 | Batch 0800/5625 | Loss: 2.3222
Epoch: 001/001 | Batch 0900/5625 | Loss: 2.2943
Epoch: 001/001 | Batch 1000/5625 | Loss: 2.2927
Epoch: 001/001 | Batch 1100/5625 | Loss: 2.3051
Epoch: 001/001 | Batch 1200/5625 | Loss: 2.0943
Epoch: 001/001 | Batch 1300/5625 | Loss: 2.3638
Epoch: 001/001 | Batch 1400/5625 | Loss: 2.2915
Epoch: 001/001 | Batch 1500/5625 | Loss: 2.3452
Epoch: 001/001 | Batch 1600/5625 | Loss: 2.3329
Epoch: 001/001 | Batch 1700/5625 | Loss: 2.3025
Epoch: 001/001 | Batch 1800/5625 | Loss: 2.2956
Epoch: 001/001 | Batch 1900/5625 | Loss: 2.2994
Epoch: 001/001 | Batch 2000/5625 | Loss: 2.2964
Epoch: 001/001 | Batch 2100/5625 | Loss: 2.3315
Epoch: 001/001 | Batch 2200/5625 | Loss: 2.3060
Epoch: 001/001 | Batch 2300/5625 | Loss: 2.3055
Epoch: 001/001 | Batch 2400/5625 | Loss: 2.2876
Epoch: 001/001 | Batch 2500/5625 | Loss: 2.2982
Epoch: 001/001 | Batch 2600/5625 | Loss: 2.3192
Epoch: 001/001 | Batch 2700/5625 | Loss: 2.3176
Epoch: 001/001 | Batch 2800/5625 | Loss: 2.3281
Epoch: 001/001 | Batch 2900/5625 | Loss: 2.3009
Epoch: 001/001 | Batch 3000/5625 | Loss: 2.3031
Epoch: 001/001 | Batch 3100/5625 | Loss: 2.3171
Epoch: 001/001 | Batch 3200/5625 | Loss: 2.3002
Epoch: 001/001 | Batch 3300/5625 | Loss: 2.2863
Epoch: 001/001 | Batch 3400/5625 | Loss: 2.3257
Epoch: 001/001 | Batch 3500/5625 | Loss: 2.3163
Epoch: 001/001 | Batch 3600/5625 | Loss: 2.3172
Epoch: 001/001 | Batch 3700/5625 | Loss: 2.3052
Epoch: 001/001 | Batch 3800/5625 | Loss: 2.3109
Epoch: 001/001 | Batch 3900/5625 | Loss: 2.3230
Epoch: 001/001 | Batch 4000/5625 | Loss: 2.2976
Epoch: 001/001 | Batch 4100/5625 | Loss: 2.3131
Epoch: 001/001 | Batch 4200/5625 | Loss: 2.2884
Epoch: 001/001 | Batch 4300/5625 | Loss: 2.3075
Epoch: 001/001 | Batch 4400/5625 | Loss: 2.3074
Epoch: 001/001 | Batch 4500/5625 | Loss: 2.2968
Epoch: 001/001 | Batch 4600/5625 | Loss: 2.3173
Epoch: 001/001 | Batch 4700/5625 | Loss: 2.2763
Epoch: 001/001 | Batch 4800/5625 | Loss: 2.3128
Epoch: 001/001 | Batch 4900/5625 | Loss: 2.3160
Epoch: 001/001 | Batch 5000/5625 | Loss: 2.3110
Epoch: 001/001 | Batch 5100/5625 | Loss: 2.3077
Epoch: 001/001 | Batch 5200/5625 | Loss: 2.3121
Epoch: 001/001 | Batch 5300/5625 | Loss: 2.2908
Epoch: 001/001 | Batch 5400/5625 | Loss: 2.3096
Epoch: 001/001 | Batch 5500/5625 | Loss: 2.2987
Epoch: 001/001 | Batch 5600/5625 | Loss: 2.2882
Time / epoch without evaluation: 14.53 min
Epoch: 001/001 | Train: 10.03% | Validation: 9.76% | Best Validation (Ep. 001): 9.76%
Time elapsed: 18.63 min
Total Training Time: 18.63 min
Test accuracy 10.00%
Total Time: 19.46 min
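As a sanity check, the 5625 batches per epoch reported in the log above are consistent with CIFAR-10's 50,000 training images, a 0.9/0.1 train/validation split, and the batch size of 8 used in this run. The split fraction is an assumption inferred from the batch count; it is not shown in the log itself.

```python
# Sanity check: reproduce the "Batch .../5625" count from the log.
# Assumptions (not stated in the log): CIFAR-10's standard 50,000
# training images and a 0.9/0.1 train/validation split.
cifar10_train_images = 50_000
validation_fraction = 0.1   # assumed split, inferred from the batch count
batch_size = 8              # batch size reported in the comment above

train_images = int(cifar10_train_images * (1 - validation_fraction))
batches_per_epoch = train_images // batch_size
print(batches_per_epoch)    # matches the 5625 batches in the log
```

Note that at batch size 8 an epoch is 5625 optimizer steps, roughly 45x more than at the batch size 360 sometimes used for this benchmark, which is worth keeping in mind when comparing the per-epoch wall-clock times.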
rasbt commented 2 years ago

Thanks, I included it in the updated results at https://sebastianraschka.com/blog/2022/pytorch-m1-gpu.html