ryujaehun/pytorch-gpu-benchmark
Using famous CNN models in PyTorch, we run benchmarks on various GPUs.
MIT License · 224 stars · 85 forks
Issues (newest first)
#31 · Added 7900XTX results with lamikr's support for AMD cards · eitch · opened 2 months ago · 0 comments
#30 · plot.ipynb does not work for parsing new results with pytorch 2.30 · lamikr · opened 3 months ago · 0 comments
#29 · Test.sh needs changes to allow running tests also for AMD gpus · lamikr · opened 3 months ago · 0 comments
#28 · AMD support to test.sh and AMD RX 6800 results · lamikr · opened 3 months ago · 1 comment
#27 · Running benchmark_models.py shows the error "No module named 'pandas'" · Oracle-Chen · opened 5 months ago · 0 comments
#26 · Added RTX 5000 Ada Laptop performances · dot-Eagle96 · opened 5 months ago · 0 comments
#25 · Add 6800XT results · ToughStyle · opened 6 months ago · 0 comments
#24 · Added 4060 Ti 16GB results · adorobis · closed 6 months ago · 0 comments
#23 · why did this benchmark blow away my linux partition? · achillez · closed 1 year ago · 1 comment
#22 · bugfix, support for torch 2.0 · hellojixian · closed 1 year ago · 1 comment
#21 · TypeError: __call__() got an unexpected keyword argument 'pretrained' · joshhu · opened 2 years ago · 4 comments
#20 · adding rtx 3060 results · zachcoleman · closed 2 years ago · 0 comments
#19 · RuntimeError: miopenStatusUnknownError · Bengt · opened 2 years ago · 0 comments
#18 · error in type mobilenet_v2 · jjziets · closed 3 years ago · 5 comments
#17 · GTX 2080 Ti Performance [Windows 10] · arstropica · closed 3 years ago · 1 comment
#16 · cannot support ROCM · znsoftm · closed 3 years ago · 3 comments
#15 · titan RTX · ryujaehun · closed 3 years ago · 0 comments
#14 · results for nvidia A100 · kirk86 · closed 3 years ago · 2 comments
#13 · poly.ipynb ValueError · sevaroy · closed 3 years ago · 1 comment
#12 · Incorrect help information for arguments: folder · yqtianust · closed 4 years ago · 3 comments
#11 · test rtx2080ti on windows10 · olixu · closed 4 years ago · 0 comments
#10 · resnet with batchsize=12? · twmht · closed 4 years ago · 1 comment
#9 · The performance of cuda on windows10 VS Linux · olixu · closed 4 years ago · 7 comments
#8 · If the batch size becomes smaller, the slowdown occurs because other parts are overloaded rather than the operation itself · olixu · closed 4 years ago · 0 comments
#7 · have you any interest to try larger batch size, such as 64 or 128? · qixiang109 · closed 4 years ago · 1 comment
#6 · Have you tried batch_size 1 for fp32 and fp16? · Edwardmark · closed 4 years ago · 1 comment
#5 · RuntimeError: CUDA out of memory. Tried to allocate 32.00 MiB (GPU 0; 7.79 GiB total capacity; 6.64 GiB already allocated; 21.00 MiB free; 150.86 MiB cached) · M-Abdallah · closed 4 years ago · 4 comments
#4 · Fix bar plot layout, add a new figure · elombardi2 · closed 5 years ago · 0 comments
#3 · My 1080 GTX Ti is 50% slower than your benchmark · NikEyX · closed 4 years ago · 5 comments
#2 · Update typo · johmathe · closed 5 years ago · 0 comments
#1 · Summary BenchMark · ryujaehun · closed 4 years ago · 0 comments