u39kun opened 6 years ago
Pre-ordered a 2080 Ti. I plan on posting results when I get it (should arrive early Oct.)
For training, which do you prefer: the RTX 2080 Ti or the Titan V?
Thanks.
Could you compare inference frameworks, such as TensorFlow Lite, PaddlePaddle, TEngine, etc.? Thanks.
A recent MXNet build would be interesting to see in this comparison.
@u39kun Here are the results for TensorFlow on the following system:
nvidia-docker with the `nvidia/cuda:10.0-cudnn7-devel` image
For a Zotac RTX 2070 AMP Extreme:
Precision | vgg16 eval | vgg16 train | resnet152 eval | resnet152 train |
---|---|---|---|---|
32-bit | 42.6ms | 130.6ms | 65.1ms | 264.2ms |
16-bit | 29.0ms | 94.2ms | 39.5ms | 183.9ms |
For a Zotac RTX 2080 Ti:
Precision | vgg16 eval | vgg16 train | resnet152 eval | resnet152 train |
---|---|---|---|---|
32-bit | 29.0ms | 91.4ms | 43.6ms | 191.2ms |
16-bit | 18.7ms | 60.2ms | 25.5ms | 135.0ms |
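A quick way to read the two tables above is as fp16-over-fp32 speedups; the snippet below just divides the posted per-batch training times (a sketch over the numbers reported in this comment, nothing more):

```python
# fp32 and fp16 per-batch training times (ms) copied from the tables above.
times = {
    "RTX 2070":    {"vgg16 train": (130.6, 94.2), "resnet152 train": (264.2, 183.9)},
    "RTX 2080 Ti": {"vgg16 train": (91.4, 60.2),  "resnet152 train": (191.2, 135.0)},
}

for gpu, runs in times.items():
    for name, (fp32_ms, fp16_ms) in runs.items():
        # Speedup = fp32 time / fp16 time (higher is better).
        print(f"{gpu} {name}: {fp32_ms / fp16_ms:.2f}x fp16 speedup")
```

So both cards land in the roughly 1.4-1.5x range for fp16 training on these models.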
Hi, I am really interested in seeing an MXNet benchmark. Please include it if possible.
It would be great to have some AMD GPUs tested. See https://github.com/ROCmSoftwarePlatform/tensorflow-upstream/issues/173
Hi, RTX 3080 vs RX 6800 XT please.
This code works well on an AMD ROCm system: 12700K + 6800 XT, PyTorch 1.12.1 (ROCm 5.1). Tested without MIOpen tuning:

```
pytorch's vgg16 eval at fp32: 32.9ms avg
pytorch's vgg16 train at fp32: 126.4ms avg
pytorch's resnet152 eval at fp32: 48.9ms avg
pytorch's resnet152 train at fp32: 184.8ms avg
pytorch's densenet161 eval at fp32: 45.4ms avg
pytorch's densenet161 train at fp32: 170.6ms avg

pytorch's vgg16 eval at fp16: 19.9ms avg
pytorch's vgg16 train at fp16: 92.8ms avg
pytorch's resnet152 eval at fp16: 28.1ms avg
pytorch's resnet152 train at fp16: 132.8ms avg
pytorch's densenet161 eval at fp16: 37.8ms avg
pytorch's densenet161 train at fp16: 136.9ms avg
```
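When comparing numbers like these across posts, a per-batch latency converts to throughput as batch_size / seconds. The batch size the benchmark uses is not stated in this thread, so the value of 16 below is an assumption purely for illustration:

```python
def throughput(ms_per_batch: float, batch_size: int = 16) -> float:
    """Images/sec from an average per-batch latency in milliseconds.

    NOTE: batch_size=16 is an assumed value, not confirmed in this thread.
    """
    return batch_size * 1000.0 / ms_per_batch

# Example: the 6800 XT's vgg16 fp32 train time of 126.4 ms/batch.
print(f"{throughput(126.4):.1f} images/sec")
```

The same conversion makes the ms-per-batch tables earlier in the thread directly comparable, as long as everyone ran with the same batch size.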
This is an open thread for requests.