huawei-noah / AdderNet

Code for paper "AdderNet: Do We Really Need Multiplications in Deep Learning?"
BSD 3-Clause "New" or "Revised" License

Why the paper does not provide the training/inference time or speed experiment results? #11

Closed: machanic closed this issue 4 years ago

machanic commented 4 years ago

I read your paper, but I cannot find any speed test experiments. Why were they not included?

HantingChen commented 4 years ago

Since we have not implemented CUDA and cuDNN acceleration for AdderNet yet, its speed will not be faster than convolution with CUDA and cuDNN.

KindleHe commented 4 years ago

> Since we have not implemented CUDA and cuDNN acceleration for AdderNet yet, its speed will not be faster than convolution with CUDA and cuDNN.

Then, what about your inference time on CPU, compared with other SOTA models?

HantingChen commented 4 years ago

> Since we have not implemented CUDA and cuDNN acceleration for AdderNet yet, its speed will not be faster than convolution with CUDA and cuDNN.
>
> Then, what about your inference time on CPU, compared with other SOTA models?

I'm sorry, we have not compared the inference time with other SOTA models.
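
If it helps, here is a rough sketch of how one could time a single AdderNet layer against a standard convolution on CPU. It assumes the `adder2d` layer in this repository's `adder.py` accepts Conv2d-like arguments (input channels, output channels, kernel size, padding, bias); the actual signature may differ, so treat this as a starting point rather than a verified benchmark.

```python
# Minimal CPU timing sketch (not part of the repo): compares nn.Conv2d
# against the repo's adder layer, assuming a Conv2d-like constructor.
import time
import torch
import torch.nn as nn

from adder import adder2d  # assumed import path within this repository


def benchmark(layer, x, warmup=5, iters=50):
    """Return the average forward-pass latency in milliseconds on CPU."""
    with torch.no_grad():
        for _ in range(warmup):
            layer(x)
        start = time.perf_counter()
        for _ in range(iters):
            layer(x)
        return (time.perf_counter() - start) / iters * 1e3


x = torch.randn(1, 64, 56, 56)
conv = nn.Conv2d(64, 64, 3, padding=1, bias=False)
addr = adder2d(64, 64, 3, padding=1, bias=False)  # assumed signature

print(f"Conv2d : {benchmark(conv, x):.2f} ms")
print(f"adder2d: {benchmark(addr, x):.2f} ms")
```

Note that without a fused CUDA/cuDNN kernel, the adder layer's current implementation is not expected to beat a highly optimized convolution, so such numbers reflect implementation maturity rather than the theoretical cost of additions versus multiplications.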