Closed shunsuke227ono closed 6 years ago
I imagine you're using cuDNN. I think NVIDIA has spent more time tuning their libraries for VGG than SqueezeNet.
Does that mean I have to switch off cuDNN if I want to use SqueezeNet on GPU? However, I haven't found a way to disable cuDNN in Keras. Does anyone have ideas for using SqueezeNet in Keras with a GPU?
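As far as I know, Keras itself has no switch for cuDNN, but with the TensorFlow backend you can at least force CPU execution by hiding the GPU before the framework is imported. A minimal sketch, relying on standard CUDA device-visibility behavior rather than any Keras API:

```python
import os

# Hide all CUDA devices so TensorFlow/Keras falls back to CPU.
# This must be set BEFORE tensorflow or keras is imported.
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

# import keras  # imported after this point, Keras will only see the CPU
```

This disables the GPU entirely (and cuDNN with it), so it is useful for comparison runs, not for making cuDNN-free GPU inference faster.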
I'm trying to measure the speed of inference on a single image with SqueezeNet. When I run it on CPU, SqueezeNet seems fast enough (compared to VGG). But on GPU, SqueezeNet gets much slower, even slower than on CPU.
Does anyone know why it gets slow on GPU? Is there something I should do to SqueezeNet when running it on GPU?
Here are some results from experiments I ran comparing SqueezeNet and VGG16 speeds on both CPU and GPU.
On CPU, SqueezeNet is much faster than VGG16.
On GPU, VGG16 gets much faster, even faster than SqueezeNet. And SqueezeNet gets even slower than it is on CPU.
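For single-image latency the measurement method matters a lot: the first `predict` call includes one-time graph and kernel setup, so you should warm up first and report a median over many runs. A minimal timing-harness sketch, where `run_inference` is a hypothetical stand-in for something like `model.predict(x)` on a preprocessed image:

```python
import time
import statistics

def time_inference(run_inference, warmup=5, runs=50):
    """Return the median latency in seconds of run_inference()."""
    for _ in range(warmup):
        run_inference()  # warm-up: excludes one-time setup cost
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        run_inference()
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

# Hypothetical usage with a Keras model (not run here):
# latency = time_inference(lambda: model.predict(x, batch_size=1))
```

One plausible explanation for the numbers above: at batch size 1, a small network like SqueezeNet does so little work per layer that fixed per-call GPU overhead (kernel launches, host-to-device copies) can dominate, while a heavy network like VGG16 amortizes that overhead.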
Thanks!