jaxony / ShuffleNet

ShuffleNet in PyTorch. Based on https://arxiv.org/abs/1707.01083
MIT License

Inference Speed Test? #4

Open ildoonet opened 7 years ago

ildoonet commented 7 years ago

It would be great if you could test your code to check the inference speed.
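A minimal timing sketch along these lines, assuming the `ShuffleNet` class from this repo's model.py (module path and constructor arguments assumed) and a recent PyTorch:

```python
import time
import torch
from model import ShuffleNet  # model definition from this repo (path assumed)

net = ShuffleNet(groups=3)  # constructor arguments assumed
net.eval()

x = torch.randn(1, 3, 224, 224)  # single 224x224 RGB image
with torch.no_grad():
    net(x)  # warm-up pass so one-off allocation cost is not measured
    start = time.time()
    y = net(x)
    print("CPU forward: {:.1f} ms".format((time.time() - start) * 1000))
```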

jaxony commented 7 years ago

Hi @ildoonet, I'll take a look when I get time :)

jaxony commented 6 years ago

I finally got around to doing some inference on ShuffleNet today. And it is definitely far too slow. Any ideas on how to speed it up? I suspect the snail-like speed is due to the frequent channel shuffling.
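For reference, the channel shuffle from the paper is just a reshape/transpose/reshape of the channel dimension; a self-contained sketch (shapes are illustrative):

```python
import torch

def channel_shuffle(x, groups):
    # Split channels into (groups, channels_per_group), swap the two axes,
    # then flatten back so channels from different groups interleave.
    n, c, h, w = x.size()
    x = x.view(n, groups, c // groups, h, w)
    x = x.transpose(1, 2).contiguous()
    return x.view(n, c, h, w)

x = torch.randn(1, 240, 28, 28)
y = channel_shuffle(x, groups=3)
print(y.shape)  # torch.Size([1, 240, 28, 28])
```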

jaxony commented 6 years ago

@gngdb, if you have any ideas on how to speed it up in PyTorch, I would love to know. I can't imagine doing a full training run at this speed; speeding it up would drastically help with training too.

gngdb commented 6 years ago

What version of PyTorch are you running? The speed of grouped convolutions increased a lot in the most recent versions.
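One rough way to check whether grouped convolutions are the slow part on a given build is to time a grouped 1x1 convolution against a dense one of the same shape (sizes below are illustrative, not taken from the repo):

```python
import time
import torch
import torch.nn as nn

x = torch.randn(8, 240, 28, 28)
convs = {
    "dense 1x1": nn.Conv2d(240, 240, kernel_size=1),
    "grouped 1x1": nn.Conv2d(240, 240, kernel_size=1, groups=3),
}

with torch.no_grad():
    for name, conv in convs.items():
        conv(x)  # warm-up
        start = time.time()
        for _ in range(100):
            conv(x)
        print("{}: {:.1f} ms / 100 iters".format(name, (time.time() - start) * 1000))
```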

jaxony commented 6 years ago

I'm running PyTorch 0.3.0 with CUDA. How long does one inference on the cat image take for you? It takes roughly 30 seconds for me.

gngdb commented 6 years ago

The entire script takes about 400 ms for me to run, and the actual inference step y = net(x) takes about 70 ms. The infer.py script never calls .cuda(), so everything runs on the CPU. I tried moving it to the GPU, but that just makes a single inference slower (it takes longer to move the single image on and off the GPU): the script ends up taking 16 seconds, with 5 seconds spent on inference.
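When timing on the GPU it also helps to exclude the one-off CUDA context setup and to synchronize around the forward pass; a sketch, reusing the assumed model construction from above:

```python
import time
import torch
from model import ShuffleNet  # path and constructor arguments assumed

net = ShuffleNet(groups=3).cuda().eval()
x = torch.randn(1, 3, 224, 224).cuda()  # copy the input once, up front

with torch.no_grad():
    net(x)  # warm-up: CUDA context init and cuDNN autotuning happen here
    torch.cuda.synchronize()
    start = time.time()
    y = net(x)
    torch.cuda.synchronize()  # wait for the kernel to finish before stopping the clock
    print("GPU forward: {:.1f} ms".format((time.time() - start) * 1000))
```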

gngdb commented 6 years ago

For completeness, I was running with pytorch version 0.4.0, and pip freeze gave this. I installed the conda env following the instructions to build pytorch from source.

Also, here are the CPU details:

Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                32
On-line CPU(s) list:   0-31
Thread(s) per core:    2
Core(s) per socket:    8
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 79
Model name:            Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz
Stepping:              1
CPU MHz:               1200.281

With an old conda env on pytorch version 0.2.0, it took 150ms for inference and 350ms for the whole script.

jaxony commented 6 years ago

Hmm okay. I guess there's no need to improve speed if it works well enough. I'll figure out what the problem is on my end.

DW1HH commented 6 years ago

@jaxony Hi, have you solved it?