gpleiss / efficient_densenet_pytorch

A memory-efficient implementation of DenseNets
MIT License

Inference time issue #55

Closed youngwanLEE closed 5 years ago

youngwanLEE commented 5 years ago

Hi,

The "Speed (sec/mini batch)" table in the README shows a speed comparison.

Is the reported speed the training time?

I wonder whether the inference speed of the efficient implementation is also slower than that of the naive implementation.

Did you compare the inference time, too?

gpleiss commented 5 years ago

Inference time is exactly the same. The efficient operations are only necessary for training: they reduce the number of feature maps that must be kept in memory for the backward pass. During inference, feature maps are not stored, so memory is not an issue.
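As a minimal sketch of this point (using torchvision's `densenet121` as a stand-in for this repo's model, which this thread does not show), running inference under `torch.no_grad()` illustrates why the memory-saving path is irrelevant there: without an autograd graph, intermediate feature maps are freed as soon as they are consumed.

```python
# Sketch, not from this repo: under torch.no_grad(), no autograd graph is built,
# so intermediate feature maps are not retained and checkpointing has nothing to save.
import time

import torch
from torchvision.models import densenet121  # stand-in for the repo's DenseNet

model = densenet121().eval()
x = torch.randn(8, 3, 224, 224)

with torch.no_grad():  # no stored activations -> inference memory/speed unaffected
    start = time.time()
    _ = model(x)
    print(f"inference time: {time.time() - start:.3f} s")
```

With gradients disabled, the naive and efficient variants execute the same forward operations, which is why their inference timings match.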

For a more detailed description, please read the tech report.