pytorch / pytorch

Tensors and Dynamic neural networks in Python with strong GPU acceleration
https://pytorch.org

How to do inference with float16 (or half) in caffe2? #13493

Open baynaa7 opened 5 years ago

baynaa7 commented 5 years ago

I cannot find any resources about running inference with float16. There is resnet.py for training with float16, but no information is provided for doing inference with float16.

Any help would be appreciated. Thank you.
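
As a rough illustration only (not from this thread): a minimal sketch of what float16 inference could look like with the caffe2 Python API, assuming the model was exported with float16 weights (for example via resnet.py with `dtype=float16`). The file names `init_net.pb`/`predict_net.pb` and the blob names `data`/`softmax` are placeholders.

```python
# Hypothetical sketch: float16 inference with the caffe2 Python API.
# Assumes the exported nets already use float16 weights; file and blob
# names below are placeholders, not from this issue.
import numpy as np
from caffe2.proto import caffe2_pb2
from caffe2.python import workspace

# Load the serialized init/predict nets produced at export time.
init_net = caffe2_pb2.NetDef()
with open("init_net.pb", "rb") as f:
    init_net.ParseFromString(f.read())
predict_net = caffe2_pb2.NetDef()
with open("predict_net.pb", "rb") as f:
    predict_net.ParseFromString(f.read())

workspace.RunNetOnce(init_net)       # materializes the (float16) weights

# Feed the input as float16 so it matches the network's dtype, then
# create and run the predict net.
x = np.random.rand(1, 3, 224, 224).astype(np.float16)
workspace.FeedBlob("data", x)
workspace.CreateNet(predict_net)
workspace.RunNet(predict_net.name)
print(workspace.FetchBlob("softmax").dtype)
```

If the exported weights are float32, caffe2 also ships `FloatToHalf`/`HalfToFloat` operators for converting blobs at the boundaries, but whether each operator in the predict net accepts float16 inputs depends on the operator and device.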

dzung-hoang commented 5 years ago

NVIDIA provides an example here. However, when I tried `--dtype=float16`, I got a type mismatch error in the ATen library. It looks like a patch to ATen is needed.
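
For comparison, on the PyTorch/ATen side plain float16 inference is just the standard `.half()` workflow. A minimal sketch (assuming torchvision and a CUDA device are available; this is not the NVIDIA example referenced above):

```python
# Minimal sketch of float16 inference via PyTorch/ATen (not the NVIDIA
# example referenced above); assumes torchvision and a CUDA device.
import torch
import torchvision.models as models

model = models.resnet50(pretrained=True).eval().half().cuda()  # cast weights to float16

x = torch.randn(1, 3, 224, 224, device="cuda", dtype=torch.half)
with torch.no_grad():
    y = model(x)            # forward pass runs in float16 on the GPU
print(y.dtype, y.shape)     # torch.float16, torch.Size([1, 1000])
```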

alexbuyval commented 5 years ago

@pcub Have you found any relevant information about inference with float16? Thank you!