Right, it looks like fp16 is not supported by the ATen ops in PyTorch on CPU. But bfloat16 is supported, if you would like to use that on CPU.
https://www.intel.com/content/www/us/en/artificial-intelligence/posts/intel-facebook-boost-bfloat16.html (cc @erichan1)
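For context, here is a minimal sketch of running a module in bfloat16 on CPU, assuming a reasonably recent PyTorch build with CPU bfloat16 support; the model and input are placeholders, not code from this project:

```python
import torch

model = torch.nn.Linear(8, 4)   # placeholder model
x = torch.randn(2, 8)

# Option 1: autocast supported ops to bfloat16 on CPU.
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    y = model(x)

# Option 2: cast the model and inputs explicitly.
model_bf16 = model.to(torch.bfloat16)
y2 = model_bf16(x.to(torch.bfloat16))

print(y.dtype, y2.dtype)  # torch.bfloat16 torch.bfloat16
```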
Helpful! Thanks for the link @aaronenyeshi
Not currently fixed. Will loop back to this, likely by switching to bfloat16.
Ran into the error shown in the screenshot below. As a possible solution, we could raise a SystemExit when --half-precision (a flag to be added) is passed without --use-gpu.
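A minimal sketch of that guard, assuming an argparse-based CLI; the flag names come from the comment above, but the surrounding parser setup is hypothetical, not the project's actual code:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--half-precision", action="store_true")
parser.add_argument("--use-gpu", action="store_true")
args = parser.parse_args()

if args.half_precision and not args.use_gpu:
    # parser.error() prints the message to stderr and raises SystemExit(2).
    parser.error("--half-precision requires --use-gpu (fp16 is unsupported on CPU)")
```

Using parser.error() keeps the exit path consistent with argparse's own usage errors, rather than raising SystemExit directly.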