facebookresearch / FAMBench

Benchmarks to capture important workloads.
Apache License 2.0

half precision xlmr error when model on cpu #40

Closed samiwilf closed 2 years ago

samiwilf commented 2 years ago

Ran into the error shown in the screenshot below. As a possible solution, we could raise a SystemExit when --half-precision (a flag to be added) is passed without --use-gpu.

(Screenshot of the error output, 2021-10-25)
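The proposed guard could be sketched roughly like this (a minimal standalone sketch; the --half-precision flag is the one proposed above and does not exist yet, and the actual FAMBench argument parser will differ):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--use-gpu", action="store_true")
# Hypothetical flag, proposed in this issue but not yet added.
parser.add_argument("--half-precision", action="store_true")

def validate(args: argparse.Namespace) -> None:
    # fp16 kernels are not implemented for CPU, so fail fast
    # instead of crashing inside the model.
    if args.half_precision and not args.use_gpu:
        raise SystemExit(
            "--half-precision requires --use-gpu "
            "(fp16 is not supported on CPU)"
        )

args = parser.parse_args(["--use-gpu", "--half-precision"])
validate(args)  # passes: both flags set
```

Running with --half-precision alone would then exit with a clear message instead of a runtime error from the CPU ops.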
aaronenyeshi commented 2 years ago

Right, it looks like fp16 is not supported by the ATen ops in PyTorch for CPU. But there is support for BFloat16, if you would like to use that for CPU.
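For context, bfloat16 is cheap to support on CPU because it is just float32 with the low 16 mantissa bits dropped, so it keeps float32's exponent range. A pure-Python sketch of that truncation (assumed illustration, not PyTorch's actual implementation, which also rounds):

```python
import struct

def to_bfloat16(x: float) -> float:
    """Truncate a float32 to bfloat16 precision (sign, 8-bit exponent,
    top 7 mantissa bits) and widen it back to float for inspection."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return struct.unpack(">f", struct.pack(">I", bits & 0xFFFF0000))[0]

print(to_bfloat16(1.0))      # exactly representable: 1.0
print(to_bfloat16(3.14159))  # loses precision: 3.140625
```

The same dynamic range as float32 is what makes bfloat16 a drop-in choice for CPU training, unlike fp16's narrower 5-bit exponent.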

aaronenyeshi commented 2 years ago

https://www.intel.com/content/www/us/en/artificial-intelligence/posts/intel-facebook-boost-bfloat16.html +CC @erichan1

erichan1 commented 2 years ago

> https://www.intel.com/content/www/us/en/artificial-intelligence/posts/intel-facebook-boost-bfloat16.html +CC @erichan1

Helpful! Thanks for the link @aaronenyeshi

erichan1 commented 2 years ago

Not currently fixed. Will loop back to this, likely by using bfloat16.