huggingface / optimum-intel

🤗 Optimum Intel: Accelerate inference with Intel optimization tools
https://huggingface.co/docs/optimum/main/en/intel/index
Apache License 2.0

Fix inference for batched inputs for llama #784

Closed · echarlaix closed this 3 months ago

echarlaix commented 3 months ago

Fixes inference for batched inputs with fp32 models, where the issue stems from the mask fill value being hardcoded as `min_dtype = torch.finfo(torch.float16).min`.
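
To make the one-line description above concrete, below is a minimal, self-contained sketch, not the actual patch in this PR, of how an additive attention mask might be built when the fill value is derived from the model's dtype rather than hardcoded to `torch.finfo(torch.float16).min`. The helper name `build_additive_mask`, the mask layout, and the equality check at the end are illustrative assumptions; the only fact taken from the PR is that the problem originates in the hardcoded float16 minimum for a fp32 model.

```python
import torch


def build_additive_mask(attention_mask: torch.Tensor, dtype: torch.dtype) -> torch.Tensor:
    """Turn a [batch, seq] padding mask (1 = keep, 0 = pad) into an additive bias
    whose masked positions hold the minimum of the *model's* dtype (illustrative sketch)."""
    # Assumed fix: derive the fill value from the model dtype instead of hardcoding float16.
    min_dtype = torch.finfo(dtype).min
    bias = torch.zeros(attention_mask.shape, dtype=dtype)
    bias = bias.masked_fill(attention_mask == 0, min_dtype)
    return bias[:, None, None, :]  # broadcastable over heads and query positions


# Batched (padded) inputs: the second sequence has two padding tokens.
attention_mask = torch.tensor([[1, 1, 1, 1],
                               [1, 1, 0, 0]])

fp32_bias = build_additive_mask(attention_mask, torch.float32)

# With the hardcoded value, a fp32 mask would hold float16's minimum (-65504.0)
# instead of float32's minimum, so any dtype-based check such as
# `bias == torch.finfo(torch.float32).min` would no longer identify masked positions.
hardcoded = torch.finfo(torch.float16).min
print(fp32_bias.min().item())                  # float32 minimum when derived from the dtype
print((fp32_bias == hardcoded).any().item())   # False: the hardcoded float16 value never matches
```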

HuggingFaceDocBuilderDev commented 3 months ago

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.