Open asmith26 opened 1 week ago

Hi there,

Just wondering if you have plans to add fine-tuning/inference support for VLMs like Llama 3.2 (https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices/)?

Many thanks for any help! :)

We have added support for the Llama 3.2 VLM & LLM models. Please check the link: https://github.com/intel/intel-extension-for-pytorch/blob/2.6-llama-3/docs/index.md#211-run-vision-text-generation-with-llama-32-11b-models-using-bf16-autotp-tensor-parallel-on-multiple-cpu-numa-nodes

Thanks for the info @ZailiWang! It looks like this might only be for CPU (apologies if I'm misunderstanding)? Thanks again.