Hello. With the release of Llama 3 8B, users can now run good-quality LLMs locally on a recent computer. (A Meteor Lake Core 155 laptop with a ~13 TOPS NPU can run it at ~18 tokens/sec.) Please support local LLM integration, as this would avoid paying for the OpenAI API.
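One low-effort way to support this: popular local runtimes (llama.cpp's `llama-server`, Ollama) expose an OpenAI-compatible `/v1/chat/completions` endpoint, so the integration may only need a configurable base URL instead of a whole new backend. A minimal sketch of what that could look like — the port, model name, and helper names here are assumptions for illustration, not this project's API:

```python
import json
import urllib.request

# Assumed default for llama.cpp's llama-server; Ollama uses port 11434.
LOCAL_BASE_URL = "http://localhost:8080/v1"

def build_chat_request(prompt, model="llama3:8b", base_url=LOCAL_BASE_URL):
    # Build an OpenAI-style chat completion request aimed at a local server.
    # "llama3:8b" is an Ollama-style model tag; adjust for your runtime.
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def chat(prompt, **kwargs):
    # Send the request to the local server and return the reply text.
    req = build_chat_request(prompt, **kwargs)
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Since the request shape is identical to the OpenAI API, existing OpenAI client code could likely be reused by just pointing its base URL at localhost.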
Thanks!!