johnwalz97 / LibreOfficeAICopilot


Implementing Local LLMs for Enhanced Performance and Flexibility #1

Open NADOOITChristophBa opened 8 months ago

NADOOITChristophBa commented 8 months ago

Description

With the growing trend of running Large Language Models (LLMs) on local hardware, there is an exciting opportunity to explore alternatives to OpenAI's hosted models. One intriguing possibility is running Llama on Apple Silicon.

Background

As more users and developers seek greater control and flexibility over their tools, running LLMs locally becomes an increasingly appealing option. This shift not only enhances performance by reducing reliance on external APIs but also opens up new possibilities for customization and integration.

Suggestion

I propose investigating the integration of Llama, a non-OpenAI LLM, into our existing framework. The Llama model has shown promising results and could be an excellent fit for our needs, especially considering its potential compatibility with Apple Silicon.
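To make the idea concrete, here is a minimal sketch of what the local path could look like using the `mlx-lm` package from the MLX project mentioned below. The specific model name is just an example of a community-converted checkpoint, not a recommendation, and the prompt is a placeholder:

```python
# Minimal sketch: running a quantized Llama model locally via the
# mlx-lm package (pip install mlx-lm). Requires an Apple Silicon Mac;
# the weights are downloaded from Hugging Face on first use.
from mlx_lm import load, generate

# Example community conversion -- any MLX-compatible model repo works here.
model, tokenizer = load("mlx-community/Llama-3.2-3B-Instruct-4bit")

prompt = "Summarize the following paragraph in one sentence: ..."
response = generate(model, tokenizer, prompt=prompt, max_tokens=200)
print(response)
```

The appeal for our framework is that the call site stays a simple prompt-in, text-out function, so a local backend could sit behind the same interface as the current OpenAI-based one.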

Reference

For further details on Llama and its implementation, the MLX Examples repository provides extensive examples and documentation: MLX Examples - Llama on Apple Silicon (https://github.com/ml-explore/mlx-examples).

Potential Benefits

- Enhanced performance through reduced reliance on external APIs.
- Greater control and flexibility over our tooling.
- New possibilities for customization and integration.
- The ability to run entirely on local hardware such as Apple Silicon.

I look forward to discussing this further and exploring how we can integrate this into our current systems.

kolergy commented 4 months ago

Ollama is an open-source tool for running models locally.

Here's a good post on getting OpenAI-API compatibility: https://towardsdatascience.com/how-to-build-an-openai-compatible-api-87c8edea2f06
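Since Ollama already exposes an OpenAI-compatible endpoint, code written against the `openai` Python client can often be repointed at a local server with no other changes. A small sketch, assuming `ollama serve` is running with its default port and a model has been pulled (the model name `llama3` is just an example):

```python
# Sketch: pointing the standard openai client at a local Ollama server.
# Ollama serves an OpenAI-compatible API at http://localhost:11434/v1.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # local Ollama endpoint
    api_key="ollama",  # required by the client, but ignored by Ollama
)

completion = client.chat.completions.create(
    model="llama3",  # any model previously pulled with `ollama pull`
    messages=[{"role": "user", "content": "Draft a polite meeting invite."}],
)
print(completion.choices[0].message.content)
```

This route would let the existing OpenAI-based code stay as-is, with the backend swapped via configuration rather than a rewrite.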