-
Is it possible to use local models, or are there any plans for that? For example, using models from Hugging Face such as meta-llama/Llama-3.2-11B-Vision.
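For context, loading that checkpoint locally with Hugging Face transformers looks roughly like the sketch below. This is the standard transformers loading pattern, not anything plang-specific, and it assumes the gated meta-llama license has been accepted and enough GPU memory is available.

```python
# Sketch: loading meta-llama/Llama-3.2-11B-Vision locally with transformers.
# Assumes the gated-model license is accepted and a GPU with enough memory.
import torch
from transformers import MllamaForConditionalGeneration, AutoProcessor

model_id = "meta-llama/Llama-3.2-11B-Vision"
model = MllamaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # halves memory use vs. float32
    device_map="auto",           # places layers on available devices
)
processor = AutoProcessor.from_pretrained(model_id)
```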
-
How do I set `base_url` and `model` in the Python SDK?
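Assuming the SDK in question is the OpenAI-compatible Python client, a minimal sketch follows; the endpoint and model name below are assumptions (Ollama's documented local defaults), not project settings.

```python
# Sketch: pointing the OpenAI Python SDK at a local OpenAI-compatible server.
# The URL is Ollama's default local endpoint; substitute your own server's.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # local server instead of api.openai.com
    api_key="not-needed-locally",          # most local servers ignore the key
)

response = client.chat.completions.create(
    model="llama3.2",  # whatever model the local server has registered
    messages=[{"role": "user", "content": "Hello from a local model!"}],
)
print(response.choices[0].message.content)
```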
-
Would like to save some cash during the initial experimenting/reverse-engineering-the-documentation wave of development on the project, so I don't want to hit OpenAI's API too much, especially for…
-
Would love to have local LLM support through LM Studio or Ollama.
-
Can you please add local LLM support?
Ollama support would be nice too.
Thank you.
-
Currently it takes a long time to start the model; implement caching to improve loading speed. See the sketch below for one possible shape.
PS: please open the PR against the local-llm branch, not the main branch.
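In case it helps, one option is an in-process memoized loader, sketched here; `load_model` is a hypothetical stand-in for the project's real loader, and this only speeds up repeated loads within one process (cross-restart caching would need the converted weights cached on disk instead).

```python
# Hypothetical sketch: memoize the expensive model load so repeated
# requests within one process reuse the already-loaded instance.
import time
from functools import lru_cache

def load_model(model_path: str) -> dict:
    """Stand-in for the real, slow model load (hypothetical)."""
    time.sleep(2)  # simulate reading large weight files
    return {"path": model_path}

@lru_cache(maxsize=1)
def get_model(model_path: str) -> dict:
    """First call loads the model; later calls return the cached instance."""
    return load_model(model_path)

get_model("models/llama-3.2")  # slow: actually loads
get_model("models/llama-3.2")  # instant: served from the cache
```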
-
Hi,
Looking into https://github.com/PLangHQ/plang/issues/14
Is there a possibility to get `plang` working with an LLM running locally?
-
![b9b846877c545e753d310f5dc4d092d](https://github.com/user-attachments/assets/5aa5ce35-13d1-4056-b971-61e9c463e9ab)
-
Instead of using OpenAI (#69), we want to use a local model that runs on the device (makes it free!).
-
A local LLM served through Ollama can do the same things, though how efficiently depends on the model (e.g., Llama 3.2); also, in the example, no data fields were added.
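For comparison, a minimal sketch of the same kind of call against a local Llama 3.2 through Ollama, assuming the ollama Python package is installed and the model has already been pulled:

```python
# Sketch: chatting with a local Llama 3.2 model through Ollama.
# Assumes `ollama serve` is running and `ollama pull llama3.2` was done.
import ollama

response = ollama.chat(
    model="llama3.2",
    messages=[{"role": "user", "content": "Summarize this repository."}],
)
print(response["message"]["content"])
```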