-
Has anyone tried running it locally?
I adapted it for use with LM Studio by changing the tokenizer, LLM calls, and configurations. The connection to the API endpoint works, and persona creation is su…
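For anyone else attempting the same adaptation: LM Studio exposes an OpenAI-compatible server, by default at port 1234, so the LLM calls can be redirected over plain HTTP. A minimal sketch, assuming that default port and a placeholder model name (use whatever your server lists under `GET /v1/models`):

```python
import json
import urllib.request

# LM Studio's local server is OpenAI-compatible; the default port is 1234.
BASE_URL = "http://localhost:1234/v1"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for the OpenAI-style /v1/chat/completions endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__":
    # "local-model" is a placeholder for the name your LM Studio instance reports.
    req = build_chat_request("local-model", "Hello!")
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    print(body["choices"][0]["message"]["content"])
```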
-
![image](https://github.com/user-attachments/assets/222b56eb-f871-4097-bde3-36cfd3c0b4ef)
-
#### ALL software version info
(this library, plus any other relevant software, e.g. bokeh, python, notebook, OS, browser, etc., should be added within the dropdown below.)
Software Version Info
…
-
Is it possible to use local models, or are there plans to support that? For example, models from Hugging Face such as meta-llama/Llama-3.2-11B-Vision.
-
How do I set `base_url` and `model` in the Python SDK?
-
Hi,
is it possible to use a vLLM OpenAI-compatible endpoint, where we can set the `base_url`, instead of OpenAI?
I had a similar issue with Weave where I wanted to trace local LLMs. Would be great if it’s supp…
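vLLM's OpenAI-compatible server defaults to `http://localhost:8000/v1`, and any client that accepts a `base_url` can point there. One gotcha is that the `model` string must match what the server advertises (for vLLM, the `--model` path unless overridden with `--served-model-name`), which you can check first. A sketch, with the URL as an assumption:

```python
import json
import urllib.request

def parse_model_ids(models_response: dict) -> list[str]:
    """Extract model ids from an OpenAI-style GET /v1/models response body."""
    return [entry["id"] for entry in models_response.get("data", [])]

def list_served_models(base_url: str = "http://localhost:8000/v1") -> list[str]:
    """Ask an OpenAI-compatible server (e.g. vLLM) which model names it serves.

    Whatever id comes back is the value to pass as `model` in chat requests.
    """
    with urllib.request.urlopen(f"{base_url}/models") as resp:
        return parse_model_ids(json.load(resp))

if __name__ == "__main__":
    print(list_served_models())
```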
-
### Description
Sorry for the silly question.
Does Kotaemon have a built-in local LLM? I am not connected to any model, yet document analysis is working.
How do I connect to my local LLM? I see o…
-
When I download the Qwen 2.5 GGUF file from Hugging Face and deploy it as the LLM for LightRAG through an Ollama Modelfile, it always gets stuck at the last step, no matter how large or small my txt …
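One common cause of a stall at the final step is Ollama's small default context window (2048 tokens) silently truncating LightRAG's long extraction prompts; raising `num_ctx` in the Modelfile is worth trying (LightRAG's docs suggest a large value such as 32768). A hypothetical Modelfile sketch, with the GGUF filename as a placeholder:

```
FROM ./qwen2.5-7b-instruct-q4_k_m.gguf
PARAMETER num_ctx 32768
```

Then build and use the model with `ollama create qwen2.5-32k -f Modelfile`.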
-
I have run a vLLM proxy server with my fine-tuned local LLM and have its URL. How can I use it within Knowledge-Table in the same way as the OpenAI servers? Thanks.
-
I would suggest the `Ollama` API, as it is well documented and supports many LLMs.
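For reference, Ollama's native REST API listens on port 11434 by default and needs no SDK. A minimal non-streaming sketch, with the model name as a placeholder (anything you have pulled with `ollama pull`):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default local endpoint

def build_chat_payload(model: str, prompt: str) -> dict:
    """Build a non-streaming request body for Ollama's /api/chat endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # return one JSON object instead of a stream of chunks
    }

if __name__ == "__main__":
    # "llama3.2" is a placeholder model name.
    payload = build_chat_payload("llama3.2", "Hello!")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["message"]["content"])
```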