copeelite opened this issue 1 month ago
@copeelite can you say more, I don't understand what this issue is about. Are you saying you want to work with local LLMs vs. remote? If so, you use something that provides an OpenAI-compatible API and run it locally, and add it as a provider.
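For reference, a minimal sketch of what "add it as a provider" can look like in practice, assuming a local server that exposes an OpenAI-compatible endpoint (the base URL, API key, and model tag below are placeholders for whatever your local setup uses; the port shown is Ollama's default):

```python
# Sketch: pointing the standard OpenAI client at a locally hosted,
# OpenAI-compatible server instead of api.openai.com.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # local OpenAI-compatible endpoint (Ollama default port)
    api_key="ollama",  # most local servers ignore the key, but the client requires one
)

response = client.chat.completions.create(
    model="llama3.1",  # whatever model tag your local server exposes
    messages=[{"role": "user", "content": "Hello from a local provider!"}],
)
print(response.choices[0].message.content)
```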
> local LLMs
Dear Professor, yes, I want to use this with my local LLM, a fine-tuned Llama 3.1, and I am not sure whether it will work. Also, when I run this project locally, the sign-in function does not seem to work. I think it would be better to have an env.example file.
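For concreteness, a hypothetical sketch of the kind of env.example file asked for above, plus how an app might load it. The variable names here are illustrative only, not this project's actual settings:

```python
# Hypothetical .env.example contents (names are illustrative):
#
#   OPENAI_API_BASE=http://localhost:11434/v1
#   OPENAI_API_KEY=change-me
#   AUTH_SECRET=change-me
import os
from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # reads a local .env file into the process environment
api_base = os.getenv("OPENAI_API_BASE", "https://api.openai.com/v1")
print(f"Using API base: {api_base}")
```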
Run a chat completions endpoint with something like Ollama: https://github.com/ollama/ollama/blob/main/docs/api.md
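Following the linked docs, a minimal non-streaming chat request against Ollama's native API looks roughly like this (the model tag and prompt are placeholders; it assumes `ollama serve` is running on the default port and the model has been pulled, e.g. with `ollama pull llama3.1`):

```python
# Sketch of calling Ollama's native chat endpoint per the api.md linked above.
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3.1",  # your (fine-tuned) model's tag
        "messages": [{"role": "user", "content": "Why is the sky blue?"}],
        "stream": False,  # return a single JSON object instead of a stream
    },
)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```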
What's needed: a way to provision local compute to deploy the model, and changes to the application's code so it can communicate with the local model.