tarasglek / chatcraft.org

Developer-oriented ChatGPT clone
https://chatcraft.org/
MIT License

Implementing offline local usage #696

Open copeelite opened 1 month ago

copeelite commented 1 month ago

We need a way to prepare local computational resources to deploy the model, and to modify the application's code so it can communicate with the local model.

humphd commented 1 month ago

@copeelite can you say more? I don't understand what this issue is about. Are you saying you want to work with local LLMs vs. remote ones? If so, you can use something that provides an OpenAI-compatible API, run it locally, and add it as a provider.
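As a rough sketch of what "OpenAI-compatible" means here: a local server that accepts the standard `/v1/chat/completions` request shape can be added as a provider. The base URL and the model name `llama3.1` below are assumptions (Ollama, for example, exposes an OpenAI-compatible endpoint on port 11434); substitute whatever your local server actually serves.

```python
import json
import urllib.request

# Assumed local OpenAI-compatible endpoint (e.g. Ollama's /v1 compatibility layer).
BASE_URL = "http://localhost:11434/v1"

# Standard OpenAI-style chat completions payload; "llama3.1" is a placeholder
# model name — use whatever model your local server has loaded.
payload = {
    "model": "llama3.1",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Say hello."},
    ],
}

# Uncomment once the local server is running:
# req = urllib.request.Request(
#     f"{BASE_URL}/chat/completions",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])

print(json.dumps(payload, indent=2))
```

Any server speaking this request/response shape should work the same way once registered as a provider.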

copeelite commented 1 month ago

> local LLMs

Dear Professor, yes, I want to use this with my local LLM, which is Llama 3.1 with fine-tuning. I am not sure if that will work. Also, when I run this project locally, the sign-in function does not seem to work. Would it be better to have an env.example file?

humphd commented 1 month ago

Run a chat completions endpoint with something like https://github.com/ollama/ollama/blob/main/docs/api.md
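For illustration, Ollama's native chat endpoint (documented at the link above) takes a request like the following. The model name `llama3.1` is an assumption; use whatever `ollama list` reports on your machine.

```python
import json

# Ollama's native chat endpoint on its default port.
OLLAMA_URL = "http://localhost:11434/api/chat"

# Minimal non-streaming chat request; "llama3.1" is a placeholder model name.
payload = {
    "model": "llama3.1",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": False,
}

# Uncomment to send the request once Ollama is running:
# import urllib.request
# req = urllib.request.Request(
#     OLLAMA_URL,
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["message"]["content"])

print(json.dumps(payload, indent=2))
```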