Closed mobguang closed 1 day ago
If you can deploy GLM-4 locally with Xinference or any other backend that exposes an OpenAI-style API, then you can integrate it into RAGFlow.
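To illustrate what "OpenAI-style API" means here, below is a minimal sketch of how a locally served GLM-4 model could be queried once a backend such as Xinference exposes it. The endpoint URL (`http://localhost:9997/v1`) and model id (`glm-4-9b-chat`) are assumptions — substitute whatever your local deployment actually registers.

```python
import json
import urllib.request

def build_chat_request(prompt, model="glm-4-9b-chat",
                       base_url="http://localhost:9997/v1"):
    # Build an OpenAI-style /chat/completions request for a local backend.
    # base_url and model are assumed values; adjust to your deployment.
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def ask_glm4(prompt):
    # Send the request and pull the assistant's reply out of the
    # standard OpenAI-shaped response body.
    with urllib.request.urlopen(build_chat_request(prompt)) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Any backend that answers this request shape (Xinference, vLLM, Ollama with the OpenAI-compatible endpoint, etc.) can then be added to RAGFlow as an OpenAI-compatible model provider.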
Is there an existing issue for the same feature request?
Is your feature request related to a problem?
No response
Describe the feature you'd like
We hope RAGFlow supports running the GLM-4 LLM locally.
https://github.com/THUDM/GLM-4
Describe implementation you've considered
No response
Documentation, adoption, use case
No response
Additional information
No response