Describe your problem

Reading the related issue, it says to use Ollama to start a local model, but https://ollama.com/library does not support ChatGLM, or supporting ChatGLM with Ollama would take a lot of work. Also, I am already using FastChat to deploy other apps, so I would like to reuse that deployment. Can I serve a large model with FastChat and wrap the interface myself with FastAPI, disguising it as Ollama? What are the key interfaces I need to provide to RAGFlow?
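To illustrate the approach being asked about, here is a minimal sketch of the translation layer such a shim would need. FastChat serves an OpenAI-compatible API at `/v1/chat/completions`, while Ollama's documented HTTP endpoints include `/api/chat` (chat), `/api/embeddings` (embeddings), and `/api/tags` (model listing). The helper functions below translate between the two chat payload shapes; they would be called inside a FastAPI route that forwards the translated request to the FastChat server. The function names and the `"chatglm3"` model name are hypothetical, and the option mapping covers only a couple of common fields.

```python
from datetime import datetime, timezone


def ollama_chat_to_openai(body: dict) -> dict:
    """Translate an Ollama /api/chat request body into an OpenAI-style
    /v1/chat/completions payload (the API FastChat serves).

    Sketch only: covers model, messages, stream, and two common options.
    """
    payload = {
        "model": body["model"],
        "messages": body["messages"],
        "stream": body.get("stream", False),
    }
    # Ollama nests sampling parameters under "options"
    opts = body.get("options", {})
    if "temperature" in opts:
        payload["temperature"] = opts["temperature"]
    if "num_predict" in opts:  # Ollama's token limit maps to max_tokens
        payload["max_tokens"] = opts["num_predict"]
    return payload


def openai_to_ollama_chat(resp: dict) -> dict:
    """Translate a non-streaming OpenAI-style chat completion response
    back into the shape Ollama's /api/chat returns."""
    choice = resp["choices"][0]
    return {
        "model": resp.get("model", ""),
        "created_at": datetime.now(timezone.utc).isoformat(),
        "message": {
            "role": choice["message"]["role"],
            "content": choice["message"]["content"],
        },
        "done": True,
    }
```

A FastAPI route for `/api/chat` would then be a thin wrapper: translate the incoming body with `ollama_chat_to_openai`, POST it to the FastChat server, and return `openai_to_ollama_chat` of the result. Streaming responses and the embeddings endpoint would need analogous (but separate) translations.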