eosphoros-ai / DB-GPT

AI Native Data App Development framework with AWEL(Agentic Workflow Expression Language) and Agents
http://docs.dbgpt.cn
MIT License
13.55k stars 1.8k forks

[Bug][Install] When using an LLM proxy, is it necessary to mount the /data/models directory? #1448

Closed caicongyang closed 6 months ago

caicongyang commented 6 months ago

Search before asking

Description

```shell
docker run -d -p 3307:3306 \
  -p 5000:5000 \
  -e LOCAL_DB_HOST=127.0.0.1 \
  -e LOCAL_DB_PASSWORD=aa123456 \
  -e MYSQL_ROOT_PASSWORD=aa123456 \
  -e LLM_MODEL=proxyllm \
  -e PROXY_API_KEY=11111 \
  -e PROXY_SERVER_URL=http://192.168.1.10:3000/api \
  -e LANGUAGE=zh \
  --name db-gpt-allinone \
  eosphorosai/dbgpt
```

Starting with Docker fails with `ValueError: Path /app/models/text2vec-large-chinese not found`. When using a proxy LLM, is it still necessary to mount the /data/models directory? The Chinese documentation is ambiguous on this point; could you help clarify?

Documentation Links

https://www.yuque.com/eosphoros/dbgpt-docs/glf87qg4xxcyrp89

Are you willing to submit PR?

Aries-ckt commented 6 months ago

@caicongyang hi, text2vec-large-chinese is an embedding model, not an LLM, so you still need to mount it.

yyhhyyyyyy commented 6 months ago

Hi @caicongyang, if you are using the OpenAI embedding model, find the EMBEDDING_MODEL option in the .env file and configure it accordingly. If you are using a local embedding model, mount the /data/models directory and download the required model into that directory. Note that embedding models and LLMs (Large Language Models) are two different kinds of model; both are needed at runtime.
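For the OpenAI-embedding route, the relevant .env entries would look roughly like the fragment below. This is a sketch: only `EMBEDDING_MODEL` is confirmed in this thread, and the `proxy_openai_*` names and values are assumptions that should be checked against the project's `.env.template`.

```shell
# Hypothetical .env fragment -- verify variable names against .env.template.
EMBEDDING_MODEL=proxy_openai
proxy_openai_proxy_server_url=https://api.openai.com/v1/embeddings
proxy_openai_proxy_api_key=sk-your-key-here
```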

caicongyang commented 6 months ago

Thank you. I use the fastgpt proxy LLM model and a local embedding model; can I simply git clone the embedding model into my local directory and start Docker?

```shell
$ mkdir models && cd models
$ git clone https://huggingface.co/GanymedeNil/text2vec-large-chinese
```

```shell
docker run --ipc host -d \
  -p 5000:5000 \
  -e LOCAL_DB_TYPE=sqlite \
  -e LOCAL_DB_PATH=data/default_sqlite.db \
  -e PROXY_API_KEY=fastgpt-gVnR9vKKr8YQLq7THcNB7ULK4y6uKFsKQR7sFwPB9amOVz5YA9h4EMDhEoyEWP5Y \
  -e PROXY_SERVER_URL=http://10.210.13.25:3000/api/v1/chat/completions \
  -e LANGUAGE=zh \
  -v /app/models/text2vec-large-chinese:/app/models/text2vec-large-chinese \
  --name dbgpt \
  eosphorosai/dbgpt
```

yyhhyyyyyy commented 6 months ago

> Thank you. I use the fastgpt proxy LLM model and a local embedding model; can I simply git clone the embedding model into my local directory and start Docker?

```shell
$ mkdir models && cd models
$ git clone https://huggingface.co/GanymedeNil/text2vec-large-chinese
```

```shell
docker run --ipc host -d \
  -p 5000:5000 \
  -e LOCAL_DB_TYPE=sqlite \
  -e LOCAL_DB_PATH=data/default_sqlite.db \
  -e PROXY_API_KEY=fastgpt-gVnR9vKKr8YQLq7THcNB7ULK4y6uKFsKQR7sFwPB9amOVz5YA9h4EMDhEoyEWP5Y \
  -e PROXY_SERVER_URL=http://10.210.13.25:3000/api/v1/chat/completions \
  -e LANGUAGE=zh \
  -v /app/models/text2vec-large-chinese:/app/models/text2vec-large-chinese \
  --name dbgpt \
  eosphorosai/dbgpt
```

Please note that the flag -v /app/models/text2vec-large-chinese:/app/models/text2vec-large-chinese mounts a volume in Docker. Make sure the directory before the colon is a directory on your host machine and that it contains the necessary model files. This ensures the specified model is available to the application running inside the Docker container.
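Before starting the container, it can help to sanity-check that the host-side directory exists and actually contains the cloned model files; otherwise the mount will simply appear empty inside the container. A minimal check (the path below is the host path used in the -v flag above; adapt it to wherever you cloned the model):

```shell
# check_model_dir PATH -> prints "ready" if PATH exists and is non-empty,
# "missing or empty" otherwise.
check_model_dir() {
  if [ -d "$1" ] && [ -n "$(ls -A "$1" 2>/dev/null)" ]; then
    echo "ready"
  else
    echo "missing or empty"
  fi
}

# Host path used in the -v flag above:
check_model_dir /app/models/text2vec-large-chinese
```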

caicongyang commented 6 months ago

Thanks, it's running. My Docker command is as follows:

```shell
PROXY_API_KEY="fastgpt-gVnR9vKKr8YQLq7THcNB7ULK4y6uKFsKQR7sFwPB9amOVz5YA9h4EMDhEoyEWP5Y"
PROXY_SERVER_URL="http://10.210.13.25:3000/api/v1/chat/completions"

docker run --ipc host -d \
  -p 5670:5670 \
  -e LOCAL_DB_TYPE=sqlite \
  -e LOCAL_DB_PATH=data/default_sqlite.db \
  -e LANGUAGE=zh \
  -v /data01/models/text2vec-large-chinese:/app/models/text2vec-large-chinese \
  -e LLM_MODEL=proxyllm \
  -e PROXY_API_KEY=$PROXY_API_KEY \
  -e PROXY_SERVER_URL=$PROXY_SERVER_URL \
  --name dbgpt \
  eosphorosai/dbgpt
```

let me enjoy it