I start the embedding model with FastChat like this:
python3 -m fastchat.serve.controller --host 127.0.0.1 --port 7008 & python3 -m fastchat.serve.model_worker --host 127.0.0.1 --port 7007 --worker-address http://127.0.0.1:7007 --controller-address http://127.0.0.1:7008 --model-path bge-large-zh-v1.5 --model-name text-davinci-003 & python3 -m fastchat.serve.openai_api_server --host 127.0.0.1 --port 7009 --controller-address http://127.0.0.1:7008
and get:

But when I test the embedding service with Python code like this:
from langchain.embeddings.openai import OpenAIEmbeddings
import os
os.environ['OPENAI_API_KEY']="EMPTY"
os.environ['OPENAI_API_BASE']="http://41.230.180.239:7009/v1"
embedding=OpenAIEmbeddings(model="text-davinci-003")
print(embedding.embed_query("str"))
I get an error response like this:
How can I solve this problem, please?
It appears that you are connecting over SSH, which suggests the model is running on a remote GPU server while your client runs elsewhere. Note that the API server is bound to 127.0.0.1, so it is only reachable from the server itself. There are two primary ways to make it reachable from your client:
1. Bind the API server to a public network interface and open that port to your client.
2. Use SSH port forwarding (local port mapping), so your client reaches the remote port through the SSH tunnel.
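The two options above can be sketched as shell commands. This is a minimal sketch, assuming the same ports as in your command; `user@server` is a placeholder for your actual SSH login, not something from your setup:

```shell
# Option 1: bind the OpenAI-compatible API server to all interfaces so a
# remote client can reach it directly (the port must also be open in any
# firewall or cloud security group):
python3 -m fastchat.serve.openai_api_server --host 0.0.0.0 --port 7009 \
    --controller-address http://127.0.0.1:7008

# Option 2: keep the server on 127.0.0.1 and tunnel the port from the
# client machine over SSH (-N: no remote command, just forwarding).
# Afterwards, point OPENAI_API_BASE at http://127.0.0.1:7009/v1 on the client.
ssh -N -L 7009:127.0.0.1:7009 user@server
```

With option 2 the client-side `OPENAI_API_BASE` should use `127.0.0.1:7009` instead of the server's public IP, since the tunnel terminates locally.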