lm-sys / FastChat

An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
Apache License 2.0

How to use langchain and openai API with multiple models #2761

Open HSIAOKUOWEI opened 11 months ago

HSIAOKUOWEI commented 11 months ago

I want to get embeddings from two different models separately through langchain's OpenAIEmbeddings. How should I change my commands?

```shell
python -m fastchat.serve.controller

python -m fastchat.serve.multi_model_worker \
    --model-names "gpt-3.5-turbo,text-davinci-003,text-embedding-ada-002" \
    --model-path llama\llama2-7b --load-8bit \
    --model-names "gpt-3.5-turbo-instruct,text-davinci-003,text-embedding-ada-002" \
    --model-path llama\Llama-2-7b-chat-hf --load-8bit

python -m fastchat.serve.openai_api_server --host localhost --port 8000
```
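One thing to note in the command above: both `--model-names` lists register the same aliases `text-davinci-003` and `text-embedding-ada-002`, so an embedding request for either name cannot be routed to one specific worker. If each worker advertises a unique embedding alias instead, the OpenAI-compatible `/v1/embeddings` endpoint picks the worker purely from the `model` field of the request. A minimal stdlib sketch of the two request bodies, using hypothetical unique aliases `embed-llama2-base` and `embed-llama2-chat` (these names are assumptions, not from the original command):

```python
import json

# Hypothetical unique aliases, one per worker. With unique names, langchain's
# OpenAIEmbeddings(model=..., openai_api_base="http://localhost:8000/v1")
# would reach the intended worker through the same request shape.
EMBED_MODEL_A = "embed-llama2-base"   # assumed alias for the llama2-7b worker
EMBED_MODEL_B = "embed-llama2-chat"   # assumed alias for the Llama-2-7b-chat-hf worker

def embedding_request(model: str, text: str) -> dict:
    """Build the JSON body for POST /v1/embeddings on the FastChat API server."""
    return {"model": model, "input": text}

# The two requests differ only in the "model" field, which is all the
# controller needs to route each request to a different worker.
body_a = json.dumps(embedding_request(EMBED_MODEL_A, "hello world"))
body_b = json.dumps(embedding_request(EMBED_MODEL_B, "hello world"))
```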

HSIAOKUOWEI commented 11 months ago

Also, for using two different models through ChatOpenAI(), how should I change my command?
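Chat works the same way: two ChatOpenAI instances pointed at the same FastChat server are distinguished only by the model name each one sends. A stdlib sketch of the two `/v1/chat/completions` request bodies, using the two distinct chat aliases already present in the command above:

```python
# Two chat aliases from the multi_model_worker command above.
CHAT_MODEL_A = "gpt-3.5-turbo"           # registered for llama\llama2-7b
CHAT_MODEL_B = "gpt-3.5-turbo-instruct"  # registered for llama\Llama-2-7b-chat-hf

def chat_request(model: str, prompt: str) -> dict:
    """Build the body for POST /v1/chat/completions on the FastChat API server.
    langchain's ChatOpenAI sends a request of this shape; creating one instance
    per model name (with openai_api_base set to the FastChat server) is enough."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

req_a = chat_request(CHAT_MODEL_A, "Hi")
req_b = chat_request(CHAT_MODEL_B, "Hi")
# Routing is again decided solely by the "model" field.
```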