OpenGVLab / Ask-Anything

[CVPR2024 Highlight][VideoChatGPT] ChatGPT with video understanding! And many more supported LMs such as miniGPT4, StableLM, and MOSS.
https://vchat.opengvlab.com/
MIT License
3.07k stars · 252 forks

Langchain uses wrong OpenAI endpoint #29

Open JeffJassky opened 1 year ago

JeffJassky commented 1 year ago

Context:

Using video_chat

Error:

**openai.error.InvalidRequestError: This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?**
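For context, the error occurs because chat models reject the legacy completions request shape. A minimal sketch of the two payload shapes (field names follow the public OpenAI HTTP API; the model names are just examples):

```python
# Illustrative payloads only.
# The legacy v1/completions endpoint takes a flat "prompt" string:
completions_payload = {
    "model": "text-davinci-003",  # a completion-style model
    "prompt": "Summarize the video.",
}

# Chat models such as gpt-3.5-turbo only accept v1/chat/completions,
# which takes a list of role-tagged "messages" instead:
chat_payload = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Summarize the video."}],
}

# The error above is langchain's OpenAI wrapper sending the first shape
# to a model that only understands the second.
print(sorted(chat_payload))  # ['messages', 'model']
```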

Workaround:

In chatbot.py:

  1. Import OpenAIChat instead of OpenAI from langchain.llms.openai on line 4.
  2. Use self.llm = OpenAIChat(...) instead of self.llm = OpenAI(...) on line 70.
yinanhe commented 1 year ago

What's your langchain version?

JeffJassky commented 1 year ago

langchain 0.0.101 (as pinned in video_chat/requirements.txt), openai 0.27.6

ShaiShmuel commented 1 year ago

Had the same issue; the workaround above worked for me, thank you!

xuf12 commented 1 year ago

The chat class has since been renamed to ChatOpenAI and moved to langchain.chat_models (langchain==0.0.228, openai==0.27.8). The following worked for me:

from langchain.chat_models import ChatOpenAI
llm = ChatOpenAI(...)
tkreuder commented 1 year ago

Made my Day 💯

XieJiSS commented 10 months ago

Recent versions of langchain now suggest importing ChatOpenAI from langchain_community.chat_models instead of langchain.chat_models:

UserWarning: You are trying to use a chat model. This way of initializing it is no longer supported. Instead, please use: `from langchain_community.chat_models import ChatOpenAI`

So it should be:

from langchain_community.chat_models import ChatOpenAI
llm = ChatOpenAI(...)
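Since the correct import has changed several times across langchain releases, the thread's reports can be summarized in a small helper. This is a sketch: the function name is hypothetical, and the version cutoffs are assumptions pieced together from the versions reported in this thread, not an official compatibility table.

```python
def chat_openai_import_path(langchain_version: str) -> str:
    """Map a langchain version to where its chat model class lives.

    Cutoffs are assumptions based on the versions reported in this
    issue thread (0.0.101, 0.0.228, and the post-0.1 community split).
    """
    parts = tuple(int(p) for p in langchain_version.split("."))
    if parts >= (0, 1, 0):
        # langchain >= 0.1: classes split out into langchain_community
        return "langchain_community.chat_models.ChatOpenAI"
    if parts >= (0, 0, 228):
        # mid-0.0.x releases such as 0.0.228
        return "langchain.chat_models.ChatOpenAI"
    # early releases such as 0.0.101 pinned by video_chat
    return "langchain.llms.openai.OpenAIChat"

print(chat_openai_import_path("0.0.101"))  # langchain.llms.openai.OpenAIChat
```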
Stoik-Reddy commented 10 months ago

@xuf12 can you please suggest the code to use the gpt-4 model with langchain? I'm using it like this:

from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
from langchain import LLMChain
from langchain.prompts.prompt import PromptTemplate
import os

os.environ["OPENAI_API_KEY"] = '******'
llm = ChatOpenAI(temperature=0, model_name='gpt-3.5-turbo')

thanks

yinanhe commented 9 months ago

@Stoik-Reddy Changing your model_name to "gpt-4" is fine.

makaveli006 commented 3 months ago

I made the following modifications:

initially: from langchain import OpenAI
modified: from langchain.chat_models import ChatOpenAI

initially: llm = OpenAI(temperature=0.9, max_tokens=500, model_name="gpt-4")
modified: llm = ChatOpenAI(temperature=0.9, max_tokens=500, model_name="gpt-4")
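At the HTTP level, every fix in this thread amounts to sending a messages list to v1/chat/completions instead of a prompt to v1/completions. A stdlib-only sketch of that call shape (the helper name and placeholder key are hypothetical, and this is not a reimplementation of langchain's client):

```python
import json
import urllib.request

def build_chat_request(api_key: str, model: str, user_text: str) -> urllib.request.Request:
    """Build (without sending) a request against the chat endpoint
    that chat models such as gpt-4 require."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": user_text}],
    }).encode("utf-8")
    return urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",  # not v1/completions
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# "sk-placeholder" is a stand-in, not a real key.
req = build_chat_request("sk-placeholder", "gpt-4", "Describe the clip.")
print(req.full_url)  # https://api.openai.com/v1/chat/completions
```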