JeffJassky opened 1 year ago

What's your langchain version?

langchain 0.0.101 (as defined in video_chat/requirements.txt), openai 0.27.6
Context: Using video_chat

Error:

**openai.error.InvalidRequestError: This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?**

Workaround, in chatbot.py:

- Import OpenAIChat instead of OpenAI from langchain.llms.openai on line 4.
- Use self.llm = OpenAIChat(...) instead of self.llm = OpenAI(...) on line 70.
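The error itself comes from how OpenAI routes models: chat models such as gpt-3.5-turbo and gpt-4 are only served by the v1/chat/completions endpoint, while the plain OpenAI wrapper in langchain 0.0.101 calls v1/completions. A minimal standalone sketch of that routing rule (the helper name and prefix list are illustrative, not part of langchain or the openai package):

```python
# Chat-only model families are served by /v1/chat/completions; the
# legacy completion models use /v1/completions. (Illustrative helper,
# not a real langchain/openai API.)
CHAT_MODEL_PREFIXES = ("gpt-3.5-turbo", "gpt-4")

def endpoint_for(model_name: str) -> str:
    """Return the OpenAI endpoint that serves the given model."""
    if model_name.startswith(CHAT_MODEL_PREFIXES):
        return "/v1/chat/completions"
    return "/v1/completions"

print(endpoint_for("gpt-3.5-turbo"))   # /v1/chat/completions
print(endpoint_for("text-davinci-003"))  # /v1/completions
```

Using OpenAIChat (or, in later releases, ChatOpenAI) makes langchain call the chat endpoint, which is why the swap above fixes the error.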
Had the same issue; that solution worked for me, thank you!
The chat module is renamed to ChatOpenAI (langchain==0.0.228, openai==0.27.8). The following worked for me.
from langchain.chat_models import ChatOpenAI
llm = ChatOpenAI(...)
Made my Day 💯
Recent versions of langchain now suggest importing ChatOpenAI from langchain_community.chat_models instead of langchain.chat_models:

UserWarning: You are trying to use a chat model. This way of initializing it is no longer supported. Instead, please use: `from langchain_community.chat_models import ChatOpenAI`
So it should be
from langchain_community.chat_models import ChatOpenAI
llm = ChatOpenAI(...)
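Since the import path has moved across releases, a version-tolerant import can keep one script working on both older and newer installs. A sketch (the final fallback to None is only there so the snippet stays runnable when langchain is not installed at all):

```python
# Try the newer location first, then the older one. The paths are the
# two shown in this thread; which one works depends on your version.
try:
    from langchain_community.chat_models import ChatOpenAI  # newer releases
except ImportError:
    try:
        from langchain.chat_models import ChatOpenAI  # older releases
    except ImportError:
        ChatOpenAI = None  # langchain not installed

if ChatOpenAI is not None:
    pass
    # llm = ChatOpenAI(temperature=0)  # requires OPENAI_API_KEY to be set
```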
@xuf12 can you please suggest the code to use the gpt-4 model with langchain?
I'm using it like this:

from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
from langchain import LLMChain
from langchain.prompts.prompt import PromptTemplate
import os

os.environ["OPENAI_API_KEY"] = '******'
llm = ChatOpenAI(temperature=0, model_name='gpt-3.5-turbo')
thanks
@Stoik-Reddy Changing your model_name to "gpt-4" is fine.
I made the modifications below:

initially => from langchain import OpenAI
modification => from langchain.chat_models import ChatOpenAI

initially => llm = OpenAI(temperature=0.9, max_tokens=500, model_name="gpt-4")
modification => llm = ChatOpenAI(temperature=0.9, max_tokens=500, model_name="gpt-4")