langchain-ai / langchain

πŸ¦œπŸ”— Build context-aware reasoning applications
https://python.langchain.com
MIT License
94.63k stars 15.31k forks

ChatOpenAI: Must provide an 'engine' or 'deployment_id' parameter to create a <class 'openai.api_resources.chat_completion.ChatCompletion'> #5000

Closed β€” sahand68 closed this issue 9 months ago

sahand68 commented 1 year ago

System Info

langchain==0.0.169 openai==0.27.6

Who can help?

@hwchase17 @agola11 @vowelparrot

Information

Related Components

Reproduction

import os

from dotenv import load_dotenv
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings

load_dotenv('.env')

ChatOpenAI(
    temperature=0,
    max_tokens=500,
    model_name='gpt-3.5-turbo',
    openai_api_base=os.environ['OPENAI_API_BASE'],
).call_as_llm('Hi')

Expected behavior

[nltk_data] Downloading package stopwords to /home/sahand/nltk_data...
[nltk_data] Package stopwords is already up-to-date!
[nltk_data] Downloading package punkt to /home/sahand/nltk_data...
[nltk_data] Package punkt is already up-to-date!
Must provide an 'engine' or 'deployment_id' parameter to create a <class 'openai.api_resources.chat_completion.ChatCompletion'>
Invalid API key.
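Whether a call like the one above demands a deployment depends on how the legacy openai 0.27.x client is configured: setting `OPENAI_API_TYPE=azure` (directly or via a loaded `.env`) switches it into Azure mode, where every completion or embedding call needs a deployment name. A minimal sketch of that decision, assuming the standard environment variables (the helper name is illustrative, not part of LangChain or openai):

```python
import os


def deployment_required(environ=None):
    """Return True when the legacy openai 0.27.x client would demand an
    'engine'/'deployment_id', i.e. when it is running in Azure mode."""
    environ = os.environ if environ is None else environ
    api_type = environ.get("OPENAI_API_TYPE", "open_ai").lower()
    # The legacy SDK accepts "azure", "azure_ad", and "azuread" as Azure modes.
    return api_type in ("azure", "azure_ad", "azuread")


# An Azure-style environment makes every call require a deployment name:
print(deployment_required({"OPENAI_API_TYPE": "azure"}))  # True
print(deployment_required({}))                            # False
```

So if a `.env` file sets `OPENAI_API_TYPE=azure` alongside `OPENAI_API_BASE`, the snippet above would fail exactly as reported unless a deployment is passed.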

trend-ted-zhang commented 1 year ago

I have the same issue when I use the Azure OpenAI Embedding service.

from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings(deployment="text-embedding-ada-002", model="text-embedding-ada-002")
text = "This is a test query."
query_result = embeddings.embed_query(text)
print(query_result)
Traceback (most recent call last):
  File "/Users/tedzhang/Work/AI_Code/ChatWithPDF/embedding.py", line 55, in <module>
    main()
  File "/Users/tedzhang/Work/AI_Code/ChatWithPDF/embedding.py", line 44, in main
    query_result = embeddings.embed_query(text)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tedzhang/Work/AI_Code/ChatWithPDF/.venv/lib/python3.11/site-packages/langchain/embeddings/openai.py", line 300, in embed_query
    embedding = self._embedding_func(text, engine=self.deployment)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tedzhang/Work/AI_Code/ChatWithPDF/.venv/lib/python3.11/site-packages/langchain/embeddings/openai.py", line 266, in _embedding_func
    return embed_with_retry(
           ^^^^^^^^^^^^^^^^^
  File "/Users/tedzhang/Work/AI_Code/ChatWithPDF/.venv/lib/python3.11/site-packages/langchain/embeddings/openai.py", line 64, in embed_with_retry
    return _embed_with_retry(**kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tedzhang/Work/AI_Code/ChatWithPDF/.venv/lib/python3.11/site-packages/tenacity/__init__.py", line 289, in wrapped_f
    return self(f, *args, **kw)
           ^^^^^^^^^^^^^^^^^^^^
  File "/Users/tedzhang/Work/AI_Code/ChatWithPDF/.venv/lib/python3.11/site-packages/tenacity/__init__.py", line 379, in __call__
    do = self.iter(retry_state=retry_state)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tedzhang/Work/AI_Code/ChatWithPDF/.venv/lib/python3.11/site-packages/tenacity/__init__.py", line 314, in iter
    return fut.result()
           ^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/python@3.11/3.11.3/Frameworks/Python.framework/Versions/3.11/lib/python3.11/concurrent/futures/_base.py", line 449, in result
    return self.__get_result()
           ^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/python@3.11/3.11.3/Frameworks/Python.framework/Versions/3.11/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
    raise self._exception
  File "/Users/tedzhang/Work/AI_Code/ChatWithPDF/.venv/lib/python3.11/site-packages/tenacity/__init__.py", line 382, in __call__
    result = fn(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^
  File "/Users/tedzhang/Work/AI_Code/ChatWithPDF/.venv/lib/python3.11/site-packages/langchain/embeddings/openai.py", line 62, in _embed_with_retry
    return embeddings.client.create(**kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tedzhang/Work/AI_Code/ChatWithPDF/.venv/lib/python3.11/site-packages/openai/api_resources/embedding.py", line 33, in create
    response = super().create(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tedzhang/Work/AI_Code/ChatWithPDF/.venv/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 149, in create
    ) = cls.__prepare_create_request(
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tedzhang/Work/AI_Code/ChatWithPDF/.venv/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 83, in __prepare_create_request
    raise error.InvalidRequestError(
openai.error.InvalidRequestError: Must provide an 'engine' or 'deployment_id' parameter to create a <class 'openai.api_resources.embedding.Embedding'>
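The raise at the bottom of this trace comes from a guard in the legacy SDK's `engine_api_resource` module: in Azure mode, a deployment must arrive as either `deployment_id` or `engine`. A rough reconstruction of that guard, for illustration only (not the actual openai source):

```python
class InvalidRequestError(Exception):
    """Stand-in for openai.error.InvalidRequestError."""


def prepare_create_request(cls_name, api_type, deployment_id=None, engine=None):
    """Mimic the legacy 0.27.x check: in Azure mode, a deployment must be
    supplied either as 'deployment_id' or as 'engine'."""
    if api_type in ("azure", "azure_ad") and deployment_id is None and engine is None:
        raise InvalidRequestError(
            "Must provide an 'engine' or 'deployment_id' parameter "
            f"to create a {cls_name}"
        )
    return deployment_id or engine


# Passing a deployment (LangChain's `deployment=` is forwarded as `engine`)
# satisfies the guard:
prepare_create_request("Embedding", "azure", engine="text-embedding-ada-002")
```

This is why the fixes below all boil down to the same thing: get a deployment name into the request when the client is in Azure mode.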

darrynv commented 1 year ago

llm = AzureOpenAI(
    deployment_name="text-davinci-003",
    model_name="text-davinci-003",
    temperature=0,
    openai_api_base=openai.api_base,
    openai_api_key=openai.api_key,
)
chain = load_qa_chain(llm, chain_type="stuff")
lchain_result = chain.run(
    {"input_documents": documents, "question": query, "return_only_outputs": True}
)

Check the docs: https://python.langchain.com/en/latest/modules/models/llms/integrations/azure_openai_example.html

rnavarromatesanz commented 1 year ago

Hi everyone,

I don't know if this topic has been resolved; in my case I added the "engine" parameter to the constructor call:

chat = ChatOpenAI(temperature=0.0, engine="gpt-35-turbo")
chat

And I got this warning.

WARNING! engine is not default parameter.
                    engine was transferred to model_kwargs.
                    Please confirm that engine is what you intended.

But it worked for me, and the AzureOpenAI service answered correctly.

Regards

Leonor-Fernandes commented 1 year ago

(quotes the same code snippet and traceback as @trend-ted-zhang's comment above)

Anyone able to fix this issue with the OpenAI Embedding service?

DSgUY commented 1 year ago

you only need this:

embeddings: OpenAIEmbeddings = OpenAIEmbeddings(
    openai_api_base= f"https://{AZURE_OPENAI_SERVICE}.openai.azure.com",
    openai_api_type='azure',
    deployment='text-embedding-ada-002',
    openai_api_key=AZURE_OPENAI_API_KEY,
    chunk_size=1,
)

query_result = embeddings.embed_query("is this issue solve?")

My deployment name and model name are the same: text-embedding-ada-002

levalencia commented 1 year ago

embeddings: OpenAIEmbeddings = OpenAIEmbeddings(
    openai_api_base=f"https://{AZURE_OPENAI_SERVICE}.openai.azure.com",
    openai_api_type='azure',
    deployment='text-embedding-ada-002',
    openai_api_key=AZURE_OPENAI_API_KEY,
    chunk_size=1,
)

this worked for me

I think the OpenAI SDK changed, because my code was working in the past.

mrbusche commented 1 year ago

This worked for me: I added openai_api_type and had to remove openai_api_version.

return OpenAIEmbeddings(
    deployment="deployment-name",
    model="text-embedding-ada-002",
    openai_api_type='azure',
    chunk_size=1,
)

DSgUY commented 1 year ago

@levalencia or @mrbusche are you having this bug? https://github.com/hwchase17/langchain/issues/7841

levalencia commented 1 year ago

openai_api_type='azure',

Yes, adding openai_api_type='azure' fixes the issue.

langchain==0.0.232

itsalwaysamir commented 1 year ago

(quotes the same code snippet and traceback as @trend-ted-zhang's comment above)

I have exactly the same issue. Did you find any solution?

levalencia commented 1 year ago

(re-quotes the working snippet from their earlier comment above)

Yes, here it is.

ShreyashKumarpandey commented 1 year ago

Hello everyone, I am getting the same error when trying to use GPT-4 from OpenAI directly.

ShreyashKumarpandey commented 1 year ago

I finally solved it. Azure-related variables were also being loaded, which made the library enforce the Azure-specific requirements.
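One way to avoid that, sketched here under the assumption that the legacy openai 0.27.x client reads the standard `OPENAI_API_TYPE` / `OPENAI_API_BASE` / `OPENAI_API_VERSION` variables: clear the Azure-specific settings before constructing the client, so the library stays in plain OpenAI mode (the helper name is illustrative).

```python
import os

# Azure-related variables the legacy openai 0.27.x client picks up from the
# environment; any of these can silently switch it into Azure mode.
AZURE_ENV_VARS = ("OPENAI_API_TYPE", "OPENAI_API_BASE", "OPENAI_API_VERSION")


def clear_azure_env():
    """Remove Azure-specific settings so direct OpenAI calls work again."""
    for var in AZURE_ENV_VARS:
        os.environ.pop(var, None)


clear_azure_env()
# After this, ChatOpenAI(...) should only need OPENAI_API_KEY and will talk
# to the default api.openai.com endpoint.
```

The alternative is to scope the Azure variables to the code paths that actually use Azure, e.g. by keeping them out of the shared `.env` file.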

Krithika-Devi commented 1 year ago

The solutions given are for OpenAIEmbeddings, but I want to solve this error when using HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2") together with openai_api_type='azure'.

can anyone help me? @levalencia

levalencia commented 1 year ago

openai_api_type is only needed when using OpenAIEmbeddings; I don't think it's needed with HuggingFaceEmbeddings. Maybe you are missing another parameter? Can you paste the code?

ksaurabh21 commented 1 year ago

Passing the model deployment name like this worked for me:

chat = ChatOpenAI(temperature=0.0, model_kwargs={"engine": model_name})

dosubot[bot] commented 9 months ago

Hi, @sahand68,

I'm helping the LangChain team manage their backlog and am marking this issue as stale. The issue is related to the ChatOpenAI module, where users are encountering an error message "Must provide an 'engine' or 'deployment_id' parameter to create a <class 'openai.api_resources.chat_completion.ChatCompletion'>". There have been discussions and code snippets shared by several users, suggesting adding the 'engine' parameter or the 'openai_api_type' parameter to the function call. Additionally, one user mentioned that Azure related variables were causing the library to raise Azure related requirements. The issue is currently unresolved.

Could you please confirm if this issue is still relevant to the latest version of the LangChain repository? If it is, please let the LangChain team know by commenting on the issue. Otherwise, feel free to close the issue yourself, or the issue will be automatically closed in 7 days. Thank you!

sahand68 commented 9 months ago

Hi,

I haven't faced this issue anymore since the newer versions of LangChain!

Best, Sahand


dosubot[bot] commented 9 months ago

πŸ€–

Hi @sahand68,

Thank you for confirming that the issue is no longer present in the latest versions of LangChain. We appreciate your feedback. We will proceed to close the issue accordingly.

Best regards, [Your Name]

#### About Dosu

This response is meant to be useful and save you time. It is **not meant to be a precise solution**, but rather a starting point for your own research. Help me be more useful! Please leave a πŸ‘ if this is helpful and πŸ‘Ž if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.