langchain-ai / langchain

πŸ¦œπŸ”— Build context-aware reasoning applications
https://python.langchain.com
MIT License

AttributeError: module 'openai' has no attribute 'Completion' #12967

Closed dosuken123 closed 1 year ago

dosuken123 commented 1 year ago

System Info

Downloading langchain-0.0.331-py3-none-any.whl (2.0 MB)
Downloading openai-1.1.1-py3-none-any.whl (217 kB)

Who can help?

No response

Information

Related Components

Reproduction

Run the following code in colab:

!pip install langchain
!pip install openai

from langchain.llms import OpenAI

OpenAI().predict("hoge")

You'll get:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
[<ipython-input-5-0eec0f4f0523>](https://localhost:8080/#) in <cell line: 4>()
      2 from langchain.llms import OpenAI
      3 
----> 4 OpenAI().predict("hoge")

3 frames
[/usr/local/lib/python3.10/dist-packages/langchain/llms/openai.py](https://localhost:8080/#) in validate_environment(cls, values)
    264             import openai
    265 
--> 266             values["client"] = openai.Completion
    267         except ImportError:
    268             raise ImportError(

AttributeError: module 'openai' has no attribute 'Completion'

Expected behavior

This was working until yesterday. It's likely due to the openai dependency update.

dosubot[bot] commented 1 year ago

πŸ€–

Hello @dosuken123! I'm Dosu, a friendly bot here to assist you with LangChain. I'm able to help troubleshoot bugs, answer questions, and guide you on becoming a contributor. While we're waiting for a human maintainer to join, consider me your go-to helper. I'm currently looking into your issue and will get back to you with a detailed response shortly. Hang tight!

dosubot[bot] commented 1 year ago

πŸ€–

Based on the error message you're encountering, it seems like the 'openai' module has been updated and the 'Completion' attribute has been removed or renamed. This is likely the reason why your code was working until yesterday but is now throwing an AttributeError.

A similar issue was encountered in the past as documented in the LangChain repository issue titled openai has no ChatCompletion attribute. The suggested solution in that issue was to modify the code in azure_openai.py to directly use openai.ChatCompletion instead of openai.Completion.

Here's the suggested code modification:

        openai.api_key = openai_api_key
        if openai_organization:
            openai.organization = openai_organization
        if openai_api_base:
            openai.api_base = openai_api_base

        values["client"] = openai.ChatCompletion

        if values["n"] < 1:
            raise ValueError("n must be at least 1.")
        if values["n"] > 1 and values["streaming"]:
            raise ValueError("n must be 1 when streaming.")
        return values

Please try this modification and see if it resolves your issue. If the problem persists, it would be helpful to know the exact version of the 'openai' module you're using. You can find this by running pip show openai in your Python environment. This will help in diagnosing if the issue is due to a specific version of the 'openai' module.

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a πŸ‘ if this is helpful and πŸ‘Ž if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

dosuken123 commented 1 year ago

Confirmed that openai==0.28.1 (https://github.com/openai/openai-python/releases/tag/v0.28.1) works correctly.

This issue likely started with the v1.0.0 release (https://github.com/openai/openai-python/releases/tag/v1.0.0), which happened a few hours ago.

dosuken123 commented 1 year ago

In v0.28, the Completion class was exposed at the openai package namespace (https://github.com/openai/openai-python/blob/v0.28.1/openai/__init__.py#L22); however, v1.0.x no longer does that (https://github.com/openai/openai-python/blob/v1.0.0/src/openai/__init__.py).

We need to find an equivalent of the openai.Completion class in the new structure.
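
For reference, a minimal sketch of what the module-level openai.Completion call maps to in the v1.x client-based structure (based on the openai-python v1 README; the model name and prompt are just placeholders):

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# v1.x replacement for the old module-level openai.Completion.create(...)
response = client.completions.create(
    model="gpt-3.5-turbo-instruct",
    prompt="hoge",
    max_tokens=16,
)
print(response.choices[0].text)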

IANTHEREAL commented 1 year ago

@dosuken123 Will langchain be compatible with the latest api soon?

BrianHHough commented 1 year ago

@dosuken123 - your suggestion of switching to openai==0.28.1 with langchain worked like a charm and saved me many more hours of debugging. Thank you!!

dosuken123 commented 1 year ago

Opened a fix https://github.com/langchain-ai/langchain/pull/12973

sixdjango commented 1 year ago

Nice, I hope a new version is published soon.

akarshghale commented 1 year ago

Still facing this issue. When will this be fixed?

cadaats commented 1 year ago

Confirmed that openai==0.28.1 (https://github.com/openai/openai-python/releases/tag/v0.28.1) works correctly.

This issue likely started with the v1.0.0 release (https://github.com/openai/openai-python/releases/tag/v1.0.0), which happened a few hours ago.

Saved my day, rolling back to 0.28.1

SavvasMohito commented 1 year ago

Still facing this issue. When will this be fixed?

The fix seems to be merged in #12969, so we are just waiting for the new version to be pushed. Using the previous openai versions should work until then.

CorruptEntity0982 commented 1 year ago

For those who are wondering how to roll back to v0.28.1: open your terminal and run pip install --upgrade openai==0.28.1. This will roll your current version back to 0.28.1. Restarting your kernel should fix the issue.
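
In a Colab notebook the same rollback would look something like this (assuming the runtime is restarted afterwards, as noted above):

!pip uninstall -y openai
!pip install openai==0.28.1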

MikeyBeez commented 1 year ago

Basically, by changing the API, OpenAI is guaranteeing that we won't have usable local models. We need a different API, or they'll just run over us every time. How difficult is it to get a response back from the server without using their POS not-so-open API? And if it was just a change, why don't older versions of openai work anymore?

tomiezhang commented 1 year ago

openai: 0.28.1
langchain: 0.0.331

lm = OpenAI(model="gpt-4", openai_api_key=openai_key, openai_api_base=openai_proxy_site)

lm.predict("Who's the president of the united states?")

and I still can't use langchain... what can I do?

mayank00927 commented 1 year ago

I have also been facing the same issue since yesterday. Thanks to @dosuken123, I switched to openai==0.28.1 and now it's working fine with langchain.

m2web commented 1 year ago

While pip install --upgrade openai==0.28.1 worked, will openai be correcting this issue soon?

parland commented 1 year ago

I guess langchain needs to update its source code to match the changes in the openai API. This works fine on my instance:

response = openai.completions.create(
  model="text-davinci-002",
  prompt="Once upon a time,",
  max_tokens=50
)

idoneo commented 1 year ago

Uninstalling openai and installing openai==0.28.1 solved the AttributeError: module 'openai' has no attribute 'Completion'.

YCHuang2112sub commented 1 year ago

For chat, it should look like:

response = openai.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"},
        {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
        {"role": "user", "content": "Where was it played?"},
    ],
)

https://platform.openai.com/docs/guides/text-generation/chat-completions-api

openai.completions.create(model="gpt-3.5-turbo-instruct", prompt="Say this is a test") is for sentence completion and instruction-following.
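
For completeness, a small sketch of reading the reply out of those v1 response objects (attribute names per openai-python 1.x; the variable names are just illustrative):

# chat endpoint: the reply comes back as a message object
print(response.choices[0].message.content)

# completions endpoint: the reply comes back as plain text
completion = openai.completions.create(model="gpt-3.5-turbo-instruct", prompt="Say this is a test")
print(completion.choices[0].text)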

MikeyBeez commented 1 year ago

Uninstalling openai and installing openai==0.28.1 solved the AttributeError: module 'openai' has no attribute 'Completion'.

Thanks for that. OpenAI must have forgotten to delete that version. I had tried 1.0, but it's no longer working either. Funny, because it was working two days ago. BTW, I'm dumping the openai API before they do this to me again. After all, it's just an LLM. LLMs will continue to get better and better. Right now they're very helpful, but one needs a platform one can trust for developing anything meaningful. I'll continue to code using local models, and when something truly awesome hits the AI stage, I'll point my code there. In any case, you can't use openai functions on local models anyway, so anything I build will continue to break. If I weren't so busy, I'd build an open API for local models for the community. In any case, we should think carefully about supporting a company that cares so little for its developers. Are these the guys we want holding all the cards? I mean, how greedy are they?

YCHuang2112sub commented 1 year ago

I think the biggest issue here is that they should not have directly deleted the function. It should have been moved to deprecated status first, redirecting users to the new function. The changes to the structure are probably due to the additional features they provide and the desire to organize them in a more structured way.

MikeyBeez commented 1 year ago

I think the biggest issue here is that they should not have directly deleted the function. It should have been moved to deprecated status first, redirecting users to the new function. The changes to the structure are probably due to the additional features they provide and the desire to organize them in a more structured way.

That's exactly right. It's a long-standing Pythonic practice. OpenAI went out of their way to hurt developers. That doesn't bode well for an OpenAI-dominated future. My advice is, don't give them a nickel. They've proved they aren't trustworthy.

MichaelHoughtonDeBox commented 1 year ago

Two other things to make sure you have correct are the imports for LLMChain and PromptTemplate.

These seem to have changed over the past few weeks?

from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
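
For anyone wiring these together, a minimal sketch of how those imports were typically combined on langchain 0.0.33x (the prompt text and variable name are just examples):

from langchain.llms import OpenAI
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(
    input_variables=["topic"],
    template="Write one sentence about {topic}.",
)
chain = LLMChain(llm=OpenAI(), prompt=prompt)
print(chain.run("LangChain"))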

dosuken123 commented 1 year ago

Reopened a fix https://github.com/langchain-ai/langchain/pull/13099.

dosuken123 commented 1 year ago

Closing as https://github.com/langchain-ai/langchain/pull/13099 was released in 0.0.334