Closed dosuken123 closed 1 year ago
Hello @dosuken123! I'm Dosu, a friendly bot here to assist you with LangChain. I'm able to help troubleshoot bugs, answer questions, and guide you on becoming a contributor. While we're waiting for a human maintainer to join, consider me your go-to helper. I'm currently looking into your issue and will get back to you with a detailed response shortly. Hang tight!
Based on the error message you're encountering, it seems like the 'openai' module has been updated and the 'Completion' attribute has been removed or renamed. This is likely the reason why your code was working until yesterday but is now throwing an AttributeError.
A similar issue was encountered in the past, documented in the LangChain repository issue titled `openai has no ChatCompletion attribute`. The suggested solution in that issue was to modify the code in `azure_openai.py` to use `openai.ChatCompletion` directly instead of `openai.Completion`.
Here's the suggested code modification:

```python
openai.api_key = openai_api_key
if openai_organization:
    openai.organization = openai_organization
if openai_api_base:
    openai.api_base = openai_api_base
values["client"] = openai.ChatCompletion
if values["n"] < 1:
    raise ValueError("n must be at least 1.")
if values["n"] > 1 and values["streaming"]:
    raise ValueError("n must be 1 when streaming.")
return values
```
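The two `n`/`streaming` checks in the snippet above can be exercised in isolation. Here is a minimal sketch; the helper name `validate_n` is my own for illustration, not a langchain function:

```python
def validate_n(n: int, streaming: bool) -> int:
    """Mirror the two validation checks from the snippet above (illustrative only)."""
    if n < 1:
        raise ValueError("n must be at least 1.")
    if n > 1 and streaming:
        raise ValueError("n must be 1 when streaming.")
    return n

# n=1 is always accepted, streaming or not.
validate_n(1, True)

# n>1 combined with streaming is rejected.
try:
    validate_n(2, True)
except ValueError as e:
    print(e)  # n must be 1 when streaming.
```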
Please try this modification and see if it resolves your issue. If the problem persists, it would be helpful to know the exact version of the 'openai' module you're using. You can find this by running `pip show openai` in your Python environment. This will help in diagnosing whether the issue is due to a specific version of the 'openai' module.
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
Confirmed: `openai==0.28.1` (https://github.com/openai/openai-python/releases/tag/v0.28.1) is working correctly.
This issue likely started with the v1.0.0 release (https://github.com/openai/openai-python/releases/tag/v1.0.0), which happened a few hours ago.
In v0.28, the `Completion` class was exposed at the `openai` package namespace (https://github.com/openai/openai-python/blob/v0.28.1/openai/__init__.py#L22); however, v1.0.x no longer does that (https://github.com/openai/openai-python/blob/v1.0.0/src/openai/__init__.py).
We need to find an equivalent of the `openai.Completion` class in the new structure.
@dosuken123 Will langchain be compatible with the latest api soon?
@dosuken123 - your suggestion of switching to `openai==0.28.1` with langchain worked like a charm and saved me many more hours of debugging. Thank you!!
Opened a fix https://github.com/langchain-ai/langchain/pull/12973
Nice, I hope a new version is published soon.
Still facing this issue. When will this be fixed?
> Confirmed: `openai==0.28.1` (https://github.com/openai/openai-python/releases/tag/v0.28.1) is working correctly. This issue likely started with the v1.0.0 release (https://github.com/openai/openai-python/releases/tag/v1.0.0), which happened a few hours ago.
Saved my day, rolling back to 0.28.1
Still facing this issue. When will this be fixed?
The fix seems to be merged in #12969 so we are just waiting for the new version to be pushed. Using the previous openai versions should work until then
For those wondering how to roll back to v0.28.1: open your terminal and run `pip install --upgrade openai==0.28.1`.
This will roll your current version back to 0.28.1.
Restarting your kernel should then fix the issue.
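After the rollback, the installed version can also be confirmed from Python itself rather than the shell. A small sketch using only the standard library (`importlib.metadata` is available from Python 3.8):

```python
import importlib.metadata
from typing import Optional

def package_version(name: str) -> Optional[str]:
    """Return the installed version of a package, or None if it is absent."""
    try:
        return importlib.metadata.version(name)
    except importlib.metadata.PackageNotFoundError:
        return None

# After `pip install --upgrade openai==0.28.1` this should print 0.28.1;
# it prints None if openai is not installed in the current environment.
print(package_version("openai"))
```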
Basically, by changing the API, OpenAI is guaranteeing that we won't have usable local models. We need a different API, or they'll just run over us every time. How difficult is it to get a response back from the server without using their POS not so open API? And if it was just a change why don't older versions of openai work any more?
openai: 0.28.1, langchain: 0.0.331

```python
lm = OpenAI(model="gpt-4", openai_api_key=openai_key, openai_api_base=openai_proxy_site)
```

and I still can't use langchain... what can I do?
I also faced same issue since yesterday, thanks to @dosuken123 , I switched to "openai==v0.28.1" and now it's working fine with langchain.
While `pip install --upgrade openai==0.28.1` worked, will openai be correcting this issue soon?
I guess langchain needs to update its source code to match the changes in the openai API. This works fine on my instance:

```python
response = openai.completions.create(
    model="text-davinci-002",
    prompt="Once upon a time,",
    max_tokens=50,
)
```
Uninstalling openai and installing `openai==0.28.1` solved the `AttributeError: module 'openai' has no attribute 'Completion'`.
For chat it should be:

```python
response = openai.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"},
        {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
        {"role": "user", "content": "Where was it played?"},
    ],
)
```
https://platform.openai.com/docs/guides/text-generation/chat-completions-api
`openai.completions.create(model="gpt-3.5-turbo-instruct", prompt="Say this is a test")` is for sentence completion and instruction.
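Taken together, the comments above amount to a small rename table from the removed v0.28 module-level classes to their v1.x lowercase equivalents. A sketch; the mapping is assembled from the examples in this thread, not from official migration docs:

```python
# Old v0.28 call path -> new v1.x call path (per the examples in this thread).
V028_TO_V1 = {
    "openai.Completion.create": "openai.completions.create",
    "openai.ChatCompletion.create": "openai.chat.completions.create",
}

def new_call_path(old: str) -> str:
    """Translate an old call path, or return it unchanged if unknown."""
    return V028_TO_V1.get(old, old)

print(new_call_path("openai.Completion.create"))  # openai.completions.create
```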
> Uninstalling openai and installing `openai==0.28.1` solved the `AttributeError: module 'openai' has no attribute 'Completion'`.
Thanks for that. OpenAI must have forgotten to delete that version. I had tried 1.0, but it's no longer working either. Funny, because it was working two days ago. BTW, I'm dumping the openai API before they do this to me again. After all, it's just an LLM. LLMs will continue to get better and better. Right now they're very helpful, but one needs a platform one can trust for developing anything meaningful. I'll continue to code using local models, and when something truly awesome hits the AI stage, I'll point my code there. In any case, you can't use openai functions on local models anyway, so anything I build will continue to break. If I weren't so busy, I'd build an open API for local models for the community. In any case, we should think carefully about supporting a company that cares so little for its developers. Are these the guys we want holding all the cards? I mean, how greedy are they?
I think the biggest issue here is that they should not have deleted the function outright. Before removing it, they should have marked it deprecated and redirected users to the new function. The structural changes are probably due to the additional features the API provides and the desire to organize them in a more structured way.
> I think the biggest issue here is that they should not have deleted the function outright. Before removing it, they should have marked it deprecated and redirected users to the new function. The structural changes are probably due to the additional features the API provides and the desire to organize them in a more structured way.
That's exactly right. It's a long-standing Pythonic practice. OpenAI went out of their way to hurt developers. That doesn't bode well for an OpenAI-dominated future. My advice is: don't give them a nickel. They've proved they aren't trustworthy.
Two other things to make sure you have correct are the imports for LLMChain and PromptTemplate. These seem to have changed over the past few weeks:

```python
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
```
Reopened a fix https://github.com/langchain-ai/langchain/pull/13099.
Closing as https://github.com/langchain-ai/langchain/pull/13099 was released in 0.0.334
System Info
Downloading langchain-0.0.331-py3-none-any.whl (2.0 MB)
Downloading openai-1.1.1-py3-none-any.whl (217 kB)
Who can help?
No response
Information
Related Components
Reproduction
Run the following code in colab:
You'll get:
Expected behavior
This was working until yesterday. It's likely due to the openai dependency update.