Open pawarbi opened 1 year ago
Thank you for opening your first issue in this project! Engagement like this is essential for open source projects! :hugs:
If you haven't done so already, check out Jupyter's Code of Conduct. Also, please try to follow the issue template as it helps other community members to contribute more effectively.
You can meet the other Jovyans by joining our Discourse forum. There is also an intro thread there where you can stop by and say Hi! :wave:
Welcome to the Jupyter community! :tada:
@pawarbi Hm, it almost looks like you're somehow using the azure-chat-openai provider instead of the openai-chat provider? I did a bit more digging, and there are several identical issues upstream in LangChain. For example: https://github.com/langchain-ai/langchain/issues/5422
It looks like the OpenAI SDK uses a singleton object that persists over the lifetime of a process. To clarify, if you used an Azure OpenAI provider previously in the same notebook kernel, subsequent calls to the "non-Azure" OpenAI provider will fail because the OpenAI SDK assumes you're still trying to make calls to the Azure API. The workaround recommended in the linked issue is to explicitly "reset" the singleton before calling the non-Azure OpenAI provider:
from langchain.chat_models import AzureChatOpenAI, ChatOpenAI
import openai

llmazure = AzureChatOpenAI(**openaiazure_params)

# run the below 3 lines to reset the state after using an Azure OpenAI provider
openai.api_type = "open_ai"
openai.api_base = "https://api.openai.com/v1"
openai.api_version = None

llm = ChatOpenAI(**openai_params)
If this happens even when you didn't use an Azure OpenAI provider previously, it might be because the Microsoft Fabric environment is setting the OPENAI_API_TYPE environment variable to azure instead of open_ai. Try running:
echo $OPENAI_API_TYPE
in your shell. Let me know if the workaround helped, and whether Microsoft Fabric is setting OPENAI_API_TYPE=azure by default. Thanks!
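Since the shell environment can differ from the one the notebook kernel runs in, it may also be worth checking the variable from inside a notebook cell; a small sketch:

```python
import os

# Inspect the kernel process's own environment. A result of None means the
# variable is unset, which would rule out OPENAI_API_TYPE=azure as the culprit.
api_type = os.environ.get("OPENAI_API_TYPE")
print(api_type if api_type is not None else "OPENAI_API_TYPE is not set")
```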
cc @3coins This might motivate an upstream fix to LangChain, or even to the OpenAI Python SDK. Here's a full list of identical issues on the LangChain repo:
Thank you @dlqqq.
Running !echo $OPENAI_API_TYPE did not return anything.
I ran the below as you suggested:
from langchain.chat_models import AzureChatOpenAI, ChatOpenAI
import openai
openai.api_key = "sk-xxxxxxxxxxxxxxxxxxxx"
openai_params = {
"api_type": "open_ai",
"api_base": "https://api.openai.com/v1",
"api_version": None
}
llm = ChatOpenAI(**openai_params)
%%ai gpt3
who are you
This led to a different error message:
--> 298 resp, got_stream = self._interpret_response(result, stream)
299 return resp, got_stream, self.api_key
File ~/cluster-env/clonedenv/lib/python3.10/site-packages/openai/api_requestor.py:700, in APIRequestor._interpret_response(self, result, stream)
692 return (
693 self._interpret_response_line(
694 line, result.status_code, result.headers, stream=True
695 )
696 for line in parse_stream(result.iter_lines())
697 ), True
698 else:
699 return (
--> 700 self._interpret_response_line(
701 result.content.decode("utf-8"),
702 result.status_code,
703 result.headers,
704 stream=False,
705 ),
706 False,
707 )
File ~/cluster-env/clonedenv/lib/python3.10/site-packages/openai/api_requestor.py:765, in APIRequestor._interpret_response_line(self, rbody, rcode, rheaders, stream)
763 stream_error = stream and "error" in resp.data
764 if stream_error or not 200 <= rcode < 300:
--> 765 raise self.handle_error_response(
766 rbody, rcode, resp.data, rheaders, stream_error=stream_error
767 )
768 return resp
AuthenticationError: Your authentication token is not from a valid issuer.
I created a brand-new token which worked fine locally but gave the above error in Fabric. I wasn't sure what params to pass to:
llmazure = AzureChatOpenAI(**openaiazure_params)
Please advise. Thank you!
@pawarbi You have to set these attributes on the openai object directly:
import openai
openai.api_type = "open_ai"
openai.api_base = "https://api.openai.com/v1"
openai.api_version = None
This block of code that you ran:
openai_params = {
"api_type": "open_ai",
"api_base": "https://api.openai.com/v1",
"api_version": None
}
llm = ChatOpenAI(**openai_params)
doesn't do anything except define two unused variables, openai_params and llm.
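The reason the dict-based attempt had no effect can be shown with a toy sketch (invented names, not the real SDK or LangChain API): keyword arguments end up stored on the client instance, while the stale configuration lives in module-level globals that the constructor never touches:

```python
# Toy sketch with illustrative names only. Pretend a previous Azure call
# left this module-level global behind:
MODULE_API_TYPE = "azure"

class ToyChatClient:
    """Stand-in for a client like ChatOpenAI."""
    def __init__(self, **params):
        self.params = params  # kwargs are kept on the instance only

client = ToyChatClient(api_type="open_ai")
assert client.params["api_type"] == "open_ai"  # the instance sees the kwarg...
assert MODULE_API_TYPE == "azure"              # ...but the global is untouched
```

This is why the fix has to assign to openai.api_type, openai.api_base, and openai.api_version on the module itself rather than pass them to a constructor.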
Thanks @dlqqq
Same error message:
150 api_key, api_base, api_type, api_version, organization, **params
151 )
--> 153 response, _, api_key = requestor.request(
154 "post",
155 url,
156 params=params,
157 headers=headers,
158 stream=stream,
159 request_id=request_id,
160 request_timeout=request_timeout,
161 )
163 if stream:
164 # must be an iterator
165 assert not isinstance(response, OpenAIResponse)
--> 700 self._interpret_response_line(
701 result.content.decode("utf-8"),
702 result.status_code,
703 result.headers,
704 stream=False,
705 ),
706 False,
707 )
File ~/cluster-env/clonedenv/lib/python3.10/site-packages/openai/api_requestor.py:765, in APIRequestor._interpret_response_line(self, rbody, rcode, rheaders, stream)
763 stream_error = stream and "error" in resp.data
764 if stream_error or not 200 <= rcode < 300:
--> 765 raise self.handle_error_response(
766 rbody, rcode, resp.data, rheaders, stream_error=stream_error
767 )
768 return resp
AuthenticationError: Your authentication token is not from a valid issuer.
My code:
!pip install jupyter_ai_magics --quiet
!pip install openai --quiet
import openai
openai.api_type = "open_ai"
openai.api_base = "https://api.openai.com/v1"
openai.api_version = None
%env OPENAI_API_KEY=sk-xxxx
%reload_ext jupyter_ai_magics
%%ai gpt4
write a pandas program to import csv data
This exact code and API token work without any errors locally in Jupyter Notebook. I tried
openai.api_key = "sk-xxx"
as well.
Thanks again for your help.
Hi, I am using the notebook in Microsoft Fabric. I have Python 3.10.10. I installed
!pip install jupyter_ai_magics
and saved the OpenAI key using %env OPENAI_API_KEY=sk-xxxxxx. After loading the extension, if I use any model from OpenAI, I get the below error message:
All other models from other providers (Cohere, Hugging Face, AI21) work fine without any issues. It's only OpenAI that throws this error. I also installed and upgraded openai and langchain, but got the same error. If I follow the same steps in my local Jupyter notebook with Python 3.9, everything works fine and I get the expected output. I am stumped why I am getting the
Must provide an 'engine' or 'deployment_id' parameter
message. Can you please help? Thanks.