google-gemini / generative-ai-python

The official Python library for the Google Gemini API
https://pypi.org/project/google-generativeai/
Apache License 2.0

Getting InternalServerError: Error code: 501 - [{'error': {'code': 501, 'message': 'Operation is not implemented, or supported, or enabled.', 'status': 'UNIMPLEMENTED'}}] #625

Closed taupirho closed 1 week ago

taupirho commented 1 week ago

Description of the bug:

When trying to use the new OpenAI API compatibility with Gemini 1.5 Flash, I'm getting an InternalServerError: Error code: 501 - [{'error': {'code': 501, 'message': 'Operation is not implemented, or supported, or enabled.', 'status': 'UNIMPLEMENTED'}}] error message.

Actual vs expected behavior:

No response

Any other information you'd like to share?

The code I'm using is lifted straight from the Gemini/OpenAI announcement page at https://developers.googleblog.com/en/gemini-is-now-accessible-from-the-openai-library/. The only change is swapping in my API key:

from openai import OpenAI

# Point the OpenAI client at the Gemini API endpoint instead of api.openai.com
client = OpenAI(
    api_key="MY GEMINI API KEY HERE",
    base_url="https://generativelanguage.googleapis.com/v1beta/"
)

response = client.chat.completions.create(
    model="gemini-1.5-flash",
    n=1,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {
            "role": "user",
            "content": "Explain to me how AI works"
        }
    ]
)

print(response.choices[0].message)

This results in the following output:

response = client.chat.completions.create(
    model="gemini-1.5-flash",
    n=1,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {
            "role": "user",
            "content": "Explain to me how AI works"
        }
    ]
)

print(response.choices[0].message)

File ~/.local/lib/python3.10/site-packages/openai/_utils/_utils.py:274, in required_args.<locals>.inner.<locals>.wrapper(*args, **kwargs)
    272     msg = f"Missing required argument: {quote(missing[0])}"
    273     raise TypeError(msg)
--> 274 return func(*args, **kwargs)

File ~/.local/lib/python3.10/site-packages/openai/resources/chat/completions.py:742, in Completions.create(self, messages, model, frequency_penalty, function_call, functions, logit_bias, logprobs, max_completion_tokens, max_tokens, metadata, n, parallel_tool_calls, presence_penalty, response_format, seed, service_tier, stop, store, stream, stream_options, temperature, tool_choice, tools, top_logprobs, top_p, user, extra_headers, extra_query, extra_body, timeout)
    704 @required_args(["messages", "model"], ["messages", "model", "stream"])
    705 def create(
    706     self,
   (...)
    739     timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
    740 ) -> ChatCompletion | Stream[ChatCompletionChunk]:
    741     validate_response_format(response_format)
--> 742     return self._post(
    743         "/chat/completions",
    744         body=maybe_transform(
    745             {
    746                 "messages": messages,
    747                 "model": model,
    748                 "frequency_penalty": frequency_penalty,
    749                 "function_call": function_call,
    750                 "functions": functions,
    751                 "logit_bias": logit_bias,
    752                 "logprobs": logprobs,
    753                 "max_completion_tokens": max_completion_tokens,
    754                 "max_tokens": max_tokens,
    755                 "metadata": metadata,
    756                 "n": n,
    757                 "parallel_tool_calls": parallel_tool_calls,
    758                 "presence_penalty": presence_penalty,
    759                 "response_format": response_format,
    760                 "seed": seed,
    761                 "service_tier": service_tier,
    762                 "stop": stop,
    763                 "store": store,
    764                 "stream": stream,
    765                 "stream_options": stream_options,
    766                 "temperature": temperature,
    767                 "tool_choice": tool_choice,
    768                 "tools": tools,
    769                 "top_logprobs": top_logprobs,
    770                 "top_p": top_p,
    771                 "user": user,
    772             },
    773             completion_create_params.CompletionCreateParams,
    774         ),
    775         options=make_request_options(
    776             extra_headers=extra_headers, extra_query=extra_query, extra_body=extra_body, timeout=timeout
    777         ),
    778         cast_to=ChatCompletion,
    779         stream=stream or False,
    780         stream_cls=Stream[ChatCompletionChunk],
    781     )

File ~/.local/lib/python3.10/site-packages/openai/_base_client.py:1277, in SyncAPIClient.post(self, path, cast_to, body, options, files, stream, stream_cls)
   1263 def post(
   1264     self,
   1265     path: str,
   (...)
   1272     stream_cls: type[_StreamT] | None = None,
   1273 ) -> ResponseT | _StreamT:
   1274     opts = FinalRequestOptions.construct(
   1275         method="post", url=path, json_data=body, files=to_httpx_files(files), **options
   1276     )
-> 1277     return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))

File ~/.local/lib/python3.10/site-packages/openai/_base_client.py:954, in SyncAPIClient.request(self, cast_to, options, remaining_retries, stream, stream_cls)
    951 else:
    952     retries_taken = 0
--> 954 return self._request(
    955     cast_to=cast_to,
    956     options=options,
    957     stream=stream,
    958     stream_cls=stream_cls,
    959     retries_taken=retries_taken,
    960 )

File ~/.local/lib/python3.10/site-packages/openai/_base_client.py:1043, in SyncAPIClient._request(self, cast_to, options, retries_taken, stream, stream_cls)
   1041 if remaining_retries > 0 and self._should_retry(err.response):
   1042     err.response.close()
-> 1043     return self._retry_request(
   1044         input_options,
   1045         cast_to,
   1046         retries_taken=retries_taken,
   1047         response_headers=err.response.headers,
   1048         stream=stream,
   1049         stream_cls=stream_cls,
   1050     )
   1052 # If the response is streamed then we need to explicitly read the response
   1053 # to completion before attempting to access the response text.
   1054 if not err.response.is_closed:

File ~/.local/lib/python3.10/site-packages/openai/_base_client.py:1092, in SyncAPIClient._retry_request(self, options, cast_to, retries_taken, response_headers, stream, stream_cls)
   1088 # In a synchronous context we are blocking the entire thread. Up to the library user to run the client in a
   1089 # different thread if necessary.
   1090 time.sleep(timeout)
-> 1092 return self._request(
   1093     options=options,
   1094     cast_to=cast_to,
   1095     retries_taken=retries_taken + 1,
   1096     stream=stream,
   1097     stream_cls=stream_cls,
   1098 )

File ~/.local/lib/python3.10/site-packages/openai/_base_client.py:1043, in SyncAPIClient._request(self, cast_to, options, retries_taken, stream, stream_cls)
   1041 if remaining_retries > 0 and self._should_retry(err.response):
   1042     err.response.close()
-> 1043     return self._retry_request(
   1044         input_options,
   1045         cast_to,
   1046         retries_taken=retries_taken,
   1047         response_headers=err.response.headers,
   1048         stream=stream,
   1049         stream_cls=stream_cls,
   1050     )
   1052 # If the response is streamed then we need to explicitly read the response
   1053 # to completion before attempting to access the response text.
   1054 if not err.response.is_closed:

File ~/.local/lib/python3.10/site-packages/openai/_base_client.py:1092, in SyncAPIClient._retry_request(self, options, cast_to, retries_taken, response_headers, stream, stream_cls)
   1088 # In a synchronous context we are blocking the entire thread. Up to the library user to run the client in a
   1089 # different thread if necessary.
   1090 time.sleep(timeout)
-> 1092 return self._request(
   1093     options=options,
   1094     cast_to=cast_to,
   1095     retries_taken=retries_taken + 1,
   1096     stream=stream,
   1097     stream_cls=stream_cls,
   1098 )

File ~/.local/lib/python3.10/site-packages/openai/_base_client.py:1058, in SyncAPIClient._request(self, cast_to, options, retries_taken, stream, stream_cls)
   1055     err.response.read()
   1057 log.debug("Re-raising status error")
-> 1058 raise self._make_status_error_from_response(err.response) from None
   1060 return self._process_response(
   1061     cast_to=cast_to,
   1062     options=options,
   (...)
   1066     retries_taken=retries_taken,
   1067 )

InternalServerError: Error code: 501 - [{'error': {'code': 501, 'message': 'Operation is not implemented, or supported, or enabled.', 'status': 'UNIMPLEMENTED'}}]
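
One thing worth double-checking: the announcement post's snippet sets base_url="https://generativelanguage.googleapis.com/v1beta/openai/", with a trailing openai/ path segment. A copy that stops at /v1beta/ sends the request to a route the Gemini API does not serve, and an unserved route can surface exactly this kind of 501 UNIMPLEMENTED status. The sketch below (not the original repro; the key is a placeholder) uses the full compatibility base URL and catches the status error so the raw HTTP status and body get printed, which is handy for bug reports:

from openai import OpenAI, APIStatusError

client = OpenAI(
    api_key="MY GEMINI API KEY HERE",  # placeholder, as above
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/",  # note the trailing openai/
)

try:
    response = client.chat.completions.create(
        model="gemini-1.5-flash",
        messages=[{"role": "user", "content": "Explain to me how AI works"}],
    )
    print(response.choices[0].message)
except APIStatusError as err:
    # APIStatusError exposes the raw HTTP status and response body,
    # which is more informative than the formatted exception message alone.
    print(err.status_code)
    print(err.response.text)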

taupirho commented 1 week ago

import openai
print(openai.__version__)

1.54.3

manojssmk commented 1 week ago

Hi @taupirho

I tried to reproduce this and the code works fine in Google Colab (https://colab.sandbox.google.com/gist/manojssmk/efe7d3d2bd8444aeb4cf4d5f36b23af5/issue_625.ipynb). Could you try running the Colab notebook and see if the issue persists?

Thanks

taupirho commented 1 week ago

It seems to be working for me now, both in and out of Colab. Could it just have been a timing issue? I had used a brand-new API key.
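
If key propagation was the culprit, a cheap probe for a fresh key is to list models through the same compatibility endpoint; a key (or base URL) that is not yet live should fail here too. A minimal sketch, assuming the models route is enabled on the compatibility layer:

from openai import OpenAI

client = OpenAI(
    api_key="MY GEMINI API KEY HERE",  # placeholder for a real key
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
)

# Prints the available Gemini model ids if the key and endpoint are working.
for model in client.models.list():
    print(model.id)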


MarkDaoust commented 1 week ago

There were errors in the initial push; I think this is fixed now.
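
For anyone who hits transient 5xx errors like this during a rollout: the openai client already retries 5xx responses with backoff (the traceback above shows the default two retries before the error is re-raised), and the retry budget can be raised when constructing the client. A sketch:

from openai import OpenAI

client = OpenAI(
    api_key="MY GEMINI API KEY HERE",  # placeholder
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
    max_retries=5,  # default is 2; each retry waits with exponential backoff
)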