google-gemini/generative-ai-python

The official Python library for the Google Gemini API
https://pypi.org/project/google-generativeai/
Apache License 2.0

chat.send_message errors out in many ways. (stream=true) #179

Closed. budhesandeep closed this issue 3 weeks ago.

budhesandeep commented 9 months ago

Description of the bug:

{ "name": "InternalServerError", "message": "500 An internal error has occurred. Please retry or report in https://developers.generativeai.google/guide/troubleshooting", "stack": "--------------------------------------------------------------------------- _MultiThreadedRendezvous Traceback (most recent call last) File /opt/homebrew/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:173, in _wrap_stream_errors..error_remapped_callable(*args, **kwargs) 172 prefetchfirst = getattr(callable, \"_prefetch_firstresult\", True) --> 173 return _StreamingResponseIterator( 174 result, prefetch_first_result=prefetch_first 175 ) 176 except grpc.RpcError as exc:

File /opt/homebrew/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:95, in _StreamingResponseIterator.init(self, wrapped, prefetch_first_result) 94 if prefetch_first_result: ---> 95 self._stored_first_result = next(self._wrapped) 96 except TypeError: 97 # It is possible the wrapped method isn't an iterable (a grpc.Call 98 # for instance). If this happens don't store the first result.

File /opt/homebrew/lib/python3.10/site-packages/grpc/_channel.py:540, in _Rendezvous.next(self) 539 def next(self): --> 540 return self._next()

File /opt/homebrew/lib/python3.10/site-packages/grpc/_channel.py:966, in _MultiThreadedRendezvous._next(self) 965 elif self._state.code is not None: --> 966 raise self

_MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with: \tstatus = StatusCode.INTERNAL \tdetails = \"An internal error has occurred. Please retry or report in https://developers.generativeai.google/guide/troubleshooting\" \tdebug_error_string = \"UNKNOWN:Error received from peer ipv6:%5B2607:f8b0:400a:800::200a%5D:443 {created_time:\"2024-01-28T11:59:20.552904-08:00\", grpc_status:13, grpc_message:\"An internal error has occurred. Please retry or report in https://developers.generativeai.google/guide/troubleshooting\"}\"

The above exception was the direct cause of the following exception:

InternalServerError Traceback (most recent call last) Cell In[22], line 22 18 model = genai.GenerativeModel('gemini-pro', tools=[google_bills_function_call.bill_schema_1]) 20 chat = model.start_chat() ---> 22 response = chat.send_message(df['text'][0], stream=True) 23 response.resolve() 24 print(response.candidates)

File /opt/homebrew/lib/python3.10/site-packages/google/generativeai/generative_models.py:367, in ChatSession.send_message(self, content, generation_config, safety_settings, stream, kwargs) 365 if generation_config.get(\"candidate_count\", 1) > 1: 366 raise ValueError(\"Can't chat with candidate_count > 1\") --> 367 response = self.model.generate_content( 368 contents=history, 369 generation_config=generation_config, 370 safety_settings=safety_settings, 371 stream=stream, 372 kwargs, 373 ) 375 if response.prompt_feedback.block_reason: 376 raise generation_types.BlockedPromptException(response.prompt_feedback)

File /opt/homebrew/lib/python3.10/site-packages/google/generativeai/generative_models.py:245, in GenerativeModel.generate_content(self, contents, generation_config, safety_settings, stream, **kwargs) 243 if stream: 244 with generation_types.rewrite_stream_error(): --> 245 iterator = self._client.stream_generate_content(request) 246 return generation_types.GenerateContentResponse.from_iterator(iterator) 247 else:

File /opt/homebrew/lib/python3.10/site-packages/google/ai/generativelanguage_v1beta/services/generative_service/client.py:852, in GenerativeServiceClient.stream_generate_content(self, request, model, contents, retry, timeout, metadata) 847 metadata = tuple(metadata) + ( 848 gapic_v1.routing_header.to_grpc_metadata(((\"model\", request.model),)), 849 ) 851 # Send the request. --> 852 response = rpc( 853 request, 854 retry=retry, 855 timeout=timeout, 856 metadata=metadata, 857 ) 859 # Done; return the response. 860 return response

File /opt/homebrew/lib/python3.10/site-packages/google/api_core/gapic_v1/method.py:131, in _GapicCallable.call(self, timeout, retry, compression, *args, *kwargs) 128 if self._compression is not None: 129 kwargs[\"compression\"] = compression --> 131 return wrapped_func(args, **kwargs)

File /opt/homebrew/lib/python3.10/site-packages/google/api_core/retry.py:372, in Retry.call..retry_wrapped_func(*args, *kwargs) 368 target = functools.partial(func, args, **kwargs) 369 sleep_generator = exponential_sleep_generator( 370 self._initial, self._maximum, multiplier=self._multiplier 371 ) --> 372 return retry_target( 373 target, 374 self._predicate, 375 sleep_generator, 376 self._timeout, 377 on_error=on_error, 378 )

File /opt/homebrew/lib/python3.10/site-packages/google/api_core/retry.py:207, in retry_target(target, predicate, sleep_generator, timeout, on_error, **kwargs) 205 for sleep in sleep_generator: 206 try: --> 207 result = target() 208 if inspect.isawaitable(result): 209 warnings.warn(_ASYNC_RETRY_WARNING)

File /opt/homebrew/lib/python3.10/site-packages/google/api_core/timeout.py:120, in TimeToDeadlineTimeout.call..func_with_timeout(*args, *kwargs) 117 # Avoid setting negative timeout 118 kwargs[\"timeout\"] = max(0, self._timeout - time_since_first_attempt) --> 120 return func(args, **kwargs)

File /opt/homebrew/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:177, in _wrap_stream_errors..error_remapped_callable(*args, **kwargs) 173 return _StreamingResponseIterator( 174 result, prefetch_first_result=prefetch_first 175 ) 176 except grpc.RpcError as exc: --> 177 raise exceptions.from_grpc_error(exc) from exc

InternalServerError: 500 An internal error has occurred. Please retry or report in https://developers.generativeai.google/guide/troubleshooting" }
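Stripped of the project-specific tool schema, the failing call reduces to roughly the sketch below (the API key, model name, and prompt text are placeholders; df['text'][0] in the trace is just a long block of document text):

    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")  # placeholder

    # The real run also passed tools=[...] with a function-calling schema.
    model = genai.GenerativeModel('gemini-pro')
    chat = model.start_chat()

    response = chat.send_message("long document text here", stream=True)
    response.resolve()  # intermittently raises InternalServerError (500)
    print(response.candidates)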

Actual vs expected behavior:

No response

Any other information you'd like to share?

No response

LindaLawton commented 7 months ago

This is a known issue with the model and not related to the library; see #236.

ryao commented 7 months ago

I have the same issue. I do not understand @LindaLawton's remark that it is an issue with the model; this is code running entirely locally.

The lines in question (from _MultiThreadedRendezvous._next in grpc/_channel.py) are:

            elif self._state.code is grpc.StatusCode.OK:
                raise StopIteration()

I do not understand why an exception is being raised if the status code is okay.

LindaLawton commented 7 months ago

@ryao by default, 500 errors mean that something went wrong on the system you are connecting to; it's not something that you can fix. You may be running your code locally, but you are connecting to the Gemini API, and all this library does is make it easier for you to connect to that API.

InternalServerError: 500 An internal error has occurred. Please retry or report in

Please post a full example of your code and the chat history. I would love to test it and see if I can figure out what may have caused it, or forward it to the people on the API team who can help.
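In the meantime, a plain retry around the underlying generate_content call is usually enough to get past a transient 500. A minimal sketch (the model name, prompt, and backoff values are placeholders, not anything the library prescribes):

    import time

    import google.generativeai as genai
    from google.api_core import exceptions as gexc

    genai.configure(api_key="YOUR_API_KEY")  # placeholder

    def generate_with_retry(model, prompt, attempts=3, delay=2.0):
        # Retry only server-side 500s; let every other error propagate.
        for attempt in range(attempts):
            try:
                response = model.generate_content(prompt, stream=True)
                response.resolve()  # force the whole stream to be read
                return response
            except gexc.InternalServerError:
                if attempt == attempts - 1:
                    raise
                time.sleep(delay * (attempt + 1))  # simple linear backoff

    model = genai.GenerativeModel('gemini-pro')  # placeholder model
    reply = generate_with_retry(model, "hello")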

ryao commented 7 months ago

My code is here:

https://github.com/ryao/gemini-chat/commit/dacc25e199897d8be86ed4d1c003d9a1275e1242

The conversation history is private, but I sent you a copy on Discord.

ryao commented 7 months ago

I believe that I have found the root cause of the issue, as far as the client side is concerned; it has been reported at google/generative-ai-docs#348. The issue is that the LLM has emitted <ctrl100></ctrl100> tags, which cause a 500 error when they are returned as part of the context.
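As a client-side workaround until that is fixed, something like the sketch below can strip those tags from the history before it is sent back. It assumes the tags always look like <ctrl###>; scrub_history is my own helper, not part of the library:

    import re

    CTRL_TAG = re.compile(r"</?ctrl\d+>")  # matches <ctrl100>, </ctrl100>, etc.

    def scrub_history(history):
        # Rebuild the conversation as plain dicts, removing control tags from
        # every text part; non-text parts are dropped in this sketch.
        cleaned = []
        for content in history:
            parts = [CTRL_TAG.sub("", part.text)
                     for part in content.parts if part.text]
            cleaned.append({"role": content.role, "parts": parts})
        return cleaned

    # e.g. model.generate_content(scrub_history(chat.history), stream=True)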

ryao commented 7 months ago

This is what I see when I get a 500 error here:

Traceback (most recent call last):
  File "/home/richard/bloom_env/lib/python3.11/site-packages/flask/app.py", line 1488, in __call__
    return self.wsgi_app(environ, start_response)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/richard/bloom_env/lib/python3.11/site-packages/flask/app.py", line 1466, in wsgi_app
    response = self.handle_exception(e)
               ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/richard/bloom_env/lib/python3.11/site-packages/flask/app.py", line 1463, in wsgi_app
    response = self.full_dispatch_request()
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/richard/bloom_env/lib/python3.11/site-packages/flask/app.py", line 872, in full_dispatch_request
    rv = self.handle_user_exception(e)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/richard/bloom_env/lib/python3.11/site-packages/flask/app.py", line 870, in full_dispatch_request
    rv = self.dispatch_request()
         ^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/richard/bloom_env/lib/python3.11/site-packages/flask/app.py", line 855, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)  # type: ignore[no-any-return]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/richard/gemini-chatbot/app.py", line 112, in chat
    response_stream = generate_response(user_input, conversation_history)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/richard/gemini-chatbot/app.py", line 97, in generate_response
    response = model.generate_content(messages, safety_settings=SAFETY_SETTINGS, stream=True)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/richard/bloom_env/lib/python3.11/site-packages/google/generativeai/generative_models.py", line 230, in generate_content
    return generation_types.GenerateContentResponse.from_iterator(iterator)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/richard/bloom_env/lib/python3.11/site-packages/google/generativeai/types/generation_types.py", line 428, in from_iterator
    response = next(iterator)
               ^^^^^^^^^^^^^^
  File "/home/richard/bloom_env/lib/python3.11/site-packages/google/api_core/grpc_helpers.py", line 119, in __next__
    return next(self._wrapped)
           ^^^^^^^^^^^^^^^^^^^
  File "/home/richard/bloom_env/lib/python3.11/site-packages/grpc/_channel.py", line 542, in __next__
    return self._next()
           ^^^^^^^^^^^^
  File "/home/richard/bloom_env/lib/python3.11/site-packages/grpc/_channel.py", line 949, in _next
    raise StopIteration()
StopIteration
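While the underlying problem is being tracked, the symptom above can be guarded against by treating an immediately-exhausted stream as an empty reply instead of letting StopIteration escape. A rough sketch based on the call in my app (messages and SAFETY_SETTINGS come from the surrounding code; returning None is an arbitrary fallback):

    def generate_response_guarded(model, messages, safety_settings):
        # generate_content(..., stream=True) pulls the first chunk eagerly;
        # if the gRPC stream is already exhausted, StopIteration escapes here.
        try:
            return model.generate_content(messages,
                                          safety_settings=safety_settings,
                                          stream=True)
        except StopIteration:
            return None  # treat an immediately-closed stream as "no reply"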

markmcd commented 1 month ago

We had some issues with increased 500 errors earlier in the year - it's possible you were seeing these issues. Can you report back and tell us if you are still seeing this problem? And if so, can you share the prompts you are using so we can reproduce them internally?

github-actions[bot] commented 1 month ago

Marking this issue as stale since it has been open for 14 days with no activity. This issue will be closed if no further activity occurs.

github-actions[bot] commented 3 weeks ago

This issue was closed because it has been inactive for 28 days. Please post a new issue if you need further assistance. Thanks!