google-gemini / generative-ai-python

The official Python library for the Google Gemini API
https://pypi.org/project/google-generativeai/
Apache License 2.0

google.api_core.exceptions.InvalidArgument: 400 Developer instruction error when using system_instruction with Gemini pro #278

Closed Bikatr7 closed 7 months ago

Bikatr7 commented 7 months ago

Description of the bug:

Attempting to generate a response while using system_instruction raises a 400 developer instruction error.

Test values and code to reproduce the bug are in "Any other information you'd like to share?" below.

I may be mistaken, but from the v0.5.0 release notes, this looked ready for use: https://github.com/google/generative-ai-python/releases/tag/v0.5.0

If it is not, could I request that the release notes make it clear when a feature is not yet available for use?

Thank you.

pip freeze for reference: aiofiles==23.2.1 aiohttp==3.8.4 aiosignal==1.3.1 altair==5.2.0 annotated-types==0.6.0 anyio==3.7.1 appdirs==1.4.4 asttokens==2.2.1 async-timeout==4.0.2 attrs==22.2.0 backcall==0.2.0 backoff==2.2.1 beautifulsoup4==4.12.2 blis==0.7.9 build==1.0.3 cache==1.0.3 cachetools==5.3.0 catalogue==2.0.8 certifi==2023.5.7 cffi==1.16.0 charset-normalizer==3.1.0 click==8.1.3 cloudpathlib==0.16.0 cmake==3.25.2 colorama==0.4.6 colorthief==0.2.1 colour==0.1.5 comm==0.1.3 commonmark==0.9.1 confection==0.1.4 contourpy==1.2.0 cryptography==42.0.2 cycler==0.12.1 cymem==2.0.7 debugpy==1.6.7 decorator==4.4.2 deepl==1.16.1 discord==2.2.2 discord-py-slash-command==3.0.3 discord-typings==0.5.1 discord.py==2.2.2 distro==1.8.0 dnspython==2.4.2 docutils==0.20.1 easytl==0.1.0 emoji==2.2.0 en-core-web-lg @ https://github.com/explosion/spacy-models/releases/download/en_core_web_lg-3.7.0/en_core_web_lg-3.7.0-py3-none-any.whl#sha256=708da1110fbe1163d059de34a2cbedb1db65c26e1e624ca925897a2711cb7d77 executing==1.2.0 fastapi==0.104.1 ffmpy==0.3.1 filelock==3.13.1 fonttools==4.45.1 frozenlist==1.3.3 fsspec==2023.10.0 google-ai-generativelanguage==0.6.1 google-api-core==2.11.0 google-api-python-client==2.78.0 google-auth==2.16.1 google-auth-httplib2==0.1.0 google-auth-oauthlib==1.0.0 google-cloud-core==2.4.1 google-cloud-texttospeech==2.15.1 google-cloud-translate==3.14.0 google-generativeai==0.5.0 googleapis-common-protos==1.58.0 gradio==4.19.2 gradio_client==0.10.1 grpcio==1.60.1 grpcio-status==1.60.1 h11==0.14.0 httpcore==1.0.1 httplib2==0.21.0 httpx==0.25.1 huggingface-hub==0.19.4 idna==3.4 imageio==2.33.1 imageio-ffmpeg==0.4.9 importlib-metadata==7.0.1 importlib-resources==6.1.1 ipykernel==6.22.0 ipython==8.13.1 ja-core-news-lg @ https://github.com/explosion/spacy-models/releases/download/ja_core_news_lg-3.7.0/ja_core_news_lg-3.7.0-py3-none-any.whl#sha256=f08eecb4d40523045c9478ce59a67564fd71edd215f32c076fa91dc1f05cc7fd jaraco.classes==3.3.0 jedi==0.18.2 Jinja2==3.1.2 jsonschema==4.20.0 jsonschema-specifications==2023.11.2 jupyter_client==8.2.0 jupyter_core==5.3.0 kairyou==1.4.1 keyboard==0.13.5 keyring==24.3.0 kiwisolver==1.4.5 langcodes==3.3.0 loguru==0.7.2 lxml==4.9.3 markdown-it-py==3.0.0 MarkupSafe==2.1.2 matplotlib==3.8.2 matplotlib-inline==0.1.6 mdurl==0.1.2 mecab-python3==1.0.6 minify_html==0.15.0 more-itertools==10.2.0 MouseInfo==0.1.3 moviepy==1.0.3 multidict==6.0.4 murmurhash==1.0.9 mysql-connector-python==8.3.0 nest-asyncio==1.5.6 nh3==0.2.15 numpy==1.24.2 oauth2client==4.1.3 oauthlib==3.2.2 openai==1.13.3 opencv-python==4.9.0.80 orjson==3.9.10 packaging==23.0 pandas==2.1.0 parso==0.8.3 pathy==0.10.1 pickleshare==0.7.5 Pillow==9.5.0 pkginfo==1.9.6 plac==1.3.5 platformdirs==3.5.0 preshed==3.0.8 proglog==0.1.10 prompt-toolkit==3.0.38 proto-plus==1.23.0 protobuf==4.25.3 psutil==5.9.5 pure-eval==0.2.2 pyasn1==0.4.8 pyasn1-modules==0.2.8 PyAutoGUI==0.9.53 pycparser==2.21 pydantic==2.5.3 pydantic_core==2.14.6 PyDrive==1.3.1 pydub==0.25.1 pyee==8.2.2 PyGetWindow==0.0.9 Pygments==2.15.1 pymongo==4.5.0 PyMsgBox==1.0.9 pynput==1.7.6 pyparsing==3.0.9 pyperclip==1.8.2 pyppeteer==1.0.2 pyproject_hooks==1.0.0 PyQt5==5.15.10 PyQt5-Qt5==5.15.2 PyQt5-sip==12.13.0 PyRect==0.2.0 PyScreeze==0.1.28 python-dateutil==2.8.2 python-docx==0.8.11 python-multipart==0.0.9 pytube==12.1.3 pytweening==1.0.4 pytz==2023.3.post1 pywin32==306 pywin32-ctypes==0.2.2 PyYAML==6.0 pyzmq==25.0.2 readme-renderer==42.0 referencing==0.31.1 regex==2023.3.23 requests==2.31.0 requests-oauthlib==1.3.1 requests-toolbelt==1.0.0 
rfc3986==2.0.0 rich==12.6.0 rpds-py==0.13.2 rsa==4.9 ruff==0.2.2 semantic-version==2.10.0 shellingham==1.5.4 six==1.16.0 smart-open==6.3.0 sniffio==1.3.0 soupsieve==2.5 spacy==3.7.4 spacy-legacy==3.0.12 spacy-loggers==1.0.5 srsly==2.4.5 stack-data==0.6.2 starlette==0.27.0 SudachiDict-core==20230927 SudachiPy==0.6.7 thinc==8.2.3 tiktoken==0.6.0 tomli==2.0.1 tomlkit==0.12.0 toolz==0.12.0 tornado==6.3.1 tqdm==4.64.1 traitlets==5.9.0 twine==4.0.2 typer==0.9.0 typing_extensions==4.8.0 tzdata==2023.3 unidic-lite==1.0.8 uritemplate==4.1.1 urllib3==1.26.14 uvicorn==0.24.0.post1 wasabi==1.1.1 wcwidth==0.2.6 weasel==0.3.4 websockets==10.4 win32-setctime==1.1.0 yarl==1.8.2 zipp==3.17.0

Actual vs expected behavior:

actual:

Traceback (most recent call last):
  File "C:\Users\Tetra\AppData\Local\Programs\Python\Python311\Lib\site-packages\google\api_core\grpc_helpers.py", line 72, in error_remapped_callable
    return callable_(*args, **kwargs)
  File "C:\Users\Tetra\AppData\Local\Programs\Python\Python311\Lib\site-packages\grpc\_channel.py", line 1160, in __call__
    return _end_unary_response_blocking(state, call, False, None)
  File "C:\Users\Tetra\AppData\Local\Programs\Python\Python311\Lib\site-packages\grpc\_channel.py", line 1003, in _end_unary_response_blocking
    raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
    status = StatusCode.INVALID_ARGUMENT
    details = "Developer instruction is not enabled for models/gemini-pro"
    debug_error_string = "UNKNOWN:Error received from peer ipv4:172.217.31.170:443 {created_time:"2024-04-11T07:47:24.5529039+00:00", grpc_status:3, grpc_message:"Developer instruction is not enabled for models/gemini-pro"}"

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "c:\Users\Tetra\AppData\Local\Programs\Python\Python311\Lib\site-packages\easytl\test.py", line 86, in <module>
    _client.generate_content(
  File "C:\Users\Tetra\AppData\Local\Programs\Python\Python311\Lib\site-packages\google\generativeai\generative_models.py", line 262, in generate_content
    response = self._client.generate_content(
  File "C:\Users\Tetra\AppData\Local\Programs\Python\Python311\Lib\site-packages\google\ai\generativelanguage_v1beta\services\generative_service\client.py", line 791, in generate_content
    response = rpc(
  File "C:\Users\Tetra\AppData\Local\Programs\Python\Python311\Lib\site-packages\google\api_core\gapic_v1\method.py", line 113, in __call__
    return wrapped_func(*args, **kwargs)
  File "C:\Users\Tetra\AppData\Local\Programs\Python\Python311\Lib\site-packages\google\api_core\retry.py", line 349, in retry_wrapped_func
    return retry_target(
  File "C:\Users\Tetra\AppData\Local\Programs\Python\Python311\Lib\site-packages\google\api_core\retry.py", line 191, in retry_target
    return target()
  File "C:\Users\Tetra\AppData\Local\Programs\Python\Python311\Lib\site-packages\google\api_core\timeout.py", line 120, in func_with_timeout
    return func(*args, **kwargs)
  File "C:\Users\Tetra\AppData\Local\Programs\Python\Python311\Lib\site-packages\google\api_core\grpc_helpers.py", line 74, in error_remapped_callable
    raise exceptions.from_grpc_error(exc) from exc
google.api_core.exceptions.InvalidArgument: 400 Developer instruction is not enabled for models/gemini-pro

expected:

generate_content() should return a response without raising an error

Any other information you'd like to share?

to reproduce:

## built-in libraries
import typing

## third party libraries
from google.generativeai import GenerationConfig
from google.generativeai.types import GenerateContentResponse, AsyncGenerateContentResponse

import google.generativeai as genai

## dummy values from my production code
_default_translation_instructions:str = "Please translate the following text into English."
_default_model:str = "gemini-pro"

_system_message = _default_translation_instructions

_model:str = _default_model
_temperature:float = 0.5
_top_p:float = 0.9
_top_k:int = 40
_candidate_count:int = 1
_stream:bool = False
_stop_sequences:typing.List[str] | None = None
_max_output_tokens:int | None = None

_client:genai.GenerativeModel
_generation_config:GenerationConfig

_decorator_to_use:typing.Union[typing.Callable, None] = None

_safety_settings = [
    {
        "category": "HARM_CATEGORY_DANGEROUS",
        "threshold": "BLOCK_NONE",
    },
    {
        "category": "HARM_CATEGORY_HARASSMENT",
        "threshold": "BLOCK_NONE",
    },
    {
        "category": "HARM_CATEGORY_HATE_SPEECH",
        "threshold": "BLOCK_NONE",
    },
    {
        "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
        "threshold": "BLOCK_NONE",
    },
    {
        "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
        "threshold": "BLOCK_NONE",
    },
]

genai.configure(api_key="YOUR_API_KEY")

## the commented-out _client and _generation_config below DO NOT raise the 400 developer error
## _client = genai.GenerativeModel(model_name=_model,
##                                 safety_settings=_safety_settings)

## _generation_config = GenerationConfig(candidate_count=_candidate_count,
##                                       stop_sequences=_stop_sequences,
##                                       max_output_tokens=_max_output_tokens,
##                                       temperature=_temperature,
##                                       top_p=_top_p,
##                                       top_k=_top_k)

## but when you introduce the system_instruction parameter, it raises the 400 developer error
## google.api_core.exceptions.InvalidArgument: 400 Developer instruction is not enabled for models/gemini-pro

## I may be mistaken, but v0.5.0 release notes mentioned this parameter was added for use.
_client = genai.GenerativeModel(model_name=_model,
                                safety_settings=_safety_settings,
                                system_instruction=_system_message)

_generation_config = GenerationConfig(candidate_count=_candidate_count,
                                      stop_sequences=_stop_sequences,
                                      max_output_tokens=_max_output_tokens,
                                      temperature=_temperature,
                                      top_p=_top_p,
                                      top_k=_top_k)

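## minimal generation config actually used by the call below (overrides the fuller config above)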
_generation_config = GenerationConfig(candidate_count=1, max_output_tokens=1)

_client.generate_content(
    "Respond to this with 1",generation_config=_generation_config
)
Bikatr7 commented 7 months ago

For reference, if it helps, the bug is occurring here in my code: https://github.com/Bikatr7/EasyTL/blob/implement-system-message-for-gemini/src/easytl/gemini_service.py#L277

singhniraj08 commented 7 months ago

@Bikatr7, the error is reproducible with the gemini-pro model, but changing the model to gemini-1.5-pro-latest makes the code work. My guess is that the system message is not yet enabled for the Gemini 1.0 Pro model. Can you try the same code using the Gemini 1.5 Pro model and let us know if it works? I have attached a Colab gist with working code. Ref: gist. Thank you!
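
A minimal sketch of the suggested change, assuming the same setup as the reproduction code above: only the model name differs (gemini-1.5-pro-latest instead of gemini-pro), the API key and prompt are placeholders, and the safety settings are omitted to keep the snippet self-contained.

import google.generativeai as genai
from google.generativeai import GenerationConfig

genai.configure(api_key="YOUR_API_KEY")

## same call pattern as the repro, but with a model that accepts system_instruction
_client = genai.GenerativeModel(model_name="gemini-1.5-pro-latest",
                                system_instruction="Please translate the following text into English.")

_generation_config = GenerationConfig(candidate_count=1, max_output_tokens=1)

response = _client.generate_content("Respond to this with 1",
                                    generation_config=_generation_config)
print(response.text)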

Bikatr7 commented 7 months ago

@singhniraj08 I have verified that the code you linked does work on my device and seems to resolve the issue. Yet I am still seeing the same issue in my production code after ensuring the Gemini model is set to gemini-1.5-pro-latest. I will try to reproduce once more.

Bikatr7 commented 7 months ago

@singhniraj08 I stand corrected, and I apologize. I failed to properly set the model in production code.

This has resolved my issue.

If I may ask, do you have any insight into if and when the system message will be enabled for 1.0? Additionally, do I need to do anything else before closing my issue?

Thanks.

singhniraj08 commented 7 months ago

@Bikatr7, system instructions are introduced in Gemini 1.5 Pro only, as I am unable to see this feature when using the Gemini 1.0 Pro model in AI Studio. We will keep you posted if this feature gets introduced for Gemini 1.0. Meanwhile, you can keep using it with Gemini 1.5 Pro. If this issue is resolved for you, you can go ahead and close it. Thank you.
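
Nothing in this thread confirms a workaround for 1.0, but one common approach while system instructions are unavailable on Gemini 1.0 Pro is to fold the instructions into the prompt and only pass system_instruction to models that accept it. A minimal sketch under that assumption; the model-name prefix check and the helper names (build_model, generate) are illustrative, not part of the library:

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

SYSTEM_MESSAGE = "Please translate the following text into English."

def build_model(model_name: str) -> genai.GenerativeModel:
    ## gemini-1.5 models accept system_instruction; for gemini-pro (1.0),
    ## skip the parameter to avoid the 400 "Developer instruction" error
    if model_name.startswith("gemini-1.5"):
        return genai.GenerativeModel(model_name=model_name,
                                     system_instruction=SYSTEM_MESSAGE)
    return genai.GenerativeModel(model_name=model_name)

def generate(model_name: str, user_text: str):
    model = build_model(model_name)
    if model_name.startswith("gemini-1.5"):
        prompt = user_text
    else:
        ## fallback for 1.0: prepend the instructions to the user prompt
        prompt = f"{SYSTEM_MESSAGE}\n\n{user_text}"
    return model.generate_content(prompt)

print(generate("gemini-pro", "こんにちは").text)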

Bikatr7 commented 7 months ago

Thank you very much!

annamalaiarunachalam commented 6 months ago

> @Bikatr7, system instructions are introduced in Gemini 1.5 Pro only, as I am unable to see this feature when using the Gemini 1.0 Pro model in AI Studio. We will keep you posted if this feature gets introduced for Gemini 1.0. Meanwhile, you can keep using it with Gemini 1.5 Pro. If this issue is resolved for you, you can go ahead and close it. Thank you.

Hello, I tried with 1.5-pro and I get a different error: ValueError Traceback (most recent call last) ... GoogleGenerativeAIError: Error embedding content: content must not be empty. If I use 1.0, I get the error raised in the original thread; if I change to 1.5, I get this one. But I didn't get any error when I used OpenAI's chat model. Why are Google's APIs so annoying? You are keen to gain more adoption, but the experience sucks. Just the other day, a Google architect almost forced me to move away from OpenAI models; how can I do that? How can I trust these unreliable Google models? Even the embedding model is not good. Please reply to the error message reported above.
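
The "content must not be empty" error above is unrelated to system_instruction; it indicates an empty string reached an embedding call. A minimal sketch of one way to guard against that, assuming the google-generativeai embed_content API; the model name and the documents list are illustrative, not taken from this thread:

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

## hypothetical input: the empty entry would trigger "content must not be empty"
## if it were passed through to the embedding endpoint unfiltered
documents = ["First passage to embed.", "", "Second passage to embed."]

## drop empty / whitespace-only strings before embedding
non_empty = [doc for doc in documents if doc and doc.strip()]

embeddings = [
    genai.embed_content(model="models/embedding-001", content=doc)["embedding"]
    for doc in non_empty
]
print(len(embeddings), "embeddings computed")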

bISTP commented 6 months ago

> Hello, I tried with 1.5-pro and I get a different error: ValueError Traceback (most recent call last) ... GoogleGenerativeAIError: Error embedding content: content must not be empty. If I use 1.0, I get the error raised in the original thread; if I change to 1.5, I get this one. But I didn't get any error when I used OpenAI's chat model. Why are Google's APIs so annoying? You are keen to gain more adoption, but the experience sucks. Just the other day, a Google architect almost forced me to move away from OpenAI models; how can I do that? How can I trust these unreliable Google models? Even the embedding model is not good. Please reply to the error message reported above.

Did you try without system_instruction?

Can you please provide the code snippet if possible?