googleapis / python-aiplatform

A Python SDK for Vertex AI, a fully managed, end-to-end platform for data science and machine learning.
Apache License 2.0

Gemini Pro: VertexAI grpc._channel._MultiThreadedRendezvous RPC that terminated with StatusCode.INVALID_ARGUMENT, google.api_core.exceptions.InvalidArgument: 400 Request contains an invalid argument. #3117

Closed: jm-nab closed this 7 months ago

jm-nab commented 7 months ago

I was excited to give Gemini a try, but ran into an issue. Is there any way to debug this, or any info I can grab to help fix it?

Environment details

python --version
Python 3.11.6

poetry --version
Poetry (version 1.6.1)

pip --version
pip 23.2.1 from /home/jm/repos/pinecone-langchain-llm/vertexai_loaders/venv_311/lib/python3.11/site-packages/pip (python 3.11)

pip show google-cloud-aiplatform

Name: google-cloud-aiplatform
Version: 1.38.1
Summary: Vertex AI API client library
Home-page: https://github.com/googleapis/python-aiplatform
Author: Google LLC
Author-email: googleapis-packages@google.com
License: Apache 2.0
Location: /home/jm/repos/pinecone-langchain-llm/vertexai_loaders/venv_311/lib/python3.11/site-packages
Requires: google-api-core, google-cloud-bigquery, google-cloud-resource-manager, google-cloud-storage, packaging, proto-plus, protobuf, shapely
Required-by: vertexai-experiments, vertexailoaders

pip show langchain
Name: langchain
Version: 0.0.350
Summary: Building applications with LLMs through composability
Home-page: https://github.com/langchain-ai/langchain
Author:
Author-email:
License: MIT
Location: /home/jm/repos/pinecone-langchain-llm/vertexai_loaders/venv_311/lib/python3.11/site-packages
Requires: aiohttp, dataclasses-json, jsonpatch, langchain-community, langchain-core, langsmith, numpy, pydantic, PyYAML, requests, SQLAlchemy, tenacity
Required-by: vertexai-experiments, vertexailoaders

lsb_release -a

No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 22.04.3 LTS
Release:    22.04
Codename:   jammy

Code example

-> return llm.client.generate_content(
(Pdb) l
 63         def _completion_with_retry(
 64             prompt: List[Union[str, "Image"]], is_gemini: bool = False, **kwargs: Any
 65         ) -> Any:
 66             if is_gemini:
 67                 breakpoint()
 68  ->             return llm.client.generate_content(
 69                     prompt, stream=stream, generation_config=kwargs
 70                 )
 71             else:
 72                 if stream:
 73                     return llm.client.predict_streaming(prompt[0], **kwargs)
(Pdb) prompt
["Answer the question at the end. \nWhen providing code samples, write them in codeblocks.\nAnswer in english.\nWhen providing how to's, write them as lists.\n\n\nQuestion: how do i operate a husqvarna snow blower?\nAnswer:"]
(Pdb) kwargs
{'temperature': 0.5, 'max_output_tokens': 1024, 'candidate_count': 1, 'top_k': 2, 'top_p': 0.7, 'stop_sequences': None}
(Pdb) stream
False
(Pdb) c
> /home/jm/repos/pinecone-langchain-llm/vertexai_loaders/venv_311/lib/python3.11/site-packages/vertexai/generative_models/_generative_models.py(436)_generate_content()
-> stream = self._prediction_client.stream_generate_content(request=request)
(Pdb) request
model: "projects/genai-qa-230803/locations/us-central1/google/gemini-pro@001"
contents {
  role: "user"
  parts {
    text: "Answer the question at the end. \nWhen providing code samples, write them in codeblocks.\nAnswer in english.\nWhen providing how to\'s, write them as lists.\n\n\nQuestion: how do i operate a husqvarna snow blower?\nAnswer:"
  }
}
generation_config {
  temperature: 0.5
  top_p: 0.7
  top_k: 2
  candidate_count: 1
  max_output_tokens: 1024
}
llm = VertexAI(
    model_name=model_name,
    max_output_tokens=max_output_tokens,
    temperature=temperature,
    top_p=0.7,
    top_k=2,
    verbose=True,
    # streaming=True,
)
# sets up the retriever, chat memory, buffer, etc.
predict = get_memory_chain(
    request,
    selected_history,
    collection_name=collection_name,
    model_name=selected_model,
    max_output_tokens=1024,
    prompt_template=latest_prompt.content.replace("#QUESTION#", "{question}"),
    temperature=temperature,
    llm=llm,
)
# get chatbot response
gpt_response = predict({"question": message})
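
To rule LangChain in or out, a minimal sketch that exercises the same SDK call directly might look like this (the project ID, model string, and parameters are just the values from this report; GenerativeModel lived under vertexai.preview.generative_models in google-cloud-aiplatform 1.38.x):

import vertexai
from vertexai.preview.generative_models import GenerationConfig, GenerativeModel

# Same project/location the request above resolved to.
vertexai.init(project="genai-qa-230803", location="us-central1")

# Same model string being passed through LangChain above.
model = GenerativeModel("google/gemini-pro@001")
response = model.generate_content(
    "how do i operate a husqvarna snow blower?",
    generation_config=GenerationConfig(temperature=0.5, max_output_tokens=1024),
)
print(response.text)  # raises InvalidArgument: 400 with this model string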

Stack trace

INFO:     127.0.0.1:51544 - "POST /api/predict HTTP/1.1" 500 Internal Server Error
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/home/jm/repos/pinecone-langchain-llm/vertexai_loaders/venv_311/lib/python3.11/site-packages/google/api_core/grpc_helpers.py", line 165, in error_remapped_callable
    return _StreamingResponseIterator(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jm/repos/pinecone-langchain-llm/vertexai_loaders/venv_311/lib/python3.11/site-packages/google/api_core/grpc_helpers.py", line 91, in __init__
    self._stored_first_result = next(self._wrapped)
                                ^^^^^^^^^^^^^^^^^^^
  File "/home/jm/repos/pinecone-langchain-llm/vertexai_loaders/venv_311/lib/python3.11/site-packages/grpc/_channel.py", line 541, in __next__
    return self._next()
           ^^^^^^^^^^^^
  File "/home/jm/repos/pinecone-langchain-llm/vertexai_loaders/venv_311/lib/python3.11/site-packages/grpc/_channel.py", line 967, in _next
    raise self
grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:
    status = StatusCode.INVALID_ARGUMENT
    details = "Request contains an invalid argument."
    debug_error_string = "UNKNOWN:Error received from peer ipv4:xxx:443 {grpc_message:"Request contains an invalid argument.", grpc_status:3, created_time:"2023-12-15T19:43:10.238841694+00:00"}"
>

The above exception was the direct cause of the following exception:

  File "/home/jm/repos/pinecone-langchain-llm/vertexai_loaders/src/vertexai_loaders/serve.py", line 165, in predict
    gpt_response = predict({"question": message})
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jm/repos/pinecone-langchain-llm/vertexai_loaders/venv_311/lib/python3.11/site-packages/langchain/chains/base.py", line 312, in __call__
    raise e
  File "/home/jm/repos/pinecone-langchain-llm/vertexai_loaders/venv_311/lib/python3.11/site-packages/langchain/chains/base.py", line 306, in __call__
    self._call(inputs, run_manager=run_manager)
  File "/home/jm/repos/pinecone-langchain-llm/vertexai_loaders/venv_311/lib/python3.11/site-packages/langchain/chains/conversational_retrieval/base.py", line 162, in _call
    answer = self.combine_docs_chain.run(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jm/repos/pinecone-langchain-llm/vertexai_loaders/venv_311/lib/python3.11/site-packages/langchain/chains/base.py", line 512, in run
    return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jm/repos/pinecone-langchain-llm/vertexai_loaders/venv_311/lib/python3.11/site-packages/langchain/chains/base.py", line 312, in __call__
    raise e
  File "/home/jm/repos/pinecone-langchain-llm/vertexai_loaders/venv_311/lib/python3.11/site-packages/langchain/chains/base.py", line 306, in __call__
    self._call(inputs, run_manager=run_manager)
  File "/home/jm/repos/pinecone-langchain-llm/vertexai_loaders/venv_311/lib/python3.11/site-packages/langchain/chains/combine_documents/base.py", line 123, in _call
    output, extra_return_dict = self.combine_docs(
                                ^^^^^^^^^^^^^^^^^^
  File "/home/jm/repos/pinecone-langchain-llm/vertexai_loaders/venv_311/lib/python3.11/site-packages/langchain/chains/combine_documents/stuff.py", line 172, in combine_docs
    return self.llm_chain.predict(callbacks=callbacks, **inputs), {}
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jm/repos/pinecone-langchain-llm/vertexai_loaders/venv_311/lib/python3.11/site-packages/langchain/chains/llm.py", line 293, in predict
    return self(kwargs, callbacks=callbacks)[self.output_key]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jm/repos/pinecone-langchain-llm/vertexai_loaders/venv_311/lib/python3.11/site-packages/langchain/chains/base.py", line 312, in __call__
    raise e
  File "/home/jm/repos/pinecone-langchain-llm/vertexai_loaders/venv_311/lib/python3.11/site-packages/langchain/chains/base.py", line 306, in __call__
    self._call(inputs, run_manager=run_manager)
  File "/home/jm/repos/pinecone-langchain-llm/vertexai_loaders/venv_311/lib/python3.11/site-packages/langchain/chains/llm.py", line 103, in _call
    response = self.generate([inputs], run_manager=run_manager)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jm/repos/pinecone-langchain-llm/vertexai_loaders/venv_311/lib/python3.11/site-packages/langchain/chains/llm.py", line 115, in generate
    return self.llm.generate_prompt(
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jm/repos/pinecone-langchain-llm/vertexai_loaders/venv_311/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 516, in generate_prompt
    return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jm/repos/pinecone-langchain-llm/vertexai_loaders/venv_311/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 666, in generate
    output = self._generate_helper(
             ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jm/repos/pinecone-langchain-llm/vertexai_loaders/venv_311/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 553, in _generate_helper
    raise e
  File "/home/jm/repos/pinecone-langchain-llm/vertexai_loaders/venv_311/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 540, in _generate_helper
    self._generate(
  File "/home/jm/repos/pinecone-langchain-llm/vertexai_loaders/venv_311/lib/python3.11/site-packages/langchain_community/llms/vertexai.py", line 328, in _generate
    res = completion_with_retry(
          ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jm/repos/pinecone-langchain-llm/vertexai_loaders/venv_311/lib/python3.11/site-packages/langchain_community/llms/vertexai.py", line 76, in completion_with_retry
    return _completion_with_retry(prompt, is_gemini, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jm/repos/pinecone-langchain-llm/vertexai_loaders/venv_311/lib/python3.11/site-packages/tenacity/__init__.py", line 289, in wrapped_f
    return self(f, *args, **kw)
           ^^^^^^^^^^^^^^^^^^^^
  File "/home/jm/repos/pinecone-langchain-llm/vertexai_loaders/venv_311/lib/python3.11/site-packages/tenacity/__init__.py", line 379, in __call__
    do = self.iter(retry_state=retry_state)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jm/repos/pinecone-langchain-llm/vertexai_loaders/venv_311/lib/python3.11/site-packages/tenacity/__init__.py", line 325, in iter
    raise retry_exc.reraise()
          ^^^^^^^^^^^^^^^^^^^
  File "/home/jm/repos/pinecone-langchain-llm/vertexai_loaders/venv_311/lib/python3.11/site-packages/tenacity/__init__.py", line 158, in reraise
    raise self.last_attempt.result()
          ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jm/.pyenv/versions/3.11.6/lib/python3.11/concurrent/futures/_base.py", line 449, in result
    return self.__get_result()
           ^^^^^^^^^^^^^^^^^^^
  File "/home/jm/.pyenv/versions/3.11.6/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
    raise self._exception
  File "/home/jm/repos/pinecone-langchain-llm/vertexai_loaders/venv_311/lib/python3.11/site-packages/tenacity/__init__.py", line 382, in __call__
    result = fn(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^
  File "/home/jm/repos/pinecone-langchain-llm/vertexai_loaders/venv_311/lib/python3.11/site-packages/langchain_community/llms/vertexai.py", line 68, in _completion_with_retry
    return llm.client.generate_content(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jm/repos/pinecone-langchain-llm/vertexai_loaders/venv_311/lib/python3.11/site-packages/vertexai/generative_models/_generative_models.py", line 351, in generate_content
    return self._generate_content(
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jm/repos/pinecone-langchain-llm/vertexai_loaders/venv_311/lib/python3.11/site-packages/vertexai/generative_models/_generative_models.py", line 435, in _generate_content
    stream = self._prediction_client.stream_generate_content(request=request)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jm/repos/pinecone-langchain-llm/vertexai_loaders/venv_311/lib/python3.11/site-packages/google/cloud/aiplatform_v1beta1/services/prediction_service/client.py", line 1634, in stream_generate_content
    response = rpc(
               ^^^^
  File "/home/jm/repos/pinecone-langchain-llm/vertexai_loaders/venv_311/lib/python3.11/site-packages/google/api_core/gapic_v1/method.py", line 131, in __call__
    return wrapped_func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jm/repos/pinecone-langchain-llm/vertexai_loaders/venv_311/lib/python3.11/site-packages/google/api_core/grpc_helpers.py", line 169, in error_remapped_callable
    raise exceptions.from_grpc_error(exc) from exc
google.api_core.exceptions.InvalidArgument: 400 Request contains an invalid argument.

sasha-gitg commented 7 months ago

@jm-nab What value are you passing in for model_name?

jm-nab commented 7 months ago

Hi @sasha-gitg, the model name string being passed in at the top is: google/gemini-pro@001

I do see, though, that after it works its way through LangChain to the Python SDK, it looks like it's trying to find the model in us-central1 by default, even though the project's resources are in us-east4-b. Would that cause an issue?

From the traceback above:

-> stream = self._prediction_client.stream_generate_content(request=request)
(Pdb) request
model: "projects/genai-qa-230803/locations/us-central1/google/gemini-pro@001"
contents {
  role: "user"
  parts {
    text: "Answer the question at the end. \nWhen providing code samples, write them in codeblocks.\nAnswer in english.\nWhen providing how to\'s, write them as lists.\n\n\nQuestion: how do i operate a husqvarna snow blower?\nAnswer:"
  }
}
generation_config {
  temperature: 0.5
  top_p: 0.7
  top_k: 2
  candidate_count: 1
  max_output_tokens: 1024
}
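
(For comparison: a fully qualified publisher model resource name would normally look like the line below, so the request path above appears to be missing its publishers/ and models/ segments, which suggests the model_name string isn't being parsed the way the SDK expects.)

projects/PROJECT_ID/locations/LOCATION/publishers/google/models/gemini-pro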

Thank you for the quick response. Is there any other debugging info I can add?

sasha-gitg commented 7 months ago

You shouldn't use the @001 version for gemini-pro. Just pass gemini-pro.

You should be able to set the location in LangChain:

 llm = VertexAI(..., location='us-central1')
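
Putting both fixes together, a minimal sketch of the corrected setup (assuming the langchain_community VertexAI wrapper shown in the traceback):

from langchain_community.llms import VertexAI

# Plain model ID: no "google/" prefix and no "@001" version suffix,
# plus an explicit location.
llm = VertexAI(
    model_name="gemini-pro",
    location="us-central1",
    max_output_tokens=1024,
    temperature=0.5,
    top_p=0.7,
    top_k=2,
    verbose=True,
)
print(llm.invoke("how do i operate a husqvarna snow blower?"))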
jm-nab commented 7 months ago

@sasha-gitg ah! Thank you, that was it!

I was confused by the documentation here: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/gemini-pro

Previously I had been able to copy and paste the model names listed at the bottom of that page and it would work, so I tried the same approach again.

(Screenshots of the Model Garden page attached: 2023-12-18_10-03, 2023-12-18_10-04.)