run-llama / llama_index

LlamaIndex is a data framework for your LLM applications
https://docs.llamaindex.ai
MIT License

[Bug]: #16970

Open dayglo opened 5 hours ago

dayglo commented 5 hours ago

Bug Description

I'm having trouble wrapping a VertexAI LLM as a structured LLM. It works fine with OpenAI.

I set it up with

sllm = llm.as_structured_llm(output_cls=MyClass)

But then when calling it, I get

ERROR:flask_app:Exception on /process_chatlog [POST]
Traceback (most recent call last):
  File "/Users/george.cairns@contino.io/Documents/code/fca/chatbot2/.venv/lib/python3.12/site-packages/google/protobuf/json_format.py", line 585, in _ConvertFieldValuePair
    raise ParseError(
google.protobuf.json_format.ParseError: Message type "google.cloud.aiplatform.v1beta1.Schema" has no field named "$defs" at "Schema".
 Available Fields(except extensions): "['type', 'format', 'title', 'description', 'nullable', 'default', 'items', 'minItems', 'maxItems', 'enum', 'properties', 'propertyOrdering', 'required', 'minProperties', 'maxProperties', 'minimum', 'maximum', 'minLength', 'maxLength', 'pattern', 'example', 'anyOf']"

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/george.cairns@contino.io/Documents/code/fca/chatbot2/.venv/lib/python3.12/site-packages/flask/app.py", line 1473, in wsgi_app
    response = self.full_dispatch_request()
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/george.cairns@contino.io/Documents/code/fca/chatbot2/.venv/lib/python3.12/site-packages/flask/app.py", line 882, in full_dispatch_request
    rv = self.handle_user_exception(e)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/george.cairns@contino.io/Documents/code/fca/chatbot2/.venv/lib/python3.12/site-packages/flask_cors/extension.py", line 194, in wrapped_function
    return cors_after_request(app.make_response(f(*args, **kwargs)))
                                                ^^^^^^^^^^^^^^^^^^
  File "/Users/george.cairns@contino.io/Documents/code/fca/chatbot2/.venv/lib/python3.12/site-packages/flask/app.py", line 880, in full_dispatch_request
    rv = self.dispatch_request()
         ^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/george.cairns@contino.io/Documents/code/fca/chatbot2/.venv/lib/python3.12/site-packages/flask/app.py", line 865, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)  # type: ignore[no-any-return]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/george.cairns@contino.io/Documents/code/fca/chatbot2/flask_app.py", line 145, in process_chatlog
    tickets = summariser(chat_text)
              ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/george.cairns@contino.io/Documents/code/fca/chatbot2/flask_app.py", line 98, in summariser
    output = sllm.chat([input_msg])
             ^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/george.cairns@contino.io/Documents/code/fca/chatbot2/.venv/lib/python3.12/site-packages/llama_index/core/instrumentation/dispatcher.py", line 311, in wrapper
    result = func(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/george.cairns@contino.io/Documents/code/fca/chatbot2/.venv/lib/python3.12/site-packages/llama_index/core/llms/callbacks.py", line 173, in wrapped_llm_chat
    f_return_val = f(_self, messages, **kwargs)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/george.cairns@contino.io/Documents/code/fca/chatbot2/.venv/lib/python3.12/site-packages/llama_index/core/llms/structured_llm.py", line 75, in chat
    output = self.llm.structured_predict(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/george.cairns@contino.io/Documents/code/fca/chatbot2/.venv/lib/python3.12/site-packages/llama_index/core/instrumentation/dispatcher.py", line 311, in wrapper
    result = func(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/george.cairns@contino.io/Documents/code/fca/chatbot2/.venv/lib/python3.12/site-packages/llama_index/core/llms/llm.py", line 374, in structured_predict
    result = program(llm_kwargs=llm_kwargs, **prompt_args)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/george.cairns@contino.io/Documents/code/fca/chatbot2/.venv/lib/python3.12/site-packages/llama_index/core/instrumentation/dispatcher.py", line 311, in wrapper
    result = func(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/george.cairns@contino.io/Documents/code/fca/chatbot2/.venv/lib/python3.12/site-packages/llama_index/core/program/function_program.py", line 202, in __call__
    agent_response = self._llm.predict_and_call(
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/george.cairns@contino.io/Documents/code/fca/chatbot2/.venv/lib/python3.12/site-packages/llama_index/core/instrumentation/dispatcher.py", line 311, in wrapper
    result = func(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/george.cairns@contino.io/Documents/code/fca/chatbot2/.venv/lib/python3.12/site-packages/llama_index/core/llms/function_calling.py", line 182, in predict_and_call
    response = self.chat_with_tools(
               ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/george.cairns@contino.io/Documents/code/fca/chatbot2/.venv/lib/python3.12/site-packages/llama_index/core/llms/function_calling.py", line 48, in chat_with_tools
    response = self.chat(**chat_kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/george.cairns@contino.io/Documents/code/fca/chatbot2/.venv/lib/python3.12/site-packages/llama_index/core/instrumentation/dispatcher.py", line 311, in wrapper
    result = func(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/george.cairns@contino.io/Documents/code/fca/chatbot2/.venv/lib/python3.12/site-packages/llama_index/core/llms/callbacks.py", line 173, in wrapped_llm_chat
    f_return_val = f(_self, messages, **kwargs)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/george.cairns@contino.io/Documents/code/fca/chatbot2/.venv/lib/python3.12/site-packages/llama_index/llms/vertex/base.py", line 240, in chat
    generation = completion_with_retry(
                 ^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/george.cairns@contino.io/Documents/code/fca/chatbot2/.venv/lib/python3.12/site-packages/llama_index/llms/vertex/utils.py", line 113, in completion_with_retry
    return _completion_with_retry(**kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/george.cairns@contino.io/Documents/code/fca/chatbot2/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 336, in wrapped_f
    return copy(f, *args, **kw)
           ^^^^^^^^^^^^^^^^^^^^
  File "/Users/george.cairns@contino.io/Documents/code/fca/chatbot2/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 475, in __call__
    do = self.iter(retry_state=retry_state)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/george.cairns@contino.io/Documents/code/fca/chatbot2/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 376, in iter
    result = action(retry_state)
             ^^^^^^^^^^^^^^^^^^^
  File "/Users/george.cairns@contino.io/Documents/code/fca/chatbot2/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 398, in <lambda>
    self._add_action_func(lambda rs: rs.outcome.result())
                                     ^^^^^^^^^^^^^^^^^^^
  File "/usr/local/Cellar/python@3.12/3.12.7_1/Frameworks/Python.framework/Versions/3.12/lib/python3.12/concurrent/futures/_base.py", line 449, in result
    return self.__get_result()
           ^^^^^^^^^^^^^^^^^^^
  File "/usr/local/Cellar/python@3.12/3.12.7_1/Frameworks/Python.framework/Versions/3.12/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
    raise self._exception
  File "/Users/george.cairns@contino.io/Documents/code/fca/chatbot2/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 478, in __call__
    result = fn(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^
  File "/Users/george.cairns@contino.io/Documents/code/fca/chatbot2/.venv/lib/python3.12/site-packages/llama_index/llms/vertex/utils.py", line 92, in _completion_with_retry
    tools = to_gemini_tools(tools) if tools else []
            ^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/george.cairns@contino.io/Documents/code/fca/chatbot2/.venv/lib/python3.12/site-packages/llama_index/llms/vertex/utils.py", line 60, in to_gemini_tools
    func_name = FunctionDeclaration(
                ^^^^^^^^^^^^^^^^^^^^
  File "/Users/george.cairns@contino.io/Documents/code/fca/chatbot2/.venv/lib/python3.12/site-packages/vertexai/generative_models/_generative_models.py", line 2160, in __init__
    raw_schema = _dict_to_proto(aiplatform_types.Schema, parameters)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/george.cairns@contino.io/Documents/code/fca/chatbot2/.venv/lib/python3.12/site-packages/vertexai/generative_models/_generative_models.py", line 2950, in _dict_to_proto
    json_format.ParseDict(message_dict, message._pb)
  File "/Users/george.cairns@contino.io/Documents/code/fca/chatbot2/.venv/lib/python3.12/site-packages/google/protobuf/json_format.py", line 495, in ParseDict
    parser.ConvertMessage(js_dict, message, '')
  File "/Users/george.cairns@contino.io/Documents/code/fca/chatbot2/.venv/lib/python3.12/site-packages/google/protobuf/json_format.py", line 540, in ConvertMessage
    self._ConvertFieldValuePair(value, message, path)
  File "/Users/george.cairns@contino.io/Documents/code/fca/chatbot2/.venv/lib/python3.12/site-packages/google/protobuf/json_format.py", line 696, in _ConvertFieldValuePair
    raise ParseError(str(e)) from e
google.protobuf.json_format.ParseError: Message type "google.cloud.aiplatform.v1beta1.Schema" has no field named "$defs" at "Schema".
 Available Fields(except extensions): "['type', 'format', 'title', 'description', 'nullable', 'default', 'items', 'minItems', 'maxItems', 'enum', 'properties', 'propertyOrdering', 'required', 'minProperties', 'maxProperties', 'minimum', 'maximum', 'minLength', 'maxLength', 'pattern', 'example', 'anyOf']"

I had a look through the issues on here and noticed that someone is getting a similar error message (google.protobuf.json_format.ParseError: Message type "google.cloud.aiplatform.v1beta1.Schema" has no field named "$defs" at "Schema".), but their use case is function calling, not structured output.

Version

llama-index 0.11.21

Steps to Reproduce

As above

Relevant Logs/Tracebacks

No response

dosubot[bot] commented 5 hours ago

Hey, @dayglo! I'm here to help you with any bugs or questions you have. Let's tackle this together!

The error you're encountering appears to originate in the to_gemini_tools function in the VertexAI integration of the LlamaIndex library. This function converts tool specifications into a format compatible with the Gemini model in VertexAI. The error message indicates that the Schema type in google.cloud.aiplatform.v1beta1 has no field named $defs, which suggests that the parameters field in your FunctionDeclaration might be incorrectly structured or contain unsupported fields.

To resolve this issue, you should ensure that the parameters field in your FunctionDeclaration is correctly formatted according to the expected schema. The parameters should match the available fields in the Schema type, such as type, format, title, description, nullable, default, items, minItems, maxItems, enum, properties, propertyOrdering, required, minProperties, maxProperties, minimum, maximum, minLength, maxLength, pattern, example, and anyOf.

Review the structure of the parameters you are passing to ensure it aligns with these fields and does not include any unsupported fields like $defs. This should help in resolving the ParseError you are experiencing.
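As a concrete illustration of that advice: when a Pydantic output class contains nested models, its generated JSON schema typically places the nested definitions under a top-level "$defs" key and points to them with "$ref", which Vertex's Schema proto rejects. Below is a minimal, stdlib-only sketch of a workaround: a helper that resolves local "#/$defs/..." references in place and drops the "$defs" block before the schema is handed to Vertex. The function name `inline_defs` and the sample `Ticket` schema are hypothetical, not part of LlamaIndex or the Vertex SDK; this is an assumption-laden sketch, not a fix in either library.

```python
from copy import deepcopy


def inline_defs(schema: dict) -> dict:
    """Resolve local '#/$defs/...' references and drop the '$defs' block,
    leaving only fields the Vertex Schema proto can accept.
    Note: does not handle self-referential (cyclic) definitions."""
    defs = schema.get("$defs", {})

    def resolve(node):
        if isinstance(node, dict):
            ref = node.get("$ref", "")
            if ref.startswith("#/$defs/"):
                # Replace the reference with a deep copy of the definition,
                # keeping any sibling keys (e.g. description) from the ref node.
                target = deepcopy(defs[ref.split("/")[-1]])
                extras = {k: resolve(v) for k, v in node.items() if k != "$ref"}
                return {**resolve(target), **extras}
            return {k: resolve(v) for k, v in node.items() if k != "$defs"}
        if isinstance(node, list):
            return [resolve(v) for v in node]
        return node

    return resolve(schema)


# Hypothetical example: the shape a nested Pydantic model's schema might take.
schema = {
    "$defs": {
        "Ticket": {
            "type": "object",
            "properties": {"title": {"type": "string"}},
        }
    },
    "type": "object",
    "properties": {
        "tickets": {"type": "array", "items": {"$ref": "#/$defs/Ticket"}}
    },
}
flat = inline_defs(schema)
```

After inlining, `flat` contains no "$defs" key and the array's `items` holds the full `Ticket` object schema, so only field names from the proto's supported list remain at the top level.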

To continue talking to Dosu, mention @dosu.