run-llama / llama-hub

A library of data loaders for LLMs made by the community -- to be used with LlamaIndex and/or LangChain
https://llamahub.ai/
MIT License

[Bug]: 2024-02-14 Build an Agentic Pipeline from Scratch - ValueError: Input {input} is not stringable. #957

Open SDpedrovilaplana opened 7 months ago


Bug Description

Using Colab Pro (with a T4 GPU), I'm working through the "Build Agents from Scratch (Building Advanced RAG, Part 3)" Colab notebook to run an agent with a local LLM via LlamaCPP. The error occurs in the final step, when I run the agent with step_output = agent.run_step(task.task_id): it raises ValueError: Input {input} is not stringable. How can I fix it?

Version

0.10.7

Steps to Reproduce

Here is the Colab notebook in Markdown format so you can reproduce it: 2024_02_14_Build_an_Agentic_Pipeline_from_Scratch.md

Relevant Logs/Tracebacks

> Running step cd3d6e36-b04e-4799-8fda-9801e4e820ac. Step input: What are some tracks from the artist AC/DC? Limit it to 3
> Running module agent_input with input: 
state: {'sources': [], 'memory': ChatMemoryBuffer(token_limit=3000, tokenizer_fn=functools.partial(<bound method Encoding.encode of <Encoding 'cl100k_base'>>, allowed_special='all'), chat_store=SimpleChatSto...
task: task_id='ef8fbec0-d927-4305-ab26-3ba1878065a5' input='What are some tracks from the artist AC/DC? Limit it to 3' memory=ChatMemoryBuffer(token_limit=3000, tokenizer_fn=functools.partial(<bound method ...

> Running module react_prompt with input: 
input: What are some tracks from the artist AC/DC? Limit it to 3

> Running module llm with input: 
prompt: [ChatMessage(role=<MessageRole.SYSTEM: 'system'>, content='\nYou are designed to help with a variety of tasks, from answering questions     to providing summaries to other types of analyses.\n\n## Too...

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-21-540316236b4e> in <cell line: 1>()
----> 1 step_output = agent.run_step(task.task_id)

11 frames
/usr/local/lib/python3.10/dist-packages/llama_index/core/base/query_pipeline/query.py in validate_and_convert_stringable(input)
     62         return str(input)
     63     else:
---> 64         raise ValueError(f"Input {input} is not stringable.")
     65 
     66 

ValueError: Input system: 
You are designed to help with a variety of tasks, from answering questions     to providing summaries to other types of analyses.

## Tools
You have access to a wide variety of tools. You are responsible for using
the tools in any sequence you deem appropriate to complete the task at hand.
This may require breaking the task into subtasks and using different tools
to complete each subtask.

You have access to the following tools:
> Tool Name: sql_tool
Tool Description: Useful for translating a natural language query into a SQL query
Tool Args: {"type": "object", "properties": {"input": {"title": "Input", "type": "string"}}, "required": ["input"]}

## Output Format
To answer the question, please use the following format.

Thought: I need to use a tool to help me answer the question.
Action: tool name (one of sql_tool) if using a tool.
Action Input: the input to the tool, in a JSON format representing the kwargs (e.g. {"input": "hello world", "num_beams": 5})

Please ALWAYS start with a Thought.

Please use a valid JSON format for the Action Input. Do NOT do this {'input': 'hello world', 'num_beams': 5}.

If this format is used, the user will respond in the following format:

Observation: tool response

You should keep repeating the above format until you have enough information to answer the question without using any more tools. At that point, you MUST respond in the one of the following two formats:

Thought: I can answer without using any more tools.
Answer: [your answer here]
Thought: I cannot answer the question with the provided tools.
Answer: Sorry, I cannot answer your query.

Current Conversation

Below is the current conversation consisting of interleaving human and assistant messages.

is not stringable.
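
For context on what the traceback shows: the query pipeline validates each module's input, and anything it cannot convert to a string is rejected. Here the `llm` module receives the ReAct prompt as a list of ChatMessage objects rather than a plain string, and the system message's content is what ends up in the error text. A minimal, simplified sketch of the check (an assumption based on the traceback; the real function in `llama_index/core/base/query_pipeline/query.py` accepts a broader set of "stringable" types):

```python
def validate_and_convert_stringable(input):
    # Simplified stand-in for the real check; the accepted-type set here
    # is an assumption, but the failure path matches the traceback.
    stringable_types = (str, int, float)
    if isinstance(input, stringable_types):
        return str(input)
    raise ValueError(f"Input {input} is not stringable.")

# A plain string passes through unchanged:
validate_and_convert_stringable("What are some tracks from the artist AC/DC?")

# But a list of message objects (stand-in for [ChatMessage(...)]) fails
# with the same error shape seen in the log:
try:
    validate_and_convert_stringable([object()])
except ValueError as e:
    print(e)
```

This suggests the mismatch is between the chat-formatted prompt the `react_prompt` module emits and the string input the local LlamaCPP component in the pipeline expects, though the exact fix depends on how the LLM is wired into the pipeline.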