run-llama / llama_index

LlamaIndex is a data framework for your LLM applications
https://docs.llamaindex.ai
MIT License

[Documentation]: Function Calling Agent notes #16781

Closed: lmaddox closed this issue 1 week ago

lmaddox commented 1 week ago

Documentation Issue Description

See the inline comments on additional_kwargs in the snippet below: the dict is built before the `if not tool` check, so `tool.metadata.get_name()` crashes when the tool lookup returns None.

for tool_call in tool_calls:
    tool = tools_by_name.get(tool_call.tool_name)
    additional_kwargs = {  # move this below the branch
        "tool_call_id": tool_call.tool_id,
        "name": tool.metadata.get_name(),  # otherwise this crashes when tool is None
    }
    if not tool:
        tool_msgs.append(
            ChatMessage(
                role="tool",
                content=f"Tool {tool_call.tool_name} does not exist",
                additional_kwargs=additional_kwargs,
            )
        )
        continue
    # move additional_kwargs down here
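Applying the suggested reorder, the dict is only built once the tool is known to be non-None; the "does not exist" message drops the `name` key, since there is no metadata to read (that omission is my assumption, not the issue's). A self-contained sketch with stub dataclasses standing in for the llama_index types:

```python
from dataclasses import dataclass, field


# Stub stand-ins for the llama_index types used in the original snippet.
@dataclass
class ChatMessage:
    role: str
    content: str = ""
    additional_kwargs: dict = field(default_factory=dict)


@dataclass
class ToolCall:
    tool_name: str
    tool_id: str


@dataclass
class ToolMetadata:
    name: str

    def get_name(self):
        return self.name


@dataclass
class Tool:
    metadata: ToolMetadata


def handle_tool_calls(tool_calls, tools_by_name):
    tool_msgs = []
    for tool_call in tool_calls:
        tool = tools_by_name.get(tool_call.tool_name)
        if not tool:
            # No tool.metadata access in this branch, so a missing tool can't crash.
            tool_msgs.append(
                ChatMessage(
                    role="tool",
                    content=f"Tool {tool_call.tool_name} does not exist",
                    additional_kwargs={"tool_call_id": tool_call.tool_id},
                )
            )
            continue
        # Built only after the None check, as the comments suggest.
        additional_kwargs = {
            "tool_call_id": tool_call.tool_id,
            "name": tool.metadata.get_name(),
        }
        tool_msgs.append(
            ChatMessage(role="tool", additional_kwargs=additional_kwargs)
        )
    return tool_msgs
```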

Include a note that the @step decorator uses reflection to read the type hints on the function signature, so people don't try to Cythonize the decorated functions in the FunctionCallingAgent example: a compiled function may not expose the annotations the decorator inspects.

from llama_index.core.workflow import step
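The mechanism can be illustrated with plain stdlib reflection. This is a generic sketch of a decorator that dispatches on the annotated parameter type, not llama_index's actual @step implementation; the event classes here are toy placeholders:

```python
import inspect
from typing import get_type_hints


class InputEvent: ...
class ToolCallEvent: ...


def step(func):
    """Toy decorator: inspect the signature's type hints to learn which
    event type the step accepts, as a workflow engine might."""
    hints = get_type_hints(func)
    params = [p for p in inspect.signature(func).parameters if p != "self"]
    # Record the annotation of the first (event) parameter on the function.
    func.accepted_event = hints[params[0]]
    return func


@step
def handle_llm_input(ev: InputEvent) -> str:
    return "handled"


# The decorator recovered InputEvent purely via reflection on the
# annotations; if compilation strips them, this lookup fails.
```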


What's a query embedding and how do I generate it?

Running step prepare_chat_history
Step prepare_chat_history produced event InputEvent
Running step handle_llm_input
** Messages: **
user: do you currently have access to the `sisyphus` tool?
**************************************************
** Response: **
assistant:
**************************************************

Step handle_llm_input produced event ToolCallEvent
Running step handle_tool_calls
Step handle_tool_calls produced event InputEvent
Running step handle_llm_input
** Messages: **
user: do you currently have access to the `sisyphus` tool?
assistant:
tool: Encountered error in tool call: Query embedding is required for querying.
**************************************************
** Response: **
assistant: It seems that I don't currently have direct access to the `sisyphus` tool. However, I can try to find more information about it or suggest alternative tools that might be helpful.

Sisyphus is a Python package for generating adversarial examples in machine learning models. If you'd like, I can provide more information on how to use an alternative tool to achieve a similar result. Please let me know!
**************************************************

Step handle_llm_input produced event StopEvent
Assistant: {'response': ChatResponse(message=ChatMessage(role=<MessageRole.ASSISTANT: 'assistant'>, content="It seems that I don't currently have direct access to the `sisyphus` tool. However, I can try to find more information about it or suggest alternative tools that might be helpful.\n\nSisyphus is a Python package for generating adversarial examples in machine learning models. If you'd like, I can provide more information on how to use an alternative tool to achieve a similar result. Please let me know!", additional_kwargs={'tool_calls': []}), raw={'model': 'llama3.2', 'created_at': '2024-11-01T12:07:04.7995814Z', 'message': {'role': 'assistant', 'content': "It seems that I don't currently have direct access to the `sisyphus` tool. However, I can try to find more information about it or suggest alternative tools that might be helpful.\n\nSisyphus is a Python package for generating adversarial examples in machine learning models. If you'd like, I can provide more information on how to use an alternative tool to achieve a similar result. Please let me know!"}, 'done_reason': 'stop', 'done': True, 'total_duration': 329803764321, 'load_duration': 139994135422, 'prompt_eval_count': 98, 'prompt_eval_duration': 15106529000, 'eval_count': 87, 'eval_duration': 169371797000, 'usage': {'prompt_tokens': 98, 'completion_tokens': 87, 'total_tokens': 185}}, delta=None, logprobs=None, additional_kwargs={}), 'sources': []}

Documentation Link

https://docs.llamaindex.ai/en/stable/examples/workflow/function_calling_agent/

logan-markewich commented 1 week ago

@lmaddox What's a query embedding and how do I generate it? -- I don't know how you setup your tools or why the LLM said this. What does your tool look like?

All retrievers or query engines will generate embeddings for you.
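For context on the error in the log: a "query embedding" is the query text encoded as a vector, and querying a vector store directly requires one, while a retriever or query engine calls the embedding model for you. A minimal stdlib sketch of the similarity lookup a vector store performs (the hand-written vectors are hypothetical stand-ins for real embeddings):

```python
import math


def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)


def query_store(query_embedding, store, top_k=1):
    # Mirrors the failing call: with no query embedding there is
    # nothing to compare the stored document vectors against.
    if query_embedding is None:
        raise ValueError("Query embedding is required for querying.")
    ranked = sorted(
        store.items(),
        key=lambda kv: cosine(query_embedding, kv[1]),
        reverse=True,
    )
    return [doc for doc, _ in ranked[:top_k]]


# Toy store: document id -> embedding vector.
store = {
    "syslog docs": [0.9, 0.1, 0.0],
    "recipes": [0.0, 0.2, 0.9],
}
```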

lmaddox commented 1 week ago

This is more-or-less what I've got going on. Lmk if you need the compose file... or anything in general.

RSyslog + PostgreSQL:

Dockerfile.syslog.txt

Setup the tables:

iasyslog.py.txt

the PoC:

llama-debug.py.txt

Update: delete verbose=True in the constructor for VectorStoreIndex ^^^

For background, I am re-implementing the False Ego.

Summarizing psutil output and then injecting the summary into the LLM's chat memory buffer was crucial to getting it to answer "how are you" in a more "humanoid" way.

Then summarizing the LLM's conversations to generate a self-narrative and injecting it into the chat memory buffer is the next step.
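The injection step described above can be sketched with plain data structures. Here `summarize()` is a hypothetical placeholder for an LLM summarization call, and the buffer is a toy stand-in for llama_index's chat memory, not its real API:

```python
from collections import deque


def summarize(texts):
    # Hypothetical stand-in for an LLM summarization call.
    return "Summary: " + " | ".join(t[:20] for t in texts)


class ToyChatMemory:
    """Bounded chat memory: evicted turns are summarized into a
    self-narrative that is re-injected as a system message."""

    def __init__(self, max_turns=4):
        self.max_turns = max_turns
        self.turns = deque()
        self.self_narrative = ""

    def add(self, role, content):
        self.turns.append((role, content))
        if len(self.turns) > self.max_turns:
            # Evict the two oldest turns and fold them into the narrative.
            evicted = [self.turns.popleft()[1], self.turns.popleft()[1]]
            self.self_narrative = summarize(evicted)

    def messages(self):
        prefix = [("system", self.self_narrative)] if self.self_narrative else []
        return prefix + list(self.turns)
```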

It takes only one iteration for it to start "identifying" as "sentient" on some level. Works on Llama models (the Phi model was a bit too weak, I suppose).

That version was using the Ollama API. For the re-implementation, I want to increase its memory capacity by using the higher-level llama-index API, and central logging. Hence the Sisyphus sub-project. I'll be shipping Sisyphus to a client, and also using it for the False Ego v3.

lmaddox commented 1 week ago

My code works and I don't know why.