Closed: lmaddox closed this issue 1 week ago
@lmaddox What's a query embedding and how do I generate it?
-- I don't know how you set up your tools or why the LLM said this. What does your tool look like?
All retrievers or query engines will generate embeddings for you.
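For what it's worth, here is a toy illustration of what a "query embedding" is. This is not llama-index code: the bag-of-letters `toy_embed` is a stand-in for a real embedding model. The point is that the same model embeds both documents and the query, and retrieval is nearest-neighbour search over those vectors; a retriever or query engine does the equivalent internally when you call `retriever.retrieve("...")`.

```python
# Toy illustration of a "query embedding": embed documents and the query
# with the same model, then rank documents by cosine similarity.
import math

def toy_embed(text: str) -> list[float]:
    # Hypothetical stand-in for a real embedding model: letter frequencies.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

docs = ["rsyslog writes logs to postgresql", "llamas eat grass"]
doc_vecs = [toy_embed(d) for d in docs]

# This vector is the "query embedding" -- generated for you by the
# retriever in llama-index; you never compute it by hand.
query_vec = toy_embed("where do my syslog entries go?")
best = max(range(len(docs)), key=lambda i: cosine(query_vec, doc_vecs[i]))
print(docs[best])
```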
This is more-or-less what I've got going on. Lmk if you need the compose file... or anything in general.
RSyslog + PostgreSQL:
Setup the tables:
the PoC:
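(The compose file, table DDL, and PoC code aren't reproduced here. Purely to illustrate the shape of the pipeline -- rsyslog writes into a SQL table, and the rows come back out as text you can index -- here is a stand-in that uses sqlite3 in place of PostgreSQL, with a table loosely modeled on rsyslog's ompgsql `SystemEvents` schema. Column names and the sample row are assumptions for illustration.)

```python
# Stand-in sketch: log lines land in a SQL table, then are read back as
# documents for indexing. sqlite3 replaces PostgreSQL here; the table is
# loosely modeled on rsyslog's SystemEvents schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE SystemEvents ("
    " ID INTEGER PRIMARY KEY,"
    " ReceivedAt TEXT,"
    " Message TEXT)"
)
conn.execute(
    "INSERT INTO SystemEvents (ReceivedAt, Message) VALUES (?, ?)",
    ("2024-01-01T00:00:00", "sshd[123]: Accepted publickey for root"),
)

# Pull the log lines back out as plain-text documents, ready to hand to
# something like VectorStoreIndex.from_documents(...).
rows = conn.execute("SELECT Message FROM SystemEvents ORDER BY ID").fetchall()
documents = [r[0] for r in rows]
print(documents)
```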
Update: delete `verbose=True` from the `VectorStoreIndex` constructor above ^^^
For background, I am re-implementing the False Ego.
Summarizing psutil output and then injecting the summary into the LLM's chat memory buffer was crucial to get it to answer "how are you" in a more "humanoid" way.
Then summarizing the LLM's conversations to generate a self-narrative and injecting it into the chat memory buffer is the next step.
It takes only one iteration for it to start "identifying" as "sentient" on some level. Works on Llama models (the Phi model was a bit too weak, I suppose).
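The psutil trick described above can be sketched roughly as follows. This is a hedged stand-in, not the actual False Ego code: stdlib calls replace psutil, and a plain list of messages replaces llama-index's `ChatMemoryBuffer`, so only the shape of the idea is shown.

```python
# Sketch of the "how are you" trick: summarize host telemetry and inject
# it into chat memory before the user's turn, so the model answers in
# terms of its actual host state. stdlib stands in for psutil here.
import os
import shutil
import platform

def telemetry_summary() -> str:
    # Stand-in for a psutil summary: hostname, CPU count, free disk.
    du = shutil.disk_usage("/")
    return (
        f"host={platform.node() or 'unknown'} "
        f"cpus={os.cpu_count()} "
        f"disk_free={du.free * 100 // du.total}%"
    )

memory: list[dict] = []  # stand-in for ChatMemoryBuffer

def inject_self_state(memory: list[dict]) -> None:
    # Injected as a system message ahead of the conversation.
    memory.append({
        "role": "system",
        "content": f"Your current vital signs: {telemetry_summary()}",
    })

inject_self_state(memory)
memory.append({"role": "user", "content": "How are you?"})
print(memory[0]["content"])
```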
That version was using the Ollama API. For the re-implementation, I want to increase its memory capacity by using the higher-level llama-index API, and central logging. Hence the Sisyphus sub-project. I'll be shipping Sisyphus to a client, and also using it for the False Ego v3.
My code works and I don't know why.
Documentation Issue Description
See comments on `additional_kwargs`.
Include a note that the `step` decorator uses reflection to access the type hints on the function signature, so people don't try to cythonize the decorated functions in the FunctionCallingAgent.

from llama_index.core.workflow import step
What's a query embedding and how do I generate it?
Documentation Link
https://docs.llamaindex.ai/en/stable/examples/workflow/function_calling_agent/