Open beddows opened 1 year ago
Right now I've changed it so that the library streams responses. You can take the response, iterate over it in a for loop, and print each token like this:
```python
for token in response:
    print(token, end="", flush=True)
```
I still need to implement the option of switching between streaming and just the normal mode.
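A minimal sketch of what that toggle could look like, assuming a `stream` keyword argument (the names `send`, `_generate`, and the flag itself are assumptions, not the library's actual API):

```python
def send(message, stream=False):
    """Hypothetical sketch of a streaming toggle; names are assumptions."""
    def _generate(msg):
        # Stand-in for the model call, which yields tokens one at a time.
        for tok in ["Hello", ", ", "world"]:
            yield tok

    tokens = _generate(message)
    if stream:
        return tokens           # generator: caller prints token by token
    return "".join(tokens)      # normal mode: one complete string
```

With a design like this, existing callers that expect a string keep working by default, and streaming is opt-in.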
I haven’t updated tests/main.py to work yet unfortunately. But try the others.
Thanks @kyb3r, have it working now! Yes, I've been playing with clinic.py, which is very cool. It has given me a lot of ideas to think through.
In terms of priorities, what kinds of contributions are you looking for from the community? How can I chip in? I have all kinds of ideas regarding your excellent HMCS implementation.
Making the implementation of tools more robust is something that needs experimentation.
I'm also thinking of making alternate ways of storing memories. One thing I might want to explore is giving the agent access to a save_memory tool so it can store memories of its own accord.
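A rough sketch of what such a tool might look like, assuming a simple JSON file as the backing store (the class name, path, and method are hypothetical, not part of the library):

```python
import json
import os
import tempfile


class MemoryStore:
    """Hypothetical sketch: a save_memory tool the agent could call itself."""

    def __init__(self, path):
        self.path = path
        self.memories = []

    def save_memory(self, text):
        # The agent would invoke this tool when it decides a memory is
        # worth keeping, then receive the confirmation string back.
        self.memories.append(text)
        with open(self.path, "w") as f:
            json.dump(self.memories, f)
        return f"Saved memory: {text!r}"


# Example usage with a throwaway temp file:
store = MemoryStore(os.path.join(tempfile.mkdtemp(), "memories.json"))
store.save_memory("user prefers short answers")
```

Returning a confirmation string matters here: the tool result gets fed back into the conversation, so the agent knows the save succeeded.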
When using "response = agent.send(message)", I'm getting "Object of type generator is not JSON serializable".
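That error is consistent with `send()` now returning a generator (the streaming change above) while something downstream tries to JSON-serialize the response. Collecting the tokens into a string first should sidestep it; a minimal sketch with a stand-in for `agent.send()`:

```python
import json


def send(message):
    # Stand-in for agent.send(); in streaming mode it yields tokens.
    yield from ["Hello", "!"]


response = send("hi")
if not isinstance(response, str):
    response = "".join(response)  # collect the stream into a plain string

payload = json.dumps({"reply": response})  # serializes fine now
```

`json.dumps` raises `TypeError: Object of type generator is not JSON serializable` for any generator, which matches the message you're seeing.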
I noticed that handle_message in ToolManager has been updated quite a bit, and I'm trying to understand how it returns a response to the agent (e.g., for use in tests/main.py).