Private chat with local GPT with documents, images, video, etc. 100% private, Apache 2.0. Supports oLLaMa, Mixtral, llama.cpp, and more. Demo: https://gpt.h2o.ai/ https://gpt-docs.h2o.ai/
Example prompts:
Make a matplotlib plot of the Titanic survivors vs. other parameters in an interesting way, and save it as titanic.png.
Today is August 31, 2024. Write Python code to plot TSLA's and META's stock price gains YTD vs. time per week, and save the plot to a file named 'stock_gains.png'
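A rough sketch of the kind of code the stock-gains prompt above should elicit. To stay self-contained it uses synthetic weekly YTD gain numbers instead of live quotes; real agent output would fetch TSLA/META closing prices (e.g. via a market-data library) and compute gains from them.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so no display is required
import matplotlib.pyplot as plt
import numpy as np

# Synthetic weekly YTD gains (percent) for 35 weeks up to Aug 31, 2024.
# Placeholder data: real code would derive these from downloaded prices.
rng = np.random.default_rng(0)
weeks = np.arange(1, 36)
tsla_gains = np.cumsum(rng.normal(0.2, 2.0, size=weeks.size))
meta_gains = np.cumsum(rng.normal(0.5, 1.5, size=weeks.size))

fig, ax = plt.subplots(figsize=(8, 4))
ax.plot(weeks, tsla_gains, label="TSLA")
ax.plot(weeks, meta_gains, label="META")
ax.set_xlabel("Week of 2024")
ax.set_ylabel("YTD gain (%)")
ax.set_title("TSLA vs. META YTD gains per week")
ax.legend()
fig.savefig("stock_gains.png")
```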
[x] Code execution agent
[x] Add OpenAI Files API
[x] Return files in a way supported by the OpenAI Files API: give back file objects instead of disk locations, and use usage to return file_ids
[x] Prevent autogen from going on OpenAI server
[ ] Handle pip install issues the way AgentZero does
[x] Avoid newline in streaming of iostream
[ ] Remove empty vs. None messages
[ ] Collapse assistants
[ ] Check how continue works; it seems to get stuck in a loop (e.g. when max_tokens is small)
[ ] Control Docker container lifetime externally; stopping the container after each chat is too slow (~10 s)
[x] If pyautogen is not installed, don't allow autogen_server=True
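For the "avoid newline in streaming of iostream" item, a minimal sketch of the idea (the function and names here are illustrative, not autogen's actual IOStream API): write each streamed chunk directly and flush, instead of using print(), which would append a newline after every chunk.

```python
import io
import sys

def stream_chunks(chunks, out=None):
    """Write streamed chunks without inserting a newline per chunk."""
    out = out or sys.stdout
    for chunk in chunks:
        # print(chunk) would emit "\n" after every token; write() does not.
        out.write(chunk)
        out.flush()

buf = io.StringIO()
stream_chunks(["Hello", ", ", "world"], out=buf)
print(buf.getvalue())  # prints: Hello, world
```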
[ ] DinD needs --privileged on docker run for Docker-in-Docker. Better is Docker-outside-of-Docker: https://microsoft.github.io/autogen/docs/topics/code-execution/cli-code-executor/#combining-autogen-in-docker-with-a-docker-based-executor i.e. pass -v /var/run/docker.sock:/var/run/docker.sock to the base docker run
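The Docker-outside-of-Docker setup above can be sketched as follows (image names are placeholders):

```shell
# Docker-outside-of-Docker: mount the host's Docker socket so code
# executors inside the container start sibling containers on the host.
docker run -it \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-agent-image

# Docker-in-Docker alternative (needs --privileged; slower and less safe):
# docker run -it --privileged my-dind-image
```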
Autogen docs:
https://microsoft.github.io/autogen/docs/tutorial/conversation-patterns/
https://microsoft.github.io/autogen/docs/Installation#option-2-install-autogen-locally-using-virtual-environment
https://microsoft.github.io/autogen/docs/tutorial/code-executors/
Some discussions:
https://www.reddit.com/r/LangChain/comments/1db6evc/best_production_agent_framework_langraph_vs/
https://www.reddit.com/r/LangChain/comments/1b7q44y/autogen_vs_langgraph/
LangChain tools with autogen: https://github.com/microsoft/autogen/blob/main/notebook/agentchat_langchain.ipynb