mehulgupta2016154 opened 4 days ago
This video seems to demonstrate the usage of Ollama with Magentic-One.
That is my video posted above :) I have a semi-functional fork of this that works with Ollama and was tested with llama-3.2-11b-vision. Here is a link to the repo: https://github.com/OminousIndustries/autogen-llama3.2
The install steps should be the same as the regular Magentic-One install. You can ignore the "Environment Configuration for Chat Completion Client" section, since the model info is hard-coded into utils.py in my repo (a current limitation, as it ties the fork to llama-3.2-11b-vision), but since I was using that model for testing, it worked for my purposes!
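For anyone adapting the fork to another model, the underlying idea is to point an OpenAI-compatible chat completion client at Ollama's local endpoint and declare the model's capabilities yourself, which is what the hard-coded utils.py does. A minimal sketch, assuming the 0.4-style autogen-ext `OpenAIChatCompletionClient` and Ollama's OpenAI-compatible endpoint at http://localhost:11434/v1; the import path, model tag, and `model_info` fields are assumptions that may differ across releases:

```python
# Sketch: point an OpenAI-compatible client at a local Ollama server.
# Assumes the autogen-ext package; import paths vary between releases.
from autogen_ext.models.openai import OpenAIChatCompletionClient

client = OpenAIChatCompletionClient(
    model="llama3.2-vision:11b",           # assumed tag of a model pulled into Ollama
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="ollama",                      # placeholder; Ollama does not check the key
    model_info={                           # capabilities Ollama cannot report itself
        "vision": True,                    # llama-3.2-11b-vision accepts images
        "function_calling": False,         # set per the actual model's abilities
        "json_output": False,
        "family": "unknown",
    },
)
```

Declaring `model_info` by hand is the part that utils.py pins down in the fork; swapping the model string and these capability flags is roughly what generalizing it would take.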
What feature would you like to be added?
How can Magentic-One be used with local LLMs or Ollama?
Why is this needed?
This will enable users to run Magentic-One with open-source LLMs rather than only models behind the OpenAI API.
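For completeness, here is a hedged end-to-end sketch of what running Magentic-One against a local Ollama model could look like. It assumes the 0.4-style autogen packages (autogen-agentchat, autogen-ext) and their `MagenticOne` convenience class; those names are assumptions that may not match every release, and whether a given local model actually works will depend on its tool-calling and vision abilities:

```python
# End-to-end sketch: drive Magentic-One with a local Ollama model.
# Assumes autogen-agentchat / autogen-ext 0.4-style APIs; names vary by release.
import asyncio

from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_ext.teams.magentic_one import MagenticOne

async def main() -> None:
    client = OpenAIChatCompletionClient(
        model="llama3.2-vision:11b",           # assumed local model tag
        base_url="http://localhost:11434/v1",  # local Ollama server
        api_key="ollama",                      # placeholder; not checked by Ollama
        model_info={
            "vision": True,
            "function_calling": True,  # assumption: Magentic-One's agents make
                                       # tool calls, so the model must support this
            "json_output": False,
            "family": "unknown",
        },
    )
    team = MagenticOne(client=client)
    # Stream the orchestrator's progress to the terminal.
    await Console(team.run_stream(task="Summarize the README of the AutoGen repo."))

asyncio.run(main())
```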