-
I have to use a local model for certain reasons, but the README does not mention how to adjust the configuration to call a local model. If you can tell me how, I would be very grateful.
-
I would suggest the `Ollama` API, as it is well documented and supports many LLMs.
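For example, here is a minimal sketch using Ollama's OpenAI-compatible endpoint. It assumes Ollama is running locally on its default port and that a model (here `llama3`, as an example) has already been pulled:

```python
# Minimal sketch: call a local Ollama server through its
# OpenAI-compatible endpoint. Assumes `ollama serve` is running on the
# default port and a model has been pulled, e.g. `ollama pull llama3`.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's default address
    api_key="ollama",  # Ollama ignores the key, but the client requires one
)

response = client.chat.completions.create(
    model="llama3",
    messages=[{"role": "user", "content": "Hello from a local model!"}],
)
print(response.choices[0].message.content)
```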
-
[Edit 7/20/23]: Let's use Llama 2. AWS / Azure might have hosted versions too, so local hosting may not even be needed.
If there's any ticket where I need engagement from the community, it's this one. Adding the ability for …
-
### What feature would you like to be added?
How can Magentic-One be used with local LLMs or Ollama?
### Why is this needed?
This will enable users to use Magentic-One with open-source LLMs other than …
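A rough sketch of what this could look like, assuming AutoGen's `OpenAIChatCompletionClient` can be pointed at an OpenAI-compatible local endpoint; the exact `model_info` fields and constructor arguments are assumptions and may differ between autogen versions:

```python
# Sketch: point AutoGen's OpenAI client at a local Ollama server so
# Magentic-One agents use a local model. Field names are assumptions
# and may vary between autogen releases.
from autogen_ext.models.openai import OpenAIChatCompletionClient

model_client = OpenAIChatCompletionClient(
    model="llama3.1",                      # any model pulled into Ollama
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible API
    api_key="ollama",                      # placeholder; Ollama ignores it
    model_info={                           # capabilities the client can't probe
        "vision": False,
        "function_calling": True,
        "json_output": True,
        "family": "unknown",
    },
)
```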
-
This does not work in the Cursor Settings section specifically; it may need entirely new instructions, since a couple of the sections are not available there.
-
We don't have an API key for OpenAI, but we do have other LLMs, such as Ollama.
May I ask if it is possible to call another LLM through the API? If so, how can I configure it?
Thanks
`python tests/te…
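One common pattern, sketched below on the assumption that the project uses the official `openai` Python SDK: the SDK reads `OPENAI_BASE_URL` and `OPENAI_API_KEY` from the environment, so an OpenAI-compatible local server can be swapped in without code changes:

```python
# Sketch: redirect the openai SDK to a local server via environment
# variables, so existing commands (e.g. `python tests/...`) can run
# unchanged. Assumes the project uses the official openai Python SDK,
# which reads these variables when the client is constructed.
import os

os.environ["OPENAI_BASE_URL"] = "http://localhost:11434/v1"  # local Ollama
os.environ["OPENAI_API_KEY"] = "not-needed"  # local servers ignore the key

from openai import OpenAI

client = OpenAI()  # picks up the variables set above
reply = client.chat.completions.create(
    model="llama3",
    messages=[{"role": "user", "content": "ping"}],
)
print(reply.choices[0].message.content)
```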
-
I would like to save some cash during the initial experimentation / reverse-engineering / documentation wave of development on the project, so I don't want to hit OpenAI's API too much, especially for…
-
Would love to have local LLM support through LM Studio or Ollama.
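Both expose OpenAI-compatible local servers, so support could reduce to a single configurable base URL. A sketch, assuming the documented default ports of each:

```python
# Sketch: LM Studio and Ollama differ only in their default base URL,
# so one configurable setting can cover both. Ports are the documented
# defaults and are assumptions about the user's setup.
from openai import OpenAI

LM_STUDIO = "http://localhost:1234/v1"  # LM Studio's local server default
OLLAMA = "http://localhost:11434/v1"    # Ollama's OpenAI-compatible default

client = OpenAI(base_url=LM_STUDIO, api_key="not-needed")
```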
-
### Do you need to file an issue?
- [x] I have searched the existing issues and this bug is not already filed.
- [x] My model is hosted on OpenAI or Azure. If not, please look at the "model providers…
-
TLDR: Create and Test Local LLMs for Podcastify
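A minimal smoke-test sketch for that goal; the test name and model are hypothetical, and it assumes an OpenAI-compatible local server such as Ollama on its default port:

```python
# Hypothetical smoke test: verify a local model responds at all before
# wiring it into Podcastify. Skips cleanly if no local server is running.
import pytest
from openai import OpenAI

LOCAL_BASE_URL = "http://localhost:11434/v1"  # assumption: Ollama default


def test_local_llm_responds():
    client = OpenAI(base_url=LOCAL_BASE_URL, api_key="not-needed")
    try:
        reply = client.chat.completions.create(
            model="llama3.1",  # assumption: any locally pulled model
            messages=[{"role": "user", "content": "Reply with the word: ok"}],
        )
    except Exception:
        pytest.skip("no local OpenAI-compatible server running")
    assert reply.choices[0].message.content
```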