nirelbaz / meniai

MeniAI is a CLI tool that lets developers create task-specific AI bots for automating and streamlining development tasks. These bots evolve with user feedback and can be used interactively or fully automated in CI/CD pipelines.

[FEATURE] Support Ollama #4

Open leonj1 opened 1 week ago

leonj1 commented 1 week ago

Is your feature request related to a problem? Please describe. Closed-source LLMs can get expensive, so there is a large community of developers who host their own smaller local LLMs and run Ollama.

Describe the solution you'd like Support for a self-hosted Ollama server.

Describe alternatives you've considered The exo project also exists and supports clustering multiple GPUs. I've not tried it, but there is also a community of people using Exo to run larger models: https://github.com/exo-explore/exo


nirelbaz commented 1 week ago

Thank you for your feature request regarding support for self-hosted LLMs, specifically Ollama. I understand the desire to use open-source alternatives, especially considering the potential cost of closed-source LLMs. Let me address your suggestion:

Ollama Support: Adding support for Ollama is an interesting idea that could benefit many users. MeniAI currently supports both OpenAI's ChatGPT and Anthropic's Claude, as seen in our src/integrations/llm-provider.ts file:
export function llmProvider(llm: LLMProvider, context: Context | string): BaseLLM {
  switch (llm) {
    case 'openai':
      return new ChatGPT(context);
    case 'claude':
      return new Claude(context);
    default:
      throw new Error(`Invalid LLM: ${llm}`);
  }
}

To implement Ollama support, we would need to create a new integration similar to our existing LLM providers. Here's a high-level plan for implementing this:

a. Create a new OllamaProvider class in the src/integrations directory (a rough sketch follows this list).
b. Implement the necessary methods to communicate with an Ollama server.
c. Modify the LLMProvider interface to accommodate Ollama-specific configurations.
d. Update src/cli/command-options.ts to include Ollama-related options (such as the server URL).
e. Extend the llmProvider function to include the new Ollama option (sketched after the plan below).
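To make steps a and b concrete, here is a rough sketch of what the new provider might look like. This is not existing code: the import paths, the BaseLLM constructor and sendMessage method, and the default server URL and model name are all assumptions that would need to be aligned with MeniAI's actual BaseLLM class and configuration.

import { BaseLLM } from './base-llm'; // assumed path, for illustration only
import { Context } from '../types';   // assumed path, for illustration only

// Sketch of an OllamaProvider. Ollama exposes a REST API (by default on
// http://localhost:11434); its /api/chat endpoint accepts a model name and a
// list of messages and, with stream: false, returns a single JSON response.
export class OllamaProvider extends BaseLLM {
  constructor(
    context: Context | string,
    private baseUrl: string = 'http://localhost:11434', // assumed default
    private model: string = 'llama3'                     // assumed default
  ) {
    super(context);
  }

  // Method name is an assumption; it should match whatever BaseLLM requires.
  async sendMessage(prompt: string): Promise<string> {
    const response = await fetch(`${this.baseUrl}/api/chat`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        model: this.model,
        messages: [{ role: 'user', content: prompt }],
        stream: false,
      }),
    });

    if (!response.ok) {
      throw new Error(`Ollama request failed: ${response.status} ${response.statusText}`);
    }

    const data = await response.json();
    return data.message.content;
  }
}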

Here's my proposed plan:

  1. Research and design the Ollama integration, ensuring it fits well within MeniAI's current architecture.
  2. Implement the OllamaProvider class and necessary CLI options.
  3. Add documentation on how to use MeniAI with a self-hosted Ollama server.
  4. Test the integration thoroughly to ensure it works seamlessly with the rest of MeniAI's features.
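To illustrate step e above (and plan step 2), the factory in src/integrations/llm-provider.ts could gain an additional case. This is only a sketch: the 'ollama' value and the OllamaProvider constructor shape are assumptions, and the LLMProvider type would also need to be widened to accept the new value (step c).

export function llmProvider(llm: LLMProvider, context: Context | string): BaseLLM {
  switch (llm) {
    case 'openai':
      return new ChatGPT(context);
    case 'claude':
      return new Claude(context);
    case 'ollama':
      // New case; the server URL and model would come from the Ollama-related
      // CLI options added in step d rather than being hard-coded here.
      return new OllamaProvider(context);
    default:
      throw new Error(`Invalid LLM: ${llm}`);
  }
}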

I think this addition would make MeniAI more versatile and accessible to a wider range of users, and it would provide an open-source alternative to our existing closed-source LLM integrations, giving users more flexibility in choosing their preferred AI model.