al-swaiti / ComfyUI-OllamaGemini

AI API text generation
MIT License

GeminiOllama ComfyUI Extension

This extension integrates Google's Gemini API and Ollama into ComfyUI, letting you call a cloud-hosted model (Gemini) or a locally hosted model (Ollama) directly within your ComfyUI workflows.

Features

Installation

  1. Clone this repository into your ComfyUI's custom_nodes directory:

    cd /path/to/ComfyUI/custom_nodes
    git clone https://github.com/al-swaiti/ComfyUI-OllamaGemini.git
  2. Install the required dependencies:

    pip install google-generativeai requests vtracer

Configuration

Gemini API Key Setup

  1. Go to Google AI Studio.
  2. Create a new API key or use an existing one.
  3. Copy the API key.
  4. Create a config.json file in the extension directory with the following content:
    {
      "GEMINI_API_KEY": "your_api_key_here"
    }

Ollama Setup

  1. Install Ollama by following the instructions on the Ollama GitHub page.
  2. Start the Ollama server (usually runs on http://localhost:11434).
  3. Add the Ollama URL to your config.json:
    {
      "GEMINI_API_KEY": "your_api_key_here",
      "OLLAMA_URL": "http://localhost:11434"
    }
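The config-reading helpers listed under Main Functions presumably parse this file. A minimal sketch of how that lookup might work (the `path` parameter and the fallback values are illustrative assumptions, not the extension's exact code):

```python
import json

def load_config(path="config.json"):
    # The extension keeps its settings in config.json in the extension
    # directory; a missing file simply yields an empty config.
    try:
        with open(path) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}

def get_gemini_api_key(path="config.json"):
    return load_config(path).get("GEMINI_API_KEY", "")

def get_ollama_url(path="config.json"):
    # Fall back to Ollama's default local address when the key is absent.
    return load_config(path).get("OLLAMA_URL", "http://localhost:11434")
```

With this shape, a missing or incomplete config.json degrades gracefully: Ollama still resolves to its default local URL, while an empty Gemini key can be surfaced as a clear error at generation time.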

Usage

After installation and configuration, a new node called "Gemini Ollama API" will be available in ComfyUI.

Input Parameters

Output

Main Functions

  1. get_gemini_api_key(): Retrieves the Gemini API key from the config file.
  2. get_ollama_url(): Gets the Ollama URL from the config file.
  3. generate_content(): Main function to generate content based on the chosen API and parameters.
  4. generate_gemini_content(): Handles content generation for Gemini API.
  5. generate_ollama_content(): Manages content generation for Ollama API.
  6. tensor_to_image(): Converts a tensor to a PIL Image for vision-based tasks.
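As a rough sketch of two of these functions: the request shape below follows Ollama's documented /api/generate endpoint, and the tensor layout assumes ComfyUI's usual float images in [0, 1] with shape (batch, height, width, channels) — both are assumptions for illustration, not the extension's exact implementation:

```python
import numpy as np
import requests
from PIL import Image

def build_ollama_payload(prompt, model="llama3"):
    # Ollama's /api/generate takes a JSON body; stream=False returns the
    # whole completion as a single JSON object instead of a stream.
    return {"model": model, "prompt": prompt, "stream": False}

def generate_ollama_content(prompt, model="llama3",
                            url="http://localhost:11434"):
    # POST to the local Ollama server and extract the generated text
    # from the "response" field of the reply.
    resp = requests.post(f"{url}/api/generate",
                         json=build_ollama_payload(prompt, model),
                         timeout=120)
    resp.raise_for_status()
    return resp.json()["response"]

def tensor_to_image(tensor):
    # Take the first image in the batch and convert it from float
    # values in [0, 1] to an 8-bit PIL Image for vision-based tasks.
    arr = np.asarray(tensor)[0]
    arr = np.clip(arr * 255.0, 0, 255).astype(np.uint8)
    return Image.fromarray(arr)
```

Separating payload construction from the HTTP call keeps the request shape testable without a running Ollama server.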

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

License

This project is licensed under the MIT License - see the LICENSE file for details.