PyRepair / maniple


Extend MANIPLE to support open-source LLMs using Ollama #16

Closed gauransh closed 4 months ago

gauransh commented 6 months ago

Description

Hello,

Thank you for your excellent work on MANIPLE. I have a suggestion to extend the capabilities of MANIPLE to support open-source language models. Specifically, I recommend integrating MANIPLE with an open-source platform like Ollama that allows running various open LLMs locally.

Ollama provides a user-friendly way to download, manage, and interact with open-source models like LLaMA, Mistral, Gemma, and others. By leveraging Ollama, MANIPLE could potentially reach a broader audience and provide more flexible solutions for automated program repair using different open LLMs.
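For context, the ollama Python client exposes a very small API; a minimal sketch of pulling a model and prompting it locally (the model name here is only an example) looks like this:

import ollama

# Download the model locally (equivalent to `ollama pull mistral` on the CLI)
ollama.pull("mistral")

# Prompt the model and read the generated text
result = ollama.generate(model="mistral", prompt="Explain what a null pointer dereference is.")
print(result["response"])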

I believe this integration could significantly benefit the community and further advance the state-of-the-art in automated program repair by enabling experimentation with various open models.

Motivation

Integrating MANIPLE with an open-source LLM platform like Ollama would make MANIPLE more accessible to researchers and developers who prefer or require using open models. This could increase adoption and contributions to the MANIPLE project.

Additionally, the architectures and training data of open LLMs available through Ollama may provide capabilities or knowledge that complement MANIPLE's existing model. Exploring the synergies between MANIPLE and various open models could yield even better performance on the fact selection and bug repair tasks.

Proposed Solution

One approach would be to add support for using Ollama and the open LLMs it enables as an alternative backend for MANIPLE. The key steps would be:

  1. Modify the LLM interface in MANIPLE to support Ollama's API for downloading and prompting open models.
  2. Test MANIPLE's prompt generation and fact selection pipeline with open LLMs from Ollama to ensure compatibility.
  3. Evaluate MANIPLE's bug-fixing performance when using open models versus the current model to understand the impact.
  4. Document how users can configure MANIPLE to use Ollama and select different open LLMs.
  5. Consider fine-tuning open models on the MANIPLE fact databases to further specialize them for this use case.

Here is one possible way to integrate it (untested, but this is the idea), using the ollama Python client within maniple/utils/openai_utils.py:

import os

import ollama

# Create an Ollama client; the model is selected per request below
ollama_client = ollama.Client()
OLLAMA_MODEL = "model_name_here"  # e.g. "llama3" or "mistral"

def get_and_save_response_with_fix_path(prompt: str, actual_group_bitvector: str, database_dir: str,
                                        project_name: str, bug_id: str, trial: int, data_to_store: dict = None) -> dict:
    # Create the output directory for this project/bug/bitvector if it does not exist
    output_dir = os.path.join(database_dir, project_name, bug_id, actual_group_bitvector)
    os.makedirs(output_dir, exist_ok=True)

    responses = []
    for _ in range(trial):
        try:
            # Generate a response with the Ollama client; num_predict caps the output length
            result = ollama_client.generate(model=OLLAMA_MODEL, prompt=prompt,
                                            options={"num_predict": 1024})
            responses.append(result["response"])
        except Exception as e:
            print_in_red("An error occurred: " + str(e))
            continue

    # Process and save responses 
    # More code here to handle responses...

    return responses

def get_response_with_valid_patch(prompt: str, trial: int) -> list:
    responses = []
    for _ in range(trial):
        try:
            response = ollama_client.generate(model=OLLAMA_MODEL, prompt=prompt,
                                              options={"num_predict": 1024})
            # Here you would validate the patch and process it before appending
            responses.append(response["response"])
        except Exception as e:
            print_in_red("An error occurred: " + str(e))
            continue

    return responses

# Rest of the module unchanged
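Going a step further, steps 1 and 4 above could be combined by hiding both backends behind a small interface and selecting one via configuration. This is only a hypothetical sketch: the function names, the MANIPLE_LLM_BACKEND / MANIPLE_OLLAMA_MODEL environment variables, and the OpenAI call shown here are illustrative, not MANIPLE's actual interface.

import os

import ollama
from openai import OpenAI

def complete_with_openai(prompt: str, max_tokens: int = 1024) -> str:
    # Existing behavior: call the OpenAI chat completions API
    client = OpenAI()
    result = client.chat.completions.create(
        model=os.environ.get("MANIPLE_OPENAI_MODEL", "gpt-3.5-turbo"),
        messages=[{"role": "user", "content": prompt}],
        max_tokens=max_tokens,
    )
    return result.choices[0].message.content

def complete_with_ollama(prompt: str, max_tokens: int = 1024) -> str:
    # New behavior: call a locally running Ollama server
    result = ollama.generate(
        model=os.environ.get("MANIPLE_OLLAMA_MODEL", "mistral"),
        prompt=prompt,
        options={"num_predict": max_tokens},
    )
    return result["response"]

def complete(prompt: str, max_tokens: int = 1024) -> str:
    # Pick the backend from an environment variable so callers stay unchanged
    backend = os.environ.get("MANIPLE_LLM_BACKEND", "openai")
    if backend == "ollama":
        return complete_with_ollama(prompt, max_tokens)
    return complete_with_openai(prompt, max_tokens)

With a switch like this, get_and_save_response_with_fix_path and get_response_with_valid_patch would only ever call complete(...), and users could pick a backend with, for example, MANIPLE_LLM_BACKEND=ollama.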

Alternatives Considered

Rather than directly integrating Ollama into MANIPLE, an alternative would be to provide documentation and examples for how MANIPLE's techniques could be adapted to open models enabled by Ollama. However, directly supporting Ollama in MANIPLE would provide a more seamless experience and make it easier to compare models.

Additional Context

Ollama website: https://ollama.com/
Ollama GitHub org: https://github.com/ollama
Ollama Python client: https://github.com/ollama/ollama-python

Thank you for considering this suggestion.

jerryyangboyu commented 6 months ago

Hey Gauransh,

Thanks for your insightful and detailed suggestion. Support for multiple LLMs is on the way, and a clearer roadmap will be out shortly.