containers/ramalama

The goal of RamaLama is to make working with AI boring.

Add llama-cpp-python server #452

Open ericcurtin opened 1 week ago

ericcurtin commented 1 week ago

Added 'llama-cpp-python' as a runtime option for the --runtime flag, giving more flexibility in how models are served, and changed the default runtime from 'llama.cpp' to 'llama-cpp-python'.

Summary by Sourcery

Add 'llama-cpp-python' as a new runtime option and set it as the default runtime, enhancing flexibility in model serving.


sourcery-ai[bot] commented 1 week ago

Reviewer's Guide by Sourcery

This PR changes the default runtime from 'llama.cpp' to 'llama-cpp-python' and adds support for the 'llama-cpp-python' server implementation. The changes involve modifying the server execution logic and updating the CLI configuration to accommodate the new runtime option.

Sequence diagram for server execution logic

sequenceDiagram
    participant User
    participant CLI
    participant Model
    User->>CLI: Run command with --runtime flag
    CLI->>Model: Pass runtime argument
    alt Runtime is vllm
        Model->>CLI: Execute vllm server
    else Runtime is llama.cpp
        Model->>CLI: Execute llama-server
    else Runtime is llama-cpp-python
        Model->>CLI: Execute llama_cpp.server
    end
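To make the diagram concrete, here is a rough sketch of the three-way branch it describes. The function name, argument names and exact flags below are assumptions for illustration only, not the literal ramalama/model.py code.

```python
# Illustrative sketch of the runtime branching described in the review guide.
# build_serve_command, args.port and the flag spellings are assumed names,
# not the actual RamaLama implementation.
def build_serve_command(args, model_path):
    if args.runtime == "vllm":
        return ["vllm", "serve", "--port", str(args.port), model_path]
    if args.runtime == "llama-cpp-python":
        # llama-cpp-python ships an OpenAI-compatible server as a Python module
        return ["python3", "-m", "llama_cpp.server",
                "--model", model_path, "--port", str(args.port)]
    # llama.cpp path: the llama-server binary
    return ["llama-server", "--port", str(args.port), "-m", model_path]
```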

Updated class diagram for runtime configuration

classDiagram
    class CLI {
        -runtime: String
        +configure_arguments(parser)
    }
    class Model {
        +serve(args)
    }
    CLI --> Model: uses
    note for CLI "Updated default runtime to 'llama-cpp-python' and added it as a choice"
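On the CLI side, the change the note describes amounts to roughly the following argparse configuration. This is a hedged sketch based on the guide; the surrounding parser setup and help wording are assumptions, not the literal ramalama/cli.py code.

```python
# Sketch of the --runtime option after this PR; the parser setup is assumed.
import argparse

parser = argparse.ArgumentParser(prog="ramalama")
parser.add_argument(
    "--runtime",
    default="llama-cpp-python",  # previously "llama.cpp"
    choices=["llama.cpp", "llama-cpp-python", "vllm"],
    help="specify the runtime to use (llama.cpp, llama-cpp-python or vllm)",
)
```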

File-Level Changes

Added llama-cpp-python as a new runtime option and made it the default (ramalama/cli.py):
  • Added 'llama-cpp-python' as a new choice in the runtime options
  • Changed the default runtime from 'llama.cpp' to 'llama-cpp-python'
  • Updated the help text to include the new runtime option

Implemented server execution logic for the llama-cpp-python runtime (ramalama/model.py):
  • Restructured the server execution logic to handle three different runtimes
  • Added specific command construction for the llama-cpp-python server
  • Maintained the existing logic for the vllm and llama.cpp runtimes
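Once a model is being served with the llama-cpp-python runtime, a quick way to check that the server is responding is to query its OpenAI-compatible /v1/models endpoint. The host and port below are assumptions, not RamaLama defaults; adjust them to whatever the serve command was given.

```python
# Hypothetical smoke check against a running llama_cpp.server instance.
# 127.0.0.1:8080 is an assumed host/port for illustration.
import json
import urllib.request

with urllib.request.urlopen("http://127.0.0.1:8080/v1/models") as resp:
    payload = json.load(resp)

print([model["id"] for model in payload.get("data", [])])
```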

ericcurtin commented 1 week ago

@cooktheryan @lsm5 @mrunalp @slp @rhatdan @tarilabs @umohnani8 @ygalblum PTAL

ericcurtin commented 1 week ago

@ygalblum we probably need to push some container images before merging this, but when we do that, we should be all good.