OVOSHatchery / ovos-solver-plugin-llmcpp

Apache License 2.0
Status: proof-of-concept, work-in-progress

LLM.cpp Persona

Give OpenVoiceOS some sass with Alpaca.cpp, GPT4All.cpp or Bloomz.cpp

This plugin needs the path to a chat executable and to a model file. It drives the executable through subprocess, so it can work with these programs without requiring Python bindings (see the sketch below)
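Under the hood the solver simply shells out to the chat binary. The snippet below is only an illustrative sketch of that subprocess approach, not the plugin's actual implementation; the -m (model path) and -p (prompt) flags are assumptions based on typical llama.cpp-style chat programs:

import subprocess
from os.path import expanduser


def ask_chat_binary(executable: str, model: str, prompt: str) -> str:
    # Illustrative only: run a llama.cpp-style chat executable once and
    # capture whatever it prints. The -m/-p flags are assumed here, not
    # taken from this plugin's real invocation.
    result = subprocess.run(
        [expanduser(executable), "-m", expanduser(model), "-p", prompt],
        capture_output=True, text=True, timeout=120)
    return result.stdout.strip()

Because only a path to the compiled binary is needed, any of the chat programs above can be swapped in without adding Python dependencies.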

Dedicated plugins may exist for each LLM

Examples

Usage

Spoken answers API

from ovos_solver_llmcpp import LLMcppSolver

ALPACA_MODEL_FILE = "./models/ggml-alpaca-7b-q4.bin"
GPT4ALL_MODEL_FILE = "./models/gpt4all-lora-quantized.bin"

# binpath = "~/alpaca.cpp/chat"
binpath = "~/gpt4all.cpp/chat"

bot = LLMcppSolver({"model": GPT4ALL_MODEL_FILE,
                    "executable_path": binpath})

# query asked in Portuguese ("What is your favorite animal?")
sentence = bot.spoken_answer("Qual é o teu animal favorito?", {"lang": "pt-pt"})
# Meus animais favoritos são cães, gatos e tartarugas!
# ("My favorite animals are dogs, cats and turtles!")

for q in ["Does god exist?",
          "what is the speed of light?",
          "what is the meaning of life?",
          "What is your favorite color?",
          "What is best in life?"]:
    a = bot.get_spoken_answer(q)
    print(q, a)
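As with other OVOS solver plugins, spoken_answer is expected to handle translation to and from the language passed in the context dict (here "pt-pt"), while get_spoken_answer returns the model's reply directly.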