This library aims to provide a high-level interface to run large language models in Godot, following Godot's node-based design principles.
Example usage:

```gdscript
@onready var llama_context = %LlamaContext

var messages = [
	{ "sender": "system", "text": "You are a pirate chatbot who always responds in pirate speak!" },
	{ "sender": "user", "text": "Who are you?" }
]
var prompt = ChatFormatter.apply("llama3", messages)
var completion_id = llama_context.request_completion(prompt)

while true:
	var response = await llama_context.completion_generated
	print(response["text"])
	if response["done"]:
		break
```
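Because `completion_generated` streams the response in chunks, a multi-turn chat needs to collect those chunks and feed the reply back into the message list. A minimal sketch, using only the API shown above and assuming the same response dictionary shape (`"text"`, `"done"`) and that the chat template accepts an `"assistant"` sender:

```gdscript
# Sketch: accumulate streamed chunks into the full reply, then append it
# to `messages` so the next prompt carries the whole conversation.
# The "assistant" sender name is an assumption, not a documented value.
var reply := ""
while true:
	var response = await llama_context.completion_generated
	reply += response["text"]
	if response["done"]:
		break

messages.append({ "sender": "assistant", "text": reply })
var next_prompt = ChatFormatter.apply("llama3", messages)
```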
Platform and compute backend support:

| Platform | CPU | Metal | Vulkan | CUDA |
| -------- | --- | ----- | ------ | ---- |
| macOS    | ✅  | ✅    | ❌     | ❌   |
| Linux    | ✅  | ❌    | ✅     | 🚧   |
| Windows  | ✅  | ❌    | 🚧     | 🚧   |
Clone the repository along with its submodules:

```shell
git clone --recurse-submodules https://github.com/hazelnutcloud/godot-llama-cpp.git
```
Copy the `godot-llama-cpp` addon folder in `godot/addons` to your Godot project's `addons` folder:

```shell
cp -r godot-llama-cpp/godot/addons/godot-llama-cpp <your_project>/addons
```
Alternatively, build the addon from source with Zig and install it directly into your project:

```shell
cd godot-llama-cpp
zig build --prefix <your_project>/addons/godot-llama-cpp
```
Finally, add a `LlamaContext` node to your scene.

This project is licensed under the MIT License - see the LICENSE file for details.