Closed: ulmentflam closed this pull request 2 weeks ago.
Just not interested, sorry.
Okay, No problem. I apologize for not adhering to the conventions.
I could also have gone through the manual install steps from https://github.com/ollama/ollama/blob/main/docs/linux.md#manual-install, which just install the binary; they don't install the NVIDIA or AMD drivers and leave the WSL2 passthrough up to the installer.
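For reference, a minimal sketch of what those manual steps amount to, based on the linked doc at the time of writing (the download URL and the exact service unit should be verified against the current docs rather than copied from here):

```bash
# Download the Ollama binary and make it executable
# (URL per the linked manual-install doc; verify against the current version)
sudo curl -L https://ollama.com/download/ollama-linux-amd64 -o /usr/bin/ollama
sudo chmod +x /usr/bin/ollama

# Run Ollama under a dedicated system user via systemd
sudo useradd -r -s /bin/false -m -d /usr/share/ollama ollama

sudo tee /etc/systemd/system/ollama.service >/dev/null <<'EOF'
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/bin/ollama serve
User=ollama
Group=ollama
Restart=always

[Install]
WantedBy=default.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now ollama
```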
Out of curiosity, what conventions did I miss besides using Ollama's install.sh script inside of another bash script?
header_info
I see, you use the slant ASCII font (https://github.com/tteck/Proxmox/pull/2605#issuecomment-1979668948). Thanks!
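For anyone else landing here: the convention in question is the header_info function each script defines, which prints the app name as a banner in figlet's slant font. A minimal sketch of the pattern (the banner below is what `figlet -f slant Ollama` should produce; double-check the exact art against the repo's other scripts):

```bash
# Banner convention used by the repo's scripts: a header_info function
# that clears the screen and prints the app name in figlet's slant font.
header_info() {
  clear
  cat <<"EOF"
   ____  ____
  / __ \/ / /___ _____ ___  ____ _
 / / / / / / __ `/ __ `__ \/ __ `/
/ /_/ / / / /_/ / / / / / / /_/ /
\____/_/_/\__,_/_/ /_/ /_/\__,_/
EOF
}
```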
Also, thanks for maintaining this; it is fantastic!
Description
This PR adds Ollama as an LXC. Ollama is a self-hosted API for running large language models. It supports the open Llama models, Phi, Mistral, Gemma, and more. This LXC benefits from GPU passthrough and installation of GPU drivers.
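As a quick illustration of what the container exposes once running: Ollama serves an HTTP API on port 11434 by default, so pulling a model and requesting a completion looks like this (the model name is just an example):

```bash
# Pull a model, then query the HTTP API that Ollama serves on port 11434
ollama pull llama2
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```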
Type of change