I'm trying to do a fresh setup of OpenLLM on my Linux box; here's what I'm seeing:
python3.11 -m venv ai
source ai/bin/activate
pip install openllm
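
(Sanity check before going further, in case it matters — these are just standard pip/shell commands, nothing OpenLLM-specific, to confirm the package actually landed inside the venv:)

# confirm the openllm entry point resolves from the venv, not system-wide
which openllm

# show where pip installed the package and which version it picked
pip show openllm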
(ai) user@wallflower ~/.venv $ openllm repo update
(ai) user@wallflower ~/.venv $ openllm hello
Detected Platform: linux
Detected Accelerators:
- NVIDIA GeForce RTX 4090 24GB
? Select a model gemma default Yes
? Select a version gemma:2b Yes
? Select an action 0. Run the model in terminal
$ export BENTOML_HOME=/home/user/.openllm/repos/github.com/bentoml/openllm-models/main/bentoml
$ source /home/user/.openllm/venv/608482142055440317/bin/activate
$ bentoml serve gemma:2b-instruct-fp16-f020 --port 33009
Model server started 2459797
Model loading...
/home/user/.openllm/venv/608482142055440317/bin/python: No module named bentoml
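
My first guess is that the per-model venv OpenLLM created is missing bentoml, since that's the interpreter named in the error. I think I can poke at it directly with something like this (the path is copied from the error above; the pip invocations are standard, assuming that venv was created with pip available):

# does the auto-created venv see bentoml at all?
/home/user/.openllm/venv/608482142055440317/bin/python -m pip list | grep -i bentoml

# if it's missing, would manually installing it there be a reasonable workaround?
/home/user/.openllm/venv/608482142055440317/bin/python -m pip install bentoml

That feels like treating the symptom rather than the cause, though.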
Here's the version information:
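
(For completeness, these are the commands I'd use to pull the versions — pip show is standard, and I'm assuming the openllm and bentoml CLIs both accept --version:)

python3.11 --version
pip show openllm bentoml
openllm --version
bentoml --version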
I'm not sure what to look at next to debug this. Anyone have any advice?