-
Hi friend,
When I run the LLM with the command `openllm start llama --model-id NousResearch/llama-2-13b-hf`, I get the error below. Could you please help me?
(base) ubuntu@VM-48-13-ubuntu…
-
### Describe the bug
I used git to push my model to the hub: [my model](https://huggingface.co/zyh3826/llama2-13b-ft-openllm-leaderboard-v1).
It can be seen on the web, but when I submit it to the […
-
Hello, I am using OpenLLM to serve Korean Polyglot models. I want to use OpenLLM's hot-swapping feature so that I can load multiple LoRA adapters based on the request, but I am facing an o…
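The per-request adapter selection described above can be sketched as a toy registry pattern. This is an illustrative sketch only, not OpenLLM's actual API: the class name, methods, and adapter names below are all hypothetical, and real adapter loading would pull LoRA weights from disk or the hub.

```python
# Toy sketch of the adapter hot-swap pattern (illustrative only; NOT OpenLLM's API).
# A registry maps adapter names to loaded weights, and each request activates
# the adapter it wants before generation.
class AdapterRegistry:
    def __init__(self):
        self._adapters = {}
        self._active = None

    def load(self, name, weights):
        # In a real server this would load LoRA weights from disk or the hub.
        self._adapters[name] = weights

    def activate(self, name):
        if name not in self._adapters:
            raise KeyError(f"adapter {name!r} is not loaded")
        self._active = name

    def generate(self, prompt):
        # Stand-in for generation: tag the output with the active adapter.
        return f"[{self._active}] {prompt}"


registry = AdapterRegistry()
registry.load("korean-chat", weights={})
registry.load("korean-summarize", weights={})

registry.activate("korean-chat")
out = registry.generate("안녕하세요")
```

The point of the pattern is that loading and activating are separate steps, so switching adapters between requests is a cheap dictionary lookup rather than a full reload.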
-
### System Info
accelerate==0.23.0
aiohttp==3.8.6
aiosignal==1.3.1
altair==5.1.2
annotated-types==0.6.0
anyio==3.7.1
appdirs==1.4.4
asgiref==3.7.2
asttokens==2.4.0
async-timeout==4.0.3
attr…
-
This is part 1 of an internal refactoring to provide a nicer and more flexible API for using LLM.
Note that these are mostly internal changes, and they shouldn't affect users much.
…
-
### Describe the bug
I'm using conda to create an env with Python 3.10.12, and I installed the related packages using
```bash
pip install "openllm[llama, vllm]"
```
When I start a llama service using
```bas…
-
### Feature request
I want to use OpenLLM with the available models on Apple M1/M2 processors (with GPU support) through MPS.
Today:
```
openllm start falcon
No GPU available, therefore this comm…
-
### Describe the bug
Hello,
I would like to try openllm offline, but I can't.
For my test, I downloaded the huggyllama--llama-7b model on another computer with internet access, and I copied the bento home to another c…
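For an air-gapped setup like this, the Hugging Face libraries can be told to read only from a local cache via the standard `HF_HUB_OFFLINE` and `TRANSFORMERS_OFFLINE` environment variables. A minimal sketch, assuming the cache was copied to the default location (adjust the path to wherever you copied it):

```python
import os

# Hedged sketch: before starting the server on the offline machine, force
# Hugging Face libraries to use only the local cache that was copied over.
# HF_HUB_OFFLINE / TRANSFORMERS_OFFLINE are standard Hugging Face env vars.
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"

# Default hub cache location; the copied model snapshot should live under here.
cache_dir = os.path.expanduser("~/.cache/huggingface/hub")
print("offline mode enabled, expecting cache at:", cache_dir)
```

These variables must be set in the environment of the process that starts the server, so in a shell you would `export` them before running the start command.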
-
I cloned the lm-evaluation-harness repo from main and followed the instructions to install it. Then I evaluated the model Qwen/Qwen1.5-7B on mmlu with the command below. The output mmlu score is 60.43, but the …
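One common source of such score discrepancies (an assumption here, not a confirmed diagnosis of this particular case) is how per-subtask accuracies are aggregated: MMLU has 57 subtasks of very different sizes, so macro-averaging the subtask accuracies generally gives a different number than micro-averaging over all questions. A small sketch with made-up subtask counts:

```python
# Illustrative only: (correct, total) per subtask -- the numbers are made up.
subtasks = {
    "abstract_algebra": (55, 100),
    "anatomy": (90, 135),
    "world_religions": (140, 171),
}

# Macro average: mean of per-subtask accuracies (every subtask weighted equally).
macro = sum(c / t for c, t in subtasks.values()) / len(subtasks)

# Micro average: pooled accuracy over all questions (large subtasks weigh more).
micro = sum(c for c, _ in subtasks.values()) / sum(t for _, t in subtasks.values())

print(f"macro={macro:.4f} micro={micro:.4f}")
```

When comparing against a leaderboard score, it is worth checking which aggregation the reference number used.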
-
### Describe the bug
Thank you for this amazing tool. I just tried running what's in the documentation:
```
openllm start facebook/opt-1.3b
```
### To reproduce
1. install the latest openllm `pip…