-
(ol) C:\Users\SNS>openllm start falcon --model-id tiiuae/falcon-7b
Downloading (…)lve/main/config.json: 100%|████████████████████████████████████████████████| 1.05k/1.05k [00:00
-
### Describe the bug
I tried the simple code example in Google Colab, and there is no response or error; it just hangs.
```
import openllm
client = openllm.client.HTTPClient("http://localhost:3000…
```
-
### Feature request
Support changing the httpx client properties, for example to disable SSL verification. Since the httpx client is loaded as part of the 'inner' class by default, there doesn't seem …
-
### Describe the bug
I am trying to host a flan-t5 model. After the model is hosted, when I send a request I get a 500 error saying "Response payload is not completed".
###…
-
How do I start a server with my own SFT model? I ran this command: `openllm start baichuan --model-id /root/autodl-tmp/pre_train_model`, but it doesn't work.
-
### Describe the bug
When I try to run `openllm`, it crashes with the following exception. I followed the steps explained in the `README`. Any idea what I might be missing?
```zsh
$ pip insta…
```
-
### Describe the bug
When running `openllm start opt` for the first time, the process fails after downloading `..._config.json`. I'm on a MacBook Pro M2. Here's the output:
```
(openllm) ➜ OpenL…
```
-
### Describe the bug
> No GPU available, therefore this command is disabled
But I think my GPU works well with PyTorch 😟
This is the third computer I've tried, and none of them are working 😩
…
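As a quick sanity check independent of OpenLLM, one can ask PyTorch directly whether it sees a CUDA device. A minimal sketch (the `gpu_status` helper is illustrative; `torch.cuda.is_available` and `torch.cuda.get_device_name` are real PyTorch APIs):

```python
def gpu_status() -> str:
    """Report whether PyTorch can see a CUDA GPU (illustrative helper)."""
    try:
        # Local import so the snippet degrades gracefully without torch.
        import torch
    except ImportError:
        return "torch not installed"
    if torch.cuda.is_available():
        return "CUDA available: " + torch.cuda.get_device_name(0)
    return "PyTorch is installed but sees no CUDA device"

print(gpu_status())
```

If this reports a CUDA device while OpenLLM still says "No GPU available", the problem is likely in how OpenLLM detects GPUs rather than in the PyTorch installation.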
-
I have an AMD Ryzen 5 5600G processor, which has an integrated GPU, and I do not have a separate graphics card. I am using `Linux Mint 21` Cinnamon.
I installed PyTorch with this command `pip3 install…
-
Dear Authors,
I recently had the pleasure of reading your "A Survey of Large Language Models" paper. The content is insightful and comprehensive, and it provides a remarkable reference point for those wh…