-
### Describe the bug
After running `openllm build dolly-v2` to download the dolly-v2 model, I attempted to run the command `openllm start dolly-v2` and got the following error:
I also got this s…
-
This model is ready for testing. If you are assigned to this issue, please try it out using the CLI, Google Colab, and DockerHub, and let us know if it works!
-
### Describe the bug
When I run the BentoML CLI with the `--production` flag, it consumes all of the CPUs and RAM and the machine runs out of resources.
![image](https://user-images.githubusercontent.com/57390521/1922279…
-
### Feature request
I want to try out openllm-ui-clojure.
Could you please post screenshots, a demo, and a how-to for running it locally?
It would also be really great if there were a Docker image that runs OpenL…
-
### Describe the bug
> No GPU available, therefore this command is disabled
But I think my GPU works fine with PyTorch 😟
This is the third computer I've tried, and none of them are working 😩
…
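For reference, a minimal sanity check that PyTorch can actually see the GPU, using only standard PyTorch calls (this is just a sketch, not an OpenLLM-specific diagnostic):

```python
import torch

# If this prints False, the "No GPU available" message would be expected.
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device count:", torch.cuda.device_count())
    print("Device name:", torch.cuda.get_device_name(0))
```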
-
Hello @abidlabs @aliabid94,
Can we create dynamic outputs in Gradio?
For example:
```python
all_output = []
def generate_output(number_of_outputs):
    for x in range(number_of_outputs):
        b…
```
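For what it's worth, one common workaround is to pre-allocate a fixed maximum number of output components and toggle their visibility from the callback. Below is a minimal sketch of that pattern, assuming Gradio 3.x Blocks and a hypothetical `MAX_OUTPUTS` cap; the original snippet is truncated, so this may not be exactly what was intended:

```python
import gradio as gr

MAX_OUTPUTS = 5  # hypothetical upper bound on how many outputs are ever shown

def generate_output(number_of_outputs):
    # Return one update per pre-allocated textbox: show the first N, hide the rest.
    number_of_outputs = int(number_of_outputs)
    return [
        gr.update(visible=(i < number_of_outputs), value=f"output {i + 1}")
        for i in range(MAX_OUTPUTS)
    ]

with gr.Blocks() as demo:
    count = gr.Slider(1, MAX_OUTPUTS, step=1, label="number of outputs")
    boxes = [gr.Textbox(visible=False) for _ in range(MAX_OUTPUTS)]
    count.change(generate_output, inputs=count, outputs=boxes)

demo.launch()
```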
-
### Feature request
Bentos are a great way to collaborate, as they are self-contained and can be converted to virtually anything: loaded in Python and used as runners, converted to Docker images or…
-
### Describe the bug
I'm trying to run tiiuae/falcon-7b with the vLLM backend, with or without adapters.
It fails with a "Response not completed" error, which is triggered after a `ValueError` as seen in t…
-
Hello there,
I have deployed OpenLLM on a managed service that is protected by an auth Bearer token:
```bash
curl -X 'POST' \
  'https://themodel.url/v1/generate' \
  -H "Author…
```
-
### Model Name
Scaffold Morphing
### Model Description
The context discusses a novel notation system called Sequential Attachment-based Fragment Embedding (SAFE) that improves upon traditiona…