-
What I understand is that you actually deploy a model (e.g. Llama3.1-70B-Instruct) by running `vllm serve Llama3.1-70B-Instruct ...` and then configure the URL and model name in llama-stack for LLM capab…
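For reference, a minimal sketch of the vLLM side of that setup: `vllm serve` exposes an OpenAI-compatible endpoint, and llama-stack is then pointed at the same URL and model name. The base URL, port, and placeholder API key below are assumptions for illustration.

```python
# Sketch: query the OpenAI-compatible endpoint that `vllm serve` exposes.
# base_url/port and the dummy api_key are assumptions; the model name must
# match whatever name the vLLM server is serving the model under.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed default vllm serve address
    api_key="EMPTY",                      # vLLM accepts any key unless one is configured
)

resp = client.chat.completions.create(
    model="Llama3.1-70B-Instruct",
    messages=[{"role": "user", "content": "Hello"}],
)
print(resp.choices[0].message.content)
```

llama-stack would then be configured with this same URL and model name for its LLM provider, as described above.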
-
### Your current environment
```text
docker run --rm --runtime nvidia --gpus all --name vllm-qwen72b -v ~/.cache/huggingface:/root/.cache/huggingface \
-v /data1/Download/models/Qwen-7…
```
-
**Is your feature request related to a problem? Please describe.**
New Hyundai EV services
**Describe the solution you'd like**
From today's Hyundai Bluelink Europe iOS app update, there a…
-
### Before submitting your bug report
- [X] I believe this is a bug. I'll try to join the [Continue Discord](https://discord.gg/NWtdYexhMs) for questions
- [X] I'm not able to find an [open issue]…
-
I use the `client.images.push(repository, stream=True)` API to push a Docker image to a remote registry, and get a response like:
```json
{"status":"Pushing repository yourname/app (1 tags)"}
{"status":…
-
We are looking to implement the Clipboard Change Event API; a spec for the `clipboardchange` event exists today, [link](https://www.w3.org/TR/clipboard-apis/#clipboard-event-clipboardchange).
However, impo…
-
- VS Code Version: 1.94.1
- OS Version: Windows 11
Steps to Reproduce:
```dockerfile
FROM ubuntu:14.04
RUN apt-get update \
&& apt-get install -y autoconf build-essential ca-certificates curl jq l…
```
-
I'm unable to connect this to my server; every attempt ends with a `Could not connect to server` error. I've verified that my URL and API key are correct and even generated a new API key, but nothing helps.
-
Would it be possible to point the integration to the Ollama API through OpenWebUI?
While configuring the integration, when I set the API hostname to `ai.hq.arpa/ollama`, I just get:
```
Failed …
```
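For what it's worth, a quick way to check whether the Ollama API is reachable through Open WebUI at all (a sketch, assuming Open WebUI exposes the Ollama API under the `/ollama` path, that plain HTTP is in use, and that an Open WebUI API key is accepted as a Bearer token):

```python
# Sketch: probe the Ollama list-models endpoint through the assumed
# Open WebUI proxy path. Scheme, path, and the placeholder API key are assumptions.
import requests

base = "http://ai.hq.arpa/ollama"  # hostname from the report, scheme assumed
headers = {"Authorization": "Bearer <openwebui-api-key>"}  # placeholder key

resp = requests.get(f"{base}/api/tags", headers=headers, timeout=10)
resp.raise_for_status()
print(resp.json())  # should list the models Ollama exposes through the proxy
```

If that returns a model list, the failure is more likely in how the integration handles a hostname that includes a path.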
-
Greetings! I'm potentially interested in developing tooling around MHF, and I've also seen the need for a more dynamic way of reading the game's client-side data. I have a few questions regarding this proj…