-
Is sending images to the bot supported when using Ollama?
This is what I get from the logs using llava model:
```shell
2024-04-10 21:25:09.161 INFO: Message received (user ID: 161792098901688320, at…
```
-
### Your current environment
PyTorch version: 2.4.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (U…
-
The AI says:
I encountered an error while trying to use the tool. This was the error: SerperDevTool._run() missing 1 required positional argument: 'search_query'.
Tool "Search the internet" accepts thes…
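The error means the agent framework invoked the tool's `_run()` without supplying `search_query`. A minimal sketch of the failure pattern (the `DemoSearchTool` class below is a hypothetical stand-in for the real `SerperDevTool`, which requires a Serper API key and network access):

```python
# Hypothetical stand-in for SerperDevTool: _run() requires search_query.
class DemoSearchTool:
    def _run(self, search_query: str) -> str:
        # The real tool would call the Serper API here.
        return f"results for: {search_query}"

tool = DemoSearchTool()

# What the agent effectively did -- calling without the argument raises the
# same TypeError seen in the log:
try:
    tool._run()
except TypeError as err:
    print(err)  # ... missing 1 required positional argument: 'search_query'

# Passing the query explicitly avoids the error:
print(tool._run(search_query="example query"))
```

If the agent keeps dropping the argument, the usual cause is the tool's input schema not matching what the LLM emits, so the framework can't map the generated arguments onto `_run()`.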
-
### System Info
- TensorRT-LLM v0.8.0 (pinned to release commit)
- Nvidia A100
- Mistral-7B-Instruct-v0.2
- Using the CPP runner
- Installed with `pip install tensorrt_llm==0.8.0 --extra-index-ur…
-
### Model description
Hi all,
Currently, microsoft/Phi-3-mini-128k-instruct is not supported by text-generation-inference, as shown by the following error:
```
2024-04-25T12:45:45.28…
```
-
### Your current environment
Not applicable -- Dockerfile.
### 🐛 Describe the bug
Steps to reproduce:
- Clone the `vllm` repo
- Run `docker build . --target vllm-base`
- Build fails
```shell
…
```
-
# Bug Report
## Description
**Bug Summary:**
I configured my LocalAI instance as the OpenAI API endpoint; when I use curl to verify, I see the models just fine:
```
curl.exe http://192.168.28…
```
-
### What is the issue?
The ollama.ai certificate expired today, so ollama now can't download models:
```
ollama run mistral
pulling manifest
Error: pull model manifest: Get "https://registry.…
```
-
Opening a new issue (see https://github.com/ollama/ollama/pull/2195) to track support for integrated GPUs. I have an AMD 5800U CPU with integrated graphics. As far as I have researched, ROCR lately does su…
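For reference, the workaround commonly tried for integrated GPUs that ROCm does not officially support is the `HSA_OVERRIDE_GFX_VERSION` environment variable, which forces the runtime to treat the GPU as a supported target. A hedged sketch (the `9.0.0` value is an assumption for the 5800U's Vega-based gfx90c iGPU, not verified here):

```shell
# Assumption: gfx version mapping for the 5800U's Vega-based iGPU;
# adjust the value for your GPU, or unset it entirely if it causes crashes.
HSA_OVERRIDE_GFX_VERSION=9.0.0 ollama serve
```

Whether this works depends on the ROCm build actually shipping kernels compatible with the overridden target, so it is a best-effort workaround rather than real support.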
-
First, great work on getting AMD GPU support on Windows into such good shape in such a short period. Really appreciate your work!
However, once I switched to Fedora 39, on the same Ryzen …