-
### What is the issue?
Whenever I try to give a second prompt to any GGUF model, Ollama fails. Here are the logs:
time=2024-07-12T15:47:23.505Z level=INFO source=sched.go:738 msg="new model will…
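As a side note, the failure pattern described above (first prompt works, second fails) can be reproduced directly against Ollama's chat endpoint. A minimal sketch, assuming Ollama is on its default port and a model named `llama3` is pulled (both are assumptions, not details from the report):

```python
import json
import urllib.request

def chat(messages):
    # POST a non-streaming request to Ollama's /api/chat endpoint.
    req = urllib.request.Request(
        "http://localhost:11434/api/chat",
        data=json.dumps({"model": "llama3", "messages": messages, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]

history = [{"role": "user", "content": "Hello"}]
history.append(chat(history))                                  # first prompt: succeeds
history.append({"role": "user", "content": "And a second one?"})
print(chat(history))                                           # second prompt: where the failure is reported
```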
-
I believe I used to run **llama-2-7b-chat.ggmlv3.q4_0.bin** successfully locally. My **3090** comes with **24 GB** of GPU memory, which should be enough to run this model. Well, how much memory …
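For a rough sense of scale (back-of-envelope figures, not measured numbers): q4_0 stores about 4.5 bits per weight once the per-block scales are counted, so a 7B model's weights alone come to roughly 4 GB, well under 24 GB; the KV cache and scratch buffers add on top of that.

```python
# Back-of-envelope VRAM estimate for a q4_0 7B model (assumed figures).
params = 7e9
bits_per_weight = 4.5          # ~4 bits per weight plus per-block scale overhead
weights_gb = params * bits_per_weight / 8 / 1e9
print(f"weights: ~{weights_gb:.1f} GB")  # ~3.9 GB, plus KV cache and runtime overhead
```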
-
# Bug Report
## Description
**Bug Summary:**
The body of chat API requests is now being logged. This probably shouldn't happen by default. The title endpoint is also affected. See output b…
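Not this project's actual code, but the usual pattern for the fix being requested here: keep request bodies out of the default log level and emit them only when the operator has explicitly opted into debugging.

```python
import logging

log = logging.getLogger("chat_api")

def log_chat_request(body: dict) -> None:
    # Default level: record that a request happened, without its contents.
    log.info("chat request received (model=%s)", body.get("model"))
    # Chat bodies can contain private conversation text; only surface them
    # when DEBUG logging has been deliberately enabled.
    if log.isEnabledFor(logging.DEBUG):
        log.debug("chat request body: %r", body)
```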
-
### What is the issue?
Whenever I try to chat with the LLM through open-webui and Ollama, I get this in the Ollama logs:
`ERROR [validate_model_chat_template] The chat template comes with this mo…
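That message comes from llama.cpp, which falls back to a generic template when it doesn't recognize the one embedded in the GGUF. A common workaround is to spell the template out yourself in an Ollama Modelfile; a minimal sketch, where the model name and the template body are placeholders to replace with the model's real prompt format:

```
# Hypothetical Modelfile; "affected-model" and the template body are placeholders.
FROM affected-model
TEMPLATE """{{ if .System }}{{ .System }}
{{ end }}{{ .Prompt }}"""
```

Then `ollama create affected-model-fixed -f Modelfile` builds a variant that uses the supplied template instead of the unrecognized embedded one.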
-
Using deterministic algorithms for pytorch
Total VRAM 11264 MB, total RAM 32509 MB
pytorch version: 2.3.1+cu121
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 2080 Ti : native
V…
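The first line of that log refers to PyTorch's global determinism switch; a minimal sketch of how it is typically turned on (general PyTorch usage, not this project's exact code):

```python
import os
import torch

# cuBLAS needs a fixed workspace size for deterministic matmuls on CUDA;
# this must be set before the first CUDA call.
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"

torch.use_deterministic_algorithms(True)  # raise on ops with no deterministic kernel
torch.backends.cudnn.benchmark = False    # disable autotuning, which varies run to run
```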
-
I want to run Ollama for open-source LLMs. I've got Devika running in the local WebUI, but the model box is unselectable, and when I click Settings a web page opens that just searches for 'settings' and can't f…
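When a frontend's model dropdown is empty or unselectable, a first sanity check is whether Ollama itself is reachable and actually has models pulled. A quick sketch against Ollama's default endpoint (the port is the stock default, an assumption here):

```python
import json
import urllib.request

# /api/tags lists the locally available models; an empty list means the
# frontend has nothing to populate its dropdown with.
with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
    models = json.load(resp)["models"]

print([m["name"] for m in models] or "no models pulled - try `ollama pull llama3`")
```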
-
Operating system: macOS-14.1.1-arm64-arm-64bit.
Python version: 3.10.13 (main, Sep 11 2023, 08:16:02) [Clang 14.0.6]
Project version: v0.2.10
langchain version: 0.1.13. fastchat version: 0.2.36
Text splitter currently in use: ChineseRecursiveTextSplitter
Currently start…
-
HTML entities returned from the remote LLM cause a crash because the markdown renderer internally uses `innerHTML` in one of the packages. This conflicts with UntrustedWebUI, of which AIChat's WebUI i…
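The general mitigation for this class of crash is to escape the model's raw output before it ever reaches an `innerHTML`-based renderer. A Python-side sketch of the idea only; the real fix would live in the JS rendering path:

```python
import html

raw = "model output with &lt;entities&gt; and <b>markup</b>"
# Escape &, < and > so the renderer treats the text as literal characters
# rather than parsing it as HTML.
print(html.escape(raw))
# -> model output with &amp;lt;entities&amp;gt; and &lt;b&gt;markup&lt;/b&gt;
```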
-
TomlDecodeError: Reserved escape sequence used (line 100 column 1 char 3693)
Traceback:
File "C:\Users\tiago\miniconda3\envs\MoneyPrinterTurbo\lib\site-packages\streamlit\runtime\scriptrunner\script…
-
Attached, please find the output of the WebUI and Ollama server consoles. At line 1 of the WebUI output, I ask the question using llama3:latest (line 3). The result is shown in lines 4-42.
At line 45, I ask sam…