-
Let's say I have an outage on one of the servers I'm monitoring and it's inaccessible, but I don't know how long it will take to fix, so I mute it for a really long time.
With this approach, I…
-
This would be a nice addition.
https://github.com/cznic/ql
I saw gorm being used with it, so it should work.
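For context, a minimal sketch of talking to ql through the `database/sql` driver it ships (per the cznic/ql README; a gorm dialect would layer on top of this same driver, and the table/column names here are made up):

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/cznic/ql/driver" // registers the "ql" (file) and "ql-mem" drivers
)

func main() {
	// Open a file-backed ql database through the standard interface.
	db, err := sql.Open("ql", "app.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// ql requires DDL/DML to run inside a transaction.
	tx, err := db.Begin()
	if err != nil {
		log.Fatal(err)
	}
	if _, err := tx.Exec(`CREATE TABLE IF NOT EXISTS users (Name string)`); err != nil {
		log.Fatal(err)
	}
	if _, err := tx.Exec(`INSERT INTO users VALUES ("alice")`); err != nil {
		log.Fatal(err)
	}
	if err := tx.Commit(); err != nil {
		log.Fatal(err)
	}

	// Reads can run outside a transaction.
	var name string
	if err := db.QueryRow(`SELECT Name FROM users`).Scan(&name); err != nil {
		log.Fatal(err)
	}
	fmt.Println(name)
}
```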
-
Hi, thank you for the wonderful ollama project and the amazing community!
I am testing the 3-bit quantized Mixtral model on an RTX400 with 20 GB of VRAM. The model is 20 GB in size and, as you ca…
-
Model: https://huggingface.co/bartowski/DeepSeek-Coder-V2-Lite-Instruct-GGUF
I tested the Q5_K_M quant.
Using the stock defaults of koboldcpp_cu12 v1.68: CuBLAS with 0 layers offloaded, no flash attention.
Prompt: (p…
-
# Purpose
- Investigate why the FAITH bot responds slowly and fix it
- ref
  - https://github.com/ollama/ollama/issues/4242
# TODO
- [x] Check the ollama Docker container logs
- [x] Check CPU usage (`docker stats`)
- [x] Check…
-
### Description of defect
The macro `BLE_SECURITY_DATABASE_FILESYSTEM` defaults to true, and this always throws an error in the logs whenever the file system is not used (for example, I…
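A likely workaround is to override the option in `mbed_app.json`. A sketch, assuming the usual mbed config override mechanism and that the config key mirrors the macro name (an inference, not verified against the mbed-os source):

```json
{
    "target_overrides": {
        "*": {
            "ble.security-database-filesystem": false
        }
    }
}
```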
-
### Bug
LocalFileStore tries to treat a Document as bytes:
```
from langchain.storage import LocalFileStore
from langchain.text_splitter import RecursiveCharacterTextSplitter

store = LocalFileStore(get_project_relative_path("doc_store"))
parent_splitter = RecursiveCharacterTextSplitter(chunk_size=2000)
…
```
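If the store is meant to hold parent Documents (e.g. for a ParentDocumentRetriever), one fix is to wrap the byte store so Documents are serialized on the way in and out. A minimal sketch, assuming langchain's `create_kv_docstore` helper and a plain relative path in place of the project helper above:

```python
from langchain.storage import LocalFileStore, create_kv_docstore
from langchain.text_splitter import RecursiveCharacterTextSplitter

# LocalFileStore is a ByteStore: it persists raw bytes, not Document objects.
# create_kv_docstore wraps it so Documents are (de)serialized automatically.
byte_store = LocalFileStore("doc_store")
docstore = create_kv_docstore(byte_store)

parent_splitter = RecursiveCharacterTextSplitter(chunk_size=2000)
```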
-
### What is the issue?
### CLI
When I run **codestral:22b-v0.1-q2_K** on my M1 MacBook Air via the CLI with `ollama run codestral:22b-v0.1-q2_K`, it performs a little slowly but is usable. When I l…
-
I was using a pair of 3060 12 GB cards and got the error below. With the settings I had, about 19 GB would be taken as VRAM and the remaining 20 GB as system RAM. Using a single card at 7 layers, I successful…
-
I'm trying to give my users anonymous access to my Superset instance using `PUBLIC_ROLE_LIKE`.
For this, I set up a Gamma-like role with these permissions:
`[can read on SavedQuery, can read on CssTemp…