-
Error: rpc error: code = Unknown desc = unimplemented
Using whisper-large-q5_0 with the localai/localai:v2.23.0-hipblas-ffmpeg image.
Also for tarnished-9b-i1
Error: failed to load model with intern…
-
### What happened?
My command:
```
llama-cli --model C:\Users\Edw590\Downloads\Llama-3.1-8B-Instruct-abliterated_via_adapter.Q4_K_M.gguf --interactive-first --ctx-size 8192 --threads 4 --temp 0.8 -…
-
Calling `train_loader, vali_loader, test_loader, model, model_optim, scheduler = accelerator.prepare(train_loader, vali_loader, test_loader, model, model_optim, scheduler)` raises the error: `Command '['hostname -I']' …`
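For context on the `Command '['hostname -I']'` failure above: `-I` is a Linux-only flag of `hostname`, so any launcher that shells out to `hostname -I` to discover the local IP will fail on macOS and Windows. A minimal stdlib sketch of a portable replacement (the helper name `local_ip` is hypothetical, not part of Accelerate):

```python
import socket

def local_ip():
    """Portable stand-in for shelling out to `hostname -I` (hypothetical helper).

    `hostname -I` exists only on Linux, so a subprocess call to it fails on
    macOS/Windows with an error like the one reported above.
    """
    try:
        # A UDP "connect" sends no packets; it only selects the outbound
        # interface, whose address we then read back.
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.connect(("8.8.8.8", 80))
            return s.getsockname()[0]
    except OSError:
        # No route available (e.g. offline machine): fall back to loopback.
        return "127.0.0.1"

print(local_ip())
```

This avoids the subprocess entirely, which is why it sidesteps the reported error regardless of OS.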
-
### What happened?
Running speculative decoding with the new Llama-3.1-405B-Instruct, with Llama-3.1-8B-Instruct as a draft model (with the large model on CPU and the small one on GPU), results in a …
-
### What happened?
I am running on Rocm with 4 x Instinct MI100.
Only when using `--split-mode row` do I get an Address boundary error.
llama.cpp was working when I had an XGMI GPU bridge working w…
-
### What happened?
```
You are a helpful assistant
> what is 2+2+2+2
44444444444444444444444444444444444444444444444444444444444444444444444444444444444444444
>
```
When I run llama-cli with…
-
### What happened?
I was experimenting with the llama.cpp project and LLM inference in general. I made a basic chat application (similar to the main.cpp project from the examples), but much simpler. N…
-
### How are you running AnythingLLM?
Docker (local)
### What happened?
Docker sees my models. I start chatting in my workspace, and then I get an error "Failed to load model"
```
anythingllm |…
-
### What happened?
I've already quantized a 2b variant of this model, and one of its instruct fine-tunes, on a subset of the same data (the first 1000 samples are the same, in the same order -- the e…
-
`Actions/SynchronizeAction.php` uses `Spatie\TranslationLoader\LanguageLine` directly instead of the model configured in `config/translation-loader.php`.