-
got prompt
2024-09-28 22:00:06.415988 run in id number : 1
2024-09-28 22:00:06.416957 Init model in fp8
2024-09-28 22:01:44.732608 Start a quantization process...
2024-09-28 22:02:54.907596 …
-
### Guidelines
- [X] I checked for duplicate bug reports
- [X] I tried to find a way to reproduce the bug
### Version
Development (Unstable)
### What happened? What did you expect to happen?
When…
-
Error occurred when executing magictime_model_loader:
No module named 'swift'
What does it mean?
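A `ModuleNotFoundError` like this means Python cannot locate the named package in the environment the node runs in. A minimal sketch of checking for it up front (the module name `swift` is taken from the error above; which pip package provides it is not stated here, so no install command is assumed):

```python
import importlib.util

def is_installed(name: str) -> bool:
    """Return True if a module with this name can be imported."""
    return importlib.util.find_spec(name) is not None

# The loader fails because 'swift' is not on the import path of the
# Python interpreter that is actually running; checking up front makes
# the failure explicit instead of crashing inside the node.
if not is_installed("swift"):
    print("module 'swift' is missing from this Python environment")
```

Note that the check must run in the same interpreter the application uses; a package installed into a different virtual environment will still appear missing.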
-
The model is placed in the brushnet folder, but the loader cannot read it.
-
# Prerequisites
Please answer the following questions for yourself before submitting an issue.
- [X] I am running the latest code. Development is very rapid so there are no tagged versions as…
-
As the title states, do we need to set the model loader to ExLlamav2_HF or ExLlamav2?
The [documentation](https://github.com/oobabooga/text-generation-webui/wiki/04-%E2%80%90-Model-Tab) says:
`…
-
### Bug Description
I have been trying to find a solution to make alpha textures render correctly with obj and Sodium (NeoForge 1.21.1).
Everything else renders great except that. Previously I…
-
### What happened?
I am running on Rocm with 4 x Instinct MI100.
Only when using `--split-mode row` do I get an address boundary error.
llama.cpp was working when I had an XGMI GPU Bridge working w…
-
### What happened?
u0_a227@localhost ~> ./llama.cpp/build/bin/llama-cli -m llama.cpp/models/Qwen2.5-0.5B-Instruct-Q4_K_M.gguf -p "You are a helpful assistant" -cnv -ngl 99 -t 8 -b 64 -tb 8 --ctx-size…
-
### What happened?
I am trying to run a Q4_0_4_4 quantized Llama3 8B model. This is my config:
```
/home/piuser/Desktop/Abhrant/llama-cpp-BLAS/llama.cpp/llama-cli -m /home/piuser/Desktop/Abhran…