-
model=TheBloke_Llama-2-13B-GPTQ/model.safetensors. I also tried Wizard-Vicuna-7B-Uncensored-GPTQ-4bit-128g.no-act-order.safetensors; same problem.
```
Loading model ...
----------------------…
```
-
Could you please provide a simple interface similar to OpenAI API?
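To make the request concrete, here is a sketch of what "an interface similar to the OpenAI API" could look like from the client side: an HTTP endpoint that accepts OpenAI-style chat-completions payloads. The URL, port, and function name below are hypothetical placeholders, not an existing API of this project.

```python
import json

# Hypothetical local endpoint; not an existing API of this project.
API_URL = "http://127.0.0.1:5000/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "local-model",
                       max_tokens: int = 200) -> dict:
    """Build an OpenAI-style chat-completions request payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_chat_request("Hello, who are you?")
print(json.dumps(payload, indent=2))
# A client would then POST this payload to API_URL, e.g. with
# requests.post(API_URL, json=payload), and read
# response.json()["choices"][0]["message"]["content"].
```

The appeal of this shape is that existing OpenAI client libraries and tooling would work against the local server unchanged, apart from the base URL.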
-
### Checklist
- [x] The issue exists after disabling all extensions
- [x] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a …
-
### OS
Windows
### GPU Library
CUDA 12.x
### Python version
3.10
### Pytorch version
3.10.8
### Model
turboderp/Llama-3.1-8B-Instruct-exl2
### Describe the bug
I always receive `assistant` …
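While the underlying chat-template issue is being diagnosed, one common client-side workaround is to strip the leaked role header from the generated text. The sketch below assumes the Llama 3 prompt format (role headers delimited by `<|start_header_id|>`/`<|end_header_id|>` and turns ended by `<|eot_id|>`); the helper name is illustrative, and this masks the symptom rather than fixing the template.

```python
def strip_role_header(text: str, role: str = "assistant") -> str:
    """Remove a leaked Llama-3-style role header from generated text.

    Handles both a bare leading "assistant" line and the full special-token
    form "<|start_header_id|>assistant<|end_header_id|>", plus a trailing
    "<|eot_id|>" end-of-turn token. Workaround sketch only; the real fix is
    a correct chat template on the server side.
    """
    text = text.removeprefix(f"<|start_header_id|>{role}<|end_header_id|>")
    stripped = text.lstrip()
    if stripped.startswith(role):
        rest = stripped[len(role):]
        # Only strip when "assistant" stands alone (followed by a newline,
        # a colon, or nothing), so words like "assistants" are untouched.
        if rest[:1] in ("\n", ":", ""):
            text = rest
    return text.removesuffix("<|eot_id|>").strip()
```

For example, `strip_role_header("assistant\n\nHello!")` yields `"Hello!"`, while ordinary replies pass through unchanged.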
-
Hi, I'm having difficulties loading this on the current versions of aiogram and oobabooga. I tried installing this as a standalone app, but it didn't work: the cmd kept crashing because it wante…
-
# Prerequisites
Please answer the following questions for yourself before submitting an issue.
- [x] I am running the latest code. Development is very rapid so there are no tagged versions as o…
-
```
Traceback (most recent call last):
  File "C:\MR\Finetune\xtts-finetune-webui\venv\lib\site-packages\gradio\queueing.py", line 459, in call_prediction
    output = await route_utils.call_process_api…
```
-
I have the following rig for AI:
- Gigabyte 4090
- MSI MAG X670E Tomahawk
- 7800x3d
- RM1000x — 1000 Watt 80 PLUS® Gold
- 64 GB DDR5 @ 6000 MHz
- Fractal XL case
I bought 2 PCIE 4.0 riser ca…
-
I thought it would be cool to use the latest version of Vicuna (vicuna-13b) instead of 7b, as it can be more efficient. Then I thought it would be useful for some people to use other models too. The…
-
I would like to suggest an improvement to Freedom GPT. It would be great if the model could be executed on Colab. This would allow for easy access and execution of the model in a cloud-based environme…