-
### Describe the bug
The chat does not respond.
The log shows: parameter "device" not found.
"/root/miniconda/envs/textgen/lib/python3.11/site-packages/transformers/generation/utils.py", line 1900 -> d…
-
### Describe the bug
ModuleNotFoundError: No module named 'yaml'
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Reproduction
1. Download (https://github.com/…
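A note on the error above: the `yaml` import is provided by the third-party PyYAML package (the distribution name differs from the module name), so `pip install pyyaml` in the same environment is the usual fix. A minimal sketch to confirm whether the module is importable, assuming a standard Python environment:

```python
import importlib.util


def has_yaml() -> bool:
    """Return True if the 'yaml' module (from the PyYAML package) can be imported."""
    return importlib.util.find_spec("yaml") is not None


# If this prints False, install PyYAML into the environment that runs
# the webui:  pip install pyyaml
print(has_yaml())
```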
-
I use Doom Emacs. I just updated to the latest org-ai and got this error when restarting Emacs: `Error caused by user's config or system: .doom.d/config.el, (void-variable evil-org-ai-on-region)`. Rele…
-
### Describe the bug
I tried installing the server and running it, which worked at first, but I couldn't load my Llama 3 models. I typed this command:
`conda install pytorch torchvision torchaudio pytorch-cu…
-
-
As the title states, do we need to set the model loader to ExLlamav2_HF or ExLlamav2?
The [documentation](https://github.com/oobabooga/text-generation-webui/wiki/04-%E2%80%90-Model-Tab) says:
`…
-
### Describe the bug
I ran this on a server with 4x RTX 3090. GPU 0 is busy with other tasks, and I want to use GPU 1 or another free GPU. I set the CUDA_VISIBLE_DEVICES env var, but it doesn't work.
How to specify …
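One common pitfall with the question above: CUDA_VISIBLE_DEVICES only takes effect if it is set before the CUDA runtime initializes in the process; exporting it after the first GPU call has no effect. A minimal sketch of how the variable is interpreted (the `server.py` name in the comment is hypothetical):

```python
import os

# Must be set before torch/CUDA is first touched in the process.
# Equivalent from a shell:
#   CUDA_VISIBLE_DEVICES=1 python server.py   # server.py is hypothetical
os.environ["CUDA_VISIBLE_DEVICES"] = "1"


def visible_gpus() -> list:
    """Physical GPU indices exposed to this process, in order.

    Inside the process the GPUs are renumbered: the first listed
    physical GPU becomes cuda:0, the second cuda:1, and so on.
    """
    raw = os.environ.get("CUDA_VISIBLE_DEVICES", "")
    return [int(part) for part in raw.split(",") if part.strip()]


print(visible_gpus())  # [1]
```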
-
https://lmsys.org/blog/2023-06-29-longchat/
https://arxiv.org/abs/2305.07185
https://www.reddit.com/r/LocalLLaMA/comments/14fgjqj/a_simple_way_to_extending_context_to_8k/
https://github.com/epfml…
-
The new models coming out are built on 4.45.1. Attempting to run on 4.44 returns the error "Exception: data did not match any variant of untagged enum ModelWrapper at line 277218 column 3".
An upd…
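To check whether an installed package is behind the 4.45.1 mentioned above, a hedged sketch using only the standard library (the crude parser ignores pre-release tags, so it is an approximation, not a full PEP 440 comparison):

```python
from importlib.metadata import PackageNotFoundError, version


def version_tuple(v: str) -> tuple:
    """Crude numeric parse of a version string; drops non-numeric parts."""
    return tuple(int(p) for p in v.split(".") if p.isdigit())


def needs_upgrade(pkg: str, minimum: str) -> bool:
    """True if pkg is installed but older than minimum."""
    try:
        installed = version(pkg)
    except PackageNotFoundError:
        return False  # not installed at all
    return version_tuple(installed) < version_tuple(minimum)


# e.g. needs_upgrade("transformers", "4.45.1")
print(version_tuple("4.44") < version_tuple("4.45.1"))  # True
```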
-
I'm developing an AI assistant for fiction writers. As the OpenAI API gets pretty expensive with all the inference tricks needed, I'm looking for a good local alternative for most inference, saving GPT-4 ju…