-
**Is your feature request related to a problem? Please describe.**
Ollama and cloud APIs support simple text generation (continuations) in addition to chat, but this is not exposed in the WebUI.
**Des…
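For context on what "continuations" means here, a minimal sketch of the request-shape difference against Ollama's HTTP API (the model name and prompt are placeholders; the payloads are only constructed, not sent):

```python
# Chat request: the server applies the model's chat template to the
# message list before generating (Ollama's /api/chat endpoint).
chat_payload = {
    "model": "llama3",  # placeholder model name
    "messages": [{"role": "user", "content": "Once upon a time"}],
    "stream": False,
}

# Plain continuation: /api/generate with raw=True continues the prompt
# verbatim, with no chat template applied.
completion_payload = {
    "model": "llama3",
    "prompt": "Once upon a time",
    "raw": True,   # skip the chat template entirely
    "stream": False,
}

# Sending is omitted here; each payload would be POSTed as JSON to
# http://localhost:11434/api/chat or /api/generate respectively.
```

Exposing the second form is what this request is asking for: a mode where the UI sends the text box contents as a raw prompt to be continued.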
-
### Describe the bug
Attempting to load a model results in the error: `ERROR: Failed to disable exllama. Does the config.json for this model contain the necessary quantization info?`
### Is there an exi…
-
### Describe the bug
**Edit:** This is using ExLlama. I did some more testing, and loading a model via llama.cpp and offloading to the GPU works as expected.
I am trying to use 2x Tesla K80s, however whe…
-
Gemma models that have been quantized with llama.cpp are not working. Please look into the issue.
Error:
```
llama.cpp error: 'create_tensor: tensor 'output.weight' not found'
```
I will open an issue…
-
### Describe the bug
ERROR: Wheel 'ffmpy' located at C:\users\timor\appdata\local\pip\cache\wheels\01\a6\d1\1c0828c304a4283b2c1639a09ad86f83d7c487ef34c6b4a1bf\ffmpy-0.3.1-py3-none-any.whl is invalid.…
-
### 🐛 Describe the bug
When trying to use pytorch-nightly, there is no CUDA version in the generated version.py file
```
File "G:\git-jv\ai\oobabooga_one-click-installersk-installers\installer_fil…
-
Hi,
I want to try connecting Jarvis to oobabooga/text-generation-webui as a backend instead of OpenAI, for a private and offline solution. Is that possible? Thanks!
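As background for this question: text-generation-webui can expose an OpenAI-compatible API (enabled with the `--api` flag), so any client that speaks the OpenAI chat-completions format can point its base URL at the local server. A minimal sketch, assuming the default local port (the port and model name here are assumptions; the request body is only built, not sent):

```python
import json

# Assumption: text-generation-webui started with --api, listening locally.
BASE_URL = "http://127.0.0.1:5000/v1"  # placeholder base URL

# Standard OpenAI-style chat-completions body; the server uses whatever
# model is currently loaded, so the "model" field is a placeholder.
payload = {
    "model": "local-model",
    "messages": [{"role": "user", "content": "Hello from Jarvis"}],
}

# The request would be sent as:
#   POST {BASE_URL}/chat/completions  with this JSON body
request_body = json.dumps(payload)
```

A client like Jarvis would only need its OpenAI base URL swapped for `BASE_URL`; the API key can typically be a dummy value for a local server.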
-
### Describe the bug
The link to use Colab is not working. I get the local URL (which is useless for Colab) but get an error for the share link. Following the instructions gives me the same error, de…
-
I cloned the repo and saw that webui.bat has the code for the install, which is fine, but I got errors looking for the "data folder" that contained the "config" file. I simply copied it to the appropriate folder so…
-
Hi!
Can we use custom LLM and embedding models with it?
Thanks!