-
I see that there are many PRs about [StaticCache](https://github.com/huggingface/transformers/pulls?q=is%3Apr+StaticCache), but I couldn't find clear documentation on how to use it.
#### What I w…
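For what it's worth, a minimal sketch of how a static KV cache can be requested through `generate()` in recent transformers releases (the model id below is just a placeholder, not a recommendation):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # placeholder Llama-family model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

inputs = tokenizer("Static KV caches keep the cache shape fixed so that", return_tensors="pt").to(model.device)

# Asking for the static cache pre-allocates the KV tensors, which is what lets
# torch.compile avoid recompiling as the sequence length grows.
out = model.generate(**inputs, max_new_tokens=32, cache_implementation="static")
print(tokenizer.decode(out[0], skip_special_tokens=True))
```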
-
Will you add support for oobabooga's text-generation-webui? An LLM initialization that sends POST requests, plus a few prompt patterns, might be sufficient. I've been trying to do it, but I've had to try to figure out…
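A rough sketch of what such an integration could look like: a thin wrapper that POSTs prompts to a locally running text-generation-webui instance. The endpoint and port below assume the webui was started with its API enabled and exposes an OpenAI-compatible completions route on `localhost:5000`; adjust for your version and setup.

```python
import requests

def generate(prompt: str, max_tokens: int = 200) -> str:
    """Send a completion request to a local text-generation-webui instance."""
    resp = requests.post(
        "http://127.0.0.1:5000/v1/completions",  # assumed endpoint, depends on webui version
        json={"prompt": prompt, "max_tokens": max_tokens, "temperature": 0.7},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["text"]

print(generate("Hello, how are you?"))
```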
-
One of the main reasons for using oobabooga is the locality of data and computation. Do we really need to use a third-party, external database? The original project allows you to use Weaviate
-
See the relevant issue with logs here: https://github.com/oobabooga/text-generation-webui/issues/4005
Error about the wheel:
```
Ignoring llama-cpp-python: markers 'platform_system == "Windows"'…
```
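For context, pip prints that "Ignoring …" message when a requirement's environment marker doesn't match the running platform. A quick way to check how the marker evaluates on your machine (using the `packaging` library, which pip itself builds on):

```python
from packaging.markers import Marker

# Evaluate the same marker pip reported; True means the requirement applies
# to this environment, False means pip will skip it.
marker = Marker('platform_system == "Windows"')
print(marker.evaluate())
```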
-
I thought I would start a separate thread on the extension itself so I can clearly explain what's going on.
Installed oobabooga, ran it, closed it. Ran `cmd_windows` and did a `pip --version` so the env…
-
Hi, apologies, I seem to be unable to figure this out.
When using the WebUI to download a model, the list loads correctly and the model downloads, giving the confirmation **"Model successfully saved to …
-
It would be interesting to be able to use `loom` with open-source LLMs such as GPT-NeoX, FLAN-UL2, and LLaMA. The [transformers](https://github.com/huggingface/transformers) library by Hugging Face h…
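As a sketch of the kind of backend this could sit on: the same transformers generation API covers GPT-NeoX-, LLaMA-, and T5-style checkpoints, and sampling several continuations of one prompt maps naturally onto loom's branching. The model id and sampling settings below are illustrative, not loom's actual code.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EleutherAI/gpt-neo-125m"  # small stand-in; swap for GPT-NeoX, LLaMA, etc.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "The garden of forking paths began with"
inputs = tokenizer(prompt, return_tensors="pt")

# loom-style branching: sample several continuations of the same prompt,
# each of which could become a child node in the tree.
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.9,
    max_new_tokens=40,
    num_return_sequences=3,
    pad_token_id=tokenizer.eos_token_id,
)
for i, seq in enumerate(outputs):
    print(f"--- branch {i} ---")
    print(tokenizer.decode(seq, skip_special_tokens=True))
```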
-
Hi, I started a test simulation using `base_the_ville_isabella_maria_klaus`, and for 40 seconds of simulation it spent ~$2.60. Do these expenditure rates sound right? If I'm correct, it's somewhere aroun…
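For scale, a back-of-the-envelope extrapolation of that figure, assuming the spend grows linearly with simulated time (which may not hold in practice):

```python
# Rough extrapolation of the reported spend.
cost_for_40s = 2.60                       # USD reported for 40 s of simulation
cost_per_hour = cost_for_40s * 3600 / 40  # scale to one simulated hour
cost_per_day = cost_per_hour * 24
print(f"~${cost_per_hour:.0f} per simulated hour, ~${cost_per_day:.0f} per simulated day")
# -> ~$234/hour, ~$5616/day
```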
-
The project needs any of:
- A 30-second video highlighting the features
- A 10-second video quickly showing a single feature, to be embedded directly in the app (from YouTube)
Features that need 10s e…
-
I am trying to run the Solar model, but I keep failing. Here are my attempts:
1. [quantized] example (modified) with the Quantized Solar model (local): Failed. It only outputs nonsense…