-
Hi team,
Do you have any examples, or know of any videos, of people showing this?
I literally can't find anything, but it sounds really promising for RAG.
However, as you can imagine, searching web or Yo…
-
**Describe the solution you'd like**
I would like an option to use the KoboldAI API as a way to run locally.
**Describe alternatives you've considered**
The problems with cloud hosted so…
-
ImportError: cannot import name 'clear_chat_log' from 'modules.chat' (A:\text-generation-webui-main\modules\chat.py)
This is the error I get whenever I use the `python bot.py` start command.
Any ideas wha…
-
### Feature request
I would like to request [llama.cpp](https://github.com/ggerganov/llama.cpp) as a new model backend in the transformers library.
### Motivation
llama.cpp offers:
1) Exce…
-
### Related issues
_No response_
### Possible solution
Hello, I am the author of RWKV; an introduction is available here: https://zhuanlan.zhihu.com/p/626083366
Interfaces (Chinese and English) that currently support RWKV include:
* Wenda (闻达): https://github.com/l15y/wenda
* Gradio DEMO: https://huggingface.co/spaces/…
-
# Prerequisites
Please answer the following questions for yourself before submitting an issue.
- [X] I am running the latest code. Development is very rapid so there are no tagged versions as of…
-
Hi, I'm just wondering whether it is possible to do something like this on a local machine, perhaps with LLaMA or another LLM that runs offline, without making an API call?
-
By default, ExLlamaV2 uses a batch of 2048 for prompt processing, which significantly increases VRAM usage. In TabbyAPI and ExGUI it is possible to set the prompt-processing batch to 1024 or 512. Those decreas…
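To see why a smaller prompt-processing batch saves memory, note that the temporary attention-score buffer for a chunk scales with the chunk length. The sketch below is a purely illustrative back-of-envelope calculation (the function, shapes, and FP16 assumption are mine, not ExLlamaV2's actual allocator):

```python
# Hypothetical estimate of the temporary attention-score buffer allocated
# while processing one prompt chunk. All numbers are illustrative only.
def attn_buffer_bytes(chunk_len, context_len, n_heads, bytes_per_elem=2):
    # One FP16 score matrix of shape (n_heads, chunk_len, context_len).
    return n_heads * chunk_len * context_len * bytes_per_elem

ctx, heads = 4096, 32
for chunk in (2048, 1024, 512):
    mib = attn_buffer_bytes(chunk, ctx, heads) / 2**20
    print(f"chunk {chunk}: ~{mib:.0f} MiB")  # 512, 256, 128 MiB respectively
```

Under these assumed dimensions, halving the chunk size halves this transient buffer, which matches the reported VRAM savings in direction if not in exact magnitude.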
-
### Describe the bug
As the title says, I tried to update my webui just now, but it installed dependencies from requirements_cpu_only.txt.
### Is there an existing issue for this?
- [X] I …
-
The current API is about to be deprecated and will be replaced with an OpenAI-compatible API on November 13th. This update will likely break oobabot, so it needs to be updated.
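For clients migrating to the new endpoint, requests will need to use the standard OpenAI chat-completions payload shape. A minimal sketch, assuming the webui serves the API locally (the URL, port, and model name below are assumptions, not confirmed settings):

```python
# Sketch of a request body for an OpenAI-compatible chat-completions endpoint.
# The endpoint URL below is an assumption; adjust to your local server config.
import json

def build_chat_request(user_message, model="local-model"):
    # Standard OpenAI chat-completions payload: a list of role/content messages.
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }

payload = build_chat_request("Hello!")
# Would be POSTed to e.g. http://127.0.0.1:5000/v1/chat/completions
print(json.dumps(payload))
```

Clients like oobabot would replace their old custom-API request builders with this message-list format.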