-
Ultra 155H platform (https://ark.intel.com/content/www/us/en/ark/products/236847/intel-core-ultra-7-processor-155h-24m-cache-up-to-4-80-ghz.html).
We installed Text Generation WebUI on Intel GPU acco…
-
## Edit
The solution (currently) is to use the `/sdapi/v1/options` endpoint, as mentioned in [this comment below](https://github.com/lllyasviel/stable-diffusion-webui-forge/issues/1902#issuecomment-2373…
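For anyone hitting the same issue, a minimal sketch of driving that endpoint with the standard library (the base URL `http://127.0.0.1:7860` is the assumed default Forge address, and the `sd_model_checkpoint` key and file name are illustrative; adjust to your setup):

```python
import json
import urllib.request

# Assumed default Forge address; adjust to your setup.
BASE_URL = "http://127.0.0.1:7860"

def build_options_request(options, base_url=BASE_URL):
    """Build a POST request for /sdapi/v1/options.

    `options` is a dict of option keys, e.g. {"sd_model_checkpoint": "..."};
    the currently valid keys can be listed with GET /sdapi/v1/options.
    """
    body = json.dumps(options).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/sdapi/v1/options",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Usage (requires a running server; checkpoint file name is hypothetical):
# req = build_options_request({"sd_model_checkpoint": "model.safetensors"})
# urllib.request.urlopen(req, timeout=60)
```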
-
**Describe the bug**
**To Reproduce**
Steps to reproduce the behavior:
1. Create a Docker container with the Textgen web UI: https://github.com…
-
**Is your feature request related to a problem? Please describe.**
I would like an easier way to import and export a mind map for use with open-source, offline AI, where the only feature that is importe…
-
### What happened?
Large models like [Meta-Llama-3-405B-Instruct-Up-Merge](https://huggingface.co/mradermacher/Meta-Llama-3-405B-Instruct-Up-Merge-GGUF/tree/main) require `LLAMA_MAX_NODES` to be in…
-
### Describe the bug
Unable to run GGUF files after the update. Tested with several 120B 5_Q_M, 103B, and 70B Q8_0 models.
I have 128GB RAM. Every single one of these models ran fine until today. Pleas…
-
After training on 8x 4090 GPUs, the model's output is mostly blank, sometimes entirely blank.
Was the SFT stage trained for only one epoch?
Because GPU memory OOMs easily, I preprocessed the part before the projection layer in advance, which means there is no image augmentation. Will this hurt the model's quality much?
Given these results, what improvements to training would you suggest?
Thanks!
-
The webui conversation keeps getting truncated; how do I set Max Tokens?
Model platform: xinference
LLM model: autodl-tmp-glm-4-9b-chat
![image](https://github.com/chatchat-space/Langchain-Chatchat/assets/1206487/51773ddd-2104-48b8-b38a-cb437ccbc1b1)
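Since Xinference exposes an OpenAI-compatible endpoint, one place the cutoff can be raised is the `max_tokens` field of the chat request itself. A minimal sketch of building such a request (the port `9997` and the endpoint path are assumptions based on Xinference's defaults; adjust to your deployment):

```python
import json
import urllib.request

# Assumed Xinference default address; adjust to your deployment.
XINFERENCE_URL = "http://127.0.0.1:9997/v1/chat/completions"

def build_chat_request(model, messages, max_tokens=4096, url=XINFERENCE_URL):
    """Build a chat-completions request with an explicit max_tokens.

    When max_tokens is omitted, servers fall back to their own (often small)
    default, which is a common cause of truncated replies.
    """
    body = json.dumps({
        "model": model,
        "messages": messages,
        "max_tokens": max_tokens,  # upper bound on generated tokens
    }).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Usage (requires a running Xinference server):
# req = build_chat_request("glm-4-9b-chat", [{"role": "user", "content": "hi"}])
# print(urllib.request.urlopen(req, timeout=120).read())
```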
-
### What happened?
```
(llm_py310) hs@xxxxx-dl:/Data/Projects/LLM_Solution$ uvicorn webui:app --host 0.0.0.0 --port 8000
usage: uvicorn [-h] [--config CONFIG] [--prompt_engineering {general,extract_u…
```
-
### Issue with current documentation:
I believe the Oobabooga Text Generation Web UI API was rewritten, causing the code on the TextGen page of the Langchain docs to stop working.
e.g.: the way th…
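The rewritten text-generation-webui API is OpenAI-compatible, so a hedged sketch of what an updated call might look like (port `5000` and the `/v1/completions` path follow the project's current defaults when started with `--api`; adjust if your flags differ):

```python
import json
import urllib.request

# Assumed default for text-generation-webui started with --api (port 5000).
API_URL = "http://127.0.0.1:5000/v1/completions"

def build_completion_request(prompt, max_tokens=200, url=API_URL):
    """Build a request against the OpenAI-compatible /v1/completions route
    that replaced the older custom endpoint."""
    body = json.dumps({
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": 0.7,
    }).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Usage (requires the webui running with --api):
# resp = urllib.request.urlopen(build_completion_request("Hello"), timeout=120)
# print(json.loads(resp.read())["choices"][0]["text"])
```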