-
Traceback (most recent call last):
File "/home/dl/micromamba/envs/CosyVoice/lib/python3.8/site-packages/gradio/queueing.py", line 560, in process_events
response = await route_utils.call_pro…
-
to @davisagli:
@ericof and I discussed with others at the Plone Beethoven Sprint 2024 options to enhance the user story and experience using cookiecutter and the Plone Distributions chooser forms.
…
-
### Describe the bug
After cloning the repository and using the Canary browser, I rebuilt the Docker container and encountered the following issues:
The terminal/CMD shows repeated warnings and er…
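For reference, a minimal sketch of a clean rebuild, assuming the repository ships a standard Docker Compose setup (the commands below are illustrative, not taken from the repository's docs):
```sh
# Rebuild the image from scratch and restart the container (assumes a compose file in the repo root)
docker compose build --no-cache
docker compose up -d
# Follow the container logs to capture the repeated warnings/errors
docker compose logs -f
```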
-
The model cannot be loaded; the backend reports an error:
1. The current version is Python 3.11.9; I previously tried Python 3.10.x and it errored as well
2. xinference 0.12.2
3. Installed with pip install "xinference[all]" and run in the local environment
4. Full stack of the error.
Traceback (most recent call last):
File …
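For context, a minimal sketch of the install-and-run steps listed above, assuming the documented xinference-local launcher (host and port values are illustrative):
```sh
# Install xinference with all optional backends, then start a local instance
pip install "xinference[all]"
xinference-local --host 127.0.0.1 --port 9997
```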
-
Hello, thanks for your solution for local LLM deployment in AutoGPT. When I installed it following the README, some of the steps were unclear and I could not test it successfully. Is there a need to provide the OpenAI …
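As a hedged sketch only: when pointing an OpenAI-style client at a local LLM server, the usual pattern is a placeholder key plus a local base URL. The variable names below follow common OpenAI client conventions and may not match this project's .env template exactly:
```sh
# Illustrative only: use an OpenAI-compatible local endpoint instead of api.openai.com
export OPENAI_API_KEY="sk-local-placeholder"        # many local servers accept any non-empty key
export OPENAI_API_BASE="http://127.0.0.1:8000/v1"   # base URL of the local OpenAI-compatible server
```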
-
This will be most similar to , which also has completely bare, uncompressed binary files.
## Notes for Cheat Sheet
### v1.0
- [x] How to run
```sh
# these were in the docs
OLLAMA…
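# --- Hedged sketch (not from the docs above): typical Ollama CLI usage; the model name is illustrative ---
ollama serve               # start the local server (listens on 127.0.0.1:11434 by default)
ollama pull llama2         # download a model into the local store of bare, uncompressed blobs
ollama run llama2 "hello"  # run a quick prompt against the pulled model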
-
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
The error is:
load_model_config modle\chatglm-6b-int4...
Loading modle\chatglm-6b-int4...
No compile…
-
[Edit 7/20/23]: Let's use Llama 2. AWS / Azure might have hosted versions too, so no local needed.
If there's any ticket on which I need engagement from the community, it's this one. Adding the ability for …
-
**Why**
Just as Big-AGI can interface with a local LLM (for instance, ollama), it would be cool for the Draw section to be able to interface with Automatic1111, which exposes an API (link pr…
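A minimal sketch of what such an integration could call, assuming the webui is started with the --api flag; the payload fields shown are only a few of the txt2img parameters:
```sh
# Hedged example: request an image from the AUTOMATIC1111 webui API (txt2img route)
curl -s -X POST http://127.0.0.1:7860/sdapi/v1/txt2img \
  -H "Content-Type: application/json" \
  -d '{"prompt": "a lighthouse at dusk", "steps": 20, "width": 512, "height": 512}'
```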
-
### Checklist
- [ ] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a …