-
The Huggingface demo (https://huggingface.co/spaces/FinGPT/FinGPT-Forecaster) that you posted does not work.
There is an error message saying "This Space has been paused by its owner."
-
With a pip install or a conda install, the same error occurs:
```
❯ chat-with-mlx
Traceback (most recent call last):
File "/Users/bdruth/radioconda/envs/mlx-chat/bin/chat-with-mlx", line 5, in
from cha…
-
I tried searching huggingface and only found a 300M model; I'd like to know whether there is a solution suitable for mobile devices.
-
Can anyone tell me how to add custom models by downloading them from huggingface? In particular, I want to use Stable Diffusion 3.5.
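For reference, a minimal sketch of downloading the weights with huggingface_hub; the repo ID and target folder below are assumptions, and the Stable Diffusion 3.5 repos are gated, so you may need to accept the license on the Hub and pass an access token:
```python
# Minimal sketch: download a model repo from Hugging Face into a local folder.
# The repo ID and target directory are assumptions; the SD 3.5 repos are gated,
# so accepting the license on the Hub and passing a token may be required.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="stabilityai/stable-diffusion-3.5-large",   # assumed repo ID
    local_dir="./models/stable-diffusion-3.5-large",
    # token="hf_...",  # needed if the repo is gated for your account
)
print(local_path)  # folder containing the downloaded model files
```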
-
huggingface_hub.utils._errors.LocalEntryNotFoundError: An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connect…
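For what it's worth, this error means huggingface_hub could neither reach the file on the Hub nor find it in the local cache. A minimal sketch of one workaround, assuming placeholder repo and file names and a machine that is at least occasionally online: pre-download the file so later runs can resolve it from the cache.
```python
# Sketch of a workaround for LocalEntryNotFoundError (placeholder names).
# Run this once while online so the file lands in the local cache; afterwards,
# offline runs can resolve it from the cache instead of hitting the Hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="some-org/some-model",   # placeholder: the repo you are loading
    filename="config.json",          # placeholder: the file that failed to resolve
)
print(path)  # cached location under ~/.cache/huggingface/hub/
```
Once the files are cached, setting HF_HUB_OFFLINE=1 keeps subsequent runs from trying to reach the Hub at all.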
-
Hi, first of all, thank you again so much for this very fast and memory-friendly fine-tuning library! Below I will share my thoughts on full fine-tuning.
First, I ran some experiments to test speed and…
-
May I ask whether there are plans to upload this to huggingface?
-
When generating the picture, I run python scripts/txt2img.py --prompt "Generate a monster" --outdir output --device cuda --ckpt stable-diffusion-2-1/v2-1_768-ema-pruned.ckpt --config configs/stable-di…
-
I downloaded this GGUF from here: https://huggingface.co/city96/flux.1-lite-8B-alpha-gguf
When I attempted to load it, I got these errors:
![{4BF77226-9BC3-42F0-9498-4C4F1D69D3D8}](https://github.com/us…
-
Can you run this on a GPU below an H100 (like a T4), or not?
Maybe this will be a https://huggingface.co/piotr25691/thea-3b-25r (v2)
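For reference, a ~3B-parameter checkpoint such as the linked thea-3b-25r should fit on a 16 GB T4 in half precision; a minimal sketch, assuming it is a standard transformers causal-LM repo:
```python
# Sketch: load a ~3B causal LM in fp16 on a single 16 GB GPU such as a T4.
# Assumes the linked repo is a standard transformers checkpoint; quantize
# further (e.g. 4-bit via bitsandbytes) if memory is still tight.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "piotr25691/thea-3b-25r"  # from the linked URL
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```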