-
-
The cause of this issue is very large models (more than 60 GB), which diffusers tries to load entirely into GPU VRAM.
There are a couple of ways to fix it.
The first one is to add this line of code t…
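Since the snippet is truncated, here is a minimal stdlib sketch of the idea behind such fixes (e.g. diffusers' `enable_model_cpu_offload()` or accelerate's `device_map`): keep as many weights on the GPU as fit, and offload the rest to CPU RAM. The function name and the component sizes below are hypothetical, for illustration only.

```python
def plan_placement(component_sizes_gb, vram_gb):
    """Greedily assign model components to the GPU until VRAM is full;
    everything that does not fit is offloaded to CPU RAM."""
    plan, used = {}, 0.0
    for name, size in component_sizes_gb:
        if used + size <= vram_gb:
            plan[name] = "cuda"
            used += size
        else:
            plan[name] = "cpu"
    return plan

# A >60 GB checkpoint against a 40 GB GPU: only part of it fits.
components = [("unet", 30.0), ("text_encoder", 20.0), ("vae", 12.0)]
print(plan_placement(components, vram_gb=40.0))
# {'unet': 'cuda', 'text_encoder': 'cpu', 'vae': 'cpu'}
```

This is why loading the whole pipeline with `.to("cuda")` raises an out-of-memory error while offloading-based loading succeeds, at the cost of slower inference.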
-
### System Info
Hello, I was using a depth-estimation model:
`pipe = pipeline(task="depth-estimation", model="depth-anything/Depth-Anything-V2-Metric-Indoor-Small-hf")`
But I got this error:
```
…
```
-
### System Info
- `transformers` version: 4.47.0.dev0
- Platform: Windows-11-10.0.22631-SP0
- Python version: 3.12.7
- Huggingface_hub version: 0.26.2
- Safetensors version: 0.4.5
- Accelerate v…
-
All steps are based on these docs.
https://ryzenai.docs.amd.com/en/latest/inst.html
https://ryzenai.docs.amd.com/en/latest/llm_flow.html
https://github.com/amd/RyzenAI-SW/blob/main/example/transfor…
-
The model was downloaded from https://hf-mirror.com/ with `./hfd.sh IDEA-CCNL/Taiyi-Stable-Diffusion-1B-Chinese-v0.1 --tool wget -x 6`.
Converting it with MNN/transformers/diffusion/export/onnx_export.py fails:
```
[root@localhost export]$ pyt…
```
-
### The model to consider.
Announcement blog: https://www.zyphra.com/post/zamba2-7b
Base model: https://huggingface.co/Zyphra/Zamba2-7B
Instruct tuned: https://huggingface.co/Zyphra/Zamba2-7B-I…
mgoin updated 2 weeks ago
-
Hi,
I'm trying to constrain the generation of my VLMs using this repo; however, I can't figure out how to customize the pipeline to handle inputs (query + image). Whereas it is documented as …
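The core mechanism behind constrained generation, independent of any particular repo, is to mask out disallowed tokens at each decoding step before picking the next token. A toy stdlib sketch (the token scores and vocabulary are made up; a real VLM would produce logits from the query and the image):

```python
def constrained_greedy(logits_per_step, allowed_tokens):
    """Greedy decoding where, at every step, only tokens permitted
    by the constraint are eligible for selection."""
    out = []
    for logits in logits_per_step:
        # drop tokens outside the allowed set, then take the argmax
        candidates = {t: s for t, s in logits.items() if t in allowed_tokens}
        out.append(max(candidates, key=candidates.get))
    return out

# "maybe" scores highest at step 1, but the constraint forbids it.
steps = [{"yes": 0.9, "maybe": 1.2}, {"no": 0.7, "maybe": 0.1}]
print(constrained_greedy(steps, allowed_tokens={"yes", "no"}))
# ['yes', 'no']
```

Adapting this to a multimodal pipeline mainly means wiring the image through the model's preprocessor so the per-step logits account for it; the masking step itself is unchanged.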
-
### System Info
- `transformers` version: 4.47.0.dev0
- Platform: Linux-5.15.0-1052-oracle-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.25.2
- Safetensors version:…
-
### System Info
The regression happens after transformers==4.45.2.
```
- `transformers` version: 4.47.0.dev0
- Platform: Linux-6.6.0-gnr.bkc.6.6.9.3.15.x86_64-x86_64-with-glibc2.34
- Python v…
```