-
The model was downloaded from https://hf-mirror.com/ with `./hfd.sh IDEA-CCNL/Taiyi-Stable-Diffusion-1B-Chinese-v0.1 --tool wget -x 6`.
Converting it with MNN/transformers/diffusion/export/onnx_export.py fails with an error:
```
[root@localhost export]$ pyt…
```
-
Hi, thank you for your awesome work. However, when I was trying to run the M3DClip model using the code on Hugging Face, I got some errors related to the einops lib. I noticed you use the MONAI ViT layers…
-
Hi, I tried compiling the unet (torch.float16) that is part of StableDiffusionXLPipeline on an Inferentia2.8xlarge instance, and it failed.
When the latent size of the unet is (64, 64), it did not fai…
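For reference (not part of the report): compiling the SDXL unet for Neuron usually means wrapping it so the tracer only sees flat tensor inputs. This is a sketch only; the checkpoint name, batch size, and sequence lengths below are assumptions, with shapes chosen for the (64, 64) latent case mentioned above.

```py
import torch
import torch_neuronx
from diffusers import UNet2DConditionModel

# Wrapper so torch_neuronx.trace receives only flat positional tensors instead
# of the added_cond_kwargs dict that the SDXL unet forward expects.
class UNetWrapper(torch.nn.Module):
    def __init__(self, unet):
        super().__init__()
        self.unet = unet

    def forward(self, sample, timestep, encoder_hidden_states, text_embeds, time_ids):
        return self.unet(
            sample, timestep, encoder_hidden_states,
            added_cond_kwargs={"text_embeds": text_embeds, "time_ids": time_ids},
        ).sample

unet = UNet2DConditionModel.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", subfolder="unet", torch_dtype=torch.float16
)
wrapper = UNetWrapper(unet.eval())

# Example inputs for a (64, 64) latent (512x512 images); assumed shapes.
example = (
    torch.randn(1, 4, 64, 64, dtype=torch.float16),
    torch.tensor(999, dtype=torch.float16),
    torch.randn(1, 77, 2048, dtype=torch.float16),
    torch.randn(1, 1280, dtype=torch.float16),
    torch.randn(1, 6, dtype=torch.float16),
)
traced = torch_neuronx.trace(wrapper, example)
```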
-
Loading a model with torch.hub.load() always loads it as fp16, which is not supported on CUDA here and thus results in slower inference.
Is there any option in kwargs to disable half() during loading?
Thanks…
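For reference (not an answer from the repo itself): torch.hub.load() forwards any extra keyword arguments to the repo's hubconf entrypoint, so a dtype/fp32 switch only exists if that entrypoint defines one. A minimal, model-agnostic sketch with a placeholder repo name:

```py
import torch

# Placeholder repo/entrypoint; substitute the hub repo actually being loaded.
model = torch.hub.load("owner/repo", "model_name")

# Extra kwargs beyond the ones torch.hub consumes are passed straight to the
# entrypoint in hubconf.py, so check that file for a half/dtype option.
# If there is none, casting the weights back to float32 after loading works regardless:
model = model.float()
```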
-
It was working for a while and then it said:
File "C:\lunar-main\lunar-main\lunar.py", line 21, in main
lunar = Aimbot(collect_data = "collect_data" in sys.argv)
File "C:\lunar-main\lunar-mai…
-
### System Info
- `transformers` version: 4.46.2
- Platform: Linux-6.2.0-39-generic-x86_64-with-glibc2.37
- Python version: 3.9.20
- Huggingface_hub version: 0.26.2
- Safetensors version: 0.4.5
-…
-
### Describe the bug
xFormers fails when the attention mask's last dimension (i.e. the key's sequence length) is not a multiple of 8 under bfloat16. This seems to be because xformer ne…
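For reference (not part of the report): the workaround usually suggested for this alignment constraint is to allocate the bias with its last dimension padded up to a multiple of 8 and then pass a sliced view back at the real key length, so the underlying storage stays aligned. A minimal sketch with made-up shapes:

```py
import torch
import xformers.ops as xops

B, H, M, N, D = 1, 8, 128, 125, 64          # N = 125 keys: not a multiple of 8
q = torch.randn(B, M, H, D, device="cuda", dtype=torch.bfloat16)
k = torch.randn(B, N, H, D, device="cuda", dtype=torch.bfloat16)
v = torch.randn(B, N, H, D, device="cuda", dtype=torch.bfloat16)

# Allocate the bias in a buffer whose last dim is padded to a multiple of 8,
# fill only the first N columns with the real mask values, and hand xformers
# a [..., :N] view so the memory layout satisfies its alignment check.
N_pad = (N + 7) // 8 * 8
bias = torch.zeros(B, H, M, N_pad, device="cuda", dtype=torch.bfloat16)
# ... write the actual mask into bias[..., :N] here ...
out = xops.memory_efficient_attention(q, k, v, attn_bias=bias[..., :N])
```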
-
I am trying to quantize an image into a tensor of indices, then decode from it, but I am getting float latents.
My full code:
```py
from huggingface_hub import hf_hub_download
from diffusers import V…
```
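For reference (a sketch, not the reporter's code): in diffusers' VQModel, encode() returns the continuous latents before quantization, so the integer codebook indices have to be pulled out of the quantize module explicitly. The checkpoint name is a placeholder, and the shape of the returned tuple is assumed to follow the taming-transformers-style quantizer used by diffusers:

```py
import torch
from diffusers import VQModel

# Placeholder checkpoint; substitute the VQ-VAE actually in use.
model = VQModel.from_pretrained("CompVis/ldm-celebahq-256", subfolder="vqvae")
model.eval()

image = torch.randn(1, 3, 256, 256)                  # dummy input image
with torch.no_grad():
    h = model.encode(image).latents                  # float latents, NOT yet quantized
    # The quantizer returns (z_q, loss, (perplexity, encodings, indices));
    # the integer codebook indices are the last element of the inner tuple.
    z_q, _, (_, _, indices) = model.quantize(h)
    # Decode from the already-quantized latents to avoid quantizing twice.
    recon = model.decode(z_q, force_not_quantize=True).sample
```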
-
How can I load the pretrained Dinov2 model from a local source so that it loads the model even when there is no internet connection and does not attempt to download it again from the server?
The norm…
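For reference (a sketch, assuming the Hugging Face transformers checkpoint rather than the torch.hub one): point from_pretrained at a directory that already contains the files and forbid network access with local_files_only. The path below is a placeholder.

```py
from transformers import AutoImageProcessor, AutoModel

# Placeholder path: a directory created earlier with save_pretrained() or
# `huggingface-cli download facebook/dinov2-base --local-dir /path/to/dinov2-base`.
local_dir = "/path/to/dinov2-base"

processor = AutoImageProcessor.from_pretrained(local_dir, local_files_only=True)
model = AutoModel.from_pretrained(local_dir, local_files_only=True)

# Setting the environment variable HF_HUB_OFFLINE=1 enforces the same behaviour globally.
```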
-
Hello, I tried Flux Gym both through Pinokio and with a manual install, but either way I have issues with captioning.
I tried changing the device, but nothing works; it is still on "cpu".
I installed CUDA twice …
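Not part of the report, but when everything stays on "cpu" the usual first check is whether the installed torch wheel has CUDA support at all; reinstalling the CUDA toolkit does not change a CPU-only wheel. A quick diagnostic:

```py
import torch

print(torch.__version__)           # a "+cpu" suffix means a CPU-only wheel is installed
print(torch.version.cuda)          # None on CPU-only builds
print(torch.cuda.is_available())   # must be True for captioning to run on the GPU
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```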