-
I'm building an SDXL model in float16 on two RTX 4090s, so ~48 GB of GPU memory is available.
However, the script in `diffusers/quantizatoin` does not seem to be able to use both of them, and r…
-
```
python quantize.py --model_dir /qwen-14b-chat --dtype float16 --qformat int4_awq --export_path ./qwen_14b_4bit_gs128_awq.pt --calib_size 32
python build.py --hf_model_dir=/qwen-14b-chat/ --quant…
```
-
Thanks for sharing!
1. Following hallo2 and Moore-AnimateAnyone to reproduce the stage-2 code: is the first argument (latent) of the denoise_unet part the same as in those two open-source projects?
(1) The first argument to self.denoising_unet: noisy_latents = train_noise_scheduler.add_noise(latents, noise, timesteps)
(2) Weight freezing…
-
### 🐛 Describe the bug
```
(/home/ezyang/local/c/pytorch-env) [ezyang@devgpu005.nha1 ~/local/c/pytorch (a55aa71b)]$ python t.py
tensor([9.7422], dtype=torch.float16) tensor([9.7344], dtype=torch.fl…
```
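For what it's worth, the two printed values are adjacent float16 numbers: with a 10-bit mantissa, representable values in [8, 16) are spaced 2**-7 = 0.0078125 apart, so 9.7422 vs 9.7344 is a one-ULP difference. A quick NumPy check:

```python
import numpy as np

# float16 has a 10-bit mantissa, so in [8, 16) adjacent values
# are 2**-7 = 0.0078125 apart; 9.7422 and 9.7344 round to neighbors.
x = np.float16(9.7422)   # stored as 9.7421875
y = np.float16(9.7344)   # stored as 9.734375
step = np.spacing(y)     # distance to the next representable float16
print(float(x) - float(y), float(step))  # both are one ULP: 0.0078125
```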
-
TypeError: set_default_dtype only supports [float16, float32, float64, bfloat16] , but received paddle.float32
-
Platforms: mac, macos, linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_std_mean_cpu_float16&suite=TestInduc…
-
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Describe the bug
When using the bulk writer to format CSV files, if bf16 and float16 vector types are present,…
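For reproducing the setup, here is a minimal sketch of writing a float16 vector into a CSV cell with the standard library; the cell encoding (a JSON-style list of floats) is only illustrative and not necessarily what Milvus's bulk writer emits:

```python
import csv
import io
import numpy as np

def vector_to_cell(vec):
    """Serialize a float16 vector as a JSON-style list for one CSV cell.
    Illustrative only -- not necessarily Milvus's actual encoding."""
    fp16 = np.asarray(vec, dtype=np.float16)
    return "[" + ",".join(repr(float(v)) for v in fp16) + "]"

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["id", "embedding"])
# Note the values change: 0.1 is not exactly representable in float16,
# so the cell contains the rounded value 0.0999755859375.
writer.writerow([1, vector_to_cell([0.1, 0.2, 0.3])])
print(buf.getvalue())
```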
-
Hi everyone!
I'm trying to initialize Llava1.6-34b-hf with Flash Attention 2, but I get the following issue, after which it doesn't work properly and doesn't speed up inference.
The point is I explicit…
-
Hello,
I had no problems working with ToonCrafter in the past, but somehow today I'm not able to run it. After about an hour of digging, I think it comes down to the version of PyTorch from one of …
-
I want to store `float16` types in ndarrays. Would it be possible to extend `scalar-datatype` to allow for `float16` and `complex32` types?
I am specifically looking for the official `float16` type…
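For reference, NumPy's `float16` is IEEE 754 binary16 (2 bytes: 1 sign, 5 exponent, 10 mantissa bits), while NumPy has no `complex32` at all; its smallest complex type is `complex64`. A quick sketch of how the official `float16` type behaves in an ndarray:

```python
import numpy as np

# IEEE 754 binary16: 1 sign + 5 exponent + 10 mantissa bits, 2 bytes each.
a = np.array([1.0, 0.5, 65504.0], dtype=np.float16)  # 65504 is float16 max
print(a.dtype, a.itemsize)     # float16 2
bits = a.view(np.uint16)       # reinterpret the raw 16-bit patterns
print([hex(b) for b in bits])  # ['0x3c00', '0x3800', '0x7bff']
```

Any `scalar-datatype` extension would presumably need to pin down exactly this bit-level layout so that stored values round-trip.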