A-suozhang / MixDQ

[ECCV24] MixDQ: Memory-Efficient Few-Step Text-to-Image Diffusion Models with Metric-Decoupled Mixed Precision Quantization
https://a-suozhang.xyz/mixdq.github.io/

mat1 and mat2 must have the same dtype, but got Float and Half #8

Closed — greasebig closed 1 week ago

greasebig commented 1 week ago

I used the fp16 sdxl_turbo model to generate calibration data, but when I continued using the same fp16 sdxl_turbo model in the post-training quantization (PTQ) process, I got this error.
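This PyTorch error means two operands of a matrix multiply have different dtypes: one tensor is float32 (Float) and the other float16 (Half). It typically happens when fp16 model weights meet fp32 tensors created elsewhere (e.g. during calibration). A minimal standalone reproduction, not taken from the MixDQ code:

```python
import torch

# A Linear layer built the default way has float32 (Float) weights.
layer = torch.nn.Linear(4, 4)

# Feeding it a float16 (Half) input mixes dtypes inside the matmul,
# which triggers the "mat1 and mat2 must have the same dtype" error.
x = torch.randn(1, 4, dtype=torch.float16)
try:
    layer(x)
except RuntimeError as err:
    print("dtype mismatch:", type(err).__name__)
```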

greasebig commented 1 week ago

Hello author, is there any way to run PTQ with the fp16 sdxl_turbo model?

greasebig commented 1 week ago

model = get_model(config.model, fp16=True, return_pipe=False)

I changed this line to load the model in fp16.
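One way to avoid the mismatch (a sketch under my own assumptions, not the MixDQ authors' fix) is to make sure everything downstream of loading — parameters, buffers such as quantizer scales, and calibration inputs — shares one dtype:

```python
import torch

def unify_dtype(module: torch.nn.Module,
                dtype: torch.dtype = torch.float16) -> torch.nn.Module:
    # Module.to(dtype) casts all floating-point parameters AND buffers
    # (e.g. quantizer scales registered as buffers), so no stray fp32
    # tensor is left to collide with fp16 weights inside a matmul.
    return module.to(dtype)

# Hypothetical usage mirroring the snippet above; get_model and its
# arguments come from the MixDQ repo, the rest is illustrative:
# model = get_model(config.model, fp16=True, return_pipe=False)
# model = unify_dtype(model)                 # cast any fp32 stragglers
# calib_data = calib_data.to(torch.float16)  # match calibration inputs
```

Whether casting everything to fp16 is numerically safe for PTQ calibration is a separate question; the authors may prefer running calibration in fp32 instead.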