I tried adding --check-output-correctness when converting the models, and the conversion fails with RuntimeError: mat1 and mat2 must have the same dtype. How can I fix this issue? The command and full output are below:
python -m python_coreml_stable_diffusion.torch2coreml --convert-unet --convert-vae-decoder --convert-text-encoder --xl-version --model-version /Users/tim/Downloads/ml-stable-diffusion/dreamshaperXL10_alpha2Xl10_diffusers -o /Users/tim/Downloads/ml-stable-diffusion/dreamshaperXL10_alpha2Xl10_split_einsum --check-output-correctness
/Users/tim/coreml/lib/python3.9/site-packages/urllib3/__init__.py:34: NotOpenSSLWarning: urllib3 v2.0 only supports OpenSSL 1.1.1+, currently the 'ssl' module is compiled with 'LibreSSL 2.8.3'. See: https://github.com/urllib3/urllib3/issues/3020
warnings.warn(
scikit-learn version 1.3.1 is not supported. Minimum required version: 0.17. Maximum required version: 1.1.2. Disabling scikit-learn conversion API.
Torch version 2.0.1 has not been tested with coremltools. You may run into unexpected errors. Torch 2.0.0 is the most recent version that has been tested.
INFO:__main__:Initializing StableDiffusionPipeline with /Users/tim/Downloads/ml-stable-diffusion/dreamshaperXL10_alpha2Xl10_diffusers..
INFO:__main__:Initializing DiffusionPipeline with /Users/tim/Downloads/ml-stable-diffusion/dreamshaperXL10_alpha2Xl10_diffusers..
Loading pipeline components...: 100%|█████████████| 7/7 [00:14<00:00, 2.01s/it]
INFO:__main__:Done. Pipeline in effect: StableDiffusionXLPipeline
INFO:__main__:Done.
Start Time: 2023-10-20 17:07:49
INFO:__main__:Attention implementation in effect: AttentionImplementations.SPLIT_EINSUM
INFO:__main__:Converting vae_decoder
/Users/tim/coreml/lib/python3.9/site-packages/diffusers/models/resnet.py:139: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert hidden_states.shape[1] == self.channels
/Users/tim/coreml/lib/python3.9/site-packages/diffusers/models/resnet.py:152: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if hidden_states.shape[0] >= 64:
INFO:__main__:Converting vae_decoder to CoreML..
Converting PyTorch Frontend ==> MIL Ops: 100%|▉| 368/369 [00:00<00:00, 3900.66 o
Running MIL frontend_pytorch pipeline: 100%|█| 5/5 [00:00<00:00, 390.51 passes/s
Running MIL default pipeline: 100%|███████| 65/65 [00:00<00:00, 107.49 passes/s]
Running MIL backend_mlprogram pipeline: 100%|█| 12/12 [00:00<00:00, 540.56 passe
INFO:__main__:Saved vae_decoder into /Users/tim/Downloads/ml-stable-diffusion/dreamshaperXL10_alpha2Xl10_split_einsum/Stable_Diffusion_version__Users_tim_Downloads_ml-stable-diffusion_dreamshaperXL10_alpha2Xl10_diffusers_vae_decoder.mlpackage
INFO:__main__:vae_decoder baseline PyTorch to baseline CoreML: PSNR changed by -86.3 dB (197.4 -> 111.1)
INFO:__main__:111.1 dB > 35 dB (minimum allowed) parity check passed
INFO:__main__:Converted vae_decoder
INFO:__main__:Converting unet
WARNING:python_coreml_stable_diffusion.unet:`use_linear_projection=True` is ignored!
INFO:__main__:Sample UNet inputs spec: {'sample': (torch.Size([2, 4, 128, 128]), torch.float32), 'timestep': (torch.Size([2]), torch.float32), 'encoder_hidden_states': (torch.Size([2, 2048, 1, 77]), torch.float32), 'time_ids': (torch.Size([2, 6]), torch.float32), 'text_embeds': (torch.Size([2, 1280]), torch.float32)}
INFO:__main__:JIT tracing..
/Users/tim/Downloads/ml-stable-diffusion/python_coreml_stable_diffusion/layer_norm.py:61: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert inputs.size(1) == self.num_channels
INFO:__main__:Done.
Traceback (most recent call last):
File "/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/Users/tim/Downloads/ml-stable-diffusion/python_coreml_stable_diffusion/torch2coreml.py", line 1564, in <module>
main(args)
File "/Users/tim/Downloads/ml-stable-diffusion/python_coreml_stable_diffusion/torch2coreml.py", line 1348, in main
convert_unet(pipe, args)
File "/Users/tim/Downloads/ml-stable-diffusion/python_coreml_stable_diffusion/torch2coreml.py", line 833, in convert_unet
baseline_out = pipe.unet(**baseline_sample_unet_inputs,
File "/Users/tim/coreml/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/Users/tim/coreml/lib/python3.9/site-packages/diffusers/models/unet_2d_condition.py", line 841, in forward
emb = self.time_embedding(t_emb, timestep_cond)
File "/Users/tim/coreml/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/Users/tim/coreml/lib/python3.9/site-packages/diffusers/models/embeddings.py", line 192, in forward
sample = self.linear_1(sample)
File "/Users/tim/coreml/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/Users/tim/coreml/lib/python3.9/site-packages/torch/nn/modules/linear.py", line 114, in forward
return F.linear(input, self.weight, self.bias)
RuntimeError: mat1 and mat2 must have the same dtype
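
For context, the sample UNet inputs logged above are all torch.float32, so my assumption is that this checkpoint's UNet weights were saved in half precision, which would make F.linear see float32 activations against float16 weights and raise exactly this error. A minimal sketch of how I checked that assumption (the path, the dtype check, and the cast are my own, not part of torch2coreml):

import torch
from diffusers import DiffusionPipeline

# Load the same local SDXL pipeline the converter points at (hypothetical path reuse)
model_dir = "/Users/tim/Downloads/ml-stable-diffusion/dreamshaperXL10_alpha2Xl10_diffusers"
pipe = DiffusionPipeline.from_pretrained(model_dir)

# If this prints torch.float16 while the baseline sample inputs are torch.float32,
# that mismatch would explain "mat1 and mat2 must have the same dtype" in F.linear.
print(next(pipe.unet.parameters()).dtype)

# Possible workaround (assumption, not verified against torch2coreml):
# cast the UNet back to float32 before running the baseline correctness check.
pipe.unet = pipe.unet.to(torch.float32)

If the weights do turn out to be float16, loading the pipeline with torch_dtype=torch.float32, or converting the checkpoint to fp32 before running the conversion, might be the corresponding fix, but I have not confirmed that against the converter script.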