Using the default workflow with default parameters, an error occurs at runtime:
Error occurred when executing UltraPixelProcess:
Error(s) in loading state_dict for CLIPTextModelWithProjection:
size mismatch for text_projection.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([512, 1280]).
You may consider adding ignore_mismatched_sizes=True in the model from_pretrained method.
How do I solve this problem? It seems like a lot of people have this problem.
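The mismatch in the error above can be reproduced in isolation: the checkpoint stores a `text_projection` weight of shape `[1280, 1280]` (a CLIP variant with `projection_dim=1280`), while the model being constructed expects `[512, 1280]` (the default `projection_dim=512`). A minimal sketch in plain PyTorch, assuming only the two shapes quoted in the error message:

```python
import torch
import torch.nn as nn

# Current model side: a projection expecting a [512, 1280] weight
# (projection_dim=512, hidden size 1280).
text_projection = nn.Linear(1280, 512, bias=False)

# Checkpoint side: a [1280, 1280] weight (projection_dim=1280).
checkpoint_state = {"weight": torch.zeros(1280, 1280)}

try:
    text_projection.load_state_dict(checkpoint_state)
except RuntimeError as err:
    # PyTorch raises the same class of "size mismatch" error seen above.
    print(type(err).__name__, "size mismatch")
```

Note that the suggestion in the error message (`ignore_mismatched_sizes=True` in `from_pretrained`) only skips loading the mismatched tensor and leaves that layer randomly initialized, which usually degrades output; loading a text-encoder checkpoint whose `projection_dim` actually matches the model config avoids the mismatch entirely.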