I am using the workflow for lora-flux in this repo with flux1-dev and the trained lora_flurry. The inference is not working and I get the following error:
RuntimeError: cuDNN Frontend error: [cudnn_frontend] Error: No execution plans support the graph.
Also, for a lora trained with kohya (on flux-schnell), I get another error:
a1 = sorted(list(checkpoint[list(checkpoint.keys())[0]].shape))[0]
IndexError: list index out of range
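For context, here is a minimal sketch of how that traceback line can raise an IndexError (this is my guess at the cause, using a stand-in `FakeTensor` class rather than real torch tensors): it fails either when the checkpoint dict is empty, or when the first entry is a zero-dimensional (scalar) tensor such as an alpha value.

```python
# Stand-in for a tensor: only the .shape attribute matters here.
class FakeTensor:
    def __init__(self, shape):
        self.shape = shape

def smallest_dim(checkpoint):
    # The exact failing line from the traceback.
    return sorted(list(checkpoint[list(checkpoint.keys())[0]].shape))[0]

# A normal LoRA weight works fine.
print(smallest_dim({"lora_down.weight": FakeTensor((16, 3072))}))  # -> 16

# An empty checkpoint (e.g. wrong file, or a format the loader
# doesn't recognize) raises on list(checkpoint.keys())[0].
try:
    smallest_dim({})
except IndexError as e:
    print("empty checkpoint:", e)

# A scalar entry (shape ()) raises on sorted(...)[0].
try:
    smallest_dim({"alpha": FakeTensor(())})
except IndexError as e:
    print("scalar entry:", e)
```

So the loader may simply not be recognizing the kohya-trained file's key layout; that would be worth checking before anything else.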
Any help is appreciated.
I have torch 2.5.0.dev20240903+cu124, CUDA 12.4, and cuDNN 90100.