Open Agnyy opened 1 year ago
Training appears to work if I disable "Move VAE and CLIP to RAM when training if possible. Saves VRAM." after getting this message.
Previously, I had used the "Unload SD checkpoint to free VRAM" action and loaded a different model before trying to train the textual inversion embedding, which is when this error appeared.
After disabling this feature, the error changes. I now get
RuntimeError: Input type (torch.cuda.HalfTensor) and weight type (torch.HalfTensor) should be the same
even though the "Move VAE and CLIP to RAM when training if possible. Saves VRAM." option is already off.
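For reference, this error usually indicates a device mismatch rather than a precision problem: the input tensor is on the GPU (torch.cuda.HalfTensor) while the layer's weights are still in CPU RAM (torch.HalfTensor). Below is a minimal PyTorch sketch, not webui code, illustrating the same failure mode under the assumption that a module was left on the CPU while the training batch was moved to CUDA:

```python
import torch

# Minimal sketch: fp16 weights left on the CPU, fp16 input on the GPU.
conv = torch.nn.Conv2d(3, 8, 3).half()        # weights: torch.HalfTensor (CPU)
x = torch.randn(1, 3, 64, 64).half().cuda()   # input:   torch.cuda.HalfTensor

try:
    conv(x)
except RuntimeError as e:
    print(e)  # device/type mismatch error like the one reported above

conv.cuda()   # moving the weights back onto the GPU resolves the mismatch
out = conv(x)
```

If that is what is happening here, seeing the error with the option already off would suggest some component (e.g. the VAE or CLIP) is still sitting in RAM when training starts, though that is only a guess based on the error text.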
Is there an existing issue for this?
What happened?
I am training a Textual Inversion embedding and an error sometimes occurs. I have not been able to find a pattern for why it happens; it occurs in roughly 80% of attempts.
Steps to reproduce the problem
What should have happened?
Training should have started.
Commit where the problem happens
Commit hash: a9eab236d7e8afa4d6205127904a385b2c43bb24
What platforms do you use to access the UI?
Windows
What browsers do you use to access the UI?
Google Chrome
Command Line Arguments
List of extensions
Console logs
Additional information
No response