Open aire1 opened 1 year ago
I don't think this is possible at this time, since it's an upstream issue: DirectML doesn't support autocast. See: https://github.com/microsoft/DirectML/issues/454
There have been other threads relating to this issue here: https://github.com/lshqqytiger/stable-diffusion-webui-directml/issues/142
So far the only solution is to use --no-half in your command line arguments. This is the same case for features like inpainting, as discussed here: https://github.com/lshqqytiger/stable-diffusion-webui-directml/issues/87
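To see why --no-half costs so much memory: it forces every tensor to 32-bit floats, doubling per-element storage compared with the 16-bit half precision that autocast would normally use. A minimal sketch using NumPy as a stand-in for GPU tensors (the shape is an arbitrary illustrative choice, not taken from the webui):

```python
import numpy as np

# Illustrative tensor shape; real model weights/activations are much larger.
shape = (4, 64, 64)

half = np.zeros(shape, dtype=np.float16)  # half precision (autocast/fp16)
full = np.zeros(shape, dtype=np.float32)  # full precision (--no-half)

print(half.nbytes)  # 32768 bytes
print(full.nbytes)  # 65536 bytes: exactly double
```

The same doubling applies to every weight and activation on the GPU, which is why --no-half often has to be paired with a low-VRAM option.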
The current non-native autocast functionality that lshqqytiger implemented in this fork doesn't provide much in the way of VRAM use or speed enhancements and works more like a compatibility layer.
Have you tried using the --medvram argument? That should provide a middle ground in terms of performance and VRAM use.
Is there an existing issue for this?
What would your feature do?
Textual inversion without --no-half
Proposed workflow
Additional information
Textual Inversion works on my RX 5600 XT only with --no-half. The --no-half flag requires a lot of memory, so you have to combine it with --lowvram. But with --lowvram, GPU usage drops to 30%-40% and one iteration takes much longer (8 sec instead of 1 sec). Can you add support for Textual Inversion without --no-half?