Closed dhwz closed 10 months ago
I can't reproduce this; maybe it only happens with `--medvram`? I added a `devices.torch_gc()` call after switching back to the base model, but this adds a delay of about half a second.
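For reference, a minimal sketch of what such a cleanup step typically looks like. The `torch_gc` name and its placement after the model switch come from the comment above; the body shown here is an assumption based on common PyTorch cache-clearing patterns, not the webui's actual implementation:

```python
import gc


def torch_gc():
    # Run Python's garbage collector first so tensors with no remaining
    # references are actually released before clearing the CUDA cache.
    gc.collect()
    try:
        # Assumption: clear GPU caches only when torch with CUDA is present.
        import torch
        if torch.cuda.is_available():
            torch.cuda.empty_cache()   # release cached allocator blocks
            torch.cuda.ipc_collect()   # clean up CUDA IPC handles
    except ImportError:
        pass  # torch not installed; nothing GPU-side to clean up


# Hypothetical call site (illustrative names):
# switch_to_base_model()
# torch_gc()  # free refiner VRAM before the next generation
```

The trade-off mentioned above is inherent to this pattern: `gc.collect()` plus a cache flush costs a fraction of a second per model switch, in exchange for not leaving both models resident in VRAM.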
Thanks, I think your latest commit fixed it. My last test ran without any issues. I'll do some more tests. 👍
I can confirm: since I uninstalled this extension, I no longer have this problem (a charitable soul pointed me to the cause). Four days lost because of this...
@SeBL4RD but the issue was already fixed?
Just installing this extension somehow breaks the refiner support that is now officially implemented on the A1111 dev branch.
Settings:
- Maximum checkpoints loaded at the same time = 2
- "Only keep one model on device" is enabled
- Checkpoint cache in RAM = 0
Observed VRAM usage:
- 2.8 GB after startup
- press Generate: base model is moved to the GPU, 6.8 GB
- VRAM is cleared for loading the refiner model: 1.4 GB
- refiner is moved to the GPU: 5.8 GB
- stays at 5.5 GB
- press Generate again: 7.7 GB when it moves the base model back ---> slow generation
All of this happens WITHOUT even enabling "Refiner" in the accordion, so it seems the extension breaks something fundamental for the whole WebUI.