carson-katri / dream-textures

Stable Diffusion built-in to Blender
GNU General Public License v3.0

Model Loading Improvements and Safetensors Importing #703

Closed NullSenseStudio closed 1 year ago

NullSenseStudio commented 1 year ago

Changes

Downloading models now uses diffusers' own implementation, as it does a great job of only fetching the files required for inference. It also handles variant files instead of a separate revision/branch for fp16 weights. SDXL will now be downloaded properly since .safetensors weights are no longer blacklisted.
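
Roughly, this is the kind of call it relies on (minimal sketch, assuming a recent diffusers release; the model ID is just an example and the addon's wrapper code is not shown):

```python
# Sketch: DiffusionPipeline.download caches only the files needed for
# inference, and variant="fp16" fetches the *.fp16.safetensors variant files
# instead of pulling a separate fp16 revision/branch.
from diffusers import DiffusionPipeline

model_path = DiffusionPipeline.download(
    "stabilityai/stable-diffusion-xl-base-1.0",  # example model ID
    variant="fp16",
)
```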

Reimplemented loading a model in half precision and restricted it to using only cached weights. Incomplete models will now raise an error again instead of continuing to download without any indication to the user.
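
With stock diffusers calls, that load looks roughly like this (the model ID is illustrative):

```python
# Sketch: half-precision load restricted to cached weights. With
# local_files_only=True a partially downloaded model raises an error instead
# of silently resuming the download.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example model ID
    torch_dtype=torch.float16,
    local_files_only=True,
)
```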

Fixed loading of imported models and updated model importing to use diffusers' own implementation and its safetensors support. @carson-katri, I'm not familiar with why the whole checkpoint conversion source was included. Were there any necessary changes involved for importing depth and inpainting models besides the configs?

TODO

carson-katri commented 1 year ago

Regarding checkpoint conversion, it was simpler at the time to include the full script because it wasn't part of the diffusers Python package. With download_from_original_stable_diffusion_ckpt, that seems to be resolved though.
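
Roughly, the usage now looks like this (paths are placeholders, and the exact keyword set depends on the diffusers version):

```python
# Sketch: convert a single-file .safetensors checkpoint with the helper that
# now ships inside the diffusers package, then save it in the diffusers layout.
from diffusers.pipelines.stable_diffusion.convert_from_ckpt import (
    download_from_original_stable_diffusion_ckpt,
)

pipe = download_from_original_stable_diffusion_ckpt(
    checkpoint_path="checkpoints/model.safetensors",  # placeholder path
    from_safetensors=True,
)
pipe.save_pretrained("imported/model", safe_serialization=True)
```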

NullSenseStudio commented 1 year ago

Re-added getting revision paths because the auto pipelines don't account for local_files_only when fetching configs. If there is an update to the model, it would download only model_index.json and change the revision hash, leaving the previously downloaded weights unused. I'll file a proper bug report with diffusers when I get the time.
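
The workaround is essentially this (sketch, with an example model ID): resolve the locally cached snapshot path first, then load from that path so a refreshed model_index.json on the Hub can't orphan the weights that were already downloaded.

```python
# Sketch: resolve the cached revision path with huggingface_hub, then hand
# that local path to the auto pipeline so nothing is re-fetched.
from huggingface_hub import snapshot_download
from diffusers import AutoPipelineForText2Image

local_path = snapshot_download(
    "stabilityai/stable-diffusion-xl-base-1.0",  # example model ID
    local_files_only=True,  # never touch the network here
)
pipe = AutoPipelineForText2Image.from_pretrained(local_path)
```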

NullSenseStudio commented 1 year ago

Loading from checkpoints works well enough. They can be linked in preferences, similar to how they are imported, and either entire folders or the checkpoint files themselves can be selected.

A few issues though. The first is that models are identified by their basename in the diffusers backend, which prevents models with the same name in different locations from being loadable; not a huge issue, but worth noting. The second is that killing the subprocess breaks checkpoint loading until a checkpoint link has been added or removed, or the addon is disabled and re-enabled.
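
To make the first issue concrete, an indexing scheme keyed on basename (illustrative only, not the addon's actual code) behaves like this: two files with the same name in different linked folders collapse into one entry.

```python
# Illustrative sketch: indexing linked folders/files by basename means a later
# checkpoint with the same name silently replaces an earlier one.
from pathlib import Path

def index_checkpoints(linked_paths):
    index = {}
    for entry in linked_paths:
        entry = Path(entry)
        files = entry.rglob("*") if entry.is_dir() else [entry]
        for f in files:
            if f.suffix in {".safetensors", ".ckpt"}:
                index[f.stem] = f  # basename collision: last one wins
    return index
```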

NullSenseStudio commented 1 year ago

I've noticed ControlNet pipelines don't convert to/from a non-ControlNet pipeline with the same weights; it doesn't seem to be accounted for in the auto pipelines' from_pipe() method. I may just have to add another cache invalidation check for that.
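
For context, from_pipe() reuses the already loaded components between auto pipeline tasks, but (at least in the version I'm on) it doesn't add or drop a ControlNet, which is why this needs its own invalidation check. Sketch:

```python
# Sketch: component reuse via from_pipe(); swapping a ControlNet in or out is
# not covered by this path, so the cache has to treat it as a different pipeline.
from diffusers import AutoPipelineForImage2Image, AutoPipelineForText2Image

text2img = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5")
img2img = AutoPipelineForImage2Image.from_pipe(text2img)  # shares weights, no re-download
```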

NullSenseStudio commented 1 year ago

SDXL base + refiner can now remain loaded together on CUDA. With model offloading, I've measured around 28% less time compared to reloading each pipeline whenever it is used; the savings should be even higher for cards with enough VRAM to not need model offloading. I've also extended this to keeping ControlNets cached between iterations, but I don't have any timings to share for that.
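
A rough sketch of keeping both pipelines resident with model offloading (the addon's caching layer itself is omitted):

```python
# Sketch: load base and refiner once, share the large components, and rely on
# model offloading so both can stay loaded without exhausting VRAM.
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16, variant="fp16",
)
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    torch_dtype=torch.float16, variant="fp16",
    text_encoder_2=base.text_encoder_2, vae=base.vae,  # share components
)
base.enable_model_cpu_offload()
refiner.enable_model_cpu_offload()
```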

I've also made it possible to save imported weights in half precision, and importing now shows its progress instead of freezing Blender until it's done.
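
Saving in half precision is essentially a cast before save_pretrained (sketch; paths are placeholders):

```python
# Sketch: write an imported pipeline back out as fp16 .safetensors files.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("imported/model")  # placeholder path
pipe.to(torch.float16)
pipe.save_pretrained(
    "imported/model-fp16",   # placeholder output directory
    variant="fp16",
    safe_serialization=True,  # write .safetensors
)
```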