invoke-ai / InvokeAI

Invoke is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, and serves as the foundation for multiple commercial products.
https://invoke-ai.github.io/InvokeAI/
Apache License 2.0
23.8k stars 2.45k forks

[enhancement]: Support SDXL Checkpoint VAEs #6483

Open psychedelicious opened 5 months ago

psychedelicious commented 5 months ago

Is there an existing issue for this?

Contact Details

No response

What should this feature add?

Support SDXL Checkpoint VAEs. For example, the Pony v6 VAE found here: https://civitai.com/models/257749/pony-diffusion-v6-xl

Alternatives

No response

Additional Content

No response

psychedelicious commented 5 months ago

The VAE works with this diff:

diff --git a/invokeai/backend/model_manager/load/model_loaders/vae.py b/invokeai/backend/model_manager/load/model_loaders/vae.py
index 122b2f079..34192fc4c 100644
--- a/invokeai/backend/model_manager/load/model_loaders/vae.py
+++ b/invokeai/backend/model_manager/load/model_loaders/vae.py
@@ -24,6 +24,7 @@ from .generic_diffusers import GenericDiffusersLoader
 @ModelLoaderRegistry.register(base=BaseModelType.Any, type=ModelType.VAE, format=ModelFormat.Diffusers)
 @ModelLoaderRegistry.register(base=BaseModelType.StableDiffusion1, type=ModelType.VAE, format=ModelFormat.Checkpoint)
 @ModelLoaderRegistry.register(base=BaseModelType.StableDiffusion2, type=ModelType.VAE, format=ModelFormat.Checkpoint)
+@ModelLoaderRegistry.register(base=BaseModelType.StableDiffusionXL, type=ModelType.VAE, format=ModelFormat.Checkpoint)
 class VAELoader(GenericDiffusersLoader):
     """Class to load VAE models."""

@@ -40,12 +41,8 @@ class VAELoader(GenericDiffusersLoader):
             return True

     def _convert_model(self, config: AnyModelConfig, model_path: Path, output_path: Optional[Path] = None) -> AnyModel:
-        # TODO(MM2): check whether sdxl VAE models convert.
-        if config.base not in {BaseModelType.StableDiffusion1, BaseModelType.StableDiffusion2}:
-            raise Exception(f"VAE conversion not supported for model type: {config.base}")
-        else:
-            assert isinstance(config, CheckpointConfigBase)
-            config_file = self._app_config.legacy_conf_path / config.config_path
+        assert isinstance(config, CheckpointConfigBase)
+        config_file = self._app_config.legacy_conf_path / config.config_path

         if model_path.suffix == ".safetensors":
             checkpoint = safetensors_load_file(model_path, device="cpu")
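The tail of `_convert_model` shown above dispatches on the file extension: `.safetensors` files go through the safetensors loader, anything else through torch's pickle loader. A stdlib-only sketch of that dispatch, with hypothetical stand-in loaders in place of `safetensors_load_file` and `torch.load`:

```python
from pathlib import Path
from typing import Dict

# Hypothetical stand-ins for the real loaders used in the diff
# (safetensors_load_file and torch.load); each would return a state dict.
def load_safetensors(path: Path) -> Dict:
    return {"format": "safetensors"}

def load_torch_pickle(path: Path) -> Dict:
    return {"format": "torch-pickle"}

def load_state_dict(model_path: Path) -> Dict:
    # Same branch as the diff: .safetensors files use the safetensors
    # loader, everything else falls back to torch's pickle format.
    if model_path.suffix == ".safetensors":
        return load_safetensors(model_path)
    return load_torch_pickle(model_path)

print(load_state_dict(Path("ponyDiffusionV6XL_vae.safetensors"))["format"])
print(load_state_dict(Path("ponyDiffusionV6XL_vae.ckpt"))["format"])
```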

However, I get better results with the VAE baked into the pony model: [image]

Here's using the converted ckpt VAE: [image]

For kicks, inserting a CLIP Skip node with any value other than 0 results in blobs: [image]

Yaruze66 commented 5 months ago

I was unable to successfully convert and use these VAEs.

Initially, when I add them, they are detected as SD 1.x, and I manually change the base to SDXL. However, I then get an error when using them.

[2024-06-04 22:13:57,237]::[InvokeAI]::ERROR --> Error while invoking session 48d8845b-2455-47a6-8f3c-0ef079e1f889, invocation 1d9e3710-2f89-4ff2-9722-3449fe38ed13 (l2i):
No subclass of LoadedModel is registered for base=sdxl, type=vae, format=checkpoint
[2024-06-04 22:13:57,237]::[InvokeAI]::ERROR --> Traceback (most recent call last):
  File "D:\invokeai\.venv\lib\site-packages\invokeai\app\services\session_processor\session_processor_default.py", line 185, in _process
    outputs = self._invocation.invoke_internal(
  File "D:\invokeai\.venv\lib\site-packages\invokeai\app\invocations\baseinvocation.py", line 289, in invoke_internal
    return self.invoke(context)
  File "D:\invokeai\.venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\invokeai\.venv\lib\site-packages\invokeai\app\invocations\latent.py", line 1040, in invoke
    vae_info = context.models.load(self.vae.vae)
  File "D:\invokeai\.venv\lib\site-packages\invokeai\app\services\shared\invocation_context.py", line 360, in load
    return self._services.model_manager.load.load_model(model, _submodel_type, self._data)
  File "D:\invokeai\.venv\lib\site-packages\invokeai\app\services\model_load\model_load_default.py", line 74, in load_model
    implementation, model_config, submodel_type = self._registry.get_implementation(model_config, submodel_type)  # type: ignore
  File "D:\invokeai\.venv\lib\site-packages\invokeai\backend\model_manager\load\model_loader_registry.py", line 97, in get_implementation
    raise NotImplementedError(
NotImplementedError: No subclass of LoadedModel is registered for base=sdxl, type=vae, format=checkpoint

Same thing even with the safetensors from here.
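The `NotImplementedError` in the traceback comes from a registry lookup keyed on (base, type, format): without the extra `@ModelLoaderRegistry.register(...)` line from the diff, the SDXL key simply has no entry. A simplified, hypothetical sketch of that mechanism (not InvokeAI's actual classes; string keys stand in for the real enums):

```python
from typing import Callable, Dict, Tuple, Type

class ModelLoaderRegistry:
    """Simplified stand-in for InvokeAI's loader registry."""

    _registry: Dict[Tuple[str, str, str], Type] = {}

    @classmethod
    def register(cls, base: str, type: str, format: str) -> Callable[[Type], Type]:
        # Decorator: one loader class may be registered under several keys,
        # which is why the diff stacks multiple @register lines.
        def decorator(loader_cls: Type) -> Type:
            cls._registry[(base, type, format)] = loader_cls
            return loader_cls
        return decorator

    @classmethod
    def get_implementation(cls, base: str, type: str, format: str) -> Type:
        key = (base, type, format)
        if key not in cls._registry:
            # Mirrors the error seen in the traceback above.
            raise NotImplementedError(
                f"No subclass of LoadedModel is registered for "
                f"base={base}, type={type}, format={format}"
            )
        return cls._registry[key]

@ModelLoaderRegistry.register(base="sd-1", type="vae", format="checkpoint")
@ModelLoaderRegistry.register(base="sdxl", type="vae", format="checkpoint")
class VAELoader:
    """Stand-in loader class."""

print(ModelLoaderRegistry.get_implementation("sdxl", "vae", "checkpoint").__name__)
```

Without the `sdxl` registration line, the lookup for an SDXL checkpoint VAE falls through to the `NotImplementedError` branch, which is exactly the failure users hit before the patch.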

JWBWork commented 5 months ago

Same issue as @Yaruze66, also seeing new VAEs added as 1.5 by default. I get a slightly different exception, but it looks like it means the same thing; the stack trace is identical.

NotImplementedError: No subclass of LoadedModel is registered for base=BaseModelType.StableDiffusionXL, type=ModelType.VAE, format=ModelFormat.Checkpoint

psychedelicious commented 5 months ago

Yes, that's the same problem this issue is about.

ft-scobra commented 4 months ago

I have the same issue.