Ablation Study: Why ControlNets use deep encoder? What if it was lighter? Or even an MLP?
In 2023, if we want to train an encoder to perform some tasks, we have four basic options as follows:
In our problem, we want to control Stable Diffusion, and the encoder will be trained jointly with a big Stable Diffusion (SD) model. Because of this, option (3) requires super large computation power and is not practical unless you have as many A100s as EMostaque does. But we do not have that, so we may just forget about (3).
Options (1) and (2) are similar and can be merged. They usually have similar performance.
Note that
are both relatively preferred methods. Which one is "harder" or "easier" to train is a complicated question that even depends on the training environment. We should not presume learning behavior simply by looking at the number of parameters.
But in this post, let's focus on the qualitative differences between these methods once they are trained successfully.
Let us consider these architectures:
Below is the model architecture that we released many days ago as ControlNet.
It directly uses the encoder of Stable Diffusion (SD) as a trainable copy. Because it copies SD itself, let us call it ControlNet-Self.
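To make the structure concrete, here is a minimal PyTorch sketch of the idea (an illustrative simplification, not the actual cldm code; it omits the timestep embedding, the prompt cross-attention, and the small hint encoder): the SD encoder blocks are cloned as a trainable copy, and their outputs pass through zero convolutions before being added to the locked SD decoder.

```python
import copy
import torch.nn as nn

def zero_conv(channels):
    # 1x1 convolution initialized to zero, so the copy contributes nothing at the start of training
    conv = nn.Conv2d(channels, channels, kernel_size=1)
    nn.init.zeros_(conv.weight)
    nn.init.zeros_(conv.bias)
    return conv

class ControlNetSelfSketch(nn.Module):
    def __init__(self, sd_encoder_blocks, block_channels):
        super().__init__()
        # Trainable copy of the (locked) SD encoder: this is the "self" part
        self.copy_blocks = copy.deepcopy(sd_encoder_blocks)
        self.zero_convs = nn.ModuleList([zero_conv(c) for c in block_channels])

    def forward(self, x_plus_hint):
        controls, h = [], x_plus_hint
        for block, zc in zip(self.copy_blocks, self.zero_convs):
            h = block(h)
            controls.append(zc(h))  # each control is added to the corresponding locked SD decoder feature
        return controls
```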
Below is a typical architecture for training lightweight encoders from scratch. We just use some simple convolution layers to encode the control image.
Because it has relatively fewer parameters, let's call it ControlNet-Lite. Channels of layers are computed by instantiating the ldm Python object.
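For reference, a rough sketch of what such a lightweight encoder might look like (the channel widths 320/640/1280/1280 are my assumption to match SD 1.x; the actual experiment reads them from the instantiated ldm object):

```python
import torch.nn as nn

class ControlNetLiteSketch(nn.Module):
    """Illustrative stand-in for a lightweight conv encoder trained from scratch.
    It downsamples the control image with plain strided convolutions and emits one
    feature map per SD encoder resolution, to be added to the locked SD decoder."""
    def __init__(self, out_channels=(320, 640, 1280, 1280)):
        super().__init__()
        blocks, c_in = [], 3
        for c_out in out_channels:
            blocks.append(nn.Sequential(
                nn.Conv2d(c_in, c_out, kernel_size=3, stride=2, padding=1),
                nn.SiLU(),
                nn.Conv2d(c_out, c_out, kernel_size=3, padding=1),
                nn.SiLU(),
            ))
            c_in = c_out
        self.blocks = nn.ModuleList(blocks)

    def forward(self, hint):
        feats, h = [], hint
        for block in self.blocks:
            h = block(h)
            feats.append(h)
        return feats
```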
Below is a more extreme case that just uses MLPs.
In recent years, MLPs have suddenly become popular again, and they are actually just $1\times 1$ convolutions. We use average pooling for downsampling, so let us call it "ControlNet-MLP". Channels of layers are computed by instantiating the ldm Python object.
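And a corresponding sketch of the MLP variant (again with assumed SD 1.x channel widths): only $1\times 1$ convolutions, with average pooling doing all the downsampling.

```python
import torch.nn as nn

class ControlNetMLPSketch(nn.Module):
    """Illustrative per-pixel MLP encoder: 1x1 convolutions only, so no layer can
    look at spatial context; average pooling handles the downsampling."""
    def __init__(self, out_channels=(320, 640, 1280, 1280)):
        super().__init__()
        blocks, c_in = [], 3
        for c_out in out_channels:
            blocks.append(nn.Sequential(
                nn.AvgPool2d(kernel_size=2),            # downsample without learning spatial filters
                nn.Conv2d(c_in, c_out, kernel_size=1),  # "MLP" = 1x1 convolution
                nn.SiLU(),
                nn.Conv2d(c_out, c_out, kernel_size=1),
                nn.SiLU(),
            ))
            c_in = c_out
        self.blocks = nn.ModuleList(blocks)

    def forward(self, hint):
        feats, h = [], hint
        for block in self.blocks:
            h = block(h)
            feats.append(h)
        return feats
```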
This house image is just the first search result when I search "house" on Pinterest. Let us use it as an example:
And this is the synthesized scribble map after the preprocessor (you can use our scribble code to get this).
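If you just want something quick to play with, a crude stand-in for such a preprocessor could look like the sketch below (this is not the repo's actual scribble code; it simply binarizes detected edges):

```python
import cv2
import numpy as np

def rough_scribble(image_path, low=80, high=160):
    # Crude stand-in for a scribble preprocessor: edges -> slight dilation -> binary map.
    # The repo's own scribble code produces nicer synthesized scribbles; this is illustration only.
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, low, high)
    edges = cv2.dilate(edges, np.ones((3, 3), np.uint8), iterations=1)  # thicken strokes a bit
    return np.where(edges > 127, 255, 0).astype(np.uint8)               # white scribbles on black
```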
Then let me show off my prompt engineering skills a bit. I want a house under the winter snow. I will use this prompt:
Professional high-quality wide-angle digital art of a house designed by frank lloyd wright.
A delightful winter scene. photorealistic, epic fantasy, dramatic lighting, cinematic,
extremely high detail, cinematic lighting, trending on artstation, cgsociety, realistic
rendering of Unreal Engine 5, 8k, 4k, HQ, wallpaper
And this negative prompt:
longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits,
cropped, worst quality, low quality
The ControlNet-Self is just our final released ControlNet, and you can actually reproduce the results with the parameters below. Note that we use the same random seed 123456 for all experiments and generate 16 images without cherry-picking.
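The original results were produced with this repo's gradio apps, but if you prefer a scripted run, something like the diffusers ControlNet pipeline below should approximate the same setting (the model id, step count, and scribble file name here are my assumptions):

```python
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

# Assumed checkpoints: the released scribble ControlNet on top of SD 1.5
controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-scribble", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

scribble = load_image("house_scribble.png")  # hypothetical path to the scribble map above

prompt = (
    "Professional high-quality wide-angle digital art of a house designed by frank lloyd wright. "
    "A delightful winter scene. photorealistic, epic fantasy, dramatic lighting, cinematic, "
    "extremely high detail, cinematic lighting, trending on artstation, cgsociety, realistic "
    "rendering of Unreal Engine 5, 8k, 4k, HQ, wallpaper"
)
negative_prompt = (
    "longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, "
    "cropped, worst quality, low quality"
)

images = pipe(
    prompt,
    image=scribble,
    negative_prompt=negative_prompt,
    num_inference_steps=20,
    num_images_per_prompt=16,                                # 16 images, no cherry-picking
    generator=torch.Generator("cuda").manual_seed(123456),   # same seed 123456 for all experiments
).images
```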
It seems that they all give good results! The only differences are in some aesthetic aspects.
But why? Is the problem of controlling Stable Diffusion so trivial that everything works very well?
Why not turn off the ControlNet and see what happens:
Ah, then the secret trick is clear!
Because my prompts are carefully prepared, even without any control, the standard Stable Diffusion can already generate similar images that have many "overlapping concepts/semantics/shapes" with the input scribble maps.
In this case, it is true that every method can work very well.
In fact, in such an "easy" experimental setting, I believe Sketch-Guided Diffusion or even anisotropic filtering would also work very well to change the shape of objects and fit them to a user-specified structure.
But what about some other cases?
Here we must introduce the Non-Prompt Test (NPT), a test that avoids the influence of prompts and measures the "pure" capability of the ControlNet encoder.
NPT is simple: just remove all prompts (and put the image conditions on the "c" side of the CFG formulation `prd = uc + (c - uc) * cfg_scale` so that the CFG scale still works). In our user interface, we call this "Guess Mode" because the model seems to guess the contents from the input control maps.
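In code, the idea is roughly the sketch below (an illustrative pseudo-implementation; the `unet(x, t, context, control)` signature is an assumption, not the repo's actual call): both branches receive an empty prompt, but only the conditional branch sees the control features, so the CFG scale still amplifies whatever the ControlNet encoder recognized on its own.

```python
def guess_mode_prediction(unet, x, t, empty_text_emb, control, cfg_scale):
    """Non-Prompt Test / "Guess Mode" style classifier-free guidance (illustrative sketch)."""
    uc = unet(x, t, context=empty_text_emb, control=None)     # unconditional branch: no control
    c = unet(x, t, context=empty_text_emb, control=control)   # conditional branch carries the control
    return uc + (c - uc) * cfg_scale                          # prd = uc + (c - uc) * cfg_scale
```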
Because no prompt is available, the ControlNet encoder must recognize everything on its own. This is really challenging, and note that all our production-ready ControlNets have passed extensive NPT tests before we made them publicly available.
The "ControlNet-Self" is just our final released ControlNet and you can actually reproduce the results with below parameters. Note that we do not input any prompts.
Now things are much clearer.
The answer depends on your goal.
But if you want to achieve a system with quality similar to Style2Paints V5, then to the best of my knowledge, ControlNet-Self is the only solution.
Now we also know why we need these zero convolutions.
Just imagine what would happen if those connection layers were not initialized as zeros: the random, untrained outputs at the very beginning of training would immediately damage the pretrained trainable copy.
The risk is very high that you would just be training the already destroyed trainable copy from scratch again. Obtaining the aforementioned object recognition capability would then require extensive retraining, similar to the amount of training required to produce the Stable Diffusion model itself.
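A tiny, self-contained check of the zero-convolution behavior (illustrative only, not the cldm implementation): at initialization the added control is exactly zero, so the pretrained weights are undisturbed, yet the layer still receives gradients and can learn.

```python
import torch
import torch.nn as nn

# Zero-initialized 1x1 convolution, as used to connect the trainable copy to the locked model
zero_conv = nn.Conv2d(8, 8, kernel_size=1)
nn.init.zeros_(zero_conv.weight)
nn.init.zeros_(zero_conv.bias)

x = torch.randn(1, 8, 16, 16)
y = zero_conv(x)
print(y.abs().max())                           # 0.0: no disturbance at the start of training

y.sum().backward()
print(zero_conv.weight.grad.abs().sum() > 0)   # True: the zero conv still gets gradients and can learn
```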
We also know why it is important that the ControlNet encoder should also receive prompts:
With this part, the ControlNet encoder's object recognition can be guided by the prompts, so that when prompts are given, the user can steer or override what the encoder guesses from the control map.
For example, we already know that without prompts the model can recognize the house in the house scribble map, but we can still turn it into cakes by using the prompt "delicious cakes" with that house scribble map.
Finally, note that this field is moving very fast, and we won't be surprised if some method suddenly comes out that uses just a few parameters and can also recognize objects equally well.