Confusion on UNet Block in SD 2.1

I am confused about the UNet architecture in Stable Diffusion V1.5/V2.1.
In the paper Adding Conditional Control to Text-to-Image Diffusion Models, the authors describe the UNet blocks as follows, claiming that each main block contains 4 ResNet layers and 2 ViTs, and that such a block is repeated 3 times:
We use Stable Diffusion [71] as an example to show how ControlNet can add conditional control to a large pretrained diffusion model. Stable Diffusion is essentially a U-Net [72] with an encoder, a middle block, and a skip-connected decoder. Both the encoder and decoder contain 12 blocks, and the full model contains 25 blocks, including the middle block. Of the 25 blocks, 8 blocks are down-sampling or up-sampling convolution layers, while the other 17 blocks are main blocks that each contain 4 resnet layers and 2 Vision Transformers (ViTs). Each ViT contains several cross-attention and self-attention mechanisms. For example, in Figure 3a, the “SD Encoder Block A” contains 4 resnet layers and 2 ViTs, while the “×3” indicates that this block is repeated three times.
However, when I look at the PyTorch huggingface/diffusers implementation, it seems that each encoder block contains 2 ResNet layers and 2 ViTs, each decoder block contains 3 ResNet layers and 3 ViTs, and the blocks are not repeated; see the sketch below.
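
For reference, this is a minimal sketch of how I counted the layers per block with diffusers (assuming the runwayml/stable-diffusion-v1-5 checkpoint is available; any SD 1.x/2.x UNet should behave the same way):

# Minimal sketch: count ResNet and transformer layers per UNet block
# using huggingface/diffusers (assumes the "runwayml/stable-diffusion-v1-5"
# weights are available locally or via the Hub).
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)

def describe(blocks, label):
    for i, block in enumerate(blocks):
        n_res = len(block.resnets)
        # Plain DownBlock2D/UpBlock2D stages have no attention layers.
        n_attn = len(block.attentions) if hasattr(block, "attentions") else 0
        print(f"{label} block {i}: {n_res} resnets, {n_attn} transformer blocks")

describe(unet.down_blocks, "down")  # encoder side
describe(unet.up_blocks, "up")      # decoder side

For SD 1.5 this reports 2 resnets per down block (plus 2 attentions in the cross-attention blocks) and 3 resnets per up block (plus 3 attentions where present), which is where my 2/2 and 3/3 counts above come from.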
Supplement

@inproceedings{zhangAddingConditionalControl2023,
title = {Adding {{Conditional Control}} to {{Text-to-Image Diffusion Models}}},
booktitle = {2023 {{IEEE}}/{{CVF International Conference}} on {{Computer Vision}} ({{ICCV}})},
author = {Zhang, Lvmin and Rao, Anyi and Agrawala, Maneesh},
year = {2023},
month = oct,
pages = {3813--3824},
issn = {2380-7504},
doi = {10.1109/ICCV51070.2023.00355},
urldate = {2024-06-25},
abstract = {We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with "zero convolutions" (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, e.g., edges, depth, segmentation, human pose, etc., with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small ({$<$}50k) and large ({$>$}1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models.},
}