lllyasviel / ControlNet

Let us control diffusion models!
Apache License 2.0

Could you please introduce your process of building the pose ControlNet dataset? #594

Closed HuXinjing closed 7 months ago

HuXinjing commented 7 months ago

After reading your paper and docs, I find that this work is just another LoRA-style adapter that takes a prompt as input; nothing exciting except your training dataset construction. I wonder how much effort it took to build this dataset with so many different pictures. As I see it, you would need to label or caption more than 50k pictures generated by other algorithms.

geroldmeisinger commented 7 months ago

You can find some information about the training and the dataset in the supplementary material: https://openaccess.thecvf.com/content/ICCV2023/html/Zhang_Adding_Conditional_Control_to_Text-to-Image_Diffusion_Models_ICCV_2023_paper.html -> [supp]

Human Pose (Openpifpaf) We use learning-based pose estimation method [7] to “find” humans from internet using a simple rule: an image with human must have at least 30% of the key points of the whole body detected. We obtain 80K pose-image-caption pairs. (Captions are obtained directly from internet websites.) Note that we directly use visualized pose images with human skeletons as training condition. The model is trained using 400 GPU-hours on a single NVIDIA RTX 3090TI GPU. The base model is Stable Diffusion V2.1. The batch size is 18 (physical batch size is 3, with 6× gradient accumulation). The learning rate is 1e-5. We do not use ema weights.
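The "batch size is 18 (physical batch size is 3, with 6× gradient accumulation)" detail above just means gradients from six micro-batches of 3 are averaged before a single optimizer step. A minimal pure-Python sketch of why that is equivalent to one step on the full batch of 18 (a toy squared loss; the actual training used PyTorch, so all names here are illustrative):

```python
# Toy illustration of gradient accumulation: physical batch 3, 6x
# accumulation -> effective batch 18 (numbers from the supplement).
# Hypothetical functions; not the authors' training code.

def grad(sample, w):
    # gradient of the toy loss 0.5 * (w - sample)^2 w.r.t. w
    return w - sample

def accumulated_step(samples, w, lr, micro_batch=3, accum_steps=6):
    """Average gradients over accum_steps micro-batches, then update once."""
    assert len(samples) == micro_batch * accum_steps
    total = 0.0
    for i in range(accum_steps):
        chunk = samples[i * micro_batch:(i + 1) * micro_batch]
        # mean gradient over one physical micro-batch of 3
        total += sum(grad(s, w) for s in chunk) / micro_batch
    # one optimizer step per 18 samples
    return w - lr * (total / accum_steps)

def full_batch_step(samples, w, lr):
    """Reference: one step on the full batch of 18 at once."""
    g = sum(grad(s, w) for s in samples) / len(samples)
    return w - lr * g
```

Because every micro-batch has the same size, the averaged accumulated gradient equals the full-batch gradient, so the two updates match exactly; accumulation only trades memory for extra forward/backward passes.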

Human Pose (Openpose) We use learning-based pose estimation method [3] to find humans from internet using the same rule in the above Openpifpaf setting. We obtain 200K pose-image-caption pairs. (Captions are obtained directly from internet websites.) Note that we directly use visualized pose images with human skeletons as training condition. The model is trained using 300 GPU-hours with NVIDIA A100 80GB GPUs. This model is trained with Stable Diffusion V1.5. Other settings are the same as the above Openpifpaf. The batch size is 32. The learning rate is 1e-5. We do not use ema weights.
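The filtering rule quoted above ("an image with human must have at least 30% of the key points of the whole body detected") can be sketched as a simple ratio check over detector confidences. The keypoint count and confidence threshold below are assumptions, not from the paper; the authors used Openpifpaf/Openpose as the detectors:

```python
# Hedged sketch of the 30%-keypoint filtering rule from the supplement.
# NUM_BODY_KEYPOINTS and CONF_THRESHOLD are assumptions for illustration.

NUM_BODY_KEYPOINTS = 17     # COCO-style body keypoints (assumption)
CONF_THRESHOLD = 0.5        # detector confidence cutoff (assumption)
MIN_DETECTED_RATIO = 0.30   # the rule stated in the supplement

def keep_image(keypoint_confidences):
    """keypoint_confidences: one detector confidence per keypoint,
    0.0 when the keypoint was not detected at all."""
    detected = sum(1 for c in keypoint_confidences if c >= CONF_THRESHOLD)
    return detected / len(keypoint_confidences) >= MIN_DETECTED_RATIO
```

With 17 keypoints, an image with 6 confidently detected keypoints (about 35%) would pass, while one with 4 (about 24%) would be dropped; the surviving images are then paired with the rendered skeleton visualization and a web-scraped caption.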

HuXinjing commented 7 months ago


Holy... that's an impossible quantity in my field. Anyway, the data collection described in the Experiments section alone makes it more than worthy of the best paper award. Thx, LOL