lllyasviel / ControlNet


How to test ControlNet on the validation set? #595

Open · KhawlahB opened this issue 10 months ago

KhawlahB commented 10 months ago

Hello,

I am wondering how to test the ControlNet model on the validation set, and where I can find the loss values for each training epoch.

Also, my training has reached epoch 17. If I stop the training with CTRL+C, will the checkpoint saved so far (up to epoch 17) be kept, or will I lose everything? Do I need to finish the whole training run to get the ckpt file?

@scarbain @williamyang1991 @lllyasviel @eltociear

Your collaboration is highly appreciated...

geroldmeisinger commented 10 months ago

See https://civitai.com/articles/2078/play-in-control-controlnet-training-setup-guide if you want to use the diffusers training script; it has a parameter for the checkpointing steps. I don't know how and when the official script saves its checkpoints.
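
For the official tutorial_train.py, which trains with PyTorch Lightning, one way to get periodic checkpoints (so an interrupted run is not lost) is to add a ModelCheckpoint callback. This is just a sketch, not something the repo ships; the step interval and the callbacks list are assumptions:

# Sketch only: periodic checkpointing for tutorial_train.py (PyTorch Lightning).
# The step interval below is an arbitrary example value.
from pytorch_lightning.callbacks import ModelCheckpoint

checkpoint_callback = ModelCheckpoint(
    every_n_train_steps=1000,  # write a checkpoint every 1000 training steps
    save_top_k=-1,             # keep every checkpoint instead of only the "best" one
    save_last=True,            # also maintain a rolling last.ckpt
)

# In tutorial_train.py, pass it next to the existing ImageLogger callback:
# trainer = pl.Trainer(gpus=1, precision=32, callbacks=[logger, checkpoint_callback])

By default Lightning writes these checkpoints under lightning_logs/version_X/checkpoints/.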

KhawlahB commented 10 months ago

I already trained the model on your dataset using tutorial_train.py. My question is how I can run inference / evaluate the model on the validation set using the checkpoint file; I could not find any script for inference. Please see the attached image.

I need it for a research-based project, so I want to evaluate the model / run inference with the trained model without the interactive demo applications you provide, and just look at the outputs. I want to evaluate it on the validation set and see the values of the evaluation metrics. Can you please provide me with the inference script? I really need it for this research project.

[attached screenshot: Screenshot 2023-12-04 023221]

I hope you got my point.

Your help is highly appreciated.

@geroldmeisinger @scarbain @williamyang1991 @lllyasviel

geroldmeisinger commented 10 months ago

All the gradio_*.py files are inference demos. You should be able to adapt the code and run it on your model.

KhawlahB commented 10 months ago

geroldmeisinger wrote: "All the gradio_*.py files are inference demos. You should be able to adapt the code and run it on your model."

I got that point, but I mean an actual inference script. I don't just want to see the inference demo; I want to run inference on the validation set, play with the hyperparameters, and then test on the test set. As I told you, this is a research-based project, so I can't just use the interactive demo to evaluate the trained model. That's why I asked for an inference script.

Alternatively, could you recommend a diffusion model project that provides both training and inference as Python scripts?

your collaboration is highly appreciated.

@geroldmeisinger

geroldmeisinger commented 10 months ago

sorry, but I don't understand what you need.

Bilal143260 commented 9 months ago

(quoting KhawlahB's earlier comment above, which asks for an inference script to evaluate the trained checkpoint on the validation set)

Any luck with the inference script?

sweetDream6609 commented 8 months ago

I need the inference script too. Any luck with it?

SummerWRain commented 6 months ago

@KhawlahB @sweetDream6609 @Bilal143260 This problem bothered me for a long time, but after some effort I wrote an inference script, and it now works reasonably well. I hope it helps you. My work doesn't require a text prompt, so the script has no text input. My coding skills are limited, so if anyone can optimize it further, that would be great!

from share import *

import os
import cv2
import numpy as np
import torch
import einops
from PIL import Image

from cldm.model import create_model, load_state_dict
from cldm.ddim_hacked import DDIMSampler
from annotator.util import resize_image

# Configs
resume_path = '/ControlNet/lightning_logs/version_6/checkpoints/last.ckpt'  # your checkpoint path
N = 1            # batch size (number of samples per conditioning image)
ddim_steps = 50  # number of DDIM sampling steps

torch.set_grad_enabled(False)  # inference only, no gradients needed

# Use the same YAML config the checkpoint was trained with
# (cldm_v15.yaml for an SD 1.5 base, cldm_v21.yaml for an SD 2.1 base).
model = create_model('./models/cldm_v21.yaml').cpu()
model.load_state_dict(load_state_dict(resume_path, location='cuda'))
model = model.cuda()
ddim_sampler = DDIMSampler(model)

# Load the conditioning (hint/control) image, e.g. from the validation set
img_path = 'your image path'
img = cv2.imread(img_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = resize_image(img, 512)

# Build the control tensor: (N, C, H, W), values in [0, 1]
control = torch.from_numpy(img.copy()).float().cuda() / 255.0
control = torch.stack([control for _ in range(N)], dim=0)
control = einops.rearrange(control, 'b h w c -> b c h w').clone()
c_cat = control.cuda()

# No text prompt is used, so both the conditional and the unconditional
# cross-attention inputs are the "empty" (unconditional) embedding.
c_cross = model.get_unconditional_conditioning(N)
uc_cross = model.get_unconditional_conditioning(N)
uc_cat = c_cat
cond = {"c_concat": [c_cat], "c_crossattn": [c_cross]}
uc_full = {"c_concat": [uc_cat], "c_crossattn": [uc_cross]}

# Latent shape is 1/8 of the image resolution with 4 channels
b, _, h, w = c_cat.shape
shape = (4, h // 8, w // 8)

samples, intermediates = ddim_sampler.sample(ddim_steps, N,
                                             shape, cond, verbose=False, eta=0.0,
                                             unconditional_guidance_scale=9.0,
                                             unconditional_conditioning=uc_full)

# Decode the latents back to image space and convert to a uint8 HWC array
x_samples = model.decode_first_stage(samples)
x_samples = x_samples.squeeze(0)                        # N = 1, drop the batch dimension
x_samples = (x_samples + 1.0) / 2.0                     # [-1, 1] -> [0, 1]
x_samples = x_samples.transpose(0, 1).transpose(1, 2)   # CHW -> HWC
x_samples = x_samples.cpu().numpy()
x_samples = np.clip(x_samples * 255, 0, 255).astype(np.uint8)

os.makedirs('./outputs', exist_ok=True)
image_name = img_path.split('/')[-1]
Image.fromarray(x_samples).save('./outputs/' + image_name)

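If you want to turn this into a validation-set evaluation, here is a minimal follow-up sketch. It assumes the generated images are in ./outputs and the ground-truth validation images are in ./val_gt (both folder names are placeholders), and it uses torchmetrics' FID as one example metric; none of this is part of the ControlNet repo:

# Sketch only: FID between generated outputs and ground-truth validation images.
# Folder paths and the choice of torchmetrics/FID are assumptions.
import os
import numpy as np
import torch
from PIL import Image
from torchmetrics.image.fid import FrechetInceptionDistance

def load_folder(folder):
    """Load all images in a folder as a uint8 tensor of shape (N, 3, H, W)."""
    imgs = []
    for name in sorted(os.listdir(folder)):
        img = Image.open(os.path.join(folder, name)).convert('RGB').resize((512, 512))
        imgs.append(torch.from_numpy(np.array(img)).permute(2, 0, 1))
    return torch.stack(imgs)

fid = FrechetInceptionDistance(feature=2048)
fid.update(load_folder('./val_gt'), real=True)    # ground-truth validation images
fid.update(load_folder('./outputs'), real=False)  # images generated by the script above
print('FID:', fid.compute().item())

Note that FID only becomes meaningful with a reasonably large number of images; for per-image comparison against ground truth, metrics such as SSIM or LPIPS are common alternatives.
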
scarbain commented 6 months ago

Why don't you use the diffusers pipeline to perform inference with your trained ControlNet? Personally, I use diffusers both to train and to run inference: https://huggingface.co/docs/diffusers/api/pipelines/controlnet

It's easier and it's optimised
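
For reference, here is a minimal sketch of diffusers-based inference. The model paths and filenames are placeholders, and a ControlNet trained with the original repo would first need to be converted to the diffusers format:

# Sketch only: inference with a diffusers-format ControlNet; repo IDs/paths are placeholders.
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained("path/to/converted_controlnet", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # base model the ControlNet was trained against
    controlnet=controlnet,
    torch_dtype=torch.float16,
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()

control_image = load_image("validation_hint.png")  # conditioning image from the validation set
result = pipe("a text prompt", image=control_image, num_inference_steps=30).images[0]
result.save("diffusers_output.png")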

SummerWRain commented 6 months ago

@scarbain I understand, but to run inference with diffusers I would first have to convert my trained model with convert_original_controlnet_to_diffusers.py. I have tried this; it has not been feasible for me and often results in errors.

scarbain commented 6 months ago

What errors do you get?

Why don't you also train using diffusers? https://github.com/huggingface/diffusers/tree/main/examples/controlnet

innat-asj commented 3 months ago

You shouldn't rely on the diffusers scripts if you care about learning things step by step, especially for research or educational purposes. If you don't care about the training methodology and only care about your inputs and outputs, then go with the diffusers scripts. But for learning purposes, the official code is much better organized and documented.

I think the OP is looking for official evaluation scripts that load the trained checkpoint and perform quantitative and qualitative inference.