CQxiaocaimi opened this issue 1 year ago
I was wrong. The Sample_002 and Reconstruction_002 images should form a fixed pair, but the Sample_002 being generated now comes from another image that is completely unrelated to Reconstruction_002.
If you are new to ControlNet training, I suggest starting out with a simpler one, like an edge model, to understand how parameters and image preparation affect training; see https://github.com/lllyasviel/ControlNet/discussions/318#discussioncomment-7202122 and https://civitai.com/articles/2078 . 1-2 epochs are usually fine in my experience (assuming 200k samples, a32). I don't quite understand what the point of using the same prompt for all samples is, and you might be better off fine-tuning a base model.
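For reference, the standard training entry point looks roughly like the sketch below, adapted from the repo's tutorial_train.py. The checkpoint path, batch size, and learning rate are placeholders to adjust for your own setup, not recommendations.

import pytorch_lightning as pl
from torch.utils.data import DataLoader
from tutorial_dataset import MyDataset              # loads source/target pairs listed in prompt.json
from cldm.logger import ImageLogger
from cldm.model import create_model, load_state_dict

# Placeholders -- adjust to your own setup.
resume_path = './models/control_sd15_ini.ckpt'      # SD 1.5 weights with an initialized control branch
batch_size = 4
logger_freq = 300
learning_rate = 1e-5

model = create_model('./models/cldm_v15.yaml').cpu()
model.load_state_dict(load_state_dict(resume_path, location='cpu'))
model.learning_rate = learning_rate
model.sd_locked = True                               # keep the locked SD weights frozen
model.only_mid_control = False

dataset = MyDataset()
dataloader = DataLoader(dataset, num_workers=0, batch_size=batch_size, shuffle=True)
logger = ImageLogger(batch_frequency=logger_freq)    # periodically logs samples and reconstructions
trainer = pl.Trainer(gpus=1, precision=32, callbacks=[logger])

trainer.fit(model, dataloader)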
I have read this article and used the basic tutorial, but I am still having trouble. I want to train ip2p, but the images in Sample_002 and Reconstruction_002 have absolutely no relationship, when their relationship should be fixed. I have tried many different prompts, but the results are all the same. I sincerely hope to receive your help. I would like to know how you trained ip2p; could you explain your training process?
https://www.timothybrooks.com/instruct-pix2pix
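In case it helps, the pairing between samples and reconstructions comes entirely from the dataset: both are produced from the same logged batch, so the source (hint) and target loaded at a given index must belong together. Below is a sketch of a paired dataset in the style of the repo's tutorial_dataset.py, assuming one JSON object per line in prompt.json with "source", "target", and "prompt" keys; the directory and file names here are made up.

# Each line of prompt.json is assumed to look like:
#   {"source": "source/0000.png", "target": "target/0000.png", "prompt": "Turn it into a corresponding cartoon portrait"}
import json
import cv2
import numpy as np
from torch.utils.data import Dataset

class MyDataset(Dataset):
    def __init__(self, root='./training/ip2p_cartoon/'):   # hypothetical path
        self.root = root
        self.data = []
        with open(root + 'prompt.json', 'rt') as f:
            for line in f:
                self.data.append(json.loads(line))

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        item = self.data[idx]

        # Load the paired images: the real photo is the conditioning "hint",
        # the cartoon is the diffusion target. Both come from the same index,
        # which is what keeps samples and reconstructions aligned.
        source = cv2.imread(self.root + item['source'])
        target = cv2.imread(self.root + item['target'])

        source = cv2.cvtColor(source, cv2.COLOR_BGR2RGB)
        target = cv2.cvtColor(target, cv2.COLOR_BGR2RGB)

        source = source.astype(np.float32) / 255.0           # hint in [0, 1]
        target = (target.astype(np.float32) / 127.5) - 1.0   # target in [-1, 1]

        return dict(jpg=target, txt=item['prompt'], hint=source)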
btw your images are not visible
I'm here again. I recently tried training an IP2P model using v1-5-pruned.ckpt as the base model. I want to turn real-life photos into cartoon images, using the source / target / prompt.json data format with the instruction "Turn it into a corresponding cartoon portrait". Following another comment, I changed the model loading in the training file to:

model = create_model('PATH/TO/control_v11e_sd15_ip2p.yaml')
model.load_state_dict(load_state_dict('PATH/TO/v1-5-pruned.ckpt'), strict=False)
model.load_state_dict(load_state_dict('PATH/TO/control_v11e_sd15_ip2p.pth'), strict=False)

I haven't changed anything else, but the training results are not satisfactory: the generated images are completely different from the images in the target folder, just random and chaotic. I tried training for 50 epochs, but it was still as bad as before, and it seems to have plateaued by the second epoch. I don't know if there is something wrong with my training method. I have seen comments about using it in conjunction with gradio_ip2p.py, but I don't quite understand what that means. Is it just the model-setup part, or does the image preprocessing code need to be migrated into the training code? Sorry, I have little knowledge of code and am still a newbie, so I would like to ask you some detailed questions about training ip2p. I would like to know how to make the generated images match the images in target.
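Regarding gradio_ip2p.py: as far as I can tell, the relevant part is the conditioning. The ip2p ControlNet takes the raw input photo itself as the control image (there is no edge/depth detector step), resized and scaled to [0, 1]. Below is a rough sketch of that preprocessing with illustrative names, not the exact script; make sure your training data feeds the hint the same way.

import numpy as np
import torch
import einops
from annotator.util import resize_image, HWC3

def prepare_ip2p_control(input_image: np.ndarray, image_resolution: int = 512) -> torch.Tensor:
    # The input photo itself is the control signal; no detector is applied.
    img = resize_image(HWC3(input_image), image_resolution)   # HWC uint8, RGB
    control = torch.from_numpy(img.copy()).float() / 255.0    # scale to [0, 1]
    control = einops.rearrange(control, 'h w c -> 1 c h w')   # add batch dim, channel-first
    return control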