I noticed that the training tutorial mentions the generated results cannot exactly match the labels because of Stable Diffusion's randomness. If I have requirements like image inpainting and image super-resolution, can I modify ControlNet into a model that generates stable (reproducible) results? Do you have any suggestions for the modification?
I don't understand your question, could you please elaborate? If you want reproducible results during training, I think you need to keep the same seed.
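For example, something like the sketch below fixes the common sources of randomness before training or sampling. This is a generic illustration, not code from the ControlNet repo; the function name and seed value are arbitrary, and you would call it at the top of whatever training or inference script you are using.

```python
import random

import numpy as np
import torch


def seed_everything(seed: int = 42) -> None:
    """Fix the usual sources of randomness so repeated runs with the
    same inputs and the same checkpoint draw the same noise."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Trade some speed for deterministic cuDNN kernels.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False


seed_everything(42)
# ... build the model and run sampling as usual; with the seed fixed,
# the diffusion noise draws are repeatable across runs.
```

Note this only makes runs repeatable; the diffusion process itself is still stochastic by design, so different seeds will still give different samples for the same conditioning.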