Closed. Timwi closed this issue 1 year ago.
Try playing around with the parameters a bit, especially `--num-inference-steps` and `--strength`. The default settings in the example are usually too low to cause a noticeable change in the image.
This is my result when using the example from the readme (`demo.py --prompt "Photo of Emilia Clarke with a bright red hair" --init-image ./data/input.png --mask ./data/mask.png --strength 0.5`):
This is a re-run with slightly increased strength and twice the inference steps (`demo.py --prompt "Photo of Emilia Clarke with a bright red hair" --init-image ./data/input.png --mask ./data/mask.png --strength 0.6 --num-inference-steps 64`):
Thank you for responding. However, I’m confused by your response. The purpose of inpainting with a mask is to remove part of the image and have the algorithm regenerate that part with no reference to the original. Does this algorithm not have that capability?
The `--strength` parameter controls how much influence the original image contents in the masked area have on the result. Values close to 0 give the original image a large influence over the result, while values close to 1 give the neural net the freedom to (almost) entirely ignore the masked area of the input image.
As I said, the best way to learn is to play around with the parameters and see what happens.
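For intuition, here is a minimal sketch of how `strength` typically maps onto the diffusion schedule in img2img/inpainting pipelines (the function name `plan_denoising` is illustrative and not this repo's actual API, but the proportional relationship is the common convention):

```python
# Hypothetical sketch: how strength usually selects where on the noise
# schedule denoising begins. Low strength -> few steps run, original
# content dominates; high strength -> (almost) the full schedule runs,
# so the net can largely ignore the masked input content.

def plan_denoising(num_inference_steps: int, strength: float) -> tuple[int, int]:
    """Return (start_step, steps_actually_run) for a given strength."""
    if not 0.0 <= strength <= 1.0:
        raise ValueError("strength must be in [0, 1]")
    steps_to_run = int(num_inference_steps * strength)
    start_step = num_inference_steps - steps_to_run
    return start_step, steps_to_run

# With strength 0.5 and 50 steps, only half the schedule is run:
print(plan_denoising(50, 0.5))  # (25, 25)
```

This is why raising `--strength` and `--num-inference-steps` together changes the output so visibly: both increase the number of denoising steps actually applied to the masked region.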
I’ve tried the command line specified in the README.md, and I’ve tried some images+masks of my own. I only seem to get variations of the original image, meaning that the mask is being ignored.
I’ve looked at the `_preprocess_mask` function and had it output its result. I couldn’t find anything wrong with it. Despite that, the output does not appear to use it. Thanks.