I do not use GIMP or Inkscape, but I'm proficient in PS and other tools. I can create these masks for you. What do you need done?
Also, I'd recommend that in masks, black be the area to block and white the area to process. That's how masks work across most things.
Why not use the masks in the "\data\inpainting_examples" path for tests?
Although there are only realistic images there, which is not enough for testing on its own.
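For reference, here is a sketch of how those bundled examples could be enumerated, assuming the repo's usual <name>.png / <name>_mask.png pairing (the naming convention is an assumption; the directory is the one mentioned above):

import glob
import os

# Assumption: each example image in data/inpainting_examples has a
# sibling mask named <name>_mask.png.
masks = sorted(glob.glob(os.path.join("data/inpainting_examples", "*_mask.png")))
images = [m.replace("_mask.png", ".png") for m in masks]
for img, msk in zip(images, masks):
    print(img, "->", msk)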
https://www.photopea.com/: use the eraser tool, or make a selection with the selection tool and delete it, then export as PNG.
from PIL import Image, ImageOps

mask = None

def has_transparency(img):
    if img.info.get("transparency", None) is not None:
        return True
    if img.mode == "P":
        transparent = img.info.get("transparency", -1)
        for _, index in img.getcolors():
            if index == transparent:
                return True
    elif img.mode == "RGBA":
        extrema = img.getextrema()
        if extrema[3][0] < 255:
            return True
    return False

# Determining if the image has an alpha channel tells us if we want to extract a mask
image = Image.open("your/path/here")
if has_transparency(image):
    # Normalize to RGBA so band 3 (alpha) exists even for palette images
    image = image.convert("RGBA")
    # Obtain the mask from the transparency channel
    mask = Image.new(mode="L", size=image.size, color=255)
    mask.putdata(image.getdata(band=3))
    mask = ImageOps.invert(mask)
    mask.save("outputs/test_mask.png")
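A quick way to sanity-check the result, not part of the snippet above: composite a solid color over the source wherever the saved mask is white, i.e. the erased region to be repainted (paths reuse the placeholders from the snippet):

from PIL import Image

image = Image.open("your/path/here").convert("RGB")
mask = Image.open("outputs/test_mask.png").convert("L")

# Image.composite copies from the first image where the mask is 255,
# so the to-be-inpainted region shows up as solid magenta.
overlay = Image.new("RGB", image.size, (255, 0, 255))
Image.composite(overlay, image, mask).save("outputs/mask_preview.png")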
Great. I'll give Photopea a try. I appreciate the offer to make the masks for me, but I want to be able to write up a workflow for the documentation and to do that I need to understand the steps myself. If people have other suggestions, please let me know.
H'mmm. Regarding black as the blocked area: some other stable diffusion inpainting examples use the reverse logic, with black areas being those that are inpainted. For example:
The last is a research paper that describes the black pixels as "active" and the white ones as "invalid". I think the ML community has decided that black is where the inpainting occurs. I'm happy to flip the logic if this will drive artists crazy, or maybe provide a switch that toggles back and forth (just to make everybody confused).
Every artistic application treats black as hide and white as show. And most of the people using this script will be using those applications to generate masks. Not to mention, the tons of alphas already in existence all use black to hide and white to show.
I guess having a switch ain't too bad, but I feel the default should be black = hide and white = show.
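If the switch route wins out, here is a minimal sketch of what the toggle might look like; the --invert_mask flag and this loading code are hypothetical, not something the script has today:

import argparse
from PIL import Image, ImageOps

parser = argparse.ArgumentParser()
parser.add_argument("--mask", required=True, help="path to the mask image")
parser.add_argument("--invert_mask", action="store_true",
                    help="treat black, not white, as the area to change")
opt = parser.parse_args()

# Default convention here: white == change, black == leave alone.
mask = Image.open(opt.mask).convert("L")
if opt.invert_mask:
    # The incoming mask marks the inpaint region in black, so flip it.
    mask = ImageOps.invert(mask)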
G'MIC (for GIMP) has an "Inpaint [Patch-Based]" filter. Its default color for the inpaint region is RED (ff0000). :) Jokes aside, in "\data\inpainting_examples" it's black = don't touch, white = change.
I'm on Linux too and I would recommend Krita. Open source and free of cost. It's fairly similar to PS and solid enough to be used in some VFX tasks.
Alternatively, use the Freehand Selection Tool (the dashed kidney-shaped icon in the vertical toolbar), make a selection, then press Backspace or Shift+Backspace to fill it with the background/foreground color.
I'll do white==change, black==leave alone. I hope it is as easy as it looks!
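For the record, the "greyscale value < 0.5" rule from the issue description maps onto that convention like this; a sketch only, not the script's actual loading code:

from PIL import Image

# White == change, black == leave alone; snap greys to one side or the
# other at the midpoint (128 out of 255, i.e. the 0.5 threshold).
mask = Image.open("outputs/test_mask.png").convert("L")
mask = mask.point(lambda p: 255 if p >= 128 else 0)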
In classic filmmaking and photography, masks were painted with light-blocking paint on clear film. Black blocked the light, leaving the original untouched; white (clear) let the light through, exposing the new image. That's the way it used to be in the analog world.
I've tried to use it, but it fails with:
python scripts/inpaint.py --indir img_in --outdir img_out --steps 1
Found 2 inputs.
Traceback (most recent call last):
  File "/Users/jan/Documents/ML/stable-diffusion/scripts/inpaint.py", line 60, in <module>
    model = instantiate_from_config(config.model)
  File "/Users/jan/Documents/ML/stable-diffusion/src/taming-transformers/main.py", line 119, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
TypeError: LatentDiffusion.__init__() missing 1 required positional argument: 'personalization_config'
The .zip in the download seems to be the actual .ckpt, so I've renamed it from model.zip to last.ckpt. Not having a checkpoint in models/ldm/inpainting_big doesn't throw a different error, which seems strange to me.
Using an M1 16GB MacBook.
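One quick, hedged way to check whether that renamed file is really a PyTorch checkpoint (the path is assumed from the rename described above) is to try loading it directly:

import torch

# A Lightning-style checkpoint typically unpickles to a dict with a
# "state_dict" key; if the zip held something else, this fails instead.
ckpt = torch.load("models/ldm/inpainting_big/last.ckpt", map_location="cpu")
print(sorted(ckpt.keys()) if isinstance(ckpt, dict) else type(ckpt))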
Yes. Right now "scripts/inpaint.py" is not working:
python scripts/inpaint.py --indir data/inpainting_examples/ --outdir outputs/inpainting_results
Found 8 inputs.
Traceback (most recent call last):
  File "scripts/inpaint.py", line 60, in <module>
    model = instantiate_from_config(config.model)
  File "d:\_work\____deepsoft\_conda\stable-diffusion\src\taming-transformers\main.py", line 119, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
TypeError: __init__() missing 1 required positional argument: 'personalization_config'
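The traceback itself narrows this down: instantiate_from_config calls the model class with exactly the keys under config.model.params, so the inpainting YAML evidently never gained the personalization_config argument this fork's LatentDiffusion now requires. A hedged sketch of a workaround; the config path, the EmbeddingManager target, and its params are assumptions modelled on the fork's other configs, not verified here:

from omegaconf import OmegaConf

# Assumed config location for the inpainting model; adjust to your checkout.
config = OmegaConf.load("models/ldm/inpainting_big/config.yaml")

# Inject the stanza the forked LatentDiffusion.__init__ now expects.
config.model.params.personalization_config = OmegaConf.create({
    "target": "ldm.modules.embedding_manager.EmbeddingManager",
    "params": {"placeholder_strings": ["*"], "initializer_words": ["sculpture"]},
})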
Thank you so much for both the pointer to Photopea and the Python code to create the mask. Not having worked with PIL before, it would have taken me ages to figure this out. Works like a charm!
I've just posted an update on the inpainting support in Discussions here: https://github.com/lstein/stable-diffusion/discussions/296
I'm working on the code to support inpainting, and I'm trying to create a mask for testing. A mask is simply a PNG or JPG file in which the area to be painted over is black (really, any greyscale value < 0.5) and the rest is white. The alpha (transparency) channel is irrelevant, as far as I can tell.
I'm on a Linux system and I've got Gimp and Inkscape. I'd like to use a selection tool to outline the area to convert to black, and then erase everything else to white. There are a ton of options, and I'd rather be coding than learning the intricacies of layers, masking, etc. Could someone provide a walkthrough?
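For anyone who would rather stay in code entirely, a mask like the one described can also be generated straight from PIL; the path and rectangle below are placeholders:

from PIL import Image, ImageDraw

# Borrow the source image's dimensions so the mask lines up.
image = Image.open("your/path/here")

# White == keep; black == the area to be painted over.
mask = Image.new("L", image.size, 255)
draw = ImageDraw.Draw(mask)
draw.rectangle([100, 100, 300, 300], fill=0)  # placeholder inpaint region
mask.save("outputs/test_mask.png")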