This is an ongoing project that aims to Edit and Generate Anything in an image, powered by Segment Anything, ControlNet, BLIP2, Stable Diffusion, etc.
Any form of contribution or suggestion is very welcome!
2023/08/09 - Revised UI and code; fixed multiple known issues.
2023/07/25 - EditAnything was accepted to the ACM MM 2023 demo track.
2023/06/09 - Support cross-image region drag-and-merge to unleash creative fusion!
2023/05/24 - Support high-quality editing of multiple character attributes: clothes, haircut, colored contact lenses.
2023/05/22 - Support sketch-to-image generation by adjusting the mask align strength in `sketch2image.py`!
2023/05/13 - Support interactive segmentation with click operation!
2023/05/11 - Support tile model for detail refinement!
2023/05/04 - New Beauty/Handsome edit/generation demos are released!
2023/05/04 - The ControlNet-based inpainting model now works with any LoRA model. EditAnything can operate on any base/LoRA model without requiring a dedicated inpainting model.
2023/04/09 - We released a pretrained Stable Diffusion-based ControlNet model that generates images conditioned on SAM segmentation masks.
prompt: "a paint of a tree in the ground with a river."
You can also use the generated image together with the SAM model to further refine your sketch.
Edit Your Beauty and Generate Your Beauty
EditAnything + DreamBooth: Train a customized DreamBooth model with `tools/train_dreambooth_inpaint.py` and replace the base model in `sam2edit.py` with the trained model.
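A minimal sketch of swapping in the trained checkpoint (not the exact `sam2edit.py` code); the output path below is a placeholder for the directory written by `tools/train_dreambooth_inpaint.py`:

```python
# Sketch: load a custom DreamBooth-trained inpainting checkpoint instead of the
# default base model. "path/to/your-dreambooth-output" is a placeholder.
import torch
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "path/to/your-dreambooth-output",   # your trained DreamBooth model
    torch_dtype=torch.float16,
).to("cuda")
```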
Human Prompt: "A paint of spring/summer/autumn/winter field."
Text Grounding: "dog head"
Human Prompt: "cute dog"
Text Grounding: "bench"
Human Prompt: "bench"
Human Prompt: "esplendent sunset sky, red brick wall"
BLIP2 Prompt: "a large white and red ferry" (1: input image; 2: segmentation mask; 3-8: generated images.)
1) The human prompt and the BLIP2-generated prompt together form the text instruction. 2) The SAM model segments the input image into a segmentation mask without category labels. 3) The segmentation mask and the text instruction then guide the image generation.
python sam2semantic.py
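For reference, the flow described above might look roughly like this with the public BLIP2, SAM, and diffusers APIs; the model IDs, file names, and the `colorize()` mask-encoding helper are assumptions rather than the repo's actual code:

```python
# Illustrative sketch of the three-step flow, not the exact script code.
import numpy as np
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

image = Image.open("input.png").convert("RGB")

# 1) Text instruction = human prompt + BLIP2 caption.
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
blip2 = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16
).to("cuda")
inputs = processor(images=image, return_tensors="pt").to("cuda", torch.float16)
caption = processor.decode(blip2.generate(**inputs)[0], skip_special_tokens=True)
prompt = "cute dog, " + caption

# 2) Category-free segmentation with SAM.
sam = sam_model_registry["vit_h"](checkpoint="models/sam_vit_h_4b8939.pth")
masks = SamAutomaticMaskGenerator(sam).generate(np.array(image))

# 3) Encode the masks as a control image and condition generation on it.
control = Image.fromarray(colorize(masks))  # colorize() is a hypothetical helper
controlnet = ControlNetModel.from_pretrained(
    "shgao/edit-anything-v0-4-sd15", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")
result = pipe(prompt, image=control, num_inference_steps=30).images[0]
result.save("output.png")
```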
Highlighted features:
Create an environment
conda env create -f environment.yaml
conda activate control
Install BLIP2 and SAM
Put these models in the `models` folder.
# BLIP2 and SAM will be auto-installed by running app.py
pip install git+https://github.com/huggingface/transformers.git
pip install git+https://github.com/facebookresearch/segment-anything.git
# For text-guided editing
pip install git+https://github.com/openai/CLIP.git
pip install git+https://github.com/facebookresearch/detectron2.git
pip install git+https://github.com/IDEA-Research/GroundingDINO.git
Download pretrained models
# Segment-anything ViT-H SAM model will be auto downloaded.
# BLIP2 model will be auto downloaded.
# Part Grounding Swin-Base Model.
wget https://github.com/Cheems-Seminar/segment-anything-and-name-it/releases/download/v1.0/swinbase_part_0a0000.pth
# Grounding DINO Model.
wget https://github.com/IDEA-Research/GroundingDINO/releases/download/v0.1.0-alpha2/groundingdino_swinb_cogcoor.pth
# Get the pretrained model from Hugging Face.
# No need to download this manually, but please install safetensors for reading the ckpt.
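The Grounding DINO and part-grounding checkpoints above power the text-grounded demos (`sam2vlpart_edit.py`, `sam2groundingdino_edit.py`). The general idea is to turn a text query into a box and feed that box to SAM as a prompt; a minimal sketch, where the config path, file names, and thresholds are assumptions:

```python
# Sketch of text-grounded masking, not the repo's exact code.
import torch
from groundingdino.util.inference import load_model, load_image, predict
from groundingdino.util import box_ops
from segment_anything import sam_model_registry, SamPredictor

gdino = load_model(
    "GroundingDINO/groundingdino/config/GroundingDINO_SwinB_cfg.py",  # adjust to your install
    "groundingdino_swinb_cogcoor.pth",
)
image_source, image = load_image("input.png")  # (HWC uint8 array, preprocessed tensor)

# Ground the text query to boxes (normalized cxcywh).
boxes, logits, phrases = predict(
    model=gdino, image=image, caption="bench",
    box_threshold=0.35, text_threshold=0.25,
)

# Convert the best box to absolute XYXY pixels and prompt SAM with it.
H, W = image_source.shape[:2]
box_xyxy = (box_ops.box_cxcywh_to_xyxy(boxes) * torch.tensor([W, H, W, H]))[0].cpu().numpy()

sam = sam_model_registry["vit_h"](checkpoint="models/sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)
predictor.set_image(image_source)
masks, scores, _ = predictor.predict(box=box_xyxy, multimask_output=False)
```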
Run Demo
python app.py
# or
python editany.py
# or
python sam2image.py
# or
python sam2vlpart_edit.py
# or
python sam2groundingdino_edit.py
| Model | Features | Download Path |
|---|---|---|
| SAM Pretrained (v0-1) | Good Nature Sense | shgao/edit-anything-v0-1-1 |
| LAION Pretrained (v0-3) | Good Face | shgao/edit-anything-v0-3 |
| LAION Pretrained (v0-4) | Support StableDiffusion 1.5/2.1, More training data and iterations, Good Face | shgao/edit-anything-v0-4-sd15, shgao/edit-anything-v0-4-sd21 |
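The download paths are Hugging Face model IDs. Something like the following should load the v0-4 SD 2.1 variant; the diffusers ControlNet format and the base-model pairing here are assumptions:

```python
# Sketch: pair the LAION-pretrained (v0-4) ControlNet with an SD 2.1 base.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "shgao/edit-anything-v0-4-sd21", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")
```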
Train the model with your own data (see the sketch below):
1. Build the training dataset with `dataset_build.py`.
2. Transfer the Stable Diffusion model with `tool_add_control_sd21.py`.
3. Train the model with `sam_train_sd21.py`.
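A hypothetical end-to-end run of the scripts above; check each file for its actual arguments and paths before running:

```bash
python dataset_build.py          # 1. build the SAM-conditioned training data
python tool_add_control_sd21.py  # 2. attach ControlNet weights to the SD 2.1 base
python sam_train_sd21.py         # 3. train the ControlNet
```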
@InProceedings{gao2023editanything,
  author    = {Gao, Shanghua and Lin, Zhijie and Xie, Xingyu and Zhou, Pan and Cheng, Ming-Ming and Yan, Shuicheng},
  title     = {EditAnything: Empowering Unparalleled Flexibility in Image Editing and Generation},
  booktitle = {Proceedings of the 31st ACM International Conference on Multimedia, Demo track},
  year      = {2023},
}
This project is based on:
Segment Anything, ControlNet, BLIP2, MDT, Stable Diffusion, Large-scale Unsupervised Semantic Segmentation, Grounded Segment Anything: From Objects to Parts, Grounded-Segment-Anything
Thanks to these amazing projects!