Inpaint Anything can inpaint anything in images, videos and 3D scenes!
TL;DR: Users can select any object in an image by clicking on it. With powerful vision models, e.g., SAM, LaMa, and Stable Diffusion (SD), Inpaint Anything is able to remove the object smoothly (i.e., Remove Anything). Further, prompted by user input text, Inpaint Anything can fill the object with any desired content (i.e., Fill Anything) or arbitrarily replace its background (i.e., Replace Anything).
[2023/9/15] Remove Anything 3D code is available!
[2023/4/30] Remove Anything Video available! You can remove any object from a video!
[2023/4/24] Local web UI supported! You can run the demo website locally!
[2023/4/22] Website available! You can experience Inpaint Anything through the interface!
[2023/4/22] Remove Anything 3D available! You can remove any 3D object from a 3D scene!
[2023/4/13] Technical report on arXiv available!
Click on an object in the image, and Inpaint Anything will remove it instantly!
Requires python>=3.8
python -m pip install torch torchvision torchaudio
python -m pip install -e segment_anything
python -m pip install -r lama/requirements.txt
On Windows, we recommend first installing miniconda and opening the Anaconda Powershell Prompt (miniconda3) as administrator. Then use pip install -r ./lama_requirements_windows.txt instead of ./lama/requirements.txt.
Download the model checkpoints provided in Segment Anything and LaMa (e.g., sam_vit_h_4b8939.pth and big-lama), and put them into ./pretrained_models. For simplicity, you can also go here, directly download pretrained_models, put the directory into ./, and get ./pretrained_models.
For MobileSAM, set sam_model_type to "vit_t" and sam_ckpt to "./weights/mobile_sam.pt". For the MobileSAM project, please refer to MobileSAM.
bash script/remove_anything.sh
Specify an image and a point, and Remove Anything will remove the object at the point.
python remove_anything.py \
--input_img ./example/remove-anything/dog.jpg \
--coords_type key_in \
--point_coords 200 450 \
--point_labels 1 \
--dilate_kernel_size 15 \
--output_dir ./results \
--sam_model_type "vit_h" \
--sam_ckpt ./pretrained_models/sam_vit_h_4b8939.pth \
--lama_config ./lama/configs/prediction/default.yaml \
--lama_ckpt ./pretrained_models/big-lama
You can change --coords_type key_in to --coords_type click if your machine has a display device. If click is set, the image will be displayed after running the above command. (1) Use left-click to record the coordinates of the click. Points can be modified; only the last point's coordinates are kept. (2) Use right-click to finish the selection.
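The click protocol above can be sketched as a tiny event handler. This is an illustrative model of the behavior only, not the repository's actual implementation (the class and method names here are hypothetical):

```python
class PointPicker:
    """Records one prompt point: each left-click replaces it, right-click finishes."""

    def __init__(self):
        self.point = None   # (x, y) of the last left-click
        self.done = False   # set once the user right-clicks

    def on_click(self, x, y, button):
        if button == "left":       # left-click: (re)record the coordinates
            self.point = (x, y)
        elif button == "right":    # right-click: finish the selection
            self.done = True

p = PointPicker()
p.on_click(200, 450, "left")
p.on_click(210, 455, "left")       # modifying the point: only the last is kept
p.on_click(0, 0, "right")
print(p.point, p.done)             # -> (210, 455) True
```

The recorded (x, y) pair plays the same role as the values passed via --point_coords in key_in mode.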
Text prompt: "a teddy bear on a bench"
Click on an object, type in what you want to fill, and Inpaint Anything will fill it!
Requires python>=3.8
python -m pip install torch torchvision torchaudio
python -m pip install -e segment_anything
python -m pip install diffusers transformers accelerate scipy safetensors
Download the model checkpoints provided in Segment Anything (e.g., sam_vit_h_4b8939.pth) and put them into ./pretrained_models. For simplicity, you can also go here, directly download pretrained_models, put the directory into ./, and get ./pretrained_models.
For MobileSAM, set sam_model_type to "vit_t" and sam_ckpt to "./weights/mobile_sam.pt". For the MobileSAM project, please refer to MobileSAM.
bash script/fill_anything.sh
Specify an image, a point and text prompt, and run:
python fill_anything.py \
--input_img ./example/fill-anything/sample1.png \
--coords_type key_in \
--point_coords 750 500 \
--point_labels 1 \
--text_prompt "a teddy bear on a bench" \
--dilate_kernel_size 50 \
--output_dir ./results \
--sam_model_type "vit_h" \
--sam_ckpt ./pretrained_models/sam_vit_h_4b8939.pth
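Both Remove Anything and Fill Anything grow the SAM mask with --dilate_kernel_size before inpainting, so the model repaints a small margin around the object's edge. Below is a dependency-free numpy sketch of square-kernel binary dilation; the repository presumably uses an OpenCV-style dilation, so treat this as an illustration of the idea rather than its exact code:

```python
import numpy as np

def dilate_mask(mask: np.ndarray, kernel_size: int) -> np.ndarray:
    """Binary dilation with a square kernel of side `kernel_size`."""
    pad = kernel_size // 2
    out = mask.copy()
    # OR the mask with itself shifted over every offset in the kernel window.
    # Note: np.roll wraps at the image border; real code should pad first.
    for dy in range(-pad, pad + 1):
        for dx in range(-pad, pad + 1):
            out |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out

mask = np.zeros((9, 9), dtype=bool)
mask[4, 4] = True                      # a one-pixel "object"
print(dilate_mask(mask, 3).sum())      # -> 9 (grown to a 3x3 block)
```

Larger kernels (e.g., 15 for removal, 50 for filling) trade repainting more pixels for a cleaner boundary around the object.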
Text prompt: "a man in office"
Click on an object, type in the background you want, and Inpaint Anything will replace it!
Requires python>=3.8
python -m pip install torch torchvision torchaudio
python -m pip install -e segment_anything
python -m pip install diffusers transformers accelerate scipy safetensors
Download the model checkpoints provided in Segment Anything (e.g., sam_vit_h_4b8939.pth) and put them into ./pretrained_models. For simplicity, you can also go here, directly download pretrained_models, put the directory into ./, and get ./pretrained_models.
For MobileSAM, set sam_model_type to "vit_t" and sam_ckpt to "./weights/mobile_sam.pt". For the MobileSAM project, please refer to MobileSAM.
bash script/replace_anything.sh
Specify an image, a point and text prompt, and run:
python replace_anything.py \
--input_img ./example/replace-anything/dog.png \
--coords_type key_in \
--point_coords 750 500 \
--point_labels 1 \
--text_prompt "sit on the swing" \
--output_dir ./results \
--sam_model_type "vit_h" \
--sam_ckpt ./pretrained_models/sam_vit_h_4b8939.pth
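Replace Anything keeps the clicked object and regenerates everything else, which amounts to inverting the object mask before handing it to the inpainting model. A minimal numpy sketch of that inversion (an assumption about the mechanism, not the repository's exact code):

```python
import numpy as np

object_mask = np.zeros((4, 4), dtype=bool)
object_mask[1:3, 1:3] = True        # the object selected by the click
background_mask = ~object_mask      # region for Stable Diffusion to repaint
print(int(background_mask.sum()))   # -> 12 (16 pixels minus the 2x2 object)
```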
With a single click on an object in the first view of source views, Remove Anything 3D can remove the object from the whole scene!
Requires python>=3.8
python -m pip install torch torchvision torchaudio
python -m pip install -e segment_anything
python -m pip install -r lama/requirements.txt
python -m pip install jpeg4py lmdb
Download the model checkpoints provided in Segment Anything and LaMa (e.g., sam_vit_h_4b8939.pth and big-lama), and put them into ./pretrained_models. Further, download the OSTrack pretrained model from here (e.g., vitb_384_mae_ce_32x4_ep300.pth) and put it into ./pytracking/pretrain. In addition, download nerf_llff_data (e.g., horns) and put it into ./example/3d. For simplicity, you can also go here, directly download pretrained_models, put the directory into ./, and get ./pretrained_models. Additionally, download pretrain, put the directory into ./pytracking, and get ./pytracking/pretrain.
For MobileSAM, set sam_model_type to "vit_t" and sam_ckpt to "./weights/mobile_sam.pt". For the MobileSAM project, please refer to MobileSAM.
bash script/remove_anything_3d.sh
Specify a 3D scene, a point, a scene config, and a mask index (indicating which mask result of the first view to use), and Remove Anything 3D will remove the object from the whole scene.
python remove_anything_3d.py \
--input_dir ./example/3d/horns \
--coords_type key_in \
--point_coords 830 405 \
--point_labels 1 \
--dilate_kernel_size 15 \
--output_dir ./results \
--sam_model_type "vit_h" \
--sam_ckpt ./pretrained_models/sam_vit_h_4b8939.pth \
--lama_config ./lama/configs/prediction/default.yaml \
--lama_ckpt ./pretrained_models/big-lama \
--tracker_ckpt vitb_384_mae_ce_32x4_ep300 \
--mask_idx 1 \
--config ./nerf/configs/horns.txt \
--expname horns
The --mask_idx is usually set to 1, which is typically the most confident mask result for the first view. If the object is not segmented out well, you can try the other masks (0 or 2).
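A SAM point prompt typically yields three candidate masks, each with a confidence score, which is where --mask_idx comes from. A hedged sketch of choosing a default index by score (the array shapes and names here are assumptions for illustration, not the repository's API):

```python
import numpy as np

def best_mask_idx(scores: np.ndarray) -> int:
    """Index of the highest-confidence candidate mask."""
    return int(np.argmax(scores))

scores = np.array([0.81, 0.95, 0.67])  # one score per candidate mask
print(best_mask_idx(scores))            # -> 1
```

If the top-scoring mask under- or over-segments the object, simply pass a different --mask_idx, as the tip above suggests.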
With a single click on an object in the first video frame, Remove Anything Video can remove the object from the whole video!
Requires python>=3.8
python -m pip install torch torchvision torchaudio
python -m pip install -e segment_anything
python -m pip install -r lama/requirements.txt
python -m pip install jpeg4py lmdb
Download the model checkpoints provided in Segment Anything and STTN (e.g., sam_vit_h_4b8939.pth and sttn.pth), and put them into ./pretrained_models. Further, download the OSTrack pretrained model from here (e.g., vitb_384_mae_ce_32x4_ep300.pth) and put it into ./pytracking/pretrain. For simplicity, you can also go here, directly download pretrained_models, put the directory into ./, and get ./pretrained_models. Additionally, download pretrain, put the directory into ./pytracking, and get ./pytracking/pretrain.
For MobileSAM, set sam_model_type to "vit_t" and sam_ckpt to "./weights/mobile_sam.pt". For the MobileSAM project, please refer to MobileSAM.
bash script/remove_anything_video.sh
Specify a video, a point, the video FPS, and a mask index (indicating which mask result of the first frame to use), and Remove Anything Video will remove the object from the whole video.
python remove_anything_video.py \
--input_video ./example/video/paragliding/original_video.mp4 \
--coords_type key_in \
--point_coords 652 162 \
--point_labels 1 \
--dilate_kernel_size 15 \
--output_dir ./results \
--sam_model_type "vit_h" \
--sam_ckpt ./pretrained_models/sam_vit_h_4b8939.pth \
--lama_config lama/configs/prediction/default.yaml \
--lama_ckpt ./pretrained_models/big-lama \
--tracker_ckpt vitb_384_mae_ce_32x4_ep300 \
--vi_ckpt ./pretrained_models/sttn.pth \
--mask_idx 2 \
--fps 25
The --mask_idx is usually set to 2, which is typically the most confident mask result for the first frame. If the object is not segmented out well, you can try the other masks (0 or 1).
If you find this work useful for your research, please cite us:
@article{yu2023inpaint,
title={Inpaint Anything: Segment Anything Meets Image Inpainting},
author={Yu, Tao and Feng, Runseng and Feng, Ruoyu and Liu, Jinming and Jin, Xin and Zeng, Wenjun and Chen, Zhibo},
journal={arXiv preprint arXiv:2304.06790},
year={2023}
}