
Add-it: Training-Free Object Insertion in Images With Pretrained Diffusion Models

arXiv: https://arxiv.org/abs/2411.07232

[Project Website]

Yoad Tewel1,2, Rinon Gal1,2, Dvir Samuel3, Yuval Atzmon1, Lior Wolf2, Gal Chechik1
1NVIDIA, 2Tel Aviv University, 3Bar-Ilan University

Abstract:
Adding objects to images based on text instructions is a challenging task in semantic image editing, requiring a balance between preserving the original scene and seamlessly integrating the new object in a fitting location. Despite extensive efforts, existing models often struggle with this balance, particularly with finding a natural location for adding an object in complex scenes. We introduce Add-it, a training-free approach that extends diffusion models' attention mechanisms to incorporate information from three key sources: the scene image, the text prompt, and the generated image itself. Our weighted extended-attention mechanism maintains structural consistency and fine details while ensuring natural object placement. Without task-specific fine-tuning, Add-it achieves state-of-the-art results on both real and generated image insertion benchmarks, including our newly constructed "Additing Affordance Benchmark" for evaluating object placement plausibility, outperforming supervised methods. Human evaluations show that Add-it is preferred in over 80% of cases, and it also demonstrates improvements in various automated metrics.
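For intuition only, here is a minimal sketch of what an extended-attention step over the three key/value sources described above might look like. This is not the released implementation; the function and argument names, tensor layout, and per-source scalar weights are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def weighted_extended_attention(q_gen, kv_scene, kv_text, kv_gen,
                                weights=(1.0, 1.0, 1.0)):
    """Illustrative sketch (not the official Add-it code).

    Queries from the generated image attend over keys/values gathered from
    the scene image, the text prompt, and the generated image itself, with
    one scalar weight per source. Each kv_* argument is a (keys, values)
    pair of tensors shaped (..., num_tokens, dim).
    """
    (k_s, v_s), (k_t, v_t), (k_g, v_g) = kv_scene, kv_text, kv_gen
    d = q_gen.shape[-1]

    # Per-source attention logits, scaled as in standard dot-product attention
    # and weighted by the source-specific scalar.
    logits = [
        weights[0] * (q_gen @ k_s.transpose(-2, -1)) / d ** 0.5,
        weights[1] * (q_gen @ k_t.transpose(-2, -1)) / d ** 0.5,
        weights[2] * (q_gen @ k_g.transpose(-2, -1)) / d ** 0.5,
    ]

    # Softmax over the concatenated key axis, so the three sources compete
    # for the same attention mass.
    probs = F.softmax(torch.cat(logits, dim=-1), dim=-1)
    values = torch.cat([v_s, v_t, v_g], dim=-2)
    return probs @ values
```

In this reading, the weights control how strongly the generated image's queries draw on the source scene versus the prompt tokens versus the image's own features; the uniform weights above are only a placeholder, not the scheme used in the paper.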

Description

This repo will contain the official code for our Add-it paper.

News

TODO:

Citation

If you make use of our work, please cite our paper:

@misc{tewel2024addit,
    title={Add-it: Training-Free Object Insertion in Images With Pretrained Diffusion Models},
    author={Yoad Tewel and Rinon Gal and Dvir Samuel and Yuval Atzmon and Lior Wolf and Gal Chechik},
    year={2024},
    eprint={2411.07232},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}