amusi / CVPR2024-Papers-with-Code

A collection of CVPR 2024 papers and open-source projects

CVPR-2024 paper #211

Closed · yuzhou914 closed this 9 months ago

yuzhou914 commented 9 months ago

SmartEdit: Exploring Complex Instruction-based Image Editing with Multimodal Large Language Models

Project page: https://yuzhou914.github.io/SmartEdit/
Code (to be open-sourced, coming soon): https://github.com/TencentARC/SmartEdit

Current instruction-based image editing methods, such as InstructPix2Pix, often fail to produce satisfactory results in complex scenarios because they depend on the simple CLIP text encoder of the diffusion model. To rectify this, this paper introduces SmartEdit, a novel approach to instruction-based image editing that leverages Multimodal Large Language Models (MLLMs), i.e., LLMs with visual inputs, to enhance understanding and reasoning. However, directly integrating an MLLM into the diffusion model still falls short in situations requiring complex reasoning. To mitigate this, we propose a Bidirectional Interaction Module that enables comprehensive bidirectional information exchange between the input image and the MLLM output. During training, we first incorporate perception data to boost the perception and understanding capabilities of the diffusion model. We then demonstrate that a small amount of complex instruction editing data can effectively stimulate SmartEdit's editing capabilities for more complex instructions. We further construct Reason-Edit, a new evaluation dataset specifically tailored for complex instruction-based image editing. Both quantitative and qualitative results on this dataset indicate that SmartEdit surpasses previous methods, paving the way for the practical application of complex instruction-based image editing.
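To make the Bidirectional Interaction Module idea concrete, here is a minimal PyTorch sketch of bidirectional cross-attention between image features and MLLM output tokens. This is only an illustration of the general mechanism described in the abstract, not the paper's actual implementation; all class/variable names, dimensions, and the residual/LayerNorm layout are my assumptions — see the official repo above for the real code.

```python
# Hypothetical sketch of a bidirectional interaction module: image features and
# MLLM output embeddings attend to each other before conditioning the diffusion model.
import torch
import torch.nn as nn

class BidirectionalInteraction(nn.Module):
    def __init__(self, dim: int = 768, num_heads: int = 8):
        super().__init__()
        # image tokens query the MLLM output (image <- instruction direction)
        self.img_from_llm = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # MLLM tokens query the image features (instruction <- image direction)
        self.llm_from_img = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_img = nn.LayerNorm(dim)
        self.norm_llm = nn.LayerNorm(dim)

    def forward(self, img_feats: torch.Tensor, llm_feats: torch.Tensor):
        # img_feats: (B, N_img, dim) patch features of the input image
        # llm_feats: (B, N_txt, dim) MLLM output embeddings for the instruction
        img_upd, _ = self.img_from_llm(img_feats, llm_feats, llm_feats)
        llm_upd, _ = self.llm_from_img(llm_feats, img_feats, img_feats)
        # residuals keep the original information in both streams
        img_out = self.norm_img(img_feats + img_upd)
        llm_out = self.norm_llm(llm_feats + llm_upd)
        return img_out, llm_out  # fused features to condition the diffusion model

# Example with made-up sizes: 196 image patches, 77 instruction tokens
bim = BidirectionalInteraction(dim=768, num_heads=8)
img_cond, txt_cond = bim(torch.randn(1, 196, 768), torch.randn(1, 77, 768))
print(img_cond.shape, txt_cond.shape)  # (1, 196, 768) (1, 77, 768)
```

The point of the two attention directions is that the instruction representation gets grounded in what is actually visible in the image, while the image features are reweighted by what the instruction asks to edit, which is what "comprehensive bidirectional information exchange" suggests.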