In an era where visual content generation is increasingly driven by machine learning, the integration of human feedback into generative models presents significant opportunities for enhancing user experience and output quality. This study explores strategies for incorporating iterative human feedback into the generative process of diffusion-based text-to-image models. We propose FABRIC, a training-free approach applicable to a wide range of popular diffusion models, which exploits the self-attention layer present in the most widely used architectures to condition the diffusion process on a set of feedback images. To ensure a rigorous assessment of our approach, we introduce a comprehensive evaluation methodology, offering a robust mechanism to quantify the performance of generative visual models that integrate human feedback. Through exhaustive analysis, we show that generation results improve over multiple rounds of iterative feedback, implicitly optimizing arbitrary user preferences. The potential applications of these findings extend to fields such as personalized content creation and customization.
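The abstract describes the mechanism only at a high level: feedback images are injected through the self-attention layers of the diffusion model. As a rough illustration of what such attention-based conditioning can look like, the minimal PyTorch sketch below concatenates keys/values cached from the feedback images into a self-attention layer and re-weights their attention logits. The class name `FeedbackSelfAttention`, the `feedback_weight` parameter, and the key/value caching interface are assumptions made for illustration, not FABRIC's actual implementation.

```python
import math

import torch
import torch.nn.functional as F
from torch import nn


class FeedbackSelfAttention(nn.Module):
    """Self-attention that can additionally attend to keys/values cached
    from a denoising pass over feedback images (illustrative sketch)."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.to_qkv = nn.Linear(dim, dim * 3, bias=False)
        self.to_out = nn.Linear(dim, dim)

    def forward(self, x, feedback_kv=None, feedback_weight=1.0):
        # x: (batch, tokens, dim). feedback_kv: optional (keys, values)
        # pair, each (batch, fb_tokens, dim), assumed to be cached from
        # running the same layer over noised latents of feedback images.
        b, n, _ = x.shape
        q, k, v = self.to_qkv(x).chunk(3, dim=-1)

        bias = None
        if feedback_kv is not None:
            fb_k, fb_v = feedback_kv
            k = torch.cat([k, fb_k], dim=1)
            v = torch.cat([v, fb_v], dim=1)
            # Additive logit bias: feedback tokens receive more attention
            # mass for weight > 1 (liked) and less for weight < 1.
            bias = x.new_zeros(1, 1, n, k.shape[1])
            bias[..., n:] = math.log(feedback_weight)

        def split(t):  # (b, tokens, dim) -> (b, heads, tokens, head_dim)
            return t.view(b, -1, self.num_heads, self.head_dim).transpose(1, 2)

        out = F.scaled_dot_product_attention(split(q), split(k), split(v),
                                             attn_mask=bias)
        out = out.transpose(1, 2).reshape(b, n, -1)
        return self.to_out(out)


# Toy usage: 64 latent tokens also attend to 128 cached feedback tokens.
attn = FeedbackSelfAttention(dim=320)
x = torch.randn(2, 64, 320)
fb_kv = (torch.randn(2, 128, 320), torch.randn(2, 128, 320))
y = attn(x, feedback_kv=fb_kv, feedback_weight=1.5)
print(y.shape)  # torch.Size([2, 64, 320])
```

Because the conditioning happens purely through extra keys and values at inference time, no weights are modified, which is consistent with the abstract's claim that the approach is training-free and applicable across popular diffusion architectures.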