jnhwkim / Pensees

A collection of fragments for reading research papers.

StoryDALL-E: Adapting Pretrained Text-to-Image Transformers for Story Continuation #5

Open KyonP opened 1 year ago

KyonP commented 1 year ago

StoryDALL-E: Adapting Pretrained Text-to-Image Transformers for Story Continuation

Maharana et al., ECCV 2022

Recent advances in text-to-image synthesis have led to large pretrained transformers with excellent capabilities to generate visualizations from a given text. However, these models are ill-suited for specialized tasks like story visualization, which requires an agent to produce a sequence of images given a corresponding sequence of captions, forming a narrative. Moreover, we find that the story visualization task fails to accommodate generalization to unseen plots and characters in new narratives. Hence, we first propose the task of story continuation, where the generated visual story is conditioned on a source image, allowing for better generalization to narratives with new characters. It is difficult to collect large-scale datasets to train large models for this task from scratch due to the need for continuity and an explicit narrative among the images in a story. Therefore, we propose to leverage the pretrained knowledge of text-to-image synthesis models to overcome the low-resource scenario and improve generation for story continuation. To that end, we enhance or ‘retro-fit’ the pretrained text-to-image synthesis models with task-specific modules for (a) sequential image generation and (b) copying relevant elements from an initial frame. Then, we explore full-model finetuning, as well as prompt-based tuning for parameter-efficient adaptation, of the pretrained model. We evaluate our approach StoryDALL-E on two existing datasets, PororoSV and FlintstonesSV, and introduce a new dataset DiDeMoSV collected from a video-captioning dataset. We also develop a model StoryGANc based on Generative Adversarial Networks (GAN) for story continuation, and compare it with the StoryDALL-E model to demonstrate the advantages of our approach. We show that our retro-fitting approach outperforms GAN-based models for story continuation and facilitates copying of visual elements from the source image, thereby improving continuity in the generated visual story. Finally, our analysis suggests that pretrained transformers struggle to comprehend narratives containing several characters and to translate them into appropriate imagery. Overall, our work demonstrates that pretrained text-to-image synthesis models can be adapted for complex and low-resource tasks like story continuation. Our results encourage future research into story continuation as well as exploration of the latest, larger models for the task. Code, data, demo and model card available at https://github.com/adymaharana/storydalle
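
As a reading aid, here is a minimal sketch of the two adaptation ideas described in the abstract: a cross-attention adapter that lets the frozen pretrained text-to-image transformer attend to embeddings of the source (initial) frame, combined with prompt-based tuning in which only the adapter and a small set of prompt embeddings are trained. This is an assumption-laden illustration, not the authors' implementation; all names (`RetroCrossAttention`, `source_frame_emb`, `trainable_parameters`) are hypothetical placeholders.

```python
import torch
import torch.nn as nn

class RetroCrossAttention(nn.Module):
    """Hypothetical 'retro-fit' adapter: decoder tokens attend to embeddings
    of the source frame so visual elements can be copied into generated frames."""

    def __init__(self, d_model: int, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, decoder_states, source_frame_emb):
        # decoder_states: (B, T_dec, d_model) hidden states of the frozen backbone
        # source_frame_emb: (B, T_src, d_model) token embeddings of the initial frame
        attended, _ = self.attn(decoder_states, source_frame_emb, source_frame_emb)
        return self.norm(decoder_states + attended)


def trainable_parameters(backbone, adapter, prompt_emb):
    # Parameter-efficient adaptation: freeze the pretrained backbone and
    # train only the adapter plus the learned prompt embeddings.
    for p in backbone.parameters():
        p.requires_grad = False
    return list(adapter.parameters()) + [prompt_emb]


d_model = 512
adapter = RetroCrossAttention(d_model)
prompt_emb = nn.Parameter(torch.randn(1, 16, d_model))  # 16 learned prompt tokens

# Toy forward pass with dummy tensors (batch of 2, 64 tokens each).
dec = torch.randn(2, 64, d_model)
src = torch.randn(2, 64, d_model)
out = adapter(dec, src)  # (2, 64, d_model)
```

Cross-attention is a natural fit for the "copying" objective in (b), since each generated image token can directly pool features from the source frame; the actual module design and where it hooks into the transformer are detailed in the paper and the linked repository.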


🔑 Key idea:

💪 Strength:

😵 Weakness:

🤔 Confidence:

✏️ Memo:

KyonP commented 1 year ago

The author replied that only the source frame and captions are provided for the test and validation splits (no target frames). I am wondering what the alternatives are for the source frame embeddings during the test and validation phases.