StoryDALL-E: Adapting Pretrained Text-to-Image Transformers for Story Continuation
Maharana et al., ECCV 2022
Recent advances in text-to-image synthesis have led to large pretrained transformers with excellent capabilities to generate visualizations from a given text. However, these models are ill-suited for specialized tasks like story visualization, which requires an agent to produce a sequence of images given a corresponding sequence of captions, forming a narrative. Moreover, we find that the story visualization task fails to accommodate generalization to unseen plots and characters in new narratives. Hence, we first propose the task of story continuation, where the generated visual story is conditioned on a source image, allowing for better generalization to narratives with new characters. It is difficult to collect large-scale datasets to train large models for this task from scratch due to the need for continuity and an explicit narrative among the images in a story. Therefore, we propose to leverage the pretrained knowledge of text-to-image synthesis models to overcome the low-resource scenario and improve generation for story continuation. To that end, we enhance or ‘retro-fit’ the pretrained text-to-image synthesis models with task-specific modules for (a) sequential image generation and (b) copying relevant elements from an initial frame. Then, we explore full-model finetuning, as well as prompt-based tuning for parameter-efficient adaptation, of the pre-trained model. We evaluate our approach StoryDALL-E on two existing datasets, PororoSV and FlintstonesSV, and introduce a new dataset DiDeMoSV collected from a video-captioning dataset. We also develop a model StoryGANc based on Generative Adversarial Networks (GAN) for story continuation, and compare it with the StoryDALL-E model to demonstrate the advantages of our approach. We show that our retro-fitting approach outperforms GAN-based models for story continuation and facilitates copying of visual elements from the source image, thereby improving continuity in the generated visual story.
Finally, our analysis suggests that pretrained transformers struggle to comprehend narratives containing several characters and translating them into appropriate imagery. Overall, our work demonstrates that pretrained text-to-image synthesis models can be adapted for complex and low-resource tasks like story continuation. Our results encourage future research into story continuation as well as exploration of the latest, larger models for the task. Code, data, demo and model card available at https://github.com/adymaharana/storydalle
🔑 Key idea:
Provide a "source image" (the first image of the sequence) so the model can adapt to new story elements such as the appearance of new characters, unseen plots, etc. (a real-world generalization problem).
Prompt tuning: full and partial fine-tuning methods applied to the pretrained (frozen) model.
"Retro-fit" the frozen DALL-E with additional layers that copy relevant elements from the initial scene (source image).
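A minimal sketch of the prompt-tuning idea mentioned above (not the authors' implementation; all names and dimensions here are illustrative): a small set of trainable prompt vectors is prepended to the caption token embeddings, while the pretrained transformer's weights stay frozen.

```python
import numpy as np

rng = np.random.default_rng(0)
n_prompt, seq_len, dim = 8, 64, 256

# Trainable prompt embeddings (the only new parameters under prompt tuning).
prompt_embs = rng.normal(size=(n_prompt, dim))
# Caption token embeddings from the frozen pretrained embedding table.
token_embs = rng.normal(size=(seq_len, dim))

# The frozen transformer consumes the prompts and tokens as one sequence.
model_input = np.concatenate([prompt_embs, token_embs], axis=0)
print(model_input.shape)  # (72, 256)
```

Only `prompt_embs` would receive gradient updates; everything downstream of `model_input` belongs to the frozen model.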
💪 Strength:
proposes a new task, story continuation: given an initial "source image", generate the following 4 images forming a narrative
(c.f. the earlier story visualization task: given only the text captions, generate all 5 images)
proposes a new dataset (DiDeMoSV) for story continuation, a setting where data is rare
proposes a parameter-efficient method for handling the large DALL-E Mega model on this task
😵 Weakness:
the validation and test procedures are not described in detail
possibly over-generous hints: source and target images are described as inputs at the same timestep (normal for a VAE-style training setup, but what happens at validation?)
no new metric is proposed for evaluating "real-world" problem solving; the paper relies on qualitative analysis only
🤔 Confidence:
Medium
✏️ Memo:
The classic approach (from StoryGAN) of combining global and local caption embeddings is used: "we add two task-specific modules to the native DALL-E architecture. First, we use a global story encoder to pool information from all captions and produce a story embedding, which provides global context of the story at each timestep. Next, we ‘retro-fit’ the model with cross-attention layers in order to accept the source frame as additional input. We refer to our proposed model as StoryDALL-E"
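The two modules described in the quote can be sketched in numpy as follows (a rough illustration, not the authors' code; shapes and names are hypothetical): a story encoder that mean-pools the caption embeddings, and a residual cross-attention step that lets decoder states attend to encoded source-frame tokens.

```python
import numpy as np

def global_story_encoder(caption_embs):
    """Pool per-caption embeddings (n_captions, dim) into one story embedding."""
    return caption_embs.mean(axis=0)

def cross_attention(hidden, source_tokens):
    """Let decoder states (seq, dim) attend to source-frame tokens (src, dim)."""
    d = hidden.shape[-1]
    scores = hidden @ source_tokens.T / np.sqrt(d)   # (seq, src) similarities
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over source tokens
    return hidden + weights @ source_tokens          # residual 'retro-fit'
```

In the paper, the cross-attention layers are the newly added trainable parameters inside the otherwise frozen DALL-E decoder; here both inputs are just arrays of matching dimension.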
My opinion: perhaps the model requires excessive hints to generate high-quality text-guided images, given the large gap in vision-language alignment, when no classifier-based (GAN) model is used.
The authors replied: only the source frame and captions are provided for the test and validation splits (no target frames). I still wonder what the alternatives to the source frame embeddings are during the test and validation phases.