Here is another text-to-image synthesis model, called StoryDALL-E, which adapts pretrained text-to-image transformers for story continuation.

Text-to-image synthesis models, such as DALL-E, have demonstrated an extraordinary ability to convert an input caption into a coherent image. Several recent systems have also used large multimodal models to create artistic renderings of input captions, proving their potential to democratize art. However, these models are designed to condition on only a single, short caption as input. Many use cases of text-to-image synthesis require models to handle longer narratives and metaphorical phrases, edit existing images, and generate more than one image in order to capture the meaning of the input language. Prior works have built generative adversarial network (GAN) models for related tasks such as image-to-image translation and style transfer.

Visualizing a story is a challenging task that combines image generation with story understanding. However, the recent introduction of large, transformer-based pretrained models opens the possibility of more effectively utilizing the latent knowledge acquired from large-scale pretraining data to perform such specialized tasks, in a manner similar to fine-tuning pretrained language models on downstream language-understanding tasks. Hence, in this study they investigate methods for adapting a pretrained text-to-image synthesis model to complex downstream applications, with an emphasis on story visualization. Story visualization turns a sequence of captions into a sequence of images that depict the story.

While previous work on story visualization has highlighted its potential uses, the task presents specific challenges when applied to real-world scenarios. The model must generate a coherent sequence of images depicting the contents of the set of captions that make up a story. However, it is limited to the static set of characters, locations, and events seen during training: it does not know how to portray a new character that appears in a caption at inference time, and captions rarely contain enough information to adequately describe a character's appearance. Consequently, for the model to generalize to new story elements, it must include a way to gather additional information about how those elements are visually depicted. To this end, they make story visualization more suitable for these use cases by introducing a new task called story continuation.

In this work, they provide the model with an initial scene that would be readily available in real-world use cases. They introduce a new story visualization dataset, DiDeMoSV, and adapt two existing datasets, PororoSV and FlintstonesSV, to the story continuation setting. The model can then copy and adapt elements from this initial scene as it generates successive images (see figure below). This setup has the benefit of shifting attention away from text-to-image generation, which remains a challenging problem in its own right, and toward the narrative structure of a sequence of images, such as how each image evolves to reflect new narrative material in the captions.
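To make this setup concrete, here is a minimal sketch in PyTorch (the language of the official implementation) of what one story continuation example might look like. The class and field names and the tensor shapes are illustrative assumptions, not the actual format of the released datasets:

```python
from dataclasses import dataclass
from typing import List

import torch


@dataclass
class StoryContinuationExample:
    """One story continuation example (illustrative; the actual formats
    of PororoSV, FlintstonesSV, and DiDeMoSV may differ).

    Unlike plain story visualization, the first ground-truth frame is
    given to the model as a source frame, and only the remaining frames
    must be generated.
    """
    source_frame: torch.Tensor   # (3, H, W) initial scene, given as input
    captions: List[str]          # one caption per frame to be generated
    target_frames: torch.Tensor  # (T, 3, H, W) ground-truth frames to predict
```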

Predictions from the Mega-StoryDALL-E model on the (A) PororoSV, (B) FlintstonesSV, and (C) DiDeMoSV story continuation datasets. The initial frame provided as an additional input to the model is called the source frame.

To adapt a text-to-image synthesis model to this story continuation task, they first fine-tune a pretrained model (such as DALL-E) on sequential text-to-image generation, with the added flexibility of copying from a prior input. To do this, they retro-fit the model with additional cross-attention layers that copy relevant output from the initial scene. Then, during the generation of each frame, they incorporate a self-attention block to construct story embeddings that provide a global semantic context for the tale. The model is fine-tuned on the story continuation task, with these additional modules learned from scratch. They call their method StoryDALL-E and compare it to a GAN-based model called StoryGANc.
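The sketch below illustrates the retro-fitting idea in PyTorch: a pretrained transformer block is wrapped with a new cross-attention layer that attends over the source frame's image tokens, so each generated position can "copy" from the initial scene. The names (`RetroFitBlock`, `source_tokens`) are hypothetical, and the actual StoryDALL-E architecture places and parameterizes these layers differently; this only shows the mechanism:

```python
import torch
import torch.nn as nn


class RetroFitBlock(nn.Module):
    """A pretrained transformer block augmented with a new cross-attention
    layer (retro-fitting). Only the cross-attention is trained from
    scratch; the pretrained sublayers can stay frozen."""

    def __init__(self, pretrained_block: nn.Module, d_model: int, n_heads: int):
        super().__init__()
        self.pretrained_block = pretrained_block  # e.g., a frozen DALL-E block
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor, source_tokens: torch.Tensor) -> torch.Tensor:
        # x:             (B, L, d_model) caption + generated image tokens
        # source_tokens: (B, S, d_model) embedded tokens of the source frame
        x = self.pretrained_block(x)  # usual pretrained self-attention path
        # New layer lets every position attend to (copy from) the source frame.
        attended, _ = self.cross_attn(query=x, key=source_tokens, value=source_tokens)
        return self.norm(x + attended)  # residual connection around the new layer
```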

They also investigate a parameter-efficient prompt tuning setup, providing a prompt consisting of task-specific embeddings to steer the pretrained model toward generating visuals in the target domain. The pretrained weights are frozen while training this prompt-tuned version of the model, and only the new parameters are learned from scratch, saving time and memory. The results indicate that their StoryDALL-E retro-fitting strategy effectively exploits DALL-E's latent pretrained knowledge for the story continuation problem, outperforming the GAN-based model on several metrics. Moreover, they find that the copying mechanism enables improved generation in low-resource conditions and of unseen characters during inference.
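As a rough illustration of this prompt tuning variant, the following sketch freezes a pretrained model and learns only a small matrix of prompt embeddings that is prepended to every input. It assumes a model that accepts token embeddings directly, which is a simplification of the actual codebase:

```python
import torch
import torch.nn as nn


class PromptTunedModel(nn.Module):
    """Parameter-efficient variant: the pretrained model is frozen and only
    a short sequence of task-specific prompt embeddings is learned."""

    def __init__(self, pretrained_model: nn.Module, d_model: int, prompt_len: int = 32):
        super().__init__()
        self.model = pretrained_model
        for p in self.model.parameters():
            p.requires_grad = False  # freeze all pretrained weights
        # The only new trainable parameters: (prompt_len, d_model).
        self.prompt = nn.Parameter(torch.randn(prompt_len, d_model) * 0.02)

    def forward(self, token_embeddings: torch.Tensor) -> torch.Tensor:
        # token_embeddings: (B, L, d_model) embedded caption/image tokens
        batch_size = token_embeddings.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch_size, -1, -1)
        # Prepend the learned prompt so it conditions every forward pass.
        return self.model(torch.cat([prompt, token_embeddings], dim=1))
```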

In short, they introduce the story continuation task, which is closely related to real-world downstream applications of story visualization, along with a new dataset for it.

  • They present StoryDALL-E, a retro-fitted adaptation of pretrained transformers for story continuation. They also develop StoryGANc to serve as a strong GAN baseline for comparison.
  • They perform comparative and ablation experiments to demonstrate that StoryDALL-E outperforms StoryGANc on three story continuation datasets across several metrics.
  • Their analysis shows that the copying mechanism increases the correlation of the generated images with the source image, thereby enhancing visual story continuity and improving the generation of low-resource and unseen characters.

The PyTorch implementation is freely available on GitHub.

This article is written as a research summary by Marktechpost staff based on the research paper 'StoryDALL-E: Adapting Pretrained Text-to-Image Transformers for Story Continuation'. All credit for this research goes to the researchers on this project. Check out the paper and GitHub link.
