
Text2Scene: Generating Compositional Scenes from Textual Descriptions

Generating images from textual descriptions has become an active and exciting area of research. Interest has been partially fueled by the adoption of generative adversarial networks (GANs)1, which have demonstrated impressive results on a number of image synthesis tasks. However, challenges remain when attempting to synthesize images for complex scenes with multiple interacting objects. In our paper, a Best Paper Finalist at CVPR 2019, we proposed to approach this problem from another direction. Inspired by the principle of compositionality2, our model produces a scene by sequentially generating objects (in the form of clip-art, bounding boxes, or segmented object patches), the semantic elements that compose the scene.

Compositional Scene Generation

We introduce Text2Scene, a model to interpret visually descriptive language in order to generate compositional scene representations. In particular, we focus on generating a scene representation consisting of a list of objects, along with their attributes (e.g., location, size, aspect ratio, pose, appearance). We adapt and train models to generate three types of scenes: cartoon-like scenes, object layouts, and synthetic images. We propose a unified sequence-to-sequence framework to handle these three different tasks.
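To make this output format concrete, the sketch below shows one plausible way to hold such a scene as a plain data structure. It is a minimal illustration, not the paper's code: the names SceneObject and Scene and the specific fields are assumptions.

```python
# Hypothetical sketch of a compositional scene representation: a list of
# objects, each carrying the attributes mentioned above. Names and fields
# are illustrative assumptions, not the paper's implementation.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SceneObject:
    category: str                       # e.g. "dog", "kite", "person"
    location: Tuple[float, float]       # normalized (x, y) position on the canvas
    size: float                         # normalized scale
    aspect_ratio: float
    pose: int = 0                       # discrete pose index (cartoon-like scenes)
    appearance: Tuple[float, ...] = ()  # embedding used for patch retrieval

@dataclass
class Scene:
    objects: List[SceneObject] = field(default_factory=list)

    def add(self, obj: SceneObject) -> None:
        """Objects are added one at a time, mirroring sequential generation."""
        self.objects.append(obj)
```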

Generally, Text2Scene consists of:

  - a text encoder that maps the input sentence to a set of latent representations;
  - an image encoder that encodes the current generated canvas;
  - a convolutional recurrent module that passes the current scene state to the next step;
  - attention modules that focus on different parts of the input text;
  - an object decoder that predicts the next object conditioned on the current scene state and the attended text;
  - an attribute decoder that assigns attributes to the predicted object; and
  - an optional foreground embedding step that learns an appearance vector for patch retrieval in the synthetic image generation task.
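To illustrate how these components fit together, here is a minimal PyTorch-style sketch of a single decoding step. It is a simplified sketch under assumed names and dimensions (Text2SceneStep, txt_dim, hid_dim, a four-dimensional attribute head), not the paper's actual architecture or code.

```python
# Minimal, hypothetical sketch of one Text2Scene decoding step.
import torch
import torch.nn as nn

class Text2SceneStep(nn.Module):
    def __init__(self, vocab_size, num_classes, txt_dim=256, img_dim=256, hid_dim=512):
        super().__init__()
        self.word_embed = nn.Embedding(vocab_size, txt_dim)
        self.text_encoder = nn.GRU(txt_dim, txt_dim, batch_first=True)   # sentence -> latent vectors
        self.canvas_encoder = nn.Sequential(                             # current canvas -> feature vector
            nn.Conv2d(3, img_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.recurrent = nn.GRUCell(img_dim, hid_dim)                    # carries scene state step to step
        self.attend = nn.MultiheadAttention(hid_dim, num_heads=1,
                                            kdim=txt_dim, vdim=txt_dim,
                                            batch_first=True)            # focus on parts of the input text
        self.object_decoder = nn.Linear(2 * hid_dim, num_classes)        # next object category
        self.attribute_decoder = nn.Linear(2 * hid_dim, 4)               # e.g. x, y, size, aspect ratio

    def forward(self, words, canvas, state=None):
        txt, _ = self.text_encoder(self.word_embed(words))               # (B, T, txt_dim)
        state = self.recurrent(self.canvas_encoder(canvas), state)       # update scene state from canvas
        ctx, _ = self.attend(state.unsqueeze(1), txt, txt)               # attended text context
        feat = torch.cat([state, ctx.squeeze(1)], dim=-1)
        return self.object_decoder(feat), self.attribute_decoder(feat), state
```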

The scene generation starts from an initially empty canvas that is updated at each time step. For the synthetic image generation task, our model sequentially retrieves and pastes object patches from other images to compose the scene. As the composite images may exhibit gaps between patches, we also leverage the stitching network in3 for post-processing.
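Concretely, a hedged sketch of that loop for the synthetic image task might look as follows, reusing the hypothetical Text2SceneStep above. Here retrieve_patch and paste_patch are illustrative stand-ins for the patch retrieval and compositing steps, and the end_class stopping criterion is an assumption.

```python
# Hypothetical sequential generation loop: start from an empty canvas and
# repeatedly predict, retrieve, and paste the next object patch.
import torch

def generate_scene(model, words, retrieve_patch, paste_patch, max_steps=10, end_class=0):
    canvas = torch.zeros(1, 3, 64, 64)              # initially empty canvas
    state = None
    objects = []
    for _ in range(max_steps):
        obj_logits, attrs, state = model(words, canvas, state)
        obj_cls = obj_logits.argmax(dim=-1).item()
        if obj_cls == end_class:                    # model signals the scene is complete
            break
        patch = retrieve_patch(obj_cls, attrs)      # retrieve a patch matching the predicted appearance
        canvas = paste_patch(canvas, patch, attrs)  # composite it onto the canvas for the next step
        objects.append((obj_cls, attrs.detach()))
    return canvas, objects
```

Because the updated canvas is fed back into the next step, later object predictions can depend on what has already been placed in the scene.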

Evaluation

We compare our approach to the latest GAN-based methods. Experimental results show that our model achieves near state-of-the-art performance on automatic metrics. In human subject evaluations, 75% of participants preferred our outputs over those of strong GAN-based methods such as SG2IM4 and AttnGAN5.

Outlook

Synthesizing images from text requires a degree of language and visual understanding, and progress on it could enable applications in image retrieval through natural language queries, representation learning for text, and automated computer graphics and image editing. Our work proposes an interpretable model that generates various forms of compositional scene representations. Experimental results demonstrate the capacity of our model to capture fine-grained semantic meaning from descriptive text and generate complex scenes.


References

  1. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems (NeurIPS), 2014.

  2. Xiaodan Zhu and Edward Grefenstette. Deep learning for semantic composition. In ACL tutorial, 2017.

  3. Xiaojuan Qi, Qifeng Chen, Jiaya Jia, and Vladlen Koltun. Semi-parametric image synthesis. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.

  4. Justin Johnson, Agrim Gupta, and Li Fei-Fei. Image generation from scene graphs. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.

  5. Tao Xu, Pengchuan Zhang, Qiuyuan Huang, Han Zhang, Zhe Gan, Xiaolei Huang, and Xiaodong He. AttnGAN: Fine-grained text to image generation with attentional generative adversarial networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.