Abstract

In many applications of computer graphics, art, and design, it is desirable for a user to provide intuitive non-image input, such as text, sketch, stroke, graph, or layout, and have a computer system automatically generate photo-realistic images according to that input. Classically, works enabling such automatic image content generation followed a framework of image retrieval and composition, but recent advances in deep generative models such as generative adversarial networks (GANs), variational autoencoders (VAEs), and flow-based methods have enabled more powerful and versatile image generation approaches. This paper reviews recent works on image synthesis from intuitive user input, covering advances in input versatility, image generation methodology, benchmark datasets, and evaluation metrics. The review motivates new perspectives on input representation and interactivity, cross-fertilization between major image generation paradigms, and the evaluation and comparison of generation methods.
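To make the conditional-generation setting concrete, below is a minimal, illustrative PyTorch sketch of a condition-driven GAN: a generator that maps a noise vector plus a condition embedding (a stand-in for encoded text, sketch, or layout input) to an image, and a discriminator that scores image/condition pairs. All names, layer sizes, and shapes here are assumptions for exposition, not the architecture of any specific method surveyed in the paper.

```python
# Minimal sketch of a conditional GAN for input-driven image synthesis.
# CondGenerator / CondDiscriminator and all dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class CondGenerator(nn.Module):
    """Maps noise z and a condition embedding c (e.g., encoded text or sketch
    features) to a 64x64 RGB image."""
    def __init__(self, z_dim=100, cond_dim=128, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim + cond_dim, feat * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(feat * 8), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 8, feat * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 4), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 2), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat), nn.ReLU(True),
            nn.ConvTranspose2d(feat, 3, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, z, c):
        # Concatenate noise and condition, reshape to a 1x1 "spatial" map.
        x = torch.cat([z, c], dim=1).unsqueeze(-1).unsqueeze(-1)
        return self.net(x)

class CondDiscriminator(nn.Module):
    """Scores (image, condition) pairs; the condition embedding is broadcast
    spatially and concatenated to the image channels."""
    def __init__(self, cond_dim=128, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + cond_dim, feat, 4, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(feat, feat * 2, 4, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(feat * 2, feat * 4, 4, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(feat * 4, feat * 8, 4, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(feat * 8, 1, 4, 1, 0),
        )

    def forward(self, img, c):
        c_map = c.unsqueeze(-1).unsqueeze(-1).expand(-1, -1, img.size(2), img.size(3))
        return self.net(torch.cat([img, c_map], dim=1)).view(-1)

# Usage: one forward pass with random noise and a random condition embedding.
G, D = CondGenerator(), CondDiscriminator()
z = torch.randn(4, 100)   # latent noise
c = torch.randn(4, 128)   # stand-in for a text/sketch/layout encoder output
fake = G(z, c)            # shape (4, 3, 64, 64)
score = D(fake, c)        # shape (4,) real/fake logits
```

In practice, the condition embedding c would come from a modality-specific encoder (a text encoder, sketch CNN, or layout/graph network), which is the main axis along which the surveyed methods differ.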

Details

Title
Deep image synthesis from intuitive user input: A review and perspectives
Author
Xue, Yuan 1; Guo, Yuan-Chen 2; Zhang, Han 3; Xu, Tao 4; Zhang, Song-Hai 2; Huang, Xiaolei 1

1 The Pennsylvania State University, College of Information Sciences and Technology, University Park, USA (GRID:grid.29857.31) (ISNI:0000 0001 2097 4281)
2 Tsinghua University, Department of Computer Science and Technology, Beijing, China (GRID:grid.12527.33) (ISNI:0000 0001 0662 3178); Tsinghua University, Beijing National Research Center for Information Science and Technology (BNRist), Beijing, China (GRID:grid.12527.33) (ISNI:0000 0001 0662 3178)
3 Google Brain, Mountain View, USA (GRID:grid.420451.6)
4 Facebook, Menlo Park, USA (GRID:grid.453567.6) (ISNI:0000 0004 0615 529X)
Pages
3-31
Publication year
2022
Publication date
Mar 2022
Publisher
Springer Nature B.V.
ISSN
2096-0433
e-ISSN
2096-0662
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2586674423
Copyright
© The Author(s) 2021. This work is published under http://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.