Abstract
Murals are an important part of China's cultural heritage. After more than a thousand years of exposure to sun and wind, most of these ancient murals have become mottled, suffering damage such as cracking, mold, and even large-scale detachment, and restoring them is urgent work. Digital mural restoration reconstructs structure and texture to virtually fill in the damaged regions of an image. Existing digital restoration methods suffer from incomplete restoration and distortion of local details. In this paper, we propose a generative adversarial network that combines a deep generator with parallel dual-convolution feature extraction and a ternary heterogeneous joint discriminator. The generator extracts image features in parallel with vanilla convolution and dilated convolution, capturing multi-scale features simultaneously, while carefully chosen parameter settings reduce the loss of image information. We also propose a pixel-level discriminator that identifies pixel-level defects in the generated image; together with a global discriminator and a local discriminator, it judges the generated image at different levels and granularities. We construct a Dunhuang murals dataset and validate our method on it. Experimental results show that our method improves on the comparison methods in PSNR and SSIM, and the restored images better match human subjective perception, achieving effective restoration of mural images.
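To make the two architectural ideas in the abstract concrete, the following is a minimal PyTorch sketch. The abstract does not specify the actual layers, so every class name, channel count, kernel size, and fusion scheme here is an illustrative assumption, not the paper's implementation: a block that runs a vanilla convolution and a dilated convolution in parallel and fuses the results, and a three-branch discriminator returning pixel-level, global, and local realness scores.

```python
# Illustrative sketch only; the paper's real architecture is not given in the
# abstract. All hyperparameters below are assumptions.
import torch
import torch.nn as nn

class ParallelConvBlock(nn.Module):
    """Extracts features with a vanilla conv and a dilated conv in parallel,
    then fuses them, so local detail and wider context are captured at once."""
    def __init__(self, in_ch: int, out_ch: int, dilation: int = 2):
        super().__init__()
        # Vanilla 3x3 convolution: local texture features.
        self.vanilla = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        # Dilated 3x3 convolution: larger receptive field at the same cost.
        # padding = dilation keeps the spatial size unchanged for a 3x3 kernel.
        self.dilated = nn.Conv2d(in_ch, out_ch, kernel_size=3,
                                 padding=dilation, dilation=dilation)
        # 1x1 convolution fuses the concatenated multi-scale features.
        self.fuse = nn.Conv2d(2 * out_ch, out_ch, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multi_scale = torch.cat([self.vanilla(x), self.dilated(x)], dim=1)
        return self.act(self.fuse(multi_scale))

class TernaryDiscriminator(nn.Module):
    """Hypothetical three-branch discriminator: a pixel-level branch emits a
    per-pixel realness map, while global and local branches score the whole
    image and the restored region, respectively."""
    def __init__(self, in_ch: int = 3):
        super().__init__()
        self.pixel = nn.Conv2d(in_ch, 1, kernel_size=1)  # per-pixel scores
        self.global_branch = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1))
        self.local_branch = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1))

    def forward(self, full_img: torch.Tensor, local_patch: torch.Tensor):
        return (self.pixel(full_img),          # (B, 1, H, W) pixel-level map
                self.global_branch(full_img),  # (B, 1) whole-image score
                self.local_branch(local_patch))  # (B, 1) restored-region score
```

Under these assumptions, the three branch outputs would each feed an adversarial loss term and be summed, so the generator is penalized at pixel, region, and image granularity simultaneously, which matches the abstract's "different levels and granularities" claim.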
Details
1 Communication University of China, School of Information and Communication Engineering, Beijing, China (GRID:grid.443274.2) (ISNI:0000 0001 2237 1871); Key Laboratory of Acoustic Visual Technology and Intelligent Control System, Ministry of Culture and Tourism, Beijing, China (GRID:grid.443274.2); Beijing Key Laboratory of Modern Entertainment Technology, Beijing, China (GRID:grid.443274.2)