Xuan Zhu, Xianxian Wang, Jun Wang, Peng Jin, Li Liu, and Dongfeng Mei
Academic Editor: Xosé M. Pardo
School of Information Science and Technology, Northwest University, Xi'an 710127, China
Received 24 January 2017; Revised 15 April 2017; Accepted 23 May 2017; Published 28 June 2017
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. Introduction
In video surveillance, medical imaging, satellite observation, and other scenarios, limitations of the imaging equipment, hardware storage, the natural environment, and other factors mean that we usually obtain low-resolution (LR) images [1]. However, high-resolution (HR) images are often needed for subsequent image processing and analysis in most practical applications. As an effective approach to this problem, the super-resolution (SR) technique estimates an HR image from one LR image or a sequence of LR images. SR technology recovers high-frequency components and removes resolution degradation, blur, noise, and other undesirable effects by making full use of the available image information.
As a hot research direction in the field of image processing, the SR problem has been studied for more than three decades, and many SR approaches have been proposed. According to the number of input LR images, SR approaches can be broadly classified into two categories: single-image SR and multiframe SR [2]. According to the processing method, there are mainly three kinds of SR approaches: interpolation-based methods [3], reconstruction-based methods [4], and learning-based methods [5]. Interpolation methods compute the value of each interpolated point as a weighted combination of its surrounding pixels. The classical interpolation methods include nearest-neighbor, bilinear, and bicubic interpolation [6]. Although such methods are simple in principle and have low computational complexity, they tend to produce considerable blurring and jagged artifacts. The reconstruction-based methods [7-10] are usually used for multiframe SR. These methods usually incorporate reconstruction constraints or prior knowledge to build a regularized cost function with a data-fidelity term [11]. The reconstruction-based methods are able to recover sharper edges and suppress aliasing artifacts. However, they cannot restore fine structures when the upscaling factor is large, as their performance depends heavily on the nonredundant complementary information among the input LR images. The learning-based methods have undoubtedly become a research hotspot in recent years. These methods exploit the information in training images to establish the relationship between HR and LR image patches. As this relationship reflects the inherent similarity among natural images, learning methods can restore high-frequency information effectively. Typical methods include the Example-Based method [12], the Neighbor Embedding method [13], Sparse Coding methods [14-16], and the Anchored Neighborhood Regression method [17]. In 2010, Yang et al. [18] proposed an image SR method via sparse representation that provides better reconstruction results. In 2012, Zeyde et al. [19] improved the efficiency of Yang's method by reducing the dimension of the training samples and using the K-SVD algorithm to train the dictionaries. In 2014, Farhadifard et al. [20] presented a single-image SR method based on sparse representation via directionally structured dictionaries. It avoids a problem present in both Yang et al. [18] and Zeyde et al. [19]: using the same dictionary for the sparse representation of all image patches cannot reflect differences in the structural characteristics of the patches [21]. In general, learning-based methods need a large and representative database, leading to high computational costs in the dictionary training process.
Inspired by the work of [18, 20] and considering the importance of the learned dictionary, we present a novel Direction and Edge dictionary model for image SR. First, a pair of Direction and Edge templates is built to classify the training image patches into two clusters. Then each cluster is trained separately to obtain two pairs of HR and LR overcomplete Direction and Edge dictionaries. Finally, sparse coding and the Direction and Edge dictionaries are combined to realize single-image SR. The performance of reconstruction-based methods degrades rapidly when the upscaling factor is large. Therefore, we combine the above single-image SR with the projections onto convex sets (POCS) method to realize multiframe SR. Experimental results prove that our method is feasible and effective, while demonstrating better edge and texture preservation.
The rest of this paper is arranged as follows: Section 2 introduces sparse representation and the Direction and Edge learning dictionaries. In Section 3, the proposed sparse representation based image SR using Direction and Edge dictionaries is described. The experimental results of single-image and multiframe SR and their evaluation are given in Section 4. Section 5 gives a brief conclusion.
2. Sparse Representation and Direction and Edge Learning Dictionaries
2.1. Sparse Representation
The HR image X is degraded into the LR image Y by a blurring operator B and a downsampling operator S:

Y = SBX = LX, (1)

where L = SB. Let y be an image patch taken from Y and x the image patch taken from X at the same location. The sparse representation model is as follows [22]:

min_α ||α||_0  subject to  ||x - Dh α||_2^2 ≤ ε, (2)

where α is the sparse representation coefficient of x and Dh ∈ R^(n×K) (K > n) is the HR overcomplete dictionary. Assuming the LR overcomplete dictionary Dl = LDh, with Dl ∈ R^(m×K) (K > m), we have y = Dl α, so HR and LR image patches share the same sparse representation coefficient. As a result, given a pair of HR and LR dictionaries (Dh, Dl) as prior knowledge, we can rebuild the corresponding HR image patch as soon as we obtain the sparse representation coefficient of the LR image patch.
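To make the shared-coefficient idea concrete, the following is a minimal sketch in Python (not the authors' MATLAB code), assuming hypothetical coupled dictionaries Dh and Dl = L Dh and using orthogonal matching pursuit as the sparse coder:

```python
# Minimal sketch of the shared-coefficient idea, assuming hypothetical
# dictionaries D_h (n x K) and D_l = L @ D_h (m x K); not the authors' code.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, K = 36, 18, 256                        # HR dim, LR dim, atoms (assumed sizes)
D_h = rng.standard_normal((n, K))
D_h /= np.linalg.norm(D_h, axis=0)           # unit-norm HR atoms
L = rng.standard_normal((m, n))              # stand-in for blur + downsampling
D_l = L @ D_h                                # coupled LR dictionary

alpha_true = np.zeros(K)
alpha_true[[5, 40, 200]] = [1.0, -0.5, 0.8]  # a known 3-sparse code
x = D_h @ alpha_true                         # HR patch
y = L @ x                                    # observed LR patch

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=3, fit_intercept=False)
alpha = omp.fit(D_l, y).coef_                # sparse code computed on the LR side
x_hat = D_h @ alpha                          # reused on the HR side
print("relative HR error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```

Because y = Lx = LDh α, a sparse code recovered from the LR patch on Dl can be applied directly to Dh; in this toy setting the HR patch is typically recovered with only a small error.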
2.2. Direction and Edge Learning Dictionaries
The quality of the reconstructed image depends largely on the representation ability of the overcomplete dictionary. In Yang et al. [18], the dictionary training scheme is as follows:

{Dh, Dl, A} = argmin ||Uh - Dh A||_2^2 + ||Ul - Dl A||_2^2 + λ||A||_1, (3)

where Uh is the set of sampled HR training image patches, Ul is the corresponding set of LR training image patches, A = (αi) is the sparse representation coefficient matrix, and λ is a balance parameter.
Based on the same sparse representation model (2), Zeyde et al. [19] modify the above dictionary training method: the LR dictionary Dl is trained from the LR set Ul by applying the K-SVD algorithm [23] to solve the following minimization problem [24]:

{Dl, A} = argmin ||Ul - Dl A||_2^2  subject to  ||αi||_0 ≤ K0 for all i, (4)

where K0 denotes the sparsity constraint. The obtained sparse representation matrix A is used to infer the dictionary Dh as follows:

Dh = argmin ||Uh - Dh A||_2^2 = Uh A^T (A A^T)^(-1). (5)
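Equation (5) is a closed-form least-squares update. A short sketch under assumed toy sizes is given below (the K-SVD step itself is not shown, as no standard Python routine implements it):

```python
# Sketch of the HR-dictionary update in (5): D_h = U_h * pinv(A).
# U_h holds HR training patches as columns; A holds the sparse codes
# obtained on the LR side (e.g., by K-SVD). Sizes below are assumed.
import numpy as np

def infer_hr_dictionary(U_h, A):
    """Least-squares solution of min_Dh ||U_h - D_h A||_F^2."""
    return U_h @ np.linalg.pinv(A)

rng = np.random.default_rng(1)
U_h = rng.standard_normal((36, 1000))   # 36-dim HR patches, 1000 samples
A = rng.standard_normal((256, 1000))    # 256 atoms (dense here only for illustration)
D_h = infer_hr_dictionary(U_h, A)
print(D_h.shape)                        # (36, 256)
```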
Yang et al. [18] and Zeyde et al. [19] share two aspects in dictionary training: (i) the large scale of the training sample sets leads to a heavy computational burden in the training process; (ii) they ignore the differences between image patches by using only one pair of global dictionaries, whose representation ability is limited.
It has been shown in [25] that designing multiple dictionaries is more beneficial than designing a single one. Furthermore, [26] points out that using clustering to design several dictionaries improves quality and reduces computational complexity [27]. In 2014, Farhadifard et al. [20] trained eight pairs of directionally structured dictionaries for directional patches and one pair of dictionaries for nondirectional patches. First, the two-dimensional space is divided into eight fixed directions. Then eight kinds of template sets are designed, each containing several templates. Finally, these templates are applied to classify the training sets into eight directional clusters and one nondirectional cluster and then to learn a pair of dictionaries for each cluster.
As is well known, edges represent the large-scale structure of an image and are characterized by smoothness, so the human visual system is more sensitive to them. Besides, image content is highly directional. In short, edge and direction are the most important features of an image. In order to better capture the intrinsic direction and edge characteristics of an image, we design Direction and Edge dictionaries for different clusters of patches, instead of a global dictionary for all patches.
Based on the significant difference between edge pixels and their neighborhood pixels and the strong directionality of images, we design a new pair of Direction and Edge templates, shown in Figure 1. It is not difficult to see that template A represents a vertical direction and edge, while template B represents a horizontal direction and edge.
Figure 1: Direction and Edge templates (from left to right: A template and B template).
[figure omitted; refer to PDF]
The Direction and Edge templates are used to guide the clustering of image patches and, further, to obtain the Direction and Edge dictionaries. First, the training image patches are classified into two clusters, with Euclidean distance as the clustering criterion: the Euclidean distances between each image patch and the two templates are computed, and the smaller value determines which cluster the patch belongs to. Then the two clusters are trained separately to obtain two pairs of HR and LR dictionaries, which are referred to as the Direction and Edge dictionaries.
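The following sketch illustrates this two-way clustering. Since the actual template values appear only in Figure 1 (omitted here), the 6 × 6 templates below are hypothetical stand-ins: A models a vertical edge, B a horizontal edge.

```python
# Two-way patch clustering by distance to the Direction and Edge templates.
# The 6x6 templates A and B below are hypothetical stand-ins (the real ones
# are shown in Figure 1): A models a vertical edge, B a horizontal edge.
import numpy as np

P = 6
template_A = np.tile(np.array([0, 0, 0, 1, 1, 1], dtype=float), (P, 1))  # vertical edge (assumed)
template_B = template_A.T                                                # horizontal edge (assumed)

def normalize(patch):
    patch = patch - patch.mean()
    norm = np.linalg.norm(patch)
    return patch / norm if norm > 0 else patch

def classify_patch(patch):
    """Return 0 (first cluster) or 1 (second cluster) by Euclidean distance."""
    p = normalize(patch)
    d_a = np.linalg.norm(p - normalize(template_A))
    d_b = np.linalg.norm(p - normalize(template_B))
    return 0 if d_a <= d_b else 1

patch = np.tile(np.linspace(0.0, 1.0, P), (P, 1))   # left-to-right ramp
print(classify_patch(patch))                        # 0: closer to the vertical template
```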
There are several advantages of Direction and Edge dictionaries: (i) the dictionaries are expected to better represent the intrinsic direction and edge characteristics of natural images; (ii) the reconstructed HR image obtained with these dictionaries inherits the large-scale information of natural images and contains more high-frequency information, which are the most important components for SR; (iii) they reduce computational complexity, since structured dictionaries can be smaller than a global dictionary.
In order to improve the efficiency of the algorithm, our templates are of size 6 × 6. Compared with Farhadifard et al. [20], our method uses only two templates, which consider not only the direction but also the edge features. In addition, there is no need to set a specific threshold for clustering nondirectional patches as in [20]. Of course, other classification templates could also be tried.
3. Image SR Based on Direction and Edge Dictionaries
3.1. Single-Image SR
3.1.1. Method
The single-image SR based on Direction and Edge dictionaries includes three steps: training set construction, Direction and Edge dictionary training, and image reconstruction, as shown in Figures 2, 3, and 4. In the training set construction phase, after taking overlapping patches from the training images, all patches are classified into two clusters according to their Euclidean distances to the templates. In the Direction and Edge dictionary training phase, we obtain the LR dictionary for each cluster of the LR training set using the K-SVD algorithm and then obtain the corresponding HR dictionary by (5). In the reconstruction phase, after computing the sparse representation coefficient of each LR patch, the HR patch is obtained by multiplying the coefficient by the HR dictionary of the corresponding class.
Figure 2: Construction of the two classes of HR and LR training sets.
[figure omitted; refer to PDF]
Figure 3: Direction and Edge dictionary training.
[figure omitted; refer to PDF]
Figure 4: Process of single-image SR.
[figure omitted; refer to PDF]
3.1.2. Algorithm Implementation
Step 1 (constructing training sets).
(a) Take 91 natural images as the HR image library; the LR image library consists of LR images obtained by downsampling the HR images. To reach the HR image dimension, the LR images are scaled up to the size of the HR images via bicubic interpolation and are termed medium-resolution (MR) images.
(b) Take 6×6 patches with a five-pixel overlap from the HR images, and then calculate the Euclidean distances between each normalized patch and the two templates. Classify the patches into two classes by these distances, and record the positions of the first- and second-class patches.
(c) Take patches of the same size from the MR image at the same positions as in the HR image, and then use the first- and second-order gradients of the patches as feature vectors (a gradient-filter sketch follows this list). Build the first class LR (LR1) training set and the second class LR (LR2) training set by collecting the feature vectors of the corresponding class.
(d) Extract patches from the difference image HR − MR as column feature vectors, so as to build the first class HR training set (HR1) and the second class HR training set (HR2) by collecting the feature vectors of the corresponding class.
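The gradient features in step (c) can be sketched as follows. The paper does not list the exact kernels, so the four first/second-order filters below are the ones commonly used in Yang/Zeyde-style methods and are an assumption:

```python
# Sketch of the LR feature extraction in step (c). The filter kernels are an
# assumption (the common first/second-order gradient filters), not taken
# from the paper itself.
import numpy as np
from scipy.signal import convolve2d

f1 = np.array([[-1, 0, 1]], dtype=float)         # first-order, horizontal
f2 = f1.T                                        # first-order, vertical
f3 = np.array([[1, 0, -2, 0, 1]], dtype=float)   # second-order, horizontal
f4 = f3.T                                        # second-order, vertical

def lr_feature_vector(mr_patch):
    """Stack the four gradient responses of a 6x6 MR patch into one column vector."""
    responses = [convolve2d(mr_patch, f, mode='same') for f in (f1, f2, f3, f4)]
    return np.concatenate([r.ravel() for r in responses])   # length 4 * 36 = 144

print(lr_feature_vector(np.random.default_rng(2).random((6, 6))).shape)   # (144,)
```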
Step 2 (Direction and Edge dictionary training).
For the first class, train on the LR1 training set with the K-SVD algorithm to get the first class LR dictionary Dl1 and the sparse coefficient matrix A1. According to (5), get the first class HR dictionary Dh1 from the known A1 and the HR1 training set. Similarly, get the second class LR dictionary Dl2 and HR dictionary Dh2.
Step 3 (image reconstruction).
(a) Acquire the MR image by interpolating the input LR image. Take patches with a five-pixel overlap from the MR image and classify them into two clusters by the same method as above. Then obtain feature vectors by extracting the first- and second-order gradients of the patches. Finally, calculate the sparse coefficient α of each column feature vector on the LR dictionary Dl of the corresponding class.
(b) Calculate the high-frequency information of each patch from the known α and the HR dictionary Dh of the corresponding class. Add the high-frequency information to the corresponding MR image patch, and then remove the blocking effect across overlapping patches to obtain the final HR image (a per-patch sketch follows this list).
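A hedged per-patch sketch of Step 3 is given below. It assumes the dictionaries Dl1, Dh1, Dl2, Dh2 from Step 2 and the classify_patch and lr_feature_vector helpers sketched earlier, and it simplifies overlap handling to plain averaging; it is an illustration, not the authors' implementation:

```python
# Per-patch reconstruction sketch for Step 3, assuming the dictionaries from
# Step 2 and the classify_patch / lr_feature_vector helpers sketched earlier.
# Overlap handling is simplified to plain averaging; not the authors' code.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def reconstruct(mr, D_l, D_h, patch=6, step=1, sparsity=3):
    """mr: bicubically upscaled image; D_l, D_h: [class-1, class-2] dictionaries."""
    H, W = mr.shape
    acc = np.zeros_like(mr, dtype=float)
    weight = np.zeros_like(mr, dtype=float)
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=sparsity, fit_intercept=False)
    for i in range(0, H - patch + 1, step):          # step=1 gives a 5-pixel overlap
        for j in range(0, W - patch + 1, step):
            p = mr[i:i + patch, j:j + patch]
            c = classify_patch(p)                    # choose Direction or Edge class
            alpha = omp.fit(D_l[c], lr_feature_vector(p)).coef_
            high_freq = (D_h[c] @ alpha).reshape(patch, patch)
            acc[i:i + patch, j:j + patch] += p + high_freq
            weight[i:i + patch, j:j + patch] += 1.0
    return acc / np.maximum(weight, 1.0)

# usage (shapes assumed): each D_l[c] is 144 x 256, each D_h[c] is 36 x 256
# hr_image = reconstruct(mr_image, [Dl1, Dl2], [Dh1, Dh2])
```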
The results of single-image SR are shown in Section 4.
3.2. Multiframe SR
3.2.1. Method
The POCS method is widely used for multiframe SR and makes it easy to introduce prior knowledge. However, it usually produces jagged edges in the reconstructed results when the upscaling factor is large. Our method based on Direction and Edge dictionaries can recover more high-frequency information and preserve smooth edges. Therefore, we combine the POCS method with our single-image SR method to realize multiframe SR. It includes three steps: multiframe registration, POCS reconstruction, and single-image SR based on Direction and Edge dictionaries, as shown in Figure 5.
Figure 5: Process of multiframe SR.
[figure omitted; refer to PDF]
In the multiframe registration stage, feature points of the input images are first extracted and matched with the SURF algorithm [28]. The mismatched points are then removed with the RANSAC algorithm [29]. Finally, the registered images are obtained according to the parameters of the computed affine transformation matrix.
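A sketch of this registration stage with OpenCV is shown below; it assumes an opencv-contrib build that includes the non-free SURF module, and it uses RANSAC-based affine estimation as a stand-in for the explicit mismatch removal plus affine fitting described above:

```python
# Registration sketch with OpenCV, assuming an opencv-contrib build that
# includes the non-free SURF module; not the authors' implementation.
import cv2
import numpy as np

def register_to_reference(ref, frame):
    """Warp `frame` onto `ref` using SURF matches and a RANSAC affine model."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp_ref, des_ref = surf.detectAndCompute(ref, None)
    kp_f, des_f = surf.detectAndCompute(frame, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_f, des_ref, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < 0.7 * m[1].distance]   # ratio test
    src = np.float32([kp_f[m.queryIdx].pt for m in good])
    dst = np.float32([kp_ref[m.trainIdx].pt for m in good])
    M, _ = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)          # drops mismatches
    h, w = ref.shape[:2]
    return cv2.warpAffine(frame, M, (w, h))

# usage: registered = [frames[0]] + [register_to_reference(frames[0], f) for f in frames[1:]]
```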
3.2.2. Algorithm Implementation
Step 1 (multiframe image registration).
(a) Obtain the LR image sequence via geometric distortion and downsampling of the HR image. Then select the first frame as the reference frame and the other frames as floating frames. Use the SURF algorithm to extract feature points and the RANSAC algorithm to remove the false matches.
(b) The registered images are calculated on the basis of the affine transformation model estimated from the matched points.
Step 2.
Use the POCS method to reconstruct the registered images by an upscaling factor p (a simplified sketch is given after Step 3).
Step 3.
The result of POCS is magnified by our single-image method by a factor of q. The overall reconstruction upscaling factor is p × q.
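A much-simplified POCS sketch for Step 2 is given below; it assumes already-registered LR frames and a plain box-blur plus decimation observation model, whereas the full method also accounts for per-frame motion and a proper point spread function. The parameter names and defaults are assumptions:

```python
# Simplified POCS sketch for Step 2, assuming already-registered LR frames and
# a box-blur + decimation observation model; parameter names are assumptions.
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def pocs(registered_lr, p=2, n_iter=20, delta=1.0):
    hr = zoom(np.asarray(registered_lr[0], dtype=float), p, order=3)  # initial estimate
    for _ in range(n_iter):
        for lr in registered_lr:
            simulated = uniform_filter(hr, size=p)[::p, ::p]          # blur + downsample
            residual = np.asarray(lr, dtype=float) - simulated
            residual[np.abs(residual) <= delta] = 0.0                 # already consistent
            hr += zoom(residual, p, order=0) / (p * p)                # project back
    return np.clip(hr, 0.0, 255.0)
```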
4. The Experimental Results and Evaluation
In this section, we present numerous experiments to verify the performance of our method. All experiments are executed with MATLAB 8.3.0.
4.1. Single-Image SR
The experimental setting in this paper follows Yang et al. [18]. The same 91 training images are adopted, the dictionaries have 256 atoms, and the patch size is 6×6 with an overlap of 5 pixels between adjacent patches. LR training and testing images are generated by resizing the ground truth images with bicubic interpolation. Since the human visual system is more sensitive to luminance changes, we apply the SR method only to the luminance component, while applying simple bicubic interpolation to the chromatic components.
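This color handling can be sketched as follows; sr_luminance is a placeholder for the proposed luminance-channel SR, and the OpenCV calls are one possible implementation rather than the one used in the paper:

```python
# Luminance-only SR sketch: the Y channel goes through the SR method, while
# the chroma channels are upscaled with bicubic interpolation.
# `sr_luminance` is a placeholder; the OpenCV calls are assumptions.
import numpy as np
import cv2

def super_resolve_color(lr_bgr, sr_luminance, scale=2):
    ycrcb = cv2.cvtColor(lr_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    y, cr, cb = cv2.split(ycrcb)
    h, w = y.shape
    y_sr = sr_luminance(y)                                       # proposed SR on luminance
    cr_up = cv2.resize(cr, (w * scale, h * scale), interpolation=cv2.INTER_CUBIC)
    cb_up = cv2.resize(cb, (w * scale, h * scale), interpolation=cv2.INTER_CUBIC)
    merged = cv2.merge([y_sr.astype(np.float32), cr_up, cb_up])
    return cv2.cvtColor(np.clip(merged, 0, 255).astype(np.uint8), cv2.COLOR_YCrCb2BGR)
```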
We compare the proposed single-image SR based on Direction and Edge dictionaries with the bicubic interpolation method and several state-of-the-art SR methods, including Yang et al. [18], Zeyde et al. [19], NCSR [16], ANR [17], and CSC [15]. The source codes of the competing methods are downloaded from the authors' websites, and we use the parameters recommended by the authors.
Visual Quality. We perform experiments on 16 widely used test images with an upscaling factor of 2. In Figures 6, 7, and 8, we show the single-image SR results of the competing methods on the images Plant, Parrot, and Comic. For clearer comparison, a local region is magnified four times and displayed in the upper left corner of each figure. As highlighted in the small window, the SR results of our method recover more high-frequency information with fewer artifacts.
Figure 6: SR results on image Plant (the upscaling factor 2).
[figure omitted; refer to PDF]
Figure 7: SR results on image Parrot (the upscaling factor 2).
[figure omitted; refer to PDF]
Figure 8: SR results on image Comic (the upscaling factor 2).
[figure omitted; refer to PDF]
PSNR and SSIM. The peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) values of the competing methods are shown in Tables 1 and 2. Our method achieves much better PSNR and SSIM indices than bicubic interpolation and NCSR. Its average values are only slightly inferior to Yang's method, Zeyde's method, and ANR. For the PSNR index, our method is better than Yang's method on Raccoon, better than Zeyde's method on Hat, Lena, and Bike, and better than ANR on Hat, Parrot, and Raccoon. The PSNR of CSC, which is based on convolutional sparse coding, is higher than that of our method, but CSC requires a long running time and at least 3 GB of memory. In short, the results verify not only the validity of our method but also its good robustness for different kinds of input.
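The metrics in Tables 1 and 2 can be computed, for example, with scikit-image; the paper does not state which implementation was used, so this is only an assumption:

```python
# Metric sketch with scikit-image on the luminance channel; which PSNR/SSIM
# implementation the paper used is not stated, so this is an assumption.
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(ground_truth, estimate):
    psnr = peak_signal_noise_ratio(ground_truth, estimate, data_range=255)
    ssim = structural_similarity(ground_truth, estimate, data_range=255)
    return psnr, ssim
```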
Table 1: PSNR (dB) results by different methods (the upscaling factor 2).
Images | Bicubic | Yang | Zeyde | NCSR | ANR | CSC | Our |
Butterfly | 27.43 | 30.16 | 29.93 | 29.39 | 29.67 | 31.95 | 29.69 |
Child | 31.93 | 33.36 | 33.26 | 30.84 | 33.20 | 33.68 | 33.19 |
Hat | 31.73 | 33.50 | 33.28 | 31.00 | 33.24 | 34.74 | 33.30 |
Lena | 32.70 | 34.48 | 34.19 | 31.31 | 34.28 | 35.63 | 34.27 |
Parrot | 31.25 | 33.45 | 32.96 | 30.80 | 33.18 | 34.56 | 33.21 |
Plant | 34.30 | 36.56 | 36.37 | 32.53 | 36.28 | 38.75 | 36.24 |
Parthenon | 28.07 | 29.09 | 28.98 | 28.03 | 28.87 | 29.49 | 28.91 |
Bike | 25.64 | 27.68 | 27.39 | 26.87 | 27.52 | 28.93 | 27.57 |
Comic | 26.01 | 27.71 | 27.42 | 26.96 | 27.52 | 28.40 | 27.55 |
Flower | 30.36 | 32.28 | 31.97 | 30.33 | 31.96 | 33.13 | 32.05 |
Foreman | 32.76 | 34.08 | 35.92 | 31.47 | 35.83 | 36.62 | 34.07 |
Girl | 34.74 | 35.53 | 35.45 | 31.92 | 35.55 | 35.68 | 35.48 |
Pepper | 33.15 | 34.08 | 36.31 | 31.64 | 36.01 | 36.90 | 34.04 |
Raccoon | 30.95 | 32.38 | 32.04 | 29.97 | 32.33 | 32.96 | 32.43 |
Woman | 32.14 | 34.37 | 34.20 | 31.44 | 34.13 | 35.31 | 34.04 |
Zebra | 30.63 | 33.20 | 32.92 | 30.91 | 32.70 | 33.69 | 32.83 |
| |||||||
Average | 30.862 | 32.619 | 32.662 | 30.338 | 32.642 | 33.776 | 32.429 |
Table 2: SSIM results by different methods (the upscaling factor 2).
Images | Bicubic | Yang | Zeyde | NCSR | ANR | CSC | Our |
Butterfly | 0.9086 | 0.9400 | 0.9406 | 0.8402 | 0.9353 | 0.9591 | 0.9336 |
Child | 0.8922 | 0.9200 | 0.9166 | 0.7719 | 0.9190 | 0.9232 | 0.9187 |
Hat | 0.8898 | 0.9145 | 0.9151 | 0.7213 | 0.9162 | 0.9301 | 0.9104 |
Lena | 0.8990 | 0.9249 | 0.9215 | 0.7504 | 0.9240 | 0.9372 | 0.9219 |
Parrot | 0.9270 | 0.9450 | 0.9439 | 0.7590 | 0.9453 | 0.9538 | 0.9425 |
Plant | 0.9310 | 0.9521 | 0.9530 | 0.7873 | 0.9535 | 0.9675 | 0.9493 |
Parthenon | 0.7932 | 0.8357 | 0.8290 | 0.7181 | 0.8287 | 0.8466 | 0.8305 |
Bike | 0.8433 | 0.8987 | 0.8922 | 0.8238 | 0.8953 | 0.9217 | 0.8940 |
Comic | 0.8411 | 0.8973 | 0.8896 | 0.8328 | 0.8929 | 0.9143 | 0.8919 |
Flower | 0.8896 | 0.9202 | 0.9171 | 0.7955 | 0.9192 | 0.9309 | 0.9167 |
Foreman | 0.9450 | 0.9570 | 0.9590 | 0.7822 | 0.9584 | 0.9637 | 0.9551 |
Girl | 0.8450 | 0.8717 | 0.8664 | 0.7337 | 0.8707 | 0.8744 | 0.8706 |
Pepper | 0.9917 | 0.9954 | 0.9960 | 0.8923 | 0.9967 | 0.9970 | 0.9953 |
Raccoon | 0.8419 | 0.8929 | 0.8817 | 0.7871 | 0.8904 | 0.8969 | 0.8921 |
Woman | 0.9428 | 0.9592 | 0.9592 | 0.7982 | 0.9593 | 0.9663 | 0.9565 |
Zebra | 0.9860 | 0.9971 | 0.9959 | 0.9504 | 0.9970 | 0.9976 | 0.9969 |
| |||||||
Average | 0.8980 | 0.9264 | 0.9236 | 0.7965 | 0.9251 | 0.9363 | 0.9235 |
4.2. Multiframe Image SR
The experiments aim to obtain an HR image (512 × 512) from 10 LR frames (128 × 128) with an overall upscaling factor of 4 (p = 2, q = 2). To simulate the imaging process of an actual scene, we generate the 10 LR images from the original HR image via downsampling, random jitter of about 1 to 2 pixels, and rotation between −1 and +1 degree.
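One possible way to simulate such an LR sequence is sketched below with scipy.ndimage; the exact jitter and rotation sampling used by the authors is not specified beyond the stated ranges, so the details here are assumptions:

```python
# Sketch of the LR-sequence simulation; the sampling details are assumptions
# beyond the jitter/rotation ranges stated in the text.
import numpy as np
from scipy.ndimage import rotate, shift, zoom

def simulate_lr_frames(hr, n_frames=10, factor=4, seed=0):
    hr = np.asarray(hr, dtype=float)
    rng = np.random.default_rng(seed)
    frames = []
    for _ in range(n_frames):
        angle = rng.uniform(-1.0, 1.0)              # rotation between -1 and +1 degree
        dy, dx = rng.uniform(1.0, 2.0, size=2)      # jitter of roughly 1-2 pixels
        distorted = shift(rotate(hr, angle, reshape=False, order=3), (dy, dx), order=3)
        frames.append(zoom(distorted, 1.0 / factor, order=3))   # 512x512 -> 128x128
    return frames
```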
In this part, we perform SR experiments on multiframe images with an upscaling factor of 4. However, most state-of-the-art SR methods are designed for single-image SR with an upscaling factor of 2 or 3, so we compare our method with the bicubic interpolation method and POCS. For the bicubic interpolation method, we directly magnify the second frame by a factor of 4.
Table 3 shows the PSNR and SSIM values of multiframe SR on different kinds of images. Compared with the other methods, our method achieves higher PSNR and SSIM values, which verifies its robustness for different kinds of images.
Table 3: PSNR (dB) and SSIM results of multiframe SR (the upscaling factor 4).
Images | Bicubic (PSNR) | POCS (PSNR) | Our (PSNR) | Bicubic (SSIM) | POCS (SSIM) | Our (SSIM) |
Lena | 23.5296 | 23.1444 | 24.5765 | 0.5720 | 0.5483 | 0.6194 |
Monarch | 19.4358 | 18.5742 | 20.0315 | 0.6751 | 0.6427 | 0.7172 |
Pepper | 23.6443 | 23.5289 | 26.6741 | 0.6761 | 0.6631 | 0.7505 |
Child | 23.9282 | 23.0742 | 25.5610 | 0.7041 | 0.6622 | 0.7462 |
| ||||||
Average | 22.6345 | 22.0804 | 24.2108 | 0.6568 | 0.6291 | 0.7083 |
Figures 9-11 show the multiframe SR results of the competing methods on the images Lena, Monarch, and Pepper. For convenience of presentation, we show only four of the input LR images and crop the reconstructed images, as they are too large to display in full. As the figures show, the edges produced by our method are smoother and more natural, and the results contain more details and fewer artifacts.
Figure 9: Results of Lena (the upscaling factor 4). Smaller: input images. Larger: from left to right and top to bottom: bicubic, POCS, our method, and original image.
[figure omitted; refer to PDF]
Figure 10: Results of the Monarch (the upscaling factor 4). Smaller: input images. Larger: from left to right and top to bottom: bicubic, POCS, our method, and original image.
[figure omitted; refer to PDF]
Figure 11: Results of the Pepper (the upscaling factor 4). Smaller: input images. Larger: from left to right and top to bottom: bicubic, POCS, our method, and original image.
[figure omitted; refer to PDF]
5. Conclusion
In this paper, we present a novel approach for image super-resolution based on sparse representation with Direction and Edge dictionaries. The key idea is to classify image patches according to their direction and edge features and to code each patch selectively with the more appropriate dictionary. According to the Euclidean distances between each image patch and two new templates, the image patches are divided into two clusters, which are then trained to obtain two pairs of Direction and Edge dictionaries. The single-image experimental results indicate the usefulness of the proposed Direction and Edge dictionaries. Furthermore, we combine POCS with our single-image SR method to realize multiframe SR, which is especially useful when the upscaling factor is large, and the experiments show similarly satisfactory results. In short, our proposed method achieves not only competitive PSNR and SSIM values but also more pleasant visual quality of edge structures and textures.
[1] J. Li, Research on sparse representation based image super-resolution reconstruction method [D. E. thesis], Chongqing University, 2015.
[2] X. Zhu, B. Li, J. Tao, B. Jiang, "Super-resolution image reconstruction via patch Haar wavelet feature extraction combined with sparse coding," in Proceedings of the 2015 IEEE International Conference on Information and Automation (ICIA 2015), pp. 770-775, August 2015.
[3] Z. Wei, K.-K. Ma, "Contrast-guided image interpolation," IEEE Transactions on Image Processing, vol. 22, no. 11, pp. 4271-4285, 2013.
[4] S. S. Panda, M. S. R. S. Prasad, G. Jena, "POCS based super-resolution image reconstruction using an adaptive regularization parameter," International Journal of Computer Science Issues, vol. 8, no. 5, 2011.
[5] D. Glasner, S. Bagon, M. Irani, "Super-resolution from a single image," in Proceedings of the 12th International Conference on Computer Vision (ICCV '09), pp. 349-356, October 2009.
[6] F. Zhou, W. Yang, Q. Liao, "Interpolation-based image super-resolution using multisurface fitting," IEEE Transactions on Image Processing, vol. 21, no. 7, pp. 3312-3318, 2012.
[7] H. Stark, P. Oskoui, "High-resolution image recovery from image-plane arrays, using convex projections," Journal of the Optical Society of America A: Optics and Image Science, vol. 6, no. 11, pp. 1715-1726, 1989.
[8] M. Irani, S. Peleg, "Improving resolution by image registration," CVGIP: Graphical Models and Image Processing, vol. 53, no. 3, pp. 231-239, 1991.
[9] R. R. Schultz, R. L. Stevenson, "A Bayesian approach to image expansion for improved definition," IEEE Transactions on Image Processing, vol. 3, no. 3, pp. 233-242, 1994.
[10] S. Farsiu, M. D. Robinson, M. Elad, P. Milanfar, "Fast and robust multiframe super resolution," IEEE Transactions on Image Processing, vol. 13, no. 10, pp. 1327-1344, 2004.
[11] Y. Zhang, J. Liu, W. Yang, Z. Guo, "Image super-resolution based on structure-modulated sparse representation," IEEE Transactions on Image Processing, vol. 24, no. 9, pp. 2797-2810, 2015.
[12] W. T. Freeman, T. R. Jones, E. C. Pasztor, "Example-based super-resolution," IEEE Computer Graphics and Applications, vol. 22, no. 2, pp. 56-65, 2002.
[13] H. Chang, D.-Y. Yeung, Y. Xiong, "Super-resolution through neighbor embedding," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '04), pp. 275-282, IEEE, Washington, DC, USA, July 2004.
[14] J. Yang, J. Wright, T. Huang, Y. Ma, "Image super-resolution as sparse representation of raw image patches," in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition, pp. 1-8, June 2008.
[15] S. Gu, W. Zuo, Q. Xie, D. Meng, X. Feng, L. Zhang, "Convolutional sparse coding for image super-resolution," in Proceedings of the 15th IEEE International Conference on Computer Vision (ICCV 2015), pp. 1823-1831, December 2015.
[16] W. Dong, L. Zhang, G. Shi, X. Li, "Nonlocally centralized sparse representation for image restoration," IEEE Transactions on Image Processing, vol. 22, no. 4, pp. 1620-1630, 2013.
[17] R. Timofte, V. De Smet, L. Van Gool, "Anchored neighborhood regression for fast example-based super-resolution," in Proceedings of the 14th IEEE International Conference on Computer Vision (ICCV '13), pp. 1920-1927, December 2013.
[18] J. Yang, J. Wright, T. S. Huang, Y. Ma, "Image super-resolution via sparse representation," IEEE Transactions on Image Processing, vol. 19, no. 11, pp. 2861-2873, 2010.
[19] R. Zeyde, M. Elad, M. Protter, "On single image scale-up using sparse-representations," in Curves and Surfaces 2010, vol. 6920 of Lecture Notes in Computer Science, pp. 711-730, Springer, Berlin, Germany, 2012.
[20] F. Farhadifard, E. Abar, M. Nazzal, H. Ozkaramanli, "Single image super resolution based on sparse representation via directionally structured dictionaries," in Proceedings of the 22nd Signal Processing and Communications Applications Conference (SIU 2014), pp. 1718-1721, April 2014.
[21] Q.-S. Lian, W. Zhang, "Image super-resolution algorithms based on sparse representation of classified image patches," Acta Electronica Sinica, vol. 40, no. 5, pp. 920-925, 2012.
[22] D. L. Donoho, "For most large underdetermined systems of equations, the minimal l1-norm near-solution approximates the sparsest near-solution," Communications on Pure and Applied Mathematics, vol. 59, no. 7, pp. 907-934, 2006.
[23] R. Rubinstein, A. M. Bruckstein, M. Elad, "Dictionaries for sparse representation modeling," Proceedings of the IEEE, vol. 98, no. 6, pp. 1045-1057, 2010.
[24] N. Ai, J. Peng, X. Zhu, X. Feng, "Single image super-resolution by combining self-learning and example-based learning methods," Multimedia Tools and Applications, vol. 75, no. 11, pp. 6647-6662, 2016.
[25] M. Elad, I. Yavneh, "A plurality of sparse representations is better than the sparsest one alone," IEEE Transactions on Information Theory, vol. 55, no. 10, pp. 4701-4714, 2009.
[26] W. Dong, L. Zhang, G. Shi, X. Wu, "Image deblurring and super-resolution by adaptive sparse domain selection and adaptive regularization," IEEE Transactions on Image Processing, vol. 20, no. 7, pp. 1838-1857, 2011.
[27] F. Farhadifard, Single image super resolution based on sparse representation via structurally directional dictionaries [M.S. thesis], Eastern Mediterranean University (EMU), 2013.
[28] H. Bay, A. Ess, T. Tuytelaars, L. Van Gool, "Speeded-up robust features (SURF)," Computer Vision and Image Understanding, vol. 110, no. 3, pp. 346-359, 2008.
[29] M. A. Fischler, R. C. Bolles, "Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography," Communications of the ACM, vol. 24, no. 6, pp. 381-395, 1981.
Copyright © 2017 Xuan Zhu et al.
Abstract
Sparse representation has recently attracted enormous interest in the field of image super-resolution. Sparsity-based methods usually train a single pair of global dictionaries. However, one pair of global dictionaries cannot best sparsely represent different kinds of image patches, as it neglects the two most important image features: edge and direction. In this paper, we propose to train two novel pairs of Direction and Edge dictionaries for super-resolution. For single-image super-resolution, the training image patches are divided into two clusters by two new templates representing direction and edge features. For each cluster, a pair of Direction and Edge dictionaries is learned. Sparse coding is combined with the Direction and Edge dictionaries to realize super-resolution. This single-image super-resolution can restore faithful high-frequency details, and POCS is convenient for incorporating any kind of constraints or priors; therefore, we combine the two methods to realize multiframe super-resolution. Extensive experiments on image super-resolution are carried out to validate the generality, effectiveness, and robustness of the proposed method. Experimental results demonstrate that our method can recover better edge structure and details.