1. Introduction
Matrix Completion (MC) refers to the problem of filling in the missing entries of an unknown matrix from a limited sampling of its entries. This problem has received widespread attention from researchers owing to its many real applications; see, e.g., [1–3]. In general, it is impossible to exactly recover an arbitrary matrix in this setting without additional information, since the problem is extraordinarily ill-posed [4] and hence has infinitely many solutions. In many realistic scenarios, however, the matrix we wish to recover is low-rank or can be well approximated by a low-rank matrix, as illustrated in Figure 1. A natural way to model MC is therefore to seek the lowest-rank matrix consistent with the observed entries by solving the following optimization problem:
[figures omitted; refer to PDF]
Unfortunately, problem (1) is of little practical use since it is NP-hard in general. To circumvent this issue, a nuclear norm minimization method was suggested in [4, 5], which solves
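The display equations for problems (1) and (2) are omitted in this version. Purely as an illustration (not the algorithm proposed later in this paper), nuclear-norm matrix completion of the form (2) is commonly solved by singular value thresholding [11]; a minimal sketch, assuming the observed entries are given together with a boolean mask:

```python
import numpy as np

def svt_complete(m_obs, mask, tau, step=1.2, iters=200):
    """Minimal singular value thresholding (SVT) sketch for nuclear-norm
    matrix completion. m_obs holds the observed entries (zeros elsewhere);
    mask is True where an entry is observed; tau is the shrinkage level."""
    y = np.zeros_like(m_obs, dtype=float)
    x = y
    for _ in range(iters):
        u, s, vt = np.linalg.svd(y, full_matrices=False)
        x = (u * np.maximum(s - tau, 0.0)) @ vt   # shrink singular values
        y = y + step * mask * (m_obs - x)         # push x toward the data
    return x
```

At a fixed point the iterate matches the observations on the sampled entries while the shrinkage keeps its rank low, which is exactly the behavior the nuclear-norm model asks for.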
The nuclear norm is the tightest convex relaxation of the rank function, but it is still far from the best approximation [13]. The relationship between the nuclear norm and the rank function of matrices is similar to that between the ℓ1 norm and the ℓ0 norm of vectors.
Overall, almost all existing MC methods are designed to approximate the rank function and thus induce low rank. To some degree, low rank characterizes only a global prior of a matrix. In many instances, however, matrices possess additional structural priors beyond low rank. Take the matrix (grey image) shown in Figure 1(a) as an example: it is rich in smoothness features (priors); in other words, an entry and its neighboring entries often differ little in value. When dealing with such matrices, most existing MC methods cannot capture these smoothness priors well and hence may perform poorly. Moreover, when the missing ratio of entries is extremely high or high-quality recovered matrices are urgently needed, one has no choice but to exploit such additional priors, since it becomes very difficult to exactly or robustly recover a matrix using only its low-rank prior. Therefore, how to mine more available priors of the underlying matrices and integrate them into MC becomes a crucial problem.
In this paper, we propose a new method that combines the low-rank and smoothness priors of the underlying matrices to deal with the MC problem. Our method is not the first on this topic, but the use of a modified second-order total variation makes it highly competitive. In summary, the contributions of this paper are as follows:
(i)
A modified second-order total variation and the nuclear norm of a matrix are combined to characterize the smoothness and low-rank priors of the underlying matrices, respectively, which makes our method much more competitive for the MC problem.
(ii)
An efficient algorithm is developed to solve the resulting optimization problem, and extensive experiments testify to the effectiveness of the proposed method over many state-of-the-art MC methods.
The remainder of this paper is organized as follows. In Section 2, we review MC methods that simultaneously optimize both the low-rank and smoothness priors of the underlying matrices. Since matrices are second-order tensors, most tensor-based completion methods can also be applied to the MC problem; two related tensor completion methods are therefore also reviewed in this section. In Section 3, we present the proposed method and design an efficient algorithm to solve the induced optimization problem. Experimental results are presented in Section 4. Finally, the conclusion and future work are given in Section 5.
2. A Review on MC with Smoothness Priors
The low-rank and smoothness priors of underlying matrices have been well studied in the MC and visual data processing communities [31, 32], respectively. However, methods combining the two for MC are rarely reported. To the best of our knowledge, Han et al. [33] proposed the first such method for MC, a Linear Total Variation approximation regularized Nuclear Norm (LTVNN) minimization method, which takes the form
Recently, this idea has been extended to the tensor completion (TC) problem. From the tensor perspective, vectors and matrices are first- and second-order tensors, respectively, so most existing TC methods can also be applied to the MC problem. In 2014, Chen et al. [34] proposed a Tucker-decomposition-based Simultaneous Tensor Decomposition and Completion (STDC) method for TC, which solves the following optimization problem:
3. Proposed New Method
In this section, we propose a new method that combines the low rank and smoothness priors of underlying matrices to deal with the MC problem. In our method, the nuclear norm is used to characterize the low rank of underlying matrices while their smoothness priors are characterized by a modified second-order total variation (MSTV), which is defined as
[figures omitted; refer to PDF]
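The MSTV display equation is omitted above. As a plausible discretization only (the paper's exact MSTV may differ, e.g., in its boundary handling or the norm it applies), a squared second-order total variation built from second-order finite differences can be sketched as:

```python
import numpy as np

def second_order_tv(x):
    """Squared second-order total variation (an illustrative discretization,
    not necessarily the paper's exact MSTV). Second differences vanish on
    linear ramps, which is why second-order penalties avoid the staircase
    effect that first-order TV produces."""
    d2_rows = x[2:, :] - 2.0 * x[1:-1, :] + x[:-2, :]  # vertical 2nd diffs
    d2_cols = x[:, 2:] - 2.0 * x[:, 1:-1] + x[:, :-2]  # horizontal 2nd diffs
    return float(np.sum(d2_rows ** 2) + np.sum(d2_cols ** 2))
```

A linear intensity ramp incurs zero penalty under this functional, whereas first-order TV would still charge it; this is the intuition behind replacing TV with a second-order variant.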
The proposed method, based on the nuclear norm and the modified second-order total variation, can now be modeled as the following optimization problem:
To solve problem (14), we adopt the Alternating Direction Method of Multipliers (ADMM) [38], which has been widely used in many applications. To apply ADMM, we first rewrite problem (14) as
Then the corresponding augmented Lagrangian function of (15) is presented as
(1)
Fixing
which is equivalent to
where
where
(2)
Fixing
which is equivalent to
where
where
After we obtain
(3)
Update
We summarize the iterative procedure in Algorithm 1. Note that we do not use a fixed
Algorithm 1: Solving problem (14) via ADMM.
Input:
Output:
1: Initialize
2: repeat
3: Update
4: Update
5: Update
6: Update
7: Update
8: Update
9: until a certain stopping criterion is satisfied.
10: return
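Since the closed-form subproblem solutions in the updates above are omitted in this version, the following is only a generic sketch of an ADMM scheme for a nuclear-norm-plus-second-order-smoothness model. Two simplifications are assumed: the data fit is handled by a quadratic penalty rather than a hard constraint, and the Z-subproblem is solved by a few gradient steps instead of a closed form; the paper's exact updates differ.

```python
import numpy as np

def svt(a, tau):
    """Singular value thresholding: proximal operator of tau * nuclear norm."""
    u, s, vt = np.linalg.svd(a, full_matrices=False)
    return (u * np.maximum(s - tau, 0.0)) @ vt

def admm_mc(m_obs, mask, lam=0.1, beta=10.0, rho=1.0, iters=100, inner=10):
    """Generic ADMM sketch (not the paper's exact algorithm) for
        min_X ||X||_* + lam/2 * ||D2(X)||_F^2 + beta/2 * ||P_mask(X - M)||_F^2
    via the splitting X = Z with scaled dual variable U."""
    x = np.zeros_like(m_obs, dtype=float)
    z = x.copy()
    u = x.copy()
    step = 1.0 / (rho + beta + 32.0 * lam)   # crude Lipschitz bound for the Z-step
    for _ in range(iters):
        x = svt(z - u, 1.0 / rho)            # X-subproblem: nuclear-norm prox
        for _ in range(inner):               # Z-subproblem: gradient steps
            g = rho * (z - x - u) + beta * mask * (z - m_obs)
            d2r = np.diff(z, n=2, axis=0)    # second differences
            d2c = np.diff(z, n=2, axis=1)
            # adjoint of the second-difference operators
            g[:-2, :] += lam * d2r; g[1:-1, :] -= 2.0 * lam * d2r; g[2:, :] += lam * d2r
            g[:, :-2] += lam * d2c; g[:, 1:-1] -= 2.0 * lam * d2c; g[:, 2:] += lam * d2c
            z = z - step * g
        u = u + x - z                        # dual update
    return x
```

The alternation mirrors Algorithm 1: a singular-value shrinkage step promotes low rank, a smoothness-regularized least-squares step enforces the second-order prior and the data, and a dual update ties the two copies together.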
4. Numerical Experiments
In this section, experiments are conducted to evaluate the performance of the proposed method on two tasks: randomly masked image completion and text masked image reconstruction. The former completes images whose entries have been masked at random, and the latter reconstructs images overlaid with text. In our experiments, the Lenna image shown in Figure 1(a) and the 20 widely used test images shown in Figure 3 are used to generate the test matrices. To evaluate the quality of the reconstructed images, the Structural SIMilarity (SSIM) and Peak Signal-to-Noise Ratio (PSNR) indices are adopted. We refer readers to [41] for the SSIM index. As to the PSNR index, suppose the Mean Squared Error (MSE) is defined as
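The MSE and PSNR display equations are omitted in this version; for 8-bit grayscale images the standard definition is PSNR = 10·log10(255²/MSE). A minimal sketch:

```python
import numpy as np

def psnr(x_true, x_rec, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB (standard definition for images
    whose entries range over [0, peak])."""
    err = np.asarray(x_true, dtype=float) - np.asarray(x_rec, dtype=float)
    mse = np.mean(err ** 2)                  # mean squared error
    return 10.0 * np.log10(peak ** 2 / mse)
```

Higher PSNR indicates a reconstruction closer to the ground truth; the values in Tables 1-3 are computed in this way.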
4.1. Convergence Behavior
In this part, we conduct some experiments to examine the convergence behavior of our Algorithm 1. The desired test matrices are generated by masking the entries in Lenna image randomly with a 60% missing ratio. Before the convergence analysis, we have to determine a proper penalty parameter
[figures omitted; refer to PDF]
Figure 5 plots the convergence results of our Algorithm 1. One can easily see that both the RNIE and RTIE decrease rapidly and consistently. Specifically, in terms of RNIE, when
[figures omitted; refer to PDF]
4.2. Randomly Masked Image Completion
To illustrate the effectiveness of our method, we compare our Algorithm 1 with five other state-of-the-art algorithms: the IRLS algorithm [24], the PSSV algorithm [26], the LTVNN algorithm [33], and the SPC algorithms [35] with TV and QV smoothing, i.e., the SPC-TV and SPC-QV algorithms. In our experiments, we set
We start with completing the Lenna image masked randomly with several different missing ratios
Table 1
PSNR

| Missing ratio | 70% | 65% | 60% | 55% | 50% |
|---|---|---|---|---|---|
| IRLS | 21.89 | 22.49 | 22.91 | 23.13 | 23.34 |
| PSSV | 21.80 | 22.60 | 23.21 | 23.90 | 24.61 |
| LTVNN | 23.89 | 24.42 | 24.91 | 25.41 | 25.90 |
| SPC-TV | 24.29 | 24.88 | 25.24 | 25.72 | 26.06 |
| SPC-QV | | | | | |
| Ours | | | | | |
[figures omitted; refer to PDF]
To further confirm the effectiveness of our method, we apply our Algorithm 1, together with the other five algorithms, to the 20 widely used test images (Figure 3). The results are presented in Table 2. The overall impression from Table 2 is that our Algorithm 1 achieves the highest SSIM in all cases and the highest PSNR in almost all cases. Although in terms of PSNR our Algorithm 1 is weaker than the SPC-QV algorithm in some cases, these cases occupy only a small proportion (13/80) of the total, and the PSNR difference in these cases is also very small (less than 0.8 dB).
Table 2
PSNR

Missing ratio = 70% (left six columns) and 60% (right six columns):

| Image | IRLS | PSSV | LTVNN | SPC-TV | SPC-QV | Ours | IRLS | PSSV | LTVNN | SPC-TV | SPC-QV | Ours |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 27.58 | 27.88 | 29.32 | 29.14 | | | 29.11 | 29.35 | 30.41 | 29.84 | | |
| 2 | 21.34 | 21.43 | 23.37 | 23.50 | | | 22.61 | 22.67 | 24.22 | 24.34 | | |
| 3 | 22.03 | 22.12 | 24.29 | 24.70 | | | 23.12 | 23.80 | 25.44 | 25.78 | | |
| 4 | 19.87 | 20.19 | 21.95 | 21.55 | | | 20.90 | 20.79 | 22.49 | 22.01 | | |
| 5 | 22.77 | 23.25 | 24.67 | 24.71 | | | 23.98 | 24.22 | 25.44 | 25.41 | | |
| 6 | 22.72 | 22.56 | 24.55 | 24.86 | | | 23.69 | 23.94 | 25.53 | 25.69 | | |
| 7 | 20.76 | 20.91 | 22.64 | 23.11 | | | 21.98 | 22.21 | 23.57 | 24.04 | | |
| 8 | 24.26 | 24.14 | 25.48 | 26.86 | | | 25.56 | 26.21 | 26.68 | 28.11 | | |
| 9 | 20.40 | 20.33 | 22.01 | 22.14 | | | 21.37 | 21.28 | 22.73 | 22.85 | | |
| 10 | 23.39 | 23.25 | 25.09 | 25.34 | | | 24.26 | 24.48 | 25.96 | 26.19 | | |
| 11 | 19.19 | 19.60 | 21.69 | 21.53 | | | 20.69 | 20.68 | 22.52 | 22.36 | | |
| 12 | 17.87 | 17.92 | 19.80 | 19.95 | | | 18.89 | 19.12 | 20.66 | 20.89 | | |
| 13 | 24.69 | 24.64 | 27.07 | 26.83 | | | 25.69 | 25.73 | 27.81 | 27.51 | | |
| 14 | 24.00 | 23.89 | 25.53 | 25.94 | | | 25.12 | 25.25 | 26.49 | 26.96 | | |
| 15 | 23.44 | 23.71 | 25.64 | 25.38 | | | 24.48 | 24.49 | 26.22 | 25.93 | | |
| 16 | 19.73 | 19.72 | 21.43 | 21.92 | | | 20.85 | 21.04 | 22.41 | 22.81 | | |
| 17 | 19.76 | 19.59 | 21.94 | 21.96 | | | 20.76 | 20.94 | 22.86 | 22.89 | | |
| 18 | 20.12 | 19.96 | 22.11 | 22.67 | | | 20.98 | 21.76 | 23.39 | 23.84 | | |
| 19 | 21.83 | 21.72 | 23.63 | 24.35 | | | 22.78 | 23.64 | 24.95 | 25.59 | | |
| 20 | 20.27 | 20.38 | 22.01 | 22.21 | | | 21.41 | 21.63 | 22.89 | 23.19 | | |

Missing ratio = 55% (left six columns) and 50% (right six columns):

| Image | IRLS | PSSV | LTVNN | SPC-TV | SPC-QV | Ours | IRLS | PSSV | LTVNN | SPC-TV | SPC-QV | Ours |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 29.45 | 29.98 | 30.87 | 30.06 | | | 29.90 | 30.74 | 31.47 | 30.46 | | |
| 2 | 22.91 | 23.21 | 24.53 | 24.75 | | | 23.15 | 23.80 | 24.88 | 25.15 | | |
| 3 | 23.36 | 24.58 | 25.88 | 26.20 | | | 23.59 | 25.42 | 26.35 | 26.69 | | |
| 4 | 21.15 | 21.03 | 22.65 | 22.18 | | | 21.28 | 21.26 | 22.82 | 22.36 | | |
| 5 | 24.17 | 24.61 | 25.71 | 25.68 | | | 24.36 | 25.02 | 25.95 | 25.93 | | |
| 6 | 23.95 | 24.54 | 25.94 | 26.05 | | | 24.08 | 25.01 | 26.26 | 26.37 | | |
| 7 | 22.31 | 22.98 | 24.04 | 24.55 | | | 22.37 | 23.48 | 24.28 | 24.78 | | |
| 8 | 25.81 | 27.13 | 27.11 | 28.58 | | | 26.04 | 28.08 | 27.62 | 29.14 | | |
| 9 | 21.67 | 21.77 | 23.05 | 23.20 | | | 21.84 | 22.19 | 23.36 | 23.55 | | |
| 10 | 24.47 | 25.07 | 26.38 | 26.62 | | | 24.62 | 25.66 | 26.71 | 27.02 | | |
| 11 | 21.00 | 21.15 | 22.89 | 22.72 | | | 21.17 | 21.60 | 23.09 | 22.92 | | |
| 12 | 19.09 | 19.66 | 20.97 | 21.27 | | | 19.31 | 20.38 | 21.39 | 21.82 | | |
| 13 | 25.94 | 26.25 | 28.17 | 27.83 | | | 26.11 | 26.70 | 28.40 | 28.08 | | |
| 14 | 25.35 | 25.92 | 26.93 | 27.41 | | | 25.52 | 26.65 | 27.26 | 27.73 | | |
| 15 | 24.75 | 24.91 | 26.54 | 26.24 | | | 24.94 | 25.25 | 26.77 | 26.52 | | |
| 16 | 21.12 | 21.64 | 22.77 | 23.24 | | | 21.28 | 22.22 | 23.16 | 23.66 | | |
| 17 | 20.94 | 21.51 | 23.20 | 23.19 | | | 21.09 | 22.00 | 23.52 | 23.55 | | |
| 18 | 21.24 | 22.64 | 23.96 | 24.39 | | | 21.36 | 23.50 | 24.40 | 24.88 | | |
| 19 | 23.00 | 24.39 | 25.43 | 26.09 | | | 23.19 | 25.20 | 25.88 | 26.53 | | |
| 20 | 21.64 | 22.13 | 23.20 | 23.60 | | | 21.77 | 22.67 | 23.49 | 23.93 | | |
4.3. Text Masked Image Reconstruction
Compared to randomly masked image completion, text masked image reconstruction is a relatively hard task, since the entries masked by the text are not randomly distributed in the matrix and the text may cover important texture information. However, this problem can still be cast as an MC problem by first detecting the positions of the text and regarding the corresponding entries as missing values.
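The conversion described above can be sketched as follows, assuming for illustration that text pixels are marked by a single known intensity (the `text_value` parameter is a hypothetical simplification; real text localization would need a more robust detector):

```python
import numpy as np

def text_mask_to_mc(image, text_value=255):
    """Turn a text-masked grayscale image into MC inputs. Entries equal to
    the (assumed known) text intensity are treated as missing. Returns the
    observed matrix (zeros at missing positions) and the boolean mask."""
    img = np.asarray(image, dtype=float)
    observed = img != float(text_value)     # True where the entry survives
    return np.where(observed, img, 0.0), observed
```

The resulting pair (observed matrix, mask) is exactly the input an MC solver expects, so the same completion machinery used for random masks applies unchanged.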
Figure 7 shows the visual results obtained by the above-mentioned algorithms on reconstructing the Lenna image with overlaid text. In terms of PSNR and SSIM values, our Algorithm 1 and the SPC-QV algorithm rank first and second, respectively, followed by the remaining algorithms. In terms of visual quality, the proposed Algorithm 1 reconstructs the original image well and preserves its local feature structure with no visible text traces. In contrast, the images reconstructed by the IRLS and PSSV algorithms are covered with obvious text traces. The LTVNN algorithm again suffers from the staircase effect. The image reconstructed by the SPC-QV algorithm is better than that by the SPC-TV algorithm, but still shows faint text traces. Based on the previous 20 test images, we generate another 20 text-masked images (Figure 8) to compare our Algorithm 1 with the five other algorithms on the text masked image reconstruction problem. The PSNR and SSIM results are reported in Table 3, which again confirms the excellent performance of our Algorithm 1.
Table 3
PSNR

| Image | IRLS | PSSV | LTVNN | SPC-TV | SPC-QV | Ours |
|---|---|---|---|---|---|---|
| 1 | 28.56 | 30.47 | | 29.78 | 30.74 | |
| 2 | 22.13 | 23.96 | 24.43 | 24.62 | | |
| 3 | 22.27 | 25.45 | 25.41 | 25.78 | | |
| 4 | 21.14 | 21.68 | 22.69 | 22.50 | | |
| 5 | 23.60 | 25.18 | 25.33 | 25.40 | | |
| 6 | 22.85 | 24.94 | 25.74 | 25.56 | | |
| 7 | 21.14 | 23.30 | 23.20 | 23.70 | | |
| 8 | 24.93 | 29.08 | 27.33 | 28.72 | | |
| 9 | 21.12 | 22.70 | 22.94 | 23.38 | | |
| 10 | 23.84 | 26.49 | 26.04 | 26.76 | | |
| 11 | 20.20 | 21.32 | 22.21 | 22.07 | | |
| 12 | 18.12 | 20.51 | 20.64 | 20.97 | | |
| 13 | 25.32 | 27.09 | 27.95 | 27.72 | | |
| 14 | 24.31 | 26.74 | 26.67 | 27.18 | | |
| 15 | 24.30 | 25.41 | 26.37 | 26.06 | | |
| 16 | 20.19 | 22.56 | 22.41 | 22.97 | | |
| 17 | 19.76 | 21.80 | 22.77 | 22.61 | | |
| 18 | 19.77 | 23.90 | 23.72 | 23.97 | | |
| 19 | 21.43 | 24.76 | 25.19 | 25.12 | | |
| 20 | 20.69 | 22.71 | 22.80 | 23.02 | | |
[figures omitted; refer to PDF]
[figure omitted; refer to PDF]

5. Conclusion and Future Work
This paper proposed a new method that combines the low-rank and smoothness priors of matrices to tackle the matrix completion problem. Unlike the previous LTVNN method, the proposed method characterizes the smoothness of matrices by a modified second-order total variation, which not only avoids the staircase effect caused by the LTVNN method but also leads to better performance. Even compared to the recently emerged smooth PARAFAC tensor completion method, our method remains highly competitive in terms of both PSNR and SSIM. Extensive experiments further confirm the excellent performance of the proposed method. Potential future work includes replacing the nuclear norm in our method with other (nonconvex) low-rank-promoting functions, integrating more available priors of matrices to enhance existing matrix completion methods, and applying our modified second-order total variation to tensor completion problems.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
This work was supported by the Natural Science Foundation of China under Grants 61273020 and 61673015.
[1] N. Komodakis, "Image Completion Using Global Optimization," Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Volume 1 (CVPR'06), pp. 442-452, DOI: 10.1109/CVPR.2006.141, .
[2] Y. Koren, R. Bell, C. Volinsky, "Matrix factorization techniques for recommender systems," Computer, vol. 42 no. 8, pp. 30-37, DOI: 10.1109/mc.2009.263, 2009.
[3] H. Ji, C. Liu, Z. Shen, Y. Xu, "Robust video denoising using low rank matrix completion," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 1791-1798, DOI: 10.1109/CVPR.2010.5539849, .
[4] E. J. Candès, B. Recht, "Exact matrix completion via convex optimization," Foundations of Computational Mathematics, vol. 9 no. 6, pp. 717-772, DOI: 10.1007/s10208-009-9045-5, 2009.
[5] B. Recht, M. Fazel, P. A. Parrilo, "Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization," SIAM Review, vol. 52 no. 3, pp. 471-501, DOI: 10.1137/070697835, 2010.
[6] Y. Chen, "Incoherence-optimal matrix completion," Institute of Electrical and Electronics Engineers Transactions on Information Theory, vol. 61 no. 5, pp. 2909-2923, DOI: 10.1109/TIT.2015.2415195, 2015.
[7] G. Liu, P. Li, "Low-rank matrix completion in the presence of high coherence," IEEE Transactions on Signal Processing, vol. 64 no. 21, pp. 5623-5633, DOI: 10.1109/TSP.2016.2586753, 2016.
[8] S. W. Ji, J. P. Ye, "An accelerated gradient method for trace norm minimization," Proceedings of the 26th Annual International Conference on Machine Learning (ICML '09), pp. 457-464, DOI: 10.1145/1553374.1553434, 2009.
[9] K.-C. Toh, S. Yun, "An accelerated proximal gradient algorithm for nuclear norm regularized least squares problems," Pacific Journal of Optimization, vol. 6 no. 3, pp. 615-640, 2010.
[10] M. Fornasier, H. Rauhut, R. Ward, "Low-rank matrix recovery via iteratively reweighted least squares minimization," SIAM Journal on Optimization, vol. 21 no. 4, pp. 1614-1640, DOI: 10.1137/100811404, 2011.
[11] J. Cai, E. J. Candès, Z. Shen, "A singular value thresholding algorithm for matrix completion," SIAM Journal on Optimization, vol. 20 no. 4, pp. 1956-1982, DOI: 10.1137/080738970, 2010.
[12] J.-F. Cai, S. Osher, "Fast singular value thresholding without singular value decomposition," Methods and Applications of Analysis, vol. 20 no. 4, pp. 335-351, DOI: 10.4310/MAA.2013.v20.n4.a2, 2013.
[13] C. Xu, Z. C. Lin, H. B. Zha, "A unified convex surrogate for the Schatten-p norm," Proceedings of the AAAI Conference on Artificial Intelligence, pp. 926-932, .
[14] S. Foucart, H. Rauhut, A Mathematical Introduction to Compressive Sensing, DOI: 10.1007/978-0-8176-4948-7, 2013.
[15] R. Chartrand, "Exact reconstruction of sparse signals via nonconvex minimization," IEEE Signal Processing Letters, vol. 14 no. 10, pp. 707-710, DOI: 10.1109/LSP.2007.898300, 2007.
[16] J. Wen, D. Li, F. Zhu, "Stable recovery of sparse signals via L p -minimization," Applied and Computational Harmonic Analysis, vol. 38 no. 1, pp. 161-176, DOI: 10.1016/j.acha.2014.06.003, 2015.
[17] Y. Wang, W. Yin, "Sparse signal reconstruction via iterative support detection," SIAM Journal on Imaging Sciences, vol. 3 no. 3, pp. 462-491, DOI: 10.1137/090772447, 2010.
[18] P. Yin, Y. Lou, Q. He, J. Xin, "Minimization of l 1-2 for compressed sensing," SIAM Journal on Scientific Computing, vol. 37 no. 1, pp. A536-A563, DOI: 10.1137/140952363, 2015.
[19] W. Wang, J. Wang, Z. Zhang, "Robust Signal Recovery with Highly Coherent Measurement Matrices," IEEE Signal Processing Letters, vol. 24 no. 3, pp. 304-308, DOI: 10.1109/LSP.2016.2626308, 2017.
[20] G. Marjanovic, V. Solo, "On l q optimization and matrix completion," IEEE Transactions on Signal Processing, vol. 60 no. 11, pp. 5714-5724, DOI: 10.1109/TSP.2012.2212015, 2012.
[21] Y. Hu, D. Zhang, J. Ye, X. Li, X. He, "Fast and accurate matrix completion via truncated nuclear norm regularization," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35 no. 9, pp. 2117-2130, DOI: 10.1109/TPAMI.2012.271, 2013.
[22] T.-H. Ma, Y. Lou, T.-Z. Huang, "Truncated L 1-2 models for sparse recovery and rank minimization," SIAM Journal on Imaging Sciences, vol. 10 no. 3, pp. 1346-1380, DOI: 10.1137/16M1098929, 2017.
[23] F. P. Nie, H. Huang, C. Ding, "Low-rank matrix recovery via efficient Schatten p-norm minimization," Proceedings of the 26th AAAI Conference on Artificial Intelligence, pp. 655-661, .
[24] M. J. Lai, Y. Y. Xu, W. T. Yin, "Improved iteratively reweighted least squares for unconstrained smoothed l q minimization," SIAM Journal on Numerical Analysis, vol. 51 no. 2, pp. 927-957, DOI: 10.1137/110840364, 2013.
[25] F. Cao, J. Chen, H. Ye, J. Zhao, Z. Zhou, "Recovering low-rank and sparse matrix based on the truncated nuclear norm," Neural Networks, vol. 85, pp. 10-20, DOI: 10.1016/j.neunet.2016.09.005, 2017.
[26] T.-H. Oh, Y.-W. Tai, J.-C. Bazin, H. Kim, I. S. Kweon, "Partial sum minimization of singular values in robust PCA: Algorithm and applications," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38 no. 4, pp. 744-758, DOI: 10.1109/TPAMI.2015.2465956, 2016.
[27] R. H. Keshavan, A. Montanari, S. Oh, "Matrix completion from a few entries," IEEE Transactions on Information Theory, vol. 56 no. 6, pp. 2980-2998, DOI: 10.1109/TIT.2010.2046205, 2010.
[28] Z. Wen, W. Yin, Y. Zhang, "Solving a low-rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm," Mathematical Programming Computation, vol. 4 no. 4, pp. 333-361, DOI: 10.1007/s12532-012-0044-1, 2012.
[29] K. Lee, Y. Bresler, "ADMiRA: atomic decomposition for minimum rank approximation," IEEE Transactions on Information Theory, vol. 56 no. 9, pp. 4402-4416, DOI: 10.1109/tit.2010.2054251, 2010.
[30] Z. Wang, M.-J. Lai, Z. Lu, W. Fan, H. Davulcu, J. Ye, "Orthogonal rank-one matrix pursuit for low rank matrix completion," SIAM Journal on Scientific Computing, vol. 37 no. 1, pp. A488-A514, DOI: 10.1137/130934271, 2015.
[31] M. A. Davenport, J. Romberg, "An overview of low-rank matrix recovery from incomplete observations," IEEE Journal of Selected Topics in Signal Processing, vol. 10 no. 4, pp. 608-622, DOI: 10.1109/JSTSP.2016.2539100, 2016.
[32] P. Rodriguez, "Total variation regularization algorithms for images corrupted with different noise models: a review," Journal of Electrical and Computer Engineering, vol. 2013, DOI: 10.1155/2013/217021, 2013.
[33] X. Han, J. Wu, L. Wang, Y. Chen, L. Senhadji, H. Shu, "Linear total variation approximate regularized nuclear norm optimization for matrix completion," Abstract and Applied Analysis, vol. 2014, DOI: 10.1155/2014/765782, 2014.
[34] Y.-L. Chen, C.-T. Hsu, H.-Y. M. Liao, "Simultaneous tensor decomposition and completion using factor priors," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36 no. 3, pp. 577-591, DOI: 10.1109/TPAMI.2013.164, 2014.
[35] T. Yokota, Q. Zhao, A. Cichocki, "Smooth PARAFAC decomposition for tensor completion," IEEE Transactions on Signal Processing, vol. 64 no. 20, pp. 5423-5436, DOI: 10.1109/TSP.2016.2586759, 2016.
[36] T. G. Kolda, B. W. Bader, "Tensor decompositions and applications," SIAM Review, vol. 51 no. 3, pp. 455-500, DOI: 10.1137/07070111X, 2009.
[37] K. Papafitsoros, C. B. Schoenlieb, B. Sengul, "Combined first and second order total variation inpainting using split Bregman," Image Processing On Line, vol. 3, pp. 112-136, DOI: 10.5201/ipol.2013.40, 2013.
[38] S. Boyd, N. Parikh, E. Chu, B. Peleato, J. Eckstein, "Distributed optimization and statistical learning via the alternating direction method of multipliers," Foundations and Trends in Machine Learning, vol. 3 no. 1,DOI: 10.1561/2200000016, 2011.
[39] P. Benner, R.-C. Li, N. Truhar, "On the ADI method for Sylvester equations," Journal of Computational and Applied Mathematics, vol. 233 no. 4, pp. 1035-1045, DOI: 10.1016/j.cam.2009.08.108, 2009.
[40] Z. C. Lin, R. S. Liu, Z. X. Su, "Linearized alternating direction method with adaptive penalty for low-rank representation," Proceedings of the 25th Annual Conference on Neural Information Processing Systems, .
[41] Z. Wang, A. C. Bovik, H. R. Sheikh, E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13 no. 4, pp. 600-612, DOI: 10.1109/TIP.2003.819861, 2004.
Copyright © 2018 Wendong Wang and Jianjun Wang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. https://creativecommons.org/licenses/by/4.0/
Abstract
In this paper, we propose a new method to deal with the matrix completion problem. Different from most existing matrix completion methods, which pursue only the low rank of the underlying matrices, the proposed method simultaneously optimizes their low rank and smoothness so that the two priors reinforce each other and yield better performance. In particular, with the introduction of a modified second-order total variation, the proposed method becomes very competitive even against recently emerged matrix completion methods that also combine the low-rank and smoothness priors of matrices. An efficient algorithm is developed to solve the induced optimization problem. Extensive experiments further confirm the superior performance of the proposed method over many state-of-the-art methods.