This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. Introduction
The problem of completing a low-rank and sparse matrix from a subset of its observed entries occurs frequently in many areas of engineering and applied science such as machine learning [1, 2], model reduction [3], compressed sensing [4], control [5], pattern recognition [6], signal and image inpainting [7–10], and computer vision [11]. Since the pioneering work on low-rank approximation by Fazel [12] and on matrix completion by Candès and Recht [13], there has been extensive study (see [1–35] and references therein), both theoretical and algorithmic, of the problem of recovering a low-rank matrix from partial entries, also known as matrix completion, and interest in this problem continues to grow rapidly. Explicitly seeking the lowest rank matrix consistent with the known entries is mathematically expressed as
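As a concrete illustration of this setting (the sizes and sampling ratio here are illustrative choices of ours, not values from the paper), the following sketch builds a random rank-r matrix and applies the sampling projection P_Omega that keeps only the observed entries:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a rank-r test matrix M = ML @ MR.T (illustrative sizes).
n, r = 100, 5
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))

# Observe a random subset Omega of entries, with sampling ratio p (assumed).
p = 0.3
mask = rng.random((n, n)) < p

def P_Omega(X, mask):
    """Projection onto the observed entries: keeps X on Omega, zero elsewhere."""
    return np.where(mask, X, 0.0)

M_obs = P_Omega(M, mask)
```

Matrix completion then asks for the lowest-rank matrix X with P_Omega(X) = P_Omega(M).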
The general problem (1), however, is nonconvex and NP-hard [14] due to the rank objective, and few algorithms solve this model directly. Alternatively, Candès and Recht [13] relaxed (1) to the following convex optimization problem (2):
Furthermore, it has been proved that the sequence
As for the solution of problems (2) and (3), many computationally efficient algorithms have been designed for broad classes of matrices, chiefly the accelerated proximal gradient (APG) algorithm [16], the augmented Lagrange multiplier (ALM) algorithm [17], and several methods [18–21] based on alternating optimization of the bilinear factorization
This paper develops a modification of the SVT algorithm for approximately solving the nuclear norm minimization problem (2). By applying a diagonal-update technique to the approximating sequence at each step, the iterates generated by the new algorithm approximate the true solution well, which saves a significant part of the computational cost of the SVT algorithm. We also establish the convergence theory in detail. Experimental results show that the new algorithm outperforms the standard SVT algorithm and several of its variants, especially on large-scale instances of problem (2).
The rest of the paper is organized as follows. After fixing some notation in this section, we briefly review the standard SVT algorithm and the accelerated singular value thresholding (ASVT) algorithm together with its modification, and we propose a modified SVT algorithm with diagonal-update (D-SVT) in Section 2. In Section 3, the convergence theory of the new algorithm is established. Numerical experiments and comparisons are reported in Section 4. Finally, we end the paper with concluding remarks in Section 5.
We first introduce some notation.
2. Related Algorithms
To facilitate the comparisons that follow, we briefly review some algorithms for solving the matrix completion problem (2).
2.1. The Standard Singular Value Thresholding (SVT) Algorithm
Definition 1.
The singular value decomposition (SVD) of a matrix
Definition 2 (see [15]).
For each
The standard SVT algorithm proposed in [15] solves the convex optimization problem (2).
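The shrinkage operator of Definition 2 is the standard singular value soft-thresholding operator D_tau of [15]: it shifts every singular value down by tau and truncates at zero. A minimal NumPy sketch:

```python
import numpy as np

def svd_shrink(Y, tau):
    """Soft-threshold the singular values of Y at level tau (the operator D_tau of [15])."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)   # shift down by tau, clip at zero
    return (U * s_shrunk) @ Vt
```

Applied to any matrix whose singular values all fall below tau, the operator returns the zero matrix; this is how SVT produces low-rank iterates.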
Algorithm 1: Singular value thresholding (SVT) algorithm, algorithm 1 of [15].
Input: sampled set
Output:
Description: recover a low-rank matrix
(1) Set
(2) Set
(3) for
(4) Set
(5) repeat
(6) Compute
(7) Set
(8) until
(9) Set
(10) Set
(11) if
(12) Set
(13) end
(14) Set
Remark 1.
Due to its ability to produce low-rank solutions via the soft-thresholding operator, the SVT algorithm has been shown to be extremely efficient at problems with low-rank optimal solutions, such as recommender systems.
2.2. The Accelerated Singular Value Thresholding (ASVT) Algorithm
Introduce the Lagrangian function of problem (3) as
In terms of the dual approach,
Problem (10) is solved via Nesterov’s method with an adaptive line search scheme, which yields Algorithm 2.
Algorithm 2: The adaptive line search scheme, algorithm 1 of [22].
Input:
Output:
(1) for
(2) while 1 do
(3) Compute
(4) Compute
(5) Compute
(6) Compute
(7) if
(8) go to Line 13
(9) else
(10)
(11) end if
(12) end while
(13) Set
(14) Set
(15) end for
Based on the above, the accelerated singular value thresholding (ASVT) algorithm was proposed in [22]. Furthermore, Wang et al. [23] presented the Ne-SVT algorithm, which replaces the adaptive line search with Nemirovski’s technique, and the M-ASVT algorithm, which uses the same search technique within the ASVT algorithm. The overall steps of the latter are organized as Algorithm 3.
Algorithm 3: The modified ASVT (M-ASVT) algorithm, algorithm 1 of [23]
Input:
Output:
Description: recover a low-rank matrix
(1) for
(2) Compute
(3) Compute
(4) while 1 do
(5) Compute
(6) if
(7)
(8) else
(9)
(10) end if
(11) end while
(12) Set
(13) end for
It is reported that M-ASVT needs far fewer iterations than the ASVT algorithm at the same level of accuracy and for the same computational cost per iteration.
2.3. The Singular Value Thresholding with Diagonal-Update (D-SVT) Algorithm
We are now in a position to introduce a modified singular value thresholding algorithm using a diagonal-update technique, as shown in Algorithm 4.
Algorithm 4: Singular value thresholding with diagonal-update (D-SVT) algorithm
Input: sampled set
Output:
Description: recover a low-rank matrix
(1) Set
(2) Set
(3) for
(4) Set
(5) repeat
(6) Compute
(7) Set
(8) until
(9) Set
(10) Set
(11) Compute
(12) Set
(13) if
(14) Set
(15) end
(16) Set
Set
Equation (11) is easy to compute: it requires only a few arithmetic operations and no extra cost. In fact, the exact solution of (11) is given by
Remark 2.
The iterates generated by the new algorithm approximate the true solution well, which saves a significant part of the computational cost of the SVT algorithm without adding any real complexity. Algorithm 4 is obtained by plugging a few extra steps into the SVT method; it has three more lines (lines 10–12) than Algorithm 1. The new algorithm includes Algorithm 1 as a special case when
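Since the display of equation (11) did not survive extraction, the sketch below only illustrates the general idea of a closed-form diagonal update: each column of the thresholded iterate is rescaled by the scalar that best fits the observed entries in the least-squares sense. This specific rule is our assumption for illustration, not necessarily the authors' exact update:

```python
import numpy as np

def diagonal_update(X, M_obs, mask):
    """Hypothetical column-wise diagonal weighting (the paper's exact rule (11)
    may differ): for each column j, choose d_j minimizing the least-squares
    misfit of d_j * X[:, j] against the observed entries of column j."""
    D = np.ones(X.shape[1])
    for j in range(X.shape[1]):
        xj = X[mask[:, j], j]          # observed entries of column j of X
        mj = M_obs[mask[:, j], j]      # corresponding observed data
        denom = xj @ xj
        if denom > 0:
            D[j] = (xj @ mj) / denom   # closed-form 1-D least-squares solution
    return X * D                       # equals X @ diag(D)
```

Each weight is a scalar inner-product ratio, so the update costs only simple arithmetic on the observed entries, consistent with the paper's claim that (11) adds no real extra cost.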
3. Convergence Analysis
In this section, we discuss the convergence theory of the singular value thresholding algorithm with diagonal-update.
Theorem 1.
Suppose that
Proof.
It follows from Algorithm 4 that
By the assumption of this theorem,
Let
Hence,
Moreover, it is obtained that
Moreover,
Theorem 2.
Let
Proof.
It is obtained that
Hence, for any feasible matrix
Thus,
Theorem 3.
Suppose that
Proof.
Note that
From Theorem 2, we have
Therefore,
4. Numerical Experiments
In this section, we compare the performance of our D-SVT algorithm with the SVT, ASVT, and M-ASVT algorithms reviewed in Section 2 and report the running time in seconds (denoted by “time (s)”), the number of iterations to convergence (denoted by “IT”), and the relative errors of the reconstruction (denoted by “Error 1” and “Error 2”) as follows:
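The displayed error formulas were lost in extraction; a plausible reading (our assumption) is that Error 1 measures the relative misfit on the sampled set and Error 2 the relative misfit over the full matrix:

```python
import numpy as np

def relative_errors(X, M, mask):
    """Two reconstruction errors (assumed definitions): relative misfit on the
    sampled set Omega, and relative misfit over the full matrix."""
    err_obs = (np.linalg.norm(np.where(mask, X - M, 0.0))
               / np.linalg.norm(np.where(mask, M, 0.0)))
    err_full = np.linalg.norm(X - M) / np.linalg.norm(M)
    return err_obs, err_full
```

Both quantities use the Frobenius norm, matching the 1e-4 tolerances reported in the tables below.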
All the experiments are conducted on the same workstation, with an Intel(R) Core(TM) i7-6700 CPU at 3.40 GHz, 16 GB of memory, and a 64-bit operating system running Windows 7 and MATLAB (version 2016a). For conciseness, the tests consider square matrices, as is typical in such studies. That is, suppose for simplicity that the unknown matrix
In our implementation, we generate
The tested matrix dimensions (denoted by “size (n)”) range from 1,000 to 12,000. The experimental results are shown in Tables 1–6. Our experiments suggest that Algorithm 4 is fast and significantly outperforms the other algorithms in both the number of iterations and the computing time. The new algorithm is especially well suited to problems of very large size.
Table 1
Computational results for small-size problems when
Unknown | Computational results | ||||||
Size ( | Rank (r) | Algorithm | IT | Time (s) | Error 1 | Error 2 | |
20 | 0.20 | SVT | 275 | 101.6901 | 1.9826e − 04 | 3.2515e − 04 | |
ASVT | 267 | 97.7211 | 1.0021e − 04 | 1.8735e − 04 | |||
M-ASVT | 251 | 86.5238 | 1.2541e − 04 | 2.3361e − 04 | |||
D-SVT | 186 | 71.0987 | 1.9793 e − 04 | 3.1915 e − 04 | |||
30 | 0.30 | SVT | 255 | 88.2853 | 1.9663e − 04 | 3.0680e − 04 | |
ASVT | 248 | 83.2158 | 1.0002e − 04 | 2.1714e − 04 | |||
M-ASVT | 229 | 80.2655 | 1.3326e − 04 | 1.9855e − 04 | |||
D-SVT | 170 | 70.0549 | 1.9705 e − 04 | 3.0682 e − 04 | |||
30 | 0.15 | SVT | 271 | 837.7041 | 1.9803e − 04 | 3.2390e − 04 | |
ASVT | 266 | 821.4554 | 1.0204e − 04 | 2.1444e − 04 | |||
M-ASVT | 258 | 799.8985 | 1.2130e − 04 | 2.2225e − 04 | |||
D-SVT | 182 | 675.1372 | 1.9687 e − 04 | 3.1833 e − 04 | |||
40 | 0.20 | SVT | 262 | 783.2010 | 1.9843e − 04 | 3.1685e − 04 | |
ASVT | 248 | 766.1247 | 1.0203e − 04 | 1.5682e − 04 | |||
M-ASVT | 233 | 758.1213 | 1.8860e − 04 | 2.3310e − 04 | |||
D-SVT | 165 | 619.6730 | 1.9892 e − 04 | 3.1476 e − 04 | |||
30 | 0.10 | SVT | 276 | 2758.2 | 1.9699e − 04 | 3.2527e − 04 | |
ASVT | 265 | 2710.2236 | 1.0007e − 04 | 1.9985e − 04 | |||
M-ASVT | 260 | 2681.2874 | 1.5623e − 04 | 2.1414e − 04 | |||
D-SVT | 185 | 2076.1 | 1.9885 e − 04 | 3.2650 e − 04 | |||
40 | 0.13 | SVT | 267 | 2692.5 | 1.9609e − 04 | 3.1899e − 04 | |
ASVT | 261 | 2551.4478 | 1.0014e − 04 | 1.8569e − 04 | |||
M-ASVT | 255 | 2432.1278 | 1.7784e − 04 | 2.3369e − 04 | |||
D-SVT | 174 | 2105.3 | 1.9943 e − 04 | 3.2126 e − 04 | |||
50 | 0.17 | SVT | 259 | 2670.6 | 1.9688e − 04 | 3.1486e − 04 | |
ASVT | 254 | 2641.2587 | 1.0012e − 04 | 2.3331e − 04 | |||
M-ASVT | 239 | 2598.5623 | 1.7456e − 04 | 2.2114e − 04 | |||
D-SVT | 162 | 2092.8 | 1.9611 e − 04 | 3.1200 e − 04 | |||
30 | 0.07 | SVT | 276 | 6793 | 1.9824e − 04 | 3.2954e − 04 | |
ASVT | 270 | 6659 | 1.0040e − 04 | 2.3322e − 04 | |||
M-ASVT | 261 | 6555 | 1.9998e − 04 | 2.2288e − 04 | |||
D-SVT | 186 | 5409.1 | 1.9823 e − 04 | 3.2674 e − 04 | |||
40 | 0.10 | SVT | 271 | 6184.3 | 1.9895e − 04 | 3.2622e − 04 | |
ASVT | 276 | 6155 | 1.1012e − 04 | 2.1170e − 04 | |||
M-ASVT | 269 | 6099 | 1.9963e − 04 | 2.0208e − 04 | |||
D-SVT | 180 | 5289.6 | 1.9689 e − 04 | 3.2015 e − 04 | |||
50 | 0.12 | SVT | 269 | 6296.3 | 1.9889e − 04 | 3.2247e − 04 | |
ASVT | 263 | 6241.2 | 1.0500e − 04 | 1.5589e − 04 | |||
M-ASVT | 260 | 6258.2 | 1.1010e − 04 | 1.6601e − 04 | |||
D-SVT | 186 | 4687.2 | 1.9802 e − 04 | 3.1833 e − 04 |
Table 2
Computational results for small-size problems when
Unknown | Computational results | ||||||
Size ( | Rank (r) | Algorithm | IT | Time (s) | Error 1 | Error 2 |
20 | 0.20 | SVT | 223 | 76.3553 | 1.9830e − 04 | 3.2593e − 04 | |
ASVT | 206 | 71.2289 | 1.0054e − 04 | 1.9965e − 04 | |||
M-ASVT | 194 | 69.5563 | 1.1023e − 04 | 1.9877e − 04 |||
D-SVT | 156 | 66.9487 | 1.9748 e − 04 | 3.2191 e − 04 | |||
30 | 0.30 | SVT | 207 | 70.3992 | 1.9988e − 04 | 3.1406e − 04 | |
ASVT | 200 | 68.2241 | 1.1145e − 04 | 2.1365e − 04 | |||
M-ASVT | 192 | 63.2289 | 1.0089e − 04 | 2.0005e − 04 | |||
D-SVT | 144 | 61.4286 | 1.9333 e − 04 | 3.0025 e − 04 | |||
30 | 0.15 | SVT | 220 | 712.9493 | 1.9956e − 04 | 3.2718e − 04 | |
ASVT | 217 | 704.3365 | 1.0203e − 04 | 2.1475e − 04 | |||
M-ASVT | 211 | 699.3785 | 1.0057e − 04 | 2.0624e − 04 | |||
D-SVT | 152 | 535.2846 | 1.9969 e − 04 | 3.2316 e − 04 | |||
40 | 0.20 | SVT | 212 | 644.3027 | 1.9890e − 04 | 3.1802e − 04 | |
ASVT | 205 | 634.9870 | 1.1110e − 04 | 2.3875e − 04 | |||
M-ASVT | 199 | 618.9952 | 1.0074e − 04 | 2.2271e − 04 | |||
D-SVT | 148 | 522.8293 | 1.9969 e − 04 | 3.1650 e − 04 | |||
30 | 0.10 | SVT | 224 | 2248.9 | 1.9831e − 04 | 3.2891e − 04 | |
ASVT | 218 | 2213.4 | 1.3354e − 04 | 2.6522e − 04 | |||
M-ASVT | 211 | 2168.8 | 1.1532e − 04 | 2.3301e − 04 | |||
D-SVT | 158 | 1824.8 | 1.9718 e − 04 | 3.2220 e − 04 | |||
40 | 0.13 | SVT | 216 | 2222.4 | 1.9616e − 04 | 3.1860e − 04 | |
ASVT | 207 | 2025.3 | 1.1004e − 04 | 2.3110e − 04 | |||
M-ASVT | 201 | 1999.2 | 1.2121e − 04 | 2.5403e − 04 | |||
D-SVT | 152 | 1807.7 | 1.9394 e − 04 | 3.1228 e − 04 | |||
50 | 0.17 | SVT | 211 | 2234.1 | 1.9597e − 04 | 3.1398e − 04 | |
ASVT | 204 | 2106.4 | 1.1140e − 04 | 2.5510e − 04 | |||
M-ASVT | 198 | 1999.4 | 1.0042e − 04 | 2.1030e − 04 | |||
D-SVT | 148 | 1733.6 | 1.9507 e − 04 | 3.1039 e − 04 | |||
30 | 0.07 | SVT | 224 | 5314.0 | 1.9728e − 04 | 3.2816e − 04 | |
ASVT | 218 | 5127.1 | 1.2544e − 04 | 2.3365e − 04 | |||
M-ASVT | 209 | 5001.2 | 1.1004e − 04 | 1.9952e − 04 | |||
D-SVT | 160 | 4324.1 | 1.9500 e − 04 | 3.2075 e − 04 | |||
40 | 0.10 | SVT | 218 | 5209.7 | 1.9845e − 04 | 3.2463e − 04 | |
ASVT | 211 | 5101.0 | 1.1140e − 04 | 2.0407e − 04 | |||
M-ASVT | 205 | 4998.2 | 1.0045e − 04 | 1.8854e − 04 | |||
D-SVT | 155 | 4237.7 | 1.9460 e − 04 | 3.1656 e − 04 | |||
50 | 0.12 | SVT | 215 | 5114.3 | 1.9847e − 04 | 3.2178e − 04 | |
ASVT | 207 | 5003.5 | 1.6652e − 04 | 2.3310e − 04 | |||
M-ASVT | 200 | 4889.2 | 1.1143e − 04 | 2.1113e − 04 | |||
D-SVT | 151 | 4068.9 | 1.9986 e − 04 | 3.2133 e − 04 |
Table 3
Computational results for small-size problems when
Unknown | Computational results | ||||||
Size ( | Rank (r) | Algorithm | IT | Time (s) | Error 1 | Error 2 | |
20 | 0.198 | SVT | 172 | 58.9324 | 1.9708e − 04 | 3.2399e − 04 | |
ASVT | 165 | 55.3211 | 1.2114e − 04 | 2.1140e − 04 | |||
M-ASVT | 160 | 51.3333 | 1.1002e − 04 | 2.0047e − 04 | |||
D-SVT | 119 | 50.8026 | 1.9333 e − 04 | 3.1028 e − 04 | |||
30 | 0.2955 | SVT | 158 | 55.6121 | 1.9651e − 04 | 3.0783e − 04 | |
ASVT | 150 | 51.3361 | 1.2254e − 04 | 2.5512e − 04 | |||
M-ASVT | 143 | 48.2210 | 1.1102e − 04 | 2.0005e − 04 | |||
D-SVT | 110 | 48.3159 | 1.9430 e − 04 | 3.0057 e − 04 | |||
30 | 0.1489 | SVT | 169 | 509.3663 | 1.9555e − 04 | 3.1915e − 04 | |
ASVT | 161 | 500.2369 | 1.2323e − 04 | 1.9985e − 04 | |||
M-ASVT | 156 | 498.2323 | 1.1110e − 04 | 1.8541e − 04 | |||
D-SVT | 117 | 410.1959 | 1.9399 e − 04 | 3.1251 e − 04 | |||
40 | 0.198 | SVT | 163 | 491.4569 | 1.9544e − 04 | 3.1199e − 04 | |
ASVT | 157 | 482.3145 | 1.3324e − 04 | 2.5666e − 04 | |||
M-ASVT | 152 | 469.2221 | 1.1142e − 04 | 2.1104e − 04 | |||
D-SVT | 113 | 399.2143 | 1.9981 e − 04 | 3.1611 e − 04 | |||
30 | 0.0995 | SVT | 173 | 1810.3 | 1.9523e − 04 | 3.2420e − 04 | |
ASVT | 166 | 1796.2 | 1.5562e − 04 | 2.0004e − 04 | |||
M-ASVT | 159 | 1652.8 | 1.1124e − 04 | 1.8989e − 04 | |||
D-SVT | 119 | 1386.7 | 1.9967 e − 04 | 3.2635 e − 04 | |||
40 | 0.1324 | SVT | 166 | 1713.3 | 1.9533e − 04 | 3.1671e − 04 | |
ASVT | 159 | 1695.2 | 1.5252e − 04 | 2.8485e − 04 | |||
M-ASVT | 154 | 1666.3 | 1.2235e − 04 | 2.1103e − 04 | |||
D-SVT | 116 | 1355.3 | 1.9953 e − 04 | 3.2249 e − 04 | |||
50 | 0.1653 | SVT | 162 | 1664.9 | 1.9418e − 04 | 3.1127e − 04 | |
ASVT | 157 | 1662.3 | 1.6523e − 04 | 2.1104e − 04 | |||
M-ASVT | 157 | 1660.2 | 1.0023e − 04 | 2.1001e − 04 | |||
D-SVT | 113 | 1333.3 | 1.9641 e − 04 | 3.1282 e − 04 | |||
30 | 0.0747 | SVT | 174 | 4112.2 | 1.9442e − 04 | 3.2322e − 04 | |
ASVT | 170 | 4000.3 | 1.1115e − 04 | 2.1104e − 04 | |||
M-ASVT | 168 | 3995.2 | 1.0002e − 04 | 1.9952e − 04 | |||
D-SVT | 120 | 3247.9 | 1.9820 e − 04 | 3.2760 e − 04 | |||
40 | 0.0995 | SVT | 168 | 4023.7 | 1.9513e − 04 | 3.1937e − 04 | |
ASVT | 163 | 3998.2 | 1.6674e − 04 | 2.5510e − 04 | |||
M-ASVT | 159 | 3885.1 | 1.3114e − 04 | 2.0047e − 04 | |||
D-SVT | 117 | 3056.1 | 1.9684 e − 04 | 3.1983 e − 04 | |||
50 | 0.1242 | SVT | 165 | 3803.5 | 1.9604e − 04 | 3.1727e − 04 | |
ASVT | 160 | 3685.2 | 1.2246e − 04 | 2.8525e − 04 | |||
M-ASVT | 156 | 3459.2 | 1.1006e − 04 | 2.1139e − 04 | |||
D-SVT | 115 | 3104.3 | 1.9875 e − 04 | 3.1965 e − 04 |
Table 4
Computational results for large-size problems when
Unknown | Computational results | ||||||
Size ( | Rank (r) | Algorithm | IT | Time (s) | Error 1 | Error 2 | |
40 | 0.08 | SVT | 271 | 12334 | 1.9826e − 04 | 3.2645e − 04 | |
ASVT | 266 | 12301 | 1.2225e − 04 | 2.1147e − 04 | |||
M-ASVT | 264 | 12295 | 1.0558e − 04 | 2.0036e − 04 | |||
D-SVT | 190 | 9845.8 | 1.9556 e − 04 | 3.1916 e − 04 | |||
50 | 0.10 | SVT | 265 | 11505 | 1.9898e − 04 | 3.2434e − 04 | |
ASVT | 263 | 11427 | 1.2558e − 04 | 3.0001e − 04 | |||
M-ASVT | 260 | 11582 | 1.1010e − 04 | 2.5563e − 04 | |||
D-SVT | 186 | 9312.2 | 1.9976 e − 04 | 3.2374 e − 04 | |||
60 | 0.12 | SVT | 262 | 11983 | 1.9795e − 04 | 3.1961e − 04 | |
ASVT | 259 | 11886 | 1.5252e − 04 | 2.5474e − 04 | |||
M-ASVT | 256 | 11963 | 1.1023e − 04 | 2.1136e − 04 | |||
D-SVT | 185 | 9656.4 | 1.9581 e − 04 | 3.1450 e − 04 | |||
50 | 0.06 | SVT | 269 | 48699 | 1.9902e − 04 | 3.2707e − 04 | |
ASVT | 263 | 49987 | 1.1298e − 04 | 2.6634e − 04 | |||
M-ASVT | 256 | 50000 | 1.0002e − 04 | 1.8859e − 04 | |||
D-SVT | 191 | 40793 | 1.9714 e − 04 | 3.2171 e − 04 | |||
60 | 0.07 | SVT | 266 | 46717 | 1.9926e − 04 | 3.2572e − 04 | |
ASVT | 259 | 47558 | 1.3320e − 04 | 2.0045e − 04 | |||
M-ASVT | 251 | 48800 | 1.0223e − 04 | 1.8879e − 04 | |||
D-SVT | 189 | 37324 | 1.9591 e − 04 | 3.1842 e − 04 | |||
100 | 0.12 | SVT | 258 | 48230 | 1.9750e − 04 | 3.1624e − 04 | |
ASVT | 254 | 47889 | 1.2220e − 04 | 2.1145e − 04 | |||
M-ASVT | 250 | 47556 | 1.0023e − 04 | 1.9987e − 04 | |||
D-SVT | 183 | 38534 | 1.9695 e − 04 | 3.1436 e − 04 | |||
150 | 0.19 | SVT | 250 | 43758 | 1.9707e − 04 | 3.0923e − 04 | |
ASVT | 248 | 43665 | 1.3365e − 04 | 2.1104e − 04 | |||
M-ASVT | 245 | 43666 | 1.1102e − 04 | 1.9965e − 04 | |||
D-SVT | 177 | 35029 | 1.9794 e − 04 | 3.1001 e − 04 | |||
200 | 0.20 | SVT | 247 | 85840 | 1.9976e − 04 | 3.1183e − 04 | |
ASVT | 241 | 85662 | 1.2003e − 04 | 2.0014e − 04 | |||
M-ASVT | 238 | 85441 | 1.0041e − 04 | 1.9941e − 04 | |||
D-SVT | 176 | 69674 | 1.9795 e − 04 | 3.0853 e − 04 | |||
500 | 0.49 | SVT | 215 | 71656 | 1.9760e − 04 | 2.8117e − 04 | 
ASVT | 211 | 71221 | 1.2258e − 04 | 2.0041e − 04 | |||
M-ASVT | 207 | 71234 | 1.0005e − 04 | 1.9595e − 04 | |||
D-SVT | 155 | 59264 | 1.9531 e − 04 | 2.7785 e − 04 | |||
300 | 0.25 | SVT | 242 | 139650 | 1.9701e − 04 | 3.0300e − 04 | |
ASVT | 239 | 139666 | 1.2254e − 04 | 2.3314e − 04 | |||
M-ASVT | 234 | 139556 | 1.0005e − 04 | 2.0101e − 04 | |||
D-SVT | 173 | 128650 | 1.9664 e − 04 | 3.0193 e − 04 | |||
800 | 0.64 | SVT | 195 | 110750 | 1.9964e − 04 | 2.6726e − 04 | |
ASVT | 191 | 115544 | 1.2023e − 04 | 1.8856e − 04 | |||
M-ASVT | 187 | 115531 | 1.0006e − 04 | 1.5654e − 04 | |||
D-SVT | 141 | 97547 | 1.9785 e − 04 | 2.6492 e − 04 |
Table 5
Computational results for large-size problems when
Unknown | Computational results | ||||||
Size ( | Rank (r) | Algorithm | IT | Time (s) | Error 1 | Error 2 | |
40 | 0.08 | SVT | 219 | 9835.9 | 1.9870e − 04 | 3.2720e − 04 | |
ASVT | 215 | 9755.2 | 1.4451e − 04 | 2.5656e − 04 | |||
M-ASVT | 209 | 9741.3 | 1.0907e − 04 | 2.1036e − 04 | |||
D-SVT | 156 | 8041.4 | 1.9393 e − 04 | 3.1676 e − 04 | |||
50 | 0.10 | SVT | 216 | 9453.7 | 1.9655e − 04 | 3.2005e − 04 | |
ASVT | 211 | 9431.2 | 1.2001e − 04 | 2.5543e − 04 | |||
M-ASVT | 206 | 9388.2 | 1.1007e − 04 | 2.1016e − 04 | |||
D-SVT | 152 | 7509.2 | 1.9991 e − 04 | 2.2336 e − 04 | |||
60 | 0.12 | SVT | 213 | 9383.4 | 1.9786e − 04 | 3.1958e − 04 | |
ASVT | 209 | 9321.0 | 1.3357e − 04 | 2.5151e − 04 | |||
M-ASVT | 201 | 9289.1 | 1.1110e − 04 | 2.1910e − 04 | |||
D-SVT | 151 | 7576.3 | 1.9481 e − 04 | 3.1298 e − 04 | |||
50 | 0.06 | SVT | 220 | 39280 | 1.9652e − 04 | 3.2339e − 04 | |
ASVT | 218 | 38923 | 1.5001e − 04 | 2.4410e − 04 | |||
M-ASVT | 211 | 38009 | 1.0944e − 04 | 2.1106e − 04 | |||
D-SVT | 156 | 31915 | 1.9525 e − 04 | 3.1915 e − 04 | |||
60 | 0.07 | SVT | 217 | 38236 | 1.9566e − 04 | 3.1953e − 04 | |
ASVT | 214 | 38001 | 1.2238e − 04 | 2.3131e − 04 | |||
M-ASVT | 208 | 37996 | 1.1010e − 04 | 2.0045e − 04 | |||
D-SVT | 153 | 30619 | 1.9923 e − 04 | 3.2345 e − 04 | |||
100 | 0.12 | SVT | 209 | 37745 | 1.9771e − 04 | 3.1676e − 04 | |
ASVT | 206 | 37512 | 1.2325e − 04 | 2.0109e − 04 | |||
M-ASVT | 200 | 37008 | 1.0056e − 04 | 1.9221e − 04 | |||
D-SVT | 148 | 30537 | 1.9913 e − 04 | 3.1791 e − 04 | |||
150 | 0.19 | SVT | 203 | 37414 | 1.9488e − 04 | 3.0607e − 04 | |
ASVT | 199 | 37002 | 1.1125e − 04 | 2.0087e − 04 | |||
M-ASVT | 196 | 36912 | 1.0023e − 04 | 1.8989e − 04 | |||
D-SVT | 144 | 29507 | 1.9832 e − 04 | 3.1065 e − 04 | |||
200 | 0.20 | SVT | 201 | 67135 | 1.9474e − 04 | 3.0422e − 04 | |
ASVT | 198 | 68852 | 1.2365e − 04 | 2.5620e − 04 | |||
M-ASVT | 195 | 69592 | 1.0101e − 04 | 1.9962e − 04 | |||
D-SVT | 144 | 56015 | 1.9315 e − 04 | 3.0102 e − 04 | |||
500 | 0.49 | SVT | 174 | 58453 | 1.9652e − 04 | 2.7972e − 04 | |
ASVT | 171 | 60033 | 1.2020e − 04 | 1.9963e − 04 | |||
M-ASVT | 167 | 60114 | 1.0023e − 04 | 1.5651e − 04 | |||
D-SVT | 126 | 48758 | 1.9709 e − 04 | 2.8051 e − 04 | |||
300 | 0.25 | SVT | 196 | 11284 | 1.9603e − 04 | 3.0148e − 04 | |
ASVT | - | - | - | - | |||
M-ASVT | - | - | - | - | |||
D-SVT | 131 | 10375 | 1.9734 e − 04 | 3.0320 e − 04 | |||
800 | 0.64 | SVT | 158 | 88478 | 1.9661e − 04 | 2.6335e − 04 | |
ASVT | - | - | - | - | |||
M-ASVT | - | - | - | - | |||
D-SVT | 115 | 78069 | 1.9509 e − 04 | 2.6139 e − 04 |
Note. The mark “-” indicates that the iteration fails to converge.
Table 6
Computational results for large-size problems when
Unknown | Computational results | ||||||
Size ( | Rank (r) | Algorithm | IT | Time (s) | Error 1 | Error 2 | |
40 | 0.0797 | SVT | 169 | 7838.0 | 1.9546e − 04 | 3.2133e − 04 | |
ASVT | 161 | 7775.1 | 1.2230e − 04 | 2.3232e − 04 | |||
M-ASVT | 156 | 7701.2 | 1.0023e − 04 | 2.0045e − 04 | |||
D-SVT | 118 | 6406.5 | 1.9422 e − 04 | 3.1763 e − 04 | |||
50 | 0.0995 | SVT | 166 | 7343.0 | 1.9364e − 04 | 3.1582e − 04 | |
ASVT | 160 | 7265.2 | 1.2325e − 04 | 2.4789e − 04 | |||
M-ASVT | 155 | 7189.2 | 1.0089e − 04 | 2.1004e − 04 | |||
D-SVT | 116 | 5780.4 | 1.9280 e − 04 | 3.1234 e − 04 | |||
60 | 0.1193 | SVT | 164 | 7485.6 | 1.9450e − 04 | 3.1432e − 04 | |
ASVT | 160 | 7401.2 | 1.5002e − 04 | 2.3004e − 04 | |||
M-ASVT | 154 | 7356.3 | 1.0980e − 04 | 2.0152e − 04 | |||
D-SVT | 115 | 5833.1 | 1.9562 e − 04 | 3.1424 e − 04 | |||
50 | 0.0623 | SVT | 168 | 31774 | 1.9968e − 04 | 3.2825e − 04 | |
ASVT | 165 | 32589 | 1.2223e − 04 | 2.0058e − 04 | |||
M-ASVT | 160 | 32441 | 1.0041e − 04 | 1.8856e − 04 | |||
D-SVT | 118 | 24511 | 1.9267 e − 04 | 3.1490 e − 04 | |||
60 | 0.0747 | SVT | 166 | 29378 | 1.9945e − 04 | 3.2418e − 04 | |
ASVT | 161 | 29654 | 1.2412e − 04 | 2.0306e − 04 | |||
M-ASVT | 157 | 30001 | 1.0002e − 04 | 1.5859e − 04 | |||
D-SVT | 117 | 23791 | 1.9287 e − 04 | 3.1360 e − 04 | |||
100 | 0.1242 | SVT | 160 | 30315 | 1.9888e − 04 | 3.1854e − 04 | |
ASVT | 156 | 31256 | 1.2004e − 04 | 2.1113e − 04 | |||
M-ASVT | 149 | 33000 | 1.0100e − 04 | 1.8588e − 04 | |||
D-SVT | 115 | 23863 | 1.9124 e − 04 | 3.0560 e − 04 | |||
150 | 0.1857 | SVT | 155 | 27204 | 1.9643e − 04 | 3.0842e − 04 | |
ASVT | 150 | 28693 | 1.2203e − 04 | 2.3334e − 04 | |||
M-ASVT | 144 | 29003 | 1.0045e − 04 | 2.1414e − 04 |||
D-SVT | 112 | 22002 | 1.9250 e − 04 | 3.0152 e − 04 | |||
200 | 0.1980 | SVT | 154 | 52658 | 1.9363e − 04 | 3.0252e − 04 | |
ASVT | 153 | 53321 | 1.1414e − 04 | 2.3006e − 04 | |||
M-ASVT | 150 | 53662 | 1.0051e − 04 | 2.0001e − 04 | |||
D-SVT | 111 | 44680 | 1.9978 e − 04 | 3.1153 e − 04 | |||
500 | 0.1980 | SVT | 133 | 44718 | 1.9555e − 04 | 2.7855e − 04 | |
ASVT | 131 | 45512 | 1.3224e − 04 | 2.5101e − 04 | |||
M-ASVT | 126 | 45510 | 1.0021e − 04 | 1.5880e − 04 | |||
D-SVT | 98 | 38655 | 1.9057 e − 04 | 2.7139 e − 04 | |||
300 | 0.2469 | SVT | 150 | 86110 | 1.9587e − 04 | 3.0132e − 04 | |
ASVT | - | - | - | - | |||
M-ASVT | - | - | - | - | |||
D-SVT | 109 | 81902 | 1.9999 e − 04 | 3.0734 e − 04 | |||
800 | 0.6444 | SVT | 121 | 68906 | 1.9236e − 04 | 2.5786e − 04 | |
ASVT | - | - | - | - | |||
M-ASVT | - | - | - | - | |||
D-SVT | 89 | 63517 | 1.9120 e − 04 | 2.5640 e − 04 |
Note. The mark “-” indicates that the iteration fails to converge.
To illustrate the convergence behavior of the algorithms, convergence curves for several of them are plotted in Figure 1 for different parameters. It is easy to see that our algorithm incurs much less computational cost, in both iteration count and computing time. That is, the D-SVT algorithm is more efficient than the other algorithms, especially when the size of matrix
[figures omitted; refer to PDF]
5. Concluding Remarks
In this paper, we focus on the problem of completing a low-rank matrix from a small subset of its entries. This model characterizes many applications in signal and image processing, statistical inference, and machine learning. We have proposed a modification of the SVT algorithm for solving the low-rank matrix sparse model. The key step of the algorithm is to update each iterate by a weighting diagonal matrix, at no extra cost. The weighting matrix
Authors’ Contributions
All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
Acknowledgments
This work was supported by the NSF of China (11371275), NSF of Shanxi Province (201901D211423), STIP of Shanxi Provincial Department of Education (2020L0719), and the CSREP in Shanxi (no. 2019KJ035).
[1] Y. Amit, M. Fink, N. Srebro, S. Ullman, "Uncovering shared structures in multiclass classification," Proceedings of the 24th International Conference on Machine Learning (ICML), pp. 17-24, 2007.
[2] A. Argyriou, T. Evgeniou, M. Pontil, "Multi-task feature learning," Advances in Neural Information Processing Systems, vol. 19, pp. 41-48, 2007.
[3] Z. Liu, L. Vandenberghe, "Interior-point method for nuclear norm approximation with application to system identification," SIAM Journal on Matrix Analysis and Applications, vol. 31, pp. 1235-1256, 2009.
[4] J.-F. Cai, T. Wang, K. Wei, "Spectral compressed sensing via projected gradient descent," SIAM Journal on Optimization, vol. 28 no. 3, pp. 2625-2653, DOI: 10.1137/17m1141394, 2018.
[5] M. Mesbahi, G. P. Papavassilopoulos, "On the rank minimization problem over a positive semidefinite linear matrix inequality," Institute of Electrical and Electronics Engineers Transactions on Automatic Control, vol. 42 no. 2, pp. 239-243, DOI: 10.1109/9.554402, 1997.
[6] L. Eldén, Matrix Methods in Data Mining and Pattern Recognition, 2007.
[7] M. Bertalmio, G. Sapiro, V. Caselles, C. Ballester, "Image inpainting," Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH), pp. 417-424, 2000.
[8] J.-F. Cai, S. Liu, W. Xu, "A fast algorithm for reconstruction of spectrally sparse signals in super-resolution," .
[9] J.-F. Cai, X. Qu, W. Xu, G.-B. Ye, "Robust recovery of complex exponential signals from random Gaussian projections via low rank Hankel matrix reconstruction," Applied and Computational Harmonic Analysis, vol. 41 no. 2, pp. 470-490, DOI: 10.1016/j.acha.2016.02.003, 2016.
[10] J.-F. Cai, T. Wang, K. Wei, "Fast and provable algorithms for spectrally sparse signal reconstruction via low-rank Hankel matrix completion," Applied and Computational Harmonic Analysis, vol. 46 no. 1, pp. 94-121, DOI: 10.1016/j.acha.2017.04.004, 2019.
[11] C. Tomasi, T. Kanade, "Shape and motion from image streams under orthography: a factorization method," International Journal of Computer Vision, vol. 9 no. 2, pp. 137-154, DOI: 10.1007/bf00129684, 1992.
[12] M. Fazel, Matrix Rank Minimization with Applications, Ph.D. dissertation, Stanford University, 2002.
[13] E. J. Candès, B. Recht, "Exact matrix completion via convex optimization," Foundations of Computational Mathematics, vol. 9 no. 6, pp. 717-772, DOI: 10.1007/s10208-009-9045-5, 2009.
[14] N. J. Harvey, D. R. Karger, S. Yekhanin, "The complexity of matrix completion," Proceedings of the Seventeenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA, pp. 1103-1111, .
[15] J.-F. Cai, E. J. Candès, Z. Shen, "A singular value thresholding algorithm for matrix completion," SIAM Journal on Optimization, vol. 20 no. 4, pp. 1956-1982, DOI: 10.1137/080738970, 2010.
[16] K. C. Toh, S. Yun, "An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems," Pacific Journal of Optimization, vol. 6, pp. 615-640, 2010.
[17] Z. Lin, M. Chen, L. Wu, Y. Ma, The Augmented Lagrange Multiplier Method for Exact Recovery of Corrupted Low-Rank Matrices, 2010.
[18] C. Chen, B. He, X. Yuan, "Matrix completion via an alternating direction method," IMA Journal of Numerical Analysis, vol. 32 no. 1, pp. 227-245, DOI: 10.1093/imanum/drq039, 2012.
[19] P. Jain, P. Netrapalli, S. Sanghavi, "Low-rank matrix completion using alternating minimization," Proceedings of the 45th Annual ACM Symposium on Theory of Computing (STOC), pp. 665-674, .
[20] J. Tanner, K. Wei, "Low rank matrix completion by alternating steepest descent methods," Applied and Computational Harmonic Analysis, vol. 40 no. 2, pp. 417-429, DOI: 10.1016/j.acha.2015.08.003, 2016.
[21] R.-P. Wen, L.-X. Liu, "The two-stage iteration algorithms based on the shortest distance for low-rank matrix completion," Applied Mathematics and Computation, vol. 314, pp. 133-141, DOI: 10.1016/j.amc.2017.07.024, 2017.
[22] Y. Hu, D. B. Zhang, J. Liu, J. P. Ye, X. F. He, "Accelerated singular value thresholding for matrix completion," Proceedings of the Eighteenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), 2012.
[23] L. Wang, J. Hu, C. Chen, "On accelerated singular value thresholding algorithm for matrix completion," Applied Mathematics, vol. 05 no. 21, pp. 3445-3451, DOI: 10.4236/am.2014.521322, 2014.
[24] R.-P. Wen, X.-H. Yan, "A new gradient projection method for matrix completion," Applied Mathematics and Computation, vol. 258, pp. 537-544, DOI: 10.1016/j.amc.2015.02.041, 2015.
[25] T. H. Oh, Y. Matsushita, Y. W. Tai, I. S. Kweon, "Fast randomized singular value thresholding for low-rank optimization," Institute of Electrical and Electronics Engineers Transactions on Pattern Analysis and Machine Intelligence, vol. 99, pp. 376-391, 2015.
[26] Y. P. Song, J. A. Westerhuis, A. K. Smilde, "Logistic Principal Component Analysis via Non-convex Singular Value Thresholding," 2019. https://arxiv.org/abs/1902.09486
[27] S. F. Yeganli, R. Yu, "Image inpainting via singular value thresholding," Proceedings of the IEEE Signal Processing and Communications Applications Conference.
[28] E. J. Candès, C. A. Sing-Long, J. D. Trzasko, "Unbiased risk estimates for singular value thresholding and spectral estimators," Institute of Electrical and Electronics Engineers Transactions on Signal Processing, vol. 61 no. 19, pp. 4643-4657, DOI: 10.1109/tsp.2013.2270464, 2013.
[29] S. Chatterjee, "Matrix estimation by universal singular value thresholding," The Annals of Statistics, vol. 43 no. 1, pp. 177-214, DOI: 10.1214/14-aos1272, 2015.
[30] D. Donoho, M. Gavish, "Minimax risk of matrix denoising by singular value thresholding," The Annals of Statistics, vol. 42 no. 6, pp. 2413-2440, DOI: 10.1214/14-aos1257, 2014.
[31] A. Dutta, B. Gong, X. Li, M. Shah, "Weighted singular value thresholding and its application to background estimation," Journal of Machine Learning Research, 2017.
[32] O. Klopp, "Matrix completion by singular value thresholding: sharp bounds," Electronic Journal of Statistics, vol. 9 no. 2, pp. 2348-2369, DOI: 10.1214/15-ejs1076, 2015.
[33] L. Ma, Y. Xu, "Received signal strength recovery in green WLAN indoor positioning system using singular value thresholding," Sensors, vol. 15 no. 1, pp. 1292-1311, DOI: 10.3390/s150101292, 2015.
[34] H. Zhang, L. Z. Cheng, W. Zhu, "A lower bound guaranteeing exact matrix completion via singular value thresholding algorithm," Applied and Computational Harmonic Analysis, vol. 31 no. 3, pp. 454-459, DOI: 10.1016/j.acha.2011.04.004, 2011.
[35] X. Q. Zhang, Z. Y. Zhou, D. Wang, Y. Ma, "Hybrid singular value thresholding for tensor completion," Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, pp. 1362-1368, .
Copyright © 2020 Yong-Hong Duan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. https://creativecommons.org/licenses/by/4.0/
Abstract
The singular value thresholding (SVT) algorithm plays an important role in the well-known matrix reconstruction problem and has many applications in computer vision and recommendation systems. In this paper, an SVT with diagonal-update (D-SVT) algorithm is put forward, which uses only simple arithmetic operations, keeps the computational cost of each iteration low, and reconstructs the low-rank matrix well. The convergence of the new algorithm is discussed in detail. Finally, numerical experiments show the effectiveness of the new algorithm for low-rank matrix completion.