About the Authors:
Jian Liu
Affiliations: School of Information Science and Engineering, Qufu Normal University, Rizhao, 276826, Shandong, China; School of Information and Electrical Engineering, China University of Mining and Technology, Xuzhou, 221000, Jiangsu, China
Jin-Xing Liu
* E-mail: [email protected]
Affiliations: School of Information Science and Engineering, Qufu Normal University, Rizhao, 276826, Shandong, China; Bio-Computing Research Center, Shenzhen Graduate School, Harbin Institute of Technology, Shenzhen, 518055, Guangdong, China
Ying-Lian Gao
Affiliation: Library of Qufu Normal University, Qufu Normal University, Rizhao, 276826, Shandong, China
Xiang-Zhen Kong
Affiliation: School of Information Science and Engineering, Qufu Normal University, Rizhao, 276826, Shandong, China
Xue-Song Wang
Affiliations: School of Information and Electrical Engineering, China University of Mining and Technology, Xuzhou, 221000, Jiangsu, China; The Key Laboratory of Complex Systems and Intelligence Science, Institute of Automation, Chinese Academy of Sciences, Beijing, 100000, China
Dong Wang
Affiliation: School of Information Science and Engineering, Qufu Normal University, Rizhao, 276826, Shandong, China
Introduction
With the development of DNA microarray technology, biologists can now monitor the expression of thousands of genes simultaneously [1, 2]. Moreover, these genes can be detected more comprehensively than ever before. A great challenge in current bioinformatics is to interpret microarray gene expression data so as to gain insight into biological processes. A large number of studies have been reported on identifying characteristic genes from gene expression data, and feature extraction is a typical application in this area.
A prominent feature of gene expression data is that the number of samples is far smaller than the number of genes. Generally speaking, in each experiment, gene expression data contain thousands or even more than 10,000 genes, while the number of samples is usually less than 100. Statistically, this is known as the small-sample-size problem, which causes many feature extraction methods to lose effectiveness. The number of genes in expression data is so large that analyzing the data directly is quite difficult. Fortunately, compared with the total number of genes, only a small number of genes regulate a given biological process. These few genes associated with a specific biological process are called differentially expressed genes. Consequently, differentially expressed genes have attracted more and more attention from biologists, and it is particularly important to discover them effectively.
Up to now, various feature extraction methods have been proposed for recognizing differentially expressed genes, i.e., for finding a group of genes relevant to a biological process from gene expression data. For example, Liu et al. selected characteristic genes by weighting principal components with singular values [3]; Ma et al. identified differential gene pathways via principal component analysis [4]; and Zheng et al. selected feature genes using nonnegative matrix factorization and sparse nonnegative matrix factorization [5]. Many extraction methods, especially sparse ones, rely on matrix norms, and different methods use different norms. The L0-norm and L1-norm are the most commonly used: for the sparse principal component analysis (SPCA) method, Journée et al. adopted an L0-norm penalty to analyze gene expression data [6]; for the penalized matrix decomposition (PMD) method [7], which was used to extract differentially expressed genes of plants responding to abiotic stress [8], the L1-norm was taken as the penalty function. These methods have been successfully applied to gene expression data with high identification accuracies [9]. However, they are not robust to severely damaged observations in gene expression data, which often renders them invalid.
Recently, in the field of matrix completion, Nie et al. proposed a novel method, the joint Schatten p-norm and Lp-norm robust matrix completion method, for missing value recovery [10]. Matrix completion methods typically assume that the values in the data matrix are correlated and that the rank of the matrix is (approximately) low. The missing values can then be recovered from the observed values by minimizing the rank of the matrix. To this end, the trace norm has been minimized as a convex relaxation of the rank function [11–13]. Meanwhile, Mazumder et al. minimized the prediction errors on the observed values using the squared error function [11]. Nevertheless, even though trace norm minimization is a convex problem with a global solution, it may make the solution deviate seriously from the original one. To obtain a better approximation of the rank problem, the Schatten p-norm (0 ≤ pS ≤ 1) is used to reformulate the objective; furthermore, the Lp-norm (0 < pL ≤ 1) is taken as the error function to improve the robustness of matrix recovery methods [10].
This method has been successfully applied to recover data matrices in [10]; however, whether the Schatten p-norm and Lp-norm are effective for gene expression data analysis remains to be assessed. According to [14], gene expression data always lie near low-dimensional subspaces, from which it is easy to infer that the non-differentially expressed genes form an approximately low-rank matrix. Therefore, the Schatten p-norm can be applied to the analysis of gene expression data as well. As mentioned above, matrix norms have been widely used to identify differentially expressed genes, so the Lp-norm, as one particular norm, can serve as the penalty function when processing gene expression data.
In this paper, based on the Schatten p-norm (0 ≤ pS ≤ 1) and Lp-norm (0 < pL ≤ 1), a novel method named p-norm Robust Feature Extraction (PRFE) is put forward for identifying differentially expressed genes. In our method, we denote the gene expression data as the observed matrix X. To obtain the eigensamples that contain the characteristic structure of the gene expression data, the matrix X is decomposed into W (the product of U and D) and VT by using SVD, where W is the collection of all the eigensamples [8, 15]. That is to say, the critical information about differentially expressed genes can be captured by the matrix W. Therefore, the optimization problem for X is converted into an optimization problem for W. We take the Lp-norm as the error function to improve the robustness of W, and the Schatten p-norm as the regularization function to make W a low-rank matrix, which addresses the small-sample-size problem in gene expression data. Eventually, the differentially expressed genes can be identified according to the optimized W. A brief outline of PRFE is as follows. Firstly, the gene expression data matrix X is decomposed into two matrices W (the product of U and D) and VT by using SVD. Secondly, the Lp-norm is applied in the optimization problem min_W ‖W − XV‖_pL^pL, and the Schatten p-norm ‖W‖_SpS^pS is used to approximate the rank of W. Thirdly, the differentially expressed genes are identified according to the optimized matrix W. Finally, the identified genes are appraised using the Gene Ontology tool.
To evaluate the validity of our method, both simulation data and real gene expression data sets are processed by the PRFE method in the experiments. Compared with the PMD and SPCA methods, all empirical results show that the novel method outperforms the competing methods in identifying differentially expressed genes.
In summary, the main contributions of this paper are as follows:
1. - On one hand, based on the Schatten p-norm and Lp-norm, we propose for the first time a novel idea and method, PRFE, for identifying differentially expressed genes.
2. - On the other hand, extensive experiments on gene identification are conducted.
The remainder of the paper is structured as follows. Section 2 presents the methodology of PRFE and then introduces how to identify differentially expressed genes using PRFE. The experimental results on simulation data and real gene expression data sets are presented in Section 3. Section 4 concludes the paper.
Methodology
2.1 Definitions of Lp-norm and Schatten p-norm
For an m×n matrix W, the Lp-norm (0 < pL < ∞) raised to the power pL can be defined as (1) ‖W‖_pL^pL = Σ_{i=1}^{m} Σ_{j=1}^{n} |wij|^pL, where wij is the element in the i-th row and j-th column of W.
The extended Schatten p-norm (0 < pS < ∞) of the matrix W raised to the power pS can be written as (2) ‖W‖_SpS^pS = Σ_{i=1}^{min(m,n)} σi^pS, where σi is the i-th singular value of W. When pS = 1, the Schatten 1-norm is also known as the nuclear norm or trace norm, usually written ‖W‖*. When pS = 0, if we define 0^0 = 0, Eq 2 gives the rank of W [10].
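As a concrete illustration, the two norms of Eqs 1 and 2 can be computed directly with NumPy. This is a sketch rather than part of the original method; in particular, the tolerance used to count nonzero singular values in the pS = 0 case is our own choice.

```python
import numpy as np

def lp_norm_pow(W, p):
    """Entrywise Lp-norm of W raised to the power p (Eq 1): sum_ij |w_ij|^p."""
    return np.sum(np.abs(W) ** p)

def schatten_p_norm_pow(W, p):
    """Schatten p-norm of W raised to the power p (Eq 2): sum_i sigma_i^p.
    With p = 1 this is the nuclear (trace) norm; with p = 0 and the convention
    0^0 = 0 it counts nonzero singular values, i.e. it equals rank(W)."""
    sigma = np.linalg.svd(W, compute_uv=False)
    if p == 0:
        return np.count_nonzero(sigma > 1e-10)   # assumed numerical tolerance
    return np.sum(sigma ** p)
```

For example, a rank-one matrix has Schatten 0-"norm" 1, while its Schatten 1-norm equals its single nonzero singular value.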
2.2 The definition of PRFE
Denote by X an m×n matrix, where each row of X represents the expression levels of one gene across the n samples, and each column represents the expression levels of all m genes in one sample. As mentioned above, in gene expression research the gene number m is much larger than the sample number n. The PRFE method decomposes the matrix X into two matrices W (the product of U and D) and VT by using SVD: (3) X = WVT, where W is an m×K matrix and VT is a K×n matrix with VTV = IK. The general feature extraction minimization problem [7, 8] is defined as follows: (4) min_{W,V} ‖X − WVT‖_F^2, where ‖•‖F is the Frobenius norm. The differentially expressed genes are usually identified according to W [8, 15], so Eq 4 can easily be converted to the following form: (5) min_W ‖W − XV‖_F^2, which makes it more convenient to optimize W. To improve the robustness to outliers in gene expression data, we use the Lp-norm (0 < pL ≤ 1) to obtain an optimized W: (6) min_W ‖W − XV‖_pL^pL.
When pS → 0, the Schatten p-norm approximates the rank of W more closely than the trace norm ‖W‖* [16]; hence, we replace ‖W‖* with the Schatten p-norm (0 ≤ pS ≤ 1). Finally, the PRFE method solves the following feature extraction problem: (7) min_W ‖W − XV‖_pL^pL + λ‖W‖_SpS^pS, where λ is the regularization parameter.
2.3 Solving the PRFE problem
Eq 7 is intractable since both terms are non-smooth. Therefore, the Augmented Lagrangian Multiplier (AML) method [17–19] is adopted to solve Eq 7. In this subsection, we first introduce the AML method briefly.
For a matrix A, the constrained optimization problem can be written as(8)
Suppose that the matrix B satisfies the condition B = A; then the AML algorithm to solve Eq 8 is described as follows:
Algorithm 1. AML algorithm to solve Eq 8
Set 1 < η < 2. Initialize Ω and φ > 0.
while not converge do
Update A by
Update Ω by Ω = Ω + B − A
Update φ by φ = ηφ
end while
For notational convenience, in Eq 7 we replace W − XV with C and W with D. According to the AML algorithm, Eq 7 can be rewritten as follows: (9) min over W, C, D of ‖C‖_pL^pL + λ‖D‖_SpS^pS + (φ/2)‖C − W + XV + Ω/φ‖_F^2 + (φ/2)‖W − D + Ψ/φ‖_F^2, where Ω and Ψ are the Lagrange multipliers and φ > 0 is the penalty parameter.
In Eq 9 there are three variables, W, C and D, which makes the formula difficult to solve directly. The alternating direction method [20] can be utilized to handle this problem: Eq 9 is optimized over one variable at a time while the remaining two are fixed. In this way, three new but solvable subproblems arise.
Problem 1: With W and D fixed, Eq 9 reduces to the following form: (10) min_C ‖C‖_pL^pL + (φ/2)‖C − W + XV + Ω/φ‖_F^2.
In this case, each element of W − XV − Ω/φ can be denoted as a constant e. Note that the elements of C decouple, so for each element only the following problem needs to be solved: (11) min_w τ|w|^pL + (1/2)(w − e)^2, where τ denotes 1/φ. We then denote the objective function in Eq 11 by f(w): (12) f(w) = τ|w|^pL + (1/2)(w − e)^2.
In Eq 12 there is only one variable w, and the convexity of the function can easily be analyzed. Since f(w) is not differentiable at w = 0, we only consider the case w ≠ 0 in the following analysis, and then compare the minimal solution of f(w) (w ≠ 0) with f(0) to obtain the optimal solution. When w ≠ 0, the first, second and third derivatives of f(w) are as follows: (13) f′(w) = τ pL sgn(w)|w|^(pL−1) + w − e, (14) f″(w) = τ pL(pL − 1)|w|^(pL−2) + 1, (15) f‴(w) = τ pL(pL − 1)(pL − 2)sgn(w)|w|^(pL−3), where sgn(w) = 1 if w > 0 and sgn(w) = −1 if w < 0. The local minimum of f(w) can be obtained by finding the root of f′(w) = 0, so we first analyze f′(w). According to Eq 15, f′(w) is convex for w > 0 and concave for w < 0. To find the extrema of f′(w), we set f″(w) = 0 and obtain the solution: (16) |w| = (τ pL(1 − pL))^(1/(2−pL)).
Let a (a > 0) denote this solution, so that f″(a) = 0 and f″(−a) = 0. Therefore, f′(w) attains its maximum f′(−a) on w < 0 and its minimum f′(a) on w > 0. There are three cases for minimizing f(w):
(a) f′(a) ≥ 0 and f′(−a) ≤ 0
In this case, f′(w) ≤ 0 when w < 0 and f′(w) ≥ 0 when w > 0, so the minimal solution to f(w) is w = 0.
(b) f′(−a) > 0
In this case, according to Eq 13, f′(a) > 0, so f′(w) > 0 when w > 0. When w < 0, f′(w) = 0 has two roots, which indicates that f(w) is convex for w < −a and concave for −a ≤ w < 0. So the minimal solution of f(w) is the root of f′(w) = 0 with e < w < −a.
(c) f′(a) < 0
In this case, according to Eq 13, f′(−a) < 0, so f′(w) < 0 when w < 0. When w > 0, f′(w) = 0 has two roots, which indicates that f(w) is convex for w > a and concave for 0 ≤ w < a. So the minimal solution of f(w) is the root of f′(w) = 0 with a < w < e.
In summary, Eq 11 can be optimized by (17) w = 0 in case (a), w = w1 in case (b), and w = w2 in case (c), where w1 ∈ (e, −a) and w2 ∈ (a, e) are the roots of f′(w) = 0. The roots can be obtained by the Newton method initialized at e [10].
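The Newton-based root finding just described can be sketched as follows, assuming the scalar objective takes the form f(w) = τ|w|^pL + ½(w − e)², consistent with the case analysis above. The sign-flip and curvature checks are our practical stand-ins for detecting case (a), where no valid root exists and w = 0 is optimal.

```python
import numpy as np

def prox_lp(e, tau, p, iters=50):
    """Approximate minimizer of f(w) = tau*|w|^p + 0.5*(w - e)^2, 0 < p <= 1.
    Newton's method on f'(w) = 0 is started at e, and the root is compared
    against w = 0, as in the three-case analysis."""
    if p == 1.0:                       # closed form: soft thresholding
        return np.sign(e) * max(abs(e) - tau, 0.0)
    if e == 0.0:
        return 0.0
    w = e
    for _ in range(iters):
        g = tau * p * np.sign(w) * abs(w) ** (p - 1) + (w - e)   # f'(w)
        h = tau * p * (p - 1) * abs(w) ** (p - 2) + 1.0          # f''(w)
        if h <= 0:                     # left the convex region: no usable root
            return 0.0
        w_new = w - g / h
        if np.sign(w_new) != np.sign(e):   # root must stay on the side of e
            return 0.0
        w = w_new
    f_root = tau * abs(w) ** p + 0.5 * (w - e) ** 2
    return w if f_root < 0.5 * e ** 2 else 0.0     # compare with f(0)
```

For p = 1 this reduces to soft thresholding; for 0 < p < 1 and large |e| the solution stays close to e, while small inputs are thresholded to zero.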
Problem 2: With W and C fixed, Eq 9 can be written as: (18) min_D λ‖D‖_SpS^pS + (φ/2)‖W − D + Ψ/φ‖_F^2.
In this case, W + Ψ/φ can be denoted as E. For simplicity, Eq 18 can be rewritten as follows: (19) min_D ρ‖D‖_SpS^pS + (1/2)‖D − E‖_F^2, where ρ denotes λ/φ. Suppose D and E are decomposed as UΔVT and QΣRT, respectively, where Δ and Σ are the singular value matrices. Then Eq 19 can be written as (20)
To obtain the solution of Eq 20, we first introduce the following theorem: for any two matrices A and B of the same size, tr(ATB) ≤ tr(σ(A)T σ(B)), where σ(A) and σ(B) are the singular value matrices of A and B, respectively. According to the theorem, we have the following formula(21)
When U = Q and VT = RT, the equality holds in Eq 21, so the optimal problem in Eq 20 can be converted as the following form(22)
Suppose σi and δi are the i-th singular values of D and E, respectively; then Eq 22 can be written, for each i, as (23) min over σi of ρ σi^pS + (1/2)(σi − δi)^2.
The form of Eq 23 is the same as that of Eq 11, so the optimal solution to Eq 23 can be obtained in the same way as that of Eq 11.
Problem 3: With C and D fixed, Eq 9 can be written as: (24) min_W (φ/2)‖C − W + XV + Ω/φ‖_F^2 + (φ/2)‖W − D + Ψ/φ‖_F^2.
Denote G = C + XV + Ω/φ and H = D − Ψ/φ; then Eq 24 can be simplified to the following form: (25) min_W ‖W − G‖_F^2 + ‖W − H‖_F^2.
The problem in Eq 25 is equivalent to minimizing a quadratic function, and it is easy to obtain the solution: (26) W = (1/2)(C + XV + Ω/φ + D − Ψ/φ).
In summary, the brief algorithm of PRFE is shown as follows.
Algorithm 2. PRFE method
Input: Data matrix: X
Schatten p-norm value: pS
Lp-norm value: pL
Regularization parameter: λ
Output: Optimized matrix W*
The data matrix X is decomposed into W and VT by SVD, where W is the product of U and D.
W ⇐ XV.
Solve the problem in Eq 9 by the AML method.
Set 1 < η < 2. Initialize C, D, Ω, Ψ and φ > 0.
while not converge do
Update C by the optimal solution to Eq 10
Update D by the optimal solution to Eq 18
Update W by Eq 26
Update Ω by Ω = Ω + φ(C – W + XV)
Update Ψ by Ψ = Ψ + φ(W − D)
Update φ by φ = ηφ
end while
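Algorithm 2 can be sketched as runnable code for the convex corner case pL = 1 and pS = 1, where both subproblems have closed-form proximal solutions (soft thresholding and singular value thresholding); the general 0 < p < 1 case would call the Newton-based scalar solver instead. The splitting C = W − XV, D = W and the multiplier updates follow the text, but the exact update formulas are our reconstruction, not a verbatim transcription of the paper's equations.

```python
import numpy as np

def soft(M, t):
    """Entrywise soft thresholding: proximal operator of t*||.||_1 (pL = 1)."""
    return np.sign(M) * np.maximum(np.abs(M) - t, 0.0)

def svt(M, t):
    """Singular value thresholding: proximal operator of t*||.||_* (pS = 1)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

def prfe(X, K=2, lam=1.0, eta=1.2, phi=1.0, iters=100):
    """Sketch of Algorithm 2 for the convex case pL = 1, pS = 1.
    Splitting: C = W - XV and D = W, as described in the text."""
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    V = Vt[:K].T                        # n x K, top-K right singular vectors
    XV = X @ V                          # equals W = UD from the truncated SVD
    W = XV.copy()
    C = np.zeros_like(W); D = W.copy()
    Omega = np.zeros_like(W); Psi = np.zeros_like(W)
    for _ in range(iters):
        C = soft(W - XV - Omega / phi, 1.0 / phi)           # Problem 1
        D = svt(W + Psi / phi, lam / phi)                   # Problem 2
        W = 0.5 * (C + XV + Omega / phi + D - Psi / phi)    # Problem 3
        Omega = Omega + phi * (C - W + XV)                  # multiplier updates
        Psi = Psi + phi * (W - D)
        phi = eta * phi
    return W, V
```

As a sanity check, with lam = 0 the objective reduces to min ‖W − XV‖1, whose optimum is W = XV.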
2.4 Identifying differentially expressed genes by PRFE
The gene expression data can be denoted as an m×n matrix X, where each row represents the expression levels of one gene across the n samples and each column represents the expression levels of all m genes in one sample. Fig 1 gives a graphical depiction of processing the matrix X by PRFE. Following the convention in [15], the PRFE method decomposes the matrix X into two matrices W and VT, where sj (j = 1,2,⋯,n) is a sample expression profile, gi is a gene transcriptional response, wk (a column of W) is an eigensample, vk (a row of VT) is an eigenpattern, and the j-th column of VT contains the coordinates of the j-th sample in X.
[Figure omitted. See PDF.]
Fig 1. The PRFE model of gene expression data used for gene identification.
https://doi.org/10.1371/journal.pone.0133124.g001
To identify differentially expressed genes from X, the critical information about the differentially expressed genes in sj needs to be studied. Since the j-th column of VT contains the positional information of the j-th sample in X, according to the formula (27) sj = Σ_{k=1}^{K} wk vkj, the important information about differentially expressed genes in sj can be captured by wk. That is, the differentially expressed genes are identified based on W.
After W has been optimized by the PRFE method, an optimized matrix W* can be obtained. Therefore, according to W*, the differentially expressed genes are identified. W* can be denoted as follows:(28)
Following the description in [21], differentially expressed genes are usually grouped into two classes, up-regulated genes and down-regulated genes, which are reflected by the positive and negative items in W*. Here, we only consider the absolute values of the items in W* to identify the differentially expressed genes. Then the matrix |W*| is summed across rows to obtain the evaluating vector [22]: (29) v(i) = Σ_{k=1}^{K} |w*ik|, i = 1,⋯,m.
Generally speaking, the larger an item in the evaluating vector is, the more differentially expressed the corresponding gene is. Therefore, we sort the elements of the evaluating vector in descending order and take the top h genes (h ≪ m is a number that can be set according to the corresponding requirement) as the differentially expressed ones.
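The ranking step can be sketched in a few lines (the function and variable names are illustrative, not from the paper):

```python
import numpy as np

def rank_genes(W_opt, h):
    """Score genes by summing |W*| across its columns (the evaluating vector)
    and return the row indices of the top-h differentially expressed candidates."""
    scores = np.sum(np.abs(W_opt), axis=1)     # one score per gene (row)
    return np.argsort(scores)[::-1][:h]        # indices sorted by descending score
```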
2.5 Discussion of the selection of p value in PRFE
In Eq 7, the values of pL and pS in the PRFE method are specified within 0 < pL ≤ 1 and 0 ≤ pS ≤ 1, respectively. However, particular values of pL and pS are of special interest for solving the problem in Eq 7.
To improve the robustness to outliers in gene expression data, the Lp-norm is taken as the error function. In PRFE, the value of pL should be in the range (0, 1], but a smaller pL does not necessarily yield better performance. On the contrary, we suggest taking the L1-norm to improve the robustness to outliers, since the error function is convex when pL = 1 [10].
The Schatten p-norm is used as the regularization function to obtain a low-rank matrix. As mentioned in Subsections 2.1 and 2.2, when pS → 0 the Schatten p-norm approximates the rank function. That is, the Schatten p-norm approximates the rank more accurately for pS in [0, 1); however, in this case it is not convex. When pS = 1, the Schatten p-norm is convex but approximates the rank less accurately [10]. Thus, we have the flexibility to choose different pS values for different situations.
In order to verify the reasonableness of the values of pL and pS, the simulation experiments are given in Subsection 3.1.
Results and Discussion
This section presents the experimental results on simulation data and real gene expression data sets. For simplicity, the regularization parameter λ in Eq 7 is set to 1 in all experiments [10]. To demonstrate the effectiveness of our method in recognizing differentially expressed genes, PMD [7], SPCA [6], CIPMD [9] and SVM-RFE [23] are used for comparison.
3.1 Results on simulation data
3.1.1 Data source.
We describe here a general scheme to generate simulation data. Suppose we want to generate data from N(0, Σ) such that the q (q < p) leading eigenvectors of the covariance matrix Σ are sparse. Denote the first q eigenvectors by v1,⋯,vq, which are specified to be sparse and orthonormal; the remaining p − q eigenvectors are not specified to be sparse. Denote the positive eigenvalues of Σ in decreasing order by c1,⋯,cp.
We first need to generate the other p − q orthonormal eigenvectors of Σ. To this end, form a full-rank matrix V* = [v1,⋯,vq, u1,⋯,up−q], where v1,⋯,vq are the pre-specified sparse eigenvectors and u1,⋯,up−q are arbitrary; for example, their entries can be randomly drawn from N(0, 1), and if V* is not of full rank for one random draw, another set of vectors can be drawn. Then we apply the Gram-Schmidt orthogonalization method to V* to obtain an orthogonal matrix V = [v1,⋯,vq, vq+1,⋯,vp], which is exactly the matrix Q from the QR decomposition of V*. Given the orthogonal matrix V, we form the covariance matrix Σ using the eigen-decomposition Σ = VCVT, where C = diag{c1,⋯,cp} is the eigenvalue matrix. The first q eigenvectors of Σ are then the pre-specified sparse vectors v1,⋯,vq. To generate data with covariance matrix Σ, let Z be a random draw from N(0, Ip) and X = VC^(1/2)Z; then cov(X) = Σ, as described in [24].
The simulation data are generated as X ∼ N(0, Σ4) with m = 3000. Let v1, v2, v3 and v4 be four sparse 3000-dimensional vectors, with support patterns specified as in [24]. A 3000-dimensional noise matrix E ∼ N(0, 1) is added to the data. The four leading eigenvectors of Σ4 are v1, v2, v3 and v4, and, to make these four eigenvectors dominate, the eigenvalues are set to c1 = 200, c2 = 150, c3 = 100, c4 = 50 and ck = 1 for k = 5,⋯,3000. The detailed synthetic scheme can be found in [24].
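The generation scheme above can be sketched as follows. For readability the dimensions are reduced (p = 20 instead of 3000) and the sparse support patterns are illustrative, not the ones used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_sigma(sparse_eigvecs, eigvals):
    """Build a covariance matrix whose leading eigenvectors are the given
    sparse orthonormal vectors, via the Gram-Schmidt/QR scheme above."""
    p, q = sparse_eigvecs.shape
    V_star = np.hstack([sparse_eigvecs, rng.standard_normal((p, p - q))])
    Q, _ = np.linalg.qr(V_star)     # first q columns keep the sparse span
    C = np.diag(eigvals)
    return Q @ C @ Q.T, Q

# toy example: two sparse leading eigenvectors
p, q = 20, 2
v = np.zeros((p, q))
v[:5, 0] = 1 / np.sqrt(5)           # v1 supported on the first 5 "genes"
v[5:10, 1] = 1 / np.sqrt(5)         # v2 supported on the next 5
eigvals = np.array([200.0, 150.0] + [1.0] * (p - 2))
Sigma, V = make_sigma(v, eigvals)

# draw n samples X ~ N(0, Sigma) via X = V C^(1/2) Z with Z ~ N(0, I)
n = 100
Z = rng.standard_normal((p, n))
X = V @ np.diag(np.sqrt(eigvals)) @ Z
```

Because the pre-specified vectors are already orthonormal, the QR step reproduces them (up to sign) as the leading eigenvectors of the resulting Σ.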
3.1.2 Simulation results.
To evaluate the performance of the five methods, the experiment is repeated 30 times and the average identification accuracies are reported. For fair comparison, 500 genes are identified by the five methods with their respective parameters. Since PMD, CIPMD and SPCA are sparse methods, α1, α2 and γ are the sparsity-control parameters of PMD, CIPMD and SPCA, respectively. Because pL and pS are the most important parameters of the PRFE method, their impact on our method should be studied first. According to Subsection 2.5, we suggest taking pL = 1 to improve the robustness to outliers and pS = 0 to approximate the rank function. Therefore, fixing pL = 1, we test the performance of our model with pS varying over {0, 0.1,⋯,1} and denote this special case PRFE pL = 1. Similarly, fixing pS = 0, we investigate the performance of our method with pL varying over {0.1, 0.2,⋯,1} and denote this special case PRFE pS = 0. Fig 2 shows the average identification accuracies of the five methods with different parameters on 3000×10 simulation data. In Fig 2, it can clearly be seen that both PRFE pL = 1 and PRFE pS = 0 are superior to the other four methods, even though PMD, SPCA, CIPMD and SVM-RFE can also reach high identification accuracies. This result clearly justifies introducing the Lp-norm and Schatten p-norm into gene identification in the PRFE method. To be precise, when the parameters are larger than 0.4, PMD and CIPMD reach their highest points and become stable. The accuracy of SPCA decreases monotonically once the parameter exceeds 0.1. Because SVM-RFE is not a sparse method, it is not sensitive to the parameters. The accuracy of PRFE pL = 1 also decreases monotonically over the whole parameter range, which verifies that the Schatten p-norm approximates the rank more accurately as pS → 0. The accuracy of PRFE pS = 0 increases with increasing parameter values, which demonstrates that the Lp-norm, as the error function, achieves better performance when pL = 1.
[Figure omitted. See PDF.]
Fig 2. Identification accuracies of the five methods on simulation data with different parameters, where pS is taken as the parameter in the case of PRFE pL = 1 to test the performance of different pS values; pL is taken as the parameter in the case of PRFE pS = 0 to test the performance of different pL values; α1, α2 and γ are the sparsity-control parameters of PMD, CIPMD and SPCA, respectively.
https://doi.org/10.1371/journal.pone.0133124.g002
The number of samples in gene expression data influences the identification accuracy when differentially expressed genes are recognized using feature extraction methods. The five methods are tested with different sample numbers to find out how the sample number affects the identification accuracy. Here, for the PRFE method, we select pL = 1 and pS = 0, since PRFE obtains its best result in this case. PMD and CIPMD reach their highest points and become stable when the parameters are larger than 0.4, so we select 0.4 as the sparsity parameter for PMD and CIPMD. For SPCA, we choose 0.1 as the sparsity parameter, since SPCA acquires its best result with this value. Fig 3 shows the average identification accuracies of the different methods with different sample numbers. It can clearly be seen that the accuracies of four of the methods increase as the sample number grows, while the accuracy of the SVM-RFE method decreases monotonically. The proposed method dominates the other methods for all sample numbers. Moreover, the accuracy of our method is close to 100% when n ≥ 60.
[Figure omitted. See PDF.]
Fig 3. Identification accuracies of the five methods on simulation data with different samples.
https://doi.org/10.1371/journal.pone.0133124.g003
To further investigate the performance of the methods, the average receiver operating characteristic (ROC) curves are shown in Fig 4 for the optimal parameter of each method. Fig 4 shows that PRFE and the competing methods can all identify differentially expressed genes effectively. However, from the True Positive Rate and False Positive Rate we can see that PRFE has the best outcome. Because a noise matrix was added to the simulation data, false positives and false negatives appear.
[Figure omitted. See PDF.]
Fig 4. ROC curve for simulation data.
https://doi.org/10.1371/journal.pone.0133124.g004
The area under the curve (AUC) statistics are listed in Table 1 for the optimal parameter of each method. From Table 1 we can conclude that the ascending order of accuracy of the five methods is: SPCA, PMD, SVM-RFE, CIPMD and PRFE.
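For reference, the AUC can be computed directly from ranked gene scores via the rank-based (Wilcoxon/Mann-Whitney) formula, as in this sketch (a generic utility, not code from the paper; ties are not handled):

```python
import numpy as np

def auc(scores, labels):
    """Rank-based AUC, equivalent to the Mann-Whitney U statistic:
    the probability that a random positive outscores a random negative."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)   # ranks 1..n, ascending
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```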
[Figure omitted. See PDF.]
Table 1. AUC statistics for simulation data.
https://doi.org/10.1371/journal.pone.0133124.t001
3.2 Results on gene expression data sets
To evaluate the proposed method, two publicly available gene expression data sets are adopted: gene expression data of plants responding to abiotic stresses [25, 26] and the leukemia data set [27]. For comparison with the PRFE method, PMD, CIPMD, SVM-RFE and SPCA are also used to identify differentially expressed genes.
3.2.1 Parameters selection.
As shown in Subsection 3.1, the PRFE method reaches its best performance when pL = 1 and pS = 0. Therefore, for the PRFE method we take pL = 1 and pS = 0 to identify the differentially expressed genes in the real gene expression data. PMD, CIPMD and SPCA are sparse methods, whose sparsity parameters have an enormous influence on the identification accuracy. According to the results on simulation data in Subsection 3.1, by choosing the sparsity parameters α1, α2 and γ appropriately, PMD, CIPMD and SPCA can each obtain their optimal performance.
3.2.2 Results on gene expression data of plants responding to abiotic stresses.
(a) Data source and GO analysis
Gene expression data of plants responding to abiotic stresses comprise two classes, roots and shoots, for each stress. The Affymetrix CEL files were downloaded from NASCArrays [http://affy.arabidopsis.info/link_to_iplant.shtml] [25]; the reference numbers are: control, NASCArrays-137; cold stress, NASCArrays-138; osmotic stress, NASCArrays-139; salt stress, NASCArrays-140; drought stress, NASCArrays-141; UV-B light stress, NASCArrays-144; and heat stress, NASCArrays-146. There are 22810 genes in each sample, and the sample number of each stress type in the raw data is listed in Table 2. The raw arrays are adjusted by using GC-RMA software [26] to remove background optical noise and are normalized by quantile normalization. The GC-RMA results are gathered into a matrix to be processed by SPCA, PMD, CIPMD, SVM-RFE and PRFE.
[Figure omitted. See PDF.]
Table 2. The sample number of each stress type in the raw data.
https://doi.org/10.1371/journal.pone.0133124.t002
For fair comparison, 500 genes are identified by PMD, CIPMD and SPCA by choosing α1, α2 and γ appropriately. Since SVM-RFE has no sparsity parameter, its top 500 genes are selected as the differentially expressed genes; likewise, according to Subsection 2.4, the top 500 genes of the PRFE method are selected as the differentially expressed genes. The identified genes are checked by Gene Ontology (GO) Term Enrichment tools, which describe the genes in the input or query set and help discover what functions the genes may have in common [28]. GOTermFinder, a web-based tool, can find the significant GO terms among large sets of genes and is publicly available at http://go.princeton.edu/cgi-bin/GOTermFinder [29]. Therefore, GOTermFinder offers significant information for the biological explanation of high-throughput experiments. The threshold parameters are set as follows: the maximum P-value is 0.01 and the minimum number of gene products is 2. In the following, only the primary outcomes of GO Term Enrichment are shown.
(b) Term responding to stress
The numbers of genes and P-value of response to stress (GO:0006950) in root and shoot samples are given in Table 3.
[Figure omitted. See PDF.]
Table 3. Response to stress (GO:0006950).
In this table, the response-to-stress results for the differentially expressed genes are shown. The background frequency in TAIR is 4044/30322 (13.3%), meaning that 4044 of the 30322 genes respond to stress. SF and PV denote the sample frequency and P-value, respectively. The sample frequency, e.g. 223, means that among the 500 genes identified by the method, 223 genes respond to stress. Root and shoot denote the root samples and shoot samples, respectively.
https://doi.org/10.1371/journal.pone.0133124.t003
From Table 3 we can see that all five methods identify differentially expressed genes with high sample frequencies, which reflect the accuracy of the feature extraction methods, and low P-values. PRFE, SPCA and PMD are unsupervised methods, so we first compare these three algorithms. Among the 12 terms, there are only two (osmotic stress in shoot samples and salt stress in root samples) in which the proposed method is slightly surpassed by PMD; in the remaining 10 terms, the PRFE method outperforms PMD and SPCA. Generally speaking, since supervised methods take class labels into consideration, they usually perform better than unsupervised methods. However, unsupervised methods have unique advantages over supervised ones: for example, when a data set has no class information, supervised methods cannot analyze it, whereas unsupervised methods such as PMD, SPCA and PRFE can analyze the data effectively without class labels. Table 3 shows that PRFE outperforms CIPMD on drought stress in shoot samples, salt stress in root samples and UV-B stress in shoot samples. Furthermore, SVM-RFE is superior to our method only on drought stress in shoot and root samples, osmotic stress in root samples, salt stress in shoot samples and UV-B stress in root samples.
(c) Term responding to abiotic stimulus
Table 4 shows the gene numbers and P-value of response to abiotic stimulus (GO:0009628) in root and shoot samples.
[Figure omitted. See PDF.]
Table 4. Response to abiotic stimulus (GO:0009628).
In this table, the response-to-abiotic-stimulus results for the differentially expressed genes are shown. The background frequency in TAIR is 2842/30322 (9.4%), meaning that 2842 of the 30322 genes respond to abiotic stimulus. SF and PV denote the sample frequency and P-value, respectively. The sample frequency reflects the identification accuracy of the different methods; e.g. 155 means that among the 500 genes identified by the method, 155 genes respond to abiotic stimulus. Root and shoot denote the root samples and shoot samples, respectively.
https://doi.org/10.1371/journal.pone.0133124.t004
As Table 4 lists, each of the five methods acquires good performance when used to identify the differentially expressed genes responding to abiotic stimulus. We again analyze the unsupervised methods first. The proposed method is superior to SPCA and PMD in 11 terms; only for the salt stress data set in root samples is the PRFE method dominated by PMD and SPCA. For the supervised methods, PRFE is superior to CIPMD on cold stress in root and shoot samples, drought stress in shoot samples and osmotic stress in shoot samples. On cold stress in shoot samples and drought stress in shoot samples, our method outperforms SVM-RFE.
3.2.3 Results on leukemia data.
(a) Data source and GO analysis
The leukemia data set consists of 27 cases of acute lymphoblastic leukemia (ALL) and 11 cases of acute myelogenous leukemia (AML) [27]. It is summarized as a 5000×38 matrix for further processing.
For a fair comparison, 100 genes are identified by each of the five methods. The Gene Ontology (GO) enrichment of the functional annotation of the genes identified by the five methods is detected by ToppFun [30], which is publicly available at http://toppgene.cchmc.org/enrichment.jsp. Here, GO: Biological Process is the main object of analysis. In ToppFun, the P-value cutoff is set to 0.01 and the gene-number limit is set to 2.
(b) Terms related to leukemia data
Table 5 lists the top 10 most closely related terms for each method. From Table 5 it is clear that PRFE outperforms PMD and CIPMD on all 10 terms. Our method identifies the same number of genes as SPCA on the following three terms: defense response, regulation of immune system process and leukocyte activation; however, it obtains lower P-values than SPCA on these three terms. Although our method is surpassed by SPCA on the term cell activation, PRFE outperforms SPCA on the remaining terms. SVM-RFE performs best among the five methods, but on the term response to reactive oxygen species only PRFE and PMD identify differentially expressed genes, and PRFE identifies more of them than PMD.
[Figure omitted. See PDF.]
Table 5. The terms of genes identified by different methods.
In this table, 'Term in Genome' denotes the number of genes associated with the term in global genome; 'Input' denotes the number of genes associated with the term from input.
https://doi.org/10.1371/journal.pone.0133124.t005
To further study the performance of the methods, a Venn diagram is shown in Fig 5, from which we can see that both PRFE and SVM-RFE identify fewer 'unique' differentially expressed genes than PMD, SPCA and CIPMD. There are 17 genes shared by all five methods. The detailed information of the 5 'unique' differentially expressed genes extracted by PRFE is shown in Table 6, which shows that these 5 genes, extracted by PRFE and neglected by the other methods, are associated with leukemia. This suggests that PRFE is an excellent method for identifying differentially expressed genes on the leukemia data set.
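The 'shared' and 'unique' gene counts behind such a Venn diagram reduce to plain set operations. The sketch below uses small hypothetical gene lists (the symbols are placeholders, not the actual top-100 lists from the experiment) to show how the two quantities are computed.

```python
# Hypothetical top-gene lists per method; in the experiment each method
# contributes its 100 identified genes on the leukemia data.
gene_lists = {
    "PRFE":    {"ADA", "NFKBIA", "IL8", "MPO", "TCL1A"},
    "SVM-RFE": {"ADA", "NFKBIA", "CD33", "MPO", "ELA2"},
    "PMD":     {"ADA", "IL8", "CST3", "MPO", "AZU1"},
    "SPCA":    {"ADA", "NFKBIA", "CST3", "MPO", "PRTN3"},
    "CIPMD":   {"ADA", "CD33", "CST3", "MPO", "LYZ"},
}

# Genes identified by all five methods (the core of the Venn diagram).
shared = set.intersection(*gene_lists.values())

# Genes 'unique' to one method: identified by it, missed by all others.
unique = {
    name: genes - set.union(*(g for n, g in gene_lists.items() if n != name))
    for name, genes in gene_lists.items()
}
```

On the real lists, `shared` has 17 members and `unique["PRFE"]` has 5, matching the counts reported above.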
[Figure omitted. See PDF.]
Fig 5. Venn diagram of five methods on leukemia data.
https://doi.org/10.1371/journal.pone.0133124.g005
[Figure omitted. See PDF.]
Table 6. The detailed information of the 5 'unique' genes identified by PRFE.
https://doi.org/10.1371/journal.pone.0133124.t006
(c) Genes correlated with leukemia data
To further study the correlation between the identified genes and the leukemia data, the genes are verified against the literature. For simplicity, only the top 30 genes identified by PRFE are considered. According to [31], 50 genes are most closely correlated with the class distinction in the known samples of the leukemia data set. Among these 50 genes, 3 are contained in the top 30 genes identified by PRFE; their Affymetrix IDs and gene symbols are M13792_at (ADA), M69043_at (NFKBIA) and Y00787_s_at (IL8). The article [31] was written by Golub et al. in 1999, when only 50 genes were known to be associated with the leukemia data set; since then, many other genes have been found to be closely correlated with leukemia. According to [32], 210 genes are related to leukemia, and all 30 genes identified by our method can be found in [32]. The detailed information of the 30 genes is shown in Table 7.
[Figure omitted. See PDF.]
Table 7. The detailed information of the 30 genes identified by PRFE.
https://doi.org/10.1371/journal.pone.0133124.t007
Conclusion
In this paper, based on the Schatten p-norm and the Lp-norm, we propose a novel feature extraction method, named PRFE, to identify differentially expressed genes in gene expression data sets. The method combines the Schatten p-norm and the Lp-norm to provide an effective way to identify such genes. Numerous experiments on simulation data and real gene expression data sets demonstrate that the proposed method outperforms other state-of-the-art gene identification methods. Moreover, the identified genes are confirmed to be closely correlated with the corresponding gene expression data.
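The two norms the method builds on are easy to evaluate numerically: the Schatten p-norm of a matrix is the lp-norm of its singular values (p = 1 recovers the nuclear norm), while the entry-wise Lp loss sums |e|^p over the residuals. The following is a minimal numerical sketch of these two building blocks on toy data, not the PRFE solver itself; the matrices and p values are illustrative.

```python
import numpy as np

def schatten_p_norm(M, p):
    """Schatten p-norm: the l_p norm of the singular values of M.
    p = 1 gives the nuclear norm, encouraging low rank when minimized."""
    sv = np.linalg.svd(M, compute_uv=False)
    return (sv ** p).sum() ** (1.0 / p)

def lp_loss(M, p):
    """Entry-wise robust error term: sum of |entries|^p. For p < 1 the
    per-entry penalty grows sub-linearly, so a large outlier contributes
    far less than under the squared (p = 2) loss."""
    return (np.abs(M) ** p).sum()

# Toy example: a rank-1 "expression" matrix and a sparse outlier residual.
X = np.outer([1.0, 2.0, 3.0], [1.0, 1.0, 1.0, 1.0])
E = np.zeros_like(X)
E[0, 0] = 5.0  # a single corrupted entry

nuclear = schatten_p_norm(X, 1.0)   # equals the single nonzero singular value
robust = lp_loss(E, 0.5)            # 5**0.5, far smaller than the squared loss 25
```

Minimizing a weighted sum of these two quantities is the shape of the objective: the Schatten p-norm term pulls the recovered matrix toward low rank, and the Lp error term keeps outliers from dominating the fit.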
Author Contributions
Conceived and designed the experiments: JL JXL. Performed the experiments: JL JXL XSW. Analyzed the data: JL YLG XZK. Contributed reagents/materials/analysis tools: YLG DW. Wrote the paper: JL JXL XSW.
Citation: Liu J, Liu J-X, Gao Y-L, Kong X-Z, Wang X-S, Wang D (2015) A P-Norm Robust Feature Extraction Method for Identifying Differentially Expressed Genes. PLoS ONE 10(7): e0133124. https://doi.org/10.1371/journal.pone.0133124
1. Heller MJ. DNA microarray technology: devices, systems, and applications. Annual review of biomedical engineering. 2002;4(1):129–53.
2. Sarmah CK, Samarasinghe S. Microarray gene expression: A study of between-platform association of Affymetrix and cDNA arrays. Computers in biology and medicine. 2011;41(10):980–6. pmid:21917247
3. Liu J-X, Xu Y, Zheng C-H, Wang Y, Yang J-Y. Characteristic gene selection via weighting principal components by singular values. PloS one. 2012;7(7):e38873. pmid:22808018
4. Ma S, Kosorok MR. Identification of differential gene pathways with principal component analysis. Bioinformatics. 2009;25(7):882–9. pmid:19223452
5. Zheng C-H, Ng T-Y, Zhang D, Shiu C-K, Wang H-Q. Tumor classification based on non-negative matrix factorization using gene expression data. IEEE Transactions on NanoBioscience. 2011;10(2):86–93.
6. Journée M, Nesterov Y, Richtárik P, Sepulchre R. Generalized power method for sparse principal component analysis. The Journal of Machine Learning Research. 2010;11:517–53.
7. Witten DM, Tibshirani R, Hastie T. A penalized matrix decomposition, with applications to sparse principal components and canonical correlation analysis. Biostatistics. 2009;10(3):515–34. pmid:19377034
8. Liu J-X, Zheng C-H, Xu Y. Extracting plants core genes responding to abiotic stresses by penalized matrix decomposition. Computers in biology and medicine. 2012;42(5):582–9. pmid:22364779
9. Liu J-X, Liu J, Gao Y-L, Mi J-X, Ma C-X, Wang D. A Class-Information-Based Penalized Matrix Decomposition for Identifying Plants Core Genes Responding to Abiotic Stresses. PloS one. 2014;9(9):e106097. pmid:25180509
10. Nie F, Wang H, Huang H, Ding C. Joint Schatten p-norm and lp-norm robust matrix completion for missing value recovery. Knowledge and Information Systems. 2013:1–20.
11. Mazumder R, Hastie T, Tibshirani R. Spectral regularization algorithms for learning large incomplete matrices. The Journal of Machine Learning Research. 2010;11:2287–322. pmid:21552465
12. Toh K-C, Yun S. An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems. Pacific Journal of Optimization. 2010;6(615–640):15.
13. Huang J, Nie F, Huang H, Lei Y, Ding C, editors. Social trust prediction using rank-k matrix recovery. Proceedings of the Twenty-Third international joint conference on Artificial Intelligence; 2013: AAAI Press.
14. Eckart C, Young G. The approximation of one matrix by another of lower rank. Psychometrika. 1936;1(3):211–8.
15. Liang F. Use of SVD-based probit transformation in clustering gene expression profiles. Computational Statistics & Data Analysis. 2007;51(12):6355–66.
16. Nie F, Huang H, Ding CH. Low-rank matrix recovery via efficient Schatten p-norm minimization. In: Proceedings of the AAAI Conference on Artificial Intelligence; 2012.
17. Powell MJ. A method for non-linear constraints in minimization problems. UKAEA; 1967.
18. Hestenes MR. Multiplier and gradient methods. Journal of optimization theory and applications. 1969;4(5):303–20.
19. Bertsekas DP. Constrained optimization and Lagrange multiplier methods. Computer Science and Applied Mathematics, Boston: Academic Press, 1982. 1982;1.
20. Gabay D, Mercier B. A dual algorithm for the solution of nonlinear variational problems via finite element approximation. Computers & Mathematics with Applications. 1976;2(1):17–40.
21. Kilian J, Whitehead D, Horak J, Wanke D, Weinl S, Batistic O, et al. The AtGenExpress global stress expression data set: protocols, evaluation and model data analysis of UV‐B light, drought and cold stress responses. The Plant Journal. 2007;50(2):347–63. pmid:17376166
22. Liu J-X, Wang Y-T, Zheng C-H, Sha W, Mi J-X, Xu Y. Robust PCA based method for discovering differentially expressed genes. BMC bioinformatics. 2013;14(Suppl 8):S3. pmid:23815087
23. Guyon I, Weston J, Barnhill S, Vapnik V. Gene selection for cancer classification using support vector machines. Machine learning. 2002;46(1–3):389–422.
24. Shen H, Huang JZ. Sparse principal component analysis via regularized low rank matrix approximation. Journal of multivariate analysis. 2008;99(6):1015–34.
25. Craigon DJ, James N, Okyere J, Higgins J, Jotham J, May S. NASCArrays: a repository for microarray data generated by NASC’s transcriptomics service. Nucleic acids research. 2004;32(suppl 1):D575–D7.
26. Wu Z, Irizarry RA, Gentleman R, Murillo FM, Spencer F. A model based background adjustment for oligonucleotide expression arrays. Journal of the American Statistical Association. 2004;99(468):909–17.
27. Brunet J-P, Tamayo P, Golub TR, Mesirov JP. Metagenes and molecular pattern discovery using matrix factorization. Proceedings of the national academy of sciences. 2004;101(12):4164–9.
28. Ashburner M, Ball CA, Blake JA, Botstein D, Butler H, Cherry JM, et al. Gene Ontology: tool for the unification of biology. Nature genetics. 2000;25(1):25–9. pmid:10802651
29. Boyle EI, Weng S, Gollub J, Jin H, Botstein D, Cherry JM, et al. GO:: TermFinder—open source software for accessing Gene Ontology information and finding significantly enriched Gene Ontology terms associated with a list of genes. Bioinformatics. 2004;20(18):3710–5. pmid:15297299
30. Chen J, Bardes EE, Aronow BJ, Jegga AG. ToppGene Suite for gene list enrichment analysis and candidate gene prioritization. Nucleic acids research. 2009;37(suppl 2):W305–W11.
31. Golub TR, Slonim DK, Tamayo P, Huard C, Gaasenbeek M, Mesirov JP, et al. Molecular classification of cancer: class discovery and class prediction by gene expression monitoring. science. 1999;286(5439):531–7. pmid:10521349
32. Wu M-Y, Dai D-Q, Zhang X-F, Zhu Y. Cancer Subtype Discovery and Biomarker Identification via a New Robust Network Clustering Algorithm. PloS one. 2013;8(6):e66256. pmid:23799085
© 2015 Liu et al. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Abstract
In current molecular biology, it is becoming increasingly important to identify, from gene expression data, differentially expressed genes closely correlated with a key biological process. In this paper, based on the Schatten p-norm and the Lp-norm, a novel p-norm robust feature extraction method is proposed to identify differentially expressed genes. In our method, the Schatten p-norm is used as the regularization function to obtain a low-rank matrix, and the Lp-norm is taken as the error function to improve robustness to outliers in the gene expression data. Results on simulation data show that our method achieves higher identification accuracy than the competing methods. Numerous experiments on real gene expression data sets demonstrate that our method identifies more differentially expressed genes than the others. Moreover, we confirm that the identified genes are closely correlated with the corresponding gene expression data.