1. Introduction
In personalized medicine, it is important to predict patients’ short-term or long-term outcomes from their past or current characteristics, called features, predictors, or covariates. Usually, the original clinical data have a large number of features that are possibly associated with an outcome. The features consist of various types of data from different sources, such as genomic experiments, health screening, wearable devices, lab tests, and so on. Oftentimes, the number of candidate features is larger than the number of cases, which renders traditional statistical methods unusable for this kind of data analysis. Feature selection is therefore a critical step in building prediction models with good performance. Machine learning (ML) methods have emerged as a powerful tool for building prediction models with high-dimensional data [1]. While the number of candidate features is large, usually only a small number of them are truly associated with the outcome.
Least absolute shrinkage and selection operator (LASSO [2]) is one of the most popular ML methods due to its strong prediction accuracy using different types of outcome variables. However, it is well-known that LASSO tends to over-select features [3]. If a prediction model over-selects features, many of the features included in a fitted prediction model will be falsely selected. In this case, the falsely selected features will act like random errors in the fitted model, so that they are deleterious to the prediction accuracy. Selecting too many features can also incur excessive cost since those features should be continuously collected to predict the outcome of future subjects. For example, suppose that we want to develop a prediction model using gene expression data. In such a project, we usually start with commercial microarray chips covering thousands of genes to identify the genes that are believed to be associated with a patient’s outcome. Once a prediction model is developed and validated, it will be used to predict the outcome of future patients. To this end, we would develop customized chips that include only the selected genes to increase the accuracy of assay and cut down the price of arrays [4]. In this case, if the fitted model includes too many genes, the price of the customized chips will be high and the performance of the assay to measure the expression of included genes will be compromised. The long-term use of a fitted prediction model can be very costly if it includes features that are obtained from expensive or time-consuming procedures, especially if they are not really associated with the outcome.
By using an ℓ1-norm penalty, LASSO selects features based on the size of their regression estimates rather than on the statistical significance of their association with the outcome. If all features had the same distribution, variable selection based on the size of regression estimates would be identical to that based on statistical significance. As such, before applying an ML method, we standardize the features in an effort to make their distributions similar. No existing standardization method, however, makes the distributions of a large number of features perfectly identical. In particular, if different variable types (e.g., discrete and continuous) are mixed among the features, it is impossible to unify their distributions.
Elastic net (EN [5]) uses a combination of ℓ1-norm and ℓ2-norm penalties, so its regularization is even less strict in variable selection than the ℓ1-norm penalty of LASSO. Hence, EN has an even more serious over-selection problem.
To resolve these issues of LASSO and EN, Liu et al. [3] proposed to use a standard (un-penalized) regression method combined with the stepwise forward variable selection procedure to develop prediction models with high-dimensional data. The ML methods provide estimators that are shrunken toward 0, even for a reduced model, while standard regression models with stepwise selection provide the maximum likelihood estimators (MLEs). Furthermore, contrary to LASSO and EN, the standard regression methods select features based on the statistical significance of the predictors. Liu et al. [3] show that standard regression models equipped with a stepwise variable selection method include far fewer predictors in the fitted prediction model while maintaining prediction accuracy comparable to LASSO and EN. However, as the dimension of the features increases, the computation of stepwise selection can become quite heavy, since it works through many combinations of predictors until it finds a set of all significant predictors.
For prediction with a large number of candidate predictors, we propose a repeated sieving method that partitions the predictors into many small blocks, so that traditional regression with stepwise variable selection can be conducted quickly and efficiently. Often, predictors are correlated. In this case, if there are two correlated covariates that are associated with the outcome, then both of them should be included in the same block for unbiased estimation [6]. When there are true covariates that are correlated with each other, the chance that all of them belong to the same block will be very small, especially if the block size is small and the number of candidate features is large. To increase this chance, we permute the covariates and apply the same procedure repeatedly. The final set of candidate predictors will be the union of the covariates selected (or sieved) from each block and each permutation. We may conduct the procedure of Liu et al. [3] to fit a prediction model if the number of covariates in the final set is not too large, say, smaller than 1000. Otherwise, we may apply another round of repeated sieving to the reduced set of features before conducting the procedure of Liu et al. [3]. This approach divides a complicated and heavy-computing task into multiple simple and fast-computing tasks, which enhances the efficiency and accuracy of the model fitting. Note that repeated sieving can also make use of parallel computing, which will further expedite the computations.
There have been numerous papers comparing the performance of logistic regression and some ML methods [7,8,9,10,11,12] and some studies comparing the performance of ML methods for survival outcomes with that of Cox regression [13]. Some previous studies also compare the prediction power of stepwise variable selection with LASSO and EN [14,15]. Most of the studies above are anecdotal in the sense that their findings are based on real data analyses without any systematic simulation studies. Furthermore, their example data sets are not really high dimensional, because the number of features is not very large while the number of cases is large. For this kind of data set, standard regression methods work perfectly and we do not need an ML method. In this sense, they do not really evaluate the performance of ML methods for high-dimensional data. Hastie et al. [16] conducted extensive numerical studies to compare the prediction accuracy of forward stepwise selection and LASSO using continuous outcomes, but the stepwise method they use selects covariates based on model-fitting criteria such as R-square or AIC/BIC instead of un-penalized MLEs or p-values from hypothesis testing.
In this paper, we compare the variable selection performance and prediction accuracy of LASSO, EN, and our repeated sieving method. In our numerical studies, we conduct extensive simulations with binary outcomes using logistic regression and survival outcomes using the Cox regression model, and we demonstrate our findings using a real data example.
2. Materials and Methods
We want to compare the performance of our repeated sieving method with LASSO and EN using simulations and the analysis of real data. We introduce our proposed method, briefly review LASSO and EN, and introduce the measures used to evaluate the performance of fitted prediction models.
Our repeated sieving method is built on a standard regression method with stepwise variable selection [3]. The standard regression method for a binary outcome is logistic regression with stepwise variable selection (L-SVS), and that for a time-to-event outcome is Cox regression with stepwise variable selection (C-SVS). Suppose that there are n subjects, and we observe an outcome variable y and m features from each subject. The observed data set then looks like {(y_i, x_i1, …, x_im), i = 1, …, n}. For high-dimensional data, m is much larger than n, while the number of features that are truly associated with the outcome is often very small. We use 50–50 hold-out to partition a given data set into training and validation sets.
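The 50–50 hold-out can be sketched as follows. This is a minimal Python illustration; the function name `holdout_split` and the fixed seed are our own choices, not part of the original software:

```python
import random

def holdout_split(n, seed=0):
    """Randomly split sample indices 0..n-1 into two halves:
    a training set and a validation set (50-50 hold-out)."""
    rng = random.Random(seed)
    idx = list(range(n))
    rng.shuffle(idx)
    half = n // 2
    return sorted(idx[:half]), sorted(idx[half:])

# e.g., split 100 cases into two disjoint halves
train_idx, valid_idx = holdout_split(100)
```

The model is then fitted on the cases in `train_idx` and evaluated on the untouched cases in `valid_idx`.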
2.1. Statistical Models
Let Z = (Z_1, …, Z_k)′ denote a subset of k features that are possibly associated with an outcome variable, and let β = (β_1, …, β_k)′ denote their regression coefficients. We assume that k is much smaller than the number of cases, n.
2.1.1. Logistic Regression
Logistic regression is a popular method to associate a binary outcome variable y with covariates Z [17]. Suppose that, from n patients, we have data {(y_i, z_i), i = 1, …, n}, where y_i is a binary outcome that takes 1 if patient i has a response and 0 otherwise. A logistic model associates the response rate p_i = P(y_i = 1 | z_i) with the covariates by

log{p_i/(1 − p_i)} = β_0 + β′z_i,

where β_0 is an unknown intercept term. Then, given covariates z_i, the outcomes y_i are independent Bernoulli random variables with response rate

p_i = exp(β_0 + β′z_i) / {1 + exp(β_0 + β′z_i)},

so that the regression estimates are obtained by maximizing the log-likelihood function

ℓ(β_0, β) = ∑_{i=1}^{n} [ y_i(β_0 + β′z_i) − log{1 + exp(β_0 + β′z_i)} ]

with respect to (β_0, β).

2.1.2. Cox Proportional Hazards Model
Due to its efficiency and robustness, Cox’s [18] proportional hazards model is popularly used to relate a time-to-event endpoint with covariates. For subject i = 1, …, n, let X_i denote the minimum of the survival time and censoring time, and δ_i the event indicator that takes 1 if subject i has an event and 0 otherwise. The data from n patients are summarized as {(X_i, δ_i, z_i), i = 1, …, n}. We assume that, conditional on the covariates, the censoring time is independent of the survival time. Under the Cox proportional hazards model, the hazard function h_i(t) of patient i is expressed as

h_i(t) = h_0(t) exp(β′z_i),

where h_0(t) is an unknown baseline hazard function. The regression estimates are obtained by maximizing the partial log-likelihood function

ℓ(β) = ∑_{i=1}^{n} δ_i [ β′z_i − log{∑_{j=1}^{n} I(X_j ≥ X_i) exp(β′z_j)} ],

where I(·) is the indicator function.

2.1.3. Stepwise Variable Selection
Variable selection, also called dimension reduction, is an important procedure in building prediction models using high-dimensional data because the number of candidate features m is much larger than the sample size n, while the number of features that are truly associated with outcome is small. Popular variable selection methods for standard regression methods include forward stepwise procedure, backward elimination procedure, and all possible combination procedure. Backward elimination and all possible combination procedures are not workable for high-dimensional data because the estimation procedure of regression models does not converge for the models with a large number of features.
On the contrary, the forward selection procedure is very useful, especially when the number of covariates that are truly related with the outcome is small. For a forward selection procedure, we specify two alpha values, α1 for insertion and α2 for deletion. The selection procedure starts with an intercept term (or an empty model for regression models not including an intercept term, like Cox’s proportional hazards model). In each step, it selects the most significant covariate if its p-value is smaller than α1, and extraneous covariates are removed if they become insignificant after adding a new variable (i.e., if their p-values are larger than α2). This procedure continues until no more variables are added to the current model. We can control the number of features to be selected for a prediction model by using appropriate α1 and α2 values.
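The insertion/deletion loop described above can be sketched as follows. This is a minimal Python illustration of the control flow only: the `pvalue` function is a hypothetical stand-in for the actual significance test (e.g., a Wald test from a logistic or Cox fit), and the default alpha values are arbitrary:

```python
def stepwise_select(features, pvalue, alpha_in=0.05, alpha_out=0.10):
    """Forward stepwise variable selection with insertion/deletion alphas.

    `pvalue(f, model)` is a stand-in for the hypothesis test of feature
    `f` in a regression model containing the features in `model`."""
    model = []
    while True:
        # Insertion: find the most significant feature not yet in the model.
        candidates = [(pvalue(f, model), f) for f in features if f not in model]
        if not candidates:
            break
        p, best = min(candidates)
        if p >= alpha_in:          # nothing significant enough to enter
            break
        model.append(best)
        # Deletion: drop features that became insignificant after the insertion.
        model = [f for f in model
                 if f == best or pvalue(f, [g for g in model if g != f]) <= alpha_out]
    return model

# Toy demo: fixed p-values, where only x1 and x2 are significant.
toy_p = {"x1": 0.001, "x2": 0.01, "x3": 0.4, "x4": 0.7}
selected = stepwise_select(list(toy_p), lambda f, m: toy_p[f])
```

In the toy demo the loop admits x1 and then x2, and stops when the best remaining p-value (0.4) exceeds the insertion alpha.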
Some data analysis computer programs, e.g., SAS, use penalized likelihood criteria for variable selection, such as the Akaike information criterion (AIC) or Bayesian information criterion (BIC), rather than performing hypothesis tests to calculate p-values. With these criteria, we do not know how significant the selected covariates are and cannot control the number of selected features.
2.2. Machine Learning Methods
2.2.1. LASSO
LASSO is a regularized regression method that imposes an ℓ1-norm penalty on the objective function of a traditional regression model. For a binary outcome, the ℓ1-norm penalty is added to the negative log-likelihood function of logistic regression, and the regression estimates are obtained by minimizing [2]

−ℓ(β_0, β) + λ ∑_{j=1}^{k} |β_j|

with respect to (β_0, β). For a time-to-event outcome, an ℓ1-norm penalty is imposed on the negative log-partial likelihood function of the proportional hazards model, and the regression estimates are obtained by minimizing [19]

−ℓ(β) + λ ∑_{j=1}^{k} |β_j|

with respect to β. In this paper, the tuning parameter λ is determined through an internal cross-validation.

2.2.2. Elastic Net
EN is a generalized regularized regression method that imposes a combination of ℓ1- and ℓ2-norm penalties on the objective function. For logistic regression with a binary outcome, the regression estimates are obtained by minimizing [5]

−ℓ(β_0, β) + λ_1 ∑_{j=1}^{k} |β_j| + λ_2 ∑_{j=1}^{k} β_j²

with respect to (β_0, β). For Cox regression with a time-to-event outcome, the regression estimates are obtained by minimizing [5]

−ℓ(β) + λ_1 ∑_{j=1}^{k} |β_j| + λ_2 ∑_{j=1}^{k} β_j²

with respect to β. The tuning parameters (λ_1, λ_2) are obtained through an internal cross-validation, as in LASSO.

2.3. Repeated Sieving Method
In high-dimensional data, often the number of features that are truly associated with the outcome is small while the number of candidate features is very large. In this case, Liu et al. [3] have shown that, compared to LASSO and EN, traditional regression methods with stepwise variable selection (R-SVS) have similar or even better prediction performance with far fewer selections, owing to their higher selection precision. In this paper, we consider higher-dimensional data with more than 10,000 features and possible dependency among features. To address the issues of heavy computation and selection error due to multi-collinearity, we propose a repeated sieving approach that conducts a two- or multi-step variable selection.
If the dimension, m, of a data set is really high, Liu et al.’s [3] R-SVS is very time-consuming, since it progresses through many combinations of features and conducts hypothesis testing on the selected features. Our repeated sieving method partitions the whole set of features into many small blocks, say, of size q, and sorts (or sieves) the significant features from each block. Computing R-SVS within small blocks is very fast. The features selected from the blocks are candidates for the features truly associated with the outcome.
Note that features belonging to different blocks are never included in a common joint regression model at this step. According to Gail et al. [6], if a covariate is not associated with the outcome, or is independent of the other covariates included in the model, then a regression model omitting the covariate has no bias issue at all. However, if two covariates are correlated and both are truly associated with the outcome, then both covariates should be included in a regression model for unbiased estimation of the regression coefficients. To take care of this issue, in the next step, we randomly permute the features and partition them into blocks of size q again, with the hope that two dependent true features are assigned to the same block. We apply the same sieving procedure to the permuted data as in the first step. Since the block size is very small, e.g., q = 50, compared to the dimension m of the whole data, we repeat the permutation many times, say, P times. The features selected over the multiple permutations are the candidate features for the final prediction model. Note that the set of selected features may grow as we repeat more permutations. If the set of selected features is not too big (for example, fewer than 1000 features), then we apply R-SVS to this set of features. If this set of selected features is still too big, however, we may apply another round of sieving to this reduced set of features.
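To see why repeating the permutation helps, consider the chance that two fixed correlated features land in the same block. A short sketch (the function name and the example values m = 10,000, q = 50, P = 100 are illustrative):

```python
def same_block_prob(m, q, P):
    """Chance that two fixed features share a block at least once over
    P random permutations, with m features partitioned into blocks of size q."""
    p_once = (q - 1) / (m - 1)      # fix one feature's block; q-1 of the
                                    # remaining m-1 slots are in that block
    return 1 - (1 - p_once) ** P    # at least once in P independent permutations

# e.g., with m = 10,000 features and blocks of q = 50, a single permutation
# co-assigns a given pair with probability ~0.5%, while P = 100 permutations
# raise this to roughly 39%.
```

This is why a single pass over the blocks is not enough: the union over many permutations gives each correlated pair of true features a realistic chance of being tested jointly.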
In each stage of repeated sieving, we need to specify two alpha values, α1 for insertion and α2 for deletion, to be used in R-SVS for each block. In the variable selection for the final model, we may use different alpha values, α1′ for insertion and α2′ for deletion.
- For the original data set with n cases and m features, permute the order of the features;
- For each permuted data set, partition the features into blocks and apply R-SVS to each block using (α1, α2);
- Permute the data set P times in total; the sieved candidate features are the union of the selections from all permutations;
- Apply R-SVS to the set of sieved features using the final-model alpha values (α1′, α2′) to select the features for the final model.
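The steps above can be sketched as a skeleton in Python, where `sieve()` is a hypothetical stand-in for R-SVS run on one block with the insertion/deletion alphas:

```python
import random

def repeated_sieving(features, sieve, block_size=50, n_perm=100, seed=0):
    """Skeleton of the repeated sieving step.

    `sieve(block)` stands in for R-SVS applied to one block with (alpha1,
    alpha2); it returns the features selected from that block. The candidate
    set is the union of selections over all permutations."""
    rng = random.Random(seed)
    feats = list(features)
    candidates = set()
    for _ in range(n_perm):
        rng.shuffle(feats)                      # step 1: permute the features
        for i in range(0, len(feats), block_size):
            block = feats[i:i + block_size]     # step 2: partition into blocks
            candidates |= set(sieve(block))     # step 3: sieve and take the union
    return candidates                           # step 4: pass to a final R-SVS run

# Toy demo: a sieve that flags the "true" features whenever they appear.
truth = {"g3", "g7"}
cand = repeated_sieving([f"g{i}" for i in range(200)],
                        sieve=lambda b: truth & set(b),
                        block_size=20, n_perm=5)
```

In practice `sieve()` would fit a regression model with stepwise selection on each block, and the returned candidate set would be screened once more by R-SVS with (α1′, α2′).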
2.4. Performance Measurements
The variable selection performance of these prediction methods is evaluated by the total number of selected covariates and the number of selected covariates that are truly associated with the outcome, where the proportion of true selections over total selections gives the selection precision. Let S = β̂′Z denote the risk score, where Z is the vector of features included in the fitted prediction model and β̂ is the vector of their regression estimates. For a data set with a binary outcome, the prediction performance of a fitted prediction model can be evaluated by the area under the curve (AUC) of the ROC curve generated from {(S_i, y_i), i = 1, …, n}, where S_i = β̂′z_i. An accurate prediction model has a large AUC value, close to 1. On the other hand, for a data set with a survival outcome, the prediction performance of a fitted prediction model can be evaluated by Harrell’s concordance C-index between S_i and (X_i, δ_i). We also compute the p-value of the univariate Cox proportional hazards model regressing the survival outcome on S. For a survival outcome, an accurate prediction model has a large C-index and a large −log10 p-value.
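For the binary case, the AUC can be computed directly from the risk scores with the rank-based (Mann-Whitney) formula; a minimal sketch:

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney formula: the
    probability that a random case with y = 1 outranks a random case
    with y = 0, counting tied scores as 1/2."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

For example, a risk score that ranks every responder above every non-responder attains AUC = 1, while a constant score gives the chance level of 0.5.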
All data analyses for model fitting are conducted using the open-source R software (R Foundation for Statistical Computing), version 3.6.0. For LASSO and EN, we use the cv.glmnet function from the glmnet package. We developed our own R function for R-SVS and repeated sieving based on specified alpha values for insertion and deletion.
3. Results
3.1. Simulation Studies
In the simulation study, we investigate the impact of over-selection on prediction accuracy and compare the variable selection and prediction performance of our repeated sieving method with LASSO and EN. LASSO and EN do not control the number of selections, while our repeated sieving method, nested with R-SVS, controls the number of selections by setting various alpha values for insertion and deletion. We denote by (α1, α2) the alpha values for insertion and deletion for R-SVS within blocks and by (α1′, α2′) those for the final variable selection.
We generate samples of m = 10,000 features from a multivariate Gaussian distribution with mean 0 and variance 1. The random vector of features consists of 500 independent blocks of size 20. Within each block, the multivariate Gaussian distribution has a compound symmetry structure with a common correlation coefficient. We assume that six of the candidate predictors are truly associated with the outcome, with every two of them belonging to the same block, so that the true predictors are spread over three different blocks.
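A sketch of this covariate-generation scheme, using the fact that a compound symmetry block can be built from one shared and one idiosyncratic standard-normal component. The correlation value rho = 0.5 and the small block counts in the demo are illustrative only, not the paper's settings:

```python
import math
import random

def gen_features(n, n_blocks=500, q=20, rho=0.5, seed=0):
    """Draw n samples of m = n_blocks * q standard-normal features with
    compound symmetry (common correlation rho) within each block and
    independence across blocks."""
    rng = random.Random(seed)
    a, b = math.sqrt(rho), math.sqrt(1 - rho)   # Var(a*S + b*E) = rho + (1-rho) = 1
    data = []
    for _ in range(n):
        row = []
        for _ in range(n_blocks):
            shared = rng.gauss(0, 1)            # block-level shared component
            row.extend(a * shared + b * rng.gauss(0, 1) for _ in range(q))
        data.append(row)
    return data

X = gen_features(n=200, n_blocks=5, q=4)        # small demo: m = 20 features
```

Any two features in the same block then correlate at rho, while features in different blocks are independent.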
At first, we consider the binary outcome case. For subject i with true predictors z_i, the outcome y_i is generated from a Bernoulli distribution with success probability p_i given by the logistic regression model

log{p_i/(1 − p_i)} = β_0 + β′z_i,

where β is the vector of regression coefficients corresponding to the true predictors z_i and β_0 is the intercept term; fixed values are chosen for (β_0, β). We use 50–50 hold-out to split the data into a training set and a validation set of equal size. By performing more permutations, the chance of jointly selecting true covariates from the same block becomes higher. We examine this by comparing the results of our repeated sieving method under two numbers of permutations, the larger being P = 100. For each permuted data set, we partition the covariates into blocks small enough that L-SVS can be easily carried out. We apply L-SVS to each block of the training set, and, after P permutations, the candidate covariates for the final model are the union of all sieved (or selected) covariates from all permutations. The alpha values (α1, α2) are chosen to sieve around 200 candidate covariates. L-SVS applied to this set of candidate covariates with (α1′, α2′) fits the final prediction model.
For the fitted final prediction model, we count the total number of covariates included in the model, called the total selection, and the number of true covariates among them, called the true selection. Let Z denote the vector of covariates selected by the final model and β̂ the corresponding regression estimates. Then, we calculate the AUC value using the fitted risk score S = β̂′Z and the outcome y from the training set and from the validation set. We repeat this simulation many times and calculate the mean total selection and mean true selection from the training sets, together with the mean AUC from the training and validation sets. The AUC for the training set measures how well the final prediction model fits the training data, but it does not measure the real prediction accuracy, because the fitted model tends to fit the training set better by including more predictors [20]. This is called the over-fitting issue. The AUC value from an independent validation set measures the real prediction accuracy, resolving the over-fitting issue.
The simulation results are summarized in Table 1. The ℓ1-norm penalty of LASSO is stricter than the combined ℓ1- and ℓ2-norm penalty of EN, so LASSO has a smaller mean total selection than EN. EN also has slightly more true selections than LASSO, probably because of its larger total selection. The results also show that LASSO and EN select a large number of features, while the repeated sieving methods select fewer than 10 features in total. The true selections of the two ML methods are slightly larger than those of the repeated sieving methods, but considering their large total selections, this difference in true selection is negligible.
By selecting a much larger number of covariates, the two ML methods have a slightly better fitting of the training sets than our repeated sieving. However, from the average AUCs for validation sets, we find that our repeated sieving method has better prediction accuracy than LASSO and EN. We have these results because the ML methods have large proportions of false selections that act like error terms in the fitted prediction models and, hence, lower the prediction accuracy.
Overall, the repeated sieving method has better prediction accuracy, even with a much smaller total selection than LASSO and EN. On the other hand, for our repeated sieving method, performing more permutations increases both total and true selections, but the increase in true selection is larger. Hence, repeated sieving with the larger number of permutations has a slightly higher AUC from the validation sets.
For the survival outcome case, the covariates are generated in the same way as in the binary outcome case. For subject i with true predictors z_i, the hazard rate of the survival distribution is given by

h_i(t) = h_0(t) exp(β′z_i),

where β is the vector of regression coefficients and h_0(t) is the baseline hazard function. The true features are drawn in the same way as in the binary outcome case, and we set fixed regression coefficients with a constant baseline hazard rate. We consider either a 30% or a 10% censoring proportion. For the 30% censoring case, the censoring times are generated from a uniform distribution over the accrual period a. Fixing the accrual period a, the censoring times for 10% censoring are generated from another uniform distribution with an additional follow-up period b. We apply 50–50 hold-out to split the generated data set. The process of repeated sieving for the survival outcome is the same as that for the binary outcome, except that the nested L-SVS is replaced with C-SVS. We compare the performance of LASSO, EN, and repeated sieving using C-SVS with two numbers of permutations, the larger being P = 100. For the repeated sieving, we use (α1, α2) to sieve fewer than 500 candidate features from the permuted features and (α1′, α2′) for the final prediction model fitted by applying C-SVS to the set of candidate features. We count the total selection and the true selection of the final prediction model fitted from the training set.
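A sketch of this survival-outcome generation under a constant baseline hazard, using exponential event times and uniform censoring. The baseline rate, accrual, and follow-up values below are illustrative placeholders, not the paper's settings:

```python
import math
import random

def gen_survival(z_rows, beta, lam0=0.1, accrual=10.0, follow_up=0.0, seed=0):
    """Simulate (time, event) pairs under a proportional hazards model with
    constant baseline hazard lam0: event times are exponential with rate
    lam0 * exp(beta'z), and censoring times are uniform on
    (follow_up, accrual + follow_up)."""
    rng = random.Random(seed)
    out = []
    for z in z_rows:
        rate = lam0 * math.exp(sum(b * x for b, x in zip(beta, z)))
        t = rng.expovariate(rate)                        # latent event time
        c = rng.uniform(follow_up, follow_up + accrual)  # censoring time
        out.append((min(t, c), 1 if t <= c else 0))      # observed (X_i, delta_i)
    return out

# Demo: one true predictor, half the subjects at z = 0.5 and half at z = -0.5.
data = gen_survival([[0.5]] * 50 + [[-0.5]] * 50, beta=[1.0])
```

Raising `follow_up` while keeping the accrual period fixed shifts the censoring distribution to later times, which is how the lighter (10%) censoring scenario is obtained.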
Let Z denote the predictors selected by the final model and β̂ their corresponding coefficient estimates; then, we fit a univariate Cox regression model of the survival outcome on the risk score S = β̂′Z using the training or the validation set. The p-value of this single covariate is calculated, and the −log10 p-value is used to measure the prediction accuracy. We also estimate the association between the risk score and the outcome using Harrell’s C-index for the training and validation sets. A large −log10 p-value and C-index for the training set mean that the final prediction model fits the training set well, and those for the validation set indicate that the fitted model has good prediction accuracy. Through the simulations, the mean total and true selections are calculated from the training sets, and the mean −log10 p-value and C-index are calculated from both the training and validation sets.
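Harrell's C-index described above can be computed with a direct pairwise count; a minimal sketch, where ties in risk score count as 1/2 and only pairs whose earlier time is an event are usable:

```python
def c_index(scores, times, events):
    """Harrell's concordance: among usable pairs (the earlier time is an
    observed event), the fraction in which the subject failing first has
    the higher risk score, with score ties counted as 1/2."""
    conc = usable = 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] == 1 and times[i] < times[j]:   # subject i fails first
                usable += 1
                conc += (scores[i] > scores[j]) + 0.5 * (scores[i] == scores[j])
    return conc / usable
```

A perfectly concordant risk score (every earlier failure has a higher score) gives C = 1, a perfectly discordant one gives C = 0, and an uninformative score is near 0.5.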
The simulation results for the survival outcome case are shown in Table 2. With 10% censoring, all the methods have more true selections and higher prediction accuracy, in terms of the −log10 p-value and C-index for the validation sets, than with 30% censoring. As in the binary outcome case, LASSO and EN have much larger total selections than our repeated sieving method. With 30% censoring, the mean true selections of LASSO and EN are slightly larger than that of our repeated sieving method. With 10% censoring, however, the mean true selection of repeated sieving with the larger number of permutations is higher than those of LASSO and EN. With a much smaller total selection and high true selection, our repeated sieving method has higher prediction accuracy than LASSO and EN overall. The results also show that the repeated sieving method with more permutations has higher true selection and higher prediction accuracy in terms of the −log10 p-value and C-index from the validation sets.
3.2. Real Data Examples
Wang et al. [21] published a data set from the tumour bank at the Erasmus Medical Center (Rotterdam, the Netherlands), collected from frozen tumour samples of patients with lymph-node-negative breast cancer. The patients were treated during 1980–1995 but did not receive systemic neoadjuvant or adjuvant therapy. This breast cancer recurrence microarray data set can also be retrieved from the Gene Expression Omnibus database. The data set contains the expression of transcripts measured from the total RNA of the samples. Estrogen receptor (ER) status indicates whether the tumour cells express the estrogen receptor (ER+) or not (ER−). The relapse-free survival (RFS) times of the patients are also included in the data set.
For the binary outcome of ER status, we randomly select half of the samples for validation and use the remaining samples to train the model. LASSO, EN, and repeated sieving are applied to fit a prediction model. For the repeated sieving method, we partition the features into small blocks and specify the alpha values for insertion and deletion. Figure 1 shows the ROC curves of the fitted models for the training and validation sets. All three models have large AUCs for the training set because of over-fitting. For the validation set, however, EN and repeated sieving have slightly larger AUCs than LASSO. More analysis results are shown in Table 3. The repeated sieving method selects only two features (which are also selected by LASSO and EN), compared with 22 for LASSO and 520 for EN, but its prediction accuracy is the same as, or slightly higher than, that of the ML methods.
From these results, we find that LASSO and EN select too many features, many of which are false selections. For the binary outcome of ER status, we apply L-SVS to the sets of features included in the final prediction models of LASSO and EN, called the LASSO-set and EN-set, respectively, using specified alpha values for insertion and deletion. Since repeated sieving selects only two features, we do not have to further reduce its number of features. Table 4 reports the results, comparing the performance of the final prediction models fitted by L-SVS from the LASSO-set and EN-set. Note that the fitted model obtained by applying L-SVS to the LASSO-set selects exactly the same two features as repeated sieving, so these two methods have identical final prediction models and identical prediction performance. However, the final model of L-SVS applied to the EN-set selects three features, among which only one is commonly selected by the other two methods. In spite of one more selection than the other two methods, the final model of L-SVS applied to the EN-set has slightly lower prediction accuracy in terms of the AUC from the validation set. Combined with L-SVS, LASSO and EN make far fewer selections (2 vs. 22 and 3 vs. 520, respectively) while maintaining prediction accuracy similar to our repeated sieving. This demonstrates the issues associated with over-selection. We apply multivariate regression to the models from Table 4 and summarize the coefficients and significance of the selected covariates in Table 5.
For the time-to-event outcome of RFS, we use the same training and validation sets as in the ER-status analysis. We apply LASSO, EN, and repeated sieving (with a small block size for sieving and alpha values (0.0002, 0.0005, 0.001)) to the training set. The analysis results are summarized in Table 6. As in the analysis of ER status, repeated sieving selects far fewer features than LASSO and EN. Due to over-selection, LASSO and EN fit the training data better in terms of a larger −log10 p-value and C-index, but repeated sieving has the highest prediction accuracy in terms of its highest −log10 p-value and C-index for the validation set. We find that five of the nine features selected by repeated sieving are also selected by both LASSO and EN.
Since LASSO and EN select too many features, we apply C-SVS to the sets of features selected by LASSO and EN to further reduce the dimension. Table 7 reports the performance of the final models and the list of features included in each. The three methods select a similar number of features, and 219312_s_at is selected by all three models. By applying C-SVS to the LASSO-set and EN-set, we achieve significant dimension reduction (to 8 from 54 and to 10 from 174, respectively). By comparing the −log10 p-value and C-index for the validation set between Table 6 and Table 7, we find that EN followed by C-SVS (Table 7) improves the prediction accuracy drastically over EN alone (Table 6), although LASSO followed by C-SVS does not improve the prediction accuracy over LASSO alone at all. Table 8 summarizes the coefficient sizes and significance of the covariates from the fitted models in Table 7.
4. Discussion
ML methods such as LASSO and EN have been widely used to develop prediction models with high-dimensional data, and the recent recognition of ML methods has promoted their widespread application and investment. This progress stimulates the development of clinical data analysis but also carries a risk of overuse, since the over-selection and unfavorable prediction accuracy of these methods have been identified and reviewed in many studies. Liu et al. [3] provide an alternative approach that combines standard regression methods with stepwise variable selection, called R-SVS here. We extend their method to prediction model building with very high-dimensional data. Both the simulations and the real data examples show that the ML methods over-select features and lose prediction power as a result. Our repeated sieving method, in contrast, makes far fewer selections but achieves higher selection precision and better prediction accuracy than LASSO and EN.
There has been some recent research on variants of LASSO that try to resolve the over-selection problem [22,23]; we consider only the traditional LASSO in this paper, and comparing these modified ML methods with our repeated sieving method may be a future topic. We find that the repeated sieving method produces precise prediction models with efficient computing, without the complicated modifications required by these ML variants.
SAS provides a procedure for stepwise variable selection based on information criteria such as AIC, BIC, or R-squared. Like LASSO and EN, stepwise selection based on these criteria cannot control the number of selected features. We extend the R program for R-SVS developed by Liu et al. [3] and propose the repeated sieving approach to handle high-dimensional data. Although traditional regression is known to be inappropriate for high-dimensional data analysis, our repeated sieving method resolves this problem by partitioning the high-dimensional features into low-dimensional blocks on which R-SVS can be conducted efficiently. The repeated sieving method is most applicable when the number of truly significant predictors is much smaller than the sample size. It can be applied to any type of outcome for which a regression method exists. Further, because it is composed of standard statistical methods, researchers without deep knowledge of bioinformatics theory can still use it to analyze high-dimensional data. Our software is available at the link
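The block-partitioning idea can be sketched as follows. This is a minimal Python illustration rather than our R implementation: features are split into low-dimensional blocks, a screening rule is applied within each block, and the survivors are pooled and sieved again until the surviving set is small. The absolute-correlation threshold used here is a simplified stand-in for the within-block stepwise p-value criterion of R-SVS, and all function names and the data layout are illustrative, not taken from our software.

```python
import math
import random

def pearson(x, y):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def sieve_pass(features, y, block_size, threshold):
    """One sieving pass: partition the features into blocks and keep, from
    each block, the features whose |correlation with y| exceeds the
    threshold (a stand-in for within-block stepwise selection by p-value)."""
    survivors = {}
    items = list(features.items())
    for start in range(0, len(items), block_size):
        for name, col in items[start:start + block_size]:
            if abs(pearson(col, y)) >= threshold:
                survivors[name] = col
    return survivors

def repeated_sieving(features, y, block_size=10, threshold=0.5):
    """Repeat sieving passes until the surviving set fits in one block,
    then run a final pass on the pooled survivors."""
    current = dict(features)
    while len(current) > block_size:
        nxt = sieve_pass(current, y, block_size, threshold)
        if len(nxt) == len(current):  # no further reduction possible
            break
        current = nxt
    return sieve_pass(current, y, block_size, threshold)

# Toy example: 2 informative features hidden among 198 noise features.
rng = random.Random(0)
n = 200
x0 = [rng.gauss(0, 1) for _ in range(n)]
x1 = [rng.gauss(0, 1) for _ in range(n)]
y = [a + b for a, b in zip(x0, x1)]
features = {"g0": x0, "g1": x1}
for j in range(2, 200):
    features[f"g{j}"] = [rng.gauss(0, 1) for _ in range(n)]

selected = repeated_sieving(features, y)
```

Because each pass only ever fits low-dimensional models within blocks, the procedure remains feasible even when the total number of features far exceeds the sample size.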
In SVS, the significance level α can be chosen so that the final model includes a reasonable number of features; we may try different α values so that the p-value from the validation set is minimized. The repeated sieving method in this paper shares a common goal with the SVS of Liu et al. [3], i.e., prediction model building from high-dimensional data, but the ideal dimensionality of the data differs between the two methods. SVS works well for a data set with up to m = 1000 features, whereas repeated sieving is more appropriate for data with a larger number of features. If the final prediction model includes K features, the number of regression models to be fitted grows much faster with m for SVS than for repeated sieving. When m is large, e.g., in the millions, this difference becomes huge, so SVS requires excessive computing time, whereas the sieving method can be used with almost no limit on the dimension of the data.
The repeated sieving method takes more computing time to train models than LASSO and EN because the methods select variables in different ways. Repeated sieving, which is built on stepwise selection, spends most of its running time performing hypothesis tests and computing the p-values of covariates to select variables by statistical significance, whereas LASSO and EN simply estimate regression coefficients and select covariates by their sizes. Another factor that lengthens the computing time of repeated sieving is permutation: we find that repeated sieving with 100 permutations has slightly higher prediction performance than with 10 permutations. For the real data analysis, the computing times of LASSO, EN, and repeated sieving are 8.76 s, 291.6 s, and 13,237.1 s for the binary outcome, and 79.46 s, 887.62 s, and 13,204.85 s for the survival outcome. Repeated sieving is expected to take the most time because it conducts permutations, and its running time can be further reduced with parallel computing.
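The permutation step can be sketched as follows. This is a generic permutation test for one covariate's association with the outcome, using |Pearson correlation| as the test statistic; it is not the exact routine in our software, and the names `pearson` and `perm_pvalue` are illustrative.

```python
import math
import random

def pearson(x, y):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def perm_pvalue(x, y, n_perm=100, seed=0):
    """Permutation p-value for the association between covariate x and
    outcome y: shuffle y to break the association and count how often the
    permuted statistic reaches the observed one."""
    rng = random.Random(seed)
    obs = abs(pearson(x, y))
    yy = list(y)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(yy)
        if abs(pearson(x, yy)) >= obs:
            hits += 1
    # add-one correction keeps the estimate away from an exact zero
    return (hits + 1) / (n_perm + 1)

rng = random.Random(1)
x = [rng.gauss(0, 1) for _ in range(100)]
y_assoc = [v + rng.gauss(0, 0.5) for v in x]  # strongly associated outcome
p_assoc = perm_pvalue(x, y_assoc, n_perm=100)
```

Note that with n_perm = 100 the smallest attainable p-value is 1/101 ≈ 0.01, versus 1/11 ≈ 0.09 with 10 permutations, which is one reason more permutations can sharpen selection at the cost of computing time.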
L.L. conducted the simulation study and data analysis and wrote the manuscript with S.-H.J. S.-H.J. guided L.L. through the simulation studies and the real data analysis. All authors have read and agreed to the published version of the manuscript.
No new data were created or analyzed in this study. Data sharing is not applicable to this article.
The authors declare no conflicts of interest.
Figure 1. ROC curves from the prediction of ER status using different methods for Wang et al.’s [21] data.
Table 1. Binary outcome case: simulation results of the three prediction methods.

| | LASSO | EN | Repeated Sieving (10 perm.) | Repeated Sieving (100 perm.) |
|---|---|---|---|---|
| Total Selections | 67.05 | 94.6 | 8.61 | 8.77 |
| True Selections | 5.71 | 5.77 | 5.31 | 5.66 |
| AUC-Training | 1.00 | 1.00 | 0.98 | 0.99 |
| AUC-Validation | 0.81 | 0.79 | 0.87 | 0.89 |
Table 2. Survival outcome case: simulation results of the three prediction methods.

| | LASSO | EN | Repeated Sieving (10 perm.) | Repeated Sieving (100 perm.) |
|---|---|---|---|---|
| (i) 30% Censoring | | | | |
| Total Selections | 33.93 | 40.4 | 8.26 | 9.63 |
| True Selections | 5.58 | 5.63 | 5.17 | 5.54 |
| | 41.10 | 42.74 | 34.67 | 37.05 |
| | 19.18 | 18.55 | 21.02 | 22.65 |
| C-index (training) | 0.84 | 0.86 | 0.82 | 0.83 |
| C-index (validation) | 0.72 | 0.72 | 0.73 | 0.74 |
| (ii) 10% Censoring | | | | |
| Total Selections | 35.29 | 36.71 | 8.25 | 9.4 |
| True Selections | 5.68 | 5.77 | 5.69 | 5.83 |
| | 47.22 | 47.33 | 40.58 | 42.37 |
| | 26.45 | 25.83 | 28.97 | 29.72 |
| C-index (training) | 0.83 | 0.84 | 0.81 | 0.82 |
| C-index (validation) | 0.74 | 0.74 | 0.75 | 0.76 |
Table 3. Analysis results of Wang et al.'s [21] data: prediction of ER status.

| Method | # Selected Features | AUC-Training | AUC-Validation |
|---|---|---|---|
| LASSO | 22 | 0.98 | 0.87 |
| Elastic Net | 520 | 1.00 | 0.88 |
| Repeated Sieving | 2 | 0.97 | 0.88 |
Table 4. Analysis results of Wang et al.'s [21] data: prediction of ER status by LASSO and elastic net followed by stepwise selection, compared with repeated sieving.

| Method | # Selected Features | AUC-Training | AUC-Validation |
|---|---|---|---|
| LASSO-stepwise | 2 | 0.97 | 0.88 |
| Elastic Net-stepwise | 3 | 0.98 | 0.86 |
| Repeated Sieving | 2 | 0.97 | 0.88 |

| Method | Selected Features |
|---|---|
| LASSO-stepwise | 209604_s_at, 218146_at |
| Elastic Net-stepwise | 209604_s_at, 207754_at, 204495_s_at |
| Repeated Sieving | 209604_s_at, 218146_at |
Table 5. Multivariate logistic regression on ER status using the training set of Wang et al.'s [21] data.

| LASSO-Stepwise | | | Elastic Net-Stepwise | | |
|---|---|---|---|---|---|
| Feature | Coef. | p-Value | Feature | Coef. | p-Value |
| 209604_s_at | | | 209604_s_at | | |
| 218146_at | | | 207754_at | | |
| | | | 204495_s_at | | |
| Repeated Sieving | | | | | |
| Feature | Coef. | p-Value | | | |
| 209604_s_at | | | | | |
| 218146_at | | | | | |
Table 6. Analysis results of Wang et al.'s [21] data: prediction of RFS.

| | | | | C-Index | |
|---|---|---|---|---|---|
| Method | # Selected Features | Training | Validation | Training | Validation |
| LASSO | 54 | 30.71 | 0.80 | 0.94 | 0.62 |
| Elastic Net | 174 | 29.60 | 0.49 | 0.95 | 0.58 |
| Repeated Sieving | 9 | 26.43 | 2.65 | 0.85 | 0.64 |
Table 7. Analysis results of Wang et al.'s [21] data: prediction of RFS by LASSO and elastic net followed by stepwise selection, compared with repeated sieving.

| | | | | C-Index | |
|---|---|---|---|---|---|
| Method | # Selected Features | Training | Validation | Training | Validation |
| LASSO-stepwise | 8 | 24.45 | 0.23 | 0.84 | 0.54 |
| Elastic Net-stepwise | 10 | 26.65 | 3.78 | 0.87 | 0.68 |
| Repeated Sieving | 9 | 26.43 | 2.65 | 0.85 | 0.64 |

| Method | Selected Features |
|---|---|
| LASSO-stepwise | 219312_s_at, 219408_at, 218270_at, 212431_at, 212900_at, 207763_at, 201598_s_at, 212898_at |
| Elastic Net-stepwise | 219312_s_at, 203218_at, 216822_x_at, 219478_at, 212990_at, 205239_at, 219116_s_at, 214592_s_at, 212431_at, 217840_at |
| Repeated Sieving | 219312_s_at, 203218_at, 216822_x_at, 219478_at, 209524_at, 211004_s_at, 206012_at, 204991_s_at, 205551_at |
Table 8. Multivariate Cox regression on RFS using the training set of Wang et al.'s [21] data.

| LASSO-Stepwise | | | Elastic Net-Stepwise | | |
|---|---|---|---|---|---|
| Feature | Coef. | p-Value | Feature | Coef. | p-Value |
| 219312_s_at | | | 219312_s_at | | |
| 219408_at | | | 203218_at | | |
| 218270_at | | | 216822_x_at | | |
| 212431_at | | | 219478_at | | |
| 212900_at | | | 212990_at | | |
| 207763_at | | | 205239_at | | |
| 201598_s_at | | | 219116_s_at | | |
| 212898_at | | | 214592_s_at | | |
| | | | 212431_at | | |
| | | | 217840_at | | |
| Repeated Sieving | | | | | |
| Feature | Coef. | p-Value | | | |
| 219312_s_at | | | | | |
| 203218_at | | | | | |
| 216822_x_at | | | | | |
| 219478_at | | | | | |
| 209524_at | | | | | |
| 211004_s_at | | | | | |
| 206012_at | | | | | |
| 204991_s_at | | | | | |
| 205551_at | | | | | |
References
1. Engelhard, M.M.; Navar, A.M.; Pencina, M.J. Incremental Benefits of Machine Learning—When Do We Need a Better Mousetrap?. JAMA Cardiol.; 2021; 6, pp. 621-623. [DOI: https://dx.doi.org/10.1001/jamacardio.2021.0139] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/33688913]
2. Tibshirani, R. Regression Shrinkage and Selection via the Lasso. J. R. Stat. Soc. Ser. B (Methodol.); 1996; 58, pp. 267-288. [DOI: https://dx.doi.org/10.1111/j.2517-6161.1996.tb02080.x]
3. Liu, L.; Gao, J.; Beasley, G.; Jung, S.H. LASSO and Elastic Net Tend to Over-Select Features. Mathematics; 2023; 11, 3738. [DOI: https://dx.doi.org/10.3390/math11173738]
4. Lee, J.; Sohn, I.; Do, I.G.; Kim, K.M.; Park, S.H.; Park, J.O.; Park, Y.S.; Lim, H.Y.; Sohn, T.S.; Bae, J.M. et al. Nanostring-based multigene assay to predict recurrence for gastric cancer patients after surgery. PLoS ONE; 2014; 9, e90133. [DOI: https://dx.doi.org/10.1371/journal.pone.0090133]
5. Zou, H.; Hastie, T. Regularization and Variable Selection via the Elastic Net. J. R. Stat. Soc. Ser. B (Stat. Methodol.); 2005; 67, pp. 301-320. [DOI: https://dx.doi.org/10.1111/j.1467-9868.2005.00503.x]
6. Gail, M.H.; Wieand, S.; Piantadosi, S. Biased estimates of treatment effect in randomized experiments with nonlinear regressions and omitted covariates. Biometrika; 1984; 71, pp. 431-444. [DOI: https://dx.doi.org/10.1093/biomet/71.3.431]
7. Kuhle, S.; Maguire, B.; Zhang, H.; Hamilton, D.; Allen, A.C.; Joseph, K.S.; Allen, V.M. Comparison of logistic regression with machine learning methods for the prediction of fetal growth abnormalities: A retrospective cohort study. BMC Pregnancy Childbirth; 2018; 18, 333. [DOI: https://dx.doi.org/10.1186/s12884-018-1971-2]
8. Christodoulou, E.; Ma, J.; Collins, G.S.; Steyerberg, E.W.; Verbakel, J.Y.; Van Calster, B. A systematic review shows no performance benefit of machine learning over logistic regression for clinical prediction models. J. Clin. Epidemiol.; 2019; 110, pp. 12-22. [DOI: https://dx.doi.org/10.1016/j.jclinepi.2019.02.004]
9. Piros, P.; Ferenci, T.; Fleiner, R.; Andréka, P.; Fujita, H.; Főző, L.; Kovács, L.; Jánosi, A. Comparing machine learning and regression models for mortality prediction based on the Hungarian Myocardial Infarction Registry. Knowl.-Based Syst.; 2019; 179, pp. 1-7. [DOI: https://dx.doi.org/10.1016/j.knosys.2019.04.027]
10. Khera, R.; Haimovich, J.; Hurley, N.C.; McNamara, R.; Spertus, J.A.; Desai, N. Use of Machine Learning Models to Predict Death After Acute Myocardial Infarction. JAMA Cardiol.; 2021; 6, pp. 633-641. [DOI: https://dx.doi.org/10.1001/jamacardio.2021.0122]
11. Song, X.; Liu, X.; Liu, F.; Wang, C. Comparison of machine learning and logistic regression models in predicting acute kidney injury: A systematic review and meta-analysis. Int. J. Med. Inform.; 2021; 151, 104484. [DOI: https://dx.doi.org/10.1016/j.ijmedinf.2021.104484] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/33991886]
12. Jing, B.; Boscardin, W.J.; Deardorff, W.J.; Jeon, S.Y.; Lee, A.K.; Donovan, A.L.; Lee, S.J. Comparing Machine Learning to Regression Methods for Mortality Prediction Using Veterans Affairs Electronic Health Record Clinical Data. Med. Care; 2022; 60, pp. 470-479. [DOI: https://dx.doi.org/10.1097/MLR.0000000000001720] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/35352701]
13. Kattan, M.W. Comparison of Cox regression with other methods for determining prediction models and nomograms. J. Urol.; 2003; 170, pp. S6-S10. [DOI: https://dx.doi.org/10.1097/01.ju.0000094764.56269.2d] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/14610404]
14. Gauthier, P.A.; Scullion, W.; Berry, A. Sound quality prediction based on systematic metric selection and shrinkage: Comparison of stepwise, lasso, and elastic-net algorithms and clustering preprocessing. J. Sound Vib.; 2017; 400, pp. 134-153. [DOI: https://dx.doi.org/10.1016/j.jsv.2017.03.025]
15. Kumar, S.; Attri, S.D.; Singh, K.K. Comparison of Lasso and stepwise regression technique for wheat yield prediction. J. Agrometeorol.; 2019; 21, pp. 188-192. [DOI: https://dx.doi.org/10.54386/jam.v21i2.231]
16. Hastie, T.; Tibshirani, R.; Tibshirani, R. Best Subset, Forward Stepwise or Lasso? Analysis and Recommendations Based on Extensive Comparisons. Stat. Sci.; 2020; 35, pp. 579-592. [DOI: https://dx.doi.org/10.1214/19-STS733]
17. Tolles, J.; Meurer, W.J. Logistic Regression: Relating Patient Characteristics to Outcomes. JAMA; 2016; 316, pp. 533-534. [DOI: https://dx.doi.org/10.1001/jama.2016.7653]
18. Cox, D.R. Regression Models and Life-Tables. J. R. Stat. Soc. Ser. B (Methodol.); 1972; 34, pp. 187-220. [DOI: https://dx.doi.org/10.1111/j.2517-6161.1972.tb00899.x]
19. Tibshirani, R. The lasso Method for Variable Selection in the Cox Model. Stat. Med.; 1997; 16, pp. 385-395. [DOI: https://dx.doi.org/10.1002/(SICI)1097-0258(19970228)16:4<385::AID-SIM380>3.0.CO;2-3]
20. Simon, R.; Radmacher, M.D.; Dobbin, K.; McShane, L.M. Pitfalls in the use of DNA microarray data for diagnostic and prognostic classification. J. Natl. Cancer Inst.; 2003; 95, pp. 14-18. [DOI: https://dx.doi.org/10.1093/jnci/95.1.14]
21. Wang, Y.; Klijn, J.G.; Zhang, Y.; Sieuwerts, A.M.; Look, M.P.; Yang, F.; Talantov, D.; Timmermans, M.; Meijer-van Gelder, M.E.; Yu, J. et al. Gene-expression profiles to predict distant metastasis of lymph-node-negative primary breast cancer. Lancet; 2005; 365, pp. 671-679. [DOI: https://dx.doi.org/10.1016/S0140-6736(05)17947-1] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/15721472]
22. Yamada, M.; Koh, T.; Iwata, T.; Shawe-Taylor, J.; Kaski, S. Localized Lasso for High-Dimensional Regression. Proc. Mach. Learn. Res.; 2017; 54, pp. 325-333.
23. Liang, J.; Wang, C.; Zhang, D.; Xie, Y.; Zeng, Y.; Li, T.; Zuo, Z.; Ren, J.; Zhao, Q. VSOLassoBag: A variable-selection oriented LASSO bagging algorithm for biomarker discovery in omic-based translational research. J. Genet. Genom.; 2023; 50, pp. 151-162. [DOI: https://dx.doi.org/10.1016/j.jgg.2022.12.005] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/36608930]
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
Background: The prediction of patients’ outcomes is a key component in personalized medicine. Oftentimes, a prediction model is developed using a large number of candidate predictors, called high-dimensional data, including genomic data, lab tests, electronic health records, etc. Variable selection, also called dimension reduction, is a critical step in developing a prediction model using high-dimensional data. Methods: In this paper, we compare the variable selection and prediction performance of popular machine learning (ML) methods with our proposed method. LASSO is a popular ML method that selects variables by imposing an L1 penalty on the regression coefficients.