
Yongdai Kim

Seoul National University, South Korea

Sparse regularization methods for high-dimensional regression have received much attention recently as an alternative to subset selection methods. Examples include the lasso (Tibshirani 1996), bridge regression (Frank and Friedman 1993), and the SCAD (Fan and Li 2001), to name just a few. An advantage of sparse regularization methods is that they give a stable estimator with automatic variable selection, so the resulting estimator performs well in prediction. Sparse regularization methods also have many desirable properties when the true model is sparse.

However, sparse regularization methods have several disadvantages, as discussed by Zou and Hastie (2005). First, when p > n, the solution has at most n nonzero coefficients. Second, if there is a group of highly correlated covariates, the solution usually takes one covariate from the group, with no control over which one is selected. Third, when covariates are highly correlated, the prediction performance of sparse solutions is empirically dominated by that of non-sparse solutions such as ridge regression (Tibshirani 1996), which use all of the highly correlated covariates. Hence, there is a need for solutions less sparse than those produced by the aforementioned regularization methods.

To obtain a less sparse solution, Zou and Hastie (2005) proposed the elastic net penalty, a linear combination of the lasso and ridge penalties. Friedman and Popescu (2004) proposed gradient directed regularization, a modified gradient descent method in which more than one predictor variable is updated at each iteration. Both methods have limitations. The elastic net requires rescaling the solution to avoid overshrinkage; although Zou and Hastie (2005) proposed a rescaling factor heuristically for linear regression, it is not clear how to rescale the solution for other problems such as logistic regression. The gradient directed regularization estimator is not defined as the minimizer of a penalized empirical risk, which makes its properties difficult to study.

In this talk, we propose a new regularization method, called sparse regression with incentive (SRI), to overcome these deficiencies of the elastic net and gradient directed regularization. The advantages of SRI are that its estimator is defined as the minimizer of a penalized empirical risk and that it does not require post-hoc rescaling as the elastic net does. Moreover, SRI can incorporate prior information about the group structure of the covariates.

This is a Joint Colloquium with the Department of Epidemiology and Biostatistics.
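For readers unfamiliar with the elastic net penalty mentioned above, the following is a minimal numpy sketch of the naive elastic net fitted by coordinate descent, together with the (1 + lam2) rescaling for linear regression whose heuristic nature the abstract criticizes. The function names, the lam2/2 penalty convention, and the fixed iteration count are illustrative choices, not code from the talk; SRI itself is not shown, since its penalty is not specified in the abstract.

    import numpy as np

    def soft_threshold(z, t):
        # Soft-thresholding operator: sign(z) * max(|z| - t, 0).
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def elastic_net_cd(X, y, lam1, lam2, n_iter=500):
        # Coordinate descent for the (naive) elastic net objective
        #   (1/2n) ||y - X b||^2 + lam1 ||b||_1 + (lam2/2) ||b||_2^2,
        # assuming the columns of X are standardized. lam1 is the lasso
        # part (sparsity); lam2 is the ridge part, which spreads weight
        # across groups of correlated covariates.
        n, p = X.shape
        b = np.zeros(p)
        col_sq = (X ** 2).sum(axis=0) / n  # (1/n) ||x_j||^2 per column
        r = y - X @ b                      # current residual
        for _ in range(n_iter):
            for j in range(p):
                r = r + X[:, j] * b[j]     # drop x_j's contribution
                rho = X[:, j] @ r / n      # univariate fit for x_j
                b[j] = soft_threshold(rho, lam1) / (col_sq[j] + lam2)
                r = r - X[:, j] * b[j]     # restore with updated b_j
        # Zou and Hastie (2005) rescale the naive solution by (1 + lam2)
        # in linear regression to undo the double shrinkage. As the
        # abstract notes, this correction is heuristic and has no obvious
        # analogue for other problems such as logistic regression.
        return (1.0 + lam2) * b

With two nearly identical covariates, lam2 > 0 makes the fitted coefficients split across both columns, whereas lam1 alone (lam2 = 0, pure lasso) tends to put all the weight on one of them, which illustrates the grouping behavior the abstract describes.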
