We consider a general adaptive L2-regularized optimization problem $\hat{\beta}(\lambda) = \arg\min_{\beta;\Lambda} \ell(y; \beta) + \lambda \beta^{T} \Lambda W \beta$, where $\ell$ is a loss function and $\Lambda$ and $W$ are two diagonal matrices. We show that with an appropriate choice of $\lambda$ and $\Lambda$, if $\ell$ is differentiable, then the adaptive L2 penalty term above is equivalent to an adaptive L1 penalty, an adaptive L2 penalty, or a combined adaptive L1 and L2 penalty. The method is therefore data-driven, automatically choosing among the three penalty forms. We consider in detail the case where $\ell$ is a negative log-likelihood, which covers generalized linear models, and develop two algorithms: a Newton-Raphson method for the general approach and a sequential minimal optimization (SMO) method for the case $p \gg n$. The efficacy of our approach is illustrated by simulations, comparisons with other methods, and real data analysis.
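To make the setup concrete, below is a minimal sketch of the penalized objective and a Newton-Raphson fit for the logistic-regression special case, one member of the generalized linear model family the report covers. The function name adaptive_l2_logistic, the identity defaults for the diagonals of $\Lambda$ and $W$, and the treatment of $\Lambda$ as fixed are illustrative assumptions; the report's own algorithms choose $\lambda$ and $\Lambda$ adaptively, and this is not the authors' implementation.

```python
import numpy as np

def adaptive_l2_logistic(X, y, lam, Lambda_diag=None, W_diag=None,
                         tol=1e-8, max_iter=100):
    """Newton-Raphson sketch for minimizing
        l(y; beta) + lam * beta^T Lambda W beta,
    where l is the logistic negative log-likelihood.
    Lambda_diag and W_diag hold the diagonals of the (diagonal)
    matrices Lambda and W; both default to the identity here.
    """
    n, p = X.shape
    Lambda_diag = np.ones(p) if Lambda_diag is None else Lambda_diag
    W_diag = np.ones(p) if W_diag is None else W_diag
    pen = lam * Lambda_diag * W_diag          # diagonal of lam * Lambda * W
    beta = np.zeros(p)
    for _ in range(max_iter):
        mu = 1.0 / (1.0 + np.exp(-(X @ beta)))   # logistic mean
        # Gradient and Hessian of the penalized objective;
        # the quadratic penalty contributes 2*pen*beta and 2*diag(pen).
        grad = X.T @ (mu - y) + 2.0 * pen * beta
        wts = mu * (1.0 - mu)                     # IRLS weights
        hess = X.T @ (X * wts[:, None]) + 2.0 * np.diag(pen)
        step = np.linalg.solve(hess, grad)
        beta -= step
        if np.max(np.abs(step)) < tol:
            break
    return beta
```

With a design matrix X (n by p) and a binary response y, a call such as beta_hat = adaptive_l2_logistic(X, y, lam=0.1) returns the penalized estimate; since $\Lambda W$ is diagonal, the gradient and Hessian of the penalty $\lambda \beta^{T} \Lambda W \beta$ reduce to the 2.0 * pen terms used above.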

TR Number: 2008-02
Xiangrong Yin, Junshan Qiu

To request a copy of this report, please email us. We will send you a PDF copy if one is available.