This paper is about variable selection with the random forests algorithm in the presence of correlated predictors. In high-dimensional regression or classification frameworks, variable selection is a difficult task that becomes even more challenging when the predictors are highly correlated. First, we provide a theoretical study of the permutation importance measure for an additive regression model. This allows us to describe how the correlation between predictors impacts the permutation importance. Our results motivate the use of the recursive feature elimination (RFE) algorithm for variable selection in this context. This algorithm recursively eliminates variables, using the permutation importance measure as a ranking criterion. Next, various simulation experiments illustrate the efficiency of the RFE algorithm for selecting a small number of variables while keeping a good prediction error. Finally, the selection algorithm is tested on the Landsat Satellite data from the UCI Machine Learning Repository.
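As a rough illustration of the procedure described above, here is a minimal sketch of recursive feature elimination driven by random-forest permutation importance. It uses scikit-learn's `RandomForestRegressor` and `permutation_importance`; the synthetic dataset, the train/validation split, and the one-variable-per-step elimination schedule are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: RFE with permutation importance as the ranking criterion.
# Assumptions (not from the paper): scikit-learn estimators, a synthetic
# regression dataset, and elimination of one variable per iteration.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=20, n_informative=5, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

active = list(range(X.shape[1]))   # indices of variables still in the model
history = []                       # (n_variables, validation MSE) at each step

while len(active) > 1:
    forest = RandomForestRegressor(n_estimators=200, random_state=0)
    forest.fit(X_train[:, active], y_train)

    # Validation error of the current variable subset.
    mse = np.mean((forest.predict(X_val[:, active]) - y_val) ** 2)
    history.append((len(active), mse))

    # Permutation importance on held-out data ranks the remaining variables.
    imp = permutation_importance(
        forest, X_val[:, active], y_val, n_repeats=10, random_state=0
    )

    # Recursively eliminate the least important variable and refit.
    worst = int(np.argmin(imp.importances_mean))
    del active[worst]

# Choose the subset size with the smallest validation error.
best = min(history, key=lambda t: t[1])
print(f"best subset size: {best[0]} variables, validation MSE: {best[1]:.3f}")
```

Note that the forest is refitted and the importances are recomputed after each elimination, so the ranking adapts as correlated variables drop out; the final subset size is read off the validation-error curve rather than fixed in advance.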