Bagging Regularizes

Intuitively, we expect that averaging, or bagging, different regressors with low correlation should smooth their behavior and act somewhat like regularization. In this note we make this intuition precise. Using an almost classical definition of stability, we prove that a certain form of averaging provides generalization bounds with a rate of convergence of the same order as Tikhonov regularization, comparable to that of fashionable RKHS-based learning algorithms.

This report describes research done within the Center for Biological & Computational Learning, which is part of the McGovern Institute, the Department of Brain & Cognitive Sciences, and the Artificial Intelligence Laboratory at the Massachusetts Institute of Technology. This research was sponsored by grants from: Office of Naval Research (DARPA) under contract No. N00014-001-0907, National Science Foundation (ITR) under contract No. IIS-0085836, National Science Foundation (KDI) under contract No. DMS-9872936, and National Science Foundation under contract No. IIS-9800032. Additional support was provided by: Central Research Institute of Electric Power Industry, Eastman Kodak Company, DaimlerChrysler AG, Compaq, Honda R&D Co., Ltd., Komatsu Ltd., NEC Fund, Siemens Corporate Research, Inc., and The Whitaker Foundation.
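To illustrate the averaging the abstract refers to, the following is a minimal sketch, not the paper's construction: it bags a deliberately high-variance base regressor (a high-degree polynomial least-squares fit, an illustrative choice) by averaging fits over bootstrap resamples, which is the classical bagging recipe whose smoothing effect the note analyzes. All function names and parameters here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data: y = sin(x) + noise.
n = 200
x = rng.uniform(-3, 3, size=n)
y = np.sin(x) + 0.3 * rng.normal(size=n)

def fit_poly(x_train, y_train, degree=8):
    """Least-squares polynomial fit: a deliberately high-variance base regressor."""
    return np.polyfit(x_train, y_train, degree)

def bagged_predict(x_train, y_train, x_test, n_bags=50, degree=8):
    """Average the predictions of regressors fit on bootstrap resamples.

    Resampling decorrelates the individual fits, so averaging them
    reduces variance -- the smoothing effect bagging is known for.
    """
    preds = np.zeros_like(x_test, dtype=float)
    for _ in range(n_bags):
        idx = rng.integers(0, len(x_train), size=len(x_train))
        coef = fit_poly(x_train[idx], y_train[idx], degree)
        preds += np.polyval(coef, x_test)
    return preds / n_bags

x_test = np.linspace(-3, 3, 100)
single = np.polyval(fit_poly(x, y), x_test)   # one high-variance fit
bagged = bagged_predict(x, y, x_test)         # bootstrap average of 50 fits

# Compare squared error against the noiseless target; the bagged
# predictor is typically smoother than any single bootstrap fit.
err_single = np.mean((single - np.sin(x_test)) ** 2)
err_bagged = np.mean((bagged - np.sin(x_test)) ** 2)
```

This demonstrates only the variance-reduction intuition; the note's actual result concerns stability-based generalization bounds for a particular form of averaging, not this specific estimator.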