New combination coefficients for AdaBoost algorithms

Boosting algorithms are greedy methods for constructing linear combinations of base hypotheses. The algorithm maintains a distribution over the training examples, and this distribution is updated according to the combination coefficient assigned to each base hypothesis. The main difference among AdaBoost variants lies in the combination coefficient chosen at each trial. In this paper we derive new combination coefficients for AdaBoost algorithms by introducing a new upper bound on the potential function and minimizing it. Experimental results show that the new coefficients compare favorably with the original one and consistently achieve a larger margin; the improvement is especially pronounced on large training problems with a small optimal margin.
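For context, the following is a minimal sketch of the generic AdaBoost loop, written in Python with hypothetical helper names (`weak_learner`, `coefficient_fn`); it shows where the combination coefficient enters the distribution update. It uses the classical coefficient 0.5 * ln((1 - eps_t) / eps_t), not the new coefficients proposed in the paper, which would be substituted for `coefficient_fn`.

```python
import numpy as np

def adaboost(X, y, weak_learner, T, coefficient_fn=None):
    """Generic AdaBoost loop.

    `coefficient_fn` maps the weighted error eps_t of the current base
    hypothesis to its combination coefficient alpha_t. The classical
    choice is 0.5 * ln((1 - eps_t) / eps_t); alternative coefficients
    (such as those studied in the paper) would be plugged in here.
    Labels y are assumed to be in {-1, +1}.
    """
    n = len(y)
    D = np.full(n, 1.0 / n)          # distribution over training examples
    hypotheses, alphas = [], []

    if coefficient_fn is None:
        # classical AdaBoost coefficient (not the paper's new one)
        coefficient_fn = lambda eps: 0.5 * np.log((1.0 - eps) / eps)

    for t in range(T):
        h = weak_learner(X, y, D)        # fit base hypothesis under D
        pred = h(X)                      # predictions in {-1, +1}
        eps = np.sum(D * (pred != y))    # weighted training error
        if eps == 0.0 or eps >= 0.5:
            break
        alpha = coefficient_fn(eps)      # combination coefficient
        D = D * np.exp(-alpha * y * pred)  # reweight the examples
        D /= D.sum()                       # renormalize the distribution
        hypotheses.append(h)
        alphas.append(alpha)

    def final(Xq):
        # sign of the weighted vote over all base hypotheses
        return np.sign(sum(a * h(Xq) for a, h in zip(alphas, hypotheses)))

    return final
```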