Pascal Germain, Researcher in Machine Learning

This page is deprecated! My new webpage is here.


We show that convex KL-regularized objective functions are obtained from a PAC-Bayes risk bound when using convex loss functions for the stochastic Gibbs classifier that upper-bound the standard zero-one loss used for the weighted majority vote. By restricting ourselves to a class of posteriors that we call quasi-uniform, we propose a simple coordinate descent learning algorithm to minimize the proposed KL-regularized cost function. We show that standard ℓp-regularized objective functions currently in use, such as ridge regression and ℓp-regularized boosting, are obtained from a relaxation of the KL divergence between the quasi-uniform posterior and the uniform prior. We present numerical experiments where the proposed learning algorithm generally outperforms ridge regression and AdaBoost.
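
The coordinate descent algorithm is only summarized in the abstract above. The following is a minimal, hypothetical Python sketch of that general idea, assuming a quadratic convex surrogate of the zero-one loss and an ℓ2 penalty standing in for the relaxed KL divergence (in the spirit of the ridge-regression relaxation mentioned above). All function names, the bounded line-search interval, and the synthetic data are illustrative assumptions, not the paper's exact formulation.

    # A minimal, hypothetical sketch (not the paper's exact algorithm):
    # coordinate descent on a convex surrogate of the Gibbs risk, with an
    # l2 penalty standing in for the relaxed KL(posterior || uniform prior).
    import numpy as np
    from scipy.optimize import minimize_scalar

    def objective(w, H, y, reg):
        # H is an (m, n) matrix of voter outputs h_i(x_j) in {-1, +1};
        # the margin of example j is y_j * sum_i w_i * h_i(x_j).
        margins = y * (H @ w)
        # Quadratic hinge: a convex upper bound on the zero-one loss.
        loss = np.mean(np.maximum(0.0, 1.0 - margins) ** 2)
        # l2 term as a stand-in for the relaxed KL divergence (assumption).
        return loss + reg * np.dot(w, w)

    def coordinate_descent(H, y, reg=0.1, n_passes=50):
        n = H.shape[1]
        w = np.zeros(n)
        for _ in range(n_passes):
            for i in range(n):
                # Exact one-dimensional minimization along coordinate i.
                def along_coord(wi, i=i):
                    w_try = w.copy()
                    w_try[i] = wi
                    return objective(w_try, H, y, reg)
                w[i] = minimize_scalar(along_coord, bounds=(-5.0, 5.0),
                                       method="bounded").x
        return w

    # Illustrative usage on synthetic voters:
    rng = np.random.default_rng(0)
    H = rng.choice([-1.0, 1.0], size=(200, 10))
    y = np.sign(H[:, 0] + 0.5 * H[:, 1])
    w = coordinate_descent(H, y)

Each inner step solves a one-dimensional convex problem exactly, so the objective decreases monotonically across passes; the paper's actual algorithm operates on the KL-regularized cost over quasi-uniform posteriors rather than this simplified ℓ2 relaxation.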

[ Link to text file ]
