We propose new PAC-Bayes bounds for the risk of the weighted majority vote that
depend on the mean and variance of the error of its associated Gibbs classifier. We
show that these bounds can be smaller than the risk of the Gibbs classifier and can
be arbitrarily close to zero even if the risk of the Gibbs classifier is close to 1/2.
Moreover, we show that these bounds can be uniformly estimated on the training
data for all possible posteriors Q. Finally, they can be improved by using a
large sample of unlabelled data.
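To illustrate how a bound on the majority-vote risk can depend only on the mean and variance of the margin (and can beat the Gibbs risk), here is a minimal sketch using the one-sided Chebyshev (Cantelli) inequality on hypothetical margin samples; the data and the Cantelli form are illustrative assumptions, not the paper's actual PAC-Bayes bound.

```python
import numpy as np

# Hypothetical per-example margins of a Q-weighted majority vote,
# M(x, y) = E_{h~Q}[ y * h(x) ] with labels y in {-1, +1}.
# These samples are synthetic, for illustration only.
rng = np.random.default_rng(0)
margins = rng.normal(loc=0.1, scale=0.3, size=10_000)

mu = margins.mean()   # mean of the margin distribution
var = margins.var()   # variance of the margin distribution

# The majority vote errs exactly when the margin is <= 0.  For mu > 0,
# Cantelli's inequality gives
#     Pr[M <= 0] <= var / (var + mu^2),
# a risk bound driven purely by the first two moments of the margin.
bound = var / (var + mu**2) if mu > 0 else 1.0

empirical_risk = (margins <= 0).mean()
print(f"Cantelli-style bound on MV risk: {bound:.3f}")
print(f"empirical MV risk:               {empirical_risk:.3f}")
```

Note that the bound uses only two moments, which is what makes it uniformly estimable from data; the quality of the bound degrades as the margin variance grows relative to its mean.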