We introduce a new neural network learning algorithm suited to the context of domain adaptation,
in which data at training and test time come from similar but different distributions. Our
algorithm is inspired by theory on domain adaptation suggesting that, for effective domain
transfer to be achieved, predictions must be made based on a data representation that cannot
discriminate between the training (source) and test (target) domains. We propose a training
objective that implements this idea in the context of a neural network, whose hidden layer is
trained to be predictive of the classification target, but uninformative as to the domain of the
input. Our experiments on a sentiment analysis classification benchmark, where the target data
available at training time is unlabeled, show that our neural network for domain adaptation
outperforms both a standard neural network and an SVM, each trained on
input features extracted with the state-of-the-art marginalized stacked denoising autoencoders
of Chen et al. (2012).
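The training objective described above can be sketched in code. The following is a minimal NumPy illustration, not the authors' implementation: a shared hidden layer feeds a label predictor (trained on labeled source data only) and a domain classifier (trained on both domains), and the hidden layer's update subtracts the domain gradient so the representation becomes uninformative about the domain. The network sizes, the reversal weight `lam`, the learning rate, and the synthetic data are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic data: source and target share the label rule (sign of the
# first feature) but the target is shifted -- a crude stand-in for a
# domain gap. Target labels are never used, as in the paper's setting.
n, d, h = 200, 5, 8
Xs = rng.normal(size=(n, d))
ys = (Xs[:, 0] > 0).astype(float)                # labels: source only
Xt = rng.normal(size=(n, d)) + 0.5               # unlabeled target
X = np.vstack([Xs, Xt])
dom = np.concatenate([np.zeros(n), np.ones(n)])  # 0 = source, 1 = target

W1 = rng.normal(scale=0.1, size=(d, h))  # shared feature extractor
wy = rng.normal(scale=0.1, size=h)       # label predictor head
wd = rng.normal(scale=0.1, size=h)       # domain classifier head
lam, lr = 0.1, 0.05                      # assumed hyperparameters

for step in range(500):
    H = np.tanh(X @ W1)           # shared hidden representation
    Hs = H[:n]                    # source portion (has labels)
    py = sigmoid(Hs @ wy)         # label predictions (source only)
    pdom = sigmoid(H @ wd)        # domain predictions (both domains)

    # Cross-entropy gradients at the two output heads.
    gy = (py - ys) / n
    gd = (pdom - dom) / (2 * n)

    grad_wy = Hs.T @ gy
    grad_wd = H.T @ gd

    # Backprop into the shared layer: add the label-loss gradient,
    # SUBTRACT lam times the domain-loss gradient, so the features
    # are pushed toward being non-discriminative between domains.
    dH = np.zeros_like(H)
    dH[:n] += np.outer(gy, wy)
    dH -= lam * np.outer(gd, wd)
    dH *= (1.0 - H**2)            # tanh derivative
    grad_W1 = X.T @ dH

    W1 -= lr * grad_W1
    wy -= lr * grad_wy
    wd -= lr * grad_wd            # the domain head itself still learns

label_loss = float(-np.mean(
    ys * np.log(py + 1e-9) + (1 - ys) * np.log(1 - py + 1e-9)))
print(round(label_loss, 3))
```

The two heads pull the shared layer in opposite directions with respect to the domain loss: the domain classifier descends on it while the feature extractor ascends, which is the adversarial trade-off the objective encodes.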