Poelitz/etal/2016a: Interpretable Domain Adaptation via Optimization over the Stiefel Manifold

Bibtype Article
Bibkey Poelitz/etal/2016a
Author Poelitz, Christian and Duivesteijn, Wouter and Morik, Katharina
Ls8autor Morik, Katharina
Ls8autor Pölitz, Christian
Title Interpretable Domain Adaptation via Optimization over the Stiefel Manifold
Journal Machine Learning
Volume 104
Number 2-3
Pages 315-336
Abstract In domain adaptation, the goal is to find common ground between two,
potentially differently distributed, data sets. By finding common concepts
present in two sets of words pertaining to different domains, one could
transfer a classifier trained on one domain to the other
domain. We propose a solution to the domain adaptation task by efficiently
solving an optimization problem through Stochastic Gradient Descent. We
provide update rules that allow us to run Stochastic Gradient Descent
directly on a matrix manifold: the steps compel the solution to stay on the
Stiefel manifold. This manifold encompasses projection matrices of word
vectors onto low-dimensional latent feature representations, which allows us
to interpret the results: the rotation magnitude of the word vector
projection for a given word corresponds to the importance of that word
towards making the adaptation. Beyond this interpretability benefit,
experiments show that the Stiefel manifold method performs better than
state-of-the-art methods.
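The core idea in the abstract, running Stochastic Gradient Descent while compelling the iterate to stay on the Stiefel manifold (matrices with orthonormal columns), can be sketched as follows. This is a minimal illustration, not the authors' exact update rules: it assumes a generic QR-based retraction and the standard tangent-space projection for the Stiefel manifold, with NumPy as the only dependency.

```python
import numpy as np

def stiefel_retract(X):
    """Map an arbitrary matrix back onto the Stiefel manifold via QR.

    The returned matrix has orthonormal columns. Signs are fixed using
    the diagonal of R so the retraction is deterministic.
    """
    Q, R = np.linalg.qr(X)
    return Q * np.sign(np.diag(R))

def riemannian_sgd_step(X, G, lr=0.1):
    """One (hypothetical) manifold SGD step at point X with Euclidean gradient G.

    1. Project G onto the tangent space of the Stiefel manifold at X:
       G - X * sym(X^T G), where sym(A) = (A + A^T) / 2.
    2. Take a gradient step in the tangent direction.
    3. Retract the result back onto the manifold.
    """
    XtG = X.T @ G
    sym = (XtG + XtG.T) / 2.0
    tangent = G - X @ sym
    return stiefel_retract(X - lr * tangent)
```

A quick usage check: starting from a random point on the manifold and stepping along a random gradient, the iterate keeps orthonormal columns (X^T X = I), which is the constraint the abstract's update rules enforce.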
Year 2016
Projekt SFB876-A1