Subspace learning using consensus on the Grassmannian manifold
The high-dimensional structure of data can be explored, and task-specific representations obtained, using manifold learning and low-dimensional embedding approaches. However, uncertainties in the data and the sensitivity of these algorithms to parameter settings reduce the reliability of the resulting representations and make visualization and interpretation of data very challenging. A natural way to combat challenges pertinent to data visualization is to use linearized embedding approaches. In this paper, we explore approaches to improve the reliability of linearized, subspace embedding frameworks by learning a plurality of subspaces and computing their geometric mean on the Grassmannian manifold. Using the proposed algorithm, we build variants of popular unsupervised and supervised graph embedding algorithms, and show that we can infer high-quality embeddings, thereby significantly improving their usability in visualization and classification.
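To make the core idea concrete, the sketch below averages a set of k-dimensional subspaces of R^d via the extrinsic (flag) mean, a standard SVD-based surrogate for the Grassmannian geometric mean: stack the orthonormal bases side by side and keep the top-k left singular vectors. This is an illustrative assumption on our part; the consensus procedure proposed in the paper may differ in detail (e.g., it could use an intrinsic Karcher mean computed by iterative log/exp maps).

```python
import numpy as np

def flag_mean(bases, k):
    """Extrinsic (flag) mean of subspaces on the Grassmannian Gr(k, d).

    bases: list of d x k matrices, each with orthonormal columns,
           spanning the individual subspace estimates.
    Returns a d x k orthonormal basis for the consensus subspace:
    the top-k left singular vectors of the concatenated bases.
    """
    X = np.hstack(bases)  # d x (m * k) concatenation of all bases
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :k]

# Example: fuse three noisy estimates of the same 2-D subspace of R^5.
rng = np.random.default_rng(0)
true_basis = np.linalg.qr(rng.standard_normal((5, 2)))[0]
noisy = [np.linalg.qr(true_basis + 0.05 * rng.standard_normal((5, 2)))[0]
         for _ in range(3)]
mean_basis = flag_mean(noisy, k=2)
```

The cosines of the principal angles between `mean_basis` and `true_basis` (the singular values of `true_basis.T @ mean_basis`) should all be close to 1, indicating that the consensus subspace is closer to the ground truth than the individual noisy estimates.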