Publication
WI 2010
Conference paper

DSP: Robust semi-supervised dimensionality reduction using dual subspace projections

Abstract

High-dimensional data often lead to learning deficiencies and computational difficulties. We present a novel semi-supervised dimensionality reduction technique that embeds high-dimensional data in an optimal low-dimensional subspace, learned from a few user-supplied constraints as well as the structure of the input data. We study two types of constraints, which indicate whether or not a pair of data points originates from the same class. The data partitions implied by the two types of constraints may conflict with each other. To resolve this, our method projects the data into two different subspaces, one in the kernel space and one in the original input space, each designed to enforce one type of constraint. The projections in the two spaces interact, and the data are embedded in an optimal low-dimensional subspace where the constraints are maximally satisfied. Besides the constraints, our method also preserves the intrinsic data structure, so that points that are nearby/far apart in the original space remain near to/far from each other in the embedded space. Compared to existing techniques, our method has the following advantages: 1) It can benefit from constraints even when only a few are available. 2) It is robust and does not suffer from overfitting. 3) It handles nonlinearly separable data, yet learns a linear data transformation. Thus the method generalizes easily to new data points and handles large data sets efficiently. Experiments on real data from multiple domains clearly demonstrate that significant improvements in learning accuracy can be achieved after dimensionality reduction by employing only a few constraints. © 2010 IEEE.
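
To illustrate the general idea of constraint-driven linear dimensionality reduction described above, the sketch below implements a simpler, single-subspace SSDR-style projection (not the paper's dual-subspace DSP method): pairwise must-link/cannot-link constraints reweight a scatter-like matrix whose top eigenvectors give a linear projection that pulls same-class pairs together, pushes different-class pairs apart, and otherwise preserves the global data structure. Function and parameter names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def ssdr_projection(X, must_link, cannot_link, n_components=2, alpha=1.0, beta=1.0):
    """SSDR-style linear projection from pairwise constraints (illustrative sketch).

    X           : (n_samples, n_features) data matrix
    must_link   : list of (i, j) index pairs known to share a class
    cannot_link : list of (i, j) index pairs known to be from different classes
    """
    n = X.shape[0]
    # Uniform baseline weights preserve the global structure (overall variance).
    S = np.full((n, n), 1.0 / (n * n))
    # Cannot-link pairs get extra positive weight: push them apart in the embedding.
    for i, j in cannot_link:
        S[i, j] += alpha / max(len(cannot_link), 1)
        S[j, i] += alpha / max(len(cannot_link), 1)
    # Must-link pairs get negative weight: pull them together in the embedding.
    for i, j in must_link:
        S[i, j] -= beta / max(len(must_link), 1)
        S[j, i] -= beta / max(len(must_link), 1)
    # Graph Laplacian of the symmetric weight matrix.
    L = np.diag(S.sum(axis=1)) - S
    # Projection directions are the top eigenvectors of X^T L X.
    M = X.T @ L @ X
    eigvals, eigvecs = np.linalg.eigh(M)
    W = eigvecs[:, np.argsort(eigvals)[::-1][:n_components]]
    return W  # new points are embedded with X_new @ W

# Toy usage: two noisy 10-dimensional clusters and a handful of constraints.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 10)), rng.normal(3, 1, (50, 10))])
W = ssdr_projection(X, must_link=[(0, 1), (60, 61)], cannot_link=[(0, 60), (1, 75)])
X_low = X @ W
```

Because the learned transformation W is linear, unseen points are embedded by a single matrix product, which is what makes this family of methods cheap to apply to large data sets and easy to generalize to new data.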

Date

13 Dec 2010
