From multiple views to single view: A neural network approach
In many learning problems, data is obtained from multiple sources, so the features can be naturally partitioned into multiple views or feature sets. For example, a media clip has both audio and video features. If we concatenate these features into a single view, we lose some of the statistical properties exhibited by the individual views. Since conventional machine learning algorithms do not handle multiple views, Multi-View Learning (MVL) approaches such as co-training and Canonical Correlation Analysis (CCA) were introduced. In this work, we propose an approach to multi-view learning based on a recently proposed autoencoder model called the Predictive AutoEncoder (PAE). The standard PAE works with only two views; we propose ways to generalize the PAE to handle more than two views. Experimental results show that the proposed approach outperforms existing MVL approaches such as co-training and CCA.