Publication
IEEE TKDE
Paper
A Generalized Hierarchical Multi-Latent Space Model for Heterogeneous Learning
Abstract
In many real-world applications, such as image annotation, gene function prediction, and insider threat detection, data collected from heterogeneous sources often exhibit multiple types of heterogeneity, such as task heterogeneity, view heterogeneity, and label heterogeneity. To address this problem, we propose a Hierarchical Multi-Latent Space (HiMLS) learning framework to jointly model these three types of heterogeneity. The basic idea is to learn a hierarchical multi-latent space through which we can simultaneously leverage task relatedness, view consistency, and label correlations to improve learning performance. We first propose a multi-latent space approach to model the complex heterogeneity, which is then used as a building block to stack up a multi-layer structure for learning the hierarchical multi-latent space. In this way, more abstract concepts are gradually learned at the higher levels. We present two instantiated models of the generalized framework using different divergence measures. Two-phase learning algorithms are used to train the multi-layer models. We derive the multiplicative update rules for pre-training and fine-tuning in each model, and prove the convergence and correctness of the update methods. The effectiveness of the proposed approach is verified on various data sets.
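The abstract refers to multiplicative update rules for learning the latent factors. The paper's actual HiMLS updates are not reproduced here; the sketch below only illustrates the general family such rules belong to, using standard Lee-Seung multiplicative updates for a single non-negative matrix factorization X ≈ UV under a Frobenius-norm objective. The function name `nmf_multiplicative` and all parameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumption: a generic single-matrix factorization, NOT the
# paper's HiMLS model) of multiplicative updates of the kind the abstract
# mentions: factor a non-negative matrix X into non-negative U and V.
import numpy as np

def nmf_multiplicative(X, k, n_iter=200, eps=1e-9, seed=0):
    """Factor a non-negative (n x m) matrix X into U (n x k) and V (k x m)."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    U = rng.random((n, k)) + eps
    V = rng.random((k, m)) + eps
    for _ in range(n_iter):
        # Lee-Seung multiplicative updates: each step keeps the factors
        # non-negative and does not increase the error ||X - U V||_F^2.
        V *= (U.T @ X) / (U.T @ U @ V + eps)
        U *= (X @ V.T) / (U @ V @ V.T + eps)
    return U, V

if __name__ == "__main__":
    X = np.abs(np.random.default_rng(1).random((50, 30)))
    U, V = nmf_multiplicative(X, k=5)
    print("reconstruction error:", np.linalg.norm(X - U @ V))
```

In hierarchical multi-latent space methods, updates of this multiplicative form are typically applied layer by layer during pre-training and then jointly refined during fine-tuning, which is the two-phase scheme the abstract describes at a high level.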