Publication
ICME 2002
Conference paper
New kernels for analyzing multimodal data in multimedia using kernel machines
Abstract
Research in automated analysis of digital media content has led to a large collection of low-level feature extractors, such as face detectors, videotext extractors, speech and speaker identifiers, people/vehicle trackers, and event locators. These media metadata are often symbolic rather than continuous-valued and pose significant difficulties for subsequent tasks such as classification and dimensionality reduction, which traditionally deal with continuous-valued data. This paper proposes a novel mechanism that extends tasks traditionally limited to continuous-valued feature spaces, such as (a) dimensionality reduction, (b) de-noising, and (c) clustering, to domains with symbolic features. To this end, we introduce new kernels based on well-known distance metrics and prove the Mercer validity of these kernels for analyzing symbolic feature spaces. We demonstrate their usefulness within the context of kernel-space methods such as Kernel PCA and SVM, in classifying machine learning datasets from the UCI repository and in the temporal clustering and tracking of videotext in multimedia. We show that the generalized kernels help capture information from symbolic feature spaces, visualize symbolic data, and aid tasks such as classification and clustering, and are therefore useful in multimodal analysis of multimedia.
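To make the idea concrete, the following is a minimal sketch, not the paper's actual kernels: it assumes a kernel over symbolic (categorical) feature vectors derived from a well-known distance metric, here k(x, y) = exp(-gamma * d_H(x, y)) with d_H the Hamming distance, and plugs the resulting kernel matrix into Kernel PCA and an SVM through scikit-learn's precomputed-kernel interface. The toy data, target, and gamma value are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.svm import SVC

def hamming_kernel(A, B, gamma=1.0):
    """Kernel matrix K[i, j] = exp(-gamma * #{features where A[i] != B[j]})."""
    # Pairwise Hamming distances between rows of A and rows of B.
    d = (A[:, None, :] != B[None, :, :]).sum(axis=2)
    return np.exp(-gamma * d)

# Toy symbolic data: each column is a categorical attribute encoded as an
# integer label (e.g., a detector output such as videotext type or speaker id).
rng = np.random.default_rng(0)
X = rng.integers(0, 4, size=(40, 6))      # 40 samples, 6 symbolic features
y = (X[:, 0] == X[:, 1]).astype(int)      # illustrative binary target

K = hamming_kernel(X, X)

# Kernel PCA on the symbolic data for dimensionality reduction / visualization.
embedding = KernelPCA(n_components=2, kernel="precomputed").fit_transform(K)

# SVM classification using the same precomputed kernel matrix.
clf = SVC(kernel="precomputed").fit(K, y)
print("training accuracy:", clf.score(K, y))
```

Because exp(-gamma * d_H) factors into a product of per-feature kernels, it is positive semi-definite for gamma >= 0, so the precomputed matrix can be used directly by kernel methods; the kernels proposed in the paper should be substituted for this illustrative choice.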