Publication
ICASSP 2009
Conference paper

Visual saliency with side information

Abstract

We propose novel algorithms for organizing large image and video datasets using both the visual content and the associated side information, such as time, location, and authorship. Earlier research has used side information as a pre-filter before visual analysis is performed; in contrast, we design a machine learning algorithm that models the joint statistics of the content and the side information. Our algorithm, Diverse-Density Contextual Clustering (D2C2), starts by finding unique patterns for each sub-collection sharing the same side information, e.g., scenes from winter. It then finds the common patterns that are shared among all subsets, e.g., persistent scenes across all seasons. These unique and common prototypes are found with Multiple Instance Learning and subsequent clustering steps. We evaluate D2C2 on two web photo collections from Flickr and one news video collection from TRECVID. Results show that not only are the visual patterns found by D2C2 intuitively salient across different seasons, locations, and events, but classifiers constructed from the unique and common patterns also outperform state-of-the-art bag-of-features classifiers. ©2009 IEEE.
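The unique-versus-common pattern idea can be illustrated with a toy sketch. This is not the paper's D2C2 algorithm (which uses Multiple Instance Learning over visual features); it is a hypothetical frequency-based stand-in, where each image is reduced to a bag of visual words and a side-information tag, a word is "unique" to a group when it is frequent there but rare elsewhere, and "common" when it is frequent in every group. The function name `d2c2_sketch` and the `min_freq` threshold are illustrative assumptions.

```python
from collections import Counter, defaultdict

def d2c2_sketch(items, min_freq=0.5):
    """Toy sketch of unique/common pattern mining.

    items: iterable of (side_info_tag, set_of_visual_words) pairs.
    A word's group score is its in-group frequency minus its best
    frequency in any other group -- a crude stand-in for the
    diverse-density scoring used in the actual paper.
    Returns (unique, common): per-group unique words and words
    shared across all groups.
    """
    groups = defaultdict(list)
    for tag, words in items:
        groups[tag].append(set(words))

    # freq[tag][word] = fraction of that group's items containing word
    freq = {}
    for tag, bags in groups.items():
        counts = Counter(w for bag in bags for w in bag)
        freq[tag] = {w: c / len(bags) for w, c in counts.items()}

    unique = {}
    for tag in groups:
        others = [freq[t] for t in groups if t != tag]
        unique[tag] = {
            w for w, f in freq[tag].items()
            if f >= min_freq
            and f - max((o.get(w, 0.0) for o in others), default=0.0) >= min_freq
        }

    # common = words frequent in every group (e.g., a landmark seen year-round)
    common = {
        w for w in set().union(*(f.keys() for f in freq.values()))
        if all(f.get(w, 0.0) >= min_freq for f in freq.values())
    }
    return unique, common
```

For example, on winter/summer photos of the same city, `d2c2_sketch` would report "snow" as unique to winter, "beach" as unique to summer, and a persistent landmark like "tower" as common, mirroring the paper's winter-scenes versus across-all-seasons example.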
