Publication
ACSSC 2006
Conference paper
Detecting generic visual events with temporal cues
Abstract
We present novel algorithms for detecting generic visual events from video. Target event models produce binary decisions on each shot about classes of events involving object actions and their interactions with the scene, such as airplane taking off, exiting car, and riot. While event detection has been studied in scenarios with strong scene and imaging assumptions, the detection of generic visual events in an unconstrained domain such as broadcast news has not been explored. This work extends our recent work [3] on event detection by (1) using a novel bag-of-features representation along with the earth mover's distance to account for temporal variations within a shot, and (2) learning the importance of the input modalities with a double-convex combination over both kernels and support vectors, which is in turn solved via multiple kernel learning. Experiments show that the bag-of-features representation significantly outperforms the static baseline; multiple kernel learning yields promising performance improvements while providing intuitive explanations of the importance of the input kernels.
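A minimal sketch (not the authors' code) of the two ideas named in the abstract, under assumed representations and hypothetical helper names: a shot is a weighted bag of frame-level descriptors, two shots are compared with the earth mover's distance (solved here as a small transportation linear program), that distance is turned into a kernel, and several per-modality kernel matrices are blended by a convex combination whose weights multiple kernel learning would learn jointly with the classifier.

import numpy as np
from scipy.optimize import linprog


def emd(weights_a, feats_a, weights_b, feats_b):
    """Earth mover's distance between two weighted feature bags,
    solved as a small transportation linear program."""
    n, m = len(weights_a), len(weights_b)
    # Ground distances: Euclidean distance between every pair of descriptors.
    cost = np.linalg.norm(feats_a[:, None, :] - feats_b[None, :, :], axis=2)
    # Normalise the bag weights so total supply equals total demand.
    a = np.asarray(weights_a, float); a /= a.sum()
    b = np.asarray(weights_b, float); b /= b.sum()
    # Flow variables f_ij >= 0; minimise sum_ij f_ij * cost_ij subject to
    # row sums equal to a (supply) and column sums equal to b (demand).
    A_eq = []
    for i in range(n):                       # supply constraints
        row = np.zeros(n * m); row[i * m:(i + 1) * m] = 1.0; A_eq.append(row)
    for j in range(m):                       # demand constraints
        col = np.zeros(n * m); col[j::m] = 1.0; A_eq.append(col)
    res = linprog(cost.ravel(), A_eq=np.array(A_eq),
                  b_eq=np.concatenate([a, b]), bounds=(0, None), method="highs")
    return res.fun                           # total flow cost = EMD


def emd_kernel(shot_x, shot_y, gamma=1.0):
    """Turn the EMD between two shots (each a (weights, features) pair)
    into an RBF-style similarity."""
    d = emd(shot_x[0], shot_x[1], shot_y[0], shot_y[1])
    return np.exp(-gamma * d)


def combined_kernel(kernel_matrices, betas):
    """Convex combination of per-modality kernel matrices; in multiple
    kernel learning the non-negative weights `betas`, which sum to one,
    are learned rather than fixed by hand as they are here."""
    betas = np.asarray(betas, float)
    assert np.all(betas >= 0) and np.isclose(betas.sum(), 1.0)
    return sum(b * K for b, K in zip(betas, kernel_matrices))

In practice the combined kernel matrix would be passed to a kernel classifier such as an SVM; the learned betas then indicate how much each input modality contributes, which is the kind of intuitive explanation of kernel importance the abstract refers to.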