ICME 2000
Conference paper
Adaptive synthesis in progressive retrieval of audio-visual data
Abstract
With the advent of pervasive computing, a growing diversity of client devices is gaining access to audio-visual content. The increased variability in client device processing power, storage, bandwidth, and server loading requires adaptive solutions for image, video, and audio retrieval. Progressive retrieval is one prominent mode of access in which views at different resolutions are incrementally retrieved and refined over time. In this paper, we present a new framework for adaptively partitioning the synthesis operations in progressive retrieval of audio-visual signals. The framework considers that the server and client cooperate in synthesizing the views in order to best utilize the available processing power and bandwidth. We provide experimental results that demonstrate a significant reduction in latency in the progressive retrieval of images under different conditions of the client, server, and network.
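To make the idea of partitioning synthesis work concrete, the sketch below illustrates one possible (hypothetical) cost model: for each refinement level of a progressive representation, estimate the latency of synthesizing that level at the server (compute there, ship pixels) versus at the client (ship compact coefficients, compute locally), and pick the cheaper side. This is not the paper's algorithm; all sizes, speeds, and bandwidths are illustrative assumptions.

```python
# Minimal sketch (not the authors' method) of adaptively partitioning
# synthesis between server and client in progressive retrieval.
# All parameters (level sizes, speeds, bandwidth) are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Level:
    """One refinement level of a progressive (e.g. pyramid/wavelet) representation."""
    coeff_bytes: int    # size of the residual coefficients to transmit
    synth_bytes: int    # size of the level once synthesized to pixels
    synth_ops: float    # relative cost of the inverse transform for this level

def level_latency(level, where, server_ops_per_s, client_ops_per_s, bandwidth_bps):
    """Estimated latency for one level if synthesis happens at `where`."""
    if where == "server":
        # Server runs the inverse transform, then ships synthesized pixels.
        compute = level.synth_ops / server_ops_per_s
        transfer = 8 * level.synth_bytes / bandwidth_bps
    else:
        # Server ships compact coefficients; client runs the inverse transform.
        compute = level.synth_ops / client_ops_per_s
        transfer = 8 * level.coeff_bytes / bandwidth_bps
    return compute + transfer

def partition_synthesis(levels, server_ops_per_s, client_ops_per_s, bandwidth_bps):
    """Choose, per refinement level, the side with the lower estimated latency."""
    plan = []
    for lvl in levels:
        costs = {
            w: level_latency(lvl, w, server_ops_per_s, client_ops_per_s, bandwidth_bps)
            for w in ("server", "client")
        }
        plan.append(min(costs, key=costs.get))
    return plan

if __name__ == "__main__":
    # Hypothetical 3-level image pyramid: coefficients are much smaller than pixels.
    pyramid = [
        Level(coeff_bytes=20_000, synth_bytes=80_000, synth_ops=2e6),
        Level(coeff_bytes=60_000, synth_bytes=320_000, synth_ops=8e6),
        Level(coeff_bytes=200_000, synth_bytes=1_280_000, synth_ops=32e6),
    ]
    # Fast server, slow handheld client, modest link.
    print(partition_synthesis(pyramid,
                              server_ops_per_s=5e8,
                              client_ops_per_s=5e7,
                              bandwidth_bps=1e6))
```

Under these assumed numbers, the cheaper side can differ per level (e.g. server-side synthesis for small coarse levels, client-side for large fine levels), which mirrors the abstract's point that latency depends jointly on client power, server load, and bandwidth.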