Abstract
We present a computer vision system for robust object tracking in 3D by combining evidence from multiple calibrated cameras. This kernel-based 3D tracker is automatically bootstrapped by constructing 3D point clouds. These point clouds are then clustered and used to initialize the trackers and validate their performance. The framework describes a complete tracking system that fuses appearance features from all available camera sensors and is capable of automatic initialization and drift detection. Its elegance resides in its inherent ability to handle problems encountered by various 2D trackers, including scale selection, occlusion, view-dependence, and correspondence across views. Tracking results for an indoor smart room and a multi-camera outdoor surveillance scenario are presented. We demonstrate the effectiveness of this unified approach by comparing its performance to a baseline 3D tracker that fuses results of independent 2D trackers, as well as comparing the re-initialization results to known ground truth. © 2007 IEEE.
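The bootstrapping step the abstract describes — clustering a reconstructed 3D point cloud and seeding one tracker per cluster — can be sketched as follows. This is a minimal illustrative stand-in, not the paper's method: the greedy distance-threshold clustering, the `radius` and `min_size` parameters, and the function names are all assumptions for exposition.

```python
import math

def cluster_points(points, radius=0.5):
    """Greedily assign each 3D point to the first cluster whose current
    centroid lies within `radius`; otherwise start a new cluster.
    (Illustrative stand-in for the clustering stage in the abstract.)"""
    clusters = []  # each cluster is a list of (x, y, z) points
    for p in points:
        for c in clusters:
            centroid = [sum(q[i] for q in c) / len(c) for i in range(3)]
            if math.dist(p, centroid) <= radius:
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters

def init_trackers(points, radius=0.5, min_size=2):
    """Return one centroid per sufficiently large cluster; each centroid
    would seed one kernel-based 3D tracker. Small clusters are discarded
    as likely triangulation noise (hypothetical filtering rule)."""
    seeds = []
    for c in cluster_points(points, radius):
        if len(c) >= min_size:
            seeds.append(tuple(sum(q[i] for q in c) / len(c) for i in range(3)))
    return seeds
```

In a real pipeline the input points would come from multi-view triangulation across the calibrated cameras, and the same clustering could be re-run online to detect drift: a tracker whose kernel has wandered far from every current cluster centroid is a candidate for re-initialization.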