Voting Based Visual Cue Integration


Traditionally, fusion of visual information for tracking has been based on explicit models for uncertainty and integration. Most of the proposed approaches use some form of Bayesian statistics, where strong models are employed. In this paper, we argue that when a large number of visual features is available, weak models can be used for integration. In particular, integration using voting-based methods is analyzed. Two methods are proposed and experimentally evaluated: i) response fusion and ii) action fusion. The methods differ in the choice of the underlying voting space: the former integrates the visual information directly in image space, while the latter represents the information in a velocity space. Emphasis is also placed on the evaluation of four different weighting techniques and their impact on the overall performance of the proposed tracking system.
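
As a rough illustration of the two voting schemes, the sketch below accumulates weighted cue responses in the image plane (response fusion) and weighted velocity votes in a discretized velocity space (action fusion). The cue response maps, the specific weighting, and the normalization are assumptions made for illustration only, not the exact formulation used in the papers listed below.

import numpy as np

def response_fusion(cue_maps, weights):
    """Weighted voting in image space (response fusion sketch).

    cue_maps : list of 2-D arrays, one response map per visual cue,
               all of the same shape (higher value = stronger support).
    weights  : per-cue weights (e.g. uniform or reliability-based);
               the weighting scheme here is an assumption.

    Returns the (row, col) of the image cell receiving the most votes.
    """
    votes = np.zeros_like(np.asarray(cue_maps[0], dtype=float))
    for m, w in zip(cue_maps, weights):
        m = np.asarray(m, dtype=float)
        rng = m.max() - m.min()
        if rng > 0:                      # normalize each cue to [0, 1]
            m = (m - m.min()) / rng
        votes += w * m                   # accumulate weighted votes
    return np.unravel_index(np.argmax(votes), votes.shape)

def action_fusion(cue_velocities, weights, bins=21, v_max=10.0):
    """Weighted voting in a discretized velocity space (action fusion sketch).

    Each cue proposes an image-plane velocity (vx, vy); votes are cast
    into a 2-D velocity histogram and the most supported velocity wins.
    """
    hist = np.zeros((bins, bins))
    edges = np.linspace(-v_max, v_max, bins + 1)
    for (vx, vy), w in zip(cue_velocities, weights):
        i = np.clip(np.searchsorted(edges, vx) - 1, 0, bins - 1)
        j = np.clip(np.searchsorted(edges, vy) - 1, 0, bins - 1)
        hist[i, j] += w                  # each cue votes for one velocity cell
    i, j = np.unravel_index(np.argmax(hist), hist.shape)
    centers = (edges[:-1] + edges[1:]) / 2
    return centers[i], centers[j]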

Related Publications

Cue Integration for Visual Servoing
(Danica Kragic and Henrik I. Christensen)
IEEE Transactions on Robotics and Automation, vol. 17, no. 1, February 2001.

Active Visual Tracking of an End-Effector: Integration of Various Cues
(Danica Kragic and Henrik I. Christensen)
In M. Vincze and G.D. Hager (Eds.), Robust Vision for Vision-Based Control of Motion,
IEEE, ISBN 0780353781, February 2000.

Integration of Visual Cues for Active Tracking of an End-Effector
(Danica Kragic and Henrik I. Christensen)
In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS'99),
vol. 1, pp. 362-368, October 1999, Kyongju, Korea.

