Multi-modal semantic place classification
Andrzej Pronobis, O. Martinez Mozos, B. Caputo and
Patric Jensfelt
Abstract:
The ability to represent knowledge about space and its position
therein is crucial for a mobile robot. To this end, topological and
semantic descriptions are gaining popularity for augmenting purely
metric space representations. In this paper we present a multi-modal
place classification system that allows a mobile robot to identify
places and recognize semantic categories in an indoor environment.
The system effectively utilizes information from different robotic sensors
by fusing multiple visual cues and laser range data. This is
achieved using a high-level cue integration scheme based on a Support
Vector Machine (SVM) that learns how to optimally combine and
weight each cue. Our multi-modal place classification approach can
be used to obtain a real-time semantic space labeling system which
integrates information over time and space. We perform an extensive
experimental evaluation of the method for two different platforms and
environments, on a realistic off-line database and in a live experiment
on an autonomous robot. The results clearly demonstrate the effectiveness
of our cue integration scheme and its value for robust place classification
under varying conditions.
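The high-level cue integration described above can be sketched as follows: single-cue classifiers are trained separately, and a second SVM learns to combine their output margins. This is a toy illustration on synthetic data (the feature dimensions, kernel choices, and data generation are assumptions for the sketch, not the paper's implementation), using scikit-learn.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins for two cues: "visual" features and "laser" features.
# Class label y shifts the mean of each cue's features (toy data, assumption).
n = 200
y = rng.integers(0, 2, n)
vis = y[:, None] + rng.normal(0.0, 1.0, (n, 5))   # visual cue, 5-D
las = y[:, None] + rng.normal(0.0, 1.2, (n, 3))   # laser cue, 3-D

# Train one SVM per cue on the first half of the data.
svm_vis = SVC(kernel="rbf").fit(vis[:100], y[:100])
svm_las = SVC(kernel="rbf").fit(las[:100], y[:100])

def margins(Xv, Xl):
    """Stack the per-cue SVM decision margins into a combined feature vector."""
    return np.column_stack([svm_vis.decision_function(Xv),
                            svm_las.decision_function(Xl)])

# A second, high-level SVM learns how to weight the cue outputs.
combiner = SVC(kernel="linear").fit(margins(vis[:100], las[:100]), y[:100])

# Evaluate the combined classifier on the held-out half.
acc = combiner.score(margins(vis[100:], las[100:]), y[100:])
print(round(acc, 2))
```

Because the combiner operates only on the margins of the single-cue classifiers, each cue can use its own feature representation and kernel, which is the appeal of high-level (as opposed to feature-level) fusion.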
BibTeX Entry:
@Article{Pronobis10b,
author = {A. Pronobis and O. Martinez Mozos and B. Caputo and P. Jensfelt},
title = {Multi-modal semantic place classification},
journal = {The International Journal of Robotics Research (IJRR)},
year = {2010},
volume = {29},
number = {2-3},
pages = {298--320},
month = feb,
doi = {10.1177/0278364909356483},
}
Download: pdf (1.7 MB)