Published in

Wiley, Journal of Robotic Systems, 21(1), pp. 23-32, 2003

DOI: 10.1002/rob.10125

Fusing Visual and Inertial Sensing to Recover Robot Ego-motion

Journal article published in 2003 by Guillem Alenyà, Elisa Martínez, Carme Torras
This paper is available in a repository.

Preprint: archiving forbidden
Postprint: archiving restricted
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

A method for estimating mobile robot ego-motion is presented, which relies on tracking contours in real-time images acquired with a calibrated monocular video system. After an active contour is fitted to an object in the image, 3D motion is derived from the affine deformations the contour undergoes across the image sequence. More than one object can be tracked at the same time, yielding several independent pose estimates, and pose determination is then improved by fusing these estimates. Inertial information is used to obtain better estimates, since it introduces into the tracking algorithm a measure of the real velocity; it is also used to resolve ambiguities that arise from the use of a monocular image sequence. Since the algorithms developed are intended for real-time control systems, their computational cost is taken into account. © 2004 Wiley Periodicals, Inc.
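Two building blocks of the approach described in the abstract can be illustrated with a short sketch: recovering the affine deformation between two sampled contours by least squares, and fusing several independent pose estimates. This is a hypothetical illustration, not the authors' implementation; the function names, the point-correspondence input, and the inverse-variance fusion rule are assumptions made for the example.

```python
# Hypothetical sketch of two steps suggested by the abstract (not the
# paper's actual algorithm): (1) fit the affine map relating two sampled
# contours, (2) fuse independent estimates by inverse-variance weighting.
import numpy as np

def fit_affine(p, q):
    """Least-squares affine map q ~ A @ p + t from point correspondences.

    p, q: (N, 2) arrays of matched contour points (N >= 3).
    Returns the 2x2 linear part A and the translation t."""
    n = p.shape[0]
    # Design matrix for the 6 affine parameters: each row is [x, y, 1].
    X = np.hstack([p, np.ones((n, 1))])             # (N, 3)
    M, _, _, _ = np.linalg.lstsq(X, q, rcond=None)  # (3, 2) solution
    A = M[:2].T   # linear (deformation) part
    t = M[2]      # translation part
    return A, t

def fuse_estimates(estimates, variances):
    """Fuse scalar estimates of the same quantity, weighting each by the
    inverse of its variance (assumed fusion rule for this sketch)."""
    w = 1.0 / np.asarray(variances, dtype=float)
    return float(np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w))
```

For example, sampling a contour, transforming it with a known affine map, and calling `fit_affine` on the two point sets recovers that map; `fuse_estimates([1.0, 2.0], [1.0, 1.0])` returns their plain average, while unequal variances shift the result toward the more reliable estimate.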