
Published in

IEEE Transactions on Robotics, 29(6), pp. 1353-1365, 2013

DOI: 10.1109/tro.2013.2272251


Visual Homing From Scale With an Uncalibrated Omnidirectional Camera

Journal article published in 2013 by Ming Liu, Cedric Pradalier, Roland Siegwart
This paper is available in a repository.

Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

Visual homing enables a mobile robot to move to a reference position using only visual information. The approaches that we present in this paper take matched image keypoints (e.g., from the scale-invariant feature transform, SIFT) extracted from an omnidirectional camera as inputs. First, we propose three visual homing methods, based on feature scale, bearing, and the combination of both, under an image-based visual servoing framework. Second, to reduce computational cost, we propose a simplified homing method that takes advantage of the scale information of keypoint features to compute control commands. The observability and controllability of the algorithm are proved. An outlier rejection algorithm is also introduced and evaluated. All these methods are compared in both simulations and experiments. We report their performance on a series of commonly cited indoor datasets, showing the advantages of the proposed method. Furthermore, the methods are tested on a compact dataset of omnidirectional panoramic images, captured under dynamic conditions with ground truth, which we provide for future research and comparison.
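To make the scale-based homing idea concrete, here is a minimal, hypothetical sketch, not the authors' exact formulation: each matched keypoint votes for a translation direction along its bearing in the current omnidirectional image, weighted by the log-ratio of its reference scale to its current scale, so features that appear smaller than they did at the goal pull the robot toward them and features that appear larger push it away. The function name homing_direction and the log-ratio weighting are illustrative assumptions introduced here, not taken from the paper.

```python
import numpy as np

def homing_direction(bearings, scales_cur, scales_ref):
    """Estimate a homing direction from matched omnidirectional keypoints.

    bearings   -- keypoint bearings in the current image, in radians
    scales_cur -- SIFT scales of the matched keypoints, current image
    scales_ref -- SIFT scales of the same keypoints, reference image

    Each match contributes a unit vector along its bearing, weighted by
    log(scale_ref / scale_cur): positive if the feature looks smaller
    than at the goal (robot is likely farther away, move toward it),
    negative if it looks larger (move away).
    """
    weights = np.log(np.asarray(scales_ref, float) /
                     np.asarray(scales_cur, float))
    vecs = np.stack([np.cos(bearings), np.sin(bearings)], axis=1)
    v = (weights[:, None] * vecs).sum(axis=0)
    norm = np.linalg.norm(v)
    # Return a unit translation direction in the robot frame.
    return v / norm if norm > 0 else v

# Example: two features ahead appear smaller than at the goal, one
# behind appears larger, so the resulting direction points forward.
b = np.array([0.1, -0.1, np.pi])
print(homing_direction(b, scales_cur=[2.0, 2.1, 4.0],
                          scales_ref=[3.0, 3.0, 2.0]))
```

A real implementation would additionally need feature matching and outlier rejection between the current and reference images, which the paper addresses separately; this sketch only shows how relative scale can drive the control command without metric range or camera calibration.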