Published in: IEEE International Conference on Robotics and Automation, 2004. Proceedings. ICRA '04. 2004

DOI: 10.1109/robot.2004.1307255


3D simultaneous localization and modeling from stereo vision

Proceedings article published in 2004 by M. A. Garcia and A. Solanas.
This paper is available in a repository.

Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

This work presents a new algorithm for determining the trajectory of a mobile robot and, simultaneously, building a detailed volumetric 3D model of its workspace. The algorithm relies exclusively on information provided by a single stereo vision system, thus avoiding the use of both costlier laser systems and error-prone odometry. Six-degrees-of-freedom egomotion is estimated directly from images acquired at relatively close positions along the robot's path. The algorithm can therefore handle both planar and uneven terrain in a natural way, without requiring extra processing stages or additional orientation sensors. The 3D model is based on an octree that encapsulates clouds of 3D points obtained through stereo vision, which are integrated after each egomotion stage. Every point has three spatial coordinates referred to a single global frame, as well as true-color components. The spatial locations of those points are continuously refined as new images are acquired and integrated into the model.
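The abstract's model-building step, an octree that accumulates colored 3D points from successive stereo frames after each egomotion estimate, can be illustrated with a minimal sketch. This is not the paper's implementation: the node capacity, the subdivision rule, and the `integrate_frame` helper are illustrative assumptions; the egomotion `(R, t)` is taken as given rather than estimated from images.

```python
class OctreeNode:
    """Axis-aligned cubic cell storing colored 3D points; splits into 8 children when full.
    Capacity-based subdivision is an assumption for illustration, not the paper's rule."""
    def __init__(self, center, half_size, capacity=8):
        self.center = tuple(center)
        self.half = float(half_size)
        self.capacity = capacity
        self.points = []        # (xyz, rgb) pairs while this node is a leaf
        self.children = None    # list of 8 child nodes once subdivided

    def _octant(self, p):
        # Octant index (0..7) from the sign of p - center on each axis.
        return (int(p[0] > self.center[0])
                + 2 * int(p[1] > self.center[1])
                + 4 * int(p[2] > self.center[2]))

    def _subdivide(self):
        # Create the 8 half-size children and redistribute the stored points.
        q = self.half / 2.0
        self.children = [
            OctreeNode((self.center[0] + (q if i & 1 else -q),
                        self.center[1] + (q if i & 2 else -q),
                        self.center[2] + (q if i & 4 else -q)), q, self.capacity)
            for i in range(8)
        ]
        for xyz, rgb in self.points:
            self.children[self._octant(xyz)].insert(xyz, rgb)
        self.points = []

    def insert(self, xyz, rgb):
        if self.children is not None:
            self.children[self._octant(xyz)].insert(xyz, rgb)
        elif len(self.points) < self.capacity:
            self.points.append((tuple(xyz), rgb))
        else:
            self._subdivide()
            self.insert(xyz, rgb)

    def count(self):
        # Total number of points stored in this subtree.
        return (len(self.points) if self.children is None
                else sum(c.count() for c in self.children))


def integrate_frame(octree, points_cam, colors, R, t):
    """Transform one stereo point cloud by an estimated egomotion (rotation R,
    translation t) into the single global frame and insert it into the model."""
    for p, c in zip(points_cam, colors):
        x = R[0][0]*p[0] + R[0][1]*p[1] + R[0][2]*p[2] + t[0]
        y = R[1][0]*p[0] + R[1][1]*p[1] + R[1][2]*p[2] + t[1]
        z = R[2][0]*p[0] + R[2][1]*p[1] + R[2][2]*p[2] + t[2]
        octree.insert((x, y, z), c)
```

A usage sketch: after each egomotion stage, call `integrate_frame` with the frame's reconstructed points, their RGB colors, and the current pose estimate; the octree adaptively refines wherever point density grows.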