Published in: Springer-Verlag, Lecture Notes in Computer Science, pp. 254-261

DOI: 10.1007/978-3-319-03176-7_33

Understanding Movement and Interaction: An Ontology for Kinect-Based 3D Depth Sensors

This paper is available in a repository.

Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving forbidden

Data provided by SHERPA/RoMEO

Abstract

Microsoft Kinect has attracted great attention from research communities, resulting in numerous interaction and entertainment applications. However, to the best of our knowledge, no ontology for 3D depth sensors exists. Including automated semantic reasoning in these settings would open the door to new research, making it possible not only to track what the user is doing but also to understand it. We took a first step towards this new paradigm and developed a 3D depth sensor ontology that models features of user movement and object interaction. We believe in the potential of integrating semantics into computer vision. As 3D depth sensors and ontology-based applications improve, the ontology could be used, for instance, for activity recognition, combined with semantic maps to support visually impaired people, or in assistive technologies such as remote rehabilitation.
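To make the idea concrete, the following is a minimal, hypothetical sketch in Python using rdflib of what such a depth sensor ontology might look like. It is not the authors' actual ontology: the namespace URI and all class and property names (User, Joint, InteractableObject, hasJoint, interactsWith, x/y/z) are illustrative assumptions, chosen to mirror the kinds of movement and interaction features the abstract describes.

# Hypothetical sketch of a Kinect-style 3D depth sensor ontology,
# built with rdflib. Names and namespace are illustrative only.
from rdflib import Graph, Literal, Namespace, RDF, RDFS, XSD
from rdflib.namespace import OWL

KIN = Namespace("http://example.org/kinect-ontology#")

g = Graph()
g.bind("kin", KIN)

# Core classes: a tracked user, a skeleton joint, and an object
# the user can interact with.
for cls in (KIN.User, KIN.Joint, KIN.InteractableObject):
    g.add((cls, RDF.type, OWL.Class))

# Object properties linking users to their joints and to objects.
g.add((KIN.hasJoint, RDF.type, OWL.ObjectProperty))
g.add((KIN.hasJoint, RDFS.domain, KIN.User))
g.add((KIN.hasJoint, RDFS.range, KIN.Joint))

g.add((KIN.interactsWith, RDF.type, OWL.ObjectProperty))
g.add((KIN.interactsWith, RDFS.domain, KIN.User))
g.add((KIN.interactsWith, RDFS.range, KIN.InteractableObject))

# Datatype properties for a joint's 3D position, as a depth
# sensor would report it.
for prop in (KIN.x, KIN.y, KIN.z):
    g.add((prop, RDF.type, OWL.DatatypeProperty))
    g.add((prop, RDFS.domain, KIN.Joint))
    g.add((prop, RDFS.range, XSD.float))

# Example instances: a user whose right hand is near a cup, the
# kind of fact a reasoner could exploit for activity recognition.
g.add((KIN.user1, RDF.type, KIN.User))
g.add((KIN.rightHand1, RDF.type, KIN.Joint))
g.add((KIN.cup1, RDF.type, KIN.InteractableObject))
g.add((KIN.user1, KIN.hasJoint, KIN.rightHand1))
g.add((KIN.user1, KIN.interactsWith, KIN.cup1))
g.add((KIN.rightHand1, KIN.x, Literal(0.42, datatype=XSD.float)))

print(g.serialize(format="turtle"))

Running the script prints the ontology and instance data in Turtle syntax; feeding such assertions to an OWL reasoner is one way the semantic-reasoning step sketched in the abstract could be realized.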