Published in

Springer, International Journal of Advanced Manufacturing Technology, 124(9), pp. 3099-3111, 2021

DOI: 10.1007/s00170-021-08125-9

Comparison of RGB-D and IMU-based gesture recognition for human-robot interaction in remanufacturing

This paper was not found in any repository, but could be made available legally by the author.

Full text: Unavailable

Archiving policy (data provided by SHERPA/RoMEO):
Preprint: archiving allowed
Postprint: archiving restricted
Published version: archiving forbidden

Abstract

With product life-cycles getting shorter and the limited availability of natural resources, the paradigm shift towards the circular economy is gaining momentum. In this domain, the successful adoption of remanufacturing is key. However, its process efficiency remains limited to date, given the high flexibility required for product disassembly. With the emergence of Industry 4.0, natural human-robot interaction is expected to provide numerous benefits in terms of (re)manufacturing efficiency and cost. In this regard, vision-based and wearable-based approaches are the most widespread for establishing a gesture-based interaction interface. In this work, an experimental comparison of two movement-estimation systems is addressed: (i) position data collected from Microsoft Kinect RGB-D cameras and (ii) acceleration data collected from inertial measurement units (IMUs). The results show that our IMU-based proposal, OperaBLE, achieves recognition accuracy rates up to 8.5 times higher than those of Microsoft Kinect, which proved to be dependent on the movement's execution plane, the subject's posture, and the focal distance.
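Both systems reduce gesture recognition to classifying time series: the Kinect yields per-frame 3-D joint positions, while an IMU yields 3-axis acceleration samples. The abstract does not state which classifier the authors used; purely as an illustrative sketch of how either modality could be matched against labelled gesture templates, the following Python snippet applies nearest-neighbour classification under dynamic time warping (DTW), a common baseline for this task. All names, the synthetic data, and the template set are hypothetical, not taken from the paper.

import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Dynamic-time-warping distance between two (T, D) time series."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Frame-to-frame Euclidean distance between D-dimensional samples.
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return float(cost[n, m])

def classify(query: np.ndarray, templates: dict) -> str:
    """Return the gesture label whose template is nearest under DTW."""
    return min(templates, key=lambda g: dtw_distance(query, templates[g]))

# Kinect-style input: (T, 3) joint positions in metres per frame.
# IMU-style input:    (T, 3) accelerations in m/s^2 per sample.
rng = np.random.default_rng(0)
templates = {
    "wave":  rng.normal(size=(40, 3)),  # synthetic stand-in for a recorded gesture
    "point": rng.normal(size=(35, 3)),
}
query = templates["wave"] + rng.normal(scale=0.1, size=(40, 3))
print(classify(query, templates))  # -> "wave"

Because DTW warps the time axis, the same matcher tolerates the differing sampling rates and execution speeds of camera-based and wearable-based capture, which is what makes a comparison like the paper's possible over a shared gesture set.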