Published in

SAGE Publications, International Journal of Advanced Robotic Systems, 5(17), p. 1729881420933077, 2020

DOI: 10.1177/1729881420933077

Human motion recognition based on limit learning machine

Journal article published in 2020 by Hong Chen, Hongdong Zhao, Baoqiang Qi, Shi Wang, Nan Shen, Yuxiang Li
This paper is made freely available by the publisher.

Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving allowed
Data provided by SHERPA/RoMEO

Abstract

With the development of technology, human motion capture data have been widely used in human–computer interaction, interactive entertainment, education, and medical treatment. As a problem in the field of computer vision, human motion recognition has become a key technology in somatosensory games, security protection, and multimedia information retrieval. It is therefore important to improve the recognition rate of human motion. Against this background, the purpose of this article is human motion recognition based on the extreme learning machine. Building on existing action feature descriptors, this article improves both the features and the classifier, and performs experiments on the Microsoft Research Action3D (MSR-Action3D) data set and the University of Bonn HDM05 motion capture data set. The displacement covariance descriptor and the direction histogram descriptor are combined into a new descriptor that statically reflects joint-position information while also dynamically reflecting changes in joint position; an extreme learning machine is then used for classification, yielding better recognition results. The experimental results show that the recognition rate of the combined descriptor with the extreme learning machine on these two data sets is improved by about 3% compared with existing methods.
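To make the classifier concrete: an extreme learning machine is a single-hidden-layer network whose input weights are drawn at random and whose output weights are solved analytically by least squares. The sketch below is a minimal, generic ELM in Python/NumPy; the hidden-layer size, activation, and toy data are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, Y, n_hidden=50):
    """Train an ELM: random input weights/biases, analytic output weights."""
    n_features = X.shape[1]
    W = rng.normal(size=(n_features, n_hidden))  # random, fixed input weights
    b = rng.normal(size=n_hidden)                # random, fixed biases
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y                 # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Predict class indices from one-hot-trained output weights."""
    H = np.tanh(X @ W + b)
    return np.argmax(H @ beta, axis=1)

# Toy usage: two Gaussian clusters standing in for action-descriptor vectors.
X0 = rng.normal(0.0, 1.0, size=(100, 10))
X1 = rng.normal(3.0, 1.0, size=(100, 10))
X = np.vstack([X0, X1])
Y = np.zeros((200, 2))
Y[:100, 0] = 1.0  # one-hot labels, class 0
Y[100:, 1] = 1.0  # one-hot labels, class 1
W, b, beta = elm_train(X, Y)
labels = np.r_[np.zeros(100), np.ones(100)]
acc = (elm_predict(X, W, b, beta) == labels).mean()
```

Because only `beta` is fitted (in closed form via the pseudoinverse), training is a single linear solve, which is the main speed advantage ELMs offer over iteratively trained networks.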