Published in

Springer-Verlag, Lecture Notes in Computer Science, pp. 99-113

DOI: 10.1007/11682127_8


Distributed modular toolbox for multi-modal context recognition

Proceedings article published in 2006 by David Bannach, Kai S. Kunze, Paul Lukowicz, Oliver Amft
This paper is available in a repository.

Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

We present a GUI-based C++ toolbox that allows for building distributed, multi-modal context recognition systems by plugging together reusable, parameterizable components. The goals of the toolbox are to simplify the steps from prototypes to online implementations on low-power mobile devices, facilitate portability between platforms and foster easy adaptation and extensibility. The main features of the toolbox we focus on here are a set of parameterizable algorithms including different filters, feature computations and classifiers, a runtime environment that supports complex synchronous and asynchronous data flows, encapsulation of hardware-specific aspects including sensors and data types (e.g., int vs. float), and the ability to outsource parts of the computation to remote devices. In addition, components are provided for group-wise, event-based sensor synchronization and data labeling. We describe the architecture of the toolbox and illustrate its functionality on two case studies that are part of the downloadable distribution.
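
To illustrate the component-based data-flow idea described in the abstract, the following is a minimal C++ sketch of how reusable, parameterizable stages (here a filter followed by a simple classifier) might be plugged into a processing chain. All names (Component, MeanFilter, ThresholdClassifier, Pipeline) are hypothetical and chosen only for this example; they are not the toolbox's actual API.

    #include <iostream>
    #include <vector>

    // Hypothetical component interface: each stage consumes one sample
    // (a vector of channel values) and produces a transformed sample.
    struct Component {
        virtual ~Component() = default;
        virtual std::vector<double> process(const std::vector<double>& in) = 0;
    };

    // Feature stage: reduces a multi-channel sample to its mean value.
    struct MeanFilter : Component {
        std::vector<double> process(const std::vector<double>& in) override {
            double sum = 0.0;
            for (double v : in) sum += v;
            double mean = in.empty() ? 0.0 : sum / in.size();
            return {mean};
        }
    };

    // Parameterizable classifier stage: emits class 1 if the feature
    // exceeds a threshold, class 0 otherwise.
    struct ThresholdClassifier : Component {
        explicit ThresholdClassifier(double threshold) : threshold_(threshold) {}
        std::vector<double> process(const std::vector<double>& in) override {
            return {in.at(0) > threshold_ ? 1.0 : 0.0};
        }
        double threshold_;
    };

    // A pipeline plugs components together; each sample flows through
    // the stages in the order they were added.
    struct Pipeline {
        void add(Component* c) { stages_.push_back(c); }
        std::vector<double> run(std::vector<double> sample) {
            for (Component* c : stages_) sample = c->process(sample);
            return sample;
        }
        std::vector<Component*> stages_;
    };

    int main() {
        MeanFilter filter;
        ThresholdClassifier classifier(0.5);

        Pipeline p;
        p.add(&filter);
        p.add(&classifier);

        std::vector<double> sensorSample = {0.2, 0.9, 0.7};  // one multi-channel reading
        std::vector<double> label = p.run(sensorSample);
        std::cout << "class = " << label.at(0) << std::endl;  // mean 0.6 > 0.5, prints "class = 1"
        return 0;
    }

The runtime described in the paper additionally handles synchronous and asynchronous data flows, hardware encapsulation, and distribution of computation across devices, all of which this single-threaded sketch omits.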