Published in

Springer, Lecture Notes in Computer Science, p. 56-72, 2005

DOI: 10.1007/11551201_4

Analysis of chewing sounds for dietary monitoring

Proceedings article published in 2005 by Oliver Amft, Mathias Stäger, Paul Lukowicz, Gerhard Tröster
This paper is available in a repository.


Abstract

The paper reports the results of the first stage of our work on an automatic dietary monitoring system. The work is part of a large European project on using ubiquitous systems to support healthy lifestyle and cardiovascular disease prevention. We demonstrate that sound from the user's mouth can be used to detect that he/she is eating. The paper also shows how different kinds of food can be recognized by analyzing chewing sounds. The sounds are acquired with a microphone located inside the ear canal. This is an unobtrusive location widely accepted in other applications (hearing aids, headsets). To validate our method we present experimental results containing 3500 seconds of chewing data from four subjects on four different food types typically found in a meal. Up to 99% accuracy is achieved on eating recognition and between 80% and 100% on food type classification.
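The core idea of the abstract — flagging segments of an in-ear audio signal as "eating" — can be illustrated with a minimal sketch. This is not the authors' method (the paper uses richer features and classifiers); it is a simplified energy-thresholding illustration, and all function names, the frame length, and the threshold value are hypothetical choices made here for demonstration on synthetic data.

```python
import random

def short_time_energy(signal, frame_len=256):
    """Average energy of consecutive non-overlapping frames of a sample list."""
    energies = []
    for start in range(0, len(signal) - frame_len + 1, frame_len):
        frame = signal[start:start + frame_len]
        energies.append(sum(s * s for s in frame) / frame_len)
    return energies

def detect_eating(signal, frame_len=256, threshold=0.01):
    """Mark each frame as 'chewing' when its energy exceeds a fixed threshold.

    A crude stand-in for the paper's classifier: chewing produces broadband
    bursts that are much louder than the ear canal's background noise.
    """
    return [e > threshold for e in short_time_energy(signal, frame_len)]

# Synthetic demo: 1 s of near-silence followed by 1 s of noisy "chew" bursts
random.seed(0)
fs = 8000  # hypothetical sampling rate
silence = [random.gauss(0.0, 0.001) for _ in range(fs)]
chewing = [random.gauss(0.0, 0.3) for _ in range(fs)]
flags = detect_eating(silence + chewing)
# Frames in the first half are quiet, frames in the second half are active
```

In practice a fixed energy threshold would be defeated by speech or ambient noise, which is why the paper analyzes the spectral character of chewing sounds rather than loudness alone.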