Publication
ICHI 2019
Conference paper
Towards automatic cough and snore detection
Abstract
Motivation: Cough is a common symptom of many respiratory diseases, such as chronic obstructive pulmonary disease (COPD) [1] or asthma. It is a three-phase expulsive motor act: an inspiratory phase is followed by a forced expiratory effort against a closed glottis, then a sudden opening of the glottis with rapid expiratory airflow, which can end with a further partial closure of the glottis. As a result, up to three distinct acoustic phases can be observed in a cough event: Φ-1) an explosive phase, Φ-2) an intermediate phase, and Φ-3) a voiced phase (Figure 1(a)). Automatic detection of cough events from audio data can help overcome the limitations of the current medical best practice for cough symptom assessment, namely self-reported questionnaires such as the Leicester Cough Questionnaire (LCQ), the Cough-Specific Quality-of-Life Questionnaire (CQLQ), or the COPD Assessment Test (CAT), which often suffer from biases such as recall bias. Similarly, snore events, which consist of multiple periodic patterns (Figure 1(b)), can be detected automatically from audio data and can thereby help to assess the quality of sleep [2], [3]. In this work, we present an audio analysis approach to detect individual cough and snore events. Our preliminary results underline the potential of smartphones to objectively report on patient symptoms.
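To illustrate the general idea of segmenting acoustic events such as coughs or snores from an audio stream, the sketch below implements a simple short-time energy detector over a synthetic signal. This is only a minimal, generic baseline, not the method of the paper; the frame length, hop size, and threshold ratio are illustrative assumptions.

```python
import numpy as np

def detect_events(signal, sr, frame_ms=25, hop_ms=10, thresh_ratio=5.0):
    """Flag frames whose short-time RMS energy exceeds a multiple of the
    median frame energy, then merge adjacent flagged frames into events.
    Returns a list of (start_sec, end_sec) tuples."""
    frame = int(sr * frame_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    rms = np.array([
        np.sqrt(np.mean(signal[i:i + frame] ** 2))
        for i in range(0, len(signal) - frame, hop)
    ])
    thresh = thresh_ratio * np.median(rms)  # median is robust to sparse loud events
    active = rms > thresh
    events, start = [], None
    for i, is_active in enumerate(active):
        if is_active and start is None:
            start = i
        elif not is_active and start is not None:
            events.append((start * hop / sr, (i * hop + frame) / sr))
            start = None
    if start is not None:  # event still open at end of signal
        events.append((start * hop / sr, len(signal) / sr))
    return events

# Synthetic check: 2 s of low-level noise with a loud 0.2 s burst at t = 1.0 s.
sr = 16000
rng = np.random.default_rng(0)
x = 0.01 * rng.standard_normal(2 * sr)
x[sr:sr + sr // 5] += 0.5 * rng.standard_normal(sr // 5)
events = detect_events(x, sr)
print(events)  # one event near (1.0, 1.2)
```

A real detector would of course go further, e.g. classifying each candidate event with spectral features to separate coughs and snores from speech or background noise.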