Detecting Respiratory Effort-Related Arousals in Polysomnographic Data Using LSTM Networks

Sven Schellenberger1, Kilin Shi2, Melanie Mai2, Jan Philipp Wiedemann2, Tobias Steigleder3, Björn Eskofier4, Robert Weigel2, Alexander Kölpin1
1Chair for Electronics and Sensor Systems, Brandenburg University of Technology, 2Institute for Electronics Engineering, Friedrich-Alexander University Erlangen-Nuremberg, 3Department of Palliative Care and Department of Neurology, Universitätsklinikum Erlangen, 4Machine Learning and Data Analytics Lab, Friedrich-Alexander University Erlangen-Nuremberg


Abstract

Sleep is undoubtedly of great importance for overall health and well-being. Apart from the well-researched Obstructive Sleep Apnea Hypopnea Syndrome, Respiratory Effort-Related Arousals (RERAs) constitute another class of sleep disturbances. RERAs lead to shallower sleep and thus less restorative recuperation. According to the definition of arousals by the American Academy of Sleep Medicine (AASM), RERAs are abrupt frequency increases in the EEG signal comprising alpha, theta, or frequencies greater than 16 Hz. The minimum duration of a RERA event is 3 s. As part of the PhysioNet/CinC Challenge 2018, a variety of physiological signals shall be evaluated to detect these non-apnea arousals during sleep.
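The AASM criteria quoted above can be condensed into a simple candidate check. This is only an illustrative sketch: the band boundaries for alpha (8-13 Hz) and theta (4-8 Hz) follow common EEG conventions rather than the paper, and real arousal scoring involves EEG context and respiratory evidence not captured here.

```python
def is_rera_candidate(duration_s: float, dominant_freq_hz: float) -> bool:
    """Check whether an EEG event meets the quoted AASM RERA criteria:
    at least 3 s long, with its dominant frequency in the alpha or theta
    band or above 16 Hz. Band edges are assumed conventional values."""
    in_alpha = 8.0 <= dominant_freq_hz <= 13.0
    in_theta = 4.0 <= dominant_freq_hz < 8.0
    fast = dominant_freq_hz > 16.0
    return duration_s >= 3.0 and (in_alpha or in_theta or fast)
```

For example, a 3.5 s event dominated by 10 Hz activity qualifies, while a 2 s event of the same frequency does not.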

Since arousals affect the sleep stage, the EEG promises to be a good indicator for arousal detection. The frequency spectra of the two central EEG leads are calculated. Afterwards, the mean absolute values of the alpha, beta, gamma, delta, and theta bands are computed. Furthermore, to exclude arousals arising from apnea, the airflow and chest electromyography signals are processed. For each sliding window of these signals, the difference between the minimum and maximum value indicates the breathing effort.
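The two feature types described above can be sketched as follows. The band edges, sampling rate, and window length are assumptions for illustration; the paper does not state its exact parameters.

```python
import numpy as np

# Hypothetical band edges in Hz (conventional EEG definitions;
# the exact boundaries used in the paper are not stated).
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 50)}

def band_features(window, fs=200):
    """Mean absolute spectral value per EEG band for one sliding window."""
    spectrum = np.abs(np.fft.rfft(window))
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    return {name: spectrum[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

def effort_feature(window):
    """Min-max range of an airflow/EMG window as a breathing-effort proxy."""
    return window.max() - window.min()
```

Applied to a window dominated by 10 Hz activity, `band_features` yields its largest mean value in the alpha band, matching the intended use as an arousal indicator.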

As mentioned above, RERAs are defined, amongst other criteria, as a sudden increase in brainwave frequency. Therefore, a time dependency must be incorporated. To do so, the features of each window are combined with those of its preceding window. The first window is always labeled non-RERA since the preceding signal is unknown. All these features are fed into a simple nearest-neighbor classifier. Using cross-validation, an accuracy of 88.6% and an area under the precision-recall curve of 0.15 are achieved. By adding further RERA-specific features derived from signals such as chin movement and heart rate variability, higher scores are expected.
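A minimal sketch of this pipeline, assuming per-window feature vectors stacked as rows: each window's features are concatenated with its predecessor's, and a plain Euclidean 1-nearest-neighbor rule stands in for the classifier (the paper does not specify its distance metric or neighbor count).

```python
import numpy as np

def with_context(features):
    """Concatenate each window's feature vector with its predecessor's.
    The first window has no predecessor and is paired with itself here;
    the paper simply labels that window non-RERA."""
    prev = np.vstack([features[:1], features[:-1]])
    return np.hstack([prev, features])

def nn_predict(train_x, train_y, test_x):
    """Plain 1-nearest-neighbor classification with Euclidean distance."""
    preds = []
    for x in test_x:
        dists = np.linalg.norm(train_x - x, axis=1)
        preds.append(train_y[np.argmin(dists)])
    return np.array(preds)
```

Doubling the feature dimension this way gives the classifier a view of the frequency change between consecutive windows, which is what the RERA definition requires.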