Specializing CNN Models for Sleep Staging based on Heart Rate

Miriam Goldammer1, Sebastian Zaunseder2, Hagen Malberg1, Felix Gräßer3
1TU Dresden, 2FH Dortmund, 3Technische Universität Dresden


Abstract

Aims: This work aims to classify sleep stages based on raw tachograms using Convolutional Neural Networks (CNNs) and to investigate the advantages of specialized classifiers over general classifiers.

Methods: The tachograms of 5026 patients were extracted from the Sleep Heart Health Study (SHHS-1). A 1D Convolutional Neural Network was trained to classify each 30 s epoch into four distinct sleep stages. The patients were divided into subgroups by Apnoea-Hypopnoea Index (AHI): a healthy subgroup (AHI 0-5) and three subgroups of increasing apnoea severity. This resulted in one general group and four subgroups of different sizes. From each subgroup, 20% of patients were held out as test data; the remaining patients were used for training. One general model was trained on all patients, and four specialized models were trained, one on each subgroup. Furthermore, the general model was retrained with data from each of the three subgroups with increasing AHI, yielding three additional transfer learning models.
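The following is a minimal sketch, not the authors' implementation, of the training scheme described above: an AHI-based subgroup assignment, an illustrative 1D CNN for classifying 30 s tachogram epochs into four stages, and the general / specialized / transfer-learning training variants. The input length, AHI cut-offs, network architecture, and framework (PyTorch) are assumptions made for illustration.

```python
# Hedged sketch of the subgrouping and training scheme; shapes and
# hyperparameters are illustrative assumptions, not the study's settings.
import torch
import torch.nn as nn

N_SAMPLES_PER_EPOCH = 30   # assumed: tachogram resampled to 1 Hz -> 30 values per 30 s epoch
N_CLASSES = 4              # four sleep stages, as in the abstract

def ahi_subgroup(ahi: float) -> str:
    """Assign a patient to one of the four AHI subgroups used in the study."""
    if ahi <= 5:
        return "healthy"    # AHI 0-5
    if ahi <= 15:
        return "mild"       # AHI 5-15
    if ahi <= 30:
        return "moderate"   # AHI 15-30
    return "severe"         # AHI > 30

class SleepStageCNN(nn.Module):
    """Illustrative 1D CNN; the actual architecture is not specified in the abstract."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, N_CLASSES)

    def forward(self, x):  # x: (batch, 1, N_SAMPLES_PER_EPOCH)
        return self.classifier(self.features(x).flatten(1))

def train(model, x, y, epochs=5, lr=1e-3):
    """Plain cross-entropy training loop; reused for transfer-learning
    retraining by passing in an already trained general model."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return model

if __name__ == "__main__":
    # Synthetic stand-in data representing epochs from one subgroup's training set.
    x = torch.randn(64, 1, N_SAMPLES_PER_EPOCH)
    y = torch.randint(0, N_CLASSES, (64,))

    general_model = train(SleepStageCNN(), x, y)          # trained on all patients
    specialized_model = train(SleepStageCNN(), x, y)      # trained on one subgroup only
    transfer_model = train(general_model, x, y, lr=1e-4)  # general model retrained on the subgroup
```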

Results: Our general model achieved an average Cohen's kappa of 0.51 on all available test data. The general model outperformed the specialized models on each test subset. Among the specialized models, the best overall performance was achieved by training on the subgroup with AHI 5-15. However, classification quality correlated with the size of the training set. Transfer learning resulted in small Cohen's kappa improvements on the respective subgroup test data.
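A brief sketch, assuming scikit-learn, of how Cohen's kappa can be computed per test subgroup; the stage arrays are placeholders, not study data.

```python
# Hedged evaluation sketch: chance-corrected agreement between reference and
# predicted sleep stages for one test subgroup.
from sklearn.metrics import cohen_kappa_score

y_true = [0, 1, 2, 2, 3, 1, 0, 2]   # reference sleep stages (placeholder)
y_pred = [0, 1, 2, 3, 3, 1, 0, 1]   # model predictions (placeholder)
kappa = cohen_kappa_score(y_true, y_pred)
print(f"Cohen's kappa on this test subset: {kappa:.2f}")
```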

Conclusion: CNN models are capable of learning features from tachograms with very good classification performance compared to other works using heart rate only. For AHI 0-30, no advantage was found in using a specialized model. For AHI greater than 30, we found it beneficial to have a specialized model or significantly more mixed training data. Transfer learning yields slight improvements on subgroups without any obvious disadvantages.