SleepTight: Identifying Sleep Arousals Using Inter- and Intra-Relation of Multimodal Signals

Tanuka Bhattacharjee1, Deepan Das1, Shahnawaz Alam1, Achuth Rao M V2, Prasanta Kumar Ghosh2, Ayush Ranjan Lohani3, Rohan Banerjee4, Anirban Dutta Choudhury5, Arpan Pal6
1TATA Consultancy Services, 2Indian Institute of Science, 3Indian Institute of Engineering Science and Technology, 4Tata Consultancy Services Ltd, 5TCS Innovation Lab, 6


Abstract

Sleep arousals directly affect sleep stages and, thereby, the quality of sleep. The PhysioNet Challenge 2018 aims at correctly identifying designated target arousal and non-arousal regions from multiple simultaneously recorded biomedical signals. Our contribution lies in a feature extraction algorithm that extracts morphological and statistical features from the different biomedical signals available in the challenge-provided dataset to form a composite feature vector. The SpO2 level and the net airflow volume during respiration are found to contain significant patterns that differentiate arousal and non-arousal phases. Statistical features (e.g. kurtosis, mobility, mean, variance, skewness) and asymmetry features are extracted from five standard frequency bands of the six-channel EEG, EMG and EOG signals in a windowed approach. State-of-the-art heart rate variability features (SDNN, RMSSD, variance of heart rate) are extracted from the ECG. The 12 most significant features are empirically selected based on Maximal Information Coefficient (MIC) scores and fed to a multivariate logistic regression classifier. A total of 400 subjects are randomly selected from the PhysioNet Challenge 2018 ‘training’ dataset for quick experimentation. Our algorithm yields AUROC = 0.521 and AUPRC = 0.06 on this subset using 5-fold cross-validation. Besides these domain-independent statistical features, we are currently focusing on more domain-specific approaches based on inputs from medical professionals, such as Independent Component Analysis, Common Spatial Patterns and cascaded classification (features that specifically distinguish arousal from non-arousal, followed by RERA versus apneas). As an alternative, we are developing a multimodal deep-learning approach using a 1D convolutional neural network, which we plan to explore further during the official phase.
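A minimal Python sketch of the pipeline outlined above (band-wise statistical features, HRV features, MIC-based selection, logistic regression) is given below. It is not the authors' implementation: the sampling rate, frequency-band edges, window handling and helper names (window_features, hrv_features, select_top_k_by_mic) are illustrative assumptions, and MIC is computed with the third-party minepy package.

# Illustrative sketch only, not the challenge entry itself.
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.stats import kurtosis, skew
from sklearn.linear_model import LogisticRegression

FS = 200                                       # assumed sampling rate (Hz)
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}  # assumed band edges (Hz)

def bandpass(x, lo, hi, fs=FS, order=4):
    """Zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def hjorth_mobility(x):
    """Hjorth mobility: sqrt(var(dx) / var(x))."""
    return np.sqrt(np.var(np.diff(x)) / np.var(x))

def window_features(window):
    """Mean, variance, skewness, kurtosis and mobility per frequency band."""
    feats = []
    for lo, hi in BANDS.values():
        xb = bandpass(window, lo, hi)
        feats += [np.mean(xb), np.var(xb), skew(xb), kurtosis(xb),
                  hjorth_mobility(xb)]
    return feats

def hrv_features(rr_ms):
    """SDNN, RMSSD and heart-rate variance from RR intervals (milliseconds)."""
    hr = 60000.0 / rr_ms
    return [np.std(rr_ms), np.sqrt(np.mean(np.diff(rr_ms) ** 2)), np.var(hr)]

def select_top_k_by_mic(X, y, k=12):
    """Rank the columns of X by Maximal Information Coefficient against y and keep the top k."""
    from minepy import MINE
    mine, scores = MINE(), []
    for j in range(X.shape[1]):
        mine.compute_score(X[:, j], y)
        scores.append(mine.mic())
    return np.argsort(scores)[::-1][:k]

# Given a window-level feature matrix X and binary arousal labels y:
#   top = select_top_k_by_mic(X, y, k=12)
#   clf = LogisticRegression(max_iter=1000).fit(X[:, top], y)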