Automatic 12-lead ECG classification using a deep neural network and unsupervised pre-training

Antonio H. Ribeiro1, Daniel Gedon2, Daniel Martins Teixeira1, Manoel Horta Ribeiro3, Antonio Luiz Ribeiro1, Thomas B. Schön2, Wagner Meira Jr4
1Universidade Federal de Minas Gerais, 2Uppsala University, 3École Polytechnique Fédérale de Lausanne, 4Universidade Federal de Minas Gerais


Abstract

The 12-lead electrocardiogram (ECG) is a major diagnostic tool for cardiovascular diseases. Enhanced automated analysis tools might lead to more reliable diagnoses and improved clinical practice. Deep neural networks are models composed of stacked transformations that learn tasks from examples, and we deploy them here for automatic ECG classification. Inspired by the success of these models in natural language processing (NLP) and computer vision, we propose an end-to-end approach for the task at hand. We trained the model on the dataset of 6,877 recordings provided by the PhysioNet 2020 Challenge: 70% for optimizing the weights and 30% exclusively for evaluating different setups. We use a two-stage approach. In the first, unsupervised stage, the model is trained, given a partial ECG signal, to predict unseen samples of that signal. This stage uses no labels and is similar to pre-training approaches in NLP. In the second, supervised stage, we use a convolutional neural network, commonly used for image classification but adapted here to unidimensional signals. We use the same architecture described in Nat Commun 11, 1760 (2020) for 12-lead ECG classification; however, instead of the raw ECG signal, the output of the pre-trained block is used as input to the convolutional neural network. In the unofficial-phase submissions, without the pre-training stage, we obtained a model with an F-2 score of 0.727 and a G-2 score of 0.509. Adding the pre-training stage, we obtained scores of 0.750 and 0.523, respectively, indicating the potential of this novel approach. Deep learning is known to work best with large datasets, so we expect significant improvements in model performance as additional training data are made available by the challenge organizers. The self-supervised stage provides an interesting way to develop large end-to-end models given the scarcity of labels in the medical domain and might yield interesting future developments.
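The two-stage idea described above can be illustrated with a minimal sketch on synthetic data: an unsupervised stage that, given a window of past samples, predicts the next sample of the signal, followed by a supervised stage that consumes the pre-trained block's output instead of the raw signal. The paper's pre-training block and classifier are deep neural networks; here a simple linear autoregressive predictor fit by least squares stands in for them, and all names, window sizes, and the synthetic signal are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Stage 1 (unsupervised): build (past window -> next sample) pairs from an
# unlabeled signal and fit a predictor of the next sample. Here the predictor
# is a linear AR model solved by least squares; the paper uses a neural network.
rng = np.random.default_rng(0)
n_samples, window = 2000, 8
# Hypothetical stand-in for one ECG lead: a noisy sinusoid.
signal = np.sin(np.linspace(0, 40 * np.pi, n_samples)) + 0.1 * rng.standard_normal(n_samples)

X = np.stack([signal[i:i + window] for i in range(n_samples - window)])
y = signal[window:]
w, *_ = np.linalg.lstsq(X, y, rcond=None)  # "pre-trained" weights, no labels used

# Stage 2 (supervised): instead of the raw signal, the output of the
# pre-trained block (here, windows weighted by w) would be fed to the
# classifier, which is then trained on the labeled diagnoses.
features = X * w  # shape: (n_samples - window, window)
print(features.shape)
```

The essential point the sketch captures is that stage 1 needs only the signal itself as supervision, so it can exploit the large pool of unlabeled ECG recordings before the comparatively scarce diagnostic labels are used in stage 2.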