For over 20 years, the annual PhysioNet/CinC Challenges have set out to address clinically relevant questions that were hitherto not well solved. In recent years, a trend towards larger datasets can be observed, manifested for example in the more than 30,000 12-lead ECGs in the training sets of the 2020 and 2021 challenges. Thus, it comes as no surprise that the challenges are attracting an increasing number of participants from all over the world who apply, modify, and develop machine learning algorithms.
Beyond driving technological advances on specific problems, the challenges have considerable educational value. First, gamification concepts such as a leaderboard encourage groups to develop algorithms competitively, often using state-of-the-art methods. Second, retaining a diverse hidden dataset forces groups to avoid overfitting to the training data in order to succeed in the competition.
In the wake of the COVID-19 pandemic and the lockdown of universities, novel concepts for teaching had to be developed. In this paper, we report results from the class “Artificial Intelligence in Medicine Challenge”, which was implemented as an online project seminar at TU Darmstadt. In the class, students were tasked with the topic of the 2017 challenge, “AF Classification from a Short Single Lead ECG Recording”. A typical problem in such challenges is getting submissions to run on the automatic evaluation system. In our class, with a limited number of participants, this was addressed by fixing the programming language to Python, using the university's compute facilities, and implementing a human-supervised evaluation scheme. Several teams implemented approaches based on state-of-the-art algorithms, achieving F1 scores close to or above 90% on a hidden test set of Holter recordings. Moreover, the students' self-assessment reported a notable increase in machine learning knowledge.
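The F1-based scoring mentioned above can be sketched as a macro-averaged F1 over the rhythm classes. Note that this is an illustrative sketch, not the official challenge scorer, which may differ in which classes it averages; the label names and the example data below are hypothetical.

```python
def macro_f1(y_true, y_pred, labels):
    """Macro-averaged F1: the unweighted mean of per-class F1 scores.

    Illustrative sketch only; the official 2017 challenge metric is
    computed by the organizers' evaluation code and may differ in
    detail (e.g. which classes enter the average).
    """
    scores = []
    for c in labels:
        # Count true positives, false positives, and false negatives
        # for class c treated one-vs-rest.
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        denom = 2 * tp + fp + fn
        scores.append(2 * tp / denom if denom else 0.0)
    return sum(scores) / len(scores)

# Hypothetical toy example with rhythm labels
# (N = normal, A = atrial fibrillation, O = other):
truth = ["N", "N", "A", "A", "O", "N"]
preds = ["N", "N", "A", "O", "O", "N"]
print(round(macro_f1(truth, preds, ["N", "A", "O"]), 3))  # → 0.778
```

Averaging per-class F1 scores rather than overall accuracy prevents the dominant normal class from masking poor AF detection.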