Do Not Sleep on Linear Models: Simple and Interpretable Techniques Outperform Deep Learning for Sleep Scoring
Over the last few years, research in automatic sleep scoring has mainly
focused on developing increasingly complex deep learning architectures.
However, these approaches have recently achieved only marginal improvements,
often at the expense of requiring more data and more costly training procedures.
Despite all these efforts and their satisfactory performance, automatic sleep
staging solutions are not yet widely adopted in clinical practice. We argue
that most deep learning solutions for sleep scoring are limited in their
real-world applicability as they are hard to train, deploy, and reproduce.
Moreover, these solutions lack interpretability and transparency, which are
often key to increasing adoption rates. In this work, we revisit the problem of
sleep stage classification using classical machine learning. Results show that
state-of-the-art performance can be achieved with a conventional machine
learning pipeline consisting of preprocessing, feature extraction, and a simple
machine learning model. In particular, we analyze the performance of a linear
model and a non-linear (gradient boosting) model. Our approach surpasses
the state of the art (trained on the same data) on two public datasets: Sleep-EDF
SC-20 (MF1 0.810) and Sleep-EDF ST (MF1 0.795), while achieving competitive
results on Sleep-EDF SC-78 (MF1 0.775) and MASS SS3 (MF1 0.817). We show that,
for the sleep stage scoring task, the expressiveness of an engineered feature
vector is on par with the internally learned representations of deep learning
models. This observation opens the door to clinical adoption, as a
representative feature vector makes it possible to leverage both the
interpretability and successful track record of traditional machine learning
models.

Comment: The first two authors contributed equally. Submitted to Biomedical
Signal Processing and Control
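The conventional pipeline the abstract describes (preprocessing, hand-crafted feature extraction, a simple classifier) can be sketched as below. This is a minimal illustration, not the paper's actual setup: the band definitions, the synthetic two-class data, and the use of scikit-learn's logistic regression are all assumptions made for the example.

```python
# Hypothetical sketch of a classical sleep-scoring pipeline:
# relative band-power features from 30-second single-channel EEG
# epochs, fed to a linear model. Illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

FS = 100  # sampling rate in Hz (as in Sleep-EDF recordings)
BANDS = {"delta": (0.5, 4), "theta": (4, 8),
         "alpha": (8, 13), "beta": (13, 30)}

def band_power_features(epoch):
    """Relative spectral power per band for one 30-s epoch."""
    freqs = np.fft.rfftfreq(epoch.size, d=1 / FS)
    psd = np.abs(np.fft.rfft(epoch)) ** 2
    total = psd.sum()
    return np.array([psd[(freqs >= lo) & (freqs < hi)].sum() / total
                     for lo, hi in BANDS.values()])

# Synthetic stand-in data: "deep sleep" epochs are delta-dominated,
# "wake" epochs are beta-dominated (a real pipeline would read
# labeled PSG epochs instead).
rng = np.random.default_rng(0)
t = np.arange(30 * FS) / FS
epochs, labels = [], []
for _ in range(200):
    wake = int(rng.integers(2))
    f = rng.uniform(13, 30) if wake else rng.uniform(0.5, 4)
    epochs.append(np.sin(2 * np.pi * f * t)
                  + 0.5 * rng.standard_normal(t.size))
    labels.append(wake)

X = np.array([band_power_features(e) for e in epochs])
y = np.array(labels)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
# The linear coefficients are directly interpretable: one signed
# weight per frequency band.
print(dict(zip(BANDS, clf.coef_[0].round(2))))
```

The interpretability argument of the abstract is visible in the last line: each feature is a named physiological quantity, so each model weight can be read as evidence for or against a stage.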
