An Intrinsically Explainable Method to Decode P300 Waveforms from EEG Signal Plots Based on Convolutional Neural Networks.
Keywords
ALS; BCI; CNN; EEG; P300; XAI; deep learning; waveform
Journal
Brain Sciences
ISSN: 2076-3425
Abbreviated title: Brain Sci
Country: Switzerland
NLM ID: 101598646
Publication information
Publication date: 20 Aug 2024
History:
received: 19 Jul 2024
revised: 15 Aug 2024
accepted: 16 Aug 2024
medline: 31 Aug 2024
pubmed: 31 Aug 2024
entrez: 29 Aug 2024
Status: epublish
Abstract
This work proposes an intrinsically explainable, straightforward method to decode P300 waveforms from electroencephalography (EEG) signals, overcoming the black-box nature of deep learning techniques. The method exploits the ability of convolutional neural networks to decode information from images, a domain where they have achieved remarkable performance. By plotting the EEG signal as an image, the waveform can be both visually interpreted by physicians and technicians and detected by the network, offering a straightforward way of explaining the network's decision. Detection of this pattern is used to implement a P300-based speller device, which can serve as an alternative communication channel for persons affected by amyotrophic lateral sclerosis (ALS). The method is validated through a brain-computer interface simulation on a public dataset recorded from ALS patients. Letter identification rates from the speller show that the method can identify the P300 signature in the set of 8 patients. The proposed approach achieves performance comparable to other state-of-the-art proposals while providing clinically relevant explainability (XAI).
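The core idea of the abstract (rendering an EEG epoch as an image so a convolution can operate on the signal plot rather than on raw samples) can be sketched minimally as follows. This is not the paper's implementation: the rasterization scheme, the synthetic P300-like epoch, and the hand-crafted kernel standing in for a learned CNN filter are all illustrative assumptions.

```python
import numpy as np

def rasterize_epoch(signal, height=64):
    """Draw a 1-D epoch as a binary image: one white pixel per column."""
    # Normalize amplitude to image rows (row 0 = top of the plot).
    s = (signal - signal.min()) / (np.ptp(signal) + 1e-12)
    rows = ((1.0 - s) * (height - 1)).astype(int)
    img = np.zeros((height, len(signal)), dtype=np.float32)
    img[rows, np.arange(len(signal))] = 1.0
    return img

def conv2d_valid(img, kernel):
    """Naive 'valid' 2-D cross-correlation, as in a CNN layer."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1), dtype=np.float32)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Synthetic epoch: 1 s at 256 Hz with a positive deflection around 300 ms,
# mimicking a P300 component on top of background noise (illustrative only).
rng = np.random.default_rng(0)
fs = 256
t = np.arange(fs) / fs
eeg = 0.2 * rng.standard_normal(fs) + 2.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.03 ** 2))

img = rasterize_epoch(eeg)

# Hand-crafted filter: weights grow toward the top of the image, so the
# response per column tracks the plotted amplitude. A trained CNN would
# instead learn such filters from labelled P300 / non-P300 plots.
height = img.shape[0]
kernel = ((height - 1 - np.arange(height)) / (height - 1)).reshape(-1, 1).astype(np.float32)
response = conv2d_valid(img, kernel)

# The strongest filter response falls near the plotted P300 peak (~300 ms).
peak_col = np.unravel_index(response.argmax(), response.shape)[1]
print(peak_col / fs)  # latency (s) of the strongest response, near 0.3
```

The rasterized plot is what the network "sees", so a physician can inspect the same image the classifier receives, which is the source of the method's intrinsic explainability.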
Identifiers
pubmed: 39199527
pii: brainsci14080836
doi: 10.3390/brainsci14080836
Publication types
Journal Article
Languages
eng
Grants
Funder: Instituto Tecnológico de Buenos Aires (ITBA)
ID: ITBACyT-2020