
TSCL-LwF: A Cross-Subject Emotion Recognition Model via Multi-Scale CNN and Incremental Learning Strategy

Brain Sciences

Abstract


Background/Objectives: Wearable affective human–computer interaction increasingly relies on sparse-channel EEG signals to ensure comfort and practicality in real-life scenarios. However, the limited information provided by sparse-channel EEG, together with pronounced inter-subject variability, makes reliable cross-subject emotion recognition particularly challenging. Methods: To address these challenges, we propose a cross-subject emotion recognition model, termed TSCL-LwF, based on sparse-channel EEG. It combines a multi-scale convolutional network (TSCL) with an incremental learning strategy based on Learning without Forgetting (LwF). Specifically, TSCL captures the spatio-temporal characteristics of sparse-channel EEG, employing convolutional kernels with diverse receptive fields to extract and fuse interaction information within the local prefrontal area. The incremental learning strategy with LwF introduces a limited set of labeled target-domain data and incorporates a knowledge distillation loss to retain source-domain knowledge while enabling rapid adaptation to the target domain. Results: Experiments on the DEAP dataset show that the proposed TSCL-LwF achieves accuracies of 77.26% for valence classification and 80.12% for arousal classification. Moreover, it also exhibits superior accuracy when evaluated on the self-collected EPPVR dataset. Conclusions: The successful implementation of cross-subject emotion recognition based on sparse-channel EEG will facilitate the development of wearable EEG technologies with practical applications.
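The LwF component described above relies on a knowledge distillation loss: the adapting model is penalized when its (temperature-softened) outputs drift away from those of the frozen source-domain model. The abstract does not give the paper's exact formulation, so the sketch below is a minimal, generic version of that idea in pure Python; the temperature `T=2.0` and the function names are illustrative assumptions, not taken from the paper.

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; T > 1 softens the distribution,
    # exposing the "dark knowledge" in the non-maximal logits.
    m = max(logits)
    exps = [math.exp((z - m) / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def lwf_distillation_loss(old_logits, new_logits, T=2.0):
    """Cross-entropy between the softened outputs of the frozen
    source-domain model (old_logits) and the adapting model
    (new_logits). Minimizing this keeps the new model's responses
    close to the old model's, preserving source-domain knowledge.
    T=2.0 is a common choice in the distillation literature, not a
    value reported in this paper."""
    p_old = softmax(old_logits, T)
    p_new = softmax(new_logits, T)
    return -sum(p * math.log(q) for p, q in zip(p_old, p_new))
```

In a typical LwF setup, this term is added to the ordinary classification loss on the small labeled target-domain set, e.g. `total = ce_target + lam * lwf_distillation_loss(old, new)`, where the weight `lam` trades off plasticity against forgetting.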

Brain Sciences, Vol. 16, Article 84 (2026)


Authors

Wan, C., Tang, X., Hu, C., Yang, J., Zhang, S., & Chen, D.

  https://doi.org/10.3390/brainsci16010084
