Decoding algorithms are used to predict behavior from patterns of neural activity. Traditional decoding algorithms rely on subject-optimized models, which limits generalization and scalability to novel subjects and tasks. Building on recent advances in deep learning and large-scale data, here we develop EMGNet, an EMG foundation model for neural decoding. EMGNet was trained on over 197 hours of EMG recordings from 1,667 individuals. Uniquely, we pretrained our feature encoder with unsupervised learning on unlabeled data, followed by supervised learning on our benchmark dataset of motor behaviors. Additionally, we performed large-scale architecture searches to develop a custom encoder-decoder model composed of convolutional and transformer layers, optimized for both scalability and performance. Our model consistently outperformed the state of the art (i.e., subject-optimized models) in both in-distribution and out-of-distribution evaluations. For in-distribution evaluation, few-shot fine-tuning yielded an F1-score of 0.726, compared with 0.685 for subject-optimized models. For out-of-distribution evaluation on clinical populations, we achieved an F1-score of up to 0.877, compared with 0.477 for subject-optimized baselines. Taken together, our results highlight the value of foundation modeling for robust and generalizable neural decoding. We publicly release our pretrained weights and training pipeline so that EMGNet can support future research and development in computational neuroscience and neural-machine interfaces, analogous to the role of ImageNet in computer vision.
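To illustrate the general shape of a convolutional-plus-transformer EMG encoder of the kind described above, the following is a minimal numpy sketch. It is not EMGNet's actual architecture (the abstract does not specify layer counts, kernel sizes, or feature dimensions; all shapes and the single-head attention with identity projections here are illustrative assumptions): strided 1-D convolutions downsample the raw multi-channel EMG signal into a shorter sequence of feature tokens, and a self-attention layer then contextualizes those tokens across time.

```python
import numpy as np

def conv1d(x, w, stride=2):
    """Strided 1-D convolution with ReLU.
    x: (in_channels, time) raw or intermediate signal
    w: (out_channels, in_channels, kernel) filter bank (illustrative shapes)
    """
    out_ch, in_ch, k = w.shape
    T = (x.shape[1] - k) // stride + 1
    y = np.zeros((out_ch, T))
    for t in range(T):
        seg = x[:, t * stride : t * stride + k]            # (in_ch, k) window
        y[:, t] = np.tensordot(w, seg, axes=([1, 2], [0, 1]))
    return np.maximum(y, 0.0)                              # ReLU nonlinearity

def self_attention(x):
    """Single-head scaled dot-product attention over tokens.
    x: (T, d); query/key/value projections are omitted (identity) for brevity.
    """
    d = x.shape[1]
    scores = x @ x.T / np.sqrt(d)                          # (T, T) similarities
    scores -= scores.max(axis=1, keepdims=True)            # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)          # row-wise softmax
    return weights @ x                                     # contextualized tokens

def encode(emg, w1, w2):
    """Conv downsampling followed by attention, as in a conv+transformer encoder."""
    h = conv1d(emg, w1)        # (16, 126): first downsampling stage
    h = conv1d(h, w2)          # (32, 61): second downsampling stage
    return self_attention(h.T) # (61, 32): tokens contextualized across time

# Illustrative usage on synthetic 8-channel EMG, 256 samples long
rng = np.random.default_rng(0)
emg = rng.standard_normal((8, 256))
w1 = rng.standard_normal((16, 8, 5)) * 0.1
w2 = rng.standard_normal((32, 16, 5)) * 0.1
tokens = encode(emg, w1, w2)   # shape (61, 32)
```

In the unsupervised pretraining regime described above, an encoder like this would be trained on unlabeled recordings (e.g., via a reconstruction or contrastive objective) before a task head is fit with supervised learning; the objective itself is not specified in the abstract.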