Enhancing Cognitive Signal Processing: Advanced CNN Architectures for EEG-Inferred Digit Recognition
DOI: https://doi.org/10.62051/x569n718
Keywords: Cognitive Technology, EEG Pattern Analysis, Machine Learning, Neural Computation, Signal Decoding
Abstract
The electroencephalogram (EEG) is a non-invasive tool that records electrical activity along the scalp, offering a window into the workings of the brain. By analyzing these signals, researchers can gain insight into cognitive processes such as attention, memory, and decision-making. This study applies convolutional neural networks (CNNs) to classify EEG signals into numerical digits, addressing the challenge of decoding cognitive states through non-invasive measurement. Its core contribution is the use of extensive datasets and CNN models to evaluate how well consumer-grade EEG headsets support the interpretation of brain activity. The results show high accuracy in identifying numerical cognition, demonstrating the robustness of the methods used and suggesting their broader applicability to cognitive-state analysis. This work sets a benchmark for subsequent research in EEG-based diagnostics and points toward cognitive technologies that are increasingly personalized, accurate, and effective.
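The abstract does not specify the network architecture, so the following is only a minimal NumPy sketch of the kind of forward pass such a classifier performs: a 1D convolution over multi-channel EEG, a ReLU nonlinearity, global average pooling, and a softmax over the ten digit classes. The channel count (14), sampling window (256 samples), filter sizes, and random weights are all illustrative assumptions, not values from the paper.

```python
import numpy as np

def conv1d(x, kernels, bias):
    # x: (channels, samples); kernels: (filters, channels, width); bias: (filters,)
    f, c, w = kernels.shape
    out_len = x.shape[1] - w + 1
    out = np.zeros((f, out_len))
    for i in range(out_len):
        window = x[:, i:i + w]  # one (channels, width) slice of the signal
        out[:, i] = np.tensordot(kernels, window, axes=([1, 2], [0, 1])) + bias
    return out

def softmax(z):
    z = z - z.max()  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def classify_digit(eeg, kernels, bias, weights):
    feats = np.maximum(conv1d(eeg, kernels, bias), 0.0)  # ReLU feature maps
    pooled = feats.mean(axis=1)                          # global average pooling
    return softmax(weights @ pooled)                     # probabilities over digits 0-9

# Synthetic example: 14 channels (typical of a consumer headset) x 256 samples.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((14, 256))
kernels = rng.standard_normal((8, 14, 16)) * 0.1  # 8 temporal filters of width 16
bias = np.zeros(8)
weights = rng.standard_normal((10, 8)) * 0.1      # linear classifier head
probs = classify_digit(eeg, kernels, bias, weights)
```

In a trained model the kernels and classifier weights would be learned from labeled recordings; here they are random, so the output is only a well-formed probability vector, not a meaningful prediction.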
License
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.