
MEDUSA

MEDUSA© is an open-source, Python-based software ecosystem that facilitates the creation of brain-computer interface (BCI) systems and neuroscience experiments [1]. Its features include full compatibility with the lab streaming layer (LSL) protocol, a collection of ready-made examples for common BCI paradigms, extensive tutorials and documentation, an online app marketplace, and a robust modular design, among others.

Software architecture design

MEDUSA© follows a modular design composed of three main independent entities:

  1. MEDUSA© Platform: the platform is a Python-based user interface for visualizing biosignals and conducting real-time experiments. Primarily built on PyQt, it can be installed from binaries or run directly from the source code. Real-time visualization is fully customizable, encompassing temporal graphs, power spectral density graphs, and power-based and connectivity-based topoplots. Through a user management system, the platform allows users to install and develop apps directly linked to their accounts.
  2. MEDUSA© Kernel: the kernel is an independent PyPI package that encapsulates all the classes and functions required to record and process the biosignals of the experiments. While the MEDUSA© Platform employs it for real-time processing, the kernel can also be installed as a regular Python package in any local project (a standalone sketch follows this list). Its methods encompass linear and non-linear temporal analyses, spectral metrics, and statistical tests, as well as specialized algorithms for electroencephalography (EEG) and magnetoencephalography (MEG) data and state-of-the-art algorithms for decoding many BCI control signals. A comprehensive list of the processing algorithms can be found in the documentation.
  3. App marketplace: the MEDUSA© website lets users create and manage a profile within the ecosystem. In the app marketplace, users can explore and download open-source apps or contribute their own creations. User-developed apps may be designated as public or private, with access optionally restricted to selected individuals. At present, MEDUSA© offers a comprehensive range of BCI paradigms, encompassing code-modulated visual evoked potentials (c-VEP), P300-based spellers, motor imagery (MI), and neurofeedback (NF). Additionally, it includes a variety of cognitive psychology tests, such as the Stroop task, Go/No-Go test, Dual N-back test, Corsi Block-Tapping test, and more.
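
As a rough, standalone illustration of the kind of offline spectral analysis the kernel covers, the following sketch computes relative alpha-band power on synthetic EEG using NumPy and SciPy. It deliberately avoids guessing the kernel's own function names; the medusa-kernel API should be checked in the official documentation, and the sampling rate and band limits below are illustrative assumptions.

  import numpy as np
  from scipy.signal import welch

  fs = 256                                  # sampling rate in Hz (assumed)
  eeg = np.random.randn(10 * fs, 8)         # 10 s of synthetic 8-channel EEG

  # Welch power spectral density, estimated per channel
  freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs, axis=0)

  # Relative alpha (8-13 Hz) band power, a typical spectral metric
  alpha = (freqs >= 8) & (freqs <= 13)
  alpha_rel = psd[alpha].sum(axis=0) / psd.sum(axis=0)
  print("Relative alpha power per channel:", np.round(alpha_rel, 3))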

Main features

The main features of MEDUSA© can be summarized as follows:

  • Open-source: MEDUSA© is an open-source project, allowing users to freely access and modify the code to suit their research needs. Furthermore, the kernel can be used in any custom Python project to load biosignals recorded by MEDUSA© and/or to use the processing algorithms included in the ecosystem.
  • Fully Python-based: the platform is built entirely in Python, a high-level programming language. Applications such as BCI paradigms and neuroscience experiments can be developed in Python using PyQt, or in any other programming language by means of an asynchronous TCP/IP protocol built into the platform. Notably, many publicly available apps are developed in Unity. This choice was driven by specific requirements, such as the precise synchronization between EEG and stimuli needed for paradigms like c-VEP, or the visually appealing designs that improve user experience in MI- and NF-based apps.
  • LSL compatible: MEDUSA© can record and process signals streamed through the lab streaming layer (LSL) protocol, making it compatible with a wide range of biomedical devices. This enables multimodal studies, since users can record as many concurrent biosignal streams as the experiment requires (a minimal outlet sketch follows this list).
  • Modular design: the software is composed of (1) the platform (user interface and signal management) and (2) the kernel (a PyPI package with processing functions and classes), providing a modular and scalable framework.
  • Control signals: the ecosystem also includes pre-built examples for various BCI paradigms, such as P300, c-VEP, SMR, and NF applications, streamlining the development process for researchers.
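
Because MEDUSA© records any stream published through LSL, a device (or simulator) only needs to expose an LSL outlet to become visible to the platform. The following minimal pylsl sketch publishes a synthetic 8-channel EEG stream; the stream name, rate, and source id are illustrative placeholders.

  import time
  import numpy as np
  from pylsl import StreamInfo, StreamOutlet

  # Describe a synthetic EEG stream (name, rate, and id are placeholders)
  info = StreamInfo(name='SyntheticEEG', type='EEG', channel_count=8,
                    nominal_srate=256, channel_format='float32',
                    source_id='synthetic_eeg_001')
  outlet = StreamOutlet(info)

  # Publish random samples at ~256 Hz; any LSL-aware receiver, such as
  # the MEDUSA© Platform, can now discover and record this stream
  while True:
      outlet.push_sample(np.random.randn(8).tolist())
      time.sleep(1 / 256)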

Supported EEG-based BCI paradigms

MEDUSA© already supports most state-of-the-art noninvasive BCI paradigms utilized in scientific literature, covering both exogenous and endogenous control signals extracted from electroencephalography (EEG):

  • Code-modulated visual evoked potentials (c-VEP): MEDUSA© stands as the only general-purpose system for developing BCIs that supports c-VEP paradigms. This exogenous control signal originates in the primary visual cortex (occipital cortex) when users focus on commands that flash following a pseudorandom sequence [2]. Different selection commands are encoded with distinct sequences that display minimal correlation between them. Typically, this is achieved by employing a time series with a flat autocorrelation profile (e.g., m-sequences) and encoding commands with temporally shifted versions of the original sequence. This approach, known as the circular shifting paradigm, requires a calibration stage to extract the brain's response to the original code (the main template). Templates for the remaining commands are computed by temporally shifting the main template according to each command's lag, which enables online decoding by computing the correlation between the online response and the commands' templates (a minimal decoding sketch follows this list) [2]. It has been demonstrated that this approach can reach high accuracies (over 90%, with 2-5 s per selection) after short calibration times (1-2 min) [2]. MEDUSA© incorporates several built-in apps utilizing this paradigm: the "c-VEP Speller," which uses binary m-sequences (black & white flashes) for command encoding; and the "P-ary c-VEP Speller," which employs p-ary m-sequences encoded with different shades of grey (or custom colors) to alleviate visual fatigue [3].
  • P300 evoked potentials: the P300 is a positive event-related potential (ERP) that occurs over centro-parietal locations approximately 300 ms after the presentation of an unexpected stimulus that requires attention or cognitive processing. P300s are commonly elicited using oddball paradigms, in which sequences of repetitive stimuli are infrequently interrupted by a deviant stimulus [4]. This can be extended to provide real-time communication with noninvasive BCIs by employing the row-column paradigm (RCP). In this paradigm, a command matrix is presented to the user, with rows and columns flashing in random order. The user attends to the desired command, generating a P300 component only when the row or column containing that command flashes. By detecting the most likely row and column from the decoded P300 components, the system can determine the target command in real time (see the second sketch after this list) [4]. MEDUSA© already implements several built-in apps for P300-based BCIs, such as the "RCP speller".
  • Sensorimotor rhythms (SMR): (in progress)
  • Neurofeedback (NF): (in progress)
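
To make the circular shifting paradigm concrete, the following self-contained NumPy sketch encodes four commands as lagged copies of a 63-bit m-sequence and decodes a noisy response by template correlation. The lags, noise level, and the use of the raw code as template are illustrative simplifications; in a real system, the main template is the calibrated brain response to the code.

  import numpy as np

  def m_sequence(length=63):
      # Fibonacci LFSR with feedback taps of a degree-6 primitive
      # polynomial, yielding a binary m-sequence of period 63
      state = [1] * 6
      seq = []
      for _ in range(length):
          seq.append(state[-1])
          feedback = state[5] ^ state[0]
          state = [feedback] + state[:-1]
      return np.array(seq, dtype=float)

  code = m_sequence()

  # Circular shifting: each command is a lagged copy of the main code
  lags = [0, 15, 30, 45]                    # per-command lags (illustrative)
  templates = np.stack([np.roll(code, lag) for lag in lags])

  # Simulate an online response: command 2's template plus noise
  rng = np.random.default_rng(0)
  response = templates[2] + 0.5 * rng.standard_normal(code.size)

  # Decode by correlating the response with every command template
  scores = [np.corrcoef(response, t)[0, 1] for t in templates]
  print("Decoded command:", int(np.argmax(scores)))  # -> 2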
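
Similarly, decoding in the row-column paradigm reduces to picking the row and the column with the strongest accumulated P300 evidence. The sketch below simulates per-flash classifier scores over several stimulation sequences; the +1 offset is an illustrative stand-in for the score boost produced by the P300 at the attended row and column.

  import numpy as np

  rng = np.random.default_rng(1)
  n_rows, n_cols, n_seqs = 6, 6, 10
  target_row, target_col = 2, 4             # attended command (illustrative)

  # Simulated classifier scores for each row/column flash in each sequence
  row_scores = rng.standard_normal((n_seqs, n_rows))
  col_scores = rng.standard_normal((n_seqs, n_cols))
  row_scores[:, target_row] += 1.0          # P300 boosts the target's scores
  col_scores[:, target_col] += 1.0

  # Average the evidence across sequences and select the best row and column
  row_hat = int(np.argmax(row_scores.mean(axis=0)))
  col_hat = int(np.argmax(col_scores.mean(axis=0)))
  print("Selected cell:", (row_hat, col_hat))  # -> (2, 4)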

Supported cognitive psychology tests

(in progress)

Links

  • Website
  • LinkedIn
  • GitHub
  • Twitter
  • YouTube

References

  1. Santamaría-Vázquez, E., Martínez-Cagigal, V., Marcos-Martínez, D., Rodríguez-González, V., Pérez-Velasco, S., Moreno-Calderón, S., & Hornero, R. (2023). MEDUSA©: A novel Python-based software ecosystem to accelerate brain-computer interface and cognitive neuroscience research. Computer Methods and Programs in Biomedicine, 230, 107357. DOI: https://doi.org/10.1016/j.cmpb.2023.107357
  2. Martínez-Cagigal, V., Thielen, J., Santamaría-Vázquez, E., Pérez-Velasco, S., Desain, P., & Hornero, R. (2021). Brain–computer interfaces based on code-modulated visual evoked potentials (c-VEP): a literature review. Journal of Neural Engineering, 18(6), 061002. DOI: https://doi.org/10.1088/1741-2552/ac38cf
  3. Martínez-Cagigal, V., Santamaría-Vázquez, E., Pérez-Velasco, S., Marcos-Martínez, D., Moreno-Calderón, S., & Hornero, R. (2023). Non-binary m-sequences for more comfortable brain–computer interfaces based on c-VEPs. Expert Systems with Applications, 232, 120815. DOI: https://doi.org/10.1016/j.eswa.2023.120815
  4. Wolpaw, J., & Wolpaw, E. W. (Eds.). (2012). Brain–Computer Interfaces: Principles and Practice. Oxford University Press. DOI: https://doi.org/10.1093/acprof:oso/9780195388855.001.0001