 
[[Category:Twitter Accounts]]
[[Category:LinkedIn Accounts]]

'''[https://www.medusabci.com/ MEDUSA©] is a Python-based open-source software ecosystem to facilitate the creation of brain-computer interface (BCI) systems and neuroscience experiments''' <ref>Santamaría-Vázquez, E., Martínez-Cagigal, V., Marcos-Martínez, D., Rodríguez-González, V., Pérez-Velasco, S., Moreno-Calderón, S., & Hornero, R. (2023). MEDUSA©: A novel Python-based software ecosystem to accelerate brain-computer interface and cognitive neuroscience research. ''Computer Methods and Programs in Biomedicine'', ''230'', 107357, DOI: https://doi.org/10.1016/j.cmpb.2023.107357</ref>. The software offers a range of features, including full compatibility with the lab streaming layer (LSL) protocol, a collection of ready-made examples for common BCI paradigms, extensive tutorials and [https://docs.medusabci.com/ documentation], an online app marketplace, and a robust modular design.
  
 
== Software architecture design ==

MEDUSA© comprises a modular design composed of three main independent entities:
 
# '''[https://www.medusabci.com/solutions/medusa-platform/ MEDUSA© Platform]''': the platform is a Python-based user interface for visualizing biosignals and conducting real-time experiments. Primarily built on PyQt, it offers straightforward installation via [https://www.medusabci.com/ binaries] or execution from the [https://github.com/medusabci/medusa-platform source code]. Real-time visualization is fully customizable, encompassing temporal graphs, power spectral density graphs, and power-based and connectivity-based topoplots. Through a user management system, the platform allows users to install and develop apps directly linked to their accounts.
# '''[https://www.medusabci.com/solutions/medusa-kernel/ MEDUSA© Kernel]''': the kernel is an independent [https://pypi.org/project/medusa-kernel/ PyPI package] that encapsulates all the classes and functions required to record and process the biosignals of the experiments. While the MEDUSA© Platform employs it for real-time processing, the kernel can also be installed as a Python package in any local Python project (see the sketch after this list). Its processing algorithms encompass linear and non-linear temporal methods, spectral metrics, and statistical analyses, as well as specialized methods for electroencephalographic (EEG) and magnetoencephalographic (MEG) data and state-of-the-art algorithms for decoding common BCI control signals. A comprehensive list of the processing algorithms can be found in the [https://docs.medusabci.com/kernel documentation].
# '''[https://www.medusabci.com/market/ App marketplace]''': the MEDUSA© website allows users to create and manage a profile within the ecosystem. Within the app marketplace, users can explore and download open-source apps or contribute their own creations. User-developed apps may be designated as public or private, with accessibility options for selected individuals. Presently, MEDUSA© offers a comprehensive range of BCI paradigms, encompassing code-modulated visual evoked potentials (c-VEP), P300-based spellers, motor imagery (MI), and neurofeedback (NF). Additionally, it includes a variety of cognitive psychology tests such as the [[wikipedia:Stroop_effect|Stroop task]], [[wikipedia:Go/no-go|Go/No-Go test]], Dual [[wikipedia:N-back|N-back test]], [[wikipedia:Corsi_block-tapping_test|Corsi Block-Tapping test]], and more.
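
For example, the kernel can be used outside the platform in any Python environment. The snippet below is a minimal sketch of that workflow; the commented-out import and <code>Recording</code> class are illustrative assumptions rather than the documented API, so consult the [https://docs.medusabci.com/kernel kernel documentation] for the actual module paths.

<syntaxhighlight lang="python">
# Install once with: pip install medusa-kernel
# NOTE: illustrative sketch only. The commented medusa import below is an
# assumption; see https://docs.medusabci.com/kernel for the real API.
import numpy as np
from scipy.signal import welch

# Hypothetical: load a recording saved by the MEDUSA(c) Platform
# from medusa.components import Recording
# rec = Recording.load("subject01_session01")
# eeg, fs = rec.eeg.signal, rec.eeg.fs

# Placeholder data so the sketch runs stand-alone: 10 s of 8-channel "EEG"
fs = 256
eeg = np.random.randn(fs * 10, 8)

# Example offline analysis: mean alpha-band (8-13 Hz) power per channel
f, psd = welch(eeg, fs=fs, axis=0)
alpha_power = psd[(f >= 8) & (f <= 13)].mean(axis=0)
print("Alpha-band power per channel:", alpha_power)
</syntaxhighlight>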
  
 
== Main features ==

The main features of MEDUSA© can be summarized as follows:
  
* '''Open-source''': MEDUSA© is an open-source project, allowing users to freely access and modify the code to suit their research needs. Furthermore, the [https://www.medusabci.com/solutions/medusa-kernel/ kernel] can be used in any custom Python project to load biosignals recorded by MEDUSA© and/or to use the processing algorithms included in the ecosystem.
* '''Fully Python-based''': the platform is built entirely in Python, a high-level programming language. Applications such as BCI paradigms and neuroscience experiments can be developed in Python using PyQt, or in any other programming language by using the built-in asynchronous TCP/IP protocol to communicate with the platform. Notably, many publicly available apps are developed in Unity. This choice was driven by specific requirements, such as precise synchronization between EEG and stimuli for paradigms like c-VEP, or to enhance the user experience through visually appealing designs, as seen in MI- and NF-based apps.
* '''LSL compatible''': MEDUSA© can record and process signals streamed through the [https://labstreaminglayer.org/#/ lab streaming layer] (LSL) protocol, making it compatible with a wide range of biomedical devices. This feature also enables multimodal studies, allowing any number of biosignals to be used concurrently within the same experiment (see the LSL sketch after this list).
* '''Modular design''': the software is composed of (1) the [https://www.medusabci.com/solutions/medusa-platform/ platform] (user interface and signal management) and (2) the [https://www.medusabci.com/solutions/medusa-kernel/ kernel] (a [https://pypi.org/project/medusa-kernel/ PyPI package] with processing functions and classes), providing a modular and scalable framework.
* '''Control signals''': the ecosystem also includes pre-built examples for various BCI paradigms, such as P300, c-VEP, SMR, and NF applications, streamlining the development process for researchers.
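
As a concrete illustration of the LSL workflow, the sketch below publishes a synthetic EEG stream with the [https://pypi.org/project/pylsl/ pylsl] package; any LSL-aware receiver, including the MEDUSA© Platform, can then discover and record it. The stream name, channel count, and sampling rate are arbitrary values chosen for the example.

<syntaxhighlight lang="python">
# Minimal LSL outlet streaming synthetic 8-channel "EEG" at 256 Hz.
# Requires: pip install pylsl numpy
import time
import numpy as np
from pylsl import StreamInfo, StreamOutlet

FS, N_CH = 256, 8  # arbitrary example values

# Declare the stream: name, content type, channels, rate, format, source id
info = StreamInfo('SyntheticEEG', 'EEG', N_CH, FS, 'float32', 'demo_uid_001')
outlet = StreamOutlet(info)

print("Streaming; an LSL receiver such as the MEDUSA(c) Platform can record it.")
t_end = time.time() + 10.0            # stream for 10 s in this demo
while time.time() < t_end:
    outlet.push_sample(np.random.randn(N_CH).tolist())  # one multichannel sample
    time.sleep(1.0 / FS)
</syntaxhighlight>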
  
== Supported EEG-based BCI paradigms ==

MEDUSA© already supports most state-of-the-art noninvasive BCI paradigms in the scientific literature, covering both exogenous and endogenous control signals extracted from electroencephalography (EEG):
* '''Code-modulated visual evoked potentials (c-VEP)''': MEDUSA© stands as the only general-purpose system for developing BCIs that supports c-VEP paradigms. This exogenous control signal originates in the primary visual cortex (occipital cortex) when users focus on flashing commands that follow a pseudorandom sequence <ref name=":0">Martínez-Cagigal, V., Thielen, J., Santamaria-Vazquez, E., Pérez-Velasco, S., Desain, P., & Hornero, R. (2021). Brain–computer interfaces based on code-modulated visual evoked potentials (c-VEP): a literature review. ''Journal of Neural Engineering'', ''18''(6), 061002, DOI: https://doi.org/10.1088/1741-2552/ac38cf</ref>. Different selection commands are encoded with distinct sequences that display minimal correlation between them. Typically, this is achieved by employing a time series with a flat autocorrelation profile (e.g., [[wikipedia:Maximum_length_sequence|m-sequences]]) and encoding commands with temporally shifted versions of the original sequence. This approach, known as the circular shifting paradigm, requires a calibration stage to extract the brain's response to the original code (main template). Templates for the remaining commands are computed by temporally shifting the main template according to each command's lag, which enables online decoding by computing the correlation between the online response and the commands' templates <ref name=":0" /> (a minimal decoding sketch follows this list). It has been demonstrated that this approach can reach high accuracies (over 90%, with 2-5 s per selection) after short calibration times (1-2 min) <ref name=":0" />. MEDUSA© incorporates several built-in apps utilizing this paradigm: the [https://www.medusabci.com/market/cvep_speller/ "''c-VEP Speller''"], which utilizes binary m-sequences (black & white flashes) for command encoding; and the [https://www.medusabci.com/market/pary_cvep/ "''P-ary c-VEP Speller''"], which employs p-ary m-sequences encoded with different shades of grey (or custom colors) to alleviate visual fatigue <ref>Martínez-Cagigal, V., Santamaría-Vázquez, E., Pérez-Velasco, S., Marcos-Martínez, D., Moreno-Calderón, S., & Hornero, R. (2023). Non-binary m-sequences for more comfortable brain–computer interfaces based on c-VEPs. ''Expert Systems with Applications'', ''232'', 120815, DOI: https://doi.org/10.1016/j.eswa.2023.120815</ref>.
* '''P300 evoked potentials''': the P300 is a positive event-related potential (ERP) that occurs over centro-parietal locations approximately 300 ms after the presentation of an unexpected stimulus that requires attention or cognitive processing. P300s are commonly elicited using [[wikipedia:Oddball_paradigm|oddball]] paradigms, where sequences of repetitive stimuli are infrequently interrupted by a deviant stimulus <ref name=":1">Wolpaw, Jonathan, and Elizabeth Winter Wolpaw (eds), ''Brain–Computer Interfaces: Principles and Practice'' (2012; online edn, Oxford Academic, 24 May 2012), DOI: <nowiki>https://doi.org/10.1093/acprof:oso/9780195388855.001.0001</nowiki></ref>. This can be extended to provide real-time communication using noninvasive BCIs by employing the row-column paradigm (RCP). In this paradigm, a command matrix is presented to the user, with rows and columns flashing in random order. The user must attend to the desired command, generating a P300 component only when the row or column containing that command flashes. By detecting the most likely row and column from the decoded P300 components, the system can determine the target command in real time <ref name=":1" />. MEDUSA© already implements several built-in apps for P300-based BCIs, such as the "[https://www.medusabci.com/market/rcp_speller/ RCP speller]".
* '''Sensorimotor rhythms (SMR)''': (in progress)
* '''Neurofeedback (NF)''': (in progress)
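
To make the circular shifting paradigm concrete, the following self-contained sketch generates a 63-bit m-sequence with a linear-feedback shift register, builds lag-shifted templates for four hypothetical commands, and decodes a noisy response by correlation. It is a toy example under simplifying assumptions (the code itself stands in for the calibrated main template, and the taps and lags are arbitrary choices), not MEDUSA©'s actual implementation.

<syntaxhighlight lang="python">
# Toy circular-shifting c-VEP decoder (illustrative, not MEDUSA(c) code).
import numpy as np

def m_sequence(taps=(6, 5), n_bits=6):
    """Binary m-sequence of length 2**n_bits - 1 from a Fibonacci LFSR."""
    state = [1] * n_bits
    seq = []
    for _ in range(2 ** n_bits - 1):
        seq.append(state[-1])
        fb = 0
        for t in taps:            # XOR the tapped stages
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return np.array(seq, dtype=float)

rng = np.random.default_rng(0)
code = m_sequence()                          # length-63 m-sequence
lags = [0, 9, 18, 27]                        # one circular lag per command

# Calibration would estimate the brain's response to the original code
# (the "main template"); here the code itself is a stand-in.
templates = np.stack([np.roll(code, lag) for lag in lags])

# Simulate an online response to command 2 (lag 18) plus noise
response = np.roll(code, 18) + 0.5 * rng.standard_normal(code.size)

# Decode: pick the command whose shifted template correlates best
scores = [np.corrcoef(response, t)[0, 1] for t in templates]
print("Decoded command:", int(np.argmax(scores)))  # -> 2 with this seed
</syntaxhighlight>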

== Supported cognitive psychology tests ==

(in progress)
  

== References ==
<references />