
Commit f9ddaf3 (parent: 3ca69a1)

372 files changed: 27,921 additions and 7,708 deletions


docs/.buildinfo

Lines changed: 1 addition & 1 deletion

@@ -1,4 +1,4 @@
 # Sphinx build info version 1
 # This file records the configuration used when building these files. When it is not found, a full rebuild will be done.
-config: 928002e86064286118f490d27fcf9185
+config: b3da49418314ae0f1d898c4e4e8b292d
 tags: 645f666f9bcd5a90fca523b33c5a78b7

docs/README.html

Lines changed: 1 addition & 1 deletion

@@ -41,7 +41,7 @@
 <link rel="stylesheet" type="text/css" href="_static/sg_gallery-dataframe.css?v=2082cf3c" />
 <link rel="stylesheet" type="text/css" href="_static/sg_gallery-rendered-html.css?v=1277b6f3" />
 <link rel="stylesheet" type="text/css" href="_static/sphinx-design.min.css?v=95c83b7e" />
-<link rel="stylesheet" type="text/css" href="_static/css/custom.css?v=32f982a3" />
+<link rel="stylesheet" type="text/css" href="_static/css/custom.css?v=f118ea32" />
 <link rel="stylesheet" type="text/css" href="https://cdn.datatables.net/v/dt/dt-2.0.4/b-3.0.2/b-html5-3.0.2/datatables.min.css" />

 <!-- So that users can add custom icons -->
Binary files not shown.
Lines changed: 151 additions & 0 deletions
@@ -0,0 +1,151 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n# Time-Resolved Decoding with SlidingEstimator\n\nThis example shows how to perform time-resolved decoding of EEG signals using\n:class:`mne.decoding.SlidingEstimator`. Instead of reducing the entire trial to\na single score, a SlidingEstimator fits an independent classifier at each time\npoint, revealing *when* during a trial the neural signal carries information\nabout the mental state.\n\nThis approach is a natural alternative to pseudo-online evaluation (using\noverlapping windows): rather than simulating an online scenario by slicing\nthe raw signal with a sliding window, we directly assess decoding accuracy\nat each sample of the already-epoched trial.\n\nWe use the BNCI2014-001 motor-imagery dataset (left- vs right-hand) and apply\na logistic-regression classifier wrapped in a SlidingEstimator. For each\nsubject the score is evaluated via stratified 5-fold cross-validation using\n:func:`mne.decoding.cross_val_multiscore`, and the results are averaged across\nsubjects and visualised as a time course.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# Authors: MOABB contributors\n#\n# License: BSD (3-clause)\n# sphinx_gallery_thumbnail_number = 1\n\nimport warnings\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom mne.decoding import SlidingEstimator, cross_val_multiscore\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.preprocessing import StandardScaler\n\nimport moabb\nfrom moabb.datasets import BNCI2014_001\nfrom moabb.paradigms import LeftRightImagery\n\n\nmoabb.set_log_level(\"info\")\nwarnings.filterwarnings(\"ignore\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Loading the Dataset\n\nWe instantiate the BNCI2014-001 dataset and restrict the analysis to the\nfirst 9 subjects to keep the example reasonably fast.\n\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "dataset = BNCI2014_001()\ndataset.subject_list = dataset.subject_list[:9]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Choosing a Paradigm\n\nThe :class:`~moabb.paradigms.LeftRightImagery` paradigm extracts\nleft-hand and right-hand motor-imagery epochs, applies a band-pass filter\n(8\u201332 Hz by default), and returns the data as a 3-D NumPy array of shape\n``(n_trials, n_channels, n_times)``.\n\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "paradigm = LeftRightImagery()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Building a Time-Resolved Pipeline\n\nA :class:`~mne.decoding.SlidingEstimator` wraps any scikit-learn compatible\nestimator and fits/scores it independently at every time point.\nHere we use a simple logistic-regression classifier with Z-score\nnormalisation. The ``scoring='roc_auc'`` argument tells the estimator to\nuse AUC as the evaluation metric.\n\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))\nsliding = SlidingEstimator(clf, scoring=\"roc_auc\", n_jobs=1)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Evaluating Each Subject\n\nFor each subject we:\n\n1. Retrieve the preprocessed epochs via the paradigm.\n2. Run stratified 5-fold cross-validation with\n   :func:`~mne.decoding.cross_val_multiscore`, which returns an array of\n   shape ``(n_folds, n_times)``.\n3. Average over folds to obtain a single time course per subject.\n\nAll per-subject time courses are collected for later aggregation.\n\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "all_scores = []\n\nfor subject in dataset.subject_list:\n    X, y, meta = paradigm.get_data(dataset=dataset, subjects=[subject])\n\n    # cross_val_multiscore returns (n_folds, n_times)\n    scores = cross_val_multiscore(sliding, X, y, cv=5, n_jobs=1)\n    all_scores.append(scores.mean(axis=0))  # average over folds\n\n# Stack into (n_subjects, n_times)\nall_scores = np.array(all_scores)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Building the Time Vector\n\nThe time axis of the decoded epochs starts at ``tmin`` (0 s relative to the\nmotor-imagery cue) and ends at the trial duration defined by the dataset\n(4 s for BNCI2014-001 at 250 Hz).\n\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "sfreq = 250  # BNCI2014-001 sampling frequency\ntmin = paradigm.tmin  # 0.0 s\ntmax = dataset.interval[1] - dataset.interval[0]  # 4.0 s\ntimes = np.linspace(tmin, tmax, all_scores.shape[1])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Plotting Time-Resolved Decoding Accuracy\n\nWe plot the group-average AUC score together with the standard error of the\nmean (SEM) across subjects. A horizontal dashed line at 0.5 indicates\nchance level.\n\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "mean_scores = all_scores.mean(axis=0)\nsem_scores = all_scores.std(axis=0) / np.sqrt(len(dataset.subject_list))\n\nfig, ax = plt.subplots(figsize=(8, 4))\nax.plot(times, mean_scores, label=\"Mean AUC across subjects\", color=\"steelblue\")\nax.fill_between(\n    times,\n    mean_scores - sem_scores,\n    mean_scores + sem_scores,\n    alpha=0.3,\n    color=\"steelblue\",\n    label=\"\u00b1SEM\",\n)\nax.axhline(0.5, linestyle=\"--\", color=\"k\", label=\"Chance level (AUC = 0.5)\")\nax.axvline(0, linestyle=\":\", color=\"gray\", label=\"Cue onset\")\nax.set_xlabel(\"Time (s)\")\nax.set_ylabel(\"AUC\")\nax.set_title(\"Time-Resolved Decoding \u2013 Left vs. Right Motor Imagery\\n(BNCI2014-001)\")\nax.legend(loc=\"upper left\")\nax.set_xlim(times[0], times[-1])\nax.set_ylim(0.4, 1.0)\nplt.tight_layout()\nplt.show()"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.19"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 0
}
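The core idea of the notebook above, fitting and scoring an independent classifier at every time point, can be reproduced without MNE or MOABB. The following is a minimal sketch on synthetic data using only NumPy and scikit-learn; the array shapes mirror the notebook's ``(n_trials, n_channels, n_times)`` convention, but the data and the injected signal are purely illustrative, not from the dataset.

```python
# Minimal sketch of time-resolved ("sliding") decoding on synthetic data:
# one classifier per time point, each scored with 5-fold cross-validation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 60, 8, 50
y = rng.integers(0, 2, size=n_trials)  # two classes, as in left vs right
X = rng.normal(size=(n_trials, n_channels, n_times))

# Inject a class-dependent signal into one channel during the second half
# of the trial, so the decoding time course rises above chance there.
X[:, 0, n_times // 2:] += y[:, None] * 1.5

scores = np.empty(n_times)
for t in range(n_times):
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    # Train and score using only the features observed at time point t.
    scores[t] = cross_val_score(
        clf, X[:, :, t], y, cv=5, scoring="roc_auc"
    ).mean()

print(scores.shape)  # (50,)
```

This is essentially what `SlidingEstimator` plus `cross_val_multiscore` do internally (per fold, per time point), with the loop vectorised and parallelisable via `n_jobs`; the manual version makes the "independent classifier per sample" structure explicit.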
Binary files not shown.
