The MneExperiment Pipeline

See also
- MneExperiment class reference for details on all available methods
- Pipeline wiki page for additional information
Introduction
The MneExperiment class is a template for an MEG/EEG analysis pipeline. The pipeline is adapted to a specific experiment by creating a subclass and specifying properties of the experiment as attributes.

Once set up, an MneExperiment subclass instance provides access to the pipeline at different stages of analysis through its methods:

- .load_... methods are for loading data.
- .make_... methods are for generating various intermediate results. Most of these methods don't have to be called by the user; they are used internally when needed. The exceptions are those that require user input, like ICA component selection, which are mentioned below.
- .show_... methods are for retrieving and displaying information at different stages.
- .plot_... methods are for generating plots of the data.

An MneExperiment instance has a state, which determines what data and settings it is currently using. Not all settings are always relevant. For example, subject is relevant for steps applied separately to each subject, like make_ica_selection(), whereas group defines the group of subjects in group-level analysis, such as in load_test(). For more information, see State Parameters.
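To illustrate how these methods work together, here is a brief, hypothetical sketch (it uses the WordExperiment subclass defined in the next section; the raw and epoch names '1-40' and 'word' are placeholders that would come from the subclass definition):

>>> e = WordExperiment("/files")
>>> e.set(subject='R0001', raw='1-40')  # adjust the state
>>> e.show_subjects()                   # .show_...: display discovered subjects
>>> data = e.load_epochs(epoch='word')  # .load_...: load epoched data
>>> p = e.plot_evoked()                 # .plot_...: plot the average evoked response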
Step by Step
Setting up the file structure
The pipeline expects input files in a strictly determined folder/file structure.
In the schema below, curly brackets indicate slots that the pipeline will replace with specific names; for example, {subject} will be replaced with each specific subject's name:

root
    mri/                                 (MRI directory)
        {subject}/                       (MRI subject)
    {data_dir}/                          (M/EEG directory)
        {subject}/                       (M/EEG subject)
            {subject}-trans.fif          (trans-file)
            {subject}_{session}-raw.fif  (raw-file)
The first step in setting up the pipeline consists of:

- Arranging the files in the expected file structure (MRI and trans-file are optional and only needed for source localization)
- Defining an MneExperiment subclass with the parameters required to find those files
The {data_dir} directory in which the pipeline looks for the raw data is determined by the MneExperiment.data_dir attribute. By default it is 'meg', but it can be changed, for example, to 'eeg'. This merely makes the filenames less confusing when working with EEG data; it does not influence the analysis in any other way.

The {session} slot refers to the name of the recording session. The name of one or several recording sessions has to be specified on the MneExperiment subclass using the MneExperiment.sessions attribute. Those names are used to find the raw data files by filling in the raw-file template from above:
from eelbrain import *
class WordExperiment(MneExperiment):

    sessions = 'words'
The final step to locating the files is providing the root
location when initializing that subclass:
e = WordExperiment("/files")
If the files are in the right place, the pipeline will determine the subject names based on the names of the folders inside the M/EEG directory. Only names matching a specific pattern are considered, for example "R" followed by 3 digits. This pattern can be customized in MneExperiment.subject_re.
If that all works, and assuming the first subject is named "R0001", the pipeline will look for data at the following locations:

- The FreeSurfer MRI directory at /files/mri/R0001
- The raw data file at /files/meg/R0001/R0001_words-raw.fif (the session is called "words", as specified in WordExperiment.sessions)
- The trans-file from the coregistration at /files/meg/R0001/R0001-trans.fif
The setup can be tested using MneExperiment.show_subjects(), which shows a list of the subjects and corresponding MRIs that were discovered:
>>> e.show_subjects()
# subject mri
-----------------------------------------
0 R0026 R0026
1 R0040 fsaverage * 0.92
2 R0176 fsaverage * 0.954746600461
...
Note
To specify an alternative input data format, replace MneExperiment.raw["raw"] with a custom RawSource instance.
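For example, a minimal sketch of what such a replacement might look like for BrainVision EEG recordings; the filename pattern and reader are assumptions that would need to match the actual data:

import mne
from eelbrain.pipeline import *

class EEGExperiment(MneExperiment):

    data_dir = 'eeg'
    sessions = 'task'  # hypothetical session name

    raw = {
        # hypothetical: read BrainVision files instead of *-raw.fif
        'raw': RawSource('{subject}_{session}.vhdr', reader=mne.io.read_raw_brainvision),
        '1-40': RawFilter('raw', 1, 40),
    }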
Note
If participants come back for the experiment on multiple occasions, an MneExperiment.visits attribute might also be needed. For details see the corresponding wiki page.
Pre-processing
Make sure an appropriate pre-processing pipeline is defined as MneExperiment.raw.

To inspect the raw data for a given pre-processing step, use:
>>> e.set(raw='1-40')
>>> y = e.load_raw(ndvar=True)
>>> p = plot.TopoButterfly(y, xlim=10, w=0)
This will plot a 10 s excerpt and allow scrolling through the rest of the data.
Events
If needed, set MneExperiment.merge_triggers to handle spurious events. Then, add event labels: initially, events are only labeled with the trigger ID; use the MneExperiment.variables setting to add meaningful labels.

Events are represented as Dataset objects and can be inspected with the corresponding methods and functions, for example:
>>> e = WordExperiment("/files")
>>> data = e.load_events()
>>> data.head()
>>> print(table.frequencies('trigger', data=data))
For more complex designs and variables, you can override methods that provide complete control over the events. These are the transformations applied to the triggers extracted from the raw files (in this order):

- MneExperiment.fix_events(): change event order or timing, and remove/add events
- MneExperiment.variables: add labels based on triggers
- MneExperiment.label_events(): add any more complex labels
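As an illustration, a hedged sketch of what a label_events() override might look like; the assumption is that label_events() receives the events Dataset and returns it, and the added variable is purely hypothetical:

import numpy as np
from eelbrain import *
from eelbrain.pipeline import *

class WordExperiment(MneExperiment):

    sessions = 'words'

    variables = {
        # hypothetical trigger mapping
        'stimulus': LabelVar('trigger', {(1, 2): 'word', (3, 4): 'picture'}),
    }

    def label_events(self, ds):
        # apply the default, trigger-based labeling defined in `variables` first
        ds = super().label_events(ds)
        # hypothetical: add the serial position of each event in the recording
        ds['trial'] = Var(np.arange(ds.n_cases))
        return ds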
Defining data epochs
Once events are properly labeled, define MneExperiment.epochs.

There is one special epoch to define, called 'cov'. This is the data epoch that will be used to estimate the sensor noise covariance matrix for source estimation.

In order to find the right sel epoch parameter, it can be useful to actually load the events with MneExperiment.load_events() and test different selection strings. The epoch selection is determined by selection = event_ds.eval(epoch['sel']). Thus, a specific setting could be tested with:
>>> data = e.load_events()
>>> print(data.sub("event == 'value'"))
Bad channels
Flat channels are automatically excluded from the analysis.
An initial check for noisy channels can be done by looking at the raw data (see Pre-processing above). If this inspection reveals bad channels, they can be excluded using MneExperiment.make_bad_channels().
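For example, a sketch that assumes make_bad_channels() accepts a list of channel names (the channel name here is just an illustration):

>>> e.set(subject='R0001')
>>> e.make_bad_channels(['MEG 099'])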
Another good check for bad channels is plotting the average evoked response, and looking for channels which are uncorrelated with neighboring channels. To plot the average before trial rejection, use:
>>> data = e.load_epochs(epoch='epoch', reject=False)
>>> plot.TopoButterfly('meg', data=data)
The neighbor correlation can also be quantified, using:
>>> nc = neighbor_correlation(concatenate(data['meg']))
>>> nc.sensor.names[nc < 0.3]
Datalist(['MEG 099'])
A simple way to cycle through subjects when performing a given pre-processing step is MneExperiment.next().
If a general threshold is adequate, the selection of bad channels based on
neighbor-correlation can be automated using the
MneExperiment.make_bad_channels_neighbor_correlation()
method:
>>> for subject in e:
... e.make_bad_channels_neighbor_correlation()
ICA
If preprocessing includes ICA, select which ICA components should be removed.
To open the ICA selection GUI, the experiment's raw state needs to be set to the ICA stage of the pipeline:
>>> e.set(raw='ica')
>>> e.make_ica_selection()
See MneExperiment.make_ica_selection() for more information on display options and on how to precompute the ICA decomposition for all subjects.
When selecting ICA components for multiple subjects, a simple way to cycle through subjects is MneExperiment.next(), for example:
>>> e.make_ica_selection(epoch='epoch', decim=10)
>>> e.next()
subject: 'R1801' -> 'R2079'
>>> e.make_ica_selection(epoch='epoch', decim=10)
>>> e.next()
subject: 'R2079' -> 'R2085'
...
Trial selection
For each primary epoch that is defined, bad trials can be rejected using MneExperiment.make_epoch_selection(). Rejections are specific to a given raw state:
>>> e.set(raw='ica1-40', epoch='word')
>>> e.make_epoch_selection()
>>> e.next()
subject: 'R1801' -> 'R2079'
>>> e.make_epoch_selection()
...
To reject trials based on a pre-determined threshold, a loop can be used:
>>> for subject in e:
... e.make_epoch_selection(auto=1e-12)
...
Analysis
With preprocessing completed, there are different options for analyzing the data.

The most flexible option is loading data from the desired processing stage using one of the many .load_... methods of the MneExperiment. For example, load a Dataset with source-localized condition averages using MneExperiment.load_evoked_stc(), then test a hypothesis using one of the mass-univariate tests from the testnd module. To make this kind of analysis replicable, it is useful to write the complete analysis as a separate script that imports the experiment (see the example experiment folder); a sketch of such a script follows below.
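A hedged sketch of what such a script might look like, based on the Mouse experiment from the Example below; the exact load_evoked_stc() arguments and the name of the source-estimate column ('srcm') are assumptions:

# hypothetical analysis script; assumes mouse.py from the Example below
from eelbrain import testnd
from mouse import e

# load source-localized condition averages for all subjects, morphed to a common brain
data = e.load_evoked_stc(group='all', epoch='target', model='prediction', morph=True)

# mass-univariate related-measures t-test with cluster-based permutation correction
res = testnd.TTestRelated(
    'srcm', 'prediction', 'unexpected', 'expected', match='subject',
    data=data, samples=1000, pmin=0.05, tstart=0.2, tstop=0.5,
)
print(res.clusters)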
Many statistical comparisons can also be specified in the MneExperiment.tests attribute and then loaded directly using the MneExperiment.load_test() method. This has the advantage that the tests will be cached automatically and, once computed, can be loaded very quickly. However, these definitions are not quite as flexible as writing a custom script.
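For example, the 'surprise' test defined in the Example below could be loaded like this (a sketch; the time window, pmin and mask values are assumptions chosen for illustration):

>>> res = e.load_test('surprise', tstart=0.2, tstop=0.5, pmin=0.05, mask='lobes', make=True)
>>> res.find_clusters(0.05)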
Finally, for tests defined in MneExperiment.tests, the MneExperiment can generate HTML report files. These are generated with the MneExperiment.make_report() and MneExperiment.make_report_rois() methods.
Warning
If source files are changed (raw files, epoch rejection or bad channel
files, …) reports are not updated automatically unless the corresponding
MneExperiment.make_report()
function is called again. For this reason
it is useful to have a script to generate all desired reports. Running the
script ensures that all reports are up-to-date, and will only take seconds
if nothing has to be recomputed (for an example see make-reports.py
in
the example experiment folder).
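A minimal sketch of such a script, assuming the Mouse experiment from the Example below; the test parameters are placeholders:

# hypothetical make-reports script
from mouse import e

# regenerate reports; cached, up-to-date reports return quickly
e.make_report('surprise', tstart=0.2, tstop=0.5, pmin=0.05, mask='lobes')
e.make_report_rois('surprise', tstart=0.2, tstop=0.5, pmin=0.05, parc='frontotemporal-lh')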
Example
The following is a complete example for an experiment class definition file
(the source file can be found in the Eelbrain examples folder at
examples/mouse/mouse.py
):
# skip test: data unavailable
from eelbrain.pipeline import *


class Mouse(MneExperiment):

    # Name of the experimental session(s), used to locate *-raw.fif files
    sessions = 'CAT'

    # Pre-processing pipeline: each entry in `raw` specifies one processing step. The first parameter
    # of each entry specifies the source (another processing step or 'raw' for raw input data).
    raw = {
        # Maxwell filter as first step (taking input from raw data, 'raw')
        'tsss': RawMaxwell('raw', st_duration=10., ignore_ref=True, st_correlation=0.9, st_only=True),
        # Band-pass filter data between 1 and 40 Hz (taking Maxwell-filtered data as input, 'tsss')
        '1-40': RawFilter('tsss', 1, 40),
        # Perform ICA on filtered data
        'ica': RawICA('1-40', 'CAT', n_components=0.99),
    }

    # Variables determine how event triggers are mapped to meaningful labels. Events are represented
    # as a data-table in which each row corresponds to one event (i.e., one trigger). Each variable
    # defined here adds one column to that data-table, assigning a label or value to each event.
    variables = {
        # The first parameter specifies the source variable (here the trigger values),
        # the second parameter a mapping from source to target labels/values
        'stimulus': LabelVar('trigger', {(162, 163): 'target', (166, 167): 'prime'}),
        'prediction': LabelVar('trigger', {(162, 166): 'expected', (163, 167): 'unexpected'}),
    }

    # Epochs specify how to extract time-locked data segments ("epochs") from the continuous data.
    epochs = {
        # A PrimaryEpoch definition extracts epochs directly from continuous data. The first argument
        # specifies the recording session from which to extract the data (here: 'CAT'). The second
        # argument specifies which events to extract the data from (here: all events at which the
        # 'stimulus' variable, defined above, has a value of either 'prime' or 'target').
        'word': PrimaryEpoch('CAT', "stimulus.isin(('prime', 'target'))", samplingrate=200),
        # A secondary epoch inherits its properties from the base epoch ("word") unless they are
        # explicitly modified (here, selecting a subset of events)
        'prime': SecondaryEpoch('word', "stimulus == 'prime'"),
        'target': SecondaryEpoch('word', "stimulus == 'target'"),
        # The 'cov' epoch defines the data segments used to compute the noise covariance matrix for
        # source localization
        'cov': SecondaryEpoch('prime', tmax=0),
    }

    tests = {
        '=0': TTestOneSample(),
        'surprise': TTestRelated('prediction', 'unexpected', 'expected'),
        'anova': ANOVA('prediction * subject'),
    }

    parcs = {
        'frontotemporal-lh': CombinationParc('aparc', {
            'frontal-lh': 'parsorbitalis + parstriangularis + parsopercularis',
            'temporal-lh': 'transversetemporal + superiortemporal + '
                           'middletemporal + inferiortemporal + bankssts',
        }, views='lateral'),
    }


root = '~/Data/Mouse'
e = Mouse(root)
The event structure is illustrated by looking at the first few events:
>>> from mouse import *
>>> data = e.load_events()
>>> data.head()
trigger i_start T SOA subject stimulus prediction
--------------------------------------------------------------------
182 104273 104.27 12.04 S0001
182 116313 116.31 1.313 S0001
166 117626 117.63 0.598 S0001 prime expected
162 118224 118.22 2.197 S0001 target expected
166 120421 120.42 0.595 S0001 prime expected
162 121016 121.02 2.195 S0001 target expected
167 123211 123.21 0.596 S0001 prime unexpected
163 123807 123.81 2.194 S0001 target unexpected
167 126001 126 0.598 S0001 prime unexpected
163 126599 126.6 2.195 S0001 target unexpected
Experiment Definition
Basic setup
Set MneExperiment.owner to your email address if you want to be able to receive notifications. Whenever you run a sequence of commands with mne_experiment.notification:, you will get an email once the respective code has finished executing or run into an error, for example:
>>> e = MyExperiment()
>>> with e.notification:
... e.make_report('mytest', tstart=0.1, tstop=0.3)
...
This will send you an email as soon as the report is finished (or the program encounters an error).
Whenever an MneExperiment instance is initialized with a valid root path, it checks whether changes in the class definition invalidate previously computed results. By default, the user is prompted to confirm the deletion of invalidated results. Set auto_delete_results to True to delete them automatically without interrupting initialization.
MneExperiment caches various intermediate results. By default, if a change in the experiment definition would make cache files invalid, the outdated files are automatically deleted. Set auto_delete_cache to 'ask' to ask for confirmation before deleting files. This can be useful to prevent accidentally deleting files that take long to compute when editing the pipeline definition. When using this option, set MneExperiment.screen_log_level to 'debug' to learn about what change caused the cache to be invalid.
MneExperiment.screen_log_level determines the amount of information displayed on the screen while using an MneExperiment (see logging).
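These settings are class attributes; a brief sketch of how they might appear together in a subclass (the email address is a placeholder):

class MyExperiment(MneExperiment):

    owner = 'me@example.com'      # placeholder address for notifications
    auto_delete_results = True    # delete invalidated results without asking
    auto_delete_cache = 'ask'     # ask before deleting outdated cache files
    screen_log_level = 'debug'    # verbose logging to trace cache invalidation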
The defaults dictionary can contain default settings for experiment analysis parameters (see State Parameters), e.g.:
defaults = {
    'epoch': 'my_epoch',
    'cov': 'noreg',
    'raw': '1-40',
}
Finding files
- MneExperiment.sessions
The name, or a list of names, of the recording session(s) used to locate the raw data files (see Setting up the file structure).
- MneExperiment.data_dir
Folder name for the raw data directory. By default, this is meg, i.e., the experiment will look for raw files at root/meg/{subject}/{subject}_{session}-raw.fif. After setting data_dir = 'eeg', the experiment will look at root/eeg/{subject}/{subject}_{session}-raw.fif.
- MneExperiment.subject_re
Subjects are identified on initialization by looking for folders in the data directory (meg by default) whose name matches the MneExperiment.subject_re regular expression. By default, this is '(R|A|Y|AD|QP)(\d{3,})$', which matches R-numbers like R1234, but also numbers prefixed by A, Y, AD or QP (for information about how to define a different regular expression, see re).
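For example, a hedged sketch for a hypothetical naming scheme in which subject folders are named like "S0001":

class MyExperiment(MneExperiment):

    sessions = 'task'        # hypothetical session name
    subject_re = r'S\d{4}$'  # match subject folders like "S0001"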
Reading files
Note
Gain more control over reading files by adding a RawPipe to MneExperiment.raw.
- MneExperiment.stim_channel
By default, events are loaded from all stim channels; use this parameter to restrict events to one or several stim channels.
- MneExperiment.merge_triggers
Use a non-default merge parameter for load.mne.events().
- MneExperiment.trigger_shift
Set this attribute to shift all trigger times by a constant (in seconds). For example, with trigger_shift = 0.03 a trigger that originally occurred 35.10 seconds into the recording will be shifted to 35.13. If the trigger delay differs between subjects, this attribute can also be a dictionary mapping subject names to shift values, e.g. trigger_shift = {'R0001': 0.02, 'R0002': 0.05, ...}.
- MneExperiment.meg_system
Specify the MEG system used to acquire the data so that the right sensor neighborhood graph can be loaded. This is usually automatic, but is needed for KIT files converted with mne < 0.13. Equivalent to the sysname parameter in load.mne.epochs_ndvar() etc. For example, for data from NYU New York, the correct value is meg_system="KIT-157".
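A hedged sketch of how these attributes might be set together; the stim channel name and shift value are placeholders:

class MyExperiment(MneExperiment):

    sessions = 'task'
    stim_channel = 'STI 014'  # placeholder: read events from a single stim channel
    trigger_shift = 0.03      # constant trigger delay in seconds
    meg_system = 'KIT-157'    # value from the example above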
Pre-processing (raw)
- MneExperiment.raw
Define a pre-processing pipeline as a series of linked processing steps (mne refers to data that is not time-locked to specific events as Raw, with filenames matching *-raw.fif):
- RawFilter: Filter raw pipe
- RawICA: ICA raw pipe
- RawApplyICA: Apply ICA estimated in a RawICA pipe
- RawMaxwell: Maxwell filter raw pipe
- RawOversampledTemporalProjection: Oversampled temporal projection
- RawSource: Raw data source
- RawReReference: Re-reference EEG data
The raw data that constitutes the input to the pipeline can be accessed in a pipe named "raw". Each subsequent preprocessing step is defined with its input as the first argument (source).

For example, the following definition sets up a pipeline for MEG, using TSSS, a band-pass filter and ICA:
class Experiment(MneExperiment):

    sessions = 'session'

    raw = {
        'tsss': RawMaxwell('raw', st_duration=10., ignore_ref=True, st_correlation=0.9, st_only=True),
        '1-40': RawFilter('tsss', 1, 40),
        'ica': RawICA('1-40', 'session', 'extended-infomax', n_components=0.99),
    }
To use the raw --> TSSS --> 1-40 Hz band-pass pipeline, use e.set(raw="1-40"). To use raw --> TSSS --> 1-40 Hz band-pass --> ICA, select e.set(raw="ica").
The following is an example for EEG using band-pass filter, ICA and re-referencing:
class Experiment(MneExperiment):

    data_dir = 'eeg'
    sessions = ['stories', 'tones']

    raw = {
        '1-20': RawFilter('raw', 1, 20, cache=False),
        'ica': RawICA('1-20', 'stories'),
        'reref': RawReReference('ica', ['A1', 'A2'], 'A2'),
        # Use the ICA with a high-pass filter with a lower cutoff frequency:
        '0.2-20': RawFilter('raw', 0.2, 20, cache=False),
        '0.2-20ica': RawApplyICA('0.2-20', 'ica'),
        '0.2reref': RawReReference('0.2-20ica', ['A1', 'A2'], 'A2'),
    }
Note
Continuous files take up a lot of hard drive space. By default, files for most pre-processing steps are cached. This can be controlled with the cache parameter. To delete files corresponding to a specific step (e.g., raw='1-40'), use the MneExperiment.rm() method:
>>> e.rm('cached-raw-file', True, raw='1-40')
Events
Note
Gain more control over events by overriding MneExperiment.fix_events() and MneExperiment.label_events().
- MneExperiment.variables
Event variables add labels and variables to the events:
- LabelVar: Variable assigning labels to values
- EvalVar: Variable based on evaluating a statement
- GroupVar: Group membership for each subject
Most of the time, the main purpose of this attribute is to turn trigger values into meaningful labels:
class Mouse(MneExperiment):

    variables = {
        'stimulus': LabelVar('trigger', {(162, 163): 'target', (166, 167): 'prime'}),
        'prediction': LabelVar('trigger', {162: 'expected', 163: 'unexpected'}),
    }
This defines a variable called "stimulus"; on this variable, all events with triggers 162 and 163 have the value "target", and events with triggers 166 and 167 have the value "prime". The "prediction" variable only labels triggers 162 and 163. Unmentioned trigger values are assigned the empty string ('').
Epochs
- MneExperiment.epochs
Epochs are specified as a {name: epoch_definition} dictionary. Names are str, and epoch_definition values are instances of the classes described below:
- PrimaryEpoch: Epoch based on selecting events from a raw file
- SecondaryEpoch: Epoch inheriting events from another epoch
- SuperEpoch: Combine several other epochs
Examples:
epochs = {
    # some primary epochs:
    'picture': PrimaryEpoch('words', "stimulus == 'picture'"),
    'word': PrimaryEpoch('words', "stimulus == 'word'"),
    # use the picture baseline for the sensor covariance estimate
    'cov': SecondaryEpoch('picture', tmax=0),
    # another secondary epoch:
    'animal_words': SecondaryEpoch('noun', sel="word_type == 'animal'"),
    # a superset-epoch:
    'all_stimuli': SuperEpoch(('picture', 'word')),
}
Tests
- MneExperiment.tests
Statistical tests are defined as a {name: test_definition} dictionary. This allows automatic caching of permutation test results when using MneExperiment.load_test(). Tests are defined using the following classes:
- TTestOneSample: One-sample t-test
- TTestRelated: Related measures t-test
- TTestIndependent: Independent measures t-test (comparing groups of subjects)
- ANOVA: ANOVA test
- TContrastRelated: Contrasts of T-maps
- TwoStageTest: Two-stage test: T-test of regression coefficients
Example:
tests = {
    'my_anova': ANOVA('noise * word_type * subject'),
    'my_ttest': TTestRelated('noise', 'a_lot_of_noise', 'no_noise'),
}
Subject groups
- MneExperiment.groups
A subject group called 'all' containing all subjects is always implicitly defined. Additional subject groups can be defined in MneExperiment.groups with {name: group_definition} entries:
- Group: Group defined as a collection of subjects
- SubGroup: Group defined by removing subjects from a base group
Example:
groups = {
    'good': SubGroup('all', ['R0013', 'R0666']),
    'bad': Group(['R0013', 'R0666']),
}
Parcellations (parcs)
- MneExperiment.parcs
The parcellation determines how the brain surface is divided into regions.
A number of standard parcellations are automatically defined (see
parc/mask (parcellations) below). Additional parcellations can be defined in
the MneExperiment.parcs
dictionary with {name: parc_definition}
entries. There are a couple of different ways in which parcellations can be
defined, described below.
- SubParc: A subset of labels in another parcellation
- CombinationParc: Recombine labels from an existing parcellation
- SeededParc: Parcellation that is grown from seed coordinates
- IndividualSeededParc: Seeded parcellation with individual seeds for each subject
- FreeSurferParc: Parcellation that is created outside Eelbrain for each subject
- FSAverageParc: Fsaverage parcellation that is morphed to individual subjects
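For illustration, a hedged sketch of a parcs definition; the SubParc signature (base parcellation, label names) is an assumption, and the label names are taken from the aparc parcellation:

parcs = {
    # subset of an existing parcellation
    'auditory': SubParc('aparc', ('transversetemporal-lh', 'transversetemporal-rh')),
    # recombine labels from an existing parcellation
    'my-temporal-lh': CombinationParc('aparc', {
        'temporal-lh': 'superiortemporal + middletemporal + inferiortemporal',
    }, views='lateral'),
}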
Visualization defaults
- MneExperiment.brain_plot_defaults
The MneExperiment.brain_plot_defaults dictionary can contain options that change the defaults for brain plots (for reports and movies). The following options are available:
- surf: 'inflated' | 'pial' | 'smoothwm' | 'sphere' | 'white'
  FreeSurfer surface to use as brain geometry.
- views: str | iterator of str
  View or views to show in the figure. Can also be set for each parcellation, see MneExperiment.parcs.
- foreground: mayavi color
  Figure foreground color (i.e., the text color).
- background: mayavi color
  Figure background color.
- smoothing_steps: None | int
  Number of smoothing steps to display data.
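For example, a sketch using values chosen for illustration from the options listed above:

class MyExperiment(MneExperiment):

    brain_plot_defaults = {
        'surf': 'smoothwm',
        'views': ('lateral', 'medial'),
        'background': 'white',
        'smoothing_steps': 10,
    }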
State Parameters
These are parameters that can be set after an MneExperiment
has been
initialized to affect the analysis, for example:
>>> my_experiment = MneExperiment()
>>> my_experiment.set(raw='1-40', cov='noreg')
sets up my_experiment
to use a 1-40 Hz band-pass filter as preprocessing, and to use sensor covariance matrices without regularization. Most methods also accept state parameters, so MneExperiment.set()
does not have to be used separately.
session
Which raw session to work with (one of MneExperiment.sessions
; usually
set automatically when epoch is set)
visit
Which visit to work with (one of MneExperiment.visits
)
raw
Select the preprocessing pipeline applied to the continuous data. Options are
all the processing steps defined in MneExperiment.raw
, as well as
"raw"
for using unprocessed raw data.
subject
Any subject in the experiment (subjects are identified based on MneExperiment.subject_re
).
group
Any group defined in MneExperiment.groups
. Will restrict the analysis
to that group of subjects.
epoch
Any epoch defined in MneExperiment.epochs
. Specify the epoch on which
the analysis should be conducted.
rej (trial rejection)
Trial rejection can be turned off with e.set(rej=''), meaning that no trials are rejected, and turned back on with e.set(rej='man'), meaning that the corresponding rejection files are used.
model
While the epoch state parameter determines which events are
included when loading data, the model
parameter determines how these events
are split into different condition cells. The parameter should be set to the
name of a categorial event variable which defines the desired cells.
In the Example,
e.load_evoked(epoch='target', model='prediction')
would load responses to the target, averaged for expected and unexpected trials.
Cells can also be defined based on crossing two variables using the %
sign.
In the Example, to load corresponding primes together with
the targets, you would use
e.load_evoked(epoch='word', model='stimulus % prediction')
.
equalize_evoked_count
By default, the analysis uses all epochs marked as good during rejection.
Set equalize_evoked_count='eq' to discard trials to make sure the same number of epochs goes into each cell of the model (see the equal_count parameter of Dataset.aggregate()).

- '' (default)
  Use all epochs.
- 'eq'
  Make sure the same number of epochs n is used in each cell by discarding epochs. The first n epochs are used for each condition (assuming that habituation increases by condition).
cov
The method for correcting the sensor covariance.
- 'noreg'
  Use raw covariance as estimated from the data (do not regularize).
- 'bestreg' (default)
  Find the regularization parameter that leads to optimal whitening of the baseline.
- 'reg'
  Use the default regularization parameter (0.1).
- 'auto'
  Use automatic selection of the optimal regularization method, as described in mne.compute_covariance().
- empty_room
  Empty room covariance; for the required setup, see Empty room covariance.
- 'ad_hoc'
  Use diagonal covariance based on mne.cov.make_ad_hoc_cov().
src
The source space to use.
- ico-x: Surface source space based on icosahedral subdivision of the white matter surface with x steps (e.g., ico-4, the default).
- vol-x: Volume source space based on a volume grid with x mm resolution (x is the distance between sources, e.g. vol-10 for a 10 mm grid).
inv
What inverse solution to use for source localization. This parameter can also be
set with MneExperiment.set_inv()
, which has a more detailed description of
the options. The inverse solution can be set directly using the appropriate
string as in e.set(inv='fixed-1-MNE')
.
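For example, a hedged sketch (the values are illustrative; the set_inv() call assumes the same ori-snr-method order as the string form shown above):

>>> e.set(inv='fixed-1-MNE')      # string form from the text above
>>> e.set_inv('fixed', 1, 'MNE')  # assumed equivalent call via set_inv()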
parc / mask (parcellations)
The parcellation determines how the brain surface is divided into regions. There are a number of built-in parcellations:
- FreeSurfer Parcellations: aparc.a2005s, aparc.a2009s, aparc, aparc.DKTatlas, PALS_B12_Brodmann, PALS_B12_Lobes, PALS_B12_OrbitoFrontal, PALS_B12_Visuotopic
- cortex: All sources in cortex, based on the FreeSurfer "cortex" label.
- lobes: Modified version of PALS_B12_Lobes in which the limbic lobe is merged into the other 4 lobes.
- lobes-op: One large region encompassing occipital and parietal lobe in each hemisphere.
- lobes-ot: One large region encompassing occipital and temporal lobe in each hemisphere.
Additional parcellations can be defined in the MneExperiment.parcs attribute. Parcellations are used in different contexts:

- When loading source space data, the current parc state determines the parcellation of the source space (change the state parameter with e.set(parc='aparc')).
- When loading tests, setting the parc parameter treats each label as a separate ROI. For spatial cluster-based tests that means that no clusters can cross the boundary between two labels. On the other hand, using the mask parameter treats all named labels as a connected surface, but discards any sources labeled as "unknown". For example, loading a test with mask='lobes' will perform a whole-brain test on the cortex, while discarding subcortical sources.
Parcellations are set with their name, with the exception of SeededParc: for those, the name is followed by the radius in mm. For example, to use seeds defined in a parcellation named 'myparc' with a radius of 25 mm around each seed, use e.set(parc='myparc-25').
connectivity
Possible values: ''
, 'link-midline'
Connectivity refers to the edges connecting data channels (sensors for sensor
space data and sources for source space data). These edges are used to find
clusters in cluster-based permutation tests. For source spaces, the default is
to use FreeSurfer surfaces in which the two hemispheres are unconnected. By
setting connectivity='link-midline'
, this default connectivity can be
modified so that the midline gyri of the two hemispheres get linked at sources
that are at most 15 mm apart. This parameter currently does not affect sensor
space connectivity.
select_clusters (cluster selection criteria)
In thresholded cluster tests, clusters are initially filtered with a minimum size criterion. This can be changed with the select_clusters analysis parameter, which has the following options:
Name  | Min time | Min sources | Min sensors
------|----------|-------------|------------
'all' |          |             |
      | 10 ms    | 10          | 4
      | 25 ms    | 10          | 4
      | 25 ms    | 20          | 8
To change the cluster selection criterion use for example:
>>> e.set(select_clusters='all')