Eelbrain¶
Eelbrain is an open-source Python package for accessible statistical analysis of MEG and EEG data. It is maintained by Christian Brodbeck at the Computational sensorimotor systems lab at University of Maryland, College Park.
If you use Eelbrain in work that is published, please acknowledge it by citing it with the appropriate version and DOI.
Manual¶
Installing¶
Note
Because of the fluid nature of Python development, the recommended way of installing Eelbrain changes occasionally. For up-to-date information, see the corresponding Eelbrain wiki page.
Getting Started¶
Documentation¶
For an introduction to Eelbrain, see Introduction and the other Examples. For details on each functionality see the API Reference.
MacOS: Framework Build¶
On macOS, the GUI framework that Eelbrain uses requires a special build of Python called a “Framework build”. You might see this error when trying to create a plot:
SystemExit: This program needs access to the screen.
Please run with a Framework build of python, and only when you are
logged in on the main display of your Mac.
In order to avoid this, Eelbrain installs a shortcut to start IPython with a Framework build:
$ eelbrain
This automatically launches IPython with the “eelbrain” profile. A default startup script that executes `from eelbrain import *` is created, and can be changed in the corresponding IPython profile.
Quitting IPython¶
Sometimes IPython seems to get stuck after this line:
Do you really want to exit ([y]/n)? y
In those instances, pressing ctrl-c usually terminates IPython immediately.
Windows: Scrolling¶
Scrolling inside a plot's axes normally uses the arrow keys, but this is currently not possible on Windows (due to an issue in Matplotlib). Instead, the following keys can be used:
↑: i
↓: k
←: j
→: l
Changes¶
New in 0.32¶
- Requires at least Python 3.7
- API changes:
  - Consistent class names for tests in `test`, `testnd` and `pipeline`.
  - `plot.Timeplot` argument order: second and third argument switched to facilitate plotting a single category.
- `boosting()`:
  - Different `tstart`/`tstop` for different predictors (contributed by Joshua Kulasingham)
  - Cross-validation of model fit (`test` parameter)
- `plot.Style` to control advanced plotting options by category (see the Boxplot example).
- `MneExperiment` pipeline:
  - Methods with a `decim` parameter now also have a `samplingrate` parameter.
  - More control over Events.
New in 0.31¶
- API changes:
  - `Var` and `NDVar` argument order changed to be consistent with other data objects.
  - `combine()`: Combining `NDVar` objects with unequal dimensions will now raise an error; to combine them by taking the intersection of valid elements (previous behavior), use `dim_intersection=True`.
  - `Dataset.save_txt()`: `delim` parameter renamed to `delimiter`.
  - `testnd` API: For permutation tests, the `samples` parameter now defaults to 10,000 (previously 0).
  - `table.difference()`: The `by` parameter is deprecated, use `match` instead.
  - `NDVar.smooth()` with a window with an even number of samples, and `BoostingResult.h` for `boosting()` with a basis with an even number of samples: the time axis is now consistent with `scipy.signal.convolve()` (was previously shifted by half a sample).
  - `testnd.LMGroup.coefficients_dataset()` now returns a wide form table by default.
  - `plot.Topomap.mark_sensors()`, `plot.TopomapBins.mark_sensors()` and `plot.SensorMap.mark_sensors()`: The second argument now specifies the axis to mark.
- New functions:
  - `plot.Boxplot`: Accepts additional arguments (`label_fliers` and `boxplot()` parameters).
  - `plot.BarplotHorizontal`: Horizontal bar-plot.
  - Non-parametric univariate tests `test.MannWhitneyU` and `test.WilcoxonSignedRank`.
- `MneExperiment` pipeline:
  - `SubParc`: Simplified subset parcellation.
New in 0.30¶
- Support for vector data (with many contributions from Proloy Das):
  - `Space` dimension to represent physical space
  - `VolumeSourceSpace` to represent volume source spaces
  - Statistical tests: `testnd.Vector`, `testnd.VectorDifferenceRelated`
  - Plotting with `plot.GlassBrain`
- ICA-GUI: tool to find high amplitude signals
- Documentation: New Examples section
- `Dataset.summary()` method
- Element-wise `maximum()` and `minimum()` functions for `NDVar` objects
- `MneExperiment` pipeline:
  - `RawApplyICA` preprocessing pipe to apply ICA estimated in a different branch of the pipeline.
  - `MneExperiment.load_evoked_stc()` API more closely matches `MneExperiment.load_epochs_stc()`
  - `MneExperiment.load_neighbor_correlation()`
New in 0.29¶
- API: Better default parameters for `resample()`
- Predictor-specific stopping for `boosting()`
- New `NDVar` function `correlation_coefficient()`
- Plotting:
  - General layout parameters for plot size relative to screen size
  - Better plots for masked statistic maps
- `MneExperiment` pipeline:
  - API: `MneExperiment.make_rej()` renamed to `MneExperiment.make_epoch_selection()`
  - Object-based definitions (see The MneExperiment Pipeline)
  - New method `MneExperiment.plot_raw()`
New in 0.28¶
- Transition to Python 3.6
- API changes:
  - `testnd.ANOVA`: The `match` parameter is now determined automatically and does not need to be specified anymore in most cases.
  - `testnd.TTestOneSample.diff` renamed to `testnd.TTestOneSample.difference`.
  - `plot.Histogram`: following `matplotlib`, the `normed` parameter was renamed to `density`.
  - Previously capitalized argument and attribute names `Y`, `X` and `Xax` are now lowercase.
  - Topomap-plot argument order changed to provide consistency between different plots.
- `NDVar` and `Var` support for `round(x)`
- `MneExperiment` pipeline:
  - Independent measures t-test
New in 0.27¶
- API changes:
  - To change the parcellation of an `NDVar` with source-space data, use the new function `set_parc()`. The `SourceSpace.set_parc()` method has been removed because dimension objects should be treated as immutable, as they can be shared between different `NDVar` instances. Analogously, `UTS.set_tmin()` is now `set_tmin()`.
  - `table.frequencies()`: If the input `y` is a `Var` object, the output will also be a `Var` (was `Factor`).
  - `NDVar.smooth()`: window-based smoothing now uses a symmetric window, which can lead to slightly different results.
- New functions and objects:
  - `concatenate()`: concatenate multiple `NDVar` objects to form a new dimension.
  - `plot.brain.SequencePlotter` to plot multiple anatomical images on one figure.
- New methods:
  - `NDVar.ols()`: regress on a dimension.
New in 0.26¶
- API changes:
  - A new global `configure()` function replaces module-level configuration functions.
  - `Dataset`: when a one-dimensional array is assigned to an unused key, the array is now automatically converted to a `Var` object.
  - `SourceSpace.vertno` has been renamed to `SourceSpace.vertices`.
- Plotting:
  - The new `name` argument allows setting the window title without adding a title to the figure.
  - Plots that represent time have a new method to synchronize the time axis on multiple plots: `link_time_axis()`.
  - Plot source space time series: `plot.brain.butterfly()`
- ANOVAs now support mixed models with between- and within-subjects factors (see examples at `test.ANOVA`).
- `load.fiff`: when generating epochs from raw data, a new `tstop` argument allows specifying the time interval exclusive of the last sample.
- `MneExperiment` pipeline:
  - `MneExperiment.reset()` (replacing `MneExperiment.store_state()` and `MneExperiment.restore_state()`)
  - New `MneExperiment.auto_delete_results` attribute to control whether invalidated results are automatically deleted.
  - `MneExperiment.screen_log_level`
New in 0.25¶
- Installation with `conda` (see Installing) and `$ eelbrain` launch script (see Getting Started).
- GUI/plotting:
  - When using IPython 5 or later, GUI start and stop is now automatic. It is possible to revert to the old behavior with `plot.configure()`.
  - There are new hotkeys for most plots (see the individual plots' help for details).
  - Plots automatically rescale when the window is resized.
- `MneExperiment`:
  - A new `MneExperiment.sessions` attribute replaces `defaults['experiment']`, with support for multiple sessions in one experiment (see Setting up the file structure).
  - The `MneExperiment.epochs` parameter `sel_epoch` has been removed, use `base` instead.
  - The setting `raw='clm'` has been renamed to `raw='raw'`.
  - Custom preprocessing pipelines (see `MneExperiment.raw`).
  - The `model` parameter for ANOVA tests is now optional (see `MneExperiment.tests`).
- Reverse correlation using `boosting()`.
- Loading and saving `*.wav` files (`load.wav()` and `save.wav()`).
New in 0.24¶
- API:
  - `MneExperiment`: For data from the NYU New York system converted with `mne` < 0.13, the `MneExperiment.meg_system` attribute needs to be set to `"KIT-157"` to distinguish it from data collected with the KIT UMD system.
  - `masked_parameter_map()` method of cluster-based test results: use of `pmin=None` is deprecated, use `pmin=1` instead.
- New test: `test.TTestRelated`.
- `MneExperiment.make_report_rois()` includes corrected p-values in reports for tests in more than one ROI.
- `MneExperiment.make_rej()` now has a `decim` parameter to improve display performance.
- `MneExperiment`: BEM-solution files are now created dynamically with `mne` and are not cached any more. This can lead to small changes in results due to improved numerical precision. Delete old files to free up space with `mne_experiment.rm('bem-sol-file', subject='*')`.
- New `MneExperiment.make_report_coreg()` method.
- New `MneExperiment` analysis parameter: connectivity.
- `plot.TopoButterfly`: press `Shift-T` for a large topo-map with sensor names.
New in 0.23¶
- API: `plot.colors_for_twoway()` and `plot.colors_for_categorial()`: new color model, different options.
- `testnd.TContrastRelated` contrasts can contain `*` to include the average of multiple cells.
- New `NDVar` methods: `NDVar.envelope()`, `NDVar.fft()`.
New in 0.22¶
- Epoch Rejection GUI:
- New “Tools” menu.
- New “Info” tool to show summary info on the rejection.
- New “Find Bad Channels” tool to automatically find bad channels.
- Set marked channels by clicking on topo-map.
- Faster page redraw.
- `plot.Barplot` and `plot.Boxplot`: new `cells` argument to customize the order of bars/boxes.
- `MneExperiment`: new method `MneExperiment.show_rej_info()`.
- `NDVar`: new method `NDVar.label_clusters()`.
- `plot.configure()`: option to revert to wxPython backend for `plot.brain`.
New in 0.21¶
- `MneExperiment`:
  - New epoch parameters: `trigger_shift` and `vars` (see `MneExperiment.epochs`).
  - `load_selected_events()`: new `vardef` parameter to load variables from a test definition.
  - Log files stored in the root directory.
  - Parcellations (`MneExperiment.parcs`) based on combinations can also include split commands.
- New `Factor` method: `Factor.floodfill()`.
- `Model` methods: `get_table()` replaced with `as_table()`, new `head()` and `tail()`.
- API: `.sort_idx` methods renamed to `.sort_index`.
New in 0.20¶
- `MneExperiment`: new analysis parameter `select_clusters='all'` to keep all clusters in cluster tests (see select_clusters (cluster selection criteria)).
- Use `testnd.configure()` to limit the number of CPUs that are used in permutation cluster tests.
New in 0.19¶
- Two-stage tests (see `MneExperiment.tests`).
- Safer cache-handling. See note at Analysis.
- `Dataset.head()` and `Dataset.tail()` methods for more efficiently inspecting partial Datasets.
- The default format for plots in reports is now SVG since they are displayed correctly in Safari 9.0. Use `plot.configure()` to change the default format.
- API: Improvements in `plot.Topomap` with concomitant changes in the constructor signature. For examples see the meg/topographic plotting example.
- API: `plot.ColorList` has a new argument called labels.
- API: `testnd.ANOVA` attribute `probability_maps` renamed to `p`, analogous to other `testnd` results.
- Rejection-GUI: The option to plot the data range only has been removed.
New in 0.18¶
- API: The first argument for `MneExperiment.plot_annot()` is now parc.
- API: the `fill_in_missing` parameter to `combine()` has been deprecated and replaced with a new parameter called `incomplete`.
- API: Several plotting functions have a new `xticklabels` parameter to suppress x-axis tick labels (e.g. `plot.UTSStat`).
- The objects returned by `plot.brain` plotting functions now contain a `plot_colorbar()` method to create a corresponding `plot.ColorBar` plot.
- New function `choose()` to combine data in different `NDVars` on a case by case basis.
- Rejection-GUI (`gui.select_epochs()`): Press Shift-i when hovering over an epoch to enter channels for interpolation manually.
- `MneExperiment.show_file_status()` now shows the last modification date of each file.
- Under OS X 10.8 and newer, running code under a notifier statement now automatically prevents the computer from going to sleep.
New in 0.17¶
- `MneExperiment.brain_plot_defaults` can be used to customize PySurfer plots in movies and reports.
- `MneExperiment.trigger_shift` can now also be a dictionary mapping subject name to shift value.
- The rejection GUI now allows selecting individual channels for interpolation using the 'i' key.
- Parcellations based on combinations of existing labels, as well as parcellations based on regions around points specified in MNI coordinates, can now be defined in `MneExperiment.parcs`.
- Source space `NDVar` can be indexed with lists of region names, e.g., `ndvar.sub(source=['cuneus-lh', 'lingual-lh'])`.
- API: `plot.brain.bin_table()` function signature changed slightly (more parameters, new `hemi` parameter inserted to match other functions' argument order).
- API: `combine()` now raises `KeyError` when trying to combine `Dataset` objects with unequal keys; set `fill_in_missing=True` to reproduce previous behavior.
- API: Previously, `Var.as_factor()` mapped unspecified values to `str(value)`. Now they are mapped to `''`. This also applies to `MneExperiment.variables` entries with unspecified values.
New in 0.16¶
- New function for plotting a legend for annot-files: `plot.brain.annot_legend()` (automatically used in reports).
- Epoch definitions in `MneExperiment.epochs` can now include a `'base'` parameter, which will copy the given "base" epoch and modify it with the current definition.
- `MneExperiment.make_mov_ttest()` and `MneExperiment.make_mov_ga_dspm()` are fixed but require PySurfer 0.6.
- New function: `table.melt_ndvar()`.
- API: `plot.brain` function signatures changed slightly to accommodate more layout-related arguments.
- API: use `Brain.image()` instead of `plot.brain.image()`.
New in 0.15¶
- The Eelbrain package on PyPI is now compiled with Anaconda. This means that the package can now be installed into an Anaconda distribution with `pip`, whereas `easy_install` has to be used for the Canopy distribution.
- GUI `gui.select_epochs()`: Set marked channels through menu (View > Mark Channels).
- Datasets can be saved as tables in RTF format (`Dataset.save_rtf()`).
- API `plot.Timeplot`: the default spread indicator changed to SEM, and there is a new argument for timelabels.
- API: `test.anova` is now a function with a slightly changed signature. The old class has been renamed to `test.ANOVA`.
- API: `test.oneway` was removed. Use `test.ANOVA`.
- API: the default value of the `plot.Timeplot` parameter bottom changed from 0 to None (determined by the data).
- API: `Factor.relabel()` renamed to `Factor.update_labels()`.
- Plotting: New option for the figure legend `'draggable'` (drag the legend with the mouse pointer).
New in 0.14¶
- API: the `plot.Topomap` argument sensors changed to sensorlabels.
- GUI: The python/Quit Eelbrain menu command now closes all windows to ensure that unsaved documents are handled properly. In order to yield to the terminal without closing windows, use the Go/Yield to Terminal command (Command-Alt-Q).
- `testnd.TContrastRelated`: support for unary operation abs.
New in 0.13¶
- The `gui.select_epochs()` GUI can now also be used to set bad channels. `MneExperiment` subclasses will combine bad channel information from rejection files with bad channel information from bad channel files. Note that while bad channel files set bad channels for a given raw file globally, rejection files set bad channels only for the given epoch.
- `Factor` objects can now remember a custom cell order which determines the order in tables and plots.
- The `Var.as_factor()` method can now assign all unmentioned codes to a default value.
- `MneExperiment`:
  - API: Subclasses should remove the `subject` and `experiment` parameters from `MneExperiment.label_events()`.
  - API: `MneExperiment` can now be imported directly from `eelbrain`.
  - API: The `MneExperiment._defaults` attribute should be renamed to `MneExperiment.defaults`.
  - A draft for a guide at The MneExperiment Pipeline.
  - Cached files are now saved in a separate folder at `root/eelbrain-cache`. The cache can be cleared using `MneExperiment.clear_cache()`. To preserve cached test results, move the `root/test` folder into the `root/eelbrain-cache` folder.
New in 0.12¶
- API: `Dataset` construction changed, allows setting the number of cases in the Dataset.
- API: `plot.SensorMap2d` was renamed to `plot.SensorMap`.
- `MneExperiment`:
  - API: The default number of samples for reports is now 10,000.
  - New epoch parameter `'n_cases'`: raise an error if an epoch definition does not yield the expected number of trials.
  - A custom baseline period for epochs can now be specified as a parameter in the epoch definition (e.g., `'baseline': (-0.2, -0.1)`). When loading data, specifying `baseline=True` uses the epoch's custom baseline.
New in 0.11¶
- `MneExperiment`:
  - Change in the way the covariance matrix is defined: The epoch for the covariance matrix should be specified in `MneExperiment.epochs['cov']`. The regularization is no longer part of `set_inv()`, but is instead set with `MneExperiment.set(cov='reg')` or `MneExperiment.set(cov='noreg')`.
  - New option `cov='bestreg'` automatically selects the regularization parameter for each subject.
- `Var.as_factor()` allows more efficient labeling when multiple values share the same label.
- API: What was previously `plot.configure_backend()` is now `plot.configure()`.
New in 0.10¶
- Tools for generating colors for categories (see Plotting).
- Plots now all largely respect matplotlib rc-parameters (see Customizing Matplotlib).
- Fixed an issue in the `testnd` module that could affect permutation based p-values when multiprocessing was used.
New in 0.9¶
- `Factor` API change: The `rep` argument was renamed to `repeat`.
- T-values for regression coefficients through `NDVar.ols_t()`.
- `MneExperiment`: subject name patterns and eog_sns are now handled automatically.
- `UTSStat` and `Barplot` plots can use pooled error for variability estimates (on by default for related measures designs, can be turned off using the `pool_error` argument).
  - API: for consistency, the argument to specify the kind of error to plot changed to `error` in both plots.
New in 0.8¶
New in 0.6¶
- New recipes (outdated).
New in 0.5¶
- The `eelbrain.lab` and `eelbrain.eellab` modules are deprecated. Everything can now be imported from `eelbrain` directly.
New in 0.4¶
- Optimized ANOVA evaluation, support for unbalanced fixed effects models.
- rpy2 interaction: `Dataset.from_r()` to create a `Dataset` from an R Data Frame, and `Dataset.to_r()` to create an R Data Frame from a `Dataset`.
New in 0.3¶
- Optimized clustering for cluster permutation tests.
New in 0.2¶
- `gui.SelectEpochs` Epoch rejection GUI has a new "GA" button to plot the grand average of all accepted trials.
- Cluster permutation tests in `testnd` use multiple cores; to disable multiprocessing set `eelbrain._stats.testnd.multiprocessing = False`.
New in 0.1.7¶
- `gui.SelectEpochs` can now be initialized with a single `mne.Epochs` instance (data needs to be preloaded).
- Parameters that take `NDVar` objects now also accept `mne.Epochs` and `mne.fiff.Evoked` objects.
New in 0.1.5¶
- `plot.topo.TopoButterfly` plot: new keyboard commands (`t`, `left arrow`, `right arrow`).
Publications using Eelbrain¶
Ordered alphabetically by authors' last names:
[1] Esti Blanco-Elorrieta and Liina Pylkkänen. Bilingual language switching in the lab vs. in the wild: the spatio-temporal dynamics of adaptive language control. Journal of Neuroscience, pages 0553–17, 2017. URL: http://www.jneurosci.org/content/early/2017/08/16/JNEUROSCI.0553-17.2017.abstract.
[2] Christian Brodbeck, Laura Gwilliams, and Liina Pylkkänen. EEG can track the time course of successful reference resolution in small visual worlds. Frontiers in psychology, 6:1787, 2015. URL: https://www.frontiersin.org/articles/10.3389/fpsyg.2015.01787.
[3] Christian Brodbeck, Laura Gwilliams, and Liina Pylkkänen. Language in context: MEG evidence for modality general and specific responses to reference resolution. eNeuro, pages ENEURO–0145, 2016. URL: http://www.eneuro.org/content/early/2016/12/15/ENEURO.0145-16.2016.abstract.
[4] Christian Brodbeck, L Elliot Hong, and Jonathan Z Simon. Rapid transformation from auditory to linguistic representations of continuous speech. Current Biology, 28(24):3976–3983, 2018. URL: https://www.sciencedirect.com/science/article/pii/S096098221831409X.
[5] Christian Brodbeck, Alessandro Presacco, Samira Anderson, and Jonathan Z Simon. Over-representation of speech in older adults originates from early response in higher order auditory cortex. Acta Acustica united with Acustica, 104(5):774–777, 2018. URL: https://www.ingentaconnect.com/content/dav/aaua/2018/00000104/00000005/art00013.
[6] Christian Brodbeck, Alessandro Presacco, and Jonathan Z Simon. Neural source dynamics of brain responses to continuous stimuli: speech processing from acoustics to comprehension. NeuroImage, 172:162–174, 2018. URL: https://www.sciencedirect.com/science/article/pii/S1053811918300429.
[7] Christian Brodbeck and Liina Pylkkänen. Language in context: characterizing the comprehension of referential expressions with MEG. NeuroImage, 147:447–460, 2017. URL: https://www.sciencedirect.com/science/article/pii/S1053811916307169.
[8] Teon L Brooks and Daniela Cid de Garcia. Evidence for morphological composition in compound words using MEG. Frontiers in human neuroscience, 9:215, 2015. URL: https://www.frontiersin.org/articles/10.3389/fnhum.2015.00215.
[9] Julien Dirani and Liina Pylkkanen. Lexical access in comprehension vs. production: spatiotemporal localization of semantic facilitation and interference. bioRxiv, pages 449157, 2018. URL: https://www.biorxiv.org/content/early/2018/10/23/449157.1.abstract.
[10] Graham Flick, Yohei Oseki, Amanda R Kaczmarek, Meera Al Kaabi, Alec Marantz, and Liina Pylkkänen. Building words and phrases in the left temporal lobe. Cortex, 2018. URL: https://www.sciencedirect.com/science/article/pii/S0010945218301904.
[11] Graham Flick and Liina Pylkkänen. Isolating syntax in natural language: MEG evidence for an early contribution of left posterior temporal cortex. BioRxiv, pages 439158, 2018. URL: https://www.biorxiv.org/content/early/2018/10/09/439158.abstract.
[12] Phoebe Gaston and Alec Marantz. The time course of contextual cohort effects in auditory processing of category-ambiguous words: MEG evidence for a single “clash” as noun or verb. Language, Cognition and Neuroscience, 33(4):402–423, 2018. URL: https://www.tandfonline.com/doi/abs/10.1080/23273798.2017.1395466.
[13] Laura Gwilliams, GA Lewis, and Alec Marantz. Functional characterisation of letter-specific responses in time, space and current polarity using magnetoencephalography. NeuroImage, 132:320–333, 2016. URL: https://www.sciencedirect.com/science/article/pii/S105381191600166X.
[14] Laura Gwilliams and Alec Marantz. Morphological representations are extrapolated from morpho-syntactic rules. Neuropsychologia, 114:77–87, 2018. URL: https://www.sciencedirect.com/science/article/pii/S0028393218301568.
[15] Thomas Hartmann and Nathan Weisz. Auditory cortical generators of the frequency following response are modulated by intermodal attention. NeuroImage, 203:116185, 2019. doi:10.1016/j.neuroimage.2019.116185.
[16] William Matchin, Christian Brodbeck, Christopher Hammerly, and Ellen Lau. The temporal dynamics of structure and content in sentence comprehension: evidence from fMRI-constrained MEG. Human brain mapping, 40(2):663–678, 2019. URL: https://onlinelibrary.wiley.com/doi/abs/10.1002/hbm.24403.
[17] Kyriaki Neophytou, Christina Manouilidou, Linnaea Stockall, and Alec Marantz. Syntactic and semantic restrictions on morphological recomposition: MEG evidence from greek. Brain and language, 183:11–20, 2018. URL: https://www.sciencedirect.com/science/article/pii/S0093934X1730130X.
[18] Krishna C Puvvada, Marisel Villafane-Delgado, Christian Brodbeck, and Jonathan Z Simon. Neural coding of noisy and reverberant speech in human auditory cortex. bioRxiv, pages 229153, 2017. URL: https://www.biorxiv.org/content/early/2017/12/04/229153.abstract.
[19] Victoria Sharpe, Samir Reddigari, Liina Pylkkänen, and Alec Marantz. Automatic access to verb continuations on the lexical and categorical levels: evidence from MEG. Language, Cognition and Neuroscience, 34(2):137–150, 2019. URL: https://www.tandfonline.com/doi/abs/10.1080/23273798.2018.1531139.
[20] Eline Verschueren, Jonas Vanthornhout, and Tom Francart. Semantic context enhances neural envelope tracking. bioRxiv, pages 421727, 2018. URL: https://www.biorxiv.org/content/early/2018/09/19/421727.abstract.
[21] Eline Verschueren, Jonas Vanthornhout, and Tom Francart. The effect of stimulus choice on an EEG-based objective measure of speech intelligibility. bioRxiv, pages 421727, 2019. URL: https://www.biorxiv.org/content/10.1101/421727v2.full-text.
[22] Adina Williams, Samir Reddigari, and Liina Pylkkänen. Early sensitivity of left perisylvian cortex to relationality in nouns and verbs. Neuropsychologia, 100:131–143, 2017. URL: https://www.sciencedirect.com/science/article/pii/S0028393217301586.
[23] Linmin Zhang and Liina Pylkkänen. Composing lexical versus functional adjectives: evidence for uniformity in the left temporal lobe. Psychonomic bulletin & review, 25(6):2309–2322, 2018. URL: https://link.springer.com/article/10.3758/s13423-018-1469-y.
[24] Linmin Zhang and Liina Pylkkänen. Semantic composition of sentences word by word: MEG evidence for shared processing of conceptual and logical elements. Neuropsychologia, 119:392–404, 2018. URL: https://www.sciencedirect.com/science/article/pii/S0028393218305037.
Development¶
Eelbrain is actively developed and maintained by Christian Brodbeck at the Computational sensorimotor systems lab at University of Maryland, College Park.
Eelbrain is fully open-source and new contributions are welcome on GitHub. Suggestions can be raised as issues, and modifications can be made as pull requests into the `master` branch.
The Development Version¶
The Eelbrain source code is hosted on GitHub. Development takes place on the `master` branch, while release versions are maintained on branches called `r/0.26` etc. For further information on working with GitHub see GitHub's instructions.
Installing the development version requires the presence of a compiler. On macOS, make sure Xcode is installed (open it once to accept the license agreement). Windows will indicate any needed files when the install command is run.
After cloning the repository, the development version can be installed by running, from the Eelbrain repository's root directory:
$ python setup.py develop
On macOS, the `$ eelbrain` shell script to run IPython with the framework build is not installed properly by `setup.py`; in order to fix this, run:
$ ./fix-bin
In Python, you can make sure that you are working with the development version:
>>> import eelbrain
>>> eelbrain.__version__
'dev'
To switch back to the release version use $ pip uninstall eelbrain
.
Building with Conda¶
To build Eelbrain with `conda`, make sure that `conda-build` is installed. Then, from `Eelbrain/conda` run:
$ conda build eelbrain
After building successfully, the build can be installed with:
$ conda install --use-local eelbrain
Contributing¶
Style guides:
- Python: PEP8
- Documentation: NumPy documentation style
- ReStructured Text Primer
Useful tools:
- Graphical frontend for git: SourceTree
- Python IDE: PyCharm
Testing¶
Tests for individual modules are included in folders called `tests`, usually on the same level as the module.
To run all tests, run `$ make test` from the Eelbrain project directory. On macOS, tests need to run with the framework build of Python; if you get a corresponding error, run `$ ./fix-bin pytest` from the `Eelbrain` repository root.
Reference¶
Data Classes¶
Primary data classes:
Dataset ([items, name, caption, info, n_cases]) |
Store multiple variables pertaining to a common set of measurement cases |
Factor (x[, name, random, repeat, tile, …]) |
Container for categorial data. |
Var (x[, name, info, repeat, tile]) |
Container for scalar data. |
NDVar (x, dims[, name, info]) |
Container for n-dimensional data. |
Datalist ([items, name, fmt]) |
list subclass for including lists in in a Dataset. |
Model classes (not usually initialized by themselves but through operations on primary data-objects):
Interaction (base) |
Represents an Interaction effect. |
Model (x) |
A list of effects. |
NDVar dimensions (not usually initialized by themselves but through `load` functions):
Case (n[, connectivity]) |
Case dimension |
Categorial (name, values[, connectivity]) |
Simple categorial dimension |
Scalar (name, values[, unit, tick_format, …]) |
Scalar dimension |
Sensor (locs[, names, sysname, proj2d, …]) |
Dimension class for representing sensor information |
SourceSpace (vertices[, subject, src, …]) |
MNE surface-based source space |
VolumeSourceSpace (vertices[, subject, src, …]) |
MNE volume source space |
Space (directions[, name]) |
Represent multiple directions in space |
UTS (tmin, tstep, nsamples) |
Dimension object for representing uniform time series |
File I/O¶
Eelbrain objects can be
pickled. Eelbrain’s own
pickle I/O functions provide backwards compatibility for Eelbrain objects
(although files saved in Python 3 can only be opened in Python 2 if they are
saved with protocol<=2
):
save.pickle (obj[, dest, protocol]) |
Pickle a Python object. |
load.unpickle ([path]) |
Load pickled Python objects from a file. |
load.arrow ([file_path]) |
Load object serialized with pyarrow . |
save.arrow (obj[, dest]) |
Save a Python object with pyarrow . |
load.update_subjects_dir (obj, subjects_dir) |
Update NDVar SourceSpace.subjects_dir attributes |
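For instance, a Dataset can be written to disk and restored with the pickle functions; a minimal sketch (the file name is arbitrary):

from eelbrain import datasets, load, save

ds = datasets.get_uts()                     # example Dataset with random data
save.pickle(ds, 'data.pickle')              # write the Dataset to disk
ds_restored = load.unpickle('data.pickle')  # read it back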
Import¶
Functions and modules for loading specific file formats as Eelbrain object:
load.wav ([filename, name, backend]) |
Load a wav file as NDVar |
load.tsv ([path, names, types, delimiter, …]) |
Load a Dataset from a text file. |
load.eyelink |
Tools for loading data form eyelink edf files. |
load.fiff |
Tools for importing data through mne. |
load.txt |
Tools for loading data from text files. |
load.besa |
Tools for loading data from the BESA-MN pipeline. |
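As an illustration of the tabular loader, a tab-separated file could be read like this (the file name and its column layout are assumptions for the example):

from eelbrain import load

ds = load.tsv('data.tsv', delimiter='\t')  # assumes a header row naming the variables
print(ds.summary())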
Export¶
A Dataset containing only univariate data can be saved as text using the `save_txt()` method. Additional export functions:
save.txt (iterator[, fmt, delim, dest]) |
Write any object that supports iteration to a text file. |
save.wav (ndvar[, filename, toint]) |
Save an NDVar as wav file |
Sorting and Reordering¶
align (d1, d2[, i1, i2, out]) |
Align two data-objects based on index variables |
align1 (d, to[, by, out]) |
Align a data object to an index variable |
Celltable (y[, x, match, sub, cat, ds, …]) |
Divide y into cells defined by x. |
choose (choice, sources[, name]) |
Combine data-objects picking from a different object for each case |
combine (items[, name, check_dims, …]) |
Combine a list of items of the same type into one item. |
shuffled_index (n[, cells]) |
Return an index to shuffle a data-object |
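For example, `combine()` can stack Datasets that share the same variables; a minimal sketch:

from eelbrain import Dataset, Factor, Var, combine

ds1 = Dataset()
ds1['condition'] = Factor(['a', 'b'])
ds1['y'] = Var([1.0, 2.0])

ds2 = Dataset()
ds2['condition'] = Factor(['a', 'b'])
ds2['y'] = Var([3.0, 4.0])

ds = combine([ds1, ds2])  # one Dataset with 4 cases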
NDVar Initializers¶
gaussian (center, width, time) |
Gaussian window NDVar |
powerlaw_noise (dims, exponent) |
Gaussian (1/f)^exponent noise |
NDVar Operations¶
NDVar
objects support the native abs()
and round()
functions. See also NDVar
methods.
Butterworth (low, high, order[, sfreq]) |
Butterworth filter |
complete_source_space (ndvar[, fill, mask]) |
Fill in missing vertices on an NDVar with a partial source space |
concatenate (ndvars[, dim, name, tmin, info, …]) |
Concatenate multiple NDVars |
convolve (h, x[, ds, name]) |
Convolve h and x along the time dimension |
correlation_coefficient (x, y[, dim, name]) |
Correlation between two NDVars along a specific dimension |
cross_correlation (in1, in2[, name]) |
Cross-correlation between two NDVars along the time axis |
cwt_morlet (y, freqs[, use_fft, n_cycles, …]) |
Time frequency decomposition with Morlet wavelets (mne-python) |
dss (ndvar) |
Denoising source separation (DSS) |
filter_data (ndvar, l_freq, h_freq[, …]) |
Apply mne.filter.filter_data() to an NDVar |
find_intervals (ndvar[, interpolate]) |
Find intervals from a boolean NDVar |
find_peaks (ndvar) |
Find local maxima in an NDVar |
frequency_response (b[, frequencies]) |
Frequency response for a FIR filter |
label_operator (labels[, operation, exclude, …]) |
Convert labeled NDVar into a matrix operation to extract label values |
labels_from_clusters (clusters[, names]) |
Create Labels from source space clusters |
maximum (ndvars[, name]) |
Element-wise maximum of multiple NDVar objects |
minimum (ndvars[, name]) |
Element-wise minimum of multiple NDVar objects |
morph_source_space (ndvar[, subject_to, …]) |
Morph source estimate to a different MRI subject |
neighbor_correlation (x[, dim, obs, name]) |
Calculate Neighbor correlation |
psd_welch (ndvar[, fmin, fmax, n_fft, …]) |
Power spectral density with Welch’s method |
resample (ndvar, sfreq[, npad, window, pad, name]) |
Resample an NDVar along the time dimension |
segment (continuous, times, tstart, tstop[, …]) |
Segment a continuous NDVar |
set_parc (ndvar, parc[, dim]) |
Change the parcellation of an NDVar with SourceSpace dimension |
set_time (ndvar, time[, mode]) |
Crop and/or pad an NDVar to match the time axis time |
set_tmin (ndvar[, tmin]) |
Shift the time axis of an NDVar relative to its data |
xhemi (ndvar[, mask, hemi, parc]) |
Project data from both hemispheres to hemi of fsaverage_sym |
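As a brief illustration, resampling and concatenating might look like this (assuming `ndvar` is an existing NDVar with a time dimension):

from eelbrain import concatenate, resample

ndvar_100 = resample(ndvar, 100)                  # resample to 100 Hz
joined = concatenate([ndvar, ndvar], dim='time')  # join along the time axis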
Reverse Correlation¶
boosting (y, x, tstart, tstop[, scale_data, …]) |
Estimate a linear filter with coordinate descent |
BoostingResult (y, x, tstart, tstop, …[, …]) |
Result from boosting a temporal response function |
epoch_impulse_predictor (shape[, value, …]) |
Time series with one impulse for each of n epochs |
event_impulse_predictor (shape[, time, …]) |
Time series with multiple impulses |
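A minimal sketch of estimating a temporal response function (TRF) with boosting; `response` and `envelope` stand in for NDVars that share a time dimension (e.g., neural data and a stimulus envelope) and are assumptions here:

from eelbrain import boosting

res = boosting(response, envelope, tstart=0, tstop=0.5)
trf = res.h  # the estimated TRF, an NDVar (see BoostingResult for fit metrics)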
Tables¶
Manipulate data tables and compile information about data objects such as cell frequencies:
table.cast_to_ndvar (data, dim_values, match) |
Create an NDVar by converting a data column to a dimension |
table.difference (y, x, c1, c0, match[, sub, …]) |
Subtract data in one cell from another |
table.frequencies (y[, x, of, sub, ds]) |
Calculate frequency of occurrence of the categories in y |
table.melt (name, cells, cell_var_name, ds[, …]) |
Restructure a Dataset such that a measured variable is in a single column |
table.melt_ndvar (ndvar[, dim, cells, ds, …]) |
Transform data to long format by converting an NDVar dimension into a variable |
table.repmeas (y, x, match[, sub, ds]) |
Create a repeated-measures table |
table.stats (y, row[, col, match, sub, fmt, …]) |
Make a table with statistics |
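For example, cell frequencies for a categorial model can be tabulated with the built-in example data:

from eelbrain import datasets, table

ds = datasets.get_uts()
print(table.frequencies('A', 'B', ds=ds))  # frequencies of A cells within B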
Statistics¶
Univariate statistical tests:
test.Correlation (y, x[, sub, ds]) |
Pearson product moment correlation between y and x |
test.TTestOneSample (y[, match, sub, ds, …]) |
One-sample t-test |
test.TTestIndependent (y, x[, c1, c0, match, …]) |
Independent measures t-test |
test.TTestRelated (y, x[, c1, c0, match, …]) |
Related-measures t-test |
test.ANOVA (y, x[, sub, ds, title, caption]) |
Univariate ANOVA. |
test.pairwise (y, x[, match, sub, ds, par, …]) |
Pairwise comparison table |
test.ttest (y[, x, against, match, sub, …]) |
T-tests for one or more samples |
test.correlations (y, x[, cat, sub, ds, asds]) |
Correlation with one or more predictors |
test.pairwise_correlations (xs[, sub, ds, labels]) |
Pairwise correlation table |
test.MannWhitneyU (y, x[, c1, c0, match, …]) |
Mann-Whitney U-test (non-parametric independent measures test) |
test.WilcoxonSignedRank (y, x[, c1, c0, …]) |
Wilcoxon signed-rank test (non-parametric related measures test) |
test.lilliefors (data[, formatted]) |
Lilliefors’ test for normal distribution |
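For instance, a related-measures t-test on the univariate example data could look like this (the factor and cell names follow the example dataset and should be treated as assumptions):

from eelbrain import datasets, test

ds = datasets.get_uv()
res = test.TTestRelated('fltvar', 'A', 'a1', 'a2', match='rm', ds=ds)
print(res)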
Mass-Univariate Statistics¶
testnd.TTestOneSample (y[, popmean, match, …]) |
Mass-univariate one sample t-test |
testnd.TTestRelated (y, x[, c1, c0, match, …]) |
Mass-univariate related samples t-test |
testnd.TTestIndependent (y, x[, c1, c0, …]) |
Mass-univariate independent samples t-test |
testnd.TContrastRelated (y, x, contrast[, …]) |
Mass-univariate contrast based on t-values |
testnd.ANOVA (y, x[, sub, ds, samples, pmin, …]) |
Mass-univariate ANOVA |
testnd.Correlation (y, x[, norm, sub, ds, …]) |
Mass-univariate correlation |
testnd.Vector (y[, match, sub, ds, samples, …]) |
Test a vector field for vectors with non-random direction |
testnd.VectorDifferenceRelated (y, x[, c1, …]) |
Test difference between two vector fields for non-random direction |
By default the tests in this module produce maps of statistical parameters along with maps of p-values uncorrected for multiple comparison. Using different parameters, different methods for multiple comparison correction can be applied (for more details and options see the documentation for individual tests):
- 1: Permutation for maximum statistic (`samples=n`): Look for the maximum value of the test statistic in `n` permutations and calculate a p-value for each data point based on this distribution of maximum statistics.
- 2: Threshold-based clusters (`samples=n, pmin=p`): Find clusters of data points where the original statistic exceeds a value corresponding to an uncorrected p-value of `p`. For each cluster, calculate the sum of the statistic values that are part of the cluster. Do the same in `n` permutations of the original data and retain for each permutation the value of the largest cluster. Evaluate all cluster values in the original data against the distribution of maximum cluster values (see [1]).
- 3: Threshold-free cluster enhancement (`samples=n, tfce=True`): Similar to (1), but each statistical parameter map is first processed with the cluster-enhancement algorithm (see [2]). This is the most computationally intensive option. A sketch of all three options follows below.
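The three options translate into test parameters as follows, using a mass-univariate related-measures t-test (the Dataset `ds` with NDVar 'meg', a condition factor, and a subject identifier is an assumption):

from eelbrain import testnd

res_max = testnd.TTestRelated('meg', 'condition', match='subject', ds=ds, samples=10000)                 # (1) maximum statistic
res_cluster = testnd.TTestRelated('meg', 'condition', match='subject', ds=ds, samples=10000, pmin=0.05)  # (2) threshold-based clusters
res_tfce = testnd.TTestRelated('meg', 'condition', match='subject', ds=ds, samples=10000, tfce=True)     # (3) TFCE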
Two-stage tests¶
Two-stage tests proceed by estimating parameters for a fixed effects model for each subject, and then testing hypotheses on these parameter estimates on the group level. Two-stage tests are implemented by fitting an `LM` for each subject, and then combining them in a `LMGroup` to retrieve coefficients for group level statistics (see the Two-stage test example).
testnd.LM (y, model[, ds, coding, subject, sub]) |
Fixed effects linear model |
testnd.LMGroup (lms) |
Group level analysis for linear model LM objects |
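In outline, a two-stage analysis might look like this (the list of per-subject Datasets and the variable names are assumptions):

from eelbrain import testnd

# stage 1: fit a linear model to each subject's data
lms = [testnd.LM('y', 'predictor', ds=ds_subject) for ds_subject in subject_datasets]

# stage 2: group-level statistics on the first-stage coefficients
group = testnd.LMGroup(lms)
coefficients = group.coefficients_dataset()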
References¶
[1] Maris, E., & Oostenveld, R. (2007). Nonparametric statistical testing of EEG- and MEG-data. Journal of Neuroscience Methods, 164(1), 177-190. doi:10.1016/j.jneumeth.2007.03.024
[2] Smith, S. M., & Nichols, T. E. (2009). Threshold-Free Cluster Enhancement: Addressing Problems of Smoothing, Threshold Dependence and Localisation in Cluster Inference. NeuroImage, 44(1), 83-98. doi:10.1016/j.neuroimage.2008.03.061
Plotting¶
Plot univariate data (`Var` objects):
plot.Barplot (y[, x, match, sub, cells, …]) |
Barplot for a continuous variable |
plot.BarplotHorizontal (y[, x, match, sub, …]) |
Horizontal barplot for a continuous variable |
plot.Boxplot (y[, x, match, sub, cells, …]) |
Boxplot for a continuous variable |
plot.PairwiseLegend ([size, trend]) |
Legend for colors used in pairwise comparisons |
plot.Correlation (y, x[, cat, sub, ds, c, …]) |
Plot the correlation between two variables |
plot.Histogram (y[, x, match, sub, ds, …]) |
Histogram plots with tests of normality |
plot.Regression (y, x[, cat, match, sub, ds, …]) |
Plot the regression of y on x |
plot.Timeplot (y, time[, categories, match, …]) |
Plot a variable over time |
Color tools for plotting:
plot.colors_for_categorial (x[, hue_start, cmap]) |
Automatically select colors for a categorial model |
plot.colors_for_oneway (cells[, hue_start, …]) |
Define colors for a single factor design |
plot.colors_for_twoway (x1_cells, x2_cells[, …]) |
Define cell colors for a two-way design |
plot.soft_threshold_colormap (cmap, …[, …]) |
Soft-threshold a colormap to make small values transparent |
plot.two_step_colormap (left_max, left[, …]) |
Colormap using lightness to extend range |
plot.Style ([color, marker, hatch, linestyle]) |
Control color/pattern by category |
plot.ColorBar (cmap[, vmin, vmax, label, …]) |
A color-bar for a matplotlib color-map |
plot.ColorGrid (row_cells, column_cells, colors) |
Plot colors for a two-way design in a grid |
plot.ColorList (colors[, cells, labels, size, h]) |
Plot colors with labels |
Plot uniform time-series:
plot.LineStack (y[, x, sub, ds, offset, …]) |
Stack multiple lines vertically |
plot.UTS (y[, xax, axtitle, ds, sub, xlabel, …]) |
Value by time plot for UTS data |
plot.UTSClusters (res[, pmax, ptrend, …]) |
Plot permutation cluster test results |
plot.UTSStat (y[, x, xax, match, sub, ds, …]) |
Plot statistics for a one-dimensional NDVar |
Plot multidimensional uniform time series:
plot.Array (y[, xax, xlabel, ylabel, …]) |
Plot UTS data to a rectangular grid. |
plot.Butterfly (y[, xax, sensors, axtitle, …]) |
Butterfly plot for NDVars |
Plot topographic maps of sensor space data:
plot.TopoArray (y[, xax, ds, sub, vmax, …]) |
Channel by sample plots with topomaps for individual time points |
plot.TopoButterfly (y[, xax, ds, sub, vmax, …]) |
Butterfly plot with corresponding topomaps |
plot.Topomap (y[, xax, ds, sub, vmax, vmin, …]) |
Plot individual topographies |
plot.TopomapBins (y[, xax, ds, sub, …]) |
Topomaps in time-bins |
Plot sensor layout maps:
plot.SensorMap (sensors[, labels, proj, …]) |
Plot sensor positions in 2 dimensions |
plot.SensorMaps (sensors[, select, proj, …]) |
Multiple views on a sensor layout. |
xax parameter¶
Many plots have an xax
parameter which is used to sort the data in y
into different categories and plot them on separate axes. xax
can be
specified through categorial data, or as a dimension in y
.
If a categorial data object is specified for xax
, y
is split into the
categories in xax
, and for every cell in xax
a separate subplot is
shown. For example, while
>>> plot.Butterfly('meg', ds=ds)
will create a single Butterfly plot of the average response,
>>> plot.Butterfly('meg', 'subject', ds=ds)
where 'subject'
is the xax
parameter, will create a separate subplot
for every subject with its average response.
A dimension on y
can be specified through a string starting with .
.
For example, to plot each case of meg
separately, use:
>>> plot.Butterfly('meg', '.case', ds=ds)
General layout parameters¶
Most plots share certain layout keyword arguments. By default, all of these parameters are determined automatically, but individual values can be specified manually by supplying them as keyword arguments.
- `h`, `w` (scalar): Height and width of the figure. Use a number ≤ 0 to define the size relative to the screen (e.g., `w=0` to use the full screen width).
- `axh`, `axw` (scalar): Height and width of the axes.
- `nrow`, `ncol` (`None` | `int`): Limit the number of rows/columns. If neither is specified, a square layout is produced.
- `margins` (`dict`): Absolute subplot parameters (in inches). Implies `tight=False`. If `margins` is specified, `axw` and `axh` are interpreted exclusive of the margins, i.e., `axh=2, margins={'top': .5}` for a plot with one axes will result in a total height of 2.5. Example:

  margins={
      'top': 0.5,     # from top of figure to top axes
      'bottom': 1,    # from bottom of figure to bottom axes
      'hspace': 0.1,  # height of space between axes
      'left': 0.5,    # from left of figure to left-most axes
      'right': 0.1,   # from right of figure to right-most axes
      'wspace': 0.1,  # width of space between axes
  }

- `ax_aspect` (scalar): Width / height aspect of the axes.
- `frame` (`bool` | `str`): How to frame the axes of the plot. Options: `True` (default): normal matplotlib frame, spines around axes; `False`: omit top and right spines; `'t'`: draw spines at x=0 and y=0, common for ERPs; `'none'`: no spines at all.
- `name` (`str`): Window title (not displayed on the figure itself).
- `title` (`str`): Figure title (displayed on the figure).
- `tight` (`bool`): Use matplotlib's `tight_layout` to resize all axes to fill the figure.
- `right_of` (eelbrain plot): Position the new figure to the right of this figure.
- `below` (eelbrain plot): Position the new figure below this figure.
Plots that accept these parameters can be identified by the `**layout` in their function signature.
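For instance, layout arguments are passed like any other keyword argument; a sketch using the built-in example data:

from eelbrain import datasets, plot

ds = datasets.get_uts()
p = plot.UTSStat('uts', 'A', ds=ds, w=10, h=3, title='Means by condition')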
GUI Interaction¶
By default, new plots are automatically shown and, if the Python interpreter is in interactive mode, the GUI main loop is started. This behavior can be controlled with two arguments when constructing a plot:
- show : bool
- Show the figure in the GUI (default True). Use False for creating figures and saving them without displaying them on the screen.
- run : bool
- Run the Eelbrain GUI app (default is True for interactive plotting and False in scripts).
The behavior can also be changed globally using `configure()`.
By default, Eelbrain plots open in windows with enhanced GUI features such as copying a figure to the OS clipboard. To plot figures in bare matplotlib windows, `configure()` Eelbrain with `eelbrain.configure(frame=False)`.
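For example, to render a figure to disk without opening a window (a minimal sketch; the figure's `save()` method writes through matplotlib):

from eelbrain import datasets, plot

ds = datasets.get_uts()
p = plot.UTSStat('uts', 'A', ds=ds, show=False)  # create without showing
p.save('uts.png')                                # the file name is arbitrary
p.close()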
Plotting Brains¶
The plot.brain
module contains specialized functions to plot
NDVar
objects containing source space data.
For this it uses a subclass of PySurfer’s surfer.Brain
class.
The functions below allow quick plotting.
More specific control over the plots can be achieved through the
Brain
object that is returned.
plot.brain.brain (src[, cmap, vmin, vmax, …]) |
Create a Brain object with a data layer |
plot.brain.butterfly (y[, cmap, vmin, vmax, …]) |
Shortcut for a Butterfly-plot with a time-linked brain plot |
plot.brain.cluster (cluster[, vmax]) |
Plot a spatio-temporal cluster |
plot.brain.dspm (src[, fmin, fmax, fmid]) |
Plot a source estimate with coloring for dSPM values (bipolar). |
plot.brain.p_map (p_map[, param_map, p0, p1, …]) |
Plot a map of p-values in source space. |
plot.brain.annot (annot[, subject, surf, …]) |
Plot the parcellation in an annotation file |
plot.brain.annot_legend (lh, rh, *args, **kwargs) |
Plot a legend for a freesurfer parcellation |
Brain (subject, hemi[, surf, title, cortex, …]) |
PySurfer Brain subclass returned by plot.brain functions |
plot.brain.SequencePlotter () |
Grid of anatomical images in one figure |
plot.GlassBrain (ndvar[, cmap, vmin, vmax, …]) |
Plot 2d projections of a brain volume |
plot.GlassBrain.butterfly (y[, cmap, vmin, …]) |
Shortcut for a butterfly-plot with a time-linked glassbrain plot |
In order to make custom plots, a `Brain` figure without any data added can be created with `plot.brain.brain(ndvar.source, mask=False)`.
Surface options for plotting data on `fsaverage`: `white`, `smoothwm`, `inflated_pre`, `inflated`, `inflated_avg`, `sphere`.
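A sketch of typical use; `stc` stands in for an assumed NDVar with a SourceSpace dimension (e.g., loaded with `load.fiff.stc_ndvar()`):

from eelbrain import plot

brain = plot.brain.dspm(stc, fmin=2, fmax=8)  # dSPM-style bipolar coloring
brain.save_image('dspm.png')                  # save_image() is inherited from PySurfer's Brain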
GUIs¶
Tools with a graphical user interface (GUI):
gui.select_components (path, ds[, sysname, …]) |
GUI for selecting ICA-components |
gui.select_epochs (ds[, data, accept, blink, …]) |
GUI for rejecting trials of MEG/EEG data |
gui.load_stcs () |
GUI for detecting and loading source estimates |
Controlling the GUI Application¶
Eelbrain uses a wxPython based application to create GUIs. This GUI appears as a separate application with its own Dock icon. The way that control of this GUI is managed depends on the environment from which it is invoked.
When Eelbrain plots are created from within IPython, the GUI is managed in the background and control returns immediately to the terminal. There might be cases in which this is not desired, for example when running scripts. After execution of a script finishes, the interpreter is terminated and all associated plots are closed. To avoid this, the command `gui.run(block=True)` can be inserted at the end of the script, which will keep all GUI elements open until the user quits the GUI application (see `gui.run()` below).
In interpreters other than IPython, input cannot be processed from the GUI and the interpreter shell at the same time. In that case, the GUI application is activated by default whenever a GUI is created in interactive mode (this can be avoided by passing `run=False` to any plotting function). While the application is processing user input, the shell cannot be used. In order to return to the shell, quit the application (the python/Quit Eelbrain menu command or Command-Q). In order to return to the terminal without closing all windows, use the alternative Go/Yield to Terminal command (Command-Alt-Q). To return to the application from the shell, run `gui.run()`. Beware that if you terminate the Python session from the terminal, the application is not given a chance to ensure that information in open windows is saved.
gui.run ([block]) |
Hand over command to the GUI (quit the GUI to return to the terminal) |
Formatted Text¶
The fmtxt
submodule provides tools for exporting results. Most eelbrain
functions and methods that print tables in fact return fmtxt
objects,
which can be exported in different formats, for example:
>>> ds = datasets.get_uts()
>>> table.stats('Y', 'A', 'B', ds=ds)
B
---------------
b0 b1
--------------------
a0 0.8857 0.4525
a1 0.3425 1.377
>>> type(table.stats('Y', 'A', 'B', ds=ds))
eelbrain.fmtxt.Table
This means that the result can be exported as formatted text, for example:
>>> fmtxt.save_pdf(table.stats('Y', 'A', 'B', ds=ds))
See available export functions and the module documentation for details:
fmtxt |
Document model for formatted text documents |
fmtxt.copy_pdf (fmtext) |
Copy an FMText object to the clipboard as PDF |
fmtxt.copy_tex (fmtext) |
Copy an FMText object to the clipboard as tex code |
fmtxt.save_html (fmtext[, path, …]) |
Save an FMText object in HTML format |
fmtxt.save_pdf (fmtext[, path]) |
Save an FMText object as a PDF (requires LaTeX installation) |
fmtxt.save_rtf (fmtext[, path]) |
Save an FMText object in Rich Text format |
fmtxt.save_tex (fmtext[, path]) |
Save an FMText object as TeX code |
Experiment Pipeline¶
The MneExperiment
class provides a template for analyzing EEG and MEG
data. The objects for specifying the analysis are all in the
pipeline
submodule.
See also
For the guide on working with the MneExperiment
class see
The MneExperiment Pipeline.
MneExperiment ([root, find_subjects]) |
Analyze an MEG experiment (gradiometer only) with MNE |
Result containers:
ROITestResult (subjects, samples, …) |
Test results for temporal tests in one or more ROIs |
ROI2StageResult (subjects, samples, …) |
Test results for 2-stage tests in one or more ROIs |
Participant groups:
Group (subjects) |
Group defined as collection of subjects |
SubGroup (base, exclude) |
Group defined by removing subjects from a base group |
Pre-processing:
RawSource ([filename, reader, sysname, …]) |
Raw data source |
RawFilter (source[, l_freq, h_freq, cache]) |
Filter raw pipe |
RawICA (source, session[, method, …]) |
ICA raw pipe |
RawMaxwell (source[, bad_condition]) |
Maxwell filter raw pipe |
RawReReference (source[, reference, add, drop]) |
Re-reference EEG data |
RawApplyICA (source, ica[, cache]) |
Apply ICA estimated in a RawICA pipe |
Event variables:
EvalVar (code[, session]) |
Variable based on evaluating a statement |
GroupVar (groups[, session]) |
Group membership for each subject |
LabelVar (source, codes[, default, session]) |
Variable assigning labels to values |
Epochs:
PrimaryEpoch (session[, sel]) |
Epoch based on selecting events from a raw file |
SecondaryEpoch (base[, sel]) |
Epoch inheriting events from another epoch |
SuperEpoch (sub_epochs, **kwargs) |
Combine several other epochs |
Tests:
ANOVA (x[, model, vars]) |
ANOVA test |
TTestOneSample ([tail]) |
One-sample t-test |
TTestIndependent (model, c1, c0[, tail, vars]) |
Independent measures t-test (comparing groups of subjects) |
TTestRelated (model, c1, c0[, tail]) |
Related measures t-test |
TContrastRelated (model, contrast[, tail]) |
Contrasts of T-maps (see eelbrain.testnd.TContrastRelated ) |
TwoStageTest (stage_1[, vars, model]) |
Two-stage test: T-test of regression coefficients |
Brain parcellations:
SubParc (base, labels[, views]) |
A subset of labels in another parcellation |
CombinationParc (base, labels[, views]) |
Recombine labels from an existing parcellation |
FreeSurferParc ([views]) |
Parcellation that is created outside Eelbrain for each subject |
FSAverageParc ([views]) |
Fsaverage parcellation that is morphed to individual subjects |
SeededParc (seeds[, mask, surface, views]) |
Parcellation that is grown from seed coordinates |
IndividualSeededParc (seeds[, mask, surface, …]) |
Seed parcellation with individual seeds for each subject |
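In outline, an analysis is specified by subclassing MneExperiment with these objects as class attributes. The session name, filter band, event selection, and test below are illustrative assumptions, not a canonical recipe:

from eelbrain import MneExperiment
from eelbrain.pipeline import PrimaryEpoch, RawFilter, TTestRelated

class Experiment(MneExperiment):

    sessions = 'session'

    raw = {
        '1-40': RawFilter('raw', 1, 40),  # band-pass filter the default raw source
    }

    epochs = {
        'target': PrimaryEpoch('session', "condition.isin(('a', 'b'))"),
    }

    tests = {
        'a=b': TTestRelated('condition', 'a', 'b'),
    }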
Datasets¶
Datasets for experimenting and testing:
datasets.get_loftus_masson_1994 () |
Dataset used for illustration purposes by Loftus and Masson (1994) |
datasets.get_mne_sample ([tmin, tmax, …]) |
Load events and epochs from the MNE sample data |
datasets.get_uts ([utsnd, seed, nrm, vector3d]) |
Create a sample Dataset with 60 cases and random data. |
datasets.get_uv ([seed, nrm, vector]) |
Dataset with random univariate data |
datasets.simulate_erp ([n_trials, seed]) |
Simulate event-related EEG data |
Configuration¶
eelbrain.configure(n_workers=None, frame=None, autorun=None, show=None, format=None, figure_background=None, prompt_toolkit=None, animate=None, nice=None, tqdm=None)¶
Set basic configuration parameters for the current session.
Parameters:
- n_workers (bool | int): Number of worker processes to use in multiprocessing enabled computations. `False` to disable multiprocessing. `True` (default) to use as many processes as cores are available. Negative numbers to use all but n available CPUs.
- frame (bool): Open figures in the Eelbrain application. This provides additional functionality such as copying a figure to the clipboard. If False, open figures as normal matplotlib figures.
- autorun (bool): When a figure is created, automatically enter the GUI mainloop. By default, this is True when the figure is created in interactive mode but False when the figure is created in a script (in order to run the GUI at a specific point in a script, call `eelbrain.gui.run()`).
- show (bool): Show plots on the screen when they're created (disable this to create plots and save them without showing them on the screen).
- format (str): Default format for plots (for example "png", "svg", ...).
- figure_background (bool | matplotlib color): While `matplotlib` uses a gray figure background by default, Eelbrain uses white. Set this parameter to `False` to use the default from `matplotlib.rcParams`, or set it to a valid matplotlib color value to use an arbitrary color. `True` to revert to the default white.
- prompt_toolkit (bool): In IPython 5, prompt_toolkit allows running the GUI main loop in parallel to the Terminal, meaning that the IPython terminal and GUI windows can be used without explicitly switching between Terminal and GUI. This feature is enabled by default, but can be disabled by setting `prompt_toolkit=False`.
- animate (bool): Animate plot navigation (default True).
- nice (int [-20, 19]): Scheduling priority for multiprocessing (larger number yields more to other processes; negative numbers require root privileges).
- tqdm (bool): Enable or disable `tqdm` progress bars.
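For example, a session could be configured like this (the values are illustrative):

import eelbrain

# four worker processes, SVG as the default plot format, plain matplotlib windows
eelbrain.configure(n_workers=4, format='svg', frame=False)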
Examples¶
Note
Examples frequently use the print()
function. In interactive
work, this function can often be omitted. For example, to show a summary of
a dataset, examples use >>> print(ds.summary())
, but in interactive work
simply entering >>> ds.summary()
will print the summary.
Datasets¶
Examples of how to construct and manipulate Dataset
objects.
Introduction¶
Data are represented with three primary data-objects:
Factor
for categorial variablesVar
for scalar variablesNDVar
for multidimensional data (e.g. a variable measured at different time points)
Multiple variables belonging to the same dataset can be grouped in a
Dataset
object.
Factor¶
A Factor
is a container for one-dimensional, categorial data: Each
case is described by a string label. The most obvious way to initialize a
Factor
is a list of strings:
from eelbrain import *
a = Factor(['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b'], name='A')
print(a)
Out:
Factor(['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b'], name='A')
Since Factor initialization simply iterates over the given data, the same Factor could be initialized with:
a = Factor('aaaabbbb', name='A')
print(a)
Out:
Factor(['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b'], name='A')
There are other shortcuts to initialize factors (see also
the Factor
class documentation):
a = Factor(['a', 'b', 'c'], repeat=4, name='A')
print(a)
Out:
Factor(['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b', 'c', 'c', 'c', 'c'], name='A')
Indexing works like for arrays:
print(a[0])
print(a[0:6])
Out:
a
Factor(['a', 'a', 'a', 'a', 'b', 'b'], name='A')
All values present in a Factor
are accessible in its
Factor.cells
attribute:
print(a.cells)
Out:
('a', 'b', 'c')
Based on the Factor’s cell values, boolean indexes can be generated:
print(a == 'a')
print(a.isany('a', 'b'))
print(a.isnot('a', 'b'))
Out:
[ True True True True False False False False False False False False]
[ True True True True True True True True False False False False]
[False False False False False False False False True True True True]
Interaction effects can be constructed from multiple factors with the %
operator:
b = Factor(['d', 'e'], repeat=2, tile=3, name='B')
print(b)
i = a % b
print(i)
Out:
Factor(['d', 'd', 'e', 'e', 'd', 'd', 'e', 'e', 'd', 'd', 'e', 'e'], name='B')
A % B
Interaction effects are in many ways interchangeable with factors in places where a categorial model is required:
print(i.cells)
print(i == ('a', 'd'))
Out:
(('a', 'd'), ('a', 'e'), ('b', 'd'), ('b', 'e'), ('c', 'd'), ('c', 'e'))
[ True True False False False False False False False False False False]
Var¶
The Var class is a container for a one-dimensional numpy.ndarray:
y = Var([1, 2, 3, 4, 5, 6])
print(y)
Out:
Var([1, 2, 3, 4, 5, 6])
Indexing works as for factors:
print(y[5])
print(y[2:])
Out:
6
Var([3, 4, 5, 6])
Many array operations can be performed on the object directly:
print(y + 1)
Out:
Var([2, 3, 4, 5, 6, 7])
For more complex operations, the corresponding numpy.ndarray can be retrieved in the Var.x attribute:
print(y.x)
Out:
[1 2 3 4 5 6]
Note
The Var.x
attribute is not intended to be replaced; rather, a new
Var
object should be created for a new array.
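For example, to derive a scaled variable (a minimal sketch):
# create a new Var from the underlying array instead of modifying y.x in place
y2 = Var(y.x * 2, name='y2')
print(y2)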
NDVar¶
NDVar
objects are containers for multidimensional data, and manage the
description of the dimensions along with the data. NDVar
objects are
often not constructed from scratch but imported from existing data. For
example, mne
source estimates can be imported with
load.fiff.stc_ndvar()
. As an example, consider data from a simulated EEG
experiment:
ds = datasets.simulate_erp()
eeg = ds['eeg']
print(eeg)
Out:
<NDVar 'eeg': 80 case, 140 time, 65 sensor>
This representation shows that eeg contains 80 trials of data (cases), with 140 time points and 65 EEG sensors. Since eeg contains information on the dimensions, such as sensor locations, plotting functions can take advantage of that:
p = plot.TopoButterfly(eeg)
p.set_time(0.400)

NDVar objects offer functionality similar to numpy.ndarray, but take into account the properties of the dimensions. For example, through the NDVar.sub() method, indexing can be done using meaningful descriptions, such as indexing a time slice in seconds:
eeg_400 = eeg.sub(time=0.400)
plot.Topomap(eeg_400)

Out:
<Topomap: eeg>
Several methods allow aggregating data, for example an RMS over sensors:
eeg_rms = eeg.rms('sensor')
print(eeg_rms)
plot.UTSStat(eeg_rms)

Out:
<NDVar 'eeg': 80 case, 140 time>
<UTSStat: eeg>
Or a mean in a time window:
eeg_400 = eeg.mean(time=(0.350, 0.450))
plot.Topomap(eeg_400)

Out:
<Topomap: eeg>
As with a Var, the corresponding numpy.ndarray can always be accessed as an array. The NDVar.get_data() method allows retrieving the data while being explicit about which axis represents which dimension:
array = eeg_400.get_data(('case', 'sensor'))
print(array.shape)
Out:
(80, 65)
NDVar
objects can be constructed directly from an array and
corresponding dimension objects, for example:
import numpy
frequency = Scalar('frequency', [1, 2, 3, 4])
time = UTS(0, 0.01, 50)
data = numpy.random.normal(0, 1, (4, 50))
ndvar = NDVar(data, (frequency, time))
print(ndvar)
Out:
<NDVar: 4 frequency, 50 time>
A case dimension can be added by including the bare Case
class:
data = numpy.random.normal(0, 1, (10, 4, 50))
ndvar = NDVar(data, (Case, frequency, time))
print(ndvar)
Out:
<NDVar: 10 case, 4 frequency, 50 time>
Dataset¶
A Dataset is a container for multiple variables (Factor, Var and NDVar) that describe the same cases. It can be thought of as a data table with columns corresponding to different variables and rows to different cases. Variables can be assigned as in a dictionary:
ds = Dataset()
ds['x'] = Factor('aaabbb')
ds['y'] = Var([5, 4, 6, 2, 1, 3])
print(ds)
Out:
x y
-----
a 5
a 4
a 6
b 2
b 1
b 3
A variable that’s equal in all cases can be assigned quickly:
ds[:, 'z'] = 0.
The string representation of a Dataset
contains information
on the variables stored in it:
# in an interactive shell this would be the output of just typing ``ds``
print(repr(ds))
Out:
<Dataset (6 cases) 'x':F, 'y':V, 'z':V>
(6 cases) indicates that the Dataset contains 6 cases (rows). The subsequent dictionary-like representation shows the keys and the types of the corresponding values (F: Factor, V: Var, Vnd: NDVar).
A more extensive summary can be printed with the Dataset.summary()
method:
print(ds.summary())
Out:
Key Type Values
-------------------------------
x Factor a:3, b:3
y Var 1, 2, 3, 4, 5, 6
z Var 0:6
-------------------------------
Dataset: 6 cases
Indexing a Dataset with strings returns the corresponding data-objects:
print(ds['x'])
Out:
Factor(['a', 'a', 'a', 'b', 'b', 'b'], name='x')
numpy.ndarray-like indexing on the Dataset can be used to access a subset of cases:
print(ds[2:])
Out:
x y z
---------
a 6 0
b 2 0
b 1 0
b 3 0
Row and column can be indexed simultaneously (in row, column order):
print(ds[2, 'x'])
Out:
a
Array-based indexing also allows indexing based on the Dataset's variables:
print(ds[ds['x'] == 'a'])
Out:
x y z
---------
a 5 0
a 4 0
a 6 0
Since the dataset acts as a container for variables, there is a Dataset.eval() method for evaluating code strings in the namespace defined by the dataset, which means that dataset variables can be invoked with just their name:
print(ds.eval("x == 'a'"))
Out:
[ True True True False False False]
Many dataset methods allow using code strings as shortcuts for expressions involving dataset variables, for example indexing:
print(ds.sub("x == 'a'"))
Out:
x y z
---------
a 5 0
a 4 0
a 6 0
Example¶
Below is a simple example using data objects (for more, see the Examples):
y = numpy.empty(21)
y[:14] = numpy.random.normal(0, 1, 14)
y[14:] = numpy.random.normal(2, 1, 7)
ds = Dataset({
'a': Factor('abc', 'A', repeat=7),
'y': Var(y, 'Y'),
})
print(ds)
Out:
a y
-------------
a -0.31237
a 0.78287
a 0.24013
a 0.29116
a -0.048041
a 2.3379
a 0.78941
b 0.091618
b 0.64554
b -2.2833
b -0.92382
b -0.54674
b 0.44263
b -0.57927
c 1.7321
c 2.0614
c 2.7883
c 1.5249
c 2.0881
c 0.75615
c 1.3063
print(table.frequencies('a', ds=ds))
Out:
a n
-----
a 7
b 7
c 7
print(test.ANOVA('y', 'a', ds=ds))
Out:
SS df MS F p
-------------------------------------------------
a 16.98 2 8.49 11.77*** < .001
Residuals 12.99 18 0.72
-------------------------------------------------
Total 29.98 20
print(test.pairwise('y', 'a', ds=ds, corr='Hochberg'))
Out:
Pairwise T-Tests (independent samples)
b c
----------------------------------------
a t(12) = 2.07` t(12) = -2.84*
p = .061 p = .015
p(c) = .061 p(c) = .030
b t(12) = -4.91**
p < .001
p(c) = .001
----------------------------------------
(* Corrected after Hochberg, 1988)
t = test.pairwise('y', 'a', ds=ds, corr='Hochberg')
print(t.get_tex())
Out:
\begin{center}
\begin{tabular}{lll}
\toprule
& b & c \\
\midrule
a & $t_{12} = 2.07^{` \ \ \ }$ & $t_{12} = -2.84^{* \ \ \ }$ \\
& $p = .061$ & $p = .015$ \\
& $p_{c} = .061$ & $p_{c} = .030$ \\
b & & $t_{12} = -4.91^{** \ \ }$ \\
& & $p < .001$ \\
& & $p_{c} = .001$ \\
\bottomrule
\end{tabular}
\end{center}
plot.Boxplot('y', 'a', ds=ds, title="My Boxplot", ylabel="value", corr='Hochberg')

Out:
<Boxplot: My Boxplot>
Dataset basics¶
# Author: Christian Brodbeck <christianbrodbeck@nyu.edu>
from eelbrain import *
import numpy as np
A dataset can be constructed column by column, by adding one variable after another:
# initialize an empty Dataset:
ds = Dataset()
# numeric values are added as a Var object:
ds['y'] = Var(np.random.normal(0, 1, 6))
# categorical data is represented as a Factor:
ds['a'] = Factor(['a', 'b', 'c'], repeat=2)
# check the result:
print(ds)
Out:
y a
------------
0.48172 a
0.81985 a
0.82997 b
0.55157 b
0.026204 c
1.0083 c
For larger datasets it can be more convenient to print only the first few cases…
print(ds.head())
Out:
y a
------------
0.48172 a
0.81985 a
0.82997 b
0.55157 b
0.026204 c
1.0083 c
… or a summary of variables:
print(ds.summary())
Out:
Key Type Values
------------------------------------------------------------------------
y Var 0.0262041, 0.481716, 0.551575, 0.819852, 0.829965, 1.0083
a Factor a:2, b:2, c:2
------------------------------------------------------------------------
Dataset: 6 cases
An alternative way of constructing a dataset is case by case (i.e., row by row):
rows = []
for i in range(6):
subject = f'S{i}'
y = np.random.normal(0, 1)
a = 'abc'[i % 3]
rows.append([subject, y, a])
ds = Dataset.from_caselist(['subject', 'y', 'a'], rows, random='subject')
print(ds)
Out:
subject y a
----------------------
S0 -0.90352 a
S1 -0.76498 b
S2 -0.49906 c
S3 -0.4859 a
S4 0.5165 b
S5 0.015202 c
Align datasets¶
Shows how to combine information from two datasets describing the same cases, but not necessarily in the same order.
# Author: Christian Brodbeck <christianbrodbeck@nyu.edu>
import random
import string
from eelbrain import *
# Generate a dataset with known sequence
ds = Dataset()
ds['ascii'] = Factor(string.ascii_lowercase)
# Add an index variable to the dataset to later identify the cases
ds.index()
# Generate two shuffled copies of the dataset (and print them to confirm that
# they are shuffled)
ds1 = ds[random.sample(range(ds.n_cases), 15)]
print(ds1.head())
ds2 = ds[random.sample(range(ds.n_cases), 16)]
print(ds2.head())
Out:
ascii index
-------------
k 10
h 7
a 0
g 6
u 20
c 2
n 13
m 12
q 16
d 3
ascii index
-------------
y 24
z 25
o 14
c 2
l 11
b 1
q 16
p 15
j 9
f 5
Align the datasets¶
Use the "index" variable added above to identify cases and align the two datasets:
ds1_aligned, ds2_aligned = align(ds1, ds2, 'index')
# show the ascii sequences for the two datasets next to each other to
# demonstrate that they are aligned
ds1_aligned['ascii_ds2'] = ds2_aligned['ascii']
print(ds1_aligned)
Out:
ascii index ascii_ds2
-------------------------
a 0 a
g 6 g
c 2 c
m 12 m
q 16 q
l 11 l
j 9 j
s 18 s
b 1 b
Univariate Statistics¶
Model coding¶
Illustrates how to inspect the coding of regression models.
# Author: Christian Brodbeck <christianbrodbeck@nyu.edu>
from eelbrain import *
ds = Dataset()
ds['A'] = Factor(['a1', 'a0'], repeat=4)
ds['B'] = Factor(['b1', 'b0'], repeat=2, tile=2)
Look at the data:
ds.head()
Out:
A B
-------
a1 b1
a1 b1
a1 b0
a1 b0
a0 b1
a0 b1
a0 b0
a0 b0
Create a fixed-effects model:
m = ds.eval('A * B')
print(repr(m))
Out:
A + B + A % B
Show the model coding:
print(m)
Out:
intercept A B A x B
-------------------------
1 0 0 0
1 0 0 0
1 0 1 0
1 0 1 0
1 1 0 0
1 1 0 0
1 1 1 1
1 1 1 1
Create a random-effects model:
ds['subject'] = Factor(['s1', 's2'], tile=4, name='subject', random=True)
m = ds.eval('A * B * subject')
print(repr(m))
Out:
A + B + A % B + subject + A % subject + B % subject + A % B % subject
Show the model coding:
print(m)
Out:
intercept A B A x B subject A x subject B x subject A x B x subject
---------------------------------------------------------------------------------
1 0 0 0 1 0 0 0
1 0 0 0 0 0 0 0
1 0 1 0 1 0 1 0
1 0 1 0 0 0 0 0
1 1 0 0 1 1 0 0
1 1 0 0 0 0 0 0
1 1 1 1 1 1 1 1
1 1 1 1 0 0 0 0
Repeated measures ANOVA¶
Based on [1].
# Author: Christian Brodbeck <christianbrodbeck@nyu.edu>
from eelbrain import *
y = Var([7, 3, 6, 6, 5, 8, 6, 7,
7, 11, 9, 11, 10, 10, 11, 11,
8, 14, 10, 11, 12, 10, 11, 12],
name='y')
a = Factor('abc', repeat=8, name='A')
Fixed effects ANOVA (independent measures, [1] p. 24):
print(test.ANOVA(y, a, title="Independent Measures"))
Out:
Independent Measures
SS df MS F p
---------------------------------------------------
A 112.00 2 56.00 22.62*** < .001
Residuals 52.00 21 2.48
---------------------------------------------------
Total 164.00 23
Repeated measures ANOVA ([1] p. 72): subject is defined as a random effect and entered into the model as a completely crossed factor:
subject = Factor(range(8), tile=3, name='subject', random=True)
print(test.ANOVA(y, a * subject, title="Repeated Measures"))
Out:
Repeated Measures
SS df MS MS(denom) df(denom) F p
-----------------------------------------------------------------------
A 112.00 2 56.00 2.71 14 20.63*** < .001
-----------------------------------------------------------------------
Total 164.00 23
Two-way repeated measures ANOVA¶
y = Var([ 7, 3, 6, 6, 5, 8, 6, 7,
7, 11, 9, 11, 10, 10, 11, 11,
8, 14, 10, 11, 12, 10, 11, 12,
16, 7, 11, 9, 10, 11, 8, 8,
16, 10, 13, 10, 10, 14, 11, 12,
24, 29, 10, 22, 25, 28, 22, 24])
a = Factor(['a0', 'a1'], repeat=3 * 8, name='A')
b = Factor(['b0', 'b1', 'b2'], tile=2, repeat=8, name='B')
subject = Factor(range(8), tile=6, name='subject', random=True)
print(test.ANOVA(y, a * b * subject, title="Repeated Measure:"))
Out:
Repeated Measure:
SS df MS MS(denom) df(denom) F p
-------------------------------------------------------------------------
A 432.00 1 432.00 10.76 7 40.14*** < .001
B 672.00 2 336.00 11.50 14 29.22*** < .001
A x B 224.00 2 112.00 6.55 14 17.11*** < .001
-------------------------------------------------------------------------
Total 1708.00 47
Bar-plot with within-subject error bars and pairwise tests:
plot.Barplot(y, a % b, match=subject)

Out:
<Barplot: None ~ A x B>
ANCOVA¶
Example 1¶
Based on [1], Exercises (page 8).
# Author: Christian Brodbeck <christianbrodbeck@nyu.edu>
from eelbrain import *
y = Var([2, 3, 3, 4,
3, 4, 5, 6,
1, 2, 1, 2,
1, 1, 2, 2,
2, 2, 2, 2,
1, 1, 2, 3], name="Growth Rate")
genotype = Factor(range(6), repeat=4, name="Genotype")
hours = Var([8, 12, 16, 24], tile=6, name="Hours")
Show the model:
print(hours * genotype)
Out:
intercept Hours Genotype Hours x Genotype
-----------------------------------------------------------------------------
1 8 1 0 0 0 0 8 0 0 0 0
1 12 1 0 0 0 0 12 0 0 0 0
1 16 1 0 0 0 0 16 0 0 0 0
1 24 1 0 0 0 0 24 0 0 0 0
1 8 0 1 0 0 0 0 8 0 0 0
1 12 0 1 0 0 0 0 12 0 0 0
1 16 0 1 0 0 0 0 16 0 0 0
1 24 0 1 0 0 0 0 24 0 0 0
1 8 0 0 1 0 0 0 0 8 0 0
1 12 0 0 1 0 0 0 0 12 0 0
1 16 0 0 1 0 0 0 0 16 0 0
1 24 0 0 1 0 0 0 0 24 0 0
1 8 0 0 0 1 0 0 0 0 8 0
1 12 0 0 0 1 0 0 0 0 12 0
1 16 0 0 0 1 0 0 0 0 16 0
1 24 0 0 0 1 0 0 0 0 24 0
1 8 0 0 0 0 1 0 0 0 0 8
1 12 0 0 0 0 1 0 0 0 0 12
1 16 0 0 0 0 1 0 0 0 0 16
1 24 0 0 0 0 1 0 0 0 0 24
1 8 0 0 0 0 0 0 0 0 0 0
1 12 0 0 0 0 0 0 0 0 0 0
1 16 0 0 0 0 0 0 0 0 0 0
1 24 0 0 0 0 0 0 0 0 0 0
ANCOVA
print(test.ANOVA(y, hours * genotype, title="ANOVA"))
Out:
ANOVA
SS df MS F p
--------------------------------------------------------
Hours 7.06 1 7.06 54.90*** < .001
Genotype 27.88 5 5.58 43.36*** < .001
Hours x Genotype 3.15 5 0.63 4.90* .011
Residuals 1.54 12 0.13
--------------------------------------------------------
Total 39.62 23
Plot the slopes:
plot.Regression(y, hours, genotype)

Out:
<Regression: Growth Rate ~ Hours | Genotype>
Example 2¶
Based on [2] (p. 118-20)
y = Var([16, 7, 11, 9, 10, 11, 8, 8,
16, 10, 13, 10, 10, 14, 11, 12,
24, 29, 10, 22, 25, 28, 22, 24])
cov = Var([9, 5, 6, 4, 6, 8, 3, 5,
8, 5, 6, 5, 3, 6, 4, 6,
5, 8, 3, 4, 6, 9, 4, 5], name='cov')
a = Factor([1, 2, 3], repeat=8, name='A')
Full model, with interaction:
print(test.ANOVA(y, cov * a))
plot.Regression(y, cov, a)

Out:
SS df MS F p
-----------------------------------------------------
cov 199.54 1 199.54 32.93*** < .001
A 807.82 2 403.91 66.66*** < .001
cov x A 19.39 2 9.70 1.60 .229
Residuals 109.07 18 6.06
-----------------------------------------------------
Total 1112.00 23
<Regression: None ~ cov | A>
Drop the interaction term:
print(test.ANOVA(y, a + cov))
plot.Regression(y, cov)

Out:
SS df MS F p
-----------------------------------------------------
A 807.82 2 403.91 62.88*** < .001
cov 199.54 1 199.54 31.07*** < .001
Residuals 128.46 20 6.42
-----------------------------------------------------
Total 1112.00 23
<Regression: None ~ cov>
ANCOVA with multiple covariates¶
Based on [3], p. 139.
# Load data from a text file
ds = load.txt.tsv('Fox_Prestige_data.txt', delimiter=' ', skipinitialspace=True)
print(ds.head())
Out:
id education income women prestige census type
---------------------------------------------------------------------------
GOV.ADMINISTRATORS 13.11 12351 11.16 68.8 1113 prof
GENERAL.MANAGERS 12.26 25879 4.02 69.1 1130 prof
ACCOUNTANTS 12.77 9271 15.7 63.4 1171 prof
PURCHASING.OFFICERS 11.42 8865 9.11 56.8 1175 prof
CHEMISTS 14.62 8403 11.68 73.5 2111 prof
PHYSICISTS 15.64 11030 5.13 77.6 2113 prof
BIOLOGISTS 15.09 8258 25.65 72.6 2133 prof
ARCHITECTS 15.44 14163 2.69 78.1 2141 prof
CIVIL.ENGINEERS 14.52 11377 1.03 73.1 2143 prof
MINING.ENGINEERS 14.64 11023 0.94 68.8 2153 prof
# Variable summary
print(ds.summary())
Out:
Key Type Values
-------------------------------------------------------------------------------------------------
id Factor ACCOUNTANTS, AIRCRAFT.REPAIRMEN, AIRCRAFT.WORKERS, ARCHITECTS... (102 cells)
education Var 6.38 - 15.97
income Var 611 - 25879
women Var 0 - 97.51
prestige Var 14.8 - 87.2
census Var 1113 - 9517
type Factor NA:4, bc:44, prof:31, wc:23
-------------------------------------------------------------------------------------------------
Fox_Prestige_data.txt: 102 cases
# Exclude cases with missing type
ds2 = ds[ds['type'] != 'NA']
# ANOVA
print(test.ANOVA('prestige', '(income + education) * type', ds=ds2))
Out:
SS df MS F p
--------------------------------------------------------------
income 1131.90 1 1131.90 28.35*** < .001
education 1067.98 1 1067.98 26.75*** < .001
type 591.16 2 295.58 7.40** .001
income x type 951.77 2 475.89 11.92*** < .001
education x type 238.40 2 119.20 2.99 .056
Residuals 3552.86 89 39.92
--------------------------------------------------------------
Total 28346.88 97
Mass-univariate statistics¶
NDVar objects (n-dimensional variables) allow mass-univariate statistics analogous to univariate statistics.
Generate NDVar (with artificial data)¶
Shows how to initialize an NDVar with the structure of EEG data from (randomly generated) data arrays. The data are intended for illustrating EEG analysis techniques and meant to vaguely resemble data from an N400 experiment, but not meant to be a physiologically realistic simulation.
# sphinx_gallery_thumbnail_number = 2
import numpy as np
import scipy.signal
import scipy.spatial
from eelbrain import *
Create a Sensor dimension from an actual montage
sensor = Sensor.from_montage('standard_alphabetic')
p = plot.SensorMap(sensor)

Generate N400-like topography
i_cz = sensor.names.index('Cz')
cz_loc = sensor.locs[i_cz]
dists = scipy.spatial.distance.cdist([cz_loc], sensor.locs)[0]
dists /= dists.max()
topo = -0.7 + dists
n400_topo = NDVar(topo, sensor)
p = plot.Topomap(n400_topo, clip='circle')

Generate N400-like timing
window = scipy.signal.windows.gaussian(200, 12)[:140]
time = UTS(-0.100, 0.005, 140)
n400_timecourse = NDVar(window, time)
p = plot.UTS(n400_timecourse)

Generate random values for the independent variable (call it “cloze probability”)
rng = np.random.RandomState(0)
n_trials = 100
cloze_x = np.concatenate([
rng.uniform(0, 0.3, n_trials // 2),
rng.uniform(0.8, 1.0, n_trials // 2),
])
rng.shuffle(cloze_x)
cloze = Var(cloze_x)
p = plot.Histogram(cloze)

Put all the dimensions together to simulate the EEG signal
signal = (1 - cloze) * n400_timecourse * n400_topo
# Add noise
noise = powerlaw_noise(signal, 1)
noise = noise.smooth('sensor', 0.02, 'gaussian')
signal += noise
# Apply the average mastoids reference
signal -= signal.mean(sensor=['M1', 'M2'])
# Store EEG data in a Dataset with trial information
ds = Dataset()
ds['eeg'] = signal
ds['cloze'] = Var(cloze_x)
ds['cloze_cat'] = Factor(cloze_x > 0.5, labels={True: 'high', False: 'low'})
Plot the average simulated response
p = plot.TopoButterfly('eeg', ds=ds, vmax=1.5, clip='circle', frame='t', axh=3)
p.set_time(0.400)

Plot averages separately for high and low cloze
p = plot.TopoButterfly('eeg', 'cloze_cat', ds=ds, vmax=1.5, clip='circle', frame='t', axh=3)
p.set_time(0.400)

Average over time in the N400 time window
p = plot.Topomap('eeg.mean(time=(0.300, 0.500))', 'cloze_cat', ds=ds, vmax=1, clip='circle')

Plot the first 20 trials, labeled with cloze probability:
labels = [f'{i} ({c:.2f})' for i, c in enumerate(cloze[:20])]
p = plot.Butterfly('eeg[:20]', '.case', ds=ds, axtitle=labels)

Cluster-based permutation t-test¶
This example shows a cluster-based permutation test for a simple design (two conditions). The example uses simulated data meant to vaguely resemble data from an N400 experiment (not intended as a physiologically realistic simulation).
# sphinx_gallery_thumbnail_number = 3
from eelbrain import *
Simulated data¶
Each function call to datasets.simulate_erp()
generates a dataset
equivalent to an N400 experiment for one subject.
The seed
argument determines the random noise that is added to the data.
ds = datasets.simulate_erp(seed=0)
print(ds.summary())
Out:
Key Type Values
--------------------------------------------------------------------
eeg NDVar 140 time, 65 sensor; -1.53495e-05 - 1.58366e-05
cloze Var 0.00563694 - 0.997675
cloze_cat Factor high:40, low:40
n_chars Var 0:2, 1:5, 2:6, 3:13, 4:23, 5:20, 6:7, 7:4
--------------------------------------------------------------------
Dataset: 80 cases
A single trial of data:
p = plot.TopoButterfly('eeg[0]', ds=ds)
p.set_time(0.400)

The Dataset.aggregate() method computes condition averages when sorting the data into conditions of interest. In our case, the cloze_cat variable specifies the conditions 'high' and 'low' cloze:
print(ds.aggregate('cloze_cat'))
Out:
n cloze cloze_cat n_chars
----------------------------------
40 0.88051 high 4.125
40 0.17241 low 3.825
----------------------------------
NDVars: eeg
Group level data¶
This loop simulates a multi-subject experiment.
It generates data and collects condition averages for 10 virtual subjects.
For group level analysis, the collected data are combined in a Dataset
:
dss = []
for subject in range(10):
# generate data for one subject
ds = datasets.simulate_erp(seed=subject)
# average across trials to get condition means
ds_agg = ds.aggregate('cloze_cat')
# add the subject name as variable
ds_agg[:, 'subject'] = f'S{subject:02}'
dss.append(ds_agg)
ds = combine(dss)
# make subject a random factor (to treat it as random effect for ANOVA)
ds['subject'].random = True
print(ds.head())
Out:
n cloze cloze_cat n_chars subject
--------------------------------------------
40 0.88051 high 4.125 S00
40 0.17241 low 3.825 S00
40 0.89466 high 4 S01
40 0.13778 low 4.575 S01
40 0.90215 high 4.275 S02
40 0.12206 low 3.9 S02
40 0.88503 high 4 S03
40 0.14273 low 4.175 S03
40 0.90499 high 4.1 S04
40 0.15732 low 3.5 S04
--------------------------------------------
NDVars: eeg
Re-reference the EEG data (i.e., subtract the mean of the two mastoid channels):
ds['eeg'] -= ds['eeg'].mean(sensor=['M1', 'M2'])
Spatio-temporal cluster based test¶
Cluster-based tests are based on identifying clusters of meaningful effects, i.e., groups of adjacent sensors that show the same effect (see testnd for references). In order to find clusters, the algorithm needs to know which channels are neighbors. This information is referred to as the sensor connectivity (i.e., which sensors are connected). The connectivity graph can be visualized to confirm that it is set correctly.
p = plot.SensorMap(ds['eeg'], connectivity=True)

With the correct connectivity, we can now compute a cluster-based permutation test for a related measures t-test:
res = testnd.TTestRelated(
'eeg', 'cloze_cat', 'low', 'high', match='subject', ds=ds,
pmin=0.05, # Use uncorrected p = 0.05 as threshold for forming clusters
tstart=0.100, # Find clusters in the time window from 100 ...
tstop=0.600, # ... to 600 ms
)
Plot the test result. The top two rows show the two condition averages. The bottom butterfly plot shows the difference with only significant regions in color. The corresponding topomap shows the topography at the marked time point, with the significant region circled (in an interactive environment, the mouse can be used to update the time point shown).
p = plot.TopoButterfly(res, clip='circle')
p.set_time(0.400)

Generate a table with all significant clusters:
clusters = res.find_clusters(0.05)
print(clusters)
Out:
id tstart tstop duration n_sensors v p sig
--------------------------------------------------------------
37 0.295 0.48 0.185 61 -4859.5 0 ***
Retrieve the cluster map using its ID and visualize the spatio-temporal extent of the cluster:
cluster_id = clusters[0, 'id']
cluster = res.cluster(cluster_id)
p = plot.TopoArray(cluster, interpolation='nearest')
p.set_topo_ts(.350, 0.400, 0.450)

Often it is desirable to summarize values in a cluster. This is especially useful in more complex designs. For example, after finding a significant interaction effect in an ANOVA, one might want to follow up with a pairwise test of the value in the cluster. This can often be achieved using binary masks based on the cluster. Using the cluster identified above, generate a binary mask:
mask = cluster != 0
p = plot.TopoArray(mask, cmap='Wistia')
p.set_topo_ts(.350, 0.400, 0.450)

Such a spatio-temporal boolean mask can be used
to extract the value in the cluster for each condition/participant.
Since mask
contains both time and sensor dimensions, using it with
the NDVar.mean()
method collapses across these dimensions and
returns a scalar for each case (i.e., for each condition/subject).
ds['cluster_mean'] = ds['eeg'].mean(mask)
p = plot.Barplot('cluster_mean', 'cloze_cat', match='subject', ds=ds, test=False)

Similarly, a mask consisting of a cluster of sensors can be used to visualize the time course in that region of interest. A straightforward choice is to use all sensors that were part of the cluster (mask) at any point in time:
roi = mask.any('time')
p = plot.Topomap(roi, cmap='Wistia')

When using a mask that only contains a sensor dimension (roi), NDVar.mean() collapses across sensors and returns a value for each time point, i.e. the time course in the sensors involved in the cluster:
ds['cluster_timecourse'] = ds['eeg'].mean(roi)
p = plot.UTSStat('cluster_timecourse', 'cloze_cat', match='subject', ds=ds, frame='t')
# mark the duration of the spatio-temporal cluster
p.set_clusters(clusters, y=0.25e-6)

Now visualize the cluster topography, marking significant sensors:
time_window = (clusters[0, 'tstart'], clusters[0, 'tstop'])
c1_topo = res.c1_mean.mean(time=time_window)
c0_topo = res.c0_mean.mean(time=time_window)
diff_topo = res.difference.mean(time=time_window)
p = plot.Topomap([c1_topo, c0_topo, diff_topo], axtitle=['Low cloze', 'High cloze', 'Low - high'], ncol=3)
p.mark_sensors(roi, -1)

Temporal cluster based test¶
Alternatively, if a spatial region of interest exists, a univariate time
course can be extracted and submitted to a temporal cluster based test. For
example, the N400 is typically expected to be strong at sensor Cz
:
ds['eeg_cz'] = ds['eeg'].sub(sensor='Cz')
res_timecourse = testnd.TTestRelated(
'eeg_cz', 'cloze_cat', 'low', 'high', match='subject', ds=ds,
pmin=0.05, # Use uncorrected p = 0.05 as threshold for forming clusters
tstart=0.100, # Find clusters in the time window from 100 ...
tstop=0.600, # ... to 600 ms
)
clusters = res_timecourse.find_clusters(0.05)
print(clusters)
p = plot.UTSStat('eeg_cz', 'cloze_cat', match='subject', ds=ds, frame='t')
p.set_clusters(clusters, y=0.25e-6)

Out:
id tstart tstop duration v p sig
--------------------------------------------------
5 0.33 0.46 0.13 -216.97 0 ***
Two-stage test¶
When trials are associated with continuous predictor variables, averaging is often a poor solution that loses part of the data. In such cases, a two-stage design can be employed that allows using the continuous predictor variable to test hypotheses at the group level. A two-stage analysis involves:
- Stage 1: fit a regression model to each individual subject’s data
- Stage 2: test regression coefficients at the group level
The example uses simulated data meant to vaguely resemble data from an N400 experiment, but not intended as a physiologically realistic simulation.
# sphinx_gallery_thumbnail_number = 1
from eelbrain import *
Generate simulated data: each function call to datasets.simulate_erp() generates a dataset for one subject. For each subject, a regression model is fit using cloze probability as a continuous predictor variable.
lms = []
for subject in range(10):
# generate data for one subject
ds = datasets.simulate_erp(seed=subject)
# Re-reference EEG data
ds['eeg'] -= ds['eeg'].mean(sensor=['M1', 'M2'])
# Fit stage 1 model
lm = testnd.LM('eeg', 'cloze', ds)
lms.append(lm)
# Collect single-subject models for group analysis
stage2 = testnd.LMGroup(lms)
The testnd.LMGroup
object allows quick access to t-tests of
individual regression coefficients.
res = stage2.column_ttest('cloze', pmin=0.05, tstart=0.100, tstop=0.600)
print(res.find_clusters(0.05))
p = plot.TopoButterfly(res, frame='t')
p.set_time(0.400)

Out:
id tstart tstop duration n_sensors v p sig
-------------------------------------------------------------
1 0.335 0.475 0.14 61 3916.2 0 ***
Since this is a regression model, it also contains an intercept coefficient reflecting signal deflections shared in all trials.
res = stage2.column_ttest('intercept', pmin=0.05)
p = plot.TopoButterfly(res, frame='t')
p.set_time(0.120)

The regression coefficients themselves can be retrieved as a Dataset:
coeffs = stage2.coefficients_dataset()
print(coeffs.summary())
Out:
Key Type Values
----------------------------------------------------------------------------------------
intercept NDVar 140 time, 65 sensor; -5.16412e-06 - 5.06379e-06
cloze NDVar 140 time, 65 sensor; -5.37403e-06 - 5.26419e-06
subject Factor S000, S001, S002, S003, S004, S005, S006, S007, S008, S009 (random)
----------------------------------------------------------------------------------------
Dataset: 10 cases
Plots¶
plot examples.
Boxplot¶
Much of the functionality is shared with plot.Barplot
.
# %matplotlib inline
from eelbrain import *
Basic boxplot¶
ds = datasets.get_uv()
ds.summary()
p = plot.Boxplot('fltvar', 'A % B', match='rm', ds=ds)

Test against population mean¶
Instead of pairwise tests, test the values in each category against a population mean. Use the test parameter to set the value to test against. In the example, test=0 will test whether each sample is significantly different from zero. Use the tail parameter to determine the tailedness of the test (tail=1 to test for a value greater than 0).
p = plot.Boxplot('fltvar', 'A % B', match='rm', ds=ds, test=0, tail=1)

Appearance: colored boxes and category labels¶
colors = {
('a1', 'b1'): (1.0, 0.0, 0.0),
('a1', 'b2'): (1.0, 0.7, 0.0),
('a2', 'b1'): (0.0, 0.0, 1.0),
('a2', 'b2'): (0.0, 0.8, 1.0),
}
labels = {
('a1', 'b1'): 'A-1 (B-1)',
('a1', 'b2'): 'A-1 (B-2)',
('a2', 'b1'): 'A-2 (B-1)',
('a2', 'b2'): 'A-2 (B-2)',
}
p = plot.Boxplot('fltvar', 'A % B', match='rm', ds=ds, colors=colors, xticks=labels)
colors = {
('a1', 'b1'): plot.Style((1, .5, .5), hatch=''),
('a1', 'b2'): plot.Style((1, .5, .5), hatch='///'),
('a2', 'b1'): plot.Style((.5, .5, 1), hatch=''),
('a2', 'b2'): plot.Style((.5, .5, 1), hatch='///'),
}
p = plot.Boxplot('fltvar', 'A % B', match='rm', ds=ds, colors=colors)
colors = {
('a1', 'b1'): plot.Style('w', hatch='///'),
('a1', 'b2'): plot.Style('w', hatch='/'),
('a2', 'b1'): plot.Style('w', hatch='O'),
('a2', 'b2'): plot.Style('w', hatch='.'),
}
p = plot.Boxplot('fltvar', 'A % B', match='rm', ds=ds, colors=colors)
Label fliers¶
The label_fliers=True
option is used to identify outlier points (labels are based on the match
argument).
p = plot.Boxplot('fltvar', 'A % B', match='rm', ds=ds, label_fliers=True)

Reverse Correlation¶
Introduction to Temporal Response Functions (TRFs)¶
A temporal response function (TRF) is a linear model of the stimulus-response dependency. The response is predicted by a linear convolution of the stimulus with the TRF. This means that for every non-zero element in the stimulus, there will be a response in the shape of the TRF:
from eelbrain import *
import numpy as np
# Construct a 10 s long stimulus
time = UTS(0, 0.01, 1000)
x = NDVar(np.zeros(len(time)), time)
# add a few impulses
x[1] = 1
x[3] = 1
x[5] = 1
# Construct a TRF of length 500 ms
trf_time = UTS(0, 0.01, 50)
trf = gaussian(0.200, 0.050, trf_time) - gaussian(0.300, 0.050, trf_time)
# The response is the convolution of the stimulus with the TRF
y = convolve(trf, x)
plot_args = dict(ncol=1, axh=2, w=10, frame='t', legend=False, colors='r')
plot.UTS([x, trf, y], ylabel=['Stimulus (x)', 'TRF', 'Response (y)'], **plot_args)

Out:
<UTS: None>
Since the convolution is linear:
- scaled stimuli cause scaled responses
- overlapping responses add up
x[2] = 0.66
x[5.2] = 1
x[7] = 0.8
x[7.2] = 0.3
x[8.8] = 0.2
x[9] = 0.5
x[9.2] = 0.7
y = convolve(trf, x)
plot.UTS([x, trf, y], ylabel=['Stimulus (x)', 'TRF', 'Response (y)'], **plot_args)

Out:
<UTS: None>
When the stimulus contains only non-zero elements, this works just the same, but the TRF might not be apparent in the response anymore:
x += np.random.normal(0, 0.1, x.shape)
x = filter_data(x, 1, 40)
y = convolve(trf, x)
plot.UTS([x, trf, y], ylabel=['Stimulus (x)', 'TRF', 'Response (y)'], **plot_args)

Out:
<UTS: None>
Given a stimulus and a response, there are different methods to reconstruct
the TRF. Eelbrain comes with an implementation of the boosting()
coordinate descent algorithm:
fit = boosting(y, x, 0.000, 0.500, basis=0.050, partitions=2)
plot_args = {**plot_args, 'ncol': 3}
plot.UTS([trf, fit.h, None], axtitle=["Model TRF", "Reconstructed TRF"], **plot_args)

Out:
Fitting models: 100%|##########| 2/2 [00:00<00:00, 12.80it/s]
<UTS: None>
EEG speech envelope TRF¶
Analyze continuous speech data from the mTRF dataset [1]: use the boosting algorithm for estimating temporal response functions (TRFs) to the acoustic envelope.
# Author: Christian Brodbeck <christianbrodbeck@nyu.edu>
# sphinx_gallery_thumbnail_number = 4
import os
from scipy.io import loadmat
import mne
from eelbrain import *
# Load the mTRF speech dataset and convert data to NDVars
root = mne.datasets.mtrf.data_path()
speech_path = os.path.join(root, 'speech_data.mat')
mdata = loadmat(speech_path)
# Time axis
tstep = 1. / mdata['Fs'][0, 0]
n_times = mdata['envelope'].shape[0]
time = UTS(0, tstep, n_times)
# Load the EEG sensor coordinates (drop fiducial coordinates, which are
# stored after sensor 128)
sensor = Sensor.from_montage('biosemi128')[:128]
# Frequency dimension for the spectrogram
band = Scalar('frequency', range(16))
# Create variables
envelope = NDVar(mdata['envelope'][:, 0], (time,), name='envelope')
eeg = NDVar(mdata['EEG'], (time, sensor), name='EEG', info={'unit': 'µV'})
spectrogram = NDVar(mdata['spectrogram'], (time, band), name='spectrogram')
# Exclude a bad channel
eeg = eeg[sensor.index(exclude='A13')]
Data¶
Plot the spectrogram of the speech stimulus:
plot.Array(spectrogram, xlim=5, w=6, h=2)

Out:
<Array: spectrogram>
Plot the envelope used as stimulus representation for reverse correlation:
plot.UTS(envelope, xlim=5, w=6, h=2)

Out:
<UTS: envelope>
Plot the corresponding EEG data:
p = plot.TopoButterfly(eeg, xlim=5, w=7, h=2)
p.set_time(1.200)

Reverse correlation¶
TRF for the envelope using boosting:
- TRF from -100 to 400 ms
- Basis of 100 ms Hamming windows
- Use 4 partitions of the data for cross-validation based early stopping
res = boosting(eeg, envelope, -0.100, 0.400, basis=0.100, partitions=4)
p = plot.TopoButterfly(res.h_scaled, w=6, h=2)
p.set_time(.180)

Out:
Fitting models: 100%|##########| 508/508 [00:09<00:00, 55.38it/s]
Multiple predictors¶
Multiple predictors additively explain the signal:
# Derive acoustic onsets from the envelope
onset = envelope.diff('time', name='onset').clip(0)
onset *= envelope.max() / onset.max()
plot.UTS([[envelope, onset]], xlim=5, w=6, h=2)

Out:
<UTS: envelope, onset>
res_onset = boosting(eeg, [onset, envelope], -0.100, 0.400, basis=0.100, partitions=4)
p = plot.TopoButterfly(res_onset.h_scaled, w=6, h=3)
p.set_time(.150)

Out:
Fitting models: 100%|##########| 508/508 [00:17<00:00, 28.46it/s]
Compare models¶
Compare model quality through the correlation between measured and predicted responses:
plot.Topomap([res.r, res_onset.r], w=4, h=2, ncol=2, axtitle=['envelope', 'envelope + onset'])

Out:
<Topomap: Correlation>
References¶
[1] Crosse, M. J., Di Liberto, G. M., Bednar, A., & Lalor, E. C. (2016). The Multivariate Temporal Response Function (mTRF) Toolbox: A MATLAB Toolbox for Relating Neural Signals to Continuous Stimuli. Frontiers in Human Neuroscience, 10. https://doi.org/10.3389/fnhum.2016.00604
Impulse predictors for epochs¶
epoch_impulse_predictor() generates predictor variables for reverse correlation in trial-based experiments with discrete events. The function generates one impulse per trial; these impulses can be of variable magnitude and have variable latency.
The example uses simulated data meant to vaguely resemble data from an N400 experiment, but not intended as a physiologically realistic simulation.
# sphinx_gallery_thumbnail_number = 2
from eelbrain import *
ds = datasets.simulate_erp()
print(ds.summary())
Out:
Key Type Values
--------------------------------------------------------------------
eeg NDVar 140 time, 65 sensor; -1.59126e-05 - 1.55339e-05
cloze Var 0.00563694 - 0.997675
cloze_cat Factor high:40, low:40
n_chars Var 0:2, 1:5, 2:6, 3:13, 4:23, 5:20, 6:7, 7:4
--------------------------------------------------------------------
Dataset: 80 cases
Discrete events¶
Computing a TRF for an impulse at trial onset is very similar to averaging:
any_trial = epoch_impulse_predictor('eeg', 1, ds=ds)
fit = boosting('eeg', any_trial, -0.100, 0.600, basis=0.050, ds=ds, partitions=2, delta=0.05)
average = ds['eeg'].mean('case')
trf = fit.h.sub(time=(average.time.tmin, average.time.tstop))
p = plot.TopoButterfly([trf, average], axtitle=['Impulse response', 'Average'])
p.set_time(0.400)

Out:
Fitting models: 100%|##########| 130/130 [00:02<00:00, 52.52it/s]
Categorial coding¶
Impulse predictors can be used like dummy codes in a regression model. Use one impulse to code for the occurrence of any word (any_word), and a second impulse to code for unpredictable words only (low_cloze):
any_word = epoch_impulse_predictor('eeg', 1, ds=ds, name='any_word')
# dummy code for cloze (1 for low cloze, 0 for high cloze)
cloze_code = Var.from_dict(ds['cloze_cat'], {'high': 0, 'low': 1})
low_cloze = epoch_impulse_predictor('eeg', cloze_code, ds=ds, name='low_cloze')
# plot the predictors for each trial
plot.UTS([any_word, low_cloze], '.case')

Out:
<UTS: any_word, low_cloze ~ .case>
Estimate response functions for these two predictors. Based on the coding,
any_word
reflects the response to predictable words, and low_cloze
reflects how unpredictable words differ from predictable words:
fit = boosting('eeg', [any_word, low_cloze], 0, 0.5, basis=0.050, model='cloze_cat', ds=ds, partitions=2, delta=0.05)
p = plot.TopoButterfly(fit.h)
p.set_time(0.400)

Out:
Fitting models: 100%|##########| 130/130 [00:03<00:00, 35.41it/s]
Recipes¶
Plots for Publication¶
In order to produce multiple plots with a consistent style it is useful to set some matplotlib options globally (more on styles). One way to do this is by updating rcParams inside a script, e.g.:
import matplotlib as mpl
mpl.rcParams['font.family'] = 'sans-serif'
mpl.rcParams['font.size'] = 8
# Change all line-widths
for key in mpl.rcParams:
if 'linewidth' in key:
mpl.rcParams[key] *= 0.5
# The size of saved figures depends on the figure size (w, h) and DPI
mpl.rcParams['figure.dpi'] = 300
mpl.rcParams['savefig.dpi'] = 300
Matplotlib’s tight_layout() functionality provides an easy way for plots to use the available space, and most Eelbrain plots use it by default. However, when trying to produce multiple plots with identical scaling it can lead to unwanted discrepancies. Subplot placement can instead be specified in absolute scale through the margins parameter (see General layout parameters).
If a script produces several plots and there is no need to show them on the screen, they can be kept hidden through the show=False argument:
p = plot.UTSStat('uts', 'A', ds=ds, w=5, tight=False, show=False)
When working on more complex figures, it is often desirable to save the legend separately and combine it in a layout application:
p = plot.UTSStat('uts', 'A', ds=ds, w=5, tight=False, show=False, legend=False)
p.save('plot.pdf', transparent=True)
p.save_legend('legend.pdf', transparent=True)
The MneExperiment Pipeline¶
See also
- MneExperiment class reference for details on all available methods
- Pipeline wiki page for additional information
Introduction¶
The MneExperiment class is a template for an MEG/EEG analysis pipeline. The pipeline is adapted to a specific experiment by creating a subclass and specifying properties of the experiment as attributes.
Step by step¶
Setting up the file structure¶
MneExperiment.sessions¶
The first step is to define an MneExperiment subclass with the name of the experiment:
from eelbrain import *

class WordExperiment(MneExperiment):

    sessions = 'words'
Where sessions is the name you included in your raw data file names after the subject identifier.
The pipeline expects input files in a strictly determined folder/file structure.
In the schema below, curly brackets indicate slots to be replaced with specific names; for example, '{subject}' should be replaced with each specific subject’s label:
root
  mri-sdir      /mri
    mri-dir     /{mrisubject}
  meg-sdir      /meg
    meg-dir     /{subject}
      raw-dir
        trans-file    /{mrisubject}-trans.fif
        raw-file      /{subject}_{session}-raw.fif
This schema shows the path templates according to which the input files should be organized. Assuming root="/files", for a subject called “R0001” this includes:
- the MRI-directory at /files/mri/R0001
- the raw data file at /files/meg/R0001/R0001_words-raw.fif (the session is called “words”, as specified in WordExperiment.sessions)
- the trans-file from the coregistration at /files/meg/R0001/R0001-trans.fif
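To sanity-check this layout for one subject, here is a minimal stand-alone sketch using only the standard library; the root, subject, and session names are placeholders:
from pathlib import Path

root = Path("/files")
subject, session = "R0001", "words"
expected = [
    root / "mri" / subject,                                   # MRI-directory
    root / "meg" / subject / f"{subject}_{session}-raw.fif",  # raw data file
    root / "meg" / subject / f"{subject}-trans.fif",          # trans-file
]
for path in expected:
    print(path, "OK" if path.exists() else "MISSING")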
Once the required files are placed in this structure, the experiment class can be initialized with the proper root parameter, pointing to where the files are located:
>>> e = WordExperiment("/files")
The setup can be tested using MneExperiment.show_subjects(), which shows a list of the subjects that were discovered and the MRIs used:
>>> e.show_subjects()
# subject mri
-----------------------------------------
0 R0026 R0026
1 R0040 fsaverage * 0.92
2 R0176 fsaverage * 0.954746600461
...
MneExperiment.visits¶
Note
If participants come back for the experiment on multiple occasions, a visits attribute might also be needed. For details, see the corresponding wiki page.
Pre-processing¶
Make sure an appropriate pre-processing pipeline is defined as MneExperiment.raw.
To inspect raw data for a given pre-processing stage use:
>>> e.set(raw='1-40')
>>> y = e.load_raw(ndvar=True)
>>> p = plot.TopoButterfly(y, xlim=5)
This will plot 5 s excerpts and allow scrolling through the data.
Events¶
Initially, events are only labeled with the trigger ID. Use the MneExperiment.variables settings to add labels.
Events are represented as Dataset objects and can be inspected with the corresponding methods and functions, for example:
>>> e = WordExperiment("/files")
>>> ds = e.load_events()
>>> ds.head()
>>> print(table.frequencies('trigger', ds=ds))
For more complex designs and variables, you can override methods that provide complete control over the events. These are the transformations applied to the triggers extracted from the raw files (in this order; see the sketch after this list):
- MneExperiment.fix_events(): Change event order and timing, and remove/add events
- MneExperiment.variables: Add labels based on triggers
- MneExperiment.label_events(): Add any more complex labels
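For example, a hypothetical sketch of overriding MneExperiment.label_events() to add a derived variable; the variable computed here is invented for illustration, and the exact signature should be checked against the method’s documentation:
import numpy as np
from eelbrain import Var
from eelbrain.pipeline import MneExperiment

class WordExperiment(MneExperiment):

    sessions = 'words'

    def label_events(self, ds):
        # keep the default processing (trigger-based variables etc.)
        ds = super().label_events(ds)
        # hypothetical example: add each event's serial position in the recording
        ds['position'] = Var(np.arange(ds.n_cases))
        return ds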
Defining data epochs¶
Once events are properly labeled, define MneExperiment.epochs.
There is one special epoch to define, called 'cov': this is the data epoch that will be used to estimate the sensor noise covariance matrix for source estimation.
In order to find the right sel epoch parameter, it can be useful to load the events with MneExperiment.load_events() and test different selection strings. The epoch selection is determined by selection = event_ds.eval(epoch['sel']); thus, a specific setting could be tested with:
>>> ds = e.load_events()
>>> print(ds.sub("event == 'value'"))
Bad channels¶
Flat channels are automatically excluded from the analysis.
An initial check for noisy channels can be done by looking at the raw data (see Pre-processing above). If this inspection reveals bad channels, they can be excluded using MneExperiment.make_bad_channels().
Another good check for bad channels is plotting the average evoked response and looking for channels that are uncorrelated with neighboring channels. To plot the average before trial rejection, use:
>>> ds = e.load_epochs(epoch='epoch', reject=False)
>>> plot.TopoButterfly('meg', ds=ds)
The neighbor correlation can also be quantified using:
>>> nc = neighbor_correlation(concatenate(ds['meg']))
>>> nc.sensor.names[nc < 0.3]
Datalist(['MEG 099'])
A simple way to cycle through subjects when performing a given pre-processing step is MneExperiment.next().
If a general threshold is adequate, the selection of bad channels based on neighbor-correlation can be automated using the MneExperiment.make_bad_channels_neighbor_correlation() method:
>>> for subject in e:
... e.make_bad_channels_neighbor_correlation()
ICA¶
If preprocessing includes ICA, select which ICA components should be removed. To open the ICA selection GUI, the experiment’s raw state needs to be set to the ICA stage of the pipeline:
>>> e.set(raw='ica')
>>> e.make_ica_selection()
See MneExperiment.make_ica_selection() for more information on display options and on how to precompute the ICA decomposition for all subjects.
When selecting ICA components for multiple subjects, a simple way to cycle through subjects is MneExperiment.next(), like:
>>> e.make_ica_selection(epoch='epoch', decim=10)
>>> e.next()
subject: 'R1801' -> 'R2079'
>>> e.make_ica_selection(epoch='epoch', decim=10)
>>> e.next()
subject: 'R2079' -> 'R2085'
...
Trial selection¶
For each primary epoch that is defined, bad trials can be rejected using MneExperiment.make_epoch_selection(). Rejections are specific to a given raw state:
>>> e.set(raw='ica1-40')
>>> e.make_epoch_selection()
>>> e.next()
subject: 'R1801' -> 'R2079'
>>> e.make_epoch_selection()
...
To reject trials based on a pre-determined threshold, a loop can be used:
>>> for subject in e:
... e.make_epoch_selection(auto=1e-12)
...
Analysis¶
With preprocessing completed, there are different options for analyzing the data.
The most flexible option is loading data from the desired processing stage using one of the many .load_... methods of the MneExperiment. For example, load a Dataset with source-localized condition averages using MneExperiment.load_evoked_stc(), then test a hypothesis using one of the mass-univariate tests from the testnd module. To make this kind of analysis replicable, it is probably useful to write the complete analysis as a separate script that imports the experiment (see the example experiment folder, and the sketch below).
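A minimal sketch of such a script, assuming the Mouse experiment from the Example below; the group, epoch, and model names come from that example, while the data variable name ('srcm') and the test parameters are assumptions:
from eelbrain import testnd
from mouse import *  # provides the experiment instance, e

# source-localized condition averages for all subjects, morphed to the common brain
ds = e.load_evoked_stc(group='all', epoch='target', model='prediction', morph=True)
# mass-univariate related-measures t-test with cluster-based correction
res = testnd.TTestRelated('srcm', 'prediction', 'unexpected', 'expected',
                          match='subject', ds=ds, samples=10000, pmin=0.05)
print(res.find_clusters(0.05))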
Many statistical comparisons can also be specified in the MneExperiment.tests attribute and then loaded directly using the MneExperiment.load_test() method. This has the advantage that the tests will be cached automatically and, once computed, can be loaded very quickly. However, these definitions are not quite as flexible as writing a custom script.
Finally, for tests defined in MneExperiment.tests, the MneExperiment can generate HTML report files. These are generated with the MneExperiment.make_report() and MneExperiment.make_report_rois() methods.
Warning
If source files are changed (raw files, epoch rejection or bad channel files, …), reports are not updated automatically unless the corresponding MneExperiment.make_report() function is called again. For this reason it is useful to have a script that generates all desired reports. Running the script ensures that all reports are up-to-date, and will only take seconds if nothing has to be recomputed (for an example see make-reports.py in the example experiment folder; a sketch follows below).
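A sketch of such a report script, assuming the Mouse experiment from the Example below; the test names come from that example, and the time window is an assumption:
from mouse import *  # provides the experiment instance, e

for test in ('surprise', 'anova'):
    e.make_report(test, tstart=0.1, tstop=0.3)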
Example¶
The following is a complete example of an experiment class definition file (the source file can be found in the Eelbrain examples folder at examples/mouse/mouse.py):
# skip test: data unavailable
from eelbrain.pipeline import *


class Mouse(MneExperiment):

    # Name of the experimental session(s), used to locate *-raw.fif files
    sessions = 'CAT'

    # Pre-processing pipeline: each entry in `raw` specifies one processing step.
    # The first parameter of each entry specifies the source (another processing
    # step, or 'raw' for raw input data).
    raw = {
        # Maxwell filter as first step (taking input from raw data, 'raw')
        'tsss': RawMaxwell('raw', st_duration=10., ignore_ref=True, st_correlation=0.9, st_only=True),
        # Band-pass filter data between 1 and 40 Hz (taking Maxwell-filtered data as input, 'tsss')
        '1-40': RawFilter('tsss', 1, 40),
        # Perform ICA on filtered data
        'ica': RawICA('1-40', 'CAT', n_components=0.99),
    }

    # Variables determine how event triggers are mapped to meaningful labels. Events
    # are represented as a data-table in which each row corresponds to one event
    # (i.e., one trigger). Each variable defined here adds one column to that
    # data-table, assigning a label or value to each event.
    variables = {
        # The first parameter specifies the source variable (here the trigger values),
        # the second parameter a mapping from source to target labels/values
        'stimulus': LabelVar('trigger', {(162, 163): 'target', (166, 167): 'prime'}),
        'prediction': LabelVar('trigger', {(162, 166): 'expected', (163, 167): 'unexpected'}),
    }

    # Epochs specify how to extract time-locked data segments ("epochs") from the
    # continuous data.
    epochs = {
        # A PrimaryEpoch definition extracts epochs directly from continuous data.
        # The first argument specifies the recording session from which to extract
        # the data (here: 'CAT'). The second argument specifies which events to
        # extract the data from (here: all events at which the 'stimulus' variable,
        # defined above, has a value of either 'prime' or 'target').
        'word': PrimaryEpoch('CAT', "stimulus.isin(('prime', 'target'))", samplingrate=200),
        # A secondary epoch inherits its properties from the base epoch ("word")
        # unless they are explicitly modified (here, selecting a subset of events)
        'prime': SecondaryEpoch('word', "stimulus == 'prime'"),
        'target': SecondaryEpoch('word', "stimulus == 'target'"),
        # The 'cov' epoch defines the data segments used to compute the noise
        # covariance matrix for source localization
        'cov': SecondaryEpoch('prime', tmax=0),
    }

    tests = {
        '=0': TTestOneSample(),
        'surprise': TTestRelated('prediction', 'unexpected', 'expected'),
        'anova': ANOVA('prediction * subject'),
    }

    parcs = {
        'frontotemporal-lh': CombinationParc('aparc', {
            'frontal-lh': 'parsorbitalis + parstriangularis + parsopercularis',
            'temporal-lh': 'transversetemporal + superiortemporal + '
                           'middletemporal + inferiortemporal + bankssts',
        }, views='lateral'),
    }


root = '~/Data/Mouse'
e = Mouse(root)
The event structure is illustrated by looking at the first few events:
>>> from mouse import *
>>> ds = e.load_events()
>>> ds.head()
trigger i_start T SOA subject stimulus prediction
--------------------------------------------------------------------
182 104273 104.27 12.04 S0001
182 116313 116.31 1.313 S0001
166 117626 117.63 0.598 S0001 prime expected
162 118224 118.22 2.197 S0001 target expected
166 120421 120.42 0.595 S0001 prime expected
162 121016 121.02 2.195 S0001 target expected
167 123211 123.21 0.596 S0001 prime unexpected
163 123807 123.81 2.194 S0001 target unexpected
167 126001 126 0.598 S0001 prime unexpected
163 126599 126.6 2.195 S0001 target unexpected
Experiment Definition¶
Basic setup¶
MneExperiment.owner¶
Set MneExperiment.owner to your email address if you want to be able to receive notifications. Whenever you run a sequence of commands within a with mne_experiment.notification: block, you will get an email once the respective code has finished executing or run into an error, for example:
>>> e = MyExperiment()
>>> with e.notification:
... e.make_report('mytest', tstart=0.1, tstop=0.3)
...
This will send you an email as soon as the report is finished (or the program encounters an error).
MneExperiment.auto_delete_results¶
Whenever a MneExperiment instance is initialized with a valid root path, it checks whether changes in the class definition invalidate previously computed results. By default, the user is prompted to confirm the deletion of invalidated results. Set .auto_delete_results=True to delete them automatically without interrupting initialization.
MneExperiment.screen_log_level¶
Determines the amount of information displayed on the screen while using an MneExperiment (see logging).
MneExperiment.meg_system¶
Starting with mne 0.13, fiff files converted from KIT files store information about the system they were collected with. For files converted earlier, the MneExperiment.meg_system attribute needs to specify the system the data were collected with. For data from NYU New York, the correct value is meg_system="KIT-157".
MneExperiment.trigger_shift¶
Set this attribute to shift all trigger times by a constant (in seconds). For example, with trigger_shift = 0.03 a trigger that originally occurred 35.10 seconds into the recording will be shifted to 35.13. If the trigger delay differs between subjects, this attribute can also be a dictionary mapping subject names to shift values, e.g. trigger_shift = {'R0001': 0.02, 'R0002': 0.05, ...}.
Subjects¶
MneExperiment.subject_re¶
Subjects are identified by looking for folders in the subjects-directory whose name matches the subject_re regular expression (see re). By default, this is '(R|A|Y|AD|QP)(\d{3,})$', which matches R-numbers like R1234, but also numbers prefixed by A, Y, AD or QP. A sketch for checking a custom pattern follows below.
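A custom pattern can be checked against expected folder names outside the pipeline; the pattern and names below are made up for illustration:
import re

subject_re = r'S\d{4}$'
for name in ('S0001', 'S12345', 'R0001'):
    print(name, bool(re.match(subject_re, name)))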
Defaults¶
MneExperiment.defaults¶
The defaults dictionary can contain default settings for experiment analysis parameters (see State Parameters), e.g.:
defaults = {
    'epoch': 'my_epoch',
    'cov': 'noreg',
    'raw': '1-40',
}
Pre-processing (raw)¶
MneExperiment.raw¶
Define a pre-processing pipeline as a series of linked processing steps:
RawFilter(source[, l_freq, h_freq, cache]) | Filter raw pipe
RawICA(source, session[, method, …]) | ICA raw pipe
RawApplyICA(source, ica[, cache]) | Apply ICA estimated in a RawICA pipe
RawMaxwell(source[, bad_condition]) | Maxwell filter raw pipe
RawSource([filename, reader, sysname, …]) | Raw data source
RawReReference(source[, reference, add, drop]) | Re-reference EEG data
By default, the raw data can be accessed in a pipe named "raw" (raw data input can be customized by adding a RawSource pipe). Each subsequent preprocessing step is defined with its input as first argument (source).
For example, the following definition sets up a pipeline using TSSS, a band-pass filter and ICA:
class Experiment(MneExperiment):

    sessions = 'session'

    raw = {
        'tsss': RawMaxwell('raw', st_duration=10., ignore_ref=True, st_correlation=0.9, st_only=True),
        '1-40': RawFilter('tsss', 1, 40),
        'ica': RawICA('1-40', 'session', 'extended-infomax', n_components=0.99),
    }
To use the raw --> TSSS --> 1-40 Hz band-pass pipeline, use e.set(raw="1-40"). To use raw --> TSSS --> 1-40 Hz band-pass --> ICA, select e.set(raw="ica").
Event variables¶
MneExperiment.variables¶
Event variables add labels and variables to the events:
LabelVar(source, codes[, default, session]) | Variable assigning labels to values
EvalVar(code[, session]) | Variable based on evaluating a statement
GroupVar(groups[, session]) | Group membership for each subject
Most of the time, the main purpose of this attribute is to turn trigger values into meaningful labels:
class Mouse(MneExperiment):

    variables = {
        'stimulus': LabelVar('trigger', {(162, 163): 'target', (166, 167): 'prime'}),
        'prediction': LabelVar('trigger', {162: 'expected', 163: 'unexpected'}),
    }
This defines a variable called “stimulus”: on this variable, all events with triggers 162 and 163 have the value "target", and events with triggers 166 and 167 have the value "prime". The “prediction” variable only labels triggers 162 and 163. Unmentioned trigger values are assigned the empty string ('').
Epochs¶
MneExperiment.epochs¶
Epochs are specified as a {name: epoch_definition} dictionary. Names are str, and epoch_definition are instances of the classes described below:
PrimaryEpoch(session[, sel]) | Epoch based on selecting events from a raw file
SecondaryEpoch(base[, sel]) | Epoch inheriting events from another epoch
SuperEpoch(sub_epochs, **kwargs) | Combine several other epochs
Examples:
epochs = {
    # some primary epochs:
    'picture': PrimaryEpoch('words', "stimulus == 'picture'"),
    'word': PrimaryEpoch('words', "stimulus == 'word'"),
    # use the picture baseline for the sensor covariance estimate
    'cov': SecondaryEpoch('picture', tmax=0),
    # another secondary epoch:
    'animal_words': SecondaryEpoch('noun', sel="word_type == 'animal'"),
    # a superset-epoch:
    'all_stimuli': SuperEpoch(('picture', 'word')),
}
Tests¶
MneExperiment.tests¶
Statistical tests are defined as a {name: test_definition} dictionary. Test definitions are constructed from the following:
TTestOneSample([tail]) | One-sample t-test
TTestRelated(model, c1, c0[, tail]) | Related-measures t-test
TTestIndependent(model, c1, c0[, tail, vars]) | Independent-measures t-test (comparing groups of subjects)
ANOVA(x[, model, vars]) | ANOVA test
TContrastRelated(model, contrast[, tail]) | Contrasts of T-maps (see eelbrain.testnd.TContrastRelated)
TwoStageTest(stage_1[, vars, model]) | Two-stage test: t-test of regression coefficients
Example:
tests = {
    'my_anova': ANOVA('noise * word_type * subject'),
    'my_ttest': TTestRelated('noise', 'a_lot_of_noise', 'no_noise'),
}
Subject groups¶
MneExperiment.groups¶
A subject group called 'all', containing all subjects, is always implicitly defined. Additional subject groups can be defined in MneExperiment.groups with {name: group_definition} entries:
Group(subjects) | Group defined as a collection of subjects
SubGroup(base, exclude) | Group defined by removing subjects from a base group
Example:
groups = {
    'good': SubGroup('all', ['R0013', 'R0666']),
    'bad': Group(['R0013', 'R0666']),
}
Parcellations (parcs)¶
MneExperiment.parcs¶
The parcellation determines how the brain surface is divided into regions. A number of standard parcellations are automatically defined (see parc/mask (parcellations) below). Additional parcellations can be defined in the MneExperiment.parcs dictionary with {name: parc_definition} entries. There are a couple of different ways in which parcellations can be defined, described below; an example follows after the table.
SubParc(base, labels[, views]) | A subset of labels in another parcellation
CombinationParc(base, labels[, views]) | Recombine labels from an existing parcellation
SeededParc(seeds[, mask, surface, views]) | Parcellation that is grown from seed coordinates
IndividualSeededParc(seeds[, mask, surface, …]) | Seed parcellation with individual seeds for each subject
FreeSurferParc([views]) | Parcellation that is created outside Eelbrain for each subject
FSAverageParc([views]) | Fsaverage parcellation that is morphed to individual subjects
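For example, a sketch defining two custom parcellations; the label names come from the FreeSurfer 'aparc' parcellation, but the specific selections are assumptions for illustration:
from eelbrain.pipeline import *

class Experiment(MneExperiment):

    sessions = 'session'

    parcs = {
        # a subset of existing labels
        'IFG-lh': SubParc('aparc', ('parsopercularis-lh', 'parstriangularis-lh',
                                    'parsorbitalis-lh')),
        # recombine labels into a larger region
        'auditory-lh': CombinationParc('aparc', {
            'AC-lh': 'transversetemporal + superiortemporal',
        }, views='lateral'),
    }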
Visualization defaults¶
MneExperiment.brain_plot_defaults¶
The MneExperiment.brain_plot_defaults dictionary can contain options that change defaults for brain plots (for reports and movies). The following options are available (see the example after the list):
- surf : ‘inflated’ | ‘pial’ | ‘smoothwm’ | ‘sphere’ | ‘white’
  Freesurfer surface to use as brain geometry.
- views : str | iterator of str
  View or views to show in the figure. Can also be set for each parcellation, see MneExperiment.parc.
- foreground : mayavi color
  Figure foreground color (i.e., the text color).
- background : mayavi color
  Figure background color.
- smoothing_steps : None | int
  Number of smoothing steps to display data.
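For example, a minimal sketch using the options above (the specific values are arbitrary):
from eelbrain.pipeline import *

class Experiment(MneExperiment):

    sessions = 'session'

    brain_plot_defaults = {'surf': 'smoothwm', 'views': 'lateral',
                           'background': 'white'}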
State Parameters¶
These are parameters that can be set after an MneExperiment has been initialized to affect the analysis, for example:
>>> my_experiment = MneExperiment()
>>> my_experiment.set(raw='1-40', cov='noreg')
sets up my_experiment to use raw files filtered with a 1-40 Hz band-pass filter, and to use sensor covariance matrices without regularization.
session¶
Which raw session to work with (one of MneExperiment.sessions; usually set automatically when the epoch is set).
visit¶
Which visit to work with (one of MneExperiment.visits).
raw¶
Select the preprocessing pipeline applied to the continuous data. Options are all the processing steps defined in MneExperiment.raw, as well as "raw" for using unprocessed raw data.
group¶
Any group defined in MneExperiment.groups. Will restrict the analysis to that group of subjects.
epoch¶
Any epoch defined in MneExperiment.epochs. Specifies the epoch on which the analysis should be conducted.
rej (trial rejection)¶
Trial rejection can be turned off with e.set(rej=''), meaning that no trials are rejected, and turned back on with e.set(rej='man'), meaning that the corresponding rejection files are used.
model¶
While the epoch state parameter determines which events are included when loading data, the model parameter determines how these events are split into different condition cells. The parameter should be set to the name of a categorial event variable which defines the desired cells.
In the Example, e.load_evoked(epoch='target', model='prediction') would load responses to the target, averaged for expected and unexpected trials.
Cells can also be defined based on crossing two variables using the % sign. In the Example, to load corresponding primes together with the targets, you would use e.load_evoked(epoch='word', model='stimulus % prediction').
equalize_evoked_count¶
By default, the analysis uses all epochs marked as good during rejection. Set equalize_evoked_count='eq' to discard trials so that the same number of epochs goes into each cell of the model.
- '' (default)
  Use all epochs.
- 'eq'
  Make sure the same number of epochs is used in each cell by discarding epochs.
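For example:
>>> e.set(equalize_evoked_count='eq')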
cov¶
The method for correcting the sensor covariance.
- ‘noreg’
  Use raw covariance as estimated from the data (do not regularize).
- ‘bestreg’ (default)
  Find the regularization parameter that leads to optimal whitening of the baseline.
- ‘reg’
  Use the default regularization parameter (0.1).
- ‘auto’
  Use automatic selection of the optimal regularization method.
src¶
The source space to use.
- ico-x: Surface source space based on icosahedral subdivision of the white matter surface with x steps (e.g., ico-4, the default).
- vol-x: Volume source space based on a volume grid with x mm resolution (x is the distance between sources, e.g. vol-10 for a 10 mm grid).
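For example, to use a volume source space with a 10 mm grid:
>>> e.set(src='vol-10')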
inv¶
What inverse solution to use for source localization. This parameter can also be set with MneExperiment.set_inv(), which has a more detailed description of the options. The inverse solution can be set directly using the appropriate string, as in e.set(inv='fixed-1-MNE').
parc/mask (parcellations)¶
The parcellation determines how the brain surface is divided into regions. There are a number of built-in parcellations:
- Freesurfer Parcellations
  aparc.a2005s, aparc.a2009s, aparc, aparc.DKTatlas, PALS_B12_Brodmann, PALS_B12_Lobes, PALS_B12_OrbitoFrontal, PALS_B12_Visuotopic
- lobes
  Modified version of PALS_B12_Lobes in which the limbic lobe is merged into the other 4 lobes.
- lobes-op
  One large region encompassing occipital and parietal lobe in each hemisphere.
- lobes-ot
  One large region encompassing occipital and temporal lobe in each hemisphere.
Additional parcellations can be defined in the MneExperiment.parcs attribute. Parcellations are used in different contexts:
- When loading source space data, the current parc state determines the parcellation of the source space (change the state parameter with e.set(parc='aparc')).
- When loading tests, setting the parc parameter treats each label as a separate ROI. For spatial cluster-based tests, that means that no clusters can cross the boundary between two labels. On the other hand, using the mask parameter treats all named labels as a connected surface, but discards any sources labeled as "unknown". For example, loading a test with mask='lobes' will perform a whole-brain test on the cortex while discarding subcortical sources.
Parcellations are set with their name, with the exception of SeededParc: for those, the name is followed by the radius in mm. For example, to use seeds defined in a parcellation named 'myparc' with a radius of 25 mm around each seed, use e.set(parc='myparc-25').
connectivity¶
Possible values: '', 'link-midline'
Connectivity refers to the edges connecting data channels (sensors for sensor
space data and sources for source space data). These edges are used to find
clusters in cluster-based permutation tests. For source spaces, the default is
to use FreeSurfer surfaces in which the two hemispheres are unconnected. By
setting connectivity='link-midline'
, this default connectivity can be
modified so that the midline gyri of the two hemispheres get linked at sources
that are at most 15 mm apart. This parameter currently does not affect sensor
space connectivity.
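For example, to link the two hemispheres across the midline:
>>> e.set(connectivity='link-midline')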
select_clusters (cluster selection criteria)¶
In thresholded cluster tests, clusters are initially filtered with a minimum size criterion. This can be changed with the select_clusters analysis parameter, with the following options:
name | min time | min sources | min sensors
---|---|---|---
"all" | | |
"10ms" | 10 ms | 10 | 4
"" (default) | 25 ms | 10 | 4
"large" | 25 ms | 20 | 8
To change the cluster selection criterion use for example:
>>> e.set(select_clusters='all')
See also
- Wiki on GitHub
- Mailing list for announcements
- Source code on GitHub
- Eelbrain on the Python Package Index
Eelbrain relies on NumPy, SciPy, Matplotlib, MNE-Python, PySurfer, WxPython, Cython and incorporates icons from the Tango Desktop Project.
Current funding: National Institutes of Health (NIH) grant R01-DC-014085 (since 2016). Past funding: NYU Abu Dhabi Institute grant G1001 (2011-2016).