Eelbrain

Eelbrain is an open-source Python package for accessible statistical analysis of MEG and EEG data. It is maintained by Christian Brodbeck at the Computational Sensorimotor Systems Lab at the University of Maryland, College Park.

If you use Eelbrain in work that is published, please acknowledge it by citing it with the appropriate version and DOI.

Manual

Installing

Note

Because of the fluid nature of Python development, the recommended way of installing Eelbrain changes occasionally. For up-to-date information, see the corresponding Eelbrain wiki page.

Getting Started

MacOS: Framework Build

On macOS, the GUI backend that Eelbrain uses requires a special build of Python called a “Framework build”. You might see this error when trying to create a plot:

SystemExit: This program needs access to the screen.
Please run with a Framework build of python, and only when you are
logged in on the main display of your Mac.

In order to avoid this, Eelbrain installs a shortcut to start IPython with a Framework build:

$ eelbrain

This automatically launches IPython with the “eelbrain” profile. A default startup script that executes “from eelbrain import *” is created; it can be changed in the corresponding IPython profile.

Quitting IPython

Sometimes IPython seems to get stuck after this line:

Do you really want to exit ([y]/n)? y

In those instances, pressing ctrl-c usually terminates IPython immediately.

Windows: Scrolling

Scrolling inside plot axes normally uses the arrow keys, but this is currently not possible on Windows (due to an issue in Matplotlib). Instead, the following keys can be used:

  i  
j   l
  k  

Introduction

There are three primary data-objects:

  • Factor for categorial variables
  • Var for scalar variables
  • NDVar for multidimensional data (e.g. a variable measured at different time points)

Multiple variables belonging to the same dataset can be grouped in a Dataset object.

Factor

A Factor is a container for one-dimensional, categorial data: each case is described by a string label. The most obvious way to initialize a Factor is with a list of strings:

>>> A = Factor(['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b'], name='A')

Since Factor initialization simply iterates over the given data, the same Factor can be initialized with:

>>> Factor('aaaabbbb', name='A')
Factor(['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b'], name='A')

There are other shortcuts to initialize factors (see also the Factor class documentation):

>>> A = Factor(['a', 'b', 'c'], repeat=4, name='A')
>>> A
Factor(['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b', 'c', 'c', 'c', 'c'], name='A')

Indexing works like for arrays:

>>> A[0]
'a'
>>> A[0:6]
Factor(['a', 'a', 'a', 'a', 'b', 'b'], name='A')

All values present in a Factor are accessible in its Factor.cells attribute:

>>> A.cells
('a', 'b', 'c')

Based on the Factor’s cell values, boolean indexes can be generated:

>>> A == 'a'
array([ True,  True,  True,  True, False, False, False, False, False,
       False, False, False], dtype=bool)
>>> A.isany('a', 'b')
array([ True,  True,  True,  True,  True,  True,  True,  True, False,
       False, False, False], dtype=bool)
>>> A.isnot('a', 'b')
array([False, False, False, False, False, False, False, False,  True,
        True,  True,  True], dtype=bool)
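
These boolean indexes can be used directly to subset the Factor (or any other variable of the same length):

>>> A[A.isany('a', 'b')]
Factor(['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b'], name='A')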

Interaction effects can be constructed from multiple factors with the % operator:

>>> B = Factor(['d', 'e'], repeat=2, tile=3, name='B')
>>> B
Factor(['d', 'd', 'e', 'e', 'd', 'd', 'e', 'e', 'd', 'd', 'e', 'e'], name='B')
>>> i = A % B

Interaction effects are in many ways interchangeable with factors in places where a categorial model is required:

>>> i.cells
(('a', 'd'), ('a', 'e'), ('b', 'd'), ('b', 'e'), ('c', 'd'), ('c', 'e'))
>>> i == ('a', 'd')
array([ True,  True, False, False, False, False, False, False, False,
       False, False, False], dtype=bool)

Var

The Var class is a container that associates a one-dimensional numpy.ndarray with a name. While simple operations can be performed on the object directly, for more complex operations the corresponding numpy.ndarray can be retrieved from the Var.x attribute:

>>> Y = Var(np.random.rand(10), name='Y')
>>> Y
Var([0.185, 0.285, 0.105, 0.916, 0.76, 0.888, 0.288, 0.0165, 0.901, 0.72], name='Y')
>>> Y[5:]
Var([0.888, 0.288, 0.0165, 0.901, 0.72], name='Y')
>>> Y + 1
Var([1.18, 1.28, 1.11, 1.92, 1.76, 1.89, 1.29, 1.02, 1.9, 1.72], name='Y+1')
>>> Y.x
array([ 0.18454728,  0.28479396,  0.10546204,  0.91619036,  0.76006963,
        0.88807645,  0.28807859,  0.01645504,  0.90112081,  0.71991843])

Note

The Var.x attribute is not intended to be replaced; rather, a new Var object should be created from a new array.
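
For example, to work with a transformed version of the data, create a new Var from the transformed array (a minimal sketch; the name is illustrative):

>>> Y_log = Var(np.log(Y.x), name='log_Y')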

NDVar

NDVar objects are containers for multidimensional data, and manage the description of the dimensions along with the data. NDVars are often derived from some import function, for example load.fiff.stc_ndvar(). As an example, consider single trial data from the mne sample dataset:

>>> ds = datasets.get_mne_sample(src='ico')
>>> src = ds['src']
>>> src
<NDVar 'src': 145 case, 5120 source, 76 time>

This representation shows that src contains 145 trials of data, with 5120 sources and 76 time points. NDVars offer numpy functionality that takes into account the dimensions. Through the NDVar.sub() method, indexing can be done using meaningful descriptions, such as selecting data for only the left hemisphere:

>>> src.sub(source='lh')
<NDVar 'src': 145 case, 2559 source, 76 time>
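
Similarly, a time window can be selected with a (start, stop) tuple (a sketch; the number of remaining time points depends on the data):

>>> src.sub(time=(0, 0.2))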

Through several methods data can be aggregated, for example a mean over time:

>>> src.mean('time')
<NDVar 'src': 145 case, 5120 source>

Or a root mean square over sources:

>>> src.rms('source')
<NDVar 'src': 145 case, 76 time>

As with a Var, the corresponding numpy.ndarray can always be accessed in the NDVar.x attribute:

>>> type(src.x)
numpy.ndarray
>>> src.x.shape
(145, 5120, 76)

NDVar objects can be constructed from an array and corresponding dimension objects, for example:

>>> frequency = Scalar('frequency', [1, 2, 3, 4])
>>> time = UTS(0, 0.01, 50)
>>> data = numpy.random.normal(0, 1, (4, 50))
>>> NDVar(data, (frequency, time))
<NDVar: 4 frequency, 50 time>

A case dimension can be added by including the bare Case class:

>>> data = numpy.random.normal(0, 1, (10, 4, 50))
>>> NDVar(data, (Case, frequency, time))
<NDVar: 10 case, 4 frequency, 50 time>
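
The resulting NDVar supports the same dimension-aware operations as above, for example averaging across cases:

>>> x = NDVar(data, (Case, frequency, time))
>>> x.mean('case')
<NDVar: 4 frequency, 50 time>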

Dataset

The Dataset class is a subclass of collections.OrderedDict from which it inherits much of its behavior. Its intended purpose is to be a vessel for variable objects (Factor, Var and NDVar) describing the same cases. As a dictionary, its keys are strings and its values are data-objects.

The Dataset class interacts with data-objects’ name attribute:

  • A Dataset initialized with a list of data-objects automatically uses their names as keys:

    >>> A = Factor('aabb', name='A')
    >>> B = Factor('cdcd', name='B')
    >>> ds = Dataset((A, B))
    >>> print(ds)
    A   B
    -----
    a   c
    a   d
    b   c
    b   d
    >>> ds['A']
    Factor(['a', 'a', 'b', 'b'], name='A')
    
  • When an unnamed data-object is assigned to a Dataset, the data-object is automatically assigned its key as a name:

    >>> ds['Y'] = Var([2,1,4,2])
    >>> print(ds)
    A   B   Y
    ---------
    a   c   2
    a   d   1
    b   c   4
    b   d   2
    >>> ds['Y']
    Var([2, 1, 4, 2], name='Y')
    

The “official” string representation of a Dataset contains information on the variables stored in it:

>>> ds
<Dataset n_cases=4 {'A':F, 'B':F, 'Y':V}>

n_cases=4 indicates that the Dataset contains four cases (rows). The subsequent dictionary-like representation shows the keys and the types of the corresponding values (F: Factor, V: Var, Vnd: NDVar). If a variable’s name does not match its key in the Dataset, this is also indicated:

>>> ds['C'] = Factor('qwer', name='another_name')
>>> ds
<Dataset n_cases=4 {'A':F, 'B':F, 'Y':V, 'C':<F 'another_name'>}>

While indexing a Dataset with strings returns the corresponding data-objects, numpy.ndarray-like indexing on the Dataset can be used to access a subset of cases:

>>> ds2 = ds[2:]
>>> print(ds2)
A   B   Y   C
-------------
b   c   4   e
b   d   2   r
>>> ds2['A']
Factor(['b', 'b'], name='A')

Together with the “informal” string representation (retrieved with the print() function) this can be used to inspect the cases contained in the Dataset:

>>> print(ds[0])
A   B   Y   C
-------------
a   c   2   q
>>> print(ds[2:])
A   B   Y   C
-------------
b   c   4   e
b   d   2   r

This type of indexing also allows indexing based on the Dataset’s variables:

>>> print(ds[A == 'a'])
A   B   Y   C
-------------
a   c   2   q
a   d   1   w

Example

Below is a simple example using data objects (for more, see the Examples):

>>> from eelbrain import *
>>> import numpy as np
>>> y = np.empty(21)
>>> y[:14] = np.random.normal(0, 1, 14)
>>> y[14:] = np.random.normal(1.5, 1, 7)
>>> Y = Var(y, 'Y')
>>> A = Factor('abc', 'A', repeat=7)
>>> print(Dataset((Y, A)))
Y           A
-------------
0.10967     a
0.33562     a
-0.33151    a
1.3571      a
-0.49722    a
-0.24896    a
1.0979      a
-0.56123    b
-0.51316    b
-0.25942    b
-0.6072     b
-0.79173    b
0.0019011   b
2.1804      b
2.5373      c
1.7302      c
-0.17581    c
1.8922      c
1.2734      c
1.5961      c
1.1518      c
>>> print(table.frequencies(A))
cell   n
--------
a      7
b      7
c      7
>>> print(test.anova(Y, A))
            SS      df   MS     F       p
------------------------------------------
A            8.76    2   4.38   5.75*   .012
Residuals   13.71   18   0.76
------------------------------------------
Total       22.47   20
>>> print(test.pairwise(Y, A, corr='Hochberg'))

Pairwise t-Tests (independent samples)

    b                  c
----------------------------------------
a   t(12) = 0.71       t(12) = -2.79*
    p = .489           p = .016
    p(c) = .489        p(c) = .032
b                      t(12) = -3.00*
                       p = .011
                       p(c) = .032
(* Corrected after Hochberg, 1988)
>>> t = test.pairwise(Y, A, corr='Hochberg')
>>> print(t.get_tex())
\begin{center}
\begin{tabular}{lll}
\toprule
 & b & c \\
\midrule
a & $t_{12} = 0.71^{    \ \ \ \ }$ & $t_{12} = -2.79^{*   \ \ \ }$ \\
 & $p = .489$ & $p = .016$ \\
 & $p_{c} = .489$ & $p_{c} = .032$ \\
b &  & $t_{12} = -3.00^{*   \ \ \ }$ \\
 &  & $p = .011$ \\
 &  & $p_{c} = .032$ \\
\bottomrule
\end{tabular}
\end{center}
>>> plot.Boxplot(Y, A, title="My Boxplot", ylabel="value", corr='Hochberg')

Exporting Data

Dataset objects have different Dataset.save() methods for saving in various formats. Iterators (such as Var and Factor) can be exported using the save.txt() function.
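
For example (a minimal sketch; file names are illustrative):

>>> ds.save_txt('data.txt')
>>> save.txt(A, dest='A.txt')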

Changes

New in 0.28

  • Transition to Python 3.6
  • API changes:
    • testnd.anova: The match parameter is now determined automatically and does not need to be specified anymore in most cases.
    • testnd.ttest_1samp.diff renamed to testnd.ttest_1samp.difference.
    • plot.Histogram: following matplotlib, the normed parameter was renamed to density.
    • Previously capitalized argument and attribute names Y, X and Xax are now lowercase.
    • Topomap-plot argument order changed to provide consistency between different plots.
  • NDVar and Var support for round(x)
  • MneExperiment pipeline:
    • Independent measures t-test

New in 0.26

  • API changes:
    • A new global configure() function replaces module-level configuration functions.
    • Dataset: when a one-dimensional array is assigned to an unused key, the array is now automatically converted to a Var object.
    • SourceSpace.vertno has been renamed to SourceSpace.vertices.
  • Plotting:
    • The new name argument allows setting the window title without adding a title to the figure.
    • Plots that represent time have a new method to synchronize the time axis on multiple plots: link_time_axis().
    • Plot source space time series: plot.brain.butterfly()
  • ANOVAs now support mixed models with between- and within-subjects factors (see examples at test.anova()).
  • load.fiff: when generating epochs from raw data, a new tstop argument allows specifying the time interval exclusive of the last sample.

New in 0.25

  • Installation with conda (see Installing) and $ eelbrain launch script (see Getting Started).
  • API:
    • NDVar objects now inherit names through operations.
    • Assignment to a Dataset overwrites variable .name attributes, unless the Dataset key is a pythonified version of .name.
  • GUI/plotting:
    • When using IPython 5 or later, GUI start and stop are now automatic. It is possible to revert to the old behavior with plot.configure().
    • There are new hotkeys for most plots (see the individual plots’ help for details).
    • Plots automatically rescale when the window is resized.
  • MneExperiment:
    • A new MneExperiment.sessions attribute replaces defaults['experiment'], with support for multiple sessions in one experiment (see Setting up the file structure).
    • The MneExperiment.epochs parameter sel_epoch has been removed, use base instead.
    • The setting raw='clm' has been renamed to raw='raw'.
    • Custom preprocessing pipelines (see MneExperiment.raw).
    • The model parameter for ANOVA tests is now optional (see MneExperiment.tests).
  • Reverse correlation using boosting().
  • Loading and saving *.wav files (load.wav() and save.wav()).

New in 0.24

  • API:
    • MneExperiment: For data from the NYU New York system converted with mne < 0.13, the MneExperiment.meg_system attribute needs to be set to "KIT-157" to distinguish it from data collected with the KIT UMD system.
    • masked_parameter_map() method of cluster-based test results: use of pmin=None is deprecated, use pmin=1 instead.
  • New test: test.TTestRel.
  • MneExperiment.make_report_rois() includes corrected p-values in reports for tests in more than one ROI
  • MneExperiment.make_rej() now has a decim parameter to improve display performance.
  • MneExperiment: BEM-solution files are now created dynamically with mne and are not cached any more. This can lead to small changes in results due to improved numerical precision. Delete old files to free up space with mne_experiment.rm('bem-sol-file', subject='*').
  • New MneExperiment.make_report_coreg() method.
  • New MneExperiment: analysis parameter connectivity
  • plot.TopoButterfly: press Shift-T for a large topo-map with sensor names.

New in 0.22

  • Epoch Rejection GUI:
    • New “Tools” menu.
    • New “Info” tool to show summary info on the rejection.
    • New “Find Bad Channels” tool to automatically find bad channels.
    • Set marked channels by clicking on topo-map.
    • Faster page redraw.
  • plot.Barplot and plot.Boxplot: new cells argument to customize the order of bars/boxes.
  • MneExperiment: new method MneExperiment.show_rej_info().
  • NDVar: new method NDVar.label_clusters().
  • plot.configure(): option to revert to wxPython backend for plot.brain.

New in 0.21

  • MneExperiment:
    • New epoch parameters: trigger_shift and vars (see MneExperiment.epochs).
    • load_selected_events(): new vardef parameter to load variables from a test definition.
    • Log files stored in the root directory.
    • Parcellations (MneExperiment.parcs) based on combinations can also include split commands.
  • New Factor method: Factor.floodfill().
  • Model methods: get_table() replaced with as_table(), new head() and tail().
  • API: .sort_idx methods renamed to .sort_index.

New in 0.20

  • MneExperiment: new analysis parameter select_clusters='all' to keep all clusters in cluster tests (see select_clusters (cluster selection criteria)).
  • Use testnd.configure() to limit the number of CPUs that are used in permutation cluster tests.

New in 0.19

  • Two-stage tests (see MneExperiment.tests).
  • Safer cache-handling. See note at Analysis.
  • Dataset.head() and Dataset.tail() methods for more efficiently inspecting partial Datasets.
  • The default format for plots in reports is now SVG since they are displayed correctly in Safari 9.0. Use plot.configure() to change the default format.
  • API: Improvements in plot.Topomap with concomitant changes in the constructor signature. For examples see the meg/topographic plotting example.
  • API: plot.ColorList has a new argument called labels.
  • API: testnd.anova attribute probability_maps renamed to p analogous to other testnd results.
  • Rejection-GUI: The option to plot the data range only has been removed.

New in 0.18

  • API: The first argument for MneExperiment.plot_annot() is now parc.
  • API: the fill_in_missing parameter to combine() has been deprecated and replaced with a new parameter called incomplete.
  • API: Several plotting functions have a new xticklabels parameter to suppress x-axis tick labels (e.g. plot.UTSStat).
  • The objects returned by plot.brain plotting functions now contain a plot_colorbar() method to create a corresponding plot.ColorBar plot.
  • New function choose() to combine data in different NDVars on a case by case basis.
  • Rejection-GUI (gui.select_epochs()): Press Shift-i when hovering over an epoch to enter channels for interpolation manually.
  • MneExperiment.show_file_status() now shows the last modification date of each file.
  • Under OS X 10.8 and newer, running code under a notifier statement now automatically prevents the computer from going to sleep.

New in 0.17

  • MneExperiment.brain_plot_defaults can be used to customize PySurfer plots in movies and reports.
  • MneExperiment.trigger_shift can now also be a dictionary mapping subject name to shift value.
  • The rejection GUI now allows selecting individual channels for interpolation using the ‘i’ key.
  • Parcellations based on combinations of existing labels, as well as parcellations based on regions around points specified in MNI coordinates can now be defined in MneExperiment.parcs.
  • Source space NDVar can be indexed with lists of region names, e.g., ndvar.sub(source=['cuneus-lh', 'lingual-lh']).
  • API: plot.brain.bin_table() function signature changed slightly (more parameters, new hemi parameter inserted to match other functions’ argument order).
  • API: combine() now raises KeyError when trying to combine Dataset objects with unequal keys; set fill_in_missing=True to reproduce previous behavior.
  • API: Previously, Var.as_factor() mapped unspecified values to str(value). Now they are mapped to ''. This also applies to MneExperiment.variables entries with unspecified values.

New in 0.16

  • New function for plotting a legend for annot-files: plot.brain.annot_legend() (automatically used in reports).
  • Epoch definitions in MneExperiment.epochs can now include a 'base' parameter, which will copy the given “base” epoch and modify it with the current definition.
  • MneExperiment.make_mov_ttest() and MneExperiment.make_mov_ga_dspm() are fixed but require PySurfer 0.6.
  • New function: table.melt_ndvar().
  • API: plot.brain function signatures changed slightly to accommodate more layout-related arguments.
  • API: use Brain.image() instead of plot.brain.image().

New in 0.15

  • The Eelbrain package on PyPI is now compiled with Anaconda. This means that the package can now be installed into an Anaconda distribution with pip, whereas easy_install has to be used for the Canopy distribution.
  • GUI gui.select_epochs(): Set marked channels through menu (View > Mark Channels)
  • Datasets can be saved as tables in RTF format (Dataset.save_rtf()).
  • API plot.Timeplot: the default spread indicator changed to SEM, and there is a new argument for timelabels.
  • API: test.anova() is now a function with a slightly changed signature. The old class has been renamed to test.ANOVA.
  • API: test.oneway() was removed. Use test.anova().
  • API: the default value of the plot.Timeplot parameter bottom changed from 0 to None (determined by the data).
  • API: Factor.relabel() renamed to Factor.update_labels().
  • Plotting: New option for the figure legend 'draggable' (drag the legend with the mouse pointer).

New in 0.14

  • API: the plot.Topomap argument sensors changed to sensorlabels.
  • GUI: The python/Quit Eelbrain menu command now closes all windows to ensure that unsaved documents are handled properly. In order to yield to the terminal without closing windows, use the Go/Yield to Terminal command (Command-Alt-Q).
  • testnd.t_contrast_rel: support for unary operation abs.

New in 0.13

  • The gui.select_epochs() GUI can now also be used to set bad channels. MneExperiment subclasses will combine bad channel information from rejection files with bad channel information from bad channel files. Note that while bad channel files set bad channels for a given raw file globally, rejection files set bad channels only for the given epoch.
  • Factor objects can now remember a custom cell order which determines the order in tables and plots.
  • The Var.as_factor() method can now assign all unmentioned codes to a default value.
  • MneExperiment:
    • API: Subclasses should remove the subject and experiment parameters from MneExperiment.label_events().
    • API: MneExperiment can now be imported directly from eelbrain.
    • API: The MneExperiment._defaults attribute should be renamed to MneExperiment.defaults.
    • A draft for a guide at The MneExperiment Pipeline.
    • Cached files are now saved in a separate folder at root/eelbrain-cache. The cache can be cleared using MneExperiment.clear_cache(). To preserve cached test results, move the root/test folder into the root/eelbrain-cache folder.

New in 0.12

  • API: Dataset construction changed, allows setting the number of cases in the Dataset.
  • API: plot.SensorMap2d was renamed to plot.SensorMap.
  • MneExperiment:
    • API: The default number of samples for reports is now 10,000.
    • New epoch parameter 'n_cases': raise an error if an epoch definition does not yield expected number of trials.
    • A custom baseline period for epochs can now be specified as a parameter in the epoch definition (e.g., 'baseline': (-0.2, -0.1)). When loading data, specifying baseline=True uses the epoch’s custom baseline.

New in 0.11

  • MneExperiment:
    • Change in the way the covariance matrix is defined: The epoch for the covariance matrix should be specified in MneExperiment.epochs['cov']. The regularization is no longer part of set_inv(), but is instead set with MneExperiment.set(cov='reg') or MneExperiment.set(cov='noreg').
    • New option cov='bestreg' automatically selects the regularization parameter for each subject.
  • Var.as_factor() allows more efficient labeling when multiple values share the same label.
  • API: plot.configure_backend() was renamed to plot.configure().

New in 0.10

  • Tools for generating colors for categories (see Plotting).
  • Plots now all largely respect matplotlib rc-parameters (see Customizing Matplotlib).
  • Fixed an issue in the testnd module that could affect permutation based p-values when multiprocessing was used.

New in 0.9

  • Factor API change: The rep argument was renamed to repeat.
  • T-values for regression coefficients through NDVar.ols_t().
  • MneExperiment: subject name patterns and eog_sns are now handled automatically.
  • UTSStat and Barplot plots can use pooled error for variability estimates (on by default for related measures designs, can be turned off using the pool_error argument).
    • API: for consistency, the argument to specify the kind of error to plot changed to error in both plots.

New in 0.8

  • A new GUI application controls plots as well as the epoch selection GUI (see notes in the reference sections on Plotting and GUIs).
  • Randomization/Monte Carlo tests now seed the random state to make results replicable.

New in 0.5

  • The eelbrain.lab and eelbrain.eellab modules are deprecated. Everything can now be imported from eelbrain directly.

New in 0.3

  • Optimized clustering for cluster permutation tests.

New in 0.2

  • gui.SelectEpochs: the epoch rejection GUI has a new “GA” button to plot the grand average of all accepted trials.
  • Cluster permutation tests in testnd use multiple cores; to disable multiprocessing set eelbrain._stats.testnd.multiprocessing = False.

New in 0.1.7

  • gui.SelectEpochs can now be initialized with a single mne.Epochs instance (data needs to be preloaded).
  • Parameters that take NDVar objects now also accept mne.Epochs and mne.fiff.Evoked objects.

New in 0.1.5

  • plot.topo.TopoButterfly plot: new keyboard commands (t, left arrow, right arrow).

Publications using Eelbrain

Ordered alphabetically according to authors’ last name:

[1] Esti Blanco-Elorrieta and Liina Pylkkänen. Bilingual language switching in the lab vs. in the wild: the spatio-temporal dynamics of adaptive language control. Journal of Neuroscience, pages 0553–17, 2017. URL: http://www.jneurosci.org/content/early/2017/08/16/JNEUROSCI.0553-17.2017.abstract.
[2] Christian Brodbeck, Laura Gwilliams, and Liina Pylkkänen. EEG can track the time course of successful reference resolution in small visual worlds. Frontiers in psychology, 6:1787, 2015. URL: https://www.frontiersin.org/articles/10.3389/fpsyg.2015.01787.
[3] Christian Brodbeck, Laura Gwilliams, and Liina Pylkkänen. Language in context: MEG evidence for modality general and specific responses to reference resolution. eNeuro, pages ENEURO–0145, 2016. URL: http://www.eneuro.org/content/early/2016/12/15/ENEURO.0145-16.2016.abstract.
[4] Christian Brodbeck, L Elliot Hong, and Jonathan Z Simon. Rapid transformation from auditory to linguistic representations of continuous speech. Current Biology, 28(24):3976–3983, 2018. URL: https://www.sciencedirect.com/science/article/pii/S096098221831409X.
[5] Christian Brodbeck, Alessandro Presacco, Samira Anderson, and Jonathan Z Simon. Over-representation of speech in older adults originates from early response in higher order auditory cortex. Acta Acustica united with Acustica, 104(5):774–777, 2018. URL: https://www.ingentaconnect.com/content/dav/aaua/2018/00000104/00000005/art00013.
[6] Christian Brodbeck, Alessandro Presacco, and Jonathan Z Simon. Neural source dynamics of brain responses to continuous stimuli: speech processing from acoustics to comprehension. NeuroImage, 172:162–174, 2018. URL: https://www.sciencedirect.com/science/article/pii/S1053811918300429.
[7] Christian Brodbeck and Liina Pylkkänen. Language in context: characterizing the comprehension of referential expressions with MEG. NeuroImage, 147:447–460, 2017. URL: https://www.sciencedirect.com/science/article/pii/S1053811916307169.
[8] Teon L Brooks and Daniela Cid de Garcia. Evidence for morphological composition in compound words using MEG. Frontiers in human neuroscience, 9:215, 2015. URL: https://www.frontiersin.org/articles/10.3389/fnhum.2015.00215.
[9] Julien Dirani and Liina Pylkkanen. Lexical access in comprehension vs. production: spatiotemporal localization of semantic facilitation and interference. bioRxiv, pages 449157, 2018. URL: https://www.biorxiv.org/content/early/2018/10/23/449157.1.abstract.
[10] Graham Flick, Yohei Oseki, Amanda R Kaczmarek, Meera Al Kaabi, Alec Marantz, and Liina Pylkkänen. Building words and phrases in the left temporal lobe. Cortex, 2018. URL: https://www.sciencedirect.com/science/article/pii/S0010945218301904.
[11] Graham Flick and Liina Pylkkänen. Isolating syntax in natural language: MEG evidence for an early contribution of left posterior temporal cortex. BioRxiv, pages 439158, 2018. URL: https://www.biorxiv.org/content/early/2018/10/09/439158.abstract.
[12] Phoebe Gaston and Alec Marantz. The time course of contextual cohort effects in auditory processing of category-ambiguous words: MEG evidence for a single “clash” as noun or verb. Language, Cognition and Neuroscience, 33(4):402–423, 2018. URL: https://www.tandfonline.com/doi/abs/10.1080/23273798.2017.1395466.
[13] Laura Gwilliams, GA Lewis, and Alec Marantz. Functional characterisation of letter-specific responses in time, space and current polarity using magnetoencephalography. NeuroImage, 132:320–333, 2016. URL: https://www.sciencedirect.com/science/article/pii/S105381191600166X.
[14] Laura Gwilliams and Alec Marantz. Morphological representations are extrapolated from morpho-syntactic rules. Neuropsychologia, 114:77–87, 2018. URL: https://www.sciencedirect.com/science/article/pii/S0028393218301568.
[15] William Matchin, Christian Brodbeck, Christopher Hammerly, and Ellen Lau. The temporal dynamics of structure and content in sentence comprehension: evidence from fMRI-constrained MEG. Human brain mapping, 40(2):663–678, 2019. URL: https://onlinelibrary.wiley.com/doi/abs/10.1002/hbm.24403.
[16] Kyriaki Neophytou, Christina Manouilidou, Linnaea Stockall, and Alec Marantz. Syntactic and semantic restrictions on morphological recomposition: MEG evidence from greek. Brain and language, 183:11–20, 2018. URL: https://www.sciencedirect.com/science/article/pii/S0093934X1730130X.
[17] Krishna C Puvvada, Marisel Villafane-Delgado, Christian Brodbeck, and Jonathan Z Simon. Neural coding of noisy and reverberant speech in human auditory cortex. bioRxiv, pages 229153, 2017. URL: https://www.biorxiv.org/content/early/2017/12/04/229153.abstract.
[18] Victoria Sharpe, Samir Reddigari, Liina Pylkkänen, and Alec Marantz. Automatic access to verb continuations on the lexical and categorical levels: evidence from MEG. Language, Cognition and Neuroscience, 34(2):137–150, 2019. URL: https://www.tandfonline.com/doi/abs/10.1080/23273798.2018.1531139.
[19] Eline Verschueren, Jonas Vanthornhout, and Tom Francart. Semantic context enhances neural envelope tracking. bioRxiv, pages 421727, 2018. URL: https://www.biorxiv.org/content/early/2018/09/19/421727.abstract.
[20] Adina Williams, Samir Reddigari, and Liina Pylkkänen. Early sensitivity of left perisylvian cortex to relationality in nouns and verbs. Neuropsychologia, 100:131–143, 2017. URL: https://www.sciencedirect.com/science/article/pii/S0028393217301586.
[21] Linmin Zhang and Liina Pylkkänen. Composing lexical versus functional adjectives: evidence for uniformity in the left temporal lobe. Psychonomic bulletin & review, pages 1–14, 2018. URL: https://link.springer.com/content/pdf/10.3758/s13423-018-1469-y.pdf.
[22] Linmin Zhang and Liina Pylkkänen. Semantic composition of sentences word by word: MEG evidence for shared processing of conceptual and logical elements. Neuropsychologia, 119:392–404, 2018. URL: https://www.sciencedirect.com/science/article/pii/S0028393218305037.

Development

Eelbrain is actively developed and maintained by Christian Brodbeck at the Computational Sensorimotor Systems Lab at the University of Maryland, College Park.

Eelbrain is fully open-source and new contributions are welcome on GitHub. Suggestions can be raised as issues, and modifications can be made as pull requests into the master branch.

The Development Version

The Eelbrain source code is hosted on GitHub. Development takes place on the master branch, while release versions are maintained on branches called r/0.26 etc. For further information on working with GitHub see GitHub’s instructions.

Installing the development version requires the presence of a compiler. On macOS, make sure Xcode is installed (open it once to accept the license agreement). Windows will indicate any needed files when the install command is run.

After cloning the repository, the development version can be installed by running, from the Eelbrain repository’s root directory:

$ python setup.py develop

On macOS, the $ eelbrain shell script that runs IPython with the Framework build is not installed properly by setup.py; to fix this, run:

$ ./fix-bin

In Python, you can make sure that you are working with the development version:

>>> import eelbrain
>>> eelbrain.__version__
'dev'

To switch back to the release version use $ pip uninstall eelbrain.

Building with Conda

To build Eelbrain with conda, make sure that conda-build is installed. Then, from Eelbrain/conda run:

$ conda build eelbrain

After building successfully, the build can be installed with:

$ conda install --use-local eelbrain

Contributing

Style guides:

Useful tools:

Testing

Eelbrain uses nose for testing. Tests for individual modules are included in folders called tests, usually on the same level as the module. To run all tests, run $ make test from the Eelbrain project directory. On macOS, nosetests needs to run with the Framework build of Python; if you get a corresponding error, run $ ./fix-bin nosetests from the Eelbrain repository root.

Reference

Data Classes

Primary data classes:

Dataset(*args, **kwargs) Stores multiple variables pertaining to a common set of measurement cases
Factor(x[, name, random, repeat, tile, …]) Container for categorial data.
Var(x[, name, repeat, tile, info]) Container for scalar data.
NDVar(x, dims[, info, name]) Container for n-dimensional data.
Datalist([items, name, fmt]) list subclass for including lists in a Dataset.

Model classes (not usually initialized by themselves but through operations on primary data-objects):

Interaction(base) Represents an Interaction effect.
Model(x) A list of effects.

NDVar dimensions (not usually initialized by themselves but through load functions):

Case(n[, connectivity]) Case dimension
Categorial(name, values[, connectivity]) Simple categorial dimension
Scalar(name, values[, unit, tick_format, …]) Scalar dimension
Sensor(locs[, names, sysname, proj2d, …]) Dimension class for representing sensor information
SourceSpace(vertices[, subject, src, …]) MNE surface-based source space
VolumeSourceSpace(vertices[, subject, src, …]) MNE volume source space
Space(directions[, name]) Represent multiple directions in space
UTS(tmin, tstep, nsamples) Dimension object for representing uniform time series

File I/O

Eelbrain objects can be pickled. Eelbrain’s own pickle I/O functions provide backwards compatibility for Eelbrain objects (although files saved in Python 3 can only be opened in Python 2 if they are saved with protocol<=2):

save.pickle(obj[, dest, protocol]) Pickle a Python object.
load.unpickle([file_path]) Load pickled Python objects from a file.
load.arrow([file_path]) Load object serialized with pyarrow.
save.arrow(obj[, dest]) Save a Python object with pyarrow.
load.update_subjects_dir(obj, subjects_dir) Update NDVar SourceSpace.subjects_dir attributes
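
For example, a Dataset can be saved and restored through pickling (the file path is illustrative):

>>> save.pickle(ds, 'data.pickle')
>>> ds = load.unpickle('data.pickle')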

Import

Functions and modules for loading specific file formats as Eelbrain object:

load.wav([filename, name]) Load a wav file as NDVar
load.tsv(path[, names, types, …]) Load a Dataset from a text file.
load.eyelink Tools for loading data from eyelink edf files.
load.fiff Tools for importing data through mne.
load.txt Tools for loading data from text files.
load.besa Tools for loading data from the BESA-MN pipeline.

Export

A Dataset with only univariate data can be saved as text using the save_txt() method. Additional export functions:

save.txt(iterator[, fmt, delim, dest]) Write any object that supports iteration to a text file.
save.wav(ndvar[, filename, toint]) Save an NDVar as wav file

Sorting and Reordering

align(d1, d2[, i1, i2, out]) Align two data-objects based on index variables
align1(d, to[, by, out]) Align a data object to an index variable
Celltable(y[, x, match, sub, cat, ds, …]) Divide y into cells defined by x.
choose(choice, sources[, name]) Combine data-objects picking from a different object for each case
combine(items[, name, check_dims, incomplete]) Combine a list of items of the same type into one item.
shuffled_index(n[, cells]) Return an index to shuffle a data-object

NDVar Operations

See also NDVar methods.

Butterworth(low, high, order[, sfreq]) Butterworth filter
complete_source_space(ndvar[, fill, mask]) Fill in missing vertices on an NDVar with a partial source space
concatenate(ndvars[, dim, name, tmin, info, …]) Concatenate multiple NDVars
convolve(h, x[, ds]) Convolve h and x along the time dimension
correlation_coefficient(x, y[, dim, name]) Correlation between two NDVars along a specific dimension
cross_correlation(in1, in2[, name]) Cross-correlation between two NDVars along the time axis
cwt_morlet(y, freqs[, use_fft, n_cycles, …]) Time frequency decomposition with Morlet wavelets (mne-python)
dss(ndvar) Denoising source separation (DSS)
filter_data(ndvar, l_freq, h_freq[, …]) Apply mne.filter.filter_data() to an NDVar
frequency_response(b[, frequencies]) Frequency response for a FIR filter
label_operator(labels[, operation, exclude, …]) Convert labeled NDVar into a matrix operation to extract label values
labels_from_clusters(clusters[, names]) Create Labels from source space clusters
morph_source_space(ndvar[, subject_to, …]) Morph source estimate to a different MRI subject
neighbor_correlation(x[, dim, obs, name]) Calculate Neighbor correlation
psd_welch(ndvar[, fmin, fmax, n_fft, …]) Power spectral density with Welch’s method
resample(ndvar, sfreq[, npad, window, pad, name]) Resample an NDVar along the time dimension
segment(continuous, times, tstart, tstop[, …]) Segment a continuous NDVar
set_parc(ndvar, parc[, dim]) Change the parcellation of an NDVar with SourceSpace dimension
set_tmin(ndvar[, tmin]) Change the time axis of an NDVar
xhemi(ndvar[, mask, hemi, parc]) Project data from both hemispheres to hemi of fsaverage_sym

Reverse Correlation

boosting(y, x, tstart, tstop[, scale_data, …]) Estimate a filter with boosting
BoostingResult(y, x, tstart, tstop, …[, …]) Result from boosting a temporal response function

Tables

Manipulate data tables and compile information about data objects such as cell frequencies:

table.cast_to_ndvar(data, dim_values, match) Create an NDVar by converting a data column to a dimension
table.difference(y, x, c1, c0, match[, by, …]) Subtract data in one cell from another
table.frequencies(y[, x, of, sub, ds]) Calculate frequency of occurrence of the categories in y
table.melt(name, cells, cell_var_name, ds[, …]) Restructure a Dataset such that a measured variable is in a single column
table.melt_ndvar(ndvar[, dim, cells, ds, …]) Transform data to long format by converting an NDVar dimension into a variable
table.repmeas(y, x, match[, sub, ds]) Create a repeated-measures table
table.stats(y, row[, col, match, sub, fmt, …]) Make a table with statistics

Statistics

Univariate statistical tests:

test.Correlation(y, x[, sub, ds]) Pearson product moment correlation between y and x
test.TTest1Sample(y[, match, sub, ds, tail]) 1-sample t-test
test.TTestInd(y, x[, c1, c0, match, sub, …]) Independent-samples t-test
test.TTestRel(y, x[, c1, c0, match, sub, …]) Related-measures t-test
test.anova(y, x[, sub, ds, title, caption]) Univariate ANOVA
test.pairwise(y, x[, match, sub, ds, par, …]) Pairwise comparison table
test.ttest(y[, x, against, match, sub, …]) T-tests for one or more samples
test.correlations(y, x[, cat, sub, ds, asds]) Correlation with one or more predictors
test.pairwise_correlations(xs[, sub, ds, labels]) Pairwise correlation table
test.lilliefors(data[, formatted]) Lilliefors’ test for normal distribution

Mass-Univariate Statistics

testnd.ttest_1samp(y[, popmean, match, sub, …]) Mass-univariate one sample t-test
testnd.ttest_rel(y, x[, c1, c0, match, sub, …]) Mass-univariate related samples t-test
testnd.ttest_ind(y, x[, c1, c0, sub, …]) Mass-univariate independent samples t-test
testnd.t_contrast_rel(y, x, contrast[, …]) Mass-univariate contrast based on t-values
testnd.anova(y, x[, sub, ds, samples, pmin, …]) Mass-univariate ANOVA
testnd.corr(y, x[, norm, sub, ds, samples, …]) Mass-univariate correlation
testnd.Vector(y[, match, sub, ds, samples, …]) Test a vector field for vectors with non-random direction
testnd.VectorDifferenceRelated(y, x[, c1, …]) Test difference between two vector fields for non-random direction

By default the tests in this module produce maps of statistical parameters along with maps of p-values uncorrected for multiple comparisons. Using different parameters, different methods for multiple comparison correction can be applied (for more details and options see the documentation for individual tests):

1: permutation for maximum statistic (samples=n)
Look for the maximum value of the test statistic in n permutations and calculate a p-value for each data point based on this distribution of maximum statistics.
2: Threshold-based clusters (samples=n, pmin=p)
Find clusters of data points where the original statistic exceeds a value corresponding to an uncorrected p-value of p. For each cluster, calculate the sum of the statistic values that are part of the cluster. Do the same in n permutations of the original data and retain for each permutation the value of the largest cluster. Evaluate all cluster values in the original data against the distribution of maximum cluster values (see [1]).
3: Threshold-free cluster enhancement (samples=n, tfce=True)
Similar to (1), but each statistical parameter map is first processed with the cluster-enhancement algorithm (see [2]). This is the most computationally intensive option.
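
As an illustration, a threshold-based cluster test might be invoked as follows (a sketch assuming a Dataset ds with a source estimate 'src', a condition Factor and a subject identifier; all names are illustrative):

>>> res = testnd.ttest_rel('src', 'condition', 'c1', 'c0', match='subject',
...                        ds=ds, samples=1000, pmin=0.05)
>>> res.clusters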

Two-stage tests

Two-stage tests proceed by estimating parameters for a fixed effects model for each subject, and then testing hypotheses on these parameter estimates on the group level. Two-stage tests are implemented by fitting an LM for each subject, and then combining them in an LMGroup to retrieve coefficients for group level statistics, as sketched after the table below.

testnd.LM(y, model[, ds, coding, subject, sub]) Fixed effects linear model
testnd.LMGroup(lms) Group level analysis for linear model LM objects
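
A minimal sketch of this workflow (assuming a dictionary of per-subject Datasets, each containing a source estimate 'src' and a continuous predictor 'predictor'; all names are illustrative):

>>> lms = [testnd.LM('src', 'predictor', ds=ds_subject, subject=subject)
...        for subject, ds_subject in subject_datasets.items()]
>>> lm_group = testnd.LMGroup(lms)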

References

[1] Maris, E., & Oostenveld, R. (2007). Nonparametric statistical testing of EEG- and MEG-data. Journal of Neuroscience Methods, 164(1), 177-190. 10.1016/j.jneumeth.2007.03.024
[2] Smith, S. M., & Nichols, T. E. (2009). Threshold-Free Cluster Enhancement: Addressing Problems of Smoothing, Threshold Dependence and Localisation in Cluster Inference. NeuroImage, 44(1), 83-98. 10.1016/j.neuroimage.2008.03.061

Plotting

Plot univariate data (Var objects):

plot.Barplot(y[, x, match, sub, cells, …]) Barplot for a continuous variable
plot.Boxplot(y[, x, match, sub, cells, …]) Boxplot for a continuous variable
plot.PairwiseLegend([size, trend]) Legend for colors used in pairwise comparisons
plot.Correlation(y, x[, cat, sub, ds, c, …]) Plot the correlation between two variables
plot.Histogram(y[, x, match, sub, ds, …]) Histogram plots with tests of normality
plot.Regression(y, x[, cat, match, sub, ds, …]) Plot the regression of y on x
plot.Timeplot(y, categories, time[, match, …]) Plot a variable over time

Color tools for plotting:

plot.colors_for_categorial(x[, hue_start, cmap]) Automatically select colors for a categorial model
plot.colors_for_oneway(cells[, hue_start, …]) Define colors for a single factor design
plot.colors_for_twoway(x1_cells, x2_cells[, …]) Define cell colors for a two-way design
plot.ColorBar(cmap, vmin[, vmax, label, …]) A color-bar for a matplotlib color-map
plot.ColorGrid(row_cells, column_cells, colors) Plot colors for a two-way design in a grid
plot.ColorList(colors[, cells, labels, h]) Plot colors with labels

Plot uniform time-series:

plot.LineStack(y[, x, sub, ds, offset, …]) Stack multiple lines vertically
plot.UTS(y[, xax, axtitle, ds, sub, xlabel, …]) Value by time plot for UTS data
plot.UTSClusters(res[, pmax, ptrend, …]) Plot permutation cluster test results
plot.UTSStat(y[, x, xax, match, sub, ds, …]) Plot statistics for a one-dimensional NDVar

Plot multidimensional uniform time series:

plot.Array(y[, xax, xlabel, ylabel, …]) Plot UTS data to a rectangular grid.
plot.Butterfly(y[, xax, sensors, axtitle, …]) Butterfly plot for NDVars

Plot topographic maps of sensor space data:

plot.TopoArray(y[, xax, ds, sub, vmax, …]) Channel by sample plots with topomaps for individual time points
plot.TopoButterfly(y[, xax, ds, sub, vmax, …]) Butterfly plot with corresponding topomaps
plot.Topomap(y[, xax, ds, sub, vmax, vmin, …]) Plot individual topographies
plot.TopomapBins(y[, xax, ds, sub, …]) Topomaps in time-bins

Plot sensor layout maps:

plot.SensorMap(sensors[, labels, proj, …]) Plot sensor positions in 2 dimensions
plot.SensorMaps(sensors[, select, proj, …]) Multiple views on a sensor layout.

xax parameter

Many plots have an xax parameter which is used to sort the data in y into different categories and plot them on separate axes. xax can be specified through categorial data, or as a dimension in y.

If a categorial data object is specified for xax, y is split into the categories in xax, and for every cell in xax a separate subplot is shown. For example, while

>>> plot.Butterfly('meg', ds=ds)

will create a single Butterfly plot of the average response,

>>> plot.Butterfly('meg', 'subject', ds=ds)

where 'subject' is the xax parameter, will create a separate subplot for every subject with its average response.

A dimension on y can be specified through a string starting with '.'. For example, to plot each case of meg separately, use:

>>> plot.Butterfly('meg', '.case', ds=ds)

General layout parameters

Most plots share certain layout keyword arguments. By default, all of these parameters are determined automatically, but individual values can be specified manually by supplying them as keyword arguments.

h, w : scalar
Height and width of the figure. Use a number ≤ 0 to define the size relative to the screen (e.g., w=0 to use the full screen width).
axh, axw : scalar
Height and width of the axes.
nrow, ncol : None | int
Limit the number of rows/columns. If neither is specified, a square layout is produced.
ax_aspect : scalar
Width / height aspect of the axes.
name : str
Window title (not displayed on the figure itself).
title : str
Figure title (displayed on the figure).

Plots that accept these parameters can be identified by **layout in their function signature.
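
For example, layout parameters can be passed to any such plot (a sketch using the random demonstration dataset; the values are illustrative):

>>> ds = datasets.get_uts()
>>> p = plot.UTSStat('uts', 'A', ds=ds, w=10, h=3, title="Means by A")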

GUI Interaction

By default, new plots are automatically shown and, if the Python interpreter is in interactive mode, the GUI main loop is started. This behavior can be controlled with two arguments when constructing a plot:

show : bool
Show the figure in the GUI (default True). Use False for creating figures and saving them without displaying them on the screen.
run : bool
Run the Eelbrain GUI app (default is True for interactive plotting and False in scripts).

The behavior can also be changed globally using configure().

By default, Eelbrain plots open in windows with enhanced GUI features, such as copying a figure to the OS clipboard. To plot figures in bare matplotlib windows instead, call eelbrain.configure(frame=False).
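
For example, to create a figure and save it without showing it on screen (a sketch; the file name is illustrative):

>>> p = plot.UTSStat('uts', 'A', ds=ds, show=False)
>>> p.save('means.pdf')
>>> p.close()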

Plotting Brains

The plot.brain module contains specialized functions to plot NDVar objects containing source space data. For this it uses a subclass of PySurfer’s surfer.Brain class. The functions below allow quick plotting. More specific control over the plots can be achieved through the Brain object that is returned.

plot.brain.brain(src[, cmap, vmin, vmax, …]) Create a Brain object with a data layer
plot.brain.butterfly(y[, cmap, vmin, vmax, …]) Shortcut for a Butterfly-plot with a time-linked brain plot
plot.brain.cluster(cluster[, vmax]) Plot a spatio-temporal cluster
plot.brain.dspm(src[, fmin, fmax, fmid]) Plot a source estimate with coloring for dSPM values (bipolar).
plot.brain.p_map(p_map[, param_map, p0, p1, …]) Plot a map of p-values in source space.
plot.brain.annot(annot[, subject, surf, …]) Plot the parcellation in an annotation file
plot.brain.annot_legend(lh, rh, *args, …) Plot a legend for a freesurfer parcellation
Brain(subject, hemi[, surf, title, cortex, …]) PySurfer Brain subclass returned by plot.brain functions
plot.brain.SequencePlotter() Grid of anatomical images in one figure
plot.GlassBrain(ndvar[, cmap, vmin, vmax, …]) Plot 2d projections of a brain volume
plot.GlassBrain.butterfly(y[, cmap, vmin, …]) Shortcut for a butterfly-plot with a time-linked glassbrain plot

In order to make custom plots, a Brain figure without any data added can be created with plot.brain.brain(ndvar.source, mask=False).

Surface options for plotting data on fsaverage:

  • white
  • smoothwm
  • inflated_pre
  • inflated
  • inflated_avg
  • sphere

GUIs

Tools with a graphical user interface (GUI):

gui.select_components(path, ds[, sysname, …]) GUI for selecting ICA-components
gui.select_epochs(ds[, data, accept, blink, …]) GUI for rejecting trials of MEG/EEG data

Controlling the GUI Application

Eelbrain uses a wxPython based application to create GUIs. This GUI appears as a separate application with its own Dock icon. The way that control of this GUI is managed depends on the environment from which it is invoked.

When Eelbrain plots are created from within IPython, the GUI is managed in the background and control returns immediately to the terminal. There might be cases in which this is not desired, for example when running scripts: after execution of a script finishes, the interpreter is terminated and all associated plots are closed. To avoid this, the command gui.run(block=True) can be inserted at the end of the script, which will keep all GUI elements open until the user quits the GUI application (see gui.run() below).

In interpreters other than IPython, input cannot be processed by the GUI and the interpreter shell at the same time. In that case, the GUI application is activated by default whenever a GUI is created in interactive mode (this can be avoided by passing run=False to any plotting function). While the application is processing user input, the shell cannot be used. In order to return to the shell, quit the application (the python/Quit Eelbrain menu command or Command-Q). In order to return to the terminal without closing all windows, use the alternative Go/Yield to Terminal command (Command-Alt-Q). To return to the application from the shell, run gui.run(). Beware that if you terminate the Python session from the terminal, the application is not given a chance to ensure that information in open windows is saved.

gui.run([block]) Hand over command to the GUI (quit the GUI to return to the terminal)

Formatted Text

The fmtxt submodule provides tools for exporting results. Most eelbrain functions and methods that print tables in fact return fmtxt objects, which can be exported in different formats, for example:

>>> ds = datasets.get_uv()
>>> type(ds.head())
eelbrain.fmtxt.Table

This means that the result can be exported as formatted text, for example:

>>> fmtxt.save_pdf(ds.head())

Available export methods:

fmtxt.copy_pdf(fmtext) Copy an FMText object to the clipboard as PDF.
fmtxt.copy_tex(fmtext) Copy an FMText object to the clipboard as tex code
fmtxt.save_html(fmtext[, path, …]) Save an FMText object in HTML format
fmtxt.save_pdf(fmtext[, path]) Save an FMText object as a pdf (requires LaTeX installation)
fmtxt.save_rtf(fmtext[, path]) Save an FMText object in Rich Text format
fmtxt.save_tex(fmtext[, path]) Save an FMText object as TeX code

MNE-Experiment

The MneExperiment class serves as a base class for analyzing MEG data (gradiometer only) with MNE:

MneExperiment([root, find_subjects]) Analyze an MEG experiment (gradiometer only) with MNE
ROITestResult(subjects, samples, …) Test results for temporal tests in one or more ROIs
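
A minimal subclass might define the experiment's session(s) and be instantiated with the data root (a sketch; the path and session name are illustrative):

>>> class Experiment(MneExperiment):
...     sessions = 'session1'
...
>>> e = Experiment('/path/to/data-root')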

See also

For the guide on working with the MneExperiment class see The MneExperiment Pipeline.

Datasets

Datasets for experimenting and testing:

datasets.get_loftus_masson_1994() Dataset used for illustration purposes by Loftus and Masson (1994)
datasets.get_mne_sample([tmin, tmax, …]) Load events and epochs from the MNE sample data
datasets.get_uts([utsnd, seed, nrm, vector3d]) Create a sample Dataset with 60 cases and random data.
datasets.get_uv([seed, nrm, vector]) Dataset with random univariate data
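
For example, to load and inspect the sample univariate dataset:

>>> ds = datasets.get_uv()
>>> print(ds.summary())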

Configuration

eelbrain.configure(n_workers=None, frame=None, autorun=None, show=None, format=None, figure_background=None, prompt_toolkit=None, animate=None, nice=None, tqdm=None)

Set basic configuration parameters for the current session

Parameters:
n_workers : bool | int

Number of worker processes to use in multiprocessing enabled computations. False to disable multiprocessing. True (default) to use as many processes as cores are available. Negative numbers to use all but n available CPUs.

frame : bool

Open figures in the Eelbrain application. This provides additional functionality such as copying a figure to the clipboard. If False, open figures as normal matplotlib figures.

autorun : bool

When a figure is created, automatically enter the GUI mainloop. By default, this is True when the figure is created in interactive mode but False when the figure is created in a script (in order to run the GUI at a specific point in a script, call eelbrain.gui.run()).

show : bool

Show plots on the screen when they’re created (disable this to create plots and save them without showing them on the screen).

format : str

Default format for plots (for example “png”, “svg”, …).

figure_background : bool | matplotlib color

While matplotlib uses a gray figure background by default, Eelbrain uses white. Set this parameter to False to use the default from matplotlib.rcParams, or set it to a valid matplotlib color value to use an arbitrary color. True to revert to the default white.

prompt_toolkit : bool

In IPython 5, prompt_toolkit allows running the GUI main loop in parallel to the Terminal, meaning that the IPython terminal and GUI windows can be used without explicitly switching between Terminal and GUI. This feature is enabled by default, but can be disabled by setting prompt_toolkit=False.

animate : bool

Animate plot navigation (default True).

nice : int [-20, 19]

Scheduling priority for multiprocessing (larger number yields more to other processes; negative numbers require root privileges).

tqdm : bool

Enable or disable tqdm progress bars.
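
For example, to restrict computations to four worker processes and default to SVG figures that are not shown on screen (a minimal illustration):

>>> eelbrain.configure(n_workers=4, format='svg', show=False)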

Examples

Note

Examples frequently use the print() function. In interactive work, this function can often be omitted. For example, to show a summary of a dataset, examples use >>> print(ds.summary()), but in interactive work simply entering >>> ds.summary() will print the summary.

Datasets

Examples of how to construct and manipulate Dataset objects.

Dataset basics

# Author: Christian Brodbeck <christianbrodbeck@nyu.edu>
from eelbrain import *
import numpy as np

A dataset can be constructed column by column, by adding one variable after another:

# initialize an empty Dataset:
ds = Dataset()
# numeric values are added as Var object:
ds['y'] = Var(np.random.normal(0, 1, 6))
# categorical data as represented in Factors:
ds['a'] = Factor(['a', 'b', 'c'], repeat=2)
# check the result:
print(ds)

Out:

y          a
------------
-0.20087   a
0.42671    a
1.1178     b
2.5059     b
-0.67788   c
-1.1994    c

For larger datasets it can be more convenient to print only the first few cases…

print(ds.head())

Out:

y          a
------------
-0.20087   a
0.42671    a
1.1178     b
2.5059     b
-0.67788   c
-1.1994    c

… or a summary of variables:

print(ds.summary())

Out:

Key   Type     Values
-------------------------------------------------------------------------
y     Var      -1.19944, -0.677876, -0.200868, 0.426707, 1.11781, 2.50585
a     Factor   a:2, b:2, c:2
-------------------------------------------------------------------------
Dataset: 6 cases

A second way of constructing a dataset is case by case (i.e., row by row):

rows = []
for i in range(6):
    y = np.random.normal(0, 1)
    a = 'abc'[i % 3]
    rows.append([f'S{i}', y, a])
ds = Dataset.from_caselist(['subject', 'y', 'a'], rows)
print(ds)

Out:

subject   y          a
----------------------
S0        -0.36599   a
S1        0.37377    b
S2        0.085598   c
S3        0.27996    a
S4        0.26522    b
S5        0.90183    c

Align datasets

Shows how to combine information from two datasets describing the same cases, but not necessarily in the same order.

# Author: Christian Brodbeck <christianbrodbeck@nyu.edu>
import random
import string

from eelbrain import *


# Generate a dataset with known sequence
ds = Dataset()
ds['ascii'] = Factor(string.ascii_lowercase)
# Add an index variable to the dataset to later identify the cases
ds.index()

# Generate two shuffled copies of the dataset (and print them to confirm that
# they are shuffled)
ds1 = ds[random.sample(range(ds.n_cases), 15)]
print(ds1.head())
ds2 = ds[random.sample(range(ds.n_cases), 16)]
print(ds2.head())

Out:

ascii   index
-------------
s       18
l       11
g       6
y       24
z       25
u       20
b       1
v       21
j       9
k       10
ascii   index
-------------
k       10
f       5
v       21
p       15
l       11
g       6
r       17
b       1
c       2
e       4

Align the datasets

Use the "index" variable added above to identify cases and align the two datasets:

ds1_aligned, ds2_aligned = align(ds1, ds2, 'index')

# show the ascii sequences for the two datasets next to each other to
# demonstrate that they are aligned
ds1_aligned['ascii_ds2'] = ds2_aligned['ascii']
print(ds1_aligned)

Out:

ascii   index   ascii_ds2
-------------------------
l       11      l
g       6       g
b       1       b
v       21      v
j       9       j
k       10      k
i       8       i
d       3       d

Reverse Correlation

EEG Speech Envelope TRF

Analyze continuous speech data from the mTRF dataset [1]: use the boosting algorithm to estimate temporal response functions (TRFs) to the acoustic envelope.

# Author: Christian Brodbeck <christianbrodbeck@nyu.edu>
# sphinx_gallery_thumbnail_number = 4
import os

from scipy.io import loadmat
import mne
from eelbrain import *

# Load the mTRF speech dataset and convert data to NDVars
root = mne.datasets.mtrf.data_path()
speech_path = os.path.join(root, 'speech_data.mat')
mdata = loadmat(speech_path)

# Time axis
tstep = 1. / mdata['Fs'][0, 0]
n_times = mdata['envelope'].shape[0]
time = UTS(0, tstep, n_times)
# Load the EEG sensor coordinates (drop fiducial coordinates, which are stored
# after sensor 128)
sensor = Sensor.from_montage('biosemi128')[:128]
# Frequency dimension for the spectrogram
band = Scalar('frequency', range(16))
# Create variables
envelope = NDVar(mdata['envelope'][:, 0], (time,), name='envelope')
eeg = NDVar(mdata['EEG'], (time, sensor), name='EEG', info={'unit': 'µV'})
spectrogram = NDVar(mdata['spectrogram'], (time, band), name='spectrogram')
# Exclude a bad channel
eeg = eeg[sensor.index(exclude='A13')]

Data

Plot the spectrogram of the speech stimulus:

plot.Array(spectrogram, xlim=5, w=6, h=2)
_images/sphx_glr_mtrf_001.png

Out:

<Array: spectrogram>

Plot the envelope used as stimulus representation for reverse correlation:

plot.UTS(envelope, xlim=5, w=6, h=2)
_images/sphx_glr_mtrf_002.png

Out:

<UTS: envelope>

Plot the corresponding EEG data:

p = plot.TopoButterfly(eeg, xlim=5, w=7, h=2)
p.set_time(1.200)
_images/sphx_glr_mtrf_003.png

Reverse correlation

TRF for the envelope using boosting:

  • TRF from -100 to 400 ms
  • Basis of 100 ms Hamming windows
  • 4 data partitions for cross-validation-based early stopping

res = boosting(eeg, envelope, -0.100, 0.400, basis=0.100, partitions=4)
p = plot.TopoButterfly(res.h_scaled, w=6, h=2)
p.set_time(.180)
_images/sphx_glr_mtrf_004.png

Out:

Boosting 127 signals: 100%|##########| 508/508 [00:09<00:00, 52.57it/s]

Multiple predictors

Multiple predictors additively explain the signal:

# Derive acoustic onsets from the envelope
onset = envelope.diff('time', name='onset').clip(0)
onset *= envelope.max() / onset.max()
plot.UTS([[envelope, onset]], xlim=5, w=6, h=2)
_images/sphx_glr_mtrf_005.png

Out:

<UTS: envelope, onset>

res_onset = boosting(eeg, [onset, envelope], -0.100, 0.400, basis=0.100, partitions=4)
p = plot.TopoButterfly(res_onset.h_scaled, w=6, h=3)
p.set_time(.150)
_images/sphx_glr_mtrf_006.png

Out:

Boosting 127 signals: 100%|##########| 508/508 [00:18<00:00, 27.53it/s]

Compare models

Compare model quality through the correlation between measured and predicted responses:

plot.Topomap([res.r, res_onset.r], w=4, h=2, ncol=2, axtitle=['envelope', 'envelope + onset'])
_images/sphx_glr_mtrf_007.png

Out:

<Topomap: correlation>

References
[1] Crosse, M. J., Di Liberto, G. M., Bednar, A., & Lalor, E. C. (2016). The Multivariate Temporal Response Function (mTRF) Toolbox: A MATLAB Toolbox for Relating Neural Signals to Continuous Stimuli. Frontiers in Human Neuroscience, 10. https://doi.org/10.3389/fnhum.2016.00604

Total running time of the script: (0 minutes 32.077 seconds)

Univariate Statistics

Model coding

Illustrates how to inspect coding of regression models.

# Author: Christian Brodbeck <christianbrodbeck@nyu.edu>
from eelbrain import *

ds = Dataset()
ds['A'] = Factor(['a1', 'a0'], repeat=4)
ds['B'] = Factor(['b1', 'b0'], repeat=2, tile=2)

look at data

ds.head()

Out:

A    B
-------
a1   b1
a1   b1
a1   b0
a1   b0
a0   b1
a0   b1
a0   b0
a0   b0

create a fixed effects model

m = ds.eval('A * B')
print(repr(m))

Out:

A + B + A % B

show the model coding

print(m)

Out:

intercept   A   B   A x B
-------------------------
1           1   1   1
1           1   1   1
1           1   0   0
1           1   0   0
1           0   1   0
1           0   1   0
1           0   0   0
1           0   0   0

create random effects model

ds['subject'] = Factor(['s1', 's2'], tile=4, name='subject', random=True)
m = ds.eval('A * B * subject')
print(repr(m))

Out:

A + B + A % B + subject + A % subject + B % subject + A % B % subject

show the model coding

print(m)

Out:

intercept   A   B   A x B   subject   A x subject   B x subject   A x B x subject
---------------------------------------------------------------------------------
1           1   1   1       1         1             1             1
1           1   1   1       0         0             0             0
1           1   0   0       1         1             0             0
1           1   0   0       0         0             0             0
1           0   1   0       1         0             1             0
1           0   1   0       0         0             0             0
1           0   0   0       1         0             0             0
1           0   0   0       0         0             0             0

Repeated measures ANOVA

Based on [1].

# Author: Christian Brodbeck <christianbrodbeck@nyu.edu>
from eelbrain import *

y = Var([7,  3,  6,  6,  5,  8,  6,  7,
         7, 11,  9, 11, 10, 10, 11, 11,
         8, 14, 10, 11, 12, 10, 11, 12],
        name='y')
a = Factor('abc', repeat=8, name='A')

Fixed effects ANOVA (independent measures, [1] p. 24):

print(test.anova(y, a, title="Independent Measures"))

Out:

Independent Measures

                SS   df      MS          F        p
---------------------------------------------------
A           112.00    2   56.00   22.62***   < .001
Residuals    52.00   21    2.48
---------------------------------------------------
Total       164.00   23

Repeated measures ANOVA ([1] p. 72): subject is defined as a random effect and entered into the model as a completely crossed factor:

subject = Factor(range(8), tile=3, name='subject', random=True)
print(test.anova(y, a * subject, title="Repeated Measures"))

Out:

Repeated Measures

            SS   df      MS   MS(denom)   df(denom)          F        p
-----------------------------------------------------------------------
A       112.00    2   56.00        2.71          14   20.63***   < .001
-----------------------------------------------------------------------
Total   164.00   23

Two-way repeated measures ANOVA

y = Var([ 7,  3,  6,  6,  5,  8,  6,  7,
          7, 11,  9, 11, 10, 10, 11, 11,
          8, 14, 10, 11, 12, 10, 11, 12,
         16,  7, 11,  9, 10, 11,  8,  8,
         16, 10, 13, 10, 10, 14, 11, 12,
         24, 29, 10, 22, 25, 28, 22, 24])
a = Factor(['a0', 'a1'], repeat=3 * 8, name='A')
b = Factor(['b0', 'b1', 'b2'], tile=2, repeat=8, name='B')
subject = Factor(range(8), tile=6, name='subject', random=True)

print(test.anova(y, a * b * subject, title="Repeated Measure:"))

Out:

Repeated Measure:

             SS   df       MS   MS(denom)   df(denom)          F        p
-------------------------------------------------------------------------
A        432.00    1   432.00       10.76           7   40.14***   < .001
B        672.00    2   336.00       11.50          14   29.22***   < .001
A x B    224.00    2   112.00        6.55          14   17.11***   < .001
-------------------------------------------------------------------------
Total   1708.00   47

Bar-plot with within-subject error bars and pairwise tests

plot.Barplot(y, a % b, match=subject)
_images/sphx_glr_RMANOVA_001.png

Out:

<Barplot: None ~ A x B>

References
[1] Rutherford, A. (2001). Introducing ANOVA and ANCOVA: A GLM Approach. Sage.

ANCOVA
Example 1

Based on [1], Exercises (page 8).

# Author: Christian Brodbeck <christianbrodbeck@nyu.edu>
from eelbrain import *

y = Var([2, 3, 3, 4,
         3, 4, 5, 6,
         1, 2, 1, 2,
         1, 1, 2, 2,
         2, 2, 2, 2,
         1, 1, 2, 3], name="Growth Rate")

genotype = Factor(range(6), repeat=4, name="Genotype")

hours = Var([8, 12, 16, 24], tile=6, name="Hours")

Show the model

print(hours * genotype)

Out:

intercept   Hours   Genotype                 Hours x Genotype
-----------------------------------------------------------------------------
1           8       1    0    0    0    0    8      0      0      0      0
1           12      1    0    0    0    0    12     0      0      0      0
1           16      1    0    0    0    0    16     0      0      0      0
1           24      1    0    0    0    0    24     0      0      0      0
1           8       0    1    0    0    0    0      8      0      0      0
1           12      0    1    0    0    0    0      12     0      0      0
1           16      0    1    0    0    0    0      16     0      0      0
1           24      0    1    0    0    0    0      24     0      0      0
1           8       0    0    1    0    0    0      0      8      0      0
1           12      0    0    1    0    0    0      0      12     0      0
1           16      0    0    1    0    0    0      0      16     0      0
1           24      0    0    1    0    0    0      0      24     0      0
1           8       0    0    0    1    0    0      0      0      8      0
1           12      0    0    0    1    0    0      0      0      12     0
1           16      0    0    0    1    0    0      0      0      16     0
1           24      0    0    0    1    0    0      0      0      24     0
1           8       0    0    0    0    1    0      0      0      0      8
1           12      0    0    0    0    1    0      0      0      0      12
1           16      0    0    0    0    1    0      0      0      0      16
1           24      0    0    0    0    1    0      0      0      0      24
1           8       0    0    0    0    0    0      0      0      0      0
1           12      0    0    0    0    0    0      0      0      0      0
1           16      0    0    0    0    0    0      0      0      0      0
1           24      0    0    0    0    0    0      0      0      0      0

ANCOVA

print(test.anova(y, hours * genotype, title="ANOVA"))

Out:

ANOVA

                      SS   df     MS          F        p
--------------------------------------------------------
Hours               7.06    1   7.06   54.90***   < .001
Genotype           27.88    5   5.58   43.36***   < .001
Hours x Genotype    3.15    5   0.63    4.90*       .011
Residuals           1.54   12   0.13
--------------------------------------------------------
Total              39.62   23

Plot the slopes

plot.Regression(y, hours, genotype)
_images/sphx_glr_ANCOVA_001.png

Out:

<Regression: Growth Rate ~ Hours | Genotype>

Example 2

Based on [2] (pp. 118-120).

y = Var([16,  7, 11,  9, 10, 11,  8,  8,
         16, 10, 13, 10, 10, 14, 11, 12,
         24, 29, 10, 22, 25, 28, 22, 24])

cov = Var([9, 5, 6, 4, 6, 8, 3, 5,
           8, 5, 6, 5, 3, 6, 4, 6,
           5, 8, 3, 4, 6, 9, 4, 5], name='cov')

a = Factor([1, 2, 3], repeat=8, name='A')

Full model, with interaction

print(test.anova(y, cov * a))
plot.Regression(y, cov, a)
_images/sphx_glr_ANCOVA_002.png

Out:

                 SS   df       MS          F        p
-----------------------------------------------------
cov          199.54    1   199.54   32.93***   < .001
A            807.82    2   403.91   66.66***   < .001
cov x A       19.39    2     9.70    1.60        .229
Residuals    109.07   18     6.06
-----------------------------------------------------
Total       1112.00   23

<Regression: None ~ cov | A>

Drop interaction term

print(test.anova(y, a + cov))
plot.Regression(y, cov)
_images/sphx_glr_ANCOVA_003.png

Out:

                 SS   df       MS          F        p
-----------------------------------------------------
A            807.82    2   403.91   62.88***   < .001
cov          199.54    1   199.54   31.07***   < .001
Residuals    128.46   20     6.42
-----------------------------------------------------
Total       1112.00   23

<Regression: None ~ cov>

ANCOVA with multiple covariates

Based on [3], p. 139.

# Load data from a text file
ds = load.txt.tsv('Fox_Prestige_data.txt', delimiter=None)
print(ds.head())

Out:

id                    education   income   women   prestige   census   type
---------------------------------------------------------------------------
GOV.ADMINISTRATORS    13.11       12351    11.16   68.8       1113     prof
GENERAL.MANAGERS      12.26       25879    4.02    69.1       1130     prof
ACCOUNTANTS           12.77       9271     15.7    63.4       1171     prof
PURCHASING.OFFICERS   11.42       8865     9.11    56.8       1175     prof
CHEMISTS              14.62       8403     11.68   73.5       2111     prof
PHYSICISTS            15.64       11030    5.13    77.6       2113     prof
BIOLOGISTS            15.09       8258     25.65   72.6       2133     prof
ARCHITECTS            15.44       14163    2.69    78.1       2141     prof
CIVIL.ENGINEERS       14.52       11377    1.03    73.1       2143     prof
MINING.ENGINEERS      14.64       11023    0.94    68.8       2153     prof

# Variable summary
print(ds.summary())

Out:

Key         Type     Values
-------------------------------------------------------------------------------------
id          Factor   GOV.ADMINISTRATORS, GENERAL.MANAGERS, ACCOUNTANTS... (102 cells)
education   Var      6.38 - 15.97
income      Var      611 - 25879
women       Var      0 - 97.51
prestige    Var      14.8 - 87.2
census      Var      1113 - 9517
type        Factor   prof:31, bc:44, wc:23, NA:4
-------------------------------------------------------------------------------------
Fox_Prestige_data.txt: 102 cases

# Exclude cases with missing type
ds2 = ds[ds['type'] != 'NA']

# ANOVA
print(test.anova('prestige', '(income + education) * type', ds=ds2))

Out:

                         SS   df        MS          F        p
--------------------------------------------------------------
income              1131.90    1   1131.90   28.35***   < .001
education           1067.98    1   1067.98   26.75***   < .001
type                 591.16    2    295.58    7.40**      .001
income x type        951.77    2    475.89   11.92***   < .001
education x type     238.40    2    119.20    2.99        .056
Residuals           3552.86   89     39.92
--------------------------------------------------------------
Total              28346.88   97

References
[1] Crawley, M. J. (2005). Statistics: An Introduction Using R. J. Wiley.
[2] Rutherford, A. (2001). Introducing ANOVA and ANCOVA: A GLM Approach. Sage.
[3] Fox, J. (2008). Applied Regression Analysis and Generalized Linear Models, Second Edition. Sage.

Recipes

Group Level Analysis

To do group level analysis one usually wants to construct a Dataset that contains results for each participant along with condition and subject labels. The following illustration assumes functions that compute results for a single subject and condition:

  • result_for(subject, condition) returns an NDVar.
  • scalar_result_for(subject, condition) returns a scalar (float).

Given results by subject and condition, a Dataset can be constructed as follows:

>>> # create lists to collect data and labels
>>> ndvar_results = []
>>> scalar_results = []
>>> subjects = []
>>> conditions = []
>>> # collect data and labels
>>> for subject in ('s1', 's2', 's3', 's4'):
...     for condition in ('c1', 'c2'):
...         ndvar = result_for(subject, condition)
...         s = scalar_result_for(subject, condition)
...         ndvar_results.append(ndvar)
...         scalar_results.append(s)
...         subjects.append(subject)
...         conditions.append(condition)
...
>>> # create a Dataset and convert the collected lists to appropriate format
>>> ds = Dataset()
>>> ds['subject'] = Factor(subjects, random=True)  # treat as random effect
>>> ds['condition'] = Factor(conditions)
>>> ds['y'] = combine(ndvar_results)
>>> ds['s'] = Var(scalar_results)

Now this Dataset can be used for statistical analysis, for example ANOVA:

>>> res = testnd.anova('y', 'condition * subject', ds=ds)

Regression Design

The influence of a continuous predictor at the single-trial level can be tested by first calculating regression coefficients for each subject, and then performing a one-sample t-test across subjects to test the null hypothesis that the regression coefficients do not differ significantly from 0.

Assuming that ds_subject is a Dataset containing single trial data for one subject, with the dependent variable called data and a predictor called predictor:

>>> ds_subject
<Dataset 'example' n_cases=145 {'predictor':V, 'data':Vnd}>
>>> ds_subject['data']
<NDVar 'data': 145 (case) X 5120 (source) X 76 (time)>
>>> print(ds_subject)
predictor
---------
1.9085
0.30836
-0.58802
0.29686
...

The regression coefficient can be calculated the following way:

>>> beta = ds_subject.eval("data.ols(predictor)")
>>> beta
<NDVar 'ols': 1 (case) X 5120 (source) X 76 (time)>

Thus, in order to collect beta values for each subject, you would loop through subjects. We will call the NDVar with beta values ‘beta’:

>>> subjects = []
>>> betas = []
>>> for subject in ['R0001', 'R0002', 'R0003']:
...     ds_subject = my_load_ds_for_subject_function(subject)
...     beta = ds_subject.eval("data.ols(predictor, 'beta')")
...     subjects.append(subject)
...     betas.append(beta)
...
>>> ds = Dataset()
>>> ds['subject'] = Factor(subjects, random=True)
>>> ds['beta'] = combine(betas)

Now you can perform a one-sample t-test:

>>> res = testnd.ttest_1samp('beta', ...)

The results can then be analyzed like those of other testnd tests.

Plots for Publication

When producing multiple plots, it is useful to set some plotting parameters globally to ensure that they are consistent between plots, e.g.:

import matplotlib as mpl

mpl.rcParams['font.family'] = 'sans-serif'
mpl.rcParams['font.size'] = 8
for key in mpl.rcParams:
    if 'width' in key:
        mpl.rcParams[key] *= 0.5
mpl.rcParams['savefig.dpi'] = 300  # different from 'figure.dpi'!

Matplotlib’s tight_layout() functionality provides an easy way for plots to use the available space, and most Eelbrain plots use it by default. However, when trying to produce multiple plots with identical scaling it can lead to unwanted discrepancies. In this case, it is better to define layout parameters globally and plot with the tight=False argument:

mpl.rcParams['figure.subplot.left'] = 0.25
mpl.rcParams['figure.subplot.right'] = 0.95
mpl.rcParams['figure.subplot.bottom'] = 0.2
mpl.rcParams['figure.subplot.top'] = 0.95

plot.UTSStat('uts', 'A', ds=ds, w=5, tight=False)

# now we can produce a second plot without x-axis labels that has exactly
# the same scaling:
plot.UTSStat('uts', 'A % B', ds=ds, w=5, tight=False, xlabel=False, ticklabels=False)

If a script produces several plots, the GUI should not interrupt the script. This can be achieved by setting the show=False argument. In addition, it is usually desirable to save the legend separately and combine it in a layout application:

p = plot.UTSStat('uts', 'A', ds=ds, w=5, tight=False, show=False, legend=False)
p.save('plot.svg', transparent=True)
p.save_legend('legend.svg', transparent=True)

The MneExperiment Pipeline

See also

Introduction

The MneExperiment class is a template for an MEG/EEG analysis pipeline. The pipeline is adapted to a specific experiment by creating a subclass, and specifying properties of the experiment as attributes.

Step by step

Setting up the file structure
MneExperiment.sessions

The first step is to define an MneExperiment subclass with the name of the experiment:

from eelbrain import *

class WordExperiment(MneExperiment):

    sessions = 'words'

Here, sessions is the name that you included in your raw data file names after the subject identifier.

The pipeline expects input files in a strictly determined folder/file structure. In the schema below, curly brackets indicate slots to be replaced with specific names; for example, '{subject}' should be replaced with each specific subject’s label:

root
mri-sdir                                /mri
mri-dir                                    /{mrisubject}
meg-sdir                                /meg
meg-dir                                    /{subject}
raw-dir
trans-file                                       /{mrisubject}-trans.fif
raw-file                                         /{subject}_{session}-raw.fif

This schema shows path templates according to which the input files should be organized. Assuming that root="/files", for a subject called “R0001” this includes:

  • MRI-directory at /files/mri/R0001
  • the raw data file at /files/meg/R0001/R0001_words-raw.fif (the session is called “words” which is specified in WordExperiment.sessions)
  • the trans-file from the coregistration at /files/meg/R0001/R0001-trans.fif

Once the required files are placed in this structure, the experiment class can be initialized with the proper root parameter, pointing to where the files are located:

>>> e = WordExperiment("/files")

The setup can be tested using MneExperiment.show_subjects(), which shows a list of the subjects that were discovered and the MRIs used:

>>> e.show_subjects()
#    subject   mri
-----------------------------------------
0    R0026     R0026
1    R0040     fsaverage * 0.92
2    R0176     fsaverage * 0.954746600461
...

Pre-processing

Make sure an appropriate pre-processing pipeline is defined as MneExperiment.raw.

To inspect raw data for a given pre-processing stage use:

>>> e.set(raw='1-40')
>>> y = e.load_raw(ndvar=True)
>>> p = plot.TopoButterfly(y, xlim=5)

This will plot 5 s excerpts and allow scrolling through the data.

Labeling events

Initially, events are only labeled with the trigger ID. Use the MneExperiment.variables settings to add labels. For more complex designs and variables, you can override MneExperiment.label_events(). Events are represented as Dataset objects and can be inspected with corresponding methods and functions, for example:

>>> e = WordExperiment("/files")
>>> ds = e.load_events()
>>> ds.head()
>>> print(table.frequencies('trigger', ds=ds))
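
If the predefined options are not sufficient, MneExperiment.label_events() can be overridden; a minimal sketch, in which the added 'position' variable is purely hypothetical:

from eelbrain import Var
from eelbrain.pipeline import *


class WordExperiment(MneExperiment):

    sessions = 'words'

    def label_events(self, ds):
        # apply the default event labeling first
        ds = super().label_events(ds)
        # add a custom event variable (hypothetical example)
        ds['position'] = Var(range(ds.n_cases))
        return ds
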
Defining data epochs

Once events are properly labeled, define MneExperiment.epochs.

There is one special epoch to define, which is called 'cov'. This is the data epoch that will be used to estimate the sensor noise covariance matrix for source estimation.

In order to find the right sel epoch parameter, it can be useful to actually load the events with MneExperiment.load_events() and test different selection strings. The epoch selection is determined by selection = event_ds.eval(epoch['sel']). Thus, a specific setting could be tested with:

>>> ds = e.load_events()
>>> print(ds.sub("event == 'value'"))

Bad channels

Flat channels are automatically excluded from the analysis.

An initial check for noisy channels can be done by looking at the raw data (see Pre-processing above). If this inspection reveals bad channels, they can be excluded using MneExperiment.make_bad_channels().

Another good check for bad channels is plotting the average evoked response and looking for channels that are uncorrelated with neighboring channels. To plot the average before trial rejection, use:

>>> ds = e.load_epochs(epoch='epoch', reject=False)
>>> plot.TopoButterfly('meg', ds=ds)

The neighbor correlation can also be quantified using:

>>> nc = neighbor_correlation(concatenate(ds['meg']))
>>> nc.sensor.names[nc < 0.3]
Datalist([u'MEG 099'])

A simple way to cycle through subjects when performing a given pre-processing step is MneExperiment.next(). If a general threshold is adequate, the selection of bad channels based on neighbor correlation can be automated using the MneExperiment.make_bad_channels_neighbor_correlation() method:

>>> for subject in e:
...     e.make_bad_channels_neighbor_correlation()

ICA

If preprocessing includes ICA, select which ICA components should be removed. The experiment raw state needs to be set to the ICA stage of the pipeline:

>>> e.set(raw='ica')
>>> e.make_ica_selection(epoch='epoch', decim=10)

Set epoch to the epoch whose data you want to display in the GUI (see MneExperiment.make_ica_selection() for more information, in particular on how to precompute ICA decomposition for all subjects).

In order to select ICA components for multiple subjects, a simple way to cycle through subjects is MneExperiment.next(), for example:

>>> e.make_ica_selection(epoch='epoch', decim=10)
>>> e.next()
subject: 'R1801' -> 'R2079'
>>> e.make_ica_selection(epoch='epoch', decim=10)
>>> e.next()
subject: 'R2079' -> 'R2085'
...

Trial selection

For each primary epoch that is defined, bad trials can be rejected using MneExperiment.make_epoch_selection(). Rejections are specific to a given raw state:

>>> e.set(raw='ica1-40')
>>> e.make_epoch_selection()
>>> e.next()
subject: 'R1801' -> 'R2079'
>>> e.make_epoch_selection()
...

To reject trials based on a pre-determined threshold, a loop can be used:

>>> for subject in e:
...     e.make_epoch_selection(auto=1e-12)
...

Analysis

With preprocessing completed, there are different options for analyzing the data.

The most flexible option is loading data from the desired processing stage using one of the many .load_... methods of the MneExperiment. For example, load a Dataset with source-localized condition averages using MneExperiment.load_evoked_stc(), then test a hypothesis using one of the mass-univariate tests from the testnd module. To make this kind of analysis replicable, it is probably useful to write the complete analysis as a separate script that imports the experiment (see the example experiment folder).

Many statistical comparisons can also be specified in the MneExperiment.tests attribute, and then loaded directly using the MneExperiment.load_test() method. This has the advantage that the tests will be cached automatically and, once computed, can be loaded very quickly. However, these definitions are not quite as flexible as writing a custom script.

Finally, for tests defined in MneExperiment.tests, the MneExperiment can generate HTML report files. These are generated with the MneExperiment.make_report() and MneExperiment.make_report_rois() methods.
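
For example, a hedged sketch of loading a cached test result and generating the corresponding report (the test name 'mytest' and all parameters are placeholders; see MneExperiment.load_test() and MneExperiment.make_report() for the actual signatures):

>>> e = WordExperiment("/files")
>>> res = e.load_test('mytest', tstart=0.1, tstop=0.3, pmin=0.05)
>>> e.make_report('mytest', tstart=0.1, tstop=0.3, pmin=0.05)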

Warning

If source files are changed (raw files, epoch rejection or bad channel files, …) reports are not updated automatically unless the corresponding MneExperiment.make_report() function is called again. For this reason it is useful to have a script to generate all desired reports. Running the script ensures that all reports are up-to-date, and will only take seconds if nothing has to be recomputed (for an example see make-reports.py in the example experiment folder).

Example

The following is a complete example for an experiment class definition file (the source file can be found in the Eelbrain examples folder at examples/mouse/mouse.py):

# skip test: data unavailable
from eelbrain.pipeline import *


class Mouse(MneExperiment):

    # Name of the experimental session(s), used to locate *-raw.fif files
    sessions = 'CAT'

    # Pre-processing pipeline: each entry in `raw` specifies one processing step. The first parameter
    # of each entry specifies the source (another processing step or 'raw' for raw input data).
    raw = {
        # Maxwell filter as first step (taking input from raw data, 'raw')
        'tsss': RawMaxwell('raw', st_duration=10., ignore_ref=True, st_correlation=0.9, st_only=True),
        # Band-pass filter data between 1 and 40 Hz (taking Maxwell-filtered data as input, 'tsss')
        '1-40': RawFilter('tsss', 1, 40),
        # Perform ICA on filtered data
        'ica': RawICA('1-40', 'CAT', n_components=0.99),
    }

    # Variables determine how event triggers are mapped to meaningful labels. Events are represented
    # as data-table in which each row corresponds to one event (i.e., one trigger). Each variable
    # defined here adds one column in that data-table, assigning a label or value to each event.
    variables = {
        # The first parameter specifies the source variable (here the trigger values),
        # the second parameter a mapping from source to target labels/values
        'stimulus': LabelVar('trigger', {(162, 163): 'target', (166, 167): 'prime'}),
        'prediction': LabelVar('trigger', {(162, 166): 'expected', (163, 167): 'unexpected'}),
    }

    # Epochs specify how to extract time-locked data segments ("epochs") from the continuous data.
    epochs = {
        # A PrimaryEpoch definition extracts epochs directly from continuous data. The first argument
        # specifies the recording session from which to extract the data (here: 'CAT'). The second
        # argument specifies which events to extract the data from (here: all events at which the
        # 'stimulus' variable, defined above, has a value of either 'prime' or 'target').
        'word': PrimaryEpoch('CAT', "stimulus.isin(('prime', 'target'))", samplingrate=200),
        # A secondary epoch inherits its properties from the base epoch ("word") unless they are
        # explicitly modified (here, selecting a subset of events)
        'prime': SecondaryEpoch('word', "stimulus == 'prime'"),
        'target': SecondaryEpoch('word', "stimulus == 'target'"),
        # The 'cov' epoch defines the data segments used to compute the noise covariance matrix for
        # source localization
        'cov': SecondaryEpoch('prime', tmax=0),
    }

    tests = {
        '=0': TTestOneSample(),
        'surprise': TTestRel('prediction', 'unexpected', 'expected'),
        'anova': ANOVA('prediction * subject'),
    }

    parcs = {
        'frontotemporal-lh': CombinationParc('aparc', {
            'frontal-lh': 'parsorbitalis + parstriangularis + parsopercularis',
            'temporal-lh': 'transversetemporal + superiortemporal + '
                           'middletemporal + inferiortemporal + bankssts',
            }, views='lateral'),
    }


root = '~/Data/Mouse'
e = Mouse(root)

The event structure is illustrated by looking at the first few events:

>>> from mouse import *
>>> ds = e.load_events()
>>> ds.head()
trigger   i_start   T        SOA     subject   stimulus   prediction
--------------------------------------------------------------------
182       104273    104.27   12.04   S0001
182       116313    116.31   1.313   S0001
166       117626    117.63   0.598   S0001     prime      expected
162       118224    118.22   2.197   S0001     target     expected
166       120421    120.42   0.595   S0001     prime      expected
162       121016    121.02   2.195   S0001     target     expected
167       123211    123.21   0.596   S0001     prime      unexpected
163       123807    123.81   2.194   S0001     target     unexpected
167       126001    126      0.598   S0001     prime      unexpected
163       126599    126.6    2.195   S0001     target     unexpected

Experiment Definition

Basic setup
MneExperiment.owner

Set MneExperiment.owner to your email address if you want to be able to receive notifications. Whenever you run a sequence of commands within a with mne_experiment.notification: block, you will get an email once the respective code has finished executing or run into an error, for example:

>>> e = MyExperiment()
>>> with e.notification:
...     e.make_report('mytest', tstart=0.1, tstop=0.3)
...

This will send you an email as soon as the report is finished (or the program encounters an error).

MneExperiment.auto_delete_results

Whenever a MneExperiment instance is initialized with a valid root path, it checks whether changes in the class definition invalidate previously computed results. By default, the user is prompted to confirm the deletion of invalidated results. Set .auto_delete_results=True to delete them automatically without interrupting initialization.

MneExperiment.screen_log_level

Determines the amount of information displayed on the screen while using an MneExperiment (see logging).

MneExperiment.meg_system

Starting with mne 0.13, fiff files converted from KIT files store information about the system they were collected with. For files converted earlier, the MneExperiment.meg_system attribute needs to specify the system the data were collected with. For data from NYU New York, the correct value is meg_system="KIT-157".

MneExperiment.trigger_shift

Set this attribute to shift all trigger times by a constant (in seconds). For example, with trigger_shift = 0.03 a trigger that originally occurred 35.10 seconds into the recording will be shifted to 35.13. If the trigger delay differs between subjects, this attribute can also be a dictionary mapping subject names to shift values, e.g. trigger_shift = {'R0001': 0.02, 'R0002': 0.05, ...}.

Subjects
MneExperiment.subject_re

Subjects are identified by looking for folders in the subjects-directory whose name matches the subject_re regular expression (see re). By default, this is '(R|A|Y|AD|QP)(\d{3,})$', which matches R-numbers like R1234, but also numbers prefixed by A, Y, AD or QP.
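
A minimal sketch combining the basic-setup attributes described above in a single class definition (all values are arbitrary examples):

class WordExperiment(MneExperiment):

    sessions = 'words'

    # receive notification emails for code run under e.notification
    owner = 'me@example.com'

    # delete invalidated results without prompting
    auto_delete_results = True

    # all triggers arrive 30 ms late
    trigger_shift = 0.03

    # subjects are folders with names like "S0001"
    subject_re = r'S\d{4}$'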

Defaults
MneExperiment.defaults

The defaults dictionary can contain default settings for experiment analysis parameters (see State Parameters below), e.g.:

defaults = {'epoch': 'my_epoch',
            'cov': 'noreg',
            'raw': '1-40'}

Pre-processing (raw)
MneExperiment.raw

Define a pre-processing pipeline as a series of linked processing steps:

RawFilter(source[, l_freq, h_freq]) Filter raw pipe
RawICA(source, session[, method, …]) ICA raw pipe
RawApplyICA(source, ica[, cache]) Apply ICA estimated in a RawICA pipe
RawMaxwell(source, **kwargs) Maxwell filter raw pipe
RawSource([filename, reader, sysname, …]) Raw data source
RawReReference(source[, reference]) Re-reference EEG data

By default the raw data can be accessed in a pipe named "raw" (raw data input can be customized by adding a RawSource pipe). Each subsequent preprocessing step is defined with its input as first argument (source).

For example, the following definition sets up a pipeline using TSSS and band-pass filtering, and optionally ICA:

class Experiment(MneExperiment):

    sessions = 'session'

    raw = {
        'tsss': RawMaxwell('raw', st_duration=10., ignore_ref=True, st_correlation=0.9, st_only=True),
        '1-40': RawFilter('tsss', 1, 40),
        'ica': RawICA('tsss', 'session', 'extended-infomax', n_components=0.99),
        'ica1-40': RawFilter('ica', 1, 40),
    }

To use the raw --> TSSS --> 1-40 Hz band-pass pipeline, use e.set(raw="1-40"). To use raw --> TSSS --> ICA --> 1-40 Hz band-pass, select e.set(raw="ica1-40").
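
Raw input itself can also be customized. A hedged sketch of reading BrainVision EEG recordings through a RawSource pipe, based on the signatures listed above (the filename template and reader function are assumptions):

import mne
from eelbrain.pipeline import *


class EEGExperiment(MneExperiment):

    sessions = 'task'

    raw = {
        # hypothetical: read {subject}_{session}.vhdr files instead of *-raw.fif
        'raw': RawSource('{subject}_{session}.vhdr', reader=mne.io.read_raw_brainvision),
        '0.1-40': RawFilter('raw', 0.1, 40),
        're-ref': RawReReference('0.1-40'),
    }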

Event variables
MneExperiment.variables

Event variables add labels and variables to the events:

LabelVar(source, codes[, default, session]) Variable assigning labels to values
EvalVar(code[, session]) Variable based on evaluating a statement
GroupVar(groups[, session]) Group membership for each subject

Most of the time, the main purpose of this attribute is to turn trigger values into meaningful labels:

class Mouse(MneExperiment):

    variables = {
        'stimulus': LabelVar('trigger', {(162, 163): 'target', (166, 167): 'prime'}),
        'prediction': LabelVar('trigger', {162: 'expected', 163: 'unexpected'}),
    }

This defines a variable called “stimulus”: on this variable, all events with trigger 162 or 163 have the value "target", and events with trigger 166 or 167 have the value "prime". The “prediction” variable only labels triggers 162 and 163. Unmentioned trigger values are assigned the empty string ('').

Epochs
MneExperiment.epochs

Epochs are specified as a {name: epoch_definition} dictionary. Names are str, and epoch definitions are instances of the classes described below:

PrimaryEpoch(session[, sel]) Epoch based on selecting events from a raw file
SecondaryEpoch(base[, sel]) Epoch inheriting events from another epoch
SuperEpoch(sub_epochs, **kwargs) Combine several other epochs

Examples:

epochs = {
    # some primary epochs:
    'picture': PrimaryEpoch('words', "stimulus == 'picture'"),
    'word': PrimaryEpoch('words', "stimulus == 'word'"),
    # use the picture baseline for the sensor covariance estimate
    'cov': SecondaryEpoch('picture', tmax=0),
    # another secondary epoch:
    'animal_words': SecondaryEpoch('noun', sel="word_type == 'animal'"),
    # a superset-epoch:
    'all_stimuli': SuperEpoch(('picture', 'word')),
}

Tests
MneExperiment.tests

Statistical tests are defined as a {name: test_definition} dictionary. Test definitions are constructed from the following classes:

TTestOneSample([tail]) One-sample t-test
TTestRel(model, c1, c0[, tail]) Related measures t-test
TTestInd(model, c1, c0[, tail, vars]) Independent measures t-test (comparing groups of subjects)
ANOVA(x[, model, vars]) ANOVA test
TContrastRel(model, contrast[, tail]) Contrasts of T-maps (see eelbrain.testnd.t_contrast_rel)
TwoStageTest(stage_1[, vars, model]) Two-stage test: T-test of regression coefficients

Example:

tests = {
    'my_anova': ANOVA('noise * word_type * subject'),
    'my_ttest': TTestRel('noise', 'a_lot_of_noise', 'no_noise'),
}

Subject groups
MneExperiment.groups

A subject group called 'all' containing all subjects is always implicitly defined. Additional subject groups can be defined in MneExperiment.groups with {name: group_definition} entries:

Group(subjects) Group defined as collection of subjects
SubGroup(base, exclude) Group defined by removing subjects from a base group

Example:

groups = {
    'good': SubGroup('all', ['R0013', 'R0666']),
    'bad': Group(['R0013', 'R0666']),
}

Parcellations (parcs)
MneExperiment.parcs

The parcellation determines how the brain surface is divided into regions. A number of standard parcellations are automatically defined (see parc/mask (parcellations) below). Additional parcellations can be defined in the MneExperiment.parcs dictionary with {name: parc_definition} entries. There are several ways in which parcellations can be defined, described below (see the sketch after the list for an example):

CombinationParc(base, labels[, views]) Recombine labels from an existing parcellation
SeededParc(seeds[, mask, surface, views]) Parcellation that is grown from seed coordinates
IndividualSeededParc(seeds[, mask, surface, …]) Seed parcellation with individual seeds for each subject
FreeSurferParc([views]) Parcellation that is created outside Eelbrain for each subject
FSAverageParc([views]) Fsaverage parcellation that is morphed to individual subjects
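
For example, a CombinationParc can merge labels of an existing parcellation into larger custom ROIs. The following is a minimal sketch; the label names come from the standard aparc parcellation, and the grouping is illustrative:

class Experiment(MneExperiment):

    parcs = {
        # combine two aparc labels into one 'frontal' region (illustrative)
        'frontal': CombinationParc('aparc', {
            'frontal': 'superiorfrontal + rostralmiddlefrontal',
        }),
    }

The new parcellation can then be selected with e.set(parc='frontal').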

Visualization defaults
MneExperiment.brain_plot_defaults

The MneExperiment.brain_plot_defaults dictionary can contain options that change the defaults for brain plots (for reports and movies). The following options are available (see the sketch after the list for an example):

surf : ‘inflated’ | ‘pial’ | ‘smoothwm’ | ‘sphere’ | ‘white’
Freesurfer surface to use as brain geometry.
views : str | iterator of str
View or views to show in the figure. Can also be set for each parcellation, see MneExperiment.parc.
foreground : mayavi color
Figure foreground color (i.e., the text color).
background : mayavi color
Figure background color.
smoothing_steps : None | int
Number of smoothing steps used when displaying data.
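
For example, a minimal sketch using the options listed above (the specific values are illustrative):

class Experiment(MneExperiment):

    # plot on the white surface, lateral view, on a white background
    brain_plot_defaults = {'surf': 'white', 'views': 'lateral',
                           'background': 'white'}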

State Parameters

These are parameters that can be set after an MneExperiment has been initialized to affect the analysis, for example:

>>> my_experiment = MneExperiment()
>>> my_experiment.set(raw='1-40', cov='noreg')

sets up my_experiment to use raw files filtered with a 1-40 Hz band-pass filter, and to use sensor covariance matrices without regularization.

session

Which raw session to work with (one of MneExperiment.sessions; usually set automatically when the epoch parameter is set)

raw

Select the preprocessing pipeline applied to the continuous data. Options are all the processing steps defined in MneExperiment.raw, as well as "raw" for using unprocessed raw data.

group

Any group defined in MneExperiment.groups; restricts the analysis to that group of subjects.
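
For example, to restrict the analysis to the 'good' group defined in the groups example above:

>>> e.set(group='good')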

epoch

Any epoch defined in MneExperiment.epochs; specifies the epoch on which the analysis is conducted.

rej (trial rejection)

Trial rejection can be turned off with e.set(rej=''), meaning that no trials are rejected, and turned back on with e.set(rej='man'), meaning that the corresponding rejection files are used.

model

While the epoch state parameter determines which events are included when loading data, the model parameter determines how these events are split into different condition cells. The parameter should be set to the name of a categorial event variable that defines the desired cells. In the example above, e.load_evoked(epoch='target', model='prediction') would load responses to the target, averaged separately for expected and unexpected trials.

Cells can also be defined by crossing two variables with the % sign. In the example above, to load the corresponding primes together with the targets, you would use e.load_evoked(epoch='word', model='stimulus % prediction').

equalize_evoked_count

By default, the analysis uses all epochs marked as good during rejection. Set equalize_evoked_count='eq' to discard trials so that the same number of epochs goes into each cell of the model.

‘’ (default)
Use all epochs.
‘eq’
Make sure the same number of epochs is used in each cell by discarding epochs.
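
For example, to equalize the number of epochs per cell:

>>> e.set(equalize_evoked_count='eq')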

cov

The method for correcting the sensor covariance.

‘noreg’
Use raw covariance as estimated from the data (do not regularize).
‘bestreg’ (default)
Find the regularization parameter that leads to optimal whitening of the baseline.
‘reg’
Use the default regularization parameter (0.1).
‘auto’
Use automatic selection of the optimal regularization method.

src

The source space to use.

  • ico-x: Surface source space based on icosahedral subdivision of the white matter surface with x subdivision steps (e.g., ico-4, the default).
  • vol-x: Volume source space based on a volume grid with x mm resolution (x is the distance between sources, e.g. vol-10 for a 10 mm grid).
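
For example, to use a volume source space with sources on a 10 mm grid:

>>> e.set(src='vol-10')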

inv

Which inverse solution to use for source localization. This parameter can also be set with MneExperiment.set_inv(), which describes the options in more detail. The inverse solution can also be set directly with the appropriate string, as in e.set(inv='fixed-1-MNE').

parc/mask (parcellations)

The parcellation determines how the brain surface is divided into regions. There are a number of built-in parcellations:

Freesurfer Parcellations
aparc.a2005s, aparc.a2009s, aparc, aparc.DKTatlas, PALS_B12_Brodmann, PALS_B12_Lobes, PALS_B12_OrbitoFrontal, PALS_B12_Visuotopic.
lobes
Modified version of PALS_B12_Lobes in which the limbic lobe is merged into the other 4 lobes.
lobes-op
One large region encompassing occipital and parietal lobe in each hemisphere.
lobes-ot
One large region encompassing occipital and temporal lobe in each hemisphere.

Additional parcellations can be defined in the MneExperiment.parcs attribute. Parcellations are used in different contexts:

  • When loading source space data, the current parc state determines the parcellation of the source space (change the state parameter with e.set(parc='aparc')).
  • When loading tests, setting the parc parameter treats each label as a separate ROI. For spatial cluster-based tests, that means that no clusters can cross the boundary between two labels. On the other hand, using the mask parameter treats all named labels as a connected surface, but discards any sources labeled "unknown". For example, loading a test with mask='lobes' will perform a whole-brain test on the cortex while discarding subcortical sources.

Parcellations are set with their name, with the exception of SeededParc: for those, the name is followed by the radius in mm. For example, to use seeds defined in a parcellation named 'myparc' with a radius of 25 mm around each seed, use e.set(parc='myparc-25').

connectivity

Possible values: '', 'link-midline'

Connectivity refers to the edges connecting data channels (sensors for sensor space data and sources for source space data). These edges are used to find clusters in cluster-based permutation tests. For source spaces, the default is to use FreeSurfer surfaces in which the two hemispheres are unconnected. By setting connectivity='link-midline', this default connectivity can be modified so that the midline gyri of the two hemispheres get linked at sources that are at most 15 mm apart. This parameter currently does not affect sensor space connectivity.
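
For example, to link the midline gyri of the two hemispheres for cluster-based tests:

>>> e.set(connectivity='link-midline')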

select_clusters (cluster selection criteria)

In thresholded cluster tests, clusters are initially filtered with a minimum size criterion. This can be changed with the select_clusters analysis parameter, with the following options:

name          min time   min sources   min sensors
"all"         –          –             –
"10ms"        10 ms      10            4
"" (default)  25 ms      10            4
"large"       25 ms      20            8

To change the cluster selection criterion, use for example:

>>> e.set(select_clusters='all')


Eelbrain relies on NumPy, SciPy, Matplotlib, MNE-Python, PySurfer, wxPython, and Cython, and incorporates icons from the Tango Desktop Project.


Current funding: National Institutes of Health (NIH) grant R01-DC-014085 (since 2016). Past funding: NYU Abu Dhabi Institute grant G1001 (2011-2016).