Eelbrain¶
Eelbrain is an open-source Python package for accessible statistical analysis of MEG and EEG data. It is maintained by Christian Brodbeck at the Computational sensorimotor systems lab at University of Maryland, College Park.
If you use Eelbrain in work that is published, please acknowledge it by citing it with the appropriate DOI for the version you used.
Manual¶
Installing¶
Note
Because of the fluid nature of Python development, the recommended way of installing Eelbrain changes occasionally. For up-to-date information, see the corresponding Eelbrain wiki page.
Getting Started¶
MacOS: Framework Build¶
On macOS, the GUI toolkit that Eelbrain uses requires a special build of Python called a “Framework build”. You might see this error when trying to create a plot:
SystemExit: This program needs access to the screen.
Please run with a Framework build of python, and only when you are
logged in on the main display of your Mac.
In order to avoid this, Eelbrain installs a shortcut to start IPython with a Framework build:
$ eelbrain
This automatically launches IPython with the “eelbrain” profile. A default
startup script that executes from eelbrain import *
is created, and can be
changed in the corresponding IPython profile.
Quitting IPython¶
Sometimes IPython seems to get stuck after this line:
Do you really want to exit ([y]/n)? y
In those instances, pressing ctrl-c usually terminates IPython immediately.
Windows: Scrolling¶
Scrolling inside a plot axes normally uses arrow keys, but this is currently not possible on Windows (due to an issue in Matplotlib). Instead, the following keys can be used:
↑: i
←: j
→: l
↓: k
Introduction¶
There are three primary data-objects:
- Factor for categorial variables
- Var for scalar variables
- NDVar for multidimensional data (e.g. a variable measured at different time points)
Multiple variables belonging to the same dataset can be grouped in a Dataset object.
Factor¶
A Factor
is a container for one-dimensional, categorial data: Each
case is described by a string label. The most obvious way to initialize a
Factor
is a list of strings:
>>> A = Factor(['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b'], name='A')
Since Factor initialization simply iterates over the given data, the same Factor can be initialized with:
>>> Factor('aaaabbbb', name='A')
Factor(['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b'], name='A')
There are other shortcuts to initialize factors (see also
the Factor
class documentation):
>>> A = Factor(['a', 'b', 'c'], repeat=4, name='A')
>>> A
Factor(['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b', 'c', 'c', 'c', 'c'], name='A')
Indexing works like for arrays:
>>> A[0]
'a'
>>> A[0:6]
Factor(['a', 'a', 'a', 'a', 'b', 'b'], name='A')
All values present in a Factor are accessible in its
Factor.cells
attribute:
>>> A.cells
('a', 'b', 'c')
Based on the Factor’s cell values, boolean indexes can be generated:
>>> A == 'a'
array([ True, True, True, True, False, False, False, False, False,
False, False, False], dtype=bool)
>>> A.isany('a', 'b')
array([ True, True, True, True, True, True, True, True, False,
False, False, False], dtype=bool)
>>> A.isnot('a', 'b')
array([False, False, False, False, False, False, False, False, True,
True, True, True], dtype=bool)
Interaction effects can be constructed from multiple factors with the %
operator:
>>> B = Factor(['d', 'e'], repeat=2, tile=3, name='B')
>>> B
Factor(['d', 'd', 'e', 'e', 'd', 'd', 'e', 'e', 'd', 'd', 'e', 'e'], name='B')
>>> i = A % B
Interaction effects are in many ways interchangeable with factors in places where a categorial model is required:
>>> i.cells
(('a', 'd'), ('a', 'e'), ('b', 'd'), ('b', 'e'), ('c', 'd'), ('c', 'e'))
>>> i == ('a', 'd')
array([ True, True, False, False, False, False, False, False, False,
False, False, False], dtype=bool)
Var¶
The Var
class is basically a container to associate one-dimensional
numpy.ndarray
objects with a name. While simple operations can be
performed on the object directly, for any more complex operations on the data
the corresponding numpy.ndarray
can be retrieved in the
Var.x
attribute:
>>> Y = Var(np.random.rand(10), name='Y')
>>> Y
Var([0.185, 0.285, 0.105, 0.916, 0.76, 0.888, 0.288, 0.0165, 0.901, 0.72], name='Y')
>>> Y[5:]
Var([0.888, 0.288, 0.0165, 0.901, 0.72], name='Y')
>>> Y + 1
Var([1.18, 1.28, 1.11, 1.92, 1.76, 1.89, 1.29, 1.02, 1.9, 1.72], name='Y+1')
>>> Y.x
array([ 0.18454728, 0.28479396, 0.10546204, 0.91619036, 0.76006963,
0.88807645, 0.28807859, 0.01645504, 0.90112081, 0.71991843])
Note
Note however that the Var.x
attribute is not intended to be
replaced; rather, a new Var
object should be created for a new
array.
NDVar¶
NDVar
objects are containers for multidimensional data, and manage the
description of the dimensions along with the data. NDVars
are often
derived from some import function, for example load.fiff.stc_ndvar()
.
As an example, consider single trial data from the mne
sample dataset:
>>> ds = datasets.get_mne_sample(src='ico')
>>> src = ds['src']
>>> src
<NDVar 'src': 145 case, 5120 source, 76 time>
This representation shows that src
contains 145 trials of data, with
5120 sources and 76 time points. NDVars
offer numpy
functionality that takes into account the dimensions. Through the
NDVar.sub()
method, indexing can be done using meaningful descriptions,
such as selecting data for only the left hemisphere:
>>> src.sub(source='lh')
<NDVar 'src': 145 case, 2559 source, 76 time>
Through several methods data can be aggregated, for example a mean over time:
>>> src.mean('time')
<NDVar 'src': 145 case, 5120 source>
Or a root mean square over sources:
>>> src.rms('source')
<NDVar 'src': 145 case, 76 time>
As with a Var
, the corresponding numpy.ndarray
can always be
accessed in the NDVar.x
attribute:
>>> type(src.x)
numpy.ndarray
>>> src.x.shape
(145, 5120, 76)
NDVar
objects can be constructed from an array and corresponding
dimension objects, for example:
>>> frequency = Scalar('frequency', [1, 2, 3, 4])
>>> time = UTS(0, 0.01, 50)
>>> data = numpy.random.normal(0, 1, (4, 50))
>>> NDVar(data, (frequency, time))
<NDVar: 4 frequency, 50 time>
A case dimension can be added by including the bare Case
class:
>>> data = numpy.random.normal(0, 1, (10, 4, 50))
>>> NDVar(data, (Case, frequency, time))
<NDVar: 10 case, 4 frequency, 50 time>
Dataset¶
The Dataset
class is a subclass of collections.OrderedDict
from which it inherits much of its behavior.
Its intended purpose is to be a vessel for variable objects (Factor
,
Var
and NDVar
) describing the same cases.
As a dictionary, its keys are strings and its values are data-objects.
The Dataset
class interacts with data-objects’ name
attribute:
A Dataset initialized with a list of data-objects automatically uses their names as keys:
>>> A = Factor('aabb', name='A')
>>> B = Factor('cdcd', name='B')
>>> ds = Dataset((A, B))
>>> print(ds)
A B
-----
a c
a d
b c
b d
>>> ds['A']
Factor(['a', 'a', 'b', 'b'], name='A')
When an unnamed data-object is assigned to a Dataset, the data-object is automatically assigned its key as a name:
>>> ds['Y'] = Var([2,1,4,2])
>>> print(ds)
A B Y
---------
a c 2
a d 1
b c 4
b d 2
>>> ds['Y']
Var([2, 1, 4, 2], name='Y')
The “official” string representation of a Dataset
contains information
on the variables stored in it:
>>> ds
<Dataset n_cases=4 {'A':F, 'B':F, 'Y':V}>
n_cases=4
indicates that the Dataset contains four cases (rows). The
subsequent dictionary-like representation shows the keys and the types of the
corresponding values (F
: Factor
, V
: Var
,
Vnd
: NDVar
).
If a variable’s name does not match its key in the Dataset, this is also
indicated:
>>> ds['C'] = Factor('qwer', name='another_name')
>>> ds
<Dataset n_cases=4 {'A':F, 'B':F, 'Y':V, 'C':<F 'another_name'>}>
While indexing a Dataset with strings returns the corresponding data-objects,
numpy.ndarray
-like indexing on the Dataset can be used to access a
subset of cases:
>>> ds2 = ds[2:]
>>> print(ds2)
A B Y C
-------------
b c 4 e
b d 2 r
>>> ds2['A']
Factor(['b', 'b'], name='A')
Together with the “informal” string representation (retrieved with the print() function) this can be used to inspect the cases contained in the Dataset:
>>> print(ds[0])
A B Y C
-------------
a c 2 q
>>> print(ds[2:])
A B Y C
-------------
b c 4 e
b d 2 r
This type of indexing also allows indexing based on the Dataset’s variables:
>>> print(ds[A == 'a'])
A B Y C
-------------
a c 2 q
a d 1 w
Example¶
Below is a simple example using data objects (for more, see the statistics examples):
>>> from eelbrain import *
>>> import numpy as np
>>> y = np.empty(21)
>>> y[:14] = np.random.normal(0, 1, 14)
>>> y[14:] = np.random.normal(1.5, 1, 7)
>>> Y = Var(y, 'Y')
>>> A = Factor('abc', 'A', repeat=7)
>>> print(Dataset((Y, A)))
Y A
-------------
0.10967 a
0.33562 a
-0.33151 a
1.3571 a
-0.49722 a
-0.24896 a
1.0979 a
-0.56123 b
-0.51316 b
-0.25942 b
-0.6072 b
-0.79173 b
0.0019011 b
2.1804 b
2.5373 c
1.7302 c
-0.17581 c
1.8922 c
1.2734 c
1.5961 c
1.1518 c
>>> print(table.frequencies(A))
cell n
--------
a 7
b 7
c 7
>>> print(test.anova(Y, A))
SS df MS F p
----------------------------------------------------------------
A 8.76 2 4.38 5.75* .012
Residuals 13.71 18 0.76
----------------------------------------------------------------
Total 22.47 20
>>> print(test.pairwise(Y, A, corr='Hochberg'))
Pairwise t-Tests (independent samples)
b c
----------------------------------------
a t(12) = 0.71 t(12) = -2.79*
p = .489 p = .016
p(c) = .489 p(c) = .032
b t(12) = -3.00*
p = .011
p(c) = .032
(* Corrected after Hochberg, 1988)
>>> t = test.pairwise(Y, A, corr='Hochberg')
>>> print(t.get_tex())
\begin{center}
\begin{tabular}{lll}
\toprule
& b & c \\
\midrule
a & $t_{12} = 0.71^{ \ \ \ \ }$ & $t_{12} = -2.79^{* \ \ \ }$ \\
& $p = .489$ & $p = .016$ \\
& $p_{c} = .489$ & $p_{c} = .032$ \\
b & & $t_{12} = -3.00^{* \ \ \ }$ \\
& & $p = .011$ \\
& & $p_{c} = .032$ \\
\bottomrule
\end{tabular}
\end{center}
>>> plot.Boxplot(Y, A, title="My Boxplot", ylabel="value", corr='Hochberg')
Exporting Data¶
Dataset
objects have different Dataset.save()
methods for
saving in various formats.
Iterators (such as Var
and Factor
) can be exported using the
save.txt()
function.
Changes¶
New in 0.28¶
- Transition to Python 3.6
- API changes:
  - testnd.anova: the match parameter is now determined automatically and does not need to be specified anymore in most cases.
  - testnd.ttest_1samp.diff renamed to testnd.ttest_1samp.difference.
  - plot.Histogram: following matplotlib, the normed parameter was renamed to density.
  - Previously capitalized argument and attribute names Y, X and Xax are now lowercase.
  - Topomap-plot argument order changed to provide consistency between different plots.
- NDVar and Var support for round(x)
- MneExperiment:
  - Independent measures t-test
New in 0.27¶
- API changes:
  - To change the parcellation of an NDVar with source-space data, use the new function set_parc(). The SourceSpace.set_parc() method has been removed because dimension objects should be treated as immutable, as they can be shared between different NDVar instances. Analogously, UTS.set_tmin() is now set_tmin().
  - table.frequencies(): If the input y is a Var object, the output will also be a Var (was Factor).
  - NDVar.smooth(): window-based smoothing now uses a symmetric window, which can lead to slightly different results.
- New functions and objects:
  - concatenate(): concatenate multiple NDVar objects to form a new dimension.
  - plot.brain.SequencePlotter to plot multiple anatomical images on one figure.
- New methods:
  - NDVar.ols(): regress on a dimension.
New in 0.26¶
- API changes:
  - A new global configure() function replaces module-level configuration functions.
  - Dataset: when a one-dimensional array is assigned to an unused key, the array is now automatically converted to a Var object.
  - SourceSpace.vertno has been renamed to SourceSpace.vertices.
- Plotting:
  - The new name argument allows setting the window title without adding a title to the figure.
  - Plots that represent time have a new method to synchronize the time axis on multiple plots: link_time_axis().
  - Plot source space time series: plot.brain.butterfly()
- ANOVAs now support mixed models with between- and within-subjects factors (see examples at test.anova()).
- load.fiff: when generating epochs from raw data, a new tstop argument allows specifying the time interval exclusive of the last sample.
- MneExperiment:
  - MneExperiment.reset() (replacing MneExperiment.store_state() and MneExperiment.restore_state())
  - New MneExperiment.auto_delete_results attribute to control whether invalidated results are automatically deleted.
  - MneExperiment.screen_log_level
New in 0.25¶
- Installation with conda (see Installing) and $ eelbrain launch script (see Getting Started).
- API:
  - GUI/plotting:
    - When using IPython 5 or later, GUI start and stop is now automatic. It is possible to revert to the old behavior with plot.configure().
    - There are new hotkeys for most plots (see the individual plots’ help for details).
    - Plots automatically rescale when the window is resized.
  - MneExperiment:
    - A new MneExperiment.sessions attribute replaces defaults['experiment'], with support for multiple sessions in one experiment (see Setting up the file structure).
    - The MneExperiment.epochs parameter sel_epoch has been removed, use base instead.
    - The setting raw='clm' has been renamed to raw='raw'.
    - Custom preprocessing pipelines (see MneExperiment.raw).
    - The model parameter for ANOVA tests is now optional (see MneExperiment.tests).
- Reverse correlation using boosting().
- Loading and saving *.wav files (load.wav() and save.wav()).
New in 0.24¶
- API:
  - MneExperiment: For data from the NYU New York system converted with mne < 0.13, the MneExperiment.meg_system attribute needs to be set to "KIT-157" to distinguish it from data collected with the KIT UMD system.
  - masked_parameter_map() method of cluster-based test results: use of pmin=None is deprecated, use pmin=1 instead.
- New test: test.TTestRel.
- MneExperiment.make_report_rois() includes corrected p-values in reports for tests in more than one ROI.
- MneExperiment.make_rej() now has a decim parameter to improve display performance.
- MneExperiment: BEM-solution files are now created dynamically with mne and are not cached any more. This can lead to small changes in results due to improved numerical precision. Delete old files to free up space with mne_experiment.rm('bem-sol-file', subject='*').
- New MneExperiment.make_report_coreg() method.
- New MneExperiment analysis parameter: connectivity.
- plot.TopoButterfly: press Shift-T for a large topo-map with sensor names.
New in 0.23¶
- API:
  - plot.colors_for_twoway() and plot.colors_for_categorial(): new color model, different options.
  - testnd.t_contrast_rel contrasts can contain * to include the average of multiple cells.
- New NDVar methods: NDVar.envelope(), NDVar.fft().
New in 0.22¶
- Epoch Rejection GUI:
  - New “Tools” menu.
  - New “Info” tool to show summary info on the rejection.
  - New “Find Bad Channels” tool to automatically find bad channels.
  - Set marked channels by clicking on topo-map.
  - Faster page redraw.
- plot.Barplot and plot.Boxplot: new cells argument to customize the order of bars/boxes.
- MneExperiment: new method MneExperiment.show_rej_info().
- NDVar: new method NDVar.label_clusters().
- plot.configure(): option to revert to wxPython backend for plot.brain.
New in 0.21¶
- MneExperiment:
  - New epoch parameters: trigger_shift and vars (see MneExperiment.epochs).
  - load_selected_events(): new vardef parameter to load variables from a test definition.
  - Log files stored in the root directory.
  - Parcellations (MneExperiment.parcs) based on combinations can also include split commands.
- New Factor method: Factor.floodfill().
- Model methods: get_table() replaced with as_table(), new head() and tail().
- API: .sort_idx methods renamed to .sort_index.
New in 0.20¶
- MneExperiment: new analysis parameter select_clusters='all' to keep all clusters in cluster tests (see select_clusters (cluster selection criteria)).
- Use testnd.configure() to limit the number of CPUs that are used in permutation cluster tests.
New in 0.19¶
- Two-stage tests (see MneExperiment.tests).
- Safer cache-handling. See note at Analysis.
- Dataset.head() and Dataset.tail() methods for more efficiently inspecting partial Datasets.
- The default format for plots in reports is now SVG since they are displayed correctly in Safari 9.0. Use plot.configure() to change the default format.
- API: Improvements in plot.Topomap with concomitant changes in the constructor signature. For examples see the meg/topographic plotting example.
- API: plot.ColorList has a new argument called labels.
- API: testnd.anova attribute probability_maps renamed to p analogous to other testnd results.
- Rejection-GUI: The option to plot the data range only has been removed.
New in 0.18¶
- API: The first argument for MneExperiment.plot_annot() is now parc.
- API: the fill_in_missing parameter to combine() has been deprecated and replaced with a new parameter called incomplete.
- API: Several plotting functions have a new xticklabels parameter to suppress x-axis tick labels (e.g. plot.UTSStat).
- The objects returned by plot.brain plotting functions now contain a plot_colorbar() method to create a corresponding plot.ColorBar plot.
- New function choose() to combine data in different NDVars on a case by case basis.
- Rejection-GUI (gui.select_epochs()): Press Shift-i when hovering over an epoch to enter channels for interpolation manually.
- MneExperiment.show_file_status() now shows the last modification date of each file.
- Under OS X 10.8 and newer, running code under a notifier statement now automatically prevents the computer from going to sleep.
New in 0.17¶
- MneExperiment.brain_plot_defaults can be used to customize PySurfer plots in movies and reports.
- MneExperiment.trigger_shift can now also be a dictionary mapping subject name to shift value.
- The rejection GUI now allows selecting individual channels for interpolation using the ‘i’ key.
- Parcellations based on combinations of existing labels, as well as parcellations based on regions around points specified in MNI coordinates, can now be defined in MneExperiment.parcs.
- Source space NDVar can be indexed with lists of region names, e.g., ndvar.sub(source=['cuneus-lh', 'lingual-lh']).
- API: plot.brain.bin_table() function signature changed slightly (more parameters, new hemi parameter inserted to match other functions’ argument order).
- API: combine() now raises KeyError when trying to combine Dataset objects with unequal keys; set fill_in_missing=True to reproduce previous behavior.
- API: Previously, Var.as_factor() mapped unspecified values to str(value). Now they are mapped to ''. This also applies to MneExperiment.variables entries with unspecified values.
New in 0.16¶
- New function for plotting a legend for annot-files: plot.brain.annot_legend() (automatically used in reports).
- Epoch definitions in MneExperiment.epochs can now include a 'base' parameter, which will copy the given “base” epoch and modify it with the current definition.
- MneExperiment.make_mov_ttest() and MneExperiment.make_mov_ga_dspm() are fixed but require PySurfer 0.6.
- New function: table.melt_ndvar().
- API: plot.brain function signatures changed slightly to accommodate more layout-related arguments.
- API: use Brain.image() instead of plot.brain.image().
New in 0.15¶
- The Eelbrain package on PyPI is now compiled with Anaconda. This means that the package can now be installed into an Anaconda distribution with pip, whereas easy_install has to be used for the Canopy distribution.
- GUI gui.select_epochs(): Set marked channels through menu (View > Mark Channels)
- Datasets can be saved as tables in RTF format (Dataset.save_rtf()).
- API: plot.Timeplot: the default spread indicator changed to SEM, and there is a new argument for timelabels.
- API: test.anova() is now a function with a slightly changed signature. The old class has been renamed to test.ANOVA.
- API: test.oneway() was removed. Use test.anova().
- API: the default value of the plot.Timeplot parameter bottom changed from 0 to None (determined by the data).
- API: Factor.relabel() renamed to Factor.update_labels().
- Plotting: New option for the figure legend 'draggable' (drag the legend with the mouse pointer).
New in 0.14¶
- API: the plot.Topomap argument sensors changed to sensorlabels.
- GUI: The python/Quit Eelbrain menu command now closes all windows to ensure that unsaved documents are handled properly. In order to yield to the terminal without closing windows, use the Go/Yield to Terminal command (Command-Alt-Q).
- testnd.t_contrast_rel: support for unary operation abs.
New in 0.13¶
- The gui.select_epochs() GUI can now also be used to set bad channels. MneExperiment subclasses will combine bad channel information from rejection files with bad channel information from bad channel files. Note that while bad channel files set bad channels for a given raw file globally, rejection files set bad channels only for the given epoch.
- Factor objects can now remember a custom cell order which determines the order in tables and plots.
- The Var.as_factor() method can now assign all unmentioned codes to a default value.
- MneExperiment:
  - API: Subclasses should remove the subject and experiment parameters from MneExperiment.label_events().
  - API: MneExperiment can now be imported directly from eelbrain.
  - API: The MneExperiment._defaults attribute should be renamed to MneExperiment.defaults.
  - A draft for a guide at The MneExperiment Class.
  - Cached files are now saved in a separate folder at root/eelbrain-cache. The cache can be cleared using MneExperiment.clear_cache(). To preserve cached test results, move the root/test folder into the root/eelbrain-cache folder.
New in 0.12¶
- API: Dataset construction changed, allows setting the number of cases in the Dataset.
- API: plot.SensorMap2d was renamed to plot.SensorMap.
- MneExperiment:
  - API: The default number of samples for reports is now 10,000.
  - New epoch parameter 'n_cases': raise an error if an epoch definition does not yield the expected number of trials.
  - A custom baseline period for epochs can now be specified as a parameter in the epoch definition (e.g., 'baseline': (-0.2, -0.1)). When loading data, specifying baseline=True uses the epoch’s custom baseline.
New in 0.11¶
- MneExperiment:
  - Change in the way the covariance matrix is defined: The epoch for the covariance matrix should be specified in MneExperiment.epochs['cov']. The regularization is no longer part of set_inv(), but is instead set with MneExperiment.set(cov='reg') or MneExperiment.set(cov='noreg').
  - New option cov='bestreg' automatically selects the regularization parameter for each subject.
- Var.as_factor() allows more efficient labeling when multiple values share the same label.
- API: plot.configure_backend() is now plot.configure().
New in 0.10¶
- Tools for generating colors for categories (see Plotting).
- Plots now all largely respect matplotlib rc-parameters (see Customizing Matplotlib).
- Fixed an issue in the testnd module that could affect permutation based p-values when multiprocessing was used.
New in 0.9¶
- Factor API change: The rep argument was renamed to repeat.
- T-values for regression coefficients through NDVar.ols_t().
- MneExperiment: subject name patterns and eog_sns are now handled automatically.
- UTSStat and Barplot plots can use pooled error for variability estimates (on by default for related measures designs, can be turned off using the pool_error argument).
  - API: for consistency, the argument to specify the kind of error to plot changed to error in both plots.
New in 0.8¶
New in 0.6¶
- New recipes for Regression Design.
New in 0.5¶
- The eelbrain.lab and eelbrain.eellab modules are deprecated. Everything can now be imported from eelbrain directly.
New in 0.4¶
- Optimized ANOVA evaluation, support for unbalanced fixed effects models.
- rpy2 interaction: Dataset.from_r() to create a Dataset from an R Data Frame, and Dataset.to_r() to create an R Data Frame from a Dataset.
New in 0.3¶
- Optimized clustering for cluster permutation tests.
New in 0.2¶
- gui.SelectEpochs Epoch rejection GUI has a new “GA” button to plot the grand average of all accepted trials.
- Cluster permutation tests in testnd use multiple cores; to disable multiprocessing set eelbrain._stats.testnd.multiprocessing = False.
New in 0.1.7¶
- gui.SelectEpochs can now be initialized with a single mne.Epochs instance (data needs to be preloaded).
- Parameters that take NDVar objects now also accept mne.Epochs and mne.fiff.Evoked objects.
New in 0.1.5¶
- plot.topo.TopoButterfly plot: new keyboard commands (t, left arrow, right arrow).
Development¶
Eelbrain is actively developed and maintained by Christian Brodbeck at the Computational sensorimotor systems lab at University of Maryland, College Park.
Eelbrain is fully open-source and new contributions are welcome on
GitHub. Suggestions can be
raised as issues, and modifications can be made as pull requests into the
master
branch.
The Development Version¶
The Eelbrain source code is hosted on
GitHub. Development takes
place on the master
branch, while release versions are maintained on
branches called r/0.26
etc. For further information on working with
GitHub see
GitHub’s instructions.
Installing the development version requires the presence of a compiler. On macOS, make sure Xcode is installed (open it once to accept the license agreement). Windows will indicate any needed files when the install command is run.
After cloning the repository, the development version can be installed by
running, from the Eelbrain
repository’s root directory:
$ python setup.py develop
On macOS, the $ eelbrain shell script to run IPython with the framework build is not installed properly by setup.py; in order to fix this, run:
$ ./fix-bin
In Python, you can make sure that you are working with the development version:
>>> import eelbrain
>>> eelbrain.__version__
'dev'
To switch back to the release version use $ pip uninstall eelbrain
.
Building with Conda¶
To build Eelbrain with conda
, make sure that conda-build
is installed.
Then, from Eelbrain/conda
run:
$ conda build eelbrain
After building successfully, the build can be installed with:
$ conda install --use-local eelbrain
Contributing¶
Style guides:
- Python: PEP8
- Documentation: NumPy documentation style
- ReStructured Text Primer
Useful tools:
- Graphical frontend for git: SourceTree
- Python IDE: PyCharm
Testing¶
Eelbrain uses nose for testing.
Tests for individual modules are included in folders called tests
, usually
on the same level as the module.
To run all tests, run $ make test
from the Eelbrain project directory.
On macOS, nosetests
needs to run with the framework build of Python;
if you get a corresponding error, run $ ./fix-bin nosetests
from the
Eelbrain
repository root.
Reference¶
Data Classes¶
Primary data classes:
Dataset (*args, **kwargs) |
Stores multiple variables pertaining to a common set of measurement cases |
Factor (x[, name, random, repeat, tile, labels]) |
Container for categorial data. |
Var (x[, name, repeat, tile, info]) |
Container for scalar data. |
NDVar (x, dims[, info, name]) |
Container for n-dimensional data. |
Datalist ([items, name, fmt]) |
list subclass for including lists in a Dataset. |
Model classes (not usually initialized by themselves but through operations on primary data-objects):
Interaction (base) |
Represents an Interaction effect. |
Model (x) |
A list of effects. |
NDVar dimensions (not usually initialized by themselves but through
load
functions):
Case (n[, connectivity]) |
Case dimension |
Categorial (name, values[, connectivity]) |
Simple categorial dimension |
Scalar (name, values[, unit, tick_format, …]) |
Scalar dimension |
Sensor (locs[, names, sysname, proj2d, …]) |
Dimension class for representing sensor information |
SourceSpace (vertices[, subject, src, …]) |
MNE source space dimension. |
UTS (tmin, tstep, nsamples) |
Dimension object for representing uniform time series |
File I/O¶
Eelbrain objects can be
pickled. Eelbrain’s own
pickle I/O functions provide backwards compatibility for Eelbrain objects
(although files saved in Python 3 can only be opened in Python 2 if they are
saved with protocol<=2
):
save.pickle (obj[, dest, protocol]) |
Pickle a Python object. |
load.unpickle ([file_path]) |
Load pickled Python objects from a file. |
load.arrow ([file_path]) |
Load object serialized with pyarrow . |
save.arrow (obj[, dest]) |
Save a Python object with pyarrow . |
load.update_subjects_dir (obj, subjects_dir) |
Update NDVar SourceSpace.subjects_dir attributes |
Import¶
Functions and modules for loading specific file formats as Eelbrain objects:
load.wav ([filename, name]) |
Load a wav file as NDVar |
load.tsv (path[, names, types, …]) |
Load a Dataset from a text file. |
load.eyelink |
Tools for loading data from eyelink edf files. |
load.fiff |
Tools for importing data through mne. |
load.txt |
Tools for loading data from text files. |
load.besa |
Tools for loading data from the BESA-MN pipeline. |
Export¶
Datasets with only univariate data can be saved as text using the
save_txt()
method. Additional export functions:
save.txt (iterator[, fmt, delim, dest]) |
Write any object that supports iteration to a text file. |
save.wav (ndvar[, filename, toint]) |
Save an NDVar as wav file |
Sorting and Reordering¶
align (d1, d2[, i1, i2, out]) |
Align two data-objects based on index variables |
align1 (d, idx[, d_idx, out]) |
Align a data object to an index variable |
Celltable (y[, x, match, sub, cat, ds, …]) |
Divide y into cells defined by x. |
choose (choice, sources[, name]) |
Combine data-objects picking from a different object for each case |
combine (items[, name, check_dims, incomplete]) |
Combine a list of items of the same type into one item. |
shuffled_index (n[, cells]) |
Return an index to shuffle a data-object |
NDVar Operations¶
For the most common operations see NDVar
methods. Here are less common
and more specific operations:
Butterworth (low, high, order[, sfreq]) |
Butterworth filter |
complete_source_space (ndvar[, fill]) |
Fill in missing vertices on an NDVar with a partial source space |
concatenate (ndvars[, dim, name, tmin, info, …]) |
Concatenate multiple NDVars |
convolve (h, x) |
Convolve h and x along the time dimension |
cross_correlation (in1, in2[, name]) |
Cross-correlation between two NDVars along the time axis |
cwt_morlet (y, freqs[, use_fft, n_cycles, …]) |
Time frequency decomposition with Morlet wavelets (mne-python) |
dss (ndvar) |
Denoising source separation (DSS) |
filter_data (ndvar, l_freq, h_freq[, …]) |
Apply mne.filter.filter_data() to an NDVar |
frequency_response (b[, frequencies]) |
Frequency response for a FIR filter |
label_operator (labels[, operation, exclude, …]) |
Convert labeled NDVar into a matrix operation to extract label values |
labels_from_clusters (clusters[, names]) |
Create Labels from source space clusters |
morph_source_space (ndvar, subject_to[, …]) |
Morph source estimate to a different MRI subject |
neighbor_correlation (x[, dim, obs, name]) |
Calculate Neighbor correlation |
psd_welch (ndvar[, fmin, fmax, n_fft, …]) |
Power spectral density with Welch’s method |
resample (ndvar, sfreq[, npad, window, name]) |
Resample an NDVar along the ‘time’ dimension with appropriate filter |
segment (continuous, times, tstart, tstop[, …]) |
Segment a continuous NDVar |
set_parc (ndvar, parc[, dim]) |
Change the parcellation of an NDVar with SourceSpace dimension |
set_tmin (ndvar[, tmin]) |
Change the time axis of an NDVar |
xhemi (ndvar[, mask, hemi, parc]) |
Project data from both hemispheres to hemi of fsaverage_sym |
Reverse Correlation¶
boosting (y, x, tstart, tstop[, scale_data, …]) |
Estimate a temporal response function through boosting |
BoostingResult (h, r, isnan, t_run, version, …) |
Result from boosting a temporal response function |
Tables¶
Manipulate data tables and compile information about data objects such as cell frequencies:
table.cast_to_ndvar (data, dim_values, match) |
Create an NDVar by converting a data column to a dimension |
table.difference (y, x, c1, c0, match[, by, …]) |
Subtract data in one cell from another |
table.frequencies (y[, x, of, sub, ds]) |
Calculate frequency of occurrence of the categories in y |
table.melt (name, cells, cell_var_name, ds) |
Restructure a Dataset such that a measured variable is in a single column |
table.melt_ndvar (ndvar[, dim, cells, ds, …]) |
Transform data to long format by converting an NDVar dimension into a variable |
table.repmeas (y, x, match[, sub, ds]) |
Create a repeated-measures table |
table.stats (y, row[, col, match, sub, fmt, …]) |
Make a table with statistics |
Statistics¶
Univariate statistical tests:
test.Correlation (y, x[, sub, ds]) |
Pearson product moment correlation between y and x |
test.TTest1Sample (y[, match, sub, ds, tail]) |
1-sample t-test |
test.TTestInd (y, x[, c1, c0, match, sub, …]) |
Independent samples t-test |
test.TTestRel (y, x[, c1, c0, match, sub, …]) |
Related-measures t-test |
test.anova (y, x[, sub, title, ds]) |
Univariate ANOVA. |
test.pairwise (y, x[, match, sub, ds, par, …]) |
Pairwise comparison table |
test.ttest (y[, x, against, match, sub, …]) |
T-tests for one or more samples |
test.correlations (y, x[, cat, sub, ds, asds]) |
Correlation with one or more predictors |
test.pairwise_correlations (xs[, sub, ds, labels]) |
Pairwise correlation table |
test.lilliefors (data[, formatted]) |
Lilliefors’ test for normal distribution |
Mass-Univariate Statistics¶
testnd.ttest_1samp (y[, popmean, match, sub, …]) |
Mass-univariate one sample t-test |
testnd.ttest_rel (y, x[, c1, c0, match, sub, …]) |
Mass-univariate related samples t-test |
testnd.ttest_ind (y, x[, c1, c0, match, sub, …]) |
Mass-univariate independent samples t-test |
testnd.t_contrast_rel (y, x, contrast[, …]) |
Mass-univariate contrast based on t-values |
testnd.anova (y, x[, sub, ds, samples, pmin, …]) |
Mass-univariate ANOVA |
testnd.corr (y, x[, norm, sub, ds, samples, …]) |
Mass-univariate correlation |
By default the tests in this module produce maps of statistical parameters along with maps of p-values uncorrected for multiple comparisons. Using different parameters, different methods for multiple comparisons correction can be applied (for more details and options see the documentation for individual tests):
- 1: Permutation test for maximum statistic (samples=n) - Look for the maximum value of the test statistic in n permutations and calculate a p-value for each data point based on this distribution of maximum statistics.
- 2: Threshold-based clusters (samples=n, pmin=p) - Find clusters of data points where the original statistic exceeds a value corresponding to an uncorrected p-value of p. For each cluster, calculate the sum of the statistic values that are part of the cluster. Do the same in n permutations of the original data and retain for each permutation the value of the largest cluster. Evaluate all cluster values in the original data against the distribution of maximum cluster values (see [1]).
- 3: Threshold-free cluster enhancement (samples=n, tfce=True) - Similar to (1), but each statistical parameter map is first processed with the cluster-enhancement algorithm (see [2]). This is the most computationally intensive option.
Two-stage tests¶
Two-stage tests proceed by estimating parameters for a fixed effects model for
each subject, and then testing hypotheses on these parameter estimates on the
group level. Two-stage tests are implemented by fitting an LM
for each subject, and then combining them in a LMGroup
to
retrieve coefficients for group level statistics.
testnd.LM (y, model[, ds, coding, subject, sub]) |
Fixed effects linear model |
testnd.LMGroup (lms) |
Group level analysis for linear model LM objects |
References¶
[1] | Maris, E., & Oostenveld, R. (2007). Nonparametric statistical testing of EEG- and MEG-data. Journal of Neuroscience Methods, 164(1), 177-190. 10.1016/j.jneumeth.2007.03.024 |
[2] | Smith, S. M., and Nichols, T. E. (2009). Threshold-Free Cluster Enhancement: Addressing Problems of Smoothing, Threshold Dependence and Localisation in Cluster Inference. NeuroImage, 44(1), 83-98. 10.1016/j.neuroimage.2008.03.061 |
Plotting¶
Plot univariate data (Var
objects):
plot.Barplot (y[, x, match, sub, cells, …]) |
Barplot for a continuous variable |
plot.Boxplot (y[, x, match, sub, cells, …]) |
Boxplot for a continuous variable |
plot.PairwiseLegend ([size, trend]) |
Legend for colors used in pairwise comparisons |
plot.Correlation (y, x[, cat, sub, ds, c, …]) |
Plot the correlation between two variables |
plot.Histogram (y[, x, match, sub, ds, …]) |
Histogram plots with tests of normality |
plot.Regression (y, x[, cat, match, sub, ds, …]) |
Plot the regression of y on x |
plot.Timeplot (y, categories, time[, match, …]) |
Plot a variable over time |
Color tools for plotting:
plot.colors_for_categorial (x[, hue_start, cmap]) |
Automatically select colors for a categorial model |
plot.colors_for_oneway (cells[, hue_start, …]) |
Define colors for a single factor design |
plot.colors_for_twoway (x1_cells, x2_cells[, …]) |
Define cell colors for a two-way design |
plot.ColorBar (cmap, vmin[, vmax, label, …]) |
A color-bar for a matplotlib color-map |
plot.ColorGrid (row_cells, column_cells, colors) |
Plot colors for a two-way design in a grid |
plot.ColorList (colors[, cells, labels, h]) |
Plot colors with labels |
Plot uniform time-series:
plot.UTS (y[, xax, axtitle, ds, sub, xlabel, …]) |
Value by time plot for UTS data |
plot.UTSClusters (res[, pmax, ptrend, …]) |
Plot permutation cluster test results |
plot.UTSStat (y[, x, xax, match, sub, ds, …]) |
Plot statistics for a one-dimensional NDVar |
Plot multidimensional uniform time series:
plot.Array (y[, xax, xlabel, ylabel, …]) |
Plot UTS data to a rectangular grid. |
plot.Butterfly (y[, xax, sensors, axtitle, …]) |
Butterfly plot for NDVars |
Plot topographic maps of sensor space data:
plot.TopoArray (y[, xax, ds, sub, vmax, …]) |
Channel by sample plots with topomaps for individual time points |
plot.TopoButterfly (y[, xax, ds, sub, vmax, …]) |
Butterfly plot with corresponding topomaps |
plot.Topomap (y[, xax, ds, sub, vmax, vmin, …]) |
Plot individual topographies |
plot.TopomapBins (y[, xax, ds, sub, …]) |
Topomaps in time-bins |
Plot sensor layout maps:
plot.SensorMap (sensors[, labels, proj, …]) |
Plot sensor positions in 2 dimensions |
plot.SensorMaps (sensors[, select, proj, …]) |
Multiple views on a sensor layout. |
xax parameter¶
Many plots have an xax
parameter which is used to sort the data in y
into different categories and plot them on separate axes. xax
can be
specified through categorial data, or as a dimension in y
.
If a categorial data object is specified for xax
, y
is split into the
categories in xax
, and for every cell in xax
a separate subplot is
shown. For example, while
>>> plot.Butterfly('meg', ds=ds)
will create a single Butterfly plot of the average response,
>>> plot.Butterfly('meg', 'subject', ds=ds)
where 'subject'
is the xax
parameter, will create a separate subplot
for every subject with its average response.
A dimension on y
can be specified through a string starting with .
.
For example, to plot each case of meg
separately, use:
>>> plot.Butterfly('meg', '.case', ds=ds)
Layout¶
Most plots share certain layout keyword arguments. By default, all those parameters are determined automatically, but individual values can be specified manually by supplying them as keyword arguments.
- h, w : scalar
- Height and width of the figure.
- axh, axw : scalar
- Height and width of the axes.
- nrow, ncol : None | int
- Limit the number of rows/columns. If neither is specified, a square layout is produced.
- ax_aspect : scalar
- Width / height aspect of the axes.
- name : str
- Window title (i.e. not displayed on the figure itself).
Plots that accept these parameters can be identified by the **layout entry in their function signature.
GUI Interaction¶
By default, new plots are automatically shown and, if the Python interpreter is in interactive mode, the GUI main loop is started. This behavior can be controlled with two arguments when constructing a plot:
- show : bool
- Show the figure in the GUI (default True). Use False for creating figures and saving them without displaying them on the screen.
- run : bool
- Run the Eelbrain GUI app (default is True for interactive plotting and False in scripts).
The behavior can also be changed globally using configure() (see the Configuration section).
Plotting Brains¶
The plot.brain
module contains specialized functions to plot
NDVar
objects containing source space data.
For this it uses a subclass of PySurfer’s surfer.Brain
class.
The functions below allow quick plotting.
More specific control over the plots can be achieved through the
Brain
object that is returned.
plot.brain.brain (src[, cmap, vmin, vmax, …]) |
Create a PySurfer Brain object with a data layer |
plot.brain.butterfly (y[, cmap, vmin, vmax, …]) |
Shortcut for a Butterfly-plot with a time-linked brain plot |
plot.brain.cluster (cluster[, vmax]) |
Plot a spatio-temporal cluster |
plot.brain.dspm (src[, fmin, fmax, fmid]) |
Plot a source estimate with coloring for dSPM values (bipolar). |
plot.brain.p_map (p_map[, param_map, p0, p1, …]) |
Plot a map of p-values in source space. |
plot.brain.annot (annot[, subject, surf, …]) |
Plot the parcellation in an annotation file |
plot.brain.annot_legend (lh, rh, *args, **kwargs) |
Plot a legend for a freesurfer parcellation |
plot.brain.SequencePlotter () |
Grid of anatomical images in one figure |
plot._brain_object.Brain |
In order to make custom plots, a Brain
figure
without any data added can be created with
plot.brain.brain(ndvar.source, mask=False)
.
Surface options for plotting data on fsaverage
:
white |
smoothwm |
inflated_pre |
inflated |
inflated_avg |
sphere |
GUIs¶
Tools with a graphical user interface (GUI):
gui.select_components (path, ds[, sysname]) |
GUI for selecting ICA-components |
gui.select_epochs (ds[, data, accept, blink, …]) |
GUI for rejecting trials of MEG/EEG data |
Controlling the GUI Application¶
Eelbrain uses a wxPython based application to create GUIs. This GUI appears as a separate application with its own Dock icon. The way that control of this GUI is managed depends on the environment from which it is invoked.
When Eelbrain plots are created from within IPython, the GUI is managed in the
background and control returns immediately to the terminal. There might be cases
in which this is not desired, for example when running scripts. After execution
of a script finishes, the interpreter is terminated and all associated plots are
closed. To avoid this, the command gui.run(block=True)
can be inserted at
the end of the script, which will keep all gui elements open until the user
quits the GUI application (see gui.run()
below).
In interpreters other than IPython, input cannot be processed from the GUI
and the interpreter shell at the same time. In that case, the GUI application
is activated by default whenever a GUI is created in interactive mode
(this can be avoided by passing run=False
to any plotting function).
While the application is processing user input,
the shell can not be used. In order to return to the shell, quit the
application (the python/Quit Eelbrain menu command or Command-Q). In order to
return to the terminal without closing all windows, use the alternative
Go/Yield to Terminal command (Command-Alt-Q). To return to the application
from the shell, run gui.run()
. Beware that if you terminate the Python
session from the terminal, the application is not given a chance to ensure that
information in open windows is saved.
gui.run ([block]) |
Hand over command to the GUI (quit the GUI to return to the terminal) |
MNE-Experiment¶
The MneExperiment
class serves as a base class for analyzing MEG
data (gradiometer only) with MNE:
MneExperiment ([root, find_subjects]) |
Analyze an MEG experiment (gradiometer only) with MNE |
ROITestResult (subjects, samples, …) |
Test results for temporal tests in one or more ROIs |
See also
For the guide on working with the MneExperiment class see The MneExperiment Class.
Datasets¶
Datasets for experimenting and testing:
datasets.get_loftus_masson_1994 () |
Dataset used for illustration purposes by Loftus and Masson (1994) |
datasets.get_mne_sample ([tmin, tmax, …]) |
Load events and epochs from the MNE sample data |
datasets.get_uts ([utsnd, seed, nrm]) |
Create a sample Dataset with 60 cases and random data. |
datasets.get_uv ([seed, nrm]) |
Dataset with random univariate data |
Configuration¶
-
eelbrain.
configure
(n_workers=None, frame=None, autorun=None, show=None, format=None, figure_background=None, prompt_toolkit=None, animate=None, nice=None, tqdm=None)¶ Set basic configuration parameters for the current session
Parameters: - n_workers : bool | int
Number of worker processes to use in multiprocessing enabled computations. False to disable multiprocessing. True (default) to use as many processes as cores are available. Negative numbers to use all but n available CPUs.
- frame : bool
Open figures in the Eelbrain application. This provides additional functionality such as copying a figure to the clipboard. If False, open figures as normal matplotlib figures.
- autorun : bool
When a figure is created, automatically enter the GUI mainloop. By default, this is True when the figure is created in interactive mode but False when the figure is created in a script (in order to run the GUI at a specific point in a script, call
eelbrain.gui.run()
).- show : bool
Show plots on the screen when they’re created (disable this to create plots and save them without showing them on the screen).
- format : str
Default format for plots (for example “png”, “svg”, …).
- figure_background : bool | matplotlib color
While matplotlib uses a gray figure background by default, Eelbrain uses white. Set this parameter to False to use the default from matplotlib.rcParams, or set it to a valid matplotlib color value to use an arbitrary color. True reverts to the default white.
- prompt_toolkit : bool
In IPython 5, prompt_toolkit allows running the GUI main loop in parallel to the Terminal, meaning that the IPython terminal and GUI windows can be used without explicitly switching between Terminal and GUI. This feature is enabled by default, but can be disabled by setting prompt_toolkit=False.
- animate : bool
Animate plot navigation (default True).
- nice : int [-20, 19]
Scheduling priority for multiprocessing (larger number yields more to other processes; negative numbers require root privileges).
- tqdm : bool
Enable or disable
tqdm
progress bars.
Recipes¶
Group Level Analysis¶
To do group level analysis one usually wants to construct a Dataset
that contains results for each participant along with condition and subject
labels. The following illustration assumes functions that compute results
for a single subject and condition:
Given results by subject and condition, a Dataset can be constructed as follows:
>>> # create lists to collect data and labels
>>> ndvar_results = []
>>> scalar_results = []
>>> subjects = []
>>> conditions = []
>>> # collect data and labels
>>> for subject in ('s1', 's2', 's3', 's4'):
... for condition in ('c1', 'c2'):
... ndvar = result_for(subject, condition)
... s = scalar_result_for(subject, condition)
... ndvar_results.append(ndvar)
... scalar_results.append(s)
... subjects.append(subject)
... conditions.append(condition)
...
>>> # create a Dataset and convert the collected lists to appropriate format
>>> ds = Dataset()
>>> ds['subject'] = Factor(subjects, random=True) # treat as random effect
>>> ds['condition'] = Factor(conditions)
>>> ds['y'] = combine(ndvar_results)
>>> ds['s'] = Var(scalar_results)
Now this Dataset can be used for statistical analysis, for example ANOVA:
>>> res = testnd.anova('y', 'condition * subject', ds=ds)
Regression Design¶
The influence of a continuous predictor at the single-trial level can be tested by first calculating regression coefficients for each subject, and then performing a one-sample t-test across subjects to test the null hypothesis that the regression coefficients do not differ significantly from 0.
Assuming that ds_subject
is a Dataset
containing single trial data
for one subject, with data
the dependent variable and a predictor (called
predictor
):
>>> ds_subject
<Dataset 'example' n_cases=145 {'predictor':V, 'data':Vnd}>
>>> ds_subject['data']
<NDVar 'data': 145 (case) X 5120 (source) X 76 (time)>
>>> print(ds_subject)
predictor
---------
1.9085
0.30836
-0.58802
0.29686
...
The regression coefficient can be calculated the following way:
>>> beta = ds_subject.eval("data.ols(predictor)")
>>> beta
<NDVar 'ols': 1 (case) X 5120 (source) X 76 (time)>
Thus, in order to collect beta values for each subject, you would loop through subjects. We will call the NDVar with beta values ‘beta’:
>>> subjects = []
>>> betas = []
>>> for subject in ['R0001', 'R0002', 'R0003']:
... ds_subject = my_load_ds_for_subject_function(subject)
... beta = ds_subject.eval("data.ols(predictor, 'beta')")
... subjects.append(subject)
... betas.append(beta)
...
>>> ds = Dataset()
>>> ds['subject'] = Factor(subjects, random=True)
>>> ds['beta'] = combine(betas)
Now you can perform a one-sample t-test:
>>> res = testnd.ttest_1samp('beta', ...)
And analyze the results as for other testnd
tests.
Plots for Publication¶
When producing multiple plots it is useful to set some plotting parameters globally to ensure that they are consistent between plots, e.g.:
import matplotlib as mpl
mpl.rcParams['font.family'] = 'sans-serif'
mpl.rcParams['font.size'] = 8
for key in mpl.rcParams:
if 'width' in key:
mpl.rcParams[key] *= 0.5
mpl.rcParams['savefig.dpi'] = 300 # different from 'figure.dpi'!
Matplotlib’s tight_layout()
functionality provides an
easy way for plots to use the available space, and most Eelbrain plots use it
by default. However, when trying to produce multiple plots with identical
scaling it can lead to unwanted discrepancies. In this case, it is better to
define layout parameters globally and plot with the tight=False
argument:
mpl.rcParams['figure.subplot.left'] = 0.25
mpl.rcParams['figure.subplot.right'] = 0.95
mpl.rcParams['figure.subplot.bottom'] = 0.2
mpl.rcParams['figure.subplot.top'] = 0.95
plot.UTSStat('uts', 'A', ds=ds, w=5, tight=False)
# now we can produce a second plot without x-axis labels that has exactly
# the same scaling:
plot.UTSStat('uts', 'A % B', ds=ds, w=5, tight=False, xlabel=False, ticklabels=False)
If a script produces several plots, the GUI should not interrupt the script.
This can be achieved by setting the show=False
argument. In addition, it
is usually desirable to save the legend separately and combine it in a layout
application:
p = plot.UTSStat('uts', 'A', ds=ds, w=5, tight=False, show=False, legend=False)
p.save('plot.svg', transparent=True)
p.save_legend('legend.svg', transparent=True)
The MneExperiment
Class¶
MneExperiment is a base class for managing data analysis for an MEG experiment with MNE.
See also
MneExperiment
class reference for details on all available methods
Step by step¶
Contents
Setting up the file structure¶
-
MneExperiment.
sessions
¶
The first step is to define an MneExperiment
subclass with the name
of the experiment:
from eelbrain import *
class WordExperiment(MneExperiment):
path_version = 1
sessions = 'words'
Where sessions
is the name you included in your raw data files after
the subject identifier.
Once this basic experiment class is defined, it can be initialized without root
(i.e., without data files). This is useful to see the required file structure:
>>> e = WordExperiment()
>>> e.show_input_tree()
root
mri-sdir /mri
mri-dir /{mrisubject}
meg-sdir /meg
meg-dir /{subject}
raw-dir
trans-file /{mrisubject}-trans.fif
raw-file /{subject}_{session}-raw.fif
This output shows a template for the path structure according to which the input
files have to be organized. Assuming that root="/files"
, for a subject
called “R0001” this includes:
- MRI-directory at
/files/mri/R0001
- the raw data file at
/files/meg/R0001/R0001_words-raw.fif
(the session is called “words” which is specified inWordExperiment.sessions
) - the trans-file from the coregistration at
/files/meg/R0001/R0001-trans.fif
Once the required files are placed in this structure, the experiment class can be initialized with the proper root parameter, pointing to where the files are located:
>>> e = WordExperiment("/files")
The setup can be tested using MneExperiment.show_subjects()
, which shows
a list of the subjects that were discovered and the MRIs used:
>>> e.show_subjects()
# subject mri
-----------------------------------------
0 R0026 R0026
1 R0040 fsaverage * 0.92
2 R0176 fsaverage * 0.954746600461
...
Pre-processing¶
Make sure an appropriate pre-processing pipeline is defined as
MneExperiment.raw
.
To inspect raw data for a given pre-processing stage use:
>>> e.set(raw='1-40')
>>> y = e.load_raw(ndvar=True)
>>> p = plot.TopoButterfly(y, xlim=5)
This will plot 5 s excerpts and allow scrolling through the data.
Labeling events¶
Initially, events are only labeled with the trigger ID. Use the
MneExperiment.variables
settings to add labels.
For more complex designs and variables, you can override
MneExperiment.label_events()
.
Events are represented as Dataset
objects and can be inspected with
corresponding methods and functions, for example:
>>> e = WordExperiment("/files")
>>> ds = e.load_events()
>>> ds.head()
>>> print(table.frequencies('trigger', ds=ds))
Defining data epochs¶
Once events are properly labeled, define MneExperiment.epochs
. There is
one special epoch to define, which is called 'cov'
. This is the data epoch
that will be used to estimate the sensor noise covariance matrix for source
estimation.
Bad channels¶
Flat channels are automatically excluded from the analysis.
An initial check for noisy channels can be done by looking at the raw data (see
Pre-processing above).
If this inspection reveals bad channels, they can be excluded using
MneExperiment.make_bad_channels()
.
Another good check for bad channels is plotting the average evoked response, and looking for channels which are uncorrelated with neighboring channels. To plot the average before trial rejection, use:
>>> ds = e.load_epochs(epoch='epoch', reject=False)
>>> plot.TopoButterfly('meg', ds=ds)
The neighbor correlation can also be quantified, using:
>>> nc = neighbor_correlation(concatenate(ds['meg']))
>>> nc.sensor.names[nc < 0.3]
Datalist([u'MEG 099'])
A simple way to cycle through subjects when performing a given pre-processing
step is MneExperiment.next()
.
ICA¶
If preprocessing includes ICA, select which ICA components should be removed.
The experiment raw
state needs to be set to the ICA stage of the pipeline:
>>> e.set(raw='ica')
>>> e.make_ica_selection(epoch='epoch', decim=10)
Set epoch
to the epoch whose data you want to display in the GUI (see
MneExperiment.make_ica_selection()
for more information, in particular on
how to precompute ICA decomposition for all subjects).
In order to select ICA components for multiple subjects, a simple way to cycle
through subjects is MneExperiment.next()
, like:
>>> e.make_ica_selection(epoch='epoch', decim=10)
>>> e.next()
subject: 'R1801' -> 'R2079'
>>> e.make_ica_selection(epoch='epoch', decim=10)
>>> e.next()
subject: 'R2079' -> 'R2085'
...
Trial selection¶
For each primary epoch that is defined, bad trials can be rejected using
MneExperiment.make_rej()
. Rejections are specific to a given raw
state:
>>> e.set(raw='ica1-40')
>>> e.make_rej()
>>> e.next()
subject: 'R1801' -> 'R2079'
>>> e.make_rej()
...
To reject trials based on a pre-determined threshold, a loop can be used:
>>> for subject in e:
... e.make_rej(auto=1e-12)
...
Analysis¶
Finally, define MneExperiment.tests
and create a make-reports.py
script so that all reports can be updated by running a single script
(see Example).
Warning
If source files are changed (raw files, epoch rejection or bad channel
files, …) reports are not updated unless the corresponding
MneExperiment.make_report()
function is called again. For this reason
it is useful to have a script that calls MneExperiment.make_report()
for all desired reports. Running the script ensures that all reports are
up-to-date, and will only take seconds if nothing has to be recomputed.
Example¶
The following is a complete example for an experiment class definition file
(the source file can be found in the Eelbrain examples folder at
examples/experiment/sample_experiment.py
):
# Author: Christian Brodbeck <christianbrodbeck@nyu.edu>
"""
Sample MneExperiment. This experiment can be used with a sample dataset that
treats different parts of the recording from the MNE sample dataset as different
subjects. To produce the data directory for this experiment use (make sure
that the directory you specify exists)::
>>> from eelbrain import datasets
>>> datasets.setup_samples_experiment('~/Data')
Then you can use::
>>> from sample_experiment import SampleExperiment
>>> e = SampleExperiment("~/Data/SampleExperiment")
"""
from eelbrain import MneExperiment
FILTER_KWARGS = {
'filter_length': 'auto',
'l_trans_bandwidth': 'auto',
'h_trans_bandwidth': 'auto',
'phase': 'zero',
'fir_window': 'hamming',
'fir_design': 'firwin',
}
class SampleExperiment(MneExperiment):
owner = "me@nyu.edu"
path_version = 1
meg_system = 'neuromag306mag'
sessions = 'sample'
defaults = {
'epoch': 'target',
'select_clusters': 'all',
}
variables = {
'event': {(1, 2, 3, 4): 'target', 5: 'smiley', 32: 'button'},
'side': {(1, 3): 'left', (2, 4): 'right'},
'modality': {(1, 2): 'auditory', (3, 4): 'visual'}
}
raw = {
'tsss': {
'type': 'maxwell_filter',
'source': 'raw',
'kwargs': {'st_duration': 10.,
'ignore_ref': True,
'st_correlation': .9,
'st_only': True}},
'ica': {
'type': 'ica',
'source': 'tsss',
'session': 'sample',
'kwargs': {'n_components': 0.95,
'random_state': 0,
'method': 'fastica'}},
'ica1-40': {
'type': 'filter',
'source': 'ica',
'args': (1, 40),
'kwargs': FILTER_KWARGS},
}
epochs = {
# all target stimuli:
'target': {'sel': "event == 'target'", 'tmax': 0.3},
# only auditory stimulation
'auditory': {'base': 'target', 'sel': "modality == 'auditory'"},
# only visual stimulation
'visual': {'base': 'target', 'sel': "modality == 'visual'"},
# recombine auditory and visual
'av': {'sub_epochs': ('auditory', 'visual')},
}
tests = {
# T-test to compare left-sided vs right-sided stimulation
'left=right': {'kind': 'ttest_rel', 'model': 'side',
'c1': 'left', 'c0': 'right'},
# One-tailed test for auditory > visual stimulation
'a>v': {'kind': 'ttest_rel', 'model': 'modality',
'c1': 'auditory', 'c0': 'visual', 'tail': 1},
# Two-stage
'twostage': {'kind': 'two-stage',
'model': 'side % modality',
'stage 1': 'side_left + modality_a',
'vars': {'side_left': "side == 'left'",
'modality_a': "modality == 'auditory'"}}
}
Given the SampleExperiment
class definition above, the following is a
script that would compute/update analysis reports:
# skip test: data unavailable
"""A script that creates test reports for an MneExperiment subclass
"""
from sample_experiment import SampleExperiment, ROOT
# create the experiment class instance
e = SampleExperiment(ROOT)
# Use this to send an email to e.owner when the reports are finished or the
# script raises an error
with e.notification:
# Whole-brain test with default settings
e.make_report('a>v', mask='lobes', pmin=0.05, tstart=0.05, tstop=0.2)
# different inverse solution
e.make_report('a>v', mask='lobes', pmin=0.05, tstart=0.05, tstop=0.2,
inv='fixed-3-dSPM')
# test on a different epoch (using only auditory trials)
# note that inv is still 'fixed-3-dSPM' unless it is set again
e.make_report('left=right', mask='lobes', pmin=0.05, tstart=0.05, tstop=0.2,
epoch='auditory')
Experiment Definition¶
Contents
Basic setup¶
-
MneExperiment.
owner
¶
Set MneExperiment.owner
to your email address if you want to be able to
receive notifications. Whenever you run a sequence of commands with
mne_experiment.notification:
you will get an email once the respective code
has finished executing or run into an error, for example:
>>> e = MyExperiment()
>>> with e.notification:
... e.make_report('mytest', tstart=0.1, tstop=0.3)
...
will send you an email as soon as the report is finished (or the program encountered an error)
-
MneExperiment.
auto_delete_results
¶
Whenever a MneExperiment
instance is initialized with a valid
root
path, it checks whether changes in the class definition invalidate
previously computed results. By default, the user is prompted to confirm
the deletion of invalidated results. Set .auto_delete_results=True
to
delete them automatically without interrupting initialization.
-
MneExperiment.
screen_log_level
¶
Determines the amount of information displayed on the screen while using
an MneExperiment
(see logging
).
-
MneExperiment.
meg_system
¶
Starting with mne
0.13, fiff files converted from KIT files store
information about the system they were collected with. For files converted
earlier, the MneExperiment.meg_system
attribute needs to specify the
system the data were collected with. For data from NYU New York, the
correct value is meg_system="KIT-157"
.
-
MneExperiment.
path_version
¶
MneExperiment.path_version
determines the file naming scheme. If you
are starting a new experiment, set it to 1
to use the most recent file
naming scheme. If your experiment class was defined before Eelbrain version
0.13, set it to 0
.
-
MneExperiment.
trigger_shift
¶
Set this attribute to shift all trigger times by a constant (in seconds). For
example, with trigger_shift = 0.03
a trigger that originally occurred
35.10 seconds into the recording will be shifted to 35.13. If the trigger delay
differs between subjects, this attribute can also be a dictionary mapping
subject names to shift values, e.g.
trigger_shift = {'R0001': 0.02, 'R0002': 0.05, ...}
.
Defaults¶
-
MneExperiment.
defaults
¶
The defaults dictionary can contain default settings for experiment analysis parameters, e.g.:
defaults = {'epoch': 'my_epoch',
'cov': 'noreg',
'raw': '1-40'}
Pre-processing (raw)¶
-
MneExperiment.
raw
¶
Define a pre-processing pipeline as a series of processing steps.
The default pre-processing pipeline is defined as follows:
default_raw = {
'0-40': {
'source': 'raw', 'type': 'filter', 'args': (None, 40),
'kwargs': {'method': 'iir'}},
'0.1-40': {
'source': 'raw', 'type': 'filter', 'args': (0.1, 40),
'kwargs': {'l_trans_bandwidth': 0.08, 'filter_length': '60s'}},
'0.2-40': {
'source': 'raw', 'type': 'filter', 'args': (0.2, 40),
'kwargs': {'l_trans_bandwidth': 0.08, 'filter_length': '60s'}},
'1-40': {
'source': 'raw', 'type': 'filter', 'args': (1, 40),
'kwargs': {'method': 'iir'}},
}
Additional pipes can be added in a MneExperiment.raw
attribute.
mne
has occasionally changed its default filter settings. Eelbrain tries to
correct for this, but can’t guarantee it. For this reason it is advantageous
to fully define filter parameters when starting a new experiment, to both use
the newest settings and keep them consistent over time, for example:
# as of mne 0.16
FILTER_KWARGS = {
'filter_length': 'auto',
'l_trans_bandwidth': 'auto',
'h_trans_bandwidth': 'auto',
'phase': 'zero',
'fir_window': 'hamming',
'fir_design': 'firwin',
}
For example, to use TSSS, ICA and finally a band-pass filter:

raw = {
    'tsss': {
        'type': 'maxwell_filter',
        'source': 'raw',
        'kwargs': {'st_duration': 10.,
                   'ignore_ref': True,
                   'st_correlation': .9,
                   'st_only': True}},
    '1-40': {
        'type': 'filter',
        'source': 'tsss',
        'args': (1, 40),
        'kwargs': FILTER_KWARGS},
    'ica': {
        'type': 'ica',
        'source': 'tsss',
        'session': 'session',
        'kwargs': {'n_components': 0.99,
                   'random_state': 0,
                   'method': 'extended-infomax'}},
    'ica1-40': {
        'type': 'filter',
        'source': 'ica',
        'args': (1, 40),
        'kwargs': FILTER_KWARGS},
}
Each raw dictionary entry constitutes one pipe, with an input (source)
from another pipe or the raw data ('raw'), a 'type', and the following
type-specific parameters:
filter¶
args (tuple) and kwargs (dict) for mne.io.Raw.filter().

ica¶
- session : str | tuple of str
  One or several sessions from which the raw data is used for estimating ICA
  components.
- kwargs : dict
  mne.preprocessing.ICA parameters.

Use MneExperiment.make_ica_selection() to select ICA components to reject
for each subject.

tsss¶
Temporal signal space separation; kwargs (dict) with
mne.preprocessing.maxwell_filter() parameters.
Event variables¶

MneExperiment.variables¶
Categorial event variables can be specified in a dictionary mapping variable
names to trigger-schemes, for example:

class MyExperiment(MneExperiment):

    variables = {'word_type': {1: 'adjective', 2: 'noun', 3: 'verb',
                               (4, 5, 6): 'other'}}

This defines a variable called “word_type”: on this variable, all events with
trigger 1 have the value “adjective”, events with trigger 2 the value “noun”,
and events with trigger 3 the value “verb”. The last entry shows how to map
multiple trigger values to the same value, i.e. all events with a trigger
value of 4, 5 or 6 are labelled “other”. Unmentioned trigger values are
assigned the empty string ('').
These variables are assigned to the events-Dataset in
MneExperiment.label_events().
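The trigger-scheme semantics can be sketched in plain Python (label_triggers is a hypothetical helper, not part of Eelbrain; the real labeling happens in MneExperiment.label_events()):

```python
def label_triggers(triggers, scheme):
    """Map each trigger value to its label; unmentioned triggers get ''."""
    # expand tuple keys like (4, 5, 6) so every trigger maps to one label
    flat = {}
    for key, label in scheme.items():
        if isinstance(key, tuple):
            for trigger in key:
                flat[trigger] = label
        else:
            flat[key] = label
    return [flat.get(trigger, '') for trigger in triggers]

scheme = {1: 'adjective', 2: 'noun', 3: 'verb', (4, 5, 6): 'other'}
label_triggers([1, 3, 5, 9], scheme)  # → ['adjective', 'verb', 'other', '']
```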
Epochs¶

MneExperiment.epochs¶
Epochs are specified as a {str: dict} dictionary. Keys are names for epochs,
and values are corresponding definitions. Epoch definitions can use the
following keys:

- sel : str
  Expression which evaluates in the events Dataset to the index of the events
  included in this epoch specification.
- tmin : float
  Start of the epoch (default -0.1).
- tmax : float
  End of the epoch (default 0.6).
- decim : int
  Decimate the data by this factor (i.e., only keep every decim’th sample;
  default 5).
- baseline : tuple
  The baseline of the epoch (default (None, 0)).
- n_cases : int
  Expected number of epochs. If n_cases is defined, a RuntimeError will be
  raised whenever the actual number of matching events differs.
- trigger_shift : float | str
  Shift event triggers before extracting the data (in seconds). Can be a
  float to shift all triggers by the same value, or a str indicating an event
  variable that specifies the trigger shift for each trigger separately.
  For secondary epochs the trigger_shift is applied additively with the
  trigger_shift of their base epoch.
- post_baseline_trigger_shift : str
  Shift the trigger (i.e., where epoch time = 0) after baseline correction.
  The value of this entry has to be the name of an event variable providing
  for each epoch the actual amount of time shift (in seconds). If the
  post_baseline_trigger_shift parameter is specified, the parameters
  post_baseline_trigger_shift_min and post_baseline_trigger_shift_max are
  also needed, specifying the smallest and largest possible shift. These are
  used to crop the resulting epochs appropriately, to the region from
  new_tmin = epoch['tmin'] - post_baseline_trigger_shift_min to
  new_tmax = epoch['tmax'] - post_baseline_trigger_shift_max.
- vars : dict
  Add new variables only for this epoch. Each entry specifies a variable with
  the following schema: {name: definition}. definition can be either a string
  that is evaluated in the events-Dataset, or a (source_name, {value: code})
  tuple. source_name can also be an interaction, in which case cells are
  joined with spaces ("f1_cell f2_cell").
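As a rough sketch of what decim does (downsampling by keeping every decim’th sample; the actual resampling is handled by mne, and the sampling rate here is an assumption for illustration):

```python
import numpy as np

sfreq = 1000.0  # assumed original sampling rate in Hz
decim = 5
data = np.random.randn(int(sfreq))  # one second of fake single-channel data

# keep every decim'th sample
decimated = data[::decim]
new_sfreq = sfreq / decim  # effective sampling rate after decimation
```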
A secondary epoch can be defined using a base entry. Secondary epochs inherit
trial rejection and all parameters from a primary epoch (the base).
Additional parameters can be used to modify the definition; for example, sel
can be used to select a subset of the events in the primary epoch.

- base : str
  Name of the epoch whose parameters provide defaults for all parameters.
  Additional parameters override parameters of the base epoch, with the
  exception of trigger_shift, which is applied additively to the
  trigger_shift of the base epoch.

A superset epoch is an epoch that combines multiple other epochs.
A superset epoch can be defined with a single sub_epochs parameter:

- sub_epochs : tuple of str
  Tuple of epoch names. These epochs are combined to form the current epoch.
  Epochs are merged at the level of events, so the base epochs can not
  contain post-baseline trigger shifts, which are applied after loading data
  (the super-epoch itself, however, can have a post-baseline trigger shift).
Examples:

epochs = {
    # some primary epochs:
    'picture': {'sel': "stimulus == 'picture'"},
    'word': {'sel': "stimulus == 'word'"},
    # use the picture baseline for the sensor covariance estimate:
    'cov': {'base': 'picture', 'tmax': 0},
    # another secondary epoch:
    'animal_words': {'base': 'word', 'sel': "word_type == 'animal'"},
    # a superset-epoch:
    'all_stimuli': {'sub_epochs': ('picture', 'word')},
}
Tests¶

MneExperiment.tests¶
The MneExperiment.tests dictionary defines statistical tests that apply to
the experiment’s data. Each test is defined as a dictionary. The dictionary’s
"kind" entry defines the test (e.g., ANOVA, related-samples t-test, …). The
other entries specify the details of the test and depend on the test kind
(see the subsections on specific tests below).

- kind : ‘anova’ | ‘ttest_rel’ | ‘ttest_ind’ | ‘t_contrast_rel’ | ‘two-stage’
  The test kind.

Example:

tests = {'my_anova': {'kind': 'anova', 'model': 'noise % word_type',
                      'x': 'noise * word_type * subject'},
         'my_ttest': {'kind': 'ttest_rel', 'model': 'noise',
                      'c1': 'a_lot_of_noise', 'c0': 'no_noise'}}
anova¶
- x : str
  ANOVA model (e.g., "x * y * subject"). The ANOVA model has to be fully
  specified and include subject.
- model : str
  The model which defines the cells into which the data is divided before
  computing the ANOVA. This parameter can be left out if it includes the same
  variables as x (excluding "subject"). Otherwise, the model should be
  specified in the "x % y" format (like interaction definitions) where x and
  y are variables in the experiment’s events.

Example:

tests = {
    'one_way': {'kind': 'anova', 'x': 'word_type * subject'},
    'two_way': {'kind': 'anova', 'x': 'word_type * meaning * subject'},
}
ttest_rel¶
- model : str
  The model which defines the cells that are used in the test. It is
  specified in the "x % y" format (like interaction definitions) where x and
  y are variables in the experiment’s events.
- c1 : str | tuple
  The experimental condition. If the model is a single factor, the condition
  is a str specifying a value on that factor. If model is composed of several
  factors, the cell is defined as a tuple of str, one value on each of the
  factors.
- c0 : str | tuple
  The control condition, defined like c1.
- tail : int (optional)
  Tailedness of the test. 0 for two-tailed (default), 1 for upper tail and
  -1 for lower tail.

Example:

tests = {'my_ttest': {'kind': 'ttest_rel', 'model': 'noise',
                      'c1': 'a_lot_of_noise', 'c0': 'no_noise'}}
ttest_ind¶
- model : str
  The model which defines the cells that are used in the test. Usually
  "group".
- c1 : str | tuple
  The experimental group. Should be a group name.
- c0 : str | tuple
  The control group, defined like c1.
- tail : int (optional)
  Tailedness of the test. 0 for two-tailed (default), 1 for upper tail and
  -1 for lower tail.

Example:

tests = {'group_difference': {'kind': 'ttest_ind', 'model': 'group',
                              'c1': 'group_1', 'c0': 'group_2'}}
t_contrast_rel¶
Contrasts involving different t-maps (see testnd.t_contrast_rel).

- model : str
  The model which defines the cells that are used in the test. It is
  specified in the "x % y" format (like interaction definitions) where x and
  y are variables in the experiment’s events.
- contrast : str
  Contrast specification using cells from the specified model (see the test
  documentation).
- tail : int (optional)
  Tailedness of the test. 0 for two-tailed (default), 1 for upper tail and
  -1 for lower tail.

Example:

tests = {'a_b_intersection': {'kind': 't_contrast_rel', 'model': 'abc',
                              'contrast': 'min(a > c, b > c)', 'tail': 1}}
two-stage¶
Two-stage test. Stage 1: fit a regression model to the data of each subject.
Stage 2: test the coefficients from stage 1 against 0 across subjects.

- stage 1 : str
  Stage 1 model specification. Coding for categorial predictors uses 0/1
  dummy coding.
- vars : dict (optional)
  Add new variables for the stage 1 model. This is useful for specifying
  coding schemes based on categorial variables. Each entry specifies a
  variable with the following schema: {name: definition}. definition can be
  either a string that is evaluated in the events-Dataset, or a
  (source_name, {value: code}) tuple (see example below). source_name can
  also be an interaction, in which case cells are joined with spaces
  ("f1_cell f2_cell").
- model : str (optional)
  This parameter can be supplied to perform stage 1 tests on condition
  averages. If model is not specified, the stage 1 model is fit on single
  trial data.

Examples: The 'a_x_b' test below assumes two categorical variables present in
the events, ‘a’ with values ‘a1’ and ‘a2’, and ‘b’ with values ‘b1’ and ‘b2’,
which are recoded into 0/1 codes. The 'a_x_time' test definition uses the
“index” variable, which is always present, specifies the chronological index
of each event within subject as an integer count, and can be used to test for
change over time. Because these variables are numeric, interactions can be
computed by multiplication:
tests = {'word_basic': {'kind': 'two-stage',
                        'vars': {'wordlength': 'word.label_length()'},
                        'stage 1': 'wordlength'},
         'a_x_b': {'kind': 'two-stage',
                   'vars': {'a_num': ('a', {'a1': 0, 'a2': 1}),
                            'b_num': ('b', {'b1': 0, 'b2': 1})},
                   'stage 1': "a_num + b_num + a_num * b_num + index + a_num * index"},
         'a_x_time': {'kind': 'two-stage',
                      'vars': {'a_num': ('a', {'a1': 0, 'a2': 1})},
                      'stage 1': "a_num + index + a_num * index"},
         'ab_linear': {'kind': 'two-stage',
                       'vars': {'ab': ('a%b', {'a1 b1': 0, 'a1 b2': 1, 'a2 b1': 1, 'a2 b2': 2})},
                       'stage 1': "ab"},
         }
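The recoding and interaction steps used in the 'a_x_b' definition can be sketched in plain Python (recode is a hypothetical helper; the real recoding happens inside MneExperiment):

```python
def recode(values, codes):
    """Recode a categorial variable into numeric codes,
    as in the ('a', {'a1': 0, 'a2': 1}) tuple definition."""
    return [codes[v] for v in values]

# fake event values for two categorial variables
a = ['a1', 'a2', 'a2', 'a1']
b = ['b1', 'b1', 'b2', 'b2']

a_num = recode(a, {'a1': 0, 'a2': 1})  # → [0, 1, 1, 0]
b_num = recode(b, {'b1': 0, 'b2': 1})  # → [0, 0, 1, 1]

# because the codes are numeric, the interaction is element-wise multiplication
a_x_b = [x * y for x, y in zip(a_num, b_num)]  # → [0, 0, 1, 0]
```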
Subject groups¶

MneExperiment.groups¶
A subject group called 'all' containing all subjects is always implicitly
defined. Additional subject groups can be defined in MneExperiment.groups
in a dictionary with {name: group_definition} entries. The simplest group
definition is a tuple of subject names, e.g. ("R0026", "R0042", "R0066").
In addition, a group_definition can be a dictionary with the following
entries:

- base : str
  The name of the group to base the new group on.
- exclude : tuple of str
  A list of subjects to exclude (e.g., ("R0026", "R0042", "R0066")).
Examples:

groups = {
    'some': ("R0026", "R0042", "R0066"),
    'others': {'base': 'all', 'exclude': ("R0666",)},
    # some, but without R0042:
    'some_less': {'base': 'some', 'exclude': ("R0042",)},
}
Parcellations (parcs)¶

MneExperiment.parcs¶
The parcellation determines how the brain surface is divided into regions.
A number of standard parcellations are automatically defined (see
parc/mask (parcellations) below). Additional parcellations can be defined in
the MneExperiment.parcs dictionary with {name: parc_definition} entries.
There are several different ways in which parcellations can be defined,
described below.

Each parc_definition can have a "views" entry to set the views shown in
anatomical plots, e.g. {"views": ("medial", "lateral")}.
Recombinations¶
Recombinations of existing parcellations can be defined as dictionaries with
the following entries:

- kind : 'combination'
  Has to be ‘combination’.
- base : str
  The name of the parcellation that provides the input labels.
- labels : dict {str: str}
  New labels to create in {name: expression} format. All label names should
  be composed of alphanumeric characters (plus underscore) and should not
  contain -hemi tags. In order to create a given label on only one
  hemisphere, add the -hemi tag in the name (not in the expression), e.g.
  {'occipitotemporal-lh': "occipital + temporal"}.
Examples (these are pre-defined parcellations):

parcs = {'lobes-op': {'kind': 'combination',
                      'base': 'lobes',
                      'labels': {'occipitoparietal': "occipital + parietal"}},
         'lobes-ot': {'kind': 'combination',
                      'base': 'lobes',
                      'labels': {'occipitotemporal': "occipital + temporal"}}}

An example using a split label:

parcs = {
    'medial': {
        'kind': 'combination',
        'base': 'aparc',
        'labels': {
            'medialparietal': 'precuneus + posteriorcingulate',
            'medialfrontal': 'medialorbitofrontal + '
                             'rostralanteriorcingulate + '
                             'split(superiorfrontal, 3)[2]',
        },
        'views': 'medial',
    },
}
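The "+" in a label expression combines the input labels; a rough sketch with plain Python sets, assuming each label is represented by its vertex numbers (the vertex values here are made up for illustration):

```python
# hypothetical vertex sets for two input labels
occipital = {10, 11, 12}
temporal = {12, 20, 21}

# "occipital + temporal" merges the two labels' vertices
occipitotemporal = occipital | temporal  # union of the vertex sets
```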
MNI coordinates¶
Labels can be constructed around known MNI coordinates using the following
entries:

- kind : 'seeded'
  Has to be ‘seeded’.
- seeds : dict
  {name: seed(s)} dictionary, where names are strings including -hemi tags
  (e.g., "mylabel-lh") and seed(s) are array-like, specifying one or more
  seed coordinates (shape (3,) or (n_seeds, 3)).
- mask : str
  Name of a parcellation to use as a mask (i.e., anything that is “unknown”
  in that parcellation is excluded from the new parcellation). Use
  {'mask': 'lobes'} to exclude the subcortical areas around the diencephalon.

For each seed entry, the source space vertex closest to the given MNI
coordinate will be used as the actual seed, and a label will be created
including all points with a surface distance smaller than a given extent from
the seed vertex/vertices. The extent is determined when setting the parc as an
analysis parameter, as in e.set(parc="myparc-25"), which specifies a radius of
25 mm.
Example:

parcs = {'stg': {'kind': 'seeded',
                 'mask': 'lobes',
                 'seeds': {'anteriorstg-lh': ((-54, 10, -8), (-47, 14, -28)),
                           'middlestg-lh': (-66, -24, 8),
                           'posteriorstg-lh': (-54, -57, 16)}}}
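Finding the source space vertex closest to a seed amounts to a nearest-neighbour search; a minimal numpy sketch (nearest_vertex and the coordinates are made up for illustration; the real lookup uses the subject’s source space):

```python
import numpy as np

def nearest_vertex(seed, vertex_coords):
    """Index of the vertex closest (Euclidean distance) to a seed coordinate."""
    dists = np.linalg.norm(vertex_coords - np.asarray(seed), axis=1)
    return int(np.argmin(dists))

# three fake source-space vertex coordinates (mm)
coords = np.array([[-54., 10., -8.],
                   [-66., -24., 8.],
                   [-54., -57., 16.]])
nearest_vertex((-65., -25., 9.), coords)  # → 1
```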
Individual coordinates¶
Labels can also be constructed from subject-specific seeds. These work like
MNI-coordinate parcellations, except that seeds are provided for each subject.
Example:
parcs = {
'stg': {
'kind': 'individual seeded',
'mask': 'lobes',
'seeds': {
'anteriorstg-lh': {
'R0001': (-54, 10, -8),
'R0002': (-47, 14, -28),
},
'middlestg-lh': {
'R0001': (-66, -24, 8),
'R0002': (-60, -26, 9),
}
}
}
}
Externally created parcellations¶
For parcellations that are created by the user, the following two definitions
determine how they are handled:

- "subject_parc"
  Parcellations that are created outside Eelbrain for each subject. These
  parcellations are automatically generated only for scaled brains; for
  subjects’ MRIs the user is responsible for creating the respective
  annot-files.
- "fsaverage_parc"
  Parcellations that are defined for the fsaverage brain and should be
  morphed to every other subject’s brain. These parcellations are
  automatically morphed to individual subjects’ MRIs.

Examples (pre-defined parcellations):

parcs = {'aparc': 'subject_parc',
         'PALS_B12_Brodmann': 'fsaverage_parc'}
Visualization defaults¶

MneExperiment.brain_plot_defaults¶
The MneExperiment.brain_plot_defaults dictionary can contain options that
change the defaults for brain plots (for reports and movies). The following
options are available:

- surf : ‘inflated’ | ‘pial’ | ‘smoothwm’ | ‘sphere’ | ‘white’
  Freesurfer surface to use as brain geometry.
- views : str | iterator of str
  View or views to show in the figure. Can also be set for each parcellation;
  see MneExperiment.parcs.
- foreground : mayavi color
  Figure foreground color (i.e., the text color).
- background : mayavi color
  Figure background color.
- smoothing_steps : None | int
  Number of smoothing steps used to display the data.
Analysis parameters¶
These are parameters that can be set after an MneExperiment has been
initialized to affect the analysis, for example:

>>> my_experiment = MneExperiment()
>>> my_experiment.set(raw='1-40', cov='noreg')

sets up my_experiment to use raw files filtered with a 1-40 Hz band-pass
filter, and to use sensor covariance matrices without regularization.
raw¶
Which raw FIFF files to use. Can be customized (see MneExperiment.raw).
The default values are:

- 'raw'
  The unfiltered files (as they were added to the data).
- '0-40' (default)
  Low-pass filtered under 40 Hz.
- '0.1-40'
  Band-pass filtered between 0.1 and 40 Hz.
- '1-40'
  Band-pass filtered between 1 and 40 Hz.
group¶
Any group defined in MneExperiment.groups. Restricts the analysis to that
group of subjects.

epoch¶
Any epoch defined in MneExperiment.epochs. Specifies the epoch on which the
analysis is conducted.
rej (trial rejection)¶
Trial rejection can be turned off with e.set(rej=''), meaning that no trials
are rejected, and turned back on with e.set(rej='man'), meaning that the
corresponding rejection files are used.
equalize_evoked_count¶
By default, the analysis uses all epochs marked as good during rejection. Set
equalize_evoked_count='eq' to discard trials so that the same number of
epochs goes into each cell of the model.

- '' (default)
  Use all epochs.
- 'eq'
  Make sure the same number of epochs is used in each cell by discarding
  epochs.
cov¶
The method for correcting the sensor covariance.

- 'noreg'
  Use raw covariance as estimated from the data (do not regularize).
- 'bestreg' (default)
  Find the regularization parameter that leads to optimal whitening of the
  baseline.
- 'reg'
  Use the default regularization parameter (0.1).
- 'auto'
  Use automatic selection of the optimal regularization method.
inv¶
To set the inverse solution, use MneExperiment.set_inv().
parc/mask (parcellations)¶
The parcellation determines how the brain surface is divided into regions.
Parcellations are mainly used in tests and report generation:

- parc or mask arguments for MneExperiment.make_report()
- parc argument to MneExperiment.make_report_roi()

When source estimates are loaded, the parcellation can also be used to index
regions in the source estimates. Predefined parcellations:

- Freesurfer parcellations
  aparc.a2005s, aparc.a2009s, aparc, aparc.DKTatlas, PALS_B12_Brodmann,
  PALS_B12_Lobes, PALS_B12_OrbitoFrontal, PALS_B12_Visuotopic.
- lobes
  Modified version of PALS_B12_Lobes in which the limbic lobe is merged into
  the other 4 lobes.
- lobes-op
  One large region encompassing occipital and parietal lobe in each
  hemisphere.
- lobes-ot
  One large region encompassing occipital and temporal lobe in each
  hemisphere.
connectivity¶
Possible values: '', 'link-midline'
Connectivity refers to the edges connecting data channels (sensors for sensor
space data and sources for source space data). These edges are used to find
clusters in cluster-based permutation tests. For source spaces, the default is
to use FreeSurfer surfaces in which the two hemispheres are unconnected. By
setting connectivity='link-midline'
, this default connectivity can be
modified so that the midline gyri of the two hemispheres get linked at sources
that are at most 15 mm apart. This parameter currently does not affect sensor
space connectivity.
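The midline-linking rule can be sketched as a distance threshold between left- and right-hemisphere source coordinates (midline_links and the coordinates are made up for illustration; the real implementation operates on the surface source space):

```python
import numpy as np

def midline_links(lh_coords, rh_coords, max_dist=0.015):
    """Pairs of (lh, rh) source indices that are at most max_dist apart
    (coordinates in meters, so 0.015 corresponds to 15 mm)."""
    links = []
    for i, lh in enumerate(lh_coords):
        for j, rh in enumerate(rh_coords):
            if np.linalg.norm(lh - rh) <= max_dist:
                links.append((i, j))
    return links

# two fake sources per hemisphere; only the first pair straddles the midline
lh = np.array([[-0.002, 0.05, 0.04], [-0.04, 0.02, 0.03]])
rh = np.array([[0.002, 0.05, 0.04], [0.04, 0.02, 0.03]])
midline_links(lh, rh)  # → [(0, 0)]
```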
select_clusters (cluster selection criteria)¶
In thresholded cluster tests, clusters are initially filtered with a minimum
size criterion. This can be changed with the select_clusters analysis
parameter, with the following options:

name         | min time | min sources | min sensors
-------------|----------|-------------|------------
"all"        |          |             |
"10ms"       | 10 ms    | 10          | 4
"" (default) | 25 ms    | 10          | 4
"large"      | 25 ms    | 20          | 8

To change the cluster selection criterion use, for example:

>>> e.set(select_clusters='all')
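The filtering that these criteria imply can be sketched in plain Python (the cluster dicts, field names and the helper below are hypothetical, for illustration only; the real criteria also include a sensor count):

```python
# minimum cluster duration (s) and source count per criterion, from the table
CRITERIA = {
    'all': {},
    '10ms': {'duration': 0.010, 'n_sources': 10},
    '': {'duration': 0.025, 'n_sources': 10},
    'large': {'duration': 0.025, 'n_sources': 20},
}

def filter_clusters(clusters, criterion=''):
    """Keep only clusters that meet every minimum in the chosen criterion."""
    minima = CRITERIA[criterion]
    return [c for c in clusters
            if all(c[key] >= value for key, value in minima.items())]

clusters = [{'duration': 0.030, 'n_sources': 25},
            {'duration': 0.012, 'n_sources': 5}]
filter_clusters(clusters, 'large')  # keeps only the first cluster
```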
See also
- Wiki on GitHub
- Mailing list for announcements
- Source code on GitHub
- Eelbrain on the Python Package Index
- Example scripts
Eelbrain relies on NumPy, SciPy, Matplotlib, MNE-Python, PySurfer, WxPython, Cython and incorporates icons from the Tango Desktop Project.
Current funding: National Institutes of Health (NIH) grant R01-DC-014085 (since 2016). Past funding: NYU Abu Dhabi Institute grant G1001 (2011-2016).