Commit baea958 ("wiki plugins"), committed Jul 11, 2024, parent 09a2169

29 files changed: +3731 −0 lines

‎.gitignore (+2 −0)

    @@ -3,3 +3,5 @@ _site/
     _config-claire.yml
     Gemfile.lock
     code/plugins/github/
    +.*/.DS_Store
    +.DS_Store

‎plugins/NFT/Chapter_01_Getting_Started_with_NFT.md (+131 −0)

---
layout: default
title: Chapter_01_Getting_Started_with_NFT
long_title: Chapter_01_Getting_Started_with_NFT
parent: NFT
grand_parent: Plugins
---
Introduction
------------

The Neuroelectromagnetic Forward Head Modeling Toolbox is an open-source
software toolbox running under MATLAB (The Mathworks, Inc.) for
generating realistic head models from available data (MRI and/or
electrode locations) and for solving the forward problem of
electro-magnetic source imaging. The toolbox includes tools for
segmenting scalp, skull, cerebrospinal fluid (CSF) and brain tissues
from T1-weighted magnetic resonance (MR) images. After extracting the
segmented tissue volumes, mesh generation can be performed. When MR
images are not available, it is possible to warp a template head model
to measured electrode locations to obtain a better-fitting head model.
The toolbox also includes co-registration of electrodes with the scalp
mesh and generation of a uniform source space inside the brain volume
to be used in coarse source localization. The Boundary Element Method
(BEM) is used for the numerical solution of the forward problem. Toolbox
functions can be called either from a graphic user interface or from the
command line. Function help messages and a tutorial are included. The
toolbox is freely available under the GNU Public License for
noncommercial use and open source development.

The toolbox uses the following third-party tools and libraries for
segmentation, mesh generation and forward problem solution. The source
code for these tools is available.

1. ASC - for triangulation of 3D volumes.

2. Qslim - for mesh coarsening.

3. Matitk - MATLAB interface to the ITK image processing toolkit.

4. Metu-bem - Boundary Element Method solver.

The NFT toolbox provides a user interface (UI) for segmentation, mesh
generation and for creating the numerical head model. It also has a
well-defined MATLAB command-line interface.

This manual explains how to use the NFT toolbox. The head modeling UI,
the command-line API and the data structures are described. An overview
of the implementation is provided.

The next section describes the installation of the toolbox. The Getting
Started section provides an overview of the interface. Head modeling
from 3D MR images is described next, followed by head modeling from
template warping. This is followed by a section on forward modeling and
examples. The [final
section](Chapter_05_NFT_Commands_and_Functions "wikilink") is a
summary of all toolbox functions and commands.

Installation and Configuration
------------------------------

This section describes installation and configuration of the NFT
Toolbox. The following steps are necessary for a proper installation of
the toolbox (a command-line sketch follows the list):

1. Extract or copy the toolbox directory to a suitable place on your
computer file system.

2. The extracted directory will contain m-files and C++ executables.

3. Add the toolbox directory to the MATLAB path. You can use the File →
Set Path menu item or the addpath() function. Under Linux/Unix, you may
add the directory to the MATLABPATH environment variable.
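
For example, from the MATLAB prompt (the install location below is
illustrative, not a toolbox default):

```matlab
% Add the NFT directory to the MATLAB path and keep it across sessions.
addpath('/home/user/toolboxes/NFT');   % replace with your extracted directory
savepath;
```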

The toolbox can also make use of the MATLAB Parallel Processing Toolbox
(if installed) to distribute the computation of the transfer and
lead-field matrices to multiple processors. To do this, before running
NFT, the user must simply enter

    >> matlabpool(n)   % where n is the number of compute nodes available

In parallel mode, wait bars do not appear while computing the transfer
and lead-field matrices.
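
Note that matlabpool was removed from MATLAB in release R2015a; on newer
releases the equivalent call is parpool:

```matlab
% On MATLAB R2015a or later, open the worker pool with parpool instead:
parpool(4)   % 4 is an example worker count
```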

Getting Started
---------------

The toolbox is started by typing Neuroelectromagnetic_Forward_Modeling_Toolbox
or NFT at the MATLAB command window. The main window appears as shown in
Figure 1. This window is divided into
three panels. The first panel is used to select the working folder, and
to name the subject and the session. The NFT toolbox requires a subject
folder to be specified at startup. All subject-specific output is saved
into this folder. The filenames are derived from the subject and session
names entered into this panel. The second panel is the Head modeling
panel. The head model can either be created from MR images, or a
template head model can be warped to digitized sensors. The head
modeling panel provides the following operations when creating a head
model from MR images:

![NFT_ui](NFT_ui.png)

**Image Segmentation**

Interface for tissue classification from 3D MR images.

**Mesh Generation**

Uses the segmentation results to generate realistic BEM meshes.

**Source Space Generation**

Generates a regular grid of sources within the brain mesh.

**Electrode Co-Registration**

Registers digitized electrode locations to the scalp mesh.

When generating a template-based head model from digitized electrode
positions, the only option is Template Warping. The final panel in the
main menu is for Forward Model Generation. This opens the Forward
Model Generation interface, which is used to compute the BEM coefficient
matrix, create the transfer matrices for each sensor, and generate lead
field matrices for a source distribution.

‎plugins/NFT/Chapter_02_Head_Modeling_from_MR_Images.md (+348 −0)

---
layout: default
title: Chapter_02_Head_Modeling_from_MR_Images
long_title: Chapter_02_Head_Modeling_from_MR_Images
parent: NFT
grand_parent: Plugins
---
The steps of head modeling are segmentation, mesh generation, and
co-registration of electrode locations with the scalp surface. The user
may also generate a source space to be used in the solution of the inverse
problem. Figure 2 shows the steps of head modeling using MR images.

<center>

![Figure 2: steps of head modeling using MR images](NFM_Toolboox_UsersManual_html_2aaa1b22.gif)

</center>

Each step in realistic head modeling is implemented as a separate GUI
module reachable from the main menu. These modules are described in the
following sub-sections.

Segmentation
------------

The first step in segmentation is to load the MR image. The input of the
segmentation module is a 3-D sagittal T1-weighted MR image. The image
has to be in Analyze format and the voxel size needs to be 1×1×1
mm. To prepare the image for this toolbox, one may use the Freesurfer
software (http://surfer.nmr.mgh.harvard.edu/) to perform the following
operations:

1. Inhomogeneity correction:

        mri_nu_correct.mni --i input_volume --o output_volume --n 2

2. Conversion of the input volume to 1 mm volume data:

        mri_convert -i input_volume --conform_size 1 --o output_volume

3. Orient the image:

        mri_convert -i input_volume --out_orientation PSR -ot format -o output_volume

4. Save in Analyze format:

        mri_convert -it analyze -i output_file_name.img -ot file_type -o input_volume

When the image is loaded, slices are shown in sagittal, axial, and
coronal orientations, and it is possible to select slices easily by using
the scroll bars or clicking on the images (Figure 3). The Display image
panel allows the user to select which image to display on the image
panels. The available choices are the MR volume, the filtered volume or
various stages of segmentation.

<center>

![](NFT_from_MRI_segmentation.png) .....
![Figure 3: Interface for segmentation](NFT_segmentation.png)

</center>

The panel on the right of the segmentation GUI shows the segmentation
steps that will be performed on the volume, in order:

1. Anisotropic filtering.

2. Scalp segmentation.

3. Brain segmentation.

4. Outer skull segmentation.

5. Inner skull segmentation.

The current step is highlighted in red. Pressing the Run button executes
the segmentation step. It is possible to repeat a given step, changing
parameters and observing the output. Pressing the Next button proceeds
to the next step. Below is a discussion of each segmentation step:

### Anisotropic filtering

The purpose of anisotropic filtering is to enhance the image quality.
This filter increases the SNR of the image while preserving the edges.
The inputs to the anisotropic filtering are the number of iterations and
the image diffusion. The default values of 5 and 3 work well for most MR
images. As the values increase, the image starts to get blurred. The
output of anisotropic filtering can be observed by selecting “Filtered
Image” from the Display Image panel.

### Scalp segmentation

The next step is scalp segmentation, separating the background from the
image. There are no user inputs to scalp segmentation. An automatic
thresholding algorithm is applied, and the result can be observed by
selecting “Scalp Mask”.

### Brain segmentation

The brain segmentation uses the watershed segmentation algorithm, which
selects connected voxels starting from a seed point. To prevent the
algorithm from overflowing the brain region, the lowest point of the
cerebellum has to be selected by the user. This point has to be marked
on the image panels using the coronal and sagittal views of the slices.
Once the cursor is at the lowest point of the cerebellum, pressing Set +
selects the point. Figure 4 shows cursor locations for setting the lowest
point of the cerebellum. Another input for brain segmentation is a seed
point on the white matter; any point can be used as long as it is on the
white matter (Figure 5). Pressing the Set + button fetches the cursor
coordinates for the seed. The other inputs are the parameters of the
watershed segmentation algorithm, and the default values work for most
images. The result of brain segmentation can be seen by selecting “Brain
Mask”.

<center>

![Figure 4: Interface of segmentation during setting lowest point for cerebellum.](NFT_cerebellarlowpoint.png "wikilink") ......
![Figure 5: Interface of segmentation during a seed point selection on WM.](NFT_WMpointselection.png "wikilink")

</center>

### Outer skull segmentation

For outer skull segmentation, seed points for the eye lobes are selected
by the user. Once a slice is selected where the eyes are clearly seen on
the axial view (Figure 6), Set + is pressed to select that slice. During
outer skull segmentation, an image window will pop up for the user to
click on both eye lobes. Figure 7 shows the MATLAB figure that pops up
to click on eye lobes. Once the eyes are selected, the outer skull is
segmented and can be seen by selecting “Outer skull mask”.

<center>

![Figure 6: Interface of segmentation to select an axial slice where the eyes are clearly observed.](NFT_eyeselection.png "wikilink")

</center>
<center>

![Figure 7: Matlab figure to click on eye lobes.](NFT_eyelobes.png "wikilink")

</center>

### Inner skull segmentation

Inner skull segmentation does not require any user input. After the
inner skull is segmented, the outer skull and scalp are checked for
intersections or very thin areas. These masks are corrected if there are
any intersecting or too-close regions, which would not be suitable
for BEM modeling.

The outputs of the segmentation module are the filtered MR images and the
scalp, skull, CSF and brain masks. It is possible to save the results at
any stage of segmentation in MATLAB data format. The filtered MR images
are saved in MATLAB format with double precision, under the name
Subject_name_filtered_images.mat. The masks are saved as a MATLAB
structure, under the name Subject_name_segments.mat.
When loaded in MATLAB, the structure looks as follows:

    Segm =

        scalpmask: [256x258x257 logical]
        brainmask: [256x258x257 logical]
        outerskullmask: [256x258x257 logical]
        innerskullmask: [256x258x257 logical]
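
A minimal sketch of loading these masks from the command line (the file
name pattern comes from the text above; the subject name and the mask
arithmetic are illustrative):

```matlab
% Load the segmentation masks saved by the segmentation module.
S = load('SubjectA_segments.mat');    % contains the struct Segm described above
% Example use: voxels between the outer and inner skull surfaces.
skull = S.Segm.outerskullmask & ~S.Segm.innerskullmask;
```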

Mesh Generation
---------------

The second step in realistic head modeling is mesh generation. The mesh
generation module uses the results of the segmentation and outputs the
BEM mesh of the head. If the module is invoked from the Main Menu, it
will use the Subject Name and Subject Folder selected in the Main Menu
for the segmentation files. The output folder is set to the Subject folder,
and the mesh name is set to the Subject Name. It is possible to change
the output folder and load a different segmentation, which makes it
possible to use the module as a standalone mesh generation tool.

The mesh generation module generates either 3-layer or 4-layer meshes.
The number of layers is selected by the user. A three-layer mesh has the
scalp, skull and brain regions separated by the scalp, skull and CSF
surfaces; the CSF and the brain are considered as a single region. A
four-layer mesh models the scalp, skull, CSF and brain regions, with an
additional surface that separates the CSF and the brain.

The interface of Mesh Generation is shown in Figure 8. The generated
mesh file is suitable to be used directly by the BEM solver. The format
of the mesh file is given in [Appendix A](/NFT_Appendix_A "wikilink").
The mesh generation process is described below.

<center>

![](NFT_from_MRI_mesh_gen.png "wikilink") .....
![Figure 8: Interface for mesh generation.](NFT_meshgeneration_ui.png "wikilink")

</center>

The Mesh Generation module creates triangular meshes that fit the
boundaries of the segmentation. The aim is to approximate the geometry
while keeping the number of triangles small enough to prevent running
out of resources in the BEM solver. The approach taken by the mesh
generation module is to start with a very fine mesh of the surface
boundary, and gradually coarsen it, making sure the topology is correct
and the quality of the elements is high at each step.

For this purpose, three external programs and various MATLAB functions
are used. The external programs are Adaptive Skeleton Climbing (ASC)
(http://www.cse.cuhk.edu.hk/ttwong/papers/asc/asc.html) for
triangulation, Qslim (http://mgarland.org/software/qslim.html) for mesh
coarsening, and Showmesh for smoothing and topology correction.
Functions written in MATLAB drive this process and also do local mesh
refinement (LMR). The aim of local mesh refinement is to make sure that
the distance between meshes is not too small compared with the edge length
of the neighboring elements. For this purpose, the elements with long edges
are refined if the edge length is larger than the local distance of two
neighboring meshes multiplied by the user-specified LMR ratio. It is
suggested to apply LMR with a ratio of 2.

During mesh generation the status of the program is written at the
bottom of the window, and a progress bar shows the progress of the
program.

Source Space Generation
-----------------------

The source space is a set of dipole sources placed within the brain
volume. Source spaces are used to generate a Lead Field Matrix (LFM),
a matrix that maps dipole source strengths to electrode potentials.

The Forward Modeling Toolbox contains an option to generate a simple
source space consisting of a regular grid. The grid is generated by
placing three orthogonal dipoles at each grid location inside the brain
volume. The user inputs are the spacing between the dipoles and the
minimum distance of a dipole to the brain mesh. The spacing determines
the minimum distance between two dipoles. The default value of the spacing
is 8 mm, and the default minimum distance of a dipole to the brain mesh is
2 mm. These default parameters result in about 6000-7000 dipoles for an
average adult human brain. The user interface of this module is shown in
Figure 9. The output file is saved in the Output folder set in the Main
window as sourcespace.dip in ASCII format. It is a matrix of size
number-of-dipoles by 6; each row gives the x, y, z position and the
direction of one dipole.
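
Since the file is plain ASCII, it can be inspected directly from MATLAB;
a minimal sketch (the division by three assumes the
three-orthogonal-dipoles-per-grid-point layout described above):

```matlab
% Read the source space: one row per dipole, columns [x y z dx dy dz].
src = load('sourcespace.dip', '-ascii');
fprintf('%d dipoles at %d grid locations\n', size(src,1), size(src,1)/3);
```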

An LFM built on a regular grid source space can be used in a single-dipole
parametric inverse problem solution to find a coarse estimate of the
dipole position.

<center>

![](NFT_from_MRI_source_space.png "wikilink") .....
![Figure 9: Interface for source space generation.](NFT_sourcespacegen.png "wikilink")

</center>

Co-registration of electrode locations
--------------------------------------

The BEM mesh is generated from the 3D MR volume, and uses the same
coordinate system as the volume. When working with EEG recordings, the
electrode coordinates, measured by a digitizer, must be mapped to the
mesh coordinates. This step is called the co-registration of electrode
locations.

The input to the Electrode co-registration module is the electrode
locations. The scalp mesh of the subject is loaded automatically, and
the electrodes are co-registered to the scalp mesh. The co-registration
is done in two steps. First, the user manually co-registers the sensors
by pressing the Initial co-registration button. This starts EEGLAB’s
co-registration function, and a coarse registration is done to bring the
sensors into the mesh coordinate system. The second step is the Complete
co-registration. This step starts from the initial co-registration and
automatically finds the best translation and rotation parameters by
minimizing the total distance between the sensors and the scalp surface.
minimizing the total distance between the sensors and scalp surface.
275+
276+
The interface of co-registration is shown in Figure 10. At the end of
277+
each registration step, a figure pops up to show the registered
278+
electrodes on the scalp surface. It is possible to save either the
279+
initial or the complete registration. The outputs of the program are
280+
registered electrodes and index of the electrodes in the scalp mesh
281+
region.
282+
283+
Note that the outputs of segmentation, mesh generation, and source space
284+
are subject specific. The Subject Name is used in output files for these
285+
stages. On the other hand, the electrodes must be registered each time
286+
the electrode positions change. Therefore, the co-registration output is
287+
specific to a session. The result of electrode co-registration is saved
288+
as Session_Name_Subject_Name_headsensors.sens in ASCII format.
289+
290+
<center>
291+
292+
![](NFT_from_MRI_coreg.png "wikilink") ....
293+
![Figure 10: Interface for co-registration.](NFT_coregistration.png "wikilink")
294+
295+
</center>
296+
297+
Head Modeling using Template Warping
298+
------------------------------------
299+
300+
When the MR images of the subject is not available, a frequently used
301+
approach is to use a template head mesh, and map the electrodes to this
302+
template for source localization. The MNI brain, which is created by the
303+
Montreal Neurological Institute (MNI) by averaging the head MRIs of 305
304+
normal subjects, is frequently used for this purpose.
305+
306+
An alternative approach suggested by Darvas et al \[1\] is to warp a
307+
template mesh to fit the sensor locations. The toolbox implements this
308+
functionality to generate subject specific head models when no MR images
309+
are available. This results in more realistic head models compared with
310+
using a template mesh, and mapping electrodes to it. The template model
311+
that is used in this toolbox is a 3-layer BEM mesh extracted from the
312+
MNI brain.
313+
314+
The warping is computed based on fiducials: the nasion and left and
315+
right preauricular points. Using these 3 points, another point is
316+
calculated on the top of the head on both the template model and
317+
subject’s electrode locations. Using these 4 points, the sensor
318+
locations and head model are brought into same coordinate system. After
319+
this initial co-registration, 19 landmarks on both the head model and
320+
sensors are located. These landmarks are used to find the warping
321+
parameters. The warping method is a non-rigid thin plate spline method.
322+
After finding the warping for scalp, all the surfaces and the source
323+
space are warped using the same warping parameters.
324+
325+
The inputs of the warping module are the fiducials and the electrode
326+
locations (obtained from a digitizer). The outputs are the warped mesh,
327+
warped source space, indices of the electrodes on the mesh, fitted
328+
electrode locations and the warping parameters in case a user wants to
329+
warp back the localized sources to the template model. Note that the
330+
number of warped electrodes may be lower, since the MNI head is not a
331+
whole head model, and some electrodes may fall out of the template mesh.
332+
333+
In Figure 11 the interface for warping module is shown.
334+
335+
<center>
336+
337+
![](NFT_from_Warping.png "wikilink") .....
338+
![Figure 11: Interface warping of a template head model.](NFT_warping_ui.png "wikilink")
339+
340+
</center>
341+
342+
------------------------------------------------------------------------
343+
344+
References
345+
346+
\[1\] F. Darvas, J.J. Ermer, J.C. Mosher, R.M. Leahy, Generic head
347+
models for atlas-based EEG source analysis, Human Brain Mapping, vol.
348+
27(2), 2005, pp 129-143.

‎plugins/NFT/Chapter_03_Forward_Model_Generation.md (+199 −0)

---
layout: default
title: Chapter_03_Forward_Model_Generation
long_title: Chapter_03_Forward_Model_Generation
parent: NFT
grand_parent: Plugins
---
Forward Problem: Boundary Element Method
----------------------------------------

The boundary element method (BEM) is a numerical computational technique
for solving partial differential equations. In electro-magnetic source
imaging (EMSI) of brain activity, it is used to solve the forward
problem using realistic head models. When using BEM in head modeling,
the head is assumed to be composed of uniform-conductivity regions
(i.e., scalp, skull, brain, etc.) and the tissue boundaries are
represented by triangular surface elements. The details of the BEM
implementation used in this toolbox may be found in \[2\] and \[3\].
This section describes the BEM Toolbox.

- Press ‘Forward Model Generation’ on the main menu.

- Load a mesh from file. It is also possible to load a model or a
  session if you have created them previously.

- Enter a model name and conductivity values and press the ‘Create Model’
  button.

- When the model is created, enter a session name and load sensors.
  Sensors may be loaded from a list of nodes (.dat file) or from mesh
  coordinates (.sens file).

- Press ‘Generate Transfer Matrix’.

- Load the source space from a .dip file. It should be a number-of-dipoles
  by 6 matrix, giving the location and orientation of the dipoles.

- Press ‘Compute Lead Field Matrix’.

- You may plot the potential distribution for the nth dipole.

The potentials are saved under ‘session_name’_LFM as a MATLAB .mat file.
The user interface is shown in Figure 12.

<center>

![Figure 12: Interface for Forward Model Generation.](NFT_forward_ui.png "wikilink")

</center>

A forward problem solution using BEM consists of the following steps
(a command-line sketch follows the list):

1. Compute the BEM matrices for a given head model.
2. Compute the transfer matrix for a given set of electrodes.
3. Compute the source vector for a given set of dipoles.
4. Obtain the electrode potentials due to the dipole activity.
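
The same steps can be driven from the MATLAB command line with the
functions summarized in Chapter 5. The sketch below uses only function
names documented in this manual; the argument lists and variable names
are assumptions, not the exact signatures (see Appendix B for those):

```matlab
% Hedged sketch of the BEM pipeline; argument lists are illustrative.
mesh    = bem_load_mesh('SubjectA');                 % load .bei/.bec/.bee files
model   = bem_create_model('ModelA', mesh);          % mesh + conductivities + IPA index
model   = bem_generate_eeg_matrices(model);          % step 1: BEM matrices
session = bem_create_session('Session1', model);     % attach sensors (Smatrix)
session = bem_generate_eeg_transfer_matrix(session); % step 2: transfer matrix
pot     = bem_solve_dipoles_eeg(session, dipoles);   % steps 3-4: potentials
```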

The BEM solver that is interfaced from MATLAB to compute the BEM
matrices is described below. Then, the data structures and functions
used by the BEM toolbox are described.

The BEM Solver
--------------

The BEM solver is written in C++ and is an executable program that is
started from MATLAB with the correct parameters to compute and save BEM
matrices. The solver is called transparently from MATLAB and need
not be called explicitly by the user of the toolbox. However, a
familiarity with the basic options and the program outputs is useful.

    $ bem_matrix

    Generate BEM coefficient matrices, including inner matrices for IPA

    Usage: bem_matrix [-f matname] [-m magsens] [-o mod] meshname [s=sig ...]

      -f: save matrices to matname.[cdi]mat (default meshname)
      -m: magnetic sensor file name
      -o: interface to use with modified equations
          (1: outer, 0: to disable)
      s=sig: s: region (1: outer), sig: conductivity
          (default conductivity: 0.2 for all regions)

As can be seen from the usage text, the bem_matrix application can
generate and save BEM matrices for computing potentials and magnetic
fields. It can use the isolated problem approach (IPA) to compensate for
the loss of accuracy due to the low conductivity of the skull.

BEM Matrices
------------

When IPA is not used, only the BEM coefficient matrix is generated for
EEG. It is saved with the extension .cmt. If IPA is applied, two more
matrices are generated and saved: .dmt and .imt.

In addition to the matrices created by the BEM solver, there is one more
matrix: the inverse of the Inner Coefficient Matrix, i.e., the inverse
of the imt matrix. It is inverted using the built-in MATLAB
function inv() and saved on disk using the extension .iinv.

The Transfer Matrix
-------------------

The transfer matrix makes it possible to have very fast forward problem
solutions. It is computed by inverting the selected columns of the
coefficient matrix. In the toolbox, the inversion is done in MATLAB
using the GMRES iterative solver. The resulting matrix is saved to
disk using the filename extension .tmte.
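
For a single electrode, this amounts to one iterative linear solve against
the coefficient matrix. A minimal sketch of the idea, assuming Cmt holds
the loaded .cmt coefficient matrix and k is the index of interest; this
illustrates the technique, not the toolbox's exact code:

```matlab
% One selected column of inv(Cmt), computed with MATLAB's gmres().
n   = size(Cmt, 1);
e_k = zeros(n, 1);  e_k(k) = 1;        % selector for the k-th entry
t_k = gmres(Cmt, e_k, [], 1e-9, 200);  % restart [], tolerance/maxit illustrative
```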

Since the matrix generation can take a long time for large meshes, an
on-disk copy is always present so that future computations can be made
without generating the matrices from scratch. This is true for all the
BEM and transfer matrices generated by the solver and by MATLAB. The
matrices are loaded into MATLAB as needed, and can be cleared by the
user as necessary to save memory. An additional advantage of this
approach is that the matrices can be created externally (manually, on
another computer, etc.) and used by the toolbox.

Structures
----------

The BEM toolbox functions are implemented on three main data structures,
which represent the information available at different stages of the
forward problem solution procedure. The structures correspond to the BEM
Mesh; the Head Model, which is a combination of the mesh, conductivities
and solver parameters; and a Session, which specifies the sensor
(electrode) positions used to acquire the data. The structures are
described below:

### The Mesh Structure

The mesh structure contains the coordinates, connectivity and boundary
information from the mesh file.

### The Model Structure

The model structure includes the mesh structure, conductivity values
for the mesh tissue classes, and the index of the modified boundary for
use with IPA. If the index is set to a value smaller than 1, IPA is not
applied. If, for example, it is set to 3 and there are 4 tissue classes,
then the 3rd and the 4th layers are the inner layers and the RHS vector
is modified.

Furthermore, as the model matrices (.cmt, .dmt, .imt and .iinv) are
loaded, they are stored inside the model structure as fields of the same
name. The name of the model is used as the base filename when loading
the model matrices.

### The Session Structure

The session structure represents a ‘recording session’. It includes the
model structure and a matrix relating the positions of the sensors on
the scalp to the nodes of the mesh (Smatrix). Each electrode is a
weighted sum of the nodes of the element it lies on. The weights are
determined by the element shape functions. The format of the Smatrix is
as follows: \[electrode_index node_index weight\]. The rows of the matrix
must be sorted by electrode_index, and there can be more than one row
with a given electrode index.
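
For instance, an electrode interpolated over the three nodes of a linear
element occupies three rows; the indices and weights below are made up
purely for illustration:

```matlab
% Hypothetical Smatrix: electrode 1 spans nodes 402/403/417 of its element
% (weights from the shape functions); electrode 2 coincides with node 511.
Smatrix = [ 1  402  0.20 ;
            1  403  0.50 ;
            1  417  0.30 ;
            2  511  1.00 ];
```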

When the transfer matrix (.tmte) is loaded, it is stored inside the
session structure. The name of the session is used as the base filename
when saving and loading the transfer matrix.

------------------------------------------------------------------------

References:

\[2\] Z. Akalin Acar, N.G. Gencer, An advanced BEM implementation for
the forward problem of electro-magnetic source imaging, Physics in Med.
and Biol., vol. 49(5), 2004, pp 5011-28.

\[3\] N.G. Gencer, Z. Akalin Acar, Use of the isolated problem approach
for multi-compartment BEM models of electro-magnetic source imaging,
Physics in Med. and Biol., vol. 50, 2005, pp 3007-22.

‎plugins/NFT/Chapter_04_NFT_Examples.md (+172 −0)

---
layout: default
title: Chapter_04_NFT_Examples
long_title: Chapter_04_NFT_Examples
parent: NFT
grand_parent: Plugins
---
In this section, two head modeling examples are presented. Both of these
examples use the same subject. The first example generates a realistic
head model using the MR image of the subject. The second example warps
the template head model to 141 digitized electrode locations. Mesh
generation and electrode registration results are given for both of
these examples. The computational cost of each modeling stage and the
sizes of the resulting output files are also given.

Head Model Generation
---------------------

For the first example, a four-layer mesh is generated for the subject
through the segmentation and mesh generation steps. The mesh consists of
scalp, skull, CSF, and brain layers for a total of 16016 nodes and 32024
elements. The individual layers can be seen in Figure 13.

<center>

(a)
![a](NFM_Toolboox_UsersManual_html_56c540a1.gif "wikilink") ...
(b)
![b](NFM_Toolboox_UsersManual_html_28d79845.gif "wikilink") ...
(c)
![c](NFM_Toolboox_UsersManual19x.png "wikilink") ...
(d)
![d](NFM_Toolboox_UsersManual_html_m69f7a676.gif "wikilink")

</center>

Figure 13: BEM model of the scalp, skull, CSF and the brain obtained
from an MR image. (a) scalp mesh, (b) skull mesh, (c) CSF mesh, (d)
brain mesh.

After mesh generation, the electrodes and the realistic mesh are
co-registered. The result of co-registration can be seen in Figure 14.

<center>

![Figure 14: Registered electrode locations on the scalp mesh.](NFM_Toolboox_UsersManual_html_7b73089f.gif "wikilink")

</center>

The second example assumes that the only available subject data is the
141 digitized electrode locations. For warping, the template MNI mesh is
used, which has three layers with 3000 nodes and 5988 elements. This is
the standard mesh that is also used by other BEM solvers in the
literature. The results of warping can be seen in Figure 15.

<center>

(a)
![a](NFM_Toolboox_UsersManual_html_3bc436a3.gif "wikilink") ...
(b)
![b](NFM_Toolboox_UsersManual23x.png "wikilink") ...
(c)
![c](NFM_Toolboox_UsersManual_html_m350221ff.gif "wikilink") ...
(d)
![d](NFM_Toolboox_UsersManual_html_m788a9795.gif "wikilink")

</center>

Figure 15: BEM model of the scalp, skull and the brain obtained by warping
a template head model to electrode locations. (a) scalp mesh, (b) skull
mesh, (c) brain mesh, (d) electrode locations.

Note that the realistic model and the warped model are two different
models for the same experiment. Since the MNI head contains only the
part of the head above the mouth, some electrodes had to be discarded.
While the realistic model represents the real geometry of the head much
better than the warped model, the warped model itself is an improvement
over the template MNI head.

Computational Complexity
------------------------

The computational cost of using a realistic head model is related to the
size of the BEM matrices, which depends on the mesh. The aim of this
section is to give an idea of how long the different stages of head
modeling and forward problem solution take.

The realistic model generated using the MR image consists of 4 layers and
has 16016 nodes and 32024 faces in total. Local
mesh refinement is done using an LMR ratio of 2. The number of faces for
each surface is as follows: scalp: 6944, skull: 7084, CSF: 9298,
brain: 8698 elements.

Table 1 shows the computation times for realistic head modeling and
forward model generation when the head model is obtained using MR images.
Table 2 shows the computation times for forward model generation when
the head model is obtained by warping a template head model. Warping of
a template head model and source space generation take only seconds;
therefore, these are not given in the tables. The computations are done
on a 64-bit Opteron processor.

Table 1: Computation times for head modeling from MR images.

| Process                                          | Time       |
|--------------------------------------------------|------------|
| Segmentation                                     | 25 minutes |
| Mesh Generation                                  | 38 minutes |
| Co-registration                                  | 25 minutes |
| Generation of BEM matrices (16016 nodes)         | 2 hours    |
| Calculation of transfer matrix (141 sensors)     | 3.2 hours  |
| Calculation of Lead Field Matrix (6075 dipoles)  | 1 hour     |

Table 2: Computation times for a warped template head model.

| Process                                          | Time       |
|--------------------------------------------------|------------|
| Generation of BEM matrices (6006 nodes)          | 19 minutes |
| Calculation of transfer matrix (135 sensors)     | 15 minutes |
| Calculation of Lead Field Matrix (10131 dipoles) | 30 minutes |

The transfer matrix computation and lead-field generation steps may be
executed on multiple processors if the MATLAB Parallel Processing
Toolbox is available. We have measured a 2.6x speed-up by generating the
transfer matrix on a quad-core instead of a single-core processor.

Output Folder
-------------

The toolbox uses the Subject folder to save the generated meshes and
matrices. The names of the output files are derived from the subject and
session names. This section lists the contents of the output folders and
the sizes of the files for the two examples discussed above.

Table 3 shows the contents of the output folder when the Subject Name is
SubjectA and the session name is Session1 for the example given in Table 1.
Table 4 shows the same for the case given in Table 2, when the subject name
is entered as SubjectB and the session name as Session1.

Table 3: Output files for SubjectA (realistic head model).

| File                               | Size      |
|------------------------------------|-----------|
| SubjectA_segments.mat              | 0.4 MB    |
| SubjectA_filtered.mat              | 84 MB     |
| SubjectA.bei                       | 67 bytes  |
| SubjectA.bec                       | 1.2 MB    |
| SubjectA.bee                       | 0.7 MB    |
| SubjectA.model                     | 473 bytes |
| SubjectA.cmt                       | 2.9 GB    |
| SubjectA.dmt                       | 844 MB    |
| SubjectA.imt                       | 939 MB    |
| sourcespace.dip                    | 581 KB    |
| Session1_SubjectA_headsensors.sens | 6.9 KB    |
| Session1_SubjectA_sensorindex.mat  | 2.2 KB    |
| Session1.session                   | 4.8 KB    |
| Session1.tmte                      | 53.9 MB   |
| Session1_LFM.mat                   | 6.3 MB    |

Table 4: Output files for SubjectB (warped template head model).

| File                               | Size      |
|------------------------------------|-----------|
| SubjectB.bei                       | 52 bytes  |
| SubjectB.bec                       | 381 KB    |
| SubjectB.bee                       | 240 KB    |
| SubjectB_warping                   | 2.3 KB    |
| SubjectB.model                     | 381 bytes |
| SubjectB.cmt                       | 432.1 MB  |
| SubjectB.dmt                       | 136.6 MB  |
| SubjectB.imt                       | 46.3 MB   |
| sourcespace.dip                    | 959 KB    |
| Session1_SubjectB_headsensors.sens | 6.4 KB    |
| Session1_SubjectB_sensorindex.mat  | 2.1 KB    |
| Session1.session                   | 7.6 KB    |
| Session1.tmte                      | 32.7 MB   |
| Session1_LFM.mat                   | 16.9 MB   |

‎plugins/NFT/Chapter_05_NFT_Commands_and_Functions.md (+169 −0)

---
layout: default
title: Chapter_05_NFT_Commands_and_Functions
long_title: Chapter_05_NFT_Commands_and_Functions
parent: NFT
grand_parent: Plugins
---
This section summarizes the MATLAB commands and data structures used for
each stage of head modeling using the NFT toolbox. The function
reference can be found in Appendix B.

Function Naming and Style
-------------------------

Toolbox function names are all lowercase, with words separated by
underscores; GUI function names start with capital letters. The user
interface functions all begin with the name of the module, such as
bem_, mesh_, segm_ or warping_, while utility functions are prefixed by
util; for instance, BEM utility functions start with utilbem_. All
functions have help sections describing the usage, inputs and outputs,
compatible with the help2html() conventions. They also include a license
block.
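
Because the help sections follow this standard layout, any toolbox
function can be inspected at the prompt in the usual way, for example
(function name from this manual):

```matlab
% Display the documented usage, inputs, and outputs of a toolbox function.
help bem_create_model
```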

In user interface functions (bem_, etc.) the input arguments are
validated before use. The utility functions, which are mostly used
internally, do minimal validation.

Segmentation Functions
----------------------

The main user interface for segmentation is started with the
[Segmentation()](NFT_Appendix_B#Segmentation "wikilink") command, which
opens the GUI for segmentation.

While most of the segmentation functionality can be initiated
from the GUI, each operation can also be performed through
MATLAB functions. After loading the MR image, the following functions
are called in turn:
[segm_aniso_filtering()](NFT_Appendix_B#segm_aniso_filtering "wikilink"),
[segm_scalp()](NFT_Appendix_B#segm_scalp "wikilink"),
[segm_brain()](NFT_Appendix_B#segm_brain "wikilink"),
[segm_outer_skull()](NFT_Appendix_B#segm_outer_skull "wikilink"),
[segm_inner_skull()](NFT_Appendix_B#segm_inner_skull "wikilink"),
[segm_final_skull()](NFT_Appendix_B#segm_final_skull "wikilink").

Mesh Generation Functions
-------------------------

The main user interface for mesh generation is started with the
[Mesh_generation()](NFT_Appendix_B#Mesh_generation "wikilink") command,
which opens the GUI for mesh generation. The function loads the
segmentation and generates meshes for the scalp, skull, CSF and brain.
If the user wants to refine the meshes locally, the
[mesh_local_refinement()](NFT_Appendix_B#mesh_local_refinement "wikilink")
function is called. The topology of the generated meshes is checked by
[mesh_final_correction()](NFT_Appendix_B#mesh_final_correction "wikilink"),
and finally, the total head mesh is written in the format described in
Appendix A using the function
[mesh_read_write()](NFT_Appendix_B#mesh_read_write "wikilink").

Co-registration Functions
-------------------------

The main user interface for the co-registration of electrode locations
with MR images is started with the Coregistration() command, which opens
the GUI for co-registration.

Warping Functions
-----------------

The main user interface for the warping of a template head model is
started with the
[Warping_mesh()](NFT_Appendix_B#Warping_mesh "wikilink") command, which
opens the GUI for the warping functions. After loading the electrode
locations, the MNI mesh is loaded. The
[warping_main_function()](NFT_Appendix_B#warping_main_function "wikilink")
function then calculates the warping parameters and the warped mesh.

Forward Model Generation Functions
----------------------------------

The main user interface for forward model generation is started with
the
[Forward_Problem_Solution()](NFT_Appendix_B#Forward_Problem_Solution "wikilink")
command. While most of the BEM matrix generation functionality can be
initiated from the GUI, each operation can also be performed
through MATLAB functions.

The functions used by the BEM module are used for creating the BEM
structures, running the solver to generate the model matrices, and
solving for single or multiple dipoles.

The state of the forward solution is stored in the structures, and no
global variables are used by the toolbox. Since the contents of the
structure arguments may change during a function call, most interface
functions return the structure so that the changes can be preserved
(MATLAB has no OUT arguments).

### Mesh Functions

A set of mesh files can be loaded with the
[bem_load_mesh()](NFT_Appendix_B#bem_load_mesh "wikilink") function,
which returns a mesh structure.

### Model Functions

The model structure, which combines the mesh, conductivities and solver
IPA parameters, is obtained using the
[bem_create_model()](NFT_Appendix_B#bem_create_model "wikilink")
function. Once the model structure is obtained, it is possible to invoke
the solver using the
[bem_generate_eeg_matrices()](NFT_Appendix_B#bem_generate_eeg_matrices "wikilink")
function to generate the BEM matrices. Individual matrices can be loaded
into the model structure using the
[bem_load_model_matrix()](NFT_Appendix_B#bem_load_model_matrix "wikilink")
function.

### Session Functions

The session structure is used for solving the forward problem at a given
set of sensors. The structure is created using
[bem_create_session()](NFT_Appendix_B#bem_create_session "wikilink")
from a model and a list of sensors. The list of sensors can be generated
from a list of nodes using the
[bem_smatrix_from_nodes()](NFT_Appendix_B#bem_smatrix_from_nodes "wikilink")
function.

The transfer matrix for the sensors specified in the session can be
generated using the
[bem_generate_eeg_transfer_matrix()](NFT_Appendix_B#bem_generate_eeg_transfer_matrix "wikilink")
function, which computes and saves the transfer matrix. The computed
transfer matrix can be loaded into the session structure using the
[bem_load_transfer_matrix()](NFT_Appendix_B#bem_load_transfer_matrix "wikilink")
function.

There are two functions for obtaining forward solutions. The
[bem_solve_dipoles_eeg()](NFT_Appendix_B#bem_solve_dipoles_eeg "wikilink")
function computes the sensor potentials due to the activation of a
number of dipoles. The
[bem_solve_lfm_eeg()](NFT_Appendix_B#bem_solve_lfm_eeg "wikilink")
function is suitable for generating a Lead Field Matrix, since it returns
a matrix of single-dipole solutions.

### Utility Functions

The utility functions are used internally by the functions described
above.

The external user configuration can be returned using the
[NFT_get_config()](NFT_Appendix_B#NFT_get_config "wikilink") function.
This m-file can be edited manually to specify run-time options for the
toolbox. User interface functions call this function to get
configuration variables as needed.

The
[utilbem_compute_cond()](NFT_Appendix_B#utilbem_compute_cond "wikilink")
and
[utilbem_compute_indices()](NFT_Appendix_B#utilbem_compute_indices "wikilink")
functions compute conductivity and index information from the mesh.
These functions are called by
[bem_create_model()](NFT_Appendix_B#bem_create_model "wikilink") and
the results are stored in the model structure.

There are two utility functions for computing source (right-hand-side)
vectors:
[utilbem_multilayer_rhs()](NFT_Appendix_B#utilbem_multilayer_rhs "wikilink")
is used with IPA and
[utilbem_pot_unbound()](NFT_Appendix_B#utilbem_pot_unbound "wikilink")
is used without IPA.

‎plugins/NFT/NFT_Appendix_A.md (+53 −0)

---
layout: default
title: NFT_Appendix_A
long_title: NFT_Appendix_A
parent: NFT
grand_parent: Plugins
---
BEM Mesh Format
---------------

The generated BEM mesh is stored on disk as a set of three files: an
element file, a coordinate file, and an information file. All three mesh
files have the same base name, with the file extension specifying the
file type. The file extensions are .bee for the element file, .bec for
the coordinate file and .bei for the information file. All the files are
ASCII text files for easier processing and portability.

The information file (.bei) defines the high-level properties of the
mesh. Each mesh consists of one or more boundaries. Since the boundaries
separate tissues, each boundary has an inside and an outside tissue type.
The first row of the information file contains information about the
mesh structure. The entries of the first row are the number of
boundaries, the number of nodes, the number of elements, and the number
of nodes per element, respectively. For linear meshes there are 3 nodes
per element and for quadratic meshes there are 6 nodes per element. The
following rows of the information file define the boundary information.
Since an element can be a part of only one boundary, the elements of the
mesh are grouped according to the boundary, ordered from outside to
inside. Therefore, each boundary is a consecutive group of elements. For
the boundary rows, the first column is the boundary index. The second
column gives the number of elements in the boundary. The third and fourth
columns represent the inner and outer tissue class of the boundary.

The tissue class is an integer representing a tissue. This number is
defined per mesh; there is no global assignment of tissue classes at the
moment. The purpose of the tissue class is to uniquely identify the
various tissues that are represented by the mesh. Since different tissues
may have the same or similar conductivities, using a tissue class
identifier provides a better distinction. Furthermore, this scheme makes
it possible to solve the same mesh geometry using different tissue
conductivity values.

The coordinate file (.bec) defines the physical coordinates of the nodes
in the BEM mesh. There is one node per row. The first column is the node
index, and runs from one to the number of nodes in the mesh. The next
three columns represent the x, y, and z coordinates of the node.

The element file (.bee) defines the connectivity of the nodes for each
element. The element file defines one element per row. The first column
is the element index, and runs from one to the number of elements in the
mesh. The next three (linear mesh) or six (quadratic mesh) columns
define the node indexes for the element. Note that the order of the nodes
is important, and defines the orientation of the element.
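
To make the layout concrete, here is a hypothetical fragment of the three
files for a small 3-boundary linear mesh. Every number below is made up
for illustration and follows only the column rules described above; the
annotations after the values are not part of the files:

```
subjectX.bei
3 3000 5988 3      (3 boundaries, 3000 nodes, 5988 elements, 3 nodes/element)
1 1996 2 1         (boundary 1: 1996 elements, inner tissue 2, outer tissue 1)
2 1996 3 2
3 1996 4 3

subjectX.bec       (node index, x, y, z)
1  85.1  -12.4  60.2
2  84.7  -11.9  61.0

subjectX.bee       (element index, then 3 node indexes for a linear mesh)
1  12  15   9
2  15  12  22
```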

‎plugins/NFT/NFT_Appendix_B.md (+597 −0)

Large diffs are not rendered by default.

‎plugins/NFT/NFT_Appendix_C.md (+55 −0)

---
layout: default
title: NFT_Appendix_C
long_title: NFT_Appendix_C
parent: NFT
grand_parent: Plugins
---
Effect of brain-to-skull conductivity ratio estimate on EEG source localization
-------------------------------------------------------------------------------

An important consideration for obtaining accurate forward problem
solutions is to correctly model the distribution of conductivity within
the head. In the literature, consistent conductivity values have been
reported for scalp, brain, and CSF tissues, but there has been a huge
variation in reported skull conductivity values. This is partly caused
by variations in skull conductivity from person to person and throughout
the life cycle (Hoekema et al 2003), and partly by the use of different
measurement/estimation methods (Oostendorp et al 2000). In the 1970s
and '80s the brain-to-skull conductivity ratio was reported to be 80
(Rush 1968, Cohen 1983), still a very commonly used ratio in EEG source
localization. More recent studies in the last decade have reported this
ratio to be as low as 15 (Oostendorp 2000). In a more recent study on
epilepsy patients undergoing presurgical evaluation using simultaneous
intra-cranial and scalp EEG recordings, the average brain-to-skull
conductivity ratio was estimated to be 25 (Lai 2005).

Below, we present some simulation results showing the effects of using
incorrect skull conductivity values on equivalent dipole source
localization. For this purpose, we solved the forward electrical head
model problem using a realistic, subject-specific four-layer BEM model
built from a subject’s MR head image using the NFT toolbox (Akalin Acar
& Makeig, 2010). We set the forward model (‘ground truth’)
brain-to-skull conductivity ratio to 25 and then solved the inverse
problem using the same realistic head model with the commonly used
ratio of 80. This produced equivalent dipole localization errors of up
to 2.5 cm (figure below, top row); the estimated positions of the grid
of model dipoles were shifted towards the scalp surface. On the other
hand (figure below, bottom row), if the brain-to-skull conductivity
ratio was mis-estimated to be 15 when solving the inverse problem, the
dipoles were localized more towards the center of the brain, with
localization errors of up to 1 cm (Akalin Acar & Makeig, 2012).
Therefore, correct modeling of skull conductivity is an important factor
for EEG source localization.

![](Wiki_figure.png "wikilink")

Figure 1. Equivalent dipole source localization error directions
(arrows) and magnitudes (colors) for a 4-layer realistic BEM head model
when the brain-to-skull conductivity ratio was estimated to be 80 as
opposed to the actual simulated forward model value of 25 (top row) and
as 15 (as opposed to 25) (bottom row). The source space was a regular
Cartesian grid of single equivalent dipole sources with 8-mm spacing
filling the brain volume. The three columns show the errors when the
equivalent dipole sources are oriented in the x-, y-, and z-directions,
respectively.

‎plugins/NFT/index.md (+118 −0)

---
layout: default
title: NFT
long_title: NFT
parent: Plugins
categories: plugins
has_children: true
---
### Open Source Matlab Toolbox for Neuroelectromagnetic Forward Head Modeling

![right](NFTsmall.jpg "wikilink")

### What is NFT?

The Neuroelectromagnetic Forward Modeling Toolbox (NFT) is a MATLAB
toolbox for generating realistic head models from available data (MRI
and/or electrode locations) and for computing numerical solutions to
the forward problem of electromagnetic source imaging (Zeynep Akalin
Acar & S. Makeig, 2010). NFT includes tools for segmenting scalp, skull,
cerebrospinal fluid (CSF) and brain tissues from T1-weighted magnetic
resonance (MR) images. The Boundary Element Method (BEM) is used for the
numerical solution of the forward problem. After extracting the
segmented tissue volumes, surface BEM meshes may be generated. When a
subject MR image is not available, a template head model may be warped
to 3-D measured electrode locations to obtain an individualized BEM head
model. Toolbox functions can be called from either a graphic user
interface (GUI) compatible with EEGLAB (sccn.ucsd.edu/eeglab), or from
the MATLAB command line. Function help messages and a user tutorial are
included. The toolbox is freely available for noncommercial use and open
source development under the GNU Public License.

### Why NFT?

The NFT is released under an open source license, allowing researchers
to contribute and improve on the work for the benefit of the
neuroscience community. By bringing together advanced head modeling and
forward problem solution methods and implementations within an
easy-to-use toolbox, the NFT complements EEGLAB, an open source toolkit
under active development. Combined, NFT and EEGLAB form a freely
available EEG (and in future, MEG) source imaging solution.

The toolbox implements the major aspects of realistic head modeling and
forward problem solution from available subject information:

1. Segmentation of T1-weighted MR images: The preferred method of
   generating a realistic head model is to use a 3-D whole-head
   structural MR image of the subject's head. The toolbox can generate
   a segmentation of scalp, skull, CSF and brain tissues from a
   T1-weighted image.

2. High-quality BEM meshes: The accuracy of the BEM solution depends on
   the quality of the underlying mesh that models tissue
   conductance-change boundaries. To avoid numerical instabilities, the
   mesh must be topologically correct with no self-intersections. It
   should represent the surface using high-quality elements while
   keeping the number of elements as small as possible. The NFT can
   create high-quality linear surface BEM meshes from the head
   segmentation.

3. Warping a template head model: When a whole-head structural MR image
   of the subject is not available, a semi-realistic head model can be
   generated by warping a standard template BEM mesh to the digitized
   electrode coordinates (instead of vice versa).

4. Registration of electrode positions with the BEM mesh: The digitized
   electrode locations and the BEM mesh must be aligned to compute
   accurate forward problem solutions and lead field matrices.

5. Accurate high-performance forward problem solution: The NFT uses a
   high-performance BEM implementation from the open source METU-FP
   Toolkit for bioelectromagnetic field computations.

### Required Resources

MATLAB 7.0 or later, running under Linux or Windows.
A large amount of RAM is useful - at least 2 GB (4-8 GB recommended for
forward problem solution of realistic head models). The MATLAB Image
Processing Toolbox is also recommended.

### NFT Reference Paper

Zeynep Akalin Acar & Scott Makeig, [Neuroelectromagnetic Forward Head
Modeling
Toolbox](http://sccn.ucsd.edu/%7Escott/pdf/Zeynep_NFT_Toolbox10.pdf).
<em>Journal of Neuroscience Methods</em>, 2010.

Download
--------

To download the NFT, go to the [NFT download
page](http://sccn.ucsd.edu/nft/).

NFT User's Manual
-----------------

- [Chapter 01: Getting Started with NFT](Chapter_01_Getting_Started_with_NFT "wikilink")
- [Chapter 02: Head Modeling from MR Images](Chapter_02_Head_Modeling_from_MR_Images "wikilink")
- [Chapter 03: Forward Model Generation](Chapter_03_Forward_Model_Generation "wikilink")
- [Chapter 04: NFT Examples](Chapter_04_NFT_Examples "wikilink")
- [Chapter 05: NFT Commands and Functions](Chapter_05_NFT_Commands_and_Functions "wikilink")
- [Appendix A: BEM Mesh Format](NFT_Appendix_A)
- [Appendix B: Function Reference](NFT_Appendix_B)
- [Appendix C: Effect of brain-to-skull conductivity ratio estimate](NFT_Appendix_C)

- [Click here to download the NFT User Manual as a PDF book](NFT_Tutorial.pdf)

<div align=right>

Creation and documentation by:

Zeynep Akalin Acar

Project Scientist

zeynep@sccn.ucsd.edu

</div>

‎plugins/PACT/index.md

Lines changed: 283 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,283 @@
1+
---
2+
layout: default
3+
title: PACT
4+
long_title: PACT
5+
parent: Plugins
6+
categories: plugins
7+
has_children: true
8+
---
9+
What is PACT?
10+
-------------
11+
12+
PACT is a plug-in for EEGLAB. PACT stands for (cross-frequency)
13+
Phase-Amplitude Coupling Toolbox. See the GitHub repository at
14+
[https://github.com/sccn/PACT](https://github.com/sccn/PACT) to submit
15+
bug reports or modify the codebase.
16+
17+
What data does PACT take?
18+
-------------------------
19+
20+
Currently it takes continuous data only. PACT was originally developed
21+
for electrocorticographic (ECoG) data analysis.
22+
23+
What does PACT do?
24+
------------------
25+
26+
In preparatory exploration, you may run a brute-force computation of PAC
27+
for all combinations of low-frequency oscillation (LFO) and
28+
highest-amplitude sampling (HAS) center frequencies. This computation
29+
may take a long time depending on the frequency resolution and bandwidth
30+
you specify. To compute each PAC value for a channel
31+
frequency-by-frequency combination, PACT performs the following steps:
32+
33+
1\. Band-pass filter the data to extract high-frequency oscillation (HFO) and LFO signals.
34+
35+
2\. Hilbert transform the HFO signal to extract a time series of
36+
instantaneous amplitudes.
37+
38+
3\. Hilbert transform the LFO signal to extract a time series of
39+
instantaneous phases.
40+
41+
4\. Highest-Amplitude Sampling (HAS): apply a threshold to select the
43+
N% highest HFO amplitudes; their time indices form the HAS index.
43+
44+
5\. Apply the HAS index to the LFO phase time series.
45+
46+
6\. Combine the HAS-indexed HFO amplitudes and LFO phases into complex-valued phasors.
47+
48+
7\. Compute the Modulation Index (Canolty et al., 2006) for the
49+
collection of HAS phasors constructed above.
50+
51+
8\. Generate a collection of surrogate data by circularly permuting the
52+
phase time-series relative to the amplitude series.
53+
54+
9\. Compute a surrogate set of Modulation Indices for which the null
55+
hypothesis should hold and determine a statistical threshold from their
56+
distribution.
57+
58+
10\. Perform multiple comparison corrections based on the number of
59+
channels for which you are estimating PAC significance.
60+
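As a minimal single-channel illustration of steps 1-9, one might write
something like the sketch below. This is not PACT's actual
implementation: the band edges, the 5% HAS rate, and the 200 surrogates
are illustrative assumptions, and *eegfilt* (EEGLAB), *hilbert* (Signal
Processing Toolbox), and *prctile* (Statistics Toolbox) stand in for
whatever routines PACT uses internally.

```
% Illustrative sketch only (not PACT's code).
% x: 1 x T continuous signal from one channel; srate: sampling rate in Hz.
lfoSig = eegfilt(x, srate, 0.5, 3);                 % 1. LFO band-pass
hfoSig = eegfilt(x, srate, 60, 120);                %    HFO band-pass
hfoAmp = abs(hilbert(hfoSig));                      % 2. instantaneous HFO amplitude
lfoPhs = angle(hilbert(lfoSig));                    % 3. instantaneous LFO phase
hasIdx = hfoAmp >= prctile(hfoAmp, 95);             % 4. HAS: top 5% of HFO amplitudes
phasor = hfoAmp(hasIdx) .* exp(1i*lfoPhs(hasIdx));  % 5-6. HAS-indexed phasors
mi     = abs(mean(phasor));                         % 7. Modulation Index (Canolty et al., 2006)

nSurro  = 200;                                      % 8. circularly shifted surrogates
miSurro = zeros(nSurro, 1);
for k = 1:nSurro
    phsPerm    = circshift(lfoPhs, randi(numel(lfoPhs)));  % misalign phase vs. amplitude
    miSurro(k) = abs(mean(hfoAmp(hasIdx) .* exp(1i*phsPerm(hasIdx))));
end
sigThresh     = prctile(miSurro, 99);               % 9. e.g. p < 0.01, one-tailed
isSignificant = mi > sigThresh;
```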
61+
Note: To compute and perform statistics on the Mean Resultant Vector
62+
Length, PACT uses CircStat (Berens, 2009). To compute phase-sorted
63+
amplitude statistics, PACT uses K-S and Chi-square tests.
64+
65+
What do the PACT GUIs look like?
66+
--------------------------------
67+
68+
![thumb\|400px\|Figure 1. PACT Seen from EEGLAB main
69+
GUI.](Demo01.jpg)
70+
71+
<i><p style="text-align: center">Figure 1. PACT Seen from EEGLAB main GUI.</p></i>
72+
73+
![thumb\|400px\|Figure 2. Main
74+
GUI.](Demo02.jpg)
75+
76+
<i><p style="text-align: center">Figure 2. Main GUI</p></i>
77+
78+
When successfully installed, the item
79+
'PACT' should appear under 'Tools' (Figure 1). Currently it has 12
80+
menu items.
81+
82+
- Compute PAC: This launches the main GUI (Figure 2). When press ok,
83+
computation starts. When it done, statistics set up window pops up
84+
(described later).
85+
- Phase freq range \[lohz hihz\]
86+
- Amp freq range \[lohz hihz\]
87+
- Highest amplitude sampling rate \[%\]
88+
- Sampling pool: This is for experimental purposes. Always
89+
choose 'Each channel'.
90+
- If handpicked, event type and win size \[+/- ms\]: If you want
91+
to run the analysis using the data around event markers
92+
generated by either VisEd or MoBILAB, use this.
93+
- Significance threshold \[p\]
94+
- Number of surrogation \[N\]: This determines how many surrogate
95+
data sets are generated to represent the distribution under the
96+
null hypothesis.
97+
- Number of phase bins \[N\]: This affects the sensitivity of the circular
98+
statistics. Don't use an extremely large value (e.g. \>100).
99+
100+
![thumb\|400px\|Figure 3. Detected HFOs (shown in
101+
red).](Demo06.jpg)
102+
103+
<i><p style="text-align: center">Figure 3. Detected HFOs (shown in red)</p></i>
104+
105+
- Plot HFO-marked Raw data: This plot looks like Figure 3.
106+
- Invert polarity: This is to invert EEG polarity by simply
107+
multiplying all the data by -1.
108+
109+
![thumb\|400px\|Figure 4. Manually marking HFOs. Left, using VisEd.
110+
Right, using customized MoBILAB plots.](Demo04.jpg)
111+
112+
<i><p style="text-align: center">Figure 4. Manually marking HFOs. Left, using VisEd. Right, using customized MoBILAB plots</p></i>
113+
114+
- Handpick HFO(VisEd): This plot looks like Figure 4 left. You can
115+
choose the marking point by mouse click. For a detailed explanation
116+
of how to use VisEd, see the VisEd help.
117+
- Handpick HFO(Mobilab): This plot looks like Figure 4 right.
118+
Similarly, you can choose the marking point by mouse click. Use
119+
whichever suits you.
120+
- Copy event markers: This is to copy event markers from dataset 1 to
121+
dataset 2.
122+
123+
![thumb\|400px\|Figure 5. Statistics set
124+
up.](Demo03.jpg)
125+
126+
<i><p style="text-align: center">Figure 5. Statistics set up</p></i>
127+
128+
- Set up statistics: This shows a GUI that looks like Figure 5.
129+
- Plot Modulation Index: This shows a plot that looks like Figure 7/8
130+
top right.
131+
- Plot Angular hist (bar)
132+
- Plot Angular hist (polar): This shows a plot that looks like Figure
133+
7/8 bottom left.
134+
- Plot phase-sorted amp: This shows a plot that looks like Figure 7/8
135+
bottom right. Each bar represents the mean amplitude of each phase bin.
136+
137+
![thumb\|400px\|Figure 6. Scanning parameter space consists of LFO phase
138+
frequencies and HAS rates.](Demo05.jpg)
139+
140+
<i><p style="text-align: center">Figure 6. Scanning parameter space consists of LFO phase frequencies and HAS rates</p></i>
141+
142+
- Scan LFO freqs (very slow!): This pops up a GUI like Figure 6. Start
143+
with N = 10 or so, and a HAS rate of 0.3-10%. Color normalization
144+
should be used when plotting Mean Resultant Vector Length.
145+
146+
What plots does PACT output?
147+
----------------------------
148+
149+
1\. LFO-HAS parameter space scan results (combination of LFO phase
150+
frequencies and HAS rates; the measure used may be either Mean Resultant
151+
Vector Length or Modulation Index).
152+
153+
2\. Modulation Index with a 95% or 99% confidence interval.
154+
155+
3a. Angular histogram displayed in a polar plot using Mean Resultant
156+
Vector Length.
157+
158+
3b. Angular histogram in a rectangular plot with phase unwrapped on the
159+
x-axis.
160+
161+
4\. Bar graphs of LFO phase-sorted HFO-amplitudes.
162+
163+
Note that the number of phase bins in plots 3a and 3b above is determined by
164+
user input and affects the results of the circular statistics.
165+
166+
How PACT can be used, and how its output can be interpreted: A demo example
167+
---------------------------------------------------------------------------
168+
169+
These plots show examples in which PACT was applied to
170+
electrocorticographic data for which a neurologist judged the channel
171+
(Ch) 1 signal to be pathological and the Ch2 signal to be normal.
172+
173+
First, exploratory LFO frequency scans were performed (Figures 7 and 8,
174+
top left; they are the same). We needed to run PACT several times,
175+
adjusting parameters; the result that showed the difference most clearly
176+
is plotted here. Mean Resultant Vector Length was chosen as the
177+
dependent variable, since it is naturally normalized from 0 to 1 and is
178+
therefore convenient for comparisons across channels. This plot shows
179+
two noticeable clusters of interest that showed differences between Ch1
180+
and Ch2: one is (LFO 0.5 Hz, HAS 3%), the other (LFO 1.5 Hz, HAS 1.5%).
181+
We decided to run the analysis for both combinations of parameters.
182+
The statistical significance level was set to 1% with Bonferroni-Holm
183+
correction (Note: here, since we have only 2 items to compare, B-H is
184+
the same as Bonferroni).
185+
186+
![](PACT05Hz.jpg)
187+
188+
<i><p style="text-align: center">Figure 7. LFO 0.5Hz, HAS 3%, p < 0.01, CI 95%. Top left, LFO-HAS parameter space scan results. Top right, Modulation Index. Bottom left, Mean Resultant Vector Length. Bottom right, phase-sorted HFO amplitudes</p></i>
189+
190+
Figure 7 shows the result of choosing parameters (LFO 0.5 Hz, HAS 3%). The Modulation Index
191+
for Ch1 is larger than that for Ch2. Only the Ch1 value reached
192+
statistical significance (Figure 7, top right; a horizontal bar in the
193+
graph shows the 95% confidence interval). By Mean Resultant Vector
194+
length, both channel signals showed phase concentrations,
195+
though their preferred phases were different -- almost opposite (Figure
196+
7, bottom left). Phase-sorted HFO amplitude also indicated that Ch1 has
197+
a preferred phase, and the Ch1 amplitude distribution over phase bins
198+
deviates significantly from uniform, whereas Ch2 does not show this
199+
effect (Figure 7, bottom right). Note also the large difference in
200+
amplitude scales.
201+
202+
![](PACT15Hz.jpg) <i><p style="text-align: center">Figure 8. LFO 1.5Hz, HAS 1.5%, p < 0.01, CI 95%. Top right, Modulation Index. Bottom left, Mean Resultant Vector Length. Bottom right, Phase-sorted HFO amplitude. Note that the Ch1 Modulation Index is much larger than the confidence interval compared to Figure 7</p></i>
203+
204+
Figure 8 shows the result of
205+
choosing the parameters (LFO 1.5 Hz, HAS 1.5%). Modulation Index, Mean
206+
Resultant Vector length, and Phase-sorted HFO amplitude all showed
207+
similar properties to the results shown in Figure 7. However, note that
208+
the Ch1 Modulation Index is much larger than its confidence interval
209+
level; probably this combination of parameters better fits the
210+
pathological pattern in this channel signal.
211+
212+
Download Link
213+
-------------
214+
215+
<http://sccn.ucsd.edu/wiki/Plugin_list_process>
216+
217+
Caution and Limitation
218+
----------------------
219+
220+
The 'Handpick HFO' menu does not work with newer Matlab versions, which
221+
no longer support the *graphics.cursorbar* object. To use this function,
222+
use Matlab 2013 or older as a workaround.
223+
224+
Scanning Phase-frequency vs. HFO-frequency (07/24/2019 updated)
225+
---------------------------------------------------------------
226+
227+
In calculating phase-amplitude coupling, a typical problem is how to
228+
determine the target frequencies in both phase and amplitude. To make
229+
this simple, in ver. 0.30 I implemented a function to generate a
230+
phase-amplitude frequency-by-frequency grid plot. If you need to reduce
231+
the number of channels, do so using the EEGLAB GUI beforehand. Otherwise,
232+
this frequency-scan process needs no preprocessing by PACT; it
233+
does the job itself. One extra parameter you have to choose is the highest
234+
amplitude sampling (HAS) rate, which specifies the right-tail cutoff of
235+
the amplitude distribution for each channel. Note that this simply
236+
picks up the highest amplitudes after high-frequency band-pass filtering,
237+
so if there is a high-frequency (or broadband) artifact, HAS will
238+
pick it up. In this case, you should clean the data using EEGLAB
239+
functions before performing this analysis.
240+
241+
In the example below, one can easily find that Ch21 showed the strongest
242+
PAC between 3-Hz phase and 80-Hz HFO amplitude, followed by Ch16. Ch18
243+
also showed some PAC, but it coupled with 1.7 Hz instead of 3 Hz so this
244+
could be something different. If one chooses mean vector length to plot
245+
instead of Canolty's modulation index (MI), one can evaluate the
246+
same measure without the effect of HFO amplitude. The calculated values
247+
are stored under EEG.pacScan.
248+
249+
![200px](PactUpdate1crop.jpg)
250+
251+
![600px](PactUpdate2.jpg)
252+
253+
### How to obtain mean HFO gamma amplitude
254+
255+
1. In the plot above, confirm that the peak PAC value is observed at
256+
Ch21, phase 3.2 Hz, amplitude 80 Hz.
257+
2. Type 'EEG.pacScan' in the command line. Among the variables, find
258+
'meanHfoAmp'. This is the mean HFO amplitude in microvolts. If you are not
259+
sure about dimensions, see 'dataDimensions'. We know our channel and
260+
freq-freq window of interest, which are 21, 3.2 Hz, 80 Hz,
261+
respectively. Based on these parameters of interest, we obtain
262+
indices for these parameters: 21 for the channel order, 7 for the
263+
phase freq (see 'phaseFreqEdge'--3.2Hz is between the 7th and 8th
264+
edges, so we select 7), and 1 for the HFO freq.
265+
3. We enter EEG.pacScan.meanHfoAmp(21,7,1) in the command window. It
266+
returned '35.0391', which means the mean HFO amplitude during the
267+
selected HFO frames was 35.0391 microvolts (see the sketch below).
268+
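The same lookup in code form (a minimal sketch; the field names are those described in the steps above, and the indices are from this example):

```
% Read the mean HFO amplitude (in microvolts) from the scan results.
EEG.pacScan.dataDimensions                    % check the dimension order first
chanIdx  = 21;   % channel of interest
phaseIdx = 7;    % 3.2 Hz lies between the 7th and 8th entries of phaseFreqEdge
ampIdx   = 1;    % 80 Hz HFO frequency
meanAmp  = EEG.pacScan.meanHfoAmp(chanIdx, phaseIdx, ampIdx)   % returns 35.0391 here
```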
269+
![600px](PactUpdate3.jpg)
270+
271+
Bug report, request, comment
272+
----------------------------
273+
274+
Please post bugs and suggestions to the EEGLAB mailing list.
275+
276+
Reference
277+
---------
278+
279+
<http://www.ncbi.nlm.nih.gov/pubmed/24110429> Makoto Miyakoshi, Arnaud
280+
Delorme, Tim Mullen, Katsuaki Kojima, Scott Makeig, Eishi Asano.
281+
*Automated detection of cross-frequency coupling in the
282+
electrocorticogram for clinical inspection.* Conf Proc IEEE Eng Med Biol
283+
Soc. 2013:3282-3285.

‎plugins/clean_rawdata/index.md

Lines changed: 409 additions & 0 deletions
Large diffs are not rendered by default.

‎plugins/get_chanlocs/index.md

Lines changed: 244 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,244 @@
1+
---
2+
layout: default
3+
title: get_chanlocs
4+
long_title: get_chanlocs
5+
parent: Plugins
6+
categories: plugins
7+
has_children: true
8+
---
9+
<h3>
10+
11+
<b>*get_chanlocs*: Compute 3-D electrode positions from a 3-D head image
12+
==\> <u>[Download the *get_chanlocs* User Guide](https://sccn.ucsd.edu/eeglab/download/Get_chanlocs_userguide.pdf)</u></b>
13+
14+
</h3>
15+
16+
![](Get_chanlocs.jpg)
17+
18+
### What is *get_chanlocs*?
19+
20+
The *get_chanlocs* EEGLAB plug-in is built on functions in
21+
[FieldTrip](http://www.fieldtriptoolbox.org/) to locate 3-D electrode
22+
positions from a 3-D scanned head image. Robert Oostenveld, originator
23+
of the FieldTrip toolbox, alerted us in 2017 that he and his students in
24+
Nijmegen had put functions into FieldTrip to compute positions of scalp
25+
electrodes from the recorded 3-D images for one 3-D camera, the
26+
[Structure scanner](https://structure.io/) mounted to an Apple iPad.
27+
(Read [Homölle and Oostenveld
28+
(2019)](https://doi.org/10.1016/j.jneumeth.2019.108378) and [notes on
29+
the incorporated FieldTrip
30+
functions](http://www.fieldtriptoolbox.org/tutorial/electrode/)). We at
31+
SCCN have created an EEGLAB plug-in extension, *get_chanlocs*, to ease
32+
the process of digitizing the positions of the electrodes from the
33+
acquired 3-D and entering them into the *EEG.chanlocs* data structure
34+
for use with other EEGLAB (plotting and source localization) functions
35+
that require electrode position information.
36+
37+
The <b>major advantages</b> of using <em>get_chanlocs</em> to measure
38+
electrode positions are that: 1) <b>the 3D image can be recorded quickly
39+
(\<1 min)</b>, thereby saving precious subject time (and attention
40+
capacity) better used to record EEG data! The researchers who have been
41+
most enthusiastic to hear about <em>get_chanlocs</em> are those
42+
collecting data from children and infants -- though even normal adult
43+
participants must have less cognitive capacity left for the experimental
44+
tasks after sitting, wearing the EEG montage, for 20 min while research
45+
assistants record the 3D location of each scalp electrode. 2) <b>The 3D
46+
image connects the electrode locations to the head fiducials in a very
47+
concrete and permanent way</b>; future improved head modeling will be
48+
able to use the 3D head surface scans to fit to subject MR images or to
49+
warp template head models to the actual subject head. 3) Unlike with
50+
wand-based electrode localizing (neurologists call this electrode
51+
'digitizing'), <b>retaining the 3D head image allows rechecking the
52+
electrode positions</b> (e.g., if some human error occurs on first
53+
readout).
54+
55+
In brief, the process is as follows:
56+
57+
<b>Scanning the head surface:</B> A 3-D head image (3-D head ‘scan’) is
58+
acquired using the Structure scanner showing the subject wearing the
59+
electrode cap; this image acquisition typically requires a minute or
60+
less to perform. The resulting 3-D *.obj* image file is stored along
61+
with the EEG data. *get_chanlocs* also supports use of *.obj* 3D image
62+
files obtained using the [itSeez3D scanning app](https://itseez3d.com/),
63+
which we have found to be easier to capture good 3D images with than the
64+
Structure scanner's native app (Suggestion: ask itSeez3D about a
65+
non-commercial license).
66+
67+
<B>Localizing the electrodes in the 3D scan:</B> When the data are to be
68+
analyzed, the *get_chanlocs* plug-in, called from the Matlab command
69+
line or EEGLAB menu, guides the data analyst through the process of
70+
loading the recorded 3-D head image and then clicking on each of the
71+
electrodes in the image in a pre-planned order to compute and store
72+
their 3-D positions relative to 3 fiducial points on the head (bridge of
73+
nose and ears). (Note: this digitizing step may be automated
74+
at some point in the future using a machine vision approach). The
75+
electrode labels and their 3-D positions relative to the three skull
76+
landmarks (‘fiducial points’) are then written directly into the dataset
77+
*EEG.chanlocs* structure. During this process, a montage template
78+
created for the montage used in the recorded experiment can be shown by
79+
*get_chanlocs* as a convenient visual reference to speed and minimize
80+
human error in the electrode digitization process.
81+
82+
<B>User Guide</B> See the illustrated [*get_chanlocs* User
83+
Guide](https://sccn.ucsd.edu/mediawiki/images/5/5f/Get_chanlocs_userguide.pdf) for details.
84+
85+
<B>Uses:</B> Once the digitized electrode positions have been stored in
86+
the dataset, further (scalp field plotting and source localization)
87+
processes can use the digitized positions.
88+
89+
<b>Ethical considerations:</B> An institutional review board (or
90+
equivalent ethics review body) will likely consider head images as
91+
personally identifiable information. <b>Here is the IRB-approved [UCSD
92+
subject Consent
93+
form](/Media:Get_chanlocs_sampleConsent.pdf "wikilink")</B>, allowing
94+
participants to consent to different degrees of use of their 3D head
95+
image, that we use at SCCN.
96+
97+
### Why *get_chanlocs*?
98+
99+
Achieving <b>high-resolution EEG (effective) source imaging</b>
100+
requires (a) <b>an accurate 3-D electrical head model</b>, and (b)
101+
<b>accurate co-registration of the 3-D scalp electrode positions to the
102+
head model</b>. Several packages are available for fashioning a
103+
geometrically accurate head model from an anatomic MR head image. We use
104+
Zeynep Akalin Acar's [Neuroelectromagnetic Forward Head Modeling Toolbox
105+
(NFT)](https://sccn.ucsd.edu/wiki/NFT), which she is now coupling to the
106+
first non-invasive, universally applicable method (SCALE) for estimating
107+
individual skull conductivity from EEG data (Akalin Acar et al., 2016;
108+
more news of this soon!). When a subject MR head image is *not*
109+
available, equivalent dipole models for independent component brain
110+
sources can use a template head model. Zeynep has shown that the dipole
111+
position fitting process is more accurate when the template head is
112+
warped to fit the actual 3-D positions of the electrodes -- IF these are
113+
recorded accurately. This kind of warping is performed in Zeynep's
114+
[**NFT** toolbox for EEGLAB](https://sccn.ucsd.edu/wiki/NFT).
115+
116+
For too long, it has been expensive and/or time consuming (for both
117+
experimenter and subject) to record (or 'digitize') the 3-D positions of
118+
the scalp electrodes for each subject. In recent years, however, cameras
119+
capable of recording images in 3-D have appeared and are now becoming
120+
cheaper and more prevalent. Robert Oostenveld, originator of the
121+
FieldTrip toolbox, alerted us that he and his students in Nijmegen had
122+
added functions to FieldTrip to compute the 3-D positions of scalp
123+
electrodes from scanned 3-D images acquired by one such camera, the
124+
[Structure scanner](https://store.structure.io/store) mounted to an
125+
Apple iPad.
126+
127+
Recording the actual electrode positions in a 3-D head image minimizes
128+
the time spent by the experimenter and subject on electrode position
129+
recording during the recording session to a minute or less, while also
130+
minimizing position digitizing system cost (to near $1000) and the space
131+
required (to an iPad-sized scanner plus enough space to walk around the
132+
seated subject holding the scanner). Digitizing the imaged electrode
133+
positions during data preprocessing is made convenient in *get_chanlocs*
134+
by using a montage template. In future, we anticipate an automated
135+
template-matching app will reduce the time required to simply checking the
136+
results of an automated procedure.
137+
138+
Required Resources
139+
------------------
140+
141+
The *get_chanlocs* plug-in has been tested under Matlab 9.1 (R2016b) on
142+
Windows 10 as well as OS X 10.10.5. Please provide feedback concerning
143+
any incompatibilities, bugs, or feature suggestions using the [GitHub
144+
issue tracker](https://github.com/cll008/get_chanlocs/issues/).
145+
146+
<b>Scanning software:</B> In theory, any combination of 3-D scanning
147+
hardware and software that produces a Wavefront OBJ file (.obj) with the
148+
corresponding material texture library (.mtl) and JPEG (.jpg) files can
149+
be used for the plug-in. *get_chanlocs* has only been tested with head
150+
models produced by the [Structure Sensor
151+
camera](https://store.structure.io/store) attached to an iPad Air (model
152+
A1474). We use the default [calibrator
153+
app](https://itunes.apple.com/us/app/structure-sensor-calibrator/id914275485?mt=8)
154+
to align the Sensor camera and the tablet camera, and both the default
155+
scanning software
156+
([Scanner](https://itunes.apple.com/us/app/scanner-structure-sensor-sample/id891169722?mt=8))
157+
and a third-party scanning software ([itSeez3D](https://itseez3d.com/)).
158+
159+
<b>Scanner vs. itSeez3D:</B> While the default scanning app
160+
([Scanner](https://itunes.apple.com/us/app/scanner-structure-sensor-sample/id891169722?mt=8))
161+
is free and produces models that are of high enough quality for the
162+
plug-in, we find the third-party app ([itSeez3D](https://itseez3d.com/))
163+
easier to use. It seems to be more robust, providing better tracking and
164+
faster scans while minimizing the effects of adverse lighting
165+
conditions. itSeez3D features a user friendly online account system for
166+
accessing high-resolution models that are processed on their cloud
167+
servers. Users may contact [itSeez3D](mailto:support@itseez3d.com) to
168+
change processing parameters; for *get_chanlocs*, we found that
169+
increasing the model polygon count beyond 400,000 results in longer
170+
processing time without providing an appreciable increase in resolution.
171+
Unfortunately, while scanning is free, exporting models (required for
172+
*get_chanlocs*) has a [per export or subscription
173+
cost](https://itseez3d.com/pricing.html). Please contact
174+
[itSeez3D](mailto:support@itseez3d.com) regarding discounts for
175+
educational institutions and other non-commercial purposes.
176+
177+
Common Issues
178+
-------------
179+
180+
<b>Incorrect units in resulting electrode locations:</b> 3-D .obj model
181+
units are estimated by relating the range of the recorded vertex
182+
coordinates to an average-sized head: a captured model that is much
183+
larger or smaller than average will cause errors. If your project
184+
requires scanning an atypically-sized model (e.g. large bust scan
185+
including ECG electrode, arm scan for EMG sleeve, etc.), manually set
186+
obj.unit - [instead of using
187+
*ft_determine_units*](https://github.com/cll008/get_chanlocs/blob/master/private/ft_convert_units.m#L86)
188+
- to the correct unit used by your scanner {'m','dm','cm','mm'} to avoid
189+
complications.
190+
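A minimal sketch (here *obj* is assumed to be the structure holding the loaded 3-D model):

```
% Set the unit explicitly instead of letting ft_determine_units guess it
% from the head size; use the unit your scanner actually outputs.
obj.unit = 'mm';   % one of 'm', 'dm', 'cm', 'mm'
```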
191+
<b>Keyboard settings:</b> Key presses are used to rotate 3-D head models
192+
when selecting electrode locations in *get_chanlocs*. Key press
193+
parameters should be adjusted per user discretion: macOS and Windows
194+
systems have adjustable Keyboard Properties, where 'Repeat delay' and
195+
'Repeat rate' may be modified. For some versions of macOS, long key
196+
presses will instead bring up an accent selection menu; in such cases,
197+
repeated single key presses can be used to control MATLAB, or users may
198+
disable the accent selection menu and enable repeating keys by typing
199+
(or pasting) the following in the terminal:
200+
`defaults write -g ApplePressAndHoldEnabled -bool false`
201+
202+
One way to circumvent this issue is to use the 3-D figure rotation tool
203+
in MATLAB. First select the rotation tool, then mark electrodes by
204+
clicking as normal; to rotate the model, hold the click after selecting
205+
an electrode and drag the mouse; if extra points get marked this way,
206+
press 'r' to remove them as necessary.
207+
208+
<b>Low resolution in head model:</b> Models will have lowered resolution
209+
in MATLAB due to how 3-D .obj files are imported and handled, even if they
210+
show a reasonable resolution in other 3-D modeling software (e.g.
211+
Paint 3D). Increase the polygon count of the model to circumvent this
212+
issue (we recommend 400,000 uniform polygons for itSeez3D).
213+
214+
Download
215+
--------
216+
217+
To download *get_chanlocs*, use the extension manager within EEGLAB.
218+
Alternatively, plug-ins are available for manual download from the
219+
[EEGLAB plug-in
220+
list](https://sccn.ucsd.edu/eeglab/plugin_uploader/plugin_list_all.php).
221+
222+
Revision History
223+
----------------
224+
225+
Please check the [commit
226+
history](https://github.com/cll008/get_chanlocs/commits/master) of the
227+
plug-in's GitHub repository.
228+
229+
*get_chanlocs* User Guide
230+
-------------------------
231+
232+
View/download the [*get_chanlocs* User
233+
Guide](https://sccn.ucsd.edu/eeglab/download/Get_chanlocs_userguide.pdf)
234+
235+
<div align=left>
236+
237+
Creation and documentation by:
238+
239+
**Clement Lee**, Applications Programmer, SCCN/INC/UCSD,
240+
<cll008@eng.ucsd.edu>
241+
**Scott Makeig**, Director, SCCN/INC/UCSD, <smakeig@ucsd.edu>
242+
243+
</div>
244+

‎plugins/list

Lines changed: 183 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,183 @@
1+
total 904
2+
drwxr-xr-x 52 dtyoung staff 1.6K Jul 11 16:42 ./
3+
drwxr-xr-x@ 35 dtyoung staff 1.1K Jul 11 16:41 ../
4+
-rw------- 1 dtyoung staff 5.7K Jul 11 12:46 ARfitStudio 2.md
5+
-rw-r--r-- 1 dtyoung staff 5.7K Jul 11 13:44 ARfitStudio.md
6+
-rw------- 1 dtyoung staff 5.9K Jul 11 12:43 EEG-BIDS 2.md
7+
-rw-r--r-- 1 dtyoung staff 5.9K Jul 11 13:44 EEG-BIDS.md
8+
-rw------- 1 dtyoung staff 5.2K Jul 11 12:43 ICLabel 2.md
9+
-rw-r--r-- 1 dtyoung staff 5.2K Jul 11 13:44 ICLabel.md
10+
drwxr-xr-x 11 dtyoung staff 352B Jul 11 13:44 NFT/
11+
-rw------- 1 dtyoung staff 3.4K Jul 11 12:46 NIMA 2.md
12+
-rw-r--r-- 1 dtyoung staff 3.4K Jul 11 13:44 NIMA.md
13+
drwxr-xr-x 3 dtyoung staff 96B Jul 11 13:44 PACT/
14+
-rw------- 1 dtyoung staff 23K Jul 11 12:44 PACTools 2.md
15+
-rw-r--r-- 1 dtyoung staff 23K Jul 11 13:44 PACTools.md
16+
-rw------- 1 dtyoung staff 11K Jul 11 12:43 PowPowCAT 2.md
17+
-rw-r--r-- 1 dtyoung staff 11K Jul 11 13:44 PowPowCAT.md
18+
drwxr-xr-x 34 dtyoung staff 1.1K Jul 11 13:50 SIFT/
19+
-rw------- 1 dtyoung staff 4.0K Jul 11 12:45 amica 2.md
20+
-rw-r--r-- 1 dtyoung staff 4.0K Jul 11 13:44 amica.md
21+
drwxr-xr-x 3 dtyoung staff 96B Jul 11 13:45 clean_rawdata/
22+
-rw------- 1 dtyoung staff 2.4K Jul 11 12:43 dipfit 2.md
23+
-rw-r--r-- 1 dtyoung staff 2.4K Jul 11 13:44 dipfit.md
24+
-rw------- 1 dtyoung staff 5.8K Jul 11 12:43 eegstats 2.md
25+
-rw-r--r-- 1 dtyoung staff 5.8K Jul 11 13:44 eegstats.md
26+
-rw------- 1 dtyoung staff 4.5K Jul 11 12:45 fMRIb 2.md
27+
-rw-r--r-- 1 dtyoung staff 4.5K Jul 11 13:44 fMRIb.md
28+
-rw------- 1 dtyoung staff 712B Jul 11 12:46 firfilt 2.md
29+
-rw-r--r-- 1 dtyoung staff 712B Jul 11 13:44 firfilt.md
30+
drwxr-xr-x 3 dtyoung staff 96B Jul 11 13:44 get_chanlocs/
31+
-rw------- 1 dtyoung staff 22K Jul 11 12:43 groupSIFT 2.md
32+
-rw-r--r-- 1 dtyoung staff 22K Jul 11 13:44 groupSIFT.md
33+
-rw------- 1 dtyoung staff 32K Jul 11 12:46 imat 2.md
34+
-rw-r--r-- 1 dtyoung staff 32K Jul 11 13:44 imat.md
35+
-rw-r--r-- 1 dtyoung staff 200B Jul 11 13:44 index.md
36+
-rw-r--r-- 1 dtyoung staff 0B Jul 11 16:42 list
37+
drwxr-xr-x 17 dtyoung staff 544B Jul 11 13:44 nsgportal/
38+
-rw------- 1 dtyoung staff 2.1K Jul 11 13:14 nsgportal 2.md
39+
-rw-r--r-- 1 dtyoung staff 2.1K Jul 11 13:44 nsgportal.md
40+
-rw------- 1 dtyoung staff 2.5K Jul 11 12:43 nwbio 2.md
41+
-rw-r--r-- 1 dtyoung staff 2.5K Jul 11 13:44 nwbio.md
42+
-rw------- 1 dtyoung staff 14K Jul 11 12:46 relica 2.md
43+
-rw-r--r-- 1 dtyoung staff 14K Jul 11 13:44 relica.md
44+
-rw------- 1 dtyoung staff 16K Jul 11 12:43 roiconnect 2.md
45+
-rw-r--r-- 1 dtyoung staff 16K Jul 11 13:44 roiconnect.md
46+
-rw------- 1 dtyoung staff 12K Jul 11 12:46 std_dipoleDensity 2.md
47+
-rw-r--r-- 1 dtyoung staff 12K Jul 11 13:44 std_dipoleDensity.md
48+
-rw------- 1 dtyoung staff 5.4K Jul 11 12:43 trimOutlier 2.md
49+
-rw-r--r-- 1 dtyoung staff 5.4K Jul 11 13:44 trimOutlier.md
50+
-rw------- 1 dtyoung staff 5.3K Jul 11 12:46 viewprops 2.md
51+
-rw-r--r-- 1 dtyoung staff 5.3K Jul 11 13:44 viewprops.md
52+
-rw------- 1 dtyoung staff 1.3K Jul 11 12:44 zapline-plus 2.md
53+
-rw-r--r-- 1 dtyoung staff 1.3K Jul 11 13:44 zapline-plus.md
54+
55+
./NFT:
56+
total 176
57+
drwxr-xr-x 11 dtyoung staff 352B Jul 11 13:44 ./
58+
drwxr-xr-x 52 dtyoung staff 1.6K Jul 11 16:42 ../
59+
-rw-r--r-- 1 dtyoung staff 4.9K Jul 11 13:46 Chapter_01_Getting_Started_with_NFT.md
60+
-rw-r--r-- 1 dtyoung staff 15K Jul 11 13:46 Chapter_02_Head_Modeling_from_MR_Images.md
61+
-rw-r--r-- 1 dtyoung staff 7.0K Jul 11 13:46 Chapter_03_Forward_Model_Generation.md
62+
-rw-r--r-- 1 dtyoung staff 7.3K Jul 11 13:46 Chapter_04_NFT_Examples.md
63+
-rw-r--r-- 1 dtyoung staff 7.0K Jul 11 13:46 Chapter_05_NFT_Commands_and_Functions.md
64+
-rw-r--r-- 1 dtyoung staff 2.7K Jul 11 13:46 NFT_Appendix_A.md
65+
-rw-r--r-- 1 dtyoung staff 22K Jul 11 13:46 NFT_Appendix_B.md
66+
-rw-r--r-- 1 dtyoung staff 3.0K Jul 11 13:46 NFT_Appendix_C.md
67+
-rw-r--r-- 1 dtyoung staff 4.8K Jul 11 13:46 index.md
68+
69+
./PACT:
70+
total 24
71+
drwxr-xr-x 3 dtyoung staff 96B Jul 11 13:44 ./
72+
drwxr-xr-x 52 dtyoung staff 1.6K Jul 11 16:42 ../
73+
-rw-r--r-- 1 dtyoung staff 12K Jul 11 13:46 index.md
74+
75+
./SIFT:
76+
total 672
77+
drwxr-xr-x 34 dtyoung staff 1.1K Jul 11 13:50 ./
78+
drwxr-xr-x 52 dtyoung staff 1.6K Jul 11 16:42 ../
79+
-rw------- 1 dtyoung staff 3.8K Jul 11 13:37 Chapter-1.-Downloads 2.md
80+
-rw-r--r-- 1 dtyoung staff 3.8K Jul 11 13:46 Chapter-1.-Downloads.md
81+
-rw------- 1 dtyoung staff 11K Jul 11 13:37 Chapter-2.-Introduction 2.md
82+
-rw-r--r-- 1 dtyoung staff 11K Jul 11 13:46 Chapter-2.-Introduction.md
83+
-rw------- 1 dtyoung staff 501B Jul 11 13:37 Chapter-3-and-4-Theory 2.md
84+
-rw-r--r-- 1 dtyoung staff 501B Jul 11 13:46 Chapter-3-and-4-Theory.md
85+
-rw------- 1 dtyoung staff 2.9K Jul 11 13:37 Chapter-5.-Computing-connectivity 2.md
86+
-rw-r--r-- 1 dtyoung staff 2.9K Jul 11 13:46 Chapter-5.-Computing-connectivity.md
87+
-rw------- 1 dtyoung staff 1.8K Jul 11 13:37 Chapter-5.1.-SIFT-install 2.md
88+
-rw-r--r-- 1 dtyoung staff 1.8K Jul 11 13:46 Chapter-5.1.-SIFT-install.md
89+
-rw------- 1 dtyoung staff 3.5K Jul 11 13:37 Chapter-5.2.-Loading-and-preparing-the-data 2.md
90+
-rw-r--r-- 1 dtyoung staff 3.5K Jul 11 13:46 Chapter-5.2.-Loading-and-preparing-the-data.md
91+
-rw------- 1 dtyoung staff 12K Jul 11 13:37 Chapter-5.3.-SIFT-preprocessing 2.md
92+
-rw-r--r-- 1 dtyoung staff 12K Jul 11 13:46 Chapter-5.3.-SIFT-preprocessing.md
93+
-rw------- 1 dtyoung staff 20K Jul 11 13:37 Chapter-5.4.-Model-Fitting-and-Validation 2.md
94+
-rw-r--r-- 1 dtyoung staff 20K Jul 11 13:46 Chapter-5.4.-Model-Fitting-and-Validation.md
95+
-rw------- 1 dtyoung staff 2.9K Jul 11 13:37 Chapter-5.5.-Connectivity-Estimation 2.md
96+
-rw-r--r-- 1 dtyoung staff 2.9K Jul 11 13:46 Chapter-5.5.-Connectivity-Estimation.md
97+
-rw------- 1 dtyoung staff 25K Jul 11 13:37 Chapter-6.-Visualization 2.md
98+
-rw-r--r-- 1 dtyoung staff 25K Jul 11 13:46 Chapter-6.-Visualization.md
99+
-rw------- 1 dtyoung staff 19K Jul 11 13:37 Chapter-7.-Statistics-in-SIFT 2.md
100+
-rw-r--r-- 1 dtyoung staff 19K Jul 11 13:46 Chapter-7.-Statistics-in-SIFT.md
101+
-rw------- 1 dtyoung staff 2.6K Jul 11 13:37 Chapter-8.-Conclusions-and-Acknowledgements 2.md
102+
-rw-r--r-- 1 dtyoung staff 2.6K Jul 11 13:46 Chapter-8.-Conclusions-and-Acknowledgements.md
103+
-rw------- 1 dtyoung staff 19K Jul 11 13:37 Function-Reference 2.md
104+
-rw-r--r-- 1 dtyoung staff 19K Jul 11 13:46 Function-Reference.md
105+
-rw------- 1 dtyoung staff 12K Jul 11 13:37 References 2.md
106+
-rw-r--r-- 1 dtyoung staff 12K Jul 11 13:46 References.md
107+
drwxr-xr-x 33 dtyoung staff 1.0K Jul 11 13:48 images/
108+
drwx------ 2 dtyoung staff 64B Jul 11 13:50 images 2/
109+
-rw------- 1 dtyoung staff 1.5K Jul 11 13:37 index 2.md
110+
-rw-r--r-- 1 dtyoung staff 1.5K Jul 11 13:46 index.md
111+
112+
./SIFT/images:
113+
total 13352
114+
drwxr-xr-x 33 dtyoung staff 1.0K Jul 11 13:48 ./
115+
drwxr-xr-x 34 dtyoung staff 1.1K Jul 11 13:50 ../
116+
-rw-r--r-- 1 dtyoung staff 2.3K Jul 11 13:44 Dl_ico.png
117+
-rw-r--r-- 1 dtyoung staff 1.2K Jul 11 13:44 Dlpdf.jpeg
118+
-rw-r--r-- 1 dtyoung staff 67K Jul 11 13:44 SIFT_splashslide.jpg
119+
-rw-r--r-- 1 dtyoung staff 65K Jul 11 13:44 SIFTfig1.jpg
120+
-rw-r--r-- 1 dtyoung staff 101K Jul 11 13:44 SIFTfig10.jpg
121+
-rw-r--r-- 1 dtyoung staff 57K Jul 11 13:44 SIFTfig11.jpg
122+
-rw-r--r-- 1 dtyoung staff 330K Jul 11 13:44 SIFTfig12.jpg
123+
-rw-r--r-- 1 dtyoung staff 135K Jul 11 13:44 SIFTfig13.jpg
124+
-rw-r--r-- 1 dtyoung staff 248K Jul 11 13:44 SIFTfig14.jpg
125+
-rw-r--r-- 1 dtyoung staff 185K Jul 11 13:44 SIFTfig15.jpg
126+
-rw-r--r-- 1 dtyoung staff 147K Jul 11 13:44 SIFTfig16.jpg
127+
-rw-r--r-- 1 dtyoung staff 178K Jul 11 13:44 SIFTfig17.jpg
128+
-rw-r--r-- 1 dtyoung staff 239K Jul 11 13:44 SIFTfig18a.jpg
129+
-rw-r--r-- 1 dtyoung staff 244K Jul 11 13:44 SIFTfig18b.jpg
130+
-rw-r--r-- 1 dtyoung staff 248K Jul 11 13:44 SIFTfig19.jpg
131+
-rw-r--r-- 1 dtyoung staff 118K Jul 11 13:44 SIFTfig2.jpg
132+
-rw-r--r-- 1 dtyoung staff 100K Jul 11 13:44 SIFTfig20.jpg
133+
-rw-r--r-- 1 dtyoung staff 509K Jul 11 13:44 SIFTfig21.jpg
134+
-rw-r--r-- 1 dtyoung staff 462K Jul 11 13:44 SIFTfig22.jpg
135+
-rw-r--r-- 1 dtyoung staff 317K Jul 11 13:44 SIFTfig23.jpg
136+
-rw-r--r-- 1 dtyoung staff 827K Jul 11 13:44 SIFTfig24.jpg
137+
-rw-r--r-- 1 dtyoung staff 523K Jul 11 13:44 SIFTfig25.jpg
138+
-rw-r--r-- 1 dtyoung staff 198K Jul 11 13:44 SIFTfig26.jpg
139+
-rw-r--r-- 1 dtyoung staff 194K Jul 11 13:44 SIFTfig27.jpg
140+
-rw-r--r-- 1 dtyoung staff 95K Jul 11 13:44 SIFTfig3.jpg
141+
-rw-r--r-- 1 dtyoung staff 145K Jul 11 13:44 SIFTfig4.png
142+
-rw-r--r-- 1 dtyoung staff 125K Jul 11 13:44 SIFTfig5.jpg
143+
-rw-r--r-- 1 dtyoung staff 127K Jul 11 13:44 SIFTfig6.jpg
144+
-rw-r--r-- 1 dtyoung staff 240K Jul 11 13:44 SIFTfig7.jpg
145+
-rw-r--r-- 1 dtyoung staff 167K Jul 11 13:44 SIFTfig8.jpg
146+
-rw-r--r-- 1 dtyoung staff 223K Jul 11 13:44 SIFTfig9.jpg
147+
148+
./SIFT/images 2:
149+
total 0
150+
drwx------ 2 dtyoung staff 64B Jul 11 13:50 ./
151+
drwxr-xr-x 34 dtyoung staff 1.1K Jul 11 13:50 ../
152+
153+
./clean_rawdata:
154+
total 40
155+
drwxr-xr-x 3 dtyoung staff 96B Jul 11 13:45 ./
156+
drwxr-xr-x 52 dtyoung staff 1.6K Jul 11 16:42 ../
157+
-rw-r--r-- 1 dtyoung staff 20K Jul 11 13:46 index.md
158+
159+
./get_chanlocs:
160+
total 24
161+
drwxr-xr-x 3 dtyoung staff 96B Jul 11 13:44 ./
162+
drwxr-xr-x 52 dtyoung staff 1.6K Jul 11 16:42 ../
163+
-rw-r--r-- 1 dtyoung staff 12K Jul 11 13:46 index.md
164+
165+
./nsgportal:
166+
total 192
167+
drwxr-xr-x 17 dtyoung staff 544B Jul 11 13:44 ./
168+
drwxr-xr-x 52 dtyoung staff 1.6K Jul 11 16:42 ../
169+
-rw-r--r-- 1 dtyoung staff 8.3K Jul 11 13:46 Creating-and-managing-a-job-from-pop_nsg-GUI.md
170+
-rw-r--r-- 1 dtyoung staff 5.4K Jul 11 13:46 Creating-and-managing-an-NSG-job-using-pop_nsg-from-the-command-line.md
171+
-rw-r--r-- 1 dtyoung staff 197B Jul 11 13:46 EEGLAB-command-line-tools-to-RESTful-interface.md
172+
-rw-r--r-- 1 dtyoung staff 1.2K Jul 11 13:46 EEGLAB-plug-ins-on-NSG.md
173+
-rw-r--r-- 1 dtyoung staff 860B Jul 11 13:46 Registering-at-NSG.md
174+
-rw-r--r-- 1 dtyoung staff 594B Jul 11 13:46 Registering-on-NSG-R.md
175+
-rw-r--r-- 1 dtyoung staff 264B Jul 11 13:46 Running-AMICA-on-NSG.md
176+
-rw-r--r-- 1 dtyoung staff 967B Jul 11 13:46 Scheme-of-plug-in-functions-call.md
177+
-rw-r--r-- 1 dtyoung staff 1.4K Jul 11 13:46 Setting-up-the-plug-in.md
178+
-rw-r--r-- 1 dtyoung staff 18K Jul 11 13:46 Using-pop_nsg-command-line-tools-in-your-EEGLAB-plug-in.md
179+
-rw-r--r-- 1 dtyoung staff 5.3K Jul 11 13:46 Using-the-Open-EEGLAB-Portal.md
180+
-rw-r--r-- 1 dtyoung staff 1.1K Jul 11 13:46 _Sidebar.md
181+
-rw-r--r-- 1 dtyoung staff 2.2K Jul 11 13:46 index.md
182+
-rw-r--r-- 1 dtyoung staff 4.9K Jul 11 13:46 nsgportal-command-line-tools.md
183+
-rw-r--r-- 1 dtyoung staff 3.9K Jul 11 13:46 nsgportal-graphical-user-interface:-pop_nsg.md
Lines changed: 101 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,101 @@
1+
---
2+
layout: default
3+
title: Creating-and-managing-a-job-from-pop_nsg-GUI
4+
long_title: Creating-and-managing-a-job-from-pop_nsg-GUI
5+
parent: nsgportal
6+
grand_parent: Plugins
7+
---
8+
# Creating and managing a job from pop_nsg GUI
9+
In this tutorial we will address the creation and management of NSG jobs from EEGLAB using the plug-in *nsgportal*. Specifically, the tutorial will focus on performing these tasks from the pop_nsg GUI.
10+
Across the tutorial we will use a sample job distributed with the plug-in files. This job was previously used in the section [*Using the Open EEGLAB Portal*](https://github.com/sccn/nsgportal/wiki/Using-the-Open-EEGLAB-Portal).
11+
12+
To start the tutorial, launch the *pop_nsg* GUI by clicking **Tools > NSG Tools > Manage NSG jobs** as in the figure below:
13+
14+
<!-- EEGLAB GUI to launch pop_nsg-->
15+
16+
<center>
17+
<img src="https://github.com/sccn/nsgportal/blob/master/docs/img/eeglab_nsgtools_menus.jpg" alt="drawing" width="400"/>
18+
</center>
19+
20+
21+
The GUI depicted below will pop up. The GUI can also be invoked from the MATLAB command window by typing *pop_nsg*.
22+
23+
<center>
24+
<img src="https://github.com/sccn/nsgportal/blob/master/docs/img/pop_nsgguineu.jpg" alt="drawing" width="600"/>
25+
</center>
26+
27+
## Submitting a job to NSG
28+
To submit a job from the pop_nsg GUI, go to the GUI section **Submit new NSG job** and click the button **Browse...**. A window will pop up requesting the type of file to be opened, a zip file or a folder. To follow this tutorial, select **Zip file** and then navigate to the folder *.../nsgportal/demos/demo_jobs/* and select the zip file *TestingEEGLABNSG.zip*. Although we use the zip file option in this demo, a folder containing the job files may be selected similarly. The 'job' file must consist of the data to be used for the computation and a MATLAB script (.m) to execute. If functions that do not belong to MATLAB or EEGLAB are used in your script, you should add them to the file as well.
29+
30+
Once the job file is selected, the list of *.m* files in the job zip file will appear in the list of **Matlab scripts to execute**. From here, select the file that must be executed on NSG. In this tutorial, we will select 'run_ica_nsg.m'.
31+
Notice in the edit ***Job ID (default or custom)*** that a Job ID has been assigned to the job. This job ID is a unique identifier provided to NSG to locate your job. The default job ID assigned is the combination of the job file name and a random number. We encourage users to change this field and set a more meaningful name. Recall, this is a unique ID, so do not use one ID that was used before! In this tutorial we will identify our job as *nsgtutorial*.
32+
33+
Additional options, e.g., the running time allocated on NSG, can be defined in the edit **NSG run options**. In this tutorial we will set the running time to 0.5 hours by entering the option: *'runtime' 0.5*.
34+
With the exception of the path to the job file, the GUI at this point of the tutorial should look like the following:
35+
36+
<!-- Defining JOB ID -->
37+
<center>
38+
<img src="https://github.com/sccn/nsgportal/blob/master/docs/img/pop_nsg_job2send.jpg" alt="drawing" width="600"/>
39+
</center>
40+
41+
After this, you may click the button **Run job on NSG** to submit the job to NSG. But don't do this yet! You may want to test your job submission before that, right?
42+
43+
## Testing your job locally
44+
A job can be tested locally on your computer before being submitted to NSG. For this, a downscaled version of the job should be used (otherwise it would defeat the purpose of using HPC). For example, if your NSG job runs a loop several times, your local test may perform only one iteration of the loop. The purpose of this test is to check the script you want to execute on NSG. The downscaling should not affect the ability of the script to run. In this tutorial, we will use a script ('*run_ica_nsg_downscaled.m*') similar to the one to be executed on NSG but without actually computing ICA.
45+
For this, under **Matlab scripts to execute**, select '*run_ica_nsg_downscaled.m*' and click the button **Test job locally**. The script should take just a few seconds to run without issues. Notice that testing your job is not required for job submission; it is just a tool for self-assessment of your job.
46+
47+
Once the testing is done, you can change back the script selected under **Matlab scripts to execute** to the one defined in the previous section and then submit your job to NSG. After successful submission of the job, the Job ID assigned previously will be shown in the list of jobs under your credentials in NSG. At the same time, the status of the job will be displayed in **NSG job status**.
48+
49+
<!-- Job submited -->
50+
<center>
51+
<img src="https://github.com/sccn/nsgportal/blob/master/docs/img/pop_nsg_jobsubmitted.jpg" alt="drawing" width="600"/>
52+
</center>
53+
54+
## Checking job status periodically
55+
Once the job is submitted (see **NSG job status**), you can ask *pop_nsg* to check the status of the jobs on your list periodically. For this, check the checkbox **Auto-refresh job list**. Messages with the job status will start being issued in the MATLAB command window. To continue with the tutorial you may uncheck this option if desired.
56+
57+
## Retrieving job information: Intermediate messages, files and error logs
58+
Intermediate messages issued in the NSG MATLAB session can be checked from *pop_nsg*. To retrieve this information, click the button **Matlab output log**. The information will be displayed in a MATLAB browser. The same button can be used at the end of the processing in order to retrieve the MATLAB log for the session corresponding to the processing of the job selected in *pop_nsg*.
59+
If any error occurred during the processing of your job, the font color of the job on the job list will change (red for a MATLAB error, orange for NSG errors) and the button **Matlab error log** will be enabled, from which you can retrieve the error log.
60+
61+
## Retrieving and loading results
62+
Once your job is completed, proceed to download the results by clicking the button **Download job results**. A message similar to the one below will be displayed in the command window. The list of downloaded result files can be seen in lines 2 to 12. Notice that both the results and the submitted files are in the downloaded archive. The results will be saved in the path defined in *pop_nsginfo*.
63+
64+
```
65+
1 >> Accessing job: "https://nsgr.sdsc.edu:8443/cipresrest/v1/job/ramonmc/NGBW-JOB-EEGLAB_TG-87B63F47681545A482D010776408F82D/output/33777" on NSG..../TestingEEGLABNSG/
66+
2 >> ./TestingEEGLABNSG/IC_scalp_maps.jpg
67+
3 >> ./TestingEEGLABNSG/eeglab_data_epochs_ica.set
68+
4 >> ./TestingEEGLABNSG/run_ica_nsg.m
69+
5 >> ./TestingEEGLABNSG/eeglab_data_ICA_output.set
70+
6 >> ./TestingEEGLABNSG/run_ica_jader_nsg.m
71+
7 >> ./TestingEEGLABNSG/eeglab_data_ICA_output.fdt
72+
8 >> ./TestingEEGLABNSG/eeglab_data_epochs_ica.fdt
73+
9 >> ./scheduler_stderr.txt
74+
10 >> ./scheduler_stdout.txt
75+
11 >> ./stderr.txt
76+
12 >> ./stdout.txt
77+
13 >> Done.
78+
14 >> File downloaded and decompressed in the
79+
15 >> output folder specified in the settings
80+
16 >> 1 >> Accessing job: "https://nsgr.sdsc.edu:8443/cipresrest/v1/job/ramonmc/NGBW-JOB-EEGLAB_TG-87B63F47681545A482D010776408F82D" on NSG...Done.
81+
17 >> Accessing jobs on NSG...Done
82+
```
83+
EEG files and a wide range of image formats generated as a result of an NSG job can be loaded from the *pop_nsg* GUI. For this, click the button **Load/plot results**. The following file explorer will pop up with the current path set to the one where the selected job results were downloaded.
84+
85+
<!-- Job submited -->
86+
<center>
87+
<img src="https://github.com/sccn/nsgportal/blob/master/docs/img/result_explorer1.jpg" alt="drawing" width="700"/>
88+
</center>
89+
90+
From this file explorer, navigate (by clicking) into the folder *TestingEEGLABNSG* to access the result files.
91+
Then, select e.g. the file *IC_scalp_maps.jpg* and then click the button **Load/plot**. The figure below will pop up. This figure was actually generated as part of our NSG job results (see script *run_ica_nsg.m*).
92+
93+
<!-- topos -->
94+
<center>
95+
<img src="https://github.com/sccn/nsgportal/blob/master/docs/img/results_topos.jpg" alt="drawing" width="400"/>
96+
</center>
97+
98+
In a similar way, a *.set* file can be selected and loaded from this interface.
99+
100+
## Deleting a job
101+
After retrieving the results, proceed to delete the job by selecting it in the job list and clicking the button **Delete this NSG job**. The job ID will then be removed from the interface.

plugins/nsgportal/Creating-and-managing-an-NSG-job-using-pop_nsg-from-the-command-line.md

Lines changed: 79 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,79 @@
1+
---
2+
layout: default
3+
title: Creating-and-managing-an-NSG-job-using-pop_nsg-from-the-command-line
4+
long_title: Creating-and-managing-an-NSG-job-using-pop_nsg-from-the-command-line
5+
parent: nsgportal
6+
grand_parent: Plugins
7+
---
8+
This tutorial describes in detail the process of submitting and managing a job using the command-line options in _pop_nsg_.
9+
10+
Before following the tutorial, you will need to install the plug-in and set up your NSG credentials. If you haven't done so, refer to [this section](https://github.com/sccn/nsgportal/wiki/Setting-up-the-plug-in) for instructions.
11+
12+
We will use the same job created in section [Preparing your files to submit a job](https://github.com/nucleuscub/pop_nsg_wiki/wiki/Preparing-your-files-to-submit-a-job) and used in the [Demo 1](https://github.com/nucleuscub/pop_nsg_wiki/wiki/Demo-1:-Creating-and-managing-a-job-form-pop_nsg-GUI). You can download the tutorial script [here](https://github.com/sccn/nsgportal/blob/master/demos/demo_command_line_tools.m) to follow along.
13+
14+
## Overview of the _pop_nsg_ command
15+
The **_pop_nsg_** command can be called with no arguments, in which case it will bring up the GUI interface (see [Demo 1](https://github.com/sccn/nsgportal/wiki/Creating-and-managing-a-job-from-pop_nsg-GUI)). Otherwise, the first argument to _pop_nsg_ should specify the action you want to take:
16+
* 'test': Perform a test run on the local computer [argument: the job .zip file or folder]
17+
* 'run': Submit the job to run on NSG [argument: the job .zip file or folder]
18+
* 'output': Retrieve the job output files [argument: job identifier or NSG job structure]
19+
* 'delete': Delete the job record from your NSG account [argument: job identifier or NSG job structure]
20+
21+
## Running an NSG job from the command line
22+
In this example, we will assign the path to the job zip file or folder to a variable, but in general, the path can be passed directly as a string to the function:
23+
```
24+
path2zip = '/Users/amon-ra/program_files/eeglab/plugins/nsgportal/demos/demo_jobs/TestingEEGLABNSG.zip';
25+
```
26+
To run the job using the default options use:
27+
```
28+
[currentjob, alljobs] = pop_nsg('run',path2zip,'filename', 'run_ica_nsg.m');
29+
```
30+
Notice that a second pair of parameters was used here (```'filename', 'run_ica_nsg.m'```). This is necessary when using the option _'run'_ to specify which script NSG should execute in order to run the job. Thus the option 'filename' is mandatory when using the option _'run'_.
31+
32+
The default options will assign a randomly generated ID to the job and will submit the job to run on NSG using default job parameters.
33+
34+
You may also specify some job parameters by providing Key-Value pair arguments to the function call. Optional arguments include:
35+
* 'jobid' : Client job ID string [default value will be the job name followed by a randomly generated number, e.g., _jobname_1234_]
36+
* 'outfile' : Results filename string [default: ['nsgresults_', '_jobid_']]
37+
* 'runtime' : Maximum time (in hours) to allocate for running the job on NSG [default: 0.5]
38+
* 'subdirname': Name of the sub-directory containing the script file (if the script file is not in the top level folder) [default: none]
39+
40+
Example of a 'run' command with optional arguments specified:
41+
```
42+
[NSGjobstruct, alljobs] = pop_nsg('run', path2zip, 'filename', 'run_ica_nsg.m', 'jobid', 'runica_testing', 'runtime', 0.3);
43+
```
44+
45+
The function returns a MATLAB NSG job structure for the submitted job (_NSGjobstruct_) and a structure containing information about all jobs under your NSG credential (_alljobs_).
46+
47+
## Checking job status periodically
48+
After the job is submitted it will be processed by the NSG server. You can check the status of the job periodically by calling the function _nsg_recurspoll_, providing as arguments either the jobid, job URL or job structure, and (optionally) the polling interval in seconds. Here the jobid is used as the first argument:
49+
```
50+
NSGjobstruct = nsg_recurspoll('runica_testing','pollinterval', 30);
51+
```
52+
53+
The _pollinterval_ should be at least the default 30 seconds. Keep the polling interval as long as possible to avoid overloading NSG.
54+
55+
_nsg_recurspoll_ returns a structure containing the status of the specified job.
56+
57+
Job results can be retrieved after the job completes. Use _nsg_recurspoll_ to confirm that the job has finished before attempting to access its results.
58+
59+
## Retrieving job results
60+
Access job results by providing either the _jobid_, job URL or job structure to _pop_nsg_:
61+
```
62+
[NSGjobstruct, alljobs] = pop_nsg('output', NSGjobstruct);
63+
```
64+
The input _NSGjobstruct_ contains the NSG job structure for the job we want to retrieve results from. The output _NSGjobstruct_ also contains the output status of the job. Output variable _alljobs_ contains current status information for all NSG jobs associated with the user credential.
65+
66+
## Deleting an NSG job
67+
To delete a job from the NSG record associated with the user NSG credential, provide either the _jobid_, job URL or job structure as a second argument:
68+
```
69+
[NSGjobstruct, alljobs] = pop_nsg('delete',NSGjobstruct);
70+
```
71+
Outputs are, as above, the modified NSG job structure and the information for all NSG jobs associated with the user credential. Notice that after this command is executed the job is deleted from your account on NSG. The structure returned as output _NSGjobstruct_ is a reference to the deleted job (as can be seen in the field ```NSGjobstruct.jobStage```, which is set to 'DELETED') and can no longer be used to access the job.
72+
73+
74+
75+
76+
77+
78+
79+

plugins/nsgportal/EEGLAB-command-line-tools-to-RESTful-interface.md

Lines changed: 8 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,8 @@
1+
---
2+
layout: default
3+
title: EEGLAB-command-line-tools-to-RESTful-interface
4+
long_title: EEGLAB-command-line-tools-to-RESTful-interface
5+
parent: nsgportal
6+
grand_parent: Plugins
7+
---
8+
# Under construction

plugins/nsgportal/EEGLAB-plug-ins-on-NSG.md

Lines changed: 17 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,17 @@
1+
---
2+
layout: default
3+
title: EEGLAB-plug-ins-on-NSG
4+
long_title: EEGLAB-plug-ins-on-NSG
5+
parent: nsgportal
6+
grand_parent: Plugins
7+
---
8+
The EEGLAB installation on NSG provides access to most of the EEGLAB plug-ins. See the table below for the list of plug-ins available from EEGLAB on NSG, as well as important links on the use of these plug-ins.
9+
10+
| Plug-in name | Plug-in description | Optimized for NSG |
11+
| --------- | ----------- | -------------- |
12+
| [AMICA](https://sccn.ucsd.edu/wiki/AMICA#How_to_run_AMICA.3F_Option_2:_Neuroscience_Gateway_.28NSG.29) | Amica ICA algorithm plugin for EEGLAB | Yes|
13+
| Dipfit | Source localization of ICA components | No|
14+
| firfilt | Routines for filtering data |No|
15+
16+
Not all the EEGLAB plug-ins on NSG are optimized for use on high-performance computing resources. Currently, we are adding these capabilities to key plug-ins developed at the SCCN. Optimization of plug-ins for HPC is indicated in the last column of the table. Documentation for the plug-ins can be found by following the link in the plug-in name.
17+

plugins/nsgportal/Registering-at-NSG.md

Lines changed: 19 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,19 @@
1+
---
2+
layout: default
3+
title: Registering-at-NSG
4+
long_title: Registering-at-NSG
5+
parent: nsgportal
6+
grand_parent: Plugins
7+
---
8+
The first step to using the Open EEGLAB Portal is to create an NSG account [HERE](https://www.nsgportal.org/gest/reg.php) (or by clicking on "Register account" on the NSG home page).
9+
10+
<center>
11+
<img src="https://github.com/nucleuscub/pop_nsg_wiki/blob/master/docs/img/500px-NSG11.png" alt="drawing" width="500"/>
12+
</center>
13+
14+
After your account is approved by the NSG team (typically within 2 days), the second step is to enter your NSG user credentials [HERE](https://nsgdev.sdsc.edu:8443/portal2/login!input.action) (or select "Access NSG portal" on the [NSG home page](http://www.nsgportal.org/)).
15+
16+
17+
<center>
18+
<img src="https://github.com/nucleuscub/pop_nsg_wiki/blob/master/docs/img/500px-NSG2.png" alt="drawing" width="500"/>
19+
</center>

plugins/nsgportal/Registering-on-NSG-R.md

Lines changed: 8 additions & 0 deletions
@@ -0,0 +1,8 @@
---
layout: default
title: Registering-on-NSG-R
long_title: Registering-on-NSG-R
parent: nsgportal
grand_parent: Plugins
---
There are two ways to access NSG: via the NSG web portal and through the NSG-R command line interface. The latter uses *curl* commands at its core to communicate with NSG and is the interface used by the *nsgportal* plug-in. Since both are interfaces to the same NSG service, if you have already registered with NSG, you can use the same login and password for NSG-R. If you have not done so, visit [this](https://github.com/sccn/nsgportal/wiki/Registering-at-NSG) section of the wiki.
Lines changed: 8 additions & 0 deletions
@@ -0,0 +1,8 @@
---
layout: default
title: Running-AMICA-on-NSG
long_title: Running-AMICA-on-NSG
parent: nsgportal
grand_parent: Plugins
---
See this [page](https://github.com/japalmer29/amica/wiki/AMICA#how-to-run-amica-option-2-neuroscience-gateway-nsg) on the AMICA GitHub wiki.
Lines changed: 12 additions & 0 deletions
@@ -0,0 +1,12 @@
---
layout: default
title: Scheme-of-plug-in-functions-call
long_title: Scheme-of-plug-in-functions-call
parent: nsgportal
grand_parent: Plugins
---
The figure below shows the scheme of function calls in _nsgportal_. The plug-in contains two main sets, or layers, of functions, designated by the prefixes _pop__ and _nsg__. The _pop__ functions open a parameter entry window when called with fewer than the required arguments, and otherwise run directly without opening a window. The _nsg__ functions can be called directly from the MATLAB command line or from other MATLAB scripts and functions; they perform lower-level processing than the _pop__ functions. A plug-in function (_eegplugin_nsgportal_) manages the inclusion and appearance of the nsgportal item in the main EEGLAB window menu. A minimal sketch of the two calling conventions is given after the figure.

<center>
<img src="https://github.com/sccn/nsgportal/blob/master/docs/img/nsgportal_scheme_call.png" alt="drawing" width="1000"/>
</center>
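Below is a minimal sketch of the two calling conventions, using *pop_nsg*; the job folder and script name are placeholders:

```
% Called with no arguments, a pop_ function opens its parameter entry window
pop_nsg;

% Called with its required arguments, it runs directly without opening a window
[NSGjobstruct, alljobs] = pop_nsg('run', '/path/to/jobfolder', ...
                                  'filename', 'myscript.m');
```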
Lines changed: 22 additions & 0 deletions
@@ -0,0 +1,22 @@
---
layout: default
title: Setting-up-the-plug-in
long_title: Setting-up-the-plug-in
parent: nsgportal
grand_parent: Plugins
---
# Setting up the plug-in

## Installing the plug-in

Like all EEGLAB plug-ins, *nsgportal* can be installed in two different ways:

1. From the EEGLAB plug-in manager: Launch EEGLAB and go to **File -> Manage EEGLAB extensions**. The plug-in manager GUI will pop up. In this GUI, find and select the *nsgportal* plug-in in the main window, then click the Install/Update button to install it.
2. From the web: Download the zip file containing the *nsgportal* plug-in either from its [GitHub](https://github.com/sccn/nsgportal) page (select Download Zip on GitHub) or from the EEGLAB plug-in list [here](https://sccn.ucsd.edu/wiki/Plugin_list_all). Decompress the zip file into the folder *../eeglab/plugins* and then restart EEGLAB.

## Setting your NSG credentials
Use menu item **Tools > NSG Tools > NSG portal credentials and settings**. Simply enter your NSG user name and password (see the registration section of this wiki). The **NSG key** and **NSG Url** inputs do not need to be modified. The **Output folder** entry is the folder where NSG data will be downloaded. The same settings can also be made from the command line, as sketched after the image below.

<center>
<img src="https://github.com/nucleuscub/pop_nsg_wiki/blob/master/docs/img/500px-Nsgcredentials.png" alt="drawing" width="500"/>
</center>
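For reference, the same settings can be entered programmatically with *pop_nsginfo* (a sketch with placeholder values; see the command line tools section for details):

```
% Set NSG credentials and the download folder from the command line
pop_nsginfo('nsgusername',  'your_username', ...
            'nsgpassword',  'your_password', ...
            'outputfolder', '/path/to/output/folder');
```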

‎plugins/nsgportal/Using-pop_nsg-command-line-tools-in-your-EEGLAB-plug-in.md

Lines changed: 247 additions & 0 deletions
Large diffs are not rendered by default.
Lines changed: 74 additions & 0 deletions
@@ -0,0 +1,74 @@
---
layout: default
title: Using-the-Open-EEGLAB-Portal
long_title: Using-the-Open-EEGLAB-Portal
parent: nsgportal
grand_parent: Plugins
---
There are two approaches to using the Open EEGLAB Portal: through its NSG web interface (http://www.NSGportal.org), or through the NSG command line RESTful interface (NSG-R). This section describes the use of the web interface.

Start by logging into the NSG portal. Once logged in, you may upload a **zipped file** containing 1) an EEGLAB script calling 2) one or more data files by name (they should be in or under the same folder as the script). Click on the "Data" tab, select "Upload data", then upload the file containing your script and data.

<center>
<img src="https://github.com/nucleuscub/pop_nsg_wiki/blob/master/docs/img/500px-NSG3.png" alt="drawing" width="500"/>
</center>

You may download a 3.5-MB sample zip file (containing EEG data and a sample script) [HERE](https://sccn.ucsd.edu/mediawiki/images/7/7c/Testingeeglabonnsg.zip). Below is its list of contents:

<center>
<img src="https://github.com/nucleuscub/pop_nsg_wiki/blob/master/docs/img/200px-NSG32.png" alt="drawing" width="200"/>
</center>

The EEGLAB script (test.m) in this upload file is shown below (try minor alterations for testing purposes); a sketch of a comparable minimal script is given after the image:

<center>
<img src="https://github.com/nucleuscub/pop_nsg_wiki/blob/master/docs/img/500px-NSG33.png" alt="drawing" width="500"/>
</center>
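Since the script appears above only as a screenshot, here is a hedged sketch of the kind of minimal script such an upload might contain; the file and channel choices are illustrative placeholders, not the actual contents of test.m:

```
% myscript.m - minimal EEGLAB batch script for an NSG upload (placeholder names)
eeglab;                                       % initialize EEGLAB paths on the cluster
EEG = pop_loadset('filename', 'mydata.set');  % data file packed in the same zip
figure; plot(EEG.times, EEG.data(1, :));      % plot the first data channel
xlabel('Time (ms)'); ylabel('Amplitude (\muV)');
print(gcf, '-djpeg', 'channel1');             % save the figure so NSG returns it
```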
Now create a new NSG task. To do this, click on the "Task" tab and select "Create new task". Click on "Select input data" and select the zip file you uploaded above. Click on "Select tool" and select "EEGLAB".

<center>
<img src="https://github.com/nucleuscub/pop_nsg_wiki/blob/master/docs/img/500px-NSG4.png" alt="drawing" width="500"/>
</center>

Then click on "Select parameters". Enter the name of your script. This script must be in the root (top) folder of your zip archive. You may also (optionally) change other NSG settings on this page.

<center>
<img src="https://github.com/nucleuscub/pop_nsg_wiki/blob/master/docs/img/500px-NSG5.png" alt="drawing" width="500"/>
</center>

Finally, press "Save parameters". This will bring you back to the previous screen. You may now press "Save and Run Task", which will enter the task into the Comet queue. A warning is shown as in the image below. Simply click OK.

<center>
<img src="https://github.com/nucleuscub/pop_nsg_wiki/blob/master/docs/img/300px-NSG6_add.png" alt="drawing" width="350"/>
</center>

Once the task has been run, you will receive an email from NSG (see the email for the test job below).

<center>
<img src="https://github.com/nucleuscub/pop_nsg_wiki/blob/master/docs/img/500px-NSG6_3.png" alt="drawing" width="500"/>
</center>

Upon receiving this message, go back to the NSG interface and select the task you ran from the list of tasks, as shown below.

<center>
<img src="https://github.com/nucleuscub/pop_nsg_wiki/blob/master/docs/img/500px-NSG6.png" alt="drawing" width="500"/>
</center>

Select "View" following the heading "Output" (see above): this will bring up the output shown below.

<center>
<img src="https://github.com/nucleuscub/pop_nsg_wiki/blob/master/docs/img/500px-NSG7.png" alt="drawing" width="500"/>
</center>

You may now download the task output, a zip file containing the results of your task. Output files (see listing below) include the Matlab log and error log for your task. If your script saved data files, they will be there too. For example, if you use the zip file and script provided above, below (left) is what the unzipped output archive will contain; the figure below (right) is the .jpg image created by the test.m script. Saving output images in Matlab .fig format (instead of .jpg) will allow you to read them back into Matlab for further editing; the numeric data plotted in a figure can be read from the .fig file structure as well. Alternatively, saving figures in Postscript (e.g., as .epsc) will allow you to edit them in Illustrator. A sketch of these saving options follows the image below.

Note: To save needless transfer time and effort, the uploaded data file itself will not be returned with the output unless your script explicitly saves it under a new name. In the future, this will also allow you to temporarily store and reuse the uploaded data.

<center>
<img src="https://github.com/nucleuscub/pop_nsg_wiki/blob/master/docs/img/500px-NSG8.png" alt="drawing" width="500"/>
</center>
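A sketch of these saving options, as they might appear at the end of an NSG script (the figure handle and file names are placeholders):

```
fig = gcf;                          % handle to the figure the script created
savefig(fig, 'myresult.fig');       % .fig: read back later with openfig('myresult.fig')
print(fig, '-djpeg', 'myresult');   % .jpg bitmap for quick inspection
print(fig, '-depsc', 'myresult');   % .eps Postscript, editable in Illustrator
```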
### Interacting with NSG through the command line interface NSG-R, and the NSG EEGLAB plug-in
As mentioned at the beginning of this page, users can also interact with NSG (and make use of the Open EEGLAB Portal) through the command line RESTful interface NSG-R (R for REST interface). You can use the same credentials for NSG-R that you registered with NSG. NSG-R lets users move away from the browser window and perform computational work more programmatically. However, interacting with NSG-R directly requires some knowledge of web services and of appropriate networking tools and libraries (e.g., the curl command), which many people are unfamiliar with. We therefore developed an EEGLAB plug-in to simplify the process, allowing EEGLAB users to interact with NSG in the familiar MATLAB environment, through either a graphical or a command line interface. In the next sections of the wiki, we describe this EEGLAB plug-in to NSG and provide hands-on tutorials. If you are curious about NSG-R itself, you can read more at this [link](https://www.nsgportal.org/guide.html).

‎plugins/nsgportal/_Sidebar.md

Lines changed: 22 additions & 0 deletions
@@ -0,0 +1,22 @@
---
layout: default
title: _Sidebar
long_title: _Sidebar
parent: nsgportal
grand_parent: Plugins
---
# EEGLAB on NSG
* [EEGLAB on NSG](Home)
* [Registering on NSG](Registering-at-NSG)
* [Using the Open EEGLAB Portal](Using-the-Open-EEGLAB-Portal)
* [EEGLAB plug-ins on NSG](EEGLAB-plug\-ins-on-NSG)
# EEGLAB plug-in to NSG: nsgportal
* [Setting up the plug-in](Setting-up-the-plug-in)
* [Nsgportal plug-in GUI](https://github.com/sccn/nsgportal/wiki/nsgportal-graphical-user-interface:-pop_nsg)
* [Nsgportal command line tools](nsgportal-command-line-tools)
* [Scheme of plug-in functions call](scheme-of-plug\-in-functions-call)
# Tutorials
* [Tutorial 1: Creating and managing an NSG job from the _pop_nsg_ GUI](Creating-and-managing-a-job-from-pop_nsg-GUI)
* [Tutorial 2: Creating and managing an NSG job using _pop_nsg_ from the command line](Creating-and-managing-an-NSG-job-using-pop_nsg-from-the-command-line)
* [Tutorial 3: Using _pop_nsg_ in an EEGLAB plug-in](Using-pop_nsg-command-line-tools-in-your-EEGLAB-plug-in)
* [Tutorial 4: Running AMICA on NSG](Running-AMICA-on-NSG)

‎plugins/nsgportal/index.md

Lines changed: 18 additions & 0 deletions
@@ -0,0 +1,18 @@
---
layout: default
title: nsgportal
long_title: nsgportal
parent: Plugins
categories: plugins
has_children: true
---
# EEGLAB on NSG
An Open EEGLAB Portal to High-Performance Computing: As of late 2018, EEGLAB scripts may be run on high-performance computing resources via the freely available Neuroscience Gateway Portal to the NSF-sponsored [Comet supercomputer](https://ucsdnews.ucsd.edu/pressrelease/sdsc_to_double_comet_supercomputers_graphic_processor_count/) of the [San Diego Supercomputer Center](https://sdsc.edu/). The home page of the Neuroscience Gateway is shown below. NSG accounts are free and are not limited to US users, but the portal may only be used for non-commercial purposes (see the [NSG Terms of Use](http://www.nsgportal.org/policy.html)). We also recommend watching the [NSG tutorial videos](https://www.nsgportal.org/tutorial.html).
<center>
<img src="https://github.com/nucleuscub/pop_nsg_wiki/blob/master/docs/img/nsg_mainpage.jpg" alt="drawing" width="1000"/>
</center>

Like all (except personal!) supercomputers, Comet typically runs jobs in batch mode rather than in the interactive style of Matlab. However, Comet has all Matlab functions, as well as EEGLAB functions and many plug-in extensions, installed and ready to be called from scripts. When a job submitted through the NSG portal has run, you will receive an email from NSG alerting you to download the results. This means that the best uses of the Open EEGLAB Portal are for computationally intensive processes and/or for parallel, automated processing of large EEG studies. In the first category, we are now installing the most computationally intensive EEGLAB functions on Comet: AMICA, RELICA, time/frequency analysis, SCALE-optimized individual subject head modeling via NFT, etc. We will give more information here about using these installed capabilities as they become available.

To read a detailed overview of the Open EEGLAB Portal, browse a [conference paper submitted to the IEEE/EMBS Neural Engineering Conference](https://sccn.ucsd.edu/~scott/pdf/Delorme_Open_EEGLAB_Portal_NER18.pdf) in San Francisco (March, 2019) and our [Neuroimage](https://www.sciencedirect.com/science/article/pii/S1053811920302652) article.
Lines changed: 75 additions & 0 deletions
@@ -0,0 +1,75 @@
---
layout: default
title: nsgportal-command-line-tools
long_title: nsgportal-command-line-tools
parent: nsgportal
grand_parent: Plugins
---
Just like many other EEGLAB functions, *nsgportal* can be used either through its graphic user interface or through command line tools. The command line tools allow users to largely automate their analyses and make them easy to reproduce. In this section, we introduce the *nsgportal* command line tools to NSG.

# Using EEGLAB command line tools to NSG
Command line access to NSG from EEGLAB is mainly performed through two functions: *pop_nsginfo()* and *pop_nsg()*. The first function (*pop_nsginfo*) is used to set up your NSG credentials, while *pop_nsg* allows you to manage your NSG jobs. These functions are introduced in more detail in the next sections.

## Setting credentials - *pop_nsginfo*
Use the function *pop_nsginfo* to specify your NSG credentials. The function accepts key-value pair inputs, allowing users to specify their NSG user name (option ***'nsgusername'***), user password (option ***'nsgpassword'***), and the path to the folder where NSG data will be downloaded (option ***'outputfolder'***). The NSG key and NSG URL are also accepted as keys but need not be changed. See the code snippet below for an example *pop_nsginfo* command line call:
```
pop_nsginfo('nsgusername', 'your_username', 'nsgpassword', 'your_password', 'outputfolder', '/path/to/output/folder');
```
Running *pop_nsginfo* without any arguments will bring up its GUI.

## Managing your NSG jobs - *pop_nsg*
The function *pop_nsg* is the workhorse of the EEGLAB command line tools to NSG. Different ways of calling the function allow you to:

1. Create and run an NSG job (*pop_nsg* option ***'run'***)
2. Test the job on your local computer (*pop_nsg* option ***'test'***)
3. Retrieve its results (*pop_nsg* option ***'output'***)
4. Delete the job (*pop_nsg* option ***'delete'***)

In general, calling *pop_nsg* with these options follows the scheme:

```
[NSGJobStructure, AllNSGJobStructure] = pop_nsg('option_name', 'option_value');
```
When using the options ***'run'*** or ***'test'***, the second argument must always be the path to the zip file or folder containing the job to be submitted or tested. These options also require an additional pair of arguments defining the script (.m) to be run in your test or NSG run (option ***'filename'***). For instance:

```
[NSGJobStructure, AllNSGJobStructure] = pop_nsg('test', 'path/to/my/job/folder', 'filename', 'my_job_script.m');
```

The two outputs of *pop_nsg* above are (1) *NSGJobStructure*, the NSG job structure containing all relevant information on the submitted job (not available for option ***'test'***), and (2) *AllNSGJobStructure*, an array of all NSG jobs currently in your account (available from all *pop_nsg* options).

To call *pop_nsg* with the options ***'output'*** or ***'delete'***, simply pass the job ID (which can be assigned by the user during job submission), the NSG job structure (see above), or the job URL (the unique NSG identifier for a job; see *NSGJobStructure.jobstatus.selfUri.url*). For instance:

```
[NSGJobStructure, AllNSGJobStructure] = pop_nsg('output', 'My_Job_ID'); % Using job id
[NSGJobStructure, AllNSGJobStructure] = pop_nsg('output', NSGJobStructure.jobstatus.selfUri.url); % Using job URL
[NSGJobStructure, AllNSGJobStructure] = pop_nsg('output', NSGJobStructure); % Using job structure
```
Note that to run *pop_nsg* with the options ***'output'*** or ***'delete'***, a job must have been previously submitted to NSG. The use of ***'output'*** is restricted to jobs that have already completed.

Running *pop_nsg* without any arguments will bring up the graphic interface.

## Other useful functions
Here is a list of other useful EEGLAB command line tools to NSG.

### Request list of NSG jobs - *nsg_jobs*
Returns a cell array of all NSG jobs under your credentials.

Usage example:

```
alljobs = nsg_jobs;
```

### Recursive checking of job status - *nsg_recurspoll*
Recursively checks the status of a job running on NSG. The mandatory first argument must be a single job ID, NSG job structure, or job URL. A pair of parameters ('pollinterval', time_in_seconds) can be added to specify the time between polls.

Usage example:

```
NSGJobStructure = nsg_recurspoll('My_Job_ID', 'pollinterval', 120);
```
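Putting these pieces together, a typical scripted workflow might look like the following sketch (paths and names are placeholders):

```
% Submit a job, poll until it completes, then download its results
[job, alljobs] = pop_nsg('run', '/path/to/jobfolder', 'filename', 'myscript.m');
job = nsg_recurspoll(job, 'pollinterval', 120);  % check status every 2 minutes
[job, alljobs] = pop_nsg('output', job);         % retrieve results once completed
```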
# Summary
This article introduced the command line tools of the EEGLAB plug-in to NSG. For a detailed explanation and examples of how to use the EEGLAB-to-NSG command line tools, check out [this tutorial](https://github.com/sccn/nsgportal/wiki/Creating-and-managing-an-NSG-job-using-pop_nsg-from-the-command-line).
For any function of the EEGLAB plug-in to NSG, you can type "help *function_name*" to read its full documentation. The documentation explains what the function does, shows examples of how to use it, and lists the inputs it accepts and the outputs it produces. It should always be your first stop when you are unsure how to use a function.
Lines changed: 58 additions & 0 deletions
@@ -0,0 +1,58 @@
---
layout: default
title: nsgportal-graphical-user-interface:-pop_nsg
long_title: nsgportal-graphical-user-interface:-pop_nsg
parent: nsgportal
grand_parent: Plugins
---

# *nsgportal* graphical user interface: pop_nsg

Interaction with key functions in EEGLAB is mostly supported through both the command line and a graphical user interface (GUI). The plug-in *nsgportal* follows this philosophy. In this section, we introduce the GUI of the plug-in through its main function, *pop_nsg*. A more advanced tutorial on using the plug-in from its GUI is given in a later section (see [here](https://github.com/sccn/nsgportal/wiki/Creating-and-managing-a-job-from-pop_nsg-GUI)).

## The *pop_nsg* GUI
To open the *pop_nsg* GUI, simply type *pop_nsg* in the MATLAB command window. The GUI depicted below will pop up. The GUI can also be invoked from the main EEGLAB GUI by clicking ***Tools > NSG Tools > Manage NSG jobs***. For this, the plug-in *nsgportal* must first be installed (see [this](https://github.com/sccn/nsgportal/wiki/Registering-on-NSG-R) section).

<center>
<img src="https://github.com/sccn/nsgportal/blob/master/docs/img/pop_nsgguineu.jpg" alt="drawing" width="800"/>
</center>

The main functionalities supported from the *pop_nsg* GUI are listed below:

1. Submit an EEGLAB job to NSG (see the GUI section "Submit new NSG job").
2. Test NSG jobs locally on your computer.
3. Delete jobs from your NSG account.
4. Download NSG job results.
5. Load results from a completed and downloaded NSG job.
6. Visualize error and intermediate logs.
7. Access *pop_nsg* help.

## GUI main sections
### Submitting a new NSG job
From this section of the GUI you can test a job and submit it for processing on NSG.

Component list:

1. Edit box **Job folder or zip file**: Full path to the zip file or folder of a job to submit to NSG.
2. Button **Browse**: Browse for a zip file or folder of a job to submit to NSG.
3. Edit box **Matlab script to execute**: Matlab script for NSG to execute upon job submission.
4. Button **Test job locally**: Test the job locally on this computer. A downscaled version of the job MUST be used.
5. Edit box **Job ID (default or custom)**: Unique identifier for the NSG job. Modify this field at your convenience.
6. Edit box **NSG run options (see Help)**: NSG options for the job to be submitted. See *>> pop_nsg help* for the list of all options.
7. Button **Run job on NSG**: Submit the job to run on NSG.

### Interacting with your jobs
From this section of the GUI, you can interact with the jobs submitted to NSG under your credentials.
This section comprises the following components:

1. Button **Refresh job list**: Refresh the list of all of your NSG jobs.
2. Checkbox **Auto-refresh job list**: Automatically refresh the list of all of your NSG jobs.
3. Button **Delete this NSG job**: Delete the currently selected job.
4. List box **Select job**: List of all jobs under your credentials on NSG. A color code indicates the status of each job in this list; the legend can be found below the list box.
5. Button **Matlab output log**: Download and display the MATLAB command line output for the currently selected job. Intermediate job logs can also be visualized from here. In the figure above, this option appears disabled since there is no current job in the list.
6. Button **Matlab error log**: Download and display the MATLAB error log for the currently selected job. In the figure above, this option appears disabled since there is no current job in the list.
7. Button **Download job result**: Download the result files of the currently selected job.
8. Button **Load/plot results**: Launch a GUI for loading and displaying the results of the currently selected job.

### Checking your NSG job status
This section displays the messages issued by NSG during the submission and processing of the job currently selected in the list box **Select job**.
