
A wide field-of-view, modular, high-density diffuse optical tomography system for minimally constrained three-dimensional functional neuroimaging


Abstract

The ability to produce high-quality images of human brain function in any environment and during unconstrained movement of the subject has long been a goal of neuroimaging research. Diffuse optical tomography, which uses the intensity of back-scattered near-infrared light from multiple source-detector pairs to image changes in haemoglobin concentrations in the brain, is uniquely placed to achieve this goal. Here, we describe a new generation of modular, fibre-less, high-density diffuse optical tomography technology that provides exceptional sensitivity, a large dynamic range, a field-of-view sufficient to cover approximately one-third of the adult scalp, and dedicated motion sensing in each module. Using in-vivo measures, we demonstrate a noise-equivalent power of 318 fW and an effective dynamic range of 142 dB. We describe the application of this system to a novel somatomotor neuroimaging paradigm that involves subjects walking and texting on a smartphone. Our results demonstrate that wearable high-density diffuse optical tomography permits three-dimensional imaging of human brain function during overt movement of the subject; images of somatomotor cortical activation can be obtained while subjects move in a relatively unconstrained manner, and these images are in good agreement with those obtained while the subjects remain stationary. The scalable nature of the technology we describe here paves the way for the routine acquisition of high-quality, three-dimensional, whole-cortex diffuse optical tomography images of cerebral haemodynamics, both inside and outside of the laboratory environment, with profound implications for neuroscience.

Published by The Optical Society under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.

1. Introduction

Over the last 25 years, the application of functional neuroimaging technologies, particularly functional magnetic resonance imaging (fMRI), has yielded extensive and detailed insights into the relationships between brain structure, brain function and our interactions with the external world [1–3]. However, the limitations of functional neuroimaging technologies have meant that almost all of these insights are predicated on the assumption that brain function does not fundamentally differ between the laboratory and everyday environments. The ability to reliably image human brain function outside of the laboratory, using naturalistic stimuli and particularly while subjects move freely, would allow neuroscientists to investigate the effects of experimental environment and examine whole new aspects of human cognition.

Optical neuroimaging approaches such as functional near-infrared spectroscopy lend themselves to applications that are either extremely challenging or impossible with traditional neuroimaging technologies like fMRI [4–6]. Functional near-infrared spectroscopy (fNIRS) uses the differential absorption of near-infrared light transmitted through the tissues of the head to monitor cerebral haemodynamics [7–9]. In practice, fNIRS measurements are usually achieved by coupling multiple optical fibres to the scalp. These fibres transmit light to and from the subject to allow measurements of the changes in optical intensity between source and detector pairs (channels). Most fNIRS systems provide somewhere in the range of 1 to 50 measurement channels [7], which yield data that can be analysed channel-by-channel or converted into rudimentary two-dimensional maps with a resolution of around 30 mm [10,11].

In comparison to fMRI, fNIRS is relatively robust to subject movement [12,13]; fNIRS devices are comparatively small, silent, and portable, can be applied without stringent requirements on the local environment (e.g. no electromagnetic shielding is required), and are inexpensive to acquire and operate [7]. As a result, the fNIRS field has grown exponentially over the last decade [5,14].

Over the same period, there has been significant research effort focussed on extending the principles of fNIRS to provide three-dimensional, high-resolution functional optical neuroimaging [15–18]. Diffuse optical tomography (DOT) uses high numbers of near-infrared light sources and detectors to provide overlapping spatial sampling of the target object, which permits the acquisition of depth-resolved, three-dimensional images of human cerebral haemodynamics [19]. The advantages of increasing the density of the source and detector array are well-established and numerous: greater channel density provides greater lateral image resolution [20]; signal-to-noise is improved by virtue of the fact that multiple measurements are sensitive to every point within the field of view of the instrument [21]; and the wide range of source-detector distances permitted by high-density arrays provides depth sensitivity and specificity [22], minimizing the confounding effects of haemodynamic signals arising from the scalp [23–25]. However, increasing the number and density of the sources and detectors of near-infrared light that an instrument provides creates a number of challenges. The most obvious is the sheer number of optical fibres that must be coupled to the subject’s scalp to perform a measurement, which has approached ∼200 in some cases [18]. The use of optical fibres in these numbers severely limits the portability and applicability of the technique, and therefore undermines many of the key advantages of optical neuroimaging technologies that have underpinned the growth of the fNIRS field.

In recent years, fNIRS technologies have begun to advance away from optical-fibre-based systems that tether the subject to the bench and towards wearable devices [7]. There are now several wearable continuous wave (CW) fNIRS technologies that are commercially available, and these are already being applied to study moving subjects outside of the laboratory environment [7]. Despite these successes, all current wearable CW fNIRS technologies are subject to many of the same limitations as traditional, fibre-based devices: their low channel count and low sampling density mean they have poor spatial resolution, no depth sensitivity and a narrow field-of-view.

In 2016, our group published the first three-dimensional, functional images of the human brain to be obtained with a fibre-less, high-density diffuse optical tomography (HD-DOT) system, which we refer to as the ‘microNTS’ (μNTS) system [26]. While this system exhibited field-leading sensitivity, it had a relatively narrow field-of-view, and a limited dynamic range that resulted in issues with detector saturation. In this paper, we describe the design of a new generation of modular HD-DOT technology, which overcomes these limitations. We also describe the application of this technology to a novel motor neuroimaging paradigm that was specifically designed to test the capabilities of our system, and its capacity for imaging human brain function during overt movement of the subject.

2. Methods

2.1 Module design

2.1.1 Board layout and control unit

The full system consists of 12 independent DOT sensor modules (Fig. 1). Each module consists of a 30 mm × 30 mm, 4-layer printed circuit board (PCB) (Fig. 1(a)). Two pairs of LEDs (OIS-170-740-XT, OIS-170-IT855-XT, OSA-Opto, Germany) with wavelengths of 740 nm and 855 nm were selected to act as the optical sources, and were positioned at opposite corners of each board [27,28]. Four silicon photodiodes (photosensitive area: 2.77 × 2.77 mm², S12158-01CT, Hamamatsu, Japan) were used as detectors. Two photodiodes are positioned at the remaining two corners of the PCB, and two are positioned 10 mm from the centre of each pair of LEDs to form a short-separation channel [29]. Each module therefore provides within-board source-detector separations of 10 mm (2 channels), 23 mm (4 channels) and 28 mm (2 channels). In total, the system incorporates 24 source and 48 detector locations and yields (12 × 2) × (12 × 4) = 1152 NIRS channels per wavelength. Twelve modules provide coverage of approximately 220 cm², which equates to around one third of the accessible scalp surface of a typical adult [27,30]. When positioned on the scalp over the midline (Fig. 1(e),(f)), a 2 × 6 formation of modules allowed us to acquire data from across the somatosensory and motor cortices of both hemispheres in our adult subjects. Figure 1(g) shows the regions of the cortex to which the 12-module system was sensitive, as determined via photon transport modelling in subject-specific anatomical finite element meshes.
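
To make the channel arithmetic above concrete, the short Python sketch below (an illustration of the counting, not system code; all names are ours) enumerates every source-detector pairing in the 12-module array.

```python
# Illustrative sketch of the channel arithmetic for the 12-module array:
# 2 dual-wavelength sources and 4 detectors per module (values from the text).
N_MODULES, SRC_PER_MODULE, DET_PER_MODULE = 12, 2, 4

sources = [(m, s) for m in range(N_MODULES) for s in range(SRC_PER_MODULE)]
detectors = [(m, d) for m in range(N_MODULES) for d in range(DET_PER_MODULE)]

# Every source pairs with every detector, within and across modules.
channels = [(src, det) for src in sources for det in detectors]
within_module = [(s, d) for (s, d) in channels if s[0] == d[0]]

print(len(channels))       # 1152 channels per wavelength (24 × 48)
print(len(within_module))  # 96 within-module channels (8 per module)
```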

Fig. 1. The completed 12-module HD-DOT imaging system. a) Photographs of both sides of the module PCB with key components indicated; b) an exploded view of the module encapsulation layers; c) a photograph of 12 modules in a 2 × 6 formation; d) a system-level diagram showing the 12 modules, control unit, PC, and cabling; e) the system in place on a subject, with neoprene cap; f) diagrams illustrating the anatomical positioning of the 12-module array on the head of our subjects; and g) the group-average sensitivity map calculated via photon transport modelling indicating the regions of the cortex to which the array was sensitive.

Each photodiode was connected to one of the four channels of a charge-to-digital converter (DDC114, Texas Instruments, USA), consisting of a charge integration amplifier and a 20-bit sigma-delta analogue-to-digital converter. The integration of the charge amplifier and the analogue-to-digital converter on the same silicon substrate within a monolithic package allows the DDC114 to provide higher signal-to-noise performance than conventional discrete components. Each channel of the DDC114 has two independent integrators with a common input. While the integrated charge from one integrator is converted to a digitized measurement, the other integrator simultaneously accumulates new charge from the photodiode. This strategy avoids dead-time and permits continuous data acquisition.

A 32-bit ARM Cortex-M0 based micro-controller unit provides local control for each module. It regulates LED source illumination, retrieves optical measurements from the DDC114 and motion sensing data from the motion processing unit (MPU) and writes that data to an Inter-Integrated Circuit (I2C) bus. An 8-pin connector on the rear of each board allows the connection of a ribbon cable for power and data transmission. During operation, the power consumption of each module is 160 mW. A control unit based on a Field Programmable Gate Array (FPGA) provides power and control for all modules via the single ribbon cable (Fig. 1).

2.1.2 Shielding and encapsulation

While the adoption of the DDC114 allows for high detection sensitivity, it is vulnerable to external sources of noise, including electromagnetic interference and capacitive coupling. The principal source of electromagnetic interference is mains power line noise. If the integration cycle does not match the power line frequency (i.e. 50 Hz in the UK), charge will be induced during each integration cycle. Due to unpredictable variations in the power line frequency, it is not possible to correct for these effects retrospectively. Capacitive coupling can also induce extra charge as a result of electrostatic build-up, due to the low permittivity of the encapsulation materials. To eliminate the impact of these noise sources, we applied two stages of electromagnetic shielding. A dedicated copper shield was placed around each photodiode. Each shield had a 4 mm diameter circular opening on the upper surface to allow light to be coupled into the photodiode. This window was covered with a layer of indium tin oxide (ITO) film to maintain the continuity of the shield while allowing optical transmission. Each shield was connected to the ground of the system. A custom multilayer encapsulation (Fig. 1(b)) was then implemented to provide electrical isolation between the opto-electronics and the human scalp. Internal light leakage from LEDs to photodiodes was minimized by placing neoprene rubber optical isolation pads between the source and detector elements. The board was then placed inside a laser-cut acrylic case, which was itself permanently encased in another layer of copper shielding. A bespoke silicone light-guide with a diameter of 3.5 mm was used to couple light to and from the optical components, through the layers of encapsulation, to the scalp. Finally, a black-pigmented silicone rubber was moulded over the external shielding and around the light-guides to form a permanent, seamless, wipeable surface, with only a connector exposed on the upper surface. Including the 4 mm protrusion of each light guide, the total height in cross-section of the module is 14 mm. The weight of each module is approximately 15 g.

2.1.3 Sequence and signal optimization

In order to achieve a 12-module system with a minimum frame rate of 3 Hz, all required measurements associated with a given LED had to be obtained within 6.8 ms (48 LED illuminations plus one dark measurement per frame). It was also critical to ensure that the shortest channels were not at risk of saturation during in-vivo recordings. To achieve this level of performance, we took advantage of the fact that the DDC114 is programmable in terms of both gain and integration time. The maximum charge that can be accumulated on each channel of the DDC114 was 50 pC, with a theoretical root-mean-squared noise of 0.63 fC. These are equivalent to 16.3 nW and 205 fW of optical power, respectively, resulting in a theoretical maximum dynamic range of 98 dB.

However, in order to further increase the dynamic range, we implemented a dual-integration strategy, wherein both a short and a long integration period were undertaken within the 6.8 ms window available for each LED. In this dual-integration scheme, the data acquisition sequence for each LED consists of two separate integrations, labelled A and B in Fig. 2. The sequence begins with the short integration (A, 0.5 ms), followed by the longer integration (B, 6.3 ms). The respective LED is switched on only shortly before the start of integration B, which ensures that the LED is illuminated for only a very short time within integration A. The time between the rising edge of the signal driving the LED and the start of integration B is referred to as the overlap time (τovp), which is programmable: it can be set to any value between 0.02 ms and 0.5 ms with 0.02 ms resolution. As part of the system initialisation procedure, users can select, for each subject, the maximum value of τovp for which no short-integration measurement saturates. After the short integration period, as much of the time remaining in the 6.8 ms sample window as possible is allotted to the long integration. The LED remains on continuously throughout the long integration period.

Fig. 2. A timing diagram of the dual integration data acquisition scheme. Integration A corresponds to the short integration period (0.5 ms) and B, the long integration period (6.3 ms). The LED overlap time τovp can be optimized on a subject-by-subject basis.

The advantage of this dual-integration approach is that it is equivalent to increasing the maximum detectable intensity by a factor equal to the ratio of the long integration period to the minimum value of τovp, which is in excess of 300. If short-separation measurements from the long integration period are found to saturate the detector, the short-integration measurement can be used instead. Which integration is used for a given channel is typically determined for each subject as part of the initialisation of the system, with the result stored in a simple look-up table for use in post-processing.
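
A minimal sketch of this selection and rescaling logic is given below. It is our illustration rather than the authors' firmware or post-processing code, and it assumes that the measured intensity scales linearly with the effective LED-on time within an integration; the function and variable names are ours.

```python
import numpy as np

T_LONG = 6.3e-3  # long integration period (s), from the text

def build_lut_and_combine(short_meas, long_meas, tau_ovp, full_scale):
    """Per channel, use the long integration unless it saturates; otherwise
    fall back to the short integration, rescaled to the long-integration
    intensity scale. Returns the look-up table and the combined data.

    short_meas, long_meas: arrays of integrated intensities, shape (channels,)
    tau_ovp: LED overlap time used for the short integration (s)
    full_scale: saturation level of the converter (same units as the data)
    """
    use_short = long_meas >= 0.99 * full_scale  # the look-up table
    # Intensity is assumed proportional to LED-on time within the integration.
    scaled_short = short_meas * (T_LONG / tau_ovp)
    return use_short, np.where(use_short, scaled_short, long_meas)
```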

2.1.4 Motion sensor implementation

The motion sensor (MPU9250) can provide 3-axis accelerometer, 3-axis gyroscope and 3-axis magnetometer data at rates between 100 and 200 Hz. An I2C protocol was implemented to achieve two-way communication between the motion sensor and the local microcontroller within each module, which then loaded MPU data onto the data bus for transfer to the control unit. Due to limitations in data transmission rates, it was not possible to transfer all 9 axes of MPU data while simultaneously extracting HD-DOT data from 12 DOT modules. A balance had to be struck between the sample rate of the MPU data, the number of data axes transmitted per module, and the number of modules recording motion sensor data. Ultimately, we chose to extract MPU data from every module at 150 Hz. Achieving this within the bandwidth limitations meant that only a single axis of data could be transmitted per module. We therefore chose to transmit the norm of the three accelerometer axes, which we considered most likely to provide a clear indication of the occurrence of motion artifacts.
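
As a sketch (ours, with hypothetical names), the on-module reduction of the three accelerometer axes to the single transmitted value is simply the Euclidean norm:

```python
import numpy as np

def accel_norm(ax, ay, az):
    """Reduce 3-axis accelerometer samples to the single value transmitted
    per module: the Euclidean norm of the three axes."""
    return np.sqrt(ax**2 + ay**2 + az**2)

# A stationary module reads approximately 1 g regardless of orientation:
print(accel_norm(0.01, -0.02, 0.998))  # ≈ 1.0 g
```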

2.2 Array positioning and application

To couple the array to the head of the subjects, the 12 modules were first placed within a stretchable lattice made from 1 cm wide elastic strapping. The elastic ran around the outer edge of each module, and held the modules in formation while allowing the array to conform to the head. A continuation of the elastic lattice at each end of the array formed a chin-strap. Tightening the chin-strap caused the inter-module distance along the long axis of the array to increase. The minimum inter-module distance was simply the thickness of the elastic strapping (∼2 mm), while the maximum distance was limited only by the length of the ribbon cable between modules, but this limit was never approached. When applied to the subjects, the array was centred along the midline, 1 cm posterior to Cz, with the long axis of the array running parallel to the T7-Cz-T8 line (Fig. 1(f)). The chin-strap was tightened such that the centre of the lateral edges of the array reached C5 on the left and C6 on the right hemisphere. A Polhemus Patriot 3D digitizer (Polhemus, VT, USA) was used to measure each subject’s cranial landmarks (nasion, inion, pre-auricular left and right, Cz) and the external corners of each module within the array for registration purposes. Once the registration measurements were complete, a neoprene cap was applied over the module array to maintain downward pressure on the modules and insulate them from external light sources (see Fig. 1(e)). A single ribbon cable of 3 m in length connected the modules to one another and to the control unit, allowing subjects to move in a relatively unconstrained manner.

2.3 Experimental preparation and paradigm design

Nine subjects (2 female, 1 left-handed; median age 28 years, range 22–45) were recruited to this study. Each gave written, informed consent prior to undergoing any experiment. This study was approved by the UCL research ethics committee.

Structural magnetic resonance imaging (MRI) data were acquired for each participant using a Siemens Magnetom Prisma Fit 3T system. This included a 3D T1-weighted sequence (MPRAGE, 1 mm isotropic, TR/TE/TI = 2200/2.9/900 ms) and a 3D T2-weighted sequence (SPACE, 1 mm isotropic, TR/TE = 3200/400 ms). The MRI scans and HD-DOT experiments were performed on different days within the same 4-month period.

Prior to the start of an experimental session, the subjects were given 5 minutes to familiarize themselves with texting the pangram ‘the quick brown fox jumps over the lazy dog’ on an iPhone 7 smartphone (Apple Inc., CA, USA). The HD-DOT data acquisition experiment then consisted of three separate paradigms, which were randomly ordered for each subject and are described below.

The first paradigm was a unimanual texting protocol performed while the subjects remained (with the exception of the task itself) as still as possible. Subjects were seated in a comfortable chair, approximately 90 cm from a computer screen. Using a timing protocol built in MATLAB (Mathworks, MA, USA), the screen would initially (and during inter-stimulus ‘rest state’ periods) display a fixation cross at the centre of a 50% grey screen. To trigger the start of a stimulation block, the words ‘Left hand text’ or ‘Right hand text’ were displayed on the screen, accompanied by an audio cue of the same words read aloud. During each block, the subjects would text the pangram above continuously (without capitalization or punctuation) for 17.5 seconds with the specified hand, at which point the block would end, as indicated by the screen reverting to the rest state and the audio cue ‘stop texting’. Inter-stimulus periods were randomly jittered in duration from 17.5 to 22.5 seconds. A total of 16 stimulation blocks (8 per hand in a pseudorandom sequence) were performed, so that the total duration of this paradigm, which included 30 seconds’ rest at the beginning and end of the sequence, was approximately 11 minutes.
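
The timing protocol itself was implemented in MATLAB; the Python sketch below is our reconstruction of the block-design logic from the description above (function and parameter names are ours, not the authors').

```python
import random

def make_block_schedule(n_per_hand=8, stim_dur=17.5, isi_range=(17.5, 22.5),
                        lead_rest=30.0, seed=0):
    """Sketch of the seated texting block design: 8 left- and 8 right-hand
    blocks in pseudorandom order, 17.5 s each, with uniformly jittered
    inter-stimulus rest periods and 30 s rest at each end."""
    rng = random.Random(seed)
    blocks = ['Left hand text'] * n_per_hand + ['Right hand text'] * n_per_hand
    rng.shuffle(blocks)

    t, schedule = lead_rest, []
    for cue in blocks:
        schedule.append((round(t, 1), cue))      # (onset time in s, cue)
        t += stim_dur + rng.uniform(*isi_range)  # block plus jittered rest
    return schedule, t + lead_rest               # events, total duration

events, total = make_block_schedule()
print(total / 60)  # ≈ 11 minutes, consistent with the text
```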

The second paradigm consisted of a simple walking protocol. Subjects were asked to stand and, on hearing the audio cue of the word ‘walk’, to walk at their own pace around the laboratory in a figure-eight path spanning the width of the room (approximately 3.5 m). Each walking block lasted 17.5 seconds, with the end of each block indicated by the audio cue ‘stop walking’. At the end of each block, subjects would stop walking and stand still at whatever point in their path they found themselves. The inter-stimulus period was randomly jittered in duration from 17.5 to 22.5 seconds. A total of 12 walking blocks were performed, so that the total duration of this paradigm, which included 30 seconds’ rest at the beginning and end of the sequence, was approximately 8.5 minutes.

The last paradigm was a unimanual texting protocol performed while the subject walked continuously throughout. Subjects were asked to walk the figure-eight path at their own pace. To trigger the start of a stimulation block, the words ‘Left hand text’ or ‘Right hand text’ were provided by audio cue. During each block, the subjects would continue to walk, but would simultaneously text the pangram above continuously for 17.5 seconds with the specified hand. The end of each block was indicated by the audio cue ‘stop texting’. Inter-stimulus periods were randomly jittered in duration from 17.5 to 22.5 seconds. A total of 16 stimulation blocks (8 per hand) were performed, so that the total duration of this paradigm (including 30 seconds’ rest at the beginning and end of the sequence) was approximately 11 minutes.

2.4 HD-DOT data processing, image reconstruction and analysis

2.4.1 Head modelling and array registration

For each subject, the T2-weighted image was linearly co-registered to the T1-weighted image using ANTs [31]. Both T1-w and T2-w images were employed to produce a multi-tissue probabilistic segmentation using the unified segmentation algorithm implemented in SPM12 [32] (http://www.fil.ion.ucl.ac.uk/spm/). Six tissue types were segmented: soft tissue (i.e. scalp), skull, cerebrospinal fluid (CSF), grey matter (GM), white matter (WM) and air. The final 6-layer tissue mask of each subject was obtained by assigning to each voxel the label of the tissue with the highest probability. High-resolution multi-layer tetrahedral meshes were then created with the iso2mesh software [33] for each subject.

Cranial landmarks (inion, nasion, pre-auricular points and Cz) were manually identified for each subject from a 3D rendering of their T1-w MRI and used to map the cranial landmark coordinates measured with the Polhemus digitizer to the virtual head mesh coordinate system via an affine transformation. The transformation matrix obtained in this process was then applied to all points measured with the Polhemus system, which included the external corners of each module. Each scalp surface mesh was then realigned to the measured positions using the iterative closest point algorithm [34]. Source and detector positions were then obtained from a priori knowledge of their positions with respect to the measured corners of the modules, using a modified version of the spring relaxation algorithm included in AtlasViewer [35], part of the Homer2 package.
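
A minimal sketch of the landmark-based affine step, assuming a simple least-squares fit over the five digitized cranial landmarks (the actual pipeline may differ in detail; names are ours):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping digitized landmark positions
    (src, N×3, Polhemus space) to MRI-derived positions (dst, N×3)."""
    A = np.hstack([src, np.ones((src.shape[0], 1))])  # homogeneous coords
    T, *_ = np.linalg.lstsq(A, dst, rcond=None)       # 4×3 affine matrix
    return T

def apply_affine(T, pts):
    """Apply the fitted transform to any digitized points, e.g. the
    external corners of each module."""
    return np.hstack([pts, np.ones((pts.shape[0], 1))]) @ T
```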

2.4.2 Forward modelling

Wavelength-specific Jacobians were computed on the volumetric head meshes of each subject with the Toast++ software [36], which solves the diffusion approximation via the finite element method. Optical properties (absorption coefficients and reduced scattering coefficients) were assigned to each tissue type and for each wavelength by fitting all published values for these tissue types [28,37,38]. Voxels within the head that were defined by the segmentation process as air (typically within the sinuses) were assigned the same optical properties as CSF. The forward model solution was projected by Toast++ onto a 50 × 60 × 50 regular voxel grid for reconstruction, and a finer intermediate grid of 100 × 120 × 100 voxels was used to optimize the mapping between the mesh space and the voxel space. Sources (and adjoint detectors) were modelled as having a Gaussian profile with a standard deviation of 2 mm, and Neumann boundary conditions were applied.

2.4.3 Data pre-processing

The raw, channel-wise HD-DOT data was processed prior to image reconstruction using elements of the Homer2 toolbox (https://homer-fnirs.org/) to produce haemodynamic response functions (HRFs) for each subject and condition. First, channels were passed through a quality control step (enPruneChannels), which removed channels that exhibited a mean intensity of less than 500 ppm or a ratio of mean intensity to standard deviation of less than 12. Any channels that, despite the dual-integration strategy described above, were found to saturate at any point during a recording were also excluded. This quality control step resulted in a binary vector indicating those channels that had passed these thresholds (‘good’ channels). After converting the intensity data to changes in absorbance (in units of optical density), a motion correction step was performed using the kurtosis wavelet approach with a kurtosis level of 3.3 [39]. The function hmrMotionArtifactByChannel was applied both before and after wavelet motion correction to quantify the burden of motion artifacts in the original data and that remaining after the motion correction step. This information was used to permit data rejection where necessary and to yield the artifact envelope used in the comparison with the motion sensor data of Fig. 6 in Section 3.4. Note that a modified version of the hmrMotionArtifactByChannel function was also used to identify features in the MPU data. After these motion correction steps, the data was bandpass filtered in the range 0.01 to 0.5 Hz and converted to changes in haemoglobin concentrations using differential pathlength factors of 7.5 and 7 for the 740 nm and 855 nm wavelengths, respectively [40]. Haemodynamic response functions were then extracted using a deconvolution approach that employed a set of Gaussian basis functions (with both a standard deviation and a temporal spacing of 1.5 s) to model the HRF from 5 seconds prior to 30 seconds after the onset of each stimulus [23]. For every channel in the array, the deconvolution model included as a regressor the nearest short-separation channel (defined as one with a source-detector separation of 10 mm or less) that passed the quality control step. This process yielded 35-second-long haemodynamic responses for every good channel, condition and subject.
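
The study used the Homer2 MATLAB implementations of these steps; the Python sketch below illustrates the pruning and optical-density conversion only, with our names, and with the 500 ppm threshold interpreted (our assumption) as a fraction of the detector full scale.

```python
import numpy as np

def prune_channels(intensity, mean_thresh=500e-6, snr_thresh=12):
    """Quality control sketch: intensity has shape (time, channels).
    Returns a boolean mask of 'good' channels, per the thresholds above."""
    mu = intensity.mean(axis=0)
    snr = mu / intensity.std(axis=0)
    return (mu > mean_thresh) & (snr > snr_thresh)

def to_optical_density(intensity):
    """Convert intensity to changes in absorbance (optical density)."""
    return -np.log(intensity / intensity.mean(axis=0))
```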

2.4.4 Inverse problem

Images representing both the change in oxyhaemoglobin (ΔHbO) and the change in deoxyhaemoglobin (ΔHbR) were reconstructed for each subject, task and time point using a multispectral approach, which directly generates concentration changes from attenuation data [41]. We employed the LSQR method with first-order Tikhonov regularization to solve the inverse problem, as in Brigadoi et al. [42]. For each reconstruction, the LSQR algorithm was configured to terminate after a maximum of 50 iterations, with a tolerance of 10⁻⁵. The regularization hyperparameter was set to λ = 10⁻³. The reconstructed images were defined on the same regular grid as the Jacobian. A remapping procedure was therefore performed to map the reconstructed images back from the voxel space to the volumetric head mesh.
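
In outline, LSQR with first-order Tikhonov regularization can be posed as an augmented least-squares problem, min ||Jx − y||² + λ²||Lx||², and handed to a standard solver. The SciPy sketch below is our stand-in for the authors' implementation; the structure of the regularization matrix L is assumed, not taken from the paper.

```python
import numpy as np
from scipy.sparse import csr_matrix, eye, vstack
from scipy.sparse.linalg import lsqr

def reconstruct(J, dy, L=None, lam=1e-3, max_iter=50, tol=1e-5):
    """Solve [J; lam*L] x = [dy; 0] with LSQR. J is the multispectral
    Jacobian (channels × 2·voxels), dy the measured attenuation change,
    and x the stacked [ΔHbO; ΔHbR] image."""
    n = J.shape[1]
    if L is None:
        # Identity (zeroth-order) fallback; the paper uses a first-order
        # (gradient-type) Tikhonov operator.
        L = eye(n, format='csr')
    A = vstack([csr_matrix(J), lam * L])
    b = np.concatenate([dy, np.zeros(L.shape[0])])
    return lsqr(A, b, iter_lim=max_iter, atol=tol, btol=tol)[0]
```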

2.4.5 Group-level analysis

For each subject, the structural pipeline of the Human Connectome Project (HCP) [43] (HCP pipeline v. 4.0.0, https://github.com/Washington-University/HCPpipelines/releases) was implemented to obtain a pial surface in the subject-space and a high-resolution pial surface mesh with 164,000 nodes registered to the Conte-69 template [44].

For surface analysis and visualization, each volumetric head-mesh-based image was mapped to the corresponding pial surface in the subject-space by assigning a value to each node on the pial boundary surface that was equal to the mean value of all the volumetric mesh node values within a 3 mm radius. In order to compute group average images and perform statistical analyses to compare activation across tasks, the ΔHbO and ΔHbR pial images in the subject space were first resampled to the high-resolution Conte-69-registered subject-space. Group averages for both ΔHbO and ΔHbR were then computed by averaging all the Conte-69-registered pial images for each task at each reconstructed time point in the template space.
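
A minimal sketch of this volume-to-surface mapping, using a KD-tree to find all volumetric mesh nodes within 3 mm of each pial node (our illustration, not the authors' code):

```python
import numpy as np
from scipy.spatial import cKDTree

def volume_to_surface(mesh_nodes, mesh_values, pial_nodes, radius=3.0):
    """Assign to each pial node the mean of all volumetric mesh node values
    (mesh_values: np.ndarray) within `radius` mm; nodes with no neighbours
    are left as NaN."""
    tree = cKDTree(mesh_nodes)
    surf_vals = np.full(len(pial_nodes), np.nan)
    for i, idx in enumerate(tree.query_ball_point(pial_nodes, r=radius)):
        if idx:
            surf_vals[i] = mesh_values[idx].mean()
    return surf_vals
```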

For each subject, task and chromophore, the average activation image between 6 and 16 s post stimulus-onset was computed. Since the 12 modules were designed to cover the sensorimotor cortex in this experiment, we reduced the number of nodes on which to apply statistical comparisons by selecting only nodes within the regions to which the 12 modules were sensitive. The sensitivity mask was computed by mapping to the Conte-69 template the sensitivity map of the subject with the highest number of good channels, i.e. the one achieving the highest sensitivity over the cortex (our best-case scenario), and setting to 0 all nodes with sensitivity lower than 5% of the maximum sensitivity.

Statistical analyses were performed using the Permutation Analysis of Linear Models (PALM) tool included in FSL [45]. PALM is a statistical toolbox that uses permutation methods and includes support for surface-based statistical analysis. Permutation-based t-tests were performed to compare each activation image vs. rest (defined as the average image during the baseline period from -5 to 0 seconds) and to compare activation images across tasks, using 500 permutations, tail approximation [46], and threshold-free cluster enhancement (with default parameters for surface data) to correct for multiple comparisons.
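
For intuition only, the core of a permutation-based one-sample test can be sketched as sign-flipping across subjects with max-statistic correction; PALM's actual machinery (GLM framework, tail approximation, TFCE) is considerably more sophisticated.

```python
import numpy as np

def perm_ttest(images, n_perm=500, seed=0):
    """Sign-flipping permutation t-test. images: (subjects × nodes) array of
    activation-minus-baseline values. Returns t-statistics and FWE-corrected
    p-values via the max-statistic distribution."""
    rng = np.random.default_rng(seed)

    def tstat(x):
        return x.mean(0) / (x.std(0, ddof=1) / np.sqrt(x.shape[0]))

    t_obs = tstat(images)
    null_max = np.empty(n_perm)
    for p in range(n_perm):
        flips = rng.choice([-1.0, 1.0], size=(images.shape[0], 1))
        null_max[p] = tstat(images * flips).max()
    p_corr = (null_max[None, :] >= t_obs[:, None]).mean(axis=1)
    return t_obs, p_corr
```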

3. Results

3.1 System performance and data quality

By taking the standard deviation of the dark measurements over time for each detector and each experiment, it is possible to quantify the sensitivity of the system in terms of the measured noise-equivalent power (NEP) [30]. The median NEP measured in-vivo, across all 9 subjects, 3 paradigms and all 48 detectors, was 318 fW (range 268 to 409 fW). A 3-way ANOVA of subject, paradigm and detector demonstrated that all three variables have a significant impact on NEP (p < 0.001), with subject having the most significant effect (p < 1 × 10⁻⁶). The maximum detectable power for the long integration period with the chosen gain is 16.3 nW, and as a result, the measured dynamic range was 94.2 dB. However, accounting for the increase in the effective maximum detectable power afforded by dual-integration, the effective dynamic range of the system was 142.6 dB. Note that the minimum value of τovp automatically determined across our subject group was 0.1 ms. As this system is an evolution of previous devices, basic assessments of features such as temporal stability are beyond the remit of this paper. However, we include as supplementary data an example of a simple stability measurement (see Fig. 7 in the Appendix).
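
The quoted figures follow directly from the dark-noise statistics; a sketch of the calculation (with an assumed charge-to-power calibration factor, and names that are ours) is:

```python
import numpy as np

def nep_and_dynamic_range(dark, watts_per_count, p_max=16.3e-9):
    """NEP as the standard deviation of the dark measurements over time
    (converted to optical power), and dynamic range as the ratio of the
    maximum detectable power to the NEP, in the 20·log10 convention used
    in the text. dark: (time, detectors) array of dark measurements."""
    nep = dark.std(axis=0) * watts_per_count       # per-detector NEP (W)
    dr_db = 20 * np.log10(p_max / np.median(nep))  # ≈ 94.2 dB for 318 fW
    return nep, dr_db

# The >300-fold headroom of the dual-integration scheme adds roughly
# 20*log10(300) ≈ 50 dB, consistent with the quoted effective 142.6 dB.
```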

The data quality control steps described in “Methods” allowed us to quantify the proportion of ‘good’ channels obtained in each subject as a function of source-detector separation. Two subjects from the cohort were found to have less than 70% of channels with source-detector separations in the range 0–35 mm marked as ‘good’, and were subsequently rejected. The median percentage of channels in the range 0–35 mm marked as good in the remaining 7 subjects was 94.5%. The maximum source-detector separation of channels marked as ‘good’ varied from 34.8 to 56.5 mm across our subject group, and the corresponding number of dual-wavelength channels used for image reconstruction varied from 248 to 614.

3.2 Functional images

Example three-dimensional DOT functional response images are presented in Fig. 3(a). Reconstructed images of the peak response in oxyhaemoglobin concentration for the seated texting (right), walking texting (right) and walking paradigms are presented in a subject-specific finite element mesh. Group-level, peak-response maps associated with all five task conditions, for both ΔHbO and ΔHbR, are shown in Fig. 3(b). A clear, localized contralateral response is apparent for the seated texting with the right hand, while the left-hand seated texting produces a more bilateral response pattern. In comparison, the walking task produces a positive ΔHbO in the superior somatomotor cortex and only a small negative ΔHbR, predominantly in midline cortical areas anterior to the pre-central gyrus. The walking texting response maps show a positive ΔHbO and a negative ΔHbR that are similar in spatial distribution to those of the seated equivalent, but with notably reduced contrast. Walking texting (with both the left and right hand) is associated with a decrease in ΔHbO in the superior supplementary motor area relative to the baseline condition.

Fig. 3. HD-DOT haemodynamic images at the subject and group level. Part (a) depicts example 3D orthogonal slices through peak ΔHbO response DOT maps in response to texting with the right hand while seated, texting with the right hand whilst walking, and walking, each thresholded at 30% of the peak response. Group-average haemodynamic response maps, displayed in the Conte-69 template pial space, are shown in part (b). Images b(i-v) show the ΔHbO maps for each of the five conditions, while b(vi-x) show the equivalent ΔHbR images.

Cortical maps of the T-statistic associated with the contrast of task ΔHbO > baseline ΔHbO for all five tasks are shown in Fig. 4(a-e). The same maps, masked to indicate significance after correction at a T-statistic value equivalent to p < 0.05, are shown in Fig. 4(f-j). Note that the T-statistic maps of the seated and walking texting conditions are very similar in their spatial distributions, with contralateral increases in HbO apparent in each case. Significant responses to both seated texting conditions are apparent in the contralateral hand-knob region of the somatomotor cortex. Walking texting left also demonstrates a significant response in the hemisphere contralateral to stimulation, but with a notably reduced extent. No region of the brain exhibited a significant response to the walking texting right condition, despite the similarities to the equivalent seated condition apparent in the haemodynamic maps of Fig. 3(b). The walking condition produced a significant ΔHbO response in the superior somatomotor cortices. Note that while the ΔHbR images of Fig. 3(b) show decreases that are spatially localized and generally consistent with a typical functional haemodynamic response, the T-statistic images based on ΔHbR did not reach significance after correction.

Fig. 4. Group T-statistic maps for the contrast of task ΔHbO > baseline ΔHbO for each of the five conditions. The upper row (a-e) consists of the unmasked T-statistic maps, while the lower row (f-j) shows the masked T-statistic maps in which only values that remain significant after correction are displayed.

Figure 5 depicts the spatial overlap of the ΔHbO-derived T-statistic maps for the seated and walking texting paradigms for both right- and left-handed tasks, when each is thresholded at 30% of its maximum T-statistic value. Areas in which the seated texting and walking texting distributions overlap are indicated in dark red. The Dice coefficient, calculated to quantify the extent of this overlap, was 0.77 and 0.75 for the right- and left-handed tasks, respectively.
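
The overlap metric used here is the standard Dice coefficient applied to the thresholded maps; a minimal sketch:

```python
import numpy as np

def dice(map_a, map_b, frac=0.3):
    """Dice coefficient between two T-statistic maps, each thresholded at
    30% of its own maximum (as in Fig. 5)."""
    a = map_a > frac * map_a.max()
    b = map_b > frac * map_b.max()
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```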

Fig. 5. Overlap images comparing the spatial distribution of seated and walking functional T-statistic maps. (a) shows masks of the region in which the T-statistic map exceeds 30% of its maximum for seated texting right (green) and walking texting right (blue). The regions where these masks coincide are shown in dark red. The equivalent image for texting with the left hand is shown in (b).

3.4 Motion artifacts and motion sensor data

Despite being less vulnerable to motion artifacts than other neuroimaging modalities, fNIRS is still sensitive to motion [12,13]. Motion artifacts are often a significant problem for fNIRS applications, particularly in studies of infants and of moving subjects. For this study, the proportion of data identified as motion artifact was consistent across our three experimental paradigms, despite two paradigms requiring the subjects to walk around the laboratory. The burden of motion artifact (the percentage of data-points identified as motion) averaged 4.0%, 2.2% and 2.4% for the seated texting, walking and walking texting paradigms, respectively, but these differences were not statistically significant (one-way ANOVA, p = 0.089). Similarly, the number of channels passing our data quality thresholds showed no significant variation across the three paradigms (one-way ANOVA, p = 0.996). However, the standard deviations of the absorbance data (which averaged 0.0252, 0.0262 and 0.0304 OD for the seated texting, walking and walking texting paradigms, respectively) were all significantly different from one another (one-way ANOVA, Tukey-Kramer post-hoc analysis, p < 0.01). Therefore, while the proportion of identified motion artifacts did not change in the walking paradigms, the act of walking appears to significantly increase the standard deviation of our HD-DOT data.

To inform and potentially alleviate the issue of motion artifacts [47,48,56,57] (and to take further advantage of the concept of modular, wearable HD-DOT technology), a motion sensor (MPU9250, InvenSense Inc., USA) was incorporated into each module. The norm of the 3-axis accelerometry data from each motion sensor was collected for all 12 modules throughout all three paradigms in each subject. Figure 6(a) shows an example period of acquired accelerometer data from one module, and an unfiltered change-in-absorbance trace from one DOT channel associated with that module, for the seated texting paradigm in one subject. The figure also displays the periods identified as potentially artifactual using the MPU signal, and the periods identified as artifact from the DOT data itself using established approaches. The clear spike-like artifacts identified directly from the DOT data do not reliably coincide with identifiable features in the accelerometer signal, even when the subject is seated. Figure 6(b) shows an example period of data from the walking paradigm. The walking blocks are clearly evident in the MPU data, but even large spike artifacts in the DOT data do not coincide with clear deviations in the MPU data while the subject is walking. Similar patterns are evident throughout the dataset. Figure 6(c) summarises the agreement between the DOT-derived artifact envelope and the MPU-derived equivalent. For the seated and walking texting cases, when an artifact is apparent from the DOT data, the MPU rarely agrees (rates of only 13.3% and 12.6%, respectively). When there is no artifact identified in the DOT data (which is the vast majority of the time), there is greater agreement between the DOT- and MPU-derived envelopes (93.2% and 96.6%, respectively). The walking paradigm demonstrates a different pattern: motion in the DOT data is coincident with MPU features in 77.8% of cases, while when no artifact is apparent in the DOT data, the rate of agreement is 47.4%. However, this is likely a result of the MPU feature extraction algorithm marking the entirety of each walking block as motion artifact.

Fig. 6. The relationship between motion sensor data and DOT artifacts. a) A selected period of accelerometry data from one module (in blue) and the unfiltered change-in-absorbance signal at 740 nm for a channel associated with that same module (in green). The overlaid pink shaded regions indicate periods identified as motion artifacts using the DOT data (see “Methods”) while the yellow shaded periods identify features from the MPU signal. Note the discrepancies between the two. Panel b) presents equivalent data for the walking paradigm. Note the overt periods of increased acceleration variance during the walking blocks. Motion artifacts identified in the DOT data do not coincide with detectable changes in the accelerometer signal, which appears dominated by the acceleration associated with the walking task itself. Panel c) provides a summary of the concordance between the DOT- and MPU-derived artifact envelopes across all datasets.

4. Discussion

Advancing DOT technologies away from the use of optical fibres and towards head-mounted modular electronics has the potential to permit functional neuroimaging of the whole adult cortex, with sub-centimetre resolution, in almost any environment [30]. Dispensing with optical fibres allows sampling density and field-of-view to be increased without the associated increase in weight and ergonomic burden. However, achieving a light-weight device that can provide high-density sampling, a wide range of source-detector separations and a large field of view with minimal cabling is a significant challenge. Here, we have presented several significant methodological and technological advances that have allowed us to create a wearable high-density DOT technology that incorporates 12 modules and provides 1152 logical channels per wavelength, sufficient to cover around one third of the adult scalp. The biggest challenge in the development of this technology was achieving a reasonable frame rate while also maintaining sufficient sensitivity and dynamic range. By optimising the gain settings of the charge-to-digital converter (DDC114, Texas Instruments, USA) and focussing on the electromagnetic shielding of the modules, we were able to achieve a sample time per measurement of 6.8 ms while also attaining exceptional sensitivity. The in-vivo-measured noise-equivalent power, with a median of 318 fW, is (as far as the authors are aware) the lowest ever reported for a CW device. The fact that the NEP was found to vary significantly with subject suggests that our measures may actually underestimate the achievable sensitivity of this device; variation between subjects implies that either the electrical noise of the environment changed day-to-day, or (more likely) the fit of the head cap affected the intensity of background light reaching the detectors during the ‘dark’ measurements. Furthermore, the NEP is inversely dependent on integration time, such that increasing the integration period to 20 ms, for example, would yield an NEP of approximately 85 fW, but would result in a frame rate of only ∼1 Hz, which we deemed too low for functional neuroimaging because of the increased risk of aliasing of the cardiac signal.

The dual-integration strategy described in “Methods” allowed us to effectively increase the dynamic range of the device from an intrinsic value of 94.2 dB to an effective value of 142.6 dB. This ensured that we were able to successfully collect data from the shortest, 10 mm source-detector pair without saturating the DDC114. The tunable nature of the LED overlap time also allows signal quality to be optimized across different subjects. However, the fact that the minimum τovp value in our group was 0.1 ms (compared to a possible minimum of 0.02 ms) suggests that our large effective dynamic range was not fully exploited in this dataset. This result implies that the source power could have been increased by a factor of ∼5 without saturating the device in these subjects. However, given the high variability in optical contact losses across subjects and phantoms etc., maintaining some headroom in the device is clearly necessary.

Note that this system, like its predecessors [26,58], has undergone a range of phantom-based assessments. However, partly because building a realistic, dynamic DOT phantom that can accurately mimic the realities of an in-vivo experiment remains very challenging, we focused on assessing system performance in a real experimental scenario. One specific experimental factor that affects all optical devices is thermal stability, which is directly relevant to head-mounted electronics. The data obtained from the in-vivo experiments here did not exhibit obvious thermal drift, as the process of calibration for the integration timing scheme (which usually takes 5-10 minutes) tends to provide enough time for the modules to reach a stable temperature.

The signal quality observed in our subject group was generally high. Two of the 9 subjects were rejected because the proportion of ‘good’ channels in the range 0–35 mm did not exceed 70%, but in the remaining subjects the vast majority of those same channels (94.5%) passed our data quality checks. The longest-separation ‘good’ channel in each subject varied from 34.8 to 56.5 mm. This compares very favourably with commercially available CW devices [7]. It is important to note, however, that in terms of the maximum obtainable source-detector distances, the performance of this system in this cohort fell short of that obtained by Chitnis et al. [26], despite the improvement in sensitivity. This is almost certainly because the group studied here happened to include a larger proportion of subjects with dark hair.

Another significant advantage of moving DOT technologies away from the use of optical fibres and towards wearable electronics is the opportunity it presents for the integration of additional sensing devices into the HD-DOT modules. The inclusion of the motion sensor has the potential to allow automated detection and rejection, or even optimized correction, of motion-contaminated data segments [47–49,56,57]. However, our results demonstrate that using motion sensor data, even as a simple indicator of the presence of artifacts, is not straightforward. While a subject remains otherwise still, our results suggest that DOT artifacts can be coincident with clear features in the MPU data, but this is not routinely the case. When a subject is undertaking a gross movement (e.g. walking), the relationship between the amplitude signal from the accelerometer and the presence of motion artifacts decouples further. The acceleration associated with walking (and in this case, turning) appears significantly larger in amplitude than the spikes in acceleration associated with motion artifacts when a subject remains still (Fig. 6), but walking itself does not necessarily induce overt motion artifacts. Extracting multi-axis data from each module, and perhaps investigating the relationship between the network of motion sensor signals that can be obtained from multiple modules, may yet provide a way of overcoming at least some of these limitations [50].

The functional haemodynamic response data obtained in this cohort was in some respects consistent with previous work and with our expectations [26,27,51,52]. The seated texting task constitutes a unimanual motor stimulation, but one that likely induces activation in the areas of the brain associated with working memory and motor planning. Given the field-of-view of the array, the largest responses are evident, as expected, in the primary somatomotor regions, specifically around the pre- and post-central gyri, approximately halfway between the midline and the lateral cerebral fissure (Figs. 3 and 4). This area (the well-known ‘hand knob’) is invariably associated with cortical representations of the fingers and thumb [53]. The functional imaging map associated with the walking paradigm shows a response that is located markedly superior to those resulting from the texting stimulus. This is consistent with a greater involvement of the cortical representation of the feet [54], and possibly of parts of the supplementary motor area [55].

As can be seen from Figs. 3, 4 and 5, the spatial distributions of the functional maps associated with the walking texting and seated texting tasks are extremely similar. The peaks of each response distribution are highly consistent for both left- and right-hand stimuli. However, as is evident from Fig. 4, the T-statistic maps associated with these images are very different, with a much-reduced region of activation reaching significance in the case of walking and left-hand texting, while walking and right-hand texting fails to reach significance at all after correction. There are two possible reasons for this: either the functional response that results from the texting task is reduced while the subject is also walking, or the walking texting data was simply noisier. Our analysis of the differences in motion burden and data quality across our three paradigms demonstrates that the proportion of data identified as motion in each of the three paradigms is effectively the same, but that the walking paradigms yield data that exhibits a significantly larger standard deviation. This suggests there are additional features in the walking and walking texting data that are not picked up by our motion artifact detection process, but that still increase the variance of our data. While it is possible that this increase in standard deviation during the walking tasks relates directly to physiology (e.g. increases in heart rate, stroke volume, respiration rate etc.), it is also possible that these additional features are effectively small-scale motion artifacts that do not exhibit the large, fast changes in amplitude or standard deviation that we typically depend on to identify artifacts from DOT data. The presence of these ‘micro-artifacts’ would explain the increase in noise in the walking texting haemodynamic images relative to the seated texting equivalents in Fig. 3, and the reduction in the regions of the brain in which the functional response reaches significance in Fig. 4.

The consistency of the spatial distribution of the responses between seated and walking texting is encouraging evidence that wearable HD-DOT will be able to provide reliable functional neuroimaging of subjects during overt movement, but our results suggest that improvements in head-gear and the expansion of the use of motion sensing may be necessary to obtain results equivalent to those in stationary subjects.

5. Conclusion

We have presented a series of significant technological developments that have allowed us to produce a fibre-less, modular, high-density DOT system with exceptional sensitivity, a large dynamic range and a field of view that encompasses approximately one third of the adult scalp. We have applied this technology in a set of novel paradigms that allowed us to investigate its efficacy in the functional neuroimaging of overtly moving subjects. The system yielded comparable images of functional activation in response to a paradigm performed by both stationary and walking subjects, demonstrating the potential for this technology to become the gold standard in unconstrained functional neuroimaging.

Appendix 1

Figure 7 shows the full time-course of 100 s of static epoxy-resin phantom data recorded using the 12-module system from cold, with no explicit warm-up time other than the ∼5 minutes required to calibrate the integration timing scheme. The black dashed line denotes the noise-equivalent power, determined from the standard deviation of the dark measures. Figure 7(b) shows the temporal stability of a subset of the channels shown in Fig. 7(a) for which there was a good level of light recorded (taken here as > 1 nW). No variation exceeds 1.5%, and the median standard deviation across all channels in this example was 0.048%. Figure 7(c) shows a histogram of the gradients of the linear fits to each of the time-courses shown in Fig. 7(b). The mean and maximum gradients were 5.8 × 10⁻⁴ % per second and 4.4 × 10⁻³ % per second, respectively. As the mean is non-zero, a small linear trend remains in the data. For the worst-case channel, the signal intensity would be expected to change by 1% in approximately 227 seconds, while for the average channel it would take approximately 1700 seconds. This level of drift is comparable with previous fNIRS devices [59], and is of a scale and duration that would render it irrelevant to most functional DOT paradigms.
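
A sketch of the drift analysis behind Fig. 7(b,c), reconstructed from the description above (function names are ours):

```python
import numpy as np

def drift_percent_per_second(t, intensity):
    """Normalize each channel to its mean (in %), fit a line over time, and
    return each channel's gradient in % per second. t: (time,) in seconds;
    intensity: (time, channels)."""
    norm = 100 * intensity / intensity.mean(axis=0)
    return np.array([np.polyfit(t, norm[:, c], 1)[0]
                     for c in range(norm.shape[1])])

# e.g. the mean gradient in Fig. 7(c) is ≈ 5.8 × 10⁻⁴ % per second.
```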

Fig. 7. Phantom characterization of the thermal stability of the 12-module HD-DOT system.

Funding

Engineering and Physical Sciences Research Council (EP/N025946/1, EP/N032055/1); Wellcome Trust (WT 203148/Z/16/Z); Royal Academy of Engineering (RF1516\15\33).

Acknowledgements

We would like to extend our thanks to clinical scientists Marzena Wylezinska-Arridge and Stephen Wastling for the essential part they played in acquiring the structural MRI data. Also thanks to Greg Smith from Gowerlabs Ltd. for help with constructing the system head gear and module diagrams. EDV is supported by the Wellcome/EPSRC Centre for Medical Engineering (WT 203148/Z/16/Z). SP is supported by RAEng Fellowship RF1516\15\33 and EPSRC grant EP/N032055/1. RJC is supported by EPSRC Fellowship EP/N025946/1.

Disclosures

Samuel Powell, Nicholas L Everdell, and Robert J Cooper have financial interests in UCL spin-out Gowerlabs Ltd., which is developing related (though entirely independent) technologies for commercialization.

References

1. M. P. van den Heuvel and H. E. Hulshoff Pol, “Exploring the brain network: A review on resting-state fMRI functional connectivity,” Eur. Neuropsychopharmacol. 20(8), 519–534 (2010). [CrossRef]  

2. L. J. Weiss-Croft and T. Baldeweg, “Maturation of language networks in children: A systematic review of 22years of functional MRI,” NeuroImage 123, 269–281 (2015). [CrossRef]  

3. R. Sperling, “The potential of functional MRI as a biomarker in early Alzheimer's disease,” Neurobiol. Aging 32, S37–S43 (2011). [CrossRef]  

4. S. Lloyd-Fox, A. Blasi, G. Pasco, T. Gliga, E. Jones, D. Murphy, C. E. Elwell, T. Charman, M. H. Johnson, and B. A. S. I. S. Team, “Cortical responses before 6 months of life associate with later autism,” Eur. J. Neurosci. 47(6), 736–749 (2018). [CrossRef]  

5. D. A. Boas, C. E. Elwell, M. Ferrari, and G. Taga, “Twenty years of functional near-infrared spectroscopy: introduction for the special issue,” NeuroImage 85, 1–5 (2014). [CrossRef]  

6. S. Lloyd-Fox, A. Blasi, and C. E. Elwell, “Illuminating the developing brain: The past, present and future of functional near infrared spectroscopy,” Neurosci. Biobehav. Rev. 34(3), 269–284 (2010). [CrossRef]  

7. F. Scholkmann, S. Kleiser, A. J. Metz, R. Zimmermann, J. M. Pavia, U. Wolf, and M. Wolf, “A review on continuous wave functional near-infrared spectroscopy and imaging instrumentation and methodology,” NeuroImage 85, 6–27 (2014). [CrossRef]  

8. S. Lloyd-Fox, M. Papademetriou, M. K. Darboe, N. L. Everdell, R. Wegmuller, A. M. Prentice, S. E. Moore, and C. E. Elwell, “Functional near infrared spectroscopy (fNIRS) to assess cognitive function in infants in rural Africa,” Sci. Rep. 4(1), 4740 (2015). [CrossRef]  

9. Y. Hoshi, “Functional near-infrared spectroscopy: current status and future prospects,” J. Biomed. Opt. 12(6), 062106 (2007). [CrossRef]  

10. Y. Hoshi, “Functional near-infrared optical imaging: Utility and limitations in human brain mapping,” Psychophysiology 40(4), 511–520 (2003). [CrossRef]  

11. G. Taga, K. Asakawa, A. Maki, Y. Konishi, and H. Koizumi, “Brain imaging in awake infants by near-infrared optical topography,” Proc. Natl. Acad. Sci. 100(19), 10722–10727 (2003). [CrossRef]  

12. R. J. Cooper, J. Selb, L. Gagnon, D. Phillip, H. W. Schytz, H. K. Iversen, M. Ashina, and D. A. Boas, “A Systematic Comparison of Motion Artifact Correction Techniques for Functional Near-Infrared Spectroscopy,” Front. Neurosci. 6, 147 (2012). [CrossRef]  

13. S. Brigadoi, L. Ceccherini, S. Cutini, F. Scarpa, P. Scatturin, J. Selb, L. Gagnon, D. A. Boas, and R. J. Cooper, “Motion artifacts in functional near-infrared spectroscopy: A comparison of motion correction techniques applied to real cognitive data,” NeuroImage 85, 181–191 (2014). [CrossRef]  

14. P. Pinti, I. Tachtsidis, A. Hamilton, J. Hirsch, C. Aichelburg, S. Gilbert, and P. W. Burgess, “The present and future use of functional near-infrared spectroscopy (fNIRS) for cognitive neuroscience,” Ann. N. Y. Acad. Sci. 1464(1), 5–29 (2020). [CrossRef]  

15. A. P. Gibson, J. C. Hebden, and S. R. Arridge, “Recent advances in diffuse optical imaging,” Phys. Med. Biol. 50(4), R1–R43 (2005). [CrossRef]  

16. R. J. Cooper, M. Caffini, J. Dubb, Q. Fang, A. Custo, D. Tsuzuki, B. Fischl, W. Wells, I. Dan, and D. A. Boas, “Validating atlas-guided DOT: A comparison of diffuse optical tomography informed by atlas and subject-specific anatomies,” NeuroImage 62(3), 1999–2006 (2012). [CrossRef]  

17. B. W. Zeff, B. R. White, H. Dehghani, B. L. Schlaggar, and J. P. Culver, “Retinotopic mapping of adult human visual cortex with high-density diffuse optical tomography,” Proc. Natl. Acad. Sci. 104(29), 12169–12174 (2007). [CrossRef]  

18. A. T. Eggebrecht, S. L. Ferradal, A. Robichaux-Viehoever, M. S. Hassanpour, H. Dehghani, A. Z. Snyder, T. Hershey, and J. P. Culver, “Mapping distributed brain function and networks with diffuse optical tomography,” Nat. Photonics 8(6), 448–454 (2014). [CrossRef]  

19. S. Arridge and R. J. Cooper, “Optical Image Reconstruction,” in Brain Mapping: An Encyclopedic Reference, A. W. Toga, ed. (Academic Press, Waltham, 2015), pp. 217–222.

20. B. R. White and J. P. Culver, “Quantitative evaluation of high-density diffuse optical tomography: in vivo resolution and mapping performance,” J. Biomed. Opt. 15(2), 026006 (2010). [CrossRef]  

21. A. T. Eggebrecht, B. R. White, S. L. Ferradal, C. Chen, Y. Zhan, A. Z. Snyder, H. Dehghani, and J. P. Culver, “A quantitative spatial comparison of high-density diffuse optical tomography and fMRI cortical mapping,” NeuroImage 61(4), 1120–1128 (2012). [CrossRef]  

22. H. Dehghani, B. R. White, B. W. Zeff, A. Tizzard, and J. P. Culver, “Depth sensitivity and image reconstruction analysis of dense imaging arrays for mapping brain function with diffuse optical tomography,” Appl. Opt. 48(10), D137–D143 (2009). [CrossRef]  

23. L. Gagnon, K. Perdue, D. N. Greve, D. Goldenholz, G. Kaskhedikar, and D. A. Boas, “Improved recovery of the hemodynamic response in diffuse optical imaging using short optode separations and state-space modeling,” NeuroImage 56(3), 1362–1371 (2011). [CrossRef]  

24. S. Brigadoi and R. J. Cooper, “How short is short? Optimum source-detector distance for short-separation channels in functional near-infrared spectroscopy,” Neurophotonics 2(2), 025005 (2015). [CrossRef]  

25. N. M. Gregg, B. R. White, B. W. Zeff, A. J. Berger, and J. P. Culver, “Brain specificity of diffuse optical imaging: improvements from superficial signal regression and tomography,” Front. Neuroenerg. 2, 14 (2010). [CrossRef]  

26. D. Chitnis, R. J. Cooper, L. Dempsey, S. Powell, S. Quaggia, D. Highton, C. Elwell, J. C. Hebden, and N. L. Everdell, “Functional imaging of the human brain using a modular, fibre-less, high-density diffuse optical tomography system,” Biomed. Opt. Express 7(10), 4275–4288 (2016). [CrossRef]  

27. L. A. Dempsey, R. J. Cooper, T. Roque, T. Correia, E. Magee, S. Powell, A. P. Gibson, and J. Hebden, “Data-driven approach to optimum wavelength selection for diffuse optical imaging,” J. Biomed. Opt. 20(1), 016003 (2015). [CrossRef]  

28. G. Strangman, M. A. Franceschini, and D. A. Boas, “Factors affecting the accuracy of near-infrared spectroscopy concentration calculations for focal changes in oxygenation parameters,” NeuroImage 18(4), 865–879 (2003). [CrossRef]  

29. L. Gagnon, R. J. Cooper, M. A. Yücel, K. L. Perdue, D. N. Greve, and D. A. Boas, “Short separation channel location impacts the performance of short channel regression in NIRS,” NeuroImage 59(3), 2518–2528 (2012). [CrossRef]  

30. H. Zhao and R. J. Cooper, “Review of recent progress toward a fiberless, whole-scalp diffuse optical tomography system,” Neurophotonics 5(1), 011012 (2017). [CrossRef]  

31. B. B. Avants, N. J. Tustison, G. Song, P. A. Cook, A. Klein, and J. C. Gee, “A reproducible evaluation of ANTs similarity metric performance in brain image registration,” NeuroImage 54(3), 2033–2044 (2011). [CrossRef]  

32. J. Ashburner and K. J. Friston, “Unified segmentation,” NeuroImage 26(3), 839–851 (2005). [CrossRef]  

33. Q. Fang and D. A. Boas, “Tetrahedral mesh generation from volumetric binary and grayscale images,” in 2009 IEEE International Symposium on Biomedical Imaging: From Nano to Macro, 1142–1145 (2009).

34. P. J. Besl and N. D. McKay, “A method for registration of 3-D shapes,” IEEE Trans. Pattern Anal. Mach. Intell. 14(2), 239–256 (1992). [CrossRef]  

35. C. M. Aasted, M. A. Yücel, R. J. Cooper, J. Dubb, D. Tsuzuki, L. Becerra, M. P. Petkov, D. Borsook, I. Dan, and D. A. Boas, “Anatomical guidance for functional near-infrared spectroscopy: AtlasViewer tutorial,” Neurophotonics 2(2), 020801 (2015). [CrossRef]  

36. M. Schweiger and S. R. Arridge, “The Toast++ software suite for forward and inverse modeling in optical tomography,” J. Biomed. Opt. 19(4), 040801 (2014). [CrossRef]  

37. F. Bevilacqua, D. Piguet, P. Marquet, J. D. Gross, B. J. Tromberg, and C. Depeursinge, “In vivo local determination of tissue optical properties: applications to human brain,” Appl. Opt. 38(22), 4939–4950 (1999). [CrossRef]  

38. A. Custo, W. M. Wells, A. H. Barnett, E. M. C. Hillman, and D. A. Boas, “Effective scattering coefficient of the cerebral spinal fluid in adult head models for diffuse optical imaging,” Appl. Opt. 45(19), 4747–4755 (2006). [CrossRef]  

39. A. M. Chiarelli, E. L. Maclin, M. Fabiani, and G. Gratton, “A kurtosis-based wavelet algorithm for motion artifact correction of fNIRS data,” NeuroImage 112, 128–137 (2015). [CrossRef]  

40. H. Zhao, Y. Tanikawa, F. Gao, Y. Onodera, A. Sassaroli, K. Tanaka, and Y. Yamada, “Maps of optical differential pathlength factor of human adult forehead, somatosensory motor and occipital regions at multi-wavelengths in NIR,” Phys. Med. Biol. 47(12), 2075 (2002). [CrossRef]  

41. A. Corlu, R. Choe, T. Durduran, K. Lee, M. Schweiger, S. R. Arridge, E. M. Hillman, and A. G. Yodh, “Diffuse optical tomography with spectral constraints and wavelength optimization,” Appl. Opt. 44(11), 2082–2093 (2005). [CrossRef]  

42. S. Brigadoi, P. Phan, D. Highton, S. Powell, R. J. Cooper, J. Hebden, M. Smith, I. Tachtsidis, C. E. Elwell, and A. P. Gibson, “Image reconstruction of oxidized cerebral cytochrome C oxidase changes from broadband near-infrared spectroscopy data,” Neurophotonics 4(2), 021105 (2017). [CrossRef]  

43. M. F. Glasser, S. N. Sotiropoulos, J. A. Wilson, T. S. Coalson, B. Fischl, J. L. Andersson, J. Xu, S. Jbabdi, M. Webster, J. R. Polimeni, D. C. Van Essen, M. Jenkinson, and WU-Minn HCP Consortium, “The minimal preprocessing pipelines for the Human Connectome Project,” NeuroImage 80, 105–124 (2013). [CrossRef]  

44. D. C. Van Essen, M. F. Glasser, D. L. Dierker, J. Harwell, and T. Coalson, “Parcellations and Hemispheric Asymmetries of Human Cerebral Cortex Analyzed on Surface-Based Atlases,” Cereb. Cortex 22(10), 2241–2262 (2012). [CrossRef]  

45. A. M. Winkler, G. R. Ridgway, M. A. Webster, S. M. Smith, and T. E. Nichols, “Permutation inference for the general linear model,” NeuroImage 92, 381–397 (2014). [CrossRef]  

46. A. M. Winkler, G. R. Ridgway, G. Douaud, T. E. Nichols, and S. M. Smith, “Faster permutation inference in brain imaging,” NeuroImage 141, 502–516 (2016). [CrossRef]  

47. J. Virtanen, K. M. Kotilahti, R. Ilmoniemi, T. E. J. Noponen, and J. Virtanen, “Accelerometer-based method for correcting signal baseline changes caused by motion artifacts in medical near-infrared spectroscopy,” J. Biomed. Opt. 16(8), 087005 (2011). [CrossRef]  

48. F. C. Robertson, T. S. Douglas, and E. M. Meintjes, “Motion Artifact Removal for Functional Near Infrared Spectroscopy: A Comparison of Methods,” IEEE Trans. Biomed. Eng. 57(6), 1377–1387 (2010). [CrossRef]  

49. X. Cui, J. M. Baker, N. Liu, and A. L. Reiss, “Sensitivity of fNIRS measurement to head motion: An applied use of smartphones in the lab,” J. Neurosci. Methods 245, 37–43 (2015). [CrossRef]  

50. S. Brigadoi, A. Ganglani, H. Zhao, and R. J. Cooper, “Integrating motion sensing and wearable, modular high-density diffuse optical tomography: preliminary results,” Proc. SPIE 11074, 1107405 (2019). [CrossRef]  

51. J. D. Meier, T. N. Aflalo, S. Kastner, and M. S. A. Graziano, “Complex Organization of Human Primary Motor Cortex: A High-Resolution fMRI Study,” J. Neurophysiol. 100(4), 1800–1812 (2008). [CrossRef]  

52. K. L. M. Koenraadt, E. G. J. Roelofsen, J. Duysens, and N. L. W. Keijsers, “Cortical control of normal gait and precision stepping: An fNIRS study,” NeuroImage 85, 415–422 (2014). [CrossRef]  

53. T. A. Yousry, U. D. Schmid, H. Alkadhi, D. Schmidt, A. Peraud, A. Buettner, and P. Winkler, “Localization of the motor hand area to a knob on the precentral gyrus. A new landmark,” Brain 120(1), 141–157 (1997). [CrossRef]  

54. C. Weiss, C. Nettekoven, A. K. Rehme, V. Neuschmelting, A. Eisenbeis, R. Goldbrunner, and C. Grefkes, “Mapping the hand, foot and face representations in the primary motor cortex — Retest reliability of neuronavigated TMS versus functional MRI,” NeuroImage 66, 531–542 (2013). [CrossRef]  

55. H. Chainay, A. Krainik, M. L. Tanguy, E. Gerardin, D. L. Bihan, and S. Lehéricy, “Foot, face and hand representation in the human supplementary motor area,” NeuroReport 15(5), 765–769 (2004). [CrossRef]  

56. A. von Lühmann, Z. Boukouvalas, K. R. Müller, and T. Adalı, “A new blind source separation framework for signal analysis and artifact rejection in functional Near-Infrared Spectroscopy,” NeuroImage 200, 72–88 (2019). [CrossRef]  

57. A. von Lühmann, X. Li, K. R. Müller, D. A. Boas, and M. A. Yücel, “Improved physiological noise regression in fNIRS: A multimodal extension of the General Linear Model using temporally embedded Canonical Correlation Analysis,” NeuroImage 208, 116472 (2020). [CrossRef]  

58. D. Chitnis, D. Airantzis, D. Highton, R. Williams, P. Phan, V. Giagka, S. Powell, R. Cooper, I. Tachtsidis, M. Smith, C. E. Elwell, J. C. Hebden, and N. Everdell, “Towards a wearable near infrared spectroscopic probe for monitoring concentrations of multiple chromophores in biological tissue in vivo,” Rev. Sci. Instrum. 87(6), 065112 (2016). [CrossRef]  

59. D. Wyser, O. Lambercy, F. Scholkmann, and M. Wolf, “Wearable and modular functional near-infrared spectroscopy instrument with multidistance measurements at four wavelengths,” Neurophotonics 4(4), 041413 (2017). [CrossRef]  


