
Real-time multi-functional optical coherence tomography

Open Access

Abstract

We demonstrate real-time acquisition, processing, and display of tissue structure, birefringence, and blood flow in a multi-functional optical coherence tomography (MF-OCT) system. This is accomplished by efficient processing of the phase-resolved interference patterns, without dedicated hardware or extensive modification to the high-speed fiber-based OCT system. The system acquires images of 2048 depth scans per second, covering an area 5 mm wide by 1.2 mm deep, with a real-time display that updates images in a rolling manner 32 times each second. We present a video of the system display as images of the proximal nail fold of a human volunteer are taken.

©2003 Optical Society of America

1. Introduction

Optical coherence tomography (OCT) is a promising technique for non-invasive imaging of biological tissue [1]. OCT uses low coherence interferometry to produce a two-dimensional image of optical scattering from internal tissue microstructures. Interference fringes are formed when the optical path length of light reflected from the sample matches that reflected from the reference arm to within the coherence length of the light source. An axial depth scan (A-line) is obtained by scanning the reference arm length, resulting in localized interference fringes with amplitudes related to sample reflectivity. The fringe intensities in adjacent A-lines are combined to form a two-dimensional image. The source coherence length and the spot size of the beam focus on the sample determine the depth resolution and lateral image resolution, respectively. Systems with ultrahigh resolution (1 µm×3 µm) have been developed [2]; however, resolution on the order of 10 µm is more typical. OCT imaging can be performed at high speeds: video rate imaging at 32 fps for 250×250 pixels covering an area of 3×4 mm has been reported [3].

Various extensions have been shown to provide more information on tissue properties than standard OCT imaging alone. Polarization sensitive OCT (PS-OCT) is sensitive to the polarization-changing properties of the sample [4–9]. Simultaneous detection of interference fringes in two orthogonal polarization channels allows determination of the Stokes parameters of light [8]. Comparison of the Stokes parameters of the incident state to those of light reflected from the sample can yield a depth-resolved map of optical properties such as birefringence [8]. PS-OCT has been incorporated in fiber-based systems using standard single-mode fiber [10,11] and using polarization-maintaining fiber [12,13]. Another extension, optical Doppler tomography (ODT), is capable of depth-resolved imaging of flow [14–19]. Flow sensitivity can be achieved by measuring the shift in carrier frequency of the interference fringe pattern due to backscattering of light from moving particles, or by comparing the phase of the interference fringe pattern from one A-line to the next. Both methods have been implemented in real-time systems, either with dedicated hardware [18,19] or by use of an optical Hilbert transform [20]. In a previous publication, we demonstrated a multi-functional OCT system capable of simultaneously measuring all three images (intensity, birefringence, and flow) [21]. This requires acquisition of the full interference fringe patterns. Due to the processing time necessary to analyze this large amount of data, displaying all three images simultaneously and in real time had not previously been demonstrated: measurements were usually taken and saved, with data processing and display occurring separately afterwards.

We present a fiber-based multi-functional OCT (MF-OCT) system capable of acquiring, processing, and displaying intensity, birefringence, and flow simultaneously and in real-time by use of efficient computation algorithms. Images comprised of 2048 A-line scans of 2560 points in length are acquired once a second. All three images are updated 32 times per second in a rolling manner, where successive fractions of the image are updated as acquisition progresses. The immediate nature of the display provides visual feedback during data acquisition, as opposed to previous efforts where acquisition was blind to features with birefringence and/or flow. Our system facilitates location of regions of interest, especially for in vivo studies, opening the possibility for clinical use of MF-OCT as an optical biopsy method.

2. System

The optical components of the fiber-based OCT system, described in more detail in Pierce et al. [21], are diagrammed in Fig. 1. Standard single-mode fiber is used throughout. A polarization controller and polarizer are used to select the polarization state with the highest power of a broadband source centered at 1310 nm and with a 70 nm bandwidth. This light is sent to an electro-optic polarization modulator controlled by a two-step driving function designed to toggle the polarization state between two orthogonal states in a Poincaré sphere representation. After passing through an optical circulator, the light is sent to the sample and reference arms of the interferometer via a 90/10 fiber splitter. A polarization controller and polarizer are used to ensure that a constant amount of power is sent to the rapid scanning optical delay (RSOD), which provides depth scanning and controls the carrier frequency of the interference fringe pattern [22], regardless of the polarization state of light in the source arm. A handpiece with a rear-mounted CCD contains two galvo-mounted mirrors that move the beam across the sample to add lateral scanning. Light returning from both arms then passes back through the fiber splitter and the optical circulator before being split by a fiber polarizing beam splitter. The resulting two sets of interference fringes are measured by separate photodiodes.

The system is controlled by a dual processor (2.4 GHz Intel Xeon) computer through two D/A boards (National Instruments 6733 and 6115), one of which doubles as a 12-bit A/D board capable of sampling both detectors at 5 million samples per second. The RSOD is controlled by a rounded triangular waveform with the same period as the two-step waveform sent to the polarization modulator, with a phase delay to ensure that the RSOD galvo response is in phase with the polarization modulator, as shown in Fig. 1. Waveform output is triggered by the start of data acquisition. Even and odd A-lines are recorded during the positive and negative sloping sections of the RSOD triangle and have incident polarization states orthogonal in a Poincaré sphere representation.


Fig. 1. Diagram of the OCT system and driving waveforms. The RSOD and polarization modulator are driven by a rounded triangle and a step function, respectively, both at approximately 1 kHz. A phase delay is introduced such that the RSOD galvo response is in phase with the polarization modulator. The system processes the central 80% of the positive and negative sloping regions of the RSOD response, yielding even and odd A-lines. (pol: polarizer, pc: passive polarization controller, pm: electro-optic polarization modulator, oc: optical circulator, RSOD: rapid scanning optical delay line, fpb: fiber polarizing beam splitter, pd: fiber-pigtailed photodiodes, ccd: charge coupled device camera).



Fig. 2. Flow diagram of data processing. The main thread begins processing a data chunk as soon as it has been acquired. This entails activating even and odd A-line processing threads to convert the detected interference patterns into Stokes parameters and phase information, as well as updating the intensity image. Once initial processing of the data chunk has been completed, the birefringence and flow threads perform their respective analysis and image updates. The save thread writes raw data to disk once an image is completely acquired.


3. Data processing

The acquisition and processing software was written in Microsoft Visual C++ v6.0. Initialization consists of allocation of various memory buffers and preparation of the driving waveforms. The driving waveforms for the RSOD and polarization modulator are calculated and sent to one D/A board, with the waveforms for the two scanning axes of the handpiece sent to the other D/A board. Waveform output on both boards is triggered simultaneously by the start of signal acquisition. The clocks of the two boards are linked to ensure continued synchronization of all waveforms and data acquisition.
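To make the waveform preparation concrete, here is a minimal sketch (not the authors' code; the sample count and number of retained harmonics are our own illustrative choices) of one period of a two-step modulator drive and a "rounded" triangle for the RSOD, where the rounding comes from truncating the triangle wave's Fourier series:

```python
import numpy as np

# Illustrative driving waveforms (parameters are ours, not the system's):
# one period of the polarization-modulator two-step function, and a
# "rounded" triangle for the RSOD built by truncating the triangle
# wave's Fourier series so the turning points are smooth.
def two_step(n):
    """Two-step polarization modulator drive: +1 then -1."""
    w = np.ones(n)
    w[n // 2:] = -1.0
    return w

def rounded_triangle(n, harmonics=5):
    """Triangle wave truncated to a few odd harmonics, unit amplitude."""
    t = np.arange(n) / n
    w = np.zeros(n)
    for k in range(harmonics):
        w += (-1) ** k * np.sin(2 * np.pi * (2 * k + 1) * t) / (2 * k + 1) ** 2
    return w / np.abs(w).max()

rsod = rounded_triangle(1024)
mod = two_step(1024)
```

A hardware-specific phase delay (omitted here) would then be applied so that the galvo response, not the drive signal, lines up with the modulator steps.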

The acquisition and processing scheme is outlined in Fig. 2. The software was written utilizing multiple threads and run under Windows 2000. This structure was implemented to take full advantage of a multi-processor platform, but it could also run on a single-processor computer with a sufficiently fast processor. Each thread is given a priority and waits in a dormant state until signaled. The operating system assigns signaled threads of the highest priority to the processor with the least computational load. After a thread completes processing, it returns to a dormant state that occupies virtually no processor load. The main control thread has the highest priority, controls data acquisition, and signals the other threads, which include even and odd A-line processing threads, birefringence and flow calculation threads, and a thread to save data to disk.
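The dormant-thread pattern described above can be sketched in a few lines of Python (an illustrative reconstruction, not the actual Visual C++ implementation; class and variable names are ours). Each worker sleeps on an event, wakes when signaled by the control thread, and signals back when its task is done:

```python
import threading

# Sketch of the dormant-worker pattern: each worker blocks on an event
# (near-zero CPU load), wakes when signaled, runs its task, then
# returns to the dormant state. Names are ours, not the authors'.
class Worker(threading.Thread):
    def __init__(self, task):
        super().__init__(daemon=True)
        self.task = task                  # callable to run when signaled
        self.wake = threading.Event()     # set by the control thread
        self.done = threading.Event()     # set when processing finishes
        self.result = None
        self.start()

    def run(self):
        while True:
            self.wake.wait()              # dormant until signaled
            self.wake.clear()
            self.result = self.task()
            self.done.set()

def process_chunk(workers):
    """Control thread: signal all workers, then wait for completion."""
    for w in workers:
        w.done.clear()
        w.wake.set()
    for w in workers:
        w.done.wait()
    return [w.result for w in workers]

workers = [Worker(lambda: "even done"), Worker(lambda: "odd done")]
results = process_chunk(workers)
```

In the real system the operating system's scheduler, not this code, decides which processor runs each awakened thread; the sketch only shows the signal/dormancy mechanics.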

3.1 Calculation of Stokes parameters and phase

Once initiated, the A/D board continuously fills a cyclical memory buffer with interleaved data from the two detectors. The buffer can be divided into a number of data chunks, with 2048 A-lines per second typically being divided into 32 data chunks of 64 A-lines each. When the main control thread notices that a data chunk has been completely acquired, it copies that data from the cyclical buffer to an image data buffer. The raw data is arranged in alternating even and odd A-lines, each consisting of 2560 pairs of points from the two detectors. Pointers to unprocessed pairs of A-lines are sent in order to the even and odd A-line processing threads until all 64 A-lines in the chunk are processed. At this point, the birefringence and flow threads are signaled and the main control thread returns to dormancy until the next data chunk becomes available. The appropriate sections of the intensity, birefringence, and flow images are updated as soon as the corresponding processing for a data chunk has been completed. If all data chunks for an entire image have been acquired, a thread is signaled that saves the data to disk.
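A minimal sketch of the chunk handling, using the sizes stated in the text (64 A-lines per chunk, 2560 interleaved point-pairs per A-line, 32 chunks per image); the exact buffer layout is our assumption, and the depth reversal of odd A-lines discussed next is omitted here:

```python
import numpy as np

# Sketch of chunk extraction and deinterleaving. Sizes follow the text;
# the sample ordering (H,V pairs, A-line major) is our assumption.
POINTS, ALINES = 2560, 64
CHUNK = ALINES * POINTS * 2               # samples per chunk, two detectors

def copy_chunk(cyclic_buf, chunk_idx, n_chunks=32):
    """Copy one completed chunk out of the cyclical acquisition buffer."""
    start = (chunk_idx % n_chunks) * CHUNK
    return cyclic_buf[start:start + CHUNK].copy()

def deinterleave(chunk):
    """Split a chunk into per-A-line horizontal/vertical detector arrays."""
    pairs = chunk.reshape(ALINES, POINTS, 2)
    h = pairs[:, :, 0].copy()             # horizontal detector
    v = pairs[:, :, 1].copy()             # vertical detector
    return h, v

buf = np.arange(32 * CHUNK)               # stand-in for the cyclical buffer
h, v = deinterleave(copy_chunk(buf, 1))
```

The `.copy()` calls mirror the text's copy from the cyclical buffer into an image buffer, so acquisition can keep overwriting the ring while processing proceeds.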

The first task of the even and odd A-line processing threads is to separate the interleaved data into two separate detector arrays. Only the middle 2048 of the 2560 data points per A-line are used, to minimize non-linearity in the effective depth scan velocity, especially near the turning points of the RSOD galvo movement. The key difference between even and odd A-lines lies in the slope of the RSOD galvo waveform. The change in the sign of the slope translates into a difference in the direction of the depth scan: as an even A-line scan progresses, the depth of the scan increases, while for an odd A-line it decreases. For this reason, the odd A-line processing thread must reverse the order of points before deinterleaving, so that the detector arrays for all A-lines can be written as functions of increasing depth z_j, where j ∈ [0, 2048). Let the horizontal and vertical polarization detector signals as a function of depth be denoted by the arrays H_n[z_j] and V_n[z_j], respectively, where n ∈ [0, 64) represents the A-line number within the current data chunk. Assuming a constant effective depth scanning velocity ν_d, a detected interference pattern f(z_j) can be rewritten in terms of time steps t_j, since z_j = ν_d t_j. A detected interference pattern f(t) can be expressed as the real part of the product of a real envelope f_env(t) and a complex oscillation at an angular carrier frequency s_0 with phase φ(t). Writing the envelope amplitude f_env(t) and the phase φ(t) of the fringes in terms of real functions f_cos(t) and f_sin(t) such that f_env(t)e^{iφ(t)} = f_cos(t) + i f_sin(t), the detected interference pattern is given by

$$f(t)=\mathrm{Re}\left\{f_{env}(t)\,e^{i(s_0 t+\phi(t))}\right\}=f_{\cos}(t)\cos(s_0 t)-f_{\sin}(t)\sin(s_0 t)\tag{1}$$

The cosine and sine components of the interference fringe can then be extracted from the detected interferogram by multiplying by an oscillation exp(-is0t) and averaging over a period Δt corresponding to a few oscillations of the carrier since

$$\int_0^{\Delta t} f(t)\,e^{-is_0 t}\,dt=\int_0^{\Delta t}\bigl(f_{\cos}(t)\cos(s_0 t)-f_{\sin}(t)\sin(s_0 t)\bigr)\bigl(\cos(s_0 t)-i\sin(s_0 t)\bigr)\,dt$$
$$\approx\int_0^{\Delta t}\bigl(f_{\cos}(t)\cos^2(s_0 t)+i f_{\sin}(t)\sin^2(s_0 t)\bigr)\,dt$$
$$\propto f_{\cos}(t)+i f_{\sin}(t)\tag{2}$$

assuming f_cos(t) and f_sin(t) are relatively constant over Δt; the cross terms cos(s_0t)sin(s_0t) average to zero when Δt spans an integer number of carrier cycles.

Depending on the initial settings of the acquisition software, the even and odd A-line processing threads can perform extraction of the cosine and sine components by either quadrature demodulation or using fast Fourier transforms (FFTs). Straightforward translation of Eq. (2) yields a quadrature demodulation, given by

$$\cos{}_{H_n,V_n}[T_k]=\sum_{l=1}^{q}H_n{,}V_n[T_k+l\,\delta t]\,\cos(s_0 l\,\delta t)$$
$$\sin{}_{H_n,V_n}[T_k]=-\sum_{l=1}^{q}H_n{,}V_n[T_k+l\,\delta t]\,\sin(s_0 l\,\delta t)$$

Fig. 3. Line graph demonstrating the relationship between the time scales tj and Tk for parameters of 2048 points sampled at 5 MS/s at a carrier frequency of 625 kHz and q=8.


where q represents the length of the demodulation in points and T_k are the time steps of the resulting reduced-length arrays, such that δt = t_{j+1} − t_j = 200 ns, qδt = T_{k+1} − T_k, and s_0 qδt ∈ 2πℤ (Fig. 3). For normal acquisition settings, the length of the demodulation is 8 points for a carrier frequency of 625 kHz, yielding 256 pairs of cosine and sine components for each detector array of an A-line.
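A compact sketch of this quadrature demodulation with the stated parameters (δt = 200 ns, q = 8, 625 kHz carrier, 2048 points per A-line). This is our own reconstruction; the windows here are indexed from l = 0 rather than l = 1, and because s_0 qδt = 2π the window-local and global carrier phases agree:

```python
import numpy as np

# Quadrature demodulation of one detector array, following the sums
# above; an illustrative sketch with the stated settings.
DT = 200e-9                      # sample spacing, delta t
Q = 8                            # demodulation length in points
S0 = 2 * np.pi * 625e3           # angular carrier frequency

def demodulate(f):
    """Reduce 2048 fringe samples to 256 (cos, sin) component pairs."""
    t = DT * np.arange(Q)                 # local time within one window
    win = f.reshape(-1, Q)                # consecutive q-point windows
    fcos = win @ np.cos(S0 * t)
    fsin = -win @ np.sin(S0 * t)          # sign follows exp(-i s0 t)
    return fcos, fsin

# A pure carrier with a known 0.5 rad phase offset should demodulate to
# a constant fringe phase of 0.5 in every window.
t = DT * np.arange(2048)
fcos, fsin = demodulate(np.cos(S0 * t + 0.5))
phase = np.arctan2(fsin, fcos)
```

Each matrix product performs all 256 window sums of an A-line at once, which is the kind of tight inner loop that makes software-only real-time processing feasible.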

While extraction by quadrature demodulation is straightforward and easily implemented, determining the cosine and sine components of the interference pattern by FFTs is a more powerful method [21]. Since F⃗{f(t)e^{−is_0t}} = f(s+s_0), where f(s) = F⃗{f(t)} and F⃗ and F⃖ represent the forward and inverse Fourier transforms, we can also extract the cosine and sine components of the interference envelope from the inverse Fourier transform of the carrier-frequency-shifted Fourier spectrum. We implement this idea by first performing FFTs (C libraries obtained from http://fftw.org) on the real arrays Hn[tj] and Vn[tj] to yield the complex frequency arrays Hn[sk] and Vn[sk] of length 1024 points. The Fourier transform of a real function f(t) yields a complex function f(s) where f(−s) is the complex conjugate of f(s). To ensure examination of only the positive spectrum, we remove frequency components outside a detection bandwidth Δs around s_0. Spectral shaping [23] and dispersion compensation [24] can be applied to the frequency array. The resulting spectrum is then frequency shifted by s_0. Since the maximum frequency the arrays need to encode has decreased from half the A/D sampling frequency to Δs, we can reduce the size of the arrays at this point by a factor of 4. An inverse transform then yields 256 complex points, each of which represents an average of f(t)e^{−is_0t} over 8 points. The cosine and sine components of each detector array are then the real and imaginary parts, respectively, of the complex array.
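The FFT route can be sketched as follows (our reconstruction with the stated numbers: 2048 real points, 5 MS/s sampling, 625 kHz carrier, 500 kHz detection bandwidth, output reduced to 256 complex points; spectral shaping and dispersion compensation are omitted):

```python
import numpy as np

# FFT-based extraction of the fringe quadratures (illustrative sketch):
# forward FFT, keep only a band around the carrier, shift the carrier
# to DC while shrinking the spectrum, then inverse FFT.
N, M = 2048, 256                 # input length, reduced length (N/8)
FS = 5e6                         # A/D sampling rate
F0, BW = 625e3, 500e3            # carrier frequency, detection bandwidth

def fft_demodulate(f):
    """2048 real samples -> 256 complex points f_cos + i*f_sin."""
    spec = np.fft.fft(f)
    k0 = int(F0 / FS * N)                    # carrier bin (256 here)
    b = int(BW / 2 / FS * N)                 # half detection bandwidth in bins
    small = np.zeros(M, dtype=complex)       # carrier-shifted, reduced spectrum
    small[:b + 1] = spec[k0:k0 + b + 1]      # positive offsets from carrier
    small[-b:] = spec[k0 - b:k0]             # negative offsets from carrier
    # Scale so a unit-amplitude fringe yields a unit-magnitude envelope.
    return np.fft.ifft(small) * 2 * M / N

# A pure carrier with 0.7 rad phase maps to a constant complex envelope
# of unit magnitude and phase 0.7.
t = np.arange(N) / FS
c = fft_demodulate(np.cos(2 * np.pi * F0 * t + 0.7))
```

Copying only the pass-band into the short spectrum implements the bandwidth limiting, the carrier shift, and the factor-of-8 size reduction in a single step, which is the efficiency the text relies on.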

The Stokes parameters and phase at each point can now be derived from the cosine and sine components [8,10,21] according to the following equations

$$I_n[T_k]=\cos^2_{H_n}[T_k]+\sin^2_{H_n}[T_k]+\cos^2_{V_n}[T_k]+\sin^2_{V_n}[T_k]$$
$$Q_n[T_k]=\cos^2_{H_n}[T_k]+\sin^2_{H_n}[T_k]-\cos^2_{V_n}[T_k]-\sin^2_{V_n}[T_k]$$
$$U_n[T_k]=2\bigl(\cos_{H_n}[T_k]\cos_{V_n}[T_k]+\sin_{H_n}[T_k]\sin_{V_n}[T_k]\bigr)$$
$$V_n[T_k]=2\bigl(\cos_{H_n}[T_k]\sin_{V_n}[T_k]-\sin_{H_n}[T_k]\cos_{V_n}[T_k]\bigr)$$
$$\phi_{H_n,V_n}[T_k]=\tan^{-1}\!\left(\frac{\sin_{H_n,V_n}[T_k]}{\cos_{H_n,V_n}[T_k]}\right).$$

The intensity image for the data chunk is derived from the logarithm of In[Tk] and immediately displayed on an 8-bit gray scale over a user-specified range and offset.
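The Stokes-parameter and phase equations above translate directly into code; a short sketch (our own, with array names hc, hs, vc, vs for the cosine and sine components of the two detector channels):

```python
import numpy as np

# Stokes parameters and fringe phase from the demodulated quadratures,
# following the equations above. Array names are ours.
def stokes(hc, hs, vc, vs):
    I = hc**2 + hs**2 + vc**2 + vs**2
    Q = hc**2 + hs**2 - vc**2 - vs**2
    U = 2 * (hc * vc + hs * vs)
    V = 2 * (hc * vs - hs * vc)
    phi_h = np.arctan2(hs, hc)       # phases used later for flow imaging
    phi_v = np.arctan2(vs, vc)
    return I, Q, U, V, phi_h, phi_v

# Equal H and V amplitudes with a 90 degree relative phase: circular light.
I, Q, U, V, ph, pv = stokes(np.array([1.0]), np.array([0.0]),
                            np.array([0.0]), np.array([1.0]))
```

For fully polarized light the components satisfy I² = Q² + U² + V², a convenient sanity check on the implementation.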

3.2 Birefringence calculation

In a Poincaré sphere representation, linear birefringence can be modeled as a rotation about some optic axis constrained to the QU-plane. The amount of birefringence may then be quantified by the amount of rotation, or degree of phase retardation, about that axis necessary to bring the incident polarization state in line with the state after passing through the birefringent material. Clearly, if the incident polarization state is parallel to the optic axis, the amount of rotation becomes impossible to discern. With this in mind, open-air PS-OCT systems were designed such that the sample was probed with circularly polarized light [4–6,8]. Given the constraints on the optic axis and incident state, the direction of the optic axis and the degree of phase retardation can then be completely determined from the reflected polarization state.


Fig. 4. Birefringence calculation illustrating (a): the surface states, I⃗1 and I⃗2, in blue and the reflected states, I⃗′1 and I⃗′2, in green, (b, c): the planes P1 and P2 that span all possible rotation axes, and (d): the intersection of the planes resulting in determination of the optic axis A⃗.


The fact that the polarization state of light changes while propagating through single-mode fiber complicates the determination of birefringence when measured with a fiber-based PS-OCT system. The detected polarization state now reflects the overall birefringence of both the sample and the fiber of the system. The part due to the sample can be extracted by making a relative measurement. The fiber birefringence affects the polarization state reflected from the surface of the sample in the same way it affects light returning from some depth below the surface. Since the orthogonality between states is preserved in a lossless system, the difference between these two polarization states is due to the sample only. Treating the two polarization states as those occurring before and after passing through the birefringent sample, the effects of fiber birefringence on the overall measurement of the degree of phase retardation are eliminated. However, unlike the open-air situation, the incident state is not necessarily circular, and so there is the possibility that the polarization state incident on the sample is linear and coincides with the optic axis. Additionally, the compound effect of fiber and sample birefringence means that the overall optic axis is no longer constrained to the QU-plane. We can solve both problems by probing with multiple polarization states [10,25]. In the current setup, we toggle between two states for adjacent A-lines such that all even A-lines have the same incident polarization state, perpendicular in a Poincaré sphere representation to that of all odd A-lines.

The first task of the birefringence thread is to average down a copy of the depth-resolved Stokes parameters for the data chunk, from 32 pairs of even and odd A-lines of 256 points each to 16 pairs of length 128. This improves the signal-to-noise ratio and reduces the amount of processing required in the rest of the calculation. The sample surface is determined by thresholding after application of a 2×2 median filter. Incident states for even and odd A-lines are determined by averaging the surface states over their respective 16 A-lines. Let the intensity and direction of these surface Stokes states be denoted by the scalar Ij and vector I⃗j=(Qj, Uj, Vj), where j=1 indicates even A-lines and j=2 odd A-lines. The overall degree of phase retardation at depth z computed from an adjacent pair of A-lines may then be calculated in the following way (Fig. 4). Let the intensity and the polarization state in an even A-line at depth z be represented by I′1=In[z] and I⃗′1=(Qn[z], Un[z], Vn[z]), respectively, and the intensity and polarization state for the adjacent odd A-line by I′2=In+1[z] and I⃗′2=(Qn+1[z], Un+1[z], Vn+1[z]). The plane Pj containing all possible axes that can rotate the surface polarization state I⃗j to the reflected polarization state at depth z, I⃗′j, is spanned by their sum and cross product. The intersection of the two planes P1 and P2 determines a single axis of rotation A⃗, capable of rotating both surface polarization states, I⃗1 and I⃗2, to the reflected states at depth z, I⃗′1 and I⃗′2, simultaneously. Let θj represent the angle about the optic axis A⃗ necessary to rotate I⃗j to I⃗′j. θj can be calculated from the angle between the cross products of I⃗j and I⃗′j with A⃗ according to the relation

$$\cos\theta_j=\frac{(\vec{A}\times\vec{I}_j)\cdot(\vec{A}\times\vec{I}^{\,\prime}_j)}{\bigl|\vec{A}\times\vec{I}_j\bigr|\,\bigl|\vec{A}\times\vec{I}^{\,\prime}_j\bigr|}.$$

The overall phase retardation θ is then the average of θ1 and θ2. Previously, we used I′1 and I′2 to weight the average [25]. However, the effect of noise on the value of θj increases as the angle between the rotation axis A⃗ and either of the Stokes states I⃗j or I⃗′j decreases. This may be illustrated by defining θ_{A,Ij} and θ_{A,I′j} as the angles between A⃗ and I⃗j and I⃗′j, respectively, where sin(θ_{A,Ij}) = |A⃗×I⃗j|/(|A⃗||I⃗j|) and sin(θ_{A,I′j}) = |A⃗×I⃗′j|/(|A⃗||I⃗′j|). In the absence of noise, the rotation angle required to rotate both incident polarization states to the reflected polarization states at depth z will be identical, such that θ_{A,Ij} = θ_{A,I′j}. As shown in Fig. 5, the effect of noise on the polarization state of detected light can be modeled in the Poincaré sphere by adding a randomly oriented vector, of a length proportional to the magnitude of the noise, to the polarization state of the light. Since cross products are linear, A⃗×(I⃗+N⃗) = A⃗×I⃗ + A⃗×N⃗. As θ_{A,Ij} decreases, the contribution to θj due to the signal decreases while that due to noise is unchanged. This means the calculated rotation angle θj becomes more susceptible to noise for small θ_{A,Ij}. To account for this, we additionally weight the calculated angles by the product of the sines of the angles between the axis of rotation and the polarization states. The degree of phase retardation is then given by


Fig. 5. Effect of noise on the calculated rotation angle. For a given optic axis A⃗, two pairs of I⃗1 and I⃗′1 are shown, one with large θ_{A,I1} and the other with small θ_{A,I1}. The cones represent the uncertainty in orientation of the polarization states due to noise. The red areas show the variation in rotation angle θj due to the uncertainty introduced by noise, demonstrating the increased uncertainty involved in determining θj when the polarization state of incident light is closely aligned with the optic axis.


$$\theta=\frac{I_1 I^{\prime}_1\sin(\theta_{A,I_1})\sin(\theta_{A,I^{\prime}_1})\,\theta_1+I_2 I^{\prime}_2\sin(\theta_{A,I_2})\sin(\theta_{A,I^{\prime}_2})\,\theta_2}{I_1 I^{\prime}_1\sin(\theta_{A,I_1})\sin(\theta_{A,I^{\prime}_1})+I_2 I^{\prime}_2\sin(\theta_{A,I_2})\sin(\theta_{A,I^{\prime}_2})}.$$

These angles are displayed on an 8-bit gray scale with black corresponding to 0° and white to 180°.
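The geometry of this calculation can be sketched as follows (our reconstruction; function and variable names are ours, and the surface and depth states are assumed to be given as Stokes 3-vectors). The optic axis is found as the intersection of the two planes, and the two rotation angles are combined with the sine weighting described above:

```python
import numpy as np

# Per-depth retardation sketch (illustrative, not the authors' code).
# i1, i2: surface (incident) Stokes vectors of even and odd A-lines;
# i1p, i2p: states reflected from depth z; w1, w2: intensity weights.
def retardation(i1, i1p, i2, i2p, w1=1.0, w2=1.0):
    # Plane P_j of candidate rotation axes is spanned by I_j + I'_j and
    # I_j x I'_j; its normal is the cross product of those two spanners.
    n1 = np.cross(i1 + i1p, np.cross(i1, i1p))
    n2 = np.cross(i2 + i2p, np.cross(i2, i2p))
    a = np.cross(n1, n2)                  # optic axis: plane intersection
    a /= np.linalg.norm(a)

    def angle_about(s, sp):
        x, y = np.cross(a, s), np.cross(a, sp)
        nx, ny = np.linalg.norm(x), np.linalg.norm(y)
        theta = np.arccos(np.clip(np.dot(x, y) / (nx * ny), -1.0, 1.0))
        # product of sines of the axis-to-state angles (|a| = 1)
        g = (nx / np.linalg.norm(s)) * (ny / np.linalg.norm(sp))
        return theta, g

    t1, g1 = angle_about(i1, i1p)
    t2, g2 = angle_about(i2, i2p)
    return (w1 * g1 * t1 + w2 * g2 * t2) / (w1 * g1 + w2 * g2)

# Two incident states, perpendicular on the sphere, rotated 0.6 rad
# about the V axis should both report 0.6 rad of retardation.
c, s = np.cos(0.6), np.sin(0.6)
theta = retardation(np.array([1.0, 0.0, 0.0]), np.array([c, s, 0.0]),
                    np.array([0.0, 1.0, 0.0]), np.array([-s, c, 0.0]))
```

When one incident state lies along the optic axis its sine weight collapses toward zero, so the estimate is carried by the other state, which is exactly the behavior the weighting is designed to produce.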

3.3 Flow calculation

ODT, also known as color Doppler optical coherence tomography, combines the Doppler principle with OCT to yield two-dimensional maps of flow. The velocity of moving particles can be determined by the Doppler shift of the fringe carrier frequency over the detection window Δt [26]. In this case, the minimum detectable frequency varies inversely with the detection window Δt. However, the spatial resolution is directly related to Δt by the depth scanning velocity, meaning that the velocity sensitivity and spatial resolution are inversely related. An alternative method uses the phase change between A-lines to construct a velocity image [16]. The Doppler frequency shift is found by dividing the calculated phase change between points at the same depth in sequential A-lines by the time interval between depth scans, T. Since T is much longer than Δt, this results in a much better velocity sensitivity. It also decouples velocity sensitivity from spatial resolution since the two now depend on different time periods, allowing for sensitive velocity measurements with high spatial resolution. Due to phase-wrapping and the highly sensitive nature of these measurements, it is often more useful to simply image the presence of flow. Better mapping of vessel location than that provided by true Doppler flow velocity imaging can be obtained by displaying images of phase variance [17].

We have adapted the phase-resolved technique for flow imaging in our system [21], and now describe its real-time implementation, which avoids the computationally intensive Hilbert transform. The flow processing thread must first determine an array of phase differences for all A-lines within the data chunk. Since we probe with two different polarization states, this is done by comparing the phase at a given point in one A-line with that in the previous A-line of the same polarization state, to yield Δφ_{Hn,Vn}[z] = φ_{Hn,Vn}[z] − φ_{Hn−2,Vn−2}[z]. In this way, we can always determine a phase difference except for the first two A-lines of an image, in which case Δφ[z]=0. The data chunk has now been reduced to 64 pairs of phase difference arrays of length 256 points, with no distinction between even and odd A-lines. A phase correction algorithm is applied to correct for bulk motion artifacts and RSOD galvo movement inconsistencies between A-lines. In this phase correction algorithm, intensity-weighted linear least-squares fits are determined as a function of depth, according to Δφ̄_{Hn,Vn}[z] = α_n + β_n z, where α_n and β_n signify a phase difference offset and slope, respectively, for an individual A-line. Corrected phase difference arrays are then computed as Δφ′_{Hn,Vn}[z] = Δφ_{Hn,Vn}[z] − α_n − β_n z. Depending on the initial settings of the software, the flow thread can now compute either bi-directional flow or phase variance. The Doppler shift frequency is found by dividing the intensity-weighted average by the time between acquisition of two A-lines of the same polarization state,

$$\omega_n[z]=\frac{(I_n[z]+Q_n[z])\,\Delta\phi^{\prime}_{H_n}[z]+(I_n[z]-Q_n[z])\,\Delta\phi^{\prime}_{V_n}[z]}{(I_n[z]+Q_n[z])+(I_n[z]-Q_n[z])}\cdot\frac{1}{2T}.$$

The phase variance is calculated according to the relation

$$\mathrm{Var}_n[z]=\frac{(I_n[z]+Q_n[z])\,\Delta\phi^{\prime\,2}_{H_n}[z]+(I_n[z]-Q_n[z])\,\Delta\phi^{\prime\,2}_{V_n}[z]}{(I_n[z]+Q_n[z])+(I_n[z]-Q_n[z])}.$$

In either case, we now have 64 arrays of length 256 points. To improve the signal-to-noise ratio, we average the data chunk down to 16 arrays of length 64 points. Bi-directional flow is displayed in 128 shades each of red and blue, representing positive and negative flow velocities. Phase variance values are encoded on an 8-bit gray scale. Images are displayed for either option as soon as calculation for the data chunk is complete.
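The flow steps above can be sketched as follows (our reconstruction; array names and test values are illustrative). Phases are compared two A-lines apart, an intensity-weighted linear fit removes the bulk phase error per A-line, and the weighted Doppler shift and phase variance follow the expressions above:

```python
import numpy as np

# Sketch of the phase-resolved flow step for the two channels. Phases
# of A-lines with the same incident polarization are two apart, so the
# Doppler shift uses 2T. Names and sizes here are illustrative.
T = 1 / 2048.0                         # time between successive A-lines (s)

def phase_differences(phi):
    """phi[n, z]: fringe phases; compare A-lines two apart, wrapped."""
    dphi = np.zeros_like(phi)
    dphi[2:] = phi[2:] - phi[:-2]
    return np.angle(np.exp(1j * dphi))  # wrap into (-pi, pi]

def bulk_motion_correct(dphi, weight, z):
    """Subtract an intensity-weighted linear fit alpha + beta*z per A-line."""
    out = np.empty_like(dphi)
    A = np.stack([np.ones_like(z), z], axis=1)
    for n in range(dphi.shape[0]):
        sw = np.sqrt(weight[n])
        coef, *_ = np.linalg.lstsq(A * sw[:, None], dphi[n] * sw, rcond=None)
        out[n] = dphi[n] - coef[0] - coef[1] * z
    return out

def doppler_and_variance(dph, dpv, wh, wv):
    """Intensity-weighted Doppler shift and phase variance per point."""
    wsum = wh + wv
    omega = (wh * dph + wv * dpv) / wsum / (2 * T)
    var = (wh * dph**2 + wv * dpv**2) / wsum
    return omega, var

# A uniform 0.3 rad step between same-state A-lines maps to a Doppler
# shift of 0.3/(2T); the linear fit removes it entirely here because
# this toy "image" contains nothing but the uniform phase ramp.
phi = 0.3 * (np.arange(8)[:, None] // 2) * np.ones((1, 16))
dphi = phase_differences(phi)
w = np.ones_like(dphi)
omega, var = doppler_and_variance(dphi, dphi, w, w)
dc = bulk_motion_correct(dphi, w, np.linspace(0.0, 1.0, 16))
```

In the real system the fit is dominated by the static tissue filling most of each A-line, so the correction removes bulk motion while leaving localized flow signals intact.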

4. Results and discussion

Figure 6 shows intensity, birefringence, and flow images taken of the proximal nail fold of a human volunteer. The scan covers an area 5 mm wide by 1.2 mm deep with resolutions of 2048×256, 512×128, and 512×64 pixels for the intensity, birefringence, and flow images, respectively. The surface of the window in our handpiece, in light contact with the finger, is visible at the top of all three images. The delineation between the epidermal and dermal regions of the nail fold is evident in the intensity image, as is the continuity between the cuticle and nail fold epidermis. Moving outward from the bottom left corner, the nail plate, nail bed, and nail matrix can be identified in the intensity image as well. The transition from black to white in the birefringence image indicates that the polarization state experiences a change in phase retardation angle with increasing depth due to the presence of birefringent tissue at the epidermal-dermal boundary of the nail fold. In the lower left corner, the highly birefringent lower portion of the nail plate reveals a banded structure as the amount of phase retardation wraps several times from 0° to 180° and back. Small transverse blood vessels in the nail fold are easily distinguishable in the phase variance image by their lighter color. The vertical lines beneath the vessels are caused by an accumulation of random Doppler shifts experienced as light passes through the flow region [27], and are not an artifact of the data processing algorithm.


Fig. 6. Intensity, birefringence, and flow (phase variance) images of the proximal nail fold of a human volunteer (upper, middle, and lower images respectively). The epidermal (a) and dermal (b) areas of the nail fold, cuticle (c), nail plate (d), nail bed (e), and nail matrix (f) are all identifiable in the intensity image. The birefringence image shows the phase retardation of the epidermal-dermal boundary (g) as well as the lower half of the nail plate (h). Small transverse blood vessels in the nail fold are distinguishable in the flow image by their lighter color. Each image is 5 mm×1.2 mm, with all data acquired in 1 s.


Figure 7 is a 4-second video recording taken as images of nearly the same area are acquired. The top left window is the user interface of the acquisition software. Below it is a graph displaying the power spectrum averaged over a full image, obtained by Fourier transform of the acquired interference patterns at each detector, for even A-lines in red and green, respectively, and for odd A-lines in light red and light green. The black lines in the graph show the detection bandwidth (500 kHz) centered about the carrier frequency (625 kHz). Moving down from the top right are the intensity, birefringence, and flow (phase variance) images. All three images can be seen being updated in a rolling manner 32 times a second. Below these images is the view from the CCD camera mounted in the handpiece. Due to the non-optimal lighting, the features of the nail fold are difficult to distinguish; however, an aiming beam revealing the area being scanned can be seen.

The immediate visualization of all three images greatly facilitates location and accurate measurement of various structures. Instead of having to record massive amounts of data without knowing if structures of interest are present, we can quickly locate and record images of desired regions. This ability becomes even more important considering that the pressure due to contact with the handpiece can often constrict flow through small vessels. An example of this can be seen in the video recording: the pressure being exerted during acquisition of the second image is enough to temporarily constrict flow. Since we can immediately visualize this, we are able to ease the contact pressure, and the vessel becomes visible in the last two seconds of imaging.


Fig. 7. (2.5 MB) Video recording of 4 seconds of real-time MF-OCT imaging of the same proximal nail fold. Moving clockwise from the top left corner are the simple user interface, intensity, birefringence, flow images, view of the scanning area, and spectrum. (7.5 MB version).


In conclusion, we have demonstrated real-time MF-OCT acquisition, processing, and display. By efficient data processing in software, this has been accomplished without dedicated hardware or extensive modification of the optical system layout. We believe this makes MF-OCT imaging possible for clinical use, where speed and accuracy of imaging are of utmost importance.

Acknowledgments

Research grants from the Whitaker Foundation (26083), the National Eye Institute (1R24 EY12877), and the Department of Defense (F4 9620-01-1-0014) are gratefully acknowledged.

References and links

1. D. Huang, E.A. Swanson, C.P. Lin, J.S. Schuman, W.G. Stinson, W. Chang, M.R. Hee, T. Flotte, K. Gregory, C.A. Puliafito, and J.G. Fujimoto, “Optical coherence tomography,” Science 254, 1178–1181 (1991).

2. W. Drexler, U. Morgner, F.X. Kärtner, C. Pitris, S.A. Boppart, X.D. Li, E.P. Ippen, and J.G. Fujimoto, “In vivo ultrahigh-resolution optical coherence tomography,” Opt. Lett. 24, 1221–1223 (1999).

3. A.M. Rollins, M.D. Kulkarni, S. Yazdanfar, R. Ung-arunyawee, and J.A. Izatt, “In vivo video rate optical coherence tomography,” Opt. Express 3, 219–229 (1998).

4. J.F. de Boer, T.E. Milner, M.J.C. van Gemert, and J.S. Nelson, “Two dimensional birefringence imaging in biological tissue by polarization-sensitive optical coherence tomography,” Opt. Lett. 22, 934–936 (1997).

5. M.J. Everett, K. Schoenenberger, B.W. Colston, and L.B. Da Silva, “Birefringence characterization of biological tissue by use of optical coherence tomography,” Opt. Lett. 23, 228–230 (1998).

6. J.F. de Boer, S.M. Srinivas, A. Malekafzali, Z. Chen, and J.S. Nelson, “Imaging thermally damaged tissue by polarization sensitive optical coherence tomography,” Opt. Express 3, 212–218 (1998), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-3-6-212.

7. J.M. Schmitt and S.H. Xiang, “Cross-polarized backscatter in optical coherence tomography of biological tissue,” Opt. Lett. 23, 1060–1062 (1998).

8. J.F. de Boer, T.E. Milner, and J.S. Nelson, “Determination of the depth-resolved Stokes parameters of light backscattered from turbid media by use of polarization-sensitive optical coherence tomography,” Opt. Lett. 24, 300–302 (1999).

9. G. Yao and L.V. Wang, “Two dimensional depth-resolved Mueller matrix characterization of biological tissue by optical coherence tomography,” Opt. Lett. 24, 537–539 (1999).

10. C.E. Saxer, J.F. de Boer, B.H. Park, Y. Zhao, Z. Chen, and J.S. Nelson, “High-speed fiber-based polarization-sensitive optical coherence tomography of in vivo human skin,” Opt. Lett. 25, 1355–1357 (2000).

11. J.E. Roth, J.A. Kozak, S. Yazdanfar, A.M. Rollins, and J.A. Izatt, “Simplified method for polarization-sensitive optical coherence tomography,” Opt. Lett. 26, 1069–1071 (2001).

12. R.V. Kuranov, V.V. Sapozhnikova, I.V. Turchin, E.V. Zagainova, V.M. Gelikonov, V.A. Kamensky, L.B. Snopova, and N.N. Prodanetz, “Complementary use of cross-polarization and standard OCT for differential diagnosis of pathological tissues,” Opt. Express 10, 707–713 (2002), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-10-15-707.

13. D. Fried, J. Xie, S. Shafi, J.D.B. Featherstone, T.M. Breunig, and C. Le, “Imaging caries lesions and lesion progression with polarization sensitive optical coherence tomography,” J. Biomed. Opt. 7, 618–627 (2002).

14. Z. Chen, T.E. Milner, S. Srinivas, X. Wang, A. Malekafzali, M.J.C. van Gemert, and J.S. Nelson, “Noninvasive imaging of in vivo blood flow velocity using optical Doppler tomography,” Opt. Lett. 22, 1119–1121 (1997).

15. J.A. Izatt, M.D. Kulkarni, S. Yazdanfar, J.K. Barton, and A.J. Welch, “In vivo bi-directional color Doppler flow imaging of picoliter blood volumes using optical coherence tomography,” Opt. Lett. 22, 1439–1441 (1997).

16. Y. Zhao, Z. Chen, C. Saxer, S. Xiang, J.F. de Boer, and J.S. Nelson, “Phase-resolved optical coherence tomography and optical Doppler tomography for imaging blood flow in human skin with fast scanning speed and high velocity sensitivity,” Opt. Lett. 25, 114–116 (2000).

17. Y. Zhao, Z. Chen, C. Saxer, Q. Shen, S. Xiang, J.F. de Boer, and J.S. Nelson, “Doppler standard deviation imaging for clinical monitoring of in vivo human skin blood flow,” Opt. Lett. 25, 1358 (2000).

18. V. Westphal, S. Yazdanfar, A.M. Rollins, and J.A. Izatt, “Real-time, high velocity-resolution color Doppler optical coherence tomography,” Opt. Lett. 27, 34–36 (2002).

19. A.M. Rollins, S. Yazdanfar, J.K. Barton, and J.A. Izatt, “Real-time in vivo color Doppler optical coherence tomography,” J. Biomed. Opt. 7, 123–129 (2002).

20. Y. Zhao, Z. Chen, Z. Ding, H. Ren, and J.S. Nelson, “Real-time phase-resolved functional optical coherence tomography by use of optical Hilbert transformation,” Opt. Lett. 27, 98–100 (2002).

21. M.C. Pierce, B.H. Park, B. Cense, and J.F. de Boer, “Simultaneous intensity, birefringence, and flow measurements with high-speed fiber-based optical coherence tomography,” Opt. Lett. 27, 1534–1536 (2002).

22. G.J. Tearney, B.E. Bouma, and J.G. Fujimoto, “High-speed phase- and group-delay scanning with a grating-based phase control delay line,” Opt. Lett. 22, 1811–1813 (1997).

23. R. Tripathi, N. Nassif, J.S. Nelson, B.H. Park, and J.F. de Boer, “Spectral shaping for non-Gaussian source spectra in optical coherence tomography,” Opt. Lett. 27, 406–408 (2002).

24. J.F. de Boer, C.E. Saxer, and J.S. Nelson, “Stable carrier generation and phase-resolved digital data processing in optical coherence tomography,” Appl. Opt. 40, 5787–5790 (2001).

25. B.H. Park, C. Saxer, S.M. Srinivas, J.S. Nelson, and J.F. de Boer, “In vivo burn depth determination by high-speed fiber-based polarization sensitive optical coherence tomography,” J. Biomed. Opt. 6, 474–479 (2001).

26. X.J. Wang, T.E. Milner, and J.S. Nelson, “Characterization of fluid-flow velocity by optical Doppler tomography,” Opt. Lett. 20, 1337–1339 (1995).

27. T. Lindmo, D.J. Smithies, Z. Chen, J.S. Nelson, and T.E. Milner, “Accuracy and noise in optical Doppler tomography studied by Monte Carlo simulation,” Phys. Med. Biol. 43, 3045–3064 (1998).

Supplementary Material (2)

Media 1: MOV (2471 KB)     
Media 2: MOV (7718 KB)     


Figures (7)

Fig. 1.
Fig. 1. Diagram of the OCT system and driving waveforms. The RSOD and polarization modulator are driven by a rounded triangle and a step function, respectively, both at approximately 1 kHz. A phase delay is introduced such that the RSOD galvo response is in phase with the polarization modulator. The system processes the central 80% of the positive and negative sloping regions of the RSOD response, yielding even and odd A-lines. (pol: polarizer, pc: passive polarization controller, pm: electro-optic polarization modulator, oc: optical circulator, RSOD: rapid scanning optical delay line, fpb: fiber polarizing beam splitter, pd: fiber-pigtailed photodiodes, ccd: charge coupled device camera).
Fig. 2.
Fig. 2. Flow diagram of data processing. The main thread begins processing a data chunk as soon as it has been acquired. This entails activating even and odd A-line processing threads to convert the detected interference patterns into Stokes parameters and phase information, as well as updating the intensity image. Once initial processing of the data chunk has been completed, the birefringence and flow threads perform their respective analysis and image updates. The save thread writes raw data to disk once an image is completely acquired.
Fig. 3.
Fig. 3. Line graph demonstrating the relationship between the time scales tj and Tk for parameters of 2048 points sampled at 5 MS/s, a carrier frequency of 625 kHz, and q=8.
Fig. 4.
Fig. 4. Birefringence calculation illustrating (a): the surface states, I⃗1 and I⃗2, in blue and the reflected states, I⃗1′ and I⃗2′, in green, (b, c): the planes P1 and P2 that span all possible rotation axes, and (d): the intersection of the planes resulting in determination of the optic axis A⃗.
Fig. 5.
Fig. 5. Effect of noise on the calculated rotation angle. For a given optic axis A⃗, two pairs of I⃗j and I⃗j′ are shown, one with large θA,Ij and the other with small θA,Ij. The cones represent the uncertainty in orientation of the polarization states due to noise. The red areas show the variation in rotation angle θj due to the uncertainty introduced by noise, demonstrating the increased uncertainty involved in determining θj when the polarization state of incident light is closely aligned with the optic axis.
Fig. 6.
Fig. 6. Intensity, birefringence, and flow (phase variance) images of the proximal nail fold of a human volunteer (upper, middle, and lower images respectively). The epidermal (a) and dermal (b) areas of the nail fold, cuticle (c), nail plate (d), nail bed (e), and nail matrix (f) are all identifiable in the intensity image. The birefringence image shows the phase retardation of the epidermal-dermal boundary (g) as well as the lower half of the nail plate (h). Small transverse blood vessels in the nail fold are distinguishable in the flow image by their lighter color. Each image is 5 mm×1.2 mm, with all data acquired in 1 s.
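The thread layout described in the Fig. 2 caption (a main acquisition loop dispatching even/odd A-line workers, with a background save thread) can be sketched as follows. This is a minimal Python illustration of that pipeline structure, not the authors' implementation; the chunk contents and `process_alines` stand-in are placeholders.

```python
import queue
import threading

def process_alines(chunk, parity):
    # Placeholder for converting detected fringes into Stokes
    # parameters and phase; here it just selects even or odd A-lines.
    return [x for i, x in enumerate(chunk) if i % 2 == parity]

def pipeline(chunks):
    raw_queue = queue.Queue()
    saved = []

    def save_worker():
        # Save thread: drain raw chunks to "disk" until a sentinel arrives.
        while True:
            raw = raw_queue.get()
            if raw is None:
                break
            saved.append(raw)

    saver = threading.Thread(target=save_worker)
    saver.start()

    results = []
    for chunk in chunks:
        # Even and odd A-lines of each chunk are processed concurrently.
        out = {}
        def worker(parity):
            out[parity] = process_alines(chunk, parity)
        workers = [threading.Thread(target=worker, args=(p,)) for p in (0, 1)]
        for w in workers:
            w.start()
        for w in workers:
            w.join()
        # Birefringence and flow threads would operate on out[0]/out[1] here.
        results.append(out)
        raw_queue.put(chunk)  # hand raw data to the save thread

    raw_queue.put(None)  # sentinel: no more data
    saver.join()
    return results, saved
```

The key design point mirrored from Fig. 2 is that raw-data saving is decoupled from image processing via a queue, so disk writes never stall the display update.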

Equations (15)


$$f(t)=\mathrm{Re}\left\{f_{env}(t)\,e^{i(s_0 t+\phi(t))}\right\}=f_{\cos}(t)\cos(s_0 t)-f_{\sin}(t)\sin(s_0 t)$$
$$\int_0^{\Delta t} f(t)\,e^{-i s_0 t}\,dt=\int_0^{\Delta t}\left(f_{\cos}(t)\cos(s_0 t)-f_{\sin}(t)\sin(s_0 t)\right)\left(\cos(s_0 t)-i\sin(s_0 t)\right)dt$$
$$=\int_0^{\Delta t}\left(f_{\cos}(t)\cos^2(s_0 t)+i\,f_{\sin}(t)\sin^2(s_0 t)\right)dt$$
$$\propto f_{\cos}(t)+i\,f_{\sin}(t)$$
$$\cos_{H_n,V_n}[T_k]=\sum_{l=1}^{q}H_n,V_n[T_k+l\,\delta t]\,\cos(s_0\,l\,\delta t)$$
$$\sin_{H_n,V_n}[T_k]=\sum_{l=1}^{q}H_n,V_n[T_k+l\,\delta t]\,\sin(s_0\,l\,\delta t)$$
$$I_n[T_k]=\left(\cos_{H_n}^2[T_k]+\sin_{H_n}^2[T_k]+\cos_{V_n}^2[T_k]+\sin_{V_n}^2[T_k]\right)$$
$$Q_n[T_k]=\left(\cos_{H_n}^2[T_k]+\sin_{H_n}^2[T_k]-\cos_{V_n}^2[T_k]-\sin_{V_n}^2[T_k]\right)$$
$$U_n[T_k]=2\left(\cos_{H_n}[T_k]\cos_{V_n}[T_k]+\sin_{H_n}[T_k]\sin_{V_n}[T_k]\right)$$
$$V_n[T_k]=2\left(\cos_{H_n}[T_k]\sin_{V_n}[T_k]-\sin_{H_n}[T_k]\cos_{V_n}[T_k]\right)$$
$$\phi_{H_n,V_n}[T_k]=\tan^{-1}\!\left(\frac{\sin_{H_n,V_n}[T_k]}{\cos_{H_n,V_n}[T_k]}\right).$$
$$\cos\theta_j=\frac{(\vec{A}\times\vec{I}_j)\cdot(\vec{A}\times\vec{I}_j{}')}{\left|\vec{A}\times\vec{I}_j\right|\left|\vec{A}\times\vec{I}_j{}'\right|}.$$
$$\theta=\frac{\left(\left|\vec{I}_1\right|\sin(\theta_{A,I_1})\left|\vec{I}_1{}'\right|\sin(\theta_{A,I_1'})\right)\theta_1+\left(\left|\vec{I}_2\right|\sin(\theta_{A,I_2})\left|\vec{I}_2{}'\right|\sin(\theta_{A,I_2'})\right)\theta_2}{\left(\left|\vec{I}_1\right|\sin(\theta_{A,I_1})\left|\vec{I}_1{}'\right|\sin(\theta_{A,I_1'})\right)+\left(\left|\vec{I}_2\right|\sin(\theta_{A,I_2})\left|\vec{I}_2{}'\right|\sin(\theta_{A,I_2'})\right)}.$$
$$\omega_n[z]=\frac{(I_n[z]+Q_n[z])\,\Delta\phi_{H_n}[z]+(I_n[z]-Q_n[z])\,\Delta\phi_{V_n}[z]}{(I_n[z]+Q_n[z])+(I_n[z]-Q_n[z])}\cdot\frac{1}{2T}.$$
$$\mathrm{Var}_n[z]=\frac{(I_n[z]+Q_n[z])\,\Delta\phi_{H_n}^2[z]+(I_n[z]-Q_n[z])\,\Delta\phi_{V_n}^2[z]}{(I_n[z]+Q_n[z])+(I_n[z]-Q_n[z])}.$$
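As a minimal numerical illustration of the lock-in sums, Stokes parameters, and phase defined above, the following Python sketch applies them to synthetic fringe data (parameter values taken from the Fig. 3 caption; variable names follow the equations, but this is not the authors' code).

```python
import numpy as np

def lockin_sums(signal, s0, dt, q):
    """Block-wise lock-in sums: multiply q consecutive fringe samples by
    cos/sin of the carrier and sum (the cos/sin H,V equations above)."""
    n_blocks = len(signal) // q
    l = np.arange(1, q + 1)
    c = np.cos(s0 * l * dt)
    s = np.sin(s0 * l * dt)
    blocks = signal[:n_blocks * q].reshape(n_blocks, q)
    return blocks @ c, blocks @ s

def stokes(cosH, sinH, cosV, sinV):
    """Stokes parameters I, Q, U, V from the demodulated H and V channels."""
    I = cosH**2 + sinH**2 + cosV**2 + sinV**2
    Q = cosH**2 + sinH**2 - cosV**2 - sinV**2
    U = 2 * (cosH * cosV + sinH * sinV)
    V = 2 * (cosH * sinV - sinH * cosV)
    return I, Q, U, V

# Synthetic fringes: 2048 points sampled at 5 MS/s, carrier 625 kHz, q = 8,
# with equal-amplitude H and V channels 90 degrees out of phase.
fs, f0, q = 5e6, 625e3, 8
dt = 1.0 / fs
s0 = 2 * np.pi * f0
t = np.arange(2048) * dt
H_fringe = np.cos(s0 * t)
V_fringe = np.cos(s0 * t - np.pi / 2)

cosH, sinH = lockin_sums(H_fringe, s0, dt, q)
cosV, sinV = lockin_sums(V_fringe, s0, dt, q)
I, Q, U, V = stokes(cosH, sinH, cosV, sinV)
phiH = np.arctan2(sinH, cosH)  # phase, per the tan^-1 equation above
```

For this input (equal H and V amplitudes with a quarter-wave phase offset) the recovered state is circularly polarized, i.e. V equals I while Q and U vanish, which is a quick consistency check on the demodulation.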