Efficient holoscopy image reconstruction

Abstract

Holoscopy is a tomographic imaging technique that combines digital holography and Fourier-domain optical coherence tomography (OCT) to obtain tomograms with diffraction-limited resolution and uniform sensitivity over several Rayleigh lengths. The lateral image information is calculated from the spatial interference pattern formed by light scattered from the sample and a reference beam. The depth information is obtained from the spectral dependence of the recorded digital holograms. Numerous digital holograms are acquired at different wavelengths and then reconstructed for a common plane in the sample. Afterwards, standard Fourier-domain OCT signal processing achieves depth discrimination. Here we describe and demonstrate an optimized data reconstruction algorithm for holoscopy which is related to the inverse scattering reconstruction of wavelength-scanned full-field optical coherence tomography data. Instead of calculating a regularized pseudoinverse of the forward operator, the recorded optical fields are propagated back into the sample volume. In one processing step the high frequency components of the scattering potential are reconstructed on a non-equidistant grid in three-dimensional spatial frequency space. A Fourier transform yields an OCT equivalent image of the object structure. In contrast to the original holoscopy reconstruction with backpropagation and Fourier transform with respect to the wavenumber, the required processing time depends neither on the confocal parameter nor on the depth of the volume. For an imaging NA of 0.14, the processing time was decreased by a factor of 15; at higher NAs the gain in reconstruction speed may reach two orders of magnitude.

© 2012 Optical Society of America

1. Introduction

Holoscopy is a new imaging technique that combines principles of Fourier-domain optical coherence tomography (FD-OCT) with digital holography (DH) [1, 2]. It has promising advantages compared to other optical tomographic imaging techniques. All light backscattered from the sample is collected by an area camera and no confocal or coherence gating restricts the depth of field. Compared to FD-OCT, holoscopy is thus capable of obtaining more information in less time and with less light illuminating the sample. High-resolution imaging is not limited to the focus region, which is defined by the confocal parameter (twice the Rayleigh length). Holographic techniques are used to refocus the data that build the tomographic images. These techniques can bring any layer of the sample into the ‘virtual’ focus – even after the data has been acquired.

Several techniques have been demonstrated to achieve acquisition of tomographic images over several Rayleigh lengths with constant or almost constant lateral resolution. Using multiple foci [3] or non-diffracting beams [4–7], i. e. Bessel illumination, a more uniform lateral resolution over several Rayleigh lengths was demonstrated with an imaging quality that is comparable to conventional FD-OCT. Interferometric synthetic aperture microscopy (ISAM) extends the depth range of scanning FD-OCT by numerical post-processing routines that reconstruct the image structure outside the focus at increased resolution [8–10]. Similar effects can be achieved by applying refocusing techniques of digital holography to FD-OCT data [11, 12]. However, all techniques that use scanning with either Gaussian or Bessel beams suffer from reduced sensitivity. If Gaussian beams are used, the light intensity illuminating the scatterers drops rapidly in out-of-focus layers. In the case of Bessel illumination, the intensity is reduced in all depth layers, as relatively large parts of the energy are deposited in the side-lobes of the beam. Thus, to achieve constantly high sensitivity over the full measurement depth, full-field approaches have to be applied where a collimated beam is used for illumination. For example, full-field Fourier-domain OCT has been demonstrated ex-vivo and also in-vivo by using high-speed imaging cameras and tunable lasers [13,14]. Depth-independent sensitivity was achieved by abandoning the confocal gating, but lateral resolution was optimal only in the focal plane of the imaging optics.

Digital holography (DH) captures the entire wave-field of the object and has a lateral resolution that is not restricted to a focal plane. By modeling the light propagation the optical field is traced back to the object to reconstruct a real image. Using multiple wavelengths, either simultaneously in low-coherence holography [15, 16], or sequentially in digital interference holography [17, 18], depth structures can be resolved and three-dimensional tomograms can be obtained. However, previous approaches using multiple wavelengths in DH to obtain tomographic images either show a non-uniform resolution over depth [19–21] or suffer from inefficient reconstruction algorithms and reduced imaging quality [18, 22, 23].

It was shown by Emil Wolf that the Fourier transform of the scattering potential can be obtained when the scattered field of an incident plane wave is observed holographically for multiple directions of the incident wave [24]. In backscattering geometry only a part of the Fourier coefficients can be recorded, and especially at limited NA a large portion of the low-frequency information in the z-direction is missing. By using multiple wavelengths the sampled bandwidth is considerably enlarged [25]. Still, information on the low-frequency components of the scattering potential is missing. Therefore, the theory of inverse scattering was applied to full-field swept-source OCT (SS-OCT) to calculate an estimate of the scattering potential [26, 27]. From a forward model for the scattered light field the Tikhonov-regularized least-squares solution was calculated. Depth-invariant resolution and sensitivity were demonstrated with simulated data.

In contrast to this work, holoscopy records the scattered irradiation in the far field. It combines swept-source FD-OCT with DH by using a tunable laser in a holography-like setup. Instead of using inverse scattering theory, a fusion of numerical reconstruction techniques of FD-OCT and digital holography is used to achieve uniform sensitivity and lateral resolution over the entire measurement depth even at high lateral resolution. First experimental results have demonstrated capabilities of holoscopy for ex-vivo and in-vivo imaging [1]. In that work, holoscopy suffered from a computationally expensive reconstruction. At first, all acquired holograms were reconstructed to the same numerical focus. Afterwards a Fourier transform for each lateral position over the recorded wavenumbers provided the depth resolution similar to full-field FD-OCT. Unfortunately, only a depth of about a Rayleigh length around the chosen layer in the tissue volume was reconstructed with optimal lateral resolution. For imaging over more than 3mm the reconstruction steps, which took about 22s, had to be repeated up to 15 times with different virtual foci. Though holoscopy allows acquisition of the full volume at any NA without physical refocusing, computing the image required serial reconstruction of all tissue layers, each covering a depth of about twice the Rayleigh length. This is neither very elegant nor very efficient.

Here we propose a highly efficient one-step reconstruction process of the complete volume. A similar resampling technique as in ISAM [8–10, 26, 27] is applied, but without regularization and thus neglecting the low frequency components of the object structure. Depending on the relation between Rayleigh length and measurement depth it significantly decreases the required reconstruction time compared to the previously demonstrated holoscopy reconstruction. For large NAs improvements by two or more orders of magnitude are expected.

2. Setup

We used two setups to demonstrate the capabilities of the new reconstruction technique. The first lens-less setup (Fig. 1(a)) was previously used for holoscopy [1]. The light of a rapidly tunable laser (Broadsweeper BS-840-1, Superlum, Ireland) with a tuning range from 823.5nm to 873.5nm and 3mW output power was collimated to 2.2mm beam diameter and then split by a beam splitter cube into reference and sample light. The reference beam was brought onto a convex mirror (f = −10.34mm) and the sample beam onto the specimen. The two beams of reflected and scattered light were then superimposed on a high-speed CMOS camera (EoSens MC3010, Mikrotron GmbH, Germany) with 1696 × 1710 pixels measuring 8μm × 8μm each. Camera and laser were synchronized by a trigger from the camera which started the sweep of the laser with the first frame of the imaging sequence. During the sweep 1024 holograms with 1024 × 1024 pixels were acquired at a frame-rate of 440 frames per second resulting in a total measurement time of approximately 2.3s. The setup was adjusted in such a way that the path lengths of all light scattered or reflected by the sample were, in analogy to FD-OCT, slightly longer than the path length of the reference light. Certain restrictions applied to the optical layout, as the spatial frequencies of the interference fringes, which depend on the local angle between sample and reference light, have to be sampled by the camera (see e. g. [28]). The distance between reference mirror and camera was 81mm. The numerical aperture (NA) of the setup – determined by the opening angle between sample and camera – was 0.05. The setup thus provided axial and lateral resolutions of approximately 14μm and 11μm, respectively. The confocal parameter was 220μm.

Fig. 1 Setups used for the holoscopic measurements. (a) A lens-less Michelson type setup was used for 0.05 NA on-axis holoscopy. (b) The Mach-Zehnder type setup for off-axis recording of the holograms with 0.14 NA was used for high resolution measurements.

The second setup used a Mach-Zehnder type interferometer for off-axis holoscopy as shown in Fig. 1(b). It utilized a microscope objective (MO) to magnify the object in order to achieve higher NAs. The light of a newer version of the rapidly tunable laser (Broadsweeper BS-840-1, Superlum, Ireland) with an extended tuning range from 882nm to 800nm and 3mW output power was split by a fiber coupler into reference and sample light. The sample light was collimated to 2.2mm beam diameter and illuminated the sample through an f = 75mm lens and an MO of NA 0.14 (5× Plan Apo NIR, Mitutoyo, Japan). The sample was placed approximately in the focal plane of the MO. The width of the illumination beam limited the effective field of view, which was about 1mm in diameter. The light that was scattered back by the sample passed the MO again. Irradiation scattered at one point left the MO approximately as a parallel beam. In this far field a high-speed CMOS camera (acA2040-180km, Basler, Germany) with 2048 × 2048 pixels measuring 5.5μm × 5.5μm each was placed. The reference light was collimated to a beam diameter of 16mm and was brought onto the camera at an angle of approximately 3°. The reference light was adjusted such that its overall path length was slightly shorter than the path lengths of light scattered by the sample. A total of 1024 holograms with 2048 × 2048 pixels were acquired at a frame-rate of 127fps resulting in a measurement time of 8.1s. The NA of this setup was 0.14, as the full aperture of the MO was used. The confocal parameter, axial and lateral resolution of the second setup were 28μm, 8.6μm and 4μm, respectively.

Reflections from the MO that reached the camera and disturbed the actual interference were suppressed by using linearly polarized light in the object arm and a quarter-wave plate between MO and object. The plate was adjusted to rotate the polarization of the light backscattered from the sample by 90° with respect to the light reflected by the MO. An additional polarizer between camera and MO suppressed all reflections except those from the camera window and the quarter-wave plate.

3. Theory

3.1. The measured signal

3.1.1. The intensity distribution in holography

In holography the information of the wave field O(x, y; k) as scattered by the sample is encoded in the interference fringes that occur on superposition with a reference wave field R(x, y; k). The intensity distribution which is acquired by the camera can be described by

$$I(x,y;k)=\gamma\,\bigl|R(x,y;k)+O(x,y;k)\bigr|^2=\gamma\Bigl(|R(x,y;k)|^2+|O(x,y;k)|^2+(R^*O)(x,y;k)+(RO^*)(x,y;k)\Bigr),$$
where x and y denote the lateral position on the camera, k is the wavenumber and γ is a conversion factor of the camera. If the intensity distribution is multiplied by the reference wave, four terms are obtained
$$(RI)(x,y;k)=\underbrace{(R|R|^2)(x,y;k)+(R|O|^2)(x,y;k)}_{\text{DC and autocorrelation terms}}+\underbrace{(|R|^2O)(x,y;k)}_{\text{image term}}+\underbrace{(R^2O^*)(x,y;k)}_{\text{twin image term}},$$
of which the third (image term) is proportional to the object wave. The multiplication with R corresponds to the illumination with the original reference wave when reconstructing a hologram optically.

3.1.2. Numerical propagation in digital holography (DH)

In DH the lateral distribution of scatterers is reconstructed by numerically propagating the recorded light field O from the camera plane back into the sample volume. This is done by the propagator, an operator $\mathcal{P}_{k,\Delta z}[\cdot]$ that computes, at a specific wavenumber k, the diffracted wave field in a plane at distance Δz from a known optical wave-field U in the plane z = z0:

$$U(x,y,z_0+\Delta z)=\mathcal{P}_{k,\Delta z}\bigl[U(x,y,z_0)\bigr]$$
In general, the propagation is isomorphic to the addition of real numbers with respect to the propagation distance, i. e.
$$U(x,y,z_0+\Delta z_1+\Delta z_2)=\mathcal{P}_{k,\Delta z_1+\Delta z_2}\bigl[U(x,y,z_0)\bigr]=\mathcal{P}_{k,\Delta z_2}\bigl[\mathcal{P}_{k,\Delta z_1}\bigl[U(x,y,z_0)\bigr]\bigr]$$
and – as a direct consequence – the inverse propagator is given by $\mathcal{P}_{k,\Delta z}^{-1}=\mathcal{P}_{k,-\Delta z}$, i. e.
$$U(x,y,z_0)=\mathcal{P}_{k,-\Delta z}\bigl[U(x,y,z_0+\Delta z)\bigr].$$
There are several ways to compute the diffracted field (e. g. by the Rayleigh-Sommerfeld diffraction integral, by the Huygens-Fresnel integral, by the Kirchhoff integral or by the Fresnel transform, see e. g. [29,30]). We used the angular spectrum approach (see e. g. [28,30]), where the object waves are decomposed by a 2-dimensional Fourier transform with respect to x and y into plane waves. In the Fourier space the diffraction of the field Ũ is then computed by a simple multiplication with a phase factor $P_{k,\Delta z}(k_x,k_y)$:
$$\tilde U(k_x,k_y,z_0+\Delta z)=P_{k,\Delta z}(k_x,k_y)\,\tilde U(k_x,k_y,z_0)\quad\text{and}\quad P_{k,\Delta z}(k_x,k_y)=\exp(-ik_z\Delta z),$$
where kx, ky and kz are the components of the wave vector which are related to each other by
$$k_z=\sqrt{k^2-k_x^2-k_y^2}.$$
Equation (5) allows calculating the Fourier transform of the optical field at any distance from the camera once it is known in the camera plane.
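
For illustration, a minimal NumPy sketch of this angular spectrum propagation is given below. The authors' implementation is written in C++ (see Sec. 4), so the function and variable names here are illustrative assumptions; the sign convention of Eq. (5) is used, and zero-padding against circular-convolution artifacts (Sec. 4.1) is omitted for brevity.

```python
import numpy as np

def angular_spectrum_propagate(u, k, dz, dx):
    """Propagate a sampled complex field u(y, x) over a distance dz with the
    angular spectrum method, Eq. (5): multiply the 2D spectrum by the phase
    factor exp(-i k_z dz) with k_z = sqrt(k^2 - kx^2 - ky^2), Eq. (6).
    k: wavenumber, dx: pixel pitch; evanescent components are discarded."""
    ny, nx = u.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)   # lateral spatial frequencies
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    kz_sq = k**2 - KX**2 - KY**2
    propagating = kz_sq > 0.0                     # drop evanescent plane waves
    kz = np.sqrt(np.where(propagating, kz_sq, 0.0))
    u_tilde = np.fft.fft2(u)                      # decomposition into plane waves
    u_tilde *= np.where(propagating, np.exp(-1j * kz * dz), 0.0)
    return np.fft.ifft2(u_tilde)                  # back to the lateral position domain
```

Propagating by −dz inverts the step, reflecting the inverse-propagator relation stated after Eq. (4).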

3.1.3. Object and reference field in holoscopy

In contrast to digital holography, which images surfaces or flat samples at a single or a few wavenumbers, holoscopy records scattering three-dimensional samples at a large number of different wavenumbers. The propagation times of sample and reference radiation are important as they are used to discriminate the light scattered from different depths. For the mathematical description we assume a common reference plane, the zero delay plane, at which the radiations in the sample and reference arm have the same initial phase ϕ0(k) for all wavenumbers. The distance from the camera to this reference plane is z0. Scatterers are located at different depths z from the reference plane. Figure 2 shows the coordinate systems and the variables that were used to derive the following formulas for sample and reference field.

 figure: Fig. 2

Fig. 2 Schematic representation of the coordinate system and variables in the sample arm (left) and reference arm (right) as used for Eq. (7) and Eq. (8).

Download Full Size | PDF

At a certain k the object wave field is given by the coherent superposition of the propagated wave fields that are scattered at different depths z of the sample

$$O(x,y;k)=A_O\,S(k)\int\mathrm{d}z\;\mathcal{P}_{k,z_0+z}\bigl[\eta(x,y,z)\,e^{-ikz}\,e^{+i\phi_0(k)}\bigr],$$
where $A_O$ is the overall amplitude and S(k) the normalized spectral intensity of the illuminating light. η(x, y, z) denotes the scattering potential, i. e. the relative amplitude of the scattered field compared to the incident field. The phase factor exp(−ikz) is created by the propagation of the incident plane wave from the reference plane to the point scatterer, before the scattered field is propagated to the camera by $\mathcal{P}_{k,z_0+z}[\cdot]$. The integral sums the interference signals of all point scatterers originating from different depths. Attenuation of incoming and scattered radiation as well as multiple scattering were neglected, i. e. the scattered wave was treated in the first order Born approximation.

The reference wave is a spherical wave with a (virtual) focus at a distance $z_{\mathrm{Ref}}$ from the reference plane, which itself is located at a distance $z_0$ in front of the camera. Using the same notation as for the object wave, the spherical reference wave can be described by

$$R(x,y;k)=A_R\,S(k)\,e^{-ik\sqrt{x^2+y^2+(z_0+z_{\mathrm{Ref}})^2}\,+\,ikz_{\mathrm{Ref}}\,+\,i\phi_0(k)},$$
with amplitude AR.

For the Mach-Zehnder type setup as shown in Fig. 1(b) the coordinates and parameters used in Eq. (7) and Eq. (8) need to be adjusted. Numerical reconstruction used a spherical reference wave, although the actual physical wave had been collimated. As the obtained object wave field is in the far field, the action of a lens is required to reconstruct the image. Except for its overall z-dependent phase, the spherical reference wave has the same effects as a “numerical lens” with a focal length corresponding to the radius of curvature of the reference wave, since both are represented by the same numerical expression (see e. g. [30]). With the introduction of the phase-corrected reference wave in the next section, the overall phase difference introduced by the reference is compensated. The following reconstruction can therefore be used for both setups.

3.1.4. Phase-corrected propagator

For holoscopy the exact treatment of the phases of reference and sample radiation is important because their spectral dependence contains the depth information. In order to simplify further computations we introduce a modified, phase-corrected propagator and its corresponding phase factor in the Fourier-domain

$$\mathcal{P}^0_{k,z}[\,\cdot\,]\equiv\exp(+ikz)\,\mathcal{P}_{k,z}[\,\cdot\,]\qquad\text{and}\qquad P^0_{k,z}(k_x,k_y)\equiv\exp(+ikz)\exp(-ik_z z).$$
The phase-corrected propagator does not change the phase of the wave field when it is propagated. It therefore focuses the image without changing the actual optical path length, i. e. without actually moving the wave field to another plane. We also introduce phase-corrected reference and object wave fields
$$R_0(x,y;k)\equiv R(x,y;k)\,e^{-i\phi_0(k)}\,e^{+ikz_0},\qquad O_0(x,y;k)\equiv O(x,y;k)\,e^{-i\phi_0(k)}\,e^{+ikz_0}.$$
For the modified fields the relation
$$I=\gamma\,|R+O|^2=\gamma\,|R_0+O_0|^2$$
holds, and consequently Eq. (1) and Eq. (2) remain valid if R and O are replaced by R0 and O0, respectively. By adapting Eq. (7), one obtains the explicit representation of the modified object wave field
$$O_0(x,y;k)=A_O\,S(k)\int\mathrm{d}z\;\mathcal{P}^0_{k,z_0+z}\bigl[\eta(x,y,z)\,e^{-2ikz}\bigr].$$
Except for the phase-corrected propagator $\mathcal{P}^0_{k,z_0+z}[\cdot]$ this relation has close similarity with the complex cross-correlation term known from Fourier-domain OCT (see e. g. [31]). If the effect of the phase-corrected propagator can be reverted, a similar reconstruction of η(x, y, z) as in FD-OCT can be achieved. The depth distribution of the scatterers can then be retrieved by a fast Fourier transform (FFT).

3.2. Reconstruction

3.2.1. Obtaining the object wave

As in DH the first step in the reconstruction process is to obtain the phase-corrected object wave O0 from the recorded fringe pattern I. To achieve this, twin-image, DC and auto-correlation terms of Eq. (2) need to be separated from the image term. This problem is well known from holography. The most common solution is to introduce a carrier frequency by using an off-axis reference as in our Mach-Zehnder setup shown in Fig. 1(b). The lateral fringes, which encode object wave, conjugated object wave (twin-image), DC and autocorrelation components, will be located in different spatial frequency regions. After acquisition of the fringe pattern the different terms can be separated by spatial filtering [28, 32, 33]. The resulting complex field is no longer real and contains information about amplitude and phase. It also allows for an increased axial measurement range as in full-range FD-OCT [34] without requiring any further hardware or further signal processing.

Alternatively, in holoscopy the different terms can be separated by positioning the object on only one side of the reference plane, as is commonly done in FD-OCT. Then all object-related path lengths are larger than the reference path length. After a Fourier transform with respect to k, the twin images will be found only at negative frequencies. This axial filtering was used for the on-axis setup shown in Fig. 1(a). However, as in FD-OCT this approach reduces the measurement depth by a factor of two, and autocorrelation noise, i. e. self-interference signals of the scattered object waves, remains.
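
A minimal sketch of this axial separation is shown below (illustrative NumPy code, not the authors' C++ implementation): it suppresses the negative-frequency half of the spectrum along the k axis, so that only signals with positive path-length differences remain, besides the DC and autocorrelation contributions mentioned above.

```python
import numpy as np

def axial_filter(I):
    """Axial filtering for the on-axis setup: Fourier transform the recorded
    intensities along the wavenumber axis, discard the negative-frequency
    half, which contains the twin image, and transform back.
    I: real array of shape (N_k, Ny, Nx); returns the complex filtered
    fringe signal I_f (DC and autocorrelation terms remain)."""
    spectrum = np.fft.fft(I, axis=0)
    spectrum[I.shape[0] // 2:] = 0.0        # keep DC and positive frequencies only
    return np.fft.ifft(spectrum, axis=0)
```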

Once the laterally or axially filtered intensity distribution If is obtained and the phase-corrected reference wave R0 is known, the phase-corrected object wave O0 in the camera plane is calculated by

$$O_0(x,y;k)=\frac{R_0(x,y;k)\,I_f(x,y;k)}{\gamma\,|R_0(x,y;k)|^2}.$$
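
For the off-axis case, the spatial filtering and the subsequent normalization by the phase-corrected reference can be sketched as follows. This is again an illustrative NumPy fragment: the pass-band position and size as well as the function name are assumptions that depend on the carrier frequency of the actual setup, and R0 must model the tilt and curvature of the reference wave actually used so that the carrier fringes cancel.

```python
import numpy as np

def extract_object_wave(I, R0, center, halfwidth, gamma=1.0):
    """Isolate the image term of an off-axis hologram I(y, x) by spatial
    filtering and compute the phase-corrected object wave in the camera
    plane via O0 = R0 * I_f / (gamma * |R0|^2).
    center, halfwidth: pixel position and half-size of the rectangular
    pass-band around the image term in the 2D spectrum."""
    spectrum = np.fft.fftshift(np.fft.fft2(I))
    mask = np.zeros(spectrum.shape)
    cy, cx = center
    hy, hx = halfwidth
    mask[cy - hy:cy + hy, cx - hx:cx + hx] = 1.0            # spatial filtering
    I_f = np.fft.ifft2(np.fft.ifftshift(mask * spectrum))   # complex filtered fringes
    return R0 * I_f / (gamma * np.abs(R0) ** 2)
```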

3.2.2. Reconstruction of one Rayleigh length

For the phase-corrected object waves, which are recorded at different wavenumbers, the effect of the phase-corrected propagator $\mathcal{P}^0$ in Eq. (9) can be reverted for a chosen plane located at a distance $z_P$ from the reference plane by applying the propagator $\mathcal{P}^0_{k,-z_0-z_P}$ to $O_0$, i. e.

$$\mathcal{P}^0_{k,-z_0-z_P}\bigl[O_0(x,y;k)\bigr]=A_O\,S(k)\int\mathrm{d}z\;\mathcal{P}^0_{k,-z_0-z_P}\,\mathcal{P}^0_{k,z_0+z}\bigl[\eta(x,y,z)\,e^{-2ikz}\bigr]=A_O\,S(k)\int\mathrm{d}z\;\mathcal{P}^0_{k,z-z_P}\bigl[\eta(x,y,z)\,e^{-2ikz}\bigr].$$
For this plane (z = zP) the propagators cancel each other and the propagated light field is the same as would be obtained by imaging the plane at zP onto the camera. For each point (x, y) the dependence of the remaining term on k is comparable to a complex FD-OCT signal, and an image of the scattering potential $\tilde S(z)*\eta_{z_P}(x,y,z)$ can be reconstructed for different depths z by a simple inverse Fourier transform along the k axis
$$\tilde S(z)*\eta_{z_P}(x,y,z)=2A_O\int\mathrm{d}k\;\exp(+2ikz)\,\mathcal{P}^0_{k,-z_0-z_P}\bigl[O_0(x,y;k)\bigr].$$
Here $\tilde S(z)$ is the Fourier transform of the spectrum and determines the axial point spread function of the reconstructed holoscopy image. The demonstrated propagator inversion is only exact for the one selected depth to which O0 is propagated, and only there $\eta_{z_P}(x,y,z)$ will be equal to the scattering potential η(x, y, z). In neighboring planes, $\eta_{z_P}(x,y,z)$ will be defocused by a distance z − zP and no sharp image of η(x, y, z) will be reconstructed. Thus the reconstruction only gives optimal images within a few Rayleigh lengths. The procedure can nevertheless be repeated for various zP and the images can be stitched together afterwards.
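
A compact sketch of this single-focal-volume reconstruction (propagation of every hologram to the plane zP followed by a Fourier transform along k) could look as follows; it reuses the angular-spectrum conventions of Sec. 3.1.2, assumes equidistantly sampled wavenumbers, and uses illustrative array names rather than the authors' actual implementation.

```python
import numpy as np

def reconstruct_single_plane(O0, ks, z0, zP, dx):
    """Simple reconstruction of one focal volume: apply the phase-corrected
    propagator P0_{k, -(z0+zP)} to every phase-corrected object wave O0[i]
    recorded at wavenumber ks[i], then Fourier transform along the k axis
    for depth discrimination.  O0: complex array (N, Ny, Nx)."""
    N, ny, nx = O0.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    refocused = np.empty_like(O0)
    for i, k in enumerate(ks):
        kz = np.sqrt(np.maximum(k**2 - KX**2 - KY**2, 0.0))
        # phase factor of the phase-corrected propagator for dz = -(z0 + zP)
        P0 = np.exp(1j * (kz - k) * (z0 + zP))
        refocused[i] = np.fft.ifft2(P0 * np.fft.fft2(O0[i]))
    # FD-OCT step: inverse transform along the wavenumber axis (normalization
    # and depth-axis orientation are a matter of convention)
    return np.fft.ifft(refocused, axis=0)
```

Only layers within roughly one confocal parameter around zP come out sharp, which is exactly the limitation the one-step reconstruction below removes.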

3.2.3. Reconstruction of the complete volume in free space

Writing Eq. (10) in the Fourier domain ({kx, ky}-space), the propagation operator is replaced, according to Eq. (5), by a multiplication with the phase function $P^0_{k,-z_0-z_P}(k_x,k_y)$:

$$\tilde S(z)*\tilde\eta_{z_P}(k_x,k_y;z)=2A_O\int\mathrm{d}k\;\exp(+i2kz)\,\exp\bigl(-ik(z_0+z_P)\bigr)\,\exp\bigl(+ik_z(z_0+z_P)\bigr)\,\tilde O_0(k_x,k_y;k),$$
where η̃zP and Õ0 are the two-dimensional Fourier transforms of ηzP and O0 with respect to the x and y axes, respectively. By setting the reconstruction distance z equal to the chosen propagation plane zP, the complete volume η̃ is reconstructed. Shifting the terms independent of the reconstruction depth z to the right, η̃ can be expressed by a simple integral transform
$$\tilde S(z)*\tilde\eta(k_x,k_y;z)=2A_O\int\mathrm{d}k\;\underbrace{\exp\bigl(+i(k_z+k)z\bigr)}_{\text{kernel}}\;\underbrace{\exp\bigl(+i(k_z-k)z_0\bigr)}_{\text{phase}}\;\tilde O_0(k_x,k_y;k).$$
The right hand side no longer depends on the initial propagation depth zP. The reconstruction of the complete volume is possible by a multiplication of the object wave field with the phase factor $\exp(i(k_z-k)z_0)$ and an integral transform along the k-axis with the modified Fourier kernel $\exp(+i(k_z+k)z)$. Though according to Eq. (6) kz is a nonlinear function of k, rescaling the recorded data to a scale linear in kz + k changes the integral transform into a simple Fourier transform. In the analytic Eq. (11) this is achieved by introducing a new variable
$$\kappa(k)=k_z(k)+k$$
and a suitable variable substitution in the integral. This yields
$$\tilde S(z)*\tilde\eta(k_x,k_y;z)=2A_O\int\mathrm{d}\kappa\;\frac{\mathrm{d}k}{\mathrm{d}\kappa}\;\underbrace{\exp(+i\kappa z)}_{\text{kernel}}\;\underbrace{\exp\Bigl(+i\bigl(k_z(k(\kappa))-k(\kappa)\bigr)z_0\Bigr)}_{\text{phase}}\;\tilde O_0\bigl(k_x,k_y;k(\kappa)\bigr),$$
where k(κ) is obtained by inverting Eq. (12).

The problem of non-equidistant data sampling in k is commonly found in FD-OCT, where the spectrometer disperses the spectral interference not linearly in k but in the wavelength λ. Interpolation and resampling or a fast Fourier transform on non-equispaced data (NFFT) are used to preserve image quality in FD-OCT. These algorithms differ in imaging quality and processing speed. Comparisons of suitable algorithms have been made for best performance of FD-OCT (see e. g. [35–37]). Accordingly, the same algorithms can be applied here, although care needs to be taken as the resampling also depends on the lateral components kx and ky of the wave vector, since $k_z=(k^2-k_x^2-k_y^2)^{1/2}$ is a function of k, kx and ky.

Resampling is necessary because the wave field Õ0(kx, ky; k) is not recorded on the Cartesian grid (kx, ky, kz) of the Fourier space, which is directly converted by the inverse Fourier transform back to the spatial coordinates. By using a non-equispaced Fourier transform the modified coordinate system, in which the data were recorded, is taken into account. The phase factor arises because the reconstruction distances – the second index of the propagator 𝒫 – are measured relative to the camera, whereas phases and path lengths, which provide the depth information, are measured relative to the reference plane. The phase factor corrects for these different reference points by propagating the object wave fields to the reference plane.
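
The complete free-space one-step reconstruction can thus be sketched in a few lines. This is illustrative NumPy code with plain linear interpolation standing in for the NFFT; the loop over the lateral frequencies is written out for clarity, whereas the authors' actual implementation is vectorized and parallelized C++, and all names are assumptions.

```python
import numpy as np

def reconstruct_full_volume(O0, ks, z0, dx):
    """One-step free-space reconstruction: Fourier transform laterally,
    apply the phase factor exp(+i (kz - k) z0), resample every (kx, ky) line
    from the recorded k grid onto an equidistant grid of kappa = kz + k,
    Fourier transform along kappa for depth, and transform back laterally.
    O0: complex array (N, Ny, Nx); ks: array of increasing wavenumbers."""
    ks = np.asarray(ks, dtype=float)
    N, ny, nx = O0.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dx)
    O_t = np.fft.fft2(O0, axes=(1, 2))                       # to (kx, ky) space
    kappa_grid = np.linspace(2.0 * ks[0], 2.0 * ks[-1], N)   # common equidistant kappa axis
    vol = np.zeros_like(O_t)
    for iy in range(ny):
        for ix in range(nx):
            kz = np.sqrt(np.maximum(ks**2 - kx[ix]**2 - ky[iy]**2, 0.0))
            line = O_t[:, iy, ix] * np.exp(1j * (kz - ks) * z0)   # phase factor
            kappa = kz + ks                                       # non-equidistant kernel argument
            vol[:, iy, ix] = (np.interp(kappa_grid, kappa, line.real, left=0.0, right=0.0)
                              + 1j * np.interp(kappa_grid, kappa, line.imag, left=0.0, right=0.0))
    vol = np.fft.ifft(vol, axis=0)         # Fourier transform along kappa -> depth
    return np.fft.ifft2(vol, axes=(1, 2))  # back to lateral position space
```

Only the resampling target changes when imaging into a medium; the corresponding one-line modification is shown after Eq. (15) in the next subsection.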

3.2.4. Reconstruction of the complete volume in a medium

The reconstruction by Eq. (11) requires the optical path length between scatterer and reference plane and the corresponding imaging distance to be identical, which is only the case in free space. For example, in a medium with constant refractive index n the effective wavenumber is given by n · k and the corresponding z-component is

$$k_z=\sqrt{n^2k^2-k_x^2-k_y^2},$$
if kx and ky remain unchanged. This is the case if kx and ky are parallel to the boundary surface of the medium. In the reconstruction by Eq. (11) the kernel thus needs to be modified by replacing k with n · k and kz with the modified z-component given above. However, if z0 is taken as the effective propagation distance that focuses the reference plane for n = 1, the phase factor does not need to be modified. The reconstruction is thus given by
$$\tilde S(nz)*\tilde\eta(k_x,k_y;z)=2A_O\int\mathrm{d}(nk)\;\underbrace{\exp\Bigl(+i\bigl(\sqrt{n^2k^2-k_x^2-k_y^2}+nk\bigr)z\Bigr)}_{\text{kernel}}\;\underbrace{\exp\bigl(+i(k_z-k)z_0\bigr)}_{\text{phase}}\;\tilde O_0(k_x,k_y;k),$$
where the argument of the point-spread function S̃ changed to compensate for the modification of the Fourier transform when replacing k by n · k. The integral transform by Eq. (13) with the modified kernel can also be calculated by the NFFT. For moderate NA a simplification is possible. In paraxial approximation the kernel can be rewritten, with kz again denoting the free-space value $\sqrt{k^2-k_x^2-k_y^2}$, as
$$\exp\Bigl(+i\bigl(\sqrt{n^2k^2-k_x^2-k_y^2}+nk\bigr)z\Bigr)\approx\exp\Bigl(+i\Bigl(2nk+\tfrac{1}{n}(k_z-k)\Bigr)z\Bigr).$$
By introducing the optical path length z′ = nz the kernel can thus be modified to
$$=\exp\Bigl(+i\bigl((2-\zeta)k+\zeta k_z\bigr)z'\Bigr),$$
with ζ = 1/n². ζ describes the proportionality constant between imaging distance and optical path length, as shown in Fig. 3. The complete reconstruction formula is given by
$$\tilde S(z')*\tilde\eta(k_x,k_y;z')=2A_O\int\mathrm{d}k\;\underbrace{e^{+i((2-\zeta)k+\zeta k_z)z'}}_{\text{kernel}}\;\underbrace{e^{+i(k_z-k)z_0}}_{\text{phase}}\;\tilde O_0(k_x,k_y;k).$$
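
In the illustrative reconstruction sketch of the previous subsection this amounts to a single changed line: assuming a parameter zeta = 1/n**2 is passed in, the free-space resampling target kappa = kz + ks is replaced by the kernel argument of Eq. (15), while the phase factor exp(+i (kz − k) z0) stays untouched.

```python
# paraxial in-medium kernel argument of Eq. (15); the depth axis then measures
# the optical path length z' = n z, and zeta = 1 recovers the free-space case
kappa = (2.0 - zeta) * ks + zeta * kz
```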

Fig. 3 Formation of a virtual image of a scatterer by a medium with an index of refraction n larger than one. The medium decreases the reconstruction distance by 1/n while the optical path length z′ is increased by n, i. e. z′ = nz.

For ζ = 0, Eq. (15) reduces to Eq. (10) with zP = 0, i. e. the chosen reconstruction distance is the reference plane. For increasing n the parameter ζ tends to zero, thus the higher the refractive index of the sample, the better the approximation by Eq. (10). In fact, for ζ < 0.5 the reconstruction by Eq. (10) yields better results than the reconstruction by Eq. (11) that assumes free space.

In case the paraxial approximation of Eq. (14) is not valid the more specific formula by Eq. (13) needs to be used. A fast implementation of this formula is possible as well.

An approximate solution of the reconstruction can be used for a fast determination of ζ. By first Fourier transforming the object waves O0(x, y; k) with respect to the k axis, i. e. using FD-OCT depth discrimination on the unprocessed holograms and only afterwards performing holographic refocusing with the center wavenumber for at least two different depths, the focus positions and the optical path lengths of these layers can be determined. A linear regression of these points gives ζ and the reference propagation length z0.
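
A sketch of this calibration is given below. The fitted model, focus distance ≈ z0 + ζ · (optical path length), is our reading of Fig. 3 under the assumption that focus distances are measured from the camera; the numbers in the usage comment are made up merely to illustrate the call and are not measured values.

```python
import numpy as np

def estimate_zeta_and_z0(optical_path_lengths, focus_distances):
    """Fit focus_distance = z0 + zeta * optical_path_length to at least two
    refocused layers (same length unit for both arrays) and return the slope
    zeta and the intercept z0, the propagation distance to the reference plane."""
    zeta, z0 = np.polyfit(np.asarray(optical_path_lengths, dtype=float),
                          np.asarray(focus_distances, dtype=float), deg=1)
    return zeta, z0

# hypothetical usage with two layers (made-up numbers, metres):
# zeta, z0 = estimate_zeta_and_z0([1.1e-3, 2.7e-3], [81.2e-3, 81.9e-3])
```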

4. Results and discussion

The proposed algorithms have been implemented using C++ with the compiler of the GNU Compiler Collection (GCC). In order to increase performance, vectorization using Streaming SIMD Extensions (SSE) intrinsics and parallelization using OpenMP have been applied.

We demonstrated the effectiveness of both proposed reconstruction algorithms using a scattering sample [38] which contains 300–800nm sized iron oxide nanoparticles embedded in polyurethane resin. The simple reconstruction according to Eq. (10), which first propagates the object field from the camera plane to one depth in the sample and then applies the Fourier transform, yields sharp images only in the focal range around the reconstruction depth zP (Fig. 4(a) and 4(b)). This algorithm can be used to reconstruct the acquired holoscopic data if only a single layer is of interest. The one-step reconstruction of the complete volume by the NFFT in Eq. (15) reconstructed all layers over a depth of more than 30 Rayleigh lengths sharply (Fig. 4(c)). However, this only works if the index of refraction in the sample volume is correctly incorporated. A one-step reconstruction of the complete volume assuming the free-space situation by Eq. (11) reconstructs only a limited depth region sharply (Fig. 4(d)). In fact, with the phantom the simple reconstruction shows better performance than the one-step reconstruction with n = 1.0, as ζ = 1/n² = 1/1.5² ≈ 0.44 < 0.5. Only the full reconstruction by Eq. (15) with the correct ζ reconstructs all depths sharply.

Fig. 4 B-scans from the reconstructed volume, which was recorded from a scattering phantom [38] consisting of multiple point scatterers. (a) and (b) result from single reconstructions according to Eq. (10) at two different propagation depths zP, which correspond to virtual numerical foci of the reconstruction. Outside the focal regions the lateral resolution is degraded. The confocal parameter was 220μm. (c) One-step reconstruction of the complete volume by Eq. (15) with the correct refractive index n = 1.5 (ζ = 0.44). No lateral resolution degradation is visible. The loss of intensity in depth is only caused by a sensitivity roll-off due to the limited instantaneous coherence length of the laser source. (d) One-step reconstruction of the complete volume by Eq. (11) without correcting for the increased index of refraction in the sample volume (i.e. n = 1.0 and thus ζ = 1). Focus degradation is worse than in the reconstruction for a single focal volume. This is due to the fact that the former corresponds to ζ = 1 and the latter to ζ = 0. The correct value of ζ = 0.44 is thus closer to the reconstruction of a single plane by Eq. (10).

To demonstrate the abilities of the reconstruction process for more complex biological structures, tomograms of a bug are shown in Fig. 5. From the whole recorded volume en-face images at three different depths are shown. Media 1 shows a lower-resolution fly-through reconstruction of the bug. While the resolution inside the bug changes due to its non-homogeneous refractive index, the outer shell is sharp within all layers.

Fig. 5 En-face tomographic images of a bug at three different depth layers. The image cube was acquired by holoscopy. For reconstruction the one-step algorithm described by Eq. (15) was used. Internal structures of the bug can clearly be seen. Media 1 shows a low-resolution fly-through of the bug.

The speed advantage of holoscopy becomes more important for higher NA since the Rayleigh length drops quadratically with the NA. Images of a grape were acquired with 0.14 NA using a microscope objective and a Mach-Zehnder interferometer (Fig. 1(b)). B-scans from reconstructed volumes clearly demonstrate that also for more complex structures a one-step reconstruction by Eq. (15) is possible while maintaining lateral resolution over the full depth (Fig. 6). Some additional artifacts were introduced by reflections from the microscope objective.

Fig. 6 Holoscopic images of a grape acquired at 0.14 NA using the Mach-Zehnder type high resolution setup. Simple reconstruction by propagating the field to one focal plane (left column) is compared with the one-step reconstruction of the complete volume by an NFFT (right column). (a) B-scan of the simple reconstruction according to Eq. (10). (b) B-scan of the one-step reconstruction according to Eq. (15). (c) En-face image of the focal plane of the simple reconstruction. (d) En-face image of the same plane in the one-step reconstruction. (e) En-face image of the simple reconstruction at an optical distance of about 160μm from the virtual focus shows deteriorated resolution. (f) En-face image of a one-step reconstruction of the same layer. No degradation of the lateral resolution is observed. The confocal parameter was 28μm. Remaining artifacts arise because of reflections from within the setup.

4.1. Numerical complexity and execution speed

The one-step reconstruction of the complete volume reduces the computational complexity significantly. The simple reconstruction by Eq. (10) requires for one focal volume the propagation of each acquired hologram to a certain depth. For a single propagation of an image array of size NX × NY two 2D Fourier transforms of size 2NX × 2NY are required, if zero-padding is applied to prevent circular convolution artifacts. One transform is required to go from position space to Fourier-space and one to go back. Consequently, for N holograms at different wavelengths a total of 2N two-dimensional Fourier transforms of size 2NX × 2NY are required. For each lateral position a one-dimensional Fourier transform of size N needs to be performed afterwards to gain depth information, i. e. a total of NX × NY Fourier transforms calculate the final data. In total, 2N Fourier transforms of 2NX × 2NY arrays plus NX × NY one-dimensional Fourier transforms of N data points are needed for each focal volume. The overall complexity CSL of a simple reconstruction with a single focus layer (SL) is of the order of

$$C_{\mathrm{SL}}\sim\mathcal{O}\bigl(8\,NN_XN_Y\log(4N_XN_Y)+N_XN_Y\,N\log N\bigr).$$

The one-step reconstruction of the complete volume by Eq. (15) also requires N two-dimensional Fourier transforms of size 2NX × 2NY to bring the images to Fourier space and 2NX × 2NY one-dimensional Fourier transforms of size N to gain depth information, which is for the complete volume 4× more than for the reconstruction of only one focal volume by propagation and FFT. Because of the hermitian symmetry, the back-transformation from 2D Fourier-space to position space only requires N/2 + 1 two-dimensional Fourier transforms of size 2NX × 2NY, which is about half the amount required for the propagation and FFT approach. The overall complexity CFV of the one-step reconstruction for the full volume (FV) can thus be written as

$$C_{\mathrm{FV}}\sim\mathcal{O}\bigl(6\,NN_XN_Y\log(4N_XN_Y)+4\,N_XN_Y\,N\log N\bigr).$$
On a quad-CPU Opteron 6150 a single reconstruction of a dataset of 1024 holograms with 1024×1024 pixels by Eq. (10) took about 22s whereas a reconstruction of the complete volume by Eq. (11) or Eq. (15) took about 40s, i. e. about twice the time required for the reconstruction of the focal volume.

Using these measurements, we can estimate the time required when stitching several images with different focus layers using the single reconstruction by Eq. (10), compared to using one one-step reconstruction by Eq. (11) or Eq. (15): the confocal parameter (i.e. twice the Rayleigh length) for the lens-less setup was 2zR = 220μm and the measurement depth was 3.7mm, which would need 17 reconstructions of different focal volumes for an overall diffraction limited resolution in air. In this case the one-step reconstruction of the complete volume offered an 8.5× speed-up. For a medium with refractive index n = 1.5 – as the scattering sample in Fig. 4 – this speed-up is reduced by a factor ζ = 1/n² ≈ 0.44: the focal range is increased by a factor n and the effective total measurement depth, which is determined by the optical path length, is reduced by a factor 1/n. The full reconstruction is still about 3 to 4 times faster in this case.

For the high NA setup in Fig. 6 the confocal parameter was reduced to about 2zR = 30μm and the measurement depth was about 2.2mm. Therefore in air with Eq. (10) about 70 reconstructions are required with an approximately 30× total speed-up when using Eq. (11). With a refractive index of about n = 1.4 this speed-up is again reduced by a factor ζ ≈ 1/2 and thus the grape shown in Fig. 6 can be reconstructed about 15× faster using the complete reconstruction by Eq. (15) compared to the single layer reconstructions by Eq. (10).
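
The speed-up estimate used in the two preceding paragraphs can be summarized in a few lines. The timing defaults are the values measured on the Opteron 6150; everything else is the simple counting argument from above, so the result is only a rough figure of merit.

```python
def expected_speedup(depth, confocal_parameter, n=1.0, t_single=22.0, t_full=40.0):
    """Estimate how much faster the one-step reconstruction is than stitching
    single-focus reconstructions: in a medium of index n the focal range grows
    by n and the measurement depth (optical path length) shrinks by 1/n, so
    about depth / (confocal_parameter * n**2) focal volumes would be needed,
    each costing t_single; the one-step reconstruction costs t_full once."""
    number_of_focal_volumes = depth / (confocal_parameter * n**2)
    return number_of_focal_volumes * t_single / t_full

# lens-less setup (d = 3.7 mm, 2 z_R = 220 um): ~9x in air and ~4x for n = 1.5,
# of the same order as the 8.5x and "3 to 4 times" quoted above
# print(expected_speedup(3.7e-3, 220e-6), expected_speedup(3.7e-3, 220e-6, n=1.5))
```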

For higher lateral resolution, this factor will increase further. The actual speed-up for the reconstruction depends on the ratio of the confocal parameter 2zR to the measurement depth d of the system and the refractive index n of the sample. The expected improvement in reconstruction speed is shown in Fig. 7. At high NAs the one-step reconstruction will be up to three orders of magnitude faster.

Fig. 7 Approximate increase of the reconstruction speed by using the one-step algorithm of Eq. (15) instead of sequentially applying Eq. (10) for reconstructing multiple focal volumes. The increase of speed depends on measurement depth d and the refractive index n of the sample.

5. Conclusion and outlook

Holoscopy is a promising imaging technique which combines digital holography with full-field swept-source OCT. With inverse scattering for frequency-scanned full-field OCT [26] it shares unique advantages compared to conventional OCT. The depth of field is not limited by the focusing optics, i. e. sharp images are obtained over several Rayleigh lengths with only a single data acquisition. All backscattered photons from the sample volume are used efficiently, and sensitivity does not degrade due to confocal gating over the complete measurement depth. The remaining degradation in signal-to-noise ratio in the scattering sample is only caused by the light attenuation of the sample and the limited instantaneous coherence length of the laser. Image reconstruction is done numerically at a significantly increased numerical complexity compared to confocal OCT.

Compared to scanning ISAM [8–10], sensitivity does not degrade outside the confocal range. In contrast to inverse scattering for full-field SS-OCT [26, 27], holoscopy does not image the object volume onto the camera, but samples the interference patterns of the scattered field with a reference field. Setups similar to those used in digital holographic microscopy (DHM) are employed. This distributes the scattered irradiation over multiple pixels and should give an advantage in SNR and dynamic range for localized tissue structures. The off-axis reference beam in the presented version of holoscopy, which is also commonly used in DHM, allows full-range measurements, i. e. it doubles the measurement depth by avoiding the ambiguity of positive and negative path length differences, and it avoids auto-correlation signals and coherence noise. Consequently, holoscopy image reconstruction combines concepts of DH with FD-OCT to yield an image of the sample volume, which does not resemble the real scattering potential due to the missing low spatial frequencies [25]. Inverse scattering of full-field frequency-scanned OCT [26, 27] estimates the complete scattering potential by trying to solve the inverse problem. Both algorithms rely on the same forward model for the scattered radiation and use the same resampling in frequency space. While the holoscopy reconstruction is completely deterministic and more intuitive, since it models the physics of light propagation, inverse scattering potentially yields more quantitative information on the refractive index distribution of the object. Physical constraints like real imaging optics or boundaries between areas of different index of refraction are easily incorporated in the holoscopy reconstruction algorithm presented here. So far, no images of real objects have been published using inverse scattering on full-field SS-OCT data, and therefore a comparison of image quality with holoscopy remains to be done.

In the same way as full-field inverse scattering [26, 27] and full-field SS-OCT [13, 14], holoscopy is vulnerable to multiply scattered photons and cross-talk. In contrast to full-field OCT, a spatially incoherent light source cannot easily be applied to suppress this cross-talk, as the spatial coherence is required to sample the light fields correctly.

In conclusion, we demonstrated that the numerical complexity of holoscopy is considerably reduced by a one-step reconstruction algorithm using an NFFT on the non-equispaced data of the angular spectra which are recorded at multiple wavenumbers.

The reconstruction process uses a resampling in frequency space which was previously demonstrated for ISAM, Synthetic Aperture Radar and other reconstruction algorithms for volumetric imaging, which are based on propagation time measurements [27, 39].

Essential for the reconstruction is a separation of holographic and OCT imaging distances. The one-step algorithm can be applied as long as the argument of the kernel depends linearly on the depth, as is the case in a medium of constant refractive index. We demonstrated up to 15× reduced processing time and good imaging performance on a phantom and two biological objects at NAs up to 0.14. Using the one-step reconstruction algorithm makes the numerical reconstruction process in holoscopy significantly faster, especially for high NA imaging. The one-step reconstruction algorithm is a significant step towards making real-time imaging with holoscopy possible.

Acknowledgments

This work was funded by the “Zukunftsprogramm Wirtschaft (ZPW)” of Schleswig-Holstein, Germany (Competence Center for Technology and Engineering in Medicine (TANDEM), 122-09-024) and the European Union (EFRE 636 and FP7 HEALTH, Contract No. 201880 – FUN OCT).

References and links

1. D. Hillmann, C. Lührs, T. Bonin, P. Koch, and G. Hüttmann, “Holoscopy—holographic optical coherence tomography,” Opt. Lett. 36, 2390–2392 (2011). [CrossRef]   [PubMed]  

2. D. Hillmann, C. Lührs, T. Bonin, P. Koch, A. Vogel, and G. Hüttmann, “Holoscopy: holographic optical coherence tomography,” Proc. SPIE 8091, 80911H (2011). [CrossRef]

3. J. Holmes, “Theory and applications of multi-beam OCT,” Proc. SPIE 7139, 713908–713907 (2008). [CrossRef]  

4. C. Blatter, B. Grajciar, C. M. Eigenwillig, W. Wieser, B. R. Biedermann, R. Huber, and R. A. Leitgeb, “Extended focus high-speed swept source OCT with self-reconstructive illumination,” Opt. Express 19, 12141–12155 (2011). [CrossRef]   [PubMed]  

5. K.-S. Lee and J. P. Rolland, “Bessel beam spectral-domain high-resolution optical coherence tomography with micro-optic axicon providing extended focusing range,” Opt. Lett. 33, 1696–1698 (2008). [CrossRef]   [PubMed]  

6. R. A. Leitgeb, M. Villiger, A. H. Bachmann, L. Steinmann, and T. Lasser, “Extended focus depth for Fourier domain optical coherence microscopy,” Opt. Lett. 31, 2450–2452 (2006). [CrossRef]   [PubMed]  

7. L. Liu, C. Liu, W. C. Howe, C. J. R. Sheppard, and N. Chen, “Binary-phase spatial filter for real-time swept-source optical coherence microscopy,” Opt. Lett. 32, 2375–2377 (2007). [CrossRef]   [PubMed]  

8. T. S. Ralston, D. L. Marks, P. S. Carney, and S. A. Boppart, “Interferometric synthetic aperture microscopy:inverse scattering for optical coherence tomography,” Opt. Photon. News 17, 25–25 (2006).

9. T. S. Ralston, D. L. Marks, P. S. Carney, and S. A. Boppart, “Interferometric synthetic aperture microscopy,” Nat. Phys. 3, 129–134 (2007). [CrossRef]  

10. T. S. Ralston, D. L. Marks, P. S. Carney, and S. A. Boppart, “Real-time interferometric synthetic aperture microscopy,” Opt. Express 16, 2555–2569 (2008). [CrossRef]   [PubMed]  

11. L. Yu, B. Rao, J. Zhang, J. Su, Q. Wang, S. Guo, and Z. Chen, “Improved lateral resolution in optical coherence tomography by digital focusing using two-dimensional numerical diffraction method,” Opt. Express 15, 7634–7641 (2007). [CrossRef]   [PubMed]  

12. A. A. Moiseev, G. V. Gelikonov, P. A. Shilyagin, D. A. Terpelov, and V. M. Gelikonov, “Digital refocusing in optical coherence tomography,” Proc. SPIE 8213, 82132C (2012). [CrossRef]  

13. B. Považay, A. Unterhuber, B. Hermann, H. Sattmann, H. Arthaber, and W. Drexler, “Full-field time-encoded frequency-domain optical coherence tomography,” Opt. Express 14, 7661–7669 (2006). [CrossRef]  

14. T. Bonin, G. Franke, M. Hagen-Eggert, P. Koch, and G. Hüttmann, “In vivo Fourier-domain full-field OCT of the human retina with 1.5 million A-lines/s,” Opt. Lett. 35, 3432–3434 (2010). [CrossRef]   [PubMed]  

15. J. Pomarico, U. Schnars, H. J. Hartmann, and W. Juptner, “Digital recording and numerical reconstruction of holograms: a new method for displaying light in flight,” Appl. Opt. 34, 8095–8099 (1995). [CrossRef]   [PubMed]  

16. G. Pedrini and H. J. Tiziani, “Short-coherence digital microscopy by use of a lensless holographic imaging system,” Appl. Opt. 41, 4489–4496 (2002). [CrossRef]   [PubMed]  

17. J. C. Marron and T. J. Schulz, “Three-dimensional, fine-resolution imaging using laser frequency diversity,” Opt. Lett. 17, 285–287 (1992). [CrossRef]   [PubMed]  

18. M. C. Potcoava and M. K. Kim, “Optical tomography for biomedical applications by digital interference holography,” Meas. Sci. Technol. 19, 074010 (2008). [CrossRef]  

19. A. V. Zvyagin, “Fourier-domain optical coherence tomography: optimization of signal-to-noise ratio in full space,” Opt. Commun. 242, 97–108 (2004). [CrossRef]  

20. A. V. Zvyagin, P. Blazkiewicz, and J. Vintrou, “Image reconstruction in full-field Fourier-domain optical coherence tomography,” J. Opt. A, Pure Appl. Opt. 7, 350 (2005). [CrossRef]  

21. D. V. Shabanov, G. V. Gelikonov, and V. M. Gelikonov, “Broadband digital holographic technique of optical coherence tomography for 3-dimensional biotissue visualization,” Laser Phys. Lett. 6, 753–758 (2009). [CrossRef]

22. M. K. Kim, “Wavelength-scanning digital interference holography for optical section imaging,” Opt. Lett. 24, 1693–1695 (1999). [CrossRef]  

23. F. Montfort, T. Colomb, F. Charrière, J. Kühn, P. Marquet, E. Cuche, S. Herminjard, and C. Depeursinge, “Submicrometer optical tomography by multiple-wavelength digital holographic microscopy,” Appl. Opt. 45, 8209–8217 (2006). [CrossRef]   [PubMed]  

24. E. Wolf, “Three-dimensional structure determination of semi-transparent objects from holographic data,” Opt. Commun. 1, 153–156 (1969). [CrossRef]  

25. A. F. Fercher, “Optical coherence tomography—development, principles, applications,” Z. Med. Phys. 20, 251–276 (2010). [PubMed]

26. D. L. Marks, T. S. Ralston, S. A. Boppart, and P. S. Carney, “Inverse scattering for frequency-scanned full-field optical coherence tomography,” J. Opt. Soc. Am. A 24, 1034–1041 (2007). [CrossRef]  

27. B. J. Davis, D. L. Marks, T. S. Ralston, P. S. Carney, and S. A. Boppart, “Interferometric synthetic aperture microscopy: Computed imaging for scanned coherent microscopy,” Sensors 8, 3903–3931 (2008). [CrossRef]   [PubMed]  

28. U. Schnars and W. Jueptner, Digital Holography: Digital Hologram Recording, Numerical Reconstruction, and Related Techniques (Springer, 2005).

29. M. Born, E. Wolf, A. Bhatia, P. Clemmow, D. Gabor, A. Stokes, A. Taylor, P. Wayman, and W. Wilcock, Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light (Cambridge University Press, 2000). [PubMed]  

30. J. Goodman, Introduction to Fourier Optics, McGraw-Hill Physical and Quantum Electronics Series (Roberts & Co., 2005).

31. A. Fercher, C. Hitzenberger, G. Kamp, and S. El-Zaiat, “Measurement of intraocular distances by backscattering spectral interferometry,” Opt. Commun. 117, 43–48 (1995). [CrossRef]  

32. M. K. Kim, “Principles and techniques of digital holographic microscopy,” SPIE Rev. 1, 018005 (2010). [CrossRef]  

33. U. Schnars and W. P. O. Jüptner, “Digital recording and numerical reconstruction of holograms,” Meas. Sci. Technol. 13, R85 (2002). [CrossRef]

34. Y. Yasuno, S. Makita, T. Endo, G. Aoki, M. Itoh, and T. Yatagai, “Simultaneous B-M-mode scanning method for real-time full-range Fourier domain optical coherence tomography,” Appl. Opt. 45, 1861–1865 (2006). [CrossRef]   [PubMed]  

35. D. Hillmann, G. Hüttmann, and P. Koch, “Using nonequispaced fast Fourier transformation to process optical coherence tomography signals,” Proc. SPIE 7372, 73720R (2009). [CrossRef]  

36. S. Vergnole, D. Lévesque, and G. Lamouche, “Experimental validation of an optimized signal processing method to handle non-linearity in swept-source optical coherence tomography,” Opt. Express 18, 10446–10461 (2010). [CrossRef]   [PubMed]  

37. K. K. Chan and S. Tang, “Selection of convolution kernel in non-uniform fast Fourier transform for Fourier domain optical coherence tomography,” Opt. Express 19, 26891–26904 (2011). [CrossRef]  

38. P. D. Woolliams, R. A. Ferguson, C. Hart, A. Grimwood, and P. H. Tomlins, “Spatially deconvolved optical coherence tomography,” Appl. Opt. 49, 2014–2021 (2010). [CrossRef] [PubMed]

39. K. Langenberg, M. Berger, T. Kreutter, K. Mayer, and V. Schmitz, “Synthetic aperture focusing technique signal processing,” NDT International 19, 177–189 (1986). [CrossRef]  

Supplementary Material (1)

Media 1: AVI (4061 KB)     
