
Three-dimensional imaging in reflection phase microscopy with minimal axial scanning

Open Access

Abstract

Reflection phase microscopy is a valuable tool for acquiring three-dimensional (3D) images of objects owing to its capability of optical sectioning. The conventional method of constructing a 3D map is to capture 2D images at each depth with mechanical scanning finer than the optical sectioning. This not only compromises sample stability but also slows down the acquisition process, imposing limitations on practical applications. In this study, we utilized a reflection phase microscope to acquire 2D images at depth locations spaced far apart, well beyond the range of optical sectioning. By employing numerical propagation, we filled the information gap between the acquisition layers and then constructed complete 3D maps of objects with a substantially reduced number of axial scans. Our experimental results also demonstrate the effectiveness of this approach in enhancing imaging speed while maintaining the accuracy of the reconstructed 3D structures. This technique has the potential to improve the applicability of reflection phase microscopy in diverse fields such as bioimaging and materials science.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

The imaging of three-dimensional (3D) structures plays a crucial role in accurately investigating object features. In the optical regime, advancements in 3D imaging have enhanced the assessment capabilities of samples, particularly in biomedical science [1–3], including cells [4–6], tissues [7–10], and organs [11–13]. By visualizing objects from different perspectives, 3D imaging has become an indispensable tool in such areas.

With the advancement of microscopy technology, reflection phase microscopy has emerged as a valuable method for obtaining 3D images of objects, offering the advantage of optical sectioning [14–21]. Typically, this technique involves capturing multiple 2D images at different depths and subsequently constructing a comprehensive 3D map. However, preserving intricate object information requires axial scanning with a step size finer than the optical sectioning, leading to frequent mechanical movements of the sample. These movements can introduce unwanted motion artifacts, especially in soft environments such as living cells in a culture medium, potentially compromising the accuracy of the results. To mitigate this instability, a resting period is introduced after each translation of the sample. This resting period, typically much longer than the time needed for the translation itself, significantly slows down the entire 3D acquisition process, posing challenges in imaging speed.

A recent study introduced wide-field reflection phase microscopy by successive accumulation of interferograms (SAI), a technique offering superior optical sectioning and 3D imaging capabilities [22,23]. Unlike conventional reflection phase microscopy, which relies on the coherence properties of light sources for optical sectioning, SAI adopts a unique optical configuration in which the angle of the light beams is simultaneously and identically scanned on both the sample and reference arms during a single camera acquisition. As a result of this configuration, the contrast of the accumulated interference becomes highly sensitive to the path-length difference between the two arms, effectively generating optical sectioning applicable to 3D imaging. Despite this advance in 3D microscopy capabilities, the SAI configuration still necessitates frequent mechanical shifts of the stage to acquire the complete 3D information of the target sample. Consequently, the large number of mechanical movements compromises the stability of the 3D measurements.

To address this point, we present in this study a new imaging protocol that reduces the number of mechanical scans along the axial direction required for 3D imaging in reflection phase microscopy using the SAI configuration. First, we acquire multiple images at different illumination angles at a certain depth. A series of depth images that fills the information gap is then constructed using the angular-propagation method. This approach allows an axial scanning step far larger than the optical sectioning, effectively reducing the number of mechanical movements required for 3D imaging. The utility of this approach is assessed by comparing 3D images of a phantom acquired through the proposed method with those obtained by the usual mechanical scanning in the SAI configuration. We also demonstrate the applicability of this method to 3D imaging of a biological sample. The results show that our strategy considerably decreases the number of mechanical scans, and thus enhances measurement precision and speed in 3D imaging.

2. Materials and methods

2.1 Experimental setup and image acquisition process

The schematic of the experimental setup is depicted in Fig. 1(a). The configuration is based on the prior implementation of SAI [22,23]. The light source is a superluminescent diode (S; SLD-CS-381-HP3-SM-795-I, Superlum) with a central wavelength of λ0 = 795 nm and a spectral width of about 15 nm. The collimated beam from the light source first passes through two-axis galvanometer mirrors (GMx and GMy; GVS002, Thorlabs) for steering the incident angle. Subsequently, this beam enters a Linnik-type interferometer composed of two objective lenses (OLS and OLR; UPLFLN20X, 0.5 NA, Olympus) and two tube lenses (L3 and L4; AC254-200, Thorlabs). Inside the interferometer, it is divided into a sample path and a reference path by a polarizing beam splitter (PBS). In front of the PBS, a half-wave plate (HWP) is positioned to adjust the ratio of light intensities in the two arms.


Fig. 1. Illustration of the experimental setup. (a) Experimental schematic of the system. S: light source, M: mirror, L1-6: lenses, PBS: polarizing beam splitter, HWP: half-wave plate, QWP: quarter-wave plate, OLR and OLS: objective lenses for the reference and sample arms, DG: diffraction grating, LP: linear polarizer, TB: two-hole block, TR: translation stage. (b) Schematic of image acquisition. Multiple reflection images are acquired at various illumination angles θ. (c) Scanning trajectory of the beam in k-space. Images are taken at each spot.


Within the sample path, the light transmitted through the PBS illuminates the sample plane as a plane wave through the 4-f telescope consisting of L3 and OLS. At the sample plane, a target object or a test mirror (M) is placed on a piezo-actuator-driven mechanical stage (N-565.360, PI). This sample stage is used for 3D imaging by scanning the sample along the axial direction. While the GMs are scanned, the pivot point of the beam is kept stationary at the sample plane, so that a plane wave illuminates the sample at various angles, as illustrated with red arrows in Fig. 1(b). After passing through the quarter-wave plate (QWP) twice, the polarization of the light reflected from the sample (or the test mirror M) is rotated by 90°, and the light is eventually reflected off the PBS, exiting through the output port. Conversely, in the reference path, the returning light from the mirror (M), which is located at the focal plane of OLR, passes through the PBS owing to the polarization change from the double pass through the QWP. At the output port of the PBS, the sample and reference beams are recombined. Note that the beams in both arms are synchronously scanned to illuminate the sample and the reference mirror at the same incidence angle. Thus, the two beams co-propagate along the same path, but with mutually perpendicular polarizations, when they exit the Linnik interferometer.

The beams are further relayed by L4 and reach a diffraction grating (DG; 46-068, 72 grooves/mm, Edmund Optics) placed at the image plane conjugate to the sample plane. The beams are then separated by the DG and delivered to the image plane by a 4-f system of L5 and L6. At the Fourier plane in between, a two-hole block (TB) is placed, as shown in Fig. 1(a), so that only the 0th and +1st orders pass through while all others are blocked. The two beams are then filtered by two linear polarizers, one with a polarization axis of 90° (LP90°) and the other with 0° (LP0°): the 0th-order path transmits only the sample beam, while the 1st-order path transmits only the reference beam. The polarization of both beams is realigned to the same state by QWPs located right behind the LPs. Subsequently, the two beams meet at the image plane at an angle defined by the DG and the 4-f lenses. At the image plane, a camera (pco.edge 5.5, PCO) with 2560 × 2160 pixels, each 6.5 × 6.5 µm2 in size, is located. The overall magnification of the microscope is approximately 83.3, resulting in a corresponding field of view (FOV) of 200 × 168 µm2. The camera is synchronized with the scanning of the GMs, capturing N images as the GMs sequentially sample N distinct points along a spiral curve in the back aperture of the objective lens, as illustrated in Fig. 1(c). This process ensures that the sampling points are evenly distributed across the entire NA of the objective lens, achieving the maximum optical sectioning. Consequently, a set of images at N different illumination angles is acquired. All experiments were conducted at a camera speed of 20 frames per second.
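As a rough illustration of this sampling scheme, the sketch below (not the authors' actual trajectory; the function name, number of windings, and square-root radial rule are our assumptions) places N points along a spiral whose areal density is approximately uniform over the full NA:

```python
import numpy as np

def spiral_pupil_samples(n_points, na, k0, turns=10):
    """Place n_points along a spiral filling the pupil of radius k0*NA.

    The square-root radial profile gives roughly uniform areal density,
    so the illumination angles cover the whole NA evenly.
    """
    i = np.arange(n_points) + 0.5
    r = k0 * na * np.sqrt(i / n_points)           # sqrt spacing -> uniform density
    theta = 2.0 * np.pi * turns * i / n_points    # 'turns' windings of the spiral
    return np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)

# e.g. N = 200 illumination wavevectors for lambda0 = 795 nm, NA = 0.5
kappa = spiral_pupil_samples(200, 0.5, 2.0 * np.pi / 795e-9)
```

Any trajectory with uniform areal density over the pupil would serve the same purpose; the spiral simply keeps consecutive mirror positions close together.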

2.2 Axial scanning strategy

In typical reflection phase microscopy systems, including the SAI configuration, multiple depth images are acquired while mechanically moving the sample along the axial direction to construct a 3D map. For completeness of the information, the step size of the mechanical scanning (Δzms) must be finer than the depth sectioning (δz). Figure 2(a) represents the case where the red layers indicate the imaging planes determined by the fine mechanical scanning spaced apart by Δzms. In the SAI measurements in this study, we used Δzms = 0.2 µm. As our SAI system has δz ≈ 0.8 µm, this results in Δzms/δz ≈ 1/4, approximately 4 times finer than the optical sectioning. This ratio was empirically determined to ensure the measurement quality.


Fig. 2. Illustration of the sparse scanning strategy. (a) Conventional mechanical scanning of a sample for 3D imaging. The sample is mechanically translated, and 2D depth images are acquired at every sample movement with a step size of Δzms. The image acquisition planes are indicated with red layers. Typically, Δzms = 0.2 µm is used in the experiments. (b) The sparse scanning strategy. The sample is translated with a step size Δzss much larger than Δzms, and multiple angular images are acquired at the shifted planes indicated with blue layers. The depth images between the acquisition planes are generated from the acquired angular images by numerical propagation with a step size of Δzns. The numerical propagation covers the range from d = 0 to (M−1)·Δzns. Typically, we use Δzss = 10 µm, Δzns = 0.2 µm, and M = 50.


To reduce the number of sample translations in 3D imaging while maintaining performance similar to that of the SAI, we employ a sparse axial scanning strategy, as illustrated in Fig. 2(b). At a specific depth, marked by blue layers, we first acquire a set of images at N = 200 different illumination angles. Next, we move the sample by Δzss, the step size of the sparse axial scanning, and repeat the angular measurements. Since we use Δzss = 10 µm, Δzss/δz ≈ 12.5, far beyond the depth sectioning. This new scanning strategy requires 50 times fewer mechanical movements than the usual mechanical scanning method.

For each set of images, the measured interferograms are transformed into complex field images incorporating both the amplitude and phase information [24,25]. Subsequently, we add all the complex images to produce a depth image for each measurement layer. Importantly, through this summation, we achieve depth sectioning equivalent to that of the SAI measurement at the plane where the imaging focus is located (see Sec. 2.2.2 for details). Due to the sparsity of the axial scanning, the depth information between the imaging layers is incomplete. To fill in the missing information, we employ numerical propagation based on the angular spectrum method [26,27]. Starting from the layer where the initial acquisition is conducted, we numerically propagate along the ±z-direction with a numerical step size of Δzns. The propagated images are added to produce a single depth image with a new focus at the shifted position. We subsequently apply both the numerical propagation and the image summation for distances of integer multiples of Δzns, generating multiple depth images within the propagation range. We repeat these procedures for all the sets, creating a 3D map of the sample with full depth information. It is noteworthy that optical sectioning is generated through the image summation; consequently, the numerically generated images feature depth sectioning comparable to that of the SAI measurement.

2.2.1 Numerical field propagation

In our sparse scanning strategy, we utilize the angular spectrum method for numerical propagation of the imaging focus. An interferometric image obtained at the j-th illumination angle is denoted by $E_j(\mathbf{x})$, where $\mathbf{x} = (x, y)$ is the position vector. The 2D Fourier transform of this image is represented as:

$$\tilde{E}_j(\mathbf{k}_\perp) = P_j(\mathbf{k}_\perp)\,\tilde{O}(\mathbf{k}_\perp; z),$$
where $\mathbf{k}_\perp = (k_x, k_y)$ is the coordinate vector in the Fourier plane (k-space) perpendicular to the direction of wave propagation, and $\tilde{O}(\mathbf{k}_\perp; z)$ is the 2D angular spectrum of the object in k-space. Furthermore, $P_j(\mathbf{k}_\perp) = P(\mathbf{k}_\perp + \boldsymbol{\kappa}_{\perp j})$ represents the shifted pupil function of the objective lens at the j-th illumination with a wavevector $\boldsymbol{\kappa}_{\perp j}$ in the SAI detection configuration. The pupil function for normal illumination, $P(\mathbf{k}_\perp)$, is defined as:
$$P(\mathbf{k}_\perp) = P(\mathbf{k}_\perp; \boldsymbol{\kappa}_{\perp j} = 0) = \begin{cases} 1, & |\mathbf{k}_\perp|^2 \le (k_0 \mathrm{NA})^2 \\ 0, & \text{otherwise}, \end{cases}$$
where $k_0$ is the wavenumber at the central wavelength λ0.
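On a discrete grid, the binary pupil of Eq. (1) and its shifted counterpart can be sketched as follows (the function name and grid parameters are illustrative assumptions, not part of the original implementation):

```python
import numpy as np

def pupil(kx, ky, k0, na, kappa=(0.0, 0.0)):
    """Shifted binary pupil P(k_perp + kappa): 1 inside the NA circle, else 0.

    With kappa = (0, 0) this is the normal-illumination pupil of Eq. (1);
    a nonzero kappa gives the shifted pupil P_j of the j-th illumination.
    """
    return (((kx + kappa[0])**2 + (ky + kappa[1])**2)
            <= (k0 * na)**2).astype(float)

# demonstration grid (extent and sampling are arbitrary choices)
k0, na = 2.0 * np.pi / 795e-9, 0.5
f = np.linspace(-1.5, 1.5, 201) * k0 * na
KX, KY = np.meshgrid(f, f)
P0 = pupil(KX, KY, k0, na)      # centered low-pass disk of radius k0*NA
```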

In Eq. (1), $P_j(\mathbf{k}_\perp) = P(\mathbf{k}_\perp + \boldsymbol{\kappa}_{\perp j})$ acts as a low-pass filter with a cutoff frequency determined by $k_0 \mathrm{NA}$, with its center shifted to $-\boldsymbol{\kappa}_{\perp j}$. Notably, in the SAI configuration, the object spectrum remains stationary in the Fourier plane regardless of the illumination angle; consequently, $\tilde{O}(\mathbf{k}_\perp; z)$ is independent of $\boldsymbol{\kappa}_{\perp j}$, while the pupil function is shifted as described in Eq. (1).

In our implementation, the illumination angles may deviate slightly from the set values due to experimental imperfections. Therefore, a calibration of $\boldsymbol{\kappa}_{\perp j}$ was conducted to account for these deviations prior to performing the main measurements. To accurately trace the position of the illumination wavevector, we employed a phantom constructed from agarose gel (1.5% w/v) containing embedded scatterers (magnetic PS beads, 1 µm, Spherotech). The scatterers were distributed randomly within the gel, so the angular spectrum of the phantom image formed a high-contrast circular disk irrespective of the illumination angle. By detecting the center of this circular disk, we precisely determined the illumination wavevectors $\boldsymbol{\kappa}_{\perp j}$ for all j.
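A minimal sketch of this calibration step, under the assumption that the illumination tilt simply shifts a bright, band-limited disk in the angular spectrum, is the intensity-weighted centroid below (the function name and the disk threshold are our choices; frequencies are expressed in cycles/pixel rather than physical units):

```python
import numpy as np

def estimate_kappa(field):
    """Estimate the illumination tilt of a complex image as the
    intensity-weighted centroid of the bright disk in its angular spectrum.

    Returns (fx, fy) in cycles/pixel; scale by 2*pi/pixel_size for rad/m.
    """
    n = field.shape[0]
    spec = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
    fgrid = np.fft.fftshift(np.fft.fftfreq(n))
    FX, FY = np.meshgrid(fgrid, fgrid)
    w = spec * (spec > 0.25 * spec.max())   # keep only the bright disk
    return float((FX * w).sum() / w.sum()), float((FY * w).sum() / w.sum())
```

In practice a fixed threshold may need tuning to the contrast of the measured spectrum; any robust disk-center detector would serve the same role.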

In the numerical propagation process, we first adjust the acquired angular spectrum to account for the oblique illumination. This adjustment involves shifting the spectrum described in Eq. (1) so that its pupil is centered at the origin as:

$$\begin{aligned} \tilde{E}_j(\mathbf{k}_\perp - \boldsymbol{\kappa}_{\perp j}) &= P_j(\mathbf{k}_\perp - \boldsymbol{\kappa}_{\perp j})\,\tilde{O}(\mathbf{k}_\perp - \boldsymbol{\kappa}_{\perp j}; z)\\ &= P(\mathbf{k}_\perp)\,\tilde{O}(\mathbf{k}_\perp - \boldsymbol{\kappa}_{\perp j}; z). \end{aligned}$$

Next, we apply a free-space transfer function $H(\mathbf{k}_\perp; d) = \exp(-i 2 k_z d) = \exp\!\left(-i 2 d \sqrt{k^2 - |\mathbf{k}_\perp|^2}\right)$ to $\tilde{E}_j(\mathbf{k}_\perp - \boldsymbol{\kappa}_{\perp j})$, where $k = n k_0$, with n the refractive index of the propagation medium, and d represents the propagation distance. The propagated field $\tilde{\mathcal{E}}_j$ at the j-th illumination is expressed as:

$$\begin{aligned} \tilde{\mathcal{E}}_j(\mathbf{k}_\perp - \boldsymbol{\kappa}_{\perp j}; d) &= H(\mathbf{k}_\perp; d)\,\tilde{E}_j(\mathbf{k}_\perp - \boldsymbol{\kappa}_{\perp j})\\ &= H(\mathbf{k}_\perp; d)\,P(\mathbf{k}_\perp)\,\tilde{O}(\mathbf{k}_\perp - \boldsymbol{\kappa}_{\perp j}; z). \end{aligned}$$

Note that the propagated field in Eq. (4) is shifted due to the relocation of the angular spectrum in Eq. (3). To restore the spectrum to its original position, we apply an additional shift of $-\boldsymbol{\kappa}_{\perp j}$ as follows:

$$\begin{aligned} \tilde{\mathcal{E}}_j(\mathbf{k}_\perp; d) &= H(\mathbf{k}_\perp + \boldsymbol{\kappa}_{\perp j}; d)\,P(\mathbf{k}_\perp + \boldsymbol{\kappa}_{\perp j})\,\tilde{O}(\mathbf{k}_\perp; z)\\ &= H_j(\mathbf{k}_\perp; d)\,P_j(\mathbf{k}_\perp)\,\tilde{O}(\mathbf{k}_\perp; z), \end{aligned}$$
where $H_j(\mathbf{k}_\perp; d) = H(\mathbf{k}_\perp + \boldsymbol{\kappa}_{\perp j}; d)$ is the shifted transfer function at the j-th illumination, defined similarly to $P_j(\mathbf{k}_\perp)$ in Eq. (1). In Eq. (5), the object spectrum is now realigned, ensuring consistency regardless of the illumination angle, as in Eq. (1).
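Since Eqs. (3)-(5) show that shifting the spectrum, propagating, and shifting back is equivalent to applying the shifted transfer function $H_j$ directly, one propagation step can be sketched in a few lines. This is a sketch under assumed names and conventions, with evanescent components simply zeroed:

```python
import numpy as np

def propagate_angular(field, kappa, d, k, dx):
    """Refocus one angular image by applying the shifted transfer function
    H_j(k_perp; d) = exp(-i*2*kz*d), with kz evaluated at k_perp + kappa (Eq. (5)).

    field : complex 2D image E_j(x) at the acquisition plane
    kappa : (kx, ky) illumination wavevector [rad/m]
    d     : propagation distance [m];  k = n*k0 [rad/m];  dx : pixel size [m]
    The factor 2 in the phase accounts for the double pass in reflection
    geometry; evanescent components (imaginary kz) are simply zeroed here.
    """
    ny, nx = field.shape
    kxv = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    kyv = 2.0 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kxv, kyv)
    kz2 = k**2 - (KX + kappa[0])**2 - (KY + kappa[1])**2
    H = np.where(kz2 > 0, np.exp(-2j * np.sqrt(np.maximum(kz2, 0.0)) * d), 0.0)
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

For a normally illuminated plane wave this reduces to a global phase factor $\exp(-i 2 k d)$, as expected from the double-pass geometry.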

2.2.2 Refocusing

After propagation to a desired distance, a depth image at the new focus is generated. We add the propagated images in Eq. (5) for all the illumination angles. Then the added image is described as:

$$\begin{aligned} \tilde{\mathcal{E}}(\mathbf{k}_\perp; d) &= \sum_{j = 1}^{N} \tilde{\mathcal{E}}_j(\mathbf{k}_\perp; d) = \sum_{j = 1}^{N} H_j(\mathbf{k}_\perp; d)\,P_j(\mathbf{k}_\perp)\,\tilde{O}(\mathbf{k}_\perp; z)\\ &= \tilde{O}(\mathbf{k}_\perp; z) \sum_{j = 1}^{N} H(\mathbf{k}_\perp + \boldsymbol{\kappa}_{\perp j}; d)\,P(\mathbf{k}_\perp + \boldsymbol{\kappa}_{\perp j})\\ &= \tilde{O}(\mathbf{k}_\perp; d) \sum_{j = 1}^{N} P(\mathbf{k}_\perp + \boldsymbol{\kappa}_{\perp j}). \end{aligned}$$

In Eq. (6), the sum of $H_j(\mathbf{k}_\perp; d)$ generates optical sectioning on the plane z = d; consequently, only the object information on that plane survives, as shown in the last step of Eq. (6). Furthermore, the sum of $P_j(\mathbf{k}_\perp)$ enlarges the detection aperture, enhancing the imaging resolution. Finally, by taking an inverse Fourier transform of $\tilde{\mathcal{E}}(\mathbf{k}_\perp; d)$, we produce a propagated object image at the new depth z = d. When d = 0, the summation in Eq. (6) is equivalent to the accumulation of interferograms while scanning the illumination angles in the SAI measurement. By repeating the numerical propagation and focus generation with different d, we can create multiple depth images with the same depth sectioning as the SAI measurements [22,23], but without moving the sample mechanically. Note that the depth range available for the numerical propagation is limited by the coherence length of the light source.
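The sectioning property claimed for Eq. (6) can be checked qualitatively for a mirror, for which the summed field reduces to $\sum_j \exp(-i 2 \kappa_{z,j} d)$: the magnitude peaks at the refocused plane and decays with defocus. The sketch below uses an assumed set of 200 illumination angles of uniform areal density (not the actual spiral trajectory) and makes no claim about the exact FWHM:

```python
import numpy as np

def axial_response(d, kappas, k):
    """|sum_j exp(-i*2*kz_j*d)| / N: mirror response of the summed shifted
    transfer functions in Eq. (6) as a function of defocus d."""
    kz = np.sqrt(k**2 - kappas[:, 0]**2 - kappas[:, 1]**2)
    return np.abs(np.exp(-2j * np.outer(d, kz)).sum(axis=1)) / len(kz)

# assumed sampling: 200 angles with uniform areal density over NA = 0.5
rng = np.random.default_rng(0)
k0 = 2.0 * np.pi / 795e-9
r = 0.5 * k0 * np.sqrt(rng.random(200))
phi = 2.0 * np.pi * rng.random(200)
kappas = np.stack([r * np.cos(phi), r * np.sin(phi)], axis=1)

d = np.linspace(-5e-6, 5e-6, 51)
S = axial_response(d, kappas, k0)   # peaks at d = 0 and decays with defocus
```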

2.3 Sample preparation and analysis

To assess the imaging capabilities of the proposed method, an artificial phantom was employed. This phantom was created by mixing an agarose gel (1.5% w/v, Thermo Fisher) with a suspension of magnetic polystyrene beads (1% v/v, 1.0 µm, Spherotech). Additionally, a sheet of fibrous tissue (lens tissue, MC-5, Thorlabs) was immersed in the mixture before gelation.

To prepare biological samples for imaging, Caenorhabditis elegans (C. elegans; Biozoa, Korea) were fixed immediately prior to imaging in 99% ethanol for 10 minutes. Subsequently, they were embedded in an agarose gel (2.0% w/v, Thermo Fisher).

In our experimental procedures, data acquisition and subsequent image processing were executed in MATLAB R2021a (MathWorks). We employed Origin (OriginLab) for plotting and for estimating the full width at half maximum (FWHM) of the axial response function. To compare the image quality between the proposed strategy and the SAI configuration, we relied on the multi-scale structural similarity index measure (MS-SSIM) [28,29]. Furthermore, following the reconstruction process, we applied a non-local means filter with a carefully estimated sigma value to reduce the noise stemming from the void regions [30–32].

3. Results

3.1 Axial response evaluation

We explored the depth sectioning capability of the numerical propagation in our sparse scanning approach. To evaluate this, a flat mirror served as the test sample for characterizing the axial response of the system.

Initially, as a reference, we acquired multiple depth images using the SAI configuration, systematically shifting the sample stage from d = −5 to +5 µm at an interval of 0.2 µm. Subsequently, we generated intensity images from the recorded interferograms of the flat mirror. The average intensity was then measured as a function of the axial shift of the translation stage, as illustrated in Fig. 3(a). The FWHM of the axial response, representing the resolving power of the SAI configuration in the axial direction, was measured as 846 ± 19 nm through Gaussian fitting, depicted by the red line.


Fig. 3. Depth sectioning of the imaging system. A flat mirror was used as a test sample. (a) Intensity profile measured from the depth images taken by mechanically moving the mirror. (b) Intensity profile obtained from numerically generated depth images. All depth images were produced from a single data set captured at the mirror surface without mechanical scanning. Insets of (b): intensity at d = 0 (top) and d = 4 µm (bottom). Red lines are Gaussian fits. FOV of inset images in (b): 20 × 20 µm2. Scale bar: 10 µm.


For the numerical propagation, we captured 200 interferograms at a single depth (d = 0) and then followed the procedures outlined in Sec. 2 to propagate and refocus the images. This allowed us to construct a depth image corresponding to the propagated distance. The propagation range covered the distance from d = −5 to +5 µm at a 0.2 µm interval, mirroring the mechanical scanning conditions. From the 51 depth images, we measured the intensity profile as a function of the propagation distance, as shown in Fig. 3(b). The resulting FWHM was determined as 920 ± 8 nm, which was 8.7% broader than the FWHM observed with the SAI configuration. This minor variation could be attributed to the limited sampling of N = 200 angular images. Nevertheless, these findings confirm that the proposed method exhibits similar depth selectivity to that of the SAI configuration. Consequently, it enables the reconstruction of multiple depth images with measurements taken at a single depth without mechanical movement of a sample within the achievable propagation range.
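The Gaussian fitting used for the FWHM estimates (performed in Origin by the authors) can be sketched with NumPy alone: the logarithm of a Gaussian is a parabola in z, so a quadratic fit over the main lobe yields σ and hence FWHM = 2√(2 ln 2) σ. The function name and the half-maximum cutoff are our assumptions:

```python
import numpy as np

def gaussian_fwhm(z, intensity):
    """FWHM of an axial response via a Gaussian fit: log(I) is quadratic
    in z, so a degree-2 polyfit over points above half-maximum gives sigma."""
    sel = intensity > 0.5 * intensity.max()      # restrict to the main lobe
    a = np.polyfit(z[sel], np.log(intensity[sel]), 2)[0]
    sigma = np.sqrt(-1.0 / (2.0 * a))
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma
```

This requires at least three samples above half-maximum, which the 0.2 µm sampling of the ∼0.85 µm responses in Fig. 3 comfortably provides.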

3.2 Quality assessment for 3D reconstruction

We assessed the capability of our method to generate a 3D map of an object with the depth selectivity demonstrated in the previous section. For this evaluation, we employed the phantom described in Sec. 2.3, containing polystyrene beads and fibrous paper tissue. For the SAI measurement, images were acquired at a 0.2 µm interval during fine mechanical scanning, covering a range from −5 to +5 µm. Subsequently, N = 200 images were acquired at d = 0, which were then used to produce depth images at a 0.2 µm interval through the numerical propagation. From the two sets of 51 reconstructed images obtained with the two methods, 11 representative images are presented in Fig. 4(a). The distribution of beads and the formation of fibers exhibit similar characteristics at equivalent depths, with these features showing parallel variations as the focal plane shifts. In Fig. 4(b), we highlight the depth images at three distinct positions, d = 0, +1, and +2 µm, demonstrating the progressive unveiling of different structural aspects with increasing imaging depth.


Fig. 4. 3D image of a phantom. (a) (upper) Depth images of a phantom taken using the mechanical scanning. (lower) Depth images produced by the numerical scanning. In the numerical scanning, all depth images were generated from the image set captured at z = 0 µm. (b) Zoomed-in images of the areas indicated by the grey box in (a). Both methods display similar structural changes in the fibers and microbeads embedded within the phantom at different target depths. (c) MS-SSIM values calculated from the depth images obtained by both methods. The shaded area represents the standard deviation over 6 repeated experiments. FOV in all images: 168 × 168 µm2; scale bars: 100 µm.


To quantitatively assess the similarity of the images acquired by the two methods, we employed the MS-SSIM as a metric. The MS-SSIM score quantifies the degree of similarity between two images, with values closer to 1 indicating a higher degree of similarity between the target and reference images. In Fig. 4(c), we plotted the MS-SSIM score of the depth images acquired by the numerical propagation with respect to those taken by the SAI (mechanical scanning) as a function of d. In this assessment, the SAI images at corresponding depths served as the reference. As shown in the figure, the mean MS-SSIM value consistently exceeded 0.99 throughout the entire scanning range. When comparing all pairs of depth images, the average MS-SSIM value was 0.9912 ± 0.0030. This finding affirms that the numerical propagation approach is capable of reconstructing depth images that closely resemble those obtained through mechanical scanning. The minor discrepancy between the images produced by the two methods resulted from structural variation of the sample occurring in the time gap between the two measurements. Because the object was suspended in an agarose gel, it was sensitive to mechanical perturbation, leading to subtle changes in shape through lateral and rotational drift between the two acquisitions. To suppress this error, we applied a translational correction to the images using correlation calculations. However, deformation occurring in irregular directions could not be perfectly compensated. It is also noteworthy that the marginal reduction in similarity observed as the propagation distance increases may be attributed to the interference fall-off due to the limited coherence length (∼21 µm) and the sparse angular sampling.
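The MS-SSIM of Refs. [28,29] uses local Gaussian windows; a deliberately simplified, windowless sketch that still shows the multi-scale structure (global image statistics at each dyadic scale, combined with the standard scale weights) might look as follows. This is an illustration, not the metric implementation used in the study, and it assumes images in [0, 1] with sides divisible by 32:

```python
import numpy as np

def _ssim_global(a, b, c1=1e-4, c2=9e-4):
    """Windowless SSIM from whole-image statistics (images scaled to [0, 1])."""
    mua, mub = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mua) * (b - mub)).mean()
    return (((2 * mua * mub + c1) * (2 * cov + c2))
            / ((mua**2 + mub**2 + c1) * (va + vb + c2)))

def ms_ssim(a, b, weights=(0.0448, 0.2856, 0.3001, 0.2363, 0.1333)):
    """Simplified multi-scale SSIM: global SSIM at 5 dyadic scales combined
    as a weighted geometric mean (scale weights from Wang et al. 2003).
    Assumes positively correlated inputs so each per-scale SSIM is positive."""
    score = 1.0
    for w in weights:
        score *= _ssim_global(a, b) ** w
        # 2x2 block average = downsample to the next coarser scale
        a = 0.25 * (a[::2, ::2] + a[1::2, ::2] + a[::2, 1::2] + a[1::2, 1::2])
        b = 0.25 * (b[::2, ::2] + b[1::2, ::2] + b[::2, 1::2] + b[1::2, 1::2])
    return score
```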

3.3 3D imaging of a biological sample

In the preceding section, we assessed the 3D imaging capability of the numerical propagation using an artificial phantom, confirming its ability to generate a 3D map of an object comparable to that obtained by mechanical scanning. To explore its applicability to biological samples, we chose adult Caenorhabditis elegans (C. elegans) as a 3D target specimen. A fixed C. elegans, approximately 1 mm in total length, was immersed in an agarose gel. Depth images were acquired using both the mechanical and sparse scanning methods. To cover the full depth range in which the C. elegans was lying, the mechanical scanning translated the sample stage over a total distance of 40 µm while acquiring images at every Δzms = 0.2 µm. For the sparse scanning, since this distance was too long to cover with a single acquisition, we acquired the angular images every 10 µm; that is, we sparsely moved the sample with a step size of Δzss = 10 µm while covering the same depth range. We conducted the depth measurements with both methods at 7 distinct positions along the anterior-posterior axis, starting from the pharynx tip, to observe the entire shape of the C. elegans. In each set of depth images, we picked the image positioned near the middle of the C. elegans' body and stitched the chosen images, as depicted in Fig. 5(a), which presents a z-slice image of the entire C. elegans in a dorsal view.


Fig. 5. Depth images of an adult C. elegans. (a) Stitched images of the C. elegans. (b) Pharynx. (c) Posterior intestine and gonad arm. FOV of (b) and (c): 100 × 100 µm2.


We then compared the depth images of the C. elegans obtained with both methods. In Fig. 5(b), the depth images around the pharynx at position (i), located 10 µm below the top surface of the sample, are shown. The structures of the digestive tract near the pharynx of the C. elegans are clearly identified from the strong scattering in both images. At position (ii), at a depth of 15 µm below the top surface, as seen in Fig. 5(c), we observed the anterior part of the rectum. Notably, the structure of the intestine and lumen, which appears disconnected at d = 0 µm, is observed as connected at d = 1.6 µm in both images. This demonstrates the 3D imaging capability of the proposed method and its potential as an alternative to mechanical scanning for thick biological tissues.

4. Discussion and summary

As briefly mentioned in Sec. 3.2, the propagation distance in our numerical scanning strategy is limited by the coherence length of the light source. In our measurements, each angular image records depth information only within the coherence length of the light. Consequently, structures lying outside the coherence gate cannot be recovered even if the numerical focus is moved to such a distance. In this study, we employed an SLD with a coherence length of approximately 21 µm, which sets the theoretical maximum propagation distance. In practice, however, the image quality starts to degrade beyond a 5 µm propagation from the acquisition plane. To avoid significant degradation of the reconstruction quality, we limited the propagation range to ±5 µm; this is the practical limit that numerical scanning can cover. To acquire depth information beyond that range, a mechanical shift of the sample with a step size of Δzss = 10 µm is necessary, as we did for the C. elegans in Sec. 3.3. The maximum propagation distance may be moderately increased by employing a light source with a longer coherence length. However, even then, the propagation distance remains limited by the reduction in the field of view caused by boundary aliasing in the frequency domain. Additionally, a light source with a longer coherence length may introduce diffraction noise, significantly compromising image quality.

Our sparse scanning strategy combined with numerical propagation offers an advantage in imaging speed. Consider acquiring depth images spanning a 100 µm axial distance. Conventional mechanical scanning, with a step size of Δzms = 0.2 µm, demands 500 physical movements. In contrast, our sparse scanning strategy with Δzss = 10 µm requires just 10 shifts of the stage, albeit at the cost of multiple image acquisitions per position. To provide a quantitative comparison, we measured the time required to cover a 100 µm depth with both methods. The sample translation stage requires 0.067 seconds for a 0.2 µm step and 0.19 seconds for a 10 µm step. After each translation, additional idle time is necessary for sample stabilization in both cases; this stabilization period is crucial, especially when imaging a sample with low stiffness in a water-immersed environment, as in this study. The image acquisition speed of the camera is another important factor. In our case, it is determined by the kinematic performance of the galvanometer mirrors: to ensure stable laser scanning, we operate them at a moderate speed, avoiding the irregular motion that might occur during high-speed operation, which limits the camera frame rate to 20 fps. Under these conditions, mechanical scanning takes 183.0 seconds, while the sparse scanning strategy takes 103.7 seconds to acquire full depth images spanning 100 µm. Consequently, our sparse scanning approach proves to be 1.76 times faster than conventional mechanical scanning. Employing high-speed galvanometer mirrors would enhance the imaging speed further: with image acquisition at 100 fps and faster scanning mirrors, mechanical scanning is expected to take 143.0 seconds, whereas sparse scanning would require only 23.7 seconds, a 6-fold speedup over mechanical scanning. Hence, the sparse scanning strategy still has room to further enhance its imaging speed. This comparison is summarized in Table 1.


Table 1. Comparison of conventional mechanical scanning and the proposed sparse scanning for imaging a 100 µm thick sample. The scanning steps are set to 0.2 µm and 10 µm, respectively.

In summary, we presented a strategy for observing the 3D structure of an object in a reflection phase microscope employing the SAI configuration. We acquired a set of images at different illumination angles instead of relying on the basic working principle of the instrument, i.e., the accumulation of interferograms. From this set of images, a series of depth images was constructed by numerical propagation. Since the range that numerical scanning can cover, 10 µm in our current implementation, is limited by the coherence length of the light source, we moved the sample with a step size of 10 µm to extend the imaging depth beyond the propagation range. With this sparse scanning strategy, we significantly reduced the mechanical movement of the sample required for 3D imaging. We compared the 3D imaging capabilities of mechanical scanning and the proposed sparse scanning and confirmed that there is no noticeable difference between the two methods. We also applied our approach to C. elegans and demonstrated its applicability to 3D imaging of biological specimens. Across all the experiments presented in this study, we obtained 3D images of samples with less than 5% of the stage shifts required by conventional mechanical scanning. In addition, owing to the reduced mechanical movement, the overall acquisition speed of the sparse scanning strategy was higher than that of mechanical scanning. By employing faster mechanical parts for beam scanning, the imaging speed can be improved further. This method can therefore be used for fast 3D imaging of samples that are easily disturbed by mechanical movement, such as cells suspended in a culture medium, and can improve measurement precision by minimizing the perturbation from mechanical shifts of the specimen.

Funding

National Research Foundation of Korea grant funded by the Korea government Ministry of Education (MOE) (2021R1A2C2012069, 2021R1I1A1A01059752); Korea Evaluation Institute of Industrial Technology funded by the Korea government Ministry of Trade, Industry and Energy (MOTIE) (20010031); Institute of Information & Communications Technology Planning & Evaluation (IITP) grant (2020-0-00997) and Start up Pioneering in Research and Innovation (SPRINT) through the Commercialization Promotion Agency for R&D Outcomes (COMPA) grant (1711198921) funded by the Korea government Ministry of Science and ICT, South Korea (MSIT).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not available at this time but may be obtained from the authors upon reasonable request.

References

1. J. Pawley, Handbook of Biological Confocal Microscopy (Springer Science & Business Media, 2006).

2. D. Shotton and N. White, “Confocal scanning microscopy: three-dimensional biological imaging,” Trends Biochem. Sci. 14(11), 435–439 (1989).

3. T. Tahara, X. Quan, R. Otani, et al., “Digital holography and its multidimensional imaging applications: a review,” Microscopy 67(2), 55–67 (2018).

4. W. Choi, C. Fang-Yen, K. Badizadegan, et al., “Tomographic phase microscopy,” Nat. Methods 4(9), 717–719 (2007).

5. Y. Park, C. Depeursinge, and G. Popescu, “Quantitative phase imaging in biomedicine,” Nat. Photonics 12(10), 578–589 (2018).

6. A. Kuś, W. Krauze, P. L. Makowski, et al., “Holographic tomography: hardware and software solutions for 3D quantitative biomedical imaging,” ETRI J. 41(1), 61–72 (2019).

7. M. Ang, A. C. Tan, C. M. G. Cheung, et al., “Optical coherence tomography angiography: a review of current and future clinical applications,” Graefe's Arch. Clin. Exp. Ophthalmol. 256(2), 237–245 (2018).

8. H.-U. Dodt, U. Leischner, A. Schierloh, et al., “Ultramicroscopy: three-dimensional visualization of neuronal networks in the whole mouse brain,” Nat. Methods 4(4), 331–336 (2007).

9. G. Ku and L. V. Wang, “Deeply penetrating photoacoustic tomography in biological tissues enhanced with an optical contrast agent,” Opt. Lett. 30(5), 507–509 (2005).

10. K. H. Song and L. V. Wang, “Deep reflection-mode photoacoustic imaging of biological tissue,” J. Biomed. Opt. 12(6), 060503 (2007).

11. J. Huisken, J. Swoger, F. Del Bene, et al., “Optical sectioning deep inside live embryos by selective plane illumination microscopy,” Science 305(5686), 1007–1009 (2004).

12. P. J. Keller, A. D. Schmidt, J. Wittbrodt, et al., “Reconstruction of zebrafish early embryonic development by scanned light sheet microscopy,” Science 322(5904), 1065–1069 (2008).

13. J.-M. Yang, C. Favazza, R. Chen, et al., “Simultaneous functional photoacoustic and ultrasonic endoscopy of internal organs in vivo,” Nat. Med. 18(8), 1297–1302 (2012).

14. Y. Choi, P. Hosseini, W. Choi, et al., “Dynamic speckle illumination wide-field reflection phase microscopy,” Opt. Lett. 39(20), 6062–6065 (2014).

15. Y. Choi, P. Hosseini, J. W. Kang, et al., “Reflection phase microscopy using spatio-temporal coherence of light,” Optica 5(11), 1468–1473 (2018).

16. B. Redding, Y. Bromberg, M. A. Choma, et al., “Full-field interferometric confocal microscopy using a VCSEL array,” Opt. Lett. 39(15), 4446–4449 (2014).

17. V. R. Singh, Y. A. Yang, H. Yu, et al., “Studying nucleic envelope and plasma membrane mechanics of eukaryotic cells using confocal reflectance interferometric microscopy,” Nat. Commun. 10(1), 3652 (2019).

18. M. G. Somekh, C. See, and J. Goh, “Wide field amplitude and phase confocal microscope with speckle illumination,” Opt. Commun. 174(1-4), 75–80 (2000).

19. T. Yamauchi, H. Iwai, M. Miwa, et al., “Low-coherent quantitative phase microscope for nanometer-scale measurement of living cells morphology,” Opt. Express 16(16), 12227–12238 (2008).

20. T. Yamauchi, H. Iwai, and Y. Yamashita, “Label-free imaging of intracellular motility by low-coherent quantitative phase microscopy,” Opt. Express 19(6), 5536–5550 (2011).

21. Z. Yaqoob, T. Yamauchi, W. Choi, et al., “Single-shot full-field reflection phase microscopy,” Opt. Express 19(8), 7587–7595 (2011).

22. M. G. Hyeon, K. Park, T. D. Yang, et al., “The effect of pupil transmittance on axial resolution of reflection phase microscopy,” Sci. Rep. 11(1), 22774 (2021).

23. M. G. Hyeon, T. D. Yang, J.-S. Park, et al., “Reflection phase microscopy by successive accumulation of interferograms,” ACS Photonics 6(3), 757–766 (2019).

24. T. Ikeda, G. Popescu, R. R. Dasari, et al., “Hilbert phase microscopy for investigating fast dynamics in transparent systems,” Opt. Lett. 30(10), 1165–1167 (2005).

25. G. Popescu, T. Ikeda, C. A. Best, et al., “Erythrocyte structure and dynamics quantified by Hilbert phase microscopy,” J. Biomed. Opt. 10(6), 060503 (2005).

26. J. W. Goodman, Introduction to Fourier Optics (Roberts and Company Publishers, 2005).

27. M. Kim, Y. Choi, C. Fang-Yen, et al., “Three-dimensional differential interference contrast microscopy using synthetic aperture imaging,” J. Biomed. Opt. 17(2), 026003 (2012).

28. M.-J. Chen and A. C. Bovik, “Fast structural similarity index algorithm,” J. Real-Time Image Process. 6(4), 281–287 (2011).

29. C. Li and A. C. Bovik, “Three-component weighted structural similarity index,” in Image Quality and System Performance VI (SPIE, 2009), pp. 252–260.

30. A. Buades, B. Coll, and J.-M. Morel, “Non-local means denoising,” Image Processing On Line 1, 208–212 (2011).

31. P. Coupé, P. Yger, and C. Barillot, “Fast non local means denoising for 3D MR images,” in International Conference on Medical Image Computing and Computer-Assisted Intervention (Springer, 2006), pp. 33–40.

32. J. V. Manjón, J. Carbonell-Caballero, J. J. Lull, et al., “MRI denoising using non-local means,” Med. Image Anal. 12(4), 514–523 (2008).




Figures (5)

Fig. 1. Illustration of the experimental setup. (a) Experimental schematic of the system. S: light source, M: mirror, L1-6: lenses, PBS: polarizing beam splitter, HWP: half-wave plate, QWP: quarter-wave plate, OLR and OLS: objective lenses for the reference and sample arms, DG: diffraction grating, LP: linear polarizer, TB: two-hole block, TR: translation stage. (b) Schematic of image acquisition. Multiple reflection images are acquired at various illumination angles θ. (c) Scanning trajectory of the beam in k-space. Images are taken at each spot.
Fig. 2. Illustration of the sparse scanning strategy. (a) Conventional mechanical scanning of a sample for 3D imaging. The sample is mechanically translated, and 2D depth images are acquired at every sample movement with a step size of Δzms. The image acquisition planes are indicated with red layers. Typically, Δzms = 0.2 µm is used in the experiments. (b) The sparse scanning strategy. The sample is translated with a step size of Δzss, which is much larger than Δzms, and multiple angular images are acquired at the shifted planes indicated with blue layers. The blind depth images are generated from the acquired angular images by numerical propagation with a step size of Δzns. The numerical propagation covers the range from d = 0 to (M−1)·Δzns. Typically, we use Δzss = 10 µm, Δzns = 0.2 µm, and M = 50.
Fig. 3. Depth sectioning of the imaging system. A flat mirror was used as a test sample. (a) Intensity profile measured from the depth images taken by mechanically moving the mirror. (b) Intensity profile obtained from numerically generated depth images. All depth images were produced from a single data set captured at the mirror surface without mechanical scanning. Insets of (b): intensity at d = 0 (top) and d = 4 µm (bottom). Red lines are Gaussian fits. FOV of inset images in (b): 20 × 20 µm2. Scale bar: 10 µm.
Fig. 4. 3D image of a phantom. (a) (upper) Depth images of a phantom taken using mechanical scanning. (lower) Depth images produced by numerical scanning. In the numerical scanning, all depth images were generated from the image set captured at z = 0 µm. (b) Zoomed-in images of the areas indicated by the grey box in (a). Both methods display similar structural changes in the fibers and microbeads embedded within the phantom at different target depths. (c) MS-SSIM values calculated from the depth images obtained by both methods. The shaded area represents the standard deviation obtained from 6 repeated experiments. FOV in all images: 168 × 168 µm2; scale bars: 100 µm.
Fig. 5. Depth images of an adult C. elegans. (a) Stitched images of the C. elegans. (b) Pharynx. (c) Posterior intestine and gonad arm. FOV of (b) and (c): 100 × 100 µm2.


Equations (6)


$$\tilde{E}_j(\mathbf{k}) = P_j(\mathbf{k})\,\tilde{O}(\mathbf{k}; z),$$

$$P(\mathbf{k}) = P(\mathbf{k}; \boldsymbol{\kappa}_j = 0) = \begin{cases} 1, & |\mathbf{k}|^2 \le (k_0\,\mathrm{NA})^2 \\ 0, & \mathrm{otherwise}, \end{cases}$$

$$\tilde{E}_j(\mathbf{k}-\boldsymbol{\kappa}_j) = P_j(\mathbf{k}-\boldsymbol{\kappa}_j)\,\tilde{O}(\mathbf{k}-\boldsymbol{\kappa}_j; z) = P(\mathbf{k})\,\tilde{O}(\mathbf{k}-\boldsymbol{\kappa}_j; z).$$

$$\tilde{E}_j(\mathbf{k}-\boldsymbol{\kappa}_j; d) = H(\mathbf{k}; d)\,\tilde{E}_j(\mathbf{k}-\boldsymbol{\kappa}_j) = H(\mathbf{k}; d)\,P(\mathbf{k})\,\tilde{O}(\mathbf{k}-\boldsymbol{\kappa}_j; z).$$

$$\tilde{E}_j(\mathbf{k}; d) = H(\mathbf{k}+\boldsymbol{\kappa}_j; d)\,P(\mathbf{k}+\boldsymbol{\kappa}_j)\,\tilde{O}(\mathbf{k}; z) = H_j(\mathbf{k}; d)\,P_j(\mathbf{k})\,\tilde{O}(\mathbf{k}; z),$$

$$\tilde{E}(\mathbf{k}; d) = \sum_{j=1}^{N} \tilde{E}_j(\mathbf{k}; d) = \tilde{O}(\mathbf{k}; z) \sum_{j=1}^{N} H(\mathbf{k}+\boldsymbol{\kappa}_j; d)\,P(\mathbf{k}+\boldsymbol{\kappa}_j) = \tilde{O}(\mathbf{k}; d) \sum_{j=1}^{N} P(\mathbf{k}+\boldsymbol{\kappa}_j).$$
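The synthesis of a depth image from the angular images, i.e., propagating each angular field with its shifted transfer function and summing over illumination angles, can be sketched as follows. This is an illustrative implementation only, with assumed sampling parameters; it is not the authors' reconstruction code.

```python
import numpy as np

def synthesize_depth_image(angular_fields, kappas, dx, wavelength, d):
    """Sum over angular fields, each propagated by d with its shifted
    transfer function H_j(k; d) = H(k + kappa_j; d).

    angular_fields: list of 2D complex fields in a common coordinate frame
    kappas: list of (kx, ky) illumination wave vectors, rad per unit length
    """
    ny, nx = angular_fields[0].shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    k0 = 2 * np.pi / wavelength
    total = np.zeros((ny, nx), dtype=complex)
    for field, (kjx, kjy) in zip(angular_fields, kappas):
        # Shifted axial wavenumber for illumination angle j
        kz_sq = k0**2 - (KX + kjx)**2 - (KY + kjy)**2
        kz = np.sqrt(np.maximum(kz_sq, 0.0))
        Hj = np.exp(1j * kz * d) * (kz_sq > 0)  # evanescent waves dropped
        total += np.fft.ifft2(np.fft.fft2(field) * Hj)
    return total
```

With a single normally incident field (kappa = 0) and d = 0, the routine reduces to a pupil-limited identity, which provides a basic consistency check against the first equation of the set.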