
Automated phase unwrapping in digital holography with deep learning

Open Access

Abstract

Digital holography can provide quantitative phase images related to the morphology and content of biological samples. After the numerical image reconstruction, the phase values are limited to the range from −π to π; thus, discontinuities may occur due to the modulo 2π operation. We propose a new deep learning model that can automatically reconstruct unwrapped focused-phase images by combining digital holography and a Pix2Pix generative adversarial network (GAN) for image-to-image translation. Compared with numerical phase unwrapping methods, the proposed GAN model overcomes the difficulty of accurate phase unwrapping caused by abrupt phase changes and performs phase unwrapping about twice as fast. We show that the proposed model generalizes well to different types of cell images and outperforms recent U-net models. The proposed method can be useful for observing the morphology and movement of biological cells in real-time applications.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Biomedical imaging and healthcare-related imaging schemes deal with three-dimensional (3D) cell images. Off-axis digital holography in a microscopic configuration is a technique that can provide quantitative marker-free information about the morphology and contents of a sample. A quantitative phase image is acquired by recording the spatial interference pattern between a coherent or semi-coherent reference wave and an object wave passing through living cells [1].

The phase information associated with a fringe pattern in an interferogram from digital holographic microscopy (DHM) is calculated either by shifting the fringe through different known phase increments or by Fourier transforming the fringe pattern, which is obtained by adding a substantial tilt to the wavefront to produce carrier fringes [2–4]. In either case, the phase distribution of a phase image consists of principal values wrapped into the range from −π to π, which can cause 2π phase jumps due to the phase periodicity (with a phase modulus of 2π) of the trigonometric functions. A phase unwrapping process must be conducted to remove the 2π phase discontinuities in the image and estimate the true continuous phase image. Phase unwrapping consists of detecting the location of each phase jump and connecting adjacent pixels by adding or subtracting multiples of 2π to remove the phase discontinuities.
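To make the wrapping operation concrete, the following minimal NumPy sketch (our illustration, not code from this work) shows how the modulo 2π operation produces principal values and how simple 1-D Itoh-style unwrapping recovers the continuous phase whenever neighboring samples differ by less than π:

```python
import numpy as np

true_phase = np.linspace(0, 6 * np.pi, 200)      # continuous phase ramp
wrapped = np.angle(np.exp(1j * true_phase))      # principal values in (-pi, pi]

def itoh_unwrap_1d(phi):
    """Re-wrap the finite differences into (-pi, pi] and integrate them."""
    d = np.angle(np.exp(1j * np.diff(phi)))      # jumps mapped back into range
    return phi[0] + np.concatenate(([0.0], np.cumsum(d)))

unwrapped = itoh_unwrap_1d(wrapped)
print(np.allclose(unwrapped, true_phase))        # True for this smooth ramp
```

The 2-D case is harder precisely because noise and abrupt phase changes can violate the less-than-π assumption along some paths, which motivates the algorithms reviewed below.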

Many phase unwrapping algorithms have been proposed to solve challenging problems, such as phase discontinuities. Advanced unwrapping algorithms can be categorized into three types: global, region, and path-following algorithms. Global algorithms minimize the differences between the discrete gradients of wrapped and unwrapped phase images [5–13]. Although these algorithms are robust, their computational cost is high, making them unsuitable for real-time live-cell imaging applications.

Region algorithms split an image into smaller regions, unwrap the regions with respect to each other, and merge them into larger regions [5,11,13–21]. These algorithms are regarded as a compromise between robustness and computational intensiveness. Region algorithms are categorized into region-based algorithms [11,15,16,18,19,21] and tile-based algorithms [5,14,17,20] according to the method for defining a homogeneous region. Region-based algorithms determine homogeneous regions using phase gradients, while tile-based algorithms divide an image into small local grids unwrapped by simpler algorithms.

Path-following algorithms are divided into path-dependent, residue-compensation, and quality-guided algorithms. Path-dependent algorithms perform unwrapping through a predetermined search path; however, they do not remove noise well [22]. Residue-compensation algorithms search for residues in a wrapped image and generate branch cuts to connect opposite orientation residues [11,23–32]. These algorithms determine the quality of an unwrapped image according to a cut selection strategy.

Quality-guided algorithms are the most promising methods; they rely on the assumption that a good quality map will lead to a reasonable unwrapping path while grouping pixels [4,33–52]. According to the quality map, the highest-quality pixels are unwrapped first, while the lowest-quality pixels are unwrapped last to prevent error propagation. These methods are computationally efficient and robust in real-time applications. Thus, this study employs a quality-guided algorithm as the path-following method to acquire true unwrapped images for the model [37].

Even with these methods, there are cases where systematic phase unwrapping fails to restore the wrapped data [4,53]. Abrupt phase changes occur at the cell boundaries in a phase image. When the phase continuously rises toward π and exceeds it, the phase jumps to −π due to the modulo 2π operation. This results in 2π discontinuities, which must be removed using phase unwrapping algorithms. However, if the thickness of the cells is outside the depth of focus, or the cells are reconstructed on a partially defocused image plane, the phase at the cell boundary may shoot up above π and abruptly fall to near −π due to the modulo 2π operation [53]. In such abrupt phase changes, the phase difference across the local boundary is less than 2π; thus, the algorithm incorrectly interprets the phases of the two pixels as lying in the same range and requiring no unwrapping. Therefore, phase unwrapping is not correctly executed in the cell area. We propose a new deep learning model that can effectively resolve this incomplete phase unwrapping in real time. In addition, the model performs autofocusing, converting an out-of-focus wrapped phase image into an in-focus unwrapped phase image. The proposed model is a fusion of deep learning and off-axis digital holography in a microscopic configuration to recover the phase value of biological samples, which is essential for studying morphological and material changes at the single-cell level [54].

Recently, deep learning models for images have developed rapidly [55–60]. Deep learning models for phase unwrapping have been proposed using CNNs, especially U-net-type models [61–69]. A CNN model learns to minimize a loss function, such as the Euclidean distance between a predicted and an actual image. Because this distance is squared, the loss strongly penalizes large errors but tolerates small ones, which causes the model to produce blurry images. In addition, CNN-based models face the same abrupt phase change problem as the numerical phase unwrapping algorithms. To overcome these problems, we first propose applying a generative adversarial network (GAN), which can automatically learn a proper adversarial loss function, to the wrapped phase signals obtained using DHM [70–73]. We employ Pix2Pix GAN [60], which consists of a generator and a discriminator and learns image-to-image translation with label images, to automatically reconstruct unwrapped focused-phase images.

The proposed model, which we call “UnwrapGAN,” consists of a U-net generator and a discriminator [58,60]. To train the UnwrapGAN model, we used three types of cancer cells and obtained wrapped defocused-phase images of each cell with DHM. The unwrapped focused label images were obtained from the wrapped defocused-phase images using a quality-guided unwrapping algorithm (see Fig. 1(a)). We fed the wrapped defocused-phase images into the generator, which produces the unwrapped focused-phase images. The discriminator judges whether the output image is well-formed, driving the generator to create images similar to the actual unwrapped phase images (see Fig. 1(b)).

Fig. 1. (a) One quantitative phase image of multiple lung cancer cells. The images are focused manually and then unwrapped by the quality-guided unwrapping algorithm. The unwrapped focused-phase images are used as labels for training the model. The cross section and 3D representation of one cell with wrapped and unwrapped signals are shown. (b) Training of the model, where the UnwrapGAN model consists of a discriminator and a U-net generator. (c) Results for untrained cells, testing whether the trained model can generate unwrapped focused-phase images from unseen images not used for training and whether it can recover phase values for other types of cells, to evaluate the model generalization. The proposed model corrects the abrupt phase change problem, and its results are also compared with those of the U-net.

To test the trained model, we used defocused wrapped data that was not used in training as input to the generator. The trained model performed unwrapping and focusing on these untrained data. The results were compared with those of the phase unwrapping algorithm at the level of single cells and of the entire image. We showed that the trained model reconstructs more detailed phase images than the U-net model and that the proposed model generalizes, since it performs phase reconstruction for other types of cells (liver cancer and colon cancer cells). We also found that the proposed model overcomes the abrupt phase change caused by a phase-jumping aberration, for which numerical phase unwrapping methods fail to restore the true phase values. Furthermore, the model is about twice as fast as the conventional quality-guided algorithm: the quality-guided method must sort reliabilities and find the best unwrapping path, whereas the proposed model unwraps the phase image with fixed trained weights. Therefore, our proposed model can perform phase unwrapping with autofocusing in real time, which can greatly influence the process of measuring biological samples with DHM.

2. Phase unwrapping in digital holographic imaging

2.1 Label-free off-axis digital holographic imaging

Figure 2 shows the general layout of the off-axis DHM, which is based on a Mach-Zehnder interferometer. A coherent laser source is split into an object wave (O) and a reference wave (R). The object wave illuminates the specimen, and a microscope objective (MO) collects and magnifies the object wavefront. O and R are recombined by a beam combiner, with a small tilt angle between them to provide the off-axis geometry. The interferograms recorded by a CCD camera are transmitted to a computer for numerical reconstruction [74,75]. The recorded hologram IH is the interference between O and R, which is expressed as follows:

$${I_H} = {|R |^2} + {|O |^2} + {R^\ast}O + {O^\ast}R, $$
where R* and O* are the complex conjugates of the reference and object waves, respectively. The numerically reconstructed image from the recorded digital hologram includes zero-order diffraction noise (the first two terms in Eq. (1)), as well as a virtual image and the real image, which correspond to the third and fourth terms in Eq. (1), respectively [76,77].

Fig. 2. General layout of off-axis digital holographic microscopy.

The small tilt angle between O and R allows for the parasitic orders to be eliminated and for the real image to be isolated from twin images and zero-order noise. Thus, the undesired data can be suppressed by applying a digitally defined filter mask to a Fourier transform of the off-axis hologram in the spatial spectrum domain to enhance image quality. The filtered hologram is represented by

$$I_H^F = IFFT\{{FFT({{I_H}} )\times Filter} \}= {R^\ast }O, $$
where FFT and IFFT are the fast Fourier and inverse fast Fourier transforms, respectively, and Filter denotes spatial filtering in the Fourier domain.
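As an illustration, a Fourier-domain sideband filter in the spirit of Eq. (2) can be sketched in NumPy as follows. The integer carrier coordinates (cx, cy) and the mask radius are hypothetical parameters that would be read off the hologram spectrum, not values from this paper:

```python
import numpy as np

def filter_off_axis(hologram, cx, cy, radius):
    """Isolate one interference order of an off-axis hologram (cf. Eq. (2))."""
    N, M = hologram.shape
    F = np.fft.fftshift(np.fft.fft2(hologram))
    yy, xx = np.mgrid[0:N, 0:M]
    mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2
    sideband = F * mask                          # keep only the R*.O term
    # Re-center the sideband to remove the off-axis carrier fringes.
    sideband = np.roll(sideband, (N // 2 - cy, M // 2 - cx), axis=(0, 1))
    return np.fft.ifft2(np.fft.ifftshift(sideband))
```

A circular mask is only one possible choice; the essential point is that the mask passes a single sideband while suppressing the zero-order and twin-image terms.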

The reconstruction of a hologram is achieved by illuminating the filtered hologram with a replica of the reference wave. When the wavefront of the reconstructed image propagates toward the observation plane, the reconstruction is calculated using numerical scalar diffraction with Fresnel approximation, which is expressed as [77]:

$$\begin{array}{l} \Psi (m,n) = A\Phi (m,n)\exp \left[ {\frac{{i\pi }}{{\lambda d}}({{m^2}\varDelta {\xi^2} + {n^2}\varDelta {\eta^2}} )} \right]\\ \quad \times FFT{\left\{ {{R_D}({k,l} )I_H^F({k,l} )\exp \left[ {\frac{{i\pi }}{{\lambda d}}({{k^2}\varDelta {x^2} + {l^2}\varDelta {y^2}} )} \right]} \right\}_{m,n}} \end{array}, $$
where A = exp(i2πd/λ)/(iλd) is a constant, d is the distance between the camera (or hologram plane) and observation planes, λ is the wavelength of the illumination light, k, l, m, and n are integers, and N×N is the number of pixels in the CCD camera. Δx and Δy are the sampling intervals in the hologram plane, Δξ = λd/(NΔx) and Δη = λd/(NΔy) are the sampling intervals in the observation plane, and Φ(m, n) is the digital phase mask for the phase aberration correction, which is calculated by [77]
$$\Phi ({m,n} )= \exp \left[ {\frac{{ - i\pi }}{{\lambda D}}({{m^2}\Delta {\xi^2} + {n^2}\Delta {\eta^2}} )} \right]$$
where D is a parameter that must be adjusted to compensate for wavefront curvature:
$$D = \frac{1}{{{d_i}}}\left( {1 + \frac{{{d_o}}}{{{d_i}}}} \right), $$
where di is the distance between MO and image plane, and do is the distance between the specimen and MO. The digital reference wave RD is expressed by
$${R_D} = {A_R}\textrm{exp}[{i({{{2\pi } / \lambda }} )({{k_x}k\varDelta x + {k_y}l\varDelta y} )} ], $$
where kx and ky are two components of the wave vector, and AR is the amplitude of the reference wave. A fine adjustment of kx, ky, and D can be conducted by removing residual fringes, gradients, or curvature of the reconstructed phase distribution in areas of the image where a constant phase is presumed [77]. The digital phase mask resolves the phase aberration caused by inserting the MO in the object-wave arm (Fig. 2). Eventually, the phase-contrast image is obtained from the argument of Ψ(m, n):
$$\phi (m,n) = {\tan ^{ - 1}}\left\{ {\frac{{{\mathop{\rm Im}\nolimits} [\Psi (m,n)]}}{{Re [\Psi (m,n)]}}} \right\}$$
Thus, the phase image is wrapped within the range from −π to π due to the range of the arctangent function.
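A stripped-down sketch of this reconstruction chain is given below, under the simplifying assumptions that R_D and Φ are set to unity and the constant prefactor A and the output-plane quadratic factor of Eq. (3) are dropped, so only the structure of the single-FFT Fresnel propagation and the arctangent of Eq. (7) is shown. The wavelength and pixel pitch are illustrative values, not parameters from this paper:

```python
import numpy as np

def fresnel_reconstruct(holo_filtered, wavelength, d, dx, dy):
    """Single-FFT Fresnel propagation of a filtered hologram over distance d."""
    N = holo_filtered.shape[0]                   # assumes an N x N hologram
    k, l = np.meshgrid(np.arange(N) - N // 2, np.arange(N) - N // 2)
    chirp = np.exp(1j * np.pi / (wavelength * d)
                   * ((k * dx) ** 2 + (l * dy) ** 2))
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(holo_filtered * chirp)))

psi = fresnel_reconstruct(np.ones((256, 256), complex), 633e-9, 5e-2,
                          6.45e-6, 6.45e-6)
wrapped_phase = np.arctan2(psi.imag, psi.real)   # Eq. (7): values in (-pi, pi]
```

Scanning the distance d through a range of values is exactly what produces the defocused reconstructions used later to train and test the autofocusing behavior.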

2.2 Quality-guided path-following algorithms

Quality-guided algorithms involve two main concepts: the calculation of reliability values and the design of the unwrapping path [37,41,44]. Reliability is a criterion that quantifies how much a pixel differs from its surroundings, based on the gradients or differences between the pixel and its neighbors. The reliability of a pixel is calculated from the second differences with respect to its orthogonal and diagonal neighbors. First, the second difference D of the (i, j)th pixel in a 3 × 3 window is calculated (see Fig. 3(a)) using the following equations

$$D({i,j} )= {[{{H^2}({i,j} )+ {V^2}({i,j} )+ D_1^2({i,j} )+ D_2^2({i,j} )} ]^{{1 / 2}}},$$
where
$$H(i,j) = \gamma [{\varphi ({i,j - 1} )- \varphi ({i,j} )} ]- \gamma [{\varphi ({i,j} )- \varphi ({i,j + 1} )} ],$$
$$V(i,j) = \gamma [{\varphi ({i - 1,j} )- \varphi ({i,j} )} ]- \gamma [{\varphi ({i,j} )- \varphi ({i + 1,j} )} ],$$
$${D_1}(i,j) = \gamma [{\varphi ({i - 1,j - 1} )- \varphi ({i,j} )} ]- \gamma [{\varphi ({i,j} )- \varphi ({i + 1,j + 1} )} ],$$
$${D_2}(i,j) = \gamma [{\varphi ({i + 1,j - 1} )- \varphi ({i,j} )} ]- \gamma [{\varphi ({i,j} )- \varphi ({i - 1,j + 1} )} ],$$
where i and j are coordinates of a given pixel in the phase image, H and V are the horizontal and vertical differences, respectively, and D1 and D2 are diagonal differences. γ(·) is a simple unwrapping operation that adds or subtracts 2π at a phase jump, and φ is the phase value at the corresponding pixel. The second differences can be calculated for all pixels except at the borders of the image, where the second differences are set to infinity so that those pixels are resolved last.
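These second differences translate directly into array operations. The sketch below (our illustration consistent with Eqs. (8)-(12), not the authors' code) computes D for all interior pixels at once and assigns infinity at the borders so they are resolved last:

```python
import numpy as np

def gamma(x):
    """Simple wrapping operator: fold a phase difference into (-pi, pi]."""
    return np.angle(np.exp(1j * x))

def second_difference(phi):
    """D(i, j) of Eq. (8) for all interior pixels; borders set to infinity."""
    c = phi[1:-1, 1:-1]                                       # center pixels (i, j)
    H  = gamma(phi[1:-1, :-2] - c) - gamma(c - phi[1:-1, 2:])  # Eq. (9)
    V  = gamma(phi[:-2, 1:-1] - c) - gamma(c - phi[2:, 1:-1])  # Eq. (10)
    D1 = gamma(phi[:-2, :-2]  - c) - gamma(c - phi[2:, 2:])    # Eq. (11)
    D2 = gamma(phi[2:, :-2]   - c) - gamma(c - phi[:-2, 2:])   # Eq. (12)
    out = np.full(phi.shape, np.inf)                          # borders resolved last
    out[1:-1, 1:-1] = np.sqrt(H**2 + V**2 + D1**2 + D2**2)
    return out
```

The pixel reliability used in the next step is then simply the elementwise reciprocal of this map.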

Fig. 3. (a) Calculation of reliability, (b) edge reliability, and (c) unwrapping path. Yellow pixels are unwrapped and grouped by the edge with the highest edge reliability; e.g., R5+R8. Orange pixels are unwrapped and grouped by the edge with the second highest edge reliability; e.g., R3+R6. It is assumed that the edge R5+R8 has the highest edge reliability and the edge R3+R6 has the second highest edge reliability.

Next, the reliability of each pixel in a 3 × 3 window is separately defined as follows

$$R = \frac{1}{D}. $$
For simplicity, the reliability of each pixel in the window is represented by R1, R2, …, R9, as shown in Fig. 3(a). Initially, no pixels in the phase image are considered to belong to any group. The edge reliability is estimated by adding the reliability of two adjacent pixels, as shown in Fig. 3(b). The reliability of all edges is sorted and stored in one array. Then, phase unwrapping is conducted starting with two adjacent pixels with the highest edge reliability. For example, two yellow pixels with the highest edge reliability (the (i, j)th and (i+1, j)th pixels in Fig. 3(c)) are first unwrapped and then joined into a single group. Next, two orange pixels with the second highest edge reliability (the (i-1, j+1)th and (i, j+1)th pixels in Fig. 3(c)) are unwrapped and then joined into another single group.

Phase unwrapping is established by adding or subtracting multiples of 2π to each group. There are three situations in the phase unwrapping process: (1) the two selected pixels belong to different groups, (2) neither pixel belongs to any group, and (3) one pixel belongs to a group while the other does not. In the first case, the pixels in the smaller group are unwrapped with respect to the larger group, and then the two groups are joined. In the second case, both pixels are unwrapped with respect to each other and then joined into a single group. In the third case, the pixel that does not belong to any group is unwrapped with respect to the pixel that belongs to the group and then joins that group. The phase unwrapping proceeds sequentially in order of decreasing edge reliability until all edges in the sorted array are processed. Finally, the borders of the image are unwrapped with respect to the rest of the image [37].
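A compact way to realize this grouping logic is a union-find over pixels, processing horizontal and vertical edges in order of decreasing edge reliability; the three merge situations above collapse into one rule when every pixel starts in its own group. The sketch below is our simplified illustration of the procedure in [37], not the original implementation:

```python
import numpy as np

def quality_guided_unwrap(phi, R):
    """phi: wrapped phase image; R: reliability map (e.g., 1 / second difference)."""
    rows, cols = phi.shape
    n = rows * cols
    val = phi.ravel().astype(float)              # running unwrapped values
    parent = np.arange(n)                        # union-find forest
    members = {p: [p] for p in range(n)}         # root -> pixels in its group

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]        # path halving
            a = parent[a]
        return a

    flatR = R.ravel()
    edges = []
    for p in range(n):
        if (p + 1) % cols:                       # right neighbor exists
            edges.append((flatR[p] + flatR[p + 1], p, p + 1))
        if p + cols < n:                         # bottom neighbor exists
            edges.append((flatR[p] + flatR[p + cols], p, p + cols))
    edges.sort(reverse=True)                     # highest edge reliability first

    for _, p, q in edges:
        rp, rq = find(p), find(q)
        if rp == rq:
            continue                             # already in the same group
        if len(members[rp]) < len(members[rq]):  # shift the smaller group
            rp, rq, p, q = rq, rp, q, p
        shift = 2 * np.pi * np.round((val[p] - val[q]) / (2 * np.pi))
        for m in members[rq]:                    # unwrap rq's group onto rp's
            val[m] += shift
        parent[rq] = rp
        members[rp].extend(members.pop(rq))

    return val.reshape(rows, cols)
```

Because the border pixels carry zero reliability (their second difference is infinite), the edges touching them sort to the end of the array, matching the rule that the image borders are unwrapped last.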

3. Deep learning model

3.1 Model architecture

The proposed deep learning model is based on the Pix2Pix GAN, which comprises a generator and a discriminator. The generator performs an image-to-image translation task: when a raw image is fed into the model, a translated output image is generated. The discriminator is used to train the generator accurately; the generated and real images are fed to the discriminator, which is trained to determine whether an input image is generated or real. Both the generator and the discriminator are built from convolution-BatchNorm-LeakyReLU blocks with 3×3 filters.

The generator consists of a down-sampling path that extracts the features of the input image for translation and an up-sampling path that reconstructs the image from the extracted features (see Fig. 4(a)). Both paths consist of eight convolution layers. When down-sampling is performed before up-sampling, much of the original image's information is lost, resulting in a blurred output. Thus, we used skip connections to share high-frequency information between the input and output. A skip connection reduces the blurring of the generated image by passing the information in the ith layer of the down-sampling path to the (n−i)th layer of the up-sampling path, giving the general shape of a U-net [58].
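In PyTorch, the skip-connection idea can be sketched as follows. This is a shallow three-level illustration with assumed channel widths, not the paper's eight-layer configuration:

```python
import torch
import torch.nn as nn

def down(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, stride=2, padding=1),
                         nn.BatchNorm2d(cout), nn.LeakyReLU(0.2))

def up(cin, cout):
    return nn.Sequential(nn.ConvTranspose2d(cin, cout, 4, stride=2, padding=1),
                         nn.BatchNorm2d(cout), nn.ReLU())

class UnwrapGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.d1, self.d2, self.d3 = down(1, 64), down(64, 128), down(128, 256)
        self.u1, self.u2, self.u3 = up(256, 128), up(256, 64), up(128, 32)
        self.out = nn.Sequential(nn.Conv2d(32, 1, 3, padding=1), nn.Tanh())

    def forward(self, x):                        # x: wrapped phase, (B, 1, 256, 256)
        e1 = self.d1(x)                          # (B, 64, 128, 128)
        e2 = self.d2(e1)                         # (B, 128, 64, 64)
        e3 = self.d3(e2)                         # (B, 256, 32, 32)
        y = self.u1(e3)                          # (B, 128, 64, 64)
        y = self.u2(torch.cat([y, e2], 1))       # skip: layer i -> layer n-i
        y = self.u3(torch.cat([y, e1], 1))       # second skip connection
        return self.out(y)                       # (B, 1, 256, 256)

g = UnwrapGenerator()
print(g(torch.randn(2, 1, 256, 256)).shape)      # torch.Size([2, 1, 256, 256])
```

Concatenating encoder feature maps into the decoder is what lets sharp cell boundaries survive the bottleneck instead of being smoothed away.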

Fig. 4. (a) Architecture of the generator, which is similar to a U-net, to recover an unwrapped phase image from wrapped phase images. The U-net has convolution layers, batch normalization, and various activation functions. (b) Discriminator to compare fake and real images with convolution layers. Tanh is the hyperbolic tangent function.

The discriminator learns to distinguish between real and fake patches (see Fig. 4(b)). Evaluating images by patches allows the model to be trained faster with fewer parameters. We used the Adam optimizer with momentum parameters β1 = 0.5 and β2 = 0.999, 100 epochs, and a learning rate of 0.0002. We trained the models on a server with five NVIDIA Quadro RTX 6000 graphics cards.
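A hedged sketch of the patch discriminator and the optimizer settings quoted above follows; the layer widths and depth are illustrative assumptions, while the Adam hyperparameters come from the text:

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """Scores overlapping patches of (wrapped input, candidate output) pairs."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 3, stride=2, padding=1),
            nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 3, padding=1))     # one realness logit per patch

    def forward(self, wrapped, candidate):
        # Conditional GAN: the discriminator sees the input alongside the output.
        return self.net(torch.cat([wrapped, candidate], dim=1))

D = PatchDiscriminator()
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
# The generator's optimizer uses the same Adam settings quoted in the text.
```

Because each output logit covers only a local receptive field, the discriminator judges local texture realism rather than the whole image at once, which is why it needs fewer parameters and trains faster.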

3.2 Model objective

Following the objective of the Pix2Pix model [60], we trained the GAN with an adversarial loss combined with an L1 loss. The adversarial loss of the UnwrapGAN is defined as:

$${L_{cGAN}}({G,D} )= {E_{x,y}}[{\log D({x,y} )} ]+ {E_x}[{\log ({1 - D({x,G(x )} )} )} ], $$
where E denotes the expected value, x is the wrapped input image, y is the unwrapped label image, G is the generator, and D is the discriminator. The generator tries to minimize this objective against an adversarial D, which tries to maximize it. In addition to the adversarial feedback from the discriminator, the generator needs an L1 loss that measures how close its output is to the label image; the L1 term also reduces blurring:
$${L_{L1}}(G )= {E_{x,y}}[{{{||{y - G(x)} ||}_1}} ]. $$
The objective to train the model is as follows:
$${L^\ast } = \arg \mathop {\min }\limits_G \mathop {\max }\limits_D {L_{cGAN}}({G,D} )+ \lambda {L_{L1}}(G ). $$
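Putting Eqs. (14)-(16) together, one alternating training step can be sketched as below. The BCE-with-logits form is a numerically stable equivalent of the log terms, and λ = 100 is the common Pix2Pix default, assumed here since the paper does not quote its value:

```python
import torch
import torch.nn.functional as F

def gan_step(G, D, opt_G, opt_D, x, y, lambda_l1=100.0):
    """One alternating update; x: wrapped inputs, y: unwrapped labels."""
    fake = G(x)

    # Discriminator: maximize log D(x, y) + log(1 - D(x, G(x))), Eq. (14).
    opt_D.zero_grad()
    d_real = D(x, y)
    d_fake = D(x, fake.detach())
    loss_D = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    loss_D.backward()
    opt_D.step()

    # Generator: fool D and stay L1-close to the label, Eqs. (15) and (16).
    opt_G.zero_grad()
    d_fake = D(x, fake)
    loss_G = (F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
              + lambda_l1 * F.l1_loss(fake, y))
    loss_G.backward()
    opt_G.step()
    return loss_D.item(), loss_G.item()
```

Detaching the fake image in the discriminator step keeps its update from back-propagating into the generator, which is updated separately against the refreshed discriminator.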

3.3 Generation of dataset

We recorded wrapped phase images of three types of cancer cell lines: PC9 (lung cancer cells), SNU449 (liver cancer cells), and SW640 (colon cancer cells). The cells were seeded in a 35-mm imaging dish with a polymer coverslip on the bottom and low walls (Ibidi 80136). They were incubated in standard tissue culture conditions of 37℃, 5% CO2, and 95% humidity. The growth medium was BI Roswell Park Memorial Institute (RPMI) 1640 Medium (ATCC 30–2001) supplemented with 10% fetal bovine serum (FBS) (ATCC 30–2020). The training dataset was generated by recording time-lapse holograms, which were reconstructed using the numerical algorithm [77].

During the numerical reconstruction, the quality-guided unwrapping algorithm was toggled off and on, and two image sets covering the same area were stored for training: the wrapped phase and the corresponding unwrapped phase. The reconstruction and unwrapping algorithms were run in MATLAB 2018. Cell segmentation was conducted using macro code in ImageJ [78].

We reconstructed cell images with a size of 900×900 pixels (a single cell covers an area of about 18 µm × 18 µm on average). The images were resized by interpolation to the single-cell level (256×256 pixels) or the multiple-cell level (1024×1024 pixels) to fit the model. The PC9 cell line was used as the training dataset, comprising 5200 pairs of defocused wrapped and unwrapped phase images. All unwrapped phase images for training were focused manually. Figure 5 shows a gallery of the images used for training. Training the proposed model at the single-cell level (256×256 pixels) took about 10 hours for 100 epochs; training at the multiple-cell level (1024×1024 pixels) took about 3 days for the same number of epochs. For comparison with the proposed model, U-net was trained only on multiple-cell images, which took about 60 hours. The test dataset was composed of PC9, SNU449, and SW640 cell images.
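As a rough illustration of how such pairs can be served to the model, the sketch below resizes each (wrapped, unwrapped) pair to the chosen input size by bilinear interpolation; the in-memory array layout is an assumption, since the paper stores its datasets from the MATLAB reconstruction:

```python
import torch
import torch.nn.functional as F
from torch.utils.data import Dataset

class PhasePairDataset(Dataset):
    """Serves (wrapped, unwrapped) phase image pairs resized to the model size."""
    def __init__(self, wrapped_arrays, unwrapped_arrays, size=256):
        self.pairs = list(zip(wrapped_arrays, unwrapped_arrays))
        self.size = size

    def __len__(self):
        return len(self.pairs)

    def _resize(self, a):
        t = torch.from_numpy(a).float()[None, None]        # (1, 1, H, W)
        t = F.interpolate(t, size=(self.size, self.size),
                          mode='bilinear', align_corners=False)
        return t[0]                                        # (1, size, size)

    def __getitem__(self, idx):
        w, u = self.pairs[idx]
        return self._resize(w), self._resize(u)
```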

Fig. 5. Gallery of lung cancer cell images used for training the model. The phase images were obtained from off-axis holograms using the numerical reconstruction algorithm [77]. A quality-guided unwrapping method [37] was switched off and on to provide the input and target images, respectively. The pairs were used for training.

4. Unwrapping with the deep learning model

4.1 Reconstruction of unwrapped phase image at single cell level

To validate the trained model, we fed wrapped phase images of lung cancer cells at the single-cell level that were not used during training into the model's generator. The model output was compared with that of the systematic quality-guided unwrapping algorithm. Figure 6 shows the results of the model for lung cancer cells. The deep learning model precisely removes the 2π phase discontinuities in the wrapped phase image and restores the correct unwrapped phase of the cells.

Fig. 6. Results of the trained model compared with those of the quality-guided unwrapping method. The model was trained using the PC9 cell line (lung cancer cells). The right graphs show the phase profiles along the yellow line in the cell images.

4.2 Reconstruction of an in-focus unwrapped phase image with multiple cells

Finding the focus distance is essential for extracting the correct phase values. An out-of-focus phase image has incorrect phase values at the cell boundaries. The focus distance can be adjusted manually to obtain the correct phase values, and many techniques for automatically adjusting the focus distance have recently been proposed [65,73,80,81]. In this study, we present an autofocus method based on the UnwrapGAN model. To train the proposed model, images (1024×1024 pixels) containing multiple cells were used. Wrapped defocused-phase images reconstructed at random positions were generated as model inputs and matched with in-focus unwrapped phase images as labels. The proposed model thus learned to generate in-focus unwrapped phase images from wrapped defocused-phase images reconstructed at random positions deviating from the exact reconstruction distance.

To test how well the deep learning model learned to reconstruct the in-focus unwrapped phase image from a wrapped defocused-phase image, we used wrapped defocused-phase images obtained at different reconstruction distances as test data [77] and compared three cases (Fig. 7). In the first case, we used the quality-guided method for phase unwrapping. In the second case, the UnwrapGAN model was trained using in-focus image pairs. In the third case, the UnwrapGAN model was trained using out-of-focus pairs, where the defocused-phase images were reconstructed at random positions. Note that to create the defocused-phase images, the reconstruction distances were deliberately moved away from the correct focus distance by specific values. For phase image reconstruction with the quality-guided method (Fig. 7), the further the reconstruction distance is from the focus, the greater the difference between the correct and reconstructed phase images. The model trained with in-focus pairs showed smaller differences than the quality-guided method, but its reconstruction accuracy still decreased as the reconstruction distance deviated from the exact distance. In contrast, the phase images generated by the proposed model trained with the defocused dataset are almost identical to the in-focus phase image, even when wrapped phase images obtained at reconstruction distances far from the focused position are fed into the model. This indicates that the trained model outperforms the numerical quality-guided path-following algorithm in phase unwrapping and can even reconstruct a consistent unwrapped phase image from wrapped images reconstructed at various distances.

Fig. 7. Gallery of phase images with different reconstruction distances. (a) The wrapped focused-phase image and the corresponding unwrapped focused-phase image. (b) The images of the cell in the red box in (a) at specific reconstruction distances away from the focus. The first row of (b) shows wrapped phase images; the second row, unwrapped phase images reconstructed using the quality-guided phase unwrapping method; the third row, unwrapped phase images reconstructed using the model trained with in-focus pairs; and the last row, unwrapped phase images reconstructed using our model trained with out-of-focus pairs. (c) SSIM indices computed between the phase images obtained at different reconstruction distances and the unwrapped focused-phase image in (a).

In addition, we quantified the structural similarity (SSIM) index [79] to measure how similar the results are to the in-focus phase image. SSIM is a quality-assessment framework based on the degradation of structural information: the index equals 1 for identical images and drops below 1 as the images become less similar. The SSIM indices for the unwrapped phase images at different reconstruction distances were calculated with respect to the in-focus unwrapped phase image (the unwrapped focused-phase image in Fig. 7(a)), which was obtained using the quality-guided phase unwrapping method. The circles on the solid lines in the graph of Fig. 7 represent the average SSIM index over 30 different input phase images. The SSIM index of the in-focus unwrapped phase image from the quality-guided method is exactly 1, since it is compared against itself. As the reconstruction distance moves away from the focus, the SSIM index decreases rapidly. The model trained with only in-focus pairs shows the same trend, although its SSIM index is higher than that of the quality-guided method. In contrast, the model trained with the defocused datasets maintains a nearly constant SSIM index close to 0.9 regardless of the reconstruction distance.
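The SSIM comparison can be reproduced with scikit-image's structural_similarity; the data_range choice below is our assumption for phase images spanning several radians, not a setting stated in the paper:

```python
import numpy as np
from skimage.metrics import structural_similarity

def phase_ssim(reference, test):
    """SSIM between the in-focus unwrapped reference and a test phase image."""
    rng = max(reference.max() - reference.min(), test.max() - test.min())
    return structural_similarity(reference, test, data_range=rng)

# Average over a stack of reconstructions at one defocus distance, mirroring
# the circles plotted in Fig. 7(c).
def mean_ssim(reference, test_stack):
    return float(np.mean([phase_ssim(reference, t) for t in test_stack]))
```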

4.3 Model comparison for phase unwrapping

In this section, we compare the phase unwrapping performance of the UnwrapGAN model with that of the U-net model. The U-net is a CNN trained only by comparing the generated image with the label image, because it has no discriminator. UnwrapGAN, on the other hand, not only compares the generated image with the label image but also lets the discriminator judge whether the generated image is real or fake. Figure 8 shows that the two deep learning models differ significantly in phase unwrapping performance. The U-net model produces a smoother phase distribution than the actual one (see Fig. 8), whereas the UnwrapGAN model restores phase values close to the actual phase distribution. We also quantified the SSIM between each model's output and the label phase image. As shown in the bottom graph of Fig. 8, the SSIM values for the UnwrapGAN model are above 0.9 on average, whereas the U-net's SSIM is much lower and shows a very large deviation. These experimental results show that the UnwrapGAN model produces an in-focus unwrapped phase image more accurately.

Fig. 8. Gallery of the phase recovery results generated by U-net and UnwrapGAN. Wrapped defocused-phase images were fed to the models as input. The in-focus unwrapped phase images of the corresponding inputs, obtained using the quality-guided path-following algorithm, serve as ground truth. The two output rows show the unwrapped focused-phase images reconstructed by the trained U-net and UnwrapGAN models, respectively. The middle graphs show the phase profiles along the straight line in sample #3. The bottom graphs show the SSIM index between the label and output images of each model for 110 single-cell phase images, computed over the area marked with a square in the upper graph. The bottom-left graph shows the SSIM index for each phase image, and the bottom-right graph shows the mean and standard deviation of the SSIM indices over the 110 phase images.

4.4 Model-generalization with different cell types

Phase unwrapping removes the 2π phase discontinuities to estimate the true continuous phase image. Thus, all wrapped phase images must be unwrapped accurately regardless of the cell type. We tested liver and colon cancer cells to verify the validity of the proposed deep learning model, which was trained using lung cancer cells at the single-cell level. This evaluates whether the model learns a general phase reconstruction method rather than phase reconstruction for specific cells. The proposed model restored the phase values correctly for the other cell types (see Fig. 9), in part because the training dataset of a single cell type already contains diverse morphological features due to the heterogeneous cancer cell population. Thus, the proposed model can also unwrap phase images of other cell types. For example, although the shape of colon cancer cells partially differs from that of lung cancer cells, the phase unwrapping was performed accurately with respect to the ground-truth phase values.

Fig. 9. Unwrapping results of liver and colon cancer cells. These cell types were not used during training. The results show that the model generalizes. The right graphs show the phase profiles along the yellow line in the cell images.

4.5 Abrupt phase change problem

If a 2π phase discontinuity occurs, phase unwrapping is performed by adding or subtracting multiples of 2π. A phase image from DHM can be divided into two areas: the cell and the background. The phase within the cell area exceeds π (larger than the background) but is wrapped to −π due to the modulo 2π operation. The two areas thus have a phase difference that must be consistently recovered by the quality-guided unwrapping algorithm. When the optical path length at the cell boundary is smaller than that at the center of the cell, as shown in the phase profile of the cell (Fig. 10(a)), the cell boundary and background are grouped together according to the grouping principle of quality-guided unwrapping. The algorithm then unwraps the phase of the cell by adding or subtracting multiples of 2π to remove the phase discontinuity; here, the boundary between the cell interior and the background is crucial. However, if an abrupt phase change occurs due to a partially defocused reconstruction or strong diffraction patterns at the cell boundary, the phase jumps above π at the boundary and wraps to −π. The phase difference between the cell and background then becomes smaller (less than π), so they are classified into the same group. This results in phase unwrapping failure, as shown in Fig. 10(a) and (b).

Fig. 10. (a) Abrupt phase change noise at a cell boundary with one enlarged area and the cross section of the phase value. (b) The quality-guided phase unwrapping algorithm fails to recover the image. (c) Manually unwrapped image and output of the proposed model. The 3D profile and cross section are also shown for visual comparison.

The proposed model successfully performs phase unwrapping by removing the abrupt phase change (Fig. 10(c)). We used wrapped defocused-phase images during training so that the model can convert a partially wrapped defocused-phase image into a focused unwrapped phase image at the single-cell level. The unwrapped phase image generated by the trained model was compared with the in-focus unwrapped phase image obtained by manually removing the abrupt phase changes. The results show that the proposed model removes the phase jumps and successfully performs phase unwrapping. The quality-guided algorithm performs phase unwrapping based on edge reliability alone, whereas the proposed model extracts various features with its convolution layers and learns from a database of accurately unwrapped images.

4.6 Unwrapping time with quality-guided and proposed model

One of the advantages of the proposed model is that it can unwrap multiple phase images in a very short time. For phase unwrapping, the proposed method only convolves trained filters during down-sampling and up-sampling of the input image, which makes it fast and well suited to real-time applications. The proposed method was compared with the quality-guided phase unwrapping algorithm, and the results are presented in Table 1.

Table 1. Unwrapping time and standard deviation of our proposed method and quality-guided path-following algorithm

The phase unwrapping time was measured for the model trained on single-cell-level phase images (256×256 pixels) using 1000 images of the same size. To measure the time independently of image content, the unwrapping time was recorded for batches of 100 images, and this measurement was repeated 10 times to obtain the average and standard deviation.
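The timing protocol above can be sketched as follows; `unwrap_fn` is a placeholder standing in for either the quality-guided algorithm or the trained generator, and the function names are our illustration rather than the paper's benchmarking code:

```python
import time
import numpy as np

def benchmark(unwrap_fn, images, n_per_run=100, n_runs=10):
    """Mean and std of the time to unwrap n_per_run images, over n_runs runs."""
    times = []
    for _ in range(n_runs):
        t0 = time.perf_counter()
        for img in images[:n_per_run]:
            unwrap_fn(img)
        times.append(time.perf_counter() - t0)
    return float(np.mean(times)), float(np.std(times))

# e.g. (hypothetical): benchmark(lambda im: generator(im), wrapped_test_images)
```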

The quality-guided phase unwrapping algorithm is widely used for real-time phase unwrapping [37,45–51]. The proposed model generates its output about twice as fast as the quality-guided algorithm. Therefore, the comparison of unwrapping times shows that our UnwrapGAN model can perform phase unwrapping in real time while solving several problems (autofocusing, generalization, and abrupt phase changes).

5. Conclusion

Restoring the correct cell phase is essential when studying live biological samples in real time. Correct phase images can be used in medical applications, such as diagnosis and drug treatment. The experimental results showed that the proposed GAN-based model can automatically reconstruct the in-focus unwrapped phase image from a wrapped phase image regardless of the reconstruction distance, with higher performance than recent U-net models. We also showed that the proposed model generalizes to different cell types observed with DHM. The proposed model outperforms existing numerical phase unwrapping methods, since it solves problems related to abrupt phase changes and performs phase unwrapping at a faster rate. Thus, the proposed model can be used for analyzing the morphology and movement of biological cells in real-time applications.

Funding

National Research Foundation of Korea (NRF-2020R1A2C3006234).

Disclosures

The authors declare that there are no conflicts of interest related to this paper.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. B. Kemper and G. Bally, “Digital holographic microscopy for live cell applications and technical inspection,” Appl. Opt. 47(4), A52–A61 (2008). [CrossRef]  

2. M. Takeda, H. Ina, and S. Kobayashi, “Fourier-transform method of fringe-pattern analysis for computer-based topography and interferometry,” J. Opt. Soc. Am. 72(1), 156–160 (1982). [CrossRef]  

3. J. Wyant and K. Creath, “Recent advances in interferometric optical testing,” Laser Focus Electro-Optics 21(11), 118–132 (1985).

4. J. A. Quiroga and E. Bernabeu, “Phase-unwrapping algorithm for noisy phase-map processing,” Appl. Opt. 33(29), 6725–6731 (1994). [CrossRef]  

5. A. Baldi, “Two-dimensional phase unwrapping by quad-tree decomposition,” Appl. Opt. 40(8), 1187–1194 (2001). [CrossRef]  

6. M. D. Pritt and J. S. Shipman, “Least-squares two-dimensional phase unwrapping using FFTs,” IEEE Trans. Geosci. Remote Sensing 32(3), 706–708 (1994). [CrossRef]  

7. J. Strand, T. Taxt, and A. K. Jain, “Two-dimensional phase unwrapping using a block least-squares method,” IEEE Trans. on Image Process. 8(3), 375–386 (1999). [CrossRef]  

8. D. C. Ghiglia and L. A. Romero, “Robust two-dimensional weighted and unweighted phase unwrapping that uses fast transforms and iterative methods,” J. Opt. Soc. Am. A 11(1), 107–117 (1994). [CrossRef]  

9. G. Fornaro, G. Franceschetti, R. Lanari, and E. Sansosti, “Robust phase-unwrapping techniques: a comparison,” J. Opt. Soc. Am. A 13(12), 2355–2366 (1996). [CrossRef]  

10. L. Guerriero, G. Nico, G. Pasquariello, and S. Stramaglia, “New regularization scheme for phase unwrapping,” Appl. Opt. 37(14), 3053–3058 (1998). [CrossRef]  

11. K. Hung and T. Yamada, “Phase unwrapping by regions using least-squares approach,” Opt. Eng. 37(11), 2965–2970 (1998). [CrossRef]  

12. Y. Guo, X. Chen, and T. Zhang, “Robust phase unwrapping algorithm based on least squares,” Optics and Lasers in Engineering 63, 25–29 (2014). [CrossRef]  

13. R. Juarez-Salazar, C. Robledo-Sanchez, and F. Guerrero-Sanchez, “Phase-unwrapping algorithm by a rounding-least-squares approach,” Opt. Eng. 53(2), 024102 (2014). [CrossRef]  

14. M. Arevalillo-Herráez, D. R. Burton, M. J. Lalor, and D. B. Clegg, “Robust, simple, and fast algorithm for phase unwrapping,” Appl. Opt. 35(29), 5847–5852 (1996). [CrossRef]  

15. J. J. Gierloff, “Phase unwrapping by regions,” Proc. SPIE 0818, 2–9 (1987). [CrossRef]  

16. P. G. Charette and I. W. Hunter, “Robust phase-unwrapping method for phase images with high noise content,” Appl. Opt. 35(19), 3506–3513 (1996). [CrossRef]  

17. P. Stephenson, D. R. Burton, and M. J. Lalor, “Data validation techniques in a tiled phase unwrapping algorithm,” Opt. Eng. 33(11), 3703–3708 (1994). [CrossRef]  

18. A. Baldi, “Phase unwrapping by region growing,” Appl. Opt. 42(14), 2498–2505 (2003). [CrossRef]  

19. S. Liu and L. Yang, “Regional phase unwrapping method based on fringe estimation and phase map segmentation,” Opt. Eng. 46(5), 051012 (2007). [CrossRef]  

20. G. C. Antonopoulos, B. Steltner, A. Heisterkamp, T. Ripken, and H. Meyer, “Tile-based two-dimensional phase unwrapping for digital holography using a modular framework,” PLoS One 10(11), e0143186 (2015). [CrossRef]  

21. Y. Zhang and R. Chen, “Relative reliability-guided phase unwrapping algorithm based on region partition and application on train wheel profilometry,” in 2016 IEEE Far East NDT New Technology & Application Forum (FENDT) (2016), pp. 185–189.

22. A. Oppenheim and R. Schafer, Digital Signal Processing (Prentice-Hall, 1975), pp. 507–511.

23. J. M. Huntley, “Noise-immune phase unwrapping algorithm,” Appl. Opt. 28(16), 3268–3270 (1989). [CrossRef]  

24. R. M. Goldstein, H. A. Zebker, and C. L. Werner, “Satellite radar interferometry: Two-dimensional phase unwrapping,” Radio Sci. 23(4), 713–720 (1988). [CrossRef]  

25. R. Cusack, J. M. Huntley, and H. T. Goldrein, “Improved noise-immune phase-unwrapping algorithm,” Appl. Opt. 34(5), 781–789 (1995). [CrossRef]  

26. Y. Lu, X. Wang, and G. He, “Phase unwrapping based on branch cut placing and reliability ordering,” Opt. Eng. 44(5), 055601 (2005). [CrossRef]  

27. S. A. Karout, M. A. Gdeisat, D. R. Burton, and M. J. Lalor, “Two-dimensional phase unwrapping using a hybrid genetic algorithm,” Appl. Opt. 46(5), 730–743 (2007). [CrossRef]  

28. H. Zhong, J. Tang, and D. Liu, “A fast phase unwrapping algorithm based on minimum discontinuity by blocking,” in 2010 2nd International Conference on Future Computer and Communication (2010), pp. V1-717–V1-721.

29. J. C. Souza, M. E. Oliveira, and P. A. M. Santos, “Branch-cut algorithm for optical phase unwrapping,” Opt. Lett. 40(15), 3456–3459 (2015). [CrossRef]  

30. D. Zheng and F. Da, “A novel algorithm for branch cut phase unwrapping,” Optics and Lasers in Engineering 49(5), 609–617 (2011). [CrossRef]  

31. J. Xu, D. An, X. Huang, and P. Yi, “An efficient minimum-discontinuity phase-unwrapping method,” IEEE Geosci. Remote Sensing Lett. 13(5), 666–670 (2016). [CrossRef]  

32. J. Wang and Y. Yang, “Branch-cut algorithm with fast search ability for the shortest branch-cuts based on modified GA,” Journal of Modern Optics 66(5), 473–485 (2019). [CrossRef]  

33. J. Schörner, A. Ettemeyer, U. Neupert, H. Rottenkolber, C. Winter, and P. Obermeier, “New approaches in interpreting holographic images,” Optics and Lasers in Engineering 14(4-5), 283–291 (1991). [CrossRef]  

34. J. A. Quiroga, A. Gonzalez-Cano, and E. Bernabeu, “Phase unwrapping algorithm based on adaptive criterion,” Appl. Opt. 34(14), 2560–2563 (1995). [CrossRef]  

35. P. Wade and C. Tyler, “An investigation of various unwrap/reduction methods to quantify phase-shifted holographic interferometry,” in International Congress on Instrumentation in Aerospace Simulation Facilities (1997), pp. 322–328.

36. M. D. Pritt, “Comparison of path-following and least-squares phase unwrapping algorithms,” in IEEE International Geoscience and Remote Sensing Symposium, IGARSS ‘97 (1997), pp. 872–874.

37. M. Arevalillo-Herráez, D. R. Burton, M. J. Lalor, and M. A. Gdeisat, “Fast two-dimensional phase-unwrapping algorithm based on sorting by reliability following a noncontinuous path,” Appl. Opt. 41(35), 7437–7444 (2002). [CrossRef]  

38. M. Arevalillo-Herráez, D. R. Burton, and M. J. Lalor, “Clustering-based robust three-dimensional phase unwrapping algorithm,” Appl. Opt. 49(10), 1780–1788 (2010). [CrossRef]  

39. X. Su and W. Chen, “Reliability-guided phase unwrapping algorithm: a review,” Optics and Lasers in Engineering 42(3), 245–261 (2004). [CrossRef]  

40. H. S. Abdul-Rahman, M. A. Gdeisat, D. R. Burton, M. J. Lalor, F. Lilley, and C. J. Moore, “Fast and robust three-dimensional best path phase unwrapping algorithm,” Appl. Opt. 46(26), 6623–6635 (2007). [CrossRef]  

41. S. Zhang, X. Li, and S. Yau, “Multilevel quality-guided phase unwrapping algorithm for real-time three-dimensional shape reconstruction,” Appl. Opt. 46(1), 50–57 (2007). [CrossRef]  

42. H. Cui, W. Liao, N. Dai, and X. Cheng, “Reliability-guided phase-unwrapping algorithm for the measurement of discontinuous three-dimensional objects,” Opt. Eng. 50(6), 063602 (2011). [CrossRef]  

43. S. Fang, L. Meng, L. Wang, P. Yang, and M. Komori, “Quality-guided phase unwrapping algorithm based on reliability evaluation,” Appl. Opt. 50(28), 5446–5452 (2011). [CrossRef]  

44. H. Zhong, J. Tang, S. Zhang, and M. Chen, “An improved quality-guided phase-unwrapping algorithm based on priority queue,” IEEE Geosci. Remote Sensing Lett 8(2), 364–368 (2011). [CrossRef]  

45. L. Ma, Y. Li, H. Wang, and H. Jin, “Fast algorithm for reliability-guided phase unwrapping in digital holographic microscopy,” Appl. Opt. 51(36), 8800–8807 (2012). [CrossRef]  

46. M. Zhao, L. Huang, Q. Zhang, X. Su, A. Asundi, and Q. Kemao, “Quality-guided phase unwrapping technique: comparison of quality maps and guiding strategies,” Appl. Opt. 50(33), 6214–6224 (2011). [CrossRef]  

47. M. Zhao and Q. Kemao, “Quality-guided phase unwrapping implementation: an improved indexed interwoven linked list,” Appl. Opt. 53(16), 3492–3500 (2014). [CrossRef]  

48. M. Arevalillo-Herráez, F. R. Villatoro, and M. A. Gdeisat, “A robust and simple measure for quality-guided 2D phase unwrapping algorithms,” IEEE Trans. on Image Process. 25(6), 2601–2609 (2016). [CrossRef]

49. G. Jian, “Reliability-map-guided phase unwrapping method,” IEEE Geosci. Remote Sensing Lett 13(5), 716–720 (2016). [CrossRef]  

50. H. Zhong, J. Tang, Z. Tian, and H. Wu, “Hierarchical quality-guided phase unwrapping algorithm,” Appl. Opt. 58(19), 5273–5280 (2019). [CrossRef]  

51. E. Onat and Y. Özkazanç, “An analysis on path following phase unwrapping algorithms,” in 28th Signal Processing and Communications Applications Conference (SIU) (2020), pp. 1–4.

52. P. Andrä, U. Mieth, and W. Osten, “Strategies for unwrapping noisy interferograms in phase-sampling interferometry,” Proc. SPIE 1508, 50–60 (1991). [CrossRef]  

53. S. Heshmat, S. Tomioka, and S. Nishiyama, “Performance evaluation of phase unwrapping algorithms for noisy phase measurements,” in Fringe 2013 - 7th International Workshop on Advanced Optical Imaging and Metrology (2014), pp. 155–160.

54. M. Zitnik, F. Nguyen, B. Wang, J. Leskovec, A. Goldenberg, and M. M. Hoffman, “Machine learning for integrating data in biology and medicine: Principles, practice, and opportunities,” Information Fusion 50, 71–91 (2019). [CrossRef]  

55. C. Yan, Z. Li, Y. Zhang, Y. Liu, X. Ji, and Y. Zhang, “Depth image denoising using nuclear norm and learning graph model,” ACM Trans. Multimedia Comput. Commun. Appl. 16(4), 122 (2020). [CrossRef]  

56. C. Yan, Y. Hao, L. Li, J. Yin, A. Liu, Z. Mao, Z. Chen, and X. Gao, “Task-adaptive attention for image captioning,” IEEE Trans. Circuits Syst. Video Technol. (2021).

57. C. Yan, B. Gong, Y. Wei, and Y. Gao, “Deep multi-view enhancement hashing for image retrieval,” IEEE Trans. Pattern Anal. Mach. Intell. 43(4), 1445–1451 (2021). [CrossRef]  

58. O. Ronneberger, P. Fischer, and T. Brox, “U-Net: convolutional networks for biomedical image segmentation,” in 18th International Conference on Medical Image Computing and Computer-Assisted Intervention (2015), pp. 234–241.

59. M. Mirza and S. Osindero, “Conditional generative adversarial nets,” arXiv:1411.1784 (2014).

60. P. Isola, J. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” in Proceedings of 30th IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 5967–5976.

61. W. Schwartzkopf, T. E. Milner, J. Ghosh, B. L. Evans, and A. C. Bovik, “Two-dimensional phase unwrapping using neural networks,” in 4th IEEE Southwest Symposium on Image Analysis and Interpretation (2000), pp. 274–277.

62. G. E. Spoorthi, S. Gorthi, and R. K. S. S. Gorthi, “PhaseNet: A deep convolutional neural network for two-dimensional phase unwrapping,” IEEE Signal Process. Lett. 26(1), 54–58 (2019). [CrossRef]  

63. K. Wang, Y. Li, Q. Kemao, J. Di, and J. Zhao, “One-step robust deep learning phase unwrapping,” Opt. Express 27(10), 15100–15115 (2019). [CrossRef]  

64. T. Zhang, S. Jiang, Z. Zhao, K. Dixit, X. Zhou, J. Hou, Y. Zhang, and C. Yan, “Rapid and robust two-dimensional phase unwrapping via deep learning,” Opt. Express 27(16), 23173–23185 (2019). [CrossRef]  

65. Y. Rivenson, Y. Wu, and A. Ozcan, “Deep learning in holography and coherent imaging,” Light Sci Appl 8(1), 85 (2019). [CrossRef]  

66. G. E. Spoorthi, R. K. S. S. Gorthi, and S. Gorthi, “PhaseNet 2.0: Phase unwrapping of noisy data based on deep learning approach,” IEEE Trans. on Image Process. 29, 4862–4872 (2020). [CrossRef]  

67. G. Dardikman-Yoffe, D. Roitshtain, S. K. Mirsky, N. A. Turko, M. Habaza, and N. T. Shaked, “PhUn-Net: ready-to-use neural network for unwrapping quantitative phase images of biological cells,” Biomed. Opt. Express 11(2), 1107–1121 (2020). [CrossRef]  

68. J. Li, Q. Zhang, L. Zhong, J. Tian, G. Pedrini, and X. Lu, “Quantitative phase imaging in dual-wavelength interferometry using a single wavelength illumination and deep learning,” Opt. Express 28(19), 28140–28153 (2020). [CrossRef]  

69. V. Krishna Sumanth and R. K. S. S. Gorthi, “A deep learning framework for 3D surface profiling of the objects using digital holographic Interferometry,” in IEEE International Conference on Image Processing (ICIP) (2020), pp. 2656–2660.

70. I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Proceedings of the 27th International Conference on Neural Information Processing Systems (2014), pp. 2672–2680.

71. J. Ma, W. Yu, P. Liang, C. Li, and J. Jiang, “FusionGAN: A generative adversarial network for infrared and visible image fusion,” Information Fusion 48, 11–26 (2019). [CrossRef]  

72. S. Rawat and A. Wang, “Accurate and practical feature extraction from noisy holograms,” Appl. Opt. 60(16), 4639–4646 (2021). [CrossRef]  

73. A. Khan, Z. Zhijiang, Y. Yu, M. A. Khan, K. Yan, and K. Aziz, “Gan-Holo: Generative adversarial networks-based generated holography using deep learning,” Complexity 2021, 6662161 (2021). [CrossRef]  

74. E. Cuche, F. Bevilacqua, and C. Depeursinge, “Digital holography for quantitative phase-contrast imaging,” Opt. Lett. 24(5), 291–293 (1999). [CrossRef]  

75. U. Schnars and W. P. O. Jüptner, “Digital recording and numerical reconstruction of holograms,” Meas. Sci. Technol. 13(9), R85–R101 (2002). [CrossRef]  

76. E. Cuche, P. Marquet, and C. Depeursinge, “Spatial filtering for zero-order and twin-image elimination in digital off-axis holography,” Appl. Opt. 39(23), 4070–4075 (2000). [CrossRef]  

77. E. Cuche, P. Marquet, and C. Depeursinge, “Simultaneous amplitude-contrast and quantitative phase-contrast microscopy by numerical reconstruction of Fresnel off-axis holograms,” Appl. Opt. 38(34), 6994–7001 (1999). [CrossRef]  

78. C. A. Schneider, W. S. Rasband, and K. W. Eliceiri, “NIH Image to ImageJ: 25 years of image analysis,” Nat. Methods 9(7), 671–675 (2012). [CrossRef]  

79. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: From error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004). [CrossRef]  

80. Z. Ren, Z. Xu, and E. Y. Lam, “Autofocusing in digital holography using deep learning,” Proc. SPIE 10499, 104991V (2018). [CrossRef]  

81. T. Pitkäaho, A. Manninen, and T. J. Naughton, “Performance of autofocus capability of deep convolutional neural networks in digital holographic microscopy,” in Digital Holography and Three-Dimensional Imaging, OSA Technical Digest (online) (Optical Society of America, 2017), https://doi.org/10.1364/DH.2017.W2A.5.
