
Numerical dark-field imaging using deep-learning

Open Access

Abstract

Dark-field microscopy is a powerful technique for enhancing the imaging resolution and contrast of small unstained samples. In this study, we report a method based on an end-to-end convolutional neural network to reconstruct high-resolution dark-field images from low-resolution bright-field images. The relation between the bright and dark fields, which is difficult to deduce theoretically, is obtained by training the network. The training data, namely matched bright- and dark-field images of the same object view, are simultaneously obtained by a specially designed multiplexed imaging system. Since image registration, normally the key step in data preparation, is not needed, manual error is largely avoided. After training, a high-resolution numerical dark-field image is generated from a conventional bright-field image used as the input of the network. We validated the method with a resolution test target and by quantitative analysis of the reconstructed numerical dark-field images of biological tissues. The experimental results show that the proposed learning-based method can realize the conversion from bright-field to dark-field images and thus efficiently achieve high-resolution numerical dark-field imaging. The proposed network is universal for different kinds of samples. In addition, we verify that the proposed method has good anti-noise performance and is not affected by instabilities of the experimental setup.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Bright-field microscopy (BM) and dark-field microscopy (DM) provide complementary visual information obtained under different illumination. BM is of high imaging efficiency, allowing the investigation of samples with strong absorption. In DM, only the scattered beam carrying the high-frequency information contributes to imaging [1], which highlights the object edges against a dark background. It can be used in unstained biological tissue imaging, industrial processing detection, clinical medicine observation, chemical identification, etc. [2–5]. Although DM can break the Abbe limit and its resolution reaches tens of nanometers [6], its imaging efficiency is much lower than that of BM. To improve the signal-to-noise ratio of the dark-field image, a high-power light source and image averaging need to be adopted [7–9]. The former is likely to damage biological samples, and the latter limits real-time imaging.

Numerical dark-field imaging is a method to obtain approximate dark-field images by digitally processing the bright-field image with filters. Recently, to improve contrast, numerical dark-field images have been obtained by simulating an opaque circular stop in the diffraction calculation of the bright-field image [10]. This method is low cost and easy to implement, but the size and location of the virtual circular stop must be calculated accurately before the diffraction calculation. A similar numerical dark-field imaging method using an adaptive filter was proposed recently [11]. So far, the resolution of numerical dark-field images has been limited to that of the bright field, because filtering adds no extra frequency information and ignores the high-frequency content present in real dark-field images. This inspires us to seek a new numerical method to convert bright-field images to dark-field images and to improve the resolution of the numerical dark-field image.
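For illustration, a minimal numerical sketch of this filter-based idea is given below (this is not the implementation of Ref. [10]; the stop radius is an assumed parameter). It also makes explicit why no resolution gain can be expected from filtering alone: only frequencies already present in the bright-field image survive the stop.

```python
# Sketch of the conventional "virtual circular stop" numerical dark-field:
# block the low-frequency (undiffracted) light in the Fourier plane of the
# bright-field image. stop_radius (in pixels) is an assumed parameter.
import numpy as np

def numerical_dark_field(bright_field, stop_radius):
    spectrum = np.fft.fftshift(np.fft.fft2(bright_field))
    ny, nx = bright_field.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    # opaque circular stop centered on the DC term
    stop = (yy - ny // 2) ** 2 + (xx - nx // 2) ** 2 > stop_radius ** 2
    return np.abs(np.fft.ifft2(np.fft.ifftshift(spectrum * stop)))
```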

In recent years, deep learning, a branch of machine learning that uses multiple layers of artificial neural networks, has been developed to automatically analyze signals or data [12]. End-to-end convolutional neural networks (CNNs) have great potential as emerging image processing tools and have been used to optimize many classical optical imaging methods [13,14]. In principle, dark-field imaging is a form of scattered-field imaging. Mie scattering theory [15], the Born approximation [16], and the Rytov approximation [17] are three common methods to calculate the scattered field of objects. However, due to the randomness of the scattered field, it is difficult to calculate the complete scattered field of complex biological samples under dark-field conditions. So, if the statistical connection between BM and DM can be found by training a CNN, we can directly use the trained network to convert a low-resolution bright-field image into a high-resolution dark-field image. The network used to calculate this potential connection does not require any prior theoretical knowledge or a predefined physical model.

In the training of a CNN, the accuracy highly relies on well-matched pairs of training data. There are two ways to prepare the training set: using simulated [18,19] or experimental data [20,21]. The simulation of DM is challenging due to the randomness of the scattered light, so we have to use experimental data as the ground truth for network training. In order to obtain experimental images of both the bright and dark fields, it is necessary to build a composite imaging system that can record both conveniently. There are works describing imaging systems that switch between BM and DM [8,22–27]. However, because the system cannot be perfectly controlled during recording, extra registration methods must be used [28–31]. Therefore, to avoid human interaction in data preparation and to reduce man-made error, a new setup that yields perfectly matched data during the recording process is needed.

We propose to use a learning-based method to build a statistical connection for the conversion of bright-field images to dark-field images, and we propose a method to obtain perfectly matched data from experiments so that registration is not needed. To avoid image alignment, a multiplexed imaging system is proposed to capture composite bright- and dark-field information in a single shot. The matched fields of view of the two images can be directly separated and retrieved in the frequency domain. In this way, there is no need to change the hardware in the system, and no additional registration work is required. Once trained, the network can switch from BM to DM with improved resolution, while neither a real dark-field illumination setting during recording nor image averaging in processing is required. The learning-based method discovers the potential conversion relationship from BM to DM images from a large amount of statistical data, without using a physical model. Users do not need any professional knowledge of the dark field or of the conversion relationship to obtain the dark-field image. The output converges to the real high-resolution dark-field result because the real dark-field image with high-frequency information constrains the network during training. It provides a novel method for the acquisition of numerical dark-field images constrained by experimental data. Experiments are carried out to prove that the proposed learning-based method can numerically generate high-resolution and high-contrast dark-field images from bright-field images. The anti-noise performance, stability, and universality of the proposed method are further discussed.

2. Experimental setup for data preparation

2.1 Illumination system

To obtain ideal network training material, we design a setup that collects bright- and dark-field images simultaneously, based on a polarization-multiplexed illumination system. A ring-shaped beam and a circular beam pass through an epidark illumination system to illuminate the object simultaneously. The epidark illumination system is shown in Fig. 1(a). To achieve a high energy conversion rate for the ring-shaped beam, we use a combination of an axicon and a convergent lens to transform the Gaussian beam into a ring-shaped hollow beam [32]. The relationship between the radius of the hollow beam and the wavelength, the focal length of the Fourier lens, and the axicon parameters (period, related to its refractive index and cone angle) needs to be analyzed in advance [33–36], so that the beam can be coupled into the dark-field condenser successfully.
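As a rough illustration of this pre-analysis, the ring radius can be estimated from the axicon deflection angle and the Fourier-lens focal length. The sketch below uses assumed parameter values (focal length, glass index, axicon base angle), not the values of our setup, and the standard thin-axicon relations rather than a full beam-propagation calculation.

```python
# Back-of-the-envelope estimate of the hollow-beam radius produced by an
# axicon + Fourier lens, to check it matches the dark-field condenser annulus.
# Assumed relations: refractive axicon deflection beta ~ (n - 1) * gamma,
# and r = lambda * f / d for an equivalent diffractive axicon of period d.
import math

wavelength = 532e-9          # m, laser wavelength used in this work
f_fourier = 0.10             # m, assumed focal length of the Fourier lens
n_axicon = 1.51              # assumed refractive index of the axicon glass
gamma = math.radians(1.0)    # assumed axicon base angle

beta = (n_axicon - 1.0) * gamma          # small-angle deflection
ring_radius = f_fourier * math.tan(beta)
print(f"ring radius ~ {ring_radius * 1e3:.2f} mm")

d_equiv = wavelength * f_fourier / ring_radius
print(f"equivalent axicon period ~ {d_equiv * 1e6:.1f} um")
```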


Fig. 1. (a) The front-view structure of the epidark illumination system. The outer ring is the dark-field illumination beam path, and the central imaging objective is used as the bright-field illumination beam path. (b) Imaging system. The ring-shaped beam enters the outer ring of the epidark illumination system as the dark-field beam (green). The circular beam enters the central objective of the epidark illumination system as the bright-field beam (red). (Color does not represent wavelength.)


The light field diagram of the illumination system is shown in Fig. 1(b). The bright-field illumination transmitted by the object enters the imaging objective, while the dark-field beam scattered by the object enters the imaging objective with a large divergence angle. The system incorporates polarization multiplexing [37], so the two fields of information entering the imaging objective are orthogonally polarized, which keeps them independent of each other. Otherwise, information aliasing occurs and the dark-field information is submerged in the bright-field information.

2.2 Training data preparation

To capture pairs of bright- and dark-field images, we take advantage of the fast and flexible imaging of digital holography systems [38]. Figure 2 is a schematic diagram of the bright-dark-fields composite digital holography system, which is an extension of the Mach-Zehnder interferometer. A laser beam (wavelength: 532 nm) is divided into two beams by the beamsplitter BS1. After passing through the beam expander, the object wave is further divided into two beams with orthogonal polarization states by the polarizing beamsplitter PBS1. One of them passes through the axicon (Thorlabs AX2510-A) and a lens to be converted into a ring-shaped hollow beam; the other is shaped by a circular diaphragm. The two beams are recombined by the beamsplitter BS2 and coupled into the epidark illumination system (10×/NA 0.3) as the dark-field and bright-field illumination beams, respectively. The cross-sectional view of the light field at the position of the dashed box is shown in Fig. 2. The imaging objective images the object onto the camera. The reference wave is also divided into two beams with orthogonal polarization by PBS2. They are expanded by the two objectives OBJ1 and OBJ2, respectively, to compensate the spherical phase of the imaging objective IO. After being reflected by BS3, the two reference beams interfere with the corresponding object waves. A CMOS camera (The Imaging Source DMK-33UJ003) captures the composite hologram at the imaging plane. In this way, the light paths of BM and DM can be adjusted separately.


Fig. 2. Bright-dark-fields composite digital holographic system. M, mirror; BS, beam splitters; BCE, beam collimation expanding; PBS, polarizing beam splitters; A, aperture; L, lens; EIS, epidark illumination system; S, sample; IO, imaging objective; OBJ, objective. The front view of the light field at the position of the dashed box is shown.


The object wave $U_O$ can be expressed as

$$U_O = U_B J_v + U_D J_h$$
where $U_B$ and $U_D$ are the object waves of BM and DM, respectively, and $J_v = [0\ \ 1]^T$ and $J_h = [1\ \ 0]^T$ are the Jones vectors representing the vertical and horizontal polarization states, respectively. To simultaneously record $U_B$ and $U_D$ without overlapping and to separate them in the frequency domain, angle multiplexing [39] is introduced in the off-axis holographic system. The reference beams propagate in different directions and interfere with $U_O$, so the composite hologram can be expressed as:
$$I_C = |U_O + U_{R\alpha} J_v + U_{R\beta} J_h|^2$$
where $U_{R\alpha}(\boldsymbol{x}) = \exp(i 2\pi \alpha \cdot \boldsymbol{x})$ and $U_{R\beta}(\boldsymbol{x}) = \exp(i 2\pi \beta \cdot \boldsymbol{x})$ are the two reference beams with spatial frequencies $\alpha$ and $\beta$, and $\boldsymbol{x} = (x, y)$. Since $J_v \cdot J_h = 0$, substituting Eq. (1) into Eq. (2) allows the composite hologram to be rewritten as:
$$\begin{aligned} I_C(\boldsymbol{x}) &= I_D(\boldsymbol{x}) + U_B(\boldsymbol{x}) U_{R\alpha}^{\ast}(\boldsymbol{x}) + U_B^{\ast}(\boldsymbol{x}) U_{R\alpha}(\boldsymbol{x})\\ &\quad + U_D(\boldsymbol{x}) U_{R\beta}^{\ast}(\boldsymbol{x}) + U_D^{\ast}(\boldsymbol{x}) U_{R\beta}(\boldsymbol{x}) \end{aligned}$$
where $I_D(\boldsymbol{x})$ is the DC component of the hologram. Therefore, adjusting $\alpha$ and $\beta$ allows the last four terms to be separated in the frequency domain.
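The separation and demodulation can be summarized by the following sketch, which isolates each +1-order sideband around its carrier frequency and shifts it back to the origin before an inverse Fourier transform. The carrier positions and filter radii are assumed values that would in practice be read off the spectrum (cf. Fig. 3(b)); this is not the exact reconstruction code.

```python
# Sketch: separate the bright- and dark-field terms of the composite
# off-axis hologram in the Fourier domain.
import numpy as np

def extract_field(hologram, carrier, radius):
    """Isolate one sideband around `carrier` = (cy, cx) in pixels and
    demodulate it to the origin, returning the complex object field."""
    spectrum = np.fft.fftshift(np.fft.fft2(hologram))
    ny, nx = hologram.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    mask = (yy - carrier[0]) ** 2 + (xx - carrier[1]) ** 2 <= radius ** 2
    sideband = np.where(mask, spectrum, 0)
    # remove the carrier frequency by re-centering the selected sideband
    sideband = np.roll(sideband, (ny // 2 - carrier[0], nx // 2 - carrier[1]),
                       axis=(0, 1))
    return np.fft.ifft2(np.fft.ifftshift(sideband))

hologram = np.random.rand(960, 1280)  # placeholder for the recorded hologram I_C
U_bright = extract_field(hologram, carrier=(300, 400), radius=120)  # alpha term
U_dark   = extract_field(hologram, carrier=(700, 900), radius=180)  # beta term
bright_amp, dark_amp = np.abs(U_bright), np.abs(U_dark)
```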

We use the Edmund 1951 USAF target to quantify the system resolution. Figure 3(a) shows the composite hologram captured by the camera. The enlarged image shows fringes generated by the interference of the two pairs of object and reference beams. The Fourier transform spectrum is shown in Fig. 3(b). The DC spectral component (yellow circle) is separated from the bright-field spectral component (blue circle) and the dark-field spectral component (red circle). According to the spatial resolution definition [40], the resolution of the dark-field amplitude is higher than that of the bright field due to a larger NA. Figures 4(a) and 4(b) show the reconstruction results of the experimentally obtained composite hologram. Our system can clearly resolve the fourth element of group 8 (2.76 µm) in the bright field and the second element of group 9 (1.74 µm) in the dark field. We used polystyrene microspheres (1.97 µm in diameter) as a phase-contrast sample to test the ability of the proposed experimental setup to generate perfectly matched images. Figures 4(c) and 4(d) show the bright- and dark-field amplitude results, respectively. Comparing the distributions of microspheres in the magnified images, closely spaced microspheres that are blurred in the bright field can be clearly distinguished in the dark field. So, by using the proposed experimental setup, pairs of unbiased images can be prepared for network training.


Fig. 3. (a) Composite hologram of the 1951 USAF target obtained by the proposed experimental setup. The partial enlargement shows Element 2, Group 8. The interference fringes of the two directions are distributed at the edges (dark-field fringes) and inside (bright-field fringes) of the rectangular pattern on the test chart. (b) The Fourier transform spectrum of the composite hologram. It consists of five components of three types: yellow for the DC component, blue for the bright-field components, and red for the dark-field components.



Fig. 4. The 1951 USAF target bright-field (a) and dark-field (b) images, and the polystyrene microsphere bright-field (c) and dark-field (d) images obtained by the proposed experimental setup. The enlarged images correspond to the same field of view.


The speckle noise generated by the coherent light decreases the image quality, and dark-field images are more strongly affected. Therefore, by adding a rotatable or movable diffuser to the DM experimental setup and capturing and averaging multiple images, a reconstruction with a higher signal-to-noise ratio can be obtained:

$$A = \frac{1}{N}\sum\nolimits_{i = 1}^{N} A_i$$
where $N$ is the number of holograms and $A_i$ is the numerically reconstructed amplitude map of the $i$-th hologram, $i = 1, \ldots, N$. On the other hand, collecting a sequence of images with the scattered beams enriches the information from different scattering angles, and the high-frequency information is retrieved by averaging.
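A minimal sketch of this averaging step, assuming the `extract_field` sideband-demodulation routine sketched earlier in this section, is:

```python
# Speckle averaging of Eq. (4): `holograms` is a list of N composite holograms
# recorded at different diffuser positions.
import numpy as np

def averaged_amplitude(holograms, carrier, radius):
    amps = [np.abs(extract_field(h, carrier, radius)) for h in holograms]
    return np.mean(amps, axis=0)   # A = (1/N) * sum_i A_i
```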

2.3 Network architecture

We use a convolutional neural network with the U-Net [41] architecture, which has excellent performance in solving regression problems. The network architecture is shown in Fig. 5, and the input-label pairs for training are bright- and dark-field amplitude images obtained by experiment. The network consists of a down-sampling path (encoder) and an up-sampling path (decoder). In both paths, in addition to the convolutional layers (Conv) used for feature extraction, there are Batch Normalization (BN) layers [42] for improving the generalization ability of the network and Rectified Linear Units (ReLU) [43] to avoid the vanishing and exploding gradient problems. The dashed arrows indicate the skip connections, which are characteristic of U-Net; they connect feature maps of the same size in the two paths so that the high-resolution features learned during the encoding stage are transferred to the decoding stage. The network learns the feature details of different sizes in the input image in the down-sampling path and updates its parameters during back-propagation of the loss function. The mean square error (MSE) is used as the loss function, defined as:

$$Loss\left( D_i, \widehat{D_i} \right) = \frac{1}{n}\sum\nolimits_{i = 1}^{n} \left\| D_i - \widehat{D_i} \right\|^2$$
where $D_i$ and $\widehat{D_i}$ represent the real dark-field image and the numerically reconstructed dark-field image, respectively, $i$ indexes the data in a batch, and $n$ is the batch size. That is, the optimization objective is the batch average of the squared difference between the output map and the label map. To reduce the influence of outliers in the large dataset on the decrease of the loss function, Adaptive Moment Estimation (Adam) [44] is introduced as the optimizer to iteratively update the weights and biases of the network, with a learning rate of 0.001. Because Adam uses first- and second-order moment estimates of the gradient to adapt the learning rate for each weight, it performs well on sparse-gradient and non-stationary (noisy) problems. For testing, conventionally collected bright-field images are input into the trained network, and the corresponding high-resolution numerical dark-field images are generated.
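A compact PyTorch sketch of this training configuration is given below. The channel counts and depth are illustrative rather than the exact architecture of Fig. 5, but the Conv+BN+ReLU blocks, skip connections, MSE loss, and Adam optimizer with a learning rate of 0.001 follow the description above.

```python
# Minimal U-Net-style encoder/decoder with skip connections, trained with
# MSE loss and Adam (lr = 0.001) on bright-field/dark-field patch pairs.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
    )

class SmallUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = conv_block(1, 32), conv_block(32, 64)
        self.bottom = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.out = nn.Conv2d(32, 1, 1)
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        e1 = self.enc1(x)                                      # 128 x 128
        e2 = self.enc2(self.pool(e1))                          # 64 x 64
        b = self.bottom(self.pool(e2))                         # 32 x 32
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))    # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))   # skip connection
        return self.out(d1)

model = SmallUNet()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
criterion = nn.MSELoss()                                       # MSE loss above

def train_step(bright_patch, dark_patch):
    """bright_patch, dark_patch: (batch, 1, 128, 128) tensors."""
    optimizer.zero_grad()
    loss = criterion(model(bright_patch), dark_patch)
    loss.backward()
    optimizer.step()
    return loss.item()
```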


Fig. 5. Training and testing process of the network. In the schematic diagram of the network structure, the left and right halves represent the down-sampling and up-sampling paths, respectively. Each solid arrow connects the layers in the network along the data flow, and the dots with different colors next to it represent different data operations. The dashed arrow indicates the skip connection.


Taking the polystyrene microsphere dataset as an example, 2548 patches (128 × 128 pixels) were segmented without overlap from 55 pairs of perfectly matched experimental images (1280 × 960 pixels). Some raw experimental images have poor quality due to environmental noise and could have a negative impact on network training; these images were automatically eliminated from the training set through signal-to-noise ratio threshold screening. It should be emphasized that the training set is derived from real images obtained in the experiment. For the test set, we randomly cropped two hundred patches of the same size from five additional conventional bright-field images of the object. The bright-field images were taken after blocking the reference beams and the dark-field illumination beam in the same experimental setup, which ensures that images not seen by the network during training are used to assess its performance. The network is implemented in Python 3.6.8 with Pytorch 1.3.1, and trained and tested on a PC with a Core i7-8700 CPU (4.3 GHz), 8 GB RAM, and an NVIDIA GeForce GTX 1050 Ti GPU. It takes about 2 hours to complete a training run of 86,400 iterations.
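The data preparation described above can be summarized by the following sketch. The SNR criterion shown is an assumed placeholder, since the exact screening threshold is not specified here.

```python
# Sketch of training-set preparation: screen out low-quality recordings by a
# crude SNR proxy, then cut non-overlapping 128 x 128 patches from each pair.
import numpy as np

def snr(img):
    # assumed SNR proxy: mean signal over standard deviation
    return img.mean() / (img.std() + 1e-8)

def make_patches(image_pairs, size=128, snr_min=3.0):
    """image_pairs: list of (bright, dark) arrays of shape (960, 1280)."""
    pairs = []
    for bright, dark in image_pairs:
        if snr(dark) < snr_min:          # drop whole low-quality recordings
            continue
        h, w = bright.shape
        for y in range(0, h - size + 1, size):
            for x in range(0, w - size + 1, size):
                pairs.append((bright[y:y + size, x:x + size],
                              dark[y:y + size, x:x + size]))
    return pairs
```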

3. Results and discussion

3.1 Network output and quantitative analysis

From the network output results shown in Fig. 6, it can be seen that the object edges, which represent high-frequency information, are highlighted in both the ground truth and the output of the network. It can also be seen that the resolution of the dark-field image is higher than that of the bright-field image, which coincides with the theoretical analysis and the real dark-field recordings. The high-resolution output results for the polystyrene microspheres confirm that our method differs from the filter-based high-contrast approaches mentioned before. The curve in the upper right corner of each image is the normalized intensity distribution along the dashed line. The bright-field intensity profile of the polystyrene microsphere results is inverted (i.e., the peak represents the dark part in the image) to allow easy comparison with the dark-field results. In addition to being constrained by real dark-field information, the network suppresses noise, as seen from the intensity curves, which become smoother. This is due to the denoising effect of deep learning [45,46].


Fig. 6. Results for the 1951 USAF target and polystyrene microspheres. Network input, output, and ground truth of different parts of the samples are shown. The normalized intensity distribution along the dashed line is displayed in the upper right corner.


Further, we test our method with herbaceous plant stem and axon of Drosophila compound eye samples. The datasets of these samples underwent the same training and testing process as before. As shown in Fig. 7, unlike flat and uniform non-biological samples, the contours of biological samples are rough and even exhibit folds and protrusions. These features, which cannot be observed under bright-field conditions, are highlighted due to the increased high-frequency edge information in the dark-field image. The results show that the network can retrieve these features from the real dark-field images. The features of different parts are clearly visible in the output numerical dark-field images but cannot be observed in the bright-field images. This shows that our method improves not only the contrast but also the resolution of the dark-field image, owing to the added high-frequency edge information. Similarly, we performed quantitative analysis of the normalized intensity. In the normalized intensity plot on the right of Fig. 7, the black dashed line, purple solid line, and blue solid line represent the intensity values along the white dashed line in the input, ground truth, and output image of part D of each sample, respectively. The intensity plot of the input image is inverted for easy comparison. It can be seen that the plots of the output and the ground truth coincide. In addition, the peak-valley difference of the output curve is larger than that of the ground truth, which means that the output image has higher contrast. Experimental images used as ground truth are inevitably affected by environmental noise, but the network removes this noise during the training iterations, resulting in an increase in image contrast. To sum up, the network also successfully learned the characteristic information of the real dark-field images of complex biological samples, which is in line with our expectation.


Fig. 7. Network output results for herbaceous plant stem (a) and axons of the Drosophila compound eye (b). The normalized intensity curve on the right corresponds to the intensity values along the dashed line in part D of each sample. The structural similarity index is indicated in the lower right corner of the output images.


For the samples used so far, the structural similarity index (SSIM) of the results on the different datasets is shown in Fig. 8(a) as violin plots. As expected, the simpler the structure of the sample, the more easily the numerical dark-field image can be restored by the network. Figure 8(b) shows the convergence curves of the loss function during training for the two biological samples. We therefore expect the proposed network to show good versatility when verified on more kinds of samples.


Fig. 8. (a) The violin plots of SSIM of output results for different samples. (b) The convergence curves of the loss function during the training of biological samples.


3.2 Anti-noise performance

To consider extreme cases, we add low- and high-level Gaussian noise to the bright-field images to simulate unsatisfactory experimental conditions, such as a poor-quality camera or a noisy experimental environment. The anti-noise performance of the network is quantitatively analyzed by comparing the SSIM and the peak signal-to-noise ratio (PSNR) between the network output and the ground truth. As shown in the first row of Fig. 9, we input bright-field images with different levels of noise into the trained model. The second row shows the corresponding output results with the evaluation indicators. The structural similarity of the noise-free results is as expected, and the PSNR reaches 36.8755 dB. In the noisy bright-field images, the low-resolution object information is submerged in the noise, but the network can still extract noise-free high-frequency information from it. For the low and high noise levels, the PSNR of the numerical dark-field image output by the network reaches 32.2142 dB and 31.3314 dB, respectively, and the structural similarity is still within an acceptable range.
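The evaluation procedure can be sketched as follows; the noise standard deviations are illustrative, and the metrics are computed with scikit-image on images normalized to [0, 1].

```python
# Sketch: add Gaussian noise to a bright-field input, run the trained model,
# and score the output against the real dark-field image with SSIM and PSNR.
import numpy as np
import torch
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def evaluate_with_noise(model, bright, dark_gt, sigma):
    noisy = np.clip(bright + np.random.normal(0, sigma, bright.shape), 0, 1)
    with torch.no_grad():
        inp = torch.from_numpy(noisy[None, None].astype(np.float32))
        out = model(inp).squeeze().numpy()
    return (structural_similarity(dark_gt, out, data_range=1.0),
            peak_signal_noise_ratio(dark_gt, out, data_range=1.0))

# e.g., assumed low- and high-level noise:
# ssim_lo, psnr_lo = evaluate_with_noise(model, bright, dark_gt, sigma=0.02)
# ssim_hi, psnr_hi = evaluate_with_noise(model, bright, dark_gt, sigma=0.10)
```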


Fig. 9. The SSIM and PSNR of the numerical dark-field image output by the network under different noise levels are compared.


3.3 Stability test

Since the learning-based imaging technique is a data-driven one, the accuracy of the data is of vital importance to the imaging quality. The training images are obtained experimentally and are therefore more or less affected by the stability of the setup. To address this problem, the paired training images are recorded simultaneously, so the training set is immune to vibration of the setup. Test sets are recorded after blocking the reference waves and the dark-field illumination beam. Intervals ranging from hours to a month between recording the training and test data were used to test the stability of the network. We found that the learning-based method has excellent spatiotemporal compatibility once the network is well trained. To probe the sensitivity of the network to the experimental setup, we artificially shift the original field of view, simulating sample deviation caused by instability of the optical devices. The results are shown in Fig. 10. For each type of sample, the shift is within [40, 70] pixels. The SSIM results show that the proposed method is not affected by the system disturbance.
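A minimal sketch of the field-of-view shift test is given below. Note that np.roll wraps at the border; in the experiment the shifted view is of course cut from the full recorded field rather than wrapped.

```python
# Sketch: translate the test bright-field image by a random offset in the
# [40, 70] pixel range before feeding it to the trained network, then compare
# the output with the correspondingly shifted ground truth via SSIM.
import numpy as np

def shift_fov(image, rng=np.random.default_rng()):
    dy, dx = rng.integers(40, 71, size=2)   # pixels, per the range quoted above
    return np.roll(image, shift=(dy, dx), axis=(0, 1)), (int(dy), int(dx))

# shifted_bright, offset = shift_fov(bright)
# shifted_dark_gt = np.roll(dark_gt, shift=offset, axis=(0, 1))
# then compute SSIM between model(shifted_bright) and shifted_dark_gt
```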


Fig. 10. Test of the sensitivity of the results to setup instability. The first column for each sample shows the original field of view, and the second column shows the image after the field of view is shifted.


3.4 Universality of trained model

To test the universality of the trained model, we use the model trained on herbaceous plant stem to generate dark-field images of other plant samples. Bright-field images of a monocotyledon stem cross-section, Vicia faba leaf epidermis, and onion skin were input into the trained model, and the output results are shown in Fig. 11. From the results, we find that although the trained model can "give answers" from its "own knowledge" of familiar object structures, it cannot give a perfect answer for samples outside that knowledge. This is because, although they are all plant samples, the dark-field characteristics of different samples are often unique, and our method relies on focused training of the network to capture the dark-field characteristic information specific to a given sample.


Fig. 11. The output results of the model trained on herbaceous plant stem applied to different plant samples.


4. Conclusion

A learning-based method for high-resolution numerical dark-field imaging from low-resolution bright-field images is proposed and tested. The perfectly matched image pairs used as the training set are obtained with a composite experimental setup. The well-trained network can discover the potential conversion relationship from bright-field to dark-field images and realize fast generation of numerical dark-field images. Compared with the traditional method, a high-power light source and a dark-field condenser setup are no longer required. This is an advantage, since the investigation of biological samples usually requires low light power to avoid phototoxicity [21,47]. In addition, compared with other numerical dark-field imaging methods, it is possible to obtain high-resolution information from real dark-field images. Moreover, the proposed experimental setup, based on polarization multiplexing, provides a new way to acquire completely matched images when real experimental data are used for end-to-end network training, so no additional registration work is needed. The introduction of the self-focusing characteristics of holography into the network is under investigation.

Funding

National Key Research and Development Program of China (2017YFB0503505); National Natural Science Foundation of China (61775097, 61975081); Key Laboratory of Virtual Geographic Environment (Nanjing Normal University), Ministry of Education (2017VGE02).

Acknowledgments

This work was supported by the National Science Foundation of China (NSFC) (61775097, 61975081); National Key Research and Development Program (2017YFB0503505); Open Foundation of Key Lab of Virtual Geographic Environment (Nanjing Normal University), Ministry of Education (2017VGE02).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. H. Sherman, S. Klausner, and W. A. Cook, “Incident dark-field illumination: A new method for microcirculatory study,” Angiology 22(5), 295–303 (1971). [CrossRef]  

2. K. Maslov, G. Stoica, and L. V. Wang, “In vivo dark-field reflection-mode photoacoustic microscopy,” Opt. Lett. 30(6), 625 (2005). [CrossRef]  

3. O. L. Krivanek, M. F. Chisholm, V. Nicolosi, T. J. Pennycook, G. J. Corbin, N. Dellby, M. F. Murfitt, C. S. Own, Z. S. Szilagyi, M. P. Oxley, S. T. Pantelides, and S. J. Pennycook, “Atom-by-atom structural and chemical analysis by annular dark-field electron microscopy,” Nature 464(7288), 571–574 (2010). [CrossRef]  

4. G. Aykut, G. Veenstra, C. Scorcella, C. Ince, and C. Boerma, “Cytocam-IDF (incident dark field illumination) imaging for bedside monitoring of the microcirculation,” ICMx 3(1), 4–10 (2015). [CrossRef]  

5. S. Perrin, H. Li, K. Badu, T. Comparon, G. Quaranta, N. Messaddeq, N. Lemercier, P. Montgomery, J.-L. Vonesch, and S. Lecler, “Transmission Microsphere-Assisted Dark-Field Microscopy,” Phys. Status Solidi RRL 13(2), 1800445 (2019). [CrossRef]  

6. P. von Olshausen and A. Rohrbach, “Coherent total internal reflection dark-field microscopy: label-free imaging beyond the diffraction limit,” Opt. Lett. 38(20), 4066–4069 (2013). [CrossRef]  

7. S. Kubota and J. W. Goodman, “Very efficient speckle contrast reduction realized by moving diffuser device,” Appl. Opt. 49(23), 4385–4391 (2010). [CrossRef]  

8. A. Faridian, G. Pedrini, and W. Osten, “High-contrast multilayer imaging of biological organisms through dark-field digital refocusing,” J. Biomed. Opt. 18(08), 1 (2013). [CrossRef]  

9. A. Faridian, G. Pedrini, and W. Osten, “Opposed-view dark-field digital holographic microscopy,” Biomed. Opt. Express 5(3), 728 (2014). [CrossRef]  

10. C. Trujillo and J. Garcia-Sucerquia, “Numerical dark field illumination applied to experimental digital lensless holographic microscopy for reconstructions with enhanced contrast,” Opt. Lett. 43(17), 4096 (2018). [CrossRef]  

11. M. Trusiak, J. A. Picazo-Bueno, P. Zdankowski, and V. Micó, “DarkFocus: numerical autofocusing in digital in-line holographic microscopy using variance of computational dark-field gradient,” Opt. Lasers Eng. 134, 106195 (2020). [CrossRef]  

12. Y. Lecun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521(7553), 436–444 (2015). [CrossRef]  

13. T. Nguyen, Y. Xue, Y. Li, L. Tian, and G. Nehmetallah, “Deep learning approach for Fourier ptychography microscopy,” Opt. Express 26(20), 26470 (2018). [CrossRef]  

14. K. de Haan, Z. S. Ballard, Y. Rivenson, Y. Wu, and A. Ozcan, “Resolution enhancement in scanning electron microscopy using deep learning,” Sci. Rep. 9(1), 12050 (2019). [CrossRef]  

15. C. F. Bohren and D. R. Huffman, Absorption and Scattering of Light by Small Particles (John Wiley & Sons, 2008).

16. E. Wolf, “Three-dimensional structure determination of semi-transparent objects from holographic data,” Opt. Commun. 1(4), 153–156 (1969). [CrossRef]  

17. Y. Sung, W. Choi, C. Fang-Yen, K. Badizadegan, R. R. Dasari, and M. S. Feld, “Optical diffraction tomography for high resolution live cell imaging,” Opt. Express 17(1), 266 (2009). [CrossRef]  

18. E. Nehme, L. E. Weiss, T. Michaeli, and Y. Shechtman, “Deep-STORM: super-resolution single-molecule microscopy by deep learning,” Optica 5(4), 458 (2018). [CrossRef]  

19. C. N. Christensen, E. N. Ward, P. Lio, and C. F. Kaminski, “ML-SIM: A deep neural network for reconstruction of structured illumination microscopy images,” 1–9 (2020).

20. Y. Rivenson, Y. Zhang, H. Günaydın, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light: Sci. Appl. 7(2), 17141 (2018). [CrossRef]  

21. M. Weigert, U. Schmidt, T. Boothe, A. Müller, A. Dibrov, A. Jain, B. Wilhelm, D. Schmidt, C. Broaddus, S. Culley, M. Rocha-Martins, F. Segovia-Miranda, C. Norden, R. Henriques, M. Zerial, M. Solimena, J. Rink, P. Tomancak, L. Royer, F. Jug, and E. W. Myers, “Content-aware image restoration: pushing the limits of fluorescence microscopy,” Nat. Methods 15(12), 1090–1097 (2018). [CrossRef]  

22. J. Zhang, M. C. Pitter, S. Liu, C. See, and M. G. Somekh, “Surface-plasmon microscopy with a two-piece solid immersion lens: Bright and dark fields,” Appl. Opt. 45(31), 7977–7986 (2006). [CrossRef]  

23. M. Lei and B. Yao, “Multifunctional darkfield microscopy using an axicon,” J. Biomed. Opt. 13(4), 044024 (2008). [CrossRef]  

24. T. Piper and J. Piper, “Variable bright-darkfield-contrast, a new illumination technique for improved visualizations of complex structured transparent specimens,” Microsc. Res. Tech. 75(4), 537–554 (2012). [CrossRef]  

25. G. Zheng, C. Kolner, and C. Yang, “Microscopy refocusing and dark-field imaging by using a simple LED array,” Opt. Lett. 36(20), 3987 (2011). [CrossRef]  

26. Z. Liu, L. Tian, S. Liu, and L. Waller, “Real-time brightfield, darkfield, and phase contrast imaging in a light-emitting diode array microscope,” J. Biomed. Opt. 19(10), 1 (2014). [CrossRef]  

27. P. Eugui, A. Lichtenegger, M. Augustin, D. J. Harper, M. Muck, T. Roetzer, A. Wartak, T. Konegger, G. Widhalm, C. K. Hitzenberger, A. Woehrer, and B. Baumann, “Beyond backscattering: optical neuroimaging by BRAD,” Biomed. Opt. Express 9(6), 2476 (2018). [CrossRef]  

28. G. Wu, M. Kim, Q. Wang, B. C. Munsell, and D. Shen, “Scalable High-Performance Image Registration Framework by Unsupervised Deep Feature Representations Learning,” IEEE Trans. Biomed. Eng. 63(7), 1505–1516 (2016). [CrossRef]  

29. X. Yang, R. Kwitt, M. Styner, and M. Niethammer, “Quicksilver: Fast predictive image registration – A deep learning approach,” NeuroImage 158, 378–396 (2017). [CrossRef]  

30. Y. Rivenson, T. Liu, Z. Wei, Y. Zhang, K. de Haan, and A. Ozcan, “PhaseStain: the digital staining of label-free quantitative phase microscopy images using deep learning,” Light: Sci. Appl. 8, 23 (2019). [CrossRef]  

31. Y. Rivenson, Y. Wu, and A. Ozcan, “Deep learning in holography and coherent imaging,” Light: Sci. Appl. 8, 85 (2019). [CrossRef]  

32. K. Aït-Ameur and F. Sanchez, “Gaussian beam conversion using an axicon,” J. Mod. Opt. 46(10), 1537–1548 (1999). [CrossRef]  

33. P. Vaity and L. Rusch, “Perfect vortex beam: Fourier transformation of a Bessel beam,” Opt. Lett. 40(4), 597 (2015). [CrossRef]  

34. S. Fu, T. Wang, and C. Gao, “Perfect optical vortex array with controllable diffraction order and topological charge,” J. Opt. Soc. Am. A 33(9), 1836 (2016). [CrossRef]  

35. X. Li, H. Ma, C. Yin, J. Tang, H. Li, M. Tang, J. Wang, Y. Tai, X. Li, and Y. Wang, “Controllable mode transformation in perfect optical vortices,” Opt. Express 26(2), 651 (2018). [CrossRef]  

36. H. Zhang, X. Li, H. Ma, M. Tang, H. Li, J. Tang, and Y. Cai, “Grafted optical vortex with controllable orbital angular momentum distribution,” Opt. Express 27(16), 22930 (2019). [CrossRef]  

37. C. Yuan, G. Situ, G. Pedrini, J. Ma, and W. Osten, “Resolution improvement in digital holography by angular and polarization multiplexing,” Appl. Opt. 50(7), B6–B11 (2011). [CrossRef]  

38. S. Grilli, P. Ferraro, S. De Nicola, A. Finizio, G. Pierattini, and R. Meucci, “Whole optical wavefields reconstruction by Digital Holography,” Opt. Express 9(6), 294 (2001). [CrossRef]  

39. C. Yuan, H. Zhai, and H. Liu, “Angular multiplexing in pulsed digital holography for aperture synthesis,” Opt. Lett. 33(20), 2356 (2008). [CrossRef]  

40. W. Osten, A. Faridian, P. Gao, K. Körner, D. Naik, G. Pedrini, A. K. Singh, M. Takeda, and M. Wilke, “Recent advances in digital holography [Invited],” Appl. Opt. 53(27), G44 (2014). [CrossRef]  

41. O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Springer, Verlag, 2015), Vol. 9351, pp. 234–241.

42. D. Ulyanov, A. Vedaldi, and V. Lempitsky, “Instance Normalization: The Missing Ingredient for Fast Stylization,” (2016).

43. X. Glorot, A. Bordes, and Y. Bengio, “Deep sparse rectifier neural networks,” in Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics (2011), pp. 315–323.

44. D. P. Kingma and J. L. Ba, “Adam: A method for stochastic optimization,” in 3rd International Conference on Learning Representations, ICLR 2015 - Conference Track Proceedings (International Conference on Learning Representations, ICLR, 2015).

45. K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising,” IEEE Trans. on Image Process. 26(7), 3142–3155 (2017). [CrossRef]  

46. C. Tian, L. Fei, W. Zheng, Y. Xu, W. Zuo, and C.-W. Lin, “Deep Learning on Image Denoising: An overview,” (2019).

47. L. Jin, B. Liu, F. Zhao, S. Hahn, B. Dong, R. Song, T. C. Elston, Y. Xu, and K. M. Hahn, “Deep learning enables structured illumination microscopy with low light levels and enhanced speed,” Nat. Commun. 11(1), 1934 (2020). [CrossRef]  
