Optica Publishing Group

Adaptive lens-free computational coherent imaging using autofocusing quantification with speckle illumination

Open Access

Abstract

Multi-distance phase retrieval (MDPR) based lensfree imaging is promising for an aberration-free and compact biological imaging system. In MDPR processing, measurement uncertainty in the sample-to-sensor distance undermines the imaging quality and imposes a heavy workload to achieve a good reconstruction. The optimal distance can be searched for with an image sharpness quantification function applied to a refocused data set; however, this scan is sensitive to noise and aliasing artifacts in MDPR. In this work, we propose an adaptive imaging scheme based on a diffuser inserted into the lensfree system. The optimal sample-to-sensor distance is found by combining speckle imaging with a sharpness quantification function. With this speckle-based auxiliary step, intensity patterns recorded under coherent illumination are directly used to achieve an in-focus image reconstruction. Experiments demonstrate the stability, imaging resolution and optical sectioning of our scheme. This method provides a simple, stable and robust tool for auto-focusing imaging.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Conventional microscopy is an indispensable tool for magnifying a microscopic object for distinct visualization. In the past decades, many novel architectures and algorithms have been proposed to reduce optical aberration, enhance resolution and expand the field-of-view of microscopic imaging, such as imaging with photon sieves [1,2] and Fresnel zone plates [3,4], and image stitching algorithms [5]. These schemes generally struggle to balance the trade-off between resolution enhancement and field-of-view expansion. Subpixel image stitching algorithms can combine different regions of interest to achieve high-resolution imaging with a large field-of-view. However, this strategy merely outputs a synthesized intensity and the corresponding phase is lost, making it incapable of imaging a translucent or phase-only object. Different from these methods, computational imaging systems with iterative phase retrieval algorithms can effectively reduce the complexity of the experimental setup and yield a high-resolution complex-valued reconstruction [6–8]. The phase retrieval technique, a vital tool for solving inverse optical problems, can reconstruct a complex-valued object from intensity-only observations [9–17]. Iterative phase retrieval methods have been applied to super resolution [18], lensfree imaging [19–21], optical encryption [22–24] and optical tomography [25,26]. Phase retrieval algorithms will undoubtedly play a significant role in computational microscopic imaging.

Lensfree imaging based on multi-distance phase retrieval, a useful scheme for computational imaging, has attracted much interest recently due to its compact and aberration-free implementation. Its main limitations are as follows: (1) the imaging resolution is limited by the finite pixel size of the imaging sensor; (2) the imaging quality is degraded by tilt illumination; (3) the distance from the sample to the receiving plane cannot be accurately and stably measured. To break the pixel limitation, pixel super-resolution techniques perform sub-pixel interpolation by moving the light source [27] or the sample [28]. In our previous works, tilt illumination caused by off-axis displacement of the light source was eliminated by using a tilt diffraction modality [29] or a Fourier-based alignment strategy [30]. Searching for an optimal sample-to-sensor distance can be regarded as a procedure of auto-focusing. If one of the wavefronts in the diffractive field is retrieved, the sample-to-sensor distance can be obtained by refocusing this retrieved wavefront to the object plane. The peak value of a sharpness quantification function (SQF) is usually used as the indicator in this search. Recently, new metric functions, such as structure tensor measurement [31], the edge sparsity criterion [32] and the dual-wavelength criterion [33], have been defined for this task. However, these SQF-based methods depend heavily on the robustness of the metric function. Moreover, when they are used for a multi-distance measurement, their auto-focusing accuracy is highly impaired by experimental noise and the aliasing artifact. In addition, double-helix phase engineering [34] and Talbot patterns [35] have also been used to tune the focal plane; unfortunately, these methods only search for the focal plane with a manual operation. Recently, deep learning methods [36,37] have been applied to auto-focusing and can achieve fast and robust results. However, deep learning methods require large amounts of experimental training data (more than 100 images) for a good result, and they only work well within the distance range on which they were trained.

Different from these auto-focusing methods, we propose a simple but stable method to achieve automatic focusing in a multi-distance lensfree imaging system. In our scheme, only a diffuser is inserted upstream of the object and all other components remain unchanged. With the help of speckle illumination, simple sharpness quantification functions are enough to stably find the optimal focused position. Equipped with this optimized position, inverse propagation under coherent illumination allows us to obtain an in-focus object reconstruction. Compared to other multi-distance phase retrieval methods, our hybrid imaging strategy is demonstrated to achieve higher imaging contrast and resolution. Experiments on a biological sample demonstrate the multi-layer refocusing performance of our scheme.

2. Method

We first introduce the basic notation and concepts of the adaptive lensfree imaging technique. In MDPR, an object is retrieved solely from a set of multiple intensity observations. As a plane wave is incident on the sample, intensity patterns are sequentially recorded at different diffraction distances. Let $u_0$ and $u_n$, $n \in [1, N]$, denote the complex amplitude distributions at the sample plane and the receiving planes. The intensity observations $I_n$ in the recording planes are expressed as

$$I_n = |u_n|^2 + \varepsilon_n = |A_n u_0|^2 + \varepsilon_n,$$

where $A_n$ and $\varepsilon_n$ represent the forward propagation operator from the sample plane to the $n$-th receiving plane and the additive noise, respectively. The index $n$ corresponds to the $n$-th propagation distance, $Z_n = Z_0 + (n-1)d$, $n \in [1, N]$, where $Z_0$ is the initial distance, $d$ is the equivalent interval and $N$ is the total number of recordings. With the known parameters $I_n$ and $A_n$, the wave field of the sample is retrieved by iterative multi-distance phase retrieval. Here we consider a coherent illumination scenario with a paraxial approximation of wave-field propagation. The forth-and-back propagation between the sample and receiving planes is performed using the angular spectrum decomposition; a detailed description is given in Ref. [38].
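The angular spectrum propagator behind the operator $A_n$ can be sketched in a few lines of numpy. This is a minimal illustration, not the implementation of Ref. [38]; the function name and boundary treatment are our own choices.

```python
import numpy as np

def angular_spectrum(u, z, wavelength, pixel_size):
    """Propagate a complex field u over distance z via the angular spectrum method.

    A negative z performs the backward propagation used in the reconstruction.
    """
    ny, nx = u.shape
    fx = np.fft.fftfreq(nx, d=pixel_size)          # spatial frequencies along x
    fy = np.fft.fftfreq(ny, d=pixel_size)          # spatial frequencies along y
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z)                        # free-space transfer function
    H[arg < 0] = 0.0                               # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(u) * H)
```

Because the transfer function is unitary for propagating components, applying the operator with $+z$ and then $-z$ returns the original field, which is the property the self-loop retrievals below rely on.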

In traditional multi-distance lensfree imaging, the sample-to-sensor distance is difficult to measure accurately. Usually, a refocusing search based on a sharpness quantification function is used to measure this distance. However, this strategy is sensitive to experimental noise. Moreover, the multi-distance measurement easily introduces an aliasing artifact into the system, which has a negative impact on the auto-focusing operation. This artifact is caused by tilt illumination: it is hard to guarantee that a plane wave is perpendicularly incident on the sample, so the illumination is slightly tilted. Under the paraxial approximation, this tilt results in a lateral shift of the diffraction patterns. The accuracy of auto-focusing is thus impaired by both the lateral shift and the experimental noise. To enhance the stability and robustness of auto-focusing, we devise a hybrid computational imaging technique, as shown in Fig. 1. A switchable ground-glass diffuser is inserted upstream of the object to switch the type of illumination. The on and off states of the diffuser correspond to speckle and coherent illumination, respectively. In this hybrid mode, two groups of intensity observations, under speckle and coherent illumination, are recorded separately. The speckle data set is used to search for the optimal sample-to-sensor distance; the coherent data set is used for image reconstruction. In our adaptive hybrid method, the manual refocusing operation is no longer needed and the stability of auto-focusing is therefore enhanced.


Fig. 1 The schematic of the adaptive multi-distance lensfree imaging system. A diffuser is inserted before the object to switch the type of illumination.


There are two types of multi-distance phase retrieval: single-beam, multiple-intensity reconstruction (SBMIR) [19] and amplitude-phase retrieval (APR) [39], which operate in serial and parallel modes, respectively. To remove the measurement uncertainty of the initial distance, we rewrite these two methods as self-loop modalities in Figs. 2(a) and 2(b), in which the forth-and-back propagation is iteratively defined from the first receiving plane to the last until convergence is reached. For simplicity, the serial and parallel modes are termed the S mode and P mode in the following. As shown in Fig. 2(a), the flowchart of the S mode is as follows: (1) let $\hat{u}_n^k$ be the $k$-th complex-valued guess at the $n$-th receiving plane; (2) initialize the phase of the first receiving plane with zero (e.g., $\hat{u}_1^1 = \sqrt{I_1}$); (3) propagate successively from the $n$-th receiving plane to the $(n+1)$-th plane, so that the $k$-th guess at the $(n+1)$-th plane is

$$\hat{u}_{n+1}^{k} = A_d\!\left[\sqrt{I_n}\,\frac{\hat{u}_n^k}{|\hat{u}_n^k|}\right],$$

where $A_d$ is the forward angular spectrum propagation operator over the equivalent interval $d$, and the operation in the square brackets replaces the computed amplitude with the square root of the measured intensity while retaining the corresponding phase; (4) when $n$ equals $N$, the synthesized complex amplitude is propagated backward to the first receiving plane, and the $(k+1)$-th guess at the first plane is

$$\hat{u}_1^{k+1} = A_{N1}\!\left[\sqrt{I_N}\,\frac{\hat{u}_N^k}{|\hat{u}_N^k|}\right],$$

where $A_{N1}$ stands for the backward angular spectrum propagation operator from the $N$-th plane to the first; (5) run steps (3)–(4) iteratively until the metric function

$$\Delta = \sum_{x,y}\Bigl|\,|\hat{u}_1^k| - \sqrt{I_1}\,\Bigr|$$

falls below a given threshold. As shown in Fig. 2(b), the P mode runs the same way as the S mode except that the forward and backward propagation are changed to


Fig. 2 The flowchart of adaptive multi-distance phase retrievals: (a) Serial mode, (b) Parallel mode, (c) The adaptive mode of our proposal.


$$\hat{u}_n^{k} = A_{1n}\!\left[\sqrt{I_1}\,\frac{\hat{u}_1^k}{|\hat{u}_1^k|}\right], \quad n \in [2, N],$$
$$\hat{u}_1^{k+1} = \frac{1}{N-1}\sum_{n=2}^{N} A_{n1}\!\left[\sqrt{I_n}\,\frac{\hat{u}_n^k}{|\hat{u}_n^k|}\right].$$
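The two update rules can be sketched as one self-loop pass each, assuming a generic propagator `prop(u, z)` such as the angular spectrum method; the function names here are illustrative, not from Refs. [19,39].

```python
import numpy as np

def amplitude_replace(u, I):
    """Replace the amplitude of u with the square root of the measured intensity I."""
    return np.sqrt(I) * np.exp(1j * np.angle(u))

def s_mode_pass(u1, intensities, prop, d):
    """One serial (SBMIR-style) pass: plane 1 -> ... -> N, then back to plane 1."""
    N = len(intensities)
    u = amplitude_replace(u1, intensities[0])
    for n in range(1, N):
        u = prop(u, d)                       # forward by one interval
        u = amplitude_replace(u, intensities[n])
    return prop(u, -(N - 1) * d)             # backward from plane N to plane 1

def p_mode_pass(u1, intensities, prop, d):
    """One parallel (APR-style) pass: average the corrections from planes 2..N."""
    N = len(intensities)
    u1c = amplitude_replace(u1, intensities[0])
    acc = np.zeros_like(u1c)
    for n in range(1, N):
        un = prop(u1c, n * d)                # forward from plane 1 to plane n+1
        acc += prop(amplitude_replace(un, intensities[n]), -n * d)
    return acc / (N - 1)
```

If `u1` already matches all measured intensities, both passes leave it unchanged, i.e. the true first-plane field is a fixed point of either iteration.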

These rewritten methods only output the wavefront of the first measuring plane. With this complete wavefront, any plane in the diffraction field can be retrieved by a refocusing operation with angular spectrum propagation. As described above, tilt illumination produces a sequential lateral shift of the diffraction patterns as the imaging sensor moves. Speckle illumination is introduced to eliminate this obstruction. When a diffuser is placed in the illuminating path, the resulting speckle pattern scrambles the diffraction field. In principle, the tilt illumination imposes a phase-only modulation $\exp[2j\pi/\lambda\,(x\cos\alpha + y\cos\beta)]$ on the sample, where $\lambda$ is the wavelength and $(\alpha, \beta)$ are the oblique angles of the incident light relative to the x and y axes. The speckle pattern produced by the diffuser can be regarded as a random-like pattern. Thus, this random pattern disorganizes the tilt phase factor and the lateral shift of the intensity pattern is offset. As proven in Refs. [40,41], this feature of the speckle pattern is known as the optical memory effect: when a beam of light passes through a multiple-scattering medium, changes in the illumination direction do not affect the generated speckle pattern. In addition, speckle illumination not only eliminates the aliasing artifact but also enhances robustness to noise, as demonstrated in the following numerical simulations.

The flowchart of our adaptive hybrid imaging is shown in Fig. 2(c). This hybrid imaging technique has two steps: speckle-based calibration and in-focus image reconstruction. For the speckle-based calibration, a diffuser is inserted in front of the sample and a series of intensity patterns is recorded. These speckle intensity images are input into the S mode and the wavefront of the first measuring plane is obtained. Over a range of scanning distances z, a set of back-propagated complex-valued images is computed and the corresponding intensities are fed into an SQF to generate a metric curve. The peak of the metric curve corresponds to the optimal focusing distance. Here we apply a number of classical SQFs [42,43] for this task; their expressions are as follows:

$$\mathrm{GRA}(z) = \iint |\nabla I(x,y;z)|\,dx\,dy,$$
$$\mathrm{SPEC}(z) = \iint \ln\bigl\{1 + \bigl|\mathcal{F}[I(x,y;z) - \bar{I}(z)]\bigr|\bigr\}\,dx\,dy,$$
$$\mathrm{LAP}(z) = \iint \bigl\{\nabla^2 I(x,y;z)\bigr\}^2\,dx\,dy,$$
$$\mathrm{SG}(z) = \iint \bigl\{\nabla I(x,y;z)\bigr\}^2\,dx\,dy,$$
$$\mathrm{Tenengrad}(z) = \iint \bigl[i_x * I(x,y;z)\bigr]^2 + \bigl[i_y * I(x,y;z)\bigr]^2\,dx\,dy,$$
where $I = |\hat{u}_0|^2$ denotes the reconstructed intensity of the sample, $\nabla$ is the gradient operator, $(i_x, i_y)$ are the Sobel operators, $*$ is the convolution operation and $\mathcal{F}$ is the Fourier transform. Equations (7)–(11) are not innovative metric functions. In this work, we show that using these simple functions alone is incapable of providing a stable and robust auto-focusing search for multi-distance lensfree imaging, whereas our adaptive method can fulfil this task.
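Discrete versions of several of these metrics, together with the peak-search loop over candidate distances, might look as follows. This is a sketch with illustrative names; periodic boundaries are used for brevity, and `prop(u, z)` stands for any angular spectrum propagator.

```python
import numpy as np

def gra(I):
    """GRA: integrated gradient magnitude."""
    gy, gx = np.gradient(I)
    return float(np.sum(np.hypot(gx, gy)))

def lap(I):
    """LAP: integrated squared Laplacian (periodic boundaries)."""
    L = (np.roll(I, 1, 0) + np.roll(I, -1, 0) +
         np.roll(I, 1, 1) + np.roll(I, -1, 1) - 4.0 * I)
    return float(np.sum(L ** 2))

def spec(I):
    """SPEC: log-spectrum energy of the mean-subtracted image."""
    return float(np.sum(np.log1p(np.abs(np.fft.fft2(I - I.mean())))))

def tenengrad(I):
    """Tenengrad: summed squared Sobel responses (periodic boundaries)."""
    smooth_y = 2.0 * I + np.roll(I, 1, 0) + np.roll(I, -1, 0)
    smooth_x = 2.0 * I + np.roll(I, 1, 1) + np.roll(I, -1, 1)
    sx = np.roll(smooth_y, -1, 1) - np.roll(smooth_y, 1, 1)   # Sobel along x
    sy = np.roll(smooth_x, -1, 0) - np.roll(smooth_x, 1, 0)   # Sobel along y
    return float(np.sum(sx ** 2 + sy ** 2))

def autofocus(u_plane1, prop, z_candidates, metric):
    """Back-propagate to each candidate distance and return the SQF peak."""
    scores = [metric(np.abs(prop(u_plane1, -z)) ** 2) for z in z_candidates]
    return z_candidates[int(np.argmax(scores))], scores
```

Each metric grows with image sharpness, so the argmax of the score list marks the best-focused candidate distance.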

For the in-focus reconstruction, the diffuser is removed and a plane wave provides coherent illumination. After a number of intensity patterns are measured, the coherent data set is fed to the P mode and the complex amplitude of the first plane is obtained. Equipped with the optimal distance from the speckle-based calibration, an in-focus wave field of the sample is reconstructed by inverse propagation of the first-plane wavefront. An aliasing artifact may still exist in the coherent data set; we adopt the Fourier-based alignment strategy [30] to minimize its effects by aligning all shifted patterns to one center. The process is as follows: (1) measure the peak shift of the cross-correlation between the first pattern and each following pattern to compute the relative shifts; (2) use these shifts to calculate the oblique angles of the incident light; (3) apply the Fourier shift theorem with these oblique angles to bring all shifted patterns to a common center. A clear reconstruction of the sample can then be retrieved from the aligned patterns.
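Steps (1) and (3) of this alignment can be sketched with numpy as follows (integer-pixel registration shown for clarity; the function names are our own, and the recentering step is the Fourier shift theorem, which also supports subpixel shifts):

```python
import numpy as np

def relative_shift(ref, img):
    """Locate the cross-correlation peak to get the (dy, dx) shift of img vs ref."""
    corr = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(img))
    peak = np.array(np.unravel_index(np.argmax(np.abs(corr)), corr.shape), float)
    shape = np.array(corr.shape, float)
    peak[peak > shape / 2] -= shape[peak > shape / 2]   # wrap to signed shifts
    return peak

def recenter(img, dy, dx):
    """Undo a lateral shift via the Fourier shift theorem."""
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    return np.real(np.fft.ifft2(np.fft.fft2(img) *
                                np.exp(2j * np.pi * (fy * dy + fx * dx))))
```

Applying `recenter(img, *relative_shift(ref, img))` brings a laterally shifted pattern back onto the reference center, which is what allows all patterns of the coherent data set to share one center before the P-mode reconstruction.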

3. Simulations and experiments

In this section, numerical simulations are given to explain why we use the S mode for the distance search and the P mode for the image reconstruction, and how speckle illumination enhances the stability of auto-focusing. The S and P modes are both able to retrieve the full complex amplitude of a sample, so ideally either could be used to seek the optimal distance. However, their robustness to noise and convergence speeds differ, and it is necessary to find a proper strategy for the adaptive hybrid retrieval. Numerical simulations of convergence speed and robustness to noise are presented in Figs. 3 and 4. The ground truth image is shown in Fig. 3(a). The normalized correlation coefficient (NCC) between the ground truth image and a retrieved image is used as an indicator of convergence speed; a detailed description of the NCC is given in Ref. [39]. The higher the NCC value, the better the reconstruction quality. The simulation parameters of Fig. 3 are: (1) the ground truth image is sampled with 400 × 400 pixels and the pixel size is 3.1 μm; (2) the initial distance Z0 is 20 mm and the interval d is 1 mm; (3) the wavelength of the incident plane wave is 532 nm; (4) the number of iterations is 100; (5) the number of receiving planes N is set to 3, 6, 9 and 11; (6) a random phase mask with a peak-to-valley value of π is applied for speckle illumination and the mask-to-sample distance is 20 mm. At each iteration, the NCC value is calculated from the inverse propagation of the output of the S or P mode with initial distance Z0.
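The NCC indicator can be computed with the common zero-mean definition sketched below; this is our assumption of the standard form, and Ref. [39] gives the exact expression the authors use.

```python
import numpy as np

def ncc(a, b):
    """Normalized correlation coefficient between two real-valued images."""
    a = a - a.mean()
    b = b - b.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))
```

With this definition the NCC is 1 for a perfect (up to affine intensity scaling) reconstruction and decreases as the retrieved image diverges from the ground truth.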


Fig. 3 The comparison of convergence speed for the P mode and the S mode. (a) The ground truth image, (d) the speckle pattern illuminated on the sample; (b) and (e) are NCC curves under coherent and speckle illumination for S mode; (c) and (f) are the results for P mode.



Fig. 4 The comparison of noise robustness for the P mode and the S mode. (a) NCC convergence curve, (b) and (c) the retrieved images with the variance of 0.001 and 0.005 for S mode, (d) and (e) the retrieved images for P mode with similar noise levels.


With this definition, the results of the S and P modes are pictured in Figs. 3(b) and 3(c) for coherent illumination and in Figs. 3(e) and 3(f) for speckle illumination. The speckle pattern incident on the sample is shown in Fig. 3(d). These curves show that the convergence speed of the S mode is superior to that of the P mode for both speckle and coherent illumination. The S mode takes only ~20 iterations to converge, while the P mode requires more than 50. As shown in Figs. 3(e) and 3(f), this fast-converging behavior also holds for the S mode under speckle illumination. In practice, a denser scanning-distance step leads to a large increase in computing time, so the S mode is suitable for a fast focus search. We also note that the NCC values in Figs. 3(e) and 3(f) cannot reach 1. That is because the retrieved image under speckle illumination is the superposition of the incident speckle field and the object function. In fact, increasing N beyond 10 has no dominant impact on convergence acceleration; hence we set N = 11 in the following simulations.

Although the S mode performs well in convergence speed, its robustness to noise is insufficient compared with the P mode. The numerical simulation with noise is given in Fig. 4. Zero-mean Gaussian noise εn is added to each recorded coherent intensity pattern, with the noise variance set to 0.001, 0.005 and 0.01. The other simulation parameters are the same as in Fig. 3, except that the object image in Fig. 3(a) is illuminated only by a coherent plane wave and the number of iterations is increased to 1000. The corresponding convergence curves are pictured in Fig. 4(a). As the noise level increases, the reconstruction accuracy decreases for both methods. Note that the retrieved NCC value of the P mode remains higher than that of the S mode at each noise level, which indicates that the P mode is more robust to noise. The corresponding retrieved images with noise variance 0.001 and 0.005 are displayed in Figs. 4(b) and 4(c) for the S mode and in Figs. 4(d) and 4(e) for the P mode, respectively. Similarly, the imaging quality of the P mode is visually better than that of the S mode. Based on this analysis, the P mode is beneficial for image reconstruction with the coherent data set despite its lower convergence speed. Figures 3 and 4 provide reasonable support for our hybrid imaging technique. Taken together, our hybrid adaptive strategy is: the S mode is utilized for the auto-focusing search and the P mode performs the in-focus image reconstruction.

Figure 5 shows the normalized metric curves obtained by running the coherent and speckle data sets. We want to show that speckle illumination is helpful for a stable and robust focus search. The outputs of the S and P modes are inversely propagated with a scanning distance ranging from 10 mm to 30 mm. The peak value of the SQFs should point to the ground-truth distance Z0. The curves in the left and middle columns denote the results with coherent illumination; the curves in the right column are for speckle illumination. The SQF curves in Figs. 5(a)-5(c) denote the search results for a clean, noise-free imaging process, in which the extremum of every SQF points to 20 mm. However, once Gaussian noise and the aliasing artifact are added, the focus search via the coherent data set becomes inaccurate. Figures 5(d)-5(f) are plotted with added Gaussian noise (variance = 0.01). In this case, the SQF curves of the coherent data set become fluctuating while those of the speckle data set remain smooth. We impose a phase-only factor $\exp[2j\pi/\lambda\,(x\cos\alpha + y\cos\beta)]$ on the sample to simulate tilt illumination, where α = 89.8° and β = 90°, producing a lateral shift in the coherent intensity patterns. Accordingly, the results in Figs. 5(g) and 5(h) become unstable; in Fig. 5(h) in particular, the metric functions LAP and GRA are unable to find the ground-truth distance. In contrast to the coherent results, this aliasing artifact has no impact on speckle illumination in Fig. 5(i). To prove the capability of our method, compound noise (tilt error plus Gaussian noise) is added and the results are shown in Figs. 5(j)-5(l). In this case, the SQF curves under coherent illumination deteriorate further, while the effects of noise and the aliasing artifact are both eliminated by speckle illumination. The simulations in Fig. 5 indicate that speckle illumination enables a unique and robust auto-focusing search even with simple metric functions.


Fig. 5 The normalized SQF curves of coherent and speckle illumination. The first two columns on the left illustrate the case of coherent illumination in the S and P modes, respectively. The third column indicates the case of speckle illumination in the S mode. (a), (d), (g) and (j) correspond to the noise-free case, Gaussian noise (variance = 0.01), aliasing artifact (tilt error = 0.2°) and compound noise for the S mode, respectively. (b), (e), (h) and (k) show the same cases for the P mode under coherent illumination. (c), (f), (i) and (l) correspond to (a), (d), (g) and (j) but with speckle illumination.


Obviously, if the object function could be directly extracted from the retrieved speckle field of the sample, the P mode would no longer be needed and the corresponding computational time would decrease. At present, this superposed wave field can be separated by interferometric measurement [44,45], speckle rotation illumination [46] and lateral sample movement [47]. However, these methods increase the complexity of the experiment. Different from these methods, our adaptive scheme merely adds a diffuser and collects two data sets of intensity patterns under coherent and speckle illumination. Moreover, the coherent and speckle data sets can be processed in parallel, so the total computing time remains unchanged.

In the experiment, a fiber laser (532 nm) passes through a collimating lens (f = 200 mm) to generate a plane wave illumination. Data collection is achieved by mounting a CCD camera (Point Grey, 3.1 μm pixel size) on a precision linear stage (M-403, Physik Instrumente Inc.). The initial distance is unknown and is obtained by our adaptive method. A ground-glass diffuser (DG10-120-MD, Thorlabs, 120 grit) is inserted for speckle illumination and removed for coherent illumination. Figure 6 shows the experimental results of the reconstruction of a calibration object (N = 21, d = 1 mm). The recorded diffraction patterns under coherent and speckle illumination are shown in Figs. 6(d) and 6(e), respectively. The SQF curves in Figs. 6(a) and 6(b) are plotted for the S and P modes with the coherent data set after 1000 iterations. They indicate that directly running the coherent data set through the P and S modes is incapable of finding the optimal focusing distance. In contrast, the SQF curves of the speckle illumination, illustrated in Fig. 6(c) after 100 iterations, show maxima that collectively correspond to the focusing distance. The optimal initial distance Z0 is fixed at 31 mm from Fig. 6(c). The retrieved joint wave field under speckle illumination is shown in Fig. 6(f). The calibration sample is clearly reconstructed in Fig. 6(g) with the coherent data set. This experiment verifies the effectiveness and validity of speckle illumination for auto-focusing.


Fig. 6 The retrieved results of a calibration object. (a) and (b) are the normalized metric functions plotted in the S and P modes with the coherent data set, (c) the normalized metric functions plotted in the S mode with the speckle data set, (d) and (e) are the coherent and speckle intensity patterns, (f) and (g) are the retrieved amplitudes under the speckle and coherent illuminations. The white bar corresponds to 150 μm.


A higher imaging resolution and contrast can be achieved by our adaptive hybrid imaging method. We compare our adaptive method with the existing multi-distance phase retrieval methods (SBMIR and APR); the results are shown in Fig. 7. A negative 1951 USAF target (R3L3S1N, Thorlabs) is used as the sample and the experimental parameters are: (1) N = 15, d = 1 mm; (2) the imaging area is the central region of 1972 × 1972 pixels of the CCD camera; (3) the number of iterations is 1000. Here the optimal initial distance Z0 is fixed at 27.6 mm by the speckle-based calibration. With this preparation, the retrieved images of SBMIR, APR and our method are given in Figs. 7(a), 7(e) and 7(i). The details are zoomed in Figs. 7(b) and 7(c), Figs. 7(f)-7(g) and Figs. 7(j)-7(k). The imaging resolution and contrast of our method are superior to those of the other two methods. To show this improvement quantitatively, the line profiles along the red, green and blue lines are pictured in Figs. 7(d), 7(h) and 7(l). The selected region is not resolved by SBMIR; although it is resolved by APR, the imaging contrast is insufficient. According to the diffraction limit in Ref. [48], the theoretical resolution in this case is 2.35 μm, but it is relaxed to 6.2 μm due to the pixel limitation (pixel size = 3.1 μm). The imaging resolution of the adaptive mode in Fig. 7(l) reaches this resolution limit, which proves that our mode yields a high-resolution and stable reconstruction.


Fig. 7 The retrieved results of the negative 1951 USAF resolution chart. (a), (e) and (i) are retrieved by SBMIR, APR and our method after 1000 iterations. (b), (f) and (j) are the zoomed-in parts inside red dash rectangle. (c), (g) and (k) are the zoomed-in parts inside blue dash rectangle. (d), (h) and (l) are plotted along red, green and blue lines in (c), (g) and (k). The white bar corresponds to 500μm.


To test the multi-layer refocusing capability, a multi-section sample is used for retrieval; its experimental schematic is shown in Fig. 8. Two side-by-side calibration targets are illuminated by a plane wave and a set of intensity patterns is sequentially recorded by the CCD camera. Similar to Fig. 7, two groups of intensity images are measured with the diffuser inserted and removed. In this case, there are two initial distances with respect to the first receiving plane, Z01 and Z02. Our speckle-based calibration is used to acquire these two distances. Plugging in these distances, the output of the P mode is inversely propagated to the two target planes. Accordingly, the two-layer sample can be separated and distinguished.


Fig. 8 The experimental schematic of a multi-section object.


The detailed results for the two-layer sample are shown in Fig. 9. A segmentation operation is used to extract the useful information for the speckle-based calibration: two rectangular regions enclosing the stripe and the '3' are selected. For these two segmented regions, the SQF curves obtained by speckle-based calibration are plotted in Figs. 9(a) and 9(b), from which the refocusing distances Z01 and Z02 are fixed at 22 mm and 18.1 mm, respectively. As Z01 and Z02 are fed into the in-focus image reconstruction, the retrieved speckle and coherent images are displayed in Figs. 9(c) and 9(d) and Figs. 9(e) and 9(f). Figures 9(c) and 9(e) are recovered with Z02; the remaining two images are obtained with Z01. The stripe and the number '3' are placed in different planes; hence, for each distance, only one section is in focus while the other is blurred. To exhibit this difference explicitly, the line profiles of the stripe are captured in Fig. 9(g) along the red and blue lines. Similarly, the line profiles of the '3' are shown in Fig. 9(h). These profiles show that only the focused plane has a flat slicing outline. This experiment demonstrates the optical sectioning of our adaptive mode.


Fig. 9 The retrieved results of a two-section sample. (a) and (b) are the SQF curves for different segmented regions, (c)-(d) and (e)-(f) are the retrieved images using the coherent and speckle data sets, (g) and (h) are line profiles of the stripe and '3' along the red and blue lines in (e) and (f). The white bar at the top-left corner corresponds to 300 μm.


Another experiment on a biological sample is carried out and the results are shown in Fig. 10. An ant specimen is used as the sample. The speckle-based calibration is used to find the optimal distance (Z0 = 21 mm) and its SQF curves are plotted in Fig. 10(a). Selecting the peak value for in-focus propagation, the retrieved ant specimen is given in Fig. 10(b). Different from a binary object, the biological sample should be regarded as a multi-layer object; hence the reconstructed image in Fig. 10(b) cannot bring all layers of the sample into focus. We adopt a pixel refocusing method from Ref. [42] to achieve multi-section reconstruction. The method is as follows: (1) choose the range covering more than 80% of the peak-to-valley value of the SQF curve as the scanning distance range; (2) generate many projection images by propagating the wave field of Fig. 10(b) with the angular spectrum method; (3) for each pixel, copy the minimum intensity value from this projection data set. The multi-section reconstructed image is presented in Fig. 10(c), in which the imaging contrast is visually enhanced compared with Fig. 10(b). We also note that the contrast of the interference-like background pattern is enhanced by this refocusing operation. This drawback originates from the coherent noise of the fiber laser source. We believe it can be addressed by combining a noise suppression method with multi-distance phase retrieval in the future.
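The three-step pixel refocusing reduces to a per-pixel minimum over a stack of refocused intensities. A minimal sketch, assuming a generic angular spectrum propagator `prop(u, z)` and an illustrative function name:

```python
import numpy as np

def pixel_refocus_min(u_first, prop, z_candidates):
    """Refocus u_first to each candidate distance; keep the per-pixel minimum intensity.

    u_first      : retrieved complex field at the first measuring plane
    prop(u, z)   : propagator (negative z for backward propagation)
    z_candidates : distances spanning the SQF peak region, per step (1)
    """
    # step (2): build the stack of refocused projection intensities
    stack = np.stack([np.abs(prop(u_first, -z)) ** 2 for z in z_candidates])
    # step (3): per-pixel minimum over the projection set
    return stack.min(axis=0)
```

For an absorbing sample, a feature is darkest at the distance where it is in focus, which is why the per-pixel minimum fuses the in-focus layers of a multi-section object.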


Fig. 10 The retrieved results of ant specimen. (a) SQF curves, (b) the retrieved image, and (c) the multi-section retrieval by pixel refocusing. The white bar corresponds to 300 μm.


4. Conclusion

In this paper, we propose an adaptive multi-distance phase retrieval scheme for lensfree imaging systems that requires only the insertion of a switchable diffuser. The scheme performs adaptive auto-focusing and hence achieves a robust and stable reconstruction. This adaptive multi-distance phase retrieval has two steps: speckle-based calibration and in-focus image reconstruction. For the speckle-based calibration, the diffuser is inserted and the optimal focusing distance is selected from the maximum of the SQF curves. For the in-focus reconstruction, the diffuser is removed, a data set under coherent illumination is recorded and the sample is reconstructed by multi-distance phase retrieval with this optimal initial distance. Numerical simulations and experiments have proven the validity of our adaptive multi-distance phase retrieval algorithm. This work provides a new application of speckle imaging and a powerful auto-focusing tool for the lensfree imaging technique.

Funding

National Natural Science Foundation of China (61575055, 61575053).

Acknowledgments

The authors wish to thank the anonymous reviewers for their useful comments and suggestions.

References and links

1. L. Kipp, M. Skibowski, R. L. Johnson, R. Berndt, R. Adelung, S. Harm, and R. Seemann, “Sharper images by focusing soft X-rays with photon sieves,” Nature 414(6860), 184–188 (2001).

2. K. Huang, H. Liu, F. J. Garcia-Vidal, M. Hong, B. Luk’yanchuk, J. Teng, and C. W. Qiu, “Ultrahigh-capacity non-periodic photon sieves operating in visible light,” Nat. Commun. 6(1), 7059 (2015).

3. G. Saavedra, W. D. Furlan, and J. A. Monsoriu, “Fractal zone plates,” Opt. Lett. 28(12), 971–973 (2003).

4. S. Rehbein, S. Heim, P. Guttmann, S. Werner, and G. Schneider, “Ultrahigh-resolution soft-x-ray microscopy with zone plates in high orders of diffraction,” Phys. Rev. Lett. 103(11), 110801 (2009).

5. M. Guizar-Sicairos, S. T. Thurman, and J. R. Fienup, “Efficient subpixel image registration algorithms,” Opt. Lett. 33(2), 156–158 (2008).

6. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7(9), 739–745 (2013).

7. A. Greenbaum, W. Luo, T. W. Su, Z. Göröcs, L. Xue, S. O. Isikman, A. F. Coskun, O. Mudanyali, and A. Ozcan, “Imaging without lenses: achievements and remaining challenges of wide-field on-chip microscopy,” Nat. Methods 9(9), 889–895 (2012).

8. J. Miao, T. Ishikawa, I. K. Robinson, and M. M. Murnane, “Beyond crystallography: diffractive imaging using coherent x-ray light sources,” Science 348(6234), 530–535 (2015).

9. A. Faridian, D. Hopp, G. Pedrini, U. Eigenthaler, M. Hirscher, and W. Osten, “Nanoscale imaging using deep ultraviolet digital holographic microscopy,” Opt. Express 18(13), 14159–14164 (2010).

10. M. Shan, L. Liu, Z. Zhong, B. Liu, G. Luan, and Y. Zhang, “Single-shot dual-wavelength off-axis quasi-common-path digital holography using polarization-multiplexing,” Opt. Express 25(21), 26253–26261 (2017).

11. P. F. Almoro, P. N. Gundu, and S. G. Hanson, “Numerical correction of aberrations via phase retrieval with speckle illumination,” Opt. Lett. 34(4), 521–523 (2009).

12. C. Zuo, J. Sun, and Q. Chen, “Adaptive step-size strategy for noise-robust Fourier ptychographic microscopy,” Opt. Express 24(18), 20724–20744 (2016).

13. R. W. Gerchberg and W. O. Saxton, “A practical algorithm for the determination of phase from image and diffraction plane pictures,” Optik (Stuttg.) 35, 237–246 (1972).

14. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21(15), 2758–2769 (1982).

15. W. Chen, “Ghost identification based on single-pixel imaging in big data environment,” Opt. Express 25(14), 16509–16516 (2017).

16. W. Bishara, H. Zhu, and A. Ozcan, “Holographic opto-fluidic microscopy,” Opt. Express 18(26), 27499–27510 (2010).

17. J. M. Rodenburg, A. C. Hurst, A. G. Cullis, B. R. Dobson, F. Pfeiffer, O. Bunk, C. David, K. Jefimovs, and I. Johnson, “Hard-x-ray lensless imaging of extended objects,” Phys. Rev. Lett. 98(3), 034801 (2007).

18. J. Sun, C. Zuo, L. Zhang, and Q. Chen, “Resolution-enhanced Fourier ptychographic microscopy based on high-numerical-aperture illuminations,” Sci. Rep. 7(1), 1187 (2017).

19. G. Pedrini, W. Osten, and Y. Zhang, “Wave-front reconstruction from a sequence of interferograms recorded at different planes,” Opt. Lett. 30(8), 833–835 (2005).

20. A. Greenbaum, Y. Zhang, A. Feizi, P. L. Chung, W. Luo, S. R. Kandukuri, and A. Ozcan, “Wide-field computational imaging of pathology slides using lens-free on-chip microscopy,” Sci. Transl. Med. 6(267), 267ra175 (2014).

21. W. Luo, A. Greenbaum, Y. Zhang, and A. Ozcan, “Synthetic aperture-based on-chip microscopy,” Light Sci. Appl. 5(4), e16060 (2016).

22. W. Xu, H. Xu, Y. Luo, T. Li, and Y. Shi, “Optical watermarking based on single-shot-ptychography encoding,” Opt. Express 24(24), 27922–27936 (2016).

23. C. Guo, I. Muniraj, and J. T. Sheridan, “Phase-retrieval-based attacks on linear-canonical-transform-based DRPE systems,” Appl. Opt. 55(17), 4720–4728 (2016).

24. G. Li, W. Yang, D. Li, and G. Situ, “Cyphertext-only attack on the double random-phase encryption: Experimental demonstration,” Opt. Express 25(8), 8690–8697 (2017).

25. R. Horstmeyer, J. Chung, X. Ou, G. Zheng, and C. Yang, “Diffraction tomography with Fourier ptychography,” Optica 3(8), 827–835 (2016).

26. L. Tian and L. Waller, “3D intensity and phase imaging from light field measurements in an LED array microscope,” Optica 2(2), 104–111 (2015).

27. W. Bishara, T. W. Su, A. F. Coskun, and A. Ozcan, “Lensfree on-chip microscopy over a wide field-of-view using pixel super-resolution,” Opt. Express 18(11), 11181–11191 (2010).

28. M. Wang, S. Feng, and J. Wu, “Multilayer pixel super-resolution lensless in-line holographic microscope with random sample movement,” Sci. Rep. 7(1), 12791 (2017).

29. C. Guo, Q. Li, C. Wei, J. Tan, S. Liu, and Z. Liu, “Axial multi-image phase retrieval under tilt illumination,” Sci. Rep. 7(1), 7562 (2017).

30. C. Guo, Q. Li, J. Tan, S. Liu, and Z. Liu, “A method of solving tilt illumination for multiple distance phase retrieval,” Opt. Lasers Eng. 106, 17–23 (2018).

31. Z. Ren, N. Chen, and E. Y. Lam, “Automatic focusing for multisectional objects in digital holography using the structure tensor,” Opt. Lett. 42(9), 1720–1723 (2017).

32. Y. Zhang, H. Wang, Y. Wu, M. Tamamitsu, and A. Ozcan, “Edge sparsity criterion for robust holographic autofocusing,” Opt. Lett. 42(19), 3824–3827 (2017).

33. P. Gao, B. Yao, R. Rupp, J. Min, R. Guo, B. Ma, J. Zheng, M. Lei, S. Yan, D. Dan, and T. Ye, “Autofocusing based on wavelength dependence of diffraction in two-wavelength digital holographic microscopy,” Opt. Lett. 37(7), 1172–1174 (2012).

34. A. Jesacher, M. R. Marte, and R. Piestun, “Three-dimensional information from two-dimensional scans: a scanning microscope with postacquisition refocusing capability,” Optica 2(3), 210–213 (2015).

35. J. Wu, G. Zheng, Z. Li, and C. Yang, “Focal plane tuning in wide-field-of-view microscope with Talbot pattern illumination,” Opt. Lett. 36(12), 2179–2181 (2011).

36. S. Jiang, J. Liao, Z. Bian, K. Guo, Y. Zhang, and G. Zheng, “Transform- and multi-domain deep learning for single-frame rapid autofocusing in whole slide imaging,” Biomed. Opt. Express 9(4), 1601–1612 (2018).

37. Z. Ren, Z. Xu, and E. Y. Lam, “Learning-based nonparametric autofocusing for digital holography,” Optica 5(4), 337–344 (2018).

38. J. W. Goodman, Introduction to Fourier Optics, 2nd ed. (McGraw-Hill, 1996).

39. C. Guo, Q. Li, X. Zhang, J. Tan, S. Liu, and Z. Liu, “Enhancing imaging contrast via weighted feedback for iterative multi-image phase retrieval,” J. Biomed. Opt. 23(1), 1–10 (2018).

40. I. Freund, M. Rosenbluh, and S. Feng, “Memory effects in propagation of optical waves through disordered media,” Phys. Rev. Lett. 61(20), 2328–2331 (1988).

41. J. Bertolotti, “Multiple scattering: unravelling the tangle,” Nat. Phys. 11(8), 622–623 (2015).

42. Y. S. Choi and S. J. Lee, “Three-dimensional volumetric measurement of red blood cell motion using digital holographic microscopy,” Appl. Opt. 48(16), 2983–2990 (2009).

43. E. Krotkov, “Focusing,” Int. J. Comput. Vis. 1(3), 223–237 (1987).

44. W. Harm, C. Roider, A. Jesacher, S. Bernet, and M. Ritsch-Marte, “Lensless imaging through thin diffusive media,” Opt. Express 22(18), 22146–22156 (2014).

45. A. K. Singh, D. N. Naik, G. Pedrini, M. Takeda, and W. Osten, “Looking through a diffuser and around an opaque surface: a holographic approach,” Opt. Express 22(7), 7694–7701 (2014).

46. E. Mudry, K. Belkebir, J. Girard, J. Savatier, E. Le Moal, C. Nicoletti, M. Allain, and A. Sentenac, “Structured illumination microscopy using unknown speckle patterns,” Nat. Photonics 6(5), 312–315 (2012).

47. S. Dong, P. Nanda, R. Shiradkar, K. Guo, and G. Zheng, “High-resolution fluorescence imaging via pattern-illuminated Fourier ptychography,” Opt. Express 22(17), 20856–20870 (2014).

48. T. Latychevskaia and H. W. Fink, “Resolution enhancement in digital holography by self-extrapolation of holograms,” Opt. Express 21(6), 7726–7733 (2013).


Equations (11)

$$I_n = \left| u_n \right|^2 + \varepsilon_n = \left| \mathcal{A}_n u_0 \right|^2 + \varepsilon_n,$$

$$\hat{u}_{n+1}^{\,k} = \mathcal{A}_{d}\!\left[ \sqrt{I_n}\, \frac{\hat{u}_n^{\,k}}{\left| \hat{u}_n^{\,k} \right|} \right],$$

$$\hat{u}_{1}^{\,k+1} = \mathcal{A}_{N\to 1}\!\left[ \sqrt{I_N}\, \frac{\hat{u}_N^{\,k}}{\left| \hat{u}_N^{\,k} \right|} \right],$$

$$\Delta = \sum_{x,y} \Bigl| \left| \hat{u}_1^{\,k} \right| - \sqrt{I_1} \Bigr|,$$

$$\hat{u}_{n}^{\,k} = \mathcal{A}_{1\to n}\!\left[ \sqrt{I_1}\, \frac{\hat{u}_1^{\,k}}{\left| \hat{u}_1^{\,k} \right|} \right],\qquad n\in[2,N],$$

$$\hat{u}_{1}^{\,k+1} = \frac{1}{N-1} \sum_{n=2}^{N} \mathcal{A}_{n\to 1}\!\left[ \sqrt{I_n}\, \frac{\hat{u}_n^{\,k}}{\left| \hat{u}_n^{\,k} \right|} \right],$$

$$\mathrm{GRA}(z) = \iint \left| \nabla I(x,y;z) \right| \,dx\,dy,$$

$$\mathrm{SPEC}(z) = \iint \ln\!\left\{ 1 + \left| \mathcal{F}\!\left[ I(x,y;z) - \bar{I}(z) \right] \right| \right\} dx\,dy,$$

$$\mathrm{LAP}(z) = \iint \left\{ \nabla^2 I(x,y;z) \right\}^2 dx\,dy,$$

$$\mathrm{SG}(z) = \iint \left\{ \nabla I(x,y;z) \right\}^2 dx\,dy,$$

$$\mathrm{Tenengrad}(z) = \iint \left[ i_x \otimes I(x,y;z) \right]^2 + \left[ i_y \otimes I(x,y;z) \right]^2 dx\,dy,$$
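The sharpness quantification functions can be evaluated directly on a stack of refocused intensity images to locate the in-focus distance. The following sketch (assumed helper names, not the authors' code) implements the SG and SPEC metrics with finite differences and an FFT, then picks the distance with the maximum score:

```python
import numpy as np

def sqf_sg(I):
    """SG(z): sum of the squared intensity gradient (finite differences)."""
    gy, gx = np.gradient(I)
    return float(np.sum(gx**2 + gy**2))

def sqf_spec(I):
    """SPEC(z): sum of ln(1 + |F[I - mean(I)]|) over the spectrum."""
    return float(np.sum(np.log1p(np.abs(np.fft.fft2(I - I.mean())))))

def pick_focus(stack, z_values, sqf=sqf_sg):
    """Given refocused intensities and their distances, return the z
    whose image maximizes the chosen sharpness metric."""
    scores = [sqf(I) for I in stack]
    return z_values[int(np.argmax(scores))]
```

Because an out-of-focus projection is spatially smoothed, its gradient energy and high-frequency spectral content both drop, so either metric peaks near the true sample-to-sensor distance.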