Mask-modulated lensless imaging via translated structured illumination

Abstract

Lensless microscopy enables high-resolution image recovery over a large field of view. By integrating the concept of phase retrieval, it can also retrieve the lost phase information from intensity-only measurements. Here we report a mask-modulated lensless imaging platform based on translated structured illumination. In the reported platform, we sandwich the object in-between a coded mask and a naked image sensor for lensless data acquisition. An LED array provides angle-varied illumination for projecting a translated structured pattern without involving mechanical scanning. For different LED elements, we acquire lensless intensity data for recovering the complex-valued object. In the reconstruction process, we employ the regularized ptychographic iterative engine and implement an up-sampling process in the reciprocal space. As demonstrated by experimental results, the reported platform is able to recover complex-valued object images with higher resolution and better quality than previous implementations. Our approach may provide a cost-effective solution for high-resolution and wide field-of-view ptychographic imaging without involving mechanical scanning.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Conventional imaging systems employ lenses to relay objects to the image plane. Aberration and dispersion caused by an imperfect lens often degrade the performance of the imaging system. It is challenging to implement aberration correction and dispersion compensation in lens systems, leading to the well-known trade-off between resolution and imaging field of view (FOV). Lensless on-chip microscopy [1–3] is an imaging scheme that has the potential to address this trade-off, enabling high-resolution imaging across a large FOV. In a typical implementation, the target object is directly placed on the image sensor without using any lens, enabling a cost-effective set-up in a compact format. The imaging FOV of lensless on-chip microscopy is determined by the size of the utilized image sensor. By integrating the concept of phase retrieval, it can also retrieve the lost phase information from intensity-only measurements. As such, it enables phase imaging of transparent objects such as unstained biological samples and allows for post-measurement digital refocusing.

Various lensless on-chip microscopy schemes have been reported in the past few years [4–13]. The multi-height scheme uses a coherent or partially coherent light source for object illumination and records a sequence of intensity images at different object-to-sensor distances. With the recorded intensity images, the complex object image is computationally recovered through the phase retrieval process [4–6]. In this scheme, either the object or the image sensor has to be mechanically translated in the axial direction for phase-diversity measurements. In the ptychographic scanning scheme, the axial motion of the object or image sensor is replaced by lateral scanning of a laser source or a pinhole for object illumination [7–9]. M. Stockmar et al. proposed to use a diffuser to create structured illumination and move the object laterally for phase-diversity measurements [10]. However, precise mechanical motion is required in all the aforementioned schemes. Recently, a blind ptychographic lensless scheme, in which knowledge of the lateral shift is not required, has been successfully demonstrated [11,12]. G. Zheng et al. proposed to place a thin diffuser in-between the object and the image sensor for light field modulation. A sequence of intensity images is recorded as the diffuser randomly moves to different positions. The adopted image recovery algorithm estimates the displacement of the diffuser from the recorded intensity images, with which the complex object image is subsequently recovered.

For many biomedical applications, it is desirable to implement lensless microscopy without mechanical scanning. The multi-wavelength scheme is one strategy for introducing phase diversity without involving mechanical scanning [13,14]. In this scheme, images recorded at different wavelengths are used to recover the complex object information. However, as discussed in [15–17], this scheme relies on the assumption that the object is transparent or semi-transparent at the different wavelengths. Such an assumption can hardly be ensured in many biomedical applications, such as imaging of regular stained histology slides. Recently, a wide-field mask-modulated lensless platform that involves no mechanical scanning was reported [18]. In this platform, a programmable LED array generates angle-varied illumination on the object and the detector records the resulting intensity images through an optical mask. The key to successful image recovery is the use of the mask for light field modulation, which provides a support-domain constraint. However, the method only achieved a half-pitch resolution of 4.92 µm when using an image sensor with a pixel size of 3.45 µm. The achieved resolution is even worse than the sampling resolution allowed by the sensor, which limits its applications in imaging biological samples.

In this paper, we report a new implementation of the mask-modulated lensless imaging scheme using translated structured illumination. The reported method achieves remarkably higher spatial resolution than the original implementation, surpassing the sampling resolution allowed by the image sensor. We make two major modifications to the original method. The first is to swap the positions of the mask and the object, so that the target object is sandwiched in-between the mask and the image sensor. By projecting light from different LED elements, we generate, in effect, a full-field translated structured illumination pattern on the entire complex object. The second modification concerns the image recovery algorithm. Specifically, we employ the rPIE (regularized ptychographic iterative engine) algorithm to iteratively optimize the complex object image and implement an up-sampling process in the reciprocal space. Our approach may provide a cost-effective solution for high-resolution and wide-FOV ptychographic imaging without involving mechanical scanning.

2. Method

2.1 Set-up

The schematic diagram of our method is shown in Fig. 1, where the object to be imaged is sandwiched in-between an image sensor and an optical mask. Specifically, the image sensor is placed at the bottom and the mask is placed on top. We note that the mask was placed in-between the image sensor and the object in the original implementation. As will be demonstrated in the experiments, such a modification allows for resolution improvement in image recovery. In the reported platform, we use a programmable LED array to generate angle-varied illumination, as the LED array is lightweight, low-cost, and able to generate accurate, repeatable, and fast angle-varied illumination. More importantly, no mechanical scanning is involved. Each element in the LED array generates a plane wave with a unique incident angle. When the LEDs are lit up sequentially, the incident angle of the resulting illumination light field changes accordingly, generating a translated structured pattern on the complex object. The image sensor records the resulting intensity images corresponding to the different incident angles. With the recorded images, the complex object image (with both amplitude and phase) is recovered through a ptychographic phase retrieval process. We note that the mask is essential in this lensless imaging scheme: its pattern is pre-defined, it modulates the illumination field on the object, and it provides a support constraint in the phase retrieval process.

Fig. 1. Schematic diagram of mask-modulated lensless imaging with multi-angle illuminations.

2.2 Forward imaging model

The relationship between the translated structured illumination and the recorded intensity images is governed by the forward imaging model. As Fig. 2(a) shows, the i-th LED, located at $(x_{\mathrm{LED}_i}, y_{\mathrm{LED}_i})$, generates a plane wave at the central wavelength $\lambda$. The resulting illumination light field on the mask can be expressed as

$$P_i(x,y) = \exp\left[\frac{\mathrm{j}2\pi(k_x x + k_y y)}{\lambda}\right],$$

where $(x,y)$ is the spatial coordinate, j is the imaginary unit, $k_x = -\sin[\tan^{-1}(x_{\mathrm{LED}_i}/d)]$, $k_y = -\sin[\tan^{-1}(y_{\mathrm{LED}_i}/d)]$, and $d$ is the distance between the i-th LED and the center of the mask. The plane wave is characterized by its incident angle $\theta$ on the mask, where $\theta = \tan^{-1}\left(\sqrt{x_{\mathrm{LED}_i}^2 + y_{\mathrm{LED}_i}^2}/d_0\right)$ and $d_0$ is the distance between the LED array plane and the mask plane. The illumination light field is then modulated by the optical mask, resulting in $U_1 = M \cdot P_i$, where $M$ is the mask pattern and “$\cdot$” denotes element-wise multiplication. Here we omit the coordinates $(x,y)$ for simplicity.
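For concreteness, the following is a minimal numpy sketch (not the authors' code) of the tilted plane wave in Eq. (1); the grid size, pixel pitch, and LED coordinates passed in are illustrative assumptions.

    import numpy as np

    def led_plane_wave(x_led, y_led, d, wavelength, nx, ny, pixel):
        """Tilted plane wave P_i(x, y) on the mask plane for an LED at (x_led, y_led)."""
        x = (np.arange(nx) - nx / 2) * pixel
        y = (np.arange(ny) - ny / 2) * pixel
        X, Y = np.meshgrid(x, y)
        kx = -np.sin(np.arctan(x_led / d))  # direction cosine along x, as in Eq. (1)
        ky = -np.sin(np.arctan(y_led / d))  # direction cosine along y
        return np.exp(1j * 2 * np.pi * (kx * X + ky * Y) / wavelength)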

Fig. 2. (a) Forward imaging model of the proposed method and (b) illustration of the angular illumination generated by the i-th LED.

The mask pattern is pre-defined, random, and binary, as shown in Fig. 3(a). The mask-modulated illumination light field, $U_1$, propagates to the object plane and results in structured illumination, $U_2$, on the object. The light field $U_2$ can be expressed as $U_2 = \mathrm{PSF}_1 \ast U_1$, where $\mathrm{PSF}_1$ denotes the point spread function (PSF) for free-space propagation over distance $d_1$ and “$\ast$” denotes convolution. We note that the convolution is typically implemented by element-wise multiplication in the Fourier space. Thus, $\mathrm{PSF}_1$ is presented here in the form of its Fourier transform,

$$F\{\mathrm{PSF}_1\} = \exp\left(\frac{\mathrm{j}2\pi d_1}{\lambda}\cdot\sqrt{1-(\lambda f_x)^2-(\lambda f_y)^2}\right),$$

where $F\{\,\}$ denotes the Fourier transform, and $f_x$ and $f_y$ denote the spatial frequencies along the x and y directions, respectively. After this propagation, the light field interacts with the object $O$ and the resulting light field below the object plane can be written as $U_3 = O \cdot U_2$. The light field then propagates from the object plane to the image sensor plane. The resulting light field on the image sensor can be written as $U_4 = \mathrm{PSF}_2 \ast U_3$, where the Fourier transform of $\mathrm{PSF}_2$ is $\exp\left[(\mathrm{j}2\pi d_2/\lambda)\cdot\sqrt{1-(\lambda f_x)^2-(\lambda f_y)^2}\right]$. Finally, the image sensor records the intensity of the light field $U_4$. Figure 3(b) shows a sample intensity image recorded using a USAF-1951 resolution test chart as the target object. The whole forward imaging process for the i-th recorded intensity image $I_i$ can be summarized as
$$I_i(x,y) = |U_4(x,y)|^2 = |\mathrm{PSF}_2 \ast \{\mathrm{PSF}_1 \ast [M(x,y)\cdot P_i(x,y)]\cdot O(x,y)\}|^2.$$
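As an illustration, here is a minimal numpy sketch, under the same assumptions as the snippet above, of the forward model in Eq. (3): angular-spectrum propagation following Eq. (2), mask modulation, object interaction, and intensity detection. The variable names (mask, obj, d1, d2) are ours, and the clamping of evanescent components is a practical choice rather than part of the paper's derivation.

    import numpy as np

    def angular_spectrum_propagate(field, distance, wavelength, pixel):
        """Free-space propagation of a complex field over `distance` (Eq. 2, PSF_1/PSF_2)."""
        ny, nx = field.shape
        fx = np.fft.fftfreq(nx, d=pixel)
        fy = np.fft.fftfreq(ny, d=pixel)
        FX, FY = np.meshgrid(fx, fy)
        arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
        # Evanescent components are clamped to zero phase (illustrative simplification).
        H = np.exp(1j * 2 * np.pi * distance / wavelength * np.sqrt(np.maximum(arg, 0)))
        return np.fft.ifft2(np.fft.fft2(field) * H)

    def forward_model(P_i, mask, obj, d1, d2, wavelength, pixel):
        """Intensity image I_i recorded for the illumination field P_i (Eq. 3)."""
        U1 = mask * P_i                                             # modulation by the mask
        U2 = angular_spectrum_propagate(U1, d1, wavelength, pixel)  # mask -> object plane
        U3 = obj * U2                                               # interaction with object
        U4 = angular_spectrum_propagate(U3, d2, wavelength, pixel)  # object -> sensor plane
        return np.abs(U4) ** 2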

Fig. 3. (a) The optical mask with a random pattern used in our experiments. (b) A sample intensity image recorded through the forward imaging model, using the mask in (a) for modulation and a USAF-1951 resolution test chart as the target object. (c) The recovered USAF-1951 resolution test chart image (intensity).

By lighting up the LEDs sequentially, a series of intensity images, {Ii}, is recorded. The number of recorded images equals the number of LEDs used. With the algorithm described in the following section, the object image can be recovered from the recorded intensity images, as Fig. 3(c) shows.
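A hypothetical usage of the two sketches above, simulating one intensity image per LED for the central 9 × 9 grid; the object, mask, and distances here are placeholders rather than the values given in Table 1.

    import numpy as np

    wavelength, pixel = 510.5e-9, 3.45e-6       # green channel, sensor pixel size
    d0, d1, d2 = 87e-3, 1e-3, 1e-3              # placeholder distances (not those of Table 1)
    obj = np.ones((512, 512), dtype=complex)    # placeholder complex object
    mask = (np.random.rand(512, 512) > 0.5).astype(float)  # placeholder random binary mask

    images = []
    for iy in range(-4, 5):                     # central 9 x 9 LEDs, 4 mm pitch
        for ix in range(-4, 5):
            P_i = led_plane_wave(ix * 4e-3, iy * 4e-3, d0, wavelength, 512, 512, pixel)
            images.append(forward_model(P_i, mask, obj, d1, d2, wavelength, pixel))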

2.3 Image recovery algorithm

The recovery process of the complex object image is based on the alternating projection strategy [8,9]. Subject to the forward imaging model, the image recovery algorithm attempts to reproduce the complex object image from the sequence of recorded intensity images. Compared with the image recovery reported in [18], the algorithm employed in our work has two modifications. First, the original ptychographic iterative engine (PIE) is replaced by the rPIE algorithm [8]. Second, the reciprocal-space up-sampling algorithm [19] is adopted for image upsampling. The modified image recovery algorithm is detailed below.

The algorithm proceeds iteratively. Each iteration starts from the complex object image recovered in the previous iteration, $O^{(i-1)}$; for the first iteration, $O^{(i-1)}$ is the initial guess of the object image. Given the iteration number i, which ranges from 1 to the number of recorded images, the i-th illumination light field $P_i$ is calculated with Eq. (1). Then the light field on the sensor plane, $U_4$, is calculated following Eq. (3). The light field on the sensor plane is then updated with the following equation,

$$U_4^\prime = U_4(x,y)\cdot\left(\frac{\sqrt{I_i(x,y)_{\uparrow Q}}}{\sqrt{\{[\,|U_4(x,y)|^2 \ast \mathrm{ones}(Q,Q)\,]_{\downarrow Q}\}_{\uparrow Q}}}\right),$$
where $\mathrm{ones}(Q,Q)$ denotes an all-ones matrix with Q × Q entries, the subscript “$\downarrow Q$” represents downsampling by a factor of Q, and the subscript “$\uparrow Q$” represents upsampling by a factor of Q using the nearest-neighbor algorithm. As stated in [19], the downsampling process scales the intensities of the Q × Q subpixels so that their sum matches the recorded image pixel intensity. The updated light field, denoted by $U_4^\prime$, is then back-propagated to the object plane, resulting in the light field $U_3^\prime$,
$$U_3^\prime = \mathrm{conj}(\mathrm{PSF}_2) \ast U_4^\prime(x,y).$$
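To make the sub-pixel bookkeeping concrete, the following is a minimal numpy sketch (ours, not the authors' code) of the sensor-plane update of Eq. (4) and the back-propagation of Eqs. (5) and (8); it reuses the angular_spectrum_propagate helper from the forward-model sketch, and the small regularizer eps is an assumption to avoid division by zero.

    import numpy as np

    def sensor_plane_update(U4, I_meas, Q, eps=1e-12):
        """Eq. (4): enforce the measured intensity on the up-sampled sensor-plane field."""
        ny, nx = U4.shape
        intensity = np.abs(U4) ** 2
        # Sum |U4|^2 over Q x Q sub-pixel blocks (the "convolve with ones(Q,Q), then
        # downsample" step), then replicate each binned value back onto the fine grid.
        binned = intensity.reshape(ny // Q, Q, nx // Q, Q).sum(axis=(1, 3))
        binned_up = np.repeat(np.repeat(binned, Q, axis=0), Q, axis=1)
        I_up = np.repeat(np.repeat(I_meas, Q, axis=0), Q, axis=1)  # nearest-neighbor upsampling of I_i
        return U4 * np.sqrt(I_up) / (np.sqrt(binned_up) + eps)

    def back_propagate(field, distance, wavelength, pixel):
        """Eqs. (5)/(8): back-propagation via the conjugated transfer function."""
        return angular_spectrum_propagate(field, -distance, wavelength, pixel)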

Next, the complex object image is updated using the rPIE algorithm as follows

$$O^{(i)} = O^{(i-1)} + \beta\cdot\frac{\mathrm{conj}(U_2)}{(1-\alpha)\cdot|U_2|^2 + \alpha\cdot\max_{x,y}(|U_2|^2)}\cdot(U_3^\prime - U_3),$$
where $\mathrm{conj}(\,)$ denotes complex conjugation, and $\alpha$ and $\beta$ are two weights. With the updated object image, we can further calculate the updated light field above the object, that is,
$$U_2^\prime = U_2 + \beta\cdot\frac{\mathrm{conj}(O^{(i)})}{(1-\alpha)\cdot|O^{(i)}|^2 + \alpha\cdot\max_{x,y}(|O^{(i)}|^2)}\cdot(U_3^\prime - U_3).$$
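Equations (6), (7), and (9) share the same rPIE-style form, so a single helper (a sketch under our naming, not the authors' code) captures all three updates; the commented lines show how it would be applied for Eqs. (6) and (7), and the same call pattern is reused for the mask update further below.

    import numpy as np

    def rpie_update(current, probe, delta, alpha=0.9, beta=0.2):
        """rPIE-style update: current + beta * conj(probe) * delta / weighted |probe|^2."""
        p2 = np.abs(probe) ** 2
        denom = (1 - alpha) * p2 + alpha * p2.max()
        return current + beta * np.conj(probe) * delta / denom

    # Eq. (6): object update, with U2 acting as the "probe".
    # O_new  = rpie_update(O, U2, U3_new - U3)
    # Eq. (7): update of the field above the object, with the updated object as "probe".
    # U2_new = rpie_update(U2, O_new, U3_new - U3)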

Moreover, the light field below the mask, $U_1^\prime$, can be derived by back-propagating the light field $U_2^\prime$ over the distance $d_1$. The mask pattern can then be updated with $P_i$, $U_1$, and $U_1^\prime$,

$$U_1^\prime = \mathrm{conj}(\mathrm{PSF}_1) \ast U_2^\prime(x,y),$$
$$M^{(i)} = M^{(i-1)} + \beta\cdot\frac{\mathrm{conj}(P_i(x,y))}{(1-\alpha)\cdot|P_i(x,y)|^2 + \alpha\cdot\max_{x,y}(|P_i(x,y)|^2)}\cdot(U_1^\prime - U_1).$$

We note that although the mask pattern is known, updating it is still necessary. As demonstrated in [18], updating the mask in each iteration adds robustness to the image recovery process. Once both the complex object image and the mask pattern have been updated, one iteration is complete.

In the next iteration, the (i+1)-th recorded intensity image is substituted into the aforementioned process to update the complex object image and the mask pattern. When all recorded intensity images have been used once, one loop is complete. Generally, 5 to 8 loops are enough for convergence of the image recovery.
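Putting the pieces together, the following sketch chains the helper functions above into one recovery loop over all LEDs; the argument names (images, led_positions, d0, d1, d2, Q) and the per-LED use of d0 in place of the exact LED-to-mask distance are our simplifications, not the authors' implementation.

    def recovery_loop(O, M, images, led_positions, d0, d1, d2, wavelength, pixel, Q,
                      alpha=0.9, beta=0.2):
        """One loop: run the Eq. (4)-(9) updates once for every recorded image."""
        ny, nx = O.shape
        for I_meas, (x_led, y_led) in zip(images, led_positions):
            P_i = led_plane_wave(x_led, y_led, d0, wavelength, nx, ny, pixel)   # Eq. (1)
            U1 = M * P_i
            U2 = angular_spectrum_propagate(U1, d1, wavelength, pixel)
            U3 = O * U2
            U4 = angular_spectrum_propagate(U3, d2, wavelength, pixel)          # Eq. (3)
            U4_new = sensor_plane_update(U4, I_meas, Q)                         # Eq. (4)
            U3_new = back_propagate(U4_new, d2, wavelength, pixel)              # Eq. (5)
            O = rpie_update(O, U2, U3_new - U3, alpha, beta)                    # Eq. (6)
            U2_new = rpie_update(U2, O, U3_new - U3, alpha, beta)               # Eq. (7)
            U1_new = back_propagate(U2_new, d1, wavelength, pixel)              # Eq. (8)
            M = rpie_update(M, P_i, U1_new - U1, alpha, beta)                   # Eq. (9)
        return O, M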

We note that the whole image recovery process in our implementation consists of two stages: Stage 1 refers to the first loop and Stage 2 to the remaining loops.

In Stage 1, we implement a blind recovery, i.e., object image recovery with a guessed mask pattern. The motivation for the blind recovery is to calibrate the mask pattern. Although the pattern fabricated on the mask is pre-defined, it is difficult to guarantee that the mask is well aligned with the image sensor, specifically, that the center of the mask pattern is registered with the center of the image sensor. Consequently, the actual mask pattern may be laterally shifted and rotated relative to the ideal mask pattern. Even a small misalignment can cause the recovery process to fail completely, especially when the feature size of the mask is small. In this stage, the complex object image and the mask pattern are each initialized as an all-ones matrix. Then, one loop of image recovery is conducted, within which the actual mask pattern and the object image are roughly recovered. By registering the recovered mask pattern against the ideal mask pattern, we obtain the lateral shift and rotation of the recovered mask relative to the ideal one. We then shift and rotate the ideal mask pattern accordingly to obtain the actual mask pattern. The actual mask pattern and the recovered object image are used as the initial guesses in the subsequent Stage 2 of image recovery.
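As one way the Stage-1 calibration could be carried out, the following sketch estimates the lateral shift between the ideal mask pattern and the roughly recovered one by phase correlation; this is our illustrative choice of registration method (rotation, which the paper also calibrates, is not handled here).

    import numpy as np

    def estimate_shift(ideal_mask, recovered_mask):
        """Estimate the (row, col) lateral shift of the recovered mask via phase correlation."""
        F1 = np.fft.fft2(ideal_mask)
        F2 = np.fft.fft2(np.abs(recovered_mask))    # the recovered mask is complex-valued
        cross_power = F1 * np.conj(F2)
        cross_power /= np.abs(cross_power) + 1e-12  # keep only the phase of the spectrum
        corr = np.fft.ifft2(cross_power)
        peak = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
        shift = np.array(peak, dtype=float)
        dims = np.array(corr.shape)
        wrap = shift > dims / 2                     # wrap large shifts to negative values
        shift[wrap] -= dims[wrap]
        return shift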

3. Experiments

We experimentally demonstrate the proposed method with a USAF-1951 resolution target and two biological samples. To generate angle-varied illumination, we use a programmable LED array with 32 × 32 elements, each of which has red, green, and blue LED chips. Both the color and the ON/OFF status of each LED can be controlled independently. The pitch between adjacent LEDs is 4 mm.

As shown by the spectra in Fig. 4, the LED array generates red (632.2 nm), green (510.5 nm), and blue (465.5 nm) illumination of partial coherence. In our experiments, we only use the central 9 × 9 elements of the LED array, as using more LEDs produces more recorded intensity images and therefore longer acquisition and recovery times. The distances in our set-up are given in Table 1. Given these distances, the illumination angle varies from -3.55 degrees to 3.55 degrees. In total, 81 intensity images are recorded with an exposure time of 27.5 ms per image. The images are recorded by an image sensor (CM3-U3-50S5M-CS, Point Grey) with a pixel size of 3.45 µm. The image sensor has 2448 × 2048 pixels, allowing for a FOV of 8.45 mm (V) × 7.07 mm (H). The mask is fabricated by coating a coverslip with chromium; the thickness of the mask is ∼1 mm. The mask carries a pre-defined random binary pattern, as Fig. 3(a) shows. The ratio of transparent area to light-blocking area is designed to be 1 (that is, a fill factor of 0.5). The feature size of the random pattern is 27.6 µm, which is 8 times the pixel size of the image sensor. The recorded intensity images are up-sampled with a factor of Q = 2. The parameters used in the image recovery [Eqs. (6), (7), and (9)] are $\alpha = 0.9$ and $\beta = 0.2$.
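To illustrate the stated mask parameters only (the actual fabricated layout is the authors'), a random binary pattern with 8-pixel features and a 0.5 fill factor could be generated as follows.

    import numpy as np

    rng = np.random.default_rng(0)
    feature_px = 8                                        # 27.6 um feature / 3.45 um pixel
    ny, nx = 2048 // feature_px, 2448 // feature_px       # coarse grid covering the sensor
    coarse = (rng.random((ny, nx)) < 0.5).astype(float)   # fill factor of 0.5
    mask_pattern = np.repeat(np.repeat(coarse, feature_px, axis=0), feature_px, axis=1)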

Fig. 4. The spectra (measured using an Ocean Optics USB4000 spectrometer) of the LED array (inset) used in our experiments.

Table 1. Distances in the resolution test experiment

3.1 Resolution test

In the first experiment, we characterize the achievable resolution of the proposed method. A USAF-1951 resolution test chart is used for the resolution test. In this experiment, only the green LEDs are used, so the recovered images are monochromatic. We present the complex object image and mask pattern recovered within each loop in Fig. 5. As the figure shows, although the complex object image and the mask pattern are initialized as all-ones matrices, both can be roughly recovered within Loop 1 (Stage 1). The mask pattern recovered in Stage 1 is used for lateral shift and rotation calibration. The calibrated mask pattern is then used as the initial guess of the mask in Loop 2 (the first loop of Stage 2). Similarly, the recovered object image is used as the initial guess of the object in the first loop of Stage 2. The complex object image converges rapidly in Stage 2. The evolution of the recovery quality can be observed in the enlarged regions.

Fig. 5. Evolution of the complex object image and the mask pattern recovered by the proposed method.

To demonstrate the effectiveness of the proposed method, we compare it with the original method [18]. The distances in both the original set-up and our improved set-up are given in Table 1. With this comparison, we also show how each of the two modifications to the original method affects the image recovery result. As there are four combinations of original/improved set-up and original/improved recovery algorithm, we show four sets of recovery results in Fig. 6. For a fair comparison, all four cases employ the two-stage image recovery strategy described in Section 2.3, with Stage 1 consisting of 1 loop and Stage 2 consisting of 5 loops.

Fig. 6. Comparison of the recovered images using the USAF-1951 resolution target.

As Fig. 6 shows, the original method can only resolve Group 6 Element 6, whose line width is 4.38 µm. The achieved resolution is even poorer than the sampling resolution (3.45 µm) allowed by the image sensor. Artifacts also appear in the background and around the edges of the bars. When the original set-up is combined with the improved image recovery algorithm, the recovered image looks sharper: the bars in Group 6 Element 6 become distinguishable, but Group 7 Element 1 still cannot be resolved. When the improved set-up is combined with the original image recovery algorithm, a remarkable quality improvement is observed in the recovered image. Group 7 Element 2 (3.48 µm) can be resolved and the undesired artifacts are eliminated. The phase is also recovered with finer quality. For the proposed method, in which both the set-up and the algorithm are improved, Group 7 Element 5 can be resolved. The achieved half-pitch resolution reaches 2.46 µm, surpassing the sampling resolution of the image sensor. In short, the proposed method improves the half-pitch resolution from 4.38 µm to 2.46 µm with reduced artifacts.

In terms of computational complexity, the proposed method adds negligible cost. We compare the computation time of the original and improved algorithms on a computer equipped with an Intel Core i7-9750H CPU (2.6 GHz), 16 GB RAM, an Nvidia GTX 1650 graphics card, the Windows 10 operating system, and MATLAB 2018b. To recover the object image with 1,200 × 1,200 pixels (up-sampled with a factor of Q = 2) using 6 loops, the original algorithm takes 103.6 seconds, while the improved algorithm takes 109.9 seconds.

3.2 Biological samples imaging

We further demonstrate the proposed method on biological sample imaging. Two biological samples, a lily style and a lotus root, are used: the former for monochromatic imaging and the latter for full-color imaging. For monochromatic imaging, we use the central 81 green LEDs in the array to generate angle-varied illumination. As in the resolution test above, we present comparison results for the biological samples; the results are shown in Figs. 7 and 8. For full-color imaging, we use the central 81 LEDs to generate red, green, and blue illumination and recover the three chromatic components separately. The final full-color result is obtained by fusing the three recovered chromatic components. The images are recovered with 6 loops. As the figures show, both results demonstrate that the proposed method improves the resolution and the quality of image recovery. The images recovered by the proposed method reveal finer details than those recovered by the original method, and the image contrast is improved accordingly.
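As a trivial sketch of the fusion step (variable names are our assumptions), the three independently recovered amplitude images could be combined into an RGB image as follows.

    import numpy as np

    # O_red, O_green, O_blue: complex object images recovered under the three illuminations.
    rgb = np.stack([np.abs(O_red), np.abs(O_green), np.abs(O_blue)], axis=-1)
    rgb /= rgb.max()   # normalize for display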

Fig. 7. Comparison of the recovered monochromatic images on a biological sample (lily).

Fig. 8. Comparison of the recovered full-color images on a biological sample (lotus root).

4. Discussion

The experimental results demonstrate the effectiveness of the proposed method in improving image recovery quality. We believe that the achievable resolution might be further improved by considering the following factors.

  • 1. Using an image sensor with a smaller pixel size, which might allow finer structures to be recorded in the raw intensity images.
  • 2. Enhancing the coherence of the illumination source, especially the temporal coherence. In our experiments, the bandwidth of the LEDs is ∼30 nm, resulting in much poorer temporal coherence than that of a laser source. The weak coherence of our current set-up results in the loss of fine details in the recorded intensity images, and the lost information can hardly be recovered through computation.
  • 3. Increasing the range of illumination angles. As demonstrated in [20], large-angle illumination allows the image sensor to record the high-frequency diffraction components of the object. Meanwhile, a denser arrangement of angle-varied illumination might also improve the spatial resolution [20,21].

In this work, we mainly focus on the image recovery quality. Apart from imaging quality, the image acquisition time should also be taken into consideration in practical use. Inspired by Tian et al. [22] and Zhou et al. [23], the image acquisition process might be accelerated by using a multiplexed coded illumination strategy.

5. Conclusion

We report a new implementation of the mask-modulated lensless imaging scheme using translated structured illumination. In the reported platform, the object is sandwiched in-between a coded mask and a naked image sensor for lensless data acquisition. An LED array is used to provide angle-varied illumination for projecting a translated structured pattern without involving mechanical scanning. In the reconstruction process, we employ the regularized ptychographic iterative engine and implement an up-sampling process in the reciprocal space. Experiments demonstrate that the reported method can achieve a sub-pixel resolution (2.46 µm half-pitch resolution) and a wide FOV (8.45 mm × 7.07 mm) by using an image sensor with 3.45-µm pixel size. Our method may provide a cost-effective solution for high-resolution and wide-FOV ptychographic imaging without mechanical scanning.

Funding

National Natural Science Foundation of China (61875074, 61905098, 62071219); Guangzhou Basic and Applied Basic Research Foundation (202002030319); Fundamental Research Funds for the Central Universities (11618307).

Disclosures

The authors declare no conflicts of interest.

References

1. A. Greenbaum, W. Luo, T.-W. Su, Z. Göröcs, L. Xue, S. O. Isikman, A. F. Coskun, O. Mudanyali, and A. Ozcan, “Imaging without lenses: achievements and remaining challenges of wide-field on-chip microscopy,” Nat. Methods 9(9), 889–895 (2012). [CrossRef]  

2. S. B. Kim, H. Bae, K. Koo, M. R. Dokmeci, A. Ozcan, and A. Khademhosseini, “Lens-free imaging for biological applications,” J. Lab. Autom. 17(1), 43–49 (2012). [CrossRef]  

3. A. Ozcan and E. McLeod, “Lensless imaging and sensing,” Annu. Rev. Biomed. Eng. 18(1), 77–102 (2016). [CrossRef]  

4. L. Allen and M. Oxley, “Phase retrieval from series of images obtained by defocus variation,” Opt. Commun. 199(1-4), 65–75 (2001). [CrossRef]  

5. A. Greenbaum and A. Ozcan, “Maskless imaging of dense samples using pixel super-resolution based multi-height lensfree on-chip microscopy,” Opt. Express 20(3), 3129–3143 (2012). [CrossRef]  

6. J. Zhang, J. Sun, Q. Chen, J. Li, and C. Zuo, “Adaptive pixel-super-resolved lensfree in-line digital holography for wide-field on-chip microscopy,” Sci. Rep. 7(1), 11777 (2017). [CrossRef]  

7. M. Guizar-Sicairos and J. R. Fienup, “Phase retrieval with transverse translation diversity: a nonlinear optimization approach,” Opt. Express 16(10), 7264–7278 (2008). [CrossRef]  

8. A. M. Maiden and J. M. Rodenburg, “An improved ptychographical phase retrieval algorithm for diffractive imaging,” Ultramicroscopy 109(10), 1256–1262 (2009). [CrossRef]  

9. A. Maiden, D. Johnson, and P. Li, “Further improvements to the ptychographical iterative engine,” Optica 4(7), 736–745 (2017). [CrossRef]  

10. M. Stockmar, P. Cloetens, I. Zanette, B. Enders, M. Dierolf, F. Pfeiffer, and P. Thibault, “Near-field ptychography: phase retrieval for inline holography using a structured illumination,” Sci. Rep. 3(1), 1927 (2013). [CrossRef]  

11. H. Zhang, Z. Bian, S. Jiang, J. Liu, P. Song, and G. Zheng, “Field-portable quantitative lensless microscopy based on translated speckle illumination and sub-sampled ptychographic phase retrieval,” Opt. Lett. 44(8), 1976–1979 (2019). [CrossRef]  

12. S. Jiang, J. Zhu, P. Song, C. Guo, Z. Bian, R. Wang, Y. Huang, S. Wang, H. Zhang, and G. Zheng, “Wide-field, high resolution lensless on-chip microscopy via near-field blind ptychographic modulation,” Lab Chip 20(6), 1058–1065 (2020). [CrossRef]  

13. M. Sanz, J. A. Picazo-Bueno, J. García, and V. Micó, “Improved quantitative phase imaging in lensless microscopy by single-shot multi-wavelength illumination using a fast convergence algorithm,” Opt. Express 23(16), 21352–21365 (2015). [CrossRef]  

14. C. Allier, S. Morel, R. Vincent, L. Ghenim, F. Navarro, M. Menneteau, T. Bordy, L. Hervé, O. Cioni, X. Gidrol, Y. Usson, and J. M. Dinten, “Imaging of dense cell cultures by multiwavelength lens-free video microscopy,” Cytom. Part A 91(5), 433–442 (2017). [CrossRef]  

15. W. Luo, Y. Zhang, A. Feizi, Z. Göröcs, and A. Ozcan, “Pixel super-resolution using wavelength scanning,” Light: Sci. Appl. 5, e16060 (2016).

16. L. Waller, S. S. Kou, C. J. Sheppard, and G. Barbastathis, “Phase from chromatic aberrations,” Opt. Express 18(22), 22817–22825 (2010). [CrossRef]  

17. Y. Zhou, J. Wu, Z. Bian, J. Suo, G. Zheng, and Q. Dai, “Fourier ptychographic microscopy using wavelength multiplexing,” J. Biomed. Opt. 22(6), 066006 (2017). [CrossRef]  

18. Z. Zhang, Y. Zhou, S. Jiang, K. Guo, K. Hoshino, J. Zhong, J. Suo, Q. Dai, and G. Zheng, “Invited article: Mask-modulated lensless imaging with multi-angle illuminations,” APL Photonics 3(6), 060803 (2018). [CrossRef]  

19. D. J. Batey, T. B. Edo, C. Rau, U. Wagner, Z. D. Pešić, T. A. Waigh, and J. M. Rodenburg, “Reciprocal-space up-sampling from real-space oversampling in x-ray ptychography,” Phys. Rev. A 89(4), 043812 (2014). [CrossRef]  

20. W. Luo, A. Greenbaum, Y. Zhang, and A. Ozcan, “Synthetic aperture-based on-chip microscopy,” Light: Sci. Appl. 4, e261 (2015). [CrossRef]  

21. J. Zhang, Q. Chen, J. Li, J. Sun, and C. Zuo, “Lensfree dynamic super-resolved phase imaging based on active micro-scanning,” Opt. Lett. 43(15), 3714–3717 (2018). [CrossRef]  

22. L. Tian, X. Li, K. Ramchandran, and L. Waller, “Multiplexed coded illumination for Fourier Ptychography with an LED array microscope,” Biomed. Opt. Express 5(7), 2376–2389 (2014). [CrossRef]  

23. Y. Zhou, J. Wu, J. Suo, X. Han, G. Zheng, and Q. Dai, “Single-shot lensless imaging via simultaneous multi-angle LED illumination,” Opt. Express 26(17), 21418–21432 (2018). [CrossRef]  
