
Single exposure lensless subpixel phase imaging: optical system design, modelling, and experimental study


Abstract

Design and optimization of a lensless phase-retrieval optical system with phase modulation of the free-space propagating wavefront is proposed for subpixel imaging to achieve super-resolution reconstruction. Contrary to traditional super-resolution phase-retrieval, the method in this paper requires only a single observation and uses the advanced Super-Resolution Sparse Phase Amplitude Retrieval (SR-SPAR) iterative technique, which contains optimized sparsity-based filters and multi-scale filters. Successful object imaging relies on modulation of the object wavefront with a random phase-mask, which generates a coded diffracted intensity pattern, allowing us to extract subpixel information. The system's noise-robustness was investigated and verified. Super-resolution phase-imaging is demonstrated by simulations and physical experiments. The simulations show high-quality reconstructions with a super-resolution factor of 5, and acceptable ones at factors up to 9. In physical experiments, 3 $\upmu$m details were resolved, which are 2.3 times smaller than the resolution following from the Nyquist-Shannon sampling theorem.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

The high-resolution reconstruction of a complex-domain wavefront is of importance in microscopy, where a large number of biological specimens such as red blood cells, various tissues, etc. are subjects of study [1,2]. These objects cannot be analyzed by bright-field microscopy without special preparation of the specimen, because amplitude characteristics alone do not provide enough information on their properties and internal structure; this missing information can be obtained by reconstructing the object's phase characteristics. There are two main ways to extract phase information from the captured intensity: by holography [3], or by phase-retrieval techniques [4–6]. The former, invented by Gábor Dénes around 1948, is a well-known technique which allows phase reconstruction from holograms (interference patterns between object and reference beams), while the latter typically does not use reference beams and is based on numerical reconstruction of the phase from multiple intensity patterns obtained for object images disturbed by special wavefront modulation.

Since the scheme is in-line, these patterns are the result of overlapping image, zero, and inverse-image orders. Because of this overlap, the iterative procedures can stagnate [7] and the reconstruction from a single pattern fails, but recording several different diffraction patterns provides information sufficient for the phase reconstruction [8]. The redundancy necessary for phase-retrieval from multiple observations can be generated by moving [9], tilting, or rotating [10] optical elements (sensor or object), by modulating wavefronts using multi-wavelength illumination [11,12], or by special programmable optical elements, for example, spatial light modulators (SLM) [13,14].

Contrary to these techniques, in this paper a single-exposure phase-retrieval is demonstrated. By using a random modulation phase-mask located between the object and sensor planes, we spread the diffraction pattern across the sensor, gaining more information from the wavefront [15]. Phase-retrieval is an ill-posed problem; as a result, the reconstruction from this widespread pattern can cause severe corruption in phase imaging, and effective filtering is therefore required. Previous works in this direction, concerning single-exposure optical setups also based on modulation of the object wavefront either by a coded aperture [16] or by modulated light [17], suffer from the noise generated by this spreading, which cannot be eliminated by traditional filtering techniques. To obtain better resolution at the subpixel level we apply the Block-Matching 3D (BM3D) filter [18], which has already provided good results for several ill-posed optical phase imaging problems [19,20].

Traditionally, for super-resolution imaging from multiple slightly shifted observations, the recorded patterns are combined to create an up-sampled pattern, from which the super-resolved image of an object is retrievable [21–24]. In the single-observation case only one recorded pattern is available, so up-sampling techniques developed for image processing [25,26] can be used to gain the required subpixel resolution. In this paper, as a continuation of previous research [27], we follow a different approach based on modeling of optical image formation and registration. The previous system was more complex due to an SLM device used for wavefront modulation. In contrast, the system developed in the current manuscript is much simpler: the SLM is replaced by a single stationary binary phase-mask with smaller pixel size. The corresponding Super-Resolution Sparse Phase Amplitude Retrieval (SR-SPAR) algorithm [27,28] is modified and tuned for a single observation.

We consider a lensless optical setup, which means that the system is compact and free from optical lens aberrations and, furthermore, provides a much larger field of view [29–33] compared to cumbersome and expensive [34,35] phase imaging systems.

The contribution of this paper concerns the design of the optical system, including wavefront modulation for a single-observation setup with a single binary random phase-mask, and the optimization of the SR-SPAR algorithm (Section 2.3) used for object reconstruction from a registered diffraction pattern. The paper is focused on the development of this system, including proper selection of the object-mask and mask-sensor distances (Section 3.2) as well as the phase-mask properties, by simulating the optical setup (Section 3.3). It is shown that the designed system can provide quality subpixel imaging up to a computational super-resolution factor of 9, which is limited only by the transfer function of the angular spectrum method (Section 3.4). The results of this paper extend the conference paper [36] by investigating the mask parameters in more detail and applying the algorithm in physical experiments (Section 4), which correspond well to the simulations, using a calibrated test object and a manufactured modulation mask.

2. Method description

2.1 Observation formation

The proposed optical setup (Fig. 1(a)) contains a laser source with wavelength $\lambda$, a test object, a phase-mask, and a mounted CMOS sensor. The intensity of the propagated wavefront captured at the sensor plane can be written as

$$z = \left | P_{d_{2}}\left \{ \mathcal{M}\circ P_{d_{1}}\left \{u_{o} \right \}\right \} \right |^{2},$$
where $u_{o}\in \mathbb {C}^{N \times N}$ is the $N \times N$ complex-valued object transfer function (the wavefront just behind the object plane) and $P_{d}:\mathbb {C}^{N \times N}\mapsto \mathbb {C}^{N \times N}$ stands for the free-space forward propagation operator over the distance $d$. The wavefront after the mask is written as the Hadamard product of the phase-mask transfer function $\mathcal {M}\in \mathbb {C}^{N \times N}$ and the object wavefront propagated over the distance $d_{1}$. This complex wavefront is propagated toward the sensor over the distance $d_{2}$, where it is captured as the intensity measurement $z\in \mathbb {R}_{+}^{N \times N}$ of size $N \times N$. In Fig. 1(b) we show a cross-section fragment of the used binary phase-mask, illustrating its shape as well as its main parameters: pixel width $\Delta _{m}$ and random height $\Delta h$. This mask can be formalized as a complex-valued transfer function:
$$\mathcal{M}(x,y)=\textrm{exp}(j\phi(x,y)),$$
where $\phi(x,y)$ is the magnitude of the phase-delay, taken as binary random with equal probabilities for $0$ and $\Delta \varphi _{m}$, the latter corresponding to the pixel height $\Delta h$.


Fig. 1. (a) Lensless optical setup with phase-mask and laser illumination, the object-mask distance is $d_{1}$ and mask-sensor distance is $d_{2}$. (b) Cross-section of a pixel fragment of an ideal binary phase-mask, with pixel size of $\Delta _{m}$ and height of $\Delta h$.


Wavefront propagation is described by the Rayleigh–Sommerfeld model, within which the angular spectrum (AS) method [37] is defined as:

$$u(x,y,d)=\mathfrak{F}^{-1} \left \{ H(f_{x},f_{y},d)\cdot\mathfrak{F} \left \{ u(x,y,0) \right \} \right \},$$
$$H\left (f_{x},f_{y},d \right)=\begin{cases} \textrm{exp}\left [ i\frac{2\pi }{\lambda }d\sqrt{1-\lambda ^{2}\left ( f_{x}^{2} + f_{y}^{2}\right )} \right ], & f_{x}^{2} + f_{y}^{2}\leq \frac{1}{\lambda ^{2}},\\ 0, & \textrm{otherwise.}\\\end{cases}$$
Here $u(x,y,0)$ is the wavefront in the initial object plane, which propagates over the distance $d$ to create the diffracted wavefront $u(x,y,d)$. In Eq. (3), $\mathfrak {F}$ and $\mathfrak {F}^{-1}$ are the Fourier and inverse Fourier transforms, and $H(f_{x},f_{y},d)$ is the AS transfer function depending on the spatial frequencies $f_{x},f_{y}$ and the wavelength $\lambda$.

In optical systems without modulation, the captured intensity of a propagated wavefront does not provide enough information to reconstruct the object with high quality. The main idea behind our single-exposure system is to modulate this propagated wavefront, so that the diffraction pattern spreads wider and provides more information about the object. This modulation depends both on the phase-mask and on its distances from the object and the sensor.

Since the modulator mask is known, subpixel information can be retrieved and used for object reconstruction. Our algorithm, built on this idea, allows us to replace traditional lens systems [4] with a compact binary phase-mask without any moving elements.
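To make the observation model concrete, the following is a minimal Python/NumPy sketch of Eqs. (1)–(4). The function names (`angular_spectrum`, `observed_intensity`) and the NumPy implementation are our illustration under stated assumptions, not the authors' code (their experiments use MATLAB):

```python
import numpy as np

def angular_spectrum(u0, d, wavelength, dx):
    """Free-space propagation over distance d via the AS method, Eqs. (3)-(4).

    dx is the computational sampling interval; frequencies outside the
    circle f_x^2 + f_y^2 <= 1/lambda^2 are zeroed, as in Eq. (4)."""
    n = u0.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    f2 = fx[:, None] ** 2 + fx[None, :] ** 2
    arg = 1.0 - wavelength ** 2 * f2
    H = np.where(arg >= 0.0,
                 np.exp(1j * 2.0 * np.pi * d / wavelength
                        * np.sqrt(np.maximum(arg, 0.0))),
                 0.0)
    return np.fft.ifft2(H * np.fft.fft2(u0))

def observed_intensity(u_o, mask, d1, d2, wavelength, dx):
    """Coded diffraction pattern of Eq. (1): object -> mask -> sensor."""
    u_mask = mask * angular_spectrum(u_o, d1, wavelength, dx)  # Hadamard product
    return np.abs(angular_spectrum(u_mask, d2, wavelength, dx)) ** 2

# Binary random phase-mask in the spirit of Eq. (2), here with phase delay pi:
rng = np.random.default_rng(0)
mask = np.exp(1j * np.pi * rng.integers(0, 2, size=(512, 512)))
```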

2.2 Super-resolution acquisition

The digital phase-retrieval technique requires discretizing physical wavefronts in order to calculate their propagation. The wavefronts in the object, mask, and sensor planes may have features of different size and, correspondingly, different sampling intervals $\Delta _{o}$, $\Delta _{m}$, and $\Delta _{s}$. However, it is convenient to use the same computational sampling interval $\Delta _{c}$ and computational pixel size $\Delta _{c}\times \Delta _{c}$ for all wavefronts and elements of the optical system, in particular for the element-wise operations in Eq. (1). The sampling of the continuous elements (phase-mask and object) can be arbitrary, so it is usually chosen equal to this value, $\Delta _{o}=\Delta _{m}=\Delta _{c}$. The sampling on the sensor is limited by its pixel size, so we can take $\Delta _{c} = \Delta _{s}$, which corresponds to the conventional pixel-wise resolution of the captured diffraction pattern $z$.

For subpixel resolution, we may assume that the sensor pixels $\Delta _{s}\times \Delta _{s}$ are replaced by smaller computational ones, $\Delta _{c}\times \Delta _{c}$. In this way, the sensor diffraction pattern $z$ can be treated as up-sampled with the sampling period $\Delta _{c}$. If the resulting pattern $\tilde {z}$ has computational pixels smaller than the physical sensor pixels, $\Delta _{c} < \Delta _{s}$, and the reconstruction of the object is produced with $\Delta _{c}$, we speak of subpixel resolution (or super-resolution). The ratio $r_{s}=\Delta _{s}/\Delta _{c}\geq 1$ is called the super-resolution factor.
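As a sketch of this up-sampling step (our NumPy illustration; nearest-neighbor replication of each sensor pixel over an $r_s \times r_s$ block is the simplest choice):

```python
import numpy as np

def upsample_sensor(z, rs):
    """Up-sample the captured pattern z to the computational grid by
    replicating each physical sensor pixel over an rs x rs block."""
    return np.kron(z, np.ones((rs, rs)))
```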

2.3 Algorithm description

We use the modified SR-SPAR algorithm for reconstruction; its flowchart is shown in Fig. 2.


Fig. 2. Flowchart of the SR-SPAR method. The images on the left show the phases in the object plane before and after BM3D filtering for the given iterations.


The up-sampled captured intensity $\tilde {z}$ and the phase-mask $\mathcal {M}$ are the inputs of the algorithm. An initial guess for the object wavefront $u_{o}^{0}$ is propagated forward to the sensor plane through the modulation mask (Step 1), where the amplitude is updated by the square root of $\tilde {z}$ (Step 2). This updated wavefront $u_{s}^{t}$ is then propagated back to the object plane (Step 3), resulting in an object wavefront $u_{o}^{t}$. Here $\mathcal {P}_{d}^{-1}\{u\}$ stands for the inverse of Eq. (3), and $\mathcal {M}^{\ast }$ is the complex conjugate of $\mathcal {M}$. If the absolute phase range exceeds $2\pi$, an unwrapping algorithm is needed in order to achieve a correct reconstruction (Step 4). The phase $\varphi _{o,abs}^{t}$ and amplitude $abs(u_{o}^{t})$ are BM3D-filtered separately (Step 5) and merged together to produce an updated complex wavefront $u_{o}^{t+1}$ (Step 6). To highlight the influence of the algorithm, the images in Fig. 2 show how the reconstructed phase in the object plane changes from iteration to iteration before and after BM3D filtering.
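A condensed Python sketch of Steps 1–6 is given below. Here `angular_spectrum` is the propagator sketched in Section 2.1, `bm3d` is an off-the-shelf BM3D implementation (Section 2.3.1), and `unwrap_phase` is a 2D phase unwrapper; the initialization and the exact call signatures are our assumptions:

```python
import numpy as np
from bm3d import bm3d                          # pip install bm3d
from skimage.restoration import unwrap_phase   # pip install scikit-image

def sr_spar(z_up, mask, d1, d2, wavelength, dx, n_iter, th_phi, th_amp):
    """Modified SR-SPAR iterations; z_up is the up-sampled intensity."""
    u_o = np.ones_like(mask)                   # initial object guess u_o^0
    for t in range(n_iter):
        # Step 1: forward propagation object -> mask -> sensor
        u_s = angular_spectrum(mask * angular_spectrum(u_o, d1, wavelength, dx),
                               d2, wavelength, dx)
        # Step 2: amplitude update by the square root of the measurement
        u_s = np.sqrt(z_up) * np.exp(1j * np.angle(u_s))
        # Step 3: backward propagation sensor -> mask -> object,
        # demodulating by the conjugate mask M*
        u_o = angular_spectrum(np.conj(mask)
                               * angular_spectrum(u_s, -d2, wavelength, dx),
                               -d1, wavelength, dx)
        # Step 4: unwrap if the absolute phase range exceeds 2*pi
        phi = unwrap_phase(np.angle(u_o))
        # Steps 5-6: filter phase and amplitude separately, then merge
        u_o = bm3d(np.abs(u_o), th_amp[t]) * np.exp(1j * bm3d(phi, th_phi[t]))
    return u_o
```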

2.3.1 Block-Matching 3D filter

One of the important elements of the SR-SPAR algorithm is the BM3D filter [18], based on nonlocal self-similarity sparsity. It assumes that the examined image contains multiple small similar patches. These similar patches are stacked together to form 3D groups (grouping). A 3D orthonormal transform is applied to these groups, the obtained 3D transform coefficients are hard-thresholded, and the inverse transform gives the filtered 3D groups. The filtered 2D patches of these groups are returned to their original positions in the image (collaborative filtering). As a result, for each pixel of the image we may obtain more than one estimate; these estimates are aggregated by a weighted mean operation (aggregation).

Together these steps define what is called the thresholding stage of BM3D. Wiener filtering is applied to the obtained filtered image as the second stage of BM3D. The Wiener filtering stage repeats the grouping, collaborative filtering, and aggregation steps described above, with the only difference that the hard-thresholding is replaced by Wiener filtering (more details in [18]).

The complex wavefront $u_{o}$ can be written as

$$u_{o}(x,y)=A_{o}(x,y)\cdot \textrm{exp}( i \varphi _{o}(x,y) ),$$
where $A_{o}$ and $\varphi _{o}$ stand for the object amplitude and phase. These two images are filtered by BM3D separately in order to exploit sparse representations of real-valued variables, resulting in the estimates $\hat {\varphi }_{o}$ and $\hat {A}_{o}$ of phase and amplitude, obtained as sparse approximations of the true unknown phase and amplitude:
$$\hat{\varphi}_{o}=\textrm{BM3D}(\varphi _{o},th_{\varphi}),$$
$$\hat{A}_{o}=\textrm{BM3D}(A _{o},th_{A}).$$
The BM3D algorithm is applied with the threshold parameters $th_{\varphi }$ and $th_{A}$ for filtering the phase and amplitude, respectively. To achieve the best resolution, these parameters vary over the iterations of SR-SPAR, taking smaller values in successive iterations.
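In code, Eqs. (5)–(7) amount to two real-valued BM3D calls per iteration. The sketch below uses the `bm3d` PyPI package; mapping its noise-level argument `sigma_psd` onto the thresholds $th_{\varphi }$ and $th_{A}$ is our simplification:

```python
import numpy as np
from bm3d import bm3d

def filter_complex(u_o, th_phi, th_amp):
    """Sparse filtering of Eqs. (5)-(7): phase and amplitude separately."""
    phi_hat = bm3d(np.angle(u_o), sigma_psd=th_phi)   # Eq. (6)
    amp_hat = bm3d(np.abs(u_o), sigma_psd=th_amp)     # Eq. (7)
    return amp_hat * np.exp(1j * phi_hat)             # merged complex estimate

# The thresholds are decreased from iteration to iteration; a geometric
# decay such as th^t = th^0 * 0.97**t is one illustrative schedule.
```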

2.3.2 Multi-scale filtering

An optional filtering step of the algorithm is the so-called Multi-Scale Correction [38], whose goal is to eliminate the correlated noise caused by errors of the optical system setup. The technique was originally proposed to improve the denoising power of the BM3D algorithm by filtering a down-sampled image and up-sampling the filtered image to the original size. This helps the grouping stage (see Section 2.3.1) avoid collecting correlated noise patterns. We modified and integrated this technique into our SR-SPAR iterations: in every second iteration we run a parallel loop with a down-sampled object wavefront $\widetilde {u_o^t}$ and up-sample the updated wavefront $\widetilde {u_o^{t+1}}$ to the original size. The two wavefronts ($\widetilde {u_o^{t+1}}$ and $u_o^{t+1}$) are then free from correlated speckle noise, and averaging them together gives a better guess for the next iteration, free from high-frequency noise. In simulations this filter is not applied, since the modelled system is free from noise and optical errors.
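A hedged sketch of one such interleaved iteration (the naive decimation and replication used for down- and up-sampling, and the equal-weight averaging, are our assumptions; the authors' modification of [38] may differ):

```python
import numpy as np

def multiscale_iteration(u_o, spar_update, factor=2):
    """One SR-SPAR iteration with a parallel coarse-scale branch; the two
    updates are averaged to suppress correlated (speckle-like) noise."""
    u_fine = spar_update(u_o)                               # full-resolution update
    u_small = spar_update(u_o[::factor, ::factor])          # down-sampled branch
    u_coarse = np.kron(u_small, np.ones((factor, factor)))  # back to full size
    return 0.5 * (u_fine + u_coarse)
```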

3. Simulations

3.1 Main parameters and test images

We set the parameter values in simulation to correspond to the parameters of the physical prototype of the optical system: a laser source with a wavelength of 532 nm, three built-in MATLAB images as test objects (Fig. 3), a phase-mask, and a CMOS sensor with a pixel size of 3.45 $\upmu$m, maximum resolution of $2448\times 2048$, and dynamic range of 12 bit. The object's maximum magnitude of phase-delay is taken to be $\Delta \varphi _{o}=\pi /2$, and the mask's ($\Delta \varphi _{m}$) is chosen via the simulation experiments in Section 3.3. The phase-only USAF target is a binary object used to evaluate the resolving power of our system, and the grayscale cameraman and cell images are used as continuous phase objects. In general, the propagation calculation requires matrices of the same size, therefore the wavefronts in each plane have to be zero-padded to the size $r_{s}\cdot N_{s}$, where $r_{s}$ is the super-resolution factor and $N_s \times N_s$ is the number of sensor pixels. In the case of sensor subpixel resolution ($r_s > 1$), we simulate the sensor pixel size $\Delta _s$ by averaging the values of variables over $r_s \times r_s$ computational pixels of size $\Delta _c$, as sketched below. The objects are used with their pixel size equal to the computational pixel size ($\Delta _o=\Delta _c$).
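The sensor-pixel simulation is plain block averaging; a minimal NumPy sketch (assuming the pattern sides are multiples of $r_s$):

```python
import numpy as np

def bin_to_sensor(z_c, rs):
    """Average each rs x rs block of computational pixels (size Delta_c)
    into one physical sensor pixel (size Delta_s = rs * Delta_c)."""
    n, m = z_c.shape[0] // rs, z_c.shape[1] // rs
    return z_c[:n * rs, :m * rs].reshape(n, rs, m, rs).mean(axis=(1, 3))
```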


Fig. 3. Phase images used in simulation tests (from left to right): binary USAF target, continuous cameraman and cell.


The accuracy of the reconstructed images is measured by the relative root-mean-square error (RRMSE) between the true and reconstructed object models:

$$\textrm{RRMSE}=\frac{\left \| \varphi_{o}-\hat{\varphi_{o}} \right \|_{F}}{\left \| \varphi_{o} \right \|_{F}},$$
where $\varphi _o$ and $\hat {\varphi _o}$ are the phases of the true and reconstructed object, and $\left \| \cdot \right \|_{F}$ means the Frobenius norm.

Low values, $\textrm {RRMSE}<0.11$, indicate good reconstructions, while $\textrm {RRMSE}>0.3$ means that the reconstruction fails.
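Eq. (8) in code (NumPy's matrix norm defaults to the Frobenius norm):

```python
import numpy as np

def rrmse(phi_true, phi_hat):
    """Relative root-mean-square error of Eq. (8)."""
    return np.linalg.norm(phi_true - phi_hat) / np.linalg.norm(phi_true)
```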

3.2 Distance tuning and noise-influence analysis

The tuning of the distance parameters includes simulations with a fixed object position, while the sensor is placed at the distance $d=d_1+d_2$ from the object.

The mask is then positioned between them, taking distances from the object ($d_1$) from $0$ to $d$ with a step of 1 mm. The effect of the different mask and sensor positions on the RRMSE has been tested using the cameraman object in pixel-wise reconstruction. The reconstruction errors for these distances are shown in Fig. 4(a). The colors indicate the RRMSE values: green colors are acceptable reconstructions with $\textrm {RRMSE}\leq 0.11$, reddish colors are unacceptable reconstructions with $\textrm {RRMSE}>0.11$, and there are no values in the white area. These RRMSE values may shift in the super-resolution scenario; however, we assume that the general structure stays the same. Generally, we can say that positioning the mask too close to the sensor ($d_2=d-d_{1}\leq 6$ mm) does not allow a sufficient diffraction level, resulting in information loss and poor imaging accuracy. On the other hand, it is beneficial to choose the distance $d$ as small as possible, provided that all the object frequencies are collected by the sensor. Considering these restrictions, we selected the distance between the object and the mask as $d_{1}=2$ mm (the sum of a 1 mm gap and the 1 mm mask width), and the total distance between the object and sensor as $d=17$ mm (the sum of $d_1$, a 1 mm gap, and the 14 mm CS-mount).


Fig. 4. RRMSE of the reconstruction (a) at different distances, with good (green), bad (reddish), and no values (white); (b) at different noise levels.


The effect of different noise levels was measured by calculating the RRMSE of the reconstructed cameraman phase-only object in pixel-wise resolution and with a super-resolution factor of 3. Additive normally distributed noise was used, and the signal-to-noise ratio (SNR) was measured by

$$\textrm{SNR}=10\cdot \log_{10} \left ( \frac{S}{N} \right ),$$
where $S$ is the power of the examined pattern (signal) and $N$ is the power of the noise. Our method provided good results ($\textrm {RRMSE}<0.11$) for $\textrm {SNR}>7$ dB in the pixel-wise case and $\textrm {SNR}>4.5$ dB in subpixel resolution (Fig. 4(b)), proving the system's noise robustness.
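Noise at a prescribed SNR can be generated as below; this sketch adds the noise to the intensity pattern, which is our assumption since the injection point is not specified:

```python
import numpy as np

def add_noise(z, snr_db, seed=0):
    """Add zero-mean Gaussian noise so that 10*log10(S/N) = snr_db, Eq. (9)."""
    rng = np.random.default_rng(seed)
    noise_power = np.mean(z ** 2) / 10.0 ** (snr_db / 10.0)
    return z + rng.normal(0.0, np.sqrt(noise_power), z.shape)
```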

3.3 Mask selection

Selection of the mask parameters (phase-delay $\Delta \varphi _m$ and mask pixel size $\Delta _m$) is based on simulations with the cameraman image. As an extension of the conference paper [36], the parameters were investigated in detail using a super-resolution factor of 5 to improve the resolution even further. The RRMSEs of the reconstructed object (Eq. (8)) using masks of different magnitudes of phase delay ($\Delta \varphi _{m}$) were calculated for different pixel sizes ($\Delta _m$) and are shown in Fig. 5(a). The figure highlights that the best performance is achieved around $\Delta \varphi _{m}=\pi$. Keeping $\Delta \varphi _{m}=\pi$, we examined how the mask pixel size affects the RRMSE with super-resolution factors from 1 to 5; the results are shown in Fig. 5(b). Previously [36] we assumed that smaller mask pixels always provide better reconstructions, but the new simulations show that this assumption is valid only as long as we are able to capture most of the intensity pattern. For smaller modulation-mask pixel sizes the diffraction pattern can spread wider than the sensor, losing too much information for an efficient reconstruction. In our system, for the given distances, this effect occurs if the pixel ratio $\Delta _m/\Delta _s<1/3$.


Fig. 5. RRMSE of the reconstructed cameraman object (a) with phase-masks of different $\Delta \varphi$ phase-delay and $\Delta _m/\Delta _s$ mask-sensor pixel size ratio with super-resolution factor of $r_s = 5$. (b) RRMSEs with different $\Delta _m/\Delta _s$ ratio with super-resolution factors from 2 to 5 by keeping the phase-delay $\Delta \varphi =\pi$.


3.4 Super-resolution demonstration

3.4.1 Binary imaging

Simulations with the USAF target are presented here to examine the super-resolution limits of our method. The mask magnitude of phase-delay is $\Delta \varphi _m=\pi$ and the pixel sizes were chosen by simulations. We present a high-quality reconstruction ($\textrm {RRMSE}<0.02$) with a super-resolution factor of 3, and the smallest resolved target with a super-resolution factor of 5 with $\textrm {RRMSE}<0.11$. The reconstructed phase images and their cross-sections can be seen in Fig. 6. The method can resolve the object even for $r_{s}=5$, i.e., details 5 times smaller than the sensor pixel. Comparing our best super-resolution result $\Delta _{s}/5=0.69\ \upmu$m with the Abbe diffraction-limited resolution $\Delta _{Abbe}$,

$$\Delta_{Abbe} = \frac{\lambda}{NA}\approx\frac{2d\lambda}{N\Delta_{s}}=2.56~\mu m,$$
we may conclude that our technique demonstrates almost 4 times better resolution ($0.69\ \upmu$m vs $2.56\ \upmu$m). In Eq. (10), $NA$ stands for the numerical aperture, $\lambda =532$ nm is the wavelength, and $d=d_{1}+d_{2}$ is the total distance from the object to the sensor. The sensor's dimensions are $N_{s}\times M_{s} = 2448\times 2048$ with pixel size $\Delta _{s}=3.45\ \upmu$m; for Eq. (10) we choose $N=M_s$.
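A quick numerical check of Eq. (10), with the numerical aperture in the small-angle approximation $NA \approx N\Delta_{s}/(2d)$ implied by the equation:

```python
wavelength = 532e-9           # m
d = 17e-3                     # total object-to-sensor distance, m
N, dx_s = 2048, 3.45e-6       # sensor side in pixels, pixel size in m

NA = N * dx_s / (2 * d)       # ~0.208
print(wavelength / NA)        # ~2.56e-6 m, i.e. 2.56 um, matching Eq. (10)
```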


Fig. 6. USAF target phase reconstructions with super-resolution factors of 3 and 5, and the cross-sections of the smallest resolved target groups. These groups can be seen in the zoomed regions with the line width of 1.15 $\upmu$m ($r_s=3$) and 0.69 $\upmu$m ($r_s=5$).


3.4.2 Continuous imaging

The USAF target is one of the most efficient ways to measure the resolving power of a method, since it is a calibrated binary object, but real-life objects are mostly continuous. We use continuous images (cameraman and cell) as phase-only objects to provide further evidence of imaging quality. All parameters are kept the same as for the binary object in the previous subsection, and the results with super-resolution factors of 3 and 9 are shown in Fig. 7.


Fig. 7. Quality imaging of the continuous phase-only cell object with super-resolution factors of 3 and 9. Note that the scales differ because of the different computational pixel sizes ($\Delta _c$).


Our method is able to reconstruct details 9 times smaller than the sensor pixels with $\textrm {RRMSE}<0.11$, although the images are less sharp in this case, because part of the high frequencies is lost due to the hard-thresholded filtering. It is important to mention that a resolution limit comes from the transfer function of the angular spectrum method (Eq. (4)): if $\Delta _c/\lambda <0.71$, the edges of the transfer function become zero and eliminate the high frequencies, making the reconstruction unacceptable. For $\lambda =532$ nm this limit occurs if $\Delta _c<376$ nm, which corresponds to $r_s=9.17$. The obtained images and RRMSE values correspond well to Fig. 6. The system provides outstanding quality with a super-resolution factor of 3, and the details are still distinct at a factor of 9. The result with a pixel size of $\Delta _s/9=0.383\ \upmu$m corresponds to sub-wavelength resolution ($\lambda =0.532\ \upmu$m), and compared to the diffraction-limited resolution $\Delta _{Abbe}=2.56\ \upmu$m (Eq. (10)) it shows an outstandingly better resolution capability of our algorithm.
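This limit can be checked directly: zeros appear in Eq. (4) once the corner frequency of the computational grid, $\sqrt{2}/(2\Delta_c)$, exceeds the cutoff $1/\lambda$, i.e. for $\Delta_c < \lambda/\sqrt{2} \approx 0.71\lambda$ (a short verification of the numbers quoted above):

```python
import numpy as np

wavelength = 532e-9
dx_s = 3.45e-6                        # sensor pixel size, m
dx_c_min = wavelength / np.sqrt(2)    # grid corner frequency reaches 1/lambda
print(dx_c_min)                       # ~3.76e-7 m, i.e. 376 nm
print(dx_s / dx_c_min)                # ~9.17, the maximal usable r_s
```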

3.5 Positioning error estimation

The resolving power of our technique is sensitive to distance errors. In our simulation tests we assume distance errors $d_{2,e}$ in $d_2$ and $d_{1,e}$ in $d_1$ and calculate the corresponding RRMSEs; the results are shown in Fig. 8. In physical experiments a digital autofocusing technique is used to estimate the true distances of the optical setup. We noticed that, due to $d_{1,e}$ and $d_{2,e}$, the desired planes can be substantially defocused. The elimination of this defocusing error is particularly important in the mask plane, since the method is based on known wavefront modulation.


Fig. 8. RRMSE of the reconstructions (a) with different mask-sensor distance error, (b) with different object-sensor distance error.


4. Physical experiments

The parameters of the physical system were already given in Section 3.1: a laser source with a wavelength of 532 nm, a resolution target (Phasefocus PFPT01-16-127) with an etch depth of 127 nm and smallest line-width of 2 $\upmu$m, a phase-modulation mask with a pixel size of 4 $\upmu$m and magnitude of phase-delay of $\pi /2$ for the given $\lambda$, and a CMOS sensor (FLIR Chameleon3 CM3-U3-50S5M-CS) with a pixel size of 3.45 $\upmu$m, maximum resolution of $2448\times 2048$, and dynamic range of 12 bit. The distances were set following Section 3.2, with an object-mask distance of $d_1=5.2$ mm and a mask-sensor distance of $d_2=13.6$ mm.

The wavefront modulation and demodulation by the phase-mask (element-wise multiplication and element-wise division) is a crucial part of the algorithm; therefore we need the highest similarity between the focused wavefront in the mask plane and the positioned mask itself. To enable high-accuracy positioning we use the super-resolution factor $r_s=3.45$, so that the mask pixel size ($\Delta _m = 4\ \upmu$m) is a multiple of the computational pixel size ($\Delta _c = 1\ \upmu$m).

Figures 9(c) and 9(d) show the reconstructed Phasefocus target, demonstrating the super-resolution power of the algorithm by resolving the 1st element of group 9 with a line thickness of 3 $\upmu$m. The corresponding cross-sections are shown in Fig. 9(g). The $\Delta h$ depth-values (height of the object) are calculated by

$$\Delta h=\frac{\Delta \varphi \lambda }{2\pi (n-1)},$$
where $\Delta \varphi$ is the magnitude of the phase-delay, $\lambda$ is the wavelength of the illumination, and $n$ is the object's refractive index. In our case the object was manufactured from fused silica with a refractive index of $n=1.4607$ at $\lambda =532$ nm.
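Eq. (11) as a one-line conversion (a sketch; the refractive-index value is the one quoted above):

```python
import numpy as np

def phase_to_depth(phi, wavelength=532e-9, n_refr=1.4607):
    """Depth map from the retrieved phase delay, Eq. (11)."""
    return phi * wavelength / (2 * np.pi * (n_refr - 1))

# Sanity check: the 127 nm etch depth of the target corresponds to a phase
# delay of about 2*pi*(n-1)*127e-9/532e-9 ~ 0.69 rad.
```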



Fig. 9. Object reconstruction. (a) Amplitude and (b) depth-map of the original Phasefocus target [39] obtained by Atomic force microscopy. (c) Amplitude and (d) depth-map of the reconstructed object which are calculated with the super-resolution factor of $r_s=3.45$, and (g) the cross-sections of the depth-values of the 6th element from group 7 (left plot) and 1st element from the group 9 (right plot). For comparison (e) amplitude and (f) depth-map of the reconstructed object which are calculated in pixel-wise resolution.


We were able to resolve details of 3 $\upmu$m, which are 2.3 times smaller than the resolution of 6.9 $\upmu$m dictated by the Nyquist-Shannon sampling theorem for the camera pixel of 3.45 $\upmu$m. It is important to mention that the smallest step we can take for positioning the mask in the x and y directions equals the computational pixel size, causing positioning errors of at most half of this step ($\textrm {error}_{\textrm {max}}=\Delta _c/2$); this is one of the main reasons for the lower super-resolution rate compared to the simulations. To find the best position for the mask, we made several runs with different positions and chose the ones with the best results. Another problem is that the mask can be rotated, generating errors caused by digital image rotation techniques, or can be inclined, causing a focusing error between the nonparallel planes. Figure 8 shows that even a small error in the mask-sensor distance can cause a resolution drop, and the same applies to the other positioning parameters as well. Because of these restrictions, strong thresholding is required for BM3D, which eliminates shapes thinner than 3 $\upmu$m.

For the experiments MATLAB R2018a was used on a computer with 64 GB of RAM and a 2.1 GHz Intel(R) Xeon(R) CPU E5-2620 v4 processor. An iteration took around 42 seconds in the single-scale case and around 50 seconds in the multi-scale case. The multi-scale filter was used in every second iteration to eliminate the appearing correlated noise and to help the convergence avoid getting stuck in local extrema. The convergence is quite fast: the separate horizontal and vertical lines of the smallest group already appear after 6 iterations, although the depth-values are then less accurate. After 100 iterations the convergence slows down immensely, therefore the multi-scaling is turned off and only SR-SPAR is used for 10 more iterations to obtain the desired super-resolved image.

5. Conclusion

A single-exposure super-resolution lensless phase-retrieval method was improved for quality imaging at the sensor-subpixel level and applied in physical experiments. The system development included phase-mask selection, optimization of the distances between object, mask, and sensor, and optimization of the sparse filtering in the iterative cycle. We successfully applied the SR-SPAR algorithm to reconstruct continuous test objects with super-resolution factors from 2 to 9, which corresponds to sub-wavelength resolution. The resolution efficiency is verified by resolving the USAF target with a super-resolution factor of 5, retrieving details 4 times smaller than the Abbe limit. The simulations also showed that the ideal mask pixel size is not a fixed value, but is usually around half of the sensor pixel size. The algorithm was also applied successfully on the physical optical setup, providing a super-resolved reconstruction of a Phasefocus target.

As further work, we plan to use another phase-mask with a smaller pixel size in the optical setup to achieve reconstructions with a higher super-resolution factor. Another direction is to use a metasurface diffuser, which has been successfully applied to traditional multi-observation complex-field and 3D imaging [40], and thus may provide an effective alternative for complex wavefront modulation in both the phase and amplitude domains.

Funding

This work was funded by the Jane and Aatos Erkko Foundation and the Finland Centennial Foundation under the project Computational Imaging without Lens ("CIWIL").

Acknowledgments

The phase-mask was made in collaboration with Prof. S.N. Khonina and V.V. Podlipnov, Image Processing Systems Institute - Branch of the Federal Scientific Research Centre “Crystallography and Photonics” of Russian Academy of Sciences, Samara, Russia.

Disclosures

The authors declare no conflicts of interest.

References

1. L. V. Wang and H.-i. Wu, Biomedical optics: principles and imaging (John Wiley & Sons, 2012).

2. J. Pawley, Handbook of biological confocal microscopy (Springer Science & Business Media, 2010).

3. D. Gabor, A new microscopic principle (Nature Publishing Group, 1948).

4. J. R. Fienup, "Phase retrieval algorithms: a comparison," Appl. Opt. 21(15), 2758–2769 (1982).

5. R. W. Gerchberg, "A practical algorithm for the determination of phase from image and diffraction plane pictures," Optik 35, 237–246 (1972).

6. G.-z. Yang, B.-z. Dong, B.-y. Gu, J.-y. Zhuang, and O. K. Ersoy, "Gerchberg–Saxton and Yang–Gu algorithms for phase retrieval in a nonunitary transform system: a comparison," Appl. Opt. 33(2), 209–218 (1994).

7. Y. Shechtman, Y. C. Eldar, O. Cohen, H. N. Chapman, J. Miao, and M. Segev, "Phase retrieval with application to optical imaging: a contemporary overview," IEEE Signal Process. Mag. 32(3), 87–109 (2015).

8. D. Claus, G. Pedrini, and W. Osten, "Iterative phase retrieval based on variable wavefront curvature," Appl. Opt. 56(13), F134–F137 (2017).

9. P. Almoro, G. Pedrini, and W. Osten, "Complete wavefront reconstruction using sequential intensity measurements of a volume speckle field," Appl. Opt. 45(34), 8596–8605 (2006).

10. Y.-c. Lin, H.-C. Chen, H.-Y. Tu, C.-Y. Liu, and C.-J. Cheng, "Optically driven full-angle sample rotation for tomographic imaging in digital holographic microscopy," Opt. Lett. 42(7), 1321–1324 (2017).

11. N. V. Petrov, V. G. Bespalov, and A. A. Gorodetsky, "Phase retrieval method for multiple wavelength speckle patterns," in Speckle 2010: Optical Metrology, vol. 7387 (International Society for Optics and Photonics, 2010), p. 73871T.

12. V. Katkovnik, I. Shevkunov, N. V. Petrov, and K. Egiazarian, "Multiwavelength surface contouring from phase-coded noisy diffraction patterns: wavelength-division optical setup," Opt. Eng. 57(08), 1 (2018).

13. L. Camacho, V. Micó, Z. Zalevsky, and J. García, "Quantitative phase microscopy using defocusing by means of a spatial light modulator," Opt. Express 18(7), 6755–6766 (2010).

14. C. Kohler, F. Zhang, and W. Osten, "Characterization of a spatial light modulator and its application in phase retrieval," Appl. Opt. 48(20), 4003–4008 (2009).

15. E. J. Candes, X. Li, and M. Soltanolkotabi, "Phase retrieval from coded diffraction patterns," Appl. Comput. Harmon. Analysis 39(2), 277–299 (2015).

16. R. Horisaki, T. Kojima, K. Matsushima, and J. Tanida, "Subpixel reconstruction for single-shot phase imaging with coded diffraction," Appl. Opt. 56(27), 7642–7647 (2017).

17. R. Horisaki, R. Egami, and J. Tanida, "Single-shot phase imaging with randomized light (SPIRAL)," Opt. Express 24(4), 3765–3773 (2016).

18. K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, "Image denoising by sparse 3-D transform-domain collaborative filtering," IEEE Trans. on Image Process. 16(8), 2080–2095 (2007).

19. E. Achimova, V. Abaskin, D. Claus, G. Pedrini, I. Shevkunov, and V. Katkovnik, "Noise minimised high resolution digital holographic microscopy applied to surface topography," Comput. Opt. 42(2), 267–272 (2018).

20. V. Katkovnik, I. Shevkunov, N. Petrov, and K. Eguiazarian, "Multiwavelength absolute phase retrieval from noisy diffractive patterns: wavelength multiplexing algorithm," Appl. Sci. 8(5), 719 (2018).

21. R. Gerchberg, "Super-resolution through error energy reduction," Opt. Acta 21(9), 709–720 (1974).

22. L. S. Joyce and W. L. Root, "Precision bounds in superresolution processing," J. Opt. Soc. Am. A 1(2), 149–168 (1984).

23. A. Kirkland, W. Saxton, K.-L. Chau, K. Tsuno, and M. Kawasaki, "Super-resolution by aperture synthesis: tilt series reconstruction in CTEM," Ultramicroscopy 57(4), 355–374 (1995).

24. M. G. Gustafsson, "Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy," J. Microsc. 198(2), 82–87 (2000).

25. Y. Rivenson, A. Stern, and B. Javidi, "Single exposure super-resolution compressive imaging by double phase encoding," Opt. Express 18(14), 15094–15103 (2010).

26. A. Agrawal and R. Raskar, "Resolving objects at higher resolution from a single motion-blurred image," in 2007 IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2007), pp. 1–8.

27. V. Katkovnik, I. Shevkunov, N. V. Petrov, and K. Egiazarian, "Computational super-resolution phase retrieval from multiple phase-coded diffraction patterns: simulation study and experiments," Optica 4(7), 786–794 (2017).

28. P. Kocsis, I. Shevkunov, V. Katkovnik, and K. Egiazarian, "Single exposure lensless phase imaging," in EUVIP Workshop (2018).

29. M. Rostykus, M. Rossi, and C. Moser, "Compact lensless subpixel resolution large field of view microscope," Opt. Lett. 43(8), 1654–1657 (2018).

30. S. Bernet, W. Harm, A. Jesacher, and M. Ritsch-Marte, "Lensless digital holography with diffuse illumination through a pseudo-random phase mask," Opt. Express 19(25), 25113–25124 (2011).

31. E. McLeod, W. Luo, O. Mudanyali, A. Greenbaum, and A. Ozcan, "Toward giga-pixel nanoscopy on a chip: a computational wide-field look at the nano-scale without the use of lenses," Lab Chip 13(11), 2028–2035 (2013).

32. E. McLeod and A. Ozcan, "Microscopy without lenses," Phys. Today 70(9), 50–56 (2017).

33. I. Shevkunov, V. Katkovnik, N. V. Petrov, and K. Egiazarian, "Super-resolution microscopy for biological specimens: lensless phase retrieval in noisy conditions," Biomed. Opt. Express 9(11), 5511–5523 (2018).

34. T. H. Nguyen, M. E. Kandel, M. Rubessa, M. B. Wheeler, and G. Popescu, "Gradient light interference microscopy for 3D imaging of unlabeled specimens," Nat. Commun. 8(1), 210 (2017).

35. M. Symeonidis, R. N. Suryadharma, R. Grillo, A. Vetter, C. Rockstuhl, T. Bürgi, and T. Scharf, "High-resolution interference microscopy with spectral resolution for the characterization of individual particles and self-assembled meta-atoms," Opt. Express 27(15), 20990–21003 (2019).

36. P. Kocsis, I. Shevkunov, V. Katkovnik, and K. Egiazarian, "Single exposure lensless subpixel phase imaging," in Digital Optical Technologies 2019, vol. 11062 (International Society for Optics and Photonics, 2019), p. 1106212.

37. J. W. Goodman, Introduction to Fourier optics (Roberts and Company Publishers, 2005).

38. H. C. Burger and S. Harmeling, "Improving denoising algorithms via a multi-scale meta-procedure," in Joint Pattern Recognition Symposium (Springer, 2011), pp. 206–215.

39. T. Godden, A. Muñiz-Piniella, J. Claverley, A. Yacoot, and M. Humphry, "Phase calibration target for quantitative phase imaging with ptychography," Opt. Express 24(7), 7679–7692 (2016).

40. H. Kwon, E. Arbabi, S. M. Kamali, M. Faraji-Dana, and A. Faraon, "Computational complex optical field imaging using a designed metasurface diffuser," Optica 5(8), 924–931 (2018).
