
Lensless microscopy by multiplane recordings: sub-micrometer, diffraction-limited, wide field-of-view imaging

Open Access

Abstract

Lensless microscopy is attractive because lenses are often large, heavy and expensive. We report diffraction-limited, sub-micrometer resolution in a lensless imaging system that does not need a reference wave and imposes few restrictions on the density of the sample. We use measurements of the intensity of light scattered by the sample at multiple heights above the sample and a modified Gerchberg-Saxton algorithm to reconstruct the phase of the optical field. We introduce a pixel-splitting algorithm that increases resolution beyond the size of the sensor pixels, and implement high-dynamic-range measurements. The resolution depends on the numerical aperture of the first measurement height only, while the field of view is limited by the last measurement height only. As a result, resolution and field of view can be controlled independently. The pixel-splitting algorithm also allows imaging with light of low spatial coherence, and we show that such low coherence is beneficial for a larger field of view. Using illumination from three LEDs, we produce full-color images of biological samples. Finally, we provide a detailed analysis of the limiting factors of this lensless microscopy system. The good performance demonstrated here can allow lensless systems to replace conventional microscope objectives in some situations.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Imaging without lenses is attractive, because lenses, especially when they are well corrected for aberrations, add significant cost, weight and volume to many instruments. The problem of lensless imaging is essentially equivalent to the problem of obtaining the phase of an electromagnetic field, because a field with known amplitude and phase can easily be propagated back to the object plane. While the field amplitude is easily calculated from the square root of an intensity measurement, which is the output of most image sensors, the phase must be retrieved numerically from several measurements. Phase retrieval, in addition to enabling lensless imaging, can also be used to measure additional parameters, such as refractive index or very small deflections.

The most widely used method to reconstruct the phase is through holography [1], where a reference wave interferes with the light coming from the object. The unknown phase profile of the object wave is calculated based on the known phase profile of the reference wave. Numerous applications of holographic microscopy have been found [2]. In-line holographic systems [3] benefit from a simplified setup, which is often lensless, but impose restrictions on the types of samples that can be imaged. They also suffer from the twin-image problem [4], where two overlapping reconstructions of the object partially obstruct each other. Off-axis holographic microscopy [5] has fewer restrictions on the samples that can be analyzed, but needs a separate reference wave. This imposes geometrical restrictions on the setup, and often requires more optical elements, such as beamsplitters. In the search for phase reconstruction methods that do not need a reference wave, a range of methods have been proposed, including ptychography [6] and others that are well summarized in the introductions of other publications [7,8].

One particularly simple method for phase reconstruction without a reference wave, which was proposed in the past [7,9], is what we now call Lensless Microscopy by Multiplane Recordings. It uses the intensity of light scattered by the sample, measured at multiple distances from the sample. These measurements are in effect recording the evolution of the diffraction patterns produced by the sample as a function of distance along the optical axis. From intensity measurements at a sufficient number of distinct planes, one can reconstruct the phase using a variation of the well known iterative Gerchberg-Saxton (GS) algorithm [10].

In this paper we report vast improvements in the performance of Lensless Microscopy by Multiplane Recordings, obtained in part through the development of a pixel-splitting algorithm, which allows the measurement resolution to exceed the imaging sensor’s (camera’s) pixel size. We also introduce high dynamic range (HDR) in the data collection, which improves resolution, and reduce significantly the sample to sensor distance. These improvements result in diffraction-limited imaging with a resolution of 0.78 $\mu$m when using 0.63 $\mu$m illumination. Sub-micrometer resolution is maintained for fields of view (FOVs) up to 4 mm. Our new pixel splitting algorithm also makes it possible to work with light of much lower spatial coherence than before. In addition to reporting these results, we provide an analysis of the fundamental limiting factors of this lensless microscopy method. The insights into the limiting factors of lensless systems described here should also be of value for x-ray wavelengths, where it is much more difficult to obtain high quality lenses [11].

In past work with in-line holographic lensless microscopy, resolution was improved beyond the limits of pixel size by shifting the light source laterally [12] or by using an iterative algorithm for sparse samples [13]. Multiplane holographic images, at several distances from the sample, were used for imaging of more dense samples with overlapping regions of support and to eliminate the twin-image problem [14]. Mathematically more advanced holographic methods were presented to improve resolution and eliminate the twin image based on illumination from multiple angles [15] or measurements at multiple heights [15,16]. These systems also included pixel-splitting methods similar to what we use in the present paper. A multi-height phase retrieval method using a phase-coded layer applied to the image sensor has also been demonstrated [17], where multiple images are taken as the sample is moved through a divergent laser beam. Results are obtained rapidly after a calibration step.

The advantages of the multiplane method used in this paper are that both the measurement setup and the data processing algorithm are very simple. There are no limitations on the sparsity or density of the sample. Either laser or LED light sources can be used. The system is very compact because no lateral shifting of the light source is necessary and, when using coherent light sources, the collimated laser can be placed very close to the sample. We do not need to move the sample with respect to the illumination source and there is no coding layer on the sensor, which must be calibrated. Although we need axial translation, conventional microscopes need that too, even if it is not always motorized. This means that the system proposed in this paper uses less volume than either conventional microscopy or most holographic systems, introduces little additional mechanical complexity compared to a conventional microscope, has more relaxed constraints on the sample compared to holography, and has similar optical performance.

2. Experimental setup

The experimental setup, shown in Fig. 1 (see also the photograph in Supplement 1), is similar to previous work [7]. See the figure and caption for variable definitions. For coherent light we use a collimated beam with a wavelength of 633 nm from a helium-neon (He-Ne) laser. The only elements between the laser and the sample are a variable attenuator and two mirrors used to aim the beam. We attenuate the laser beam such that measurements with an integration time of 200 $\mu$s do not saturate the sensor. For partially coherent light we use LEDs followed by an aperture, a bandpass filter and the same two mirrors as for the laser.

Fig. 1. Simplified experimental setup. Light is scattered by the sample and a portion passes through the aperture, after which it is measured by the movable sensor at $N_z$ positions. $z_{min}$, $z_{max}$, $z_T$, $\Delta z$ are respectively the minimum sample-sensor distance, the maximum distance, the total sensor travel distance and the distance between measurement planes. $L_x$, $N_x$, $\Delta x$ are respectively the size of sensor, number of pixels and pixel size in $x$ direction. ($L_y$, $N_y$, $\Delta y$, not shown, are defined similarly.) $\theta$ is the maximum angle of scattered rays that are captured.

The sensor is a CMOS monochromatic camera made by Basler AG, model daA3840-45um, which we use in 12-bit mode. It has no housing, so we can bring it very close to the sample. It has a resolution of 3840×2160 pixels and a pixel size of $\Delta x \times \Delta y = 2\times 2~\mu$m$^2$. For all our measurements we first crop the camera output to $N_x\times N_y = 2048\times 2048$ pixels, which means that our measurements are done on an area of $L_x\times L_y = 4.096\times 4.096~\mathrm{mm}^2$. Sometimes we crop further, as indicated in the text. Our sample is generally the negative USAF 1951 test target, which is mostly dark, with transparent rectangles. In most measurements, we place an aperture above the sample to limit the field of view. This aperture is necessary in order to limit the high spatial frequencies on the sensor, which could lead to aliasing, according to the Nyquist-Shannon sampling theorem [18]. The aperture is made of heavy stock black paper, cut with a laser cutter. The aperture and the glass plate protecting the sensor limit the sample-sensor distance to at least 1.4 mm. This is the optical distance, calculated from our measurements. It is slightly different from the actual physical distance, due to the refractive index of the camera protective glass.

The sensor is mounted on a Physik Instrumente M-126 translation stage, with an accuracy of 2.5 $\mu$m. We ensure that the translation stage axis is parallel to the illuminating laser beam, and that both the sensor and the test target are perpendicular to the beam. For most results presented here, we take $N_z=26$ measurements spaced by $\Delta z = 0.1$ mm, for a total sensor travel distance of $z_T=2.5$ mm. With these values of $\Delta z$ and $z_T$ we did not observe any twin-image artifacts. We have chosen to work with these values of $N_z$ and $\Delta z$ because using fewer than 26 planes or a spacing of less than 100 $\mu$m results in gradually poorer image quality, while more planes or a larger spacing does not bring significant improvements. We use MATLAB to drive the translation stage and to collect data from the camera.

3. Data processing

Data from the measured diffraction patterns is processed in MATLAB using single-precision floating point numbers. We use a modified Gerchberg-Saxton (GS) iterative algorithm [10], to which we made additional modifications (described in sec. 3.2) in order to increase the resolution beyond the sensor pixel size. The modified GS algorithm was described before [7], and is summarized here: First we take the square root of all measured intensities, to obtain the amplitudes. We start with the amplitude at the first plane, the initial plane, and assume as a first guess that the phase is zero. Using the angular spectrum method [19], we propagate the amplitude and phase from the initial plane to a new plane. The propagation is done using a 2D fast Fourier transform (FFT) to decompose the field into a set of plane waves, to which we apply the transfer function of free space. At the destination plane we take an inverse FFT to obtain the spatial amplitude and phase patterns. We discard the propagated amplitude and replace it with the measured amplitude. We keep the calculated phase pattern. The destination plane then becomes the initial plane for the next step, and we propagate to another plane. In this paper, one iteration means propagating once through all the planes. We use between 15 and 200 iterations to obtain the phase. With 26 planes, this corresponds to between 390 and 5200 propagation steps. At each iteration we use a new random permutation for the order in which we visit the measurement planes [20]. The final step is to propagate the field to the object plane. Both the amplitude and phase profiles at the object are obtained, and one can reconstruct the image at multiple depths without additional measurements or iterations.
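
As an illustration of these steps, the following is a minimal MATLAB sketch (not the authors' code) of a single pass of the modified GS algorithm with angular-spectrum propagation. It assumes a cell array A_meas of measured amplitude patterns (square roots of the recorded intensities), a vector z_list of the corresponding sensor positions, and a square pixel pitch dx; all variable names are illustrative.

lambda = 633e-9;  dx = 2e-6;                      % wavelength and pixel pitch [m]
[Ny, Nx] = size(A_meas{1});
fx = (-Nx/2:Nx/2-1)/(Nx*dx);                      % centred spatial-frequency axes
fy = (-Ny/2:Ny/2-1)/(Ny*dx);
[FX, FY] = meshgrid(fx, fy);
arg = 1/lambda^2 - FX.^2 - FY.^2;                 % squared axial spatial frequency
kz  = 2*pi*sqrt(max(arg, 0));                     % axial wavenumber (evanescent part zeroed)
prop = @(U, dz) ifft2(fft2(U) .* ifftshift(exp(1i*kz*dz) .* (arg > 0)));

order = randperm(numel(A_meas));                  % new random plane order for this iteration
U = A_meas{order(1)};                             % first guess: measured amplitude, zero phase
for k = 2:numel(order)
    dz = z_list(order(k)) - z_list(order(k-1));   % signed distance to the next plane
    U  = prop(U, dz);                             % angular-spectrum propagation
    U  = A_meas{order(k)} .* exp(1i*angle(U));    % keep phase, enforce measured amplitude
end
U_obj = prop(U, -z_list(order(end)));             % finally, back-propagate to the object plane

In the full algorithm the loop above is repeated for many iterations, each with a new random permutation, before the final back-propagation to the object plane.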

The algorithm described so far has already been reported multiple times. To increase the resolution and field of view, we made the improvements described in the next two subsections.

3.1 High dynamic range

Collecting light with high dynamic range (HDR) is important in lensless imaging, especially for high resolutions, which need a high numerical aperture (NA). In a conventional system a higher NA does not require a larger sensor area, because the lens collects the diverging rays from an object point and focuses them back to a small area on the sensor. In our lensless system, NA is proportional to sensor size (unless the object-sensor distance can be reduced). High-NA imaging requires the sensor to collect light that has low intensity, because it has spread over a large area. HDR helps because it allows the sensor to detect these low intensities on the sides of the sensor, and at the same time not saturate from the high intensities near the centre. In addition, measurements far from the object have lower intensities compared to those closer to the object.

We use a very simple method for HDR measurements, with two integration times. The lower integration time is chosen so as to not saturate the sensor at any of the planes. The other integration time is 32 times higher. To reduce noise we repeat all measurements 10 times. At each plane we first make 10 measurements at the high integration time, average the results and keep the unsaturated pixels. The values for the saturated pixels are taken from 10 averaged measurements at low integration time, multiplied by a factor of 32. Using a 12-bit sensor we obtain an effective dynamic range of 17 bits.
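
A minimal sketch of this two-exposure merge (illustrative variable names, not the authors' code), assuming frames_hi and frames_lo are H x W x 10 stacks of 12-bit frames recorded at the long and short integration times:

hi  = mean(double(frames_hi), 3);                 % average the 10 long-exposure frames
lo  = mean(double(frames_lo), 3);                 % average the 10 short-exposure frames
sat = max(frames_hi, [], 3) >= 4095;              % pixels that saturate the 12-bit range
hdr = hi;
hdr(sat) = 32 * lo(sat);                          % rescaled short-exposure values replace them
amp = sqrt(hdr);                                  % amplitude passed to the GS iterations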

3.2 Pixel splitting

Generally, the pixel size imposes an approximate limit on resolution, because for higher resolutions we need to include more inclined waves (collecting light with higher NA). Assuming that the incident wavefield is decomposed into plane waves, the most inclined plane waves that can be correctly reconstructed are those that present a $180^{\circ}$ phase difference between adjacent pixels. Any waves that are more inclined, and present a phase difference of $180^{\circ} + \psi$ between adjacent pixels, will be indistinguishable from waves with a phase difference of $-180^{\circ} + \psi$. As a result, the direction of such a wave will not be defined uniquely. With a phase difference of $180^{\circ}$ between adjacent pixels, the propagation angle $\theta$ away from the normal satisfies $\sin\theta =\lambda /(2\Delta x)$. This is equal to the NA, because $\theta$ represents the maximum inclination of waves that can be measured. Since the resolution is approximately given by $\lambda /(2\,\mathrm{NA})$, we calculate the resolution to be around $\Delta x$, the pixel size.

To overcome this, we use the following pixel-splitting algorithm (shown schematically in Fig. 2) to reduce the image pixel size below the size of the sensor pixels. We first use the modified GS algorithm presented above to obtain the phase of the measured diffraction patterns at every sensor pixel. We then split each pixel into 4 quarters (divide by 2 in the $x$ and $y$ directions) and iterate again. As the initial guess for the new round of iterations, each of these 4 sub-pixels receives the same amplitude and phase as the original pixel that was split.

Fig. 2. Pixel-splitting: Iterations (in random order) are done on original pixel size (a). Then pixels are split into 4 sub-pixels, for a new set of iterations (b), and then split again (c).

We use again the modified GS algorithm to propagate between planes and keep the calculated phase. For the calculated amplitudes, we maintain the relative ratios between the 4 sub-pixels, but replace their average value with the amplitude of the measured pixel. The steps are: calculate the average $a$ of the 4 sub-pixel amplitudes; calculate the ratio $r$ between the measured pixel value and $a$; multiply each of the 4 sub-pixels by $r$. Note that the sum of the amplitudes of all sub-pixels in the image is 4 times larger than the sum of the amplitudes of the original pixels. This factor of 4 does not affect the results presented here, because our calculated amplitude values are left in arbitrary units. One can, however, correct for this factor of 4 at the end of the calculations.
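
A minimal MATLAB sketch of this constraint (illustrative names, not the authors' code): U is the propagated complex field on the split grid, twice as large as the sensor grid in each direction, and A_meas is the measured amplitude on the original grid. Note that the pixel pitch used for propagation must also be halved.

U   = kron(U_coarse, ones(2));                    % initial split: 4 identical sub-pixels per pixel

% amplitude constraint applied at each measurement plane, on the split grid:
amp = abs(U);
a   = ( amp(1:2:end,1:2:end) + amp(2:2:end,1:2:end) + ...
        amp(1:2:end,2:2:end) + amp(2:2:end,2:2:end) ) / 4;   % mean of each 2x2 block of sub-pixels
r   = A_meas ./ max(a, eps);                      % measured amplitude / calculated average
U   = (amp .* kron(r, ones(2))) .* exp(1i*angle(U));   % rescale sub-pixels, keep calculated phase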

Because the matrices of amplitude and phase are now 4 times larger, calculations take 4 times longer. After these new iterations, we can split the pixels again and iterate once more. The procedure is the same, but now the averages $a$ use 16 sub-pixels. We have found that up to 3 rounds of pixel splitting can be beneficial (splitting one pixel into 64 sub-pixels).

We use a single thread on an Intel i7-7700K CPU and single precision floating point numbers. For measurements of 2048x2048 pixels at 26 planes, the time duration per iteration is approximately 8 seconds, 30 seconds and 2 minutes respectively for no pixel splitting, splitting once and splitting twice. The pixel splitting presented here is similar to some methods to increase resolution in holographic microscopy [15,16]. As described in Supplement 1, Sec. S2, we also interpolate and crop the final image in the frequency domain, which brings small advantages in image quality and computation time.

4. Results

4.1 Diffraction-limited resolution

Figure 3 compares the effects of HDR and pixel splitting on image resolution, using a USAF 1951 test target. Laser light of 633 nm is used. A 220 $\mu$m diameter round aperture is placed above the target, to limit the FOV. We use $N_z =26$ planes, separated by $\Delta z = 100~\mu$m. $N_x=N_y= 2048$ pixels. The closest and farthest planes are at $z_{min}=1.4$ mm and $z_{max}=3.9$ mm from the target. Four cases are presented, with various combinations of HDR and pixel splitting. For each case, the first image shows a wide view which captures the entire aperture, the second image shows details of Groups 8 and 9 of the target, and the last image shows amplitude profiles at the highest resolution in the vertical and horizontal directions. The small red and green bars in the second image show where the profiles were measured.

Fig. 3. Lensless reconstruction of USAF 1951 target, showing resolution improvements from high dynamic range (HDR) and pixel splitting. Images a,d,g,j show the entire aperture of 220 $\mu$m. Images b,e,h,k show details. Graphs c,f,i,l show amplitude profiles at the resolution limit. Red lines show horizontal resolution, at the locations marked with red bars. Green lines show vertical resolution, at the green bars. a-c: Standard (12-bit) dynamic range and pixels split twice to a size of 0.5 $\mu$m. d-f: HDR (17-bit) and no pixel splitting (2 $\mu$m pixels). g-i: HDR and pixels split once to 1 $\mu$m. j-l: HDR and pixels split twice to 0.5 $\mu$m.

Figures 3(a-c) use standard dynamic range (SDR) and pixels split twice (each $2\times 2~\mu$m pixel is split into sixteen $0.5\times 0.5~\mu$m sub-pixels). While the measured resolution is quite good (Group 9, Element 2 is resolved, corresponding to 0.87 $\mu$m features), the overall image quality in Fig. 3(b) is not as high as in Fig. 3(h). Figures 3(d-f) show the case with HDR and no pixel splitting. In this case Group 8, Element 2 is resolved, with 1.74 $\mu$m features. This is a little better than the 2 $\mu$m pixel size of our sensor, and matches our approximate prediction from section 3.2 that resolution is limited to the pixel size. Figures 3(g-i) show the case with HDR and pixels split once, for a sub-pixel size of 1 $\mu$m. Group 9, Element 1 is resolved, with 0.98 $\mu$m features. The resolution is almost identical to the sub-pixel size.

Figures 3(j-l) show the case with HDR and pixels split twice to a size of 0.5 $\mu$m. Group 9, Element 3 is resolved, with 0.78 $\mu$m features. We note that this is only slightly larger than the wavelength of 633 nm. This resolution does not reach the diffraction limit based on the size of the sensor, but it matches the diffraction limit based on the divergence of light from the USAF target, because light did not diverge enough to fill the sensor. Only the center of the sensor measured light intensities above the noise floor, as shown in Supplement 1, Sec. S3.

When cropping the measurements to $N_x=N_y=1024$ pixels we obtain diffraction-limited resolution, shown in Fig. 4. Group 9, Element 2 is resolved with feature size of 0.87 $\mu$m. We use $L_x = L_y = 2.048$ mm on the sensor, $z_{min}=1.4$ mm and $z_{max}=3.9$ mm. Based on these dimensions, the NA is 0.59 and 0.25 at the first and last plane respectively. For coherent light, the diffraction-limited resolution is given by [21] $\mathrm {Res}=0.82\lambda /\mathrm {NA}$. This gives calculated resolutions of 0.88 $\mu$m and 2.1 $\mu$m at the first and last plane respectively. It is remarkable that only the first plane has sufficient NA for the achieved resolution of 0.87 $\mu$m.
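
These numbers can be checked directly with a short numeric restatement of the values quoted above (all lengths in micrometers):

lambda = 0.633;                                   % illumination wavelength
Lx = 2048;  zmin = 1400;  zmax = 3900;            % cropped sensor width and plane distances
NA_first = sin(atan(Lx/2/zmin));                  % ~0.59 at the closest plane
NA_last  = sin(atan(Lx/2/zmax));                  % ~0.25 at the farthest plane
res_first = 0.82*lambda/NA_first;                 % ~0.88 um
res_last  = 0.82*lambda/NA_last;                  % ~2.1 um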

Fig. 4. Image obtained by cropping the measurements to 1024x1024 pixels. Diffraction-limited resolution is 0.87 $\mu$m.

In order to obtain the results in Fig. 4 with the cropped sensor output, we had to use many more iterations than before. Results in Fig. 3 used 15 iterations at each round (original pixels, splitting once and splitting twice). Results in Fig. 4 used 200 iterations at each round, but each step is 4 times faster due to the smaller matrix sizes. We need more iterations because most planes do not have sufficient NA and cannot contribute much to the fine image details. We have to visit the first plane many times in order to achieve the diffraction-limited resolution.

Supplement 1, Sec. S4 shows that three rounds of pixel splitting, going from $4\times 4~\mu$m$^2$ pixels to $0.5\times 0.5~\mu$m$^2$, also work well, with diffraction-limited resolution.

4.2 Wide field of view

For this lensless system, the field of view (FOV), or field size, is generally limited by the sensor resolution through the Nyquist-Shannon sampling theorem [18]. Light rays from opposite edges of the FOV which meet at the sensor form a standing wave. With a FOV of size $b$ and a sensor-to-object distance $z$, the standing wave has a periodicity of $\lambda /(2\sin(\tan^{-1}(b/2z)))$. According to the sampling limit, the sensor pixel pitch must be at least two times smaller than this periodicity.
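
As a numeric check of this criterion (using the 220 $\mu$m aperture and 1.4 mm minimum distance of Sec. 4.1; lengths in micrometers):

lambda = 0.633;  dx = 2;                          % wavelength and sensor pixel pitch
b = 220;  z = 1400;                               % FOV (aperture) and closest sensor distance
period = lambda/(2*sin(atan(b/(2*z))));           % ~4.0 um: finest fringe period at that plane
max_pitch = period/2;                             % ~2.0 um: matches the 2 um pixels of the sensor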

The results shown so far (Figs. 3 and 4) were obtained with a FOV of 220 $\mu$m, which was chosen to match the sampling limit requirements with our $z_{min}$ of 1.4 mm, pixel size $\Delta x$ of 2 $\mu$m and red light. Figure 5 shows a measurement with a FOV of 760 $\mu$m and $z_{min} =1.9$ mm. The results are still good, although the resolution is slightly reduced to 0.87 $\mu$m. In this case, the sampling limit is met only at sensor-object distances greater than 4.8 mm. The measurements have $z_{min} = 1.9$ mm and $z_{max} = 4.4$ mm. This means that the sampling limit is almost matched at the last measurement plane. Given that the USAF target has many black regions, it is not surprising that we can exceed the sampling limit somewhat. Supplement 1, Sec. S5 shows that further FOV increases result in poor reconstruction.

Fig. 5. Image obtained with a larger aperture of 760 $\mu$m. Resolution is 0.87 $\mu$m.

Section 4.1 shows that diffraction-limited resolution can be obtained when only one plane of the entire multi-plane exposure satisfies the NA requirements. The same appears to be the case with FOV: only the planes farthest from the object need to satisfy the sampling limit. To test this, we performed measurements without any aperture at the sample, but with a 2.1 mm jump between the middle planes, as follows: the first 13 planes cover the distance from 1.8 to 3.0 mm from the object, with $\Delta z = 0.1$ mm; the remaining 13 planes cover the distance from 5.1 to 6.3 mm from the object, again with $\Delta z = 0.1$ mm. Results are shown in Fig. 6. The entire FOV of 4 mm is reconstructed correctly, resolving Group 9, Element 1 with a feature size of 0.98 $\mu$m. The resolution is only 25% lower than with a 220 $\mu$m aperture. We conclude that the measurements close to the object provide good resolution, while those farther away facilitate reconstruction with a very wide FOV. Although the sampling limit is not satisfied for any of the measurement planes, we still obtain good results because of the dim edges (due to the narrow laser beam used) and the many dark areas of the USAF target.
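
For reference, the plane positions used for this measurement (the values stated above) can be written as:

z_near = 1.8:0.1:3.0;                             % mm: 13 close planes, providing the NA (resolution)
z_far  = 5.1:0.1:6.3;                             % mm: 13 far planes, relaxing the sampling limit (FOV)
z_list = [z_near, z_far];                         % 26 planes with a 2.1 mm gap in the middle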

Fig. 6. Image obtained with no aperture at the sample, and with a 2.1 mm jump between the middle measurement planes. Pixels split twice. a: Entire field of view recorded (4 mm). Dim edges are due to illumination with a narrow laser beam. b-c: Detail of Groups 4-7. d-e: Detail of Groups 8 and 9 when focused at 1.825 (better horizontal resolution) and 1.827 mm (better vertical resolution) from the sensor respectively. f: amplitude profiles for best horizontal (red line, focused at 1.825 mm) and vertical (green line, focused at 1.827 mm) resolution.

We notice, however, that the reconstruction introduces weak aberrations similar to the astigmatism produced by a cylindrical asymmetry in the optical system. In Fig. 6(d) we focus at 1.825 mm from the first measurement plane, while in Fig. 6(e) we focus at 1.827 mm. The first case gives a better horizontal resolution, while the second gives better vertical resolution. This aberration seems to be due to the data processing, as there is no physical reason for it. We have observed that such astigmatism depends on the exact order in which we visit the individual planes while iterating. If we change the random seed through which we randomize the order, the astigmatism changes.

4.3 Mostly transparent sample

The results so far have used the negative USAF 1951 target, which is mostly dark, with transparent bars. To test whether our method works also for the opposite case, we show in Fig. 7 results with a positive USAF 1951 target, which is mostly transparent. We obtain the same resolution as before, resolving Group 9, Element 3, with a feature size of 0.78 $\mu$m. We conclude that our method works across a very wide range of amplitude-modulated samples from mostly dark to mostly transparent. This is a difference compared to in-line digital holography techniques, which usually need a mostly transparent sample.

Fig. 7. Image of positive USAF 1951 target, showing similar resolution to negative target.

4.4 Partially coherent light

All measurements presented until now were done with 633 nm coherent laser light. For the remainder of the paper we use partially coherent light. Except for the biological sample in sec. 4.5, we use a red LED with a 4 nm bandpass filter, centered at a wavelength of 632 nm. This results in a temporal coherence length of $\lambda ^2/\Delta \lambda = 100~\mu$m. We use an aperture at the LED, and a large distance between this aperture and the sample, in order to control the spatial coherence of the light.

Figure 8 shows the case where the size of the aperture at the sample is similar to the spatial coherence length. The sample aperture is 420 $\mu$m. The LED aperture is 0.8 mm and the LED distance is 32 cm, giving a spatial coherence of 250 $\mu$m. Even though the target aperture is somewhat larger than the spatial coherence, Fig. 8(a) shows that there are very few features included on the edges of the FOV, outside of Groups 6 and 7 of the USAF target. Most light comes from Groups 6 and 7, whose size matches the spatial coherence length. As before, $N_x = N_y = 2048$ pixels, $\Delta x = \Delta y = 2~\mu$m and pixels were split twice. We can resolve Group 9, Element 3, with a feature size of 0.78 $\mu$m. This matches exactly the resolution obtained with laser light in Figs. 3(j-l). Supplement 1, Sec. S6 shows results with measurements cropped to 1024x1024 pixels, which are also similar to the case with laser illumination. It is not surprising that these LED results match the laser results, because of the sufficiently large spatial and temporal coherence.

Fig. 8. Image obtained with partially coherent (LED) illumination and an aperture of 420 $\mu$m. Spatial coherence length is 250 $\mu$m. The resolution of 0.78 $\mu$m is equal to that with laser illumination.

When the spatial coherence length at the sample is much smaller than the FOV, the spatial coherence forms a “virtual aperture”, which helps satisfy the sampling theorem limit. As discussed in section 4.2, rays from opposite edges of the FOV form a standing wave which must be resolved by the sensor. If, however, the illumination has low spatial coherence, rays will not interfere coherently if they come from object points that are far from each other. Standing waves form only with rays that originate within one spatial coherence length from each other. In our case, with $\Delta x = 2~\mu$m and $z_{min} = 1.9$ mm, we need a spatial coherence less than 300 $\mu$m to ensure that all standing waves can be resolved by the sensor, no matter the size of the FOV.

Figure 9 shows results with this “virtual aperture”. We use the same illumination geometry as for Fig. 8, with a spatial coherence length of 250 $\mu$m, and we use no aperture at the target. The FOV is 4 mm. Figures 9(a-b) show the case with no pixel splitting. Reconstruction of Groups 6 and above is poor. Splitting pixels once, shown in Figs. 9(c-e), improves the image significantly. Group 7, Element 2 is resolved, with a feature size of 3.5 $\mu$m.

Fig. 9. Image obtained with partially coherent (LED) illumination and no aperture at the sample. Spatial coherence length is 250 $\mu$m. a-b: No splitting of pixels. Groups 6 and higher reconstruct very poorly. c-e: Splitting pixels once gives much better reconstruction. Resolution obtained is 3.5 $\mu$m.

It is somewhat surprising that splitting the pixels removed the artifacts seen in Fig. 9(b). The pixel-splitting algorithm, as presented in sec. 3.2, is meant to improve the resolution beyond 2 $\mu$m, but the artifacts seen here appear at much bigger feature sizes. While we do not have a good explanation for it, we have observed several times that the pixel splitting algorithm helps remove artifacts when working with large FOVs.

We explain the resolution of 3.5 $\mu$m obtained in Figs. 9(d-e) as follows: The rays that arrive at one point on the sensor and interfere coherently to form a diffraction pattern come from an area on the object that is limited by the coherence length. This limits the maximum inclination of the rays that contribute to the image, and through this the NA of the imaging system. In this case, spatial coherence length is 250 $\mu$m and $z_{min} = 1.9$ mm. This corresponds to NA = 0.066, which gives a calculated resolution of 4.8 $\mu$m. The measured resolution is about 25% better, which indicates that the rays that participate in the image formation are more inclined than our assumption, but not by much. See Supplement 1, Sec. S7 for a more detailed analysis.
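
Numerically (lengths in micrometers; as an assumption consistent with the quoted NA of 0.066, we take half the coherence length over $z_{min}$ as the effective NA, and apply the approximate $\lambda/2\mathrm{NA}$ criterion of sec. 3.2):

lambda = 0.632;  Lc = 250;  zmin = 1900;          % wavelength, spatial coherence length, distance
NA  = sin(atan((Lc/2)/zmin));                     % ~0.066
res = lambda/(2*NA);                              % ~4.8 um calculated resolution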

Supplement 1, Sec. S8 shows that the same relationship between spatial coherence and resolution also holds for shorter coherence lengths. Longer coherence lengths exceed the sampling limit and give poor results.

4.5 Biological sample

To test that our multiplane method also works with more complex samples, in Fig. 10 we show lensless measurements made on a cross section of a thin wood branch. Cells and cell membranes are seen clearly. Separate measurements are made with red (632 nm), green (532 nm) and blue (488 nm) LEDs. We use a 420 $\mu$m aperture at the sample. The illumination is done through an aperture of 0.8 mm placed at 38 cm from the sample resulting in spatial coherence lengths of 300, 250 and 230 $\mu$m respectively for red, green and blue.

Fig. 10. Image of a biological sample (cross-section through a branch of wood). a,c,e: Amplitude reconstructions with red, green, blue light. b,d,f: Phase reconstructions with red, green, blue light. g: Full-color reconstruction combining information from a,c,e. h: Imaging of the same area of the sample through a microscope objective, for comparison.

In addition to the amplitudes, the lensless method also provides the phase of light transmitted through the sample, which we also show here. The calculated phase results had a linear phase shift from one side to the other, probably because the sensor was not perfectly perpendicular to the optical axis. We removed this phase shift numerically in the images shown here.
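
The paper does not state how the tilt was removed; one simple way to do it (a sketch under the assumption that the linear ramp dominates the spectrum of the reconstructed complex field U, not necessarily the authors' procedure) is to locate the carrier peak in the Fourier domain and demodulate it:

[Ny, Nx] = size(U);
F = fftshift(fft2(U));
[~, idx] = max(abs(F(:)));                        % dominant spatial frequency = linear phase tilt
[iy, ix] = ind2sub(size(F), idx);
fy = (iy - (Ny/2 + 1))/Ny;                        % carrier frequency in cycles per pixel
fx = (ix - (Nx/2 + 1))/Nx;
[X, Y] = meshgrid(0:Nx-1, 0:Ny-1);
U_flat = U .* exp(-2i*pi*(fx*X + fy*Y));          % remove the linear phase shift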

We combine the three colors to obtain a full-color lensless image, shown in Fig. 10(g). For comparison, we use the same three LEDs to take images through a conventional microscope objective (20x, NA=0.37) and show the results in Fig. 10(h). Because our sample and apertures moved slightly while changing LEDs, the images in the three colors were slightly shifted from each other. We shifted them back (based on image features) before producing the color images. Because each color is measured separately, we don’t have good information about the relative intensity of each color. This explains the difference in the overall color between the lensless and conventional image.
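
A sketch of one way to register and merge the three reconstructions (Ar, Ag, Ab are assumed to be the red, green and blue amplitude images; the cross-correlation shift estimate below stands in for the feature-based alignment described above, and the per-channel scaling is arbitrary because the relative LED intensities are not known):

Cg = ifft2(fft2(Ar - mean(Ar(:))) .* conj(fft2(Ag - mean(Ag(:)))));   % circular cross-correlation
[~, idx] = max(abs(Cg(:)));
[dy, dx] = ind2sub(size(Cg), idx);
Ag_al = circshift(Ag, [dy-1, dx-1]);              % shift green into register with red
% (repeat the three lines above with Ab to obtain Ab_al)
rgb = cat(3, Ar/max(Ar(:)), Ag_al/max(Ag_al(:)), Ab_al/max(Ab_al(:)));
image(rgb); axis image off;                       % full-color composite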

While the vast majority of the features in the lensless image match closely the microscope objective image, we could not perfectly reconstruct the details at the top-left of the red image, and this produces the color fringes in Fig. 10(g). We do not know what caused this, but the green and blue images are reconstructed correctly.

5. Discussion of performance limits

From these results, we conclude that the performance of our lensless imaging method is limited by the following factors:

Resolution is limited by NA, as with other imaging systems. The NA requirement must only be satisfied by the nearest measurement plane, but this requires more iterations. In order to resolve features smaller than the sensor pixel size, a pixel-splitting algorithm must be used.

Field of view (FOV), for light with sufficiently high spatial coherence, is limited by the sensor resolution and distance to the last plane through the Shannon-Nyquist sampling theorem, although our results show that we can exceed this limit slightly. Light with a spatial coherence length that is less than the aperture size required by the sampling limit allows much larger FOVs, because the range of angles at which light rays interfere coherently is reduced. The trade-off, however, is a reduction in resolution, because the NA of the system is also reduced.

Spatial and temporal coherence of light: A wide range of spatial coherence lengths can be used. High spatial coherence gives good resolution, but needs the FOV to be limited by an aperture. Lower spatial coherence allows larger FOVs, with reduced resolution. Temporal coherence needs to be high enough, so that the inclined rays can interfere coherently, as detailed in Supplement 1, Sec. S9. A small FOV or a small spatial coherence allows one to work with small temporal coherence.

Number of measurement planes: We did not study this systematically, but we observed a reduction in image quality for less than approximately 15 planes. We can estimate the number of planes from a quantity of information perspective, as follows: If we split pixels 3 times, we get 64 sub-pixels for every original pixel. Sub-pixel values are complex, whereas the measurements provide real numbers. Therefore, there are 128 times more data points in our processed matrices, compared to the original sensor output. The aperture at the sample reduces the area over which we expect non-zero results. The resulting minimum number of planes should be equal to $128 \times (\mathrm {aperture\ area})/(\mathrm {sensor\ area})$, assuming the divergent light from the object fills the sensor.

Distance between planes: We observed a gradual reduction in quality for distances below 50 $\mu$m. More closely spaced measurements also require more iterations during data processing.

Number of iterations: This number varies greatly. In our measurements, we used between 15 and 200 iterations. (By one iteration we mean that we visit each plane once, which corresponds to 26 propagation steps.) If the distance travelled in each propagation step is increased, the required number of iterations decreases. If not all measurement planes satisfy the NA or sampling theorem limits, the number of iterations increases.

6. Conclusions

In this paper we reported results for lensless microscopy by multiplane recordings that are vastly improved compared to previous reports. We achieve sub-micrometer, diffraction-limited resolution, with both laser and LED light. The best resolution obtained (0.78 $\mu$m) approaches the wavelength used (0.63 $\mu$m). Our improvements are due to a much smaller object-sensor distance, the introduction of a new pixel-splitting method, and the collection of data with high dynamic range (HDR). The pixel splitting allows resolutions below the pixel size, and also makes it possible to reconstruct images with large fields of view (FOVs) when using light with low spatial coherence. We demonstrate our results with objects that are mostly dark (negative USAF 1951 target), mostly transparent (positive USAF 1951 target), as well as biological samples. We use both laser and LED light, and combine results from three LED colors to obtain full-color images. While we use visible light, our methods should be directly transferable to other wavelengths, such as x-rays, where quality lenses are much more difficult to obtain.

Resolution is limited by the numerical aperture (NA), which in our case depends on the sensor size and object-sensor distance. Shorter distances result in improved NA and resolution. The FOV for coherent light is limited by the Shannon-Nyquist sampling theorem, and depends on the sensor resolution and object-sensor distance. Larger distances lead to an increased FOV. A very beneficial aspect of multiplane recordings is that no single plane must satisfy both the NA and FOV requirements. Resolution is determined by the closest measurement plane, while FOV is determined by the farthest plane. Since the distance between the closest and farthest planes can be adjusted easily, resolution and FOV can be controlled independently. In our measurements (Fig. 6), we obtained a FOV of $4\times 4~\mathrm{mm}^2$ with a resolution of 0.98 $\mu$m. FOV can also be increased by using light with lower spatial coherence together with the pixel-splitting algorithm, but image resolution will be reduced. In this case, the only requirement is that the spatial coherence be less than the sampling limit.

Our lensless system occupies vastly less space than a conventional microscope, while providing comparable performance in some cases. Most results shown in this article were obtained with a $4\times 4~\mathrm{mm}^2$ measurement area, with a total travel distance of 2.5 mm. This results in a scanned volume of 40 mm$^3$. This volume does not include the mechanical translation stage and other electronics, however. We have also obtained sub-micrometer resolution with volumes of $2\times 2\times 2.5~\mathrm{mm}^3$. Reducing the total distance between measurement planes to less than 2.5 mm is possible, and only brings a gradual reduction in quality and a gradual increase in computation time. Hence, further reductions in scanned volume are feasible. The lensless system avoids most optical aberrations, although we observed small amounts of astigmatism from the numerical processing. Our data processing provides exact dimensions for our images, and a precise object-sensor distance, with an uncertainty limited only by the uncertainty in the pixel pitch of the sensor.

As with holographic methods, we do not need to select a plane of focus ahead of time. Instead, we can focus after collecting the data, and can get information from an imaging volume, by scanning the focal plane during processing. We obtain easily both amplitude and phase information from the sample, which helps with transparent samples and reduces the need for staining or labeling. The main advantage compared with in-line holographic microscopy is that we do not require mostly transparent samples, or the presence of a reference beam, and our system is also more compact.

The biggest drawback of our method is the long duration for a measurement. Data collection is slower than conventional microscopy or holography, since we need to scan the sensor over a range of heights. Data processing is also slow (minutes to hours on an Intel i7-7700 CPU), significantly slower than with digital holography. A positive aspect, however, is that the software is very simple (a few hundred lines in MATLAB). The long measurement duration means that samples need to be stable over extended periods of time. It is also difficult to find the desired sample location, since there is no immediate visual feedback. There is, however, no need to focus ahead of time. An additional drawback is that one needs an aperture to limit the FOV, which must be properly positioned above or below the sample. Our system requires moving parts, which can be considered a drawback, but axial motion is also required in conventional microscopes for focusing, and in some holography systems for improving resolution and solving the twin-image problem.

Our measurements are done in transmission. Measurements in reflection could be done only by increasing the object-sensor distance, to make room for the illumination. There is no objective lens through which one could illuminate, as is often done in conventional microscopy.

This method of lensless imaging has been known for almost two decades [9]. We believe that it deserves renewed attention at this time, due to the significant performance improvements reported in this paper, which match in some instances the performance of conventional optical microscopy.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

References

1. D. Gabor, “A new microscopic principle,” Nature 161(4098), 777–778 (1948).

2. B. Javidi, A. Carnicer, A. Anand, et al., “Roadmap on digital holography,” Opt. Express 29(22), 35078–35118 (2021).

3. W. Xu, M. Jericho, I. Meinertzhagen, and H. Kreuzer, “Digital in-line holography of microspheres,” Appl. Opt. 41(25), 5367–5375 (2002).

4. G. Koren, F. Polack, and D. Joyeux, “Iterative algorithms for twin-image elimination in in-line holography using finite-support constraints,” J. Opt. Soc. Am. A 10(3), 423–433 (1993).

5. P. Marquet, B. Rappaz, P. J. Magistretti, E. Cuche, Y. Emery, T. Colomb, and C. Depeursinge, “Digital holographic microscopy: a noninvasive contrast imaging technique allowing quantitative visualization of living cells with subwavelength axial accuracy,” Opt. Lett. 30(5), 468–470 (2005).

6. A. M. Maiden, M. J. Humphry, F. Zhang, and J. M. Rodenburg, “Superresolution imaging via ptychography,” J. Opt. Soc. Am. A 28(4), 604–612 (2011).

7. A. Schiebelbein and G. Pedrini, “Lensless phase imaging microscopy using multiple intensity diffraction patterns obtained under coherent and partially coherent illumination,” Appl. Opt. 61(5), B271–B278 (2022).

8. C. Zuo, J. Li, J. Sun, Y. Fan, J. Zhang, L. Lu, R. Zhang, B. Wang, L. Huang, and Q. Chen, “Transport of intensity equation: a tutorial,” Opt. Lasers Eng. 135, 106187 (2020).

9. G. Pedrini, W. Osten, and Y. Zhang, “Wave-front reconstruction from a sequence of interferograms recorded at different planes,” Opt. Lett. 30(8), 833–835 (2005).

10. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21(15), 2758–2769 (1982).

11. H. N. Chapman and K. A. Nugent, “Coherent lensless x-ray imaging,” Nat. Photonics 4(12), 833–839 (2010).

12. W. Bishara, T.-W. Su, A. F. Coskun, and A. Ozcan, “Lensfree on-chip microscopy over a wide field-of-view using pixel super-resolution,” Opt. Express 18(11), 11181–11191 (2010).

13. O. Mudanyali, D. Tseng, C. Oh, S. O. Isikman, I. Sencan, W. Bishara, C. Oztoprak, S. Seo, B. Khademhosseini, and A. Ozcan, “Compact, light-weight and cost-effective microscope based on lensless incoherent holography for telemedicine applications,” Lab Chip 10(11), 1417–1428 (2010).

14. A. Greenbaum and A. Ozcan, “Maskless imaging of dense samples using pixel super-resolution based multi-height lensfree on-chip microscopy,” Opt. Express 20(3), 3129–3143 (2012).

15. W. Luo, Y. Zhang, Z. Gorocs, A. Feizi, and A. Ozcan, “Propagation phasor approach for holographic image reconstruction,” Sci. Rep. 6(1), 34679 (2016).

16. J. Zhang, J. Sun, Q. Chen, J. Li, and C. Zuo, “Adaptive pixel-super-resolved lensfree in-line digital holography for wide-field on-chip microscopy,” Sci. Rep. 7(1), 11777 (2017).

17. C. Guo, S. Jiang, P. Song, T. Wang, X. Shao, Z. Zhang, and G. Zheng, “Quantitative multi-height phase retrieval via a coded image sensor,” Biomed. Opt. Express 12(11), 7173–7184 (2021).

18. C. E. Shannon, “Communication in the presence of noise,” Proc. IRE 37(1), 10–21 (1949).

19. J. Goodman, Introduction to Fourier Optics (McGraw-Hill, 1996).

20. P. F. Almoro, G. Pedrini, P. N. Gundu, W. Osten, and S. G. Hanson, “Phase microscopy of technical and biological samples through random phase modulation with a diffuser,” Opt. Lett. 35(7), 1028–1030 (2010).

21. M. Born and E. Wolf, Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light (Cambridge University, 1999).

Supplementary Material (1)

Supplement 1: Supplemental Information
