
Long-range Fourier ptychographic imaging of the dynamic object with a single camera


Abstract

Fourier ptychographic imaging is a new imaging method proposed in recent years. This technology captures multiple low-resolution images and synthesizes them into a high-resolution image in the Fourier domain by a phase retrieval algorithm, breaking through the diffraction limit of the lens. In the field of macroscopic Fourier ptychographic imaging, most existing research focuses on high-resolution imaging of static objects, and applying Fourier ptychographic imaging technology to dynamic objects is currently an active research area. At present, most approaches use camera arrays combined with multiplexed illumination, deep learning, or other algorithms, but these methods are complicated or costly to implement. Based on the diffraction theory of Fourier optics, this paper proposes that by expanding and focusing the illumination area, Fourier ptychographic imaging with a single camera can be applied to objects moving within a certain range. Theoretical analysis and experiments prove the feasibility of the proposed method. We successfully achieve high-resolution imaging of a dynamic object, increasing the resolution by about 2.5 times. This paper also investigates the impact of speckle in the illuminated area on the imaging results and proposes a processing method to reduce it.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Fourier ptychographic imaging (FP) is a computational imaging technique that combines phase retrieval algorithms [1,2] and the synthetic aperture technique [3]. It has been successfully applied to both microscopic [4–6] and macroscopic [7–9] imaging. FP overcomes the inherent trade-off between resolution and field of view (FOV) in imaging systems, which is of great significance in the imaging field [5]. It breaks the physical limit of high-throughput, high-resolution imaging by computation. In conventional imaging systems, a small-aperture lens interferes with the propagation of the wavefront, preventing the collection of high-frequency information and causing diffraction blur. According to the Rayleigh criterion, when the illumination is incoherent, the minimum resolvable distance is inversely proportional to the aperture size. Increasing the lens aperture is a direct way to improve resolution, but large-aperture lenses are expensive and bulky, which makes them impractical in most cases [9–11]. In contrast, under coherent illumination, different spatial-frequency components of the image are regularly distributed over a plane at a certain distance [10]. Fourier ptychographic imaging collects information at different frequencies through a small-aperture lens and synthesizes it with a reconstruction algorithm to achieve the imaging effect of a large-aperture lens, which greatly improves the space-bandwidth product (SBP) and resolution of the imaging system.

In 2013, Zheng et al. proposed Fourier ptychographic microscopy (FPM), a novel computational microscopy technique [6]. In earlier microscopic imaging, to obtain wide-FOV, high-resolution images, researchers generally used precision mechanically scanned microscope systems or lensless microscope devices. However, these methods impose stringent requirements, such as precise mechanical control or placing the sample in close proximity to the sensor [12,13]. FPM uses an LED array to illuminate the object from different angles and uses one or more low numerical aperture (NA) optical systems to capture low-resolution intensity images corresponding to each angle. The images collected at each angle are then synthesized in an iterative phase retrieval algorithm. In the recovery process, the captured low-resolution images provide constraints in the spatial domain, and the pupil aperture provides support constraints in the Fourier domain. Finally, the information synthesized in the Fourier domain generates a high-resolution complex-valued object image, including intensity and phase [6,14,15]. This technique transforms a conventional optical microscope into a high-resolution, wide-FOV microscope with a final SBP of ∼1 gigapixel, which greatly improves the imaging capability of the microscope system [6,16,17].

In 2014, Dong et al. proposed the concept of macroscopic Fourier ptychographic imaging based on Fourier ptychographic microscopy and demonstrated its feasibility experimentally, increasing the imaging distance to 0.7 m [7]. In 2016, Holloway et al. built a 1.5 m long-distance transmission imaging system, which achieved a 4- to 7-fold improvement in resolution [9]. Macroscopic Fourier ptychographic imaging uses a laser as the illumination source. After the laser passes through a lens and illuminates a stationary object, the object's light field undergoes Fraunhofer diffraction in the far field. The camera scans the diffraction plane to obtain low-resolution image information over different spectral ranges, and this information is then synthesized into a high-resolution image by a reconstruction algorithm. In 2017, the same group built a reflective long-distance macroscopic Fourier ptychographic imaging setup and successfully realized reflective, high-resolution imaging of rough-surfaced objects [8].

At present, in the field of macroscopic imaging, Fourier ptychographic imaging is mostly used for high-resolution imaging of stationary objects [5,10,18,19]. However, in real scenes, many objects are moving. For high-resolution imaging of dynamic objects, existing approaches use camera arrays combined with deep learning, multiplexed illumination, or low-rank sampling [9,20,21]. In the latest research, Wang et al. arrange nine cameras in a 3 × 3 array to take a single shot of a moving object; the captured low-resolution images are then fed into a pre-trained convolutional neural network to obtain a high-resolution image of the dynamic object. This is the first demonstration of macroscopic Fourier ptychography for single-shot synthetic aperture imaging of dynamic events [21]. This method does not require the camera apertures to overlap, and a single shot can achieve high-resolution imaging of dynamic objects. However, the camera array increases the cost and complexity of the experimental system, and the deep learning model requires a large, task-relevant training data set.

This paper proposes a method that needs only a single camera combined with a traditional phase retrieval algorithm to realize Fourier ptychographic imaging of dynamic objects. For objects moving within a certain plane, we change the angle of illumination according to the position of the object, or provide a wide illumination area, to keep the light field transmitted or reflected by the object centered on the same point [22,23]. According to Fourier optics, the Fraunhofer diffraction fields produced by the object at different positions differ only in phase [24]. The intensity images recorded by the camera therefore reflect only the change in object position. Fortunately, these effects can be eliminated after registration and cropping, which means that a moving object can be treated as equivalent to a stationary one. We can therefore still use an iterative phase retrieval algorithm to achieve high-resolution imaging of the dynamic object.

Based on this theory, we carry out simulations and experiments. In the transmission macroscopic Fourier ptychographic imaging experiment, we project a light spot over a larger area and continuously move the object within a certain plane. At the focal plane, we use a camera to capture low-resolution images at different positions in the light field. Finally, we synthesize high-resolution images from these captured low-resolution images. Simulation and experimental results confirm the feasibility of this theory. We also investigate the effect of the laser speckle generated in the illumination system on the final reconstructed images.

The structure of the article is as follows: In section 2, we introduce the basic model, principle, and reconstruction algorithm of macroscopic Fourier ptychographic imaging for a dynamic object. In section 3, we demonstrate the feasibility of this method through simulation. In section 4, we further prove through experiments that the proposed method can realize high-resolution imaging of dynamic objects, and investigate the influence of speckle on the imaging results. Finally, we summarize the article and discuss future research in section 5.

2. Principle

2.1 Image formation model

Figure 1 shows the experimental model of transmission macroscopic Fourier ptychographic imaging. A fixed laser emits quasi-monochromatic light with a central wavelength of $\lambda$. The beam passes through a spatial filter and is focused by a lens with a focal length of $f$ so that it fully illuminates the object. We assume that the object is on the optical axis of the focusing lens, denote its amplitude transmittance by ${t_A}({\varepsilon ,\eta } )$, and denote the transmitted light field immediately behind the object within the illumination spot by ${U_0}({\varepsilon ,\eta } )$ [24].

$${U_0}({\varepsilon ,\eta } )= \left\{ {\frac{{Af}}{d}P\left[ {\varepsilon \frac{f}{d},\eta \frac{f}{d}} \right]\exp \left[ { - j\frac{k}{{2d}}({{\varepsilon^2} + {\eta^2}} )} \right]} \right\}{t_A}({\varepsilon ,\eta } )$$

Here, $Af/d$ is the amplitude of the spherical wave that illuminates the object, and $d$ is the distance from the object to the focal plane. $P[{\varepsilon ({f/d} ),\eta ({f/d} )} ]$ is the pupil function on the input plane that describes the effective illumination area; it can be ignored when the object is fully illuminated. $k = 2\pi /\lambda $ denotes the wave number. We assume Fresnel diffraction from the object plane to the focal plane, where the quadratic phase factor of the illumination wave cancels a similar quadratic phase factor in the Fresnel diffraction integral, so that the light field on the focal plane can be written as ${U_f}({u,v} )$ [25].

$$\begin{aligned} {U_f}({u,v} )&= \frac{{A\exp \left[ {j\frac{k}{{2d}}({{u^2} + {v^2}} )} \right]}}{{j\lambda d}}\frac{f}{d} \times \int {\int_{ - \infty }^{ + \infty } {{t_A}({\varepsilon ,\eta } )\exp \left[ { - j\frac{{2\pi }}{{\lambda d}}({u\varepsilon + v\eta } )} \right]d\varepsilon d\eta } } \\ &= \frac{{A\exp \left[ {j\frac{k}{{2d}}({{u^2} + {v^2}} )} \right]}}{{j\lambda d}}\frac{f}{d}F[{{t_A}({\varepsilon ,\eta } )} ]\end{aligned}$$


Fig. 1. Physical model of transmission macroscopic Fourier ptychographic imaging. From left to right: The quasi-monochromatic light emitted by the laser passes through the spatial filter and illuminates the focusing lens. The beam is converged by the lens and illuminates the object. The light field transmitted by the object is focused onto the camera lens plane, while the rest of the light is blocked by a light barrier. The intensity information of the light field is finally captured by the camera.


We assume that the lens is well corrected and that the paraxial (first-order) approximation holds. Keeping the distance between the object and the focal plane constant, we move the object to the point $({a,b} )$ and assume that the image formed by the object shows no obvious distortion. We set ${t_A}({\varepsilon^{\prime},\eta^{\prime}} )= {t_A}({\varepsilon - a,\eta - b} )$, and the light field ${U_f}({n,m} )$ on the focal plane can be expressed as:

$$\begin{aligned} {U_f}({n,m} )&= \frac{{A\exp \left[ {j\frac{k}{{2d}}({{n^2} + {m^2}} )} \right]}}{{j\lambda d}}\frac{f}{d} \times \int {\int_{ - \infty }^{ + \infty } {{t_A}({\varepsilon^{\prime},\eta^{\prime}} )\exp \left\{ { - j\frac{{2\pi }}{{\lambda d}}[{n\varepsilon^{\prime} + m\eta^{\prime}} ]} \right\}d\varepsilon ^{\prime}d} } \eta ^{\prime}\\ &= \frac{{A\exp \left[ {j\frac{k}{{2d}}({{n^2} + {m^2}} )} \right]}}{{j\lambda d}}\frac{f}{d}F[{{t_A}({\varepsilon^{\prime},\eta^{\prime}} )} ]\end{aligned}$$

According to the Fourier phase shift theorem, $F[{{t_A}({\varepsilon^{\prime},\eta^{\prime}} )} ]$ can be written as

$$\begin{aligned} F[{{t_A}({\varepsilon^{\prime},\eta^{\prime}} )} ]&= F[{{t_A}({\varepsilon - a,\eta - b} )} ]\\ &= \int {\int_{ - \infty }^{ + \infty } {{t_A}({\varepsilon - a,\eta - b} )} } \exp [{ - j2\pi ({n\varepsilon + m\eta } )} ]d\varepsilon d\eta \\ &= \int {\int_{ - \infty }^{ + \infty } {{t_A}({\varepsilon^{\prime},\eta^{\prime}} )} } \exp \{{ - j2\pi [{n({\varepsilon^{\prime} + a} )+ m({\eta^{\prime} + b} )} ]} \}d\varepsilon ^{\prime}d\eta ^{\prime}\\ &= F[{{t_A}({\varepsilon ,\eta } )} ]\exp [{ - j2\pi ({na + mb} )} ]\end{aligned}$$

Now, formula (3) can be rewritten as

$${U_f}({n,m} )= \frac{{A\exp \left[ {j\frac{k}{{2d}}({{n^2} + {m^2}} )} \right]}}{{j\lambda d}}\exp [{ - j2\pi (na + mb)} ]\frac{f}{d}F[{{t_A}({\varepsilon ,\eta } )} ]$$
which differs from formula (2) only by an additional phase factor. Apart from this extra linear phase factor, the amplitude distribution on the focal plane after the object moves is still the Fourier transform of the part of the object enclosed by the illuminated area.

Due to the converging effect of the lens, the center points of the light fields generated by the object at different positions coincide. We use the camera to scan the light field on the focal plane. We denote the limited aperture of the camera lens by $O({u - {j_i},v - {k_i}} )$, where $({{j_i},{k_i}} )$ is the central position of the lens on the focal plane and $i$ denotes the $i$-th position. The camera only acquires information within the aperture. Since the camera only records intensity values, when the object is on the optical axis, the intensity image recorded by the camera can be expressed as [24]

$${I_i}({\varepsilon ,\eta ,{j_i},{k_i}} )= {|{{F^{ - 1}}[{{U_f}(u,v)O({u - {j_i},v - {k_i}} )} ]} |^2}$$

When the object position moves to point $({a,b} )$, the intensity information recorded by the camera can be expressed as

$${I_i}^\prime ({\varepsilon^{\prime},\eta^{\prime},{j_i},{k_i}} )= {|{{F^{ - 1}}[{{U_f}(n,m)O({n - {j_i},m - {k_i}} )} ]} |^2}$$

In formulas (6) and (7), we ignore the quadratic phase factor and constants in ${U_f}({u,v} )$ and ${U_f}({n,m} )$ [26,27]. When the camera is at a given position $i$, since ${U_f}({n,m} )$ differs from ${U_f}({u,v} )$ only by the extra phase factor $\exp [{ - j2\pi (na + mb)} ]$, the intensity image of the dynamic object intercepted by the camera aperture differs from that of the stationary object only by a translation.
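This conclusion can be checked numerically. The following minimal sketch (in Python; the array size, the rectangular test object, and the aperture position and radius are illustrative assumptions rather than values from the paper) shifts an object, applies the same off-axis circular aperture in the Fourier domain as in formulas (6) and (7), and verifies that the resulting intensity image is simply a translated copy of the stationary-object image.

```python
import numpy as np

N = 256
# Illustrative test object: a small rectangle standing in for t_A.
obj = np.zeros((N, N))
obj[120:136, 118:138] = 1.0

shift = (20, -15)                                   # object displacement (a, b) in pixels
obj_shifted = np.roll(obj, shift, axis=(0, 1))

# Fixed off-axis circular aperture O(u - j_i, v - k_i) in the Fourier plane.
freq = np.fft.fftshift(np.fft.fftfreq(N))
FU, FV = np.meshgrid(freq, freq, indexing="ij")
aperture = ((FU - 0.08) ** 2 + (FV - 0.05) ** 2) < 0.06 ** 2

def intensity_through_aperture(t):
    """|F^{-1}[F(t) * O]|^2 for a fixed aperture O, as in formulas (6)-(7)."""
    spectrum = np.fft.fftshift(np.fft.fft2(t))
    return np.abs(np.fft.ifft2(np.fft.ifftshift(spectrum * aperture))) ** 2

I_static = intensity_through_aperture(obj)
I_moved = intensity_through_aperture(obj_shifted)

# The moved-object intensity equals the static-object intensity translated by (a, b).
print(np.allclose(np.roll(I_static, shift, axis=(0, 1)), I_moved, atol=1e-9))  # True
```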

Fortunately, in actual experiments we can register the low-resolution images by phase correlation to eliminate the impact of this translation [28]. In fact, because the camera is moved to capture the low-resolution images, a registration operation is already required in the data-processing stage. From the above analysis, in terms of intensity images, a moving object introduces only an additional translation compared with a stationary one. For a series of low-resolution images of a stationary object, we usually use the image captured by the camera at the center of the light field as the starting reference image for registration. For the dynamic object, we can use the same method. The general procedure is as follows:

  • 1. We first obtain the spectral-center image $I({x,y} )$ and an image $I^{\prime}({x,y} )$ from an adjacent aperture that needs to be registered. Because of the high overlap ratio between adjacent apertures, the corresponding images are highly similar. Therefore, the intensity image $I^{\prime}({x,y} )$ captured by the adjacent aperture can be approximated as $I({x,y} )$ translated by $({x^{\prime},y^{\prime}} )$:
    $$I^{\prime}(x,y) = I({x - x^{\prime},y - y^{\prime}} )$$
  • 2. We take the two-dimensional Fourier transform of each image to obtain $\varphi ({{f_x},{f_y}} )$ and $\varphi ^{\prime}({{f_x},{f_y}} )$.
    $$\varphi ({{f_x},{f_y}} )= F[{I({x,y} )} ]$$
    $$\varphi ^{\prime}({{f_x},{f_y}} )= F[{I^{\prime}({x,y} )} ]$$

    According to the shift property of the Fourier transform and formula (8), we obtain:

    $$\varphi ^{\prime}({{f_x},{f_y}} )= \exp [{ - j2\pi ({{f_x}x^{\prime} + {f_y}y^{\prime}} )} ]\varphi ({{f_x},{f_y}} )$$

  • 3. According to the phase correlation theory, the phase term in formula (11) is equal to the phase of the frequency cross power spectrum of the two images, which can be expressed as:
    $$\exp [{j2\pi ({{f_x}x^{\prime} + {f_y}y^{\prime}} )} ]= \frac{{\varphi ({{f_x},{f_y}} ){{\varphi ^{\prime}}^\ast }({{f_x},{f_y}} )}}{{|{\varphi ({{f_x},{f_y}} ){{\varphi^{\prime}}^\ast }({{f_x},{f_y}} )} |}}$$
    where ${\varphi ^{\prime\ast }}({{f_x},{f_y}} )$ is the conjugate of $\varphi ^{\prime}({{f_x},{f_y}} )$. The inverse Fourier transform of the phase term is an impulse function whose peak is located at the translation $({x^{\prime},y^{\prime}} )$.
  • 4. According to the obtained displacement $({x^{\prime},y^{\prime}} )$, we shift $I^{\prime}({x,y} )$ to align it with $I({x,y} )$, and we repeat the above operations until all images are registered.

It is worth noting that in the registration process, we register the images step by step outward from the center position, instead of using the center image as the direct reference for all low-resolution images. If the shooting positions are far apart, the corresponding images differ substantially, which easily causes inaccurate registration. Therefore, when registering images that are progressively farther from the center of the spectrum, we use the already registered images of adjacent apertures as the reference images. When all images have been registered in this way, the impact of translation on the measurement results is eliminated, and we finally obtain a series of intensity images equivalent to those of a stationary object.
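A minimal sketch of this registration procedure is given below (in Python; the function names and the assumption of purely integer, circular translations are ours, and sub-pixel accuracy can be obtained with the upsampled cross-correlation method of Ref. [28]).

```python
import numpy as np

def phase_correlation_shift(ref, img):
    """Estimate the integer translation (dy, dx) that maps img onto ref
    from the phase of the cross power spectrum, as in formula (12)."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    cross /= np.abs(cross) + 1e-12            # keep only the phase term
    corr = np.abs(np.fft.ifft2(cross))        # impulse encoding the translation
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Interpret peak positions past the midpoint as negative shifts.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

def register_sequence(images):
    """Register images step by step, each against its already registered
    neighbor, assuming the list is ordered from the spectral-center image
    outward along the scan path."""
    registered = [images[0]]
    for img in images[1:]:
        dy, dx = phase_correlation_shift(registered[-1], img)
        registered.append(np.roll(img, (dy, dx), axis=(0, 1)))
    return registered
```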

According to the above analysis, the movement of the object disturbs the diffraction field, but the influence of object displacement can be eliminated by registering the recorded intensity images. The moving object can then be treated as a stationary one, and the low-resolution images taken with a limited aperture can be substituted into the subsequent restoration algorithm to obtain a high-resolution image.

2.2 Algorithm for image reconstruction

The series of low-resolution images at different camera positions after registration and cropping is denoted as ${\hat{I}_i}$. Then, we apply a phase retrieval method based on alternating minimization to recover the high-resolution target image. The recovery algorithm is based on the error-reduction phase retrieval algorithm proposed by Gerchberg and Saxton [1]. We start by estimating the high-resolution Fourier light field $\Psi ({u,v} )$ of the target image from the average value of the registered low-resolution images. Next, we alternate between imposing constraints in the frequency domain and the spatial domain until we obtain the final Fourier light field, which can be written as ${\Psi ^\ast }({u,v} )$.

$${\Psi ^\ast }({u,v} )= \mathop {\arg \min }\limits_\Psi \sum\limits_i {\left\|{\sqrt {{{\hat{I}}_i}} - {F^{ - 1}}[{\Psi ({u,v} )O({u - {j_i},v - {k_i}} )} ]} \right\|_2}$$

In each iteration $k$, we perform the following steps:

  • 1. From the estimated Fourier field ${\Psi ^k}({u,v} )$, we calculate the estimated complex-valued image $\phi _i^k$
    $$\phi _i^k = {F^{ - 1}}[{{\Psi ^k}O({u - {j_i},v - {k_i}} )} ]\;\textrm{for all}\;i$$
  • 2. Replace the amplitude of the complex-valued image estimated in step 1 with the measured amplitude $\sqrt {{{\hat{I}}_i}}$ of the image taken at position $i$,
    $$\sqrt {\frac{{{{\hat{I}}_i}}}{{{{|{\phi_i^k} |}^2}}}} \phi _i^k \to \phi _i^k\;\textrm{for all}\;i$$
  • 3. Then update the estimated value of ${\Psi ^k}({u,v} )$ by solving the following regularized least squares problem
    $${\Psi ^{k + 1}} \leftarrow \mathop {\textrm{minimize}}\limits_\Psi \sum\limits_i {||{\phi_i^k - {F^{ - 1}}[{{\Psi ^k}O({u - {j_i},v - {k_i}} )} ]} ||} _2^2 + \tau ||\Psi ||_2^2$$
    where $\tau > 0$ is an appropriately chosen regularization parameter. Tikhonov regularization is used to improve numerical stability during reconstruction. This problem has a closed-form solution that can be computed efficiently with the fast Fourier transform. Inverse Fourier transforming the final Fourier field ${\Psi ^\ast }({u,v} )$ yields the high-resolution image [29,30]. A minimal sketch of this iterative loop is given after this list.
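The sketch below is our own simplified Python implementation of the loop, not the authors' code. It assumes that the registered low-resolution images share the pixel grid of the high-resolution estimate, models the aperture $O$ as a binary circular mask, and applies the closed-form, Tikhonov-regularized update of formula (16) directly in the Fourier domain.

```python
import numpy as np

def fp_reconstruct(images, centers, radius, n_iter=1000, tau=1e-3):
    """Alternating-minimization Fourier ptychographic reconstruction.

    images  : registered, cropped low-resolution intensity images I_i
              (assumed here to be on the same grid as the high-res estimate)
    centers : aperture centers (j_i, k_i) in pixels on the Fourier grid
    radius  : aperture radius in pixels
    """
    N = images[0].shape[0]
    u = np.arange(N) - N // 2
    U, V = np.meshgrid(u, u, indexing="ij")
    masks = [((U - j) ** 2 + (V - k) ** 2) <= radius ** 2 for j, k in centers]

    # Initial Fourier-field estimate from the mean of the measured images.
    psi = np.fft.fftshift(np.fft.fft2(np.sqrt(np.mean(images, axis=0))))

    for _ in range(n_iter):
        numerator = np.zeros_like(psi)
        denominator = np.full(psi.shape, tau)
        for I, O in zip(images, masks):
            # Step 1: low-resolution complex field predicted by the current estimate.
            phi = np.fft.ifft2(np.fft.ifftshift(psi * O))
            # Step 2: replace its amplitude with the measured amplitude sqrt(I_i).
            phi = np.sqrt(I) * np.exp(1j * np.angle(phi))
            # Step 3: accumulate the closed-form Tikhonov-regularized update (16).
            numerator += O * np.fft.fftshift(np.fft.fft2(phi))
            denominator += O
        psi = numerator / denominator

    return np.abs(np.fft.ifft2(np.fft.ifftshift(psi)))   # high-resolution amplitude
```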

The recovery algorithm requires a certain degree of overlap between adjacent sub-spectral regions, without using deep learning or adding other constraints. Data redundancy is essential for reconstructing high-resolution images: when the camera takes low-resolution images from different positions, a high-resolution image cannot be reconstructed if the overlap between adjacent apertures falls below a certain threshold. Redundant measurements help constrain the reconstruction and make it more robust to noise, and a higher overlap rate also benefits image registration [14,31]. We define the overlap rate as the ratio of the aperture diameter minus the distance between adjacent aperture centers to the aperture diameter. The studies of Dong et al. and Holloway et al. show that, for an ideal imaging system, phase retrieval algorithms need an overlap rate of at least 50% between adjacent apertures [9,32]. At the same time, the size of the synthetic aperture also determines the reconstruction quality: a larger synthetic aperture covers a larger spectral range, capturing more information and improving the resolution of the reconstructed image [9].

3. Simulation

We conduct simulation experiments to verify the feasibility of the proposed method. We use a resolution board of 512 × 512 pixels, containing line pairs with line widths varying from 20 pixels to 1 pixel, corresponding to 0.025–0.5 line pairs per pixel. As shown in Fig. 2 (a1)-(a3), the resolution board is placed in a 1012 × 1012 pixel background, and its position is continuously changed. When moving within this background range, the image does not show significant distortion in the imaging plane.


Fig. 2. Simulation of the resolution board image. (a1)-(a3). The resolution board image at different positions. (b1)-(b3). The spectra corresponding to the images in group a. The circles indicate the scanning positions of the aperture. (c1)-(c3). The images obtained by the inverse Fourier transform of the circled part in group b. (d1) Original resolution board image. (d2) Reconstructed image of the dynamic resolution board. (d3) Image of the spectral center position of the dynamic resolution board after registration and cropping. (d4) The reconstructed image of the resolution board at a fixed position.


We assume that the object and its background are illuminated by coherent light with a wavelength of 532 nm, which is focused by a lens with a focal length of 800 mm. The resolution board is 64 mm × 64 mm and is placed 50 meters away from the camera. The aperture of the imaging system is set as a circle with a diameter of 36 mm that scans across the focal plane. To ensure the quality of the reconstructed image, we adopt a 15 × 15 grid sampling scheme with a 70% overlap rate between adjacent apertures. Figure 2 (b1)-(b3) show the light field corresponding to Fig. 2 (a1)-(a3) at the focal plane, and the circles indicate the scanning positions of the aperture. Here the light field is shown as the Fourier magnitude spectrum. Figure 2 (c1)-(c3) show the intensity images captured by the CCD. After normalizing all images, we compute the root-mean-square error (RMSE) between the reconstructed image of the dynamic object and the original image, as well as between the reconstructed image of the stationary object and the original image.
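For reference, the aperture scan grid follows directly from the overlap-rate definition of Sec. 2.2. The short sketch below (Python; positions in millimetres on the focal plane, and the function name is ours) generates the centers of such a grid; the implied synthetic-aperture width is a derived illustration, not a value quoted in the paper.

```python
import numpy as np

def scan_grid(n=15, aperture_diameter=36.0, overlap=0.70):
    """Aperture center positions (in mm on the focal plane) for an n x n scan.

    Per Sec. 2.2, the step between adjacent centers is (1 - overlap) * diameter.
    """
    step = (1.0 - overlap) * aperture_diameter        # 10.8 mm for these values
    offsets = (np.arange(n) - (n - 1) / 2) * step     # symmetric about the optical axis
    return [(x, y) for x in offsets for y in offsets]

centers = scan_grid()
# Implied synthetic-aperture width: (n - 1) * step + diameter = 187.2 mm (illustrative).
synthetic_width = (15 - 1) * (1.0 - 0.70) * 36.0 + 36.0
```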

When images have been collected at all scanning positions, we register and crop them. Then we substitute the rectified images into the recovery algorithm described in section 2 and set the number of iterations to 1000. Figure 2 (d1) is the original resolution board image. Figure 2 (d2) and (d3) show the final high-resolution image of the dynamic object and the image captured at the center of the light field, respectively. Figure 2 (d4) shows the reconstructed image when the object is stationary under the same sampling scheme. According to the calculation, whether the object is moving or stationary, the RMSE between the reconstructed high-resolution image and the original image is 0.1690, consistent with the theoretical analysis.
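The paper does not specify the normalization used before computing the RMSE; the small sketch below assumes a simple min–max normalization to [0, 1] for both images (a hypothetical choice for illustration).

```python
import numpy as np

def rmse(reconstructed, original):
    """RMSE between two images after min-max normalization to [0, 1]."""
    r = (reconstructed - reconstructed.min()) / (reconstructed.max() - reconstructed.min())
    o = (original - original.min()) / (original.max() - original.min())
    return np.sqrt(np.mean((r - o) ** 2))
```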

We simulate a grayscale image in the same way and show the results in Fig. 3. Figure 3 (a1)-(a3) show the grayscale image moved to different positions within a certain range. Figure 3 (b1)-(b3) show the Fourier magnitude spectra corresponding to the different object positions. The circles mark the camera positions, and the intensity images captured by the camera are shown in Fig. 3 (c1)-(c3). Figure 3 (d1)-(d3) show the original grayscale image, the reconstructed result of the dynamic grayscale image, and the image captured at the center of the light field, respectively. Figure 3 (d4) shows the reconstructed image when the position of the grayscale image is unchanged. After normalization, we calculate the RMSE between the reconstructed high-resolution images and the original image for the moving and stationary cases; the results are both 0.1083.


Fig. 3. Simulation of the grayscale image. (a1)-(a3). The grayscale image is in different positions. (b1)-(b3). The spectrums corresponding to images in group a. The circles indicate the scanning position of the aperture. (c1)-(c3). The images obtained by the inverse Fourier transform of the circle part in group b. (d1) Original grayscale image. (d2) Reconstructed image of the dynamic grayscale image. (d3) Image of the spectral center position of the dynamic grayscale image after registration and cropping. (d4) The reconstructed image with the grayscale image at a certain position.


In the above simulations, we let the object move within a range of about twice its size. The results show that the influence of the object's position on the Fraunhofer diffraction field can be eliminated by registering the intensity images, which then agree with the intensity information of the stationary object. Therefore, in an ideal imaging system, the movement of an object within a certain range does not affect the final reconstruction results, and we can use Fourier ptychographic imaging to obtain a high-resolution image of a dynamic object.

4. Experiment

4.1 Experimental design

In this section, we will prove the feasibility of the theory by experiments. The experimental system is shown in Fig. 4. The system uses a laser with a wavelength of 532 nm. We expand the beam directly through the collimator, and the output beam diameter is 160 mm. The expanded beam passes through a plano-convex lens with a diameter of 234 mm and a focal length of 1350 mm.


Fig. 4. Experimental structure diagram. From left to right: A fixed laser emits a beam with a wavelength of 532 nm. The diameter of the beam is expanded to 160 mm through the collimator and propagates to the focusing lens. The light beam converged by the lens illuminates the hollow object and the light barrier. The object is placed on the translation stage. The stage moves the object in a certain plane within the range of the light spot. The exit beam is focused on the camera lens aperture plane. Due to the effect of the lens, the light field at the focal plane is the Fraunhofer diffracted light field of the object. The high-precision translation stage drives the limited-aperture camera to move on the focal plane and capture image information at different positions in the light field.


The object is placed behind the lens and is continuously moved by an electric translation stage in a plane 10 mm away from the lens. The range of motion is a circle with a diameter of 120 mm, and the speed of the object is in the range of 1 to 5 mm/s. Moving within this range, the object does not produce strong image distortion on the imaging plane. The object is a hollowed-out standard U.S. Air Force (USAF) resolution target; the number of line pairs per millimeter ranges from 1 to 14.25, corresponding to group numbers 0, 1, 2, and 3. The group-number labels are not hollowed out, and the remaining light is blocked by a light barrier around the target. The size of the object is 12 mm × 12 mm.

The camera lens is placed 1340 mm away from the object, at the focal plane of the focusing lens. The lens aperture is set to 2 mm by a diaphragm; the lens is a 12–120 mm zoom lens used at its maximum focal length. The light field passing through the lens finally reaches the camera sensor. The camera is an MER-500-7UM industrial camera with a pixel size of 2.2 µm × 2.2 µm. It is placed on a high-precision translation stage and connected to a computer, which records the images through a data cable.

In the experiment, the position of the object is continuously moved by the translation stage within the range of the spot, on the same plane. Since other light is blocked, only the light passing through the object reaches the focal plane. Due to the Fourier transform property of the lens, the focal plane presents the Fourier-transformed light field of the object. According to the phase shift formula in section 2, at the focal plane there is only a phase-factor difference between the light fields generated by the object at different positions. The camera lens is located at the focal plane, and a high-precision translation stage drives the camera to scan the plane of the light field. We scan in a 15 × 15 grid with a 70% overlap rate between adjacent apertures, giving a synthetic aperture ratio of 5.2.
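For reference, the quoted synthetic aperture ratio is consistent with these scan parameters: by the overlap-rate definition of Sec. 2.2, adjacent aperture centers are spaced $({1 - 0.7} )\times 2\,\textrm{mm} = 0.6\,\textrm{mm}$ apart, so the 15 × 15 grid spans

$${D_{syn}} = ({15 - 1} )\times 0.6\,\textrm{mm} + 2\,\textrm{mm} = 10.4\,\textrm{mm},\qquad \frac{{{D_{syn}}}}{{2\,\textrm{mm}}} = 5.2$$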

4.2 Experiment results

4.2.1 Single-shot reconstruction result

In the experiment, the exposure time of the camera is adjusted to capture the image of the moving object clearly. Figure 5 shows part of the process of acquiring low-resolution images. The object moves randomly within the light spot range, and the camera shoots at the focal plane according to the set sampling position ${C_i}$. The information of the light field entering the limited aperture is collected by the CCD and finally we get the low-resolution intensity images of the corresponding areas. Figure 5 shows part of the captured images and their registered and cropped results when the camera is in different positions ${C_i}$.


Fig. 5. Low-resolution images collected during the experiment. The object moves continuously within the range of the light spot. ${P_1} - {P_4}$ represent some of the random positions during the motion of the object. ${C_1} - {C_4}$ represent some of the preset camera sampling positions. The light field passes through the limited aperture, the CCD records the amplitude information of the light field in the corresponding area, and we finally obtain the low-resolution intensity images.


After the camera scans all locations ${C_i}$, we need to register and crop all the captured low-resolution images. The image captured by the camera at the center of the Fourier field is shown in Fig. 6 (a). Then we substitute the rectified low-resolution images into the restoration algorithm, and finally obtain the reconstructed high-resolution image as shown in Fig. 6 (b).


Fig. 6. The reconstruction image and comparison with central image. (a) The image captured by the camera at the center of the Fourier field. (b) The reconstructed image.


The results show that this experimental method successfully achieves high-resolution imaging of the moving object through Fourier ptychographic imaging. In Fig. 6 (a), we enlarge three areas whose stripes are already difficult to distinguish; in Fig. 6 (b), the information in these areas is still presented relatively clearly. Compared with the image taken by the camera at the center of the Fourier field, the resolution of the reconstructed image increases from 2.83 lp/mm to 6.35 lp/mm.

4.2.2 Reconstruction results after averaging multiple images

When using a collimator to expand the laser beam, we find that some phase distortions are inevitably introduced into the expanded beam. The speckle caused by these distortions affects the single-shot imaging results. Figure 7 (a) shows the registered imaging results of the object at different positions ${P_i}$ when the camera position is fixed. In theory these two images should be identical, but due to the influence of speckle there are differences between them, such as missing features, deformation, and noise caused by scattered light.


Fig. 7. The influence of speckles on the images and the result of averaging. (a) The camera is in the center of the light field to capture images of the object at different positions ${P_i}$. (b) Single shot images and the averaged result. The upper box represents five images of an object taken at random different positions ${P_i}$ by the camera in the same position ${C_i}$. The image on the lower left is the cropped result of one of the single-shot images, and the image on the lower right is the average result of five images after registration and cropping.


To explore how speckle affects the reconstructed image, for each camera position ${C_i}$ on the 15 × 15 grid we capture five images of the moving object at different random positions ${P_i}$. We divide these images into five groups, which are separately substituted into the restoration algorithm to reconstruct high-resolution images. At the same time, we average the five images at each camera position ${C_i}$ and reconstruct a high-resolution image from the averaged data. Figure 7 (b) shows the single-shot and averaged images when the camera is at a certain position ${C_i}$. Since the object is affected by speckle differently at different positions ${P_i}$, this effect is random. Averaging multiple images provides complementary information, thereby reducing the impact of speckle noise [8]; we can observe that some missing features and stray-light effects are removed. Figure 8 shows the reconstructed images of the six groups of data, enlarged images of some details, and the intensity distribution along the drawn line. Figure 8 (a1)-(e1) are the reconstruction results from the single-shot data, and Fig. 8 (f1) is the reconstruction result from the averaged data. We calculate the contrast for each group of line pairs along the drawn line. Here we define the contrast as

$$D = \frac{{\bar{\omega } - \bar{\nu }}}{{\bar{\omega } + \bar{\nu }}}$$

Here, $\bar{\omega }$ is the mean of the three points with the highest intensity values at each bright stripe, and $\bar{\nu }$ is the mean of the three points with the lowest intensity values at each dark stripe.
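A small sketch of this contrast measure is given below (Python; our reading is that the three extreme samples are taken within each stripe and then averaged across stripes, and the stripe index ranges are assumed to be supplied by hand).

```python
import numpy as np

def stripe_contrast(profile, bright_slices, dark_slices):
    """Contrast D = (w - v) / (w + v) of a line-pair group along a drawn line.

    profile       : 1-D array of intensity values along the line
    bright_slices : index ranges covering each bright stripe
    dark_slices   : index ranges covering each dark stripe
    """
    # Mean of the three highest samples within each bright stripe, averaged over stripes.
    w = np.mean([np.sort(profile[s])[-3:].mean() for s in bright_slices])
    # Mean of the three lowest samples within each dark stripe, averaged over stripes.
    v = np.mean([np.sort(profile[s])[:3].mean() for s in dark_slices])
    return (w - v) / (w + v)
```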


Fig. 8. Reconstruction results of different datasets. (a1)-(e1) show reconstructed images and magnified details from single-shot data. (f1) The reconstructed image and magnified details from averaging data. (a2)-(f2) The distribution of intensity values corresponding to the drawn line of the magnified details. (g) Contrast of intensity values for each group of data in (a2)-(f2). Groups A-C in the chart represent three groups of stripes from top to bottom in the magnified details.


The first point of concern is the resolution of the reconstructed images. Comparing the magnified details and intensity distributions of the reconstructions from the five single-shot datasets, the bottom stripes in Fig. 8 (a1) and Fig. 8 (b1) cannot be clearly distinguished, whereas the bottom stripes in Fig. 8 (c1)-(e1) and in the reconstruction from the averaged data, Fig. 8 (f1), remain distinguishable. This shows that single-shot data may lose some information or fail to reconstruct it. The reason is that high-frequency information occupies a small proportion of the image and is more susceptible to speckle during acquisition, resulting in loss of information.

The second point of concern is the contrast of the high-frequency information. Figure 8 (g) shows the intensity contrast of the three groups of stripes in the magnified details of each reconstructed image. The high-frequency part of the reconstructions from single-shot data has lower overall contrast than that from the averaged data; in an extreme case such as Fig. 8 (a1), the contrast of all three groups of stripes is low. The intensity contrast of each group of stripes from top to bottom in the magnified detail of Fig. 8 (a1) is 0.6190, 0.4029, and 0.1682, respectively, while in Fig. 8 (f1) the corresponding values are 0.9954, 0.9598, and 0.8160.

To summarize, feeding the average of multiple images captured at the same camera position into the reconstruction algorithm reduces the chance of losing high-frequency image information and enhances the stability of the reconstructed resolution. Additionally, averaged data keeps the intensity contrast of high-frequency information at a high level and improves imaging quality.

5. Conclusion and discussion

In the transmission macroscopic Fourier ptychographic imaging experiment, based on the theory of Fourier optics, we achieve high-resolution imaging of an object moving within a certain range by expanding the illumination spot. In our experiment, we increase the image resolution from 2.83 lp/mm to 7.13 lp/mm, proving the feasibility of applying Fourier ptychographic imaging to dynamic objects under certain conditions. We also analyze the impact of speckle on the imaging results; by averaging multiple captured images, we reduce the impact of speckle and improve both the stability of the reconstructed resolution and the contrast of high-frequency information. This provides a reference for addressing the speckle problem in Fourier ptychographic imaging experiments.

The experimental method proposed in this paper can be further extended. In theory, the large light spot that illuminates the object could be replaced by a smaller spot whose illumination angle changes with the position of the object. We could also change the transmissive experiment to a reflective one, but the effect of laser speckle produced by rough-surfaced objects would need further analysis. In the future, we plan to design a reflective system as an extension of this work.

Funding

Foundation of Key Laboratory of Science and Technology Innovation of Chinese Academy of Sciences (CXJJ-20S028); Youth Innovation Promotion Association of the Chinese Academy of Sciences (2020438).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. R. W. Gerchberg, “A Practical Algorithm for the Determination of Phase From Image and Diffraction Plane Pictures,” Optik 35(2), 237–246 (1972).

2. J. R. Fienup, “Phase retrieval algorithms: a personal tour,” Appl. Opt. 52(1), 45–56 (2013). [CrossRef]  

3. W. M. Brown, “Synthetic aperture radar,” IEEE Trans. Aerosp. Electron. Syst. AES-3(2), 217–229 (1967). [CrossRef]  

4. G. Zheng, “Breakthroughs in Photonics 2013: Fourier Ptychographic Imaging,” IEEE Photonics J. 6(2), 1–7 (2014). [CrossRef]  

5. P. C. Konda, L. Loetgering, K. C. Zhou, S. Xu, A. R. Harvey, and R. Horstmeyer, “Fourier ptychography: current applications and future promises,” Opt. Express 28(7), 9603–9630 (2020). [CrossRef]  

6. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7(9), 739–745 (2013). [CrossRef]  

7. S. Dong, R. Horstmeyer, R. Shiradkar, K. Guo, X. Ou, Z. Bian, H. Xin, and G. Zheng, “Aperture-scanning Fourier ptychography for 3D refocusing and super-resolution macroscopic imaging,” Opt. Express 22(11), 13586–13599 (2014). [CrossRef]  

8. J. Holloway, Y. Wu, M. K. Sharma, O. Cossairt, and A. Veeraraghavan, “SAVI: Synthetic apertures for long-range, subdiffraction-limited visible imaging using Fourier ptychography,” Sci. Adv. 3(4), e1602564 (2017). [CrossRef]  

9. J. Holloway, M. S. Asif, M. K. Sharma, N. Matsuda, R. Horstmeyer, O. Cossairt, and A. Veeraraghavan, “Toward Long-Distance Subdiffraction Imaging Using Coherent Camera Arrays,” IEEE Trans. Comput. Imaging 2(3), 251–265 (2016). [CrossRef]  

10. G. Zheng, C. Shen, S. Jiang, P. Song, and C. Yang, “Concept, implementations and applications of Fourier ptychography,” Nat. Rev. Phys. 3(3), 207–223 (2021). [CrossRef]  

11. M. Bashkansky, R. L. Lucke, E. Funk, L. J. Rickard, and J. Reintjes, “Two-dimensional synthetic aperture imaging in the optical domain,” Opt. Lett. 27(22), 1983–1985 (2002). [CrossRef]  

12. L. Denis, D. Lorenz, E. Thiebaut, C. Fournier, and D. Trede, “Inline hologram reconstruction with sparsity constraints,” Opt. Lett. 34(22), 3475–3477 (2009). [CrossRef]  

13. G. Zheng, S. A. Lee, Y. Antebi, M. B. Elowitz, and C. Yang, “The ePetri dish, an on-chip cell imaging platform based on subpixel perspective sweeping microscopy (SPSM),” Proc. Natl. Acad. Sci. U.S.A. 108(41), 16889–16894 (2011). [CrossRef]  

14. X. Ou, G. Zheng, and C. Yang, “Embedded pupil function recovery for Fourier ptychographic microscopy,” Opt. Express 22(5), 4960–4972 (2014). [CrossRef]  

15. L. Tian, X. Li, K. Ramchandran, and L. Waller, “Multiplexed coded illumination for Fourier Ptychography with an LED array microscope,” Biomed. Opt. Express 5(7), 2376–2389 (2014). [CrossRef]  

16. X. Ou, R. Horstmeyer, C. Yang, and G. Zheng, “Quantitative phase imaging via Fourier ptychographic microscopy,” Opt. Lett. 38(22), 4845–4848 (2013). [CrossRef]  

17. A. Williams, J. Chung, X. Ou, G. Zheng, S. Rawal, Z. Ao, R. Datar, C. Yang, and R. Cote, “Fourier ptychographic microscopy for filtration-based circulating tumor cell enumeration and analysis,” J. Biomed. Opt. 19(6), 066007 (2014). [CrossRef]  

18. Y. Zhou, J. Wu, Z. Bian, J. Suo, G. Zheng, and Q. Dai, “Fourier ptychographic microscopy using wavelength multiplexing,” J. Biomed. Opt. 22(6), 066006 (2017). [CrossRef]  

19. J. Sun, Q. Chen, Y. Zhang, and C. Zuo, “Sampling criteria for Fourier ptychographic microscopy in object space and frequency space,” Opt. Express 24(14), 15765–15781 (2016). [CrossRef]  

20. Z. Chen, G. Jagatap, S. Nayer, C. Hegde, and N. Vaswani, “Low rank fourier ptychography,” in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), (Calgary, CANADA, 2018), pp. 6538–6542.

21. B. Wang, S. Li, Q. Chen, and C. Zuo, “Learning-based single-shot long-range synthetic aperture Fourier ptychographic imaging with a camera array,” Opt. Lett. 48(2), 263–266 (2023). [CrossRef]  

22. M. Xiang, A. Pan, Y. Zhao, X. Fan, H. Zhao, C. Li, and B. Yao, “Coherent synthetic aperture imaging for visible remote sensing via reflective Fourier ptychography,” Opt. Lett. 46(1), 29–32 (2021). [CrossRef]  

23. S. Pacheco, B. Salahieh, T. Milster, J. J. Rodriguez, and R. Liang, “Transfer function analysis in epi-illumination Fourier ptychography,” Opt. Lett. 40(22), 5343–5346 (2015). [CrossRef]  

24. J. W. Goodman, Introduction to Fourier Optics (Roberts and Company Publishers, 2005).

25. M. Born and E. Wolf, Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light (CUP Archive, 2000).

26. T. B. Edo, D. J. Batey, A. M. Maiden, C. Rau, U. Wagner, Z. D. Pesic, T. A. Waigh, and J. M. Rodenburg, “Sampling in x-ray ptychography,” Phys. Rev. A 87(5), 053850 (2013). [CrossRef]  

27. R. Feder, E. Spiller, J. Topalian, A. N. Broers, W. Gudat, B. J. Panessa, Z. A. Zadunaisky, and J. Sedat, “High-resolution soft-x-ray microscopy,” Science 197(4300), 259–260 (1977). [CrossRef]  

28. M. Guizar-Sicairos, S. T. Thurman, and J. R. Fienup, “Efficient subpixel image registration algorithms,” Opt. Lett. 33(2), 156–158 (2008). [CrossRef]  

29. L. Tian and L. Waller, “3D intensity and phase imaging from light field measurements in an LED array microscope,” Optica 2(2), 104–111 (2015). [CrossRef]  

30. C. Zuo, J. Sun, and Q. Chen, “Adaptive step-size strategy for noise-robust Fourier ptychographic microscopy,” Opt. Express 24(18), 20724–20744 (2016). [CrossRef]  

31. S. Dong, R. Shiradkar, P. Nanda, and G. Zheng, “Spectral multiplexing and coherent-state decomposition in Fourier ptychographic imaging,” Biomed. Opt. Express 5(6), 1757–1767 (2014). [CrossRef]  

32. S. Dong, Z. Bian, R. Shiradkar, and G. Zheng, “Sparsely sampled Fourier ptychography,” Opt. Express 22(5), 5455–5464 (2014). [CrossRef]  
