
Overcoming the diffraction limit by exploiting unmeasured scattering media

Open Access

Abstract

Scattering is not necessarily an obstacle to imaging. It can help enhance imaging performance beyond the reach of a lens system. However, current scattering-enhanced imaging systems require prior knowledge of the transmission matrix. There are also some techniques that do not require such prior knowledge to see through strongly scattering media, but the results are still limited by the optics used. Here we propose overcoming the diffraction limit through a visually opaque diffuser. By controlling the distance between the diffuser and lens system, light with higher spatial frequencies is scattered into the entrance pupil. With the deformed wavefront corrected, we experimentally achieved imaging with $3.39 \times$ enhancement of the Rayleigh limit. In addition, our method works well for objects that are $4 \times$ larger than the memory effect range and can maintain super-resolution performance for a depth of field $6.6 \times$ larger than a lens can achieve. Using our method, an obstructive scattering medium can enhance the throughput of the imaging system, even though the transmission matrix of the scattering medium has not been measured beforehand.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. INTRODUCTION

Because the direction of the rays is randomized in the presence of a scattering medium, the mapping between the object plane and the image plane becomes complicated [1,2]. Consequently, the performance of conventional imaging techniques is degraded [3,4]. However, scattering is not necessarily an obstacle. In some cases, the scrambled wavefront induced by a scattering medium can achieve imaging with higher performance than the flat wavefront maintained by a normal lens system [5–8]. For instance, owing to wavefront deformation, every point in a volumetric field of view (FOV) produces a unique pattern on the image plane. After the point-spread function (PSF) of every point is calibrated, a three-dimensional (3D) image can be captured using a 2D sensor with a diffuser [8–10]. In addition, when the propagation direction is randomized by scattering, light with much higher spatial frequencies can enter the lens system (Fig. 1). With prior knowledge of the transmission matrix, light scattering can enhance both the FOV and the resolution of the imaging system [11,12]. The physics behind these striking results is that a scattered optical beam can possess more spatial frequencies and longitudinally varying modes [13,14]. Once the PSF is deconvolved or corrected, not only is the image recovered but the throughput of the imaging system is also increased. However, current scattering-enhanced imaging methods require the scattering medium to be well known, meaning that the PSF must be calibrated accurately [6–12,15] or the deformed wavefront measured beforehand via interferometric detection [16]. The sophisticated calibration and the requirement for coherent illumination preclude current scattering-enhanced imaging in cases where the information about the scattering medium is inaccessible or the illumination is incoherent.


Fig. 1. Principle of overcoming the diffraction limit by exploiting an unmeasured scattering medium. (a) Lens imaging system. (b) Scattering imaging system. In the presence of the scattering medium, the largest oblique angle ${\theta ^\prime _{\rm{MAX}}}$ becomes larger: light with spatial frequencies much higher than the cutoff frequency of the lens is scattered into the entrance pupil. Wavefront correction is then performed by the SLM, and an image with super-resolution is recorded. (c) Flow chart of the iterative process to find the optimum wavefront correction. (d) Gradual improvement of the image gradient during the iterative process. (e) Diffraction-limited image recorded by the lens system in (a). (f) Featureless image directly recorded by the scattering system in (b). (g) Super-resolution image recorded by the scattering imaging system after wavefront correction. (h) Profiles along the lines marked in (e)–(g).


There are numerous advanced techniques that can see through a scattering medium without any prior information, but their performance is still limited by the optics utilized. To the best of our knowledge, no previous research has demonstrated a practical method to overcome the limitations of the lens system through an unknown scattering medium. Intensity correlation based on the memory effect (ME) is a prior-independent imaging technique for scattering circumstances [17–20]. The ME reveals that the PSFs of neighboring sources are tilt-invariant [21]; therefore, images within a given range can be computationally reconstructed. In this line of research, no prior knowledge is required, but the FOV is limited by the ME range, and the spatial resolution of the recovered image remains diffraction limited. Wavefront shaping likewise requires no beforehand information about the scattering medium, because the wavefront deformation can be found iteratively under the guidance of maximizing the brightness of a refocused spot. Using wavefront shaping to compensate for scattering, a light beam can be focused to a spot far smaller than the diffraction limit of the lens [22–27]. Because it requires focusing on every point within the desired FOV, imaging through a scattering medium is much more difficult than focusing [28–30]. Owing to the tilt-invariance of neighboring PSFs, a single wavefront correction can be effective over the ME angular range. This inspired researchers to recover the image directly via a single wavefront correction, rather than a large number of different corrections to focus point by point [31,32]. Benefitting from this non-scanning scheme, researchers recovered the image of an object extending over about $2 \times$ the ME range. However, overcoming the diffraction limit and achieving a larger FOV through scattering media remain open challenges.
Recently, researchers have begun to achieve super-resolution through an unknown scattering medium by computationally processing the recorded speckle [33]. However, this method requires the point sources in the target scene to be switchable, which limits its use to some specific cases. For general situations, widefield, noninvasive, prior-independent, super-resolution imaging through a scattering medium is still desired.

To fill this research gap, we demonstrate a significant enhancement in the throughput of the imaging system by collecting and correcting heavily scattered fields. By controlling the distance between the scattering medium and the lens system, light with spatial frequencies much higher than the cut-off frequency of the lens is scattered into the entrance pupil. Subsequently, the diffraction limit of the lens is overcome after image-guided wavefront shaping. Using our method, a super-resolution image of an object extending beyond the ME range is retrieved. Better yet, because defocus aberrations are also cancelled by the wavefront correction, super-resolution is maintained even when the object is badly out of focus. Experimentally, we demonstrate imaging through a visually opaque diffuser with a 3.39-fold enhancement of the diffraction limit, a stitched FOV beyond 4-fold of the ME range, and a depth of field (DoF) more than 6.6-fold that of the lens. Under the proposed technique, a scattering medium without any prior information becomes favorable for widefield, super-resolution, and large-DoF imaging.

2. PRINCIPLE

In the lens imaging system depicted in Fig. 1(a), the largest oblique angle of the entrance pupil (${\theta _{\rm{MAX}}}$) cuts off the highest frequency of light allowed to enter, thereby determining the diffraction limit [34]. When a scattering medium exists between the object and the entrance pupil, the rays diffuse in random directions, and light with much higher spatial frequencies gains entry to the pupil of the lens. Consequently, ${\theta _{\rm{MAX}}}$ is enlarged, and the diffraction limit can be overcome. In addition, when the scattering medium is closer to the object (while still in the far field), ${\theta _{\rm{MAX}}}$ will be larger, and higher resolution can be achieved. The cost is that the wavefront is severely deformed and the image cannot be extracted directly. To correct the wavefront deformation, a spatial light modulator (SLM) is placed on the conjugate plane of the scattering medium, as shown in Fig. 1(b). Light corrected by the SLM is recorded by a camera on the image plane of the object. The phase mask displayed on the SLM is iteratively optimized using the genetic algorithm shown in Fig. 1(c). Thus, the resolution of the finally recorded image is beyond the diffraction limit of the lens. In the following, we quantitatively analyze the performance of this method and detail how to find the optimum correction.

For simplicity and without loss of generality, a 4-$f$ system is considered, as shown in Fig. 1(a). The PSF takes the following form:

$${H_l}({\vec r_t},{\vec r_i}) = {\left| {\int_{- \frac{D}{2}}^{\frac{D}{2}} P(\vec r)\exp\!\left\{{- \frac{{i2\pi}}{{\lambda f}}({{\vec r}_i} - {{\vec r}_t}) \cdot \vec r} \right\}{\rm d}\vec r} \right|^2},$$
where ${\vec r_t}$ and ${\vec r_i}$ are the coordinates of the object and image planes, respectively, and $P(\vec r)$ is the entrance pupil with a diameter of $D$. According to the van Cittert–Zernike theorem [35], ${H_l}({\vec r_t},{\vec r_i})$ is an Airy pattern (a squared first-order Bessel function of the first kind) with a full width at half maximum (FWHM) of ${R_l} = 1.22\lambda f/D$, which corresponds to the resolution limit of the imaging system. The largest oblique angle is ${\theta _{\rm{MAX}}} = \arctan \frac{D}{{2f}}$.
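As a worked check of these expressions, the short sketch below (helper names are ours, not from the paper) evaluates $R_l$ and ${\theta _{\rm{MAX}}}$ for a pupil with $D/2f = 0.055$, chosen to mimic the N.A. of the objective used later in the experiments:

```python
import numpy as np

def rayleigh_limit(wavelength, f, D):
    # R_l = 1.22 * lambda * f / D, the FWHM of the lens PSF H_l
    return 1.22 * wavelength * f / D

def max_oblique_angle(f, D):
    # theta_MAX = arctan(D / (2 f)), the largest accepted ray angle
    return np.arctan(D / (2 * f))

# D/(2f) = 0.055 mimics the N.A. of the experimental objective
R_l = rayleigh_limit(532e-9, 1.0, 0.11)   # about 5.90e-6 m
theta = max_oblique_angle(1.0, 0.11)      # about 55 mrad
```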

When an unknown scattering medium appears and a SLM is conjugately located, as shown in Fig. 1(b), the scattering medium becomes a new entrance pupil. The PSF of the points within a single ME range [21] becomes

$$\begin{split}{H_s}({{\vec r}_t},{{\vec r}_i}) = & \bigg| \int_{- \frac{{D^\prime}}{2}}^{\frac{{D^\prime}}{2}} P^\prime ({{\vec r}_s}){e^{i{\phi _s}({{\vec r}_s})}}{e^{i{\phi _c}(- {{\vec r}_s})}}\\ & \times\exp\!\left\{{- \frac{{i2\pi}}{{\lambda f}}({{\vec r}_i} - {{\vec r}_t}) \cdot {{\vec r}_s}} \right\}{\rm d}{{\vec r}_s} \bigg|^2,\end{split}$$
in which the quadratic phases are neglected for simplicity. $P^\prime ({\vec r_s})$ is the entrance pupil of the scattering imaging system. If the solid angle into which light transmitted from the object spreads is approximately $2\pi\;({\rm sr})$, the diameter of $P^\prime ({\vec r_s})$ is determined by the FOV of the lens system as $D^\prime = {d_s}D/f$, where ${d_s}$ is the distance between the scattering medium and the lens. ${\phi _s}({\vec r_s})$ and ${\phi _c}(-{\vec r_s})$ represent the deformed phase and the compensation phase on the SLM, respectively, and ${d_t}$ is the distance between the scattering medium and the object. Once the wavefront distortion is well corrected, i.e., the distribution of ${\phi _s}({\vec r_s}) + {\phi _c}(- {\vec r_s})$ is planar, the FWHM of ${H_s}({\vec r_t},{\vec r_i})$ is ${R_s} = 1.22\frac{{\lambda {d_t}f}}{{{d_s}D}}$, and the largest oblique angle of the scattering system becomes ${\theta ^\prime _{\rm{MAX}}} = \arctan \frac{{{d_s}D}}{{2{d_t}f}}$. Compared with the lens imaging system expressed in Eq. (1), the resolution limit of the scattering imaging system is enhanced by a factor of $\frac{{{d_t}}}{{{d_s}}}$, as
$${R_s} = \frac{{{d_t}}}{{{d_s}}}{R_l}.$$
By setting ${d_s}$ larger than ${d_t}$, the diffraction limit of the lens system can be overcome, and as the lens moves farther from the scattering medium, the Rayleigh limit is surpassed to a greater extent. However, the scattering medium must remain in the far field of the object, so the resolution cannot be improved indefinitely.
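Numerically, Eq. (3) is a single ratio. The snippet below uses hypothetical distances (the paper does not quote an effective $d_s$) to show how the enhancement factor $d_t/d_s$ plays out:

```python
def scattering_resolution(R_l, d_t, d_s):
    # Eq. (3): R_s = (d_t / d_s) * R_l; d_s > d_t beats the lens limit
    return (d_t / d_s) * R_l

# Hypothetical geometry: diffuser 5 mm from the object, 10 mm from the lens,
# so the Rayleigh limit of 5.90 um is improved twofold.
R_s = scattering_resolution(5.90, 5.0, 10.0)   # R_l in um, distances in mm
```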

To achieve super-resolution imaging, the key challenge is to find the optimum phase mask for compensation. When the scattered, speckle-like PSF is corrected to a bright spot, the PSF exhibits the highest contrast; in other words, the sharpness of the recorded image reaches its maximum [36]. Therefore, maximizing a sharpness parameter of the recorded image can effectively guide an iterative search for the optimum compensation. The sharpness parameter can be quantified using the image gradient as

$$G = \iint \left[{\left({\frac{{\partial I({{\vec r}_i})}}{{\partial {x_i}}}} \right)^2} + {\left({\frac{{\partial I({{\vec r}_i})}}{{\partial {y_i}}}} \right)^2}\right]{\rm d}{x_i}\,{\rm d}{y_i},$$
in which $I({\vec r_i})$ is the recorded image, and ${x_i}$ and ${y_i}$ are the components of $\,\vec r_i$ in two orthogonal directions.
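On a discrete camera frame, Eq. (4) reduces to a sum of squared finite differences; a minimal NumPy version (function name is ours) is:

```python
import numpy as np

def image_gradient_metric(I):
    # Discrete form of Eq. (4): sum of squared partial derivatives of I
    gy, gx = np.gradient(I.astype(float))
    return float(np.sum(gx**2 + gy**2))
```

A sharp image scores higher than a flat one, which is what lets the metric rank candidate phase masks during optimization.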

The iterative process of finding the optimum compensation, shown in Fig. 1(c), is performed by a typical genetic algorithm. The first parent population with $N$ phase masks is randomly produced. Each cycle of the iterative process includes four steps. First, as the $k$th parent population $\{\phi _1^k,\phi _2^k \cdots \phi _N^k\}$ is loaded on the SLM, the corresponding gradients $\{G_1^k,G_2^k \cdots G_N^k \}$ of the recorded images are calculated, and the phase masks are ranked by the gradient of their resultant images. Second, the phase masks that produce images with a larger gradient are selected with a larger probability. Third, the selected subpopulation is hybridized with another new subpopulation to produce the child population. Finally, the child population is mutated as $\{\phi _1^{k + 1},\phi _2^{k + 1} \cdots \phi _N^{k + 1}\}$ and grows into the $(k + 1)$th parent population in the next cycle, until the calculated $\{G_1^k,G_2^k \cdots G_N^k \}$ are large enough.
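The four steps above can be sketched as follows. Because no SLM or camera is attached here, the image-gradient feedback is replaced by a toy fitness that peaks when the mask matches a hidden target; the names and the specific selection, crossover, and mutation choices are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 50, 16 * 16                      # population size, mask pixels (after binning)
TARGET = rng.uniform(0, 2 * np.pi, M)   # stand-in for the unknown correction

def fitness(phase):
    # Stand-in for loading `phase` on the SLM, grabbing a camera frame,
    # and computing the image gradient G of Eq. (4).
    return -np.sum((phase - TARGET) ** 2)

def evolve(pop, n_keep=25, mut_rate=0.02):
    order = np.argsort([fitness(p) for p in pop])[::-1]
    parents = pop[order[:n_keep]]            # steps 1-2: rank and select
    children = np.empty_like(pop)
    for i in range(len(pop)):
        a, b = parents[rng.integers(n_keep, size=2)]
        mask = rng.random(M) < 0.5           # step 3: uniform crossover
        child = np.where(mask, a, b)
        flip = rng.random(M) < mut_rate      # step 4: sparse mutation
        child[flip] = rng.uniform(0, 2 * np.pi, flip.sum())
        children[i] = child
    return children

pop = rng.uniform(0, 2 * np.pi, (N, M))      # 1st parent population
initial_best = max(fitness(p) for p in pop)
for _ in range(100):                         # iterate until G is large enough
    pop = evolve(pop)
final_best = max(fitness(p) for p in pop)
```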

Without prior information, a single correction is effective within a single ME range. For objects extending over several such FOVs, the wavefront deformations of the point sources from different FOVs are independent of each other. Consequently, when wavefront correction is performed, an image within a single, randomly selected FOV is retrieved, while light from the rest of the object acts as background noise for this process. By performing the iterative process multiple times, partial images can be extracted, from which the entire image can be assembled. On the other hand, when the object extends over three or four ME regions, the initial image is featureless because of background noise [31,37–39]. For wavefront correction, the choice of image metric is the core issue for extracting features from an initial image with low contrast. In Ref. [36], researchers compared the performance of various image metrics and found that the image gradient worked best at making the bright region stand out from the background. In the field of image processing and recognition, the image gradient is also used to deal with images corrupted by background noise [40–42]. With this robustness against background, our wavefront shaping can recover three or four single FOVs and thereby assemble a large stitched FOV.

3. EXPERIMENTS

The experimental setup is illustrated in Fig. 2(a). A CW laser with a center wavelength of 532 nm is scattered by a rotating ground glass (RGG) to produce incoherent illumination. To make the spatial coherence area of the illumination sufficiently small, the exit surface of the RGG is imaged on the object plane with lens ${{\rm L}_1}$. Light from the object passes through a scattering medium, which is a visually opaque diffuser (Thorlabs DG10-220-A). The exit surface of the diffuser is then imaged with a magnification of $M = 5$ on a phase-only SLM (Hamamatsu X10468-04) using a microscope, which consists of an objective (Mitutoyo “Plan-Apochromat” $2 \times$, ${\rm N.A.} = {0.055}$) and a tube lens ${{\rm L}_2}$ with focal length of ${f_2} = 500\;{\rm mm} $. Because the object is located ${d_t}$ (mm) in front of the diffuser, the conjugate plane of the object, indicated by the dashed line P1, is ${M^2}{d_t}$ (mm) in front of the SLM. P1 is imaged by the 4-$f$ system on the CCD, which is the secondary image plane. Both the CCD (Hawks-2020BSI) and SLM are controlled by a computer to perform the iterative process of wavefront correction.


Fig. 2. Experimental setup and super-resolution imaging results. (a) Experimental setup. Incoherent illumination is produced by modulating a cw laser with the RGG. The exit plane of the diffuser, which serves as the scattering medium in the experiment, is imaged on the SLM with a magnification of 5. The red dashed lines represent the conjugate planes of the object. ${{\rm L}_3}$ and ${{\rm L}_4}$ form a 4-$f$ system. M1 and M2 are mirrors. HWP: half-wave plate, AP: aperture, O: object, OBJ: objective, BS: beam splitter, Cam: CCD camera, WD: working distance, P1 and P2: conjugate planes of the object. (b), (c) Diffraction-limited images obtained by the lens system in the absence of the diffuser. (d)–(f) Initial results recorded by the CCD. (g)–(i) Super-resolution images after wavefront correction. (j)–(l) Compensation phase masks. Scale bars: 10 µm. (m) Super-resolution performance of our method at different ${d_t}$.


First, the performance of super-resolution imaging is verified. When there is no diffuser, the phase mask on the SLM is planar and ${d_t} = 0$. The diffraction-limited image is recorded by the CCD, as shown in Figs. 2(b) and 2(c). The resolution limit calculated from the N.A. of the objective is ${R_l} = 5.90 \;{\unicode{x00B5}{\rm m}}$. The line pair marked by the red box in Fig. 2(b), with a width of 6.20 µm, fails to be resolved. For comparison, when the unmeasured diffuser is inserted and wavefront shaping is performed, super-resolution images are captured, as shown in Figs. 2(g)–2(i) (see Visualization 1). When the distance between the object and the diffuser is ${d_t} = 9.5\;{\rm mm}$, a resolution of ${R_s} = 3.48 \;{\unicode{x00B5}{\rm m}}$ is achieved, approximately a 1.70-fold enhancement of the diffraction limit, and the lines marked in Fig. 2(g) are resolved. To test the principle expressed in Eq. (3), the distance between the diffuser and the object is varied while the SLM remains conjugate to the diffuser. As the diffuser gets closer to the object, light with higher spatial frequencies enters the objective, and the resolution surpasses the diffraction limit to a greater extent. When ${d_t} = 7\;{\rm mm}$, a 2.40-fold enhancement of the diffraction limit is achieved, and a line pair with a width of 2.46 µm is resolved, as shown in Fig. 2(h). Note that when the objective moves farther from the diffuser, light from a larger scattering area enters the objective, so more SLM pixels are required for wavefront correction. In our experiment, when ${d_t} = 5\;{\rm mm}$, a 3.39-fold enhancement of the Rayleigh limit is achieved, as shown in Fig. 2(i), in which the width of the marked line pair is 1.74 µm. The super-resolution performance of our system is verified at five different ${d_t}$, as shown in Fig. 2(m).
The super-resolution factor increases linearly as the distance between the object and the diffuser decreases, consistent with Eq. (3). The difference between the proportional coefficient in Fig. 2(m) and that in Eq. (3) is due to the limited spread angle; the spread angle of the light transmitted from the resolution chart is far smaller than the theoretical assumption of $2\pi$. In our wavefront shaping, the pixels of the SLM are binned $5 \times 5$; for example, the phase mask shown in Fig. 2(l) has $95 \times 95$ degrees of freedom. The population size $N$ in the genetic algorithm is kept at 50, and the phase masks of the first parent population are randomly produced. Guided by maximizing the image gradient of the recorded image, the optimum phase mask shown in Fig. 2(l) is obtained after about 500 cycles.
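As a quick sanity check on the quoted figures (assuming the standard Rayleigh formula $R_l = 0.61\lambda/{\rm N.A.}$), the diffraction limit and the three enhanced resolutions follow directly:

```python
wavelength, NA = 532e-9, 0.055
R_l = 0.61 * wavelength / NA          # about 5.90 um, as quoted
for factor in (1.70, 2.40, 3.39):     # enhancements of Figs. 2(g)-2(i)
    print(f"{factor:.2f}x -> {R_l / factor * 1e6:.2f} um")
```

The printed values agree with the reported 3.48, 2.46, and 1.74 µm to within about 0.01 µm of rounding.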

Second, super-resolution, widefield imaging covering multiple ME ranges is demonstrated. The ME range of the diffuser is measured by tilting the incident angle of a laser and recording the decorrelation of the speckle [21]. The decorrelation curve, shown in Fig. 3(a), indicates that the ME range is about 11.6 mrad. After a single wavefront correction, the image of the resolution chart within a randomly selected, single ME range is retrieved. The size of the single-correction FOV is about 50 µm. Given that the distance between the object and the diffuser is about 5 mm, the angular size of the single-correction FOV is about 10 mrad, which is very close to the ME range. When wavefront shaping is performed multiple times, four partial images are obtained successively, as depicted by the four left images in Figs. 3(c) and 3(d). By assembling the four partial images, the entire image, with a combined FOV covering about $4 \times$ the ME range, is reconstructed, as depicted by the right images in Figs. 3(c) and 3(d). By contrast, a diffraction-limited image is obtained without the scattering medium, as shown in Fig. 3(b). In image-guided wavefront shaping, it is common to select a region of interest (ROI) in the initial image to guide the recovery of the final image [31]. In our method, the desired features in the initial image can be intentionally selected as the ROI, and the final image around the ROI is then recovered. In this way, the number of wavefront-shaping runs required to recover four adjacent single FOVs can be reduced significantly. In our experiment, wavefront correction was performed 17 and 14 times to obtain Figs. 3(c) and 3(d), respectively.
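The comparison of the single-correction FOV with the ME range is simple small-angle arithmetic, using the values quoted above:

```python
fov = 50e-6            # single-correction FOV (m), from the experiment
d_t = 5e-3             # object-to-diffuser distance (m)
angular_fov = fov / d_t          # small-angle approximation: 10 mrad
me_range = 11.6e-3               # measured ME range (rad)
assert angular_fov < me_range    # a single correction stays inside the ME range
```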


Fig. 3. Imaging results with simultaneous widefield and super-resolution. (a) Decorrelation curve for measuring the ME range. (b) Diffraction-limited image recorded by the lens system. (c), (d) In the presence of the diffuser, a partial image of the resolution chart within a single ME range is retrieved after a single wavefront correction. The wavefront correction is performed multiple times, four partial images are retrieved successively, as shown by the left four images, and the whole image is then assembled, as shown by the rightmost ones. (c) and (d) are obtained with a combined FOV covering about 4-fold the ME range and with about 2.5-fold and 3-fold enhancement of the diffraction limit, respectively. Scale bars: 10 µm.


Third, significant resistance against defocus is demonstrated. In a lens imaging system, there is usually a trade-off between higher transverse resolution and larger DoF: a larger N.A. means a smaller DoF. With the lens imaging system, the objective cannot resolve the slits in group 8, and the DoF is only 91.00 µm. When the distance between the resolution chart and the focal plane of the objective exceeds 0.1 mm, both the contrast and the resolution of the lens imaging system degrade drastically, as shown by the marked images on the left of Fig. 4(a). In our scattering imaging system, both the wavefront deformation induced by scattering and the defocus aberration are cancelled by wavefront correction. Therefore, as long as the distance between the scattering medium and the object remains the same, a resolution of 2.46 µm (a 2.40-fold enhancement of the diffraction limit) is maintained over a defocus range as large as 0.6 mm, which is about 6.6 times the DoF of the objective. The results are shown in the marked images on the right side of Fig. 4(a). This means that the scattering system overcomes the trade-off between transverse resolution and DoF. Moreover, in practical applications, our method does not require an accurate measurement of the distance between the scattering medium and the entrance pupil to achieve super-resolution imaging.


Fig. 4. Super-resolution imaging over a large DoF. (a) The images on the left are the imaging results of the lens system. Limited by the N.A., the slits of group 8 cannot be resolved. Even worse, the images rapidly deteriorate once the object is out of focus. The right images marked by solid boxes are the wavefront-corrected results in the presence of the scattering medium. $\delta$ is the defocus distance. A 2.40-fold enhancement of the diffraction limit is maintained within a DoF of 0.6 mm. (b) The FWHM of the PSFs produced by lens imaging, speckle correlation, and our wavefront correction at different defocus distances, denoted the lens PSF, speckle PSF, and corrected PSF, respectively. The three images, from left to right, are the lens PSF, speckle PSF, and corrected PSF measured at the focal plane.


In addition to wavefront correction, the popular speckle correlation method is also defocus-resistant [19,43]. Therefore, to compare their performances, the PSFs of the lens system, speckle correlation, and wavefront correction at different defocus distances are obtained, as shown in Fig. 4(b). To measure the PSFs, we use a pinhole with a diameter of 2 µm as the object, which is smaller than the resolution limit of both lens imaging and scattering imaging. The lens PSF is obtained by directly recording the pinhole’s image without the diffuser. Limited by the N.A., the FWHM of the measured PSF is about 5.90 µm when the object is on the focal plane of the objective. When the object is out of focus, the lens PSF broadens rapidly, as shown by the curve in Fig. 4(b). When the diffuser is introduced, speckle is recorded by the CCD, and the speckle PSF is calculated from the autocorrelation of the speckle. When the pinhole is in focus, the FWHM of the speckle PSF is about 5.29 µm, which indicates that speckle correlation can slightly overcome the diffraction limit of the objective by utilizing the scattered light. Benefitting from the defocus-resistance of the speckle, the FWHM of the speckle PSF remains almost unchanged regardless of defocus. In Fig. 4(b), the corrected PSF obtained via our method is the narrowest and is independent of the defocus distance. Therefore, our wavefront correction can achieve resolution significantly higher than the limit of the optics utilized, which is beyond the reach of speckle correlation.
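The speckle-PSF estimate used in this comparison can be sketched as below (function name is ours; the autocorrelation is computed via the Wiener-Khinchin relation rather than by direct correlation):

```python
import numpy as np

def speckle_psf(frame):
    # Autocorrelation of the recorded speckle frame: mean-subtract,
    # take the power spectrum, and inverse-transform (Wiener-Khinchin).
    I = frame - frame.mean()
    power = np.abs(np.fft.fft2(I)) ** 2
    ac = np.fft.fftshift(np.fft.ifft2(power).real)
    return ac / ac.max()
```

The FWHM of the central peak of this normalized autocorrelation is what Fig. 4(b) reports as the speckle-PSF width.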

4. DISCUSSION

In our experiment, although the stitched FOV covers about 4 ME ranges, the ME range itself persists and is difficult to overcome with a single wavefront correction. Our experiments demonstrate that image-guided wavefront shaping works well for objects that are larger than 4 ME ranges. There are three important conditions for obtaining a large combined FOV. First, an initial image with moderate contrast is required. The contrast of the initial image is inversely proportional to the size of the object [17,19]. When the object extends over three or four ME ranges, the initial image has very low contrast, especially when the object is gray-scale; therefore, a CCD with a high dynamic range is required. Second, conjugating the SLM to the diffuser helps the single-correction FOV approach the ME range. In the defocus-resistance experiment, defocusing degrades the conjugation between the scattering medium and the SLM, thus reducing the single-correction FOV. Finally, a robust image metric is useful for a large combined FOV, because during the wavefront correction for a single ME region, light from the other ME regions acts as additive noise. The image gradient is robust against background noise, thus helping achieve a wide FOV. On the other hand, a single image-guided wavefront correction is valid only within a region where the wavefront deformation of neighboring sources is tilt-invariant. Therefore, our method cannot simultaneously correct, via a single static wavefront correction, the wavefront deformation of light from two distantly separated slides.

In lens imaging, when the object is badly out of focus, a clear image can still be obtained if the distance between the lens and the CCD is tuned. However, when the image distance is adjusted, the imaging resolution changes accordingly. By contrast, the resolution of our method is determined by the distance between the object and the scattering medium, as shown by Eq. (3). Therefore, the super-resolution remains the same regardless of defocus, which distinguishes our method from lens imaging.

Around the 1990s, I. Freund predicted that a scattering medium can be used as an optical element for imaging [21,43]. This visionary point has been validated in recent decades. Here we have provided an example in which a scattering medium, even though its transmission matrix is unmeasured, can substantially enhance the throughput of the imaging system. In our experiment, the refresh rate of the SLM is low, which limits the imaging speed. Considering the rapid progress of devices and algorithms [44–46], significant advances in wavefront shaping are foreseeable. These are all conducive to the application of our method in biomedical investigations and standoff observations.

Funding

Research Program of National University of Defense Technology (ZK22-58); Science Fund for Distinguished Young Scholars of Hunan Province (2021JJ10051); National Natural Science Foundation of China (62105365, 62275270).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper can be obtained from the authors upon reasonable request.

REFERENCES

1. O. Lib and Y. Bromberg, “Quantum light in complex media and its applications,” Nat. Phys. 18, 986–993 (2022).

2. L. G. Wright, F. O. Wu, D. N. Christodoulides, et al., “Physics of highly multimode nonlinear optical systems,” Nat. Phys. 18, 1018–1030 (2022).

3. S. Yoon, M. Kim, M. Jang, et al., “Deep optical imaging within complex scattering media,” Nat. Rev. Phys. 2, 141–158 (2020).

4. J. Bertolotti and O. Katz, “Imaging in complex media,” Nat. Phys. 18, 1008–1017 (2022).

5. V. Boominathan, J. T. Robinson, L. Waller, et al., “Recent advances in lensless imaging,” Optica 9, 1–16 (2022).

6. Ç. Işıl, D. Mengu, Y. Zhao, et al., “Super-resolution image display using diffractive decoders,” Sci. Adv. 8, eadd3433 (2022).

7. P. Yu, Y. Liu, Z. Wang, et al., “Ultrahigh-density 3D holographic projection by scattering-assisted dynamic holography,” Optica 10, 481–490 (2023).

8. N. Antipa, S. Necula, R. Ng, et al., “Single-shot diffuser-encoded light field imaging,” in IEEE International Conference on Computational Photography (ICCP) (2016), pp. 1–11.

9. N. Antipa, G. Kuo, R. Heckel, et al., “DiffuserCam: lensless single-exposure 3D imaging,” Optica 5, 1–9 (2018).

10. F. L. Liu, G. Kuo, N. Antipa, et al., “Fourier DiffuserScope: single-shot 3D Fourier light field microscopy with a diffuser,” Opt. Express 28, 28969–28986 (2020).

11. Y. Choi, T. D. Yang, C. Fang-Yen, et al., “Overcoming the diffraction limit using multiple light scattering in a highly disordered medium,” Phys. Rev. Lett. 107, 023902 (2011).

12. Y. Choi, C. Yoon, M. Kim, et al., “Scanner-free and wide-field endoscopic imaging by using a single multimode optical fiber,” Phys. Rev. Lett. 109, 203901 (2012).

13. B. Shapiro, “Large intensity fluctuations for wave propagation in random media,” Phys. Rev. Lett. 57, 2168–2171 (1986).

14. J. W. Goodman, Speckle Phenomena in Optics: Theory and Applications (Roberts and Company, 2007).

15. H. Yu, T. R. Hillman, W. Choi, et al., “Measuring large optical transmission matrices of disordered media,” Phys. Rev. Lett. 111, 153902 (2013).

16. O. Haim, J. Boger-Lombard, and O. Katz, “Image-guided computational holographic wavefront shaping,” arXiv, arXiv:2305.12232 (2023).

17. J. Bertolotti, E. G. Van Putten, C. Blum, et al., “Non-invasive imaging through opaque scattering layers,” Nature 491, 232–234 (2012).

18. O. Katz, E. Small, and Y. Silberberg, “Looking around corners and through thin turbid layers in real time with scattered incoherent light,” Nat. Photonics 6, 549–553 (2012).

19. O. Katz, P. Heidmann, M. Fink, et al., “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8, 784–790 (2014).

20. L. Zhu, F. Soldevila, C. Moretti, et al., “Large field-of-view non-invasive imaging through scattering layers using fluctuating random illumination,” Nat. Commun. 13, 1447 (2022).

21. I. Freund, M. Rosenbluh, and S. Feng, “Memory effects in propagation of optical waves through disordered media,” Phys. Rev. Lett. 61, 2328–2331 (1988).

22. I. M. Vellekoop and A. P. Mosk, “Focusing coherent light through opaque strongly scattering media,” Opt. Lett. 32, 2309–2311 (2007).

23. I. M. Vellekoop, A. Lagendijk, and A. P. Mosk, “Exploiting disorder for perfect focusing,” Nat. Photonics 4, 320–322 (2010).

24. J. H. Park, C. Park, H. Yu, et al., “Subwavelength light focusing using random nanoparticles,” Nat. Photonics 7, 454–458 (2013).

25. R. Horstmeyer, H. Ruan, and C. Yang, “Guidestar-assisted wavefront-shaping methods for focusing light into biological tissue,” Nat. Photonics 9, 563–571 (2015).

26. P. Lai, L. Wang, J. W. Tay, et al., “Photoacoustically guided wavefront shaping for enhanced optical focusing in scattering media,” Nat. Photonics 9, 126–132 (2015).

27. G. Noetinger, S. Métais, G. Lerosey, et al., “Superresolved imaging based on spatiotemporal wave-front shaping,” Phys. Rev. Appl. 19, 024032 (2023).

28. H. Cao, A. P. Mosk, and S. Rotter, “Shaping the propagation of light in complex media,” Nat. Phys. 18, 994–1007 (2022). [CrossRef]  

29. Z. Yu, H. Li, T. Zhong, et al., “Wavefront shaping: a versatile tool to conquer multiple scattering in multidisciplinary fields,” Innovation 3, 100292 (2022). [CrossRef]  

30. S. Gigan, O. Katz, H. B. De Aguiar, et al., “Roadmap on wavefront shaping and deep imaging in complex media,” J. Phys. Photonics 4, 042501 (2022). [CrossRef]  

31. T. Yeminy and O. Katz, “Guidestar-free image-guided wavefront shaping,” Sci. Adv. 7, eabf5364 (2021). [CrossRef]  

32. S. Sun, Z. W. Nie, Y. G. Li, et al., “Ghost synthetic aperture with computational wavefront shaping,” arXiv, arXiv:2208.08644v3 (2022). [CrossRef]  

33. D. Wang, S. K. Sahoo, X. Zhu, et al., “Non-invasive super-resolution imaging through dynamic scattering media,” Nat. Commun. 12, 3150 (2021). [CrossRef]  

34. S. Zhou and L. Jiang, “Modern description of Rayleigh’s criterion,” Phys. Rev. A 99, 013808 (2019). [CrossRef]  

35. J. W. Goodman, Introduction to Fourier Optics (Roberts and Company, 2005).

36. J. R. Fienup and J. J. Miller, “Aberration correction by maximizing generalized sharpness metrics,” J. Opt. Soc. Am. A 20, 609–620 (2003). [CrossRef]  

37. R. Muller and A. Buffington, “Real-time correction of atmospherically degraded telescope images through image sharpening,” J. Opt. Soc. Am. 64, 1200–1210 (1974). [CrossRef]  

38. D. Conkey, A. Brown, A. Caravaca-Aguirre, et al., “Genetic algorithm optimization for focusing through turbid media in noisy environments,” Opt. Express 20, 4840–4849 (2012). [CrossRef]  

39. M. Hirose, N. Miyamura, and S. Sato, “Deviation-based wavefront correction using the SPGD algorithm for high-resolution optical remote sensing,” Appl. Opt. 61, 6722–6728 (2022). [CrossRef]  

40. G. H. Chen, C. L. Yang, S. L. Xie, et al., “Gradient-based structural similarity for image quality assessment,” in International Conference on Image Processing (IEEE, 2006), pp. 2929–2932.

41. C. Li and A. C. Bovik, “Content-partitioned structural similarity index for image quality assessment,” Signal Process. Image Commun. 25, 517–526 (2010). [CrossRef]  

42. J. Zhu and N. Wang, “Image quality assessment by visual gradient similarity,” IEEE Trans. Image Process. 21, 919–933 (2011). [CrossRef]  

43. I. Freund, “Looking through walls and around corners,” Physica A 168, 49–65 (1990). [CrossRef]  

44. Z. Cheng, C. Li, A. Khadria, et al., “High-gain and high-speed wavefront shaping through scattering media,” Nat. Photonics 17, 299–305 (2023). [CrossRef]  

45. O. Tzang, E. Niv, S. Singh, et al., “Wavefront shaping in complex media with a 350 kHz modulator via a 1D-to-2D transform,” Nat. Photonics 13, 788–793 (2019). [CrossRef]  

46. B. Y. Feng, H. Guo, M. Xie, et al., “NeuWS: Neural wavefront shaping for guidestar-free imaging through static and dynamic scattering media,” Sci. Adv. 9, eadg4671 (2023). [CrossRef]  

Supplementary Material (1)

Visualization 1: This video shows the process of image recovery via wavefront correction. Left: the image is gradually retrieved from the initial low-contrast result. Right: the corresponding image metric, which gradually rises to its maximum.

Data availability

Data underlying the results presented in this paper can be obtained from the authors upon reasonable request.



Figures (4)

Fig. 1. Principle of overcoming the diffraction limit by exploiting the unmeasured scattering medium. (a) Lens imaging system. (b) Scattering imaging system. In the presence of the scattering medium, the largest oblique angle ${\theta ^\prime _{\rm{MAX}}}$ becomes larger, so light with spatial frequencies well beyond the cutoff frequency of the lens is scattered into the entrance pupil. Wavefront correction is then performed by the SLM, and a super-resolution image is recorded. (c) Flow chart of the iterative process to find the optimum wavefront correction. (d) Gradual improvement of the image gradient during the iterative process. (e) Diffraction-limited image recorded by the lens system in (a). (f) Featureless image directly recorded by the scattering system in (b). (g) Super-resolution image recorded by the scattering imaging system after wavefront correction. (h) Profiles along the lines marked in (e)–(g).
Fig. 2. Experimental setup and super-resolution imaging results. (a) Experimental setup. Incoherent illumination is produced by modulating a cw laser with the RGG. The exit plane of the diffuser, which serves as the scattering medium in the experiment, is imaged onto the SLM with a magnification of 5. The red dashed lines represent the conjugate planes of the object. ${{\rm L}_3}$ and ${{\rm L}_4}$ form a 4-f system. M1 and M2 are mirrors. HWP: half-wave plate, AP: aperture, O: object, OBJ: objective, BS: beam splitter, Cam: CCD camera, WD: working distance, P1 and P2: conjugate planes of the object. (b)–(c) Diffraction-limited images obtained by the lens system in the absence of the diffuser. (d)–(f) Initial results recorded by the CCD. (g)–(i) Super-resolution images after wavefront correction. (j)–(l) The compensation phase masks. Scale bars: 10 µm. (m) The super-resolution performance of our method at different ${d_t}$.
Fig. 3. Imaging results with simultaneous widefield and super-resolution performance. (a) Decorrelation curve for measuring the ME range. (b) Diffraction-limited image recorded by the lens system. (c), (d) In the presence of the diffuser, a partial image of the resolution chart within a single ME range is retrieved after a single wavefront correction. The wavefront correction is performed multiple times; four partial images are retrieved successively, as shown by the four images on the left, and the whole image is then assembled, as shown by the rightmost one. (c) and (d) are obtained with a combined FOV covering about 4× the ME range and with about 2.5-fold and 3-fold enhancement of the diffraction limit, respectively. Scale bars: 10 µm.
Fig. 4. Super-resolution imaging over a large DoF. (a) The images on the left are the imaging results of the lens system. Limited by the N.A., the slits of group 8 cannot be resolved; worse, the images deteriorate rapidly once the object is out of focus. The images on the right, marked by solid boxes, are the wavefront-corrected results in the presence of the scattering medium. $\delta$ is the defocus distance. A 2.40-fold enhancement of the diffraction limit is maintained within a DoF of 0.6 mm. (b) FWHM of the PSFs produced by lens imaging, speckle correlation, and our wavefront correction at different defocus distances, represented by the lens PSF, speckle PSF, and corrected PSF, respectively. The three images, from left to right, are the lens PSF, speckle PSF, and corrected PSF measured at the focal plane.
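The iterative search sketched in Fig. 1(c)–(d) can be mimicked numerically. The toy sketch below assumes a segment-wise hill climb over SLM phases with an image-sharpness (gradient) metric as feedback; the grid size, trial count, and all names (`pupil`, `phi_c`, `metric`) are illustrative, not the authors' actual optimizer or hardware.

```python
import numpy as np

# Toy model: an unknown scattering phase phi_s scrambles the pupil field;
# the "SLM" phase phi_c is tuned segment by segment to sharpen the PSF.
rng = np.random.default_rng(0)
N = 16                                     # pupil sampled on an N x N grid
pupil = np.ones((N, N))                    # open aperture
phi_s = rng.uniform(0, 2 * np.pi, (N, N))  # unknown scattering phase
phi_c = np.zeros((N, N))                   # SLM correction, initially flat

def metric(phi_c):
    """PSF sharpness: sum of squared spatial gradients of |FFT{P e^{i phi}}|^2."""
    field = pupil * np.exp(1j * (phi_s + phi_c))
    psf = np.abs(np.fft.fft2(field)) ** 2
    gy, gx = np.gradient(psf)
    return float(np.sum(gx ** 2 + gy ** 2))

before = metric(phi_c)
trials = np.linspace(0, 2 * np.pi, 8, endpoint=False)
for i in range(N):                         # visit every SLM segment once
    for j in range(N):
        scores = []
        for t in trials:
            phi_c[i, j] = t
            scores.append(metric(phi_c))
        phi_c[i, j] = trials[int(np.argmax(scores))]  # keep the best trial
after = metric(phi_c)
assert after > before   # sharpness rises monotonically, as in Fig. 1(d)
```

Because the current phase value (zero) is among the trials, each segment update can never lower the metric, which is why the curve in Fig. 1(d) is monotonically non-decreasing. The paper's experiment replaces this exhaustive per-segment scan with faster optimizers (e.g. genetic or SPGD-type algorithms, Refs. [38,39]).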

Equations (4)

Equations on this page are rendered with MathJax.

$$H_l(\mathbf{r}_t,\mathbf{r}_i)=\left|\int_{-D/2}^{D/2}\!\int_{-D/2}^{D/2}P(\mathbf{r})\exp\left\{i\frac{2\pi}{\lambda f}(\mathbf{r}_i-\mathbf{r}_t)\cdot\mathbf{r}\right\}d\mathbf{r}\right|^2,$$

$$H_s(\mathbf{r}_t,\mathbf{r}_i)=\left|\int_{-D/2}^{D/2}\!\int_{-D/2}^{D/2}P(\mathbf{r}_s)\,e^{i\phi_s(\mathbf{r}_s)}\,e^{i\phi_c(\mathbf{r}_s)}\exp\left\{i\frac{2\pi}{\lambda f}(\mathbf{r}_i-\mathbf{r}_t)\cdot\mathbf{r}_s\right\}d\mathbf{r}_s\right|^2,$$

$$R_s=\frac{d_t}{d_s}R_l.$$

$$G=\iint\left[\left(\frac{\partial I(\mathbf{r}_i)}{\partial x_i}\right)^2+\left(\frac{\partial I(\mathbf{r}_i)}{\partial y_i}\right)^2\right]dx_i\,dy_i,$$
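The gradient metric $G$ has a direct discrete counterpart: sum the squared finite-difference gradients of the recorded intensity over all pixels. A minimal numpy sketch (the function name `sharpness_metric` is illustrative):

```python
import numpy as np

def sharpness_metric(image):
    """Discrete form of the metric G: sum over pixels of (dI/dx)^2 + (dI/dy)^2."""
    gy, gx = np.gradient(np.asarray(image, dtype=float))
    return float(np.sum(gx ** 2 + gy ** 2))

# A sharp edge scores higher than a blurred one, which is what makes G a
# usable feedback signal for iterative wavefront correction.
sharp = np.zeros((32, 32))
sharp[:, 16:] = 1.0                       # hard step
blurred = np.zeros((32, 32))
blurred[:, 12:20] = np.linspace(0, 1, 8)  # 8-pixel ramp instead of a step
blurred[:, 20:] = 1.0
assert sharpness_metric(sharp) > sharpness_metric(blurred)
```

Blurring spreads the same intensity change over more pixels with smaller slopes, and since $G$ weights slopes quadratically, the sharper image always wins; this is the property the iterative correction loop exploits.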