
Imaging through a thin scattering layer and jointly retrieving the point-spread-function using phase-diversity


Abstract

Recently introduced techniques based on the angular memory effect enable non-invasive imaging of objects hidden behind thin scattering layers. However, both speckle-correlation and bispectrum analysis rely on the statistical average over large numbers of speckle grains, which means that they can hardly access the important information contained in the point-spread-function (PSF) of a highly scattering imaging system. Here, inspired by notions used in astronomy, we present a phase-diversity speckle imaging scheme based on recording a sequence of intensity speckle patterns at various imaging planes, and experimentally demonstrate that, in addition to retrieving the image of hidden objects, we can simultaneously estimate the pupil function and the PSF of a highly scattering imaging system without any guide-star or reference.

© 2017 Optical Society of America

1. Introduction

In many imaging scenarios, the interaction between light and complex samples with an inhomogeneous refractive index induces light scattering, which is usually seen as an obstacle to imaging objects hidden inside or behind such samples: direct observation becomes impossible and only a complex speckle pattern is produced [1]. In recent years, wavefront shaping techniques have emerged as a powerful tool for imaging hidden objects or focusing through highly scattering media by controlling the incident light [2–12]. However, these techniques are complex and lengthy, since they require a detector or an optical/acoustical probe in the plane of interest. A recent breakthrough reported by Bertolotti et al. [13] avoided the use of guide-stars and enabled non-invasive imaging through thin scattering layers by exploiting the inherent angular correlations, known as the “memory effect”, of the scattered speckle patterns [14, 15]. When an object is placed within the range determined by the angular memory effect, the angular signal is the convolution between the object and the system’s point-spread-function (PSF), which is the highly complex speckle pattern generated by any point source on the object. Since the autocorrelation of the PSF is close to a δ function, the Fourier amplitude of the object can be retrieved from a large speckle pattern (i.e. one containing sufficiently many speckle grains) by calculating its autocorrelation, and the lost phase information is recovered via an iterative phase-retrieval algorithm [16]. Katz et al. [17] put forward a single-shot implementation of this concept, inspired by astronomical techniques: they regarded the scattering imaging system as an incoherent imaging system and recorded the signal directly on a camera. Bispectrum analysis [18] also allows imaging from a single image in the same scenario, by exploiting the fact that the Fourier phase of an object can be deterministically extracted from a large speckle pattern, relying on the property that the bispectrum of the system’s PSF is real-valued [19]. Although speckle correlation and bispectrum analysis can retrieve hidden objects, they cannot directly access the scatterer’s exact phase distortion, i.e. the PSF, since both methods are based on statistical averaging and the influence of the PSF is eliminated or ignored in the reconstruction process. Alternatively, if a point source is present, one can measure the intensity PSF of the highly scattering imaging system and then perform image reconstruction by deconvolution [20]. However, it is inconvenient and impractical to introduce such a guide-star in most real applications, e.g. in biomedical imaging. Yet, the prospect of accessing not only the object but also the scattering layer’s properties and the exact PSF would be highly beneficial for non-invasive imaging, for instance to image more complex objects.

In this work, we experimentally demonstrate a non-invasive imaging scheme based on the angular memory effect. Inspired by the phase-diversity technique used in astronomical imaging [21], a sequence of speckle patterns from multiple planes is sequentially collected by a translated camera. From only a small region of this sequence of speckle patterns, we can jointly retrieve the image of incoherently illuminated objects hidden behind a thin but highly scattering layer and estimate the local PSF of the highly scattering imaging system, without any reference or additional experimental constraint on the object side. In addition, our method is straightforward to implement and, to the best of our knowledge, is the first experimental demonstration that the phase-diversity technique can be used not only in the aberration regime but also in highly scattering cases, as was envisioned in the early phase-diversity work [21].

2. Principle and numerical simulations

The principle of the experiments for phase-diversity speckle imaging and a numerical simulation are presented in Fig. 1. An object, hidden at a distance u behind a highly scattering medium, is illuminated by a spatially incoherent and narrowband source. The scattered light is recorded by a camera, which is initially placed at a distance v from the other side of the scattering medium. Since the object is located within the range determined by the angular memory effect, each point on the object generates a nearly identical, but shifted, random speckle pattern on the camera. The camera image is a simple incoherent superposition of these shifted random speckle patterns, generated by all the points on the object. The system in Fig. 1(a) can therefore be regarded as an incoherent imaging system with a shift-invariant PSF, i.e. the identical random speckle pattern, and the camera image is the convolution of the object and the PSF [17]. In order to jointly retrieve the image of the object and estimate the PSF of the system, we change the position of the camera step by step with a fixed interval δ and collect the camera image In(x) at each position [Fig. 1(b)], which reads:

$I_n(x) = O(x) \ast S_n(x) + w_n(x)$   (1)
where n = 0, 1, 2, ..., N−1; n = 0 corresponds to the initial position and N denotes the total number of diversity images. O(x) is the image of the object, S_n(x) denotes the PSF of the system when the camera is placed at the n-th position, and w_n(x) is a noise term. Since the speckle pattern is very large (up to millions of speckle grains), the full problem is numerically extremely challenging. We therefore apply a phase-diversity algorithm to various sub-regions of the speckle patterns, and retrieve simultaneously the image of the hidden object [Fig. 1(c)] and the corresponding sub-PSF [Fig. 1(d)] for the selected sub-region. Detailed information on the method is given below and in Fig. 2.
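
As a minimal Python sketch of the forward model of Eq. (1) (not the released Matlab code of Ref. [26]), the diversity images can be simulated by convolving a test object with a given stack of speckle PSFs and adding Gaussian noise; the helper name and array layout are our own assumptions.

import numpy as np
from numpy.fft import fft2, ifft2

def diversity_images(obj, psfs, noise_sigma=0.01, rng=None):
    # Forward model of Eq. (1): I_n(x) = O(x) * S_n(x) + w_n(x), with * a 2-D convolution.
    rng = np.random.default_rng() if rng is None else rng
    obj_ft = fft2(obj)
    stack = []
    for psf in psfs:  # psfs: (N, H, W) stack of shift-invariant speckle PSFs S_n
        img = np.real(ifft2(obj_ft * fft2(psf)))              # circular convolution O * S_n
        img += noise_sigma * rng.standard_normal(obj.shape)   # additive Gaussian noise w_n
        stack.append(img)
    return np.stack(stack)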


Fig. 1 Concept and numerical simulations: jointly imaging hidden objects and estimating the PSF of a highly scattering imaging system. (a) An object is illuminated by spatially incoherent light. The scattered light is recorded by a camera at various positions with a fixed interval δ. P0 is the initial position; (b) Simulated diversity speckle patterns, in which the selected sub-region is marked by a dashed box; (c) Retrieved image and diffraction-limited image; (d) Estimated sub-PSF and true sub-PSF (only the intensity is shown). Scale bar: 400 camera pixels in (b) and 10 camera pixels in (c) and (d).


Fig. 2 Details of the reconstruction of the speckle patterns in phase-diversity speckle imaging. Only a small region of the speckle patterns is sufficient for the reconstruction. An additional apodization function is required to smooth the edges of each sub-image. Scale bar: 10 camera pixels.


To verify the uniqueness of the sub-PSF estimated via the phase-diversity speckle imaging method, two other simulated hidden objects, the “letter H” and the “double stars”, are used to estimate the sub-PSF of the same scattering imaging system as used for the “digit 2”. The corresponding results are shown in Fig. 3: the first row shows the retrieved images of the different hidden objects, and the second row shows the sub-PSFs estimated from each object. The three recovered sub-PSFs are almost identical, demonstrating that the iterative reconstruction converges to a unique solution, independently of the hidden object.


Fig. 3 Estimating the sub-PSF of scattering imaging system with different objects. (a), (b) and (c) are respectively the corresponding simulated results with the objects “digit 2”, “letter H” and “double stars”. The first row shows the retrieved images and the second one is the estimated sub-PSFs. Scale bar: 10 camera pixels.


As an early technique in astronomical imaging, phase diversity has been used to estimate unknown phase aberrations and eliminate the phase distortions caused by atmospheric turbulence, often by taking one or multiple snapshots around the focus, producing a set of blurred but recognizable images for weak aberrations [21]. However, in highly scattering cases, only a 3D propagating speckle pattern is produced and sampled by the camera, containing no obvious visual information on the hidden object. The illuminated area on the scattering layer, or any aperture placed between the scattering layer and the camera, is regarded as the pupil of the highly scattering imaging system, and its generalized pupil function can be defined. When introducing diversity, the set of generalized pupil functions of the highly scattering imaging system can still be written in its most general form as follows (see derivation in Appendix A):

$H_n(f) = |H_n(f)| \exp\{i[\phi(f) + \theta_n(f)]\}$   (2)
where f is the position in the pupil, ϕ(f) is the unknown random phase on the pupil, and θ_n(f) denotes the known phase function corresponding to the n-th diversity image. We consider that the modulus of the generalized pupil function |H_n(f)| is 1 over a round pupil and 0 elsewhere. The effective pupil diameter is determined by the lateral decorrelation length of the speckle grains, Δx = 1.0 λ(v/D) [1], where D denotes the diameter of the aperture stop placed between the scattering medium and the camera. As was envisioned in Ref. [21], the phase-diversity technique can not only solve weak phase distortions in the aberration regime, but can also handle any other kind of phase fluctuation, e.g. the highly scattering case: without loss of generality, the totally random and complex phase function ϕ(f) can be parametrized by suitable basis functions; here the parameters represent the point-by-point phase values, in which case the basis functions are Kronecker delta functions. In order to determine these phase values, multiple diversity images are required to provide sufficient information. For simplicity and accessibility, we use a simple axial translation of the camera, which corresponds to the known parabolic diversity phase function of defocus [22]:
$\theta_n(f) = -\dfrac{n\delta\,\pi}{\lambda\, v\,(v+n\delta)}\,|f|^2$   (3)
where λ is the wavelength of the incident light, v is the imaging distance and |f| is the distance to the center of the pupil.
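
The defocus diversity phases of Eq. (3) and the pupil functions of Eq. (2) can be built numerically as in the following sketch, assuming a square pupil-plane grid and illustrative (not experimental) parameters; the sign of the defocus term follows the derivation in Appendix A, and the function name is ours.

import numpy as np

def pupil_functions(npix, df, pupil_radius, wavelength, v, delta, N, rng=None):
    # Generalized pupil functions H_n(f) of Eq. (2) with the defocus phases of Eq. (3).
    # df: pupil-plane grid spacing; all lengths in the same units.
    rng = np.random.default_rng() if rng is None else rng
    c = (np.arange(npix) - npix // 2) * df
    fx, fy = np.meshgrid(c, c)
    f2 = fx**2 + fy**2
    modulus = (np.sqrt(f2) <= pupil_radius).astype(float)  # |H_n(f)|: 1 over a round pupil, 0 elsewhere
    phi = rng.uniform(-np.pi, np.pi, (npix, npix))         # unknown random phase phi(f) of the scatterer
    H = []
    for n in range(N):
        theta_n = -(n * delta * np.pi) / (wavelength * v * (v + n * delta)) * f2  # Eq. (3)
        H.append(modulus * np.exp(1j * (phi + theta_n)))
    return np.stack(H), phi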

If we assume that additive white Gaussian noise (e.g. thermal noise) is the dominant noise in the speckle patterns on the camera, a cost function can be derived to estimate the local phase values on the pupil of the system via maximum-likelihood estimation [21, 23]:

$L[\phi(f)] = \displaystyle\sum_{f}\left[\dfrac{\left|\sum_{k=0}^{N-1}\tilde{I}_k(f)\,\tilde{S}_k^{*}(f)\right|^2}{\sum_{j=0}^{N-1}\left|\tilde{S}_j(f)\right|^2+\sigma} - \sum_{n=0}^{N-1}\left|\tilde{I}_n(f)\right|^2\right]$   (4)
where $\tilde{I}_n(f)$ and $\tilde{S}_n(f)$ (with indices k, j, n) are the Fourier transforms of $I_n(x)$ and $S_n(x)$ in Eq. (1), “*” denotes complex conjugation, and σ is a positive parameter introduced to improve the convergence and stability of the optimization process. The first term on the right-hand side of Eq. (4) is the part of the cost function that must be computed during the optimization, while the second term is a constant, independent of the unknown random phase values.
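
Equation (4) translates almost directly into code. The following Python sketch (variable names are ours) evaluates the likelihood in the Fourier domain from the recorded diversity images and a trial set of PSFs.

import numpy as np
from numpy.fft import fft2

def likelihood(images, psfs, sigma=1e-5):
    # Eq. (4): images and psfs are (N, H, W) stacks of I_n(x) and S_n(x).
    I = fft2(images, axes=(-2, -1))                        # \tilde I_n(f)
    S = fft2(psfs, axes=(-2, -1))                          # \tilde S_n(f)
    num = np.abs(np.sum(I * np.conj(S), axis=0)) ** 2
    den = np.sum(np.abs(S) ** 2, axis=0) + sigma
    const = np.sum(np.abs(I) ** 2, axis=0)                 # object-independent second term
    return float(np.sum(num / den - const))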

A nonlinear optimization algorithm can be used to estimate the random phases ϕ(f) from the cost function of Eq. (4). In order to ensure a relatively high efficiency of the optimization, we choose the quasi-Newton method, which retains the good convergence properties of Newton’s method yet avoids its heavy computation of the Hessian matrix [24]. The search direction in the quasi-Newton method is found as $d_i = -H_i \nabla g[\phi_i(f)]$, where $\nabla(\cdot)$ is the gradient operator and i denotes the iteration index. Here $g[\phi_i(f)] = -L[\phi_i(f)]$, since maximizing the likelihood function of Eq. (4) is equivalent to minimizing its negative. $H_i$ is a computed matrix that approximates the inverse of the Hessian matrix of g at the current iterate. Many methods are available to update $H_i$ at each iteration; we choose the Broyden-Fletcher-Goldfarb-Shanno (BFGS) formula [24], which is considered the most effective in general cases. $\phi_0(f)$ is the initial guess of the unknown random phases and $H_0$ is the initial value of the computed matrix, which can be initialized as an identity matrix. A line search is used to update the estimate as $\phi_{i+1}(f) = \phi_i(f) + \alpha^{*} d_i$, where $\alpha^{*}$ is the step length minimizing $g[\phi_i(f) + \alpha d_i]$. The final estimate of the random phase values is obtained after several iterations.
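
The following hedged sketch delegates this minimization to SciPy's limited-memory BFGS routine, a quasi-Newton variant of the BFGS update described above, with a finite-difference gradient for brevity; it reuses the likelihood and pupil sketches given earlier and is not the authors' released Matlab implementation (Code 1, Ref. [26]).

import numpy as np
from scipy.optimize import minimize

def estimate_phase(images, modulus, thetas, sigma=1e-5, maxiter=1200):
    # Minimize g(phi) = -L[phi(f)] of Eq. (4) with a quasi-Newton iteration, phi initialized to zero.
    npix = modulus.shape[0]

    def psfs_from_phase(phi_flat):
        phi = phi_flat.reshape(npix, npix)
        H = modulus * np.exp(1j * (phi[None] + thetas))        # Eq. (2)
        return np.abs(np.fft.ifft2(H, axes=(-2, -1))) ** 2     # Eq. (5)

    def g(phi_flat):
        return -likelihood(images, psfs_from_phase(phi_flat), sigma)

    result = minimize(g, np.zeros(npix * npix), method="L-BFGS-B",
                      options={"maxiter": maxiter})
    return result.x.reshape(npix, npix)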

The PSF of the system is the squared modulus of the coherent impulse response function, which is the inverse Fourier transform of the generalized pupil function (See derivation in Appendix A):

$S_n(x) = \left|\mathcal{F}^{-1}\{H_n(f)\}\right|^2$   (5)
where $\mathcal{F}^{-1}\{\cdot\}$ denotes the inverse Fourier transform. The image of the hidden object is simultaneously retrieved from [21, 23]:

$O(x) = \mathcal{F}^{-1}\left\{\dfrac{\sum_{n=0}^{N-1}\tilde{I}_n(f)\,\tilde{S}_n^{*}(f)}{\sum_{k=0}^{N-1}\left|\tilde{S}_k(f)\right|^2+\sigma}\right\}$   (6)
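
Given the estimated pupil functions, Eqs. (5) and (6) can be applied as in this short sketch (function name and array layout are ours):

import numpy as np
from numpy.fft import fft2, ifft2

def restore_object(images, H, sigma=1e-5):
    # Eq. (5): S_n(x) = |F^{-1}{H_n(f)}|^2, then Eq. (6): Wiener-like joint estimate of O(x).
    psfs = np.abs(ifft2(H, axes=(-2, -1))) ** 2
    I = fft2(images, axes=(-2, -1))
    S = fft2(psfs, axes=(-2, -1))
    obj_ft = np.sum(I * np.conj(S), axis=0) / (np.sum(np.abs(S) ** 2, axis=0) + sigma)
    return np.real(ifft2(obj_ft)), psfs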

Interestingly, since the phase-diversity speckle imaging method is not based on the statistical average of speckle grains, a small region of the speckle pattern only, as marked by the dashed box in Fig. 2, is sufficient to retrieve the hidden object and simultaneously estimate a sub-PSF of the system, provided it is larger than the object autocorrelation. Larger sub-images would certainly enable the estimation of a more complex sub-PSF and of larger objects, since the sub-image size in turn determines the largest hidden object that can be recovered by the deconvolution method. However, increasing the sub-image size strongly increases the computational complexity and the number of diversity images required, but would not improve the resolution or the contrast of the final image. Considering a sub-region of the speckle pattern gives rise to an inevitable problem: discontinuities at the edges of the sub-image [25]. These discontinuities produce artifacts in the Fourier transforms, which are implemented as two-dimensional discrete fast Fourier transforms, and these artifacts affect the accuracy of the reconstruction of the object and of the estimation of the sub-PSF. In our method, this problem is overcome by smoothing the edges of the sub-image with an apodization function (e.g. a Hanning window), as illustrated in the sketch below. It is also worth noting that the lateral diffusion of the scattered light during axial propagation causes a boundary effect: when moving the camera in the axial direction, the scattered light propagates and therefore diffuses laterally (together with a global translation for off-axis regions). Because we keep the size of the phase-diversity images fixed, there is always a mismatch between the propagated images and the captured sub-images. In order to minimize the influence of this boundary effect, the sub-image is advantageously selected near the optical axis and the maximum translation of the camera from the initial position should be limited; the severity of the effect depends on the number of speckle grains contained in the sub-image and on the amount of translation. As an example, in the numerical simulations of Figs. 1 and 3, the lateral decorrelation length is 2.25 pixels and the sub-images contain 70 pixels in one dimension, corresponding to around 31 speckle grains. The 19 diversity images span in total 6 axial decorrelation lengths, meaning that we sample 6 speckle grains along the axial direction. This number is well below the number of speckle grains contained in one lateral dimension of the sub-image and the apodization window, ensuring a limited boundary effect.
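
A sketch of the apodization mentioned above (the Hanning window follows the text; the helper name is ours):

import numpy as np

def apodize(sub_image):
    # Smooth the sub-image edges with a separable 2-D Hanning window to suppress FFT edge artifacts.
    wy = np.hanning(sub_image.shape[0])
    wx = np.hanning(sub_image.shape[1])
    return sub_image * np.outer(wy, wx)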

3. Experimental results

Figure 4 shows the optical setup of the experimental demonstration of this concept. The light source is a spatially incoherent light-emitting diode (Thorlabs, M625L3) with a nominal wavelength of 625 nm and a bandwidth of 18 nm, filtered by a narrow band-pass filter (Andover, 633FS02-50, 1.0 +/− 0.2 nm) mounted on the camera (Andor, ZYLA-5.5-USB 3.0) to ensure a high contrast of the speckle patterns. The incoherent light illuminates the object, the digit “2” (Edmund, 1951 USAF Negative Target, 2” × 2”, ~350 µm), which is shown in Fig. 5(a) and hidden ~60 cm behind a scattering medium (Edmund, Ground Glass Diffuser). The illuminated area on the scattering medium is adjusted by an adjacent iris with a diameter of 5.21 mm to control the size of the speckle grains. The camera is initially placed at a distance of ~12 cm in front of the scattering medium to collect the scattered light, and a 50 mm linear translation stage (Thorlabs, DDSM50) is used to move the camera.


Fig. 4 Experimental set-up. A light-emitting diode is used as the spatially incoherent source, and a narrow band-pass filter is mounted on the camera to ensure the high contrast of the speckle patterns. The camera is moved step-by-step with a linear translation stage. L: Lens.


Fig. 5 Experimental results of phase-diversity speckle imaging. (a) Original image of hidden object, digit “2”; (b) Raw diversity camera images, in which three independent groups of sub-images are selected and reconstructed, respectively; (c) First column: estimated random local phase values, corresponding to different groups; second column: estimated sub-PSFs; third column: retrieved images of the hidden object from the three groups of speckle patterns. Scale bar: 500 camera pixels in (a) and (b); 10 camera pixels in (c).


Figure 5(b) shows some of the 19 acquired frames of raw diversity camera images, recorded with a fixed interval δ = ΔZ/3, where ΔZ = 7.1 λ(v/D)² is the axial decorrelation length [1]. The camera images are spatially normalized for the slowly varying envelope of the scattered light and then smoothed by a Gaussian kernel. We select three independent groups of sub-images around the center of the processed camera images. Each sub-image contains 84 camera pixels in one dimension and is smoothed by a Hanning window to solve the problem of edge discontinuities in the discrete Fourier transforms and to limit the boundary effect. A quasi-Newton algorithm from the Matlab Optimization Toolbox is used to solve the optimization problem of Eq. (4) and reconstruct each group of sub-images. The initial guess of the local phase values is zero and the parameter σ is 10^−5, selected by trial and error: if σ is too small, the reconstructions degrade because of noise amplification; if it is too large, it overly smooths the reconstructions. The optimized random phases are estimated after 1200 iterations, as shown in the first column of Fig. 5(c). The estimated sub-PSFs of the system and the retrieved images of the hidden object are then obtained via Eqs. (5) and (6).
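
The envelope normalization and Gaussian smoothing described here could be implemented, for instance, as in the following sketch; the kernel widths are illustrative assumptions, not the values used in the experiment.

import numpy as np
from scipy.ndimage import gaussian_filter

def normalize_envelope(raw, envelope_sigma=100.0, smooth_sigma=1.0):
    # Divide out the slowly varying envelope of the scattered light, then smooth lightly.
    envelope = gaussian_filter(raw.astype(float), envelope_sigma)
    flat = raw / np.maximum(envelope, 1e-12)
    return gaussian_filter(flat, smooth_sigma)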

As presented, our phase-diversity speckle imaging method can not only image a hidden object behind a thin but complex scattering layer, as the speckle-correlation and bispectrum-analysis methods do, but can also estimate the pupil function and the PSF of the highly scattering imaging system without any reference. Interestingly, once the PSF is acquired, we can directly and efficiently recover the image of other hidden objects from the complex speckle pattern via simple deconvolution, instead of repeating the foregoing procedure or using other complex imaging methods.

As a demonstration of this concept, after estimating the PSF, we replace the digit “2” with another object, the digit “3” (~350 µm), which is from the same USAF target and shown in Fig. 6(a). The speckle pattern for the digit “3”, shown in Fig. 6(b), is captured by the camera at the same initial position as for the digit “2”. After applying the same preprocessing to the raw camera image, we select the sub-images in the same areas where the sub-PSFs have been estimated. The Hanning window is used to smooth each sub-image, as shown in the first column of Fig. 6(c). Deconvolution is then performed with the Lucy-Richardson method, with 30 to 50 iterations, using the Matlab Image Processing Toolbox; a minimal equivalent is sketched below. The deconvolution results of the three groups are shown in Fig. 6(c), which demonstrates the feasibility of the deconvolution approach and, moreover, verifies the validity of the sub-PSFs estimated via our phase-diversity speckle imaging method. Beyond this simple demonstration, more complex objects, and even broadband or polychromatic objects, could in principle be retrieved, since the local pupil phase function is known.
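
The text uses the Lucy-Richardson routine of the Matlab Image Processing Toolbox; an equivalent minimal Richardson-Lucy loop in Python (a sketch, not the authors' code) reads:

import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=40):
    # Standard Richardson-Lucy iteration (the paper uses 30 to 50 iterations).
    image = np.clip(image.astype(float), 1e-12, None)   # RL assumes non-negative data
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(image, image.mean())
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same") + 1e-12
        estimate *= fftconvolve(image / blurred, psf_mirror, mode="same")
    return estimate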


Fig. 6 Experimental results of imaging hidden object by simple deconvolution with the estimated sub-PSFs from phase-diversity speckle imaging method. (a) Original image of hidden object, digit “3”, placed at the same position as digit “2”; (b) Raw camera image. Selecting the same three areas, where the sub-PSFs are estimated in advance; (c) First column: selected three sub-images, smoothed by the Hanning window; second column: retrieved results from three sub-images by deconvolution. Middle images are the corresponding sub-PSFs estimated by phase-diversity speckle imaging method as in Fig. 5. Scale bar: 500 camera pixels in (a) and (b); 10 camera pixels in (c).


Different regions of the speckle pattern are uncorrelated, which means that the local PSF associated with one sub-image of the speckle pattern is unique and that the hidden object cannot be recovered by deconvolution with the local PSF of another region. As an example, we use the sub-PSFs estimated from the different groups of sub-images to reconstruct only the sub-image of group 1, which is shown in Fig. 7(a). Figure 7(b) is the deconvolution result obtained with its corresponding sub-PSF estimated from group 1, while no information about the hidden object can be recovered if we use the other two sub-PSFs to reconstruct the sub-image of group 1, as shown in Figs. 7(c) and 7(d), respectively.


Fig. 7 Deconvolution results with the sub-PSFs estimated from different groups of sub-images. (a) Sub-image of group 1; (b) Deconvolution result by using its corresponding sub-PSF; (c) and (d) are the results reconstructed with the sub-PSFs of group 2 and group 3, respectively. Scale bar: 10 camera pixels.


4. Conclusion

In conclusion, we experimentally demonstrated a phase-diversity speckle imaging method, which allows non-invasive imaging of objects hidden behind a thin but highly complex scattering layer and jointly estimates the pupil function and the PSF of the highly scattering imaging system without any reference. Since our method is not based on the statistical average of large amounts of speckle grains, only a small region of the speckle pattern is sufficient, at the cost of acquiring multiple frames. The smallest usable sub-image dimension depends on the image of the hidden object, whose size can be roughly determined from the autocorrelation of the camera image, and on the amount of translation performed. A larger region certainly allows us to acquire a larger PSF, but it also means more phase values on the pupil to determine, which increases the computational complexity. Furthermore, while we only obtain the local PSF within a sub-region of the speckle pattern, the process can easily be parallelized over multiple sub-apertures, and a complete, high-resolution reconstruction of the scatterer’s topography could in principle be achieved, as in ptychography. For this proof of concept, we used a simple quasi-Newton algorithm to solve the optimization problem, which may be trapped in local minima and fail to converge. Slightly changing the available information (e.g. the number of frames of speckle patterns) or using random initial guesses are potential remedies. In addition, other, more advanced optimization algorithms are expected to perform more efficiently. To this end, we make the source code and experimental data freely available to the scientific community as Code 1 (Ref. [26]) and Dataset 1 (Ref. [27]).

Appendix A Generalized pupil function in a highly scattering imaging system

Figure 8 is a schematic of the impulse response of a highly scattering imaging system. Before defining the generalized pupil function of such a system, we first consider its impulse response, defined as the field h(u,v;ε,η) created by a point source (i.e. a δ function) located at coordinates (ε,η) in the object plane [22, 28]. Ignoring the time-dependent phase factor exp(jωt), considering the paraxial approximation, and dropping constant phase factors, the field distribution on the front surface of the scattering layer can be written:

$U_{s\_front} = \dfrac{1}{j\lambda z_1}\exp\left\{\dfrac{jk}{2z_1}\left[(x-\varepsilon)^2+(y-\eta)^2\right]\right\}$   (7)
where z1 and z2 respectively denote the object distance and the image distance. λ is the wavelength and k=2π/λ is the wavenumber. If we assume the scattering layer is very thin, the field distribution on the back of the scattering layer Us_back should be an element-wise multiplication of Us_front with an additional field introduced by the thin scattering layer:
$U_{s\_back} = U_{s\_front}\, P(x,y)$   (8)
where P(x,y) is the additional field introduced by the scattering layer, which can be expressed as:
$P(x,y) = |P(x,y)|\exp[j\varphi(x,y)]$   (9)
where |·| denotes the modulus operator and φ(x,y) is the unknown random phase function. The impulse response of the imaging system shown in Fig. 8 can therefore be calculated by Fresnel diffraction within the paraxial approximation:
$h(u,v;\varepsilon,\eta) = \dfrac{1}{j\lambda z_2}\iint U_{s\_back}\,\exp\left\{\dfrac{jk}{2z_2}\left[(u-x)^2+(v-y)^2\right]\right\}\mathrm{d}x\,\mathrm{d}y$   (10)
where the constant phase factors are dropped. Substituting Eqs. (7) and (8) into Eq. (10), we obtain the following expression:


Fig. 8 Impulse response of a highly scattering imaging system.


$h(u,v;\varepsilon,\eta) = -\dfrac{1}{\lambda^2 z_1 z_2}\exp\left[\dfrac{jk}{2z_1}(\varepsilon^2+\eta^2)\right]\exp\left[\dfrac{jk}{2z_2}(u^2+v^2)\right]$
$\times\iint P(x,y)\exp\left\{\dfrac{jk}{2}\left(\dfrac{1}{z_1}+\dfrac{1}{z_2}\right)(x^2+y^2)\right\}$
$\times\exp\left\{-jk\left[x\left(\dfrac{\varepsilon}{z_1}+\dfrac{u}{z_2}\right)+y\left(\dfrac{\eta}{z_1}+\dfrac{v}{z_2}\right)\right]\right\}\mathrm{d}x\,\mathrm{d}y$   (11)

If only the intensity distribution is of interest, the quadratic phase factor outside the integral can be discarded, and the impulse response can be further simplified as follows:

$h(u,v;\varepsilon,\eta) \propto \iint H(x,y)\exp\left\{-jk\left[x\left(\dfrac{\varepsilon}{z_1}+\dfrac{u}{z_2}\right)+y\left(\dfrac{\eta}{z_1}+\dfrac{v}{z_2}\right)\right]\right\}\mathrm{d}x\,\mathrm{d}y$   (12)

Equation (12) shows that the impulse response is the inverse Fourier transform of H(x,y), which is the generalized pupil function of the highly scattering imaging system and can be expressed as:

$H(x,y) = |P(x,y)|\exp\left\{\dfrac{jk}{2}\left(\dfrac{1}{z_1}+\dfrac{1}{z_2}\right)(x^2+y^2)+j\varphi(x,y)\right\} = |H(x,y)|\exp[j\phi(x,y)]$   (13)
where $|H(x,y)| = |P(x,y)|$ and $\phi(x,y) = \dfrac{k}{2}\left(\dfrac{1}{z_1}+\dfrac{1}{z_2}\right)(x^2+y^2)+\varphi(x,y)$. When the defocus is introduced as the diversity phase function, the computation of H(x,y) performed for the imaging distance z2 remains valid for any other distance. For each value of the defocus distance, H(x,y) becomes:
$H_n(x,y) = |H_n(x,y)|\exp\left\{\dfrac{jk}{2}\left(\dfrac{1}{z_1}+\dfrac{1}{z_2+n\Delta z}\right)(x^2+y^2)+j\varphi(x,y)\right\}$
$= |H_n(x,y)|\exp\left\{\dfrac{jk}{2}\left(\dfrac{1}{z_1}+\dfrac{1}{z_2}-\dfrac{n\Delta z}{z_2(z_2+n\Delta z)}\right)(x^2+y^2)+j\varphi(x,y)\right\}$
$= |H_n(x,y)|\exp\left\{\dfrac{jk}{2}\left(\dfrac{1}{z_1}+\dfrac{1}{z_2}\right)(x^2+y^2)+j\varphi(x,y)-\dfrac{jk}{2}\left[\dfrac{n\Delta z}{z_2(z_2+n\Delta z)}\right](x^2+y^2)\right\}$
$= |H_n(x,y)|\exp\{j[\phi(x,y)+\theta_n(x,y)]\}$   (14)
where n is the index of the diversity image, varying from 0 to N−1, and N is the number of diversity images. nΔz denotes the defocus distance of the n-th acquisition when the diversity images are captured with an equidistant interval Δz, and $\theta_n(x,y) = -\dfrac{n\Delta z\,\pi}{\lambda z_2(z_2+n\Delta z)}(x^2+y^2)$ is the corresponding defocus phase function. If we use the symbol f to represent the position in the plane of the scattering layer, instead of the original coordinates (x,y), Eq. (14) takes the same form as Eq. (2), i.e. $H_n(f) = |H_n(f)|\exp\{i[\phi(f)+\theta_n(f)]\}$.

Funding

National Natural Science Foundation of China (NSFC) (61575154); European Research Council (ERC) (278025 and 724473); China Scholarship Council (CSC) (201506960026).

Acknowledgments

The authors would like to thank Xueen Wang for the insightful discussions, and Huijuan Li for help with experiments. S. G. is a member of the Institut Universitaire de France.

References and links

1. J. W. Goodman, Speckle Phenomena in Optics: Theory and Applications (Roberts and Company Publishers, 2007).

2. I. M. Vellekoop and A. P. Mosk, “Focusing coherent light through opaque strongly scattering media,” Opt. Lett. 32(16), 2309–2311 (2007). [PubMed]  

3. I. M. Vellekoop and C. M. Aegerter, “Scattered light fluorescence microscopy: imaging through turbid layers,” Opt. Lett. 35(8), 1245–1247 (2010). [PubMed]  

4. S. M. Popoff, G. Lerosey, R. Carminati, M. Fink, A. C. Boccara, and S. Gigan, “Measuring the transmission matrix in optics: an approach to the study and control of light propagation in disordered media,” Phys. Rev. Lett. 104(10), 100601 (2010). [PubMed]  

5. A. P. Mosk, A. Lagendijk, G. Lerosey, and M. Fink, “Controlling waves in space and time for imaging and focusing in complex media,” Nat. Photonics 6, 283–292 (2012).

6. O. Katz, E. Small, and Y. Silberberg, “Looking around corners and through thin turbid layers in real time with scattered incoherent light,” Nat. Photonics 6, 549–553 (2012).

7. T. Chaigne, J. Gateau, O. Katz, E. Bossy, and S. Gigan, “Light focusing and two-dimensional imaging through scattering media using the photoacoustic transmission matrix with an ultrasound array,” Opt. Lett. 39(9), 2664–2667 (2014). [PubMed]  

8. I. M. Vellekoop, “Feedback-based wavefront shaping,” Opt. Express 23(9), 12189–12206 (2015). [PubMed]  

9. P. Lai, L. Wang, J. W. Tay, and L. V. Wang, “Photoacoustically guided wavefront shaping for enhanced optical focusing in scattering media,” Nat. Photonics 9(2), 126–132 (2015). [PubMed]  

10. R. Horstmeyer, H. Ruan, and C. Yang, “Guidestar-assisted wavefront-shaping methods for focusing light into biological tissue,” Nat. Photonics 9, 563–571 (2015). [PubMed]  

11. S. Rotter and S. Gigan, “Light fields in complex media: Mesoscopic scattering meets wave control,” Rev. Mod. Phys. 89, 015005 (2017).

12. J. Yoon, K. Lee, J. Park, and Y. Park, “Measuring optical transmission matrices by wavefront shaping,” Opt. Express 23(8), 10158–10167 (2015). [PubMed]  

13. J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491(7423), 232–234 (2012). [PubMed]  

14. S. Feng, C. Kane, P. A. Lee, and A. D. Stone, “Correlations and fluctuations of coherent wave transmission through disordered media,” Phys. Rev. Lett. 61(7), 834–837 (1988). [PubMed]  

15. I. Freund, M. Rosenbluh, and S. Feng, “Memory effects in propagation of optical waves through disordered media,” Phys. Rev. Lett. 61(20), 2328–2331 (1988). [PubMed]  

16. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21(15), 2758–2769 (1982). [PubMed]  

17. O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8, 784–790 (2014).

18. T. Wu, O. Katz, X. Shao, and S. Gigan, “Single-shot diffraction-limited imaging through scattering layers via bispectrum analysis,” Opt. Lett. 41(21), 5003–5006 (2016). [PubMed]  

19. A. W. Lohmann, G. Weigelt, and B. Wirnitzer, “Speckle masking in astronomy: triple correlation theory and applications,” Appl. Opt. 22(24), 4028 (1983). [PubMed]  

20. E. Edrei and G. Scarcelli, “Memory-effect based deconvolution microscopy for super-resolution imaging through scattering media,” Sci. Rep. 6, 33558 (2016). [PubMed]  

21. R. G. Paxman, T. J. Schulz, and J. R. Fienup, “Joint estimation of object and aberrations by using phase diversity,” J. Opt. Soc. Am. A 9, 1072–1085 (1992).

22. J. W. Goodman, Introduction to Fourier Optics (Roberts and Company Publishers, 2005).

23. C. R. Vogel, T. Chan, and R. Plemmons, “Fast algorithms for phase diversity-based blind deconvolution,” Adaptive Optical System Technologies, Parts 1 and 2 3353, 994–1005 (1998).

24. J. Nocedal and S. J. Wright, Numerical Optimization, 2nd ed. (Springer, 2006).

25. O. Von der Lühe, “Speckle imaging of solar small scale structure I. Methods,” Astron. Astrophys. 268, 374–390 (1993).

26. T. Wu, “codes.zip,” figshare (2017), https://figshare.com/s/ac47b433be00ef4ef893.

27. T. Wu, “data.zip,” figshare (2017), https://figshare.com/s/4969d19518a7680d07ec.

28. A. K. Singh, D. N. Naik, G. Pedrini, M. Takeda, and W. Osten, “Exploiting scattering media for exploring 3D objects,” Light Sci. Appl. 6, e16219 (2017).

Supplementary Material (2)

Code 1: Matlab codes of phase-diversity
Dataset 1: Experimental data of phase-diversity
