
Ptychographical intensity interferometry imaging with incoherent light

Open Access

Abstract

Intensity interferometry (II), the landmark of the second-order correlation, enables very-long-baseline observations at optical wavelengths, providing imaging with microarcsecond resolution. However, the unreliability of the traditional phase retrieval algorithms required to reconstruct images in II has hindered its development. We here develop a method that circumvents this challenge and enables II to reliably image complex-shaped objects. Instead of measuring the whole object, we measure it part by part with a probe moving in a ptychographic way: adjacent parts overlap with each other. A corresponding algorithm is developed to reliably and rapidly recover the object in a few iterations. Moreover, we propose an approach that removes the requirement for precise knowledge of the probe, providing an error tolerance of more than 50% for the location of the probe in our experiments. Furthermore, we extend II to short-distance scenarios, providing a lensless imaging method with incoherent light and paving the way toward applications in X-ray imaging.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

In the 1950s, Hanbury Brown and Twiss (HBT) invented II and discovered the second-order correlations of incoherent light, now named the HBT effect [1]. This discovery overturned the traditional view of the “incoherence” of thermal light and immediately provoked a heated debate [2], which led to an intensive and extensive understanding of the quantum nature of thermal light [3, 4]. HBT’s work had a profound impact on the development of quantum optics [5]. Meanwhile, the second-order correlation has heavily influenced imaging, such as ghost imaging [6, 7]. Instead of measuring the phase relation of light fields, the second-order correlation measures the variation of light intensities and seeks the correlation of the intensity fluctuations [8], which brings many advantages including high resolution, the use of incoherent sources, and insensitivity to turbulence. Thus, second-order correlation imaging with incoherent sources has attracted a lot of attention [9–13].

II is a landmark of the second-order correlation system. It consists of a set of telescopes far from each other. Each telescope independently records the time-varying intensity of an object temporally and spatially, and electronically (rather than optically) conveys the intensity information to a center where the correlations of the intensities are computed. This feature dramatically simplifies the detection process and reduces the cost [14]. Moreover, II is insensitive to both atmospheric turbulence and telescopic optical imperfections. These advantages enable very-long-baseline (kilometers) observations at short optical wavelengths with microarcsecond resolution [15–18]. From 1963 to 1974, II was used to measure the angular diameters of 32 stars, with a resolution as high as 0.4 milliarcseconds [1, 19, 20]. Recently, an international project called the Cherenkov Telescope Array has been attempting to exploit air Cherenkov telescopes to construct kilometer-scale II baselines, in order to acquire a resolution approaching 30 microarcseconds [21–23].

Since II can only obtain the spatial Fourier magnitude of an object’s intensity distribution, it requires phase retrieval algorithms to recover the missing Fourier phase [24]. However, the currently employed algorithms, such as Error Reduction (ER) and Hybrid Input-Output (HIO), are unreliable and time-consuming, preventing the recovery of complex-shaped objects [25–27]. This challenge has become a major obstacle to the development of intensity interferometry imaging. Inspired by the ptychographical iterative engine (PIE) [28, 29], we here introduce a ptychography-type measurement into the second-order correlation and create a new type of imaging method with a novel algorithm, which we name ptychographical intensity interferometry imaging (PIII). Instead of detecting the whole object, we measure the object part by part in a ptychographic way in which adjacent parts highly overlap [30]. Along with the proposed algorithm, we circumvent the challenge and enable II to stably and rapidly image the fine structure of objects. Moreover, taking advantage of the second-order correlation of incoherent light, PIII removes the requirement for precise knowledge of the illumination function, such as the illumination’s shape, position, and amplitude distribution [31–34]. For instance, in PIE, which usually works with coherent diffractive imaging (CDI), the probe function needs to be pre-determined experimentally or post-calculated with a proper algorithm such as extended PIE (ePIE) [32, 34]. In contrast, in PIII the probe is merely an aperture function; we only need to specify a size that is roughly larger than its actual value. The algorithm then recovers high-fidelity images even when the estimate of the illumination’s location has a 50% deviation. This feature makes the realization of this method practical and simple. Furthermore, based on this technique, we propose a lensless imaging system with incoherent light at short distances, which can be applied to X-ray imaging of complex-shaped objects with high resolution, extending the application scope of II.

2. Principles

2.1. Physical mechanism

Figure 1 depicts a scenario potentially applicable to PIII, which images a nebula with a telescope array. The telescopes simultaneously detect the same section of the nebula (the lowest yellow dashed circle). After measuring the second-order correlation for the current section, the telescopes change their angular areas and detect the next section. Usually a telescope provides a small field of view of several arcseconds. This procedure is repeated until all the sections have been detected. Here, adjacent sections overlap with each other, which is a ptychography-type measurement.


Fig. 1 A nebula is imaged with a telescope array on the ground. All the telescopes together measure the nebula section by section. The yellow dashed circles indicate three overlapping sections.


On each section, the second-order correlation is calculated from the product of two intensity fluctuations detected at two telescopes:

$$\Delta G^{(2)}(r_1, r_2) \equiv \langle \Delta I_1(t, r_1)\,\Delta I_2(t+\tau, r_2)\rangle = C\,|\gamma_{12}|^{2}, \tag{1}$$
where ΔI1(t, r1) = I1(t, r1) − Ī1 and ΔI2(t + τ, r2) = I2(t + τ, r2) − Ī2 are the intensity fluctuations at the two telescopes, respectively. τ must be much shorter than the coherence time of the light, i.e., the measurement of the intensities must be done within such a short time window. 〈…〉 represents the ensemble average of the short-time measurements. C is a normalization coefficient. γ12 is the mutual correlation function of the E-fields at the two locations. According to the van Cittert-Zernike theorem [35, 36],
$$\gamma_{12}(\Delta r) \equiv \langle E_1(r_1) E_2^{*}(r_2)\rangle = e^{i\phi}\int \Gamma(\rho)\,\exp\!\left[-i\frac{2\pi}{\lambda z}\,(\Delta r \cdot \rho)\right] d\rho, \tag{2}$$
where Δr = r2 − r1. Γ(ρ) = |O(ρ)|² is the intensity distribution function of the object, and O(ρ) is the object field distribution function. Thus, we can obtain the Fourier magnitude of the object’s modulus square from the second-order correlation,
$$\left|\mathcal{F}_{\Delta r}\{\Gamma(\rho)\}\right|^{2} = |\gamma_{12}|^{2} = \Delta G^{(2)}(\Delta r)/C \equiv \Delta g^{(2)}(\Delta r), \tag{3}$$
where ℱk(x) stands for a Fourier transform from the x domain to the k domain. Δg(2)(Δr) is the normalized second-order correlation. With incoherent light, the second-order correlation is thus able to yield the Fourier-transform magnitude of an object’s intensity distribution. This is equivalent to illuminating an amplitude object with a coherent wave and obtaining the Fourier magnitude on the far-field plane. Note that this process does not work for complex-valued objects (phase objects), since it recovers Γ(ρ) instead of the object function itself.

Unfortunately, even for Γ(ρ), the Fourier phase is lost during the measurements, and the image cannot be directly recovered by an inverse Fourier transform. Phase retrieval algorithms are required to reconstruct the missing phase. However, noise and imprecise measurements corrupt the Fourier magnitude and make the phase retrieval algorithms struggle to converge to the correct result [37, 38]. We therefore introduce the ptychography-type measurement and obtain a set of Fourier magnitudes from different sections, {Aj} with j = 1…Np (Np is the total number of sections). The j-th one is

$$A_j(\Delta r) = \left|\mathcal{F}_{\Delta r}\{\Gamma(\rho) P_j(\rho)\}\right| = \sqrt{\Delta g_j^{(2)}(\Delta r)}. \tag{4}$$

In PIII, Pj(ρ) is merely an aperture function that specifies the location and area of the j-th section (see the discussion for details). The adjacent sections must overlap, which is the key to connecting the different constraints together and providing redundant information. The more constraints for the same object, the more robust the convergence of the phase retrieval, because redundant constraints effectively mitigate the problems of multiple solutions and convergence to local minima. Based on the above theory, we redesign an image reconstruction algorithm. The details are given in the next subsection.

2.2. Image reconstruction algorithm

Pj(ρ) is actually an aperture function, which is equivalent to cutting out a small sub-object, i.e., Γ(ρ)Pj(ρ). Therefore, in the algorithm, Pj(ρ) acts as a support similar to that in the ER algorithm. The realness and non-negativity constraints are still applied, since PIII only works for amplitude objects. As shown below, the PIII algorithm updates the sub-objects in turn within each iteration in a similar way as ER. Figure 2 is the flowchart.


Fig. 2 The flowchart of the algorithm. The dashed rectangle indicates the small loop that calculates the phases of the different sections. Outside the rectangle is the big iterative loop. The exit condition is whether the maximum number of iterations is reached or a satisfactory image is obtained.


  1. Seed the algorithm with an initial guess for the whole image, Γ(ρ). Then start the iterative calculation from the first big iteration (k = 1) and the first section of the object (j = 1).
  2. Calculate the Fourier transform of the j-th section: Fk,j = ℱ{Γ(ρ)Pj(ρ)}.
  3. Replace the magnitude of the calculated Fourier transform with the measured one:
     $$F'_{k,j} = A_j \frac{F_{k,j}}{|F_{k,j}|}. \tag{5}$$
  4. Compute the inverse Fourier transform to obtain the image of the j-th section: Θk,j(ρ) = ℱ−1{F′k,j}.
  5. Apply the realness and non-negativity constraints to Θk,j(ρ) to yield Θ′k,j(ρ):
     $$\Theta'_{k,j}(\rho) = \begin{cases} \mathrm{Re}\{\Theta_{k,j}(\rho)\}, & P_j(\rho)=1 \,\cap\, \left[\mathrm{Re}\{\Theta_{k,j}(\rho)\}\ge 0\right] \\ 0, & P_j(\rho)=0 \,\cup\, \left[\mathrm{Re}\{\Theta_{k,j}(\rho)\} < 0\right], \end{cases} \tag{6}$$
     where Re{·} takes the real part. When ρ is within the region of the j-th section, Pj(ρ) = 1; otherwise Pj(ρ) = 0.
  6. Update Γ(ρ) with Θ′k,j(ρ):
     $$\Gamma(\rho) = \Gamma(\rho)\left[1 - P_j(\rho)\right] + \Theta'_{k,j}(\rho). \tag{7}$$
  7. If j is the last section, go to step 8; otherwise go to step 2 and compute the next section, i.e., Θk,j+1(ρ). Here, we use rebounding scanning: in the iterations with odd k, the scan runs from 1 to Np; in the next iteration (even k), it rebounds from Np back to 1.
  8. After all the sections have been gone through, start the next big iteration (k → k + 1) and go to step 2.
  9. If a satisfactory image is obtained or the maximum number of iterations is reached, exit the calculation.

Here,

$$P_j(\rho) \equiv P(\rho - R_j) = \begin{cases} 1, & \text{if } |\rho - R_j| \le a \\ 0, & \text{if } |\rho - R_j| > a, \end{cases} \tag{8}$$
where Rj stands for the center, and a is the size of the aperture. Before the calculation, we roughly guess the location of each section. The guess does not need to be precise; instead we estimate a number that is larger than the actual one. In the experiments, we will show that even when the guessed values for the locations have a 50% deviation, the algorithm can still steadily recover the image. This makes the method quite practical, because realistically it is difficult to determine precisely where the light comes from. A compact code sketch of this reconstruction loop is given below.
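The following Python sketch mirrors steps 1–9. It is a minimal illustration, assuming that every measured magnitude Aj and every (loose) support Pj have already been sampled on one common grid with the same FFT convention; the function and variable names are illustrative, not from a released code base.

```python
import numpy as np

def piii_reconstruct(A, supports, n_big_iters=10, seed=0):
    """Minimal sketch of the PIII algorithm (steps 1-9 above).

    A        : list of measured Fourier magnitudes A_j (2D arrays, unshifted FFT layout)
    supports : list of binary aperture functions P_j on the same grid
    Returns the recovered intensity distribution Gamma(rho).
    """
    rng = np.random.default_rng(seed)
    gamma = rng.random(A[0].shape)               # step 1: random initial guess
    n_probes = len(A)

    for k in range(n_big_iters):                 # big iterations (steps 8-9)
        # step 7: "rebounding" scan -- reverse the section order every other big iteration
        order = range(n_probes) if k % 2 == 0 else reversed(range(n_probes))
        for j in order:
            P = supports[j]
            F = np.fft.fft2(gamma * P)           # step 2: Fourier transform of the j-th section
            F = A[j] * np.exp(1j * np.angle(F))  # step 3: impose the measured magnitude, Eq. (5)
            theta = np.fft.ifft2(F)              # step 4: back to the object domain
            # step 5: realness and non-negativity inside the support, zero elsewhere, Eq. (6)
            theta = np.where((P > 0) & (theta.real >= 0), theta.real, 0.0)
            gamma = gamma * (1 - P) + theta      # step 6: update the estimate, Eq. (7)
    return gamma
```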

3. Simulation and experiment

In 1957, HBT revealed that the second-order correlation results from the intensity fluctuations of thermal light [8]. The intensity of thermal light always fluctuates temporally and spatially. Temporally, its value at a location randomly varies from one moment to another, where ‘a moment’ is a time period much shorter than the coherence time of the light. Spatially, if we capture the intensity distribution within ‘a moment’, we observe a stationary speckle-like pattern (the intensities change sharply from one location to another). Roughly, we may say that “incoherence” consists of a set of temporally and spatially random speckle patterns, but the correlation of the speckles yields the HBT effect. However, the coherence time of thermal light is on the order of femtoseconds, and it is hard for a detector to follow such a rapidly varying intensity. In 1964, it was found that a laser beam scattered by a moving ground glass can perfectly mimic the intensity fluctuation of thermal light [39], but its coherence time can be artificially slowed down to milliseconds or even longer, allowing slow-response detectors (such as CCD or CMOS cameras) to sense the intensity fluctuation. Thus, with pseudo-thermal light, we can perform a close-to-perfect II experiment [40]. Since 1964, pseudo-thermal light has become a standard and widely used source in quantum optics and imaging [41, 42].

This article aims to demonstrate the principle and preliminary experiments of PIII. We therefore employ a pseudo-thermal source and a CMOS camera for the experiments. As shown in Fig. 3, the experimental setup mimics the scenario described in Fig. 1. The pseudo-thermal light illuminating a hollow object simulates a light-emitting object. The camera simulates the telescope array: each pixel mimics a telescope. Since the pixels of the camera do not have the ability to adjust their field of view like telescopes, we use a light cone to let all the pixels simultaneously detect a small section of the object. The cone is made of thick black paper. Its big tail lies on a fixed rod and is loosely tied to the rod with a rubber band. Its small head is attached to a pinhole mounted on a motorized stage. Since the cone’s body can deform a little and the rubber band allows some movement, the small head can scan horizontally or vertically over a small distance (but one big enough to cover the whole object in our experiments). The function of the cone is thus similar to scanning the object with an aperture. Meanwhile, it prevents most of the environment light from entering the camera.


Fig. 3 The object is made of hollow letters of “51201816”. The pseudo-thermal source is built with a laser beam scattered by a rotating ground glass (RGG). The scattered light is collimated by a lens and then projected onto the object, mimicking incoherent light illuminating the object. A light cone is placed in front of the camera as a section selector. Its head is 80 mm away from the object. Its body is 150 mm long.


The wavelength of the laser beam is 457 nm (DPSS 85-BLS-601). Its beam size is ~10 mm. The pseudo-thermal light is collimated by a lens with a focal length of 300 mm. The distance between the ground glass and the collimating lens is 300 mm. The ground glass (220 grit) rotates at a speed of 1 rpm (revolution per minute), resulting in a coherence time of 100 ms. The distance between the lens and the object is 100 mm. The CMOS camera (Infinity 3-1M), with 1392 × 1040 pixels, is located 300 mm away from the object. It captures the speckle patterns at a 20 ms exposure time. The size of a pixel is 6.5 µm. The light cone, mounted on a motorized stage, is used to scan 16 sections of the object.

3.1. Simulation

First of all, we carry out a numerical simulation to investigate how well the imaging mechanism and the proposed algorithm work. In the simulation, we simply model the light cone as an aperture function, P(ρ − Rj), which defines a circular aperture centered at Rj with a diameter of a. Thus, with the aperture function, we can precisely select the section of the object to be measured in each ptychographic run.

The pseudo-thermal light is modeled as electromagnetic fields with spatially random phases. For the j-th sub-object, the E-field propagates onto the object plane and passes through the object and the j-th probe, thereby carrying the information of the sub-object. On the detection plane, the intensity of the transmitted field forms a speckle pattern. We use 100 sets of random phases and generate 100 different speckle patterns. We calculate the autocorrelation of each speckle pattern and then take their average. After subtracting the background from the average autocorrelation, we also apply a rectangular window to filter out the noise. We then obtain the second-order correlation of the sub-object (see Eq. (1) and Eq. (3)), which yields Aj, the j-th Fourier magnitude. With different aperture functions, we obtain the whole set {Aj} with j = 1…Np. By applying {Aj} to our algorithm, we recover the whole image, as shown in Fig. 4(a). Note that the initial guess of the object is an array of random numbers.
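For reference, the speckle-generation and correlation step of this simulation can be sketched as follows. It follows Eqs. (1)–(4) only up to normalization and replaces the rectangular window with a crude mean-value background subtraction; all names are illustrative.

```python
import numpy as np

def estimate_fourier_magnitude(sub_object, n_frames=100, seed=0):
    """Estimate A_j = |F{Gamma * P_j}| from simulated pseudo-thermal speckle patterns.

    sub_object : 2D array Gamma(rho) * P_j(rho), the intensity of the j-th section
    Returns an (unnormalized) estimate of A_j in unshifted FFT layout.
    """
    rng = np.random.default_rng(seed)
    acc = np.zeros(sub_object.shape)
    for _ in range(n_frames):
        # incoherent emission: field amplitude sqrt(Gamma*P_j) with a spatially random phase
        field = np.sqrt(sub_object) * np.exp(2j * np.pi * rng.random(sub_object.shape))
        speckle = np.abs(np.fft.fft2(field)) ** 2          # far-field speckle intensity
        # circular autocorrelation of the speckle via the Wiener-Khinchin relation
        acc += np.fft.ifft2(np.abs(np.fft.fft2(speckle)) ** 2).real
    acc /= n_frames
    acc -= acc.mean()        # crude subtraction of the flat background term of the correlation
    np.clip(acc, 0, None, out=acc)
    return np.sqrt(acc)      # A_j is the square root of the background-subtracted correlation
```

The resulting set {Aj} can then be passed, together with the corresponding supports, to the reconstruction sketch given in Section 2.2.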


Fig. 4 (a) The recovered image with PIII within 10 iterations; (b) the reconstructed image after 1000 ER iterations; (c) the reconstructed image after 1000 HIO iterations; (d), (e) and (f) are the corresponding relative residuals vs. iterations for PIII, ER and HIO, respectively. Scale bar: 200 pixels.


To investigate the performance of PIII, we compare it with the traditional imaging method using the ER and HIO algorithms. The simulations are similar to those of PIII, but the speckle patterns are produced from the whole object rather than from sub-objects. 100 sets of random phases are used to generate 100 different speckle patterns. Without the ptychographic process, it is difficult for ER and HIO to reconstruct the images, as shown in Fig. 4(b) and Fig. 4(c). Neither ER nor HIO shows any sign of converging to a correct image after 1000 iterations (see Fig. 4(e) and Fig. 4(f)). In contrast, PIII converges to a clear image within 6 big iterations (equivalent to 96 small iterations, since 16 sub-objects are used in each big iteration), as can be seen in Fig. 4(d).
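As a point of comparison, a plain HIO baseline operating on a single full-object Fourier magnitude can be sketched as below; the feedback parameter β = 0.9 is a typical textbook value and is not taken from this work.

```python
import numpy as np

def hio(A, support, n_iters=1000, beta=0.9, seed=0):
    """Baseline HIO loop on one full-object Fourier magnitude A.

    support : boolean array marking where the object may be non-zero
    """
    rng = np.random.default_rng(seed)
    g = rng.random(A.shape)
    for _ in range(n_iters):
        G = np.fft.fft2(g)
        G = A * np.exp(1j * np.angle(G))        # impose the measured Fourier magnitude
        g_new = np.fft.ifft2(G).real
        # pixels violating the support / non-negativity constraints get the HIO feedback update
        bad = (~support) | (g_new < 0)
        g = np.where(bad, g - beta * g_new, g_new)
    return g
```

Replacing the feedback update with simple zeroing of the violating pixels turns this loop into ER.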

3.2. Experiment

We then perform the experiment with the setup depicted in Fig. 3. With the stage, the head is shifted to 16 locations in steps of 0.8 mm, dividing the object into 16 sub-objects. At each location, we capture 100 speckle patterns, one of which is shown in Fig. 5(a). The calculation of the autocorrelation is the same as that in the simulation. The result is shown in Fig. 5(b). By using the proposed algorithm, we obtain the image in 9 iterations, as shown in Fig. 6. As shown in Fig. 6(c), the convergence is stable without oscillation. Note that the initial guess of the object is an array of random numbers.


Fig. 5 (a) One of the captured speckle patterns for the first probe. (b) The average autocorrelation function calculated from 100 speckles for the first probe. Scale bar: 650 µm.



Fig. 6 (a) The recovered images after nine big iterations. (b) The corresponding relative residual vs. iteration (only the first nine iterations). (c) The variance of the relative residual over 1000 iterations. Scale bar: 500 µm.


In realistic applications, especially in long-distance scenarios, it is difficult to precisely determine the location of the probe. Inaccurate knowledge would result in blurred images or even unacceptable reconstructions. As shown in Fig. 7(a), assuming that the probes are evenly located and denoting the distance between two adjacent probes as ΔR = Rj − Rj−1, the location of the j-th probe is Rj = R1 + (j − 1)ΔR. If our estimate is ΔR′ with an absolute deviation h = ΔR′ − ΔR, the probe will be inaccurately located at R′j = Rj + (j − 1)h (the dashed circles in Fig. 7(b)). Let us define a shift deviation in percentage as sd = h/ΔR × 100%. In the iterative calculation, if we intentionally add a shift deviation of 25%, the algorithm easily fails to recover a clear image (Fig. 8(a)).


Fig. 7 (a) The actual position and size of the (j − 1)-th and j-th probes. The diameter of the probes is a. (b) The gray dashed circles indicate the inaccurate estimates of the (j − 1)-th and j-th probes. The deviation of the j-th probe is δj = (j − 1)∗h. The red dotted circles present the two loose supports, which are enlarged to a diameter of a′.



Fig. 8 Recovered images with different loose rates when the shift deviation is 25%. (a) loose rate=0. (b) loose rate=25%. All the reconstructions take 20 iterations.


We found that, if we intentionally use a larger size rather than the actual size for the supports in the calculation, i.e., a′ = a + ΔR·rl (the red dotted circles in Fig. 7(b)), this difficulty can be overcome. We name this process “loose support”, where rl defines the “loose rate”. Fig. 8(b) shows that with a 25% loose rate, the algorithm can successfully retrieve the image under the 25% shift deviation. Even for a shift deviation of 50%, PIII can still retrieve a good result after introducing a 75% loose rate, although it takes relatively more iterations (see Fig. 9). Since the deviation of the j-th probe accumulates, i.e., R′j − Rj = (j − 1)h, the needed loose rate increases relatively more quickly than the shift deviation does. Note that, in the calculation, a is 280 pixels and ΔR is only 20 pixels, meaning that the loose support is a small expansion of the size of the probes. This approach not only provides correct results, but also reduces the experimental requirements.
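A sketch of how the loose supports can be generated for a horizontal scan is given below; R1, the estimated step ΔR′ and the loose rate rl are the quantities defined above, while the grid layout and function name are illustrative.

```python
import numpy as np

def loose_supports(shape, R1, dR_est, a, loose_rate, n_probes, row_center=None):
    """Build loose supports P_j for a horizontal probe scan.

    The j-th support is a disc of size a' = a + dR_est * loose_rate (with a used as the
    radius, in the sense of Eq. (8)), centred at the estimated position R1 + j * dR_est.
    """
    yy, xx = np.indices(shape)
    cy = shape[0] / 2 if row_center is None else row_center
    a_loose = a + dR_est * loose_rate            # enlarged ("loose") support size
    supports = []
    for j in range(n_probes):
        cx = R1 + j * dR_est                     # estimated centre of the j-th probe
        supports.append((np.hypot(xx - cx, yy - cy) <= a_loose).astype(float))
    return supports
```

With a = 280 pixels, ΔR′ = 20 pixels and rl = 0.75, the supports grow by only 15 pixels, consistent with the small expansion mentioned above.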


Fig. 9 Recovered images with different loose rates when the shift deviation is 50%. (a) loose rate=0. (b) loose rate=25%. (c) loose rate=75%. All the reconstructions take 20 iterations.


4. Applications in short distance

Besides applications in long-distance imaging, PIII also has promising applications in short-distance scenarios. Similar to CDI, II is a natural lensless imaging system. II works not only with incoherent sources, but also with most coherent sources, since in most situations it is easy to turn a coherent source into an incoherent one by scrambling the phase of the light with a diffuser. In contrast, it is difficult to change a source from incoherent to coherent. In this sense, if we are concerned with amplitude objects (rather than complex-valued objects), II has a wider application scope than CDI. In the following, we use the 457 nm laser to demonstrate a PIII experiment for short-distance applications such as X-ray imaging.

The schematic is shown in Fig. 10. Differently from Fig. 3, the probe is replaced with a circular aperture plate placed right in front of the object, which makes it much easier to determine the size and location of the probe. The size of the aperture in the probe is 3 mm. We move the probe in steps of 0.32 mm. The lens is also removed. The distance from the ground glass to the probe is adjusted to 150 mm. The distance from the object to the camera is 200 mm. Thus, the imaging resolution is 0.457 µm × 200 mm / 3 mm ≈ 30 µm. Besides the object “51201816”, we also tested a USAF 1951 resolution chart. The recovered images are shown in Figs. 11(a) and 11(b), both of which are obtained in 9 iterations. In Fig. 11(b), element 6 of group 3 of the resolution chart can be resolved, corresponding to a resolution of 35.08 µm, consistent with the theoretical estimate.
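The quoted resolution follows from the usual diffraction estimate λz/D; as a quick numerical check (values taken from the setup above):

```python
wavelength = 457e-9   # m, laser wavelength
z = 200e-3            # m, object-to-camera distance
aperture = 3e-3       # m, probe (aperture) diameter
print(wavelength * z / aperture * 1e6)   # ~30.5 micrometres, i.e. the ~30 um quoted above
```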


Fig. 10 The schematic of the short-distance experimental setup. The probe is a circular aperture with a diameter of 3 mm.



Fig. 11 (a) The recovered image of “51201816” after 9 PIII iterations. Scale bar: 500 µm. (b) The recovered image of the resolution chart after 9 PIII iterations. The left column is of group 2, and the right column is of group 3. Scale bar: 100 µm.


5. Discussion

Actually, PIII differs from PIE both in mechanism and in algorithm. “Ptychographical” in our method rather indicates the ptychography-type measurement. PIE is based on first-order interference, where the probe information is critical to the reconstruction. The amplitude and phase distributions have to be accurately determined, which may be pre-calibrated or post-calculated using algorithms such as ePIE [32, 43], since a small phase change of the probe may significantly change the diffraction pattern. In contrast, PIII is based on the second-order correlation. The phase and amplitude distribution of the field transmitted by the object varies temporally and spatially. Thus, the probe merely acts as a support that specifies the area of a sub-object. In the algorithm, we do not need to precisely determine the position, size or shape of the probe, but just use a slightly larger support to replace it, which is similar to HIO and ER. This is the mechanism of the “loose support”.

To investigate the details of the mutual coherence function, we rewrite it as

$$\langle E(r_1) E^{*}(r_2)\rangle = \frac{1}{\lambda^{2} z^{2}}\, e^{\frac{ik}{2z}(r_1^{2}-r_2^{2})} \iint \langle T(\rho) T^{*}(\rho')\rangle\, e^{\frac{ik}{2z}(\rho^{2}-\rho'^{2})}\, e^{-\frac{ik}{z}(r_1\cdot\rho - r_2\cdot\rho')}\, d\rho\, d\rho', \tag{9}$$
where T(ρ) = O(ρ)U(ρ) is the field transmitted by the object, and U(ρ) is a probe function containing the random illumination phases. Theoretically, the ensemble average results in a Dirac delta function if we ideally assume the grains of the object are infinitely small, i.e., 〈T(ρ)T*(ρ′)〉 = |T(ρ)|²δ(ρ − ρ′) = Γ(ρ)P(ρ)δ(ρ − ρ′). So the spatial variation of the random probe illumination is averaged out, and P(ρ) works as an aperture function. Consequently, the terms related to Fresnel diffraction, exp[ik(r1² − r2²)/2z] and exp[ik(ρ² − ρ′²)/2z], cancel out (the latter via the delta function, the former when the modulus squared is taken), and eventually we obtain
$$|\gamma_{12}|^{2} \propto \left|\int \Gamma(\rho) P(\rho)\, e^{-\frac{ik}{z}\,\rho\cdot(r_1 - r_2)}\, d\rho\right|^{2}. \tag{10}$$
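For completeness, the intermediate step between Eq. (9) and Eq. (10) under the delta-function assumption can be written out as follows (our expansion of the above, not reproduced from the original derivation):

```latex
% Substitute <T(rho) T*(rho')> = Gamma(rho) P(rho) delta(rho - rho') into Eq. (9);
% the delta function collapses the rho' integral and removes the quadratic phase in rho:
\begin{aligned}
\langle E(r_1)E^{*}(r_2)\rangle
  &= \frac{1}{\lambda^{2}z^{2}}\, e^{\frac{ik}{2z}(r_1^{2}-r_2^{2})}
     \int \Gamma(\rho)P(\rho)\, e^{\frac{ik}{2z}(\rho^{2}-\rho^{2})}\,
     e^{-\frac{ik}{z}\rho\cdot(r_1-r_2)}\, d\rho \\
  &= \frac{1}{\lambda^{2}z^{2}}\, e^{\frac{ik}{2z}(r_1^{2}-r_2^{2})}
     \int \Gamma(\rho)P(\rho)\, e^{-\frac{ik}{z}\rho\cdot(r_1-r_2)}\, d\rho .
\end{aligned}
```

Taking the modulus squared removes the remaining Fresnel prefactor and the constant scale, leaving Eq. (10).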

Therefore, the second-order correlation gives the same Fourier magnitude pattern in both the near field and the far field [44]. Thus, we are able to move the camera closer to the object to increase the detection sensitivity. Above all, the second-order correlation provides a simple way to obtain the Fourier magnitude of an amplitude object. We would like to mention that our method cannot recover the phase distribution of a complex-valued object, which limits its applications.

On the other hand, the pixels of the camera must be fine enough to clearly resolve the shape of the speckles on the detection plane. Meanwhile, the number of speckles must be large enough to reach a sufficient ensemble average, which can be realized either by increasing the number of sampled speckle patterns (at the cost of detection time) or by using more pixels distributed over a larger area. The good thing is that the pixels can be sparsely distributed on the detection plane [45]. If the number of pixels is sufficiently large, a single shot is able to resolve the Fourier magnitude of the object’s intensity distribution, saving a lot of detection time.

By introducing the ptychography-type measurement and the corresponding algorithm, PIII gains the ability to reliably and efficiently recover a complex-shaped object that is inaccessible to traditional intensity interferometry imaging. By utilizing the loose support approach, PIII removes the requirement of precise knowledge of the probe’s location and size, with an error tolerance of more than 50% (defined in terms of shift deviation). Recently, incoherent light has become an attractive source for imaging, such as ghost imaging with X-rays [9–11]. We can expect that PIII will be another promising lensless imaging method.

Funding

National Natural Science Foundation of China (NSFC) (11503020); 111 Project of China (B14040); National Basic Research Program of China (973 Program) (Grant No. 2015CB654602).

References and links

1. R. H. Brown and R. Q. Twiss, “A test of a new type of stellar interferometer on Sirius,” Nature 178, 1046–1048 (1956). [CrossRef]  

2. E. M. Purcell, “The Question of Correlation between Photons in Coherent Light Rays,” Nature , 178, 1449–1450 (1956). [CrossRef]  

3. U. Fano, “Quantum Theory of Interference Effects in the Mixing of Light from Phase-Independent Sources,” Am. J. Phys. 29, 539–545 (1961). [CrossRef]  

4. B. L. Morgan and L. Mandel, “Measurement of Photon Bunching in a Thermal Light Beam,” Phys. Rev. Lett. 16, 1012–1015 (1966). [CrossRef]  

5. R. J. Glauber, “The quantum theory of optical coherence,” Phys. Rev. 130, 2529–2539 (1963). [CrossRef]  

6. T. B. Pittman, Y. H. Shih, D. V. Strekalov, and A. V. Sergienko, “Optical imaging by means of two-photon quantum entanglement,” Phys. Rev. A 52, 3429–3431 (1995). [CrossRef]  

7. A. Valencia, G. Scarcelli, M. D’Angelo, and Y. Shih, “Two-photon “ghost” imaging with thermal light,” Phys. Lett. 94, 557–559 (2004).

8. R. H. Brown and R. Q. Twiss, “Interferometry of the Intensity Fluctuations in Light. I. Basic Theory: The Correlation between Photons in Coherent Beams of Radiation,” Proc. Roy. Soc. Lon. A , 242, 300–324 (1957). [CrossRef]  

9. H. Yu, R. Lu, S. Han, H. Xie, G. Du, T. Xiao, and D. Zhu, “Fourier-transform ghost imaging with hard x rays,” Phys. Rev. Lett. 117, 113901 (2016). [CrossRef]   [PubMed]  

10. D. Pelliccia, A. Rack, M. Scheel, V. Cantelli, and D. M. Paganin, “Experimental x-ray ghost imaging,” Phys. Rev. Lett. 117, 113902 (2016). [CrossRef]   [PubMed]  

11. A. X. Zhang, Y. H. He, L. A. Wu, L. M. Chen, and B. B. Wang, “Table-top X-ray ghost imaging with ultra-low radiation,” Optica 5, 374–377 (2018). [CrossRef]  

12. A. Classen, K. Ayyer, H. N. Chapman, R. Rohlsberger, and J. V. Zanthier, “Incoherent diffractive imaging via intensity correlations of hard x rays,” Phys. Rev. Lett. 119, 053401 (2017). [CrossRef]   [PubMed]  

13. R. Schneider, T. Mehringer, G. Mercurio, L. Wenthaus, A. Classen, G. Brenner, O. Gorobtsov, A. Benz, D. Bhatti, and L. Bocklage, “Quantum imaging with incoherently scattered light from a free-electron laser,” Nat. Phys. 14, 126–129 (2018). [CrossRef]  

14. G. Baym, “The physics of hanbury brown–twiss intensity interferometry: from stars to nuclear collisions,” Acta Phys. Pol 29, 1839 (1998).

15. D. Dravins and S. Lebohec, “Stellar intensity interferometry: astrophysical targets for sub-milliarcsecond imaging,” Proc. SPIE 7734 (2010). [CrossRef]  

16. P. R. Lawson, “Principles of long baseline stellar interferometry,” in “Principles of Long Baseline Stellar Interferometry,” 50–60 (JPL2000).

17. D. W. Mccarthy and F.J. Low,“Initial results of spatial interferometry at 5 microns,” Astrophys. J. , 202, 37–40 (1975). [CrossRef]  

18. D. H. Staelin and M. Shao, “Long-baseline optical interferometer for astrometry,” J. Opt. Soc. Am. 67, 81–86 (1978).

19. R. Hanbury-Brown and R. Q. Twiss, “Correlation between photons in two coherent beams of light,” J. Astrophys. 15, 13–19 (1994).

20. R. H. Brown and D. Scarl, “The intensity interferometer, its application to astronomy,” Phys. Tod. 28, 54–55 (1975). [CrossRef]  

21. S. L. Bohec and J. Holder, “Optical intensity interferometry with atmospheric cherenkov telescope arrays,” Astrophys. J. 649, 399–405 (2006). [CrossRef]  

22. V. Malvimat, O. Wucknitz, and P. Saha, “Intensity interferometry with more than two detectors?” Mon. Not. Roy. Astro. Soc. 437, 798–803 (2014). [CrossRef]  

23. D. Dravins, S. Lebohec, H. Jensen, and P. D. Nuñez, “Optical intensity interferometry with the cherenkov telescope array,” Astro. Phys. 43, 331–347 (2013). [CrossRef]  

24. W. Wang, Z. Tang, H. Zheng, H. Chen, Y. Yuan, J. Liu, Y. Liu, and Z. Xu, “Intensity correlation imaging with sunlight-like source,” Opt. Commun. 414, 92–97 (2018). [CrossRef]  

25. J. R. Fienup, “Reconstruction of an object from the modulus of its fourier transform,” Opt. Lett. 3, 27 (1978). [CrossRef]   [PubMed]  

26. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21, 2758–2769 (1982). [CrossRef]   [PubMed]  

27. C. C. Wackerman and J. R. Fienup, “Phase-retrieval stagnation problems and solutions,” J. Opt. Soc. Am. A 3, 1897–1907 (1986). [CrossRef]  

28. H. M. Faulkner and J. M. Rodenburg, “Movable aperture lensless transmission microscopy: a novel phase retrieval algorithm,” Phys. Rev. Lett. 93, 023903 (2004). [CrossRef]   [PubMed]  

29. L. Tian and L. Waller, “3D intensity and phase imaging from light field measurements in an LED array microscope,” Optica , 2, 104–111 (2015). [CrossRef]  

30. W. Hoppe, “Trace structure analysis, ptychography, phase tomography,” Ultra , 10, 187–198 (1982). [CrossRef]  

31. C. Liu, T. Walther, and J. M. Rodenburg, “Influence of thick crystal effects on ptychographic image reconstruction with moveable illumination,” Ultra. 109, 1263–1275 (2009). [CrossRef]  

32. A. M. Maiden and J. M. Rodenburg, “An improved ptychographical phase retrieval algorithm for diffractive imaging,” Ultra , 109, 1256–1262 (2009). [CrossRef]  

33. A. M. Maiden, M. J. Humphry, M. C. Sarahan, B. Kraus, and J. M. Rodenburg, “An annealing algorithm to correct positioning errors in ptychography,” Ultra. 120, 64–72 (2012). [CrossRef]  

34. F. Zhang, I. Peterson, J. Vila-Comamala, A. Diaz, F. Berenguer, R. Bean, B. Chen, A. Menzel, I. K. Robinson, and J. M. Rodenburg, “Translation position determination in ptychographic coherent diffraction imaging,” Opt. Express 21, 13592 (2013). [CrossRef]   [PubMed]  

35. P. van Cittert, “Die wahrscheinliche schwingungsverteilung in einer von einer lichtquelle direkt oder mittels einer linse beleuchteten ebene,” Physica , 1, 201–210 (1934). [CrossRef]  

36. F. Zernike, “The concept of degree of coherence and its application to optical problems,” Physica , 5, 785–795 (1938). [CrossRef]  

37. O. Katz, E. Small, and Y. Silberberg, “Looking around corners and through thin turbid layers in real time with scattered incoherent light,” Nature Photo. 6, 549–553 (2012). [CrossRef]  

38. J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491, 232–234 (2012). [CrossRef]  

39. W. Martienssen and E. Spiller, “Coherence and fluctuations in light beams,” Am. J. Phys. 32, 919–926 (1964). [CrossRef]  

40. H. Chen, T. Peng, and Y. Shih, “100% correlation of chaotic thermal light,” Phys. Rev. A , 88, 023808 (2013). [CrossRef]  

41. V. Alejandra, S. Giuliano, D. Milena, and S. Yanhua, “Two-photon imaging with thermal light,” Phys. Rev. Lett. 94, 063601 (2005). [CrossRef]  

42. T. Peng, H. Chen, Y. Shih, and M. O. Scully, “Delayed-choice quantum eraser with thermal light,” Phys. Rev. Lett. 112, 180401 (2014). [CrossRef]   [PubMed]  

43. F. Haije, J. M. Rodenburg, A. M. Maiden, and P. A. Midgley, “Extended ptychography in the transmission electron microscope: possibilities and limitations,” Ultra. 111, 1117–1123 (2011). [CrossRef]  

44. L. I. Goldfischer, “Autocorrelation function and power spectral density of laser-produced speckle patterns,” J. Opt. Soc. Am. 55, 247 (1965). [CrossRef]  

45. J. R. Fienup and P. S. Idell, “Imaging correlography with sparse arrays of detectors,” Opt. Eng. 27(9), 279778 (1988).
