
All-digital quantum ghost imaging: tutorial

Open Access

Abstract

Quantum ghost imaging offers many advantages over classical imaging, including the ability to probe an object with one wavelength and record the image with another, while low photon fluxes offer the ability to probe objects with fewer photons, thereby avoiding photo-damage to light sensitive structures such as biological organisms. Progressively, ghost imaging has advanced from single-pixel scanning systems to two-dimensional (2D) digital projective masks, which offer a reduction in image reconstruction times through shorter integration times. In this tutorial, we describe the essential ingredients in an all-digital quantum ghost imaging experiment and guide the user on important considerations and choices to make, aided by practical examples of implementation. We showcase several image reconstruction algorithms using two different 2D projective mask types and discuss the utility of each. We additionally discuss a notable artifact of a specific reconstruction algorithm and projective mask combination and detail how this artifact can be used to retrieve an image signal heavily buried under artifacts. Finally, we end with a brief discussion on artificial intelligence (AI) and machine learning techniques used to reduce image reconstruction times. We believe that this tutorial will be a useful guide to those wishing to enter the field, as well as those already in the field who wish to introduce AI and machine learning to their toolbox.

© 2023 Optica Publishing Group

1. INTRODUCTION

A consequence of quantum mechanics is the nonlocal correlation of a multi-particle system that is observable in the joint detection of spatially separated particle detectors [1]. Quantum ghost imaging is one such phenomenon that saw its inception in 1995, when Pittman and co-workers demonstrated this novel imaging approach, informed by the theoretical work of Klyshko in 1988 [2]. Quantum ghost imaging employs entangled signal and idler photon pairs produced by spontaneous parametric downconversion (SPDC), as initially demonstrated in the experiment undertaken by Pittman et al. [3]. Traditionally, to acquire an image, one would point a camera directly at the object to be imaged; in quantum ghost imaging, however, the positional information of photons that have not interacted with the object is measured. One photon from the entangled pair interacts with the object and is bucket detected (detected with no spatial resolution), while its twin photon does not interact with the object and is directed to a spatially resolving detector. The beauty of quantum ghost imaging is that no individual photon of the entangled pair can reveal the image information; rather, the image is revealed through the mutual correlations of the entangled photon pair by detecting the pair in coincidence [4–6].

Pittman et al. concluded that, although they had utilized entangled photons, it should indeed be possible to reproduce this behavior with a classical source. Subsequently, Abouraddy and co-workers developed a theoretical model showing that an entangled photon source produces images whose characteristics differ fundamentally from images obtained with a correlated, but not entangled, photon source [7]. In 2002, Bennink et al. experimentally demonstrated that a classical source yields a ghost image, albeit with higher photon numbers than in the quantum regime [8]. In 2003, Gatti et al. showed theoretically that a key feature of quantum ghost imaging with an SPDC source cannot be reproduced classically [9]: entanglement allows ghost images to be reconstructed in both the near and far field of the entanglement source, while a classical source can form an image in either the near or far field but not both [9]. This traces back to the rules of quantum mechanics, where it is possible to first emit a photon and only after it has been emitted to decide whether to measure its position or transverse momentum. This was confirmed in 2004, when it was shown that a source of entangled photons could form ghost images in either its near or far field [10]. Quantum ghost imaging is, therefore, a powerful alternative imaging technique for light sensitive structures. In the quantum regime, low photon numbers allow for little to no damage to photo-sensitive structures, which we believe paves the way toward quantum microscopy of biological samples [11], while also permitting measurements in either the near or far field of the entangled photon source, which is not reproducible by a classical source. Additionally, quantum ghost imaging offers the distinctive capability of operating with non-degenerate signal and idler photons: the object is probed or illuminated with one wavelength while the spatial information is recorded at another wavelength, where the imaging detector is less noisy, more sensitive, or more cost effective [12].

As a stepping stone toward physical objects, such as biological samples, digital objects are used in what is known as all-digital quantum ghost imaging. Digital objects are displayed on a light-modulating device; this also provides a known ground truth against which image quality can be established. Recent years have seen the emergence of all-digital quantum ghost imaging in an effort to develop and test advancements in technique and efficiency, with the goal of realizing the most efficient form of quantum ghost imaging before utilizing these advancements to image light sensitive biological structures. Quantum ghost imaging was recently demonstrated with entanglement swapped photons [13] and symmetry engineered quantum states [14] and has seen significant growth in the past two decades [11,15–17], with the promise of enhanced resolution [18,19], while expanding the application to the imaging of photo-sensitive structures [12,15] and moving into the x-ray [20,21] and electron wavelength spectrum [22,23]. Imaging quality has steadily improved while acquisition times have steadily decreased; these advancements are fueled by progress in computational imaging [24,25], compressive sensing [26,27], and deep-learning-based methods [28–32]. While there have been considerable advancements to date, quantum ghost imaging still requires substantial development before a commercially viable product becomes available. All-digital quantum ghost imaging provides a platform for efficient tests, which will pave the way toward quantum microscopy.

In this tutorial, we outline how to get started with all-digital quantum ghost imaging. We provide basic theoretical considerations, followed by a getting started section where important considerations and choices on the optical setup are discussed. We follow this up with a practical example of an all-digital quantum ghost imaging optical implementation where we detail experimental specifics. We introduce different image reconstruction algorithms and image quality factor tests to allow the user to make an informed decision on algorithm choice based on image quality tests. Finally, we present examples of what quantum images typically look like and briefly discuss artificial intelligence methods for time-efficient quantum imaging. While we present the tutorial in a linear style, it does not have to be read in this manner for those familiar with the field. For those entering the field, this tutorial provides a thorough guide that discusses all considerations while providing practical examples along the way. We additionally provide references throughout for those wishing to expand quantum ghost imaging beyond the scope of this tutorial.

2. THEORETICAL CONSIDERATIONS

Before delving into the practicality of quantum ghost imaging, it is important to first introduce the physics of why quantum ghost imaging occurs. In traditional (classical) imaging, if the object is assumed to be either externally illuminated or self-illuminating, each point on the surface of the object can then be imagined as a point radiation sub-source. An imaging lens is used to focus the light scattered and/or reflected by the object onto an image plane defined by the Gaussian thin lens equation,

$$\frac{1}{{{s_{\!o}}}} + \frac{1}{{{s_{\!i}}}} = \frac{1}{f},\tag{1}$$
where ${s_{\!o}}$ is the distance between the object and the imaging lens, ${s_{\!i}}$ is the distance between the imaging lens and the image plane, and $f$ is the focal length of the imaging lens. The Gaussian thin lens equation defines a point-to-point relationship between the object plane and the image plane. This is illustrated in Fig. 1(a). Each point on the object plane is mapped to a unique point on the image plane. In a classical imaging scenario, the photons illuminating the object must travel to the image plane and transfer information by virtue of position correlations that are established by the optical system itself.
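As a quick numerical check of this point-to-point mapping, the thin lens relation can be rearranged for the image distance and transverse magnification. A minimal sketch; the function names and the example distances (in millimeters) are illustrative only:

```python
def image_distance(s_o: float, f: float) -> float:
    """Solve 1/s_o + 1/s_i = 1/f for the image distance s_i."""
    return 1.0 / (1.0 / f - 1.0 / s_o)

def magnification(s_o: float, s_i: float) -> float:
    """Transverse magnification of the image; negative means inverted."""
    return -s_i / s_o

# Hypothetical example: object 300 mm from a lens of 100 mm focal length
s_o, f = 300.0, 100.0
s_i = image_distance(s_o, f)   # 150.0 mm
m = magnification(s_o, s_i)    # -0.5 (inverted, half size)
```

Each object distance greater than $f$ maps to a unique image distance, which is the point-to-point relationship exploited by the imaging system.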

Fig. 1. Imaging systems. (a) Classically the photons used to image an object must physically interact with the object, travel to the image plane, and transfer information by virtue of position correlations that are established by the optical system itself. (b) Position correlations are established by virtue of quantum entanglement. The correlations are innate to the entangled photon pair, where the photon pair share quantum correlations at the source.


By exploiting correlations unique to the quantum world, it becomes possible to establish the position correlations required for imaging by virtue of quantum entanglement. The correlations are innate to the entangled photon pair, which shares quantum correlations at the source. This process is illustrated in Fig. 1(b). Spatially separating this pair, one photon is sent to the object, and the other is sent to an imaging detector, which detects either the transverse position or momentum of the photon. By detecting both photons in coincidence, thereby extracting their quantum correlations, it is possible to reconstruct an image of the object: spatially resolving one photon reveals the position of its entangled counterpart. This technique is known as quantum ghost imaging. It therefore offers an alternative imaging technique suited to light sensitive structures, since the light that is spatially detected has never physically interacted with the object, which is of particular importance when imaging photo-sensitive matter.

A. Quantum Correlations and the Phase-Matching Condition

Quantum ghost images are acquired by correlating the output of a bucket detector (a mode insensitive detector [33], which collects the photons that have interacted with the object) with the output from a high-resolution scanning detector or photo-detector array, which detects the position of the photons that have not interacted with the object. Neither photon alone can yield an image—the bucket detector cannot spatially resolve the object photon while its twin photon has not interacted with the object; this is illustrated conceptually in Fig. 1(b).

At the core of quantum ghost imaging lies the production of entangled photon pairs. The most efficient and commonly employed method to generate entangled photon pairs is spontaneous parametric downconversion (SPDC) in a non-linear crystal (NLC). It is well established that SPDC leads to the creation of entangled photons [34–41]. This non-linear process, of optical non-linearity ${\chi ^{(2)}}$, facilitates the decay of a pump photon into two daughter photons (termed the signal and idler photons for historical reasons). A pump beam of angular frequency ${\omega _p}$, when incident on an NLC, produces two daughter photons of angular frequencies ${\omega _s}$ and ${\omega _i}$, respectively. By energy conservation, the angular frequencies of the signal and idler photons must add to the angular frequency of the pump photon that produced them,

$${\omega _p} = {\omega _s} + {\omega _i}.\tag{2}$$

If ${\omega _s} = {\omega _i}$, the downconversion process is degenerate and the signal and idler photons are of the same wavelength; if ${\omega _s} \ne {\omega _i}$, the process is non-degenerate and the signal and idler photons are of different wavelengths. In both cases, Eq. (2) must be satisfied. Additionally, the linear momentum of the daughter photons must add to that of the pump photon, given by

$${{\textbf k}_p} = {{\textbf k}_s} + {{\textbf k}_i},\tag{3}$$
where ${{\textbf k}_{p,s,i}}$ are the wavevectors of the pump, signal, and idler photons, respectively. Emission of the photon pairs is most efficient when both energy and linear momentum are conserved. This is known as the phase-matching condition, which must be satisfied over the entire length of the crystal.
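In wavelength terms, since $\omega = 2\pi c/\lambda$, the energy conservation relation becomes $1/\lambda_p = 1/\lambda_s + 1/\lambda_i$. A short sketch (the wavelength choices are illustrative, not prescriptive) shows how fixing the pump and signal wavelengths determines the idler:

```python
def idler_wavelength(lambda_p: float, lambda_s: float) -> float:
    """Idler wavelength (nm) from energy conservation: 1/λp = 1/λs + 1/λi."""
    return 1.0 / (1.0 / lambda_p - 1.0 / lambda_s)

# Degenerate case: a 405 nm pump yields signal and idler both at 810 nm
lam_degen = idler_wavelength(405.0, 810.0)       # ≈ 810.0 nm

# Non-degenerate case (hypothetical): a 532 nm signal forces an infrared idler
lam_nondegen = idler_wavelength(405.0, 532.0)    # ≈ 1696.5 nm
```

The non-degenerate case illustrates why one arm of the experiment may end up at a wavelength where detectors are less efficient, a point revisited in Section 3.A.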

Entangled photon pairs consist of a signal photon and an idler photon; each photon carries distinct spatial modes and polarization. In the generation of entangled photons by spontaneous parametric downconversion, the photon pair is entangled in both energy and momentum, including orbital angular momentum (OAM), such that the two-photon state can be described by

$$|\Psi \rangle = \sum\limits_\ell {a_{\ell , - \ell}}\left| {{\ell _s}} \right\rangle \left| {{\ell _i}} \right\rangle ,\tag{4}$$
where ${| {{a_{\ell , - \ell}}} |^2}$ is the probability of finding the signal photon in the OAM state $|\ell \rangle$ and the idler photon in the state $| - \ell \rangle$. Similarly, this can be extended to position space, with $| {{x_s}} \rangle$ and $| {{x_i}} \rangle$ representing the positions of the signal and idler photons, respectively. Note that the photons are anticorrelated in OAM and correlated in their positions. The signal photon encounters a spatially resolving single-photon detector, which registers its position without the photon ever engaging with the object being probed. Simultaneously, the idler photon interacts with the object, undergoing processes like transmission, reflection, and scattering depending on the object type. The resulting intensity is captured by a spatially integrating bucket detector that has no spatial resolution.

The core of quantum ghost imaging resides in the second-order correlation function, which quantifies the correlation between the positions of the signal and idler photons. This function is defined as the joint probability of detecting a signal photon and an idler photon at specific positions, divided by the product of the individual probabilities. Accordingly, even though the spatially resolved photon avoids direct interaction with the object, the joint detection events exhibit a nontrivial correlation structure governed by the second-order correlation function ${G^{(2)}}({\overrightarrow {{x_s}} ,\overrightarrow {{x_i}}})$. For full details on the transverse spatial part of the second-order correlation function, see [42]. The correlation originates from the shared entangled state of the photon pairs. To reconstruct an image of the object, it is necessary to accumulate a substantial number of correlated photon pairs and analyze the second-order correlation function to extract the spatial information of the object. Notably, the joint probability distribution of entangled photon pairs is an important quantity in quantum optics: it describes the likelihood of measuring certain outcomes for the position or momentum of one photon given the measurements of its entangled twin. The degree of entanglement between the photons affects the strength of the correlations between the photon pair and, accordingly, the quality of the reconstructed image. In particular, strong second-order correlations enhance the visibility of the reconstructed image.
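To make the normalization concrete, here is a toy Monte Carlo sketch; all rates are hypothetical, and the binary time-bin model is a deliberate simplification of real coincidence counting. The estimator divides the joint detection probability by the product of the singles probabilities:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy detection model (all rates hypothetical): in each time bin a true
# entangled pair fires both detectors with probability p_pair, and each
# detector also fires independently (noise/accidentals) with probability p_acc.
n_bins = 200_000
p_pair, p_acc = 0.02, 0.005

pair = rng.random(n_bins) < p_pair
signal = pair | (rng.random(n_bins) < p_acc)   # spatially resolving arm
idler = pair | (rng.random(n_bins) < p_acc)    # bucket-detector arm

# Normalized second-order correlation:
# g2 = P(joint detection) / [P(signal) * P(idler)]
g2 = (signal & idler).mean() / (signal.mean() * idler.mean())
```

With these numbers the estimate lands well above 1, the value expected for uncorrelated detectors; the stronger the pair correlations relative to accidentals, the larger the ratio.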

Polarization sensitivity and phase-matching conditions result in different SPDC phase-matching regimes, namely, Type 0, I, and II. In Type 0, the downconverted photons are produced with the same polarization as the pump photon. Types I and II are the more commonly used types in quantum optics. In Type I, the downconverted photons are produced with the same polarization as each other, orthogonal to the polarization of the pump photon; the daughter photons are emitted on concentric cones centered around the pump beam’s axis of propagation. Type II SPDC emits one photon with the same polarization as the pump and the other with orthogonal polarization, and these photons are polarization entangled. SPDC additionally comes in two geometries: collinear and non-collinear. In the collinear geometry, the output field wavevectors propagate in the same direction as the pump field, as shown in Fig. 2(a). Conversely, in the non-collinear case, the output fields propagate off-axis with respect to the pump field, along symmetric trajectories on opposite sides of the pump axis, as illustrated in Fig. 2(b). The review in [43] covers the theoretical derivation of SPDC, which is beyond the scope of this tutorial; the SPDC details presented above are, however, sufficient for an understanding of how quantum ghost imaging works. The different phase-matching regimes and geometric types are summarized in Table 1.


Fig. 2. Diagrams of SPDC interaction geometries for (a) collinear and (b) non-collinear phase-matching. Insets show the required momentum conservation.



Table 1. Summary of the Different Phase-Matching Regimes for SPDC, Assuming That the Pump Is Horizontally (H) Polarized and Whether Collinear or Non-Collinear Geometries Are Possible

B. Position and Momentum Configurations

The ghost imaging protocol makes use of the spatial correlation of photon pairs, which can, in principle, be accomplished by a classical source [44]. However, in the context of Einstein–Podolsky–Rosen (EPR) studies, one needs to demonstrate two-photon correlations in each of two conjugate variables, such as both position and transverse momentum [45]. In the position configuration of the quantum ghost imaging protocol [shown in Fig. 3(a)], the plane of the NLC is imaged to both the object ($O(x,y)$) and the projective mask (${P_i}(x,y)$), thereby relying on the near-perfect spatial correlations between the entangled photons generated by SPDC to reconstruct the image. In the plane of the crystal, however, the signal and idler photons are anticorrelated in their transverse momenta; as a consequence, in the far field the positions of the signal and idler photons are anticorrelated. By placing both the object and the imaging detector in the far field of the crystal, one achieves a momentum-configured quantum ghost imaging system with the image inverted, as shown in Fig. 3(b). The image type (upright or inverted), therefore, depends on the system configuration and is a manifestation of EPR entanglement. Although both results could be obtained independently, no single classical system could produce both [46,47]. An additional embodiment of ghost imaging is when the SPDC is phase-matched to produce non-degenerate signal and idler wavelengths [12,32]; this depends on the combination of the pump photon’s energy and the NLC in use. The wavelength non-degeneracy offers special performance in applications to the biological sciences, as the object is illuminated with one wavelength while the spatial information is recorded at another, especially when mitigating the risk of photo-damage to light sensitive organisms [48,49].


Fig. 3. Conceptual sketch of quantum ghost imaging optical setups in the position and momentum configurations. Entangled photons, generated by a high energy pump photon at a non-linear crystal (NLC) by spontaneous parametric downconversion (SPDC), are spatially separated along two arms. One photon interacts with the object and is collected by a bucket detector (the idler photon). The other photon (signal photon) is collected by a spatially resolving detector consisting of a projective mask and a bucket detector. Each detector is connected to a coincidence counting (CC) device to perform coincidence measurements. (a) Illustration of a ghost imaging optical setup in which the object and projective mask are placed in the near field of the crystal. (b) Illustration of a ghost imaging optical setup in which the object and projective mask are placed in the far field of the crystal. The insets show the respective ghost images obtained for the different experimental configurations, taken from [50]. ${f_i}$ indicates the different focal lengths of the lenses required for either the position or momentum configuration.


Figures 3(a) and 3(b) conceptually illustrate how a quantum ghost imaging optical setup is implemented in the position and momentum configurations, respectively. It is known that the spatial correlations between signal and idler photon pairs, produced by SPDC, can be used within a quantum imaging system [1,3,7,51]. Image information is revealed by the correlations between the signal and idler photons and is not present in the detection of each individual photon. A large-area, single-pixel detector (or bucket detector) collects the photons that have interacted with the object (idler photons), yet no image information of these photons is recorded. In the signal arm, the positional information of the detected signal photons is recorded; the signal photons are, therefore, spatially resolved, yet similarly no image information is recorded. The imaging protocol requires that the coincidence counts between signal and idler photons are recorded. Specifically, the position or momentum (depending on the quantum ghost imaging configuration) of the photon impinging on the spatially resolving detector is recorded only if its detection is coincident with the recording of a photon by the bucket detector. In essence, the idler photons have illuminated the object, and the signal photons have been recorded by a detector with spatial resolution. The subset of those position-measured signal photons that are coincident with detections at the bucket detector reveals the image ($I(x,y)$). Each position is weighted by the recorded coincidence counts, and an image of the object is reconstructed from a linear combination of the weighted positions.
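The weighted linear combination described above can be sketched numerically. Everything here is simulated: a hypothetical 8 × 8 binary object and random binary masks, with each mask's coincidence weight taken as its noiseless overlap with the object, standing in for measured coincidence counts. Subtracting the mean term removes the constant background:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 8x8 binary ground-truth object (a rectangle)
obj = np.zeros((8, 8))
obj[2:6, 3:5] = 1.0

# Random binary projective masks P_i; the coincidence weight c_i of each
# mask is modeled as its overlap with the object (noise-free stand-in for
# the recorded coincidence counts)
n_masks = 4096
masks = rng.integers(0, 2, size=(n_masks, 8, 8)).astype(float)
weights = np.einsum('nxy,xy->n', masks, obj)

# Linear reconstruction with background subtraction:
# I(x,y) = <c_i P_i(x,y)> - <c_i><P_i(x,y)>
image = (np.einsum('n,nxy->xy', weights, masks) / n_masks
         - weights.mean() * masks.mean(axis=0))
```

The reconstructed `image` is a noisy but recognizable copy of `obj`; increasing `n_masks` reduces the statistical noise, mirroring the longer integration times needed in a real experiment.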

In terms of Klyshko’s advanced wave model [2], it is possible to use classical optics to predict the spatial distribution of the quantum photon correlations in a setup that employs SPDC. The Klyshko advanced wave model can be implemented in either the position or momentum configuration, as discussed in [50]. The bucket detector is replaced with a laser diode at a central wavelength that is the same as the SPDC wavelength. The diode laser is coupled to a multimode fiber, and the light is projected backward through the object arm to the NLC. The NLC is replaced by a mirror, and the light then bounces off the mirror and travels through the other arm of the experimental setup toward the imaging detector [11]. This allows one to experimentally simulate the outcome of a quantum optical setup with the use of classical light. Klyshko showed that backprojection in this manner generates a classical analog to predict what the measured quantum correlations from the quantum optical system would look like [2]. The measured intensities reflect what the quantum correlations (coincidence counts) would look like when detecting the entangled photons in coincidence.

C. Spatial Resolution and Field of View

Importantly, one must assess the resolution limit of the quantum ghost imaging system prior to setting up the experiment, to determine the number of effective pixels available for image resolution. In a quantum ghost imaging system, the resolution of the image (or the number of pixels the system is able to resolve) is limited by the point spread function (PSF) of the optics comprising the spatially resolving detector; this is further reduced by the strength of the spatial correlations inherent in SPDC [18]. As mentioned above, the uniqueness of quantum imaging lies in choosing whether one measures the position or momentum correlations between the signal and idler photons in a single imaging setup; here we outline the resolution limit calculation for the measurement of the momentum anticorrelations, as measured in the far field. The strength of the momentum anticorrelations is set by the momentum uncertainty in the pump beam, which is controlled by the diameter of the pump beam. Accordingly, the position correlation radius (as measured in the far field) between signal and idler photons is given by

$${\sigma _x} \approx f\frac{{2{\lambda _p}}}{{\pi {w_p}}},\tag{5}$$
where $f$ is the effective focal length of the Fourier transform lens, ${\lambda _p}$ is the wavelength of the pump beam, and ${w_p}$ is the waist of the pump beam. In [18], the authors show that, irrespective of the resolving power of the optical system, this correlation sets a resolution limit that the quantum imaging system cannot exceed; the resolution is therefore set by the size of the pump beam.

Additionally, the field of view (FOV) of the imaging system is set by the phase-matching imposed by the length of the chosen NLC,

$${{\rm FOV}_x} \approx f\sqrt {\frac{{{\lambda _p}}}{L}} ,\tag{6}$$
where $L$ is the length of the chosen NLC. It is, therefore, possible to infer that a limit on the number of resolution cells available for ghost imaging is generated by the SPDC,
$$N = {\left({\frac{{{{\rm FOV}_x}}}{{{\sigma _x}}}} \right)^2} \approx \frac{{{\pi ^2}w_p^2}}{{4L{\lambda _p}}}.\tag{7}$$

The number $N$ corresponds to the limit on the number of effective pixels in the reconstructed quantum ghost images, as set by the properties of the SPDC; $N$ is also the Schmidt number of the entanglement [52]. It is an important parameter to determine prior to building the experimental setup, as it fixes the number of pixels available to image with.
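The correlation radius, FOV, and resolution-cell count above can be combined into a quick pre-experiment estimate. The parameter values below (405 nm pump, 1 mm pump waist, 3 mm crystal, 100 mm Fourier lens) are purely illustrative:

```python
import math

def correlation_radius(f: float, lam_p: float, w_p: float) -> float:
    """Far-field position correlation radius: sigma_x ~ f * 2*lam_p / (pi*w_p)."""
    return f * 2.0 * lam_p / (math.pi * w_p)

def field_of_view(f: float, lam_p: float, L: float) -> float:
    """Phase-matching-limited field of view: FOV_x ~ f * sqrt(lam_p / L)."""
    return f * math.sqrt(lam_p / L)

def resolution_cells(w_p: float, lam_p: float, L: float) -> float:
    """Effective pixel count: N = (FOV_x/sigma_x)^2 ~ pi^2 w_p^2 / (4 L lam_p)."""
    return (math.pi ** 2) * (w_p ** 2) / (4.0 * L * lam_p)

# Illustrative parameters in SI units
lam_p, w_p, L, f = 405e-9, 1e-3, 3e-3, 100e-3
sigma_x = correlation_radius(f, lam_p, w_p)   # ~26 um correlation radius
fov_x = field_of_view(f, lam_p, L)            # ~1.16 mm field of view
N = resolution_cells(w_p, lam_p, L)           # ~2030 effective pixels
```

Note that the focal length cancels in $N$: a longer lens enlarges both the FOV and the correlation radius proportionally, so only the pump waist, pump wavelength, and crystal length set the pixel budget.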

3. GETTING STARTED

In this section, we introduce the reader to practical considerations, required components, and choices to make prior to implementing a quantum ghost imaging optical setup. These considerations lead to restrictions on the types of samples that can be probed through quantum ghost imaging and determine the filters, lenses, and other optics one may require. Additionally, we introduce the reader to calculating the detection probability, which is useful when simulating the outcome of the experiment and allows the reader to predict what the quantum ghost image would typically look like. The simulated image is the theoretical calculation and allows for a theory-versus-experiment comparison post-experiment.

A. SPDC Wavelengths and Detection

It is of utmost importance to determine which wavelengths are of interest in the quantum ghost imaging experiment. Will the SPDC wavelengths be degenerate or non-degenerate in nature? Which wavelength range is most appropriate to probe your sample? Which wavelength range is the easiest or most cost effective to spatially resolve? These are questions that need to be carefully considered before building your setup as these choices dictate the pump photon wavelength, the NLC to use, and single-photon detectors, as well as the different coatings of the optics used in each arm of the experiment. It is always useful to consider several boundary conditions and trade-offs for these selections.

As the all-digital case is a step toward probing biological samples, it is important to make adequate considerations in preparation for the biological sample you wish to probe, i.e., what is ideal for testing the system versus what is ideal for probing the sample. Commonly, SPDC wavelengths are degenerate and in the near-infrared range; this is due to the availability of lasers in the UV wavelength range as well as NLCs phase-matched for these wavelength ranges. Typically, samples are easily probed by and/or sensitive to these wavelengths. Pump lasers, NLCs, and single-photon detectors for this range are readily available and have adequate efficiency. Above a wavelength of $\lambda = 900\;{\rm nm} $, single-photon detector efficiency decreases drastically, although the mid-infrared range is the best for probing biological matter. Table 2 briefly outlines the differences between commonly used avalanche photodiodes (APDs), i.e., single-photon detectors, for different wavelength ranges. Importantly, in the infrared wavelength range, detection efficiency decreases to approximately 20%. Different manufacturers provide detectors with specifications that differ marginally, and the specifications provided in Table 2 are for APDs that are readily available and commonly used in quantum optics labs. For visible and near-infrared wavelength detection, most detectors are silicon based. Silicon-based sensors are typically insensitive to wavelengths above 900 nm; InGaAs/InP detectors were therefore developed to detect infrared wavelengths. However, their efficiency is considerably lower in comparison.


Table 2. Specifications of Commonly Used Avalanche Photodiodes (APDs)

Consequently, one may require infrared wavelengths to probe the sample while visible or near-infrared wavelengths offer better detection efficiency. Probing a sample with one wavelength while measuring the position of a photon of another wavelength is possible in non-degenerate SPDC, where the two photons of the entangled pair are of different wavelengths. In such a case, the infrared photon is sent to the object while its twin photon, either a visible or near-infrared photon, is spatially resolved at a higher detection efficiency. The SPDC wavelengths are determined by the wavelength of the pump photon and the phase-matching conditions of the chosen NLC. In general, SPDC produces broadband fields rather than fields of a single frequency; it is, therefore, important to use narrow bandpass filters at the detectors to narrow down the detected frequencies. In accordance with the phase-matching rules, the bandpass filter at each detector must be suited to the wavelength in that arm. Ideally, the bandpass filter is centered at the required wavelength with a FWHM as narrow as possible. A typical example of a transmission graph for a bandpass filter centered at $\lambda = 810\;{\rm nm} $ with a FWHM of 10 nm is shown in Fig. 4: the transmission peaks at the central wavelength and drops off quickly at lower and higher wavelengths. The narrower the filter, the more specialized the coating, which often increases the cost of these specialized filters.
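For back-of-the-envelope throughput estimates, a filter response like the one in Fig. 4 can be approximated by a Gaussian profile. This is only a model: the 95% peak transmission assumed here is hypothetical, and real interference filters are typically closer to flat-top:

```python
import math

def transmission(lam: float, center: float = 810.0, fwhm: float = 10.0,
                 peak: float = 0.95) -> float:
    """Idealized Gaussian model of bandpass filter transmission at
    wavelength lam (all quantities in nm; peak value is hypothetical)."""
    sigma = fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))  # FWHM -> std dev
    return peak * math.exp(-((lam - center) ** 2) / (2.0 * sigma ** 2))

# By construction, the response falls to half its peak at center +/- FWHM/2,
# and is effectively zero far out of band (e.g., at 830 nm).
```

Multiplying the (broadband) SPDC spectrum by such a curve gives a rough estimate of the photon rate surviving the filter in each arm.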


Fig. 4. Transmission graph of a bandpass filter centered at $\lambda = 810\;{\rm nm} $ with a 10 nm FWHM.


The quality of the entangled photon source (in this case a non-linear crystal) influences the resolution, contrast, and fidelity of the reconstructed quantum ghost image. If the entanglement between the photons is high, the correlations between the signal and idler photons are strong, leading to a better-defined image. The optimal photon source for quantum ghost imaging would ideally produce highly entangled photon pairs with strong correlations. This typically involves using the most accessible source, that is, SPDC, which produces entangled photon pairs with well-defined correlations. There are, however, other entangled photon sources, such as semiconductor quantum dots that generate triggered entangled photon pairs through a cascaded radiative decay process [53–55] and the deterministic embedding of GaAs quantum dots in broadband photonic nanostructures that enables Purcell-enhanced emission [56]. In contrast, SPDC is intrinsically probabilistic and, therefore, generates lower rates of photon pairs; it is, however, the most readily available and accessible source of entangled photons [57]. The Poissonian statistics of such sources limit the brightness to a rate that poses a great challenge for quantum optics applications that require high efficiency [56].

B. Pump Laser

Deciding the SPDC wavelengths poses an automatic restriction on the wavelength of the pump photon (and vice versa) as per the rules of energy conservation. It is, however, important to select a suitable pump laser, taking into consideration its wavelength, maximum power, beam quality factor, and polarization. For entangled photon generation, the spatial profile of the beam should be of a high quality and as close to a perfect Gaussian beam as possible when it impinges on the NLC [58]. There is, however, a caveat to this: in cases where shaping the pump beam is of interest to control spatial mode entanglement, one would prefer to not have a Gaussian beam impinging on the NLC. In this case, the SPDC is tailored to a specific desired state through shaping the pump light field (for full details on a basis-independent approach to pump shaping, see [59]). In this tutorial, we focus on a pump beam profile as close to a Gaussian beam as possible.
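The energy-conservation constraint linking the pump and SPDC wavelengths can be checked with a short calculation. The sketch below (the wavelength values are illustrative) computes the idler wavelength from chosen pump and signal wavelengths via $1/\lambda_p = 1/\lambda_s + 1/\lambda_i$.

```python
def idler_wavelength(pump_nm, signal_nm):
    """Idler wavelength (nm) from energy conservation: 1/lp = 1/ls + 1/li."""
    return 1.0 / (1.0 / pump_nm - 1.0 / signal_nm)

# Degenerate case: a 405 nm pump yields signal and idler at 810 nm each.
print(idler_wavelength(405.0, 810.0))   # 810.0

# Non-degenerate example: a 405 nm pump with a 545 nm signal
# gives an idler in the infrared, near 1577 nm.
print(round(idler_wavelength(405.0, 545.0), 1))
```

Phase matching selects which of the energy-allowed wavelength pairs are actually emitted, so in practice this calculation is combined with the phase-matching conditions of the chosen NLC.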

The type of laser dictates the beam quality: typical gas, solid-state, and fiber lasers have good beam quality, whereas that of diode lasers is often less than adequate. Diode lasers are, however, more popular due to their compactness and efficiency [60]. There is a solution: a spatial filter is used prior to pumping the NLC. A spatial filter can be constructed from a $4f$ system prior to the crystal where a pinhole (fixed-size aperture) is placed in the Fourier plane. Higher-order modes are removed if the parameters of the filter optimally match the incoming beam, and only the fundamental mode is transmitted through the filter. Figures 5(a) and 5(b) show the beam profile of a diode laser centered at a wavelength of $\lambda = 405\;{\rm nm} $ before and after a spatial filter, respectively. Reference [61] is a good resource for complete details on spatial filtering. While only the fundamental mode is transmitted in the optimal case, this comes at a loss of laser power; the power must be measured before and after spatial filtering to determine whether there is adequate power to pump the crystal and generate entangled photons. Typically, one would expect a laser power at or above 100 mW for quantum ghost imaging. The power required for the experiment depends, however, on the laser type, crystal type, spatial light modulators (SLMs) in use, noise, and many other aspects of the system. Considering how much power the SLM can withstand, and whether the images are noisy, it may be appropriate to lower the laser power, often with neutral density filters placed in the beam path prior to pumping the crystal. While Figs. 4 and 5 only show the transmission graph of a bandpass filter and the effect of spatial filtering, they are included for completeness, as they provide valuable reference points for graduate students who are new to the field of quantum ghost imaging.
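As a rough guide to choosing the pinhole for such a $4f$ spatial filter, one can estimate the focused spot size of an ideal Gaussian beam and choose the pinhole slightly larger, so that the fundamental mode passes while higher-order content is blocked. The sketch below assumes an ideal Gaussian input beam; the factor of 1.5 is a common rule of thumb, not a universal value, and the numbers are illustrative.

```python
import math

def focused_waist(wavelength_m, focal_length_m, input_waist_m):
    """Waist radius of an ideal Gaussian beam at the focus of a lens:
    w_f = lambda * f / (pi * w_in)."""
    return wavelength_m * focal_length_m / (math.pi * input_waist_m)

# Illustrative numbers: 405 nm diode laser, f = 50 mm lens, 0.4 mm input waist.
w_f = focused_waist(405e-9, 50e-3, 0.4e-3)   # ~16 µm waist radius at the focus
pinhole_diameter = 1.5 * 2 * w_f             # rule-of-thumb pinhole size
print(f"focused waist ~{w_f*1e6:.1f} µm, pinhole ~{pinhole_diameter*1e6:.0f} µm")
```

A pinhole much smaller than this clips the fundamental mode and wastes pump power; one much larger passes the higher-order content the filter is meant to remove.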

 figure: Fig. 5.

Fig. 5. Beam profiles of a diode laser centered at a wavelength of $\lambda = 405\;{\rm nm} $, (a) before and (b) after spatial filtering.


C. Non-linear Crystals

Entangled photons in SPDC are stimulated by vacuum fluctuations in the NLC. Photon pair production is restricted by conditions imposed by both the NLC and the pump photon. SPDC photons can be produced in any combination that satisfies energy and momentum conservation.

There is a wide range of non-linear crystals (NLCs) that have been used for SPDC. Examples of commonly used NLCs in quantum optics are beta-barium borate (BBO), potassium titanyl phosphate (KTP), and potassium dihydrogen phosphate (KDP). Each type of crystal has its own efficiency as a result of its intrinsic properties. There are, however, a number of criteria to consider before choosing a crystal. Phase-matching can be achieved by tuning the birefringence of the crystal in two different ways. The first method exploits the angular dependence of the crystal planes with respect to the incoming pump photons: the incident polarizations see different refractive indices, depending on the angle of incidence as well as the polarization of the incident photons. The crystal is rotated so that the angle of incidence is tuned for optimal phase-matching conditions; this is illustrated in Fig. 6(a). The second method makes use of periodic poling, as illustrated in Fig. 6(b), and is temperature controlled for optimal phase-matching conditions. The poling within each domain of the crystal is altered to compensate the wavevector mismatch such that the mismatch in the adjoining domain has the opposite sign. It is for this reason that periodically poled crystals are considered more efficient than other crystal types. The poling orientation is indicated by the arrow direction in each domain of the crystal in Fig. 6(b).
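For the periodically poled case, the poling period $\Lambda$ is chosen to cancel the collinear wavevector mismatch at first order, $\Delta k = k_p - k_s - k_i - 2\pi/\Lambda = 0$. A minimal sketch of this quasi-phase-matching condition follows; the refractive indices used here are illustrative placeholders, not Sellmeier values for any particular crystal.

```python
def poling_period_um(n_p, n_s, n_i, lp_um, ls_um, li_um):
    """First-order QPM poling period (µm) for collinear SPDC:
    Lambda = 2*pi / (k_p - k_s - k_i), with k = 2*pi*n/lambda."""
    return 1.0 / (n_p / lp_um - n_s / ls_um - n_i / li_um)

# Illustrative indices for a degenerate 405 nm -> 810 nm + 810 nm process.
Lambda = poling_period_um(n_p=1.84, n_s=1.75, n_i=1.75,
                          lp_um=0.405, ls_um=0.810, li_um=0.810)
print(f"poling period ~{Lambda:.2f} µm")   # a few micrometers, as is typical
```

The temperature tuning mentioned above works because the refractive indices (and hence $\Delta k$) are temperature dependent, allowing fine adjustment around the fabricated poling period.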

 figure: Fig. 6.

Fig. 6. Methods to achieve optimal phase-matching by (a) angle tuning and (b) periodic poling (temperature tuning).


Sandwich crystals are specialized NLCs used to produce polarization entangled photons of a collinear geometry. As an example, two adjacent Type-I NLCs are optically glued together with their optic axes aligned in perpendicular planes. Physically, this means that if the first crystal’s optic axis and the pump photon define the vertical plane, then the second crystal’s optic axis and the pump photon define the horizontal plane. If the incoming photons are either horizontally or vertically polarized, then the sandwich crystal acts as a single Type-I crystal, but if the incoming photons are diagonally or anti-diagonally polarized, the pump photon has an equal probability of downconverting in either crystal, resulting in polarization entangled photons of a collinear geometry. More details on the two-crystal geometry are contained in [62].

SPDC is the most efficient method for producing entangled photons; however, the probability of a spontaneous decay into a pair of entangled photons is very low: approximately 1 in every ${10^{12}}$ pump photons is downconverted [40]. Therefore, a very sensitive electron-multiplying charge-coupled device camera is needed to image the ring of photons. In Fig. 7, the far-field images of the SPDC transitions (from left to right) of a collinear geometry to a non-collinear geometry from a Type-I BBO sandwich NLC, tilted at different angles, are shown. The tilt angle determines whether the geometry is collinear, non-collinear, or something in between. The pump photon was at a wavelength of $\lambda = 405\;{\rm nm} $ while the SPDC photons were centered at wavelengths of 545 nm and 1570 nm, respectively. Shown in Fig. 7 is the downconverted light seen using a bandpass filter centered at $\lambda = 545\;{\rm nm} $ with a FWHM of 6 nm. Additionally, it is important to consider the crystal dimensions; mainly, the crystal length enters the calculation of the spatial resolution, field of view, and number of spatial modes, as discussed in Section 2.C.

 figure: Fig. 7.

Fig. 7. Far-field images of the SPDC transitions (from left to right) of a collinear geometry to a non-collinear geometry from a Type-I BBO sandwich NLC tilted at different angles. In this case, the tilt angle determines the geometry.


D. Spatial Light Modulators

Spatial light modulators (SLMs) are highly efficient at preserving signal and are, therefore, the photon modulation device of choice in quantum optics. The liquid crystals are integrated onto a silicon substrate where the light field is modulated by a double-pass effect: the light is reflected behind the liquid crystal cells and is modulated upon reflection from the screen. A SLM and a schematic structure of the liquid crystal display for a reflective SLM screen are shown in Fig. 8. A 2D amplitude grating is formed by the pixel array, as the spacing between each pixel reduces the fill factor of the device, and the light is then diffracted in many directions. This pixelation reduces the efficiency of the device, but to a lesser extent than in other light modulating devices. Most modern SLMs are phase-only modulators; fortunately, there are several clever techniques to switch from a phase-only response to an amplitude or complex-amplitude response [63].

 figure: Fig. 8.

Fig. 8. Image of a typical Holoeye PLUTO 2 spatial light modulator. The inset shows the schematic structure of the liquid crystal display for a reflective SLM, taken from [63].


In all-digital quantum ghost imaging, the phase-only SLM is commonly used as an amplitude modulating device by leveraging a clever technique. The SLM, when connected to a PC, acts as an external screen display. What is displayed on the SLM is often termed a hologram; the holograms may be created in a programming language of the user’s choice, saved as an image, and displayed on the SLM via the extended screen function. Essentially, the transmission functions of the digital object and projective mask are encoded onto the SLM. The concept of adding a phase grating to the phase distribution during generation of a hologram is important when considering how to tailor the amplitude of a light field with a phase-only SLM. There are two important details to remember. First, applying a grating to a phase hologram directs the modulated photons to a certain position according to the spatial distribution of the phase hologram. This allows the first diffracted order to be directed away from the unmodulated order; by displaying the grating at only certain positions, only the photons impinging on a position with the grating are diffracted. Second, the depth of the grating determines the efficiency with which the photons are directed to the desired position. By varying the depth of the grating with its transverse position $(x,y)$, it is possible to structure the amplitude profile of the outgoing light field. For quantum ghost imaging experiments, it is important to direct the first diffracted order of light away from the unmodulated zeroth order.

By judiciously either displaying a grating or not in the form of a projective mask, the SLM then modulates the amplitude of the impinging photons. As the modulated photons are separated from the unmodulated photons, it is possible to collect or detect only those photons that are modulated. Figure 9 illustrates the creation of an example hologram typically used for all-digital ghost imaging. For both the digital object and projective masks, a blazed grating is added. A blazed grating is a linear phase ramp and has the added advantage of diffracting, in the ideal case, 100% of the light into the first order. While there are other grating types, such as binary and sinusoidal gratings, they are not as efficient as blazed gratings. More details on shaping light, holograms, and gratings are contained in [63].
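The hologram construction of Fig. 9 can be sketched numerically: a blazed grating (a linear phase ramp, wrapped modulo $2\pi$) is displayed only where the binary object or mask is "on," so that only those photons are diffracted into the first order. This is a minimal illustration; the grating period and mask are arbitrary choices, and a real SLM additionally requires mapping phase to gray levels according to the manufacturer's calibration.

```python
import numpy as np

def blazed_hologram(mask, period_px=8):
    """Phase hologram (radians in [0, 2*pi)): blazed grating where mask == 1,
    flat phase where mask == 0."""
    h, w = mask.shape
    x = np.arange(w)
    ramp = (2 * np.pi * x / period_px) % (2 * np.pi)   # linear ramp, wrapped
    grating = np.tile(ramp, (h, 1))
    return grating * mask                              # grating only on 'on' pixels

# Example: a simple binary object (a centered square) on a 64 x 64 grid.
mask = np.zeros((64, 64))
mask[16:48, 16:48] = 1
hologram = blazed_hologram(mask, period_px=8)
```

Photons hitting the flat-phase regions remain in the undiffracted zeroth order and are not collected, which is exactly the binary amplitude modulation described above.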

 figure: Fig. 9.

Fig. 9. To generate the hologram to be displayed on the SLM, the digital object (or projective mask) is combined with a phase grating also known as a blazed grating.


 figure: Fig. 10.

Fig. 10. The digital object to be imaged is superimposed on a Gaussian distribution, which is representative of the SPDC distribution containing the idler photons (left). A projective mask is superimposed on a Gaussian distribution representative of the SPDC distribution containing the signal photons (right). The mask is smaller than the Gaussian distribution, and the object is smaller than the mask. In this example, the SPDC geometry is collinear. Both SPDC distributions are the same size.


While in the all-digital case it is possible to engineer the objects to be smaller than the imaging area, the ideal case is to design the magnification system such that the area of the Gaussian distribution, representative of the SPDC photon distribution, is larger than that of the object to be imaged. When designing the magnification system, one must carefully account for the size and number of pixels on the SLM screen, as well as the total area available on the SLM; the available screen area and number of pixels determine the maximum size of the object. The magnified SPDC photon distribution must be smaller than the SLM screen, while the object of interest must be smaller than the area covered by this distribution. An example of this is shown in Fig. 10, where the object is superimposed on a Gaussian distribution representative of the SPDC distribution containing the idler photons. Similarly, a projective mask is superimposed on a Gaussian distribution, of the same size, representative of the SPDC distribution containing the signal photons. The mask is smaller than the Gaussian distribution, and the object is smaller than the mask; in this example, the SPDC geometry is collinear. In degenerate quantum ghost imaging, one may use a single SLM with the screen divided into two equal parts: one half displays the digital object to be imaged while the other half displays the projective masks used to spatially resolve the signal photon. In so doing, one reduces the optical resources required for quantum ghost imaging. In non-degenerate quantum ghost imaging, it is, however, necessary to employ two SLMs, as each SLM is designed for a specific wavelength range (e.g., visible, near-infrared, infrared).

E. Projective Masks

Raster scanning was historically used to detect the signal photon, with the required spatial resolution achieved by scanning the transverse plane [51,64]. One workaround to a mechanically scanned single-pixel detector was the introduction of projective masks, such as single-pixel masks, implemented on a spatial light modulating device; this avoids the instability caused by a mechanical scanning single-pixel detector. The natural choice would be to emulate a small scanning detector by using single-pixel masks to accomplish single-pixel scanning. Employing a single-pixel mask, however, considerably decreases the collection efficiency and increases the time required to reconstruct the image (extended integration times per mask are required) [31]; i.e., considerably more time must be spent on each position of the transverse plane to detect enough photons to establish a signal, and if this is not done, the image quality is compromised. Instead, 2D projective masks (${P_i}(x,y)$, where $i$ represents the ${i}$th mask in the sequence) were developed that consist of pixels judiciously turned on and off across a grid of pre-determined size (set by the SLM screen size). This was first implemented by Duart et al. in the Walsh-Hadamard basis [65], followed by Shapiro et al. in the random pseudo-basis [24]. Here, the signal photons are projected onto patterned projective masks that are then weighted by a factor determined by the measured coincidences between the signal and idler photons. Summation over these weighted projective masks then allows for reconstruction of an image of the object.

Random patterns (random projective masks) consist of randomly distributed binary pixels that are either turned on or off [24]. Because the patterns are non-orthogonal and uncorrelated with each other, they do not constitute a complete spatial basis; the random basis is, therefore, a pseudo-basis, and many more measurements are required for a general image solution. It is commonly accepted that approximately twice as many random masks are needed: for a complete image solution in the random basis, $2{N^2}$ masks or more are required, where $N \times N$ is the number of pixels in the image and, subsequently, in each projective mask, and the reconstructed image is nonetheless noisy and, therefore, of poor quality [16]. Small improvements can be seen if half the pixels in each pattern are activated [66]. Figure 11 shows examples of random patterned projective masks in which 50% of the pixels are activated in a random manner.
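A set of random binary masks with 50% of the pixels activated can be generated in advance as follows. This is a minimal sketch; the fixed seed is used only so the pseudo-basis is reproducible between generation and display, and the choice of seed is arbitrary.

```python
import numpy as np

def random_masks(n_masks, N, fill=0.5, seed=0):
    """Generate n_masks random binary masks of N x N pixels with a given
    fraction of pixels turned on."""
    rng = np.random.default_rng(seed)
    return (rng.random((n_masks, N, N)) < fill).astype(np.uint8)

# For a 32 x 32 pixel image, roughly 2*N^2 = 2048 random masks are used.
N = 32
masks = random_masks(2 * N**2, N)
print(masks.shape)    # (2048, 32, 32)
print(masks.mean())   # close to 0.5 (50% fill on average)
```

Because the set is only a pseudo-basis, adding further masks beyond $2N^2$ continues to improve the reconstruction, at the cost of measurement time.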

 figure: Fig. 11.

Fig. 11. Examples of $32 \times 32\;{\rm pixel}$ resolution random patterned projective mask types used to spatially resolve the signal photon.


The Walsh-Hadamard basis is an orthogonal basis where each element is generated from the Hadamard matrix of a specific order $N$ when the number of pixels making up the image is ${N^2}$. The Hadamard transform is commonly used for recording spatial frequencies [67,68], or for multiplexing the direction of the illumination of an object [69,70]. Walsh-Hadamard projective masks are generated by extracting the Walsh functions from the Hadamard matrix of order $N$: performing the outer product between columns of the Hadamard matrix yields a complete set of ${N^2}$ orthogonal Walsh-Hadamard projective masks of $N \times N$ pixel resolution [65]. Choosing the order of the Hadamard matrix, therefore, sets the mask resolution. The added advantage of employing the Walsh-Hadamard basis as the patterned projective masks to spatially resolve the signal photon is its orthogonality: an orthogonal basis set of patterned projective masks systematically samples the object, and an image is acquired in ${N^2}$ measurements for an $N \times N$ pixel general image solution. An example of how the Walsh-Hadamard patterns are superimposed on the SPDC distribution of the signal photon is shown in Fig. 10.
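The outer-product construction described above can be sketched as follows, using Sylvester's recursion to build the Hadamard matrix (so $N$ must be a power of two in this sketch); the $\pm 1$ entries are mapped to binary on/off pixels for display on the SLM.

```python
import numpy as np

def hadamard_matrix(N):
    """Hadamard matrix of order N (N a power of two) via Sylvester's recursion."""
    H = np.array([[1]])
    while H.shape[0] < N:
        H = np.block([[H, H], [H, -H]])
    return H

def walsh_hadamard_masks(N):
    """Complete set of N^2 orthogonal N x N masks from outer products of
    Hadamard matrix columns; entries mapped from {-1, +1} to {0, 1}."""
    H = hadamard_matrix(N)
    masks = [np.outer(H[:, i], H[:, j]) for i in range(N) for j in range(N)]
    return [(m + 1) // 2 for m in masks]   # -1 -> 0 (off), +1 -> 1 (on)

masks = walsh_hadamard_masks(32)
print(len(masks))   # 1024 masks for a 32 x 32 pixel image
```

In practice the negative ($-1$) parts are handled either by this binary mapping plus a background-subtraction step or by displaying each pattern and its complement; the mapping shown here is the simplest choice.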

The mask resolution, independent of the type of mask chosen to spatially resolve the signal photons, determines the resolution at which the object will be imaged. A higher resolution results in a larger number of basis elements, so an increased number of masks is needed to reconstruct the image, with direct consequences for the reconstruction time. For a complete, or general, image solution in the Walsh-Hadamard basis, ${N^2}$ masks are required; a $32 \times 32\;{\rm pixel}$ image would, therefore, require 1024 masks. Examples of $32 \times 32\;{\rm pixel}$ Walsh-Hadamard masks are shown in Fig. 12.

 figure: Fig. 12.

Fig. 12. Examples of $32 \times 32\;{\rm pixel}$ resolution Walsh-Hadamard patterned projective mask types used to spatially resolve the signal photon.


Masks are generated as a pre-measurement step, i.e., they are generated prior to starting the experiment and stored in memory or in a local directory. Many programming languages can be used to generate the masks as they are defined above, and they often come with native functions allowing quick and easy projective mask generation. As an example, in the MATLAB programming language, calling the function hadamard(N) generates a Hadamard matrix of order $N$, where $N$ defines the number of pixels in the mask. Similarly, to generate a random binary mask, one would threshold the output of rand(N,N) in MATLAB, e.g., rand(N,N) > 0.5.

Importantly, the projective masks used to reconstruct the image must form a complete basis to completely reconstruct the image, i.e., to reconstruct a general image solution. As with single-pixel scanning systems, scanning through a series of masks results in a very lengthy image reconstruction process, which is not optimal; reducing this time is a great area of interest within the ghost imaging community, and by exploiting artificial intelligence (AI) techniques, it is possible to reduce image reconstruction times [27,28,30,71–73]. Each projective mask is displayed on the SLM for a pre-determined time, known as the integration time. It is important to make the integration time per mask as short as possible, yet sufficiently long to collect enough photons to provide meaningful information: if the integration time is too short, the detected signal is dominated by unwanted photons/noise, while longer integration times are favorable for signal quality yet inefficient. A middle ground must be found to set the optimal integration time for acquiring the experimental results.
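A simple way to reason about this trade-off is through Poisson counting statistics: for a mean coincidence rate $R$ and integration time $t$, the expected count is $Rt$ and the shot-noise-limited signal-to-noise ratio scales as $\sqrt{Rt}$. The sketch below (the rates are illustrative, not measured values) inverts this to estimate the integration time needed per mask for a target SNR.

```python
def integration_time_for_snr(rate_hz, target_snr):
    """Integration time (s) such that sqrt(rate * t) reaches target_snr,
    assuming shot-noise-limited Poisson counting."""
    return target_snr**2 / rate_hz

# Illustrative: 500 coincidences/s per mask, target SNR of 10.
t = integration_time_for_snr(500.0, 10.0)
print(f"~{t:.2f} s per mask")                  # 0.20 s per mask
# Total acquisition time for 1024 Walsh-Hadamard masks at this setting:
print(f"~{1024 * t / 60:.1f} min total")
```

This estimate ignores background and accidental coincidences, so in practice the chosen integration time is verified experimentally.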

F. Optic Fibers

In typical optics experiments, and especially in quantum optics, the choice of optic fiber directly impacts the number of modes available in the system, especially in cases where high dimensionality is important. In quantum ghost imaging, however, the choice of fiber sets another important parameter: the available imaging area. The available imaging area is in fact related to the number of modes in the optical system, as it is indicative of the number of pixels available to image with. Specific to quantum ghost imaging, and ghost imaging as a whole, is the requirement of a large-area single-element detector in the object arm, i.e., a detector that does not preserve or collect spatial information about the photons that have interacted with the object. This is easily accomplished with an optic fiber. Typically, multimode fibers (MMFs) coupled to an APD are used, as they have a large collection area. It is, however, possible to employ a single mode fiber (SMF) or even a few mode fiber (FMF), depending on the imaging area required. There is a trade-off: by increasing the fiber core size (or mode field diameter), the imaging area is enlarged at the expense of collecting unwanted photons, which increases the noise and thus produces noisy images. While SMFs may seem like the better choice, those that are commercially available have very small core diameters, leading to an available imaging area that is smaller than most objects; this makes them an impractical choice, as a general image solution is not possible.

The most appropriate fiber is chosen based on the core size or mode field diameter: for a smaller but less noisy imaging area, SMFs can be employed in each arm, while a larger fiber (such as a MMF or FMF) provides a larger imaging area at the expense of reconstructing a noisier image. There is, therefore, a trade-off between the imaging area and the amount of noise in the reconstructed image. Making the object too small means that finer details are lost; however, if the imaging area is made too large, the acquired images may have the signal buried heavily under noise. Choosing a fiber with a larger core and obtaining noisier images is the preferred choice, as there are many image processing techniques that are able to filter the noise out post-measurement. Importantly, the fibers need to be of the same length, as the entangled photons must arrive at each avalanche photodiode at the same time (to within the coincidence detection window). After the fiber size is chosen, a test image can be reconstructed at low resolution so as to correctly determine the size of the available imaging area; the imaging area remains constant regardless of the image resolution. It is also important to note that image resolution is defined as pixels per inch (PPI) [74], which means that, although the imaging area remains constant for a given fiber size, the resolution of the image and/or projective mask can change depending on the desired resolution. Note that this is independent of the pixels in use on the SLM, which remain constant. The projective mask is built from super-pixels (several pixels on the SLM make up one pixel of the projective mask), which is explained further in Section 4.A. Objects are made smaller than the imaging area to ensure an even illumination across the object, which allows the reconstructed image to better resolve finer details.

G. Simulating the Image

The detection probability ($|\eta {|^2}$) in quantum ghost imaging, or in any SPDC experiment [41], is calculated from the overlap integral, which is given by

$$\eta = \int f({\textbf x}){g^2}({\textbf x}){M_p}({\textbf x}){M_{\rm{object}}}({\textbf x}){M_{\rm{mask}}}({\textbf x})\,{\rm d}{\textbf x},$$
where $f({\textbf x})$ describes the crystal; however, in the thin crystal approximation, when the crystal is much thinner than the Rayleigh length of the pump mode, $f({\textbf x}) \approx 1$. ${M_p}({\textbf x})$ describes the mode that pumps the crystal, which is usually a Gaussian mode, apart from when the pump is shaped prior to the crystal (for pump shaping, see [59]). ${M_{\rm{object}}}({\textbf x})$ and ${M_{\rm{mask}}}({\textbf x})$ describe the transmission functions of the digital object and the projective mask, respectively. ${M_{\rm{object}}}$ is the same as the digital object transmission function $O(x,y)$ and is the quantity we wish to determine. ${M_{\rm{mask}}}({\textbf x})$ is also referred to as ${P_i}(x,y)$ later on, as described in Section 3.E. ${g^2}({\textbf x})$ describes the fibers used for photon collection. For SMFs, the fiber function is the Gaussian mode, as these fibers accept only the fundamental Gaussian mode, described by
$$g({\textbf x}) \propto \exp\! \left({- {{\left({r/{w_0}} \right)}^2}} \right).$$

If FMFs or MMFs are used, ${g^2}({\textbf x})$ changes according to the modes accepted by the specific fiber. These details are usually obtained from the manufacturer when purchasing the fiber. Importantly, Eq. (8) assumes that the quantum ghost imaging optical setup was built in the position configuration. If built in the momentum configuration, it is important to consider the necessary Fourier transforms before calculating the overlap integral.

To simulate what the theoretical quantum ghost image would look like, one would calculate the detection probability per projective mask in the sequence, and the image ($I(x,y)$) is reconstructed as a linear combination of the projective masks weighted by a coefficient determined by the detection probability. The detection probability is directly proportional to the experimental coincidence counts (per projective measurement):

$$|{\eta _i}{|^2} \propto {c_{\!i}},$$
where ${c_{\!i}}$ represents the photons that were detected in coincidence for the ${i}$th measurement. Details of several image reconstruction algorithms are discussed in Section 5. In the simulated case, shown in Fig. 13, the detection probability is calculated from the overlap integral while the coefficient used to weight each projective mask is determined by the image reconstruction algorithm detailed in Eq. (16), where the calculated detection probability was substituted for the coincidence counts according to Eq. (10). To simulate the reconstructed image using different coefficients to weight the projective masks, we refer the reader to Section 5.
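The simulation pipeline described above can be sketched end to end: discretize the overlap integral to obtain a detection probability per mask, then form the image as a linear combination of the masks. The weighting used here is a simple mean-subtracted correlation sum, a common baseline choice standing in for the algorithm of Eq. (16); the Gaussian envelope width and the object are illustrative.

```python
import numpy as np

N = 32
y, x = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
gauss2 = np.exp(-2 * (x**2 + y**2) / 0.5**2)   # Gaussian envelope (g^2 * M_p), illustrative width

obj = np.zeros((N, N))                          # M_object: binary square, illustrative
obj[10:22, 10:22] = 1

def hadamard_matrix(n):
    """Hadamard matrix of order n (a power of two) via Sylvester's recursion."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

H = hadamard_matrix(N)
masks = [(np.outer(H[:, i], H[:, j]) + 1) // 2 for i in range(N) for j in range(N)]

# Discretized overlap integral per mask -> detection probability |eta_i|^2
probs = np.array([np.abs(np.sum(gauss2 * obj * P))**2 for P in masks])

# Mean-subtracted correlation reconstruction (baseline weighting)
weights = probs - probs.mean()
image = sum(w * P for w, P in zip(weights, masks))
image = (image - image.min()) / (image.max() - image.min())   # normalize to [0, 1]
```

Substituting measured coincidence counts ${c_i}$ for the calculated probabilities, per Eq. (10), turns this same loop into the experimental reconstruction.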
 figure: Fig. 13.

Fig. 13. A simulated image ($I(x,y)$), with resolution of $32 \times 32\;{\rm pixels}$, is reconstructed as a linear combination of each projective mask (${P_i}(x,y)$) weighted by a coefficient determined by the detection probability. The calculated detection probability is proportional to the experimental coincidence counts.


4. OPTICAL SETUP

Now that the reader is familiar with the components required to get started with all-digital quantum ghost imaging, we move on to the practicalities with an experimental example. Following the experimental example, important experimental considerations, such as alignment, coincidences, and noise in the system, will be highlighted and discussed. It is important to note that, in all-digital quantum ghost imaging, the imaging process is enhanced by utilizing digital technology, such as spatial light modulators, to enable a more efficient image reconstruction method. The use of these modulating devices in the digital approach allows for stability, particularly in proof-of-principle experiments. In the all-digital approach, an object is encoded onto the device instead of placing a physical object in the arm of the signal photons. This digital object allows for comparison of the reconstructed image to the ground truth (the object), which allows the quality of proof-of-principle experiments to be quantified. Additionally, the scanning system is also controlled by digital devices; this avoids the instability associated with a single-pixel scanning detector that scans the transverse plane in the arm of the idler photons. Furthermore, while most spatial light modulators are phase-only devices, a simple trick of either displaying a diffraction grating or not, while encoding a uniform phase across the screen, allows for binary amplitude modulation.

 figure: Fig. 14.

Fig. 14. Schematic diagram of the implemented quantum optical setup for degenerate quantum ghost imaging. Degenerate entangled photons are produced at the NLC. A bandpass filter (BPF) centered at $\lambda = 810\;{\rm nm} $ filters out any unconverted photons while a half-wave plate (HWP) rotates the polarization for optimal modulation by the SLMs. A 50:50 beam splitter is used to spatially separate the entangled signal and idler photons. Each photon impinges on a SLM displaying either the object or projective mask. The photons are collected by coupling each beam to a fiber connected to an APD, and the photons that are detected in coincidence are counted by a coincidence counting device (CC). ${{L}_i}$ represents lenses.


A. Experimental Setup: Practical Example

In this section, we present a practical example of setting up and running a quantum ghost imaging optical experiment. Figure 14 shows a schematic diagram of a typical degenerate quantum ghost imaging optical setup. Light from a diode laser at a wavelength of $\lambda = 405\;{\rm nm} $ was used to pump a temperature-controlled Type-I periodically poled potassium titanyl phosphate (PPKTP) NLC phase-matched for degenerate SPDC. The temperature was set to obtain collinear emission of degenerate entangled bi-photons at wavelengths of $\lambda = 810\;{\rm nm} $ each, via SPDC. Any unconverted pump photons were filtered out by means of a bandpass filter (BPF) centered at a wavelength of $\lambda = 810\;{\rm nm} $ (${\rm FWHM} = 10\;{\rm nm} $). Vertically polarized downconverted photons emitted from the NLC were then rotated to horizontal polarization by a half-wave plate (HWP) to ensure optimal modulation of the impinging photons by the spatial light modulator (SLM) situated in each arm. The entangled photon beam was separated into two paths by a 50:50 beam splitter (BS), and each arm was then directed to a SLM where the required holograms were displayed. The SLM in one arm displayed the digital object to be imaged while the SLM in the other arm displayed the required projective masks; as the projective masks form a sequence, each mask was displayed on the SLM for a fixed time (i.e., the integration time). A blazed grating was added to both holograms to separate the first diffracted order from the zeroth, unmodulated, order. Photons from the first diffracted order, in each arm, were then coupled into multimode optical fibers (MMFs) of core size 62.5 µm, connected to avalanche photodiodes (APDs) for photon detection. Coincidences were measured within a 25 ns coincidence detection window by the coincidence counting device (CC).
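When choosing the coincidence window, it helps to estimate the rate of accidental coincidences, which for uncorrelated singles rates $R_s$ and $R_i$ and window $\tau$ is approximately $R_{\rm acc} \approx R_s R_i \tau$. The sketch below uses illustrative singles rates, not values measured in this experiment.

```python
def accidental_rate(singles_signal_hz, singles_idler_hz, window_s):
    """Approximate accidental coincidence rate for uncorrelated detections:
    R_acc ~ R_s * R_i * tau."""
    return singles_signal_hz * singles_idler_hz * window_s

# Illustrative: 50 kHz singles in each arm, 25 ns coincidence window.
r_acc = accidental_rate(50e3, 50e3, 25e-9)
print(f"~{r_acc:.1f} accidentals per second")   # 62.5 per second
```

If the true coincidence rate is not well above this accidental rate, the window should be narrowed or the singles rates reduced before imaging.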

The crystal, SLMs, and fiber entrances were placed at conjugate planes, resulting in a position configuration. Between the laser and the crystal, a demagnifying $4f$ system decreased the beam size from 800 µm to 400 µm, so that the beam in the crystal was half of its original size; from the crystal to the SLMs, a $5 \times$ magnification factor brings the beam impinging on each SLM screen to a size of 2 mm. From the SLMs to the fibers, the beam was again demagnified, by a factor of 0.003, to ensure optimal photon collection by the MMFs. In a degenerate setup, a single SLM can be, and was, used, with the screen divided in half: one half displayed the digital object to be imaged while the other half displayed the projective mask. We used a HOLOEYE Pluto 2 SLM with a screen size of $1920 \times 1080\;{\rm pixels}$ and a pixel size of 8 µm. An area of $960 \times 960\;{\rm pixels}$ of the SLM screen was used to display the object and the projective mask, respectively. Each digital display was resized to fit the $960 \times 960\;{\rm pixels}$, effectively creating super-pixels. A projective mask resolution was chosen, e.g., $32 \times 32\;{\rm pixels}$, and resizing this to fit the chosen area of the SLM screen creates super-pixels whereby each super-pixel contains $30 \times 30$ SLM pixels (960/32 = 30 per side). The resolution of the projective mask is effectively $32 \times 32$ super-pixels, and the resulting reconstructed image has the same resolution as the projective mask. The resolution of the object is chosen as the exact resolution of the SLM screen so as to emulate the qualities of a physical object whose resolution might be at an atomic level. The physical size of the object was restricted to 700–800 µm so as to ensure even illumination of the object by the SPDC photon distribution impinging on the SLM screen.
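
The super-pixel construction described above can be sketched in a few lines. The following is a minimal illustration (Python/NumPy, chosen since Python is a common language for controlling these experiments; it is not part of any specific SLM control software): a low-resolution projective mask is upscaled to the SLM display area with a Kronecker product, so that each mask pixel becomes a block of SLM pixels.

```python
import numpy as np

def to_superpixels(mask, screen_px=960):
    """Upscale a low-resolution projective mask to the SLM display area by
    replicating each mask pixel into a super-pixel block; for a 32x32 mask
    on a 960x960 region, each super-pixel spans 960/32 = 30 SLM pixels."""
    n = mask.shape[0]
    assert screen_px % n == 0, "mask resolution must divide the display area"
    block = screen_px // n
    return np.kron(mask, np.ones((block, block), dtype=mask.dtype))

# e.g., a random binary mask at the chosen projective-mask resolution
mask = (np.random.rand(32, 32) > 0.5).astype(np.uint8)
hologram = to_superpixels(mask)  # shape (960, 960)
```

A blazed grating would then be added to `hologram` before display, as described above.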

For a non-degenerate optical setup, one would replace the 50:50 beam splitter with a dichroic mirror, which reflects one wavelength and transmits the other. Bandpass filters for respective wavelengths are required for each of the different arms. Additionally, a non-degenerate setup would require the use of two SLMs, one for each wavelength.

B. Alignment

It is common to align quantum optical setups both forward and backward to ensure that each photon of the entangled photon pair travels along its individual path to the detection system. In the forward direction, the SPDC alignment depends on the SPDC geometry (collinear or non-collinear), the simplest being collinear. In the collinear geometry, the linear momentum of the entangled photon pair is in the same direction as that of the pump photon. Due to this, one may use the pump laser (at a reduced power—10 mW or less) as a guide to aligning the optical system. For the non-collinear case, the entangled photons do not travel in the same direction as the pump beam but rather in opposite directions to either side of the pump beam; it is, therefore, important to select opposite sides of the ring (illustrated in Fig. 7) when aligning for the non-collinear case. In this case, the pump cannot act as the guide for alignment, and one will need to use the SPDC ring with a sensitive CCD to align both arms.

Aligning backward entails connecting an appropriate laser to the fibers in each arm and aligning the system in the backward direction (mainly adjusting the position of the fiber coupler ports) so that there is complete and precise overlap between the forward and backward paths. An appropriate laser is one whose wavelength is the same as, or close to, the SPDC wavelength in each arm of the experiment. When the forward and backward alignment are perfectly overlapped, a signal is visible at the detector. This then allows one to optimize the coupling of photons at the fibers to increase the signal. This may be an iterative process, as one would need to systematically adjust mirrors and fiber couplers to ensure optimal signal detection. Adjusting the plane of the crystal, as well as tuning the crystal (angle or temperature), will increase or decrease the signal; this, too, should be done iteratively until one has pushed the limits and made sure that the maximum detected signal is a global maximum and not a local maximum.

C. Photon Collection

Characteristic of quantum ghost imaging, a bucket detector (a large-area, mode-insensitive detector) is required in the object arm, as the photons that interact with the object are not spatially resolved. As aforementioned, this detection is accomplished with an optical fiber, usually a MMF, connected to an APD. In the all-digital case, the projective mask used to spatially resolve the signal photon (the photon that has not interacted with the object), in combination with an optical fiber, constitutes a detector with spatial resolution. As the projective mask is 2D and covers the same surface area as the object (on the SLM), it is advisable to use an optical fiber of the same core size as in the object arm. Using fibers of the same core size or mode field diameter in each arm ensures that all photons that have, respectively, interacted with the object and projective mask are collected for efficient and accurate coincidence counting.

The detection system in each arm, therefore, comprises an optical fiber connected to an avalanche photodiode. Each avalanche photodiode is connected to a coincidence counting device to perform coincidence measurements. The coincidence counter (CC) is connected to a PC and controlled by the user’s software of choice. Many counters come pre-programmed with various programming language options; the most common are LabVIEW and Python, although newer CC models may come with their own built-in graphical user interface, which allows for ease of access and control. The user should choose a language in which they are fluent or, for beginners, use the built-in graphical user interface if it is available.

D. Coincidences

Entangled photon detection in quantum experiments is accomplished by distinguishing the time correlations in the signals received by the photon detectors in each arm of the optical setup. By detecting the entangled photons with coincidence counters, the coincidences are measured by looking at the difference in the time delay between each photon detected in the signal arm with each photon detected in the idler arm (or vice versa). Coincidence counters are event timers that will record the time of arrival for each photon signal in each arm, often termed channel 1 and channel 2.

The entangled photons are produced at exactly the same time in the NLC; therefore, they should arrive at the detectors at exactly the same time, provided that the path taken by each photon is of the same length. These entangled photons are identified by taking a histogram, computed by the coincidence counter, of all the respective time differences between the signals counted in the two channels. Due to this time correlation, the entangled photons share the same time difference; thus, the counts in that specific time bin of the histogram build up, while uncorrelated or ambient photons that are not entangled are spread randomly across all the time bins and do not contribute to any specific signal buildup; this is illustrated in Fig. 15. In this manner, the degree of noise can be calculated, which depends on the ratio of detected entangled photons to the ambient or stray photons that have also been detected. Coincidence counters are programmable, whereby signal-to-noise ratios can be thresholded; again, one needs to be familiar with the programming language required to control the coincidence counter.
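
The histogramming step performed by the coincidence counter can be sketched as follows. This is a hedged Python/NumPy illustration with entirely synthetic data (the photon rates and the 3.5 ns path delay are invented for the example; a real counter does this in hardware): correlated pairs pile up in one time bin, while stray photons spread across all bins.

```python
import numpy as np

def coincidence_histogram(t1, t2, bin_ns=1.0, window_ns=25.0):
    """Histogram of arrival-time differences (t2 - t1) within +/- window_ns.
    Correlated pairs pile up in one time bin; uncorrelated photons spread
    roughly uniformly over all bins."""
    t1, t2 = np.sort(np.asarray(t1)), np.sort(np.asarray(t2))
    diffs, j = [], 0
    for t in t1:
        while j < len(t2) and t2[j] < t - window_ns:
            j += 1  # skip channel-2 events that fell behind the window
        k = j
        while k < len(t2) and t2[k] <= t + window_ns:
            diffs.append(t2[k] - t)
            k += 1
    edges = np.arange(-window_ns, window_ns + bin_ns, bin_ns)
    counts, _ = np.histogram(diffs, bins=edges)
    return counts, edges

# Toy data: 2000 pairs over 1 ms, a fixed 3.5 ns path delay in channel 2,
# plus 500 uncorrelated stray photons per channel (all times in ns).
rng = np.random.default_rng(0)
pairs = rng.uniform(0, 1e6, 2000)
t1 = np.concatenate([pairs, rng.uniform(0, 1e6, 500)])
t2 = np.concatenate([pairs + 3.5, rng.uniform(0, 1e6, 500)])
counts, edges = coincidence_histogram(t1, t2)
peak_bin_edge = edges[np.argmax(counts)]  # the bin holding the 3.5 ns delay
```

The dominant bin plays the role of the coincidence peak in Fig. 15; counts in the remaining bins estimate the accidental background.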

 figure: Fig. 15.

Fig. 15. Histogram of the photons detected in coincidence due to the time correlation. The entangled photons are found to have the same time difference and the counts in that specific time bin of the histogram increase while uncorrelated or ambient photons that are not entangled or correlated are spread randomly across all the time bins and do not contribute to any specific signal buildup.

Download Full Size | PDF

E. Noise

Considering the statistical nature of entangled photons, we know that the generation of each entangled pair via SPDC is independent of the next, as each pair is generated spontaneously. The generation times of entangled photon pairs are, therefore, governed by Poissonian statistics. It is possible to quantify the probability of the emission times for the entangled photon pair generation in terms of coincidence detection windows or time bins, relative to a fixed point. The probability of $k$ photon pairs generated in a coincidence detection window is described as

$$P(k) = \frac{{{N^k}{e^{- N}}}}{{k!}},$$
where $N$ is the average number of photon pairs generated per coincidence window (time bin). As the variance of a Poisson distribution is equal to its mean, the standard deviation is accordingly given by
$${\sigma _{P(k)}} = \sqrt N ,$$
where the uncertainty in the measured coincidences is $\sqrt N$.
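
As a quick numerical check of these statistics, the sketch below evaluates $P(k)$ for an assumed mean pair number per window (the value $N = 0.01$ is illustrative, not from the experiment):

```python
from math import exp, factorial

def p_pairs(k, n_avg):
    """Poisson probability of k photon pairs in one coincidence window,
    given an average of n_avg pairs per window: P(k) = N^k e^{-N} / k!."""
    return n_avg ** k * exp(-n_avg) / factorial(k)

n_avg = 0.01                 # assumed mean pairs per coincidence window
p_empty = p_pairs(0, n_avg)  # most windows contain no pair
sigma = n_avg ** 0.5         # standard deviation equals sqrt(N)
```

For such low mean numbers, almost all windows are empty and multi-pair events are negligible, which is the regime in which coincidence counting is meaningful.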

There is also a certain probability of detecting an accidental coincidence; this occurs when two uncorrelated photons arrive at both detectors at the same time and are referred to as “accidentals.” Accidentals (${N_{\rm{Acc}}}$) may be estimated based on the single photon counts in both the signal and idler arms within a certain coincidence detection time bin window,

$${N_{\rm{Acc}}} = {C_1} \times {C_2} \times \Delta t,$$
where $\Delta t$ is the coincidence detection time bin window and ${C_1}$ and ${C_2}$ are the single photon counts in the signal and idler arms, respectively. Consequently, a high number of single photon counts in each arm leads to a greater number of accidental coincidences. By reducing the coincidence detection window, $\Delta t$, one may reduce the accidentals and, therefore, the noise present in the experimental setup.
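
The accidentals estimate is a one-line calculation; the sketch below uses hypothetical singles rates (50 kcps per arm, not measured values) to show the scaling with $\Delta t$:

```python
def accidentals(c1, c2, dt):
    """Estimated accidental coincidence rate N_Acc = C1 * C2 * dt, with C1
    and C2 the singles rates (counts/s) and dt the coincidence window (s)."""
    return c1 * c2 * dt

# Hypothetical singles rates of 50 kcps in each arm and a 25 ns window:
n_acc = accidentals(50e3, 50e3, 25e-9)         # 62.5 accidentals per second
n_acc_half = accidentals(50e3, 50e3, 12.5e-9)  # halving dt halves N_Acc
```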

Additionally, the instruments experience jitter, which also contributes to noise. Finally, any ambient or stray photons, produced by computer screens or ambient light, may contribute marginally to noise.

5. IMAGE RECONSTRUCTION AND QUALITY FACTOR INDICES

As aforementioned, the image is reconstructed as a linear combination of the projective masks, each scaled by a weighting factor; this is illustrated in Fig. 13. The weighting factor is determined by the measured photon correlations, and the choice of weighting factor defines the reconstruction algorithm used to reconstruct the image. In this section, we introduce the reader to several reconstruction algorithms. While we discuss image reconstruction algorithms used in both quantum and classical ghost imaging and provide ways in which to adapt classical algorithms to the quantum case, it is up to the reader to decide which algorithm would be best suited for their specific use case. Each algorithm discussed in this section can be used with either of the projective mask types discussed in Section 3.E. Finally, we introduce the reader to image quality factors that can be used to quantify image quality in comparison to a reference point.

A. Algorithms Commonly Used in Quantum Ghost Imaging

In this section, we introduce the reader to image reconstruction algorithms commonly used in quantum ghost imaging, as well as a newly introduced image reconstruction algorithm. We explain how each algorithm is used based on the assumption that the experiment has already taken place and the measured coincidence counts and accidentals, for each projective mask, have been recorded. Once the reader is familiar with the algorithms, it is possible to implement the chosen algorithm during the real-time experimental reconstruction process.

Traditionally, the ghost image was reconstructed as a linear combination of all projective masks weighted by the coincidence counts and is expressed as the traditional ghost imaging (TGI) reconstruction algorithm [75]:

$$I(x,y) = \sum\limits_{i = 1}^N {c_{\!i}}{P_i}(x,y),$$
where $I(x,y)$ is the image, ${c_{\!i}}$ represents the coincidence counts, and ${P_i}(x,y)$ is the patterned projective mask for the ${i}$th measurement. Strictly, ${c_{\!i}}$ is proportional to the detection probability; however, as the masks are weighted by the respective coincidence counts to reconstruct the image, we refer to ${c_{\!i}}$ as the coincidence counts for ease of understanding. The TGI image reconstruction algorithm does not contain a background-suppressing term; this typically leaves the signal buried in a large background. The reconstructed image is, therefore, of low quality: longer integration times and a larger number of projective masks are required, and even then the reconstructed image has a very low signal-to-noise ratio (SNR). To improve this, some studies have calculated the joint probability distribution (JPD), which, when applied to imaging, improves the SNR [76–79]. Employing the JPD as an image reconstruction algorithm requires that the accidental counts be subtracted from the coincidence counts to reduce the noise. For ease of understanding, we offer different abbreviations for each algorithm; we, therefore, term this method accidental subtracted ghost imaging (ASGI). The image is reconstructed as follows:
$$I(x,y) = \sum\limits_{i = 1}^N ({c_{\!i}} - {a_i}){P_i}(x,y),$$
where ${a_i}$ is the accidental counts for the ${i}$th measurement.

In quantum ghost imaging reconstruction, the DC term is usually removed (TGIDC) in a further effort to increase the signal-to-noise ratio. This removes the large background component that usually suppresses the signal and results in reconstructed images of improved quality, as the signal is boosted by the removal of the DC term [4,31,32,80,81],

$$I(x,y) = \sum\limits_{i = 1}^N ({c_{\!i}} - {\bar c_N}){P_i}(x,y),$$
where ${\bar c_N}$ is the ensemble average of the coincidences over $N$ measurements.

Leveraging the noise suppression characteristics of the reconstruction algorithms detailed in Eqs. (14)–(16), a reconstruction algorithm that merges them and suppresses the background most efficiently was first introduced in [82]. The reconstruction takes into account both the accidentals and the removal of the DC component as follows:

$$I(x,y) = \sum\limits_{i = 1}^N (({c_{\!i}} - {\bar c_N}) - ({a_i} - {\bar a_N})){P_i}(x,y),$$
where ${\bar a_N}$ is the ensemble average of the accidental counts over $N$ measurements. We have termed this accidental subtracted ghost imaging with DC term removal (ASGIDC).
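
The four algorithms of Eqs. (14)–(17) differ only in the weights applied to the masks, so they can share one implementation. The following is a minimal Python/NumPy sketch (the array shapes and the idealized noise-free simulation are our own assumptions, not the experimental pipeline):

```python
import numpy as np

def reconstruct(masks, c, a=None, algorithm="TGI"):
    """Weighted-sum ghost-image reconstruction, Eqs. (14)-(17).
    masks: (N, H, W) patterns P_i; c: (N,) coincidence counts;
    a: (N,) accidental counts (needed for ASGI and ASGIDC)."""
    c = np.asarray(c, dtype=float)
    if algorithm == "TGI":
        w = c                                      # Eq. (14)
    elif algorithm == "ASGI":
        w = c - np.asarray(a, dtype=float)         # Eq. (15)
    elif algorithm == "TGIDC":
        w = c - c.mean()                           # Eq. (16)
    elif algorithm == "ASGIDC":
        a = np.asarray(a, dtype=float)
        w = (c - c.mean()) - (a - a.mean())        # Eq. (17)
    else:
        raise ValueError(f"unknown algorithm: {algorithm}")
    return np.tensordot(w, masks, axes=1)          # sum_i w_i P_i(x, y)

# Idealized simulation: coincidences proportional to the mask-object overlap.
rng = np.random.default_rng(1)
obj = np.zeros((8, 8))
obj[2:6, 2:6] = 1.0
masks = (rng.random((500, 8, 8)) > 0.5).astype(float)
c = masks.reshape(500, -1) @ obj.ravel()
img = reconstruct(masks, c, algorithm="TGIDC")
```

In a real experiment, `c` and `a` would be the recorded coincidence and accidental counts per displayed mask.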

B. Algorithms Commonly Used in Classical Ghost Imaging

The following algorithms are predominantly used in classical ghost imaging. It is, however, possible to adapt these algorithms to be used for the reconstruction of quantum ghost images. By looking solely at the signal obtained in the reference arm and the efficiency of the detectors used to obtain that signal, it is possible to employ a clever adaptation for use in the reconstruction of quantum ghost images. It must be noted that, although we present three algorithms in this section, there are many more. Here we present the technique and resources required to adapt various classical image reconstruction algorithms for use in quantum ghost imaging. This technique can be adapted or utilized as is in all algorithms not considered in this tutorial.

Different detectors are used across ghost imaging experiments; APDs predominate in quantum ghost imaging and have an operating efficiency that depends on the wavelength of the photons being detected. To determine a quantum analog to the reference signal in classical ghost imaging, one must look at the single photon counts in the channel detecting the signal photons (photons that have not interacted with the object) as well as the detection efficiency of the APD at the relevant wavelength. The counts in the signal photon arm determine what is classically known as the reference signal. Quantumly, this is calculated as ${R_{\rm{signal}}} = {{\rm Efficiency}_{\rm APD1}} \times {{\rm Efficiency}_{\rm APD2}} \times {{\rm Counts}_{\rm{signal}}}$, where ${R_{\rm{signal}}}$ is the quantum reference signal determined by the relevant efficiency of each detector multiplied by the counts in the signal arm. This is indicative of what the overlap would be if the projective mask were displayed in both arms and the coincidence counts recorded for each projective mask, similar to the signal obtained from the reference arm in classical ghost imaging experiments.

In 2010, Ferri et al. proposed a differential ghost imaging (DGI) technique [66]. In DGI, the signal from the reference beam is included as a mechanism to suppress background noise [75], and the image is reconstructed as follows:

$$I(x,y) = \sum\limits_{i = 1}^N \left({c_{\!i}} - \frac{{{{\bar c}_N}}}{{{{\bar R}_N}}}{R_i}\right){P_i}(x,y),$$
where ${R_i}$ is ${R_{\rm{signal}}}$ for the ${i}$th measurement and ${\bar R_N}$ is the ensemble average of ${R_{\rm{signal}}}$ over $N$ measurements.

In an analogy to DGI without the use of the reference signal, the bucket signal is defined as a logarithmic function [83]. Similarly, we use a logarithmic version, which we term logarithmic ghost imaging (LGI); it elevates the overlap signal (the coincidence counts) and reconstructs the image as follows:

$$I(x,y) = \sum\limits_{i = 1}^N \log \left({\frac{{{c_{\!i}}}}{{{{\bar c}_N}}}} \right){P_i}(x,y).$$

Finally, and similarly to DGI, normalized ghost imaging (NGI) employs the use of the signal from the reference arm to further suppress any background noise. In NGI, the weighting factor is adapted to account for changes in the efficiency of the generated light field [80],

$$I(x,y) = \sum\limits_{i = 1}^N \left(\left({\frac{{{c_{\!i}}}}{{{R_i}}}} \right) - \left({\frac{{{{\bar c}_N}}}{{{{\bar R}_N}}}} \right)\right){P_i}(x,y).$$
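
The classically inspired weightings of Eqs. (18)–(20) slot into the same weighted-sum structure as the quantum algorithms. The sketch below (Python/NumPy; the constant reference signal and noise-free simulated counts are illustrative assumptions) shows how the quantum reference signal ${R_i}$ enters:

```python
import numpy as np

def reconstruct_classical(masks, c, r=None, algorithm="DGI"):
    """DGI, LGI, and NGI weights (Eqs. (18)-(20)) adapted to the quantum
    case; r holds the quantum reference signal R_i (detector efficiencies
    times the singles counts in the signal arm). LGI needs no reference."""
    c = np.asarray(c, dtype=float)
    if algorithm == "LGI":
        w = np.log(c / c.mean())                   # Eq. (19), needs c > 0
    else:
        r = np.asarray(r, dtype=float)
        if algorithm == "DGI":
            w = c - (c.mean() / r.mean()) * r      # Eq. (18)
        elif algorithm == "NGI":
            w = c / r - c.mean() / r.mean()        # Eq. (20)
        else:
            raise ValueError(f"unknown algorithm: {algorithm}")
    return np.tensordot(w, masks, axes=1)          # sum_i w_i P_i(x, y)

# Idealized simulation; with a constant reference, DGI reduces to TGIDC.
rng = np.random.default_rng(3)
obj = np.zeros((8, 8))
obj[1:4, 1:4] = 1.0
masks = (rng.random((600, 8, 8)) > 0.5).astype(float)
c = masks.reshape(600, -1) @ obj.ravel() + 1.0   # offset keeps c positive
r = np.full(600, 1000.0)
img_dgi = reconstruct_classical(masks, c, r, "DGI")
```

In practice, ${R_i}$ fluctuates with the singles counts, which is precisely the variation DGI and NGI use to suppress background drift.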

While we have introduced the reader to the above image reconstruction algorithms, it is up to the reader to decide which will work best for a specific use case. In the following section, we provide details on image quality factor indices, which will help the user decide which algorithm produces images of the required quality for individual tests. We additionally use the all-digital quantum ghost imaging optical setup to reconstruct images with all of the algorithms above and show the user sample results that may assist in determining which reconstruction algorithm suits the individual requirements. There is no standard reconstruction algorithm for quantum ghost imaging; the introduction to the above algorithms may spark or inspire further development of the image reconstruction process while improving the quality of reconstructed images.

C. Image Quality Factor Indices

While there are several image quality measures available, implementing all of them would turn this into a tutorial of its own. Instead, two of the most common measures are discussed here and implemented in the practical examples to follow. Image quality tests consider several aspects of the image when calculating image quality; the main aspects relate to the signal-to-noise ratio, contrast, correlation, luminance, and mean square error. The fidelity calculation considers several of these main aspects and is, therefore, a good measure of image quality. The fidelity computation is based on a quality factor index proposed in [84]; namely, the computation considers the loss of correlation, the luminance distortion, and the contrast distortion. In [31], a modified fidelity is proposed, which additionally considers the normalized mean squared error; the entire computation is modified to obtain values between 0 and 1 and is calculated as follows:

$$\begin{split}f &= \left[{1 - \frac{1}{K}\sum\limits_{i} {{\left({I_i^{{\rm img}} - I_i^{{\rm ref}}} \right)}^2}} \right] \cdot \frac{1}{2}\left[{\frac{{{\sigma _{\textit{xy}}}}}{{{\sigma _x}{\sigma _y}}} + 1} \right]\\ &\quad \cdot \left[{\frac{{2\bar x\bar y}}{{{{\bar x}^2} + {{\bar y}^2}}}} \right] \cdot \left[{\frac{{2{\sigma _x}{\sigma _y}}}{{\sigma _x^2 + \sigma _y^2}}} \right],\end{split}$$
with
$$\begin{split}\bar x & = \frac{1}{K}\sum\limits_{i = 1}^K {x_i},\quad \bar y = \frac{1}{K}\sum\limits_{i = 1}^K {y_i},\\ \sigma _x^2 & = \frac{1}{{K - 1}}\sum\limits_{i = 1}^K {{\left({{x_i} - \bar x} \right)}^2},\quad \sigma _y^2 = \frac{1}{{K - 1}}\sum\limits_{i = 1}^K {{\left({{y_i} - \bar y} \right)}^2},\\ {\sigma _{\textit{xy}}} & = \frac{1}{{K - 1}}\sum\limits_{i = 1}^K \left({{x_i} - \bar x} \right)\left({{y_i} - \bar y} \right),\end{split}$$
where $i$ represents each pixel in the image. Importantly, Eq. (21) requires that both the reconstructed image and the reference image are normalized and of the same size. In all-digital quantum ghost imaging, the digital object employed is used as the reference image. The first component in Eq. (21) is one minus the normalized mean squared error between the image ($y$) and the reference image ($x$, which is the digital object in this case). The next component represents a normalized correlation distortion between $x$ and $y$; this is a measure of the linear correlation between $x$ and $y$. The third component measures how close the mean luminance is between $x$ and $y$. The final component measures the similarity between the contrast of the image and its reference. As Eq. (21) is normalized, the calculated value ranges between 0 and 1, with 1 representing the greatest fidelity between the image and reference and 0 the least.
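
Equation (21) translates directly into code. The sketch below (Python/NumPy; it assumes both images have nonzero contrast so that the min-max normalization and standard deviations are well defined) computes the four terms and their product:

```python
import numpy as np

def fidelity(img, ref):
    """Modified fidelity, Eq. (21): one minus the normalized MSE, multiplied
    by correlation, luminance, and contrast terms; 1 is a perfect match.
    Assumes both images vary (nonzero contrast) and share the same size;
    each is min-max normalized to [0, 1] first."""
    x = ref.astype(float).ravel()
    y = img.astype(float).ravel()
    x = (x - x.min()) / np.ptp(x)
    y = (y - y.min()) / np.ptp(y)
    mse_term = 1.0 - np.mean((y - x) ** 2)          # 1 - normalized MSE
    sx, sy = x.std(ddof=1), y.std(ddof=1)
    sxy = np.cov(x, y)[0, 1]
    corr_term = 0.5 * (sxy / (sx * sy) + 1.0)       # correlation loss
    lum_term = 2 * x.mean() * y.mean() / (x.mean() ** 2 + y.mean() ** 2)
    con_term = 2 * sx * sy / (sx ** 2 + sy ** 2)    # contrast distortion
    return mse_term * corr_term * lum_term * con_term
```

An identical image and reference return 1, while a perfectly anticorrelated image drives the correlation term, and hence the product, to 0.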

The peak-signal-to-noise ratio (PSNR) is a computation that determines the ratio between the maximum possible power value of a signal and the power of the noise distortion that affects the quality of the image. In this computation, a reference image is also required to determine the maximum power of the signal; for this, the digital object can once again be used. As many signals have a wide dynamic range, the PSNR is expressed in terms of the logarithmic decibel scale as follows:

$${\rm PSNR} = 10 {\log}_{10} \left({{{{\rm peakval}}^2}/{\rm MSE}} \right),$$
where peakval is the maximum possible pixel value for the image data type; for an image of data type uint8, peakval is 255. The mean square error (MSE) is calculated between the reconstructed image and the reference image. The MSE allows for the comparison of the true pixel values of the digital object with those of the reconstructed image and represents the average of the squared errors between both. The higher the PSNR, the better the reconstructed image, i.e., the closer the similarity between digital object and image.
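
The PSNR computation is equally compact; a minimal sketch (Python/NumPy, with the uint8 peak value of 255 as the default) is:

```python
import numpy as np

def psnr(img, ref, peakval=255.0):
    """Peak signal-to-noise ratio in dB; peakval is 255 for uint8 images."""
    mse = np.mean((img.astype(float) - ref.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: no noise distortion
    return 10 * np.log10(peakval ** 2 / mse)
```

Identical images yield an infinite PSNR, while an image differing from its reference by the full dynamic range at every pixel yields 0 dB.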

Both of the above quality factor indices are good determinants of the quality of the reconstructed quantum ghost images. While there are many more measures, these two indices take into account the main components required for image quality assessment, and their implementations are shown in the image processing section that follows.

6. IMAGE PROCESSING

In this section, we discuss image processing for different reconstruction algorithm and projective mask combinations. We present experimental results for all combinations and highlight some advantages and disadvantages to various combinations. In some combination scenarios, the images will require further processing to recover the image signal as it might be buried under noise or hidden by a mask artifact. Finally we end with a brief discussion on artificial intelligence (AI) techniques used to reduce image reconstruction times and to enhance the general image solutions for those wishing to look into the time-efficiency of quantum ghost imaging.

 figure: Fig. 16.

Fig. 16. Artifact arising from the TGI reconstruction algorithm when coupled with the Walsh-Hadamard mask type. (a) Image reconstruction by a $32 \times 32\;{\rm pixel}$ Walsh-Hadamard mask type, before contrast adjustment. The zoomed-in image shows the mask artifact visible at the top-right corner in the form of a single activated pixel. (b) Image reconstruction by a $32 \times 32\;{\rm pixel}$ Walsh-Hadamard mask type, after contrast adjustment. Insets in each reconstructed image show the digital object used in the experiment.

Download Full Size | PDF

 figure: Fig. 17.

Fig. 17. When TGI is coupled with the random mask type, no artifacts are present, as such there is no visual difference between (a) the raw reconstructed image and (b) the image that has undergone contrast adjustment. The insets in each reconstructed image show the digital object used in the experiment.

Download Full Size | PDF

A. Mask Artifact

As mentioned in Section 5.A, TGI [Eq. (14)] is not commonly used to reconstruct images as the background component is not suppressed. The high background contribution suppresses the image signal; therefore, different reconstruction algorithms were developed to acquire the signal efficiently. With TGI, the image signal can only be recovered if random masks are used in abundance, i.e., $10{N^2}$ random masks or more are needed to begin to separate the signal from the noise. As such, other mask types were not tested with the TGI algorithm, as Eq. (14) was deemed inefficient at reconstructing the image.

Nevertheless, we reconstructed images with Eq. (14) in both the random and Walsh-Hadamard bases. Figures 16(a) and 17(a) show the images reconstructed with TGI for the Walsh-Hadamard and random bases, respectively. While Fig. 17(a) shows a very noisy image with the possibility of a signal forming in the middle, Fig. 16(a) shows a blank image with an activated pixel at the top-right corner (zoomed in for better visualization). The image was reconstructed by 1024 projective masks of a $32 \times 32\;{\rm pixel}$ resolution for the Walsh-Hadamard basis. The activated pixel in the top-right corner of Fig. 16(a) is common to all images (using different objects and projective mask resolutions) reconstructed with this particular combination of projective mask and image reconstruction algorithm. This activated pixel largely influences the contrast within the image. Due to this, neither a noisy image nor any outline of the object is seen; rather, an entirely blank image with one white pixel at the top-right corner is reconstructed.

In an effort to enhance the image, one can employ either of two simple techniques to determine whether the artifact can be removed and whether, upon removal, the image remains blank. The first is to send the image through a contrast adjustment function; although several functions are available, we opted to simply use a function native to the MATLAB programming language called imadjust. This function, by default, saturates the bottom and the top 1% of all pixel values. By employing a contrast enhancement technique, the image is saturated by specific values within the image, which enhances the contrast so that the image is revealed. The second technique was to crop the outermost border on all four edges of the image; this removes the activated pixel while preserving the aspect ratio of the reconstructed image. Surprisingly, both techniques provided the same result, as shown in Fig. 16(b). The image is contrast enhanced, which reveals that the signal was in fact buried under the mask artifact, which hid the true contrast within the reconstructed image. The insets in Fig. 16 show the digital object that was used.
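
To see why the stretch recovers the buried signal, the following rough Python/NumPy stand-in for MATLAB's imadjust default behavior can be used (the percentile-based implementation is our approximation of imadjust, and the hot-pixel image below is synthetic, mimicking the mask artifact):

```python
import numpy as np

def imadjust_like(img, sat=0.01):
    """Contrast stretch loosely mimicking MATLAB's imadjust default:
    saturate the bottom and top 1% of pixel values, then rescale to [0, 1]."""
    lo, hi = np.quantile(img, [sat, 1.0 - sat])
    if hi <= lo:
        return np.zeros_like(img, dtype=float)  # degenerate (flat) image
    return np.clip((img - lo) / (hi - lo), 0.0, 1.0)

# A single hot pixel (the mask artifact) dominates a naive rescale:
img = np.random.default_rng(2).normal(0.0, 1.0, (32, 32))
img[0, -1] = 1e6                           # artifact at the top-right corner
naive = (img - img.min()) / np.ptp(img)    # everything but the artifact is ~0
stretched = imadjust_like(img)             # artifact saturates; contrast returns
```

With the naive rescale, every pixel except the artifact is compressed toward zero, producing the blank image of Fig. 16(a); saturating the extremes restores the dynamic range of the underlying signal.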

Importantly, these techniques were used on images reconstructed by both the random and Walsh-Hadamard bases. The image reconstruction using the random masks shows a noisy image in Fig. 17(a) (2000 masks were used to reconstruct the image), with the digital object shown in the inset. For the images reconstructed by the random masks, no change is detected after the contrast adjustment, as shown in Fig. 17(b); however, for the images reconstructed by the Walsh-Hadamard masks [Fig. 16(b)], the image is now visible, and one can correctly identify what the image is, albeit with some noise present, which is expected from any experimentally reconstructed image. By employing either of these two elementary image processing techniques, it is possible to use a specific algorithm and projective mask combination that was previously unused, as it was thought to reconstruct images of poor quality (which is the case when using the random basis with TGI). Importantly, this only works for images reconstructed by the Walsh-Hadamard projective mask type, due to the artifact of an activated pixel present in the raw image reconstructions; it does not work for the random masks, as they do not produce an artifact common to all image reconstructions. If a user desires a fast, simple image reconstruction, this is possible with a TGI reconstruction and the Walsh-Hadamard basis, as the computation is limited to employing only the coincidences for reconstruction. This means that by utilizing Eq. (14) in combination with the Walsh-Hadamard basis, a fast, noisy image of acceptable quality is obtained. It is now well known that Walsh-Hadamard masks converge the fastest to a general image solution, requiring fewer than $N$ masks for adequate image identification by neural classifiers [32]. While other mask types work well, they take longer to converge, which is not in the interest of time-efficient quantum ghost imaging [31,82].
It is, however, also important to quantify image quality, which is discussed in the sections to follow.

B. Image Reconstruction Examples

In this section, we show the user practical examples of what the images look like with each projective mask and image reconstruction algorithm. As it was important to show a fair representation of all image reconstructions, we used the native MATLAB function imadjust on all the images after the reconstruction of a complete general image solution (2000 random masks and 1024 Walsh-Hadamard masks) to enhance the images. It must be noted that image enhancement or image processing techniques are important in ghost imaging so as to improve image quality as much as possible. Images were reconstructed for four input digital objects using the reconstruction algorithms presented in Eqs. (14)–(20), for both the random and Walsh-Hadamard bases. The input objects as well as the image reconstructions, across all algorithms and bases, are shown in Fig. 18.

 figure: Fig. 18.

Fig. 18. Contrast adjusted reconstructed images for (a) input digital objects, using $32 \times 32\;{\rm pixel}$ (b) random and (c) Walsh-Hadamard masks for reconstruction algorithms: TGI, TGIDC, ASGI, ASGIDC, DGI, LGI, and NGI, respectively.

Download Full Size | PDF

We first showcase the images reconstructed by the random basis, shown in Fig. 18(b). TGI reconstructs noisy images in which the signal is buried under the noise and the image is neither visible nor identifiable. ASGI produces images similarly noisy to those produced by TGI; however, the image is now identifiable to a certain extent. All other tested algorithms reconstruct images of a similar visible quality, significantly better than both TGI and ASGI; the noise levels across the remaining five image reconstruction algorithms are largely suppressed in comparison. Notably, this is only the case when reconstructing images with the random basis. The images reconstructed by the Walsh-Hadamard basis are shown in Fig. 18(c).

Images of a similar visual quality are produced across all algorithms, with the exception of TGIDC and LGI, which both show higher levels of noise in comparison to the other algorithms. Again it is important to note that to fairly assess all algorithms we used the same image enhancement technique on each image that we reconstructed. Notably one needs to consider factors such as the speed of computation in the image reconstruction process and the masks used (Walsh-Hadamard masks converge quicker than random masks), as well as the parameters required within the reconstruction process (coincidence counts, accidentals, and ensemble averages of both) before choosing the mask type, mask resolution, and reconstruction algorithm. The user needs to decide on the algorithm to use not only based on image quality but also by considering the computational efficiency and speed of computation per specific use case. It is, however, not enough to merely look at the visual quality of the reconstructed images to determine which algorithms perform well and which do not. We introduced the user to both the fidelity and the peak-signal-to-noise ratio (PSNR) calculations of the reconstructed images across the different reconstruction bases, the results of which are shown in Fig. 19.


Fig. 19. Statistical tests conducted on the contrast adjusted images showing (a) the fidelity and (b) the PSNR per image, per algorithm, for both random and Walsh-Hadamard mask types.


The image quality factor indices used are detailed in Section 5.C; both require a reference image, for which we used the digital object shown in Fig. 18(a). We subjected each combination of image reconstruction algorithm and projective mask to these statistical tests to guide the user on quality factor tests. While it is important for the reconstructed images to have good visual quality, it is equally important to quantify that quality, which is where image quality factor indices become useful. Across all combinations tested, most perform similarly, with poor quality indicated mainly for the random-basis reconstructions with TGI and ASGI. This was expected, as these algorithms do not suppress noise as effectively as the others. It is, however, possible to use these algorithms in combination with projective masks from the Walsh-Hadamard basis to achieve better results with the same computational capacity (established by the algorithm). While these are the image reconstructions and statistical results for the tests we conducted, we encourage the user to conduct preliminary tests, using this tutorial as a guide to implementing an all-digital quantum ghost imaging optical setup, and to ultimately choose the projective mask and reconstruction algorithm after establishing what each individual use case requires.
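Both indices are straightforward to compute against the reference object. The sketch below assumes images normalized to [0, 1]; the fidelity shown is the common normalized-overlap form and may differ in detail from the exact definition given in Section 5.C.

```python
import numpy as np

def psnr(reference, image, peak=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, peak]."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(image, float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def fidelity(reference, image):
    """Normalized overlap between reference and reconstruction
    (1 = identical up to an overall scale); one common choice of
    fidelity measure, assumed here for illustration."""
    r = np.asarray(reference, float).ravel()
    i = np.asarray(image, float).ravel()
    return float(np.dot(r, i) / (np.linalg.norm(r) * np.linalg.norm(i)))
```

For example, a reconstruction with a uniform error of 0.1 relative to a unit-peak reference has an MSE of 0.01 and hence a PSNR of 20 dB.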

7. ARTIFICIAL INTELLIGENCE FOR EFFICIENT RECONSTRUCTION AND IMAGE ENHANCEMENT

Several artificial intelligence techniques have been implemented in an effort to move toward time-efficient ghost imaging. We briefly discuss these techniques here so that those wishing to enter the field have a fundamental knowledge of what has been done to date. These approaches aim to reduce image reconstruction time through image enhancement, object recognition, and super-resolving neural network architectures. We briefly describe some of the implementations below and show some results thereof; full implementation details can be found in [32,82,85].

A. Two-Step Deep Learning Approach to Reduce Image Reconstruction Time

Here, a two-step deep learning approach was implemented to establish an optimal early stopping point based on object recognition. In step one, we enhanced the reconstructed image after every measurement (i.e., the image reconstructed after every patterned mask) with a deep convolutional autoencoder; in step two, a classifier was used to recognize the image (again, after every patterned mask). We tested this approach on a non-degenerate ghost imaging setup while varying physical parameters such as the mask type, objects, and resolution.

After each measurement (i.e., after each projective mask), we applied our two-step approach: we passed the raw reconstructed image through the autoencoder for enhancement and denoising, followed by the neural classifier for object recognition. We imposed a strict evaluation criterion: the recognition algorithm requires a 75% or higher confidence (predicted probability) of accurate image recognition. A confidence of 75% sits between acceptable and perfect: not as low as 50% and not as strict as 100%, yet still a strict early stopping criterion. We achieved a fivefold decrease in image acquisition time at a recognition confidence of 75%, the results of which are shown in Fig. 20. The vertical dashed line depicts the point at which the 75% confidence level is achieved. Imposing this criterion on the implemented early stopping approach, we reach the stopping point at approximately 20% of the total reconstruction time. Full implementation details are contained in [32].
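The measurement-by-measurement loop described above can be sketched as follows. The `denoise` and `classify` callables stand in for the trained autoencoder and neural classifier, and the TGI-style running accumulation is an illustrative assumption; neither reflects the exact implementation of [32].

```python
import numpy as np

CONFIDENCE_THRESHOLD = 0.75  # strict early stopping criterion from the text

def reconstruct_with_early_stopping(masks, counts, denoise, classify):
    """Sketch of the two-step early stopping loop: after every projective
    mask, enhance the running reconstruction and attempt recognition.
    Returns the enhanced image and the number of measurements used."""
    image = np.zeros_like(masks[0], dtype=float)
    for i, (mask, count) in enumerate(zip(masks, counts), start=1):
        image += count * mask              # running accumulation
        enhanced = denoise(image / i)      # step one: enhancement
        probs = classify(enhanced)         # step two: class probabilities
        if probs.max() >= CONFIDENCE_THRESHOLD:
            return enhanced, i             # stop early: image recognized
    return denoise(image / len(masks)), len(masks)
```

In practice, the fewer masks the loop consumes before the threshold is reached, the larger the saving in acquisition time.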


Fig. 20. Results of the two-step deep learning early stopping approach. Reconstructed raw images for object four at 20% intervals of the image reconstruction time. These raw images are then passed through the autoencoder for image enhancement. Displayed are the corresponding enhanced images, followed by the corresponding confidence predictions for all digits and iterations. The vertical dashed line indicates the point at which the early stopping criterion is achieved. Taken from [32].


B. Machine Learning for Faster Image Reconstruction

Taking the above technique a step further, and reducing the computational power and resources needed, we show that an even further reduction in image reconstruction time is possible with machine learning techniques. Often overlooked, machine learning methods can offer the same, if not better, reduction in image acquisition time through an object recognition process. Four machine learning algorithms were implemented and trained on a uniquely generated, noised, and blurred dataset of numerical digits 1 through 9. Of the tested recognition algorithms, logistic regression shows a $10 \times$ speed-up in image acquisition time with a 99% prediction accuracy. Additionally, this reduction in acquisition time was achieved without any image denoising or enhancement prior to recognition, thereby reducing training and implementation time, as well as the computational intensity of the approach. This method can be implemented in real time, requiring only 1/10th of the measurements needed for a general solution.

Here, four machine learning algorithms were implemented and their performance tested to determine which could achieve the most significant reduction in image acquisition times. The implemented algorithms were a support vector machine, logistic regression, nearest neighbors, and naïve Bayes as a baseline classifier. In the same manner as the two-step approach detailed above, the image was passed through each classifier after each measurement; image reconstructions are shown in Fig. 21. Importantly, in this work, we curated a dataset that was as aesthetically close to the experimental data as possible without using the experimental data for training. This dataset allowed us to recognize the experimental data earlier than in our two-step approach, providing evidence that machine learning coupled with specifically generated data is a powerful tool in object recognition for ghost imaging. The results for the logistic regression classifier for object digits two and four are shown in Fig. 22, where the early stopping criterion is now achieved at 1/10th of the measurements required for a general solution, a $10 \times$ reduction in image reconstruction time. For full details, consult [85].
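As a hedged illustration of this recognition step, the sketch below trains a scikit-learn logistic regression classifier on a small synthetic noised dataset (a toy stand-in for the curated digit dataset) and stops once the 75% confidence threshold is met. The three-template data, noise level, and helper names are assumptions for illustration, not the setup of [85].

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic training set: three well-separated toy "digit" templates
# with additive Gaussian noise, mimicking noisy partial reconstructions.
rng = np.random.default_rng(0)
templates = 3.0 * np.eye(3)
X_train = np.repeat(templates, 200, axis=0) + 0.3 * rng.standard_normal((600, 3))
y_train = np.repeat(np.arange(3), 200)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def recognize_early(partial_images, threshold=0.75):
    """Return (label, n_measurements) as soon as the classifier's
    predicted probability for any class exceeds the threshold."""
    for n, img in enumerate(partial_images, start=1):
        probs = clf.predict_proba(np.asarray(img).reshape(1, -1))[0]
        if probs.max() >= threshold:
            return int(probs.argmax()), n
    return None, len(partial_images)
```

Because the classifier is trained offline on generated data, only the cheap `predict_proba` call runs per measurement, which is what makes real-time operation feasible.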


Fig. 21. Experimental image reconstructions for digital objects (a) 2 and (b) 4, starting at 50 masks and continuing in intervals of 100 masks using the Walsh-Hadamard basis (left to right). Taken from [85].



Fig. 22. Confidence predictions of the logistic regression for input digits (a) 2 and (b) 4, normalized to the same scale for comparative purposes. The dashed lines represent the point at which a confidence prediction greater than 75% is achieved for the input digit of interest. Taken from [85].


C. Generative Adversarial Networks for Achieving Impractical-to-Measure Image Resolutions

Image reconstruction times depend on the required image resolution, with the number of measurements scaling quadratically with the resolution. Here we introduced a super-resolved imaging approach based on neural networks: a low-resolution image is reconstructed, denoised by a generative adversarial network, and then super-resolved to an impractical-to-measure high-resolution image. We achieved a super-resolving enhancement of $4 \times$ the measured resolution with a fidelity close to 90% at an acquisition time of ${{N}^2}$ measurements, as required for a complete ${N} \times {N}\;{\rm pixel}$ image solution. Images were reconstructed at a resolution of $32 \times 32\;{\rm pixels}$ and super-resolved to a resolution of $256 \times 256\;{\rm pixels}$. Such a high-resolution image would require more than 65,000 measurements, which would take approximately a week to measure even with a short integration time of 10 s per mask.
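The quoted acquisition time follows directly from the quadratic scaling and can be checked with quick arithmetic (the 10 s integration time is taken from the text; the 86,400 s per day conversion is standard):

```python
resolution = 256              # target super-resolved image, N x N pixels
integration_time_s = 10       # per-mask integration time from the text
n_masks = resolution ** 2     # a complete basis needs N^2 masks
total_days = n_masks * integration_time_s / 86_400
print(n_masks, round(total_days, 1))  # prints: 65536 7.6
```

By contrast, the measured $32 \times 32$ reconstruction needs only $32^2 = 1024$ masks, under three hours at the same integration time, which is the saving the super-resolving network buys.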

Figure 23(a) shows the input objects, while Figs. 23(b) and 23(c) show the raw image reconstructions, the denoised images from the generative adversarial network (GAN), and the super-resolved auto-encoder (SR AutoE) output for the random and Walsh-Hadamard bases, respectively. Full implementation details are contained in [82]. By using super-resolving neural networks in quantum imaging, this AI-based approach surpasses traditional imaging approaches and can realize image resolutions that are otherwise impractical to measure experimentally.


Fig. 23. Results of the image reconstructions for both digits that were reconstructed, denoised, and super-resolved. (a) Digital objects that were used in the experiment. (b) Results of the image reconstruction by random masks—from left to right, the image obtained from the experiment, the denoised image from the GAN, and the super-resolved image output from the SR AutoE. (c) Results of the image reconstruction by Walsh-Hadamard masks—from left to right, the image obtained from the experiment, the denoised image from the GAN, and the super-resolved image output from the SR AutoE. Taken from [82].


D. Physics-Driven AI

Physics-driven AI, also known as physics-informed machine learning, combines the principles of physics with machine learning algorithms to improve their accuracy and efficiency. This approach involves incorporating prior knowledge of physical systems, such as optical imaging, quantum imaging, and quantum ghost imaging, into the training of neural networks, which reduces the amount of training data required to adequately train the network [86,87]. This not only decreases the computational intensity involved in training the algorithms but also leads to a more accurate trained representation [88]. Physics-driven AI has immense potential to solve problems that were previously unaddressed due to computational inefficiency. Although an emerging field, it has already been used to reduce resources in quantum teleportation [88] and has been demonstrated in decentralized physics-driven learning [89]. Importantly, the intersection of physics-driven AI and imaging is a field showing immense promise and growth [90], while optical neural networks have further capability to reduce image reconstruction time while achieving high image identification accuracy [91].

In recent years, physics-enhanced AI has been applied to ghost imaging for optimization and enhancement purposes. In 2017, Lyu et al. proposed a physics-informed deep learning method in which deep learning was used to reduce the sampling ratio; the input to the neural network was an approximant recovered with a conventional correlation algorithm [28]. As the modulation efficiency of this approach was low, a further study by Higham et al. proposed a deep convolutional autoencoder in which the trained binary weights of the encoder were used to scan the target object [72]; this was a purely data-driven method. Additionally, several studies have shown that it is possible to reconstruct an image from the detected signal without any physical priors [92,93]; it is, however, pertinent to note that incorporating the physics of the imaging system into the neural network has important consequences for various aspects of the imaging system. One such aspect is the application of physics-informed methods to the data acquisition process, as demonstrated by Goy et al. and Wang et al. in 2019 [92,94]. Neural networks face limitations in their generalization and interpretability; the generalization aspect has been addressed by Goy et al. and Wang et al. by incorporating physics priors into AI-based approaches [95,96], while the interpretability aspect was addressed by Iten et al. in 2020 [97]. In 2022, Wang et al. reported a physics-enhanced deep learning technique for single-pixel imaging that leverages the forward propagation model of the imaging system [98]. A general framework leveraging both data- and physics-driven priors, called VGenNet, was proposed to enhance single-pixel imaging and to solve existing inverse imaging problems [99]. In another study, a single 1D signal collected by a photodiode was fed to a URNet architecture, and the network was automatically optimized to retrieve the 2D image without training on tens of thousands of labeled data points [100].

8. CONCLUDING REMARKS

We wrote this tutorial as a guide for those wishing to enter the quantum ghost imaging field, or for those already in the field wishing to extend their knowledge, go back to basics, or introduce AI techniques to their toolbox. Although written in a linear manner, this tutorial does not need to be read linearly; one may, for instance, look at the getting-started considerations independently of the practical examples or image reconstruction methods. We present basic theoretical considerations as well as practical and experimental choices to make before building the experimental setup. We follow this up with a practical example of an optical setup and discuss several image reconstruction algorithms and image quality tests the user could implement. We end with a brief discussion on AI techniques for time-efficient quantum ghost imaging. While conventional optical imaging performs well, there are caveats, as the boundaries of quantum ghost imaging are still being pushed and explored. A specific use case where quantum ghost imaging outperforms conventional imaging is non-degenerate quantum ghost imaging, where it is important to probe the object with one wavelength yet measure the image with another [12,32]. Additionally, in recent work, employing AI in quantum imaging with NOON states effectively reduced the image reconstruction time to that of $N = 2$ [101]. We believe that all-digital quantum ghost imaging offers the perfect setup for beginners entering and exploring the field. For those already in the field, it serves as the first experimental test for probing different samples, exploring different SPDC detection wavelengths, and testing AI techniques for time efficiency. All-digital quantum imaging allows one to "fail fast, fail often" so as to quickly develop methods, techniques, and software for the enhancement and advancement of quantum ghost imaging in its entirety. We believe that this tutorial is a useful guide for beginners entering the field as well as those already in the field who wish to introduce AI to their quantum ghost imaging toolbox.

Funding

Council for Scientific and Industrial Research, South Africa.

Acknowledgment

The authors thank Isaac Nape and Pedro Ornelas for useful advice and discussions. C. M. acknowledges the financial support received from the CSIR under the HCD-IBS scholarship scheme.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are available from the authors upon reasonable request.

REFERENCES

1. Y. Shih, “The physics of ghost imaging,” in Classical, Semi-classical and Quantum Noise (Springer, 2012), pp. 169–222.

2. D. Klyshko, “A simple method of preparing pure states of an optical field, of implementing the Einstein–Podolsky–Rosen experiment, and of demonstrating the complementarity principle,” Sov. Phys. Usp. 31, 74 (1988). [CrossRef]  

3. T. B. Pittman, Y. H. Shih, D. V. Strekalov, and A. V. Sergienko, “Optical imaging by means of two-photon quantum entanglement,” Phys. Rev. A 52, R3429–R3432 (1995). [CrossRef]  

4. R. E. Meyers and K. S. Deacon, “Quantum ghost imaging experiments,” Proc. SPIE 7465, 746508 (2009). [CrossRef]  

5. J. H. Shapiro and R. W. Boyd, “The physics of ghost imaging,” Quantum Inf. Process. 11, 949–993 (2012). [CrossRef]  

6. M. McLaren and A. Forbes, “Digital spiral-slit for bi-photon imaging,” J. Opt. 19, 044006 (2017). [CrossRef]  

7. A. F. Abouraddy, B. E. Saleh, A. V. Sergienko, and M. C. Teich, “Role of entanglement in two-photon imaging,” Phys. Rev. Lett. 87, 123602 (2001). [CrossRef]  

8. R. S. Bennink, S. J. Bentley, and R. W. Boyd, “Two-photon coincidence imaging with a classical source,” Phys. Rev. Lett. 89, 113601 (2002). [CrossRef]  

9. A. Gatti, E. Brambilla, and L. Lugiato, “Entangled imaging and wave-particle duality: from the microscopic to the macroscopic realm,” Phys. Rev. Lett. 90, 133603 (2003). [CrossRef]  

10. R. S. Bennink, S. J. Bentley, R. W. Boyd, and J. C. Howell, “Quantum and classical coincidence imaging,” Phys. Rev. Lett. 92, 033601 (2004). [CrossRef]  

11. M. J. Padgett and R. W. Boyd, “An introduction to ghost imaging: quantum and classical,” Philos. Trans. R. Soc. A 375, 20160233 (2017). [CrossRef]  

12. R. S. Aspden, N. R. Gemmell, P. A. Morris, D. S. Tasca, L. Mertens, M. G. Tanner, R. A. Kirkwood, A. Ruggeri, A. Tosi, R. W. Boyd, G. S. Buller, R. H. Hadfield, and M. J. Padgett, “Photon-sparse microscopy: visible light imaging using infrared illumination,” Optica 2, 1049–1052 (2015). [CrossRef]  

13. N. Bornman, M. Agnew, F. Zhu, A. Vallés, A. Forbes, and J. Leach, “Ghost imaging using entanglement-swapped photons,” npj Quantum Inf. 5, 1–6 (2019). [CrossRef]  

14. N. Bornman, S. Prabhakar, A. Vallés, J. Leach, and A. Forbes, “Ghost imaging with engineered quantum states by Hong–Ou–Mandel interference,” New J. Phys. 21, 073044 (2019). [CrossRef]  

15. P. A. Morris, R. S. Aspden, J. E. Bell, R. W. Boyd, and M. J. Padgett, “Imaging with a small number of photons,” Nat. Commun. 6, 5913 (2015). [CrossRef]  

16. G. M. Gibson, S. D. Johnson, and M. J. Padgett, “Single-pixel imaging 12 years on: a review,” Opt. Express 28, 28190–28208 (2020). [CrossRef]  

17. M. P. Edgar, G. M. Gibson, and M. J. Padgett, “Principles and prospects for single-pixel imaging,” Nat. Photonics 13, 13–20 (2019). [CrossRef]  

18. P.-A. Moreau, E. Toninelli, P. A. Morris, R. S. Aspden, T. Gregory, G. Spalding, R. W. Boyd, and M. J. Padgett, “Resolution limits of quantum ghost imaging,” Opt. Express 26, 7528–7536 (2018). [CrossRef]  

19. E. Toninelli, P.-A. Moreau, T. Gregory, A. Mihalyi, M. Edgar, N. Radwell, and M. Padgett, “Resolution-enhanced quantum imaging by centroid estimation of biphotons,” Optica 6, 347–353 (2019). [CrossRef]  

20. D. Pelliccia, A. Rack, M. Scheel, V. Cantelli, and D. M. Paganin, “Experimental X-ray ghost imaging,” Phys. Rev. Lett. 117, 113902 (2016). [CrossRef]  

21. H. Yu, R. Lu, S. Han, H. Xie, G. Du, T. Xiao, and D. Zhu, “Fourier-transform ghost imaging with hard X rays,” Phys. Rev. Lett. 117, 113901 (2016). [CrossRef]  

22. S. Li, F. Cropp, K. Kabra, T. Lane, G. Wetzstein, P. Musumeci, and D. Ratner, “Electron ghost imaging,” Phys. Rev. Lett. 121, 114801 (2018). [CrossRef]  

23. A. Trimeche, C. Lopez, D. Comparat, and Y. Picard, “Ion and electron ghost imaging,” Phys. Rev. Res. 2, 043295 (2020). [CrossRef]  

24. J. H. Shapiro, “Computational ghost imaging,” Phys. Rev. A 78, 061802 (2008). [CrossRef]  

25. B. I. Erkmen and J. H. Shapiro, “Ghost imaging: from quantum to classical to computational,” Adv. Opt. Photonics 2, 405–450 (2010). [CrossRef]  

26. O. Katz, Y. Bromberg, and Y. Silberberg, “Compressive ghost imaging,” Appl. Phys. Lett. 95, 93–96 (2009). [CrossRef]  

27. P. Zerom, K. W. C. Chan, J. C. Howell, and R. W. Boyd, “Entangled-photon compressive ghost imaging,” Phys. Rev. A 84, 061804 (2011). [CrossRef]  

28. M. Lyu, W. Wang, H. Wang, H. Wang, G. Li, N. Chen, and G. Situ, “Deep-learning-based ghost imaging,” Sci. Rep. 7, 17865 (2017). [CrossRef]  

29. T. Shimobaba, Y. Endo, T. Nishitsuji, T. Takahashi, Y. Nagahama, S. Hasegawa, M. Sano, R. Hirayama, T. Kakue, A. Shiraki, and T. Ito, “Computational ghost imaging using deep learning,” Opt. Commun. 413, 147–151 (2018). [CrossRef]  

30. S. Rizvi, J. Cao, K. Zhang, and Q. Hao, “DeepGhost: real-time computational ghost imaging via deep learning,” Sci. Rep. 10, 11400 (2020). [CrossRef]  

31. V. Rodríguez-Fajardo, J. Pinnell, and A. Forbes, “Towards time-efficient ghost imaging,” J. Mod. Opt. 67, 1176–1183 (2020). [CrossRef]  

32. C. Moodley, B. Sephton, V. Rodríguez-Fajardo, and A. Forbes, “Deep learning early stopping for non-degenerate ghost imaging,” Sci. Rep. 11, 8561 (2021). [CrossRef]  

33. A. Chiuri, I. Gianani, V. Cimini, L. De Dominicis, M. G. Genoni, and M. Barbieri, “Ghost imaging as loss estimation: quantum versus classical schemes,” Phys. Rev. A 105, 013506 (2022). [CrossRef]  

34. Z. Ou and L. Mandel, “Violation of Bell’s inequality and classical probability in a two-photon correlation experiment,” Phys. Rev. Lett. 61, 50–53 (1988). [CrossRef]  

35. A. Mair, A. Vaziri, G. Weihs, and A. Zeilinger, “Entanglement of the orbital angular momentum states of photons,” Nature 412, 313–316 (2001). [CrossRef]  

36. J. Torres, A. Alexandrescu, and L. Torner, “Quantum spiral bandwidth of entangled two-photon states,” Phys. Rev. A 68, 050301 (2003). [CrossRef]  

37. H. D. L. Pires, H. Florijn, and M. Van Exter, “Measurement of the spiral spectrum of entangled two-photon states,” Phys. Rev. Lett. 104, 020505 (2010). [CrossRef]  

38. J. Romero, D. Giovannini, S. Franke-Arnold, S. Barnett, and M. Padgett, “Increasing the dimension in high-dimensional two-photon orbital angular momentum entanglement,” Phys. Rev. A 86, 012334 (2012). [CrossRef]  

39. J. Svozilík, J. Peřina Jr., and J. P. Torres, “High spatial entanglement via chirped quasi-phase-matched optical parametric down-conversion,” Phys. Rev. A 86, 052318 (2012). [CrossRef]  

40. M. G. McLaren, F. S. Roux, and A. Forbes, “Realising high-dimensional quantum entanglement with orbital angular momentum,” S. Afr. J. Sci. 111, 1–9 (2015). [CrossRef]  

41. A. Forbes and I. Nape, “Quantum mechanics with patterns of light: progress in high dimensional and multidimensional entanglement with structured light,” AVS Quantum Sci. 1, 011701 (2019). [CrossRef]  

42. G. Scarcelli, V. Berardi, and Y. Shih, “Can two-photon correlation of chaotic light be considered as correlation of intensity fluctuations?” Phys. Rev. Lett. 96, 063602 (2006). [CrossRef]  

43. C. Couteau, “Spontaneous parametric down-conversion,” Contemp. Phys. 59, 291–304 (2018). [CrossRef]  

44. A. Gatti, E. Brambilla, M. Bache, and L. A. Lugiato, “Ghost imaging with thermal light: comparing entanglement and classicalcorrelation,” Phys. Rev. Lett. 93, 093602 (2004). [CrossRef]  

45. A. Einstein, B. Podolsky, and N. Rosen, “Can quantum-mechanical description of physical reality be considered complete?” Phys. Rev. 47, 777–780 (1935). [CrossRef]  

46. R. S. Aspden, D. S. Tasca, R. W. Boyd, and M. J. Padgett, “EPR-based ghost imaging using a single-photon-sensitive camera,” New J. Phys. 15, 073032 (2013). [CrossRef]  

47. P.-A. Moreau, F. Devaux, and E. Lantz, “Einstein-Podolsky-Rosen paradox in twin images,” Phys. Rev. Lett. 113, 160401 (2014). [CrossRef]  

48. R. Weissleder, “A clearer vision for in vivo imaging,” Nat. Biotechnol. 19, 316–317 (2001). [CrossRef]  

49. C. Schnell, “Quantum imaging in biological samples,” Nat. Methods 16, 214 (2019). [CrossRef]  

50. R. S. Aspden, D. S. Tasca, A. Forbes, R. W. Boyd, and M. J. Padgett, “Experimental demonstration of Klyshkos advanced-wave picture using a coincidence-count based, camera-enabled imaging system,” J. Mod. Opt. 61, 547–551 (2014). [CrossRef]  

51. T. Pittman, D. Strekalov, D. Klyshko, M. Rubin, A. Sergienko, and Y. Shih, “Two-photon geometric optics,” Phys. Rev. A 53, 2804–2815 (1996). [CrossRef]  

52. F. M. Miatto, H. Di Lorenzo Pires, S. M. Barnett, and M. P. Van Exter, “Spatial Schmidt modes generated in parametric down-conversion,” Eur. Phys. J. D 66, 1–11 (2012). [CrossRef]  

53. O. Benson, C. Santori, M. Pelton, and Y. Yamamoto, “Regulated and entangled photons from a single quantum dot,” Phys. Rev. Lett. 84, 2513–2516 (2000). [CrossRef]  

54. R. J. Young, R. M. Stevenson, P. Atkinson, K. Cooper, D. A. Ritchie, and A. J. Shields, “Improved fidelity of triggered entangled photons from single quantum dots,” New J. Phys. 8, 29 (2006). [CrossRef]  

55. T. Chung, G. Juska, S. Moroni, A. Pescaglini, A. Gocalinska, and E. Pelucchi, “Selective carrier injection into patterned arrays of pyramidal quantum dots for entangled photon light-emitting diodes,” Nat. Photonics 10, 782–787 (2016). [CrossRef]  

56. J. Liu, R. Su, Y. Wei, B. Yao, S. F. C. de Silva, Y. Yu, J. Iles-Smith, K. Srinivasan, A. Rastelli, J. Li, and X. Wang, “A solid-state source of strongly entangled photon pairs with high brightness and indistinguishability,” Nat. Nanotechnol. 14, 586–593 (2019). [CrossRef]  

57. J.-W. Pan, Z.-B. Chen, C.-Y. Lu, H. Weinfurter, A. Zeilinger, and M. Żukowski, “Multiphoton entanglement and interferometry,” Rev. Mod. Phys. 84, 777–838 (2012). [CrossRef]  

58. G. B. Lemos, M. Lahiri, S. Ramelow, R. Lapkiewicz, and W. Plick, “Quantum imaging and metrology with undetected photons: a tutorial,” arXiv, arXiv:2202.09898 (2022). [CrossRef]  

59. N. Bornman, W. Tavares Buono, M. Lovemore, and A. Forbes, “Optimal pump shaping for entanglement control in any countable basis,” Adv. Quantum Technol. 4, 2100066 (2021). [CrossRef]  

60. G. Barreto Lemos, M. Lahiri, S. Ramelow, R. Lapkiewicz, and W. N. Plick, “Quantum imaging and metrology with undetected photons: tutorial,” J. Opt. Soc. Am. B 39, 2200–2228 (2022). [CrossRef]  

61. J. Pinnell, A. Klug, and A. Forbes, “Spatial filtering of structured light,” Am. J. Phys. 88, 1123–1131 (2020). [CrossRef]  

62. P. G. Kwiat, E. Waks, A. G. White, I. Appelbaum, and P. H. Eberhard, “Ultrabright source of polarization-entangled photons,” Phys. Rev. A 60, R773 (1999). [CrossRef]  

63. C. Rosales-Guzmán and A. Forbes, How to Shape Light with Spatial Light Modulators (SPIE, 2017).

64. A. F. Abouraddy, P. R. Stone, A. V. Sergienko, B. E. Saleh, and M. C. Teich, “Entangled-photon imaging of a pure phase object,” Phys. Rev. Lett. 93, 19–22 (2004). [CrossRef]  

65. M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag. 25, 83–91 (2008). [CrossRef]  

66. F. Ferri, D. Magatti, L. A. Lugiato, and A. Gatti, “Differential ghost imaging,” Phys. Rev. Lett. 104, 253603 (2010). [CrossRef]  

67. W. K. Pratt, J. Kane, and H. C. Andrews, “Hadamard transform image coding,” Proc. IEEE 57, 58–68 (1969). [CrossRef]  

68. N. J. Sloane and M. Harwit, “Masks for Hadamard transform optics, and weighing designs,” Appl. Opt. 15, 107–114 (1976). [CrossRef]  

69. Y. Y. Schechner, S. K. Nayar, and P. N. Belhumeur, “A theory of multiplexed illumination,” in ICCV (2003), Vol. 3, pp. 808–815.

70. Y. Y. Schechner, S. K. Nayar, and P. N. Belhumeur, “Multiplexing for optimal lighting,” IEEE Trans. Pattern Anal. Mach. Intell. 29, 1339–1354 (2007). [CrossRef]  

71. Z. Zhang and J. Zhong, “Three-dimensional single-pixel imaging with far fewer measurements than effective image pixels,” Opt. Lett. 41, 2497–2500 (2016). [CrossRef]  

72. C. F. Higham, R. Murray-Smith, M. J. Padgett, and M. P. Edgar, “Deep learning for real-time single-pixel video,” Sci. Rep. 8, 2369 (2018). [CrossRef]  

73. Z. Niu, J. Shi, L. Sun, Y. Zhu, J. Fan, and G. Zeng, “Photon-limited face image super-resolution based on deep learning,” Opt. Express 26, 22773–22782 (2018). [CrossRef]  

74. A. Jain, Y. Chen, and M. Demirkus, “Pores and ridges: fingerprint matching using level 3 features,” in Proceedings - International Conference on Pattern Recognition (2006), Vol. 4, pp. 477–480.

75. H. C. Liu, “Imaging reconstruction comparison of different ghost imaging algorithms,” Sci. Rep. 10, 1 (2020). [CrossRef]  

76. B. Ndagano, H. Defienne, A. Lyons, I. Starshynov, F. Villa, S. Tisa, and D. Faccio, “Imaging and certifying high-dimensional entanglement with a single-photon avalanche diode camera,” npj Quantum Inf. 6, 1–8 (2020). [CrossRef]  

77. H. Defienne, B. Ndagano, A. Lyons, and D. Faccio, “Polarization entanglement-enabled quantum holography,” Nat. Phys. 17, 591–597 (2021). [CrossRef]  

78. H. Defienne, P. Cameron, B. Ndagano, A. Lyons, M. Reichert, J. Zhao, E. Charbon, J. W. Fleischer, and D. Faccio, “Pixel super-resolution using spatially-entangled photon pairs,” arXiv, arXiv:2105.10351 (2021). [CrossRef]  

79. B. Ndagano, H. Defienne, D. Branford, Y. D. Shah, A. Lyons, N. Westerberg, E. M. Gauger, and D. Faccio, “Hong-Ou-Mandel microscopy,” arXiv, arXiv–2108 (2021).

80. B. Sun, S. S. Welsh, M. P. Edgar, J. H. Shapiro, and M. J. Padgett, “Normalized ghost imaging,” Opt. Express 20, 16892–16901 (2012). [CrossRef]  

81. H. Wu, R. Wang, G. Zhao, H. Xiao, J. Liang, D. Wang, X. Tian, L. Cheng, and X. Zhang, “Deep-learning denoising computational ghost imaging,” Opt. Lasers Eng. 134, 106183 (2020). [CrossRef]  

82. C. Moodley and A. Forbes, “Super-resolved quantum ghost imaging,” Sci. Rep. 12, 10346 (2022). [CrossRef]  

83. H.-C. Liu, H. Yang, J. Xiong, and S. Zhang, “Positive and negative ghost imaging,” Phys. Rev. Appl. 12, 034019 (2019). [CrossRef]  

84. Z. Wang and A. C. Bovik, “A universal image quality index,” IEEE Signal Process. Lett. 9, 81–84 (2002). [CrossRef]  

85. C. Moodley, A. Ruget, J. Leach, and A. Forbes, “Time-efficient object recognition in quantum ghost imaging,” in Advanced Quantum Technologies (2022), paper 2200109.

86. L. G. Wright, T. Onodera, M. M. Stein, T. Wang, D. T. Schachter, Z. Hu, and P. L. McMahon, “Deep physical neural networks trained with backpropagation,” Nature 601, 549–555 (2022). [CrossRef]  

87. M. Stern and A. Murugan, “Learning without neurons in physical systems,” Annu. Rev. Condens. Matter Phys. 14, 417–441 (2023). [CrossRef]  

88. H. Zhang, L. Wan, T. Haug, W.-K. Mok, S. Paesani, Y. Shi, H. Cai, L. K. Chin, M. F. Karim, L. Xiao, X. Luo, F. Gao, B. Dong, S. Assad, M. S. Kim, A. Laing, L. C. Kwek, and A. Q. Liu, “Resource-efficient high-dimensional subspace teleportation with a quantum autoencoder,” Sci. Adv. 8, eabn9783 (2022). [CrossRef]  

89. S. Dillavou, M. Stern, A. J. Liu, and D. J. Durian, “Demonstration of decentralized physics-driven learning,” Phys. Rev. Appl. 18, 014040 (2022). [CrossRef]  

90. B. Jalali, Y. Zhou, A. Kadambi, and V. Roychowdhury, “Physics-AI symbiosis,” Mach. Learn.: Sci. Technol. 3, 041001 (2022). [CrossRef]  

91. Y. Xiao, X. Peng, H. Tang, and Y. Tang, “Optical neural network with complementary decomposition to overcome the phase insensitive constrains,” IEEE J. Sel. Top. Quantum Electron. 29, 6100708 (2023). [CrossRef]  

92. F. Wang, H. Wang, H. Wang, G. Li, and G. Situ, “Learning from simulation: an end-to-end deep-learning approach for computational ghost imaging,” Opt. Express 27, 25560–25572 (2019). [CrossRef]  

93. R. Shang, K. Hoffer-Hawlik, F. Wang, G. Situ, and G. P. Luke, “Two-step training deep learning framework for computational imaging without physics priors,” Opt. Express 29, 15239–15254 (2021). [CrossRef]  

94. A. Goy, G. Rughoobur, S. Li, K. Arthur, A. I. Akinwande, and G. Barbastathis, “High-resolution limited-angle phase tomography of dense layered objects using deep neural networks,” Proc. Natl. Acad. Sci. USA 116, 19848–19856 (2019). [CrossRef]  

95. A. Goy, K. Arthur, S. Li, and G. Barbastathis, “Low photon count phase retrieval using deep learning,” Phys. Rev. Lett. 121, 243902 (2018). [CrossRef]  

96. F. Wang, Y. Bian, H. Wang, M. Lyu, G. Pedrini, W. Osten, G. Barbastathis, and G. Situ, “Phase imaging with an untrained neural network,” Light Sci. Appl. 9, 77 (2020). [CrossRef]  

97. R. Iten, T. Metger, H. Wilming, L. Del Rio, and R. Renner, “Discovering physical concepts with neural networks,” Phys. Rev. Lett. 124, 010508 (2020). [CrossRef]  

98. F. Wang, C. Wang, C. Deng, S. Han, and G. Situ, “Single-pixel imaging using physics enhanced deep learning,” Photon. Res. 10, 104–110 (2022). [CrossRef]  

99. X. Zhang, C. Deng, C. Wang, F. Wang, and G. Situ, “VGenNet: variable generative prior enhanced single pixel imaging,” ACS Photon. 7, 2363–2373 (2023). [CrossRef]  

100. J. Li, B. Wu, T. Liu, and Q. Zhang, “URNet: high-quality single-pixel imaging with untrained reconstruction network,” Opt. Lasers Eng. 166, 107580 (2023). [CrossRef]  

101. F. Li, Y. Sun, and X. Zhang, “Deep-learning-based quantum imaging using noon states,” J. Phys. Commun. 6, 035005 (2022). [CrossRef]  

Data availability

Data underlying the results presented in this paper are available from the authors upon reasonable request.



Figures (23)

Fig. 1. Imaging systems. (a) Classically, the photons used to image an object must physically interact with the object, travel to the image plane, and transfer information by virtue of position correlations established by the optical system itself. (b) Position correlations are established by virtue of quantum entanglement: the correlations are innate to the entangled photon pair, which shares quantum correlations at the source.
Fig. 2. Diagrams of SPDC interaction geometries for (a) collinear and (b) non-collinear phase-matching. Insets show the required momentum conservation.
Fig. 3. Conceptual sketch of quantum ghost imaging optical setups in the position and momentum configurations. Entangled photons, generated by a high-energy pump photon at a non-linear crystal (NLC) by spontaneous parametric downconversion (SPDC), are spatially separated along two arms. One photon interacts with the object and is collected by a bucket detector—the idler photon. The other photon (signal photon) is collected by a spatially resolving detector consisting of a projective mask and a bucket detector. Each detector is connected to a coincidence counting (CC) device to perform coincidence measurements. (a) Illustration of a ghost imaging optical setup in which the object and projective mask are placed in the near field of the crystal. (b) Illustration of a ghost imaging optical setup in which the object and projective mask are placed in the far field of the crystal. The insets show the respective ghost images obtained for the different experimental configurations, taken from [50]. ${f_i}$ is indicative of the different focal lengths for lenses required for either the position or momentum configuration.
Fig. 4. Transmission graph of a bandpass filter centered at $\lambda = 810\;{\rm nm} $ with a 10 nm FWHM.
Fig. 5. Beam profiles of a diode laser centered at a wavelength of $\lambda = 405\;{\rm nm} $, (a) before and (b) after spatial filtering.
Fig. 6. Methods to achieve optimal phase-matching by (a) angle tuning and (b) periodic poling (temperature tuning).
Fig. 7. Far-field images of the SPDC transition (from left to right) from a collinear geometry to a non-collinear geometry for a Type-I BBO sandwich NLC tilted at different angles. In this case, the tilt angle determines the geometry.
Fig. 8. Image of a typical Holoeye PLUTO 2 spatial light modulator. The inset shows the schematic structure of the liquid crystal display for a reflective SLM, taken from [63].
Fig. 9. To generate the hologram to be displayed on the SLM, the digital object (or projective mask) is combined with a phase grating, also known as a blazed grating.
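As a numerical illustration of the caption above, the sketch below combines a binary projective mask with a blazed (sawtooth) phase grating wrapped to $[0, 2\pi)$, as is common for phase-only SLMs. The grating period `period_px` and the multiply-then-wrap combination are illustrative assumptions, not the authors' exact hologram recipe.

```python
import numpy as np

def blazed_hologram(mask, period_px=8):
    """Combine a binary projective mask with a blazed (sawtooth) phase
    grating, wrapped to [0, 2*pi). Pixels where the mask is 0 carry no
    grating, so their light is not diffracted into the first order."""
    h, w = mask.shape
    x = np.arange(w)
    ramp = 2 * np.pi * (x % period_px) / period_px  # sawtooth phase ramp
    grating = np.tile(ramp, (h, 1))
    return np.mod(grating * mask, 2 * np.pi)

mask = np.zeros((32, 32))
mask[8:24, 8:24] = 1.0          # toy square mask for demonstration
holo = blazed_hologram(mask)    # phase map to display on the SLM
```

Light hitting the masked-off (zero-phase) regions remains in the undiffracted zeroth order, so only the mask-shaped portion of the beam reaches the detector via the first diffraction order.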
Fig. 10. The digital object to be imaged is superimposed on a Gaussian distribution, which is representative of the SPDC distribution containing the idler photons (left). A projective mask is superimposed on a Gaussian distribution representative of the SPDC distribution containing the signal photons (right). The mask is smaller than the Gaussian distribution, and the object is smaller than the mask. In this example, the SPDC geometry is collinear. Both SPDC distributions are the same size.
Fig. 11. Examples of $32 \times 32\;{\rm pixel}$ resolution random patterned projective mask types used to spatially resolve the signal photon.
Fig. 12. Examples of $32 \times 32\;{\rm pixel}$ resolution Walsh-Hadamard patterned projective mask types used to spatially resolve the signal photon.
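Walsh-Hadamard patterns like those shown here can be generated from the rows of a Sylvester-type Hadamard matrix, each row reshaped into a 2D pattern. A minimal sketch follows; the 0/1 display convention is an assumption for illustration (experimentally, the $-1$ entries are typically recovered by also displaying a complementary pattern).

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def walsh_hadamard_masks(res):
    """Each row of the (res*res) x (res*res) Hadamard matrix, reshaped to
    res x res, gives one projective pattern; -1 entries are mapped to 0."""
    H = hadamard(res * res)
    return [(row.reshape(res, res) + 1) // 2 for row in H]

masks = walsh_hadamard_masks(4)  # 16 binary 4x4 patterns
```

For a $32 \times 32$ pixel mask set as in the figure, `walsh_hadamard_masks(32)` yields 1024 mutually orthogonal patterns, which is what allows a complete (non-overcomplete) reconstruction.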
Fig. 13. A simulated image ($I(x,y)$), with resolution of $32 \times 32\;{\rm pixels}$, is reconstructed as a linear combination of each projective mask (${P_i}(x,y)$) weighted by a coefficient determined by the detection probability. The calculated detection probability is proportional to the experimental coincidence counts.
Fig. 14. Schematic diagram of the implemented quantum optical setup for degenerate quantum ghost imaging. Degenerate entangled photons are produced at the NLC. A bandpass filter (BPF) centered at $\lambda = 810\;{\rm nm} $ filters out any unconverted photons while a half-wave plate (HWP) rotates the polarization for optimal modulation by the SLMs. A 50:50 beam splitter is used to spatially separate the entangled signal and idler photons. Each photon impinges on an SLM displaying either the object or projective mask. The photons are collected by coupling each beam to a fiber connected to an APD, and the photons that are detected in coincidence are counted by a coincidence counting device (CC). ${{L}_i}$ represents lenses.
Fig. 15. Histogram of the photons detected in coincidence due to their time correlation. The entangled photons share the same arrival-time difference, so the counts in that specific time bin of the histogram build up, while uncorrelated or ambient photons are spread randomly across all time bins and do not contribute to any specific signal buildup.
Fig. 16. Artifact arising from the TGI reconstruction algorithm when coupled with the Walsh-Hadamard mask type. (a) Image reconstruction by a $32 \times 32\;{\rm pixel}$ Walsh-Hadamard mask type, before contrast adjustment. The zoomed-in image shows the mask artifact visible at the top-right corner in the form of a single activated pixel. (b) Image reconstruction by a $32 \times 32\;{\rm pixel}$ Walsh-Hadamard mask type, after contrast adjustment. Insets in each reconstructed image show the digital object used in the experiment.
Fig. 17. When TGI is coupled with the random mask type, no artifacts are present; as such, there is no visual difference between (a) the raw reconstructed image and (b) the image that has undergone contrast adjustment. The insets in each reconstructed image show the digital object used in the experiment.
Fig. 18. Contrast adjusted reconstructed images for (a) input digital objects, using $32 \times 32\;{\rm pixel}$ (b) random and (c) Walsh-Hadamard masks for reconstruction algorithms: TGI, TGIDC, ASGI, ASGIDC, DGI, LGI, and NGI, respectively.
Fig. 19. Statistical tests conducted on the contrast adjusted images showing (a) the fidelity and (b) the PSNR per image, per algorithm, for both random and Walsh-Hadamard mask types.
Fig. 20. Results of the two-step deep learning early stopping approach. Reconstructed raw images for object four at 20% intervals of the image reconstruction time, respectively. The aforementioned reconstructed raw images are then passed through the autoencoder for image enhancement. Displayed are the corresponding enhanced images, followed by the corresponding confidence predictions for all digits and iterations. The vertical dashed line indicates the point at which the early stopping criteria are achieved. Taken from [32].
Fig. 21. Experimental image reconstructions for digital objects (a) 2 and (b) 4, starting at 50 masks and continuing in intervals of 100 masks using the Walsh-Hadamard basis (left to right). Taken from [85].
Fig. 22. Confidence predictions of the logistic regression for input digits (a) 2 and (b) 4, normalized to the same scale for comparative purposes. The dashed lines represent the point at which a confidence prediction greater than 75% is achieved for the input digit of interest. Taken from [85].
Fig. 23. Results of the image reconstructions for both digits that were reconstructed, denoised, and super-resolved. (a) Digital objects that were used in the experiment. (b) Results of the image reconstruction by random masks—from left to right, the image obtained from the experiment, the denoised image from the GAN, and the super-resolved image output from the SR AutoE. (c) Results of the image reconstruction by Walsh-Hadamard masks—from left to right, the image obtained from the experiment, the denoised image from the GAN, and the super-resolved image output from the SR AutoE. Taken from [82].

Tables (2)

Table 1. Summary of the Different Phase-Matching Regimes for SPDC, Assuming That the Pump Is Horizontally (H) Polarized and Whether Collinear or Non-Collinear Geometries Are Possible

Table 2. Specifications of Commonly Used Avalanche Photodiodes (APDs)

Equations (23)

(1) \( \frac{1}{s_o} + \frac{1}{s_i} = \frac{1}{f}, \)
(2) \( \omega_p = \omega_s + \omega_i. \)
(3) \( \mathbf{k}_p = \mathbf{k}_s + \mathbf{k}_i, \)
(4) \( |\Psi\rangle = \sum_{s,i} a_{s,i}\, |s\rangle |i\rangle, \)
(5) \( \sigma_x \approx \frac{2 f \lambda_p}{\pi w_p}, \)
(6) \( \mathrm{FOV}_x \approx f \sqrt{\frac{\lambda_p}{L}}, \)
(7) \( N = \left(\frac{\mathrm{FOV}_x}{\sigma_x}\right)^{2} \approx \frac{\pi^2 w_p^2}{4 L \lambda_p}. \)
(8) \( \eta = \int f(x)\, g^2(x)\, M_p(x)\, M_{\mathrm{object}}(x)\, M_{\mathrm{mask}}(x)\, dx, \)
(9) \( g(x) \propto \exp\!\left(-(r/w_0)^2\right). \)
(10) \( |\eta_i|^2 \propto c_i, \)
(11) \( P(k) = \frac{N^k e^{-N}}{k!}, \)
(12) \( \sigma_{P(k)} = \sqrt{N}, \)
(13) \( N_{\mathrm{Acc}} = C_1 \times C_2 \times \Delta t, \)
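The accidental-coincidence estimate $N_{\rm Acc} = C_1 C_2 \Delta t$ and the Poisson uncertainty $\sqrt{N}$ above are easy to sanity-check numerically. The singles rates and coincidence window below are assumed, illustrative values, not figures from the experiment.

```python
# Estimate the accidental-coincidence background and the Poisson
# uncertainty on a measured count, for illustrative (assumed) rates.
import math

def accidentals(singles_1, singles_2, window_s):
    """N_Acc = C1 * C2 * dt for two uncorrelated detection streams."""
    return singles_1 * singles_2 * window_s

C1, C2 = 50_000, 50_000   # singles rates on each APD (counts/s, assumed)
dt = 1e-9                 # 1 ns coincidence window (assumed)
n_acc = accidentals(C1, C2, dt)  # -> 2.5 accidentals per second
sigma = math.sqrt(n_acc)         # Poisson shot noise on that count
```

Narrowing the coincidence window $\Delta t$ linearly suppresses accidentals, which is why the histogram bin width in Fig. 15 directly sets the signal-to-background ratio.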
(14) \( I(x,y) = \sum_{i=1}^{N} c_i\, P_i(x,y), \)
(15) \( I(x,y) = \sum_{i=1}^{N} (c_i - a_i)\, P_i(x,y), \)
(16) \( I(x,y) = \sum_{i=1}^{N} (c_i - \bar{c}_N)\, P_i(x,y), \)
(17) \( I(x,y) = \sum_{i=1}^{N} \big((c_i - \bar{c}_N) - (a_i - \bar{a}_N)\big)\, P_i(x,y), \)
(18) \( I(x,y) = \sum_{i=1}^{N} \left(c_i - \frac{\bar{c}_N}{\bar{R}_N} R_i\right) P_i(x,y), \)
(19) \( I(x,y) = \sum_{i=1}^{N} \log\!\left(\frac{c_i}{\bar{c}_N}\right) P_i(x,y), \)
(20) \( I(x,y) = \sum_{i=1}^{N} \left(\frac{c_i}{R_i} - \frac{\bar{c}_N}{\bar{R}_N}\right) P_i(x,y). \)
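Several of the reconstruction algorithms above — the traditional sum $I = \sum_i c_i P_i$, its mean-subtracted variant, and differential ghost imaging with the reference signal $R_i$ — can be exercised on simulated data. This is a toy numerical sketch, not the experimental pipeline: the coincidence counts are simulated as noiseless overlaps of random binary masks with a square test object.

```python
import numpy as np

rng = np.random.default_rng(0)
res, n_masks = 8, 4000
obj = np.zeros((res, res))
obj[2:6, 2:6] = 1.0                                  # toy square object
masks = rng.integers(0, 2, size=(n_masks, res, res)).astype(float)

# Simulated coincidence count per mask: overlap of mask and object.
c = np.array([np.sum(P * obj) for P in masks])
R = masks.sum(axis=(1, 2))                           # total mask transmission

tgi = np.tensordot(c, masks, axes=1)                 # I = sum_i c_i P_i
tgi_dc = np.tensordot(c - c.mean(), masks, axes=1)   # mean-subtracted variant
dgi = np.tensordot(c - (c.mean() / R.mean()) * R, masks, axes=1)  # differential GI
```

With random masks, the raw sum `tgi` sits on a large uniform offset, whereas the mean-subtracted and differential estimates remove that background, which is why the differential algorithms converge with fewer patterns.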
(21) \( f = \left[1 - \frac{1}{K}\sum_i \left(I_i^{\mathrm{img}} - I_i^{\mathrm{ref}}\right)^2\right]^{1/2} \left[\frac{\sigma_{xy}}{\sigma_x \sigma_y} + 1\right] \cdot \left[\frac{2\bar{x}\bar{y}}{\bar{x}^2 + \bar{y}^2}\right] \left[\frac{2\sigma_x \sigma_y}{\sigma_x^2 + \sigma_y^2}\right], \)
(22) \( \bar{x} = \frac{1}{K}\sum_{i=1}^{K} x_i, \quad \bar{y} = \frac{1}{K}\sum_{i=1}^{K} y_i, \quad \sigma_x^2 = \frac{1}{K-1}\sum_{i=1}^{K}(x_i - \bar{x})^2, \quad \sigma_y^2 = \frac{1}{K-1}\sum_{i=1}^{K}(y_i - \bar{y})^2, \quad \sigma_{xy} = \frac{1}{K-1}\sum_{i=1}^{K}(x_i - \bar{x})(y_i - \bar{y}), \)
(23) \( \mathrm{PSNR} = 10 \log_{10}\!\left(\mathrm{peakval}^2 / \mathrm{MSE}\right), \)
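The PSNR metric above is a one-liner in practice. A minimal sketch, assuming 8-bit image data so that the peak value is 255:

```python
import numpy as np

def psnr(img, ref, peak=255.0):
    """PSNR = 10 * log10(peakval^2 / MSE) between a reconstruction and
    its reference, both assumed to share the same dynamic range."""
    mse = np.mean((img.astype(float) - ref.astype(float)) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

ref = np.zeros((32, 32))
img = ref + 1.0   # every pixel off by 1, so MSE = 1
# psnr(img, ref) -> 10 * log10(255**2) ≈ 48.13 dB
```

Note that PSNR diverges for identical images (MSE = 0), so it is only meaningful for comparing imperfect reconstructions, as done for the per-algorithm comparison in Fig. 19.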