Optica Publishing Group

Quantum ghost imaging using asynchronous detection

Open Access

Abstract

We present first results of a novel setup for quantum ghost imaging based on asynchronous single photon timing using single photon avalanche diode (SPAD) detectors. This scheme enables photon pairing with arbitrary path length difference and therefore obviates the optical delay lines required by current quantum ghost imaging setups [Nat. Commun. 6, 5913 (2015)]. It is also, to our knowledge, the first quantum ghost imaging setup to allow three-dimensional imaging.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. INTRODUCTION

Quantum ghost imaging is a novel imaging technique, first realized by Shih et al. [1], that allows the illumination of an object to be separated from its image acquisition. It has gathered increasing interest in recent years due to the very low number of photons required, the possibility to spectrally separate the imaging and illumination wavelengths, and some advantages of the quantum regime over classical systems [2–5]. In quantum ghost imaging, two entangled photons, usually obtained through spontaneous parametric downconversion (SPDC), are separated by either polarization or wavelength. These entangled photons share their time and location of creation, while their momenta are anti-correlated due to momentum conservation [6]. One of them, the so-called idler, interacts with an object and is afterward detected by a non-spatially resolving detector, while the second one, the signal, is imaged onto a spatially resolving detector. Due to the spatial correlations of the signal and idler beams, arising from the correlations of the entangled photons, an image of the object can be obtained by matching the signal photons with the appropriate, entangled idler photons via their temporal correlation. This technique thus enables both imaging with very few photons and imaging at wavelengths where no mature or inexpensive camera technology is currently available.

 figure: Fig. 1.

Fig. 1. (a) SPAD array camera used in this work. The SPAD array detector is mounted on a PCB with an underlying FPGA board for signal processing and detector control (not shown in this view). The pins beneath the chip are used to connect the trigger. (b) Layout of the SPAD detector. The detector consists of two lines of 192 pixels each. Each pixel consists of four vertically aligned SPADs, quenching, reset, and pixel electronics, coincidence detection electronics, and individual TDC electronics including dedicated memory. The coincidence detection electronics is not to be confused with the coincidence detection of this scheme; it is a feature used for background rejection in classical LiDAR [9] and is irrelevant for this setup.


Current setups mainly use intensified charge-coupled devices (ICCDs) or similar camera technologies, which allow high resolution single photon imaging of the signal beam. The matching of signal and idler photons is realized by triggering the camera or its intensifier upon detection of the idler, thus requiring a temporal delay of the signal photon. In order to reconstruct the image, however, the signal beam has to be imaged correctly onto the camera, so the temporal delay must be implemented while preserving the image of the signal beam. A corresponding setup was implemented by Padgett et al. [7], where the temporal delay is realized by a fixed image-preserving delay line. In order to realize three-dimensional (3D) imaging with such a scheme, the delay line has to be at least twice as long as the distance from the object to the single pixel detector, making it difficult to realize for long distances. In addition, it is difficult to adjust the length of the delay line due to image preservation constraints, which complicates adapting the imaging distance. The depth resolution of these setups depends on the gating time of the camera, which is usually on the order of a few nanoseconds, corresponding to about half a meter in air. Due to these drawbacks, current quantum ghost imaging setups using synchronized detectors cannot efficiently realize either remote sensing or 3D imaging.

We implemented a novel system by replacing the widely used ICCD cameras with an array of single photon avalanche diodes (SPADs) [2]. SPADs allow the direct measurement of single photons with timing resolutions in the picosecond range and do not rely on image intensifiers or other single photon amplifiers. In order to realize a time resolving camera with this technology, dedicated timing circuitry, so-called time-to-digital converters (TDCs), has to be integrated for every row, column, or even every individual pixel, depending on the application and constraints. In our setup, we used a SPAD array with an individual TDC for every pixel, in order to resolve every detected photon in both time and space, as shown in Fig. 1. With this scheme the signal photon can be detected before the idler photon and afterward be matched by comparing the detection times of both photons, thus eliminating the need to synchronize the detectors. In this work we used a setup in transmission in order to prove the concept and optimize its parameters; however, it can easily be adapted to work in reflection, thus enabling 3D imaging.

2. SPAD DETECTORS

All detectors in our system are implemented as SPADs, which are avalanche photodiodes operated above the breakdown voltage of the ${p} - {n}$ junction, enabling a single incident photon to trigger an electrical avalanche that results in a measurable signal [8]. In this system we used two SPAD systems: a commercially available single pixel detector, realizing the so-called bucket detector, and a spatially resolving SPAD array detector, which works as a camera.

The SPAD array detector used in the setup was developed by Fraunhofer IMS as a sensor for flash-LiDAR applications, e.g., in automotive applications [9,10]. Because its standard operation is direct time-of-flight measurement, the sensor is not designed to be triggered externally but instead outputs a trigger at the start of each measurement frame, usually used to trigger a pulsed laser. The detector is able to operate in both timing and counting mode and consists of two lines of 192 pixels each, with every pixel containing four evenly spaced, vertically aligned SPADs. Each pixel has its own dedicated TDC, allowing timing resolution for individual pixels. It was manufactured in a customized 0.35 µm CMOS process, which provides SPADs with a low average dark count rate of ${0.1}\;{{\rm cps/\unicode{x00B5}{\rm m}}^2}$ and, hence, a low sensor noise level [11]. The TDC design enables a timing resolution of 312.5 ps and, in combination with an 8-bit in-pixel counter, a frame length of 1.28 µs. The fill factor is limited by the circuitry’s space consumption and amounts to 5.6% with pixel dimensions of ${40.5}\;{\unicode{x00B5}{\rm m}} \times {200}\;{\unicode{x00B5}{\rm m}}$ and a SPAD diameter of 12 µm.
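The stated numbers are consistent with a coarse/fine timestamp scheme common in SPAD TDC designs. The sketch below is an illustration, not the actual chip architecture: only the 312.5 ps resolution, the 8-bit counter, and the 1.28 µs frame length are from the text; the assumed 16 fine bins per coarse tick is one split that reproduces them.

```python
# Hypothetical coarse/fine timestamp composition reproducing the stated figures.
FINE_LSB = 312.5e-12                      # TDC resolution (stated in the text)
FINE_BINS = 16                            # assumed fine bins per coarse clock tick
COARSE_BITS = 8                           # in-pixel counter width (stated)

coarse_period = FINE_LSB * FINE_BINS      # 5 ns per coarse tick (derived)
frame_length = coarse_period * 2**COARSE_BITS

def timestamp(coarse: int, fine: int) -> float:
    """Photon arrival time within the measurement frame, in seconds."""
    return coarse * coarse_period + fine * FINE_LSB

print(f"{frame_length * 1e6:.2f} us")     # 1.28 us, matching the stated frame length
```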

Current detector development focuses on 3D stacking of separately fabricated sensor and circuitry wafers, enabling backside illumination and drastically reducing the pixels’ space consumption to enable an array arrangement. This technological advance constitutes a major benefit for the presented technique by avoiding the scanning process with line detectors described in the following sections.

3. SETUP

Figure 2 shows the setup used to implement quantum ghost imaging with asynchronous detection. The pump laser is focused into a 2 mm long periodically poled KTP (ppKTP) crystal with a width of 6 mm and a height of 1 mm. For technical reasons, poling was only applied in a 2 mm wide strip, centered in the middle of the crystal and covering its complete length and height. The poling period was chosen to be 4.25 µm, allowing highly efficient collinear type-0 phase matching for a signal wavelength of 550 nm (VIS) and an idler wavelength of 1550 nm (IR). To satisfy energy conservation, a pump wavelength of 405 nm is required. Since phase matching is highly dependent on the momentum of the incident pump beam, the SPDC source should ideally be a single point source in order to reconstruct the correlation of the photons. In practice, this means the focus has to be placed exactly in the crystal in order to maintain good resolution; otherwise, the resulting image will be blurred [12]. In our setup this was realized by collimating the fiber-coupled pump laser and refocusing it with a dedicated lens ($f = {150}\;{\rm mm}$), whose position was fine-tuned using appropriate stages. In theory it is also possible to pump the crystal with a collimated beam, exploiting the spatial correlation of the photons rather than the anti-correlation of their momenta for imaging. However, due to the currently limited aperture of the crystals, the achievable resolution is expected to be inferior for this material.
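The required pump wavelength follows directly from SPDC energy conservation, $1/\lambda_p = 1/\lambda_s + 1/\lambda_i$. A quick check with the stated signal and idler wavelengths:

```python
# SPDC energy conservation: 1/lambda_p = 1/lambda_s + 1/lambda_i
lam_s, lam_i = 550e-9, 1550e-9            # signal (VIS) and idler (IR) wavelengths
lam_p = 1.0 / (1.0 / lam_s + 1.0 / lam_i)
print(f"{lam_p * 1e9:.1f} nm")            # ~406 nm, consistent with the 405 nm pump
```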

 figure: Fig. 2.

Fig. 2. Setup for quantum ghost imaging using asynchronous detection. In this work the setup was operated in transmission in order to prove the concept and improve the setup parameters.


The signal and idler are then separated via dichroic mirrors into two spatially separated arms. The idler is imaged onto the object, and the light reflected by or transmitted through the object, depending on the scheme in use, is collected by a single pixel single photon detector. In the signal arm the residual pump is blocked by optical filters, and the signal photons are imaged onto the SPAD array detector in a plane corresponding to the object plane onto which the idler photons are imaged, in order to preserve the spatial correlation information of the photons. For simplicity our setup used the same off-the-shelf lenses for both signal and idler (2 in. lenses, $f = {100}\;{\rm mm}$), with dedicated coatings for both spectral regimes, positioned according to their focal lengths at the specific wavelengths.

All detections are time-tagged by time-correlated single photon counting (TCSPC) electronics, which are synchronized to reference the detections to a common timebase. For the idler this is done directly by registering the electric pulse returned by the detector after each individual detection event. For the signal arm this is done by registering the start of each detection window of the SPAD array, given by its trigger, and combining this with the time and spatial information obtained from the individual pixel measurements. This results in two datastreams containing information on the individual detections: one consisting of the temporal information of the bucket detector and the other of the temporal and spatial information of the SPAD array. Comparing the temporal information leads, in the case of coincidence detection, to a peak in the number of detections at a specific temporal offset. The image can then be obtained by extracting the appropriate spatial information from the datastream of the SPAD array. To further improve the image, the residual background, stemming from accidental coincidences, can be estimated by extracting the spatial information of a non-peak segment of the comparison, scaling the resulting number of photons by the size of the coincidence window, and subtracting the result from the image. Similar systems, using a timestamp comparison to detect coincidences, can be found in [13,14].
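The timestamp comparison can be sketched as follows. This is a minimal illustrative implementation, not the authors' actual processing code, and all rates, offsets, and window sizes are invented for the demonstration; only the 312.5 ps bin width is taken from the detector description.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic timestamps (seconds): true pairs share a fixed 12 ns offset;
# both streams also contain uncorrelated background. All numbers are invented.
t_span, offset = 1e-3, 12e-9
pairs = rng.uniform(0.0, t_span, 5000)
idler = np.sort(np.concatenate([pairs, rng.uniform(0.0, t_span, 20000)]))
signal = np.sort(np.concatenate([pairs + offset, rng.uniform(0.0, t_span, 20000)]))

def correlate(a, b, window=50e-9, bin_width=312.5e-12):
    """Histogram of time differences b - a for all pairs within +/- window.
    Both streams must be sorted; the sweep is then O(N + M + matches)."""
    diffs, j0 = [], 0
    for t in a:
        while j0 < len(b) and b[j0] < t - window:   # advance lower bound
            j0 += 1
        j = j0
        while j < len(b) and b[j] <= t + window:    # collect nearby detections
            diffs.append(b[j] - t)
            j += 1
    edges = np.arange(-window, window + bin_width, bin_width)
    hist, _ = np.histogram(diffs, bins=edges)
    return hist, edges

hist, edges = correlate(idler, signal)
peak_delay = edges[np.argmax(hist)]   # recovers the ~12 ns pair offset
```

The peak bin collects all true pairs, while accidental coincidences spread roughly uniformly over all bins, which is exactly the structure visible in Fig. 4.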

For transmissive measurements the temporal offset obtained from the comparison corresponds to the signal delays of the detection electronics and the optical path length difference of the two detectors, while for measurements in reflection, additional information on the distance of the object is obtained, thus allowing 3D imaging with depth resolution. The SPAD array has the coarsest time resolution in the chain, which results in a minimum depth resolution of roughly 5 cm in air.
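As a quick sanity check of that figure, the round-trip relation depth = c·Δt/2 with the array's 312.5 ps resolution gives:

```python
# Depth resolution from timing resolution for a round-trip (reflection) geometry.
c = 299_792_458.0        # speed of light in vacuum, m/s (close to the speed in air)
dt = 312.5e-12           # coarsest timing resolution in the chain: the array TDC
depth_resolution = c * dt / 2.0
print(f"{depth_resolution * 100:.1f} cm")   # ~4.7 cm, i.e. "roughly 5 cm"
```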

4. RESULTS

Since at the time of implementation only a one-dimensional SPAD detector with two lines of 192 pixels was available, an additional scanning process had to be implemented to obtain two-dimensional images. To do so, we illuminated the full target and collected all transmitted photons with a single pixel detector while scanning the SPAD array in the signal arm stepwise in one direction, obtaining one line of the image per measurement. Due to this scanning process and the vertical alignment of the SPADs, the array was used with only a single active SPAD per pixel, since the image would otherwise have an inferior resolution and the image information might be disturbed by the discrete points of detection rather than integration over the whole pixel area. The even spacing of the SPADs leads to an effective pixel size of ${40.5} \times {50}\;{\unicode{x00B5}{\rm m}}$ with the same pixel fill factor of 5.6%. This method not only prevents the use of multiple SPADs, but also heavily increases the losses and the necessary measurement time, since only a fraction of the signal photons is detected. For the implementation used to obtain Fig. 3, namely 80 scanning steps, this results in an effective fill factor of around 0.05% and measurement times of at least half an hour per line to obtain reasonable results.

 figure: Fig. 3.

Fig. 3. (a) 3D printed target used for imaging, showing the logo of our project (material: black PLA, thickness 3 mm). Its non-transparency around 1550 nm was verified using an adjustment laser. (b) Quantum ghost image obtained via asynchronous detection by extracting the spatial information of the coincidence window. The scale given corresponds to the size of the IR arm and is roughly 3 times the size of the VIS arm, as determined by the ratio of the wavelengths. The background photon distribution was estimated by extracting the spatial information from a non-coincidence window, weighted according to window size, and subtracted from the image in order to suppress accidental detections. This distribution was also used to normalize the resulting image according to the illumination distribution and estimate an image corresponding to homogeneous illumination. The threshold for display was set at 0.1 times the maximum number of photons per pixel to suppress residual noise from accidental detections, which is centered around 0.


Despite these drawbacks, we were able to use this scheme to image the target shown in Fig. 3 with a maximum coincidence-to-accidental ratio (CAR) above 2 and a full peak width of 3.5 ns, as seen in Fig. 4.

 figure: Fig. 4.

Fig. 4. Coincidence evaluation of the temporal data of both detectors, shown exemplarily for one scanning step. Entangled photons show a fixed delay due to their mutual time of creation and fixed path lengths, resulting in a peak, indicated by green markers. The window indicated by $\Delta {t_c}$ contains all coincidence detections as well as background photons. The non-coincidence windows $\Delta {t_{nc1}}$ and $\Delta {t_{nc2}}$ of this evaluation can be used to extract the distribution of background photons and reduce their effect on Fig. 3. (a) Evaluation of the raw data obtained from the detectors. All coincidence detections are contained in the approximately 3.5 ns wide $\Delta {t_c}$ window, and their peak has a FWHM of 1.7 ns. (b) Evaluation of the data with timing correction of the SPAD array data, based on internal parameters of the detector. All coincidence detections are contained in the approximately 1 ns wide $\Delta {t_c}$ window, and their peak has a FWHM of 0.4 ns. Comparing the FWHM of the two evaluations shows an increase in depth resolution by a factor of 4. In addition, the CAR, which is relevant for identifying coincidence photons, was increased from a maximum of 2 to roughly a maximum of 5. A detailed explanation of the timing correction can be found in Supplement 1.


The temporal data returned by the array was analyzed in detail, revealing a dependence of the timing accuracy on internal parameters of the SPAD detector. A detailed description of this dependence can be found in Supplement 1. The analysis led to a dedicated correction of the timing information for every combination of these parameters, with which the maximum CAR could be increased to above 5 and the full peak width could be compressed to about 1 ns. In addition, the peak shape was transformed from a complex structure with two prevalent peaks into a more Gaussian profile with a FWHM of about 400 ps, which is close to the fundamental resolution of the array detector, as shown in Fig. 4. For a Gaussian profile, this FWHM limits the minimum depth separation required to distinguish multiple objects at different depths and corresponds to a depth resolution of about 6 cm.

The image shown in Fig. 3 is not obtained directly from the measurement but was improved by dedicated post-processing, whose specific steps and results are given in Supplement 1. First, the distribution of photons at the SPAD array was estimated by extracting the spatial information ${f_{{\rm det}}}(x,\Delta t)$ from the non-coincidence windows $\Delta {t_{nc1}}$ and $\Delta {t_{nc2}}$ of the evaluation shown in Fig. 4 for every scanned line. The resulting distribution ${f_{{\rm BG}}}(x,t)$ corresponds to the image a regular camera would obtain, which is the actual distribution of photons ${f_{{\rm SPDC}}}(x,t)$ modified by the pixel-dependent detection efficiency ${\eta _{{\rm px}}}(x)$ and a noise term ${\sigma _{{\rm px}}}(x,t)$,

$$\begin{split}\!\!\!{{f_{{\rm BG}}}({x_{{\rm px}}},t)}&= \int_{{x_{{\rm px}}}} {f_{{\rm SPDC}}}(x,t)dx*{\eta _{{\rm px}}}({x_{{\rm px}}}) + {\sigma _{{\rm px}}}({x_{{\rm px}}},t)\\{}&\approx {f_{{\rm det}}}(x,\Delta {t_{\textit{nc}}})*\frac{t}{{\Delta {t_{\textit{nc}}}}},\end{split}$$
with ${x_{{\rm px}}}$ being the pixel readout, ${f_{{\rm det}}}(x,\Delta {t_{\textit{nc}}}) = {f_{{\rm det}}}(x,\Delta {t_{nc1}}) + {f_{{\rm det}}}(x,\Delta {t_{nc2}})$, and $\Delta {t_{\textit{nc}}} = \Delta {t_{nc1}} + \Delta {t_{nc2}}$. It also corresponds to the distribution of accidental coincidence detections, i.e., detection events within the coincidence window that do not stem from actual photon pairs. This can be used to estimate and suppress the influence of this noise on the obtained image by subtracting the distribution, weighted by the size of the windows,
$${f_{{\rm img}}}(x) = {f_{{\rm det}}}(x,\Delta {t_c}) - {f_{{\rm det}}}(x,\Delta {t_{\textit{nc}}})*\frac{{\Delta {t_c}}}{{\Delta {t_{\textit{nc}}}}}.$$

The resulting image ${f_{{\rm img}}}(x)$ is an estimate of the image that would result from detecting only real photon pairs. This, however, still does not correspond to a regular image of the object but is a superposition of the object function with the distribution of the illuminating photons and the pixel-dependent detection efficiency ${\eta _{{\rm px}}}(x)$. Due to the correlation, the illumination distribution corresponds to the distribution of photons at the array ${f_{{\rm SPDC}}}(x,t)$. Since the noise term ${\sigma _{{\rm px}}}(x,t)$ mainly consists of dark noise from the detector electronics, it can be estimated by a short measurement without active illumination. Using this estimate, Eq. (1) can be used to estimate the image of the target itself by weighting ${f_{{\rm img}}}$ with ${f_{{\rm wght}}}(x,\Delta {t_{\textit{nc}}}) = {f_{{\rm det}}}(x,\Delta {t_{\textit{nc}}}) - {\sigma _{{\rm est}}}(x,\tau)*\frac{{\Delta {t_{\textit{nc}}}}}{\tau}$ as shown in Eq. (3). This way the image is normalized taking into account both the detection efficiency ${\eta _{{\rm px}}}$ and the photon distribution ${f_{{\rm SPDC}}}$, resulting in the image shown in Fig. 3,

$${f_{{\rm obj}}}(x) = {f_{{\rm img}}}(x)\!\left/\left({f_{{\rm wght}}}(x,\Delta {t_{\textit{nc}}})*\frac{1}{{\max({f_{{\rm wght}}}(x,\Delta {t_{\textit{nc}}}))}}\right).\right.$$
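A minimal numerical sketch of Eqs. (2) and (3), assuming per-pixel count arrays and neglecting the dark-noise term of ${f_{{\rm wght}}}$; the object, counts, and window sizes below are synthetic, not the paper's data:

```python
import numpy as np

def reconstruct(f_c, f_nc, dt_c, dt_nc):
    """Eq. (2): subtract the accidental background, scaled by window size.
    Eq. (3): normalize by the max-scaled illumination/efficiency profile.
    f_c, f_nc: per-pixel counts in the coincidence / non-coincidence windows."""
    f_img = f_c - f_nc * (dt_c / dt_nc)      # Eq. (2)
    f_wght = f_nc / f_nc.max()               # Eq. (3) weight, sigma_px neglected
    return f_img / f_wght

# Synthetic check: uniform illumination, binary object, known background level.
obj = np.array([1.0, 0.0, 1.0, 1.0])
dt_c, dt_nc = 1e-9, 10e-9
f_nc = np.full(4, 1000.0)                    # background counts per pixel
f_c = 50.0 * obj + f_nc * dt_c / dt_nc       # true pairs + accidentals
print(reconstruct(f_c, f_nc, dt_c, dt_nc))   # recovers 50 * obj
```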

For a complete estimation of the original image, the detection and collection efficiencies of the IR SPAD, ${\eta _{{\rm IR}}}(\vec x)$, which depend on the position and direction of the incoming photons, would have to be measured by dedicated reference measurements; however, previous measurements showed that, for a well-aligned setup, this influence has a Gaussian profile. This behavior was expected because the IR detector is fiber-coupled; the profile was estimated from these previous measurements and accounted for by weighting the image accordingly. Additionally, the noise term ${\sigma _{{\rm px}}}(x,t)$ can, due to its random, unpredictable distribution, lead to residual noise. In order to suppress this, the non-coincidence windows should be chosen sufficiently large.

After evaluating the complete scan, we found the background rate to be 71,970 photons per nanosecond of the evaluation window, independent of the timing correction. The total number of coincidence photons was determined by subtracting this rate, weighted by the size of $\Delta {t_c}$, from the total number of photons in $\Delta {t_c}$. For the uncorrected timing, shown in Fig. 4(a), it was determined to be 202,647 photons, while for the corrected timing, shown in Fig. 4(b), it was 192,548 photons. The discrepancy between the two numbers arises from the significantly smaller coincidence window in Fig. 4(b) and the imperfect timing correction.
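This subtraction can be written out explicitly. The raw in-window total below is reconstructed from the reported pair count and background rate, so it is an inferred number, not one reported in the paper:

```python
# Pair count = photons in the coincidence window minus the expected accidentals.
bg_rate_per_ns = 71_970       # reported background rate
window_ns = 3.5               # uncorrected coincidence window, Fig. 4(a)
pairs_reported = 202_647      # reported coincidence photons for this window

accidentals = bg_rate_per_ns * window_ns         # ~251,895 accidental detections
total_in_window = pairs_reported + accidentals   # inferred raw count (~454,542)
pairs = total_in_window - accidentals
print(int(pairs))             # 202647, recovering the reported value
```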

5. OUTLOOK

In this work we demonstrated the possibility of quantum ghost imaging using asynchronously running detectors. As shown here, this scheme surpasses current setups with regard to 3D imaging, requires less space, and eases the calibration constraints of such setups.

Within our current project, Fraunhofer IMS is working on further improving the detector in use, mainly by implementing 3D stacking of circuitry and SPADs, thereby increasing the fill factor and allowing backside illumination. Based on this technology, a SPAD array sensor with ${64} \times {48}$ pixels and a more advantageous arrangement of SPADs is being produced, a prototype of which is already available. However, the improvements from 3D integration for these detectors are limited due to the large circuit area and high power dissipation [15]. For this reason, wafer-level integration of micro-optic arrays is also being investigated to further enhance the effective fill factor and, thus, the sensor sensitivity.

Fraunhofer IOSB plans to use this detector to implement a setup that truly demonstrates the 3D capabilities of this scheme and to compare the results with those of classical schemes. In this framework we will also work on verifying and demonstrating some of the currently mainly theorized benefits of a quantum approach over classical systems [2].

Funding

Fraunhofer-Gesellschaft.

Acknowledgment

The authors thank both the consortium and participants of the QUILT project for the organization and support provided to achieve this work. C. Pitsch further thanks the colleagues of the OPT and LAS of Fraunhofer IOSB for the expertise and help received while realizing the setup and writing this paper.

Disclosures

The authors declare no conflicts of interest.

Data Availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

REFERENCES

1. T. B. Pittman, Y. H. Shih, D. V. Strekalov, and A. V. Sergienko, “Optical imaging by means of two-photon quantum entanglement,” Phys. Rev. A 52, R3429–R3432 (1995).

2. D. Walter, C. Pitsch, G. Paunescu, and P. Lutzmann, “Quantum ghost imaging for remote sensing,” Proc. SPIE 11134, 111340W (2019).

3. R. S. Bennink, S. J. Bentley, R. W. Boyd, and J. C. Howell, “Quantum and classical coincidence imaging,” Phys. Rev. Lett. 92, 033601 (2004).

4. P. B. Dixon, G. A. Howland, K. W. C. Chan, C. O’Sullivan-Hale, B. Rodenburg, N. D. Hardy, J. H. Shapiro, D. S. Simon, A. V. Sergienko, R. W. Boyd, and J. C. Howell, “Quantum ghost imaging through turbulence,” Phys. Rev. A 83, 051803 (2011).

5. M. J. Padgett and R. W. Boyd, “An introduction to ghost imaging: quantum and classical,” Philos. Trans. R. Soc. A 375, 20160233 (2017).

6. D. C. Burnham and D. L. Weinberg, “Observation of simultaneity in parametric production of optical photon pairs,” Phys. Rev. Lett. 25, 84–87 (1970).

7. P. A. Morris, R. S. Aspden, J. E. C. Bell, R. W. Boyd, and M. J. Padgett, “Imaging with a small number of photons,” Nat. Commun. 6, 5913 (2015).

8. S. Cova, M. Ghioni, A. Lacaita, C. Samori, and F. Zappa, “Avalanche photodiodes and quenching circuits for single-photon detection,” Appl. Opt. 35, 1956–1976 (1996).

9. M. Beer, J. Haase, J. Ruskowski, and R. Kokozinski, “Background light rejection in SPAD-based LiDAR sensors by adaptive photon coincidence detection,” Sensors 18, 4338 (2018).

10. J. Haase, M. Beer, O. Schrey, J. Ruskowski, W. Brockherde, and H. Vogt, “Measurement concept for direct time-of-flight sensors at high ambient light,” Proc. SPIE 10926, 109260W (2019).

11. D. Bronzi, F. Villa, S. Bellisai, B. Markovic, S. Tisa, A. Tosi, F. Zappa, S. Weyers, D. Durini, W. Brockherde, and U. Paschen, “Low-noise and large-area CMOS SPADs with timing response free from slow tails,” in Proceedings of the European Solid-State Device Research Conference (ESSDERC) (2012), pp. 230–233.

12. D. R. Guido and A. B. U’Ren, “Study of the effect of pump focusing on the performance of ghost imaging and ghost diffraction, based on spontaneous parametric downconversion,” Opt. Commun. 285, 1269–1274 (2012).

13. G. Christian, C. Akers, D. Connolly, J. Fallis, D. Hutcheon, K. Olchanski, and C. Ruiz, “Design and commissioning of a timestamp-based data acquisition system for the DRAGON recoil mass separator,” Eur. Phys. J. A 50, 75 (2014).

14. M.-A. Tetrault, J. F. Oliver, M. Bergeron, R. Lecomte, and R. Fontaine, “Real time coincidence detection engine for high count rate timestamp based PET,” IEEE Trans. Nucl. Sci. 57, 117–124 (2010).

15. K. Morimoto, A. Ardelean, M.-L. Wu, A. C. Ulku, I. M. Antolovic, C. Bruschini, and E. Charbon, “Megapixel time-gated SPAD image sensor for 2D and 3D imaging applications,” Optica 7, 346–354 (2020).

Supplementary Material (1)

Supplement 1: Supplemental document further explaining the methods used for timestamp correction and background rejection.
