
Single-photon computational 3D imaging at 45 km


Abstract

Single-photon light detection and ranging (lidar) offers single-photon sensitivity and picosecond timing resolution, which is desirable for high-precision three-dimensional (3D) imaging over long distances. Despite important progress, further extending the imaging range presents enormous challenges, because only a few echo photons return and they are mixed with strong noise. Here, we tackled these challenges by constructing a high-efficiency, low-noise coaxial single-photon lidar system and developing a computational algorithm tailored to long ranges that provides high photon efficiency and good noise tolerance. Using this technique, we experimentally demonstrated active single-photon 3D imaging at a distance of up to 45 km in an urban environment, with a low return-signal level of about 1 photon per pixel. With refinements to the setup, our system should be capable of imaging at a few hundred kilometers, and thus represents a step towards low-power, high-resolution lidar over extra-long ranges.

© 2020 Chinese Laser Press

1. INTRODUCTION

Long-range active optical imaging has widespread applications, ranging from remote sensing [1–3], satellite-based global topography [4,5], and airborne surveillance [3] to target recognition and identification [6]. Increasing demand from these applications has driven the development of smaller, lighter, lower-power lidar systems that can provide high-resolution three-dimensional (3D) imaging over long ranges with all-time capability. Time-correlated single-photon-counting (TCSPC) lidar is a candidate technology with the potential to meet these challenging requirements [7]. In particular, single-photon detectors [8] and arrays [9,10] provide extraordinary single-photon sensitivity and better timing resolution than analog optical detectors [7]. Such high sensitivity allows lower-power laser sources to be used and permits time-of-flight imaging over significantly longer ranges. Tremendous effort has thus been devoted to the development of single-photon lidar for long-range 3D imaging [11–14].

In long-range 3D imaging, a frontier question is the distance limit: over what distances can the imaging system work? For a single-photon lidar system, the echo signal, and thus the signal-to-background ratio (SBR), decreases rapidly with imaging distance R, which limits useful image reconstruction [15]. On the hardware side, the lidar system must combine high efficiency for collecting the back-scattered photons with low background noise. On the software side, a computational algorithm with high photon efficiency is required [16]. Indeed, an important research trend today is the development of efficient algorithms for imaging with a small number of photons [17]. High-quality reconstruction of 3D structure and reflectivity with an active imager detecting only about one photon per pixel (PPP) has been demonstrated, based on the approaches of pseudo-array [18,19], single-photon camera [20], unmixing of signal and noise [21], and machine learning [22].
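To make the range scaling concrete, the following minimal link-budget sketch estimates the expected echo photons per pulse for a Lambertian target that fills the beam. The laser and telescope values follow Section 2; the reflectivity, atmospheric transmission, and efficiency figures are illustrative assumptions, not calibrated values from this work.

```python
import math

H_PLANCK = 6.626e-34                # Planck constant (J*s)
C = 3.0e8                           # speed of light (m/s)

LAM = 1550e-9                       # wavelength (m)
E_PULSE = 1.2e-6                    # pulse energy (J): 120 mW average at 100 kHz
RHO = 0.3                           # assumed target reflectivity
A_RX = math.pi * (0.280 / 2) ** 2   # 280 mm receive-aperture area (m^2)
ETA = 0.5 * 0.15                    # assumed fiber coupling x SPAD efficiency
T_ATM = 0.5                         # assumed one-way atmospheric transmission

def signal_photons_per_pulse(R):
    """Expected echo photons per pulse from a beam-filling Lambertian
    target at one-way range R (m); note the 1/R^2 falloff."""
    n_tx = E_PULSE / (H_PLANCK * C / LAM)      # transmitted photons per pulse
    capture = RHO * A_RX / (math.pi * R ** 2)  # Lambertian capture fraction
    return n_tx * capture * ETA * T_ATM ** 2

for r_km in (10, 21.6, 45):
    print(f"{r_km:5.1f} km: {signal_photons_per_pulse(r_km * 1e3):6.2f} photons/pulse")
```

Under these assumptions, the signal drops from roughly 10 photons per pulse at 10 km to well below 1 at 45 km, which is why both the collection efficiency of the hardware and the photon efficiency of the algorithm become decisive.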

Our primary interest in this work is to significantly extend the imaging range. Single-photon imaging at ranges up to ten kilometers was reported in Ref. [23]. Very recently, we also demonstrated super-resolution single-photon imaging over an 8.2 km range [24]. Nonetheless, before this work, the imaging range was limited to about 10 km. Extending it further means contending with very low photon counts and a low signal-to-noise ratio, which challenges both the imaging hardware and the reconstruction algorithm.

We approach the challenge of ultra-long-range imaging by developing advanced techniques, in both hardware and software, that are specifically designed for long-range scenarios. On the hardware side, we developed a high-efficiency coaxial-scanning system and optimized the system design to efficiently collect the few echo photons and suppress the background noise. On the software side, we developed a pre-processing approach to censor noise and a computational algorithm to reconstruct images from low-light data (i.e., about 1 signal PPP) mixed with strong background noise (i.e., SBR ≈ 1/30). These improvements allow us to demonstrate single-photon 3D imaging over a distance of 45 km in an urban environment. Moreover, by applying the microscanning approach [24,25], the demonstrated transverse resolution is about 0.6 m in the far field at 45 km.

2. EXPERIMENTAL SETUP

A. General Description

Figure 1 shows a bird's-eye view of the long-range active-imaging experiment. The setup is placed on Chongming Island in Shanghai, facing the target, a tall building located in Pudong across the river. The optical transceiver incorporates a commercial Cassegrain telescope with a 280 mm aperture and a high-precision two-axis automatic rotating stage to allow large-scale scanning of the far-field target. The optical components were assembled on a custom-built aluminum platform integrated with the telescope tube. The entire optical hardware system is compact and suitable for mobile applications [see Fig. 1(b)].


Fig. 1. Illustration of long-range active imaging. Satellite image of the experimental layout in Shanghai, where the single-photon lidar is placed on Chongming Island and the target is a tall building in Pudong. (a) Schematic diagram of the setup. SM, scanning mirror; Cam, camera; M, mirror; PERM, 45° perforated mirror; PBS, polarization beam splitter; SPAD, single-photon avalanche diode; MMF, multimode fiber; PMF, polarization-maintaining fiber; LA, laser (1550 nm); COL, collimator; F, filter; FF, fiber filter; L, lens; HWP, half-wave plate; QWP, quarter-wave plate. (b) Photograph of the setup. The optical system consists of the telescope assembly and an optical-component box for shielding. (c) Close-up photograph of the target, the Pudong Civil Aviation Building. The building is 45 km from the single-photon lidar setup.


Specifically, as shown in Fig. 1(a), an erbium-doped near-infrared fiber laser (1550.1 ± 0.1 nm, 500 ps pulse width, 100 kHz repetition rate) served as the illumination source. The maximum average laser power transmitted was 120 mW, equivalent to 1.2 μJ per pulse. A near-infrared wavelength has several advantages, including reduced solar background, low atmospheric absorption loss, and a higher eye-safety threshold compared with the visible band. The laser output was coupled into the telescope through a small aperture consisting of a 45° oblique hole through the mirror. The echo light fills the unobstructed part of the telescope and is transmitted to the mirror; because the light spot there is larger than the oblique hole, most of the echo light is reflected into the rear optical path for coupling.

The transmitting and receiving beams were coaxial: the transmitting beam has a divergence angle of 35 μrad, and the receiving beam has a field of view (FoV) of 22.3 μrad. The returned photons were reflected by the perforated mirror and passed through two wavelength filters (a 1500 nm long-pass filter and a 9 nm bandpass filter). The returned photons were then collected by a focusing lens. A polarization beam splitter (PBS) coupled only the horizontally polarized light into a multimode-fiber filter (1.3 nm bandpass). Finally, the photons were detected by an InGaAs/InP single-photon avalanche diode (SPAD) operated in free-running mode (15% detection efficiency) [26]. Consequently, our system has no prior information about the location or width of the returned signal's time-gating window.

Detection events are time-stamped with a homemade time-to-digital converter (TDC) with 50 ps time jitter. The time jitter of the entire lidar system was measured to be about 1 ns, which means the system can obtain depth measurements with an accuracy of about 15 cm. In addition, a standard camera was mounted paraxially on the telescope as a convenient pointing and alignment aid over long distances.
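As a quick check of these numbers, the depth uncertainty follows from the round-trip relation d = ct/2; a two-line sketch:

```python
C = 3.0e8  # speed of light (m/s)

def depth_uncertainty(jitter_s):
    """Depth uncertainty implied by a round-trip timing jitter: c * t / 2."""
    return C * jitter_s / 2

print(depth_uncertainty(50e-12))  # TDC alone: 0.0075 m
print(depth_uncertainty(1e-9))    # full system, ~1 ns: 0.15 m
```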

B. Optimized System Design

To achieve a high-efficiency, low-noise coaxial single-photon lidar, we implemented several optimized optical designs, most of which differ from previous single-photon lidar experiments [11–13,23]. With these techniques, the imaging range can be greatly extended.

  • (a) We designed a near-diffraction-limit optical system to realize high coupling efficiency (>50%) from free space to fiber (calibrated in a short-range experiment, without considering turbulence effects); the geometry is checked numerically in the sketch after this list. On the one hand, we avoid optical aberrations by using an eyepiece with the same parameters as the coupling lens. Because the eyepiece of the telescope and the lens that couples the echo into the optical fiber have the same focal length, the light path formed by the two lenses is symmetrical; the light spot undergoes matching positive and negative transformations and preserves its high quality. On the other hand, we use a multimode fiber (MMF) with a core diameter of 62.5 μm to increase the coupling efficiency. This diameter matches the designed Airy disk and provides an FoV of 22.3 μrad for each pixel.
  • (b) We set the receiver's aperture, as projected on the target, slightly smaller than the illumination aperture in order to reject background noise and improve the SBR [23]. In the transmitter, we employ a polarization-maintaining fiber (PMF) and a fiber collimator with focal length f = 11 mm. In the receiver, we use a coupling lens with f = 100 mm and an MMF filter. The transmitting and receiving beams coaxially pass through the same 28× expander (telescope f = 2800 mm and eyepiece f = 100 mm).
  • (c) We used the polarization degree of freedom as a filter to reduce the internal back-scattered noise. Because of the coaxial transceiver design, transmitted amplified-spontaneous-emission (ASE) photons are inevitably back-reflected by the surfaces of local optical elements via specular reflection, with an intensity on the order of ten times that of the signal returned from the remote target. To resolve this issue, the transmitted light is vertically polarized, while the received light is horizontally polarized, as selected by the polarization beam splitter (PBS). This improves the SBR by about 100 times compared with no polarization filtering. The underlying assumption is that the surfaces of natural scenes are mostly Lambertian, which randomizes the polarization. A similar design was used to split the common transmitted and received beams [12,27].
  • (d) We developed a two-stage field-of-regard (FoR) scanning method, offering both fine-FoR and wide-FoR scanning, to maintain fine features while expanding the total FoR. For fine-FoR scanning, we used a coplanar dual-axis scanning mirror to steer the beam in both the x and y directions, which simplifies the optics and avoids pincushion distortion. For wide-FoR scanning, we used a two-axis automatic rotation table to rotate the entire telescope; multiple sub-images are then stitched into a single composition to expand the FoR.
  • (e) We used miniaturized optical holders to align the apertures of all optical elements at a height of 4 cm, thereby improving the system stability. The entire optical platform is compact, measuring only 30 cm × 30 cm × 35 cm including a customized aluminum box that blocks ambient light, and was mounted behind the telescope [see Fig. 1(b)].
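The transceiver geometry quoted in items (a) and (b) can be verified with thin-lens arithmetic. In the sketch below, only the 10.5 μm mode-field diameter assumed for the PMF is ours (a typical value for standard single-mode fiber at 1550 nm); the remaining numbers come from the text.

```python
# 28x beam expander: telescope f = 2800 mm over eyepiece f = 100 mm.
EXPANDER = 2800.0 / 100.0

# Receiver: FoV = MMF core diameter / (coupling-lens focal length x expander).
fov_rx = 62.5e-6 / (0.100 * EXPANDER)
print(f"receive FoV    ~ {fov_rx * 1e6:.1f} urad")  # ~22.3 urad, as in the text

# Transmitter: divergence ~ (mode-field diameter / collimator f) / expander.
MFD = 10.5e-6                    # assumed PMF mode-field diameter at 1550 nm
div_tx = (MFD / 0.011) / EXPANDER
print(f"transmit div.  ~ {div_tx * 1e6:.1f} urad")  # ~34 urad (cf. 35 urad quoted)
```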

3. RECONSTRUCTION ALGORITHM

The long-range operation of the lidar system involves two challenges that limit useful image reconstruction. (i) Due to the divergence of the light beam, the receiver's FoV, projected on the remote target, covers several reflectors and therefore produces multiple returns [28–30], which deteriorates the resolution of the image. (ii) The extremely low SBR, together with multiple returns per pixel, hinders pixelwise-adaptive unmixing of signal from noise [21]. These two challenges were not considered in previous algorithms [16,18–22]. Recently, the issue of multiple returns has been addressed in different imaging scenarios, such as underwater imaging [30] and imaging through scattering media with multiple layers [31,32], most of which target scenes with partially transmissive objects. In contrast, we focus on the multiple-returns problem in the long-range setting, where it is caused by the divergence of the laser beam and the receiver's large FoV, and propose an approach to improve the resolution. We cast the entire image reconstruction as a convolutional model instead of pixelwise processing and describe the reconstruction as an inverse deconvolution problem. To solve this problem, we modified the convex-optimization solver of Ref. [33] to operate directly on the 3D matrix. Rather than the previous two-step methods that optimize reflectivity and depth separately [16,18–21], our scheme uses a 3D spatiotemporal matrix to solve for reflectivity and depth simultaneously. This incorporates the correlations between reflectivity and depth into the optimization and avoids propagating reflectivity-reconstruction errors into the depth estimate.

A. Forward Model in Long-Range Conditions

The forward model is based on Ref. [29], which describes imaging through a thin diffuser. Here, we state this model explicitly under long-range conditions. Suppose that the laser illuminates the scene at scanning angle (θx, θy). Under long-range conditions, due to the divergence of the beam, the light spot illuminating the scene is large and has a 2D Gaussian spatial distribution with kernel $h_{xy}$. Due to the laser pulse width and detector jitter, the detected photons have a timing spread described by a 1D Gaussian temporal distribution with kernel $h_t$. The detection rate function $R(t;\theta_x,\theta_y)$ can be written as [29]

$$R(t;\theta_x,\theta_y)=\iint_{(\theta_x',\theta_y')\in\mathrm{FoV}} h_{xy}(\theta_x-\theta_x',\,\theta_y-\theta_y')\,r(\theta_x',\theta_y')\,h_t\!\big[t-2d(\theta_x',\theta_y')/c\big]\,\mathrm{d}\theta_x'\,\mathrm{d}\theta_y'+b \qquad (1)$$
for $t\in[0,T_r)$, where $T_r$ denotes the repetition period of the laser; $[r(\theta_x',\theta_y'),\,d(\theta_x',\theta_y')]$ is the [intensity, depth] pair for direction $(\theta_x',\theta_y')$; FoV denotes the field of view of the detector; $c$ is the speed of light; $b$ describes the background noise; and $h_{xy}$ and $h_t$ denote the spatial and temporal kernels, respectively.

We can discretize the continuous rate function in Eq. (1) into a 3D matrix with pixels and time bins. With $n_x \times n_y$ pixels, the scene can be described by a reflectivity matrix $A$ and a depth matrix $D$ ($A, D \in \mathbb{R}^{n_x \times n_y}$). Let $\Delta$ denote the bin width, so that the detector records the photon-count histogram with $n_t = T_r/\Delta$ bins. To combine the two matrices $A$ and $D$ into one, we construct a 3D ($n_x \times n_y \times n_t$) matrix $\mathrm{RD}$ whose $(i,j)$-th pixel is a vector with only one nonzero entry. The value of this entry is $A_{ij}$, and its index is $T_{ij} = \mathrm{round}[2 D_{ij}/(c\Delta)]$. To match this 3D formulation, let $B$ be a constant matrix of size $n_x \times n_y \times n_t$ with value $b\Delta$, and let $h$ be the outer product of $h_{xy}$ and $h_t$, a 3D matrix of size $k_x \times k_y \times k_t$ denoting the spatiotemporal kernel.
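The construction of RD is mechanical; the sketch below folds a placeholder reflectivity matrix A and depth matrix D into the sparse 3D matrix, with scene sizes and depth values chosen only for illustration.

```python
import numpy as np

C = 3.0e8                                   # speed of light (m/s)
nx, ny, nt = 64, 64, 2000                   # toy scene size and bin count
delta = 1.0e-9                              # bin width (s); illustrative

rng = np.random.default_rng(0)
A = rng.random((nx, ny))                    # placeholder reflectivity matrix
D = 10.0 + 20.0 * rng.random((nx, ny))      # placeholder relative depths (m)

# One nonzero entry per pixel: value A_ij at bin T_ij = round(2 D_ij / (c delta)).
T = np.round(2.0 * D / (C * delta)).astype(int)
RD = np.zeros((nx, ny, nt))
ii, jj = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
RD[ii, jj, T] = A
```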

According to photodetection theory, in which photon detection by the SPAD is an inhomogeneous Poisson process, the detected photon-histogram matrix $S$ of size $n_x \times n_y \times n_t$ is distributed as

$$S \sim \mathrm{Poisson}(h * \mathrm{RD} + B), \qquad (2)$$
where $*$ denotes the convolution operator. Our aim is then to obtain a fine estimate of $\mathrm{RD}$ from the raw data $S$ acquired by the SPAD, based on this probabilistic measurement model.
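Reusing the RD array from the sketch above, the measurement model of Eq. (2) is a spatiotemporal blur followed by Poisson sampling. The Gaussian kernel widths below are illustrative stand-ins for the beam divergence and the roughly 1 ns system jitter, not calibrated values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# h * RD: separable Gaussian blur over (x, y, t); sigmas are assumptions.
blurred = gaussian_filter(RD, sigma=(1.5, 1.5, 1.0))

b_rate = 1.0e-3                              # flat background rate per bin
S = np.random.default_rng(1).poisson(blurred + b_rate)  # S ~ Poisson(h*RD + B)
```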

B. Reconstruction

The reconstruction contains two parts: (i) a global gating approach to unmix signal from noise; (ii) an inverse 3D deconvolution based on a modified SPIRAL-TAP solver [33].

1. Global Gating

In our experiment, we operate the SPAD in free-running mode over an operation window of 10 μs [see Fig. 2(a)], equal to the laser period. This covers a wide range of blind depth measurements. We post-select the signals with an automated global gating process that extracts the time-tagged signals. Global gating consists of two main processes. (i) Noise fitting: unlike pixelwise gating [21], we sum the detection counts from all pixels and generate a total raw-data histogram, as shown in Fig. 2(a). The background noise consists of ambient light, dark counts, and internally reflected ASE noise. In our experiment, the ASE noise is about 6000 counts/s and the dark-count rate is about 2000 counts/s; the ambient light can be neglected at night. The background noise therefore comes mainly from the ASE. Note that the ASE noise arising from the pulsed laser increases over time within the laser period (10 μs): in each pulse cycle, the photon population in the upper laser level gradually builds up and drops abruptly once the pulse is emitted [34]. The ASE noise tracks this photon population, and hence rises over the period. The exact time dependence of the ASE noise can be complex and differs between laser systems [34]. In our experiment, after careful calibration, we find that the ASE noise is well described by a quadratic polynomial fit, as shown in Fig. 2(b), where the relative standard deviation between the data and the fitted curve is less than 5%. (ii) Peak searching: we apply a peak-searching process to determine the position of the effective signal gate Tgate. For the duration of Tgate, we generally select a typical value of 200 ns (about 30 m), which covers the depth extent of most natural targets. Note that for a multilayer scene, multiple effective signal gates are selected. We censor the data outside Tgate and obtain the censored signal bins in Fig. 2(c). We also set a threshold (according to the noise-fitting results) for each signal time bin to further censor noisy bins within Tgate.


Fig. 2. Raw-data histogram and global-gating process. (a) Raw-data histogram for the 45 km imaging experiment over the laser period (10 μs). (b) Noise fitting for the background noise, which comes mainly from the internally reflected ASE photons and increases with time; it is well fitted by a quadratic polynomial. (c) Censored time bins for reconstruction. (d) Illustration of the signal counts. (e) Illustration of the histogram of a single pixel within the effective signal gate Tgate.


Overall, the general procedure for global gating is listed in Algorithm 1 and can be summarized as follows: (i) form histograms H[T] and h[t] from the raw data at two different bin resolutions, Tcoarse and Tfine; (ii) fit h[t] with a quadratic function and downsample this fit for H[T]; (iii) compute the deviations between each histogram and its respective fit; (iv) find the position of the peak in the coarse deviation data E1[t] and refine this position estimate with the fine deviation data E2[T]; (v) within the time interval containing the signal peak, retain only the fine bins above a data-dependent threshold (the error standard deviation). The output of the global gating procedure indicates which bins are considered signal and should be included. Finally, these signal bins are used to censor the raw data by checking whether each photon arrival time falls within them.
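A condensed, single-resolution rendition of this procedure (skipping the coarse/fine refinement of steps i and iv) might look as follows; the 200-bin gate corresponds to the 200 ns typical value with 1 ns bins.

```python
import numpy as np

def global_gate(counts, gate_bins=200):
    """Sketch of global gating on the total histogram summed over all pixels.

    counts: photon counts per time bin over one laser period.
    Returns a boolean mask of the bins retained as signal.
    """
    x = np.arange(counts.size, dtype=float)
    # (ii) quadratic polynomial fit to the slowly rising ASE background
    fit = np.polyval(np.polyfit(x, counts, deg=2), x)
    dev = counts - fit                      # (iii) deviation from the noise fit
    peak = int(np.argmax(dev))              # (iv) peak search locates the gate
    lo = max(0, peak - gate_bins // 2)
    hi = min(counts.size, peak + gate_bins // 2)
    # (v) within the gate, keep only bins above the deviation's std threshold
    mask = np.zeros(counts.size, dtype=bool)
    mask[lo:hi] = dev[lo:hi] > dev.std()
    return mask
```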

Figure 2(d) shows the coarse signal photons for all pixels within Tgate. Figure 2(e) shows the raw data of a single pixel within Tgate, where one of the highly reflective pixels is selected to illustrate multiple peaks per pixel. Clearly, multiple returns produce several peaks per pixel, which makes conventional pixelwise-adaptive gating [21] difficult.

Algorithm 1. Global gating.

2. 3D Deconvolution

For the censored signal bins in Fig. 2(c), we solve an inverse optimization problem to estimate $\mathrm{RD}$. Let $\mathcal{L}(\mathrm{RD}; S, h, B)$ denote the negative log-likelihood function of $\mathrm{RD}$ derived from Eq. (2). The deconvolution problem is then

$$\underset{\mathrm{RD}}{\mathrm{minimize}}\;\; \mathcal{L}(\mathrm{RD}; S, h, B) \quad \mathrm{subject\ to}\;\; \mathrm{RD}_{i,j,k} \ge 0,\ \forall\, i,j,k, \qquad (3)$$
where the constraint $\mathrm{RD}_{i,j,k} \ge 0$ follows from the nonnegativity of reflectivity. Both the negative Poisson log-likelihood cost function $\mathcal{L}$ and the nonnegativity constraint on $\mathrm{RD}$ are convex; thus, the global minimizer can be found by convex optimization.

A widely used solver is SPIRAL-TAP, as demonstrated previously in Refs. [16,18,20,21]. Nonetheless, the existing SPIRAL-TAP solver cannot be applied directly to Eq. (3): all the operators and matrices in our forward model are represented in the 3D spatiotemporal domain, whereas the existing solver handles only optimization problems posed in the 2D domain [16,18,20,21]. Consequently, we generalized SPIRAL-TAP to a 3D form by analogy. For this purpose, we applied a blurring matrix $h$ denoting the spatiotemporal kernel; $h$ has dimensions $k_x \times k_y \times k_t$, and its elements are the products of the spatial (transverse) and temporal (longitudinal) distributions for pixel $(i,j)$. In implementation, the size of $h$ is set by the FoV of the receiver and the system jitter. For more details about $h$, see the processing code available online [35].
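For intuition, one projected-gradient iteration on the negative Poisson log-likelihood of Eq. (2) can be written as below. This is a bare-bones stand-in for the 3D SPIRAL-TAP update, omitting the regularizer and the step-size selection of the actual solver [33,35].

```python
import numpy as np
from scipy.signal import fftconvolve

def neg_loglik(RD, S, h, B):
    """Negative Poisson log-likelihood of Eq. (2), up to a constant in S."""
    lam = fftconvolve(RD, h, mode="same") + B
    return float(np.sum(lam - S * np.log(lam)))

def projected_gradient_step(RD, S, h, B, step=0.5):
    """One iteration: likelihood gradient, then projection onto RD >= 0."""
    lam = fftconvolve(RD, h, mode="same") + B
    adjoint = h[::-1, ::-1, ::-1]              # flipped kernel = adjoint blur
    grad = fftconvolve(1.0 - S / lam, adjoint, mode="same")
    return np.maximum(RD - step * grad, 0.0)   # nonnegativity of Eq. (3)
```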

4. RESULTS

We present an in-depth study of our imaging system and algorithm for a variety of targets with different spatial distributions and structures over different ranges [35]. The experiments were performed in an urban environment in Shanghai. We performed blind lidar measurements without any prior information about the arrival time of the returned signals. Depth maps of the targets were reconstructed by using the proposed algorithm with about 1 signal PPP and an SBR as low as 0.03. Here, we define the SBR as the signal detection counts (i.e., the back-reflections from the target) divided by the noise detection counts (i.e., the ambient light, dark counts, and ASE noise) within the 200 ns timing gate after the global gating process (see Section 3.B). We also made accurate laser-ranging measurements to determine the absolute distance to the targets; laser pulses at three different repetition rates were employed to extend the unambiguous range [36].
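The idea behind the multi-rate ranging of Ref. [36] can be illustrated numerically: each repetition rate measures the range only modulo its unambiguous range c/(2f), and combining the folded residuals of several rates recovers the absolute distance. The specific rates below are our assumptions for the toy example; the values used in [36] may differ.

```python
import numpy as np

C = 3.0e8
f_reps = np.array([100e3, 98e3, 95e3])  # assumed repetition rates (Hz)
r_u = C / (2.0 * f_reps)                # per-rate unambiguous ranges (~1.5 km)

true_range = 45_000.0                   # ground truth for the toy example
residuals = true_range % r_u            # folded range each rate reports

# Brute-force search: the candidate whose folded residuals match all rates.
cand = np.arange(0.0, 60e3, 0.25)                 # candidate ranges, 0.25 m grid
err = np.abs(cand[:, None] % r_u - residuals)     # mismatch per rate
err = np.minimum(err, r_u - err).sum(axis=1)      # circular (wrap-around) distance
print(cand[np.argmin(err)])                       # -> 45000.0
```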

We first show the imaging results for a long-range target, the Pudong Civil Aviation Building, at a one-way distance of about 45 km. Figure 1 shows the topology of the experiment. The imaging setup was placed on the 20th floor of a building, and the target was on the opposite shore of the river. The ground truth of the target is shown in Fig. 1(c). Figure 3(a) shows a visible-band photograph taken with a standard astronomical camera (ASI120MC-S); this photograph is substantially blurred due to the inadequate spatial resolution and the air turbulence in the urban environment. We used our single-photon lidar to image the scene at night and produce a (128×128)-pixel image. A modest laser power of 120 mW was used for the data acquisition. The average PPP was 2.59, and the SBR was 0.03. Note that this PPP and SBR were calculated over all the pixels in the entire scene; if we consider only the pixels with valid surfaces, the average PPP and SBR are about 6.45 and 0.08, respectively. The plots in Figs. 3(b)–3(e) show the depth reconstructions obtained with various imaging algorithms: the pixelwise maximum likelihood (ML) method, the photon-efficient algorithm by Shin et al. [18], the unmixing algorithm by Rapp and Goyal [21], and the algorithm proposed herein. The proposed algorithm recovers the fine features of the building and allows scenes with multilayer distributions to be accurately identified; the other algorithms fail in this regard. These results clearly demonstrate that the proposed algorithm performs better for spatial and depth reconstruction of long-range targets. Furthermore, we used the microscanning approach [24], setting a fine scan interval (half the FoV) to improve the resolution. The result reaches a spatial resolution of 0.6 m, which resolves the small windows of the target building [see inset in Fig. 3(e)].
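The microscanning step can be pictured as interleaving sub-pixel-shifted acquisitions; the toy function below merges four scans offset by half the FoV into a grid with doubled sampling density. It is a schematic of the approach in Refs. [24,25], not the authors' exact pipeline.

```python
import numpy as np

def interleave_quad(s00, s01, s10, s11):
    """Merge four (n x n) scans, offset by half a pixel in x and/or y,
    into one (2n x 2n) image with doubled sampling density."""
    n = s00.shape[0]
    fine = np.empty((2 * n, 2 * n))
    fine[0::2, 0::2] = s00   # no offset
    fine[0::2, 1::2] = s01   # half-pixel offset in y
    fine[1::2, 0::2] = s10   # half-pixel offset in x
    fine[1::2, 1::2] = s11   # offset in both axes
    return fine
```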


Fig. 3. Long-range 3D imaging over 45 km. (a) Real visible-band image (tailored) of the target taken with a standard astronomical camera. This photograph is substantially blurred due to the inadequate spatial resolution and the air turbulence in the urban environment. The red rectangle indicates the approximate lidar FoR. (b)–(e) Reconstruction results obtained by using the pixelwise maximum likelihood (ML) method, photon-efficient algorithm [18], unmixing algorithm by Rapp and Goyal [21], and the proposed algorithm, respectively. The single-photon lidar recorded an average PPP of 2.59, and the SBR was 0.03. The calculated relative depth for each individual pixel is given by the false color (see color scale on right). Our algorithm performs much better than the other state-of-the-art photon-efficient computational algorithms and provides sufficient resolution to clearly resolve the 0.6 m wide windows [see expanded view in inset of (e)].


To quantify the performance of the proposed technique, we show an example of a 3D image obtained in daylight of a solid target with complex structures at a one-way distance of 21.6 km [see Fig. 4(a)]. The target is part of a skyscraper called K11 [see Fig. 4(b)] located in the center of Shanghai. Before data acquisition, a photograph of the target was taken with a visible-band camera [see Fig. 4(c)]; the resulting image is blurred because of the long object distance and the urban air turbulence. The single-photon lidar data were acquired by scanning 256×256 points with an acquisition time of 22 ms per point and a laser power of 100 mW; the total acquisition time was about 25 min. We performed calculations according to our model in Section 3.A, and the difference between the expected and measured photon numbers is within an order of magnitude. For the entire scene, the average PPP was 1.20 and the SBR was 0.11; for the pixels with valid depths only, the average PPP was 1.76 and the SBR was 0.16. The plots in Figs. 4(d)–4(g) show the depth profiles reconstructed with the different algorithms. The proposed algorithm allows us to clearly identify the grid structure on the walls and the symmetrical H-like structure at the top of the building. The quality of the reconstruction is quantified by the peak signal-to-noise ratio (PSNR), comparing the reconstructed image with a high-quality image obtained using a large number of photons. The PSNR is evaluated over all pixels in the entire scene, with pixels lacking valid depths set to zero. The PSNR of the proposed algorithm is 14 dB better than that of the ML method and 8 dB better than that of the unmixing algorithm. In this reconstruction, we chose a particular regularizer (the 3D TV semi-norm) based on the characteristics of our target scenes [35].
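The PSNR used here is the standard definition; a minimal implementation, assuming the invalid-depth pixels of both images have already been set to zero as described:

```python
import numpy as np

def psnr_db(recon, reference):
    """PSNR (dB) of a reconstruction against a high-photon-count reference."""
    mse = np.mean((recon - reference) ** 2)
    return 10.0 * np.log10(reference.max() ** 2 / mse)
```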


Fig. 4. Long-range target taken in daylight over 21.6 km. (a) Topology of the experiment. (b) Ground-truth image of the target (building K11). (c) Visible-band image of the target taken with a standard astronomical camera. (d)–(g) Depth profile taken with the proposed single-photon lidar in daylight and reconstructed by applying the different algorithms to the data with 1.2 signal PPP and SBR=0.11. (d) Reconstruction with the pixelwise ML method. (e) Reconstruction with the photon-efficient algorithm [18]. (f) Reconstruction with the algorithm of Rapp and Goyal [21]. (g) Reconstruction with the proposed algorithm. The peak signal-to-noise ratio (PSNR) was calculated by comparing the reconstructed image with a high-quality image obtained with a large number of photons. The proposed method yields a much higher PSNR than the other algorithms.


To demonstrate the all-time capability of the proposed lidar system, we used it to image building K11 both in daylight and at night (at 11:00 AM and 12:00 midnight) on June 15, 2018, and compared the resulting reconstructions. The proposed single-photon lidar gave 1.2 signal PPP and an SBR of 0.11 (0.15) in daylight (at night). Figures 5(b) and 5(c) show front-view depth plots of the reconstructed scene. The single-photon lidar allows the surface features of the multilayer walls of the building to be clearly identified both in daylight and at night. The enlarged images in Figs. 5(b) and 5(c) show the detailed features of the window frames, although the daytime image is slightly blurred compared with the nighttime image because of increased air turbulence during the day.


Fig. 5. Long-range target at 21.6 km imaged in daylight and at night. (a) Visible-band image of the target taken with a standard astronomical camera. (b) Depth profile of image taken in daylight and reconstructed with signal PPP=1.2, SBR=0.11. (c) Depth profile of image taken at night and reconstructed with signal PPP=1.2, SBR=0.15.


Finally, Fig. 6 shows a more complex natural scene with multiple trees and buildings at a one-way distance of 2.1 km. This scene was scanned in the daytime to produce a (128×256)-pixel depth image. Figure 6(b) shows the depth profile of the scene, and Fig. 6(c) shows a depth-intensity plot. The conventional visible-band photograph in Fig. 6(a) is blurred, mainly because of smog in Shanghai, and does not resolve the different layers of trees. In contrast, as shown in Figs. 6(b) and 6(c), the proposed lidar system clearly resolves the details of the scene, such as the fine features of the trees. More importantly, the 3D capability of the single-photon lidar clearly resolves the multiple layers of trees and buildings [see Fig. 6(b)]. This result demonstrates the capability of the near-infrared single-photon lidar to resolve targets through smog [37].


Fig. 6. Reconstruction of multilayer depth profile of a complex scene. (a) Visible-band image of the target taken by a standard astronomical camera mounted on the imaging system with an f=700mm camera lens. (b), (c) Depth profile taken by the proposed single-photon lidar over 2.1 km, and recovered by using the proposed computational algorithm. Trees at different depths and their fine features can be identified.


5. DISCUSSION

To summarize, we have demonstrated active single-photon 3D imaging at ranges of up to 45 km, surpassing the previous record of 10 km [23]. Table 1 gives further comparisons with previous experiments. The 3D images are generated at the single-photon-per-pixel level, which allows target recognition and identification at very low light levels. The proposed high-efficiency coaxial single-photon lidar system, noise-suppression method, and advanced computational algorithm open new opportunities for low-power lidar imaging over long ranges. These results could facilitate the adaptation of the system for future multibeam single-photon lidar with Geiger-mode SPAD arrays for rapid remote sensing [38]. Nonetheless, SPAD arrays face limitations in data readout and storage [9,10] that require future technical improvements; for instance, high-speed circuitry and efficient readout strategies are needed to speed up the readout process [39]. Another limitation of SPAD arrays is the low fill factor caused by the additional in-pixel TDC circuitry, which can be improved by using microlens arrays [39,40]. Moreover, advanced detection techniques such as the superconducting nanowire single-photon detector (SNSPD) [8] can be used to improve the efficiency and decrease the noise, as demonstrated in other lidar systems [12,41,42]. Furthermore, our framework does not consider turbulence effects in long-range imaging; they can, however, be included in our forward model and reconstruction by modifying the integration domain and the distributions of the spatial and temporal kernels in Eq. (1) according to a turbulence model. Finally, our imaging experiments were performed only along horizontal paths through the atmosphere; the lidar's SNR will improve when the light traverses the atmosphere vertically. In the future, low-power single-photon lidar mounted on LEO satellites, as a complement to traditional imaging, could provide high-resolution, richer 3D images for a variety of applications.

Table 1. Summary of Representative Single-Photon Imaging Experiments, Focusing on Imaging Distance and Sensitivity

Funding

National Key Research and Development Program of China (2018YFB0504300); National Natural Science Foundation of China (61771443); Shanghai Municipal Science and Technology Major Project (2019SHZDZX01); Anhui Initiative in Quantum Information Technologies; Shanghai Science and Technology Development Funds (18JC1414700); Fundamental Research Funds for the Central Universities (WK2340000083); Youth Innovation Promotion Association of CAS (2018492).

Acknowledgment

The authors acknowledge insightful discussions with Cheng Wu, Ting Zeng, and Qi Shen.

Disclosures

The authors declare no conflicts of interest.

REFERENCES

1. R. M. Marino and W. R. Davis, “Jigsaw: a foliage-penetrating 3D imaging laser radar system,” Lincoln Lab. J. 15, 23–36 (2005).

2. B. Schwarz, “Lidar: mapping the world in 3D,” Nat. Photonics 4, 429–430 (2010). [CrossRef]  

3. C. L. Glennie, W. E. Carter, R. L. Shrestha, and W. E. Dietrich, “Geodetic imaging with airborne lidar: the Earth’s surface revealed,” Rep. Prog. Phys. 76, 086801 (2013). [CrossRef]  

4. D. E. Smith, M. T. Zuber, H. V. Frey, J. B. Garvin, J. W. Head, D. O. Muhleman, G. H. Pettengill, R. J. Phillips, S. C. Solomon, H. J. Zwally, W. B. Banerdt, and T. C. Duxbury, “Topography of the northern hemisphere of mars from the mars orbiter laser altimeter,” Science 279, 1686–1692 (1998). [CrossRef]  

5. W. Abdalati, H. J. Zwally, R. Bindschadler, B. Csatho, S. L. Farrell, H. A. Fricker, D. Harding, R. Kwok, M. Lefsky, T. Markus, A. Marshak, T. Neumann, S. Palm, B. Schutz, B. Smith, J. Spinhirne, and C. Webb, “The ICESat-2 laser altimetry mission,” Proc. IEEE 98, 735–751 (2010). [CrossRef]  

6. A. B. Gschwendtner and W. E. Keicher, “Development of coherent laser radar at Lincoln Laboratory,” Lincoln Lab. J. 12, 383–396 (2000).

7. G. Buller and A. Wallace, “Ranging and three-dimensional imaging using time-correlated single-photon counting and point-by-point acquisition,” IEEE J. Sel. Top. Quantum Electron. 13, 1006–1015 (2007). [CrossRef]  

8. R. H. Hadfield, “Single-photon detectors for optical quantum information applications,” Nat. Photonics 3, 696–705 (2009). [CrossRef]  

9. J. A. Richardson, L. A. Grant, and R. K. Henderson, “Low dark count single-photon avalanche diode structure compatible with standard nanometer scale CMOS technology,” IEEE Photon. Technol. Lett. 21, 1020–1022 (2009). [CrossRef]  

10. F. Villa, R. Lussana, D. Bronzi, S. Tisa, A. Tosi, F. Zappa, A. Dalla Mora, D. Contini, D. Durini, S. Weyers, and W. Brockherde, “CMOS imager with 1024 SPADs and TDCs for single-photon timing and 3-D time-of-flight,” IEEE J. Sel. Top. Quantum Electron. 20, 364–373 (2014). [CrossRef]  

11. A. McCarthy, R. J. Collins, N. J. Krichel, V. Fernández, A. M. Wallace, and G. S. Buller, “Long-range time-of-flight scanning sensor based on high-speed time-correlated single-photon counting,” Appl. Opt. 48, 6241–6251 (2009). [CrossRef]  

12. A. McCarthy, N. J. Krichel, N. R. Gemmell, X. Ren, M. G. Tanner, S. N. Dorenbos, V. Zwiller, R. H. Hadfield, and G. S. Buller, “Kilometer-range, high resolution depth imaging via 1560 nm wavelength single-photon detection,” Opt. Express 21, 8904–8915 (2013). [CrossRef]  

13. Z. Li, E. Wu, C. Pang, B. Du, Y. Tao, H. Peng, H. Zeng, and G. Wu, “Multi-beam single-photon-counting three-dimensional imaging lidar,” Opt. Express 25, 10189–10195 (2017). [CrossRef]  

14. S. Chan, A. Halimi, F. Zhu, I. Gyongy, R. K. Henderson, R. Bowman, S. McLaughlin, G. S. Buller, and J. Leach, “Long-range depth imaging using a single-photon detector array and non-local data fusion,” Sci. Rep. 9, 8075 (2019). [CrossRef]  

15. W. Wagner, A. Ullrich, V. Ducic, T. Melzer, and N. Studnicka, “Gaussian decomposition and calibration of a novel small-footprint full-waveform digitising airborne laser scanner,” ISPRS J. Photogramm. Remote Sens. 60, 100–112 (2006). [CrossRef]  

16. A. Kirmani, D. Venkatraman, D. Shin, A. Colaço, F. N. Wong, J. H. Shapiro, and V. K. Goyal, “First-photon imaging,” Science 343, 58–61 (2014). [CrossRef]  

17. Y. Altmann, S. McLaughlin, M. J. Padgett, V. K. Goyal, A. O. Hero, and D. Faccio, “Quantum-inspired computational imaging,” Science 361, eaat2298 (2018). [CrossRef]  

18. D. Shin, A. Kirmani, V. K. Goyal, and J. H. Shapiro, “Photon-efficient computational 3-D and reflectivity imaging with single-photon detectors,” IEEE Trans. Comput. Imaging 1, 112–125 (2015). [CrossRef]  

19. Y. Altmann, X. Ren, A. McCarthy, G. S. Buller, and S. McLaughlin, “Lidar waveform-based analysis of depth images constructed using sparse single-photon data,” IEEE Trans. Image Process. 25, 1935–1946 (2016). [CrossRef]  

20. D. Shin, F. Xu, D. Venkatraman, R. Lussana, F. Villa, F. Zappa, V. K. Goyal, F. N. Wong, and J. H. Shapiro, “Photon-efficient imaging with a single-photon camera,” Nat. Commun. 7, 12046 (2016). [CrossRef]  

21. J. Rapp and V. K. Goyal, “A few photons among many: unmixing signal and noise for photon-efficient active imaging,” IEEE Trans. Comput. Imaging 3, 445–459 (2017). [CrossRef]  

22. D. B. Lindell, M. O’Toole, and G. Wetzstein, “Single-photon 3D imaging with deep sensor fusion,” ACM Trans. Graph. 37, 113 (2018). [CrossRef]  

23. A. M. Pawlikowska, A. Halimi, R. A. Lamb, and G. S. Buller, “Single-photon three-dimensional imaging at up to 10 kilometers range,” Opt. Express 25, 11919–11931 (2017). [CrossRef]  

24. Z.-P. Li, X. Huang, P.-Y. Peng, Y. Hong, C. Yu, Y. Cao, J. Zhang, F. Xu, and J.-W. Pan, “Super-resolution single-photon imaging at 8.2 kilometers,” Opt. Express 28, 4076–4087 (2020). [CrossRef]  

25. M.-J. Sun, M. P. Edgar, D. B. Phillips, G. M. Gibson, and M. J. Padgett, “Improving the signal-to-noise ratio of single-pixel imaging using digital microscanning,” Opt. Express 24, 10476–10485 (2016). [CrossRef]  

26. C. Yu, M. Shangguan, H. Xia, J. Zhang, X. Dou, and J. W. Pan, “Fully integrated free-running InGaAs/InP single-photon detector for accurate lidar applications,” Opt. Express 25, 14611–14620 (2017). [CrossRef]  

27. M. A. Albota, B. F. Aull, D. G. Fouche, R. M. Heinrichs, D. G. Kocher, R. M. Marino, J. G. Mooney, N. R. Newbury, M. E. O’Brien, B. E. Player, B. C. Willard, and J. J. Zayhowski, “Three-dimensional imaging laser radars with Geiger-mode avalanche photodiode arrays,” Lincoln Lab. J. 13, 351–370 (2002).

28. S. Hernandez-Marin, A. M. Wallace, and G. J. Gibson, “Bayesian analysis of lidar signals with multiple returns,” IEEE Trans. Pattern Anal. Mach. Intell. 29, 2170–2180 (2007). [CrossRef]  

29. D. Shin, J. H. Shapiro, and V. K. Goyal, “Photon-efficient super-resolution laser radar,” Proc. SPIE 10394, 1039409 (2017). [CrossRef]  

30. J. Tachella, Y. Altmann, S. McLaughlin, and J.-Y. Tourneret, “3D reconstruction using single-photon lidar data exploiting the widths of the returns,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (2019), pp. 7815–7819.

31. D. Shin, F. Xu, F. N. Wong, J. H. Shapiro, and V. K. Goyal, “Computational multi-depth single-photon imaging,” Opt. Express 24, 1873–1888 (2016). [CrossRef]  

32. J. Tachella, Y. Altmann, X. Ren, A. McCarthy, G. S. Buller, S. Mclaughlin, and J.-Y. Tourneret, “Bayesian 3D reconstruction of complex scenes from single-photon lidar data,” SIAM J. Imaging Sci. 12, 521–550 (2019). [CrossRef]  

33. Z. T. Harmany, R. F. Marcia, and R. M. Willett, “This is SPIRAL-TAP: sparse Poisson intensity reconstruction algorithms - theory and practice,” IEEE Trans. Image Process. 21, 1084–1096 (2012). [CrossRef]  

34. M. J. Digonnet, Rare-Earth-Doped Fiber Lasers and Amplifiers, Revised and Expanded (CRC Press, 2001).

35. Processing code for long-range photon-efficient imaging, https://github.com/quantum-inspired-lidar/long-range-photon-efficient-imaging.git.

36. B. Du, C. Pang, D. Wu, Z. Li, H. Peng, Y. Tao, E. Wu, and G. Wu, “High-speed photon-counting laser ranging for broad range of distances,” Sci. Rep. 8, 4198 (2018). [CrossRef]  

37. R. Tobin, A. Halimi, A. McCarthy, M. Laurenzis, F. Christnacher, and G. S. Buller, “Three-dimensional single-photon imaging through obscurants,” Opt. Express 27, 4590–4611 (2019). [CrossRef]  

38. J. J. Degnan, “Scanning, multibeam, single photon lidars for rapid, large scale, high resolution, topographic and bathymetric mapping,” Remote Sens. 8, 958 (2016). [CrossRef]  

39. C. Bruschini, H. Homulle, I. Antolovic, S. Burri, and E. Charbon, “Single-photon avalanche diode imagers in biophotonics: review and outlook,” Light Sci. Appl. 8, 87 (2019). [CrossRef]  

40. P. W. R. Connolly, X. Ren, A. Mccarthy, H. Mai, F. Villa, A. J. Waddie, M. R. Taghizadeh, A. Tosi, F. Zappa, R. K. Henderson, and G. S. Buller, “High concentration factor diffractive microlenses integrated with CMOS single-photon avalanche diode detector arrays for fill-factor improvement,” Appl. Opt. 59, 4488–4498 (2020). [CrossRef]  

41. D. M. Boroson, B. S. Robinson, D. V. Murphy, D. A. Burianek, F. Khatri, J. M. Kovalik, Z. Sodnik, and D. M. Cornwell, “Overview and results of the lunar laser communication demonstration,” Proc. SPIE 8971, 89710S (2014). [CrossRef]  

42. H. Li, S. Chen, L. You, W. Meng, Z. Wu, Z. Zhang, K. Tang, L. Zhang, W. Zhang, X. Yang, X. Liu, Z. Wang, and X. Xie, “Superconducting nanowire single photon detector at 532 nm and demonstration in satellite laser ranging,” Opt. Express 24, 3535–3542 (2016). [CrossRef]  
