Abstract
In this paper, we propose an enhancement of three-dimensional (3D) image visualization under photon-starved conditions using preprocessing techniques, namely contrast-limited adaptive histogram equalization (CLAHE) and histogram matching. Among conventional imaging techniques, photon-counting integral imaging can be utilized for 3D visualization. However, owing to the lack of photons, it is challenging to enhance the visual quality of 3D images under severely photon-starved conditions. To improve the visual quality and accuracy of 3D images under these conditions, we apply CLAHE and histogram matching to a scene before photon-counting integral imaging is performed. To prove the feasibility of our proposed method, we carry out optical experiments and evaluate performance using the peak sidelobe ratio.
© 2022 Optica Publishing Group
1. INTRODUCTION
Three-dimensional (3D) imaging under photon-starved conditions has been challenging for many applications such as medical imaging, unmanned autonomous vehicles, and defense [1–3]. Why is it challenging to obtain 3D images under these conditions? The reason is that 3D image information such as intensity, contrast, and depth is required but not easily acquired [4–13]. To overcome this problem, photon-counting integral imaging has been introduced [4–11].
Integral imaging is a 3D imaging technique that can provide 3D images using multiple 2D images with different perspectives [4–11,14–17]. These 2D images are referred to as elemental images. Elemental images can be recorded through a lenslet or camera array from a 3D scene [14,16]. In lenslet-array-based integral imaging, since the resolution of the elemental images depends on the number of lenslets and the resolution of the image sensor, the elemental images may have low resolution. Thus, in this paper, synthetic aperture integral imaging (SAII) [16,18,19], which uses a camera array, is employed to obtain high-resolution elemental images. Then, 3D images with enhanced depth resolution can be obtained using nonuniform volumetric computational reconstruction (VCR) [20]. To obtain elemental images under photon-starved conditions, computational photon-counting imaging [9,11,12] is used. It can be modeled by a Poisson distribution since photons occur rarely in unit time and space [21]. For 3D visualization under these conditions, 3D photon-counting integral imaging with statistical estimation, such as maximum likelihood estimation (MLE) or a Bayesian approach, can be utilized. In MLE, we assume that each pixel of a scene has uniform probability; that is, the prior information of the scene follows a uniform distribution. In the Bayesian approach, on the other hand, the prior information of the scene follows a Gamma distribution, whose nonnegative support matches the Poisson rate parameter. However, these methods may not reconstruct 3D images under severely photon-starved conditions because each elemental image has only a few photons, which may not be sufficient for visualization. Therefore, in this paper, we propose preprocessing techniques, contrast-limited adaptive histogram equalization (CLAHE) [22] and histogram matching [23], for enhancing the visual quality of 3D photon-counting images.
This paper is organized as follows. In Section 2, we present the basic concepts of 3D photon-counting integral imaging and our proposed method. In Section 3, we show experimental results supporting the feasibility of our proposed method, using the peak sidelobe ratio (PSR) as a performance metric. Finally, we conclude with a summary in Section 4.
2. ENHANCEMENT OF 3D IMAGE VISUALIZATION UNDER PHOTON-STARVED CONDITIONS
In this section, we describe the basic concept of computational photon-counting imaging, 3D photon-counting integral imaging, and our proposed method.
A. Computational Photon-Counting Imaging
Under photon-starved conditions, a conventional imaging system may not record an image due to the lack of photons. To detect photons from the scene, photon-counting imaging [12] may be required. Notably, a computational photon-counting imaging model can be utilized since it can easily control the number of photons extracted from a scene by statistical processes. We can assume that photon counting follows a Poisson distribution because photons rarely occur in unit time and space [21]. Photon-counting imaging can then be written as follows [10,12]:
$${\lambda _E}(x) = \frac{{{I_E}(x)}}{{\sum\nolimits_{x = 1}^{{N_x}} {{I_E}(x)} }},\qquad {C_E}(x) \sim {\text{Poisson}}[{N_p}{\lambda _E}(x)],$$
where ${\lambda _E}(x)$ is the normalized irradiance of the image at $x$, ${N_x}$ is the total number of pixels in the image, ${N_p}$ is the expected number of photons from the image, ${I_E}(x)$ is the intensity of the image at $x$, and ${C_E}(x)$ is the number of photons at $x$. Since ${\lambda _E}(x)$ has unit energy, the total number of extracted photons in ${C_E}(x)$ is ${N_p}$. In photon-counting imaging, ${\lambda _E}(x)$ is estimated for visualization by statistical estimation such as maximum likelihood estimation (MLE) [12] or a Bayesian approach [11]. Figure 1 shows images recorded under photon-starved conditions by conventional imaging and by photon-counting imaging. As shown in Fig. 1(a), it is difficult to recognize the objects in the conventionally recorded image. In contrast, the objects can be observed in the photon-counting image shown in Fig. 1(b), where the image size is $4128({\text{H}}) \times 2752({\text{V}})$ and the expected number of photons is ${N_p} = 800{,}000$ (0.0704 photons/pixel). Even though the average number of photons per pixel is only 0.0704, the image can be visualized by photon-counting imaging. However, its visual quality is still insufficient for observation. Therefore, integral imaging is applied to photon-counting imaging to enhance the visual quality and obtain 3D information, since it can increase the number of photons with different perspectives.
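The photon-counting model above can be sketched numerically. The following is a minimal illustration (our own code; the scene and array sizes are made up): it normalizes a toy scene to unit energy, mirroring ${\lambda _E}(x)$, and draws per-pixel Poisson counts, mirroring ${C_E}(x)$.

```python
import numpy as np

def photon_count(intensity, n_photons, seed=None):
    """Computational photon counting: normalize the scene irradiance
    to unit energy (lambda_E) and draw per-pixel Poisson counts with
    expected total N_p, i.e., C_E(x) ~ Poisson(N_p * lambda_E(x))."""
    rng = np.random.default_rng(seed)
    lam = intensity / intensity.sum()     # unit-energy normalization
    return rng.poisson(n_photons * lam)   # integer photon counts

# Toy scene: a bright square on a black background (illustrative sizes).
scene = np.zeros((64, 64))
scene[24:40, 24:40] = 1.0
counts = photon_count(scene, n_photons=300, seed=0)
```

Because the background pixels have zero irradiance, they receive no photons, and the total count fluctuates around the expected ${N_p}$.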
B. 3D Photon-Counting Integral Imaging
To reconstruct 3D images, integral imaging may be utilized. For the acquisition of elemental images in integral imaging, two different methods can be applied: direct pickup and synthetic aperture integral imaging (SAII). Direct pickup obtains elemental images through a lenslet array in a single shot, so it can be used for a dynamic 3D scene. However, since the resolution of each elemental image may be reduced by the number of lenslets, the 3D resolution of the reconstructed scene is low. In contrast, SAII can provide high-resolution elemental images by using a camera array. Therefore, in this paper, we use SAII for recording elemental images.
Three-dimensional images can be visualized by volumetric computational reconstruction (VCR) with elemental images. In VCR, the number of pixels by which each elemental image is shifted at a given reconstruction depth is the most important factor, and uniform VCR (i.e., the conventional method) and nonuniform VCR differ in how this shift is computed. In uniform VCR, the pixel shift at a given reconstruction depth is approximated by a fixed integer applied to every elemental image. In nonuniform VCR, on the other hand, the shift varies with each elemental image at the reconstruction depth. The difference between them can be described as follows [20]:
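As an illustrative sketch (our own code, not the exact formulation of [20]), the following contrasts the two shifting schemes, assuming a common SAII geometry in which the continuous shift is proportional to the camera index, pixel count, pitch, and focal length, and inversely proportional to sensor width and reconstruction depth; the numerical parameters follow the simulation setup in Section 3.

```python
def pixel_shift(k, z, f=50.0, p=2.0, sensor_w=36.0, n_px=1920):
    """Continuous shift (in pixels) of the k-th elemental image when
    reconstructing at depth z; all lengths are in mm (assumed geometry)."""
    return k * n_px * p * f / (sensor_w * z)

def uniform_shifts(ks, z):
    """Uniform VCR: one rounded base shift reused for every camera,
    so the rounding error accumulates with the camera index."""
    base = round(pixel_shift(1, z))
    return [k * base for k in ks]

def nonuniform_shifts(ks, z):
    """Nonuniform VCR: round each camera's continuous shift separately,
    keeping every elemental image close to its true position."""
    return [round(pixel_shift(k, z)) for k in ks]
```

For example, at a depth of 450 mm the continuous per-camera shift is about 11.85 pixels; uniform VCR rounds it to 12 and multiplies, while nonuniform VCR rounds each cumulative shift, so the two schemes diverge as the camera index grows.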
For visualization of a 3D scene under photon-starved conditions, 3D photon-counting integral imaging can be applied. Since multiple elemental images are obtained, a likelihood function of the scene can be constructed from them. Therefore, using maximum likelihood estimation (MLE), the 3D scene under these conditions may be estimated as follows [10,11]:
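Given $K$ photon-counting elemental images aligned to a common depth plane, the Poisson MLE of the normalized irradiance is simply the per-pixel mean count divided by the expected photon number. A minimal sketch (our own function names, toy data):

```python
import numpy as np

def mle_reconstruct(count_stack, n_photons):
    """Poisson MLE of the normalized irradiance from a stack of K
    photon-counting elemental images already shifted to a common
    reconstruction depth: per-pixel mean count divided by N_p."""
    return count_stack.mean(axis=0) / n_photons

# Toy example: a flat unit-energy scene observed from K = 25 views.
rng = np.random.default_rng(1)
lam = np.full((32, 32), 1.0 / (32 * 32))
stack = rng.poisson(500 * lam, size=(25, 32, 32))
est = mle_reconstruct(stack, n_photons=500)
```

Averaging over the views is what lets integral imaging pool photons from many perspectives: the estimate's variance shrinks with $K$, which is why multiple elemental images help under photon starvation.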
Figure 2 shows 3D images reconstructed by conventional integral imaging and by photon-counting integral imaging, respectively. Notice that photon-counting integral imaging can visualize the scene under photon-starved conditions, whereas conventional integral imaging cannot. To reconstruct more accurate 3D images, a Bayesian approach such as maximum a posteriori (MAP) estimation can be utilized because, unlike MLE, it uses a specific statistical distribution as prior information. This method is written as follows [11]:
However, this is still insufficient for visualizing 3D images under severely photon-starved conditions due to the small number of photons from the scene. Therefore, in this paper, we propose a preprocessing method combining contrast-limited adaptive histogram equalization (CLAHE) and histogram matching for enhanced photon-counting integral imaging.
C. Our Proposed Method
CLAHE was developed to address the noise problem of adaptive histogram equalization (AHE). When conventional histogram equalization is applied to the original image shown in Fig. 4(a), it is difficult to enhance the visual quality of a small object with a low contrast ratio, as shown in Fig. 4(b). AHE applies histogram equalization over limited areas, providing more contrast for small objects with low contrast ratios, as shown in Fig. 4(c). However, it causes a noise problem since it may also amplify noise. Therefore, the CLAHE method is used to provide a better contrast ratio than histogram equalization while also reducing the noise, as shown in Fig. 4(d).
CLAHE has several processing steps, as illustrated in Fig. 5. First, the image is segmented into $K \times L$ areas. Then, a histogram of each area is calculated, where the optimal clip limit is determined manually. Excess pixels are defined as the pixels above the clip limit. In CLAHE, the excess pixels are redistributed as follows [22,24]:
When excess pixels remain after the redistribution process, the process is iterated until the number of remaining excess pixels is less than the number of intensity levels. A probability density function (PDF) can then be constructed and used to obtain a cumulative distribution function (CDF). Using the CDF, the pixel values of each segmented area are mapped. Finally, the segmented areas are stitched together by bilinear interpolation, producing the CLAHE image.
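The clip-and-redistribute step described above can be sketched as follows (a simplified, single-tile illustration with our own function names; full CLAHE implementations additionally perform the CDF mapping per tile and stitch tiles with bilinear interpolation, as in Fig. 5):

```python
import numpy as np

def clip_histogram(hist, clip_limit):
    """Clip a tile histogram at clip_limit and redistribute the excess
    pixels uniformly over all bins.  Bins that overflow again return
    their surplus to the pool; iteration stops once fewer excess
    pixels remain than there are intensity levels.  (Assumes the clip
    limit leaves enough total capacity to absorb the excess.)"""
    hist = hist.astype(np.int64)
    n_levels = hist.size
    excess = int(np.maximum(hist - clip_limit, 0).sum())
    hist = np.minimum(hist, clip_limit)
    while excess >= n_levels:
        share = excess // n_levels          # even share per bin
        hist += share
        excess -= share * n_levels
        overflow = int(np.maximum(hist - clip_limit, 0).sum())
        hist = np.minimum(hist, clip_limit) # re-clip overflowing bins
        excess += overflow                  # surplus returns to pool
    return hist, excess

clipped, leftover = clip_histogram(np.array([100, 0, 0, 0]), clip_limit=30)
```

Note that pixels are conserved: the clipped histogram plus the leftover excess always sums to the original pixel count, and the leftover is strictly less than the number of intensity levels when the loop exits.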
Histogram matching is used to map the histogram of the original image onto that of a reference image. It can change the color tone or intensity contrast of an image toward an optimal histogram as follows [23]:
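A common CDF-based realization of histogram matching [23] can be sketched as follows (our own minimal implementation; names and sizes are illustrative):

```python
import numpy as np

def match_histogram(source, reference, n_levels=256):
    """Histogram matching via CDFs: each source intensity level is
    mapped to the first reference level whose CDF value reaches the
    source level's CDF value."""
    s_hist = np.bincount(source.ravel(), minlength=n_levels)
    r_hist = np.bincount(reference.ravel(), minlength=n_levels)
    s_cdf = np.cumsum(s_hist) / source.size
    r_cdf = np.cumsum(r_hist) / reference.size
    # Lookup table from source levels to reference levels.
    lut = np.searchsorted(r_cdf, s_cdf).clip(0, n_levels - 1)
    return lut[source].astype(np.uint8)

# Toy example: push a dark image's histogram toward a bright one's.
rng = np.random.default_rng(0)
dark = rng.integers(0, 64, size=(32, 32))
bright = rng.integers(192, 256, size=(32, 32))
matched = match_histogram(dark, bright)
```

After matching, the dark image's intensities occupy the reference image's bright range, which is the kind of intensity-level regrouping exploited by the proposed method.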
Our proposed method starts by applying CLAHE and histogram matching to every elemental image. This helps the CLAHE images retain conditions similar to the original image and concentrates the object information into equivalent intensity levels. Figure 7 shows the histograms of the histogram-matched result and of the original image. The histogram of the original image in Fig. 7(b) has a continuous shape; because of this, the object and background information cannot be separated by intensity level. In contrast, the histogram produced by our proposed method in Fig. 7(a) shows discrete level differences between background and objects, which provides more precise object information when photon-counting imaging is applied.
Figure 8 illustrates the procedure of our proposed method. After CLAHE and histogram matching, photon-counting imaging is applied to every elemental image. To create various severely photon-starved conditions, $0.03 \sim 0.8\%$ of the total number of pixels is used as the number of extracted photons. Then, the estimated images are calculated by statistical estimation. Finally, 3D images are reconstructed by nonuniform VCR. Figure 9 shows elemental images obtained by normalization and by our method. The normalized image has a low contrast ratio that makes it difficult to find accurate depths. In contrast, the image produced by our method has a high contrast ratio, which makes it easier to find the depth information. To validate our proposed method, simulations are performed before the optical experiments.
3. RESULTS FOR SIMULATION AND EXPERIMENTS
A. Simulation Setup
Figure 10 illustrates the simulation setup and 3D scene. A $9({\text{H}}) \times 9({\text{V}})$ camera array with focal length $f = 50\,\,{\text{mm}}$, pitch between cameras $p = 2\,\,{\text{mm}}$, sensor size $36({\text{H}}) \times 24({\text{V}})\,\,{\text{mm}}$, and $1920({\text{H}}) \times 1080({\text{V}})$ pixels is used for recording elemental images. In general, since the background is far from the image sensor, background photons may not reach it. Therefore, we use a black background to emulate this realistic situation. For photon-starved conditions, the numbers of extracted photons are 800, 1,000, and 1,200, and the clip limit lies in the range $0.001 \sim 0.015$.
B. Simulation Results
Figure 11 shows the simulation results. Our proposed method and conventional photon-counting integral imaging are used with the same number of extracted photons, 800. Notice that our proposed method enhances the visual quality of the 3D image compared with the conventional method even with only 800 extracted photons. To quantify the performance of our proposed method, the peak sidelobe ratio (PSR) is calculated for each object with various numbers of extracted photons, as shown in Fig. 12 and Table 1. The results of our proposed method have better PSR values than the conventional results for all objects. It is apparent that more photons are concentrated on the objects by our proposed method than by the conventional method; thus, background noise may be reduced. To prove the feasibility of our proposed method, we then implement the optical experiments.
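One common definition of the PSR, sketched here as an illustration (the exact formulation used in the tables may differ), measures how far the peak of a response stands above the sidelobe statistics:

```python
import numpy as np

def peak_sidelobe_ratio(response, exclude=2):
    """PSR of a 1-D response (e.g., reconstruction quality vs. depth):
    peak value minus the sidelobe mean, divided by the sidelobe
    standard deviation.  Samples within `exclude` of the peak are
    left out of the sidelobe region."""
    response = np.asarray(response, dtype=float)
    k = int(np.argmax(response))
    mask = np.ones(response.size, dtype=bool)
    mask[max(k - exclude, 0):k + exclude + 1] = False
    sidelobe = response[mask]
    return (response[k] - sidelobe.mean()) / sidelobe.std()

# A sharp peak over a fluctuating baseline yields a large PSR.
depth_response = np.tile([0.2, 0.4], 25)
depth_response[25] = 3.0
psr = peak_sidelobe_ratio(depth_response)
```

A higher PSR thus indicates that the true object depth stands out more sharply against background fluctuations, which is why the metric is suited to comparing the two reconstruction pipelines.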
C. Experimental Setup
Figure 13 illustrates the experimental setup and 3D scene. The camera array is the same as the one used in the simulation, except that the camera resolution is $4128({\text{H}}) \times 2752({\text{V}})$. Three different objects are used, located at different distances from the camera array. In the optical experiments, we set the background to black for the same reason as in the simulations. When recording elemental images, we use a fast shutter speed so that only a small number of photons is detected, creating photon-starved conditions. To reconstruct the 3D images under these conditions, 10,000, 50,000, and 100,000 photons are used in photon-counting integral imaging. A clip limit range of $0.01 \sim 0.15$, determined empirically, is used for CLAHE.
D. Experimental Results
Figure 14 shows 2D photon-counting images produced by the conventional method and by our proposed method with the same number of photons. As shown in Fig. 14, our proposed method obtains a higher photon density on the objects than the conventional method. This difference gives the image from our proposed method a higher contrast ratio, enabling better-quality 3D images.
Figure 15 shows the experimental results. Three-dimensional images are reconstructed with the same number of photons by our proposed method and by the conventional method. As shown in Fig. 15, the visual quality of the 3D images from our proposed method is better than that of the conventional method, which means that the photon density on the objects is higher because of CLAHE and histogram matching. To show the feasibility of our proposed method, we calculate the PSR over different reconstruction depths with different numbers of photons, as shown in Fig. 16 and Table 2. Remarkably, our proposed method provides better 3D images at all object depths than the conventional method. In addition, the object positions can be found more easily. Therefore, our proposed method can provide more accurate 3D information from a scene under photon-starved conditions.
4. CONCLUSION
In this paper, we have proposed enhanced 3D image visualization using CLAHE and histogram matching under photon-starved conditions. With conventional techniques, image visualization under these conditions may not be accurate. In contrast, our proposed method can visualize a 3D scene under these conditions with relatively few photons. Using PSR values as the performance metric, our method shows better results than the conventional method even when few photons are used. We believe that our proposed method may be utilized in many applications, such as medical imaging with low radiance, unmanned autonomous vehicles at night, and defense under inclement weather conditions. However, it has some drawbacks. First, we do not have a method to determine the optimal values of the clip limit and segment size. The clip limit and segment size trade off detailed information against contrast ratio: the lower the clip limit and segment size, the higher the contrast ratio but the lower the detail, and vice versa. Figure 17 shows 3D reconstructed images with different clip limits and segment sizes. Second, the processing speed is slower than that of the conventional method because more processing steps are involved. We will investigate solutions to these optimization and processing-speed problems in future work.
Funding
National Research Foundation of Korea (NRF-2020R1F1A1068637).
Disclosures
The authors declare no conflicts of interest.
Data availability
Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
REFERENCES
1. N. Farahani, A. Braun, D. Jutt, T. Huffman, N. Reder, Z. Liu, Y. Yagi, and L. Pantanowitz, “Three-dimensional imaging and scanning: current and future applications for pathology,” J. Pathol. Inform. 8, 36 (2017). [CrossRef]
2. V. K. Kukkala, J. Tunnell, S. Pasricha, and T. Bradley, “Advanced driver-assistance systems: a path toward autonomous vehicles,” IEEE Consum. Electron. Mag. 7(5), 18–25 (2018). [CrossRef]
3. A. Tosi and F. Zappa, “MiSPiA: microelectronic single-photon 3D imaging arrays for low-light high-speed safety and security applications,” Proc. SPIE 8899, 88990D (2013). [CrossRef]
4. B. Javidi, A. Carnicer, J. Arai, T. Fujii, H. Hua, H. Liao, M. Martínez-Corral, F. Pla, A. Stern, L. Waller, Q.-H. Wang, G. Wetzstein, M. Yamaguchi, and H. Yamamoto, “Roadmap on 3D integral imaging: sensing, processing, and display,” Opt. Express 28, 32266–32293 (2020). [CrossRef]
5. M. Martinez-Corral and B. Javidi, “Fundamentals of 3D imaging and displays: a tutorial on integral imaging, light-field, and plenoptic systems,” Adv. Opt. Photon. 10, 512–566 (2018). [CrossRef]
6. A. Carnicer and B. Javidi, “Polarimetric 3D integral imaging in photon-starved conditions,” Opt. Express 23, 6408–6417 (2015). [CrossRef]
7. M. Cho, A. Mahalanobis, and B. Javidi, “3D passive photon counting automatic target recognition using advanced correlation filters,” Opt. Lett. 36, 861–863 (2011). [CrossRef]
8. D. Aloni, A. Stern, and B. Javidi, “Three-dimensional photon counting integral imaging reconstruction using penalized maximum likelihood expectation maximization,” Opt. Express 19, 19681–19687 (2011). [CrossRef]
9. M. Cho and B. Javidi, “Three-dimensional photon counting integral imaging using moving array lens technique,” Opt. Lett. 37, 1487–1489 (2012). [CrossRef]
10. M. Cho, “Three-dimensional color photon counting microscopy using Bayesian estimation with adaptive priori information,” Chin. Opt. Lett. 13, 070301 (2015). [CrossRef]
11. J. Jung, M. Cho, D. K. Dey, and B. Javidi, “Three-dimensional photon counting integral imaging using Bayesian estimation,” Opt. Lett. 35, 1825–1827 (2010). [CrossRef]
12. B. Tavakoli, B. Javidi, and E. Watson, “Three dimensional visualization by photon counting computational integral imaging,” Opt. Express 16, 4426–4436 (2008). [CrossRef]
13. I. Moon, I. Muniraj, and B. Javidi, “3D visualization at low light levels using multispectral photon counting integral imaging,” J. Disp. Technol. 9, 51–55 (2013). [CrossRef]
14. X. Xiao, B. Javidi, M. Martinez-Corral, and A. Stern, “Advances in three-dimensional integral imaging: sensing, display, and applications,” Appl. Opt. 52, 546–560 (2013). [CrossRef]
15. A. Stern and B. Javidi, “Three-dimensional image sensing, visualization, and processing using integral imaging,” Proc. IEEE 94, 591–607 (2006). [CrossRef]
16. J.-S. Jang and B. Javidi, “Three-dimensional synthetic aperture integral imaging,” Opt. Lett. 27, 1144–1146 (2002). [CrossRef]
17. B. Lee, S. Jung, and J.-H. Park, “Viewing-angle-enhanced integral imaging by lens switching,” Opt. Lett. 27, 818–820 (2002). [CrossRef]
18. Y. S. Hwang, S.-H. Hong, and B. Javidi, “Free view 3-D visualization of occluded objects by using computational synthetic aperture integral imaging,” J. Disp. Technol. 3, 64–70 (2007). [CrossRef]
19. B. Tavakoli, M. Daneshpanah, B. Javidi, and E. Watson, “Performance of 3D integral imaging with position uncertainty,” Opt. Express 15, 11889–11902 (2007). [CrossRef]
20. B. Cho, P. Kopycki, M. Martinez-Corral, and M. Cho, “Computational volumetric reconstruction of integral imaging with improved depth resolution considering continuously non-uniform shifting pixels,” Opt. Laser Eng. 111, 114–121 (2018). [CrossRef]
21. J. W. Goodman, Statistical Optics (Wiley, 2000).
22. S. M. Pizer, “Adaptive histogram equalization and its variations,” Comput. Vis. Graph. Image Process. 39, 355–368 (1987). [CrossRef]
23. R. C. Gonzalez and R. E. Woods, Digital Image Processing, 3rd ed. (Pearson Education, 2002).
24. Z. Xu, X. Liu, and X. Chen, “Fog removal from video sequences using contrast limited adaptive histogram equalization,” in International Conference on Computational Intelligence and Software Engineering (IEEE, 2009).