
Enhancement of three-dimensional image visualization under photon-starved conditions


Abstract

In this paper, we propose enhancing three-dimensional (3D) image visualization under photon-starved conditions with preprocessing techniques, namely contrast-limited adaptive histogram equalization (CLAHE) and histogram matching. Among conventional imaging techniques, photon-counting integral imaging can be utilized for 3D visualization. However, due to the lack of photons, it is challenging to enhance the visual quality of 3D images under severely photon-starved conditions. To improve the visual quality and accuracy of 3D images under these conditions, we apply CLAHE and histogram matching to the scene before photon-counting integral imaging. To prove the feasibility of our proposed method, we implement optical experiments and evaluate the results with the peak sidelobe ratio as a performance metric.

© 2022 Optica Publishing Group

1. INTRODUCTION

Three-dimensional (3D) imaging under photon-starved conditions has been challenging for many applications such as medical imaging, unmanned autonomous vehicles, and defense [1–3]. Why is it challenging to obtain 3D images under these conditions? The reason is that 3D image information such as intensity, contrast, and depth is required, but its acquisition is not easy [4–13]. To overcome this problem, photon-counting integral imaging has been introduced [4–11].

Integral imaging is a 3D imaging technique that can provide 3D images using multiple 2D images with different perspectives [4–11,14–17]. These 2D images are referred to as elemental images. Elemental images can be recorded through a lenslet or camera array from a 3D scene [14,16]. In lenslet-array-based integral imaging, since the resolution of elemental images depends on the number of lenslets and the resolution of the image sensor, elemental images may have low resolution. Thus, in this paper, synthetic aperture integral imaging (SAII) [16,18,19] is used to obtain high-resolution elemental images because it uses a camera array. Then, 3D images with enhanced depth resolution can be obtained using nonuniform volumetric computational reconstruction (VCR) [20]. To obtain elemental images under photon-starved conditions, computational photon-counting imaging [9,11,12] is used. It can be modeled by a Poisson distribution since photons occur rarely in unit time and space [21]. For 3D visualization under these conditions, 3D photon-counting integral imaging with statistical estimation, such as maximum likelihood estimation (MLE) or a Bayesian approach, can be utilized. In MLE, we assume that each pixel of a scene has uniform probability; that is, the prior information of the scene follows a uniform distribution. In the Bayesian approach, on the other hand, the prior information of the scene is assumed to follow a Gamma distribution, since its support matches the nonnegative scene irradiance. However, these methods may not reconstruct 3D images under severely photon-starved conditions because each elemental image contains only a few photons, which may not be sufficient for visualization. Therefore, in this paper, we propose preprocessing techniques, contrast-limited adaptive histogram equalization (CLAHE) [22] and histogram matching [23], for enhancing the visual quality of 3D photon-counting images.

This paper is organized as follows. In Section 2, we present the basic concept of 3D photon-counting integral imaging and our proposed method. Then, in Section 3, we show simulation and experimental results supporting the feasibility of our proposed method, using the peak sidelobe ratio (PSR) as a performance metric. Finally, we conclude with a summary in Section 4.

2. ENHANCEMENT OF 3D IMAGE VISUALIZATION UNDER PHOTON-STARVED CONDITIONS

In this section, we describe the basic concept of computational photon-counting imaging, 3D photon-counting integral imaging, and our proposed method.

A. Computational Photon-Counting Imaging

Under photon-starved conditions, a conventional imaging system may not record an image due to the lack of photons. To detect photons from the scene, photon-counting imaging [12] may be required. Notably, a computational photon-counting imaging model can be utilized since it can easily control the number of photons extracted from a scene through statistical processes. We can assume that photon-counting imaging follows a Poisson distribution because photons occur rarely in unit time and space [21]. Photon-counting imaging can be written as follows [10,12]:

$${\lambda _E}(x) = \frac{{{I_E}(x)}}{{\sum\nolimits_{x = 1}^{{N_x}} {{I_E}(x)}}},$$
$${C_E}(x)|{N_p}{\lambda _E}(x) \sim \text{Poisson}[{{N_p}{\lambda _E}(x)} ],$$
where ${\lambda _E}(x)$ is the normalized irradiance of the image at $x$, ${N_x}$ is the total number of pixels in the image, ${N_p}$ is the expected number of photons from the image, ${I_E}(x)$ is the intensity of the image at $x$, and ${C_E}(x)$ is the number of photons at $x$, respectively. Since ${\lambda _E}(x)$ has unit energy, the total number of extracted photons in ${C_E}(x)$ is ${N_p}$.
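As a concrete illustration of Eqs. (1) and (2), a minimal NumPy sketch is given below; the function name `photon_count` and its interface are ours, not from the paper.

```python
import numpy as np

def photon_count(image, n_photons, rng=None):
    """Simulate computational photon counting per Eqs. (1)-(2).

    image:     2D intensity array I_E(x)
    n_photons: expected total photon count N_p
    Returns the photon-count image C_E(x) ~ Poisson(N_p * lambda_E(x)).
    """
    rng = np.random.default_rng() if rng is None else rng
    lam = image.astype(float) / image.sum()  # normalized irradiance (unit energy)
    return rng.poisson(n_photons * lam)
```

For the conditions of Fig. 1(b), a $4128 \times 2752$ image with `n_photons=800_000` yields the quoted average of about 0.0704 photons per pixel.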

In photon-counting imaging, ${\lambda _E}(x)$ is estimated for visualization by statistical estimation such as maximum likelihood estimation (MLE) [12] or a Bayesian approach [11]. Figure 1 shows images recorded under photon-starved conditions by conventional imaging and photon-counting imaging. As shown in Fig. 1(a), it is difficult to recognize the objects in the conventional image. In contrast, the objects can be observed in the photon-counting image shown in Fig. 1(b), where the image size is $4128({\text{H}}) \times 2752({\text{V}})$ and the expected number of photons is ${N_p} = 800{,}000$ (0.0704 photons/pixel). Even though the average number of photons per pixel is only 0.0704, the image can be visualized by photon-counting imaging. However, its visual quality is still insufficient for observation. Therefore, integral imaging is applied to photon-counting imaging to enhance the visual quality and obtain 3D information, since it can increase the number of photons by combining different perspectives.

Fig. 1. Captured images under photon-starved conditions by (a) conventional imaging and (b) photon-counting imaging with ${N_p} = 800{,}000$.

B. 3D Photon-Counting Integral Imaging

To reconstruct 3D images, integral imaging may be utilized. Elemental images can be acquired in two different ways: direct pickup and synthetic aperture integral imaging (SAII). Direct pickup obtains elemental images through a lenslet array in a single shot and can therefore be used for dynamic 3D scenes. However, since the resolution of each elemental image is limited by the number of lenslets, the 3D resolution of the reconstructed scene is low. In contrast, SAII can provide high-resolution elemental images by using a camera array. Therefore, in this paper, we use SAII for recording elemental images.

Three-dimensional images can be visualized by volumetric computational reconstruction (VCR) with elemental images. In VCR, the number of shifting pixels applied to each elemental image at a given reconstruction depth is the most important factor, and two variants are defined: uniform VCR (i.e., the conventional method) and nonuniform VCR. In uniform VCR, the shifting pixels per elemental image are approximated by a fixed integer. In nonuniform VCR, in contrast, the shifting pixels are rounded separately for each elemental image. The difference between them can be described as follows [20]:

$$\Delta {x_s} = \frac{{{N_x} \times {p_x} \times f}}{{{c_x} \times {z_d}}},\quad \Delta {y_s} = \frac{{{N_y} \times {p_y} \times f}}{{{c_y} \times {z_d}}},$$
$$\Delta x_{k}^{u}=k \times \lfloor \Delta {x_s} \rceil,\quad \text{for}\;\; k=0,1,2,\ldots ,K-1,$$
$$\Delta y_{l}^{u}=l\times \lfloor \Delta {y_s} \rceil,\quad \text{for}\;\; l=0,1,2,\ldots ,L-1,$$
$$\Delta x_{k}^{n}= \lfloor k\times \Delta {x_s} \rceil,\quad \text{for}\;\; k=0,1,2,\ldots ,K-1,$$
$$\Delta y_{l}^{n}= \lfloor l\times \Delta {y_s} \rceil,\quad \text{for}\;\; l=0,1,2,\ldots ,L-1,$$
$$O{{(x,y,{{z}_{d}})}_{u}}=\sum\limits_{k=0}^{K-1}\sum\limits_{l=0}^{L-1}\unicode{x1D7D9}\big( x+\Delta x_{k}^{u},y+\Delta y_{l}^{u} \big),$$
$$O{{(x,y,{{z}_{d}})}_{n}}=\sum\limits_{k=0}^{K-1} \sum\limits_{l=0}^{L-1} \unicode{x1D7D9}\big( x+\Delta x_{k}^{n},y+\Delta y_{l}^{n} \big),$$
$$I{(x,y,{z_d})_u} = \frac{1}{{O{{(x,y,{z_d})}_u}}}\sum\limits_{k = 0}^{K - 1} \sum\limits_{l = 0}^{L - 1} {E_{\textit{kl}}}\big({x + \Delta x_k^u,y + \Delta y_l^u} \big),$$
$$I{(x,y,{z_d})_n} = \frac{1}{{O{{(x,y,{z_d})}_n}}}\sum\limits_{k = 0}^{K - 1} \sum\limits_{l = 0}^{L - 1} {E_{\textit{kl}}}\big(x + \Delta x_k^n,y + \Delta y_l^n\big),$$
where $\Delta {x_s},\Delta {y_s}$ are the actual (real-valued) shifting pixels, ${N_x},{N_y}$ are the numbers of pixels of each elemental image, ${p_x},{p_y}$ are the pitches between elemental images, $f$ is the focal length of the camera lens, ${c_x},{c_y}$ are the sensor sizes, ${z_d}$ is the reconstruction depth, $\Delta x_k^u,\Delta y_l^u$ are the shifting pixels of the $k$th-column and $l$th-row elemental image for uniform VCR, $\Delta x_k^n,\Delta y_l^n$ are the shifting pixels of the $k$th-column and $l$th-row elemental image for nonuniform VCR, $K,L$ are the numbers of elemental images, $\lfloor \cdot \rceil$ is the rounding operator, $\unicode{x1D7D9}$ is the all-ones matrix, ${E_{\textit{kl}}}$ is the $k$th-column and $l$th-row elemental image, $O{(x,y,{z_d})_u},O{(x,y,{z_d})_n}$ are the overlapping matrices for the uniform and nonuniform VCRs, and $I{(x,y,{z_d})_u},I{(x,y,{z_d})_n}$ are the 3D images reconstructed by the uniform and nonuniform VCRs, respectively. Since $\Delta x_k^u,\Delta y_l^u$ have larger quantization errors than $\Delta x_k^n,\Delta y_l^n$, the depth resolution of the 3D image reconstructed by uniform VCR is worse than that of nonuniform VCR. For example, if $\Delta {x_s} = \Delta {y_s} = 3.4$ and $k = l = 2$, the actual shifting pixels are 6.8. In uniform VCR, $\Delta x_{2}^{u}=\Delta y_{2}^{u}=2\times \lfloor 3.4 \rceil =2\times 3=6$, so the quantization error is 0.8. On the other hand, in nonuniform VCR, $\Delta x_{2}^{n}=\Delta y_{2}^{n}= \lfloor 2\times 3.4 \rceil = \lfloor 6.8 \rceil = 7$, so the quantization error is 0.2. Therefore, a more accurate 3D image can be obtained by nonuniform VCR.
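The rounding difference between the two reconstructions is small enough to express in a few lines; the sketch below, with the hypothetical helper name `shifts`, reproduces the worked example above.

```python
import numpy as np

def shifts(dx_s, K, mode="nonuniform"):
    """Integer shifting pixels for K elemental images, given the real-valued
    per-image shift dx_s of Eq. (3). Follows Eqs. (4)-(7); note np.rint
    rounds halves to even, a negligible caveat for this sketch."""
    k = np.arange(K)
    if mode == "uniform":
        return k * int(np.rint(dx_s))       # round once, then scale (Eqs. 4-5)
    return np.rint(k * dx_s).astype(int)    # scale first, then round (Eqs. 6-7)

# The worked example from the text: dx_s = 3.4, k = 2
print(shifts(3.4, 3, "uniform")[2])     # 6 -> quantization error 0.8
print(shifts(3.4, 3, "nonuniform")[2])  # 7 -> quantization error 0.2
```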

For visualization of a 3D scene under photon-starved conditions, 3D photon-counting integral imaging can be applied. Since multiple elemental images are available, a likelihood function of the scene can be constructed from them. Therefore, using maximum likelihood estimation (MLE), the 3D scene under these conditions may be estimated as follows [10,11]:

$$L\big({{N_p}{\lambda _{\textit{kl}}}|{C_{\textit{kl}}}} \big) = \prod\limits_{k = 0}^{K - 1} \prod\limits_{l = 0}^{L - 1} \frac{{{e^{- {N_p}{\lambda _{\textit{kl}}}}}{{({{N_p}{\lambda _{\textit{kl}}}} )}^{{C_{\textit{kl}}}}}}}{{{C_{\textit{kl}}}!}},$$
$$l\big({{N_p}{\lambda _{\textit{kl}}}|{C_{\textit{kl}}}} \big) \propto \sum\limits_{k = 0}^{K - 1} \sum\limits_{l = 0}^{L - 1} \big\{{{C_{\textit{kl}}}\log [{{N_p}{\lambda _{\textit{kl}}}} ]} \big\} - \sum\limits_{k = 0}^{K - 1} \sum\limits_{l = 0}^{L - 1} {N_p}{\lambda _{\textit{kl}}},$$
$$\frac{{\partial l\big[{{N_p}{\lambda _{\textit{kl}}}|{C_{\textit{kl}}}} \big]}}{{\partial {\lambda _{\textit{kl}}}}} = \frac{{{C_{\textit{kl}}}}}{{{\lambda _{\textit{kl}}}}} - {N_p} = 0\quad \therefore {\hat \lambda _{\textit{kl}}} = \frac{{{C_{\textit{kl}}}}}{{{N_p}}},$$
where $L(\cdot),l(\cdot)$ are the likelihood and log-likelihood functions, ${\lambda _{\textit{kl}}}$ is the normalized irradiance of the scene, ${C_{\textit{kl}}}$ is the photon-counting image, and ${\hat \lambda _{\textit{kl}}}$ is the image of the scene estimated by MLE, respectively. Now, using nonuniform VCR with the estimated images, a 3D image can be reconstructed by the following [7,10,12]:
$$\hat I(x,y,{z_d}) = \frac{1}{{O{{(x,y,{z_d})}_n}}}\sum\limits_{k = 0}^{K - 1} \sum\limits_{l = 0}^{L - 1} {\hat \lambda _{\textit{kl}}}({x + \Delta x_k^n,y + \Delta y_l^n} ).$$
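A minimal sketch of the shift-and-average reconstruction of Eq. (15) follows, assuming the MLE estimates ${\hat \lambda _{kl}}$ are stacked in a `(K, L, H, W)` array and the nonuniform shifts of Eqs. (6) and (7) have been precomputed (e.g., with `shifts` above); the helper name `reconstruct_3d` is ours.

```python
import numpy as np

def reconstruct_3d(estimates, dx, dy):
    """Nonuniform VCR (Eq. 15): shift each estimated elemental image by
    (dx[k], dy[l]) pixels, accumulate, and normalize by the overlap count.

    estimates: (K, L, H, W) array of lambda_hat_kl
    dx, dy:    integer NumPy shift arrays of lengths K and L for depth z_d
    """
    K, L, H, W = estimates.shape
    acc = np.zeros((H + dy.max(), W + dx.max()))
    overlap = np.zeros_like(acc)                 # O(x, y, z_d)_n
    for k in range(K):
        for l in range(L):
            acc[dy[l]:dy[l] + H, dx[k]:dx[k] + W] += estimates[k, l]
            overlap[dy[l]:dy[l] + H, dx[k]:dx[k] + W] += 1.0
    return acc / np.maximum(overlap, 1.0)        # avoid division by zero
```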

Figure 2 shows the 3D images reconstructed by conventional integral imaging and photon-counting integral imaging, respectively. It is noticeable that photon-counting integral imaging can visualize the scene under photon-starved conditions, unlike conventional integral imaging. To reconstruct more accurate 3D images, a Bayesian approach such as maximum a posteriori (MAP) estimation can be utilized because, unlike MLE, it uses a specific statistical distribution as prior information. This method is written as follows [11]:

$$\pi ({{N_p}{\lambda _{\textit{kl}}}} ) = \frac{{{\beta ^\alpha}}}{{\Gamma (\alpha)}}{\big({{N_p}{\lambda _{\textit{kl}}}} \big)^{\alpha - 1}}{e^{- \beta {N_p}{\lambda _{\textit{kl}}}}},\quad {N_p}{\lambda _{\textit{kl}}} \gt 0,$$
$$\mu = \frac{\alpha}{\beta},\quad {\sigma ^2} = \frac{\alpha}{{{\beta ^2}}}\quad \to \quad \alpha = \frac{{{\mu ^2}}}{{{\sigma ^2}}},\quad \beta = \frac{\mu}{{{\sigma ^2}}},$$
$$\pi \big({{N_p}{\lambda _{\textit{kl}}}|{C_{\textit{kl}}}} \big) \sim \textit{Gamma}\big({{C_{\textit{kl}}} + \alpha ,{N_p}({1 + \beta} )} ),$$
$${\tilde \lambda _{\textit{kl}}} = \frac{{{C_{\textit{kl}}} + \alpha}}{{{N_p}({1 + \beta})}},\quad {C_{\textit{kl}}} \gt 0,$$
where $\pi ({N_p}{\lambda _{\textit{kl}}})$ is the Gamma prior distribution, chosen because its support matches the nonnegative scene irradiance and it is the conjugate prior of the Poisson distribution, $\alpha ,\beta$ are the parameters of the Gamma distribution, $\mu,{\sigma ^2}$ are the mean and variance of the scene, $\pi ({N_p}{\lambda _{\textit{kl}}}|{C_{\textit{kl}}})$ is the posterior distribution, and ${\tilde \lambda _{\textit{kl}}}$ is the image of the scene estimated by MAP, respectively. Now, using nonuniform VCR with the images estimated by MAP, a more accurate 3D image can be obtained, as shown in Fig. 3.
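Equation (19) reduces to a one-line update; a hedged sketch is shown below, where the scene mean `mu` and variance `sigma2` used for the prior must be estimated separately (e.g., from the photon-counting data, as in the adaptive-prior approach of Ref. [10]).

```python
import numpy as np

def map_estimate(counts, n_photons, mu, sigma2):
    """MAP estimate of Eqs. (16)-(19) under a Gamma prior whose parameters
    are derived from the scene mean mu and variance sigma2 (Eq. 17)."""
    alpha = mu**2 / sigma2      # shape parameter
    beta = mu / sigma2          # rate parameter
    return (counts + alpha) / (n_photons * (1.0 + beta))  # lambda_tilde_kl
```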
Fig. 2. 3D reconstructed image by (a) conventional integral imaging and (b) photon-counting integral imaging.

Fig. 3. 3D reconstructed images by (a) maximum likelihood estimation and (b) maximum a posteriori estimation.

However, this is still insufficient for visualizing 3D images under severely photon-starved conditions due to the lack of photons from the scene. Therefore, in this paper, we propose preprocessing with contrast-limited adaptive histogram equalization (CLAHE) and histogram matching for enhanced photon-counting integral imaging.

C. Our Proposed Method

CLAHE was developed to address the noise problem of adaptive histogram equalization (AHE). When conventional histogram equalization is applied to the original image in Fig. 4(a), it is difficult to enhance the visual quality of a small object with a low contrast ratio, as shown in Fig. 4(b). AHE limits histogram equalization to local areas, providing more contrast for small, low-contrast objects, as shown in Fig. 4(c). However, it also amplifies noise. Therefore, CLAHE is used to provide a better contrast ratio than histogram equalization while also suppressing the noise, as shown in Fig. 4(d).

Fig. 4. (a) Original image under photon-starved conditions. (b) Reconstructed image by conventional histogram equalization. (c) Reconstructed image by adaptive histogram equalization. (d) Reconstructed image by contrast-limited adaptive histogram equalization (CLAHE).

CLAHE involves several processing steps, as illustrated in Fig. 5. First, the image is segmented into $K \times L$ areas. Then, a histogram of each area is calculated, where the optimal clip limit is determined manually. Excess pixels are defined as the pixels above the clip limit. In CLAHE, the excess pixels are redistributed as follows [22,24]:

$${N_t} = \sum {P_e},\quad {N_d} = \frac{{{N_t}}}{{{N_L}}},$$
$${H_{\textit{dp}}} = \begin{cases}{N_{\textit{cl}}}, & {H_o} \gt {N_{\textit{cl}}},\\ {N_{\textit{cl}}}, & {H_o} + {N_d} \gt {N_{\textit{cl}}},\\ {H_o} + {N_d}, & {H_o} + {N_d} \le {N_{\textit{cl}}},\end{cases}$$
where ${P_e}$ is an excess pixel count, ${N_t}$ is the total number of excess pixels, ${N_L}$ is the number of intensity levels in the histogram, ${N_{\textit{cl}}}$ is the clip limit, ${H_{\textit{dp}}}$ is the number of pixels per intensity level after the excess pixels are distributed, ${H_o}$ is the number of pixels per intensity level in the original segmented area, and ${N_d}$ is the number of excess pixels distributed to each intensity level, respectively.
Fig. 5. Procedure of contrast-limited adaptive histogram equalization (CLAHE).

When excess pixels remain after the distribution process, the process is iterated until the number of remaining excess pixels is less than the number of intensity levels. Then, a probability density function (PDF) is constructed and its cumulative distribution function (CDF) is computed. Using the CDF, the pixel values of each segmented area are mapped. Finally, the segmented areas are stitched together by bilinear interpolation to generate the CLAHE image.
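The clip-and-redistribute step of Eqs. (20) and (21) can be sketched as follows for a single tile. This is a simplified version under the stated assumptions; full CLAHE additionally maps each tile through its CDF and blends tiles bilinearly, steps that library implementations such as OpenCV's `createCLAHE` handle.

```python
import numpy as np

def clip_and_redistribute(hist, n_cl):
    """Clip a tile histogram at n_cl and spread the excess pixels evenly
    over all intensity levels, iterating until fewer excess pixels remain
    than there are levels (Eqs. 20-21)."""
    h = np.minimum(hist.astype(float), n_cl)
    excess = np.maximum(hist - n_cl, 0).sum()          # N_t
    n_levels = h.size                                  # N_L
    while excess >= n_levels:
        n_d = excess // n_levels                       # N_d per level
        add = np.minimum(n_cl - h, n_d)                # respect the clip limit
        if add.sum() == 0:                             # no room left: stop
            break
        h += add
        excess -= add.sum()
    for i in np.flatnonzero(h < n_cl)[:int(excess)]:   # hand out the remainder
        h[i] += 1
    return h
```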

Histogram matching maps the histogram of an image onto that of a reference image. It can change the color tone or intensity contrast of the image toward a desired histogram as follows [23]:

$$P({r_i}) = \frac{{{n_i}}}{{KL}}\quad {\text{for}}\quad i = 0,1,2, \ldots ,{I_{{\max}}},$$
$${S_i} = T({{r_i}} ) = ({I_{{\max}}})\sum\limits_{j = 0}^i P({r_j}) = \frac{{{I_{{\max}}}}}{{KL}}\sum\limits_{j = 0}^i {n_j},$$
$$G({{z_q}} ) = ({I_{{\max}}})\sum\limits_{j = 0}^q P({r_j}),$$
$$G({{z_q}} ) = {S_i},$$
$${z_q} = {G^{- 1}}({{S_i}} ),$$
where $K,L$ are the numbers of pixels in the $x$ and $y$ directions, ${n_i}$ is the number of pixels with intensity ${r_i}$, $P({r_i})$ is the probability of occurrence of intensity level ${r_i}$ in the image, ${I_{\max}}$ is the maximum intensity level, ${S_i}$ is the intensity level obtained by scaling the CDF value of $P({r_i})$ for matching, $q$ is the intensity level of the target histogram, and $G({z_q})$ is the target CDF mapping for histogram matching, respectively. Figure 6 shows a histogram-matching example. The blue-sky image in Fig. 6(a) is matched with the histogram of the sunset image in Fig. 6(b); thus, the sky turns red, as shown in Fig. 6(c).
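Equations (22)–(26) amount to mapping each source intensity through the source CDF $T$ and then through the inverse reference CDF ${G^{-1}}$. A minimal sketch for integer-valued grayscale images is given below; the function name is ours, and scikit-image offers a library equivalent in `skimage.exposure.match_histograms`.

```python
import numpy as np

def match_histograms(source, reference, n_levels=256):
    """Histogram matching per Eqs. (22)-(26) for integer images in
    [0, n_levels): compute S_i = T(r_i), then z_q = G^{-1}(S_i)."""
    s_hist, _ = np.histogram(source, bins=n_levels, range=(0, n_levels))
    r_hist, _ = np.histogram(reference, bins=n_levels, range=(0, n_levels))
    s_cdf = np.cumsum(s_hist) / source.size            # T(r_i)
    r_cdf = np.cumsum(r_hist) / reference.size         # G(z_q)
    # nearest reference level whose CDF reaches the source CDF value
    mapping = np.searchsorted(r_cdf, s_cdf).clip(0, n_levels - 1)
    return mapping[source.astype(int)]
```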
Fig. 6. Histogram-matching example. (a) Original image. (b) Reference image that provides histogram information. (c) Histogram-matching result.

Fig. 7. Histogram comparison between (a) the histogram-matched image and (b) the original image.

Our proposed method starts by applying CLAHE and histogram matching to every elemental image. Histogram matching keeps the CLAHE images under conditions similar to the original image and gathers the object information into comparable intensity levels. Figure 7 compares the histograms of the histogram-matched result and the original image. The histogram of the original image in Fig. 7(b) has a continuous shape, so the objects and background cannot be separated by intensity level. In contrast, the histogram produced by our proposed method in Fig. 7(a) shows discrete level differences between background and objects, which provides more precise object information when photon-counting imaging is applied.

Figure 8 illustrates the procedure of our proposed method. After CLAHE and histogram matching, photon-counting imaging is applied to every elemental image. To emulate various severely photon-starved conditions, $0.03 \sim 0.8\%$ of the total number of pixels is used as the number of extracted photons. Then, the estimated images are calculated by statistical estimation. Finally, 3D images are reconstructed by nonuniform VCR, as sketched below. Figure 9 compares elemental images obtained by min-max normalization and by our method. The normalized image has a low contrast ratio, which makes it difficult to find accurate depths. In contrast, the image produced by our method has a high contrast ratio, which makes the depth information easier to find. To validate our proposed method, simulations are performed before the optical experiments.
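The sketch below chains the helpers above into the Fig. 8 pipeline, using OpenCV for the CLAHE step. Two caveats: OpenCV's `clipLimit` is a multiplier on the mean histogram bin count rather than the normalized clip limit quoted in this paper, so the value here is only a placeholder, and the choice of the original elemental image as the histogram-matching reference is our reading of Section 2.C.

```python
import cv2
import numpy as np

def preprocess_and_estimate(elemental, n_photons, clip_limit=2.0):
    """Proposed pipeline (Fig. 8): CLAHE -> histogram matching ->
    photon counting -> MLE (Eq. 14), applied per elemental image.

    elemental: iterable of 8-bit grayscale elemental images
    Returns an array of estimated images to feed into reconstruct_3d().
    """
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=(8, 8))
    estimates = []
    for img in elemental:
        enhanced = clahe.apply(img.astype(np.uint8))   # 1. CLAHE
        matched = match_histograms(enhanced, img)      # 2. match to original
        counts = photon_count(matched, n_photons)      # 3. photon counting
        estimates.append(counts / n_photons)           # 4. MLE, Eq. (14)
    return np.array(estimates)
```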

Fig. 8. Procedure of our proposed method.

Fig. 9. (a) Min-max normalized image and (b) proposed preprocessed image. Both images are focused on the left car at 375 mm.

Fig. 10. (a) Simulation setup. (b) 3D scene.

Fig. 11. 3D images reconstructed by (a) conventional method and (b) our proposed method. (c), (e), and (g) Enlarged images of (a). (d), (f), and (h) Enlarged images of (b), respectively.

Fig. 12. Peak sidelobe ratio (PSR) results for (a) house, (b) wolf, and (c) police car, where 800 extracted photons are used.

3. RESULTS FOR SIMULATION AND EXPERIMENTS

A. Simulation Setup

Figure 10 illustrates the simulation setup and 3D scene. A $9({\text{H}}) \times 9({\text{V}})$ camera array, with a focal length of $f = 50\,\,{\text{mm}}$, a pitch between cameras of $p = 2\,\,{\text{mm}}$, a sensor size of $36({\text{H}}) \times 24({\text{V}})\,{\text{mm}}$, and $1920({\text{H}}) \times 1080({\text{V}})$ pixels, is used for recording elemental images. In general, since the background is far from the image sensor, background photons may not reach it. Therefore, we use a black background to emulate this realistic situation. For the photon-starved conditions, the numbers of extracted photons are 800, 1,000, and 1,200, and the clip limit range is $0.001 \sim 0.015$.

Table 1. PSR Results of the Simulation

Fig. 13. (a) Experimental setup. (b) 3D scene.

Fig. 14. (a) Conventional 2D photon-counting image. (b) 2D photon-counting image with the proposed preprocessing. Both results use the same number of photons.

Fig. 15. 3D images at 375 mm depth with 100,000 photons by (a) conventional method and (b) our proposed method. (c), (d), and (e) Enlarged images of (a). (f), (g), and (h) Enlarged images of (b), respectively.

Fig. 16. PSR results for (a) left car, (b) right car, and (c) bunny, where 10,000 extracted photons are used.

B. Simulation Results

Figure 11 shows the simulation results. Our proposed method and conventional photon-counting integral imaging are used with the same number of extracted photons, 800. The results show that our proposed method enhances the visual quality of the 3D image compared with the conventional method even with only 800 extracted photons. To quantify the performance of our proposed method, the peak sidelobe ratio (PSR) is calculated for each object with various numbers of extracted photons, as shown in Fig. 12 and Table 1. The results of our proposed method have better PSR values than the conventional results for all objects. It is apparent that our proposed method concentrates more photons on the objects than the conventional method, so background noise may be reduced. Next, to prove the feasibility of our proposed method, we implement the optical experiments.
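The paper does not spell out its exact PSR formula; a common definition from the correlation-filter literature (cf. Ref. [7]) compares the peak of the reconstruction or correlation plane against the statistics of the surrounding sidelobe region, as in this assumed sketch.

```python
import numpy as np

def psr(plane, guard=5):
    """Peak sidelobe ratio (assumed definition): (peak - sidelobe mean) /
    sidelobe std, where the sidelobe excludes a guard window at the peak."""
    iy, ix = np.unravel_index(np.argmax(plane), plane.shape)
    mask = np.ones(plane.shape, dtype=bool)
    mask[max(iy - guard, 0):iy + guard + 1,
         max(ix - guard, 0):ix + guard + 1] = False   # mask out the peak region
    side = plane[mask]
    return (plane[iy, ix] - side.mean()) / side.std()
```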

C. Experimental Setup

Figure 13 illustrates the experimental setup and 3D scene. The camera array is the same as the one used in the simulation, except that the camera resolution is $4128({\text{H}}) \times 2752({\text{V}})$. Three different objects are located at different distances from the camera array. In the optical experiments, we set the background to black for the same reason as in the simulation. When recording elemental images, we use a fast shutter speed so that only a small number of photons is detected, creating photon-starved conditions. To reconstruct 3D images under these conditions, 10,000, 50,000, and 100,000 photons are used in photon-counting integral imaging. A clip limit range of $0.01 \sim 0.15$ is used for CLAHE, determined empirically.

D. Experimental Results

Figure 14 shows 2D photon-counting images produced by the conventional method and by our proposed method with the same number of photons. As shown in Fig. 14, our proposed method achieves a higher photon density on the object than the conventional method. This difference gives the image from our proposed method a higher contrast ratio, which yields better-quality 3D images.

Figure 15 shows the experimental results. Three-dimensional images are reconstructed with the same number of photons by our proposed method and the conventional method. As shown in Fig. 15, the visual quality of the 3D images from our proposed method is better than that of the conventional method, which means that the photon density on the objects is higher because of CLAHE and histogram matching. To show the feasibility of our proposed method, we calculate the PSR at different reconstruction depths with different numbers of photons, as shown in Fig. 16 and Table 2. It is remarkable that our proposed method provides better 3D images at all object depths than the conventional method. In addition, the object positions can be found more easily. Therefore, our proposed method can provide more accurate 3D information from a scene under photon-starved conditions.

Table 2. PSR Results of Optical Experiments

4. CONCLUSION

In this paper, we have proposed enhanced 3D image visualization using CLAHE and histogram matching under photon-starved conditions. With conventional techniques, image visualization under these conditions may not be accurate. In contrast, our proposed method may visualize a 3D scene under these conditions with relatively few photons. Considering PSR values as the performance metric, our method shows better results than the conventional method even when only a few photons are used. We believe that our proposed method may be utilized in many applications such as medical imaging with low radiance, unmanned autonomous vehicles at night, and defense under inclement weather conditions. However, it has some drawbacks. First, we do not have a method to determine the optimal clip limit and segment size. Clip limit and segment size trade detailed information against contrast ratio: the lower the clip limit and the smaller the segment size, the higher the contrast ratio and the lower the detail, and vice versa. Figure 17 shows 3D reconstructed images with different clip limits and segment sizes. Second, the processing speed is slower than that of the conventional method because of the additional processing steps. We will investigate solutions to these optimization and processing-speed problems in future work.

Fig. 17. 3D reconstructed images by (a) $4 \times 4$ segmented image, (b) $16 \times 16$ segmented image, (c) 0.01 clip limit, and (d) 0.15 clip limit, respectively. (a) and (b) have the same clip limit 0.06. (c) and (d) have the same segmented size $16 \times 16$.

Funding

National Research Foundation of Korea (NRF-2020R1F1A1068637).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

REFERENCES

1. N. Farahani, A. Braun, D. Jutt, T. Huffman, N. Reder, Z. Liu, Y. Yagi, and L. Pantanowitz, “Three-dimensional imaging and scanning: current and future applications for pathology,” J. Pathol. Inform. 8, 36 (2017).

2. V. K. Kukkala, J. Tunnell, S. Pasricha, and T. Bradley, “Advanced driver-assistance systems: a path toward autonomous vehicles,” IEEE Consum. Electron. Mag. 7(5), 18–25 (2018).

3. A. Tosi and F. Zappa, “MiSPiA: microelectronic single-photon 3D imaging arrays for low-light high-speed safety and security applications,” Proc. SPIE 8899, 88990D (2013).

4. B. Javidi, A. Carnicer, J. Arai, T. Fujii, H. Hua, H. Liao, M. Martínez-Corral, F. Pla, A. Stern, L. Waller, Q.-H. Wang, G. Wetzstein, M. Yamaguchi, and H. Yamamoto, “Roadmap on 3D integral imaging: sensing, processing, and display,” Opt. Express 28, 32266–32293 (2020).

5. M. Martinez-Corral and B. Javidi, “Fundamentals of 3D imaging and displays: a tutorial on integral imaging, light-field, and plenoptic systems,” Adv. Opt. Photon. 10, 512–566 (2018).

6. A. Carnicer and B. Javidi, “Polarimetric 3D integral imaging in photon-starved conditions,” Opt. Express 23, 6408–6417 (2015).

7. M. Cho, A. Mahalanobis, and B. Javidi, “3D passive photon counting automatic target recognition using advanced correlation filters,” Opt. Lett. 36, 861–863 (2011).

8. D. Aloni, A. Stern, and B. Javidi, “Three-dimensional photon counting integral imaging reconstruction using penalized maximum likelihood expectation maximization,” Opt. Express 19, 19681–19687 (2011).

9. M. Cho and B. Javidi, “Three-dimensional photon counting integral imaging using moving array lens technique,” Opt. Lett. 37, 1487–1489 (2012).

10. M. Cho, “Three-dimensional color photon counting microscopy using Bayesian estimation with adaptive priori information,” Chin. Opt. Lett. 13, 070301 (2015).

11. J. Jung, M. Cho, D. K. Dey, and B. Javidi, “Three-dimensional photon counting integral imaging using Bayesian estimation,” Opt. Lett. 35, 1825–1827 (2010).

12. B. Tavakoli, B. Javidi, and E. Watson, “Three dimensional visualization by photon counting computational integral imaging,” Opt. Express 16, 4426–4436 (2008).

13. I. Moon, I. Muniraj, and B. Javidi, “3D visualization at low light levels using multispectral photon counting integral imaging,” J. Disp. Technol. 9, 51–55 (2013).

14. X. Xiao, B. Javidi, M. Martinez-Corral, and A. Stern, “Advances in three-dimensional integral imaging: sensing, display, and applications,” Appl. Opt. 52, 546–560 (2013).

15. A. Stern and B. Javidi, “Three-dimensional image sensing, visualization, and processing using integral imaging,” Proc. IEEE 94, 591–607 (2006).

16. J.-S. Jang and B. Javidi, “Three-dimensional synthetic aperture integral imaging,” Opt. Lett. 27, 1144–1146 (2002).

17. B. Lee, S. Jung, and J.-H. Park, “Viewing-angle-enhanced integral imaging by lens switching,” Opt. Lett. 27, 818–820 (2002).

18. Y. S. Hwang, S.-H. Hong, and B. Javidi, “Free view 3-D visualization of occluded objects by using computational synthetic aperture integral imaging,” J. Disp. Technol. 3, 64–70 (2007).

19. B. Tavakoli, M. Daneshpanah, B. Javidi, and E. Watson, “Performance of 3D integral imaging with position uncertainty,” Opt. Express 15, 11889–11902 (2007).

20. B. Cho, P. Kopycki, M. Martinez-Corral, and M. Cho, “Computational volumetric reconstruction of integral imaging with improved depth resolution considering continuously non-uniform shifting pixels,” Opt. Laser Eng. 111, 114–121 (2018).

21. J. W. Goodman, Statistical Optics (Wiley, 2000).

22. S. M. Pizer, “Adaptive histogram equalization and its variations,” Comput. Vis. Graph. Image Process. 39, 355–368 (1987).

23. R. C. Gonzalez and R. E. Woods, Digital Image Processing, 3rd ed. (Pearson Education, 2002).

24. Z. Xu, X. Liu, and X. Chen, “Fog removal from video sequences using contrast limited adaptive histogram equalization,” in International Conference on Computational Intelligence and Software Engineering (IEEE, 2009).
