
Single image dehazing based on haze density estimation in different color spaces

Open Access

Abstract

In this paper, an efficient method for single image dehazing is proposed based on haze density estimation. We provide two forms of haze density estimation in different color spaces, which are called scene-based haze density estimation in HSV color space and pixel-based haze density estimation in RGB color space. The attenuation model of pixel-level transmission is established based on the two haze density estimations by an exponential function. Guided filtering is applied to smooth the transmission map and maintain the local edges. Global atmospheric light is obtained adaptively by smoothed transmission. A series of experiments on different types of hazy images are implemented, and the results reveal that the proposed method can obtain high-quality haze-free images along with abundant details, high color fidelity, and few halo artifacts.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Outdoor images often suffer from limited visibility and low contrast in hazy conditions, which seriously affects the effectiveness of surveillance systems [1]. Research on dehazing technology is therefore of pressing importance. Current image dehazing techniques can be grouped into conventional image enhancement methods and physics-based methods. Image enhancement methods [2,3], such as histogram equalization [4] and retinex [5], address contrast improvement without considering the degradation mechanism, so the recovery quality is generally limited. Improved results have been obtained by employing physics-based models, especially for single image haze removal. Researchers have also proposed many dehazing algorithms that use multiple images [6–9], which require additional imaging equipment. For example, polarization-based dehazing methods [7–9] use images taken with different degrees of polarization to obtain haze-free images effectively. However, it is difficult to obtain multiple images under different weather conditions, which limits the application of such algorithms.

Single image dehazing algorithms [10–24] have attracted extensive attention from researchers due to their simplicity and practicability. Many single image dehazing methods in the literature are built on the haze optical model proposed in [25,26], which is usually written as:

$$I(x) = \underbrace{{J(x)t(x)}}_{{\textrm{direct attenuation}}} + \underbrace{{A(1 - t(x))}}_{{\textrm{airlight}}}$$
where x denotes the pixel location in the image, J(x) denotes the scene radiance, I(x) denotes the observed hazy image, A denotes the global atmospheric light, and t(x) denotes the medium transmission. In this model, the light received by the imaging device comes from two parts: direct attenuation and airlight. The direct attenuation term describes the reflected light weakened by the atmosphere, which decreases the brightness and contrast; the airlight term describes ambient light scattered by the atmospheric medium into background light, which blurs the imaging result. As Eq. (1) shows, recovering a haze-free image from a single hazy image is an underconstrained problem.

Previous studies have proposed many single image dehazing methods. For example, He et al. [23] proposed the classic dark channel prior (DCP), which is simple and effective but may fail for scene regions similar to the atmospheric light, such as gray and white objects. Meng et al. [17] provided a transmission optimization algorithm by exploring boundary constraints and contextual regularization. Zhu et al. [18] proposed a simple but powerful color attenuation prior (CAP) for haze removal. Berman et al. [21] proposed an algorithm based on a new nonlocal prior, in which the colors of a haze-free image are well approximated by a few hundred distinct colors that form tight clusters in RGB space. Shin et al. [24] presented an optimization-based dehazing algorithm that combines the radiance and reflectance components of an image.

More recently, studies have focused on learning-based methods that can significantly improve the quality of image dehazing and have achieved promising performance in various visual tasks [27–33]. DehazeNet [27] uses a convolutional neural network (CNN) to estimate the transmission map of a hazy image, which is subsequently used to estimate the atmospheric light. Yang et al. [30] proposed a deep learning approach for single image dehazing that learns dark channel and transmission priors. Dong et al. [33] proposed a multiscale boosted dehazing network (MSBDN) with dense feature fusion based on the U-Net architecture.

However, existing dehazing methods still suffer from edge, texture, and color distortion, and they often introduce halo and gradient reversal artifacts. In this paper, an efficient image dehazing algorithm is proposed based on haze density estimation in different color spaces.

The remainder of this paper is arranged as follows. In Section 2, the proposed algorithm is introduced. In Section 3, we present the experimental results and related discussions. Finally, we conclude this paper in Section 4.

2. Proposed algorithm

2.1 Scene-based haze density estimation

In the HSV color space, the saturation S relates to the purity of a certain hue. Pure spectral color is completely saturated, and the saturation gradually decreases with the addition of white light. Therefore, with the addition of haze, the saturation of the image decreases. According to Eq. (2), we obtain the normalized component of saturation ${I_S}$.

$${I_S} = 1 - \frac{{\min ({I_r},{I_g},{I_b})}}{{\max ({I_r},{I_g},{I_b})}}$$
where ${I_r},{I_g},{I_b}$ are the three components in the RGB color space. We convert ${I_S}$ to a grayscale image in the range [0, 255] and compute its histogram to obtain the number of pixels ${n_i}$ at each gray level i. Outdoor images usually contain some areas with low saturation (gray objects, gray sky, very distant views, etc.). To eliminate the interference of these areas with the haze estimation, we delete these pixels, i.e., the first peak of the histogram, and then calculate the average over the remaining gray levels from k1 to 255 as:
$${I_{ave}} = \frac{{\sum\limits_{\textrm{i} = {k_1}}^{255} {(i\ast {n_i})} }}{{255\ast \sum\limits_{\textrm{i} = {k_1}}^{255} {{n_i}} }}$$

For a saturation image, when the scene-based haze density is higher, the average of the saturation image ${I_{ave}}$ should be smaller. We adopt the decreasing function of Eq. (4) to adaptively obtain the representative quantity of scene-based haze density. It shows that a scene with higher haze density has a larger ${\xi _s}$.

$${\xi _s} = -\log ({I_{ave}})$$
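To make the scene-based estimation concrete, the following sketch computes ${\xi _s}$ from Eqs. (2)–(4) in Python/NumPy. It assumes a float RGB image in [0, 1]; since the text does not give an explicit rule for k1, locating it at the first local minimum after the first histogram peak is an assumption, as are the variable names.

```python
import numpy as np

def scene_haze_density(img, eps=1e-6):
    """Scene-based haze density xi_s from an RGB image in [0, 1] (Eqs. (2)-(4))."""
    # Eq. (2): saturation component from the RGB channels.
    I_s = 1.0 - img.min(axis=2) / (img.max(axis=2) + eps)

    # Histogram of the saturation mapped to gray levels [0, 255].
    gray = np.round(I_s * 255).astype(np.uint8)
    hist = np.bincount(gray.ravel(), minlength=256)

    # Remove the first peak (low-saturation regions such as sky or gray objects):
    # k1 is taken here as the first local minimum after that peak (assumption).
    k1 = int(np.argmax(hist[:128]))
    while k1 < 255 and hist[k1 + 1] <= hist[k1]:
        k1 += 1

    # Eq. (3): normalized mean over the remaining gray levels.
    levels = np.arange(k1, 256)
    I_ave = np.sum(levels * hist[k1:]) / (255.0 * np.sum(hist[k1:]) + eps)

    # Eq. (4), clipped to [1, 3] as discussed below.
    return float(np.clip(-np.log(I_ave + eps), 1.0, 3.0))
```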

Here, we select three hazy images (Fig. 1) for experiments. The first column shows the hazy images, the second column shows the saturation images, and the third and fourth columns show the statistical histograms. Figure 1(a) is an image with low-saturation sky areas. Figure 1(b) has no sky areas, but the saturation is low in the distant view. Figure 1(c) is an image whose sky area is blue and that contains non-haze areas with low saturation. All areas with low saturation are marked by red rectangles.

Fig. 1. Acquiring the adaptive ${\xi _s}$ of the image

The first peaks of the statistical histograms, marked by red ellipses, are removed, as illustrated in the fourth column of Fig. 1. The resulting k1 values of the three images are 15, 36, and 24, and the corresponding ${\xi _s}$ values are 1.0589, 1.2384, and 1.0095 according to Eq. (4). In addition, statistics over a number of randomly selected hazy images show that ${\xi _s}$ generally lies between 1 and 3. Therefore, we bound ${\xi _s}$: when ${\xi _s} < 1$, we set ${\xi _s} = 1$; when ${\xi _s} > 3$, we set ${\xi _s} = 3$, so that the final value satisfies ${\xi _s} \in [1,3]$.

2.2 Transmission physical interpretation

In the RGB color space, with $c \in \{ R,G,B\}$, we find the maximum value $\max ({I^c}(x))$ of each of the three channels in the hazy image and regard these maxima as having the highest haze density in their respective channels. We then compute the L2 norm of the distance between each pixel and $\max ({I^c}(x))$ over the three channels and normalize it. The distance function $d_{rgb}^{}(x)$ is:

$$d_{rgb}^{}(x) = {(\frac{{{{||{{I^c}(x) - \max ({I^c}(x))} ||}_2}}}{{\max ({{||{{I^c}(x) - \max ({I^c}(x))} ||}_2})}})_{c \in \{ R,G,B\} }}$$

Generally, a smaller value of $d_{rgb}^{}(x)$ indicates that pixel $x$ has a larger pixel value and a higher haze density. Therefore, we need a decreasing function with $d_{rgb}^{}(x)$ as the variable. We use ${\omega _{rgb}}(x)$ to represent the pixel-based haze density and select two simple, representative decreasing functions, shown in Eq. (6) and Eq. (7). Eq. (6) has a slope that changes from steep to gentle, whereas Eq. (7) has a slope that changes from gentle to steep.

$${\omega _{rgb}}(x) = {(1 - d_{rgb}^{}(x))^\gamma }$$
$${\omega _{rgb}}(x) = 1 - {(d_{rgb}^{}(x))^\gamma }$$

From Eq. (5), $d_{rgb}^{}(x) \in [0,1]$, and from Eq. (6) and Eq. (7), ${\omega _{rgb}}(x) \in [0,1]$. The relationship between ${\omega _{rgb}}(x)$ and $d_{rgb}^{}(x)$ is shown in Fig. 2, where the solid curves correspond to Eq. (6) and the dashed curves to Eq. (7). When $\gamma = 1$, Eq. (6) and Eq. (7) give the same ${\omega _{rgb}}(x)$, shown by the black straight line in Fig. 2. For the solid curves, the haze density decreases rapidly when $d_{rgb}^{}(x)$ is small and slowly when it is large; the dashed curves behave in the opposite way. Normally, the haze density increases faster when the distance is small and more slowly when the distance is large. Therefore, the relationship given by the solid curves (Eq. (6)) is more consistent with the actual attenuation of haze density, so we choose Eq. (6) as the final relationship between $d_{rgb}^{}(x)$ and ${\omega _{rgb}}(x)$.
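As a companion to the scene-based estimate, the pixel-based density of Eqs. (5) and (6) can be sketched as follows; `img` is again assumed to be a float RGB image in [0, 1], and `gamma` is the exponent related to ${\xi _s}$ in the discussion below.

```python
import numpy as np

def pixel_haze_density(img, gamma, eps=1e-6):
    """Pixel-based haze density w_rgb and normalized distance d_rgb (Eqs. (5)-(6))."""
    # Per-channel maxima of the hazy image, regarded as the haziest channel values.
    channel_max = img.reshape(-1, 3).max(axis=0)

    # Eq. (5): per-pixel L2 distance to the channel maxima, normalized to [0, 1].
    dist = np.sqrt(np.sum((img - channel_max) ** 2, axis=2))
    d_rgb = dist / (dist.max() + eps)

    # Eq. (6): decreasing mapping chosen as the pixel-based haze density.
    w_rgb = (1.0 - d_rgb) ** gamma
    return d_rgb, w_rgb
```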

Fig. 2. The relationship between $d_{rgb}^{}(x)$ and ${\omega _{rgb}}(x)$ under different $\gamma$.

As $\gamma$ increases in Eq. (6), the pixel-based haze density ${\omega _{rgb}}(x)$ decreases faster, indicating that the haze density of the imaging scene is also greater. Thus, $\gamma$ has a physical meaning similar to that of the scene-based haze density ${\xi _s}$ discussed in the previous section, and there is a positive correlation between ${\xi _s}$ and $\gamma$. We assume a simple linear relationship between them:

$$\gamma = k{\xi _s} + b$$

Since ${\xi _s} \in [1,3]$, if $\gamma = {\xi _s}$, then $\gamma = 1$ when ${\xi _s} = 1$ (the black straight line in Fig. 3), which does not conform to the actual attenuation of haze density. Therefore, over the range ${\xi _s} \in [1,3]$, we require $\gamma > {\xi _s}$ and that $\gamma ({\xi _s})$ be an increasing function.

Fig. 3. The relationship between ${\xi _s}$ and $\gamma$ under different k and b.

We consider two families of straight lines: one is a group of lines parallel to the black line (i.e., k = 1), with b set to 0.5 and 1.0, shown as the red and blue solid lines; the other is a group of lines passing through the origin, i.e., b = 0, satisfying $\gamma = k{\xi _s}$ with k > 1, with k set to 3.5/3, 4.0/3, and 4.5/3, shown as the dashed lines in Fig. 3. As the scene-based haze density ${\xi _s}$ increases, $\gamma$ increases gradually, so it is more reasonable to choose the lines $\gamma = k{\xi _s}$ with k > 1. The pixel-level transmission is then estimated according to the physical meanings of ${\xi _s}$ and ${\omega _{rgb}}(x)$: the larger ${\xi _s}$ and ${\omega _{rgb}}(x)$ are, the higher the haze density and the lower the transmission; the smaller they are, the lower the haze density and the higher the transmission. Thus, ${\xi _s}$ and ${\omega _{rgb}}(x)$ are positively correlated with the haze density but negatively correlated with the transmission. Therefore, an estimation model for the transmission is:

$$t(x) = {e^{ - {\xi _s}{\omega _{rgb}}(x)}}$$

Here, ${\omega _{rgb}}(x) \in [0,1]$ and ${\xi _s} \in [1,3]$, so $t(x) \in [\exp ( - 3),\exp (0)] = [0.0498,1]$, which is close to $[0,1]$ and meets the actual transmission requirement $t(x) \in [0,1]$. The influence of different ${\xi _s}$ on t(x) is shown in Fig. 4: when ${\xi _s} \in [1,3]$, the larger ${\xi _s}$ is, the faster t(x) attenuates, and the smaller ${\xi _s}$ is, the slower t(x) attenuates. For the same scene-based haze density ${\xi _s}$, the transmission t(x) decreases as the pixel-based haze density ${\omega _{rgb}}(x)$ increases.
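A quick numeric check of Eq. (9) reproduces the behaviour plotted in Fig. 4; the sampled values below are illustrative only.

```python
import numpy as np

w = np.linspace(0.0, 1.0, 5)      # samples of the pixel-based density w_rgb(x)
for xi_s in (1.0, 2.0, 3.0):      # bounds and midpoint of the scene-based density
    t = np.exp(-xi_s * w)         # Eq. (9)
    print(f"xi_s = {xi_s}: t =", np.round(t, 3))
# Larger xi_s makes t(x) fall off faster; t always stays within [exp(-3), 1].
```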

Fig. 4. The effect of different ${\xi _s}$ values on t(x).

We substitute Eq. (8) into Eq. (9) to obtain the expression of transmission estimation:

$$t(x) = {e^{ - {\xi _s}{{(1 - d_{rgb}^{}(x))}^{(k{\xi _s})}}}}$$

Here, ${d_{rgb}}(x) \in [0,1]$ and ${\xi _s} \in [1,3]$, and we choose ${\xi _s} = 1.5$ and ${\xi _s} = 2.0$, shown in Fig. 5 with the solid and dashed lines, respectively. We set k as 3.5/3, 4.0/3, and 4.5/3, corresponding to the red, blue, and green curves in Fig. 5, respectively. When ${d_{rgb}}(x) = 0$, $t(x) = {e^{ - {\xi _s}}}$. The larger the scene-based ${\xi _s}$ is, the smaller the transmission $t(x)$. For the same ${\xi _s}$, when ${d_{rgb}}(x)$ is smaller, that is, when the pixel-based haze density ${\omega _{rgb}}(x)$ is larger, the transmission $t(x)$ increases faster as the parameter k increases, which conforms to the behavior of actual hazy scenes. In this paper, we choose $k = 4/3$, which yields satisfactory dehazing results. In general, the transmission changes slowly within a local region, but the pixel-level transmission map $t(x)$ obtained above is strongly affected by the gray levels of the hazy image. Therefore, a smoothing operation is necessary. The guided filter [34] is selected for smoothing because of its high efficiency and good edge preservation.

Fig. 5. The effect of different ${\xi _s}$ and k on t(x).
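Putting the pieces together, the sketch below evaluates the pixel-level transmission of Eq. (10) and smooths it with a gray-guide guided filter [34] using r = 5 and a regularization of 0.01. Using the grayscale hazy image as the guide and the box-filter formulation below are implementation assumptions, not details taken from the text.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def box(img, r):
    # Mean filter over a (2r+1) x (2r+1) window.
    return uniform_filter(img, size=2 * r + 1, mode='nearest')

def guided_filter(guide, src, r=5, eps=0.01):
    # Standard gray-guide guided filter of He et al. [34].
    mean_I, mean_p = box(guide, r), box(src, r)
    cov_Ip = box(guide * src, r) - mean_I * mean_p
    var_I = box(guide * guide, r) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return box(a, r) * guide + box(b, r)

def estimate_transmission(d_rgb, xi_s, guide, k=4.0 / 3.0, r=5, eps=0.01):
    # Eq. (10): pixel-level transmission from the two haze density estimates.
    t = np.exp(-xi_s * (1.0 - d_rgb) ** (k * xi_s))
    # Edge-preserving smoothing of the transmission map.
    return guided_filter(guide, t, r, eps)
```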

2.3 Recovering the scene radiance

We first pick the darkest 0.05% of pixels in the transmission map. These pixels, which have the highest intensities in the hazy image ${I^c}(x)$, are selected as the calculation region $\Omega $ of the atmospheric light, and the total number of pixels in $\Omega $ is N. The averages of the three channels over the region $\Omega $ are calculated, and the maximum of these three channel averages is taken as the atmospheric light A.

$$A = \mathop {\max }\limits_{c \in \{ r,g,b\} } (\frac{{\sum\limits_{x \in \Omega } {{I^c}(x)} }}{N})$$

The regions $\Omega $ selected for atmospheric light estimation are shown as the red pixels marked by yellow circles in Fig. 6. The atmospheric light is selected in the brightest area of the sky in Fig. 6(a) and Fig. 6(b). Although there is a white car in Fig. 6(c), the atmospheric light is not disturbed by white objects. In Fig. 6(d), some pixels fall on the white geese, but most pixels are selected in the region with the highest haze density, and averaging the pixels in $\Omega $ effectively reduces the interference of white objects with the atmospheric light A. This simple method, built on a correct estimation of the transmission, is more robust than the other methods.

Fig. 6. The areas $\Omega $ of atmospheric light selection
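A possible implementation of the atmospheric light estimate of Eq. (11) is sketched below. For simplicity it takes the darkest 0.05% of transmission values directly as the region $\Omega $, which is a simplifying assumption relative to the selection described above.

```python
import numpy as np

def atmospheric_light(img, t, fraction=0.0005):
    """Eq. (11): max over channels of the mean hazy-image value on Omega."""
    n = max(1, int(round(fraction * t.size)))
    idx = np.argsort(t.ravel())[:n]      # indices of the lowest transmissions
    omega = img.reshape(-1, 3)[idx]      # pixels of the region Omega
    return float(omega.mean(axis=0).max())
```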

With $t(x)$ and A obtained, we can recover the scene radiance according to

$$J(x) = \frac{{I(x) - {A_{}}}}{{\max (t(x),0.1)}} + A$$

Since the scene radiance is usually not as bright as the atmospheric light, the image after haze removal looks dim [23]. We adopt the UM (unsharp masking) algorithm [22] to enhance the recovered image J(x).
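The recovery step of Eq. (12) and a generic unsharp masking pass can be sketched as follows; the Gaussian sigma and sharpening amount are illustrative assumptions and not necessarily the parameters of the UM algorithm in [22].

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def recover_radiance(img, t, A):
    # Eq. (12): invert the haze model with the transmission lower-bounded at 0.1.
    J = (img - A) / np.maximum(t, 0.1)[..., None] + A
    return np.clip(J, 0.0, 1.0)

def unsharp_mask(J, sigma=1.0, amount=0.5):
    # Brighten fine detail by adding back a fraction of the high-pass component.
    blurred = gaussian_filter(J, sigma=(sigma, sigma, 0))
    return np.clip(J + amount * (J - blurred), 0.0, 1.0)
```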

3. Experimental results

3.1 Transmission estimation

The pixel-based estimation of $t(x)$ proposed in this paper depends entirely on the gray values of the image pixels. We select the local area marked by the red box in the forest image of Fig. 1 for this experiment.

The transmission maps smoothed by the guided filter and the corresponding dehazed images are shown in the first and second rows of Fig. 7(a)–7(d), corresponding to guided filter radii of r = 3, r = 5, r = 9, and r = 15. The pixel-based transmission estimation and the corresponding dehazed image are shown in Fig. 7(e). After smoothing, the transmission maps are blurred to varying degrees, and the dehazed results are significantly better than the unfiltered one.

Fig. 7. The effect of different guided filter radius. (a) r = 3, (b) r = 5, (c) r = 9, (d) r = 15, (e) Pixel-level unfiltered, (f) one-dimensional signal of the 202nd row and the column 90 to 130.

To show the effect more intuitively, we select the one-dimensional signal of the 202nd row, columns 90 to 130, in the second rows of Fig. 7(a)–7(d), marked by the red line. These one-dimensional signals, which contain abrupt changes in scene depth, are displayed in Fig. 7(f) as the green, red, pink, and blue curves, respectively.

In Fig. 7(f), there are two abrupt edges, marked with black circles. Compared with the transmission maps smoothed with a larger guided filter radius, those smoothed with a smaller radius retain clearer edge details of the leaves. However, when r is too small (the green curve in Fig. 7(f)), the transmission map is not smoothed well in areas with similar depth of field, and image noise is obvious after dehazing. When r is too large (the blue curve in Fig. 7(f)), the transmission map is excessively blurred, halo effects become obvious in the dehazed image where the depth of field changes abruptly, and the pixel values of the dehazed image become smaller, that is, the dehazed image becomes darker. In summary, the guided filter radius is selected as r = 5, and the regularization parameter is set to 0.01 in this paper.

The dehazing results and the corresponding transmission maps of He’s method [23], CAP [18], Berman’s method [21], Shin’s method [24], and the proposed method are shown in Fig. 8(a)–8(e) (redder parts indicate high values, and bluer parts indicate low values).

Fig. 8. Comparison of the dehazing results and estimated transmission. (a) He’s [23]; (b) CAP [18]; (c) Berman’s [21]; (d) Shin’s [24]; (e) Ours.

The transmission results of He’s method [23] and CAP [18] have clear details at abrupt changes in depth of field, while other areas are excessively blurred; Berman’s [21] and Shin’s [24] transmission results retain the overall edge information, but all other details are blurred; the transmission map obtained with our guided filtering lies somewhere in between. The transmission results of He’s method [23], CAP [18], Shin’s method [24], and ours are similar in color, but Berman’s [21] transmission differs from the others and is estimated too small, especially in distant areas. Therefore, although the two rows of trees are well separated in Berman’s [21] result, the enlarged images (outlined by red rectangles in the bottom left corners of Fig. 8) show that the details of the distant trees cannot be distinguished, whereas our method restores these details very well. This shows that our method can recover haze-free images well from hazy images with different depths of field.

3.2 Dehazing result with different hazy images

In this section, we compare our results with those of four state-of-the-art visibility restoration algorithms: Berman’s [21], Shin’s [24], DehazeNet [27], and MSBDN [33]. The image with a monotonous color is shown in the first row of Fig. 9, and the image in the second row of Fig. 9 has rich colors and contains white regions that are hard to handle, because most existing dehazing methods are sensitive to white. The dehazed images of Berman’s [21] and DehazeNet [27] are too dark in local areas (such as the tree trunks in Fig. 9), resulting in poor image contrast. The details of Shin’s [24] and MSBDN [33] results are not clear and still retain some dense haze. The enlarged images shown in the upper left corner of Fig. 9 show that our method recovers more cloud edge details than the other methods. Although the proposed method depends on the gray values of the images, it is not affected by white objects (such as the white geese in Fig. 9).

Fig. 9. Comparison of the results from different methods. (a) Input hazy image; (b) Berman’s [21]; (c) Shin’s [24]; (d) DehazeNet [27]; (e) MSBDN [33]; (f) Ours.

The sky region is challenging for dehazing methods because clouds and haze are similar natural phenomena governed by the same atmospheric scattering model. Three widely used images with large sky areas are chosen, as shown in Fig. 10.

Fig. 10. Comparison of the results from different methods. (a) Input hazy image; (b) Berman’s [21]; (c) Shin’s [24]; (d) DehazeNet [27]; (e) MSBDN [33]; (f) Ours.

The details of the scenes and objects are effectively restored by Berman’s [21] and Shin’s [24] methods. However, their results suffer from significant color distortion and halo artifacts in the sky regions. Although the colors of DehazeNet [27] and MSBDN [33] are natural, their dehazing ability is insufficient, and the dehazed images are still very blurry. Our method not only maintains the natural colors of the sky regions but also achieves a promising dehazing effect, with little color distortion and few halo artifacts in the sky, owing to the effectiveness of the transmission estimation model in this paper.

One real-world outdoor hazy image is chosen from the SOTS test subset of the RESIDE dataset [35], as shown in the first row of Fig. 11(a). We also choose a hazy image with inhomogeneous and dense haze, shown in the second row of Fig. 11(a). The visibility of Shin’s [24] and DehazeNet’s [27] results improves, but the contrast is not high. MSBDN [33] tends to leave haze in its results. The proposed method obtains higher contrast while retaining fine structures and natural colors.

Fig. 11. Comparison of the results from different methods. (a) Input hazy image; (b) Shin’s [24]; (c) DehazeNet [27]; (d) MSBDN [33]; (e) Ours.

Other experimental results are shown in Fig. 12, which shows that the proposed method can obtain images with natural colors and clear details under a variety of conditions.

Fig. 12. Part of the experimental results of the proposed method.

3.3 Objective evaluation

We choose the mean square error (MSE) and image information entropy (IIE) as no-reference image quality assessment metrics [22]. The reference-based image quality assessment metrics are the rate of new visible edges ‘e’ and the mean ratio ‘r’ of the gradients at visible edges [36,37]. Higher values of MSE, IIE, ‘e’, and ‘r’ indicate better performance.
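As an example of the no-reference metrics, the image information entropy (IIE) of a dehazed result can be computed as below; converting to grayscale by channel averaging is an assumption of this sketch.

```python
import numpy as np

def image_information_entropy(img):
    """Shannon entropy (bits) of the 8-bit grayscale histogram of an RGB image in [0, 1]."""
    gray = np.round(img.mean(axis=2) * 255).astype(np.uint8)
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]                      # ignore empty bins
    return float(-np.sum(p * np.log2(p)))
```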

In Table 1, we select the two images in the first rows of Fig. 9 and Fig. 10 for the experiments. The MSEs of the images dehazed by Berman’s method [21] are higher than those of our algorithm because Berman’s method [21] overcorrects more severely in these two images, making the image contrast higher and the MSE value larger. The IIE, ‘e’, and ‘r’ of the images dehazed by our method are the highest. Compared with the other four methods, our method obtains higher values in the quantitative analysis, which shows that it improves the image contrast and effectively enhances the image details.

Table 1. Objective Comparison of Image Dehazing effects

Then, the full-reference PSNR and SSIM metrics are computed for the image in the first row of Fig. 11, taken from the SOTS dataset, as shown in Table 1. Although the MSBDN [33] method obtains the highest PSNR and SSIM values, this is not only due to the small distortion of MSBDN [33] but also because the ground truth of the outdoor images in the SOTS dataset retains a certain amount of haze. Among the remaining results, our method obtains higher PSNR and SSIM values.

To verify the speed advantage of our method, images of various sizes were tested, and the running times are compared with those of He’s method [23], Tarel’s method [12], Meng’s method [17], Berman’s method [21], Shin’s method [24], and DehazeNet [27]. To ensure a fair comparison, all programs are run in 64-bit MATLAB on a personal computer equipped with an Intel Core i7-4712HQ processor and 16 GB of memory, and image restoration is performed ten times to obtain the average time.

Table 2 lists the run time (in seconds per image) of these dehazing methods and our method, with rows ordered from top to bottom by image resolution. As Table 2 shows, Tarel’s method [12] is accelerated with a median filter; however, as the image size increases, its computational complexity rises rapidly. He’s guided-filtering variant [34] replaces the soft matting operation with guided filtering, which reduces the run time significantly, but it is still slower than DehazeNet [27]. It can be observed from Table 2 that the computational complexity of our method is significantly lower than that of the other methods, and its run time increases roughly linearly with the resolution. In summary, our method has a highly efficient implementation.

Table 2. Run Time of Different Methods (s)

4. Conclusion

In this paper, an efficient dehazing method is proposed based on haze density estimation in different color spaces. In the HSV space, a scene-based estimate of the haze density is obtained adaptively from the saturation characteristics. In the RGB space, a pixel-based haze density estimation map is obtained. An exponential transmission attenuation model is then established from these two haze density estimates, which allows the transmission map to be estimated accurately. The global atmospheric light is obtained adaptively from the transmission map smoothed by the guided filter. Finally, the haze-free scene is restored based on the atmospheric scattering model. Experimental results show that our method achieves outstanding recovery performance for images with different color complexity, white interference, sky regions, and inhomogeneous and dense haze.

Funding

National Natural Science Foundation of China (61801455).

Disclosures

The authors declare no conflicts of interest.

References

1. M. Saini, X. Wang, P. Atrey, and M. Kankanhalli, “Adaptive workload equalization in multi-camera surveillance systems,” IEEE Trans. Multimedia 14(3), 555–562 (2012). [CrossRef]  

2. A. Galdran, J. Vazquez-Corral, D. Pardo, and M. Bertalmo, “Fusion-based variational image dehazing,” IEEE Signal Process. Lett. 24(2), 1 (2016). [CrossRef]  

3. C. O. Ancuti and C. Ancuti, “Single image dehazing by multi-scale fusion,” IEEE Trans. on Image Process. 22(8), 3271–3282 (2013). [CrossRef]  

4. M. Abdullah-Al-Wadud, Hasanul Kabir, M. Ali Akber Dewan, and O. Chae, “A dynamic histogram equalization for image contrast enhancement,” IEEE Trans. Broadcast Telev. Receivers 53(2), 593–600 (2007). [CrossRef]  

5. X. Fu, Y. Sun, M. Liwang, H. Yue, and X. Ding, “A novel retinex based approach for image enhancement with illumination adjustment,” in Proc. IEEE Int. Conf. Acoust. Speech Signal Process., 1190–1194 (2014).

6. H. Hu, J. Wu, B. Li, Q. Guo, and J. Zheng, “An adaptive fusion algorithm for visible and infrared videos based on entropy and the cumulative distribution of gray levels,” IEEE Trans. Multimedia 19(12), 2706–2719 (2017). [CrossRef]  

7. Y.Y. Schechner, S. G. Narasimhan, and S. K. Nayar, “Polarization-based vision through haze,” Appl. Opt. 42(3), 511–525 (2003). [CrossRef]  

8. L. Shen, Y. Zhao, Q. Peng, C. W. Chan, and S. G. Kong, “An Iterative Image Dehazing Method With Polarization,” IEEE Trans. Multimedia 21(5), 1093–1107 (2019). [CrossRef]

9. F. Liu, L. Cao, X. Shao, P. Han, and X. Bin, “Polarimetric dehazing utilizing spatial frequency segregation of images,” Appl. Opt. 54(27), 8116–8122 (2015). [CrossRef]  

10. R. T. Tan, “Visibility in bad weather from a single image,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 1–8 (2008).

11. R. Fattal, “Dehazing Using Color-Lines,” ACM Trans. Graph. 34(1) (2014).

12. J. P. Tarel and N. Hautiere, “Fast visibility restoration from a single color or gray level image,” in Proceedings of IEEE International Conference on Computer Vision, 2201–2208 (2009).

13. J. H. Kim, W. D. Jang, J. Y. Sim, and C. S. Kim, “Optimized contrast enhancement for real-time image and video dehazing,” J. Vis. Commun. Image R. 24(3), 410–425 (2013). [CrossRef]  

14. R. Luzón-González, J. L. Nieves, and J. Romero, “Recovering of weather degraded images based on rgb response ratio constancy,” Appl. Opt. 54(4), B222–31 (2015). [CrossRef]  

15. K. J. Kopf, B. Neubert, B. Chen, M. Cohen, D. Cohen-Or, O. Deussen, M. Uyttendaele, and D. Lischinski, “Deep Photo: Model-based Photograph Enhancement and Viewing,” ACM Trans. Graph. 27(5), 1–10 (2008). [CrossRef]  

16. C. Dai, M. Lin, X. Wu, and D. Zhang, “Single hazy image restoration using robust atmospheric scattering model,” Signal Processing 166, 107257 (2020). [CrossRef]

17. G. Meng, Y. Wang, J. Duan, S. Xiang, and C. Pan, “Efficient image dehazing with boundary constraint and contextual regularization,” in IEEE International Conference on Computer Vision, 617–624 (2014).

18. Q. Zhu, J. Mai, and L. Shao, “A Fast Single Image Haze Removal Algorithm Using Color Attenuation Prior,” IEEE Trans. on Image Process. 24(11), 3522–3533 (2015). [CrossRef]

19. K. B. Gibson and T. Q. Nguyen, “An analysis of single image defogging methods using a color ellipsoid framework,” EURASIP Journal on Image and Video Processing 2013(1) (2013).

20. R. Fattal, “Single image dehazing,” ACM Trans. Graph. 27(3), 1–9 (2008). [CrossRef]  

21. D. Berman, T. Treibitz, and S. Avidan, “Non-local Image Dehazing,” in IEEE Conference on Computer Vision and Pattern Recognition (2016).

22. G. Bi, J. Ren, T. Fu, T. Nie, C. Chen, and N. Zhang, “Image Dehazing Based on Accurate Estimation of Transmission in the Atmospheric Scattering Model,” IEEE Photonics J. 9(4), 1–18 (2017). [CrossRef]

23. K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel Prior,” in IEEE International Conference on Computer Vision and Pattern Recognition, 1956–1963 (2009).

24. J. Shin, M. Kim, J. Paik, and S. Lee, “Radiance–Reflectance Combined Optimization and Structure-Guided l0-Norm for Single Image Dehazing,” IEEE Trans. Multimedia 22(1), 30–44 (2020). [CrossRef]

25. E. J. McCartney, Optics of the Atmosphere: Scattering by Molecules and Particles (Wiley, 1976).

26. S. G. Narasimhan and S. K. Nayar, “Contrast restoration of weather degraded images,” IEEE Trans. Pattern Anal. Mach. Intell. 25(6), 713–724 (2003). [CrossRef]  

27. B. Cai, X. Xu, K. Jia, C. Qing, and D. Tao, “Dehazenet: An end-to-end system for single image haze removal,” IEEE Trans. on Image Process. 25(11), 5187–5198 (2016). [CrossRef]  

28. W. Ren, S. Liu, H. Zhang, J. Pan, X. Cao, and M.-H. Yang, “Single image dehazing via multi-scale convolutional neural networks,” in Proc. Eur. Conf. Comput. Vis., 154–169 (2016).

29. Z. Yu, X. Wang, X. Bi, and D. Tao, “A Light Dual-Task Neural Network for Haze Removal,” IEEE Signal Processing Letters (2018).

30. D. Yang and J. Sun, “Proximal Dehaze-Net: A Prior Learning-Based Deep Network for Single Image Dehazing,” in Computer Vision – ECCV 2018 (Springer, 2018).

31. B. Li, X. Peng, Z. Wang, J. Xu, and F. Dan, “AOD-Net: All-in-One Dehazing Network,” in IEEE International Conference on Computer Vision (ICCV) (2017).

32. A. Wang, W. Wang, J. Liu, and N. Gu, “AIPNet: Image-to-Image Single Image Dehazing with Atmospheric Illumination Prior,” IEEE Transactions on Image Processing (2018).

33. H. Dong, J. Pan, L. Xiang, Z. Hu, and M. H. Yang, “Multi-Scale Boosted Dehazing Network with Dense Feature Fusion,” in Proc. IEEE Int. Conf. Comput. Vis. Pattern Recognit., 2020.

34. K. He, J. Sun, and X. Tang, “Guided image filtering,” IEEE Trans. Pattern Anal. Mach. Intell. 35(6), 1397–1409 (2013). [CrossRef]  

35. B. Li, W. Ren, D. Fu, D. Tao, D. Feng, W. Zeng, and Z. Wang, “Reside: A benchmark for single image dehazing,” IEEE Transactions on Image Processing 28(1), 492–505 (2018).

36. N. Hautière, J.P. Tarel, D. Aubert, and É. Dumont, “Blind contrast enhancement assessment by gradient ratioing at visible edges,” Image Anal. Stereol. 27(1), 87–95 (2008). [CrossRef]  

37. L. K. Choi, J. You, and A. C. Bovik, “Referenceless prediction of perceptual fog density and perceptual image defogging,” IEEE Trans. on Image Process. 24(11), 3888–3901 (2015). [CrossRef]  
