Optica Publishing Group

Reconstruction algorithm of haze image based on blind separation model of polarized orthogonal airlight

Open Access

Abstract

Polarization-based dehazing methods can enhance the quality of hazy images. However, existing methods tend to rely on manual selection of the sky area and a bias coefficient to estimate the degree of polarization (DoP) of the airlight, which leads to inaccurate estimation of the airlight. To address this problem, a reconstruction algorithm based on a blind separation model of polarized orthogonal airlight is proposed. Importantly, the depth-dependent DoP of the airlight is automatically estimated without manual selection of a sky area or bias coefficient. To reduce the interference of white objects on the estimation of the airlight at infinity, an adaptive estimation method using the deviation between the DoP of the airlight and that of the incident light is proposed. To accurately estimate the airlight from the airlight at infinity, a blind separation model of the airlight with multi-regularization constraints is established, based on decomposing the airlight at infinity into a pair of polarized components with orthogonal angles. Experimental results show that the method effectively improves the visibility of scenes under different haze concentrations, especially in dense or heavy haze.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Under severe weather conditions such as fog and haze, the performance of an imaging system is degraded by the scattering and absorption of copious suspended particles in the atmosphere, which decreases image contrast, visibility, and imaging quality. Therefore, restoring haze-free images is significant for applications such as urban lane detection [1] and object detection. With the in-depth study of image dehazing, much work has made remarkable progress. Single-image dehazing has been widely studied; prior-based methods use prior knowledge to model the relevant image parameters, such as DCP [2], BCCR [3], CAP [4] and Nonlocal [5]. All the above methods are based on a physical model and various statistical assumptions. However, since these assumptions or priors are not valid in all scenes, the dehazing performance of the above methods is not always satisfactory for specific scenes.

Polarization-based reconstruction methods have significant advantages in image dehazing, because polarization imaging obtains richer polarized information than conventional optical imaging. Using the polarized information to estimate the related parameters, such as the airlight and the degree of polarization (DoP) of the airlight, is crucial. However, some problems remain. On the one hand, early methods estimate the DoP of the airlight from polarized difference information [6] for airlight estimation, and the estimation of the DoP of the airlight generally depends on a sky area [7,8], so the dehazing effect for skyless scenes is poor. To overcome this sky dependency, Liang et al. [9] took full advantage of orientation-angle information to estimate the DoP of the airlight and assumed that the airlight at infinity is partially polarized. Fang et al. [10] assumed that the transmission map is statistically uncorrelated with the dehazed image to solve for the DoP of the airlight. On this basis, some methods also combine fusion strategies [11–13]. On the other hand, based on the assumption that the DoP of the airlight is globally invariant, a bias coefficient is introduced to estimate the DoP of the airlight, which is a universal approach to stabilize the quality of restoration [14–16]. However, the polarized properties of light are closely related to the medium concentration [17] (light depolarizes as the medium concentration increases beyond a certain extent), and the multi-scattered airlight is related to both the medium concentration and the depth; thus, the DoP of the airlight should be depth-dependent. Under this assumption, the DoP of the airlight can be estimated without a bias coefficient. For the accurate estimation of the airlight, Liang et al. proposed the GPLPF [18] and the PLF [19], which filter the object information in the frequency domain and the nonpolarized airlight in the spatial domain, respectively. Liang et al. [20] proposed an optimization of the angle of polarization (AoP) based on regularization constraints to reduce the quantization noise caused by the camera. Shen et al. [21] proposed a joint optimization scheme for the whole dehazing process to improve restoration quality. Most methods use image-processing techniques to estimate the airlight from the original hazy image without considering the source of the airlight in the image degradation model. When the airlight at infinity and the depth are fully considered, the airlight can be estimated more accurately in scenes with higher haze concentration.

To address the problems of estimating the DoP of the airlight, the airlight at infinity, and the airlight, a reconstruction algorithm based on a blind separation model of polarized orthogonal airlight is proposed, which considers the effects of the airlight at infinity and the depth on parameter estimation. Firstly, to avoid the limitations that the sky area and the bias coefficient impose on estimating the DoP of the airlight, the depth-dependent DoP of the airlight is estimated automatically. Secondly, the interference of white objects on the estimation of the airlight at infinity is reduced by using the deviation between the DoP of the airlight and that of the incident light. Finally, for the accurate estimation of the airlight from the airlight at infinity, a blind separation model of the airlight with multi-regularization constraints is established. The model is based on the assumption that the airlight at infinity is partially polarized, just like the airlight [9], and can be decomposed into a pair of polarized components with orthogonal angles. A mechanism of multi-regularization constraints is introduced into the blind separation model to effectively ensure the local smoothness and edge jumpiness of the airlight.

2. Polarization-based reconstruction theory

The image degradation model indicates that the total intensity I of the captured hazy image (the incident light) contains two components: the intensity D attenuated from the scene and the intensity of the multi-scattered airlight $A$:

$$I = D + A$$
where D is calculated from the intensity of the haze-free image R and the depth $z$:
$$D = R \cdot \exp ({ - \beta z} ),$$
where $\beta $ is the degradation coefficient, and $\exp ({ - \beta z} )$ is usually called the transmission map t. Similarly, A is related to the depth z and the airlight at infinity ${A_\infty }$, expressed as:
$$A = {A_\infty } \cdot ({1 - t} ),$$

From Eqs. (1)–(3), R can be restored by:

$$R = \frac{{I - A}}{{1 - A/{A_\infty }}}.$$

Equation (4) shows that R can be recovered by estimating A and ${A_\infty }$. Moreover, the advantage of the polarization-based reconstruction method is that it uses the partial polarization of the airlight to achieve accurate estimation of the airlight; thus, estimating the DoP of the airlight is the key to accurate estimation of the airlight.
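
As a minimal sketch of Eq. (4), the recovery step can be written in a few lines of NumPy. The `t_min` clamp is our own safeguard against division blow-up in very dense regions, not part of the model itself:

```python
import numpy as np

def dehaze(I, A, A_inf, t_min=0.1):
    """Recover the haze-free image R from Eq. (4): R = (I - A) / (1 - A / A_inf).

    I     : total intensity of the hazy image (H x W array)
    A     : estimated airlight map (H x W array)
    A_inf : estimated airlight at infinity (scalar)
    t_min : lower bound on the transmission 1 - A/A_inf (an assumption,
            added here to avoid division blow-up; not part of Eq. (4)).
    """
    t = np.maximum(1.0 - A / A_inf, t_min)   # transmission map t = exp(-beta*z)
    R = (I - A) / t
    return np.clip(R, 0.0, None)

# Toy usage on a 2x2 grid of hazy pixels
I = np.array([[0.8, 0.6], [0.5, 0.9]])
A = np.array([[0.4, 0.2], [0.1, 0.6]])
R = dehaze(I, A, A_inf=1.0)
```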

2.1 Estimation of the depth-dependent DoP of the airlight

In the general case, the DoP of the airlight ${p_\textrm{A}}$ is estimated by manually selecting the sky area. In addition, since the parameter is treated as a global invariant, a bias coefficient is introduced to stabilize the quality of image restoration, which introduces the error of artificial selection. However, the experimental results in Ref. [17] show that the DoP of the airlight is affected by the scattering-medium concentration: as the haze concentration increases beyond a certain extent, the DoP of the airlight decreases. Moreover, the airlight is related to the medium concentration and the depth, so the DoP of the airlight is assumed to be depth-dependent, which means the DoP of the airlight should differ between the sky and objects at different depths.

To verify the above assumption, a mathematical analysis is carried out. According to the definition of DoP [10], the DoP of the incident light p and the DoP of the airlight ${p_\textrm{A}}$ can be obtained as follows:

$$\left\{ \begin{array}{l} p = \frac{{{I_{\max }} - {I_{\min }}}}{I} = \frac{{\nabla I}}{I}\\ {p_\textrm{A}} = \frac{{{A_{\max }} - {A_{\min }}}}{A} = \frac{{\nabla A}}{A}, \end{array} \right.$$
where ${I_{\max }}$ and ${I_{\min }}$ denote the maximum and minimum intensity corresponding to the linear-polarizer angles ${\theta _{\max }}$ and ${\theta _{\min }}$, respectively. $\nabla I$ and $\nabla A$ are defined as the polarized difference intensity of the incident light and the airlight, respectively. Generally, the polarized properties of objects can be ignored due to long-range, multi-scattering effects, so only the polarized properties of the airlight are considered:
$$\nabla A \approx \nabla I,$$

According to Eqs. (1)–(6), ${p_\textrm{A}}$ can be expressed as:

$${p_\textrm{A}} = \left[ {1 + \frac{R}{{{A_\infty }}} \cdot \frac{{\textrm{exp} ({ - \beta z} )}}{{1 - \textrm{exp} ({ - \beta z} )}}} \right] \cdot p.$$

It can be inferred from Eq. (7) that ${p_\textrm{A}}$ varies with the depth z. When the depth is at infinity, ${p_\textrm{A}}$ is approximately equal to p. As the depth decreases, ${p_\textrm{A}}$ becomes gradually larger than p. If ${p_\textrm{A}}$ is treated as a global invariant, objects at different depths cannot be restored well; therefore, ${p_\textrm{A}}$ is assumed to be depth-dependent. In this paper, the depth-dependent DoP of the airlight is estimated automatically to overcome the limitations of manually selecting the sky area and the bias coefficient.

Firstly, when a beam of incident light with Stokes vector $S = {({I,Q,U} )^T}$ passes through a linear polarizer at angle $\alpha $ (the angle between the reference direction and the polarizer axis), the intensity of the polarized image ${I_\alpha }$ can be calculated:

$${I_\alpha } = \frac{1}{2}({I + Q \cdot \cos 2\alpha + U \cdot \sin 2\alpha } ),$$

The polarized images ${I_0}$, ${I_{60}}$ and ${I_{120}}$ can be captured with the three-channel polarization camera SZ-SR500M [22,23], which has a lower cost than four-channel polarization cameras. The Stokes vector can then be calculated as follows:

$$\left\{ \begin{array}{l} I = \frac{2}{3}({{I_0} + {I_{60}} + {I_{120}}} )\\ Q = \frac{2}{3}({2{I_0} - {I_{60}} - {I_{120}}} )\\ U = \frac{2}{{\sqrt 3 }}({{I_{60}} - {I_{120}}} ). \end{array} \right.$$

From Eq. (9), the DoP and AoP of the incident light are expressed as:

$$p = \frac{{\sqrt {{Q^2} + {U^2}} }}{I},$$
$$\theta = \frac{1}{2}\arctan \left( {\frac{U}{Q}} \right),$$
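
Eqs. (9)–(11) map directly to code. The following NumPy sketch (function and variable names are illustrative) computes the Stokes parameters from the three polarized images and then the DoP and AoP; `arctan2` is used so the quadrant of Eq. (11) is resolved automatically:

```python
import numpy as np

def stokes_from_three(I0, I60, I120):
    """Stokes parameters from 0/60/120-degree polarized images (Eq. (9))."""
    I = (2.0 / 3.0) * (I0 + I60 + I120)
    Q = (2.0 / 3.0) * (2.0 * I0 - I60 - I120)
    U = (2.0 / np.sqrt(3.0)) * (I60 - I120)
    return I, Q, U

def dop_aop(I, Q, U):
    """DoP (Eq. (10)) and AoP (Eq. (11)) of the incident light."""
    p = np.sqrt(Q**2 + U**2) / I
    theta = 0.5 * np.arctan2(U, Q)   # arctan2 resolves the quadrant of Eq. (11)
    return p, theta
```

As a sanity check, light fully polarized along 0° with unit intensity gives ${I_0} = 1$, ${I_{60}} = {I_{120}} = 0.25$ by Malus' law, and the functions above return $p = 1$, $\theta = 0$.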

Secondly, since only the polarized properties of the airlight are considered, the values of Q and U are assumed to be determined mainly by the airlight, so the $\theta $ calculated by Eq. (11) can be approximated as the AoP of the airlight ${\theta _\textrm{A}}$. Most values in the matrix $\theta $ respond to ${\theta _\textrm{A}}$, so the frequency-priority strategy [9] is used to take the most frequently appearing value in the matrix $\theta $ as ${\theta _\textrm{A}}$. Finally, without manual selection of a sky area or bias coefficient, the depth-dependent ${p_\textrm{A}}$ can be obtained by substituting ${\theta _\textrm{A}}$ into Eqs. (10) and (11):

$${p_\textrm{A}} = \max \left( {\left|{\frac{Q}{{I\cos 2{\theta_\textrm{A}}}}} \right|,\left|{\frac{U}{{I\sin 2{\theta_\textrm{A}}}}} \right|} \right).$$
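
The frequency-priority estimate of ${\theta _\textrm{A}}$ and the per-pixel evaluation of Eq. (12) can be sketched as follows; the histogram bin count `n_bins` and the small `eps` guard against near-zero denominators are our own choices, not from the paper:

```python
import numpy as np

def airlight_dop(I, Q, U, n_bins=180):
    """Depth-dependent DoP of the airlight (Eq. (12)).

    theta_A is taken as the most frequent AoP value over the image
    (frequency-priority strategy [9]); n_bins is an assumed histogram
    resolution for locating that mode.
    """
    theta = 0.5 * np.arctan2(U, Q)                 # AoP map, Eq. (11)
    hist, edges = np.histogram(theta, bins=n_bins)
    k = np.argmax(hist)
    theta_A = 0.5 * (edges[k] + edges[k + 1])      # center of the modal bin
    eps = 1e-8                                     # guards cos/sin near zero
    pA_Q = np.abs(Q / (I * np.cos(2 * theta_A) + eps))
    pA_U = np.abs(U / (I * np.sin(2 * theta_A) + eps))
    return np.maximum(pA_Q, pA_U), theta_A
```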

It can be seen from Eq. (7) that ${p_\textrm{A}}$ is correlated not only with z but also with p, so statistical histograms are used to study the statistical relationship between ${p_\textrm{A}}$ and p in different selected areas. The sky area or object areas at different depths are randomly selected to verify the variation of ${p_\textrm{A}}$ with depth. As shown in Fig. 1, the histograms of ${p_\textrm{A}}$ and p are shown in red and blue, respectively. Firstly, the two histograms have similar distributions in Fig. 1(b)–(c) or 1(e)–(f). However, when the horizontal area or a local object area is selected, the red histogram is clearly shifted to the right relative to the blue histogram, which illustrates that the deviation between ${p_\textrm{A}}$ and p differs between the sky area and object areas. Secondly, the red histogram in Fig. 1(f) is significantly shifted to the right compared to that in Fig. 1(e); therefore, the DoP of the airlight estimated by ${\theta _\textrm{A}}$ satisfies the property that the DoP of the airlight changes with depth.


Fig. 1. The statistical results of ${p_\textrm{A}}$ and p in different selected areas. (a) The locations of vertical area and the horizontal area. (b)–(c) The statistical results of ${p_\textrm{A}}$ and p in the vertical area and the horizontal area, respectively. (d) The sky area P1 and the object area P2. (e)–(f) The statistical results of ${p_\textrm{A}}$ and p in the sky area P1 and the object area P2, respectively.


In addition, a phenomenon can be observed: the two histograms in Fig. 1(e) should be identical in theory but are not in experiment. The causes are as follows. According to Eq. (7), whether the two histograms are identical depends on the depth z and the haze-free image R. On the one hand, the depth z of the sky area P1 is not at infinity in our experiment, so $\exp ({ - \beta z} )\ne 0$. On the other hand, the buildings in the sky area P1 are invisible because of the higher haze concentration in the distance, so R is not equal to zero. It is therefore reasonable that the two histograms in Fig. 1(e) are not exactly identical. Nevertheless, the deviation in the sky area P1 is not large.

2.2 Estimation of the airlight at infinity

In general, when the depth tends to infinity, $I \to {A_\infty }$. The spatial locations of ${A_\infty }$ with high probability can be screened by obtaining a segmentation threshold in the histogram of I: the spatial locations of ${A_\infty }$ are the coordinates of pixels whose gray values are greater than the threshold in I. However, the screened pixels usually include white objects or strong light sources, which leads to inaccurate estimation of ${A_\infty }$. As can be seen from Fig. 1, the deviation between ${p_\textrm{A}}$ and p differs between the sky area and object areas; therefore, this deviation can be used to reduce the interference of white objects or strong light sources in the estimation of ${A_\infty }$. First of all, another segmentation threshold must be obtained in the histogram of the deviation $\Delta p = {p_\textrm{A}} - p$. Secondly, since the deviation of the sky area is smaller than that of object areas, the coordinates of pixels whose gray values are less than this threshold in $\Delta p$ can be screened as spatial locations of ${A_\infty }$ with high probability. Finally, the interference from white objects or strong light sources can be reduced by taking the intersection of the two coordinate sets.

In detail, an adaptive OTSU method is used to obtain the segmentation thresholds between sky and white objects in the histograms of I and $\Delta p$, denoted ${t_I}$ and ${t_{\Delta p}}$, respectively. In I, the coordinates of pixels whose gray value is greater than ${t_I}$ are expressed as $\xi _I^i[{I(x) > {t_I}} ]$. Similarly, in $\Delta p$, the coordinates of pixels whose gray value is less than ${t_{\Delta p}}$ are expressed as $\xi _{\Delta p}^i[{\Delta p(x) < {t_{\Delta p}}} ]$. These coordinates represent the spatial locations of ${A_\infty }$ with high probability. Since the airlight at infinity is commonly regarded as a constant, ${A_\infty }$ can be calculated as the average of the gray values at the intersection of the two coordinate sets, expressed as:

$${A_\infty } = \frac{1}{n}\sum\limits_{i = 1}^n {I\{{\xi_I^i[{I(x) > {t_I}} ]\cap \xi_{\Delta p}^i[{\Delta p(x) < {t_{\Delta p}}} ]} \}.}$$
where x denotes the pixel coordinates, and $\xi _I^i$ and $\xi _{\Delta p}^i$ are operators for selecting coordinates.
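
The screening procedure of Eq. (13) can be sketched as follows. The minimal Otsu implementation and the assumption that I and $\Delta p$ are normalized to [0, 1] are ours; the paper's adaptive OTSU variant may differ:

```python
import numpy as np

def otsu_threshold(img, n_bins=256):
    """Minimal Otsu threshold on a [0, 1]-valued image (a stand-in for the
    adaptive OTSU segmentation used in the paper)."""
    hist, edges = np.histogram(img, bins=n_bins, range=(0.0, 1.0))
    hist = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(hist)                       # class-0 probability
    mu = np.cumsum(hist * centers)             # class-0 cumulative mean mass
    mu_T = mu[-1]                              # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_T * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    sigma_b = np.nan_to_num(sigma_b)           # empty/full classes -> 0
    return edges[np.argmax(sigma_b) + 1]       # right edge of the best split bin

def estimate_A_inf(I, p_A, p):
    """A_inf as the mean intensity over the intersection of the bright mask
    in I and the small-deviation mask in dp = p_A - p (Eq. (13))."""
    dp = p_A - p
    mask = (I > otsu_threshold(I)) & (dp < otsu_threshold(dp))
    return I[mask].mean() if mask.any() else I.max()
```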

To further verify whether white objects affect the estimation of ${A_\infty }$, Fig. 2 shows the estimation of the airlight at infinity excluding white objects. As can be seen in the rectangular box of Fig. 2(a), the coordinate set of ${A_\infty }$ includes white buildings, which shows that ${A_\infty }$ estimated using only I is easily disturbed by white objects. However, in Fig. 2(b), the deviation $\Delta p$ differs noticeably between the sky and the white buildings, and the deviation of the white buildings is larger than that of the sky, so the coordinate set of ${A_\infty }$ estimated using only $\Delta p$ excludes the white buildings. The interference from white buildings can then be reduced by taking the intersection of the two coordinate sets. As shown in Fig. 2(c), the spatial locations of ${A_\infty }$ corresponding to the intersection are mostly in the sky area, and the white objects are well excluded. In conclusion, the estimation of ${A_\infty }$ by Eq. (13) reduces the influence of white buildings, so the airlight at infinity can be estimated more accurately.


Fig. 2. The estimation of ${A_\infty }$ excluding the white object. (a) The spatial location of ${A_\infty }$ with high probability in I. (b) The spatial location of ${A_\infty }$ with high probability in $\Delta p$. (c) The spatial location of ${A_\infty }$ corresponding to the intersection coordinates.


2.3 Blind separation model of polarized orthogonal airlight

Based on the assumptions that the airlight is partially polarized and that ${\theta _\textrm{A}}$ is treated as the AoP of the airlight, the airlight can be decomposed into unpolarized light and polarized light. The airlight that passes through a polarizer at angle $\alpha $ can be expressed according to Malus' law:

$${A_\alpha } = {p_\textrm{A}} \cdot A \cdot {\cos ^2}(\alpha - {\theta _\textrm{A}}) + \frac{{1 - {p_A}}}{2} \cdot A.$$

When the rotational angle of the polarizer equals ${\theta _\textrm{A}}$, the influence of the airlight is greatest; when it equals ${\theta _\textrm{A}} + \frac{\pi }{2}$, the influence is least. So the airlight can be decomposed into a pair of polarized components with orthogonal angles, called ${A_{\max }}$ and ${A_{\min }}$, expressed as follows:

$$\left\{ \begin{array}{l} {A_{\max }} = \frac{{1 + {p_\textrm{A}}}}{2} \cdot A\\ {A_{\min }} = \frac{{1 - {p_\textrm{A}}}}{2} \cdot A, \end{array} \right.$$

Thus $A = {A_{\max }} + {A_{\min }}$. Substituting Eq. (3) into Eq. (15), we obtain:

$$\left\{ \begin{array}{l} {A_{\max }} = \frac{{1 + {p_A}}}{2} \cdot {A_\infty } \cdot ({1 - t} )\\ {A_{\min }} = \frac{{1 - {p_A}}}{2} \cdot {A_\infty } \cdot ({1 - t} ), \end{array} \right.$$

From Eq. (16), we can define the new parameters:

$$\left\{ \begin{array}{l} {A_{\infty \max }} = \frac{{1 + {p_A}}}{2} \cdot {A_\infty }\\ {A_{\infty \min }} = \frac{{1 - {p_A}}}{2} \cdot {A_\infty }. \end{array} \right.$$
where ${A_{\infty \max }}$ and ${A_{\infty \min }}$ are defined as a pair of polarized components of ${A_\infty }$ with orthogonal angles. This means ${A_\infty }$ has the same partial-polarization properties as A: $A{}_\infty$ exhibits maximum and minimum intensity as the polarizer angle changes. However, the DoP of the airlight changes with depth; therefore, ${A_{\infty \max }}$ and ${A_{\infty \min }}$ differ at each pixel.
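
Eq. (17) is a simple elementwise split; a minimal sketch, with the sanity property that the two components sum back to ${A_\infty }$:

```python
import numpy as np

def orthogonal_components(A_inf, p_A):
    """Decompose A_inf into orthogonal polarized components (Eq. (17)).

    p_A may be a per-pixel map (depth-dependent DoP), so the pair differs
    at every pixel; A_inf is a scalar or a broadcastable map.
    """
    A_inf_max = 0.5 * (1.0 + p_A) * A_inf
    A_inf_min = 0.5 * (1.0 - p_A) * A_inf
    return A_inf_max, A_inf_min
```

By construction the components satisfy $A_{\infty \max } + A_{\infty \min } = A_\infty$ and $A_{\infty \max } - A_{\infty \min } = p_\textrm{A} A_\infty$.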

For the accurate estimation of $A$ from $A{}_\infty$, Eq. (16) is simplified as follows:

$${A^ \ast } = A_\infty ^ \ast{\cdot} T.$$
where ${A^ \ast } \in \{{{A_{\max }},{A_{\min }}} \}$, $A_\infty ^ \ast{\in} \{{{A_{\infty \max }},{A_{\infty \min }}} \}$, $T = 1 - t$, and ${\cdot} $ denotes the Hadamard product.

In the case that $A_\infty ^ \ast $ is known and T is unknown, a blind separation model of the airlight should be established to separate ${A^ \ast }$ from $A_\infty ^ \ast $. On the one hand, the airlight is locally smooth; on the other hand, jumpiness must be preserved at object edges. A mechanism of multi-regularization constraints is therefore introduced into the blind separation model to effectively ensure the local smoothness and edge jumpiness of the airlight. The optimal solutions ${\hat{A}^ \ast }$ and T are then obtained by minimizing Eq. (19):

$$\arg \mathop {\min }\limits_{{{\hat{A}}^ \ast },T} \frac{1}{2}||{{{\hat{A}}^ \ast } - A_\infty^ \ast{\cdot} T} ||_2^2 + {\lambda _1}\sum\limits_{j \in w} {{{||{{W_j} \cdot ({{D_j} \otimes {{\hat{A}}^ \ast }} )} ||}_1}} + \frac{{{\lambda _2}}}{2}||T ||_2^2,$$
where the first term in Eq. (19) is the fidelity term, which ensures that the estimated ${\hat{A}^ \ast }$ stays within a minimal data-error range. The second term is the gradient regularization term with multiple difference operators: w is the set of weights, ${W_j}$ is the adaptive weight matrix guided by $A_\infty ^ \ast $, which makes it more effective to distinguish edge regions from smooth regions, and ${\lambda _1}$ controls the smoothness of the image. The third term is the 2-norm regularization of T, which effectively controls brightness; ${\lambda _2}$ is the brightness coefficient. In the second term, ${W_j}$ can be expressed as:
$${W_j} = \textrm{exp} \left( { - \sum\limits_{c \in \{{r,g,b} \}} {\frac{{||{{D_j} \otimes A_\infty^{ {\ast} c}} ||_2^2}}{{2{\sigma^2}}}} } \right),$$
where ${D_j}$ denotes different higher-order difference operators, introduced to sharpen edges and reduce artifacts in smooth regions. In this paper, ${\lambda _1}$ is set to 0.1, ${\lambda _2}$ is set to 0.01, and $\sigma $ is set to 0.5.

To obtain the optimal solution, Half Quadratic Splitting and the Fast Fourier Transform are adopted to transform Eq. (19) into sub-problems that solve for ${\hat{A}^ \ast }$ and T by iterative separation, expressed as:

$$\arg \mathop {\min }\limits_{{{\hat{A}}^\ast }} \frac{1}{2}||{{{\hat{A}}^\ast } - A_\infty^ \ast{\cdot} T} ||_2^2 + {\lambda _1}\sum\limits_{j \in w} {{{||{{W_j} \cdot ({{D_j} \otimes {{\hat{A}}^\ast }} )} ||}_1}} ,$$
$$\arg \mathop {\min }\limits_T \frac{1}{2}||{{{\hat{A}}^\ast } - A_\infty^ \ast{\cdot} T} ||_2^2 + \frac{{{\lambda _2}}}{2}||T ||_2^2,$$
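
The alternation of Eqs. (21) and (22) can be sketched as follows. This is a deliberately simplified stand-in for the HQS + FFT solver: the adaptive weights ${W_j}$ are omitted (set to 1), only first-order differences are used, and the $\ell_1$ term is smoothed and minimized by plain gradient steps. The step size `eta`, the smoothing `eps`, and the clamping of T to [0, 1] are our assumptions:

```python
import numpy as np

def _fdiff(a, axis):
    # forward difference with replicated edge (a first-order D_j)
    return np.diff(a, axis=axis, append=np.take(a, [-1], axis=axis))

def separate_airlight(A0, A_inf_star, lam1=0.1, lam2=0.01, n_iter=30,
                      eta=0.2, eps=1e-3):
    """Simplified alternating minimization of Eqs. (21)-(22).

    A0         : initial airlight estimate (e.g. from the polarized difference)
    A_inf_star : per-pixel A_inf_max or A_inf_min map (Eq. (17))
    """
    A_hat = np.asarray(A0, dtype=float).copy()
    for _ in range(n_iter):
        # Eq. (22): closed-form elementwise T-update
        T = (A_inf_star * A_hat) / (A_inf_star ** 2 + lam2)
        T = np.clip(T, 0.0, 1.0)                 # T = 1 - t lies in [0, 1]
        # Eq. (21): one gradient step; the l1 penalty is smoothed as
        # |g| ~ sqrt(g^2 + eps^2) so a plain gradient exists everywhere
        g = A_hat - A_inf_star * T               # fidelity gradient
        for axis in (0, 1):
            d = _fdiff(A_hat, axis)
            psi = d / np.sqrt(d ** 2 + eps ** 2)  # derivative of smoothed |.|
            # adjoint of forward difference ~ negative backward difference
            g -= lam1 * np.diff(psi, axis=axis,
                                prepend=np.take(psi, [0], axis=axis))
        A_hat = np.clip(A_hat - eta * g, 0.0, None)
    return A_hat, T
```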

Finally, the solutions of these sub-problems converge to the optimal solution of the initial problem through iteration, and a more accurate ${A^ \ast }$ is then estimated. The iterative flowchart is shown in Fig. 3. The estimated airlight at the same depth is similar, which effectively guarantees the local smoothness and edge jumpiness of the airlight.


Fig. 3. The iterative flowchart for blind separation of airlight. The iteration starts with ${A_{\infty \max }}$ and ${A_{\infty \min }}$. $\hat{A}_{\max }^0$ and $\hat{A}_{\min }^0$ represent the airlight after the first iteration. ${\hat{A}_{\max }}$ and ${\hat{A}_{\min }}$ represent the airlight after iterative convergence. A is calculated by adding ${\hat{A}_{\max }}$ and ${\hat{A}_{\min }}$.


The flowchart of the proposed method is shown in Fig. 4.


Fig. 4. The flowchart of the proposed method.


3. Experimental result and analysis

To verify the effectiveness of the proposed method, gray and RGB polarized images with different haze concentrations are taken in real scenes for experiments, and the haze concentration is divided into four levels: light, moderate, dense, and heavy. In the experiments, the image-acquisition equipment is the polarization camera SZ-SR500M [22,23], which acquires three original polarized hazy images with a resolution of 1860 × 2480 simultaneously in a single exposure. The results of the proposed method are further compared with those of GPLPF [18] and PLF [19] because they address a similar research question: the GPLPF [18] estimates the airlight by filtering object information in the frequency domain, and the PLF [19] estimates the airlight by filtering object information of non-polarized light in the spatial domain. The proposed method is also compared with classical dehazing methods such as BCCR [3] and Nonlocal [5]. The GPLPF [18] and PLF [19] take the original polarized images as input, while the total hazy image calculated from the Stokes vector is taken as the input of BCCR [3] and Nonlocal [5].

The experiments consider the following aspects: comparison on local objects, the influence of the sky area, and the dehazing effect on RGB polarized images. Figure 5 compares the dehazing results generated by different methods on local objects. The results of GPLPF [18], PLF [19] and Nonlocal [5] are still hazy on the distant Ferris wheel, showing incomplete haze removal, and considerable noise remains after local magnification. Though the local magnification of BCCR [3] is better than the others, its overall dehazing is heterogeneous. In comparison, the proposed method has a good overall dehazing effect and enhances the visibility of both near and distant objects. At the same time, noise is suppressed and more texture details are retained. Therefore, the proposed method dehazes objects at different depths well.


Fig. 5. The comparison of dehazing results generated by different methods for local objects. (a) ∼ (a5) The original image and the dehazing results generated by GPLPF [18], PLF [19], BCCR [3], Nonlocal [5] and the proposed method, respectively. (b) ∼ (b5) The locally enlarged results of the building corresponding to (a) ∼ (a5). (c) ∼ (c5) The locally enlarged results of the Ferris wheel corresponding to (a) ∼ (a5).


Figure 6 compares the dehazing results generated by different methods for sky scenes under different haze concentrations. Several methods effectively remove haze at light haze concentration, but at higher concentrations the GPLPF [18] and PLF [19] do not dehaze thoroughly. The BCCR [3] and Nonlocal [5] are better; however, some object details, such as the signal tower, are lost, which darkens parts of objects. The proposed method is clearly superior to the other methods for objects at different depths, especially distant objects: the edges of distant objects are clearer and noise is suppressed. Therefore, the proposed method achieves good results for both the distant buildings and the close-range signal tower, preserving image details while achieving a better dehazing effect.


Fig. 6. The comparison of dehazing results generated by different methods for sky scenes under different haze concentrations. (L) ∼ (L5) The original image and the dehazing image generated by GPLPF [18], PLF [19], BCCR [3], Nonlocal [5] and the proposed method in light haze. (M) ∼ (M5) The original image and the dehazing image generated by GPLPF [18], PLF [19], BCCR [3], Nonlocal [5] and the proposed method in moderate haze. (D) ∼ (D5) The original image and the dehazing image generated by GPLPF [18], PLF [19], BCCR [3], Nonlocal [5] and the proposed method in dense haze. (H) ∼ (H5) The original image and the dehazing image generated by GPLPF [18], PLF [19], BCCR [3], Nonlocal [5] and the proposed method in heavy haze.


Figure 7 compares the dehazing results generated by different methods for skyless scenes under different haze concentrations. Most methods give good results at light haze concentration, but the dehazing effect of BCCR [3] is inferior to the others, and its white objects appear relatively dark; the other methods handle objects well. At higher concentrations, PLF [19] produces considerable noise and unclear objects, and the effects of BCCR [3] and Nonlocal [5] are not obvious. Compared with the other methods, the proposed method restores objects more naturally and retains better texture details.


Fig. 7. The comparison of dehazing results generated by different methods for skyless scenes under different haze concentrations. (L) ∼ (L5) The original image and the dehazing image generated by GPLPF [18], PLF [19], BCCR [3], Nonlocal [5] and the proposed method in light haze. (M) ∼ (M5) The original image and the dehazing image generated by GPLPF [18], PLF [19], BCCR [3], Nonlocal [5] and the proposed method in moderate haze. (D) ∼ (D5) The original image and the dehazing image generated by GPLPF [18], PLF [19], BCCR [3], Nonlocal [5] and the proposed method in dense haze. (H) ∼ (H5) The original image and the dehazing image generated by GPLPF [18], PLF [19], BCCR [3], Nonlocal [5] and the proposed method in heavy haze.


Figure 8 compares the dehazing results generated by different methods for RGB scenes under different haze concentrations. At light haze concentration, several methods give good results. However, at moderate and dense concentrations, since the DoP of the airlight changes with depth, the proposed method produces more natural results than GPLPF [18], PLF [19] and BCCR [3]. The proposed method dehazes distant objects such as the crane better, yielding more detail and less noise. However, at heavy haze concentration, most methods suffer severe color distortion, especially in the sky area.


Fig. 8. The comparison of dehazing results generated by different methods for RGB scenes under different haze concentrations. (L) ∼ (L5) The original image and the dehazing image generated by GPLPF [18], PLF [19], BCCR [3], Nonlocal [5] and the proposed method in light haze. (M) ∼ (M5) The original image and the dehazing image generated by GPLPF [18], PLF [19], BCCR [3], Nonlocal [5] and the proposed method in moderate haze. (D) ∼ (D5) The original image and the dehazing image generated by GPLPF [18], PLF [19], BCCR [3], Nonlocal [5] and the proposed method in dense haze. (H) ∼ (H5) The original image and the dehazing image generated by GPLPF [18], PLF [19], BCCR [3], Nonlocal [5] and the proposed method in heavy haze.


In general, multiple scattering is more probable when the concentration of scatterers is high, so depolarization depends not only on the concentration of scatterers but, to complicate matters further, also on the wavelength of light. At heavy haze concentration, the depolarization varies with wavelength. To simplify the problem, only the DoP of a single channel is considered in our experiment, so color distortion is generated. Due to the severe color distortion of the dehazing results generated by BCCR [3], Nonlocal [5] and the proposed method in heavy haze, Auto White Balance (AWB) correction is further used to alleviate this problem. Figure 9 shows the comparison results after color correction. As shown in Fig. 9, the color distortion in the sky area is mitigated for all methods after AWB. Most importantly, the proposed method has much better contrast and detail, especially in the distant buildings.


Fig. 9. The comparison results after color correction. (H3) ∼ (H5) The dehazing results before color correction. (ACC3) ∼ (ACC5) The dehazing results after color correction.


Fig. 10. (a) - (i) Dehazing results of the proposed method with ${\lambda _1}$ = 0.1, 0.3, 0.5 and ${\lambda _2}$ = 0.01, 0.03, 0.05, respectively.


The next step is quantitative analysis. To objectively verify the effectiveness of the proposed method, no-reference quality evaluation indexes are used to quantitatively analyze the experimental data. The Naturalness Image Quality Evaluator (NIQE) [24] measures the difference between the multivariate Gaussian (MVG) model parameters of the image under test and those of natural haze-free images. The Fog Aware Density Evaluator (FADE) [25] fits fog-aware features to an MVG model and then measures the statistical differences among the observed image, natural hazy images and haze-free images. The smaller the NIQE and FADE values, the better the quality and the clearer the image. Image entropy (IE) and mean gradient (MG) are the most common quality evaluation indexes; the larger their values, the more information the image contains and the clearer its edge details.
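The IE and MG indexes can be computed directly from an image. The following is a minimal sketch of both; the exact gradient stencil used in the paper is not specified, so the forward-difference form averaged over both directions is an assumption:

```python
import numpy as np

def image_entropy(gray):
    """Shannon entropy (bits/pixel) of an 8-bit grayscale image (IE index)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins before taking the log
    return float(-(p * np.log2(p)).sum())

def mean_gradient(gray):
    """Average gradient magnitude (MG index), a simple sharpness proxy."""
    g = gray.astype(float)
    gx = np.diff(g, axis=1)[:-1, :]   # horizontal forward differences
    gy = np.diff(g, axis=0)[:, :-1]   # vertical forward differences
    return float(np.sqrt((gx**2 + gy**2) / 2.0).mean())
```

A flat image scores zero on both indexes, while a sharp, detail-rich image scores higher, matching the "larger is better" interpretation above.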

Table 1 shows the quantitative results of different dehazing methods for all the above scenes in terms of the NIQE [24], FADE [25], IE and MG indexes. The proposed method is inferior to GPLPF [18] and PLF [19] in the IE index, but the difference in value is small. Clearly, the proposed method outperforms the other methods in NIQE [24], FADE [25] and MG: compared with the sub-optimal results, the average values of these three indexes improve by 1.167%, 27.367% and 29.533%, respectively. This shows that the result of the proposed method has the lowest haze density and is closest to a natural image, with better quality. Meanwhile, the result contains more edge details.


Table 1. The quantitative results of different dehazing methods

4. Discussion

In the proposed method, the determination of ${\lambda _1}$ and ${\lambda _2}$ is key to ensuring robustness and effectiveness. To explore the influence of the two coefficients, we carefully scan the empirical values of ${\lambda _1}$ and ${\lambda _2}$ and obtain a series of dehazing results with the proposed method. Figure 10 shows the dehazing results with ${\lambda _1}$ = 0.1, 0.3, 0.5 and ${\lambda _2}$ = 0.01, 0.03, 0.05, respectively. The lateral results show that changing ${\lambda _1}$ has little effect on the recovery, while the vertical results show that the object details are clearest and the overall effect most natural when ${\lambda _2}$ is 0.01 and ${\lambda _1}$ is 0.1. As ${\lambda _2}$ gradually increases, close-range objects are still restored well, while the brightness of long-range objects gradually increases, indicating that ${\lambda _2}$ controls the brightness after dehazing. In our experiments, ${\lambda _1}$ is therefore set to 0.1 and ${\lambda _2}$ to 0.01.
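The parameter study behind Fig. 10 amounts to a small grid scan over the two coefficients. A minimal sketch, assuming a hypothetical `dehaze(image, lam1, lam2)` callable that stands in for the full reconstruction pipeline (that function and its signature are ours, not the paper's):

```python
import itertools

def scan_params(dehaze, image,
                lam1_vals=(0.1, 0.3, 0.5),
                lam2_vals=(0.01, 0.03, 0.05)):
    """Run the dehazing routine over a 3x3 grid of (lambda1, lambda2).

    Returns a dict mapping each (lambda1, lambda2) pair to its result,
    so the nine outputs can be laid out side by side as in Fig. 10.
    """
    return {(l1, l2): dehaze(image, l1, l2)
            for l1, l2 in itertools.product(lam1_vals, lam2_vals)}
```

Inspecting the nine results row by row and column by column isolates the effect of each coefficient, which is how the brightness-controlling role of ${\lambda _2}$ was identified.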

5. Conclusion

Aiming at the problem that some existing methods suffer from the limitations of manually selected sky areas and bias coefficients, a reconstruction algorithm based on a blind separation model of polarized orthogonal airlight is proposed. Without a sky area or bias coefficient, the depth-dependent DoP of the airlight is estimated automatically, reducing the error of manual selection. Then, an adaptive estimation of the airlight at infinity using the deviation between the DoP of the airlight and that of the incident light is proposed, which reduces the interference of white objects. In addition, based on the assumption that the airlight can be estimated from the airlight at infinity, a blind separation model with multi-regularization constraints is established, which improves the estimation accuracy of the airlight and, subjectively, the visibility of objects at different depths. Experimental results show that the NIQE, FADE and MG of the dehazing results of the proposed method improve by 1.167%, 27.367% and 29.533%, respectively, which objectively confirms the improved quality of the reconstructed images. Moreover, the proposed method is robust to scenes under different haze concentrations. Owing to the amount of computation in the iterative optimization, the proposed method takes longer than the others, which will be the subject of further research.

Funding

Anhui Provincial Key Research and Development Plan (202004d07020012); Major Science and Technology Projects in Anhui Province (202103a06020010); National Natural Science Foundation of China (61571177).

Disclosures

The authors declare no potential conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. L. Zhang, Z. Yin, K. Zhao, and H. Tian, “Lane detection in dense fog using a polarimetric dehazing method,” Appl. Opt. 59(19), 5702–5707 (2020). [CrossRef]  

2. K. M. He, J. Sun, and X. O. Tang, “Single image haze removal using dark channel prior,” IEEE Trans. Pattern Anal. Mach. Intell. 33(12), 2341–2353 (2011). [CrossRef]  

3. G. F. Meng, Y. Wang, J. Y. Duan, S. M. Xiang, and C. H. Pan, “Efficient image dehazing with boundary constraint and contextual regularization,” in 2013 IEEE International Conference on Computer Vision, (ICCV, 2013), 617–624.

4. Q. S. Zhu, J. M. Mai, and L. Shao, “A fast single image haze removal algorithm using color attenuation prior,” IEEE Trans. on Image Process. 24(11), 3522–3533 (2015). [CrossRef]  

5. D. Berman and S. Avidan, “Non-local image dehazing,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition, (CVPR, 2016), 1674–1682.

6. Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, “Polarization-based vision through haze,” Appl. Opt. 42(3), 511–525 (2003). [CrossRef]  

7. W. F. Zhang, J. Liang, and L. Y. Ren, “Haze-removal polarimetric imaging schemes with the consideration of airlight's circular polarization effect,” Optik 182, 1099–1105 (2019). [CrossRef]  

8. F. Huang, C. Z. Ke, X. Y. Wu, S. Wang, J. Wu, and X. S. Wang, “Polarization dehazing method based on spatial frequency division and fusion for a far-field and dense hazy image,” Appl. Opt. 60(30), 9319–9332 (2021). [CrossRef]  

9. J. Liang, L. Y. Ren, H. J. Ju, E. S. Qu, and Y. L. Wang, “Visibility enhancement of hazy images based on a universal polarimetric imaging method,” J. Appl. Phys. 116(17), 173107 (2014). [CrossRef]  

10. S. Fang, X. S. Xia, X. Huo, and C. W. Chen, “Image dehazing using polarization effects of objects and airlight,” Opt. Express 22(16), 19523–19537 (2014). [CrossRef]  

11. J. Liang, W. F. Zhang, L. Y. Ren, H. J. Ju, and E. S. Qu, “Polarimetric dehazing method for visibility improvement based on visible and infrared image fusion,” Appl. Opt. 55(29), 8221–8226 (2016). [CrossRef]  

12. L. Shen, M. Reda, and Y. Zhao, “Image-matching enhancement using a polarized intensity-hue-saturation fusion method,” Appl. Opt. 60(13), 3699–3715 (2021). [CrossRef]  

13. Y. Lei, B. Lei, Y. Cai, C. Gao, and F. J. Wang, “Polarimetric Dehazing Method Based on Image Fusion and Adaptive Adjustment Algorithm,” Appl. Sci. 11(21), 10040 (2021). [CrossRef]  

14. J. Liang, L. Y. Ren, H. J. Ju, W. F. Zhang, and E. S. Qu, “Polarimetric dehazing method for dense haze removal based on distribution analysis of angle of polarization,” Opt. Express 23(20), 26146–26157 (2015). [CrossRef]  

15. Y. F. Qu and Z. F. Zou, “Non-sky polarization-based dehazing algorithm for non-specular objects using polarization difference and global scene feature,” Opt. Express 25(21), 25004–25022 (2017). [CrossRef]  

16. W. Zhang, J. Liang, G. Wang, H. Zhang, and S. Fu, “Review of passive polarimetric dehazing methods,” Opt. Eng. 60(03), 030901 (2021). [CrossRef]  

17. S. P. Morgan, M. P. Khong, and M. G. Somekh, “Effects of polarization state and scatterer concentration on optical imaging through scattering media,” Appl. Opt. 36(7), 1560–1565 (1997). [CrossRef]  

18. J. Liang, H. J. Ju, L. Y. Ren, L. M. Yang, and R. G. Liang, “Generalized Polarimetric Dehazing Method Based on Low-Pass Filtering in Frequency Domain,” Sensors 20(6), C1 (2020). [CrossRef]  

19. J. Liang, L. Ren, and R. Liang, “Low-pass filtering based polarimetric dehazing method for dense haze removal,” Opt. Express 29(18), 28178–28189 (2021). [CrossRef]  

20. Z. Liang, X. Y. Ding, Z. Mi, Y. F. Wang, and X. P. Fu, “Effective polarization-based image dehazing with regularization constraint,” IEEE Geosci. Remote Sens. Lett. 19, 1–5 (2022). [CrossRef]  

21. L. H. Shen, Y. Q. Zhao, Q. N. Peng, J. C.-W. Chan, and S. G. Kong, “An iterative image dehazing method with polarization,” IEEE Trans. Multimedia 21(5), 1093–1107 (2019). [CrossRef]  

22. H. Jin, L. Qian, J. Gao, Z. Fan, and J. Chen, “Polarimetric calculation method of global pixel for underwater image restoration,” IEEE Photonics J. 13(1), 1–15 (2021). [CrossRef]  

23. H. Jin, X. Wang, Z. Fan, and N. Pan, “Linear solution method of solar position for polarized light navigation,” IEEE Sens. J. 21(13), 15042–15052 (2021). [CrossRef]  

24. A. Mittal, R. Soundararajan, and A. C. Bovik, “Making a “completely blind” image quality analyzer,” IEEE Signal Process. Lett. 20(3), 209–212 (2013). [CrossRef]  

25. L. K. Choi, J. You, and A. C. Bovik, “Referenceless prediction of perceptual fog density and perceptual image defogging,” IEEE Trans. on Image Process. 24(11), 3888–3901 (2015). [CrossRef]  


Figures (10)

Fig. 1.
Fig. 1. The statistical results of ${p_\textrm{A}}$ and p in different selected areas. (a) The locations of vertical area and the horizontal area. (b)–(c) The statistical results of ${p_\textrm{A}}$ and p in the vertical area and the horizontal area, respectively. (d) The sky area P1 and the object area P2. (e)–(f) The statistical results of ${p_\textrm{A}}$ and p in the sky area P1 and the object area P2, respectively.
Fig. 2.
Fig. 2. The estimation of ${A_\infty }$ excluding the white object. (a) The spatial location of ${A_\infty }$ with high probability in I. (b) The spatial location of ${A_\infty }$ with high probability in $\Delta p$. (c) The spatial location of ${A_\infty }$ corresponding to the intersection coordinates.
Fig. 3.
Fig. 3. The iterative flowchart for blind separation of airlight. The iteration starts with ${A_{\infty \max }}$ and ${A_{\infty \min }}$. $\hat{A}_{\max }^0$ and $\hat{A}_{\min }^0$ represent the airlight after the first iteration. ${\hat{A}_{\max }}$ and ${\hat{A}_{\min }}$ represent the airlight after iterative convergence. A is calculated by adding ${\hat{A}_{\max }}$ and ${\hat{A}_{\min }}$.
Fig. 4.
Fig. 4. The flowchart of the proposed method.
Fig. 5.
Fig. 5. The comparison of dehazing results generated by different methods for local objects. (a) ∼ (a5) The original image and the dehazing results generated by GPLPF [18], PLF [19], BCCR [3], Nonlocal [5] and the proposed method, respectively. (b) ∼ (b5) The locally enlarged results of the building corresponding to (a) ∼ (a5). (c) ∼ (c5) The locally enlarged results of the Ferris wheel corresponding to (a) ∼ (a5).
Fig. 6.
Fig. 6. The comparison of dehazing results generated by different methods for sky scenes under different haze concentrations. (L) ∼ (L5) The original image and the dehazing image generated by GPLPF [18], PLF [19], BCCR [3], Nonlocal [5] and the proposed method in light haze. (M) ∼ (M5) The original image and the dehazing image generated by GPLPF [18], PLF [19], BCCR [3], Nonlocal [5] and the proposed method in moderate haze. (D) ∼ (D5) The original image and the dehazing image generated by GPLPF [18], PLF [19], BCCR [3], Nonlocal [5] and the proposed method in dense haze. (H) ∼ (H5) The original image and the dehazing image generated by GPLPF [18], PLF [19], BCCR [3], Nonlocal [5] and the proposed method in heavy haze.
Fig. 7.
Fig. 7. The comparison of dehazing results generated by different methods for skyless scenes under different haze concentrations. (L) ∼ (L5) The original image and the dehazing image generated by GPLPF [18], PLF [19], BCCR [3], Nonlocal [5] and the proposed method in light haze. (M) ∼ (M5) The original image and the dehazing image generated by GPLPF [18], PLF [19], BCCR [3], Nonlocal [5] and the proposed method in moderate haze. (D) ∼ (D5) The original image and the dehazing image generated by GPLPF [18], PLF [19], BCCR [3], Nonlocal [5] and the proposed method in dense haze. (H) ∼ (H5) The original image and the dehazing image generated by GPLPF [18], PLF [19], BCCR [3], Nonlocal [5] and the proposed method in heavy haze.
Fig. 8.
Fig. 8. The comparison of dehazing results generated by different methods for RGB scenes under different haze concentrations. (L) ∼ (L5) The original image and the dehazing image generated by GPLPF [18], PLF [19], BCCR [3], Nonlocal [5] and the proposed method in light haze. (M) ∼ (M5) The original image and the dehazing image generated by GPLPF [18], PLF [19], BCCR [3], Nonlocal [5] and the proposed method in moderate haze. (D) ∼ (D5) The original image and the dehazing image generated by GPLPF [18], PLF [19], BCCR [3], Nonlocal [5] and the proposed method in dense haze. (H) ∼ (H5) The original image and the dehazing image generated by GPLPF [18], PLF [19], BCCR [3], Nonlocal [5] and the proposed method in heavy haze.
Fig. 9.
Fig. 9. The comparison results after color correction. (H3) ∼ (H5) The dehazing results before color correction. (ACC3) ∼ (ACC5) The dehazing results after color correction.
Fig. 10.
Fig. 10. (a) - (i) Dehazing results of the proposed method with ${\lambda _1}$ = 0.1, 0.3, 0.5 and ${\lambda _2}$ = 0.01, 0.03, 0.05, respectively.


Equations (22)

(1) $I = D + A$

(2) $D = R\exp(-\beta z)$

(3) $A = A_\infty (1 - t)$

(4) $R = \dfrac{I - A}{1 - A/A_\infty}$

(5) $\left\{ \begin{array}{l} p = \dfrac{I_{\max} - I_{\min}}{I} = \dfrac{\Delta I}{I} \\ p_A = \dfrac{A_{\max} - A_{\min}}{A} = \dfrac{\Delta A}{A} \end{array} \right.$

(6) $\Delta A = \Delta I$

(7) $p_A = \left[ 1 + \dfrac{R}{A_\infty} \cdot \dfrac{\exp(-\beta z)}{1 - \exp(-\beta z)} \right] p$

(8) $I_\alpha = \tfrac{1}{2}\left( I + Q\cos 2\alpha + U\sin 2\alpha \right)$

(9) $\left\{ \begin{array}{l} I = \tfrac{2}{3}(I_0 + I_{60} + I_{120}) \\ Q = \tfrac{2}{3}(2I_0 - I_{60} - I_{120}) \\ U = \tfrac{2}{\sqrt{3}}(I_{60} - I_{120}) \end{array} \right.$

(10) $p = \dfrac{\sqrt{Q^2 + U^2}}{I}$

(11) $\theta = \tfrac{1}{2}\arctan\left( \dfrac{U}{Q} \right)$

(12) $p_A = \max\left( \left| \dfrac{Q}{I\cos 2\theta_A} \right|, \left| \dfrac{U}{I\sin 2\theta_A} \right| \right)$

(13) $A_\infty = \dfrac{1}{n} \sum_{i=1}^{n} I\left\{ \xi_I^i \left[ I(x) > t_I \right] \cap \xi_{\Delta p}^i \left[ \Delta p(x) < t_{\Delta p} \right] \right\}$

(14) $A_\alpha = p_A A \cos^2(\alpha - \theta_A) + \dfrac{1 - p_A}{2} A$

(15) $\left\{ \begin{array}{l} A_{\max} = \dfrac{1 + p_A}{2} A \\ A_{\min} = \dfrac{1 - p_A}{2} A \end{array} \right.$

(16) $\left\{ \begin{array}{l} A_{\max} = \dfrac{1 + p_A}{2} A_\infty (1 - t) \\ A_{\min} = \dfrac{1 - p_A}{2} A_\infty (1 - t) \end{array} \right.$

(17) $\left\{ \begin{array}{l} A_{\infty \max} = \dfrac{1 + p_A}{2} A_\infty \\ A_{\infty \min} = \dfrac{1 - p_A}{2} A_\infty \end{array} \right.$

(18) $A = A_\infty T$

(19) $\arg\min_{\hat{A},\,T} \ \tfrac{1}{2} \| \hat{A} - A_\infty T \|_2^2 + \lambda_1 \sum_{j \in w} \| W_j \circ (D_j \hat{A}) \|_1 + \tfrac{\lambda_2}{2} \| T \|_2^2$

(20) $W_j = \exp\left( -\dfrac{\sum_{c \in \{r,g,b\}} \| D_j A_c \|_2^2}{2\sigma^2} \right)$

(21) $\arg\min_{\hat{A}} \ \tfrac{1}{2} \| \hat{A} - A_\infty T \|_2^2 + \lambda_1 \sum_{j \in w} \| W_j \circ (D_j \hat{A}) \|_1$

(22) $\arg\min_{T} \ \tfrac{1}{2} \| \hat{A} - A_\infty T \|_2^2 + \tfrac{\lambda_2}{2} \| T \|_2^2$
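The three-angle Stokes computation above (I, Q, U from $I_0$, $I_{60}$, $I_{120}$, then the DoP p and AoP $\theta$) translates directly into code. A minimal NumPy sketch, with the function name and the small denominator guard being our additions, assuming three registered intensity images captured at 0°, 60° and 120°:

```python
import numpy as np

def stokes_from_three_angles(i0, i60, i120):
    """Stokes parameters, DoP and AoP from three polarized intensity images.

    i0, i60, i120: intensities behind a linear polarizer at 0, 60, 120 deg.
    Returns (I, Q, U, p, theta) following the standard three-angle relations.
    """
    i0, i60, i120 = (np.asarray(a, dtype=float) for a in (i0, i60, i120))
    I = (2.0 / 3.0) * (i0 + i60 + i120)
    Q = (2.0 / 3.0) * (2.0 * i0 - i60 - i120)
    U = (2.0 / np.sqrt(3.0)) * (i60 - i120)
    p = np.sqrt(Q**2 + U**2) / np.maximum(I, 1e-8)  # guard against I = 0
    theta = 0.5 * np.arctan2(U, Q)                  # quadrant-aware AoP
    return I, Q, U, p, theta
```

Using `arctan2` instead of a bare `arctan(U/Q)` avoids the quadrant ambiguity and the division by zero when Q vanishes, while agreeing with the analytic form wherever both are defined.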