
Thin cloud removal from single satellite images


Abstract

A novel method for removing thin clouds from a single satellite image is presented based on a cloud physical model. Given the unevenness of clouds, the cloud background is first estimated in the frequency domain, and an adjustment function is used to suppress the areas with greater gray values and enhance the dark objects. An image mainly influenced by transmission is obtained by subtracting the cloud background from the original cloudy image. The final image, with proper color and contrast, is obtained by decreasing the effect of transmission using the proposed max–min radiation correction approach and an adaptive brightness factor. The results indicate that the proposed method removes thin clouds, improves contrast, restores color information, and retains detail more effectively than commonly used image enhancement and haze removal methods.

©2014 Optical Society of America

1. Introduction

During the acquisition of optical satellite images, clouds alter the color and brightness of different image regions. Light transmission and scattering attenuation by clouds blur the scene and reduce the contrast among ground objects. Images of objects cannot be obtained at all when clouds are too thick, and the uneven illumination caused by thin clouds hampers subsequent image processing. Removing thin clouds from satellite images therefore improves the capability and accuracy of applications that use such images.

Thin cloud removal has been an important subject in remote sensing for many years [1–4]. Similar to haze, fog, and smoke, clouds are often collectively referred to as “haze” for simplicity, because all of them cause atmospheric absorption and scattering; they differ mainly in the material, size, shape, and concentration of the atmospheric particles. Haze removal methods can therefore also be applied to cloudy images in some cases.

Most land surface reflections are obstructed in regions covered with thick clouds. Cloud removal methods for such regions can be classified into measurement-based approaches [5,6] and traditional synthesis and image inpainting techniques [7]. However, land surface information may still be extracted from areas covered with thin clouds, where clouds can be removed by image enhancement or image restoration. Commonly used enhancement methods include histogram matching and color constancy enhancement, e.g., single- and multiple-scale retinex [8,9] and multiscale retinex with color restoration (MSRCR) [10,11]. Image enhancement-based algorithms stretch a portion or the entirety of an image to increase the brightness or contrast and highlight certain details, thereby reducing the interference of haze. Without considering image degradation, these algorithms enhance or weaken image details according to specific needs, producing images that are more suitable for human visual observation or machine recognition. However, because they do not consider the causes of the interference or the physical model of clouds, such algorithms can only reduce the interference of clouds to a certain extent, leaving cloud residues in the processed images. The enhanced images also tend to exhibit color distortion because the color proportions of the original image are lost.

Image restoration-based algorithms are established according to the physical model of haze, using an existing model of the observed image degradation process to restore a clear image. Starting from light scattering, these algorithms analyze the causes of clouds, establish a degradation model, describe the haze formation process mathematically, estimate the model parameters using various kinds of prior knowledge, and finally solve the resulting equations to obtain clear images [12–15]. Haze removal from a single image has developed rapidly in recent years and represents a breakthrough, but the available methods all depend on prior knowledge or assumptions.

Tan [16] found that the contrast of a haze-free image is higher than that of the same image before haze removal. Haze interference was eliminated using a maximized-contrast approach and further regularized by a Markov random field model. Although the results were consistent with visual observation, the physical properties of the images were altered, and cavity defects occurred in the depth discontinuity areas. Fattal [17] estimated the scene albedo and deduced a transmission model from single images. The albedo in a local area was assumed to be a constant matrix, and the object surface reflectivity and transmission map were assumed to be statistically independent within the local area. Thus, the albedo direction was estimated using independent component analysis, and the global image color was then predicted using a Markov random field model. This method overcomes the shortcomings of Tan’s method and produces better visual effects and a more effective depth map with physical validity. However, the statistical results may not be reliable when dark noise exists (e.g., in areas with heavy haze), because the local statistical independence no longer holds. Fattal’s method is also not applicable to heavy-haze areas lacking color information or to grayscale images, because the statistics are based on color. At the 2009 Conference on Computer Vision and Pattern Recognition, Kaiming He proposed a single image haze removal method using a dark channel prior and a haze imaging model, which yielded remarkable results [18,19]. He’s method can produce a good depth map and a high-quality haze-free image, independent of significant variance in transmission or surface shading and with few halo artifacts. Visibility restoration from a single image without any additional information can also be treated as a filtering problem: Tarel proposed a novel algorithm based on the median filter, whose main advantage is its speed [20].

Several of the methods described above achieve remarkable results by assuming a uniformly distributed haze over a region, so that all parts of the image exhibit approximately the same degree of blurring. However, satellite images are obtained from high altitudes with a large angular view, and various forms of clouds may exist simultaneously in one image, which invalidates these assumptions. In addition, given the uncertain distribution of thin clouds, the known haze removal methods are not always effective for cloud removal. In the present study, we propose a novel thin cloud removal method that estimates and removes the cloud background based on a cloud physical model, eliminates the transmission effect of clouds, and restores the color information under clouds using the proposed max–min radiation correction approach and an adaptive brightness factor.

The remainder of this paper is organized as follows. Section 2 introduces the proposed thin cloud removal method, including cloud background removal and max–min radiation correction. The experimental results, along with comparisons against existing methods, are provided in Section 3. The conclusion is stated in Section 4.

2. Proposed thin cloud removal method

The commonly accepted physical model [21] of thin cloud is illustrated in Fig. 1:

Fig. 1 The physical model of thin cloud.

The cloudy image degradation model used for areas with thin clouds is

$$I(x,y)=aL\,r(x,y)\,t(x,y)+L\,[1-t(x,y)] \tag{1}$$

where x and y are the pixel coordinates, L is the intensity of the incident solar radiation, r(x,y) and t(x,y) are the albedo of the ground object and the transmission of the cloud, respectively, and a is the attenuation coefficient of radiation during propagation. The first term, aLr(x,y)t(x,y), denotes the signal reflected by the ground that subsequently passes through the cloud, and the second term, L[1 − t(x,y)], called the cloud background in this paper, represents the signal reflected by the cloud. Cloud removal is the elimination of L[1 − t(x,y)] and the reduction of the influence of the transmission t(x,y).
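As a concrete illustration (not part of the original paper), the short numpy sketch below synthesizes a cloudy band from an albedo map and a transmission map following Eq. (1); the albedo pattern, transmission field, and parameter values are illustrative assumptions.

```python
import numpy as np

def simulate_cloudy_band(r, t, L=255.0, a=0.9):
    """Synthesize a cloudy band following Eq. (1):
    I = a*L*r*t + L*(1 - t), with albedo r and transmission t in [0, 1]."""
    return a * L * r * t + L * (1.0 - t)

# Toy scene: a bright square on a dark background under a smoothly
# varying thin cloud (t close to 1 means almost no cloud).
r = np.full((64, 64), 0.2)
r[20:40, 20:40] = 0.8
yy, xx = np.mgrid[0:64, 0:64]
t = 1.0 - 0.4 * np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / 800.0)
I = simulate_cloudy_band(r, t)   # gray values rise where the cloud is thicker
```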

2.1 Cloud background removal

The radiation of thin clouds varies gradually in the space domain, so the cloud background L[1 − t(x,y)] corresponds to the low-frequency coefficients in the frequency domain and can be estimated using a low-pass filter. The fast Fourier transform (FFT) is first applied to every band of the input image, such as red, green, blue, or near infrared.

Then, a Gaussian low-pass filter is applied to FFT(I) in the frequency domain; the filter is defined as

$$H(u,v)=\exp\!\left[-\frac{D^{2}(u,v)}{2\sigma_{0}^{2}}\right] \tag{2}$$

where FFT(·) is the FFT operator, σ0 denotes the cutoff frequency, u and v are the frequency-domain coordinates, and D(u,v) denotes the distance between the coordinate (u,v) and the origin of the FFT. The cloud background of the I band, i.e., the second term L[1 − t(x,y)] in Eq. (1), can be estimated by applying the inverse FFT to the filtered result
$$B_{\mathrm{cloud}}=\mathrm{IFFT}[\mathrm{FFT}(I)\times H] \tag{3}$$

where IFFT is the inverse FFT. Considering the unevenness of clouds, we use the following adjustment function to suppress the areas with greater gray values and enhance the dark objects:
$$B_{\mathrm{cloud}}(x,y)=\begin{cases}B_{\mathrm{cloud}}(x,y)+\left[\dfrac{B_{\mathrm{cloud}}(x,y)-th}{\max(B_{\mathrm{cloud}})-th}\right]^{\lambda}d_{1}, & B_{\mathrm{cloud}}(x,y)>th\\[2ex] B_{\mathrm{cloud}}(x,y)-\left[\dfrac{th-B_{\mathrm{cloud}}(x,y)}{th-\min(B_{\mathrm{cloud}})}\right]^{\lambda}d_{2}, & B_{\mathrm{cloud}}(x,y)\le th\end{cases} \tag{4}$$

where th = [max(Bcloud) + min(Bcloud)]/2, and max(Bcloud) and min(Bcloud) are the maximum and minimum values of the cloud background, respectively. The adjustment factors are d1, d2, and λ. For simplicity, we set λ = 2 in this paper. Removing the cloud background from the input image in each band yields a result that is mainly affected by the transmission t:
$$I'(x,y)=I(x,y)-B_{\mathrm{cloud}}(x,y)+\mathrm{offset} \tag{5}$$

where offset is a user-defined constant. In this paper, we set it to the average value of the I band for simplicity.
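For clarity, the following minimal numpy sketch implements Eqs. (2)–(5): the cloud background is estimated with a Gaussian low-pass filter in the frequency domain, adjusted, and then subtracted. The function names and the default values of sigma0, d1, and d2 are our own illustrative choices, not values prescribed by the paper.

```python
import numpy as np

def estimate_cloud_background(band, sigma0=10.0):
    """Estimate the cloud background L(1 - t) of one band, Eqs. (2)-(3):
    Gaussian low-pass filtering of the centered spectrum."""
    M, N = band.shape
    F = np.fft.fftshift(np.fft.fft2(band))        # spectrum with DC at the center
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2        # squared distance to the origin
    H = np.exp(-D2 / (2.0 * sigma0 ** 2))         # Gaussian low-pass filter, Eq. (2)
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))  # Eq. (3)

def adjust_background(B, d1, d2, lam=2.0):
    """Adjustment function of Eq. (4): raise bright (thick-cloud) areas by
    up to d1 and lower dark areas by up to d2, with exponent lam."""
    th = (B.max() + B.min()) / 2.0
    out = B.copy()
    hi = B > th
    out[hi] = B[hi] + ((B[hi] - th) / (B.max() - th)) ** lam * d1
    out[~hi] = B[~hi] - ((th - B[~hi]) / (th - B.min())) ** lam * d2
    return out

def remove_cloud_background(band, d1=20.0, d2=20.0, sigma0=10.0):
    """Eq. (5): subtract the adjusted background; offset = band mean."""
    B = adjust_background(estimate_cloud_background(band, sigma0), d1, d2)
    return band - B + band.mean()
```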

Figure 2(a) is a single band of the original image with uneven cloud distribution, which can be observed in the estimated cloud backgrounds in Figs. 2(c)–2(e), where the numbers are the maximum gray values of the neighboring pixels inside the red squares. Figure 2(b) is the more even single-band image mainly influenced by transmission, but it has low contrast because of the removal of the cloud background. The adjusted cloud background is removed from the original image. Hence, the areas with greater gray values caused by thicker clouds are suppressed, because the output of the adjustment function is higher than it would be without adjustment (d1 = 0 and d2 = 0), and the dark areas are enhanced because of the low output value of the adjustment function. These parameters can be set according to cloud thickness. According to Figs. 2(c)–2(e), as d1 increases, the gray values of cloud areas become greater, and the gray values of dark areas decrease as d2 increases; i.e., a larger d1 is used to remove more clouds from the image, and a larger d2 is used to retain more ground information.

Fig. 2 Cloud background estimation: (a) original single-band image; (b) image mainly influenced by transmission; (c) estimated cloud background with d1 = 10 and d2 = 10; (d) estimated cloud background with d1 = 20 and d2 = 20; (e) estimated cloud background with d1 = 30 and d2 = −30; (f) the adjustment function.

2.2 Max–min radiation correction

In this step, the effect of transmission is decreased and the contrast is increased. After removing the cloud background from the original image, we find that, in every band, the transmission, which is evenly distributed over the entire image, results in high gray values in all pixels. Thus, several dark objects whose gray values should be close to zero show high values, whereas several bright objects become dimmer. These are radiation errors caused by the transmission of the clouds. Hence, a max–min radiation correction approach is proposed to eliminate the influence of transmission and enhance contrast.

The proposed approach uses the gray values of some of the darkest and brightest objects in an image to adjust the other objects, because the numbers of pixels belonging to the darkest and brightest objects are too small to noticeably affect the global image. We use T = αMN to denote the threshold, where α is a scale parameter, and M and N are the height and width of the image, respectively. We first obtain the histogram h(n) of I′, where n = 1, 2, …, 256, and then accumulate the histogram values from both ends. We then use the gray values h_max and h_min that satisfy the following requirements as the maximum and minimum gray values:

$$\begin{cases}\displaystyle\sum_{i=1}^{h_{\min}} h(i)\le T \quad\text{and}\quad \sum_{i=1}^{h_{\min}+1} h(i)>T\\[2ex] \displaystyle\sum_{j=h_{\max}}^{256} h(j)\le T \quad\text{and}\quad \sum_{j=h_{\max}-1}^{256} h(j)>T\end{cases} \tag{6}$$

where h_min and h_max are treated as the gray values of the selected darkest and brightest objects. Let I_restored denote a band of the final image that corresponds to I′; the max–min radiation correction is then

$$I_{\mathrm{restored}}(x,y)=\begin{cases}0, & I'(x,y)<h_{\min}\\[1ex] 255\left[\dfrac{I'(x,y)-h_{\min}}{h_{\max}-h_{\min}}\right]^{\beta}, & h_{\min}\le I'(x,y)\le h_{\max}\\[2ex] 255, & I'(x,y)>h_{\max}\end{cases} \tag{7}$$

The gray values of the darkest and brightest objects are adjusted to 0 and 255, respectively, and the other objects are corrected correspondingly. Figure 3 shows the images and corresponding histograms before and after processing with α = 0.005. β is the brightness factor used to adjust the brightness of the final image, where 0 < β ≤ 1; smaller β values result in brighter final images. In Fig. 3(d), the peak of the histogram moves to the low gray-scale region as β increases, indicating that the final image becomes darker. This can also be seen in Figs. 3(f) and 3(g), with β = 0.6 and β = 1.0, respectively, where Fig. 3(g) is darker than Fig. 3(f). To obtain a result with proper brightness, we choose β adaptively using the following formula:

$$\beta=\begin{cases}\mathrm{mean}[A(x,y)]/128, & \mathrm{mean}[A(x,y)]\le 128\\[1ex] 128/\mathrm{mean}[A(x,y)], & \text{otherwise}\end{cases} \tag{8}$$

where $A(x,y)=\frac{1}{N}\sum_{i=1}^{N} I'_{i}(x,y)$ is the average of the N-band image I′ mainly influenced by transmission, and mean(A) is the function used to obtain the average value of A. Figure 3(h) is the colored cloud-free image with adaptive β, in which the brightness is between those of Figs. 3(f) and 3(g) and is more suitable for visual interpretation.
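A minimal numpy sketch of the max–min radiation correction of Eqs. (6)–(8) follows, assuming 8-bit gray values. The function names and the clipping used to realize the 0 and 255 branches of Eq. (7) are our own choices.

```python
import numpy as np

def select_h_min_max(band, alpha=0.005):
    """Eq. (6): accumulate the histogram from both ends until the pixel
    count exceeds T = alpha*M*N, giving the gray values of the darkest
    (h_min) and brightest (h_max) objects."""
    M, N = band.shape
    T = alpha * M * N
    h, _ = np.histogram(band, bins=256, range=(0, 256))
    c_lo = np.cumsum(h)               # cumulative count from the dark end
    c_hi = np.cumsum(h[::-1])         # cumulative count from the bright end
    h_min = max(int(np.searchsorted(c_lo, T, side='right')) - 1, 0)
    h_max = min(256 - int(np.searchsorted(c_hi, T, side='right')), 255)
    return h_min, h_max

def adaptive_beta(bands):
    """Eq. (8): brightness factor from the mean of the band-averaged image."""
    m = float(np.mean(bands))
    return m / 128.0 if m <= 128.0 else 128.0 / m

def max_min_correction(band, h_min, h_max, beta):
    """Eq. (7): stretch [h_min, h_max] to [0, 255] with brightness beta;
    np.clip realizes the 0 and 255 branches."""
    s = np.clip((band - float(h_min)) / float(h_max - h_min), 0.0, 1.0)
    return 255.0 * s ** beta
```

A band would then be restored as max_min_correction(band, *select_h_min_max(band), adaptive_beta(all_bands)), with all_bands holding every band after cloud background removal.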

Fig. 3 Transmission reduction: (a) image mainly influenced by transmission, (b) cloud-free image with β = 1.0, (c) histogram prior to processing, (d) histograms after processing with different values of β, (e) original cloudy color image, (f) cloud-free color image with β = 0.6, (g) cloud-free color image with β = 1.0, and (h) cloud-free color image with adaptive β.

Figure 3(a) shows the low-contrast image after the cloud background was removed, and Fig. 3(b) is the corresponding image with proper contrast after max–min radiation correction. In Fig. 3(c), the histogram is concentrated in the high gray-level region, whereas in Fig. 3(d), with different β values, the histograms cover all the gray levels. These results indicate that the images have a wider dynamic gray-scale range as well as more suitable brightness and contrast. Careful comparison of Figs. 3(e)–3(h) in terms of definition and color shows the effectiveness of the proposed method in removing clouds.

Figure 4 shows removal results with different d1, d2 and adaptive β. The dark mountain areas at the upper right of the image become brighter as d2 increases, and the gray values of the thick cloud areas at the bottom left of the image become progressively darker as d1 increases. These results are consistent with the analysis of the adjustment function in Section 2.1. The values of d1 and d2 vary for different images and can be determined according to the thickness of the clouds in the images.

Fig. 4 Cloud removal results with different d1, d2 and adaptive β: (a) original single-band image; (b) result with d1 = 10 and d2 = 10; (c) result with d1 = 20 and d2 = 20; (d) result with d1 = 30 and d2 = −30.

3. Experimental results

Many satellite images with different thin cloud levels were tested using the proposed method, and the results were compared with those obtained from MSRCR, the Homomorphic Filter (HF), an improved Wavelet-based approach [22], and He’s [19] and Tarel’s [20] haze removal methods. He’s algorithm has been shown to outperform existing haze removal methods. The source code of Tarel’s algorithm is available at http://perso.lcpc.fr/tarel.jean-philippe/publis/iccv09.html. The max–min radiation correction approach was also applied to the results of all of the above methods. Ground truth images were used to verify the effectiveness of cloud removal. However, we could not find cloud-free images acquired at the same time as the cloudy images; hence, we used cloud-free images from the same seasons in different years. Although some objects may differ between the original cloudy images and the ground truth images, the retention of contrast, color, and texture can still serve as standards for verifying all the cloud removal results. Figure 5 shows the results with thinner clouds, and Fig. 8 shows the results with thicker clouds.

Fig. 5 Thin cloud removal results: (a) Original cloudy image, (b) Ground truth image, (c) Tarel’s result, (d) He’s result, (e) MSRCR, (f) HF, (g) Wavelet-based result, and (h) Our result.

Figure 6 shows subset images of Fig. 5. The color of the MSRCR image differs greatly from that of the original image despite the improved contrast. The HF result tends to be dimmer, whereas the Wavelet-based result shows sharper edges. The results from Tarel’s and He’s methods show darker images with some residual clouds and significant differences in color, contrast, and texture from the ground truth, indicating that they are not suitable for the cloud removal task even though cloud is one type of haze. Our method provides better cloud removal than the other methods and presents characteristics similar to those of the ground truth.

Fig. 6 Subset images of the thin cloud removal results in Fig. 5: (a) Original cloudy image, (b) Ground truth image, (c) Tarel’s result, (d) He’s result, (e) MSRCR, (f) HF, (g) Wavelet-based result, and (h) Our result.

Two objective quality indices, the mean and the standard deviation, are used to assess the quality of cloud removal. For a cloud-free image, the mean and standard deviation of every part should be similar to one another, and the standard deviation should be as large as possible. Hence, all the images are divided uniformly into five parts: upper left, upper right, bottom left, bottom right, and middle. The mean and standard deviation of these five parts in every band are calculated, and the results are shown in Fig. 7.
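The paper does not specify how the five parts are delimited; the sketch below assumes the four quadrants plus a centered block of the same size, which matches the part names used here.

```python
import numpy as np

def five_part_stats(band):
    """Mean and standard deviation of the five evaluation regions
    (assumed: four quadrants plus a centered block of quadrant size)."""
    M, N = band.shape
    m2, n2 = M // 2, N // 2
    parts = {
        'upper left':   band[:m2, :n2],
        'upper right':  band[:m2, n2:],
        'bottom left':  band[m2:, :n2],
        'bottom right': band[m2:, n2:],
        'middle':       band[M // 4:M // 4 + m2, N // 4:N // 4 + n2],
    }
    return {name: (float(p.mean()), float(p.std())) for name, p in parts.items()}
```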

Fig. 7 Mean and standard deviation of the five parts of the images in every band: (a) mean of R band; (b) standard deviation of R band; (c) mean of G band; (d) standard deviation of G band; (e) mean of B band; (f) standard deviation of B band; (g) average mean of all bands; (h) average standard deviation of all bands.

Due to the uneven cloud cover, the mean value of a cloudy image is much higher than that of a cloud-free image. As shown in Fig. 7, the mean values of all bands in the proposed method are obviously lower than those of the original image, whereas the mean values of the other methods are even higher than those of the original image. In addition, the mean values of the five parts are similar to one another in the proposed method, whereas for the other methods there are large differences among the five parts. A higher standard deviation usually indicates that an image has higher definition and better quality. Although some of the standard deviation values of the five parts in the proposed method are not the highest, the differences among the five parts are the smallest except for the upper left part, indicating that the other four parts have similar definition. Since there are some water areas in the upper left part, its standard deviation is evidently lower than that of the other four parts. Therefore, the proposed method shows the best performance both subjectively and objectively (Fig. 8).

Fig. 8 Thick cloud removal results: (a) Original cloudy image, (b) Ground truth image, (c) Tarel’s result, (d) He’s result, (e) MSRCR, (f) HF, (g) Wavelet-based result, and (h) Our result.

From the subset images in Fig. 9, massive clouds can still be observed in the images processed by all algorithms except the proposed method, and the ground objects under the clouds still present poor contrast. MSRCR produced the worst result because of its untrue color and residual clouds. Our processed image shows some differences from the ground truth image in the color of green trees and buildings. These differences have two main causes. First, the cloudy and ground truth images were acquired in different years, which resulted in different ground objects, such as trees. Second, the cloud is so thick that restoring the exact spectral information of the ground objects becomes difficult. Nevertheless, compared with the results of the other methods, our method removed almost all of the clouds. Additionally, our image exhibits the best contrast, color, and texture, being the most similar to the ground truth among all the compared images.

Fig. 9 Subset images of the thick cloud removal results in Fig. 8: (a) Original cloudy image, (b) Ground truth image, (c) Tarel’s result, (d) He’s result, (e) MSRCR, (f) HF, (g) Wavelet-based result, and (h) Our result.

The standard deviation, mean, and definition of all the images are calculated, and the results are shown in Table 1. The definition is expressed as

$$DE=\frac{1}{(M-1)(N-1)}\sum_{x=1}^{M-1}\sum_{y=1}^{N-1}\sqrt{\frac{\Delta_{x}^{2}+\Delta_{y}^{2}}{2}} \tag{9}$$

where

$$\Delta_{x}=I(x+1,y)-I(x,y) \tag{10}$$

$$\Delta_{y}=I(x,y+1)-I(x,y) \tag{11}$$

M and N are the height and width of image I, respectively, and x and y are the pixel coordinates. A higher definition indicates higher quality.

Table 1. Objective evaluation of results in Fig. 8.
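As a sketch, the definition index of Eqs. (9)–(11) can be computed with numpy as follows; the vectorized differences reproduce Δx and Δy over the (M−1)×(N−1) grid.

```python
import numpy as np

def definition(I):
    """Definition index DE of Eqs. (9)-(11): root-mean-square of the
    horizontal and vertical gray-level differences."""
    I = np.asarray(I, dtype=np.float64)
    dx = I[1:, :-1] - I[:-1, :-1]     # Delta_x = I(x+1, y) - I(x, y)
    dy = I[:-1, 1:] - I[:-1, :-1]     # Delta_y = I(x, y+1) - I(x, y)
    return float(np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0)))
```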

From Table 1, although some values for the proposed method are slightly lower than those of Tarel’s and He’s methods, overall, the results of the proposed method have the highest standard deviation and definition and the smallest mean value, indicating that the proposed method has the best cloud removal effect.

Our proposed method can effectively remove clouds, restore true color, and enhance contrast. In contrast, the results from the other algorithms present several drawbacks, such as untrue color and low contrast. Some additional results are shown in Fig. 10 to demonstrate the effectiveness of the proposed method in cloud removal.

Fig. 10 Removal of clouds from several satellite images using the proposed method: (a), (c), and (e) are original cloudy images, whereas (b), (d), and (f) are the corresponding cloud-free images.

4. Conclusion

A two-step method based on the cloudy image degradation model is proposed to remove thin clouds from satellite images. The cloud background is first estimated and then modified using an adjustment function. An image mainly influenced by transmission is obtained by removing the cloud background from the original cloudy image, and the cloud-free image is then produced using the max–min radiation correction approach and an adaptive brightness factor. The results indicate that thin clouds can be effectively removed using the proposed method and that the restored images also perform well visually. Given that all bands are processed independently, the proposed method is also suitable for removing thin clouds from satellite images with an arbitrary number of bands, such as multi-/hyper-spectral images. The proposed algorithm is effective and simple. The visual and objective performance of cloud removal could be improved by a more accurate estimation of the parameters of the physical model of thin clouds, and this is our future work.

Acknowledgment

This work is jointly supported by the International Science & Technology Cooperation Program of China (No. 2010DFA92720-24); National Natural Science Foundation program (No. 41301403, No.61172174, No.40801165 and No.10978003); Chongqing Basic and Advanced Research General Project (No. cstc2013jcyjA40010); 863 project (No. 2009AA121404); and the Fundamental Research Funds for the Central Universities (No. 111056 and No. 201121302020008).

References and links

1. Y. Zhang, B. Guindon, and J. Cihlar, “An image transform to characterize and compensate for spatial variations in thin cloud contamination of Landsat images,” Remote Sens. Environ. 82(2-3), 173–187 (2002). [CrossRef]  

2. Y. Zhang, B. Guindon, and J. Cihlar, “An image transform to characterize and compensate for spatial variations in thin cloud contamination of Landsat images,” Remote Sens. Environ. 82(2-3), 173–187 (2002). [CrossRef]  

3. C. Feng, J. W. Ma, Q. Dai, and X. Chen, “An improved method for cloud removal in ASTER data change detection,” IGARSS 04(5), 3387–3389 (2004).

4. A. Maalouf, P. Carré, B. Augereau, and C. Fernandez-Maloigne, “A Bandelet-Based Inpainting Technique for Clouds Removal From Remotely Sensed Images,” IEEE Trans. Geosci. Rem. Sens. 47(7), 2363–2371 (2009). [CrossRef]  

5. S. Benabdelkader and F. Melgani, “Contextual spatiospectral postreconstruction of cloud-contaminated images,” IEEE Geosci. Remote Sens. Lett. 5(2), 204–208 (2008). [CrossRef]

6. F. Melgani, “Contextual reconstruction of cloud-contaminated multitemporal multispectral images,” IEEE Trans. Geosci. Rem. Sens. 44(2), 442–455 (2006). [CrossRef]

7. M. Bertalmio, L. Vese, G. Sapiro, and S. Osher, “Simultaneous structure and texture image inpainting,” IEEE Trans. Image Process. 12(8), 882–889 (2003). [CrossRef]   [PubMed]  

8. Z. Rahman, D. D. Jobson, and G. A. Woodell, “Retinex processing for automatic image enhancement,” J. Electron. Imaging 13(1), 100–110 (2004). [CrossRef]  

9. D. J. Jobson, Z. Rahman, and G. A. Woodell, “Properties and performance of a center/surround Retinex,” IEEE Trans. Image Process. 6(3), 451–462 (1997). [CrossRef]   [PubMed]  

10. D. J. Jobson, Z. Rahman, and G. A. Woodell, “A multiscale Retinex for bridging the gap between color images and the human observation of scenes,” IEEE Trans. Image Process. 6(7), 965–976 (1997). [CrossRef]   [PubMed]  

11. J. Liu, Z. F. Shao, and Q. M. Cheng, “Color constancy enhancement under poor illumination,” Opt. Lett. 36(24), 4821–4823 (2011). [CrossRef]   [PubMed]  

12. Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, “Instant Dehazing of Images Using Polarization,” IEEE Conf. Computer Vision and Pattern Recognition, 1, 325–332 (2001).

13. S. Shwartz, E. Namer, and Y. Y. Schechner, “Blind Haze Separation,” IEEE Conf. Computer Vision and Pattern Recognition, 2, 1984–1991 (2006).

14. J. Kopf, B. Neubert, B. Chen, M. Cohen, D. Cohen-Or, O. Deussen, M. Uyttendaele, and D. Lischinski, “Deep Photo: Model-Based Photograph Enhancement and Viewing,” ACM Trans. Graph. 27(5), 1–10 (2008). [CrossRef]  

15. S. G. Narasimhan and S. K. Nayar, “Contrast Restoration of Weather Degraded Images,” IEEE Trans. Pattern Anal. Mach. Intell. 25(6), 713–724 (2003). [CrossRef]  

16. R. Tan, “Visibility in bad weather from a single image,” IEEE Conference on Computer Vision and Pattern Recognition, 1–8 (2008). [CrossRef]

17. R. Fattal, “Single image dehazing,” ACM SIGGRAPH, 1–9 (2008).

18. K. M. He, J. Sun, and X. O. Tang, “Single Image Haze Removal Using Dark Channel Prior,” IEEE Conference on Computer Vision and Pattern Recognition, 1956–1963 (2009).

19. K. M. He, J. Sun, and X. O. Tang, “Single Image Haze Removal Using Dark Channel Prior,” IEEE Trans. Pattern Anal. Mach. Intell. 33(12), 2341–2353 (2010). [PubMed]  

20. J. P. Tarel and N. Hautière, “Fast Visibility Restoration from a Single Color or Gray Level Image,” IEEE International Conference on Computer Vision (ICCV’09), 2201–2208 (2009). [CrossRef]

21. Z. K. Liu and B. R. Hunt, “A new approach to removing cloud cover from satellite imagery,” Comput. Vis. Graph. Image Process. 25(2), 252–256 (1984). [CrossRef]  

22. F. Chen, D. M. Yan, and Z. M. Zhao, “Haze detection and removal in remote sensing images based on undecimated wavelet transform,” Geomatics and Information Science of Wuhan University, 71–74 (2007).
