
Atmospheric cloud algorithms for day/night whole sky imagers


Abstract

This paper discusses new atmospheric cloud algorithms that we have developed to detect the presence and location of clouds in images of the sky acquired by day/night whole sky imagers, and to categorize the clouds by opacity. The day/night whole sky imagers that we developed are ground-based sensors that acquire digital imagery of the full sky down to the horizon in several visible spectral bands and a near-infrared spectral band. The instruments acquire accurate data under daylight, moonlight, and starlight conditions. This paper discusses the previously unpublished day and night cloud algorithms. The day cloud algorithm identifies opaque clouds using a red/blue ratio and identifies thin clouds by comparing current red/blue ratios with the ratios anticipated for clear skies. The clear sky ratios include an image-to-image adaptive algorithm feature to adjust for haze amount. The night cloud algorithms are based on determinations of Earth-to-space beam transmittances and on comparisons of current radiances with radiance distributions for clear skies and for opaque clouds. They also include an adaptive algorithm feature to adjust for changes in haze amount. In addition, these new night algorithms report the Earth-to-space beam transmittance in selected directions. The algorithms provide a pixel-by-pixel determination of the cloud results with excellent accuracy. This paper discusses the new day and night cloud algorithms and an assessment of their accuracy, as well as a short discussion of the pros and cons of using these wave bands versus other wave bands.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. INTRODUCTION

This paper discusses the previously unpublished atmospheric cloud algorithms that we developed to detect the presence and location of clouds in images of the sky acquired by day/night whole sky imagers (D/N WSIs) and to categorize the clouds by opacity. This paper is the last in a set of four related papers. The first paper discussed our previously unpublished work to develop D/N WSIs, including a discussion of the instruments and applications [1]. The second documents the previously unpublished radiometric calibrations of these imagers and extinction imagers (EIs) [2]. These new calibration results are used in the D/N WSI cloud algorithms as well as the EI algorithms. The third paper documents our previously unpublished development of EI systems and their algorithms that determine beam transmittance and path extinction along horizontal paths of sight [3]. This current and fourth paper documents our previously unpublished day and night cloud algorithms and their results.

The D/N WSIs are the culmination of a family of fully automated digital whole sky imagers (WSIs) we developed in the Atmospheric Optics Group (AOG), at the Marine Physical Lab (MPL), Scripps Institution of Oceanography (SIO), University of California San Diego. The instruments were fielded at multiple sites, generally within the U.S. The instruments, as well as the development of the cloud algorithms, are documented in many previous papers (e.g., [4,5]) that are referenced by the first paper [1], as well as other papers referenced by those papers. The WSIs are ground-based sensors that acquired digital imagery of the full sky down to the horizon in several visible spectral bands and a near-infrared (NIR) spectral band, in order to measure the radiance distribution of the sky and detect the presence and distribution of clouds. The D/N WSIs automatically acquired high-quality digital imagery of the sky under all conditions, including full sunlight through moonlight down to starlight (no-moon) conditions.

The new cloud algorithms that we developed include a day algorithm that detects the presence of clouds on a pixel-by-pixel basis over the full sky, and classifies the clouds as thin or opaque. Our night algorithm makes similar identifications and can also provide an assessment of the Earth-to-space beam transmittance in 100 to 200 directions at night. The algorithms provide excellent results over the full sky.

The D/N WSIs acquired 16-bit images of the upper hemisphere with a field of view of about 181°. These data could either be processed automatically in the field, or processed in archival mode using the cloud algorithms. The algorithms include calibration corrections to adjust for sensor characteristics [2]; logic to handle opaque clouds; logic for thin clouds; and logic to adjust for variations in haze amount. This paper will discuss the algorithm methods and provide sample results and a discussion of their accuracy.

To the best of our knowledge, the AOG day WSIs, developed and deployed in the early 1980s, were the first digital WSIs, and the D/N WSIs, developed and deployed in the early 1990s, were the first WSIs capable of both day and night cloud determination. We also later developed a more sophisticated daylight visible/NIR WSI (VN WSI) [6–8]. The WSIs have been used for many applications, as documented earlier [1]. This earlier reference also documents many additional systems that have been developed in later decades by other researchers. These include several day-only sky imagers [9–12] deployed for a variety of applications, including support of solar resource development. Night systems have been developed in the visible [13], and day/night systems have been developed in the mid-IR [14] and long-wave IR (LWIR) [15,16]. While a full discussion of the relative uses of these systems is beyond the scope of this paper, we will provide additional comments later in this paper.

Although we will concentrate our algorithm discussion on the D/N WSI, it should be noted that we also adapted earlier versions of the algorithms for our VN WSI, operating in the visible and NIR. Also, we previously developed early versions of the algorithms for our earlier generation day WSIs and used them to process a large database in the late 1980s [1,4]. Data from the day WSIs at 10-min intervals from a 14-month period at four stations in the U.S. were processed to yield cloud decision images. We would also like to note that we used a version of the day-cloud algorithm to process approximately 7000 daytime images from the D/N WSI, and the data were then further processed to extract cloud-free line of sight (CFLOS) statistics [17,18]. With our current algorithms, we processed a large database acquired by the D/N WSIs and delivered the data to the U.S. Air Force. This processed database includes daytime results at 1-min intervals and night results at 2-min intervals for 17–41 months at each of four sites [5].

A. Goals for the Day and Night Algorithm

Our goal in developing the current versions of both the day and the night algorithms was to detect both opaque clouds and reasonably thin clouds such as cirrus. More specifically, we were interested in clouds that would attenuate the optical radiance in the short-wave infrared (SWIR) with a cloud optical depth (COD) of more than about 1 dB (transmittance ∼0.8), although detecting only those clouds that would attenuate the radiance by 3 dB (transmittance ∼0.5) was still acceptable for our project. We were required to detect both high-altitude and low-altitude clouds, down to zenith angles of preferably 85°–90° or at least 60°. The systems and algorithms needed to handle a variety of environments, including very moist environments at or near sea level, for both day and night, including full-moon, partial-moon, and no-moon conditions. A secondary goal, of course, was to develop cloud decision results that were useful for a variety of other applications, and for this reason we aimed for a thin COD threshold of about 1 dB.

This definition of a cloud may differ from definitions used for other applications. As discussed in Koren [19,20] and Hirsch [21], a cloud can be reasonably well defined at the microscale, but the definition of a cloud is not so straightforward at the macroscale. For example, Koren and Hirsch point out that the detection of small clouds from satellite data depends strongly on the satellite resolution [20]. Clouds detected by infrared systems may or may not be exactly the same as clouds detected by WSI systems. Our sponsors attempted to use data from a lidar as a ground truth; however, they found that the lidar tended to identify extremely thin clouds that were not of interest for their application at low altitudes, yet miss significant clouds at higher altitudes. Koren and Hirsch also note that there are transition zones outside of detected cloud edges containing a mixture of aerosol and cloud droplets that can strongly affect radiation fluxes and climate research [19–21]. The WSI algorithms did at times identify regions near opaque clouds as thin cloud, and these regions may be these transition zones.

To provide a qualitative feel for the WSI cloud determinations, we note that in the field during the day, all of the visible clouds in the sky were also visible on the acquired image, and all of the clouds visible in the image were also visible in the sky, except in one case, where the WSI detected a subvisual layer. At night, the WSI detected much thinner clouds than were detected visually due to the decreased human capability under low flux levels. In developing the WSI algorithms, we found that the resulting threshold for our algorithm for thin clouds at night was approximately 0.8 dB (cloud transmittance about 0.83). The signal loss for thin clouds was not determined for the day algorithm; however, the day algorithm was able to identify very thin cirrus routinely. The algorithm intentionally did not identify uniform haze as thin cloud, because with its smaller drop size distribution, haze should not affect SWIR wavelengths of interest as much as thin clouds. However, a value related to haze amount was reported by the algorithm.

Likewise, at times in the raw imagery we could observe structure in the haze that we called pre-emergent clouds (also see [21] regarding this type of cloud). This structure was also intentionally not identified as thin cloud. We should also note that since we were interested in identifying areas of decreased optical depth, and not concerned about identifying cloud type, it was appropriate for the algorithm to identify some regions of low clouds as optically thin and some regions of high clouds as optically opaque. In summary, our goals were to identify regions of decreased optical depth due to clouds that would affect SWIR wavelengths.

2. DAYTIME CLOUD ALGORITHM

A. Day Algorithm Overview

The day-cloud algorithm includes several conceptual steps, which will be discussed in more detail later in this section. First, radiometric calibrations are applied to correct for nonideal system characteristics [2]. Masks are applied to identify the sun shade (solar/lunar occultor) and surrounding obstructions in the field of view as “no data.” Next, opaque clouds are identified by calculating, at each pixel, the ratio between an image acquired through a red filter near 650 nm or an NIR filter near 800 nm and an image acquired through a blue filter near 450 nm, i.e., the red/blue (R/B) or NIR/blue (N/B) ratio. Pixels with an R/B or N/B ratio exceeding a given threshold are identified as opaque cloud. In initial studies, we found that the challenge with the opaque clouds is to be sure there is no bias as a function of zenith angle, but this concern was resolved once we started using the uniformity calibration.

The next steps in the algorithm identified the presence of thin clouds. We found that the challenge for thin clouds is to avoid bias as a function of scattering angle with respect to the Sun. The thin-cloud algorithm is based on first extracting (in preprocessing), for each site, the R/B or N/B ratio distributions in cloud-free regions over the full sky as a function of solar zenith angle (SZA); these distributions are saved in what is designated the “clear-sky library.” Then, as data are processed, the current ratio is compared with a clear-sky background ratio generated from the library. The current R/B or N/B ratio divided by the clear-sky R/B or N/B ratio is derived and designated the perturbation ratio. This step includes an adaptive algorithm step to adjust the clear-sky ratio for current haze conditions. Pixels with a perturbation ratio above a given threshold are designated thin clouds.

Before explaining the algorithm in detail, we would like to provide sample results to orient the reader. The raw and cloud decision images for a sky with scattered opaque clouds and some distant thin clouds are shown in Figs. 1 and 2, and sample results for a case with thin clouds generated by contrails are shown in Figs. 3 and 4. The center of the image is the zenith (straight overhead), and the edges of the image are the surrounding horizon.

Fig. 1. Raw WSI image, day with red filter showing clouds, clear sky, and solar/lunar occultor.

Fig. 2. Cloud decision image. Blue, no cloud; white, opaque cloud; yellow, thin cloud; black, no data (mask).

Fig. 3. Raw red image for case with contrails.

Fig. 4. Cloud decision image for case with contrails.

The clouds are visually obvious in Figs. 1 and 3 as generally lighter pixels, generally with more texture than the cloud-free sky. The east and west are right and left, respectively, and the north and south are the bottom and top of the image (visualize looking up at the sky with your toes to the north). The black regions in Figs. 2 and 4 are the masks, identifying the solar/lunar occultor and horizon obscurations as “no data.” In Figs. 2 and 4, the algorithm has colored the pixels identified as no cloud with blue, opaque cloud as white, and thin cloud as yellow. The algorithm retains the texture of the ratio image, as can be seen in the opaque-cloud region of Fig. 2, because we felt that this feature made visual assessment of the cloud decision results easier. However, all pixels of the same color provided the same assessment, i.e., opaque, thin, or no cloud.

Figure 2 illustrates that the algorithm works well over the whole sky, even down to the horizon. Also, note that even in the regions near the Sun, both pixels with and without clouds are properly identified, as can be seen by comparing the results with Fig. 1. Figure 4 shows a more difficult thin-cloud case, where the algorithm does an excellent job of identifying the contrails both near the Sun and away from the Sun. Many more examples are included in the AOG references mentioned above, especially the latter ones [5]. In extensive analysis (discussed later in this paper), we find that the algorithm provides accurate results over a wide range of conditions and SZAs for look angles from the zenith down to the horizon.

B. Day Algorithm: Calibration Logic

The first conceptual step in both the day and the night algorithms is the application of calibration corrections, based on calibration measurements of the instrument characteristics [2]. These corrections included application of dark calibrations and uniformity calibrations. Linearity calibration results were not applied for the D/N WSI, because the camera response was very linear. (However, linearity corrections were used for the 12-bit VN WSI and day WSI data processing.) The day algorithm required R/B or N/B relative response inputs derived from the absolute radiance calibrations, and the night algorithm required the absolute calibration results directly. It should be noted that interference filters tend to degrade over time, even in the purged nitrogen atmosphere of the WSI camera housings. This was somewhat mitigated by the use of ratios, as opposed to the absolute radiances in the day algorithm. Still, as we processed data, it was important to check the algorithm results for every year or two of data to verify that shifts in the system calibrations were not affecting the results. It is possible to add automated checks to detect and perhaps correct for such shifts in this type of system, but this was not completed.

C. Day Algorithm: Opaque-Cloud Logic

The opaque clouds are identified using the R/B or N/B ratio. When we were developing a day-cloud algorithm in 1984 with our first-generation digital WSIs [4,5], we found that although the relative brightness and the pixel-to-pixel gradients provided some useful information, the R/B ratio provided a far more reliable assessment of the presence of opaque clouds. Our first cloud algorithms were based on these R/B ratios. The concept was shared with others and has been used by many researchers (e.g., [9–12]). After applying the calibrations and the masks, the current day algorithm derives an R/B or N/B ratio on a pixel-by-pixel basis, and those pixels with a ratio above a fixed threshold are identified as opaque. The N/B ratio is used if data are available and onscale, as this ratio provides a better contrast between thin cloud and either clear sky or haze. In early versions of the opaque algorithm, we had a bias as a function of zenith angle that was handled with a zenith-dependent threshold correction. However, once we started using the uniformity calibrations in the algorithm, this correction was no longer needed, showing that the bias had been introduced by the instrument optics.
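Although the fielded processing code is not reproduced here, the opaque-cloud test can be sketched as follows. This is a minimal illustration in Python/NumPy (not the original software); the threshold value and array names are placeholders, and the calibrations and masks are assumed to have been applied already.

```python
import numpy as np

def detect_opaque(red_img, blue_img, mask, ratio_threshold=2.0):
    """Flag opaque-cloud pixels from calibrated red (or NIR) and blue images.

    red_img, blue_img: calibrated 2-D radiance arrays of the same shape.
    mask: boolean array, True where pixels are valid (not occultor/horizon).
    ratio_threshold: fixed R/B (or N/B) threshold; 2.0 is an illustrative
    placeholder, not the value used in the WSI processing.
    """
    # Per-pixel R/B (or N/B) ratio, guarding against division by zero.
    ratio = np.divide(red_img, blue_img,
                      out=np.zeros_like(red_img, dtype=float),
                      where=blue_img > 0)
    opaque = mask & (ratio > ratio_threshold)
    return ratio, opaque
```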

D. Day Algorithm: Thin-Cloud Logic

Using a lower but fixed ratio threshold to similarly identify thin clouds is not effective, as this will falsely identify regions near the solar aureole and areas near the horizon as thin clouds. The ratio bias is caused by the enhanced forward scattering near the aureole, and the longer path lengths near the horizon. Thin clouds in the imagery tend to behave as a small perturbation with respect to the clear sky background ratio, i.e., the R/B and N/B ratio in thin clouds is higher than the clear-sky ratio for the same look angle and SZA. Our study of a case with contrails going through the Sun revealed that the R/B ratio within each contrail was a nearly fixed factor higher than the R/B ratio in the adjacent sky, even as the angle with respect to the Sun and the zenith angle of the line of sight changed along the contrail. Thus we needed to develop a method to characterize the clear sky background R/B or N/B ratio distribution.

The AOG had earlier developed a model for clear sky radiance distribution [22] based on a large database of airborne measurements the group collected in the 1960s through 1970s (e.g., [23]). Studies of the model results, as well as WSI field data, indicated that although the magnitude of the clear-sky background ratio varied somewhat with haze amount, the shape of the distribution did not vary significantly with haze, except very close to the Sun. Also, we found that the shape depended somewhat on altitude of the ground site. We were not confident of our ability to model this variance adequately. As a consequence, we developed programs that enabled us to generate a library of clear-sky background ratios based on the field data that were being acquired with the WSI.

To generate the clear-sky background library for each site, we first viewed images at 10-min resolution in order to select representative clear or mostly clear days. In this case, we defined “clear” as meaning that we could see no sign of thin cloud (other than reasonably uniform haze) in the raw WSI imagery. These days were then processed to yield R/B or N/B ratios. Then an additional program was run to enable an analyst to process these ratio images, extracting values at every 5° in zenith angle θ and every 15° in azimuth angle with respect to the Sun φs, while deleting any points contaminated by clouds or obstructions visible in the imagery. Only images for SZAs within ±1° of each 5° SZA increment were evaluated in this way. All the resulting clear-sky ratio data points within each image were normalized with respect to the value of the ratio at what we called the “beta points” in order to remove daily variations due to haze. The two beta points were defined as the two points at which both θ and the solar scattering angle β are 45°. Figure 5 shows the behavior of the ratio at the beta points at a very hazy site as a function of SZA for several days with varying haze amount. Most sites had much less variation from day to day; this is an extreme example.

Fig. 5. N/B beta reference values versus SZA for several nearly cloud-free days at one site.

The extracted normalized ratio values at each 5° increment in θ, φs, and SZA were averaged to create the clear-sky ratio library that was used in the automated processing. In addition, the beta reference values were represented by one best-fit curve and then adjusted for haze amount during the automated processing. First, we will describe how the clear-sky library is used if there is no adaptive haze adjustment required for a given image.

In order to process the field data through the thin-cloud algorithm, for each field image, the processing program first applied the calibrations, the mask, and the opaque-cloud algorithm. The program then generated a clear-sky background ratio distribution (or image) for each pixel and the current SZA by interpolation of background library values as a function of θ, φs, and SZA. This background ratio image was then adjusted using the SZA-dependent beta reference value for each image, taken from the beta reference value best-fit curve. Next, the program derived a perturbation ratio distribution, defined as the current R/B or N/B ratio distribution divided by the clear-sky background R/B or N/B ratio distribution. Then any pixels that were not opaque or masked were defined as thin cloud if the perturbation ratio exceeded a certain threshold, typically a factor of 1.2. On some hazy days, the adjusted background ratio in some directions was high enough that it actually exceeded the opaque ratio threshold, for example, near the Sun. In these cases, such pixels were identified as “indeterminate.”
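The thin-cloud step can be sketched in the same illustrative style (Python/NumPy plus SciPy; not the original code). The 5° and 15° library increments and the perturbation threshold of 1.2 come from the description above; the grid bounds, array names, and interpolation details are assumptions.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical clear-sky library axes: zenith angle (5-deg steps), azimuth
# relative to the Sun (15-deg steps), and SZA (5-deg steps); the library
# array has shape (len(theta_grid), len(phi_s_grid), len(sza_grid)).
theta_grid = np.arange(0.0, 91.0, 5.0)
phi_s_grid = np.arange(0.0, 181.0, 15.0)
sza_grid = np.arange(0.0, 86.0, 5.0)

def thin_cloud_decision(ratio_img, theta_img, phi_s_img, sza, library,
                        beta_factor, opaque, mask, pert_threshold=1.2):
    """Classify thin-cloud pixels from the perturbation ratio.

    beta_factor: SZA-dependent beta reference value (from the best-fit
    curve), optionally multiplied by the adaptive haze factor of Section 2.E.
    """
    interp = RegularGridInterpolator((theta_grid, phi_s_grid, sza_grid),
                                     library, bounds_error=False,
                                     fill_value=None)
    pts = np.stack([theta_img.ravel(), phi_s_img.ravel(),
                    np.full(theta_img.size, float(sza))], axis=-1)
    clear_bg = interp(pts).reshape(ratio_img.shape) * beta_factor
    # Perturbation ratio: current ratio over adjusted clear-sky background.
    perturbation = np.divide(ratio_img, clear_bg,
                             out=np.zeros_like(ratio_img, dtype=float),
                             where=clear_bg > 0)
    thin = mask & ~opaque & (perturbation > pert_threshold)
    return perturbation, thin
```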

Even without the adaptive haze step, the thin-cloud algorithm worked very well for most sites and supported our studies indicating that thin clouds acted as a perturbation with respect to the clear-sky background ratio. However, later in the program, we deployed one or more WSI systems to very hazy environments such as in Fig. 5, and it became necessary to develop a method to handle the day-to-day variations in the haze amount.

E. Day Algorithm: Haze Logic

In the adaptive algorithm step, in order to account for variations in haze, we programmed the processing software to extract data along a line between the two beta points, as shown in Fig. 6. The algorithm ignored points that had a small scattering angle, and ignored any points identified as opaque or no data. It used the spatial variance along the line to eliminate regions impacted by cirrus or other thin clouds. In this way, the algorithm extracted sections of the line that represented cloud-free areas or holes between clouds. In the analysis, we had found that in each image, the R/B or N/B ratios along these line segments were a nearly fixed factor above or below the clear-sky background ratio for the same pixel. By determining the average factor on the line segment, we could characterize the factor that was required for that image to adjust for haze. For the case shown in Fig. 6, which was a very hazy case, this ratio was 1.42, meaning that the current hazy R/B or N/B ratio was about a factor of 1.42 higher than the library clear-sky R/B or N/B ratio at each pixel along the line. During development, we also verified that the correction factor determined with the beta-beta line segments applied equally well to the remainder of the sky.
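A sketch of this adaptive step follows, again as illustrative Python rather than the fielded code. The fall-back to the most recent factor matches the description below; the segment window length and variance cutoff used to reject cloud-contaminated regions are assumptions.

```python
import numpy as np

def haze_adjustment_factor(ratio_img, clear_bg, line_rows, line_cols,
                           opaque, mask, window=15, var_threshold=0.02):
    """Per-image haze factor from the beta-beta line.

    line_rows, line_cols: pixel indices along the line between the two beta
    points (small-scattering-angle pixels already excluded). Returns the
    mean of current/clear-sky ratios over segments that are unmasked, not
    opaque, and spatially quiet, or None if no usable segment remains
    (the caller then reuses the most recent factor).
    """
    f = ratio_img[line_rows, line_cols] / clear_bg[line_rows, line_cols]
    ok = mask[line_rows, line_cols] & ~opaque[line_rows, line_cols]
    keep = np.zeros_like(ok)
    for i in range(len(f) - window + 1):
        seg = slice(i, i + window)
        # Low spatial variance marks cloud-free sections of the line.
        if ok[seg].all() and np.var(f[seg]) < var_threshold:
            keep[seg] = True
    return float(f[keep].mean()) if keep.any() else None
```

For the very hazy case of Fig. 6, this factor would come out near 1.42, as noted above.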

Fig. 6. Beta-beta line for a cloud-free but hazy image.

The adaptive algorithm step used the adjustment factor to adjust the clear-sky background, and the rest of the thin-cloud algorithm was the same as documented above. That is, the current field ratios were compared with a clear-sky background adjusted for haze. If an adjustment factor for the current image could not be determined due to cloud obscuration, then the most recent correction values were used. Once the clear-sky background was adjusted for the adaptive correction, the thin clouds were identified as done previously, i.e., as having a perturbation ratio greater than an input threshold. Data were only processed to an SZA of 85°, because the program ended before a sunset algorithm could be developed.

Visual evaluation of hundreds to thousands of images such as those shown in Figs. 1–4 convinced us that the algorithm was successful at detecting the clouds with few false calls for both thin and opaque clouds and differing cloud heights, over a wide range of conditions. The algorithms worked well down to the horizon, and with the addition of the adaptive algorithm step, we obtained excellent results even for a very hazy site at sea level. A more quantitative assessment will be discussed in Section 5. As noted in Section 1, a large database acquired by the D/N WSIs has been processed and delivered to the U.S. Air Force. We do not currently have the authority to distribute this database, but it may be possible to access it through the military. However, we are free to share the algorithm methods with others and hope that these may be useful in the future.

3. NIGHTTIME-CLOUD ALGORITHM

This section discusses the night-cloud algorithm. The closely related ability to extract beam transmittances at night is discussed in Section 4. As with the day-cloud algorithm, we developed both of these capabilities, and neither has been previously published in a journal. A night image acquired under no moon (starlight) is shown in Fig. 7. The night raw images are equivalent in format to the day images but are acquired with an “open hole,” i.e., clear glass blanks in place of the spectral and neutral density filters. The resulting spectral response was limited by the response curve of the charge-coupled device (CCD) used in the digital imager, as well as an NIR blocking filter, yielding a broadband response peaking near 600 nm.

Fig. 7. Night image under no moon and cloud-free sky showing the Milky Way and lights from a nearby city.

The WSI used a 3-log neutral density filter (factor of roughly 1000) during the day, which was removed by the flux-control algorithm at night, and the day spectral filters were more narrowband (approximately 70 nm passband). In addition, the automated flux-control algorithm varied the exposure time from a minimum of 100 ms to 60 s under starlight, and the 16-bit dynamic range of the camera provided additional flexibility. The starlight data were well onscale, with the darkest part of the sky between stars having a signal-to-noise ratio of approximately 40:1 at the darkest sites that were distant from cities. Overall, the instrument was able to measure calibrated radiances over a range of more than 10¹⁰ from daylight to starlight. The compromise at night was that although high clouds do not typically show signs of blurring, low fast-moving clouds in the night images were blurred as a result of the increased exposure time.

Before discussing the night algorithm in detail, we would like to provide an overview of the algorithm. As with the day-cloud algorithm, the night-cloud algorithm includes steps to apply the radiometric calibration results, and steps to mask those pixels blocked by the occultor (which is used under moonlight) and other features of the scene. Next, the Earth-to-space beam transmittance is determined in the direction of 100–200 stars, and from this beam transmittance the algorithm determines whether these directions have no cloud, thin cloud, or opaque cloud. This step also determines and saves the background sky or cloud radiance in the direction of each of these stars (i.e., the background not including the star signal).

Prior to the processing, typical radiance distributions or “shells” for each site are extracted to form a clear-sky library and an opaque-cloud library for no-moon and full-moon conditions. During processing, the clear sky shell is adjusted as needed according to moon phase and distance. Both the clear-sky and opaque shells are then adjusted with an adaptive algorithm feature using the background sky radiances in the star directions. The current radiances in each pixel are then compared with these shells, to provide a detection of no-cloud, thin-cloud, or opaque-cloud condition for each pixel. The algorithm is not quite full resolution, due to a 5×5 averaging used in the processing, so we refer to it as a “high-resolution” algorithm, as opposed to “full resolution.”

A. Night Algorithm: Wavelength Considerations

We should note that earlier in the program, we evaluated the pros and cons of using our visible and NIR wavelengths versus using SWIR, mid-wave IR (MWIR), or thermal infrared (IR) wavelengths [24]. Since our direct application involved transmittances in the SWIR bands, it would have been optimal to use sensors in this region. To test this possibility, we acquired WSI images and near-simultaneous SWIR images [24]. We found that the SWIR imager, having a much smaller dynamic range, failed to acquire useful imagery starting shortly before sunset, whereas the WSI readily handled even starlight conditions. On the positive side, we found that in daylight, all the clouds seen by the SWIR camera were also seen by the WSI, suggesting that the clouds detected by the WSI should apply reasonably well to the SWIR wavelength regimes. (This was also evaluated with modeling studies that are beyond the scope of this paper.)

Regarding MWIR or IR imagers, a full discussion of the theoretical and hardware analysis we did at that time (2004) is beyond the scope of this paper, but is summarized in the above Ref. [24]. The analysis convinced us and our sponsors that we would be better served to continue our work in the visible and NIR. This was partly because there were concerns that the MWIR system, which responds to both scattered and emitted radiation, might require quite sophisticated algorithm development. And there were concerns that an LWIR system might have difficulty with look angles away from the zenith, and especially with moist atmospheres near sea level. In addition, the D/N WSI provided Earth-to-space beam transmittance at night and potentially in the daytime. These beam transmittances were an important data product that would be difficult to determine with an IR system.

As noted in Section 1, both an MWIR [14] and an LWIR system [15,16] are in use for various applications at the present time. However, we are not in a position to evaluate the relative merits of these systems in comparison with visible/NIR imagers for the SWIR beam transmittance application nor for other applications. We feel that the ability to provide Earth-to-space beam transmittance is a significant advantage of the WSI. We should note that a WSI day-to-night bias was documented in an earlier Ref. [25]; however, that analysis was done based on early WSI data processed with very preliminary night algorithms. We do not believe that a large day/night bias exists with the current algorithm results. This is further discussed in Section 4. Our feeling is that for future applications, the choice of an optical instrument will depend on the application.

B. Night Algorithm: Calibration Logic

As discussed in Section 2, the first major conceptual step in the algorithm is applying the radiometric calibrations. The night algorithm also requires a high-accuracy angular calibration, which provides the relationship between the zenith and azimuth angles in object space and the pixel position in image space. This was generated prior to routine processing. First, a standard angular calibration was generated by acquiring an image of a room with markers at 10° intervals. A best-fit curve was generated that matched the data to within about 0.2 pixels. The field images, however, are also affected by any slight misleveling or misalignment to true north. To address these issues, we developed a program (partly using software developed by one of our sponsors) to enable the analyst to compare the apparent positions of about 200 stars in the imagery with actual star positions derived from a star catalog, knowing the time and location of the imager. From these data, the angular calibration was modified to provide a higher accuracy angular calibration for the system as fielded (i.e., taking into account leveling and alignment). The resulting calibrations were accurate to about half a pixel, or 1/6°. To extract the high-accuracy angular calibration, the fifth edition of the Bright Star Catalogue [26] was used to provide the locations and magnitudes of the stars.
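As an illustration of the star-position comparison (not the original analyst software, which incorporated sponsor-developed logic), the true zenith and azimuth angles of a catalog star can be computed from the image time and site location; the sketch below uses the modern astropy package, with the site, time, and star as placeholders.

```python
# A minimal sketch, assuming the astropy package; the site coordinates,
# observation time, and star (Vega) are illustrative placeholders.
import astropy.units as u
from astropy.coordinates import AltAz, EarthLocation, SkyCoord
from astropy.time import Time

site = EarthLocation(lat=32.87 * u.deg, lon=-117.25 * u.deg, height=100 * u.m)
when = Time("1998-07-04T09:30:00")  # UTC time of the image
star = SkyCoord(ra=279.235 * u.deg, dec=38.784 * u.deg)  # catalog position

# True direction of the star at the site and time of the image; comparing
# this with the pixel position mapped through the lens calibration reveals
# any misleveling or azimuth misalignment of the fielded instrument.
altaz = star.transform_to(AltAz(obstime=when, location=site))
zenith_angle = 90 * u.deg - altaz.alt
azimuth = altaz.az
```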

C. Night Algorithm: Logic for Transmittance and Sky Radiance near Stars

Once the calibrations and mask are applied, the next conceptual step is determination of the Earth-to-space beam transmittance and sky background radiance in the direction of 100–200 stars. The measured pixel radiances in the direction of each star and the nearby pixels are modeled by the algorithm as a two-dimensional Gaussian distribution of width approximately 0.4 pixel, which is the approximate width of the system’s point spread function (PSF) measured during predeployment focus tests. The integrated signal inside the modeled Gaussian is used to derive the apparent irradiance of the star as seen from the ground location. In addition, the Gaussian model corrects the nearby pixel radiances to remove the star signal and determine the background sky or cloud radiance near the star.

Initially, we tested this approach using roughly 2000 stars under no-moon conditions, and 800 stars under moonlight. (It is possible to detect more stars than can be seen in illustrations such as Fig. 7, because the illustrations are 8-bit renditions of the 16-bit raw data.) In order to determine the Earth-to-space beam transmittances, the algorithm needs the inherent irradiance of each star we use, where inherent irradiance is defined as the irradiance above the atmosphere, and apparent irradiance is the irradiance at the instrument (or at ground level in our case). In collaboration with our Air Force sponsors, we developed techniques to use the star magnitude and color temperature from the star catalog, basically using the color temperature to estimate a spectral distribution, and then integrating that distribution with the WSI open-hole passband to yield the effective inherent irradiance for the WSI passband. The beam transmittance was calculated as the ratio of the apparent to inherent irradiance of each star.

During development of the algorithm, we found that these transmittances did a nice job of distinguishing the presence of clouds, as shown in Fig. 8, but the transmittance results in clear areas of the sky had too much scatter. We performed several studies of the relationship between inherent and apparent irradiances of the stars. We found that a calibration offset was required for each star, probably because the color temperature values in the library may not represent the spectral distribution well enough for our purposes. Also, we found the results were more reliable if we limited the algorithm to use magnitude 4 stars or brighter. For each site at which the WSI data were processed, we selected several clear nights and processed the data for each star to determine a calibration correction factor for each star. We evaluated the relationship between the apparent and inherent irradiances from 11 clear nights, and obtained a correlation of 0.99, based on over 10,000 data points. Thus, the algorithm uses the modified inherent star irradiances for stars of magnitude 4 or brighter and uses the measured apparent star irradiances (from the Gaussian step) to determine Earth-to-space beam transmittance. The technique gave us good results for transmittance, as will be shown in Section 4.
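A minimal sketch of this per-star extraction follows (illustrative Python, not the original implementation). The 0.4 pixel Gaussian width is from the text; the patch size, the median background estimator, and the least-squares amplitude fit are assumptions, and the conversion from integrated signal to absolute irradiance via the radiometric calibration is omitted.

```python
import numpy as np

def star_transmittance(patch, e_inherent, cal_factor=1.0, sigma=0.4):
    """Earth-to-space beam transmittance toward one star.

    patch: small calibrated cutout (e.g., 7x7 pixels) centered on the
    star's cataloged position. e_inherent: band-effective inherent
    irradiance for the WSI passband (assumed in the same units as the
    integrated signal). cal_factor: per-star correction from clear nights.
    """
    n = patch.shape[0]
    yy, xx = np.mgrid[0:n, 0:n]
    c = (n - 1) / 2.0
    g = np.exp(-((xx - c) ** 2 + (yy - c) ** 2) / (2.0 * sigma ** 2))
    g /= g.sum()  # unit-volume PSF model
    background = np.median(patch)  # star-free sky or cloud radiance
    # Least-squares amplitude A in the model patch ~ background + A * g,
    # where A is the star's integrated (apparent) signal:
    a = ((patch - background) * g).sum() / (g * g).sum()
    return a / (cal_factor * e_inherent), background
```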

Fig. 8. Sample cloud transmittance extraction result from an early version of the night algorithm. Red, star not detected; blue, purple, green, yellow, and orange correspond with transmittances of 0.8–1.0, 0.6–0.8, 0.4–0.6, 0.2–0.4, and 0–0.2, respectively.

D. Night Algorithm: No-Moon Logic

The next major step in the algorithm requires the development of typical radiance distributions for all phases of the moon as well as no-moon conditions prior to automated processing. We will first describe how this works for no-moon conditions. Prior to automated algorithm processing, we generated a library of normalized no-moon clear-sky radiance distributions called “shells,” similar in concept to the day algorithm clear-sky background library. These shells provide a nominal normalized radiance level at each pixel. Both normalized no-moon clear shells and normalized no-moon opaque cloud shells were generated from the field data for each site.

In order to adjust the no-moon clear shell for current conditions, the algorithm first identifies stars that are in clear regions by assessing the beam transmittances for each star in the current image, using only stars that are between 0° and 60° zenith angle (100–200 stars per image). The algorithm sorts out stars that correspond to clear conditions in the direction of the star based on these transmittances. Next, the algorithm compares the clear-sky shell radiances with the current background sky radiances in the directions of the stars identified as being in clear parts of the sky. These background sky radiances near the stars are well representative of the radiances in nearby clear areas, because the signal associated with the star PSF was deleted, as noted in Section 3.C. From the comparison of these background radiances with the shell, the algorithm determines an average adjustment factor for the clear shell, which it applies to all pixels in the shell.

Similarly, the algorithm compares the opaque-shell radiances with the current background sky radiances in the directions of the stars identified as being in directions blocked by opaque cloud, and determines an average adjustment factor for the opaque shell. This adaptive algorithm correction to the shells adjusts for changes in radiance caused by varying haze amounts or other conditions. As with the day adaptive algorithm, if no correction can be determined for a given image, the most recent value is used.

A plot of the clear and opaque shells for a row through the middle of the shell images is shown in Fig. 9. For sites near cities, such as that shown in Fig. 9, the clear sky background varied slightly with hour-angle, as city lights were dimmed during the night, so clear sky shells were developed for each hour-angle. It was not necessary to develop hour-angle-dependent opaque shells.

Fig. 9. Typical no-moon clear-sky radiance levels for clear sky as a function of hour-angle (solid color curves), and for skies with opaque clouds under no-moon conditions (dashed line).

To apply these shells to the field data, any unmasked pixels with a radiance more than 10% above the opaque shell are identified as opaque cloud, and any nonopaque pixels with a radiance more than 15% above the clear shell are identified as thin cloud. The remaining unmasked pixels are identified as clear (i.e., no cloud).
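Combining the adaptive shell adjustment with these thresholds, the no-moon decision step can be sketched as follows (illustrative Python; the 10% and 15% thresholds are from the text, while the array names and decision encoding are assumptions).

```python
import numpy as np

def night_cloud_decision(radiance, clear_shell, opaque_shell, mask,
                         star_rc, star_bg, star_clear, star_opaque,
                         last_factors=(1.0, 1.0)):
    """No-moon cloud decision using clear and opaque radiance shells.

    star_rc: (rows, cols) of the 100-200 reference stars; star_bg is the
    star-removed background radiance there; star_clear and star_opaque are
    boolean flags derived from the per-star beam transmittances.
    """
    r, c = star_rc
    # Adaptive step: scale each shell to match the current background
    # radiances in directions known to be clear (or opaque); reuse the
    # most recent factor if no suitable stars exist in this image.
    f_clear = (np.mean(star_bg[star_clear] / clear_shell[r, c][star_clear])
               if star_clear.any() else last_factors[0])
    f_opaque = (np.mean(star_bg[star_opaque] / opaque_shell[r, c][star_opaque])
                if star_opaque.any() else last_factors[1])

    decision = np.zeros(radiance.shape, dtype=np.uint8)      # 0 = no data
    opaque = mask & (radiance > 1.10 * f_opaque * opaque_shell)
    thin = mask & ~opaque & (radiance > 1.15 * f_clear * clear_shell)
    decision[mask] = 1                                        # clear
    decision[thin] = 2                                        # thin cloud
    decision[opaque] = 3                                      # opaque cloud
    return decision, (f_clear, f_opaque)
```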

Sample raw and cloud decision images for the no-moon case are shown in Figs. 10 and 11. In Fig. 11, the color scheme is the same as for the day algorithm, except that we have colored thin clouds green rather than yellow. The stars are not visible in Fig. 10, partly because this image was taken at a hazy site near a city; however, they were readily detected in the 16-bit raw data by the algorithm.

Fig. 10. Raw image for a no-moon case at night.

Fig. 11. Processed cloud decision image for the case in Fig. 10.

E. Night Algorithm: Moonlight Condition Logic

In order to expand the algorithm to include not just no-moon cases but also all full- and partial-moon conditions, we first modeled the clear-sky radiance distribution under moonlight with Eq. (1):

N(θ,φ,θ_M,φ_M,α) = N_B(θ,φ) + R_N(α)·N_M(θ,φ,θ_M,φ_M,α_max).   (1)
In this equation, N(θ,φ,θ_M,φ_M,α) is the radiance distribution for a clear sky (i.e., cloudless condition) at night as a function of look angle (θ,φ), the source (moon) position (θ_M,φ_M), and phase α. N_B(θ,φ) is the radiance distribution of the moonless clear-sky background. R_N(α) is the relative brightness of the moon as a function of phase angle α and Earth-to-moon distance. And N_M(θ,φ,θ_M,φ_M,α_max) is the distribution of the additional radiance of the full-moon clear sky as a function of look angle (θ,φ) and the source (moon) position (θ_M,φ_M) at maximum phase angle.

Note that the radiance distribution of the no-moon clear sky N_B(θ,φ) is the same as previously extracted. The relative brightness of the moon R_N(α) is computed as a function of phase angle and takes into account both phase and Earth-to-moon distance to determine relative brightness. These radiance distributions are designated as clear shells.
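A sketch of Eq. (1) in code form follows (illustrative Python, not the original processing). The no-moon and full-moon shells are assumed precomputed for the current moon position, and the phase law shown substitutes Allen's (1976) lunar-magnitude approximation for the actual R_N(α) computation, which is not reproduced here.

```python
def rn_alpha(phase_deg, dist_km, dist0_km=384400.0):
    """Relative moon brightness R_N versus phase angle and distance.

    Stand-in using Allen's lunar magnitude approximation
    m(alpha) = -12.73 + 0.026*|alpha| + 4e-9*alpha**4 (alpha in degrees),
    normalized to 1 at full moon and mean Earth-to-moon distance.
    """
    m = -12.73 + 0.026 * abs(phase_deg) + 4.0e-9 * phase_deg ** 4
    return 10.0 ** (-0.4 * (m + 12.73)) * (dist0_km / dist_km) ** 2

def moonlight_clear_shell(nb_shell, nm_full_shell, phase_deg, dist_km):
    """Clear-sky shell under moonlight per Eq. (1): N_B + R_N * N_M."""
    return nb_shell + rn_alpha(phase_deg, dist_km) * nm_full_shell
```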

For the opaque shell under moonlight, we found that the best results were obtained by using the same shape as the no-moon opaque shell for all phases of the moon. This makes sense for sites at which the opaque radiance distribution is dominated by the anthropogenic (city) light scattered off the clouds. It works reasonably well for all sites because the opaque shell is modified in magnitude during the adaptive algorithm step. (Given more time and data, we might have modified this approach).

As with the no-moon algorithm described earlier, the adaptive algorithm is used to adjust both the clear-sky shell and the opaque shell. The clear shell was already specific to the moon phase due to Eq. (1), and the adaptive algorithm adjusts for changes in current lighting conditions or haze amount. As noted above, the opaque shell is simply the no-moon opaque shell. As a result, the adaptive algorithm can be especially important in adjusting the opaque shell for moon phase as well as the other changes. If no adjustment can be determined, the most recent adjustment is used. As with the no-moon version of the algorithm, any unmasked pixels with a radiance more than 10% above the opaque shell are identified as opaque cloud, any nonopaque pixels with a radiance more than 15% above the clear shell are identified as thin cloud, and the remaining nonmasked pixels are identified as clear.

An example is shown in Figs. 12–14. Figure 12 shows a vertical column through the center of several clear-sky shell images for the raw image shown in Fig. 13. (This plot was generated using the Interactive Data Language package by Visual Information Systems, which labels the bottom of the image as row 0.) In Fig. 12, the orange curve shows the no-moon clear-sky shell values N_B(θ,φ). The green curve shows the moon shell, which is the product of the phase-adjustment term R_N(α) and the full-moon distribution N_M(θ,φ,θ_M,φ_M,α_max). The sum of these two curves forms the nominal shell for no cloud (which is not shown) prior to the adaptive adjustment. From those stars that were identified as no cloud for this image, an adaptive correction factor of 1.13 was determined. The blue curve shows the clear-sky shell with the adaptive factor applied. The opaque-cloud shell is not shown in this graph, because it is well above the top of the graph. The black curve shows the measured radiance for the same central column. The vertical structures in the plot correspond with masked areas. The cloud decision results are shown in Fig. 14.

Fig. 12. Radiances along the center column for the clear and moon shells and the measured data for image in Fig. 13.

Fig. 13. Raw night image with thin clouds and moonlight.

Fig. 14. Cloud decision image corresponding with Fig. 13. Green represents pixels identified as thin cloud.

An example of a moonlight case with both thin and opaque clouds will be shown in the next section, and many examples are shown in the last project paper [5], as well as earlier papers referenced in that paper. We should also note that the algorithm includes some features that are beyond the scope of this paper. Specifically, there is a feature to better handle the Milky Way at dark sites and a feature to handle light on the dome at one location where horizon lights changed after the instrument was fielded. (Normally any lights on the horizon were blocked by shades placed around the instrument.)

The beam transmittance results will be discussed in Section 4, and the accuracy of the algorithm will be discussed in Section 5. We did not develop a sunrise/sunset algorithm, so as data were processed, results for SZA between 90° and 104° were not processed by the night algorithm. The processed database includes daytime results at 1-min intervals and night results at 2-min intervals for 17–41 months at each of four sites [5].

4. NIGHTTIME BEAM TRANSMITTANCE ALGORITHM

As discussed more fully in our last paper [5], we were able to accurately extract the Earth-to-space beam transmittance for 100–200 stars per image, unless there were opaque clouds blocking the stars. The displays that were generated are difficult to see in the paper format; however, an example for a moonlight case with clear and cloud areas is shown in Figs. 15 and 16, and several other examples including clear skies and thin clouds are shown in the last paper.

Fig. 15. Raw image for moonlight with broken clouds, with transmittance results superimposed. Orange, star not detected; blue, purple, green, and yellow correspond with transmittances of 0.8–1.0, 0.6–0.8, 0.2–0.6, and 0–0.2, respectively.

Fig. 16. Cloud decision image for case shown in Fig. 15.

For clear cases, the values for total transmittance (i.e., including aerosol) at the zenith were typically about 0.88–0.93 and then dropped off as the zenith angle increased, as anticipated. We were able to detect transmittances as low as 0.01; however, our minimum detectable transmittance depended on the star brightness and other factors.

From detailed analysis of the transmittance results with the aerosol transmittance removed, we believe that our thin-cloud threshold in the cloud algorithm corresponded with a cloud transmittance of about 0.83 or a cloud optical fade of about 0.8 dB. This means that some of the thinnest clouds would not be identified as clouds by the algorithm. However, our sponsors indicated that this threshold met their application well. Similarly, analysis indicated that the opaque threshold corresponded with a cloud transmittance of about 0.16 with the aerosol transmittance removed, or about 8 dB fade due to the cloud. Note that these results were based on limited data, but illustrate the promise of the technique.
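For reference, the fade L in decibels quoted here follows from the beam transmittance T via

L_dB = −10 log₁₀(T),

so the thin-cloud threshold of T ≈ 0.83 corresponds to L ≈ 0.8 dB, and the opaque threshold of T ≈ 0.16 corresponds to L ≈ 8 dB.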

At the time the program ended, we had been working toward providing a daytime beam transmittance product, and assessing the transmittances that corresponded with the opaque- and thin-cloud thresholds for the daytime, but this work was not completed. The technique was to be partially based on modifying the solar occultor so that we could reliably measure beam transmittance in the direction of the Sun. Based on theoretical considerations and visual assessment, we believe that the perturbation ratio used in the day algorithm is closely related to the COD. We had anticipated that with analysis of field data related to these parameters, we would be able to extract the beam transmittance distributions over the daytime sky.

This transmittance study helps to answer the question posed in Section 1.A regarding how we define a cloud as identified by the WSI algorithms. For the night images, preliminary evaluation indicates that the algorithms identify a thin cloud as a feature with a COD of about 0.8 dB or more, and an opaque cloud as a feature with a COD of about 8 dB or more. We would have liked to have more time to further evaluate the thin-opaque transition. Unfortunately, we did not have time to determine these thresholds for the daytime, and all we can say at this time is that the day algorithm does an excellent job of detecting even very thin cirrus clouds. Clearly, assessing whether there is any day-to-night bias in the algorithm results would require completion of this study, but we can state that from comparison of raw and cloud decision images in the day and night, there is no obvious large bias.

5. ASSESSMENT OF ACCURACY

In assessing the accuracy of the day- and night-cloud algorithms, we first compare the cloud decision imagery with the raw imagery. We should note that our definition of “thin” indicates regions with slightly less transmittance than for the clear sky or hazy sky. It does not necessarily imply cirrus or similar clouds. Although in most cases, the thin-cloud assessments were in fact due to cirrus-like clouds, we also noted that often there is cloud debris in pockets near the edges of opaque clouds that is identified as thin cloud, as noted in Section 1.A. We also intentionally did not identify haze as being thin cloud, because with its smaller drop size distribution, haze should not affect SWIR wavelengths of interest as much as thin clouds. However, the algorithm does report the adaptive algorithm adjustment, which in the daytime is primarily driven by haze amount.

Although scanning hundreds to thousands of cloud decision results gave us confidence that the day and night algorithms worked well most of the time, we wished to assess accuracy in a more quantitative way and have a means to assess the impact of improvements to the algorithm. Early in the program, our sponsors attempted to use data from a lidar that was nearly co-located as a ground truth; however, they had too many problems with pointing and with lidar accuracy for this to be practical at the time. As noted earlier, the lidar tended to identify extremely thin clouds (below the optimal threshold for our applications) at lower altitudes, and yet miss significant clouds at higher altitudes.

Our primary tool for assessing improvements was a program CloudAssess that allowed an analyst to assess a database of images. The program displayed the cloud decision image over which it superimposed fiducials at the zenith and in the four cardinal directions at zenith angles of 30°, 60°, and 80°. The job of the analyst was to mark each fiducial location as having a cloud decision result that was correct or incorrect. The program allowed the analyst to review the raw image and/or the ratio image and zoom the image. The analyst could also change the windowing that converted the raw 16-bit image to an 8-bit display image, in order to better bring out relatively dark or bright details.

This program was used to assess improvements up until just before the day adaptive algorithm features were added, and just before the final night features to better handle stray light and the Milky Way were added. At that point, we were asked to switch to a “blind test” that will be discussed below, and we never had a chance to apply the CloudAssess program to the final algorithm results. However, we feel these results were very useful, so we have shown them even though they were not for the final improved versions of the algorithms. In Figs. 17 and 18, we show sample day and night cloud decision images, and the superimposed results for the test database. The rectangles show the percentage of answers assessed by the analyst to be correct in the test data at each fiducial location. Figure 18 includes all cases in the test bed, including both the no-moon and the moonlight cases.

Fig. 17. Day algorithm results showing percent of results from test bed assessed by the analyst to be correct in selected directions.

Fig. 18. Night algorithm results showing percent of results from test bed assessed by the analyst to be correct in selected directions.

In Figs. 17 and 18, we see that both preliminary algorithms did best at the zenith, with accuracies of about 99% and 98%, respectively, and did reasonably well at other angles. For those regions from the 60° zenith angle to overhead, the average results were 98% for the day algorithm and 95% for the night algorithm. Near the horizon, the results were accurate in about 96% of the cases in the day and 80% at night. The no-moon results, which are not shown, were slightly better than the results in Fig. 18. In general, the day algorithm provided slightly better results than the night algorithm, and we believe this was true even with the later versions of the algorithms. This was partly because the night algorithm had to handle cases both with and without varying amounts of moonlight, and partly because anthropogenic light sources, primarily from nearby cities, had to be handled in the night algorithm. It should also be noted that although the thin-cloud thresholds were expected to miss some very thin clouds (which was appropriate for the sponsor's needs), we called the result an error if thin clouds that could be seen in the imagery were not identified by the algorithm. Thus we would anticipate that some of the cases that were categorized as incorrect were actually simply below the optimal thin-cloud threshold.

In order to minimize any human bias in the validation work, we also developed a blind test. For this test, the analyst evaluated raw imagery and assessed whether the answer should be no cloud, thin cloud, or opaque cloud. It was very difficult to judge whether thin clouds were above or below the threshold, and also very difficult to judge whether there was cloud clutter in regions near opaque clouds that should be identified as thin. For this reason, there was an additional category the analyst could use, labeled “uncertain.” The results are shown in Table 1.

Table 1. Summary of Results with New Blind Test Programs

We did not find the results in Table 1 very useful because there were so many cases in the “uncertain” category. It might have been that with more experience, we could have lowered the number of uncertain cases. In general, it is vital that blind tests assess the accuracy of the algorithm, not the accuracy of the analyst, so this approach might never have been very useful. However, the results in Table 1 can be summarized as follows. If the clouds were such that the analyst could tell whether they should be identified as no cloud, thin cloud, or opaque cloud, the algorithm provided the same result as the analyst 98% to 99% of the time. Based on the tests shown in Figs. 17 and 18 and Table 1, we believe it is fair to say that the algorithms did a remarkably good job, with room for refinement, particularly regarding the night algorithm near the horizon and sunrise/sunset conditions. The algorithms were fully automated without the need for analyst interaction once deployed, and included adaptive algorithm features to adjust for changing atmospheric conditions.

6. SUMMARY

The AOG was very fortunate to be funded to work on development of WSIs and their algorithms for an extended period. This paper presents the final versions of the day- and night-cloud algorithms, which have not previously been published. These new algorithms that we have developed detect the presence and location of clouds and classify their opacity on a pixel-by-pixel basis over the whole sky, under conditions ranging from clear to overcast, during daylight, starlight, and moonlight. In addition to cloud detection, the new night algorithm returns a map of Earth-to-space beam transmittance values for stars brighter than magnitude 4. We did not develop a sunrise/sunset algorithm for SZAs from 85° to 104°, but consider it feasible.

An overview of the WSI systems was documented earlier [1], and the radiometric calibration methods used to derive calibration inputs used by the algorithms were also documented in an earlier paper [2]. In this followup paper, we have discussed the new methods we developed for the cloud algorithms and provided sample results. Although we have not sought to continue this research at this time, we believe that the capabilities represented by these algorithms are still unique in their ability to provide accurate results for the full sky, for both night and day, and including beam transmittance at night as well as cloud assessment. The results of analysis, including blind tests, show that the cloud detection techniques implemented for both day and night operation of the D/N WSIs are effective at identifying opaque cloud, thin cloud, and clear sky at pixel or near-pixel levels.

Funding

Air Force Research Laboratory (FA9451-008-C-0226); Office of Naval Research (N00014-01-D-0043, N00244-07-1-009).

Acknowledgment

As noted earlier, the day and night algorithms were developed over many years, so it is not practical to list all of the contracts and grants, nor all of the individuals who have aided in this effort. We would like to acknowledge the technical and fiscal support of our recent sponsors. The most recent contract was funded by the Air Force Research Laboratory (AFRL) and Kirtland’s Starfire Optical Range (SOR). Earlier contracts were also funded by AFRL and SOR but administered via Navy contracts. We thank our contract monitors, most recently Ann Slavin, Earl Spillar, and Robert Fugate. We also thank Tim Tooman for the use of logic we applied in our geometric calibration. Among AOG members, we especially want to thank Richard Johnson, who developed the original concepts for digital WSI systems and guided much of the original work related to them. We would also like to acknowledge and thank additional AOG team members who contributed to the algorithm development work: Thomas Koehler, Wayne Hering, and John Cameron.

REFERENCES

1. J. E. Shields, M. E. Karr, R. W. Johnson, and A. R. Burden, “Day/night whole sky imagers for 24-h cloud and sky assessment: history and overview,” Appl. Opt. 52, 1605–1616 (2013).

2. J. E. Shields and M. E. Karr, “Radiometric calibration methods for day/night whole sky imagers and extinction imagers,” Appl. Opt. 58, 5663–5673 (2019).

3. J. E. Shields and M. E. Karr, “Extinction imagers for measurements of atmospheric beam transmittance,” Appl. Opt. 58, 5486–5495 (2019).

4. R. W. Johnson, W. S. Hering, and J. E. Shields, “Automated visibility and cloud cover measurements with a solid state imaging system,” SIO 89-7, GL-TR-89-0061, ADA216906, University of California, San Diego, Scripps Institution of Oceanography, Marine Physical Laboratory, 1989, https://apps.dtic.mil/docs/citations/ADA216906.

5. J. E. Shields, M. E. Karr, A. R. Burden, V. W. Mikuls, J. R. Streeter, R. W. Johnson, and W. S. Hodgkiss, “Scientific report on whole sky imager characterization of sky obscuration by clouds for the Starfire Optical Range,” Scientific report for AFRL contract FA9451-008-C-0226, Technical Note 275, ADA556222, Marine Physical Laboratory, Scripps Institution of Oceanography, University of California San Diego, 2010, https://apps.dtic.mil/docs/citations/ADA556222.

6. U. Feister, J. E. Shields, M. E. Karr, R. W. Johnson, K. Dehne, and M. Woldt, “Ground-based cloud images and sky radiances in the visible and near infrared region from whole sky imager measurements,” in EUMP31, EUMETSAT Satellite Application Facility Workshop, Dresden, Germany, 20–22 November (2000), pp. 79–88.

7. U. Feister and J. Shields, “Cloud and radiance measurements with the VIS/NIR Daylight Whole Sky Imager at Lindenberg (Germany),” Meteorol. Z. 14, 627–639 (2005).

8. U. H. Feister, H. Möller, T. Sattler, J. E. Shields, U. Görsdorf, and J. Güldner, “Comparison of macroscopic cloud data from ground-based measurements using VIS/NIR and IR instruments at Lindenberg, Germany,” Atmos. Res. 96, 395–407 (2010).

9. C. N. Long and J. J. DeLuisi, “Development of an automated hemispheric sky imager for cloud fraction retrievals,” in Tenth Symposium on Meteorological Observations and Instrumentation, Phoenix, Arizona (American Meteorological Society, 1998), pp. 171–174.

10. G. Pfister, R. L. McKenzie, J. B. Liley, W. Thomas, B. W. Forgan, and C. N. Long, “Cloud coverage based on all-sky imaging and its impact on surface solar irradiance,” J. Appl. Meteorol. 42, 1421–1434 (2003).

11. A. Cazorla, F. J. Olmo, and L. Alados-Arboledas, “Development of a sky imager for cloud cover assessment,” J. Opt. Soc. Am. A 25, 29–39 (2008).

12. C. W. Chow, B. Urquhart, M. Lave, A. Dominguez, J. Kleissl, J. E. Shields, and B. Washom, “Intra-hour forecasting with a total sky imager at the UC San Diego solar test bed,” Sol. Energy 85, 2881–2893 (2011).

13. R. Pérez-Ramírez, R. J. Nemiroff, and J. B. Rafert, “nightskylive.net: the Night Sky Live project,” Astron. Nachr. 325, 568–570 (2004).

14. D. Klebe, R. D. Blatherwick, and V. R. Morris, “Ground-based all-sky mid-infrared and visible imagery for purposes of characterizing cloud properties,” Atmos. Meas. Tech. 7, 637–645 (2014).

15. P. W. Nugent, J. A. Shaw, and S. Piazzolla, “Infrared cloud imager development for atmospheric optical communication,” Opt. Express 17, 7862–7872 (2009).

16. B. J. Redman, J. A. Shaw, P. W. Nugent, R. T. Clark, and S. Piazzolla, “Reflective all-sky thermal infrared cloud imager,” Opt. Express 26, 11276–11283 (2018).

17. J. E. Shields, M. E. Karr, A. R. Burden, R. W. Johnson, and J. G. Baker, “Analysis and measurement of cloud free line of sight and related cloud statistical behavior,” Final Report to the Office of Naval Research Contract N00014-89-D-0142 (DO #2), ADA425400, University of California, San Diego, Scripps Institution of Oceanography, Marine Physical Laboratory, 2003, https://apps.dtic.mil/docs/citations/ADA425400.

18. J. E. Shields, A. R. Burden, R. W. Johnson, M. E. Karr, and J. G. Baker, “New cloud free line of sight statistics measured with digital Whole Sky Imagers,” Proc. SPIE 5891, 58910M (2005).

19. I. Koren, L. A. Remer, Y. J. Kaufman, Y. Rudich, and J. V. Martins, “On the twilight zone between clouds and aerosols,” Geophys. Res. Lett. 34, L08805 (2007).

20. I. Koren, L. Oreopoulos, G. Feingold, L. A. Remer, and O. Altaratz, “How small is a small cloud?” Atmos. Chem. Phys. 8, 3855–3864 (2008).

21. E. Hirsch, I. Koren, Z. Levin, O. Altaratz, and E. Agassi, “On transition-zone water clouds,” Atmos. Chem. Phys. 14, 9001–9012 (2014).

22. W. S. Hering and R. W. Johnson, “The FASCAT model performance under fractional cloud conditions and related studies,” SIO Ref. 85-7, AFGL-TR-84-0168, ADA169894, University of California, Scripps Institution of Oceanography, Visibility Laboratory, 1984, https://apps.dtic.mil/docs/citations/ADA169894.

23. S. Q. Duntley, R. W. Johnson, and J. I. Gordon, “Airborne measurements of optical atmospheric properties in Southern Illinois,” SIO Ref. 73-74, AFCRL-TR-0422, NTIS No. AD-774-597, University of California, Scripps Institution of Oceanography, Visibility Laboratory, 1973, https://apps.dtic.mil/docs/citations/AD0774597.

24. J. E. Shields, M. E. Karr, A. R. Burden, R. W. Johnson, and W. S. Hodgkiss, “Enhancement of near-real-time cloud analysis and related analytic support for whole sky imagers,” Final report for ONR contract N00014-01-D-0043 DO #4, ADA468076 (Marine Physical Laboratory, Scripps Institution of Oceanography, University of California San Diego, 2007).

25. B. Thurairajah and J. A. Shaw, “Cloud statistics measured with the infrared cloud imager (ICI),” IEEE Trans. Geosci. Remote Sens. 43, 2000–2007 (2005).

26. E. D. Hoffleit and W. H. Warren Jr., The Bright Star Catalogue, 5th ed. (Astronomical Data Center and Yale University Observatory, 1991).

Figures (18)

Fig. 1. Raw WSI image, day with red filter showing clouds, clear sky, and solar/lunar occultor.
Fig. 2. Cloud decision image. Blue, no cloud; white, opaque cloud; yellow, thin cloud; black, no data (mask).
Fig. 3. Raw red image for case with contrails.
Fig. 4. Cloud decision image for case with contrails.
Fig. 5. N/B beta reference values versus SZA for several nearly cloud-free days at one site.
Fig. 6. Beta-beta line for a cloud-free but hazy image.
Fig. 7. Night image under no moon and cloud-free sky showing the Milky Way and lights from a nearby city.
Fig. 8. Sample cloud transmittance extraction result from an early version of the night algorithm. Red, star not detected; blue, purple, green, yellow, and orange correspond with transmittances of 0.8–1.0, 0.6–0.8, 0.4–0.6, 0.2–0.4, and 0–0.2, respectively.
Fig. 9. Typical clear-sky radiance levels as a function of hour angle under no-moon conditions (solid color curves), and for skies with opaque clouds under no-moon conditions (dashed line).
Fig. 10. Raw image for a no-moon case at night.
Fig. 11. Processed cloud decision image for the case in Fig. 10.
Fig. 12. Radiances along the center column for the clear and moon shells and the measured data for the image in Fig. 13.
Fig. 13. Raw night image with thin clouds and moonlight.
Fig. 14. Cloud decision image corresponding with Fig. 13. Green represents pixels identified as thin cloud.
Fig. 15. Raw image for moonlight with broken clouds, with transmittance results superimposed. Orange, star not detected; blue, purple, green, and yellow correspond with transmittances of 0.8–1.0, 0.6–0.8, 0.2–0.6, and 0–0.2, respectively.
Fig. 16. Cloud decision image for case shown in Fig. 15.
Fig. 17. Day algorithm results showing percent of results from the test bed assessed by the analyst to be correct in selected directions.
Fig. 18. Night algorithm results showing percent of results from the test bed assessed by the analyst to be correct in selected directions.

Tables (1)

Table 1. Summary of Results with New Blind Test Programs

Equations (1)

$$N(\theta, \varphi, \theta_M, \varphi_M, \alpha) = N_B(\theta, \varphi) + R_N(\alpha) \cdot N_M(\theta, \varphi, \theta_M, \varphi_M, \alpha_{\max}).$$
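
Read as a recipe, the equation composes the expected moonlit clear-sky radiance at each pixel direction from the no-moon background shell N_B plus the moon shell N_M (evaluated at the reference phase angle α_max) scaled by a lunar phase factor R_N(α). The evaluation below is a minimal sketch under that reading; the function name and array-based layout are our assumptions, not the paper's implementation.

```python
import numpy as np

def expected_moonlit_radiance(n_background, n_moon_max, r_phase):
    """Evaluate N = N_B + R_N(alpha) * N_M over a whole-sky grid.

    n_background : array of clear-sky no-moon radiances N_B(theta, phi)
    n_moon_max   : same-shaped array of moon-contributed radiances
                   N_M(theta, phi, theta_M, phi_M, alpha_max)
    r_phase      : scalar lunar phase factor R_N(alpha)
    """
    return np.asarray(n_background) + float(r_phase) * np.asarray(n_moon_max)
```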