
Airborne system for multispectral, multiangle polarimetric imaging


Abstract

In this paper, we describe the design, fabrication, calibration, and deployment of an airborne multispectral polarimetric imager. The motivation for the development of this instrument was to explore its ability to provide information about water constituents, such as particle size and type. The instrument is based on four 16 MP cameras and uses wire grid polarizers (aligned at 0°, 45°, 90°, and 135°) to provide the separation of the polarization states. A five-position filter wheel provides for four narrow-band spectral filters (435, 550, 625, and 750 nm) and one blocked position for dark-level measurements. When flown, the instrument is mounted on a programmable stage that provides control of the view angles. View angles that range to ±65° from the nadir have been used. Data processing provides a measure of the polarimetric signature as a function of both the view zenith and view azimuth angles. As a validation of our initial results, we compare our measurements, over water, with the output of a Monte Carlo code, both of which show neutral points off the principal plane. The locations of the calculated and measured neutral points are compared. The random error level in the measured degree of linear polarization (8% at 435 nm) is shown to be better than 0.25%.

© 2015 Optical Society of America

1. INTRODUCTION

The role of ocean-color remote sensing in bio-optical oceanography is to provide information about water column constituents on synoptic scales. Only photons in the spectral range between 400 and 750 nm penetrate the water column and interact with its constituents through absorption and scattering, both elastic and inelastic. Remote sensing approaches collect the photons that, having interacted with the water and its constituents, leave the water body and reach the sensor. The measurable physical characteristics of these photons are intensity, spectral dependence, polarization, and coherence, or phase. Intensity-only images provide some spatial information about materials that possess broadly different reflectivities, but are otherwise of limited use. Spectral measurements [multispectral (MSI) and hyperspectral (HSI)] have added the ability to determine inherent optical properties (IOPs), sea floor types, and bathymetry in shallow water cases [1]. These IOPs are related to the materials present in the water, such as phytoplankton, suspended sediments (SS), and colored dissolved organic matter (CDOM). Changes in the types and concentrations of these particles influence water column properties, such as primary productivity and light attenuation. In this paper, we will use the term “hydrosols” to refer to any particle, organic or inorganic, in the water column.

Over the last several decades, the development of remote sensing capabilities designed for the open ocean (Case 1 waters) has focused on using multispectral sensors. Satellite sensors such as SeaWIFS, MODIS, MERIS, and VIIRS (with 4–12 water-penetrating bands) have provided a wealth of information about phytoplankton concentrations and associated CDOM and SS, particularly in the open ocean [2].

In Case 2 waters, the limited number of bands from the multispectral systems no longer provides sufficient differentiation of the in-water constituents. Terrestrially based sources for CDOM and SS along with elevated nutrient inputs are the primary causes of this added complexity. In shallow waters, the presence of the seafloor may further impact the upwelling signal. In an effort to provide additional information, many newer multispectral systems, such as Digital Globe’s WorldView 2, have increased the number of bands and in some cases, had a corresponding decrease in band-to-band spectral spacing [3].

Eventually, this drive for more retrieved information (and therefore bands) resulted in the proliferation of hyperspectral systems. In coastal areas where the seafloor is visible, these systems improved the ability to provide environmental information. However, most water constituents do not have spectrally distinct features and as such, there is a significant correlation between the spectral channels in a hyperspectral system. Progress continues to be made in extracting information from visible near-infrared (VNIR) HSI data. However, optical closure (the ability to use measured IOPs to correctly predict the spectral signature above the water) does not always work. Chang and Whitmire [4] have pointed out the need for additional information on backscattering information. Thus, it appears that, particularly when the seafloor is visible, the general problem of the retrieval of environmental information is underdetermined when using MSI and likely even HSI measurements.

This leaves the polarization of the upwelling light field as the last photon property that is not yet routinely exploited in passive remote sensing. The scattering of light by particles of any size results in a polarimetric signature (with angular dependence) that contains information about particle size, shape, and refractive index that is not well captured by intensity measurements alone [5]. Chami and Platel [6] demonstrate that the measurements of the wavelength and angular dependence of the polarization of the upwelling radiance improves the accuracy of the retrieval of hydrosol scattering coefficients by 75% in a modeled dataset.

Polarization measurements are commonly used to determine information about atmospheric aerosols. Aircraft instruments include the Research Scanning Polarimeter (RSP) [7] and the Airborne Multiangle SpectroPolarimetric Imager (AirMSPI) [8]. Space-based systems include Polder-1, Polder-2, and Parsol [9]. Each of these instruments makes measurements of the angular dependence of the polarimetric signature for multiple wavelengths emanating from one location on the earth’s surface.

Kattawar presents a general overview of polarized light in the ocean [10]. Measurements of the polarimetric light field both beneath and above the ocean surface showing neutral points [where the observed degree of linear polarization (DOLP) goes to zero] have been shown [11,12]. Modeling and simulation studies have looked at the sensitivity of polarization to aerosols and hydrosols. Studies of neutral points (NPs) showed that their position is affected by concentrations of atmospheric aerosols and hydrosols, as well as the roughness of the air-sea interface [13]. Chami found that polarization captured the influence of suspended matter in coastal waters, and can be used to separate the inorganic and organic components [14]. The degree of polarization of upwelling light measured above the water surface is highly angle dependent, as the plots in those studies illustrate. This angular dependence mirrors the numerous polarization phenomena in the atmosphere, including NPs, sun dogs, haloes, and glories, all of which are highly dependent on the sun and viewing angles. Explicit dependence of the polarization signature on the water constituents has been demonstrated [15].

Based on the body of work referenced above, our goal was to design, build, and operate an airborne system for measuring the polarized light field at several wavelengths above the water surface. The system was designed to provide repeat coverage, solar geometry flexibility, the ability to change the ground sample distance (GSD) and therefore swath width, coverage at synoptic scales, and to achieve mobility and cost savings offered by aircraft when compared to ships. The system was designed to be flexible and is suited to land applications as well, but in this paper, we focus on applications in water. The flexible nature of the instrument led us to name it the Versatile Imager for the Coastal Ocean (VICO).

Hooper et al. describe a similar system, the Airborne Remote Optical Spotlight System—Multispectral Polarimeter (AROSS-MSP) [16]. The primary differences are as follows: the AROSS-MSP measures three polarization angles compared to four for VICO; its spectral bandpass is equal to or greater than 100 nm compared to between 8 and 20 nm for VICO; VICO’s focal plane has 10 times more pixels; and finally, AROSS-MSP has 12 cameras and does not need a filter wheel to move between spectral channels, which gives AROSS-MSP more flexibility while imaging. The narrower spectral bandpass of VICO is important for addressing the spectral dependency of the polarimetric signature in the coastal ocean.

The polarized light field is described by the four-parameter Stokes vector [I,Q,U,V] [e.g., Coulson], or equivalently, [S0,S1,S2,S3], which is the terminology we will use. Circularly polarized light may be created in the ocean through the total reflection of upwelling light by the air-sea interface. However, it has been shown [17] that it occurs only under very specific and limited circumstances (visible only within a couple of meters of the surface and along sight lines that exceed the critical angle, the angle above which total internal reflection occurs). Thus, throughout this work, any circular polarization is ignored (S3=0). S0, S1, and S2 can be determined through intensity measurements of three appropriate polarization directions (e.g., 0°, 60°, and 120°), or four directions (e.g., 0°, 45°, 90°, and 135°), which allows for error computations. In our system, the 0° filter is aligned with the forward-to-aft axis of the aircraft. To state it another way, the 90° filter is aligned with the wings. In a system with four polarization filters, such as ours, the following relationships are used to compute the Stokes parameters from the measured intensities:

$$S_0 = I_0 + I_{90} = I_{45} + I_{135} = \tfrac{1}{2}\left(I_0 + I_{45} + I_{90} + I_{135}\right), \tag{1}$$
$$S_1 = I_0 - I_{90}, \tag{2}$$
$$S_2 = I_{45} - I_{135}, \tag{3}$$
where I_x indicates the intensity measured with a polarization filter placed at the specified orientation in degrees. That is, I_0 is the magnitude of light with a polarization angle of 0° and contains no light with a polarization angle of 90°. The DOLP and the angle of polarization (AOP) are calculated as follows:
$$\mathrm{DOLP} = \frac{\sqrt{S_1^2 + S_2^2}}{S_0}, \tag{4}$$
$$\mathrm{AOP} = \tfrac{1}{2}\tan^{-1}\!\left(\frac{S_2}{S_1}\right). \tag{5}$$
Here, the possible values of the (two-argument) arctangent run from −180° to +180°, and the AOP from −90° to +90°.
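For readers who want to compute these products directly, the following is a minimal sketch of Eqs. (1)–(5); the function name and the use of the two-argument arctangent (which yields the −180° to +180° range noted above) are our choices, not part of the instrument software.

```python
import numpy as np

def stokes_products(i0, i45, i90, i135):
    """Compute S0, S1, S2, DOLP, and AOP (degrees) from the four
    measured polarized radiances, per Eqs. (1)-(5)."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = np.sqrt(s1**2 + s2**2) / s0
    # arctan2 spans -180..+180 deg, so the AOP spans -90..+90 deg.
    aop = 0.5 * np.degrees(np.arctan2(s2, s1))
    return s0, s1, s2, dolp, aop

# Example: the single-pixel radiances quoted in Section 3.E
s0, s1, s2, dolp, aop = stokes_products(9.88, 9.05, 10.1, 10.76)
print(f"DOLP = {dolp:.4f}")  # ~0.0867
```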

The remainder of the paper proceeds as follows: Section 2 describes the instrument design, Section 3 details the fabrication and calibration, Section 4 discusses the collection of data and its processing, and Section 5 presents an example of the field data, which shows an NP. Additionally, contemporaneously collected in-water data is described and a brief explanation of the modeling used to estimate the location of the NP is provided. Last, Section 6 summarizes the paper.

2. DESIGN CONSIDERATIONS

To determine the linear components (S0, S1, and S2) of the polarization field, measurements at a minimum of three complementary angles must be made. As described by Tyo et al. [18], these measurements can be accomplished by several techniques: (1) using a single rotating polarizer (either moving between set angles such as 0°, 60°, and 120°, or rotating it continuously to measure the modulation); (2) using a single fore-optic and then dividing the photons using an appropriate combination of beam splitters/polarizing filters to direct the photons to separate focal planes; (3) using a single fore-optic and then allowing the photons to pass through optics designed to place each required polarization measurement onto a different portion of a single focal plane; or (4) using separate synchronized focal planes and fore-optics—one for each polarization angle measured. Each of the last three options requires that the cameras be bore-sighted to assure that the cameras are measuring the same location on the surface at the same time.

Each of these approaches has its pluses and minuses. The first approach from the above list, in which a rotating polarizer is used, is generally not suitable for quickly changing (dynamic) scenes that are typical of those imaged by aircraft instruments. Here, for example, a glint pattern may change during the time it takes to image enough data to determine the polarization. Then the glint would be represented in only some of the data used to determine the polarization. Approaches 2 and 3 are generally difficult to align and have low throughput. Because we are imaging a dark target (the ocean) and doing so from an aircraft, we have chosen the fourth option: to use a set of synchronized cameras. There are four cameras, each with a polarizing filter in front of the fore-optics aligned at 0°, 45°, 90°, and 135°. All the information needed to determine the polarization state of the light is measured at the same instant. In front of the polarizing filters are spectral filters housed in a motorized filter wheel. The challenge of this approach arises from the requirement that the four cameras be very well aligned. Similarly, each camera must provide an accurate measure of the radiance since the desired output [Eqs. (4) and (5)] depends on the quality of all of the cameras.

To overcome these challenging requirements, we chose a very large format (4872 pixels × 3248 pixels) camera, which we aligned to within ½ pixel. We also binned pixels significantly in order to increase the effective overlap. Starting from a misregistration of ½ pixel, binning 4 pixels × 4 pixels provides an overlap of about 76%, while 16 pixels × 16 pixels gives 94%. As will be shown later, the angular widths of the NPs are on the order of 2°–3°. The 16 by 16 binning would result in an angular resolution of about 0.12°. Thus, the binning provides an increased signal-to-noise ratio (SNR) while allowing for good dynamic range and the sufficiently small effective angular resolution required to identify and accurately map out features such as NPs.
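The overlap figures quoted above follow from simple geometry. The sketch below assumes a uniform half-pixel misregistration along each axis, so an n × n superpixel shares an (n − ½)² pixel area across the cameras; this is an illustrative model, not the registration code used for the instrument.

```python
def superpixel_overlap(n, misreg=0.5):
    """Fraction of an n x n superpixel common to all four cameras,
    assuming `misreg` pixels of misregistration along each axis."""
    return (n - misreg) ** 2 / n ** 2

print(superpixel_overlap(4))   # ~0.766, the ~76% quoted above
print(superpixel_overlap(16))  # ~0.938, the ~94% quoted above
```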

A. Spectral Considerations

MSI and HSI work has shown that spectral information provides an ability to separate and quantify components of water, such as phytoplankton, CDOM, and SS. The system described here maintains some degree of spectral discrimination, while adding polarization to gain further information.

Changing color filters is most easily accomplished using a motorized rotating filter wheel. This is a common technique in multispectral imaging, and it offers the opportunity to collect a dark frame for calibration by placing an opaque (blocking) filter in one slot of the wheel. The number of spectral bands was chosen to sample the spectral range of interest, but was constrained by the time it would take to rotate through all the positions. It is important that the set of four filters at each wavelength (e.g., the four blue filters, the four green filters, etc.) be nearly identical in terms of bandpass shape and central wavelength so they match for each camera. The variation in throughput between filters of the same color can be managed by the calibration.

The center wavelengths of the filters were chosen to sample the spectral regions where the primary constituents (phytoplankton and SS) were likely to have a large effect on the DOLP for various common water types that would be sampled by the instrument. Simple bio-optical models were used to calculate the total absorption as a proxy for DOLP, since both reflectance and DOLP are proportional to absorption [19]. Thus, strong differences in reflectance portend strong changes in DOLP. Band centers at approximately 435, 555 (because of the impact of phytoplankton and CDOM), 625 (because of the impact of SS), and 750 nm were identified as the optimum band centers. The filter at 750 nm was chosen to represent a waveband that was not strongly influenced by water properties, but instead would allow the testing of the atmospheric and water surface components of the polarization signature since it is at a local absorption peak of liquid water. The polarization signature is not expected to change dramatically over the 8–20 nm bandwidths used. The bandwidth was also used to control the dynamic range of the signal across the four different filters. The filters are quite easily changed, and thus appropriate filters can be selected for different applications according to the expected water properties, for example, emphasizing the longer wavelength region in waters with high SS.

B. View Geometry Control

In an aquatic environment, the exploitation of the polarization information contained in the light field requires measurements from many different view angles—just as is done in the aerosol community with RSP and AirMSPI. This necessitates the ability to steer the sensor view direction accurately and to follow a flight plan in which the sensor maintains the desired view of the ocean surface.

Polarization changes with solar geometry and other changes to the downwelling light field, such as moving clouds. During the course of the measurements shown in Section 5, the solar zenith changed by 2° and the solar azimuth changed by 3° over the time of the measurement. The data that are shown here are adjusted frame by frame for the changing solar azimuth, since the output from this measurement depends only on the difference between the solar and view azimuths. However, such an adjustment for the zenith angle is not possible, since the absolute value of the solar zenith is the most important parameter. We decided to limit the data collection to about 15 min, which avoids unacceptable smearing in the data plots. However, it is acknowledged that detailed quantitative comparisons with models (such as those in the process of doing retrievals) may require in some cases that the data be segmented on shorter time scales—perhaps as short as a couple of minutes.

The full angular space that could be measured (and is available to models) includes the azimuth range of −180° to +180° and the zenith range of 0°–90°. Ideally, the system would be able to make measurements at any combination of those two angles, but that ability would present significant practical difficulties. The spatial scale of changes in water type, the speed of the aircraft, and instrument limitations all play a role in determining how best to map the largest meaningful portion of this space.

3. INSTRUMENT DESCRIPTION

Following the design considerations, the imaging system consists of four bore-sighted cameras, each with a polarization filter mounted in a fixed orientation, and a filter wheel containing spectral bandpass filters in front of that filter. The cameras are rigidly mounted to a 12-mm-thick aluminum plate, and the plate is then mounted to a rotation stage. The system has computer controls to synchronize the filter rotation and image capture of the four cameras. A combined global positioning system (GPS)/inertial measurement unit (IMU) system that is mounted to the stage provides the position and attitude information, which is used to control the rotation stage motion and is recorded for use in image processing. See Fig. 1 for layout details.


Fig. 1. Shown here are the four cameras (indicated by the X’s) with the tear drop-shaped filter wheels mounted on a plate, which was then mounted onto the stage. The rotation of the U-shaped yoke moves the FOV in the general “roll” direction while rotating the plate that holds the cameras is similar to changing the pitch direction. The CMIGITS is not shown in this drawing.


A. Imaging System Components

Each camera in the system measures the linear polarization state of the scene with orientations relative to each other of 0°, 45°, 90°, and 135°. The basic optical design from scene to sensor for each channel consists of a bandpass filter, polarizing filter, lens, and focal plane. The description of each element below, working outward from the focal plane, describes one channel; the other three are identically constructed.

The focal plane array (FPA) is an Imperx model IPX16M3L camera, based on Kodak’s KAI-16000 focal plane. The 12-bit monochrome interline transfer CCD has 7.4 μm square pixels in a 4872 by 3248 pixel format. In an interline array, every other line of pixels is used to transfer the charge out and is not sensitive to the incoming photons. This leaves the array with a 50% fill factor. However, typically, a microlens array (where each lens is larger than a pixel) is incorporated to restore the fill factor to close to 100%. With a read noise of about 16 electrons and a well depth of 30,000 electrons, the effective dynamic range is about 1700:1.

Each camera uses a 50 mm lens for an instantaneous field of view of about 0.14 mrad, which, when centered at the nadir, provides a GSD of 22 cm from a height of 1525 m, and a total field of view of about 39° by 26°. The lenses are operated no faster than f/4.0 to prevent stray light due to the microlens array on the focal plane. When focusing the system using an off-axis parabola, running faster than f/4 resulted in multi-peaked spots. In 2012, a standard Nikon 50 mm lens was used, which had considerable color aberration over the wavelength range of the filters. In 2013, the lenses were switched to 50 mm Schneider Optics Xenon-Emeralds, which have much better color correction. The lenses were tested for color aberration using a collimating system in which a 100 μm pinhole illuminated by a filtered halogen lamp was placed at the focus of a 0.5 m diameter, off-axis parabola (2.5 m focal length) and a “best focus” at infinity was chosen to minimize the image blur from 435 to 650 nm. The geometric size of the spot produced here would be 2 μm (about 1/3 of a pixel) on the focal plane. The original lenses yielded aberration-broadened images of a pinhole on the order of 1–2 pixels full width at half-maximum for the 550 nm filter, increasing to about 3 pixels at 435 and 650 nm, and 5 pixels at 750 nm. The new lenses yielded spot sizes on the order of 1–2 pixels over the entire wavelength range.
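As a back-of-the-envelope check of these figures, the sketch below recomputes the IFOV, nadir GSD, and field of view from the pixel pitch and focal length given above; the variable names are ours.

```python
import math

pitch, focal, alt = 7.4e-6, 50e-3, 1525.0   # pixel pitch (m), focal length (m), altitude (m)
nx, ny = 4872, 3248                          # focal plane format (pixels)

ifov = pitch / focal                         # ~0.148 mrad
gsd = ifov * alt                             # ~0.23 m at nadir
fov_x = 2 * math.degrees(math.atan(nx * pitch / (2 * focal)))  # ~39.7 deg
fov_y = 2 * math.degrees(math.atan(ny * pitch / (2 * focal)))  # ~27 deg (quoted as ~26 deg)
```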

The polarizers are a wire grid type purchased from Moxtek (model PFU04C Ultra Contrast). For light striking the front surface of the filter at normal (90°) incidence, they provide a high contrast ratio over a large range of wavelengths, with a typical ratio of 3000 at 400 nm, increasing to 10,000 at 525 nm and to over 15,000 by 600 nm, where it levels off into the near-IR. As the incident angle changes to 20° from normal, the contrast ratio is reduced by only about 10%, which is tolerable, especially given the large contrast ratios at the wavelengths being used. The polarizers are placed in a precision rotation mount, aligned in the calibration lab, and epoxied in place.

The spectral dichroic bandpass filters were selected to be near the band centers identified in the design stage (443, 565, 625, and 750 nm), yet affordable in a 2 in. diameter size. The filters used in 2012 had center wavelengths/bandpasses of 437/8, 562/15, 625/10, and 750/10 nm. That setup tended to saturate the green band while leaving the blue band relatively short of photons. The first two filters were replaced with 435/20 and 550/10 nm filters in 2013, which helped to balance the dynamic ranges of the four bands.

B. Rotation Stage

The rotation stage that is used to control pointing is a laboratory two-axis gimbal from Newmark Corporation (GM-12E), which can handle loads in excess of 20 kg. The four cameras are mounted on a plate, and the plate is then mated to the yoke of the stage (see Fig. 1). The mounting of the cameras on the plate is done in a manner that centers the physical extent and weight of the cameras as close as possible to the rotation axis. This design limits the spatial translation of the cameras that occurs as the rotation is performed. When mounted in an aircraft, the limited translation helps to maintain the cameras’ fields of view from being obstructed by the airframe, and also limits rotational inertia, which allows the stage to respond faster.

Also attached to the plate (but not shown in Fig. 1) is a C-miniature integrated GPS/INS tactical system (C-MIGITS) GPS/IMU system (Systron-Donner, Concord, California), which provides location and attitude information at 10 Hz with absolute attitude accuracy of about 1 mrad. A small, rugged computer is used to control the CMIGITS and record its data. The same data stream provides the position and attitude to the in-house-created stage software, which adjusts the pointing of the stage in real time in a predict-and-correct fashion. For this work, the software commanded the stage to maintain its pointing toward a chosen location on the surface. The location used was provided to the software by manually entering the latitude and longitude. An altitude correction equal to the World Geodetic System-84 (WGS-84) ellipsoid height of the water’s surface was also used.

The stage’s pointing accuracy is ±0.0042°, which exceeds our requirements, since the angular extent of a pixel is approximately 0.008°. The stage has a maximum slew rate of 15°/s and torque of 10.2 Nm that enables it to respond well to aircraft gyrations at the level needed for this project. However, it does not respond to high-frequency movements and thus it does not function as a stabilization platform. At times, aircraft motion-induced smear is easily detected in the images, but smear-free images are not needed for our objectives.

C. System Control

The cameras are controlled using two IO Industries’ DVR-Express Cores, with each Core controlling two cameras. The Cores enable synchronized data collection at a maximum frame rate of 3 frames/s for each of the cameras, as well as GPS time tagging. The data collection rate is limited by the speed of the filter wheel to about 1.1 Hz. A separate computer runs in-house software that synchronizes the filter wheels so that the same color filter is over all four cameras at the same time, then initiates data collection by providing a triggering pulse to the Cores. The Cores complete the data transfer to their solid-state drives. There is no direct communication between the stage software and the camera-controlling package.

D. Laboratory Calibration

1. Radiometric Calibration

The Naval Research Laboratory (NRL) maintains a National Institute of Standards and Technology (NIST) traceable calibration facility that was used to radiometrically calibrate all four cameras. A 102 cm integrating sphere, which contains 10 internal quartz halogen lamps, is used for radiance calibration. The output of the sphere is unpolarized. Integration times planned for field data collection of 8 ms (over land) or 32 ms (over water) were used to collect the calibration data. Separate calibration coefficients were calculated for each camera with each of the four spectral filters in place, totaling 16 separate calibration results.

The known radiance output from the sphere is convolved with the spectral filters to produce the expected radiance value that would be observed by the cameras for each filter. In order to keep the total intensity measured by the system correct and comparable to other systems, the cameras are individually calibrated to provide a value of ½ of the total radiance emitted by an unpolarized source. Later, when the cameras are used to determine the intensity of I0+I90 or I45+I135, the true radiance is provided. Care was taken to avoid saturation during the collection of the calibration data. The calibration uses a linear function to relate the sphere radiance to the dark corrected digital numbers provided by the camera.
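A minimal sketch of the linear calibration fit described above follows; the sphere levels, dark value, and function are hypothetical, and the factor of ½ applies the half-radiance convention just described.

```python
import numpy as np

def fit_linear_cal(dn, dark, sphere_radiance):
    """Fit radiance = gain * (DN - dark) + offset for one camera/filter
    pair. The sphere radiance is halved so each camera reports 1/2 of
    the unpolarized radiance, as described above."""
    gain, offset = np.polyfit(dn - dark, 0.5 * sphere_radiance, 1)
    return gain, offset

# Hypothetical sphere levels: digital numbers vs. band-averaged radiance
dn = np.array([410.0, 820.0, 1640.0, 3280.0])    # counts
radiance = np.array([5.0, 10.0, 20.0, 40.0])     # mW m^-2 sr^-1 nm^-1
gain, offset = fit_linear_cal(dn, dark=35.0, sphere_radiance=radiance)
```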

The long-term stability of the instrument is not known because it is still being developed and has had changes made between each of the deployments. However, comparisons of calibration data taken before and after the deployments show that the gain is 1%–2% higher after deployment, and that this is true of each of the cameras. The increase in gain is likely the result of oils and dirt coating the fore-optics during deployment.

2. Spatial Calibration

We use the term “spatial calibration” to refer to the determination of the camera model, which associates each pixel on the camera with two angles—one describing the position in the “pitch” direction and one in the “roll” direction. The geometric calibration process provides a correction for different lens distortions (such as pin cushion), focal length differences (minor in this case), and relative rotation between the four cameras. See Fig. 2 for more information.


Fig. 2. (a) The angles that are determined in the camera model are shown. (b) The angles reported by the CMIGITS relative to the NED (x,y,z) frame of reference. The view direction is down to the right and off the page, as would be seen from the aircraft looking forward. The roll angle is between the projection (projections shown as red dotted lines) of the view direction onto the yz plane and the positive z axis, the pitch angle is between the projection of the view direction onto the xz plane and the x axis, and the heading angle is between the projection of the view direction onto the xy plane and the x direction.


The spatial calibration of the entire system is performed by placing the instrument, mounted to the rotation stage, in front of a 0.5 m off-axis parabola. An illuminated pinhole is placed at the focus of the parabola. To the cameras, the image of the pinhole appears to be a point of light that comes from infinity. The image of the point can be viewed across any portion of the parabola—an area large enough to fill the apertures of all four cameras simultaneously. While viewing the image of the pinhole (which illuminates 1–2 pixels in the cameras) the stage is rotated in both axes in increments of 0.5° (about 60 pixels). At each incremental step, a small image chip is saved. This process creates a matrix of nearly 4000 points that relates pixels in each camera to the stage angles. This information is used to create a model in which a pointing vector (in the stage’s frame of reference) for each pixel in the camera is determined to be

$$\hat{u}_{ij,\mathrm{stage}} = \begin{bmatrix} \cos\rho_{ij}\sin\pi_{ij} \\ \sin\rho_{ij} \\ \cos\rho_{ij}\cos\pi_{ij} \end{bmatrix}, \tag{6}$$
where ρij and πij are the commanded roll-like and pitch-like angles, respectively, of the stage in order to illuminate the pixel numbered ij, with the pointing calibration system described above. The accuracy of this process is largely dependent on the stage precision.

When collecting data in an aircraft, the GPS/INS system provides the important attitude information relating the stage’s frame of reference to the external world frame of reference (see Fig. 2). The aircraft position can be supplied by the CMIGITS attached to the stage or by a more accurate system mounted directly to the airframe. Bore-sighting is required in order to get both good location and good attitude information from this setup, and is discussed in more detail in Section 4.B.1.

E. Estimation of Accuracy

The precision and accuracy of the products (S0, S1, S2, DOLP, and AOP) are derived from errors in the measurement of the polarized radiance (I0, I45, I90, and I135). Several distinct processes (random and systematic) have an impact on these values. Some processes, such as scattered light, variations in focus or spot size between the cameras, or slight uncorrected distortions, are hard to quantify. Fortunately, some of these errors are likely to be removed by spatial averaging, which is discussed later. Others, such as the effects of shot noise, lend themselves to being well characterized. Other error sources include the rotational position accuracy of the polarization filters and uncertainty in the calibration sources being used.

Light is scattered off of the various optical surfaces, resulting in the measurement of photons in the wrong location on the focal plane. Laboratory work was performed to estimate the magnitude of light that is scattered and its source. Contributions from photons out of the field of view were not seen. Additionally, measurements were made that used a point source to illuminate a portion of the array. With just the polarizing filters and fore-optics in place, signals were only seen in the expected locations. When the color filters were put into position, a small ghost of the point source was detected. The ghosting occurs when photons strike the focal plane and some are reflected back toward the fore-optics. Some of that light scatters off the back of the color filters and is directed back to the focal plane, where it is displaced due to the color filter not being exactly perpendicular to the optical axis. The intensity of the ghost was roughly 0.5% of the point source signal. Because the ghosting moves photons in a particular direction across the focal plane, the strongest effect on the calibration is where the gain is rapidly changing (roughly the outer 500 pixels in the cross-track direction of the FPA). The gain is changing due to some vignetting from the filter wheels. The shorter, along-track direction is not affected strongly. Because the field data is also very uniform and we mask out the outer 500 pixels, the impact of this scattered light is minimal. We expect to correct both the scattered light and the vignetting in a future version of the instrument.

Ideally, the rotational positions of the polarizing filters are precisely set to 0°, 45°, 90°, and 135°. A polarized source is used in the process of setting each filter position. The source polarization can be accurately rotated and is set to 90° from the filter that is being set. The filter is then rotated using a micrometer, and its position is set by minimizing the transmitted signal. Determining the exact location of the minimum is the difficult part of this process. The error level is determined by adjusting the position through the lowest signal level several times from both directions while monitoring the output signal. We believe that the accuracy with which the filter positions are set is no worse than 0.5°. For a simplified view of the impact of misalignment, consider a case where the DOLP is 30%. A 0.5° error in the setting of one filter (say, the 0° filter) results in an error of 1.2% in the intensity at 0° and a subsequent error of about 0.75% in the DOLP. At a DOLP of 0%, this error goes to zero.

Now we consider the systematic error associated with the uncertainty of the output radiance of the integrating sphere. The NRL integrating sphere is calibrated against an FEL standard lamp that has an uncertainty of about 1.5%. In the process of transferring this calibration to the NRL integrating sphere, an additional 1% is added to the uncertainty. So, in addition to errors from stray light, shot noise, dark noise, and filter positions, 2.5% must be added for the measurements of the intensity.

Turning to photon statistics, consider that each camera measures one polarization state of the incoming light field. That data is calibrated to at-sensor-radiance, as described in Section 3.D.1. S0, S1, and S2 are calculated from these individual measurements. Thus, the uncertainties in the individual intensity measurements are propagated into these products and continue into the DOLP and AOP through Eqs. (4) and (5).

The effect of shot noise (the noise associated with photon-counting statistics) and dark noise (noise produced by the system without any photons striking the focal plane) on these calculated values can be determined in a straightforward manner. Dark noise is the standard deviation of the measured data when the cameras are not illuminated and is approximately 3.5 counts for these cameras. The data measured using the various integrating sphere light output levels have correspondingly higher uncertainties, which are reflected in the standard deviation of the appropriate calibration data. Using this relationship, it is possible to determine the error (shot noise plus dark noise) for any radiance level for any of the cameras. That is, the noise squared can be written as

$$\sigma^2 = C\,\mathrm{signal} + \sigma_{DL}^2, \tag{7}$$
where C is the noise coefficient and σDL is the dark-level noise, both of which are determined experimentally (here, the values for C are 0.00067, 0.00088, 0.0012, and 0.0046 for the blue, green, red, and IR bands).
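In code, Eq. (7) amounts to the following; the band keys are ours, and the units of `signal` must match those in which C and the dark-level noise were determined.

```python
import math

# Noise coefficients quoted above for the blue, green, red, and IR bands.
C = {"blue": 0.00067, "green": 0.00088, "red": 0.0012, "ir": 0.0046}

def one_sigma_noise(signal, band, sigma_dl=3.5):
    """One-sigma shot-plus-dark noise for a given signal level, Eq. (7)."""
    return math.sqrt(C[band] * signal + sigma_dl ** 2)
```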

Typically, the photon statistics error contributes 1% for most of the radiances. The error associated with the position of the polarization filters also adds about 1%. So finally, after adding each of the error sources together, the measurement of the intensity, S0, for a typical radiance level has a total error of about 4.5%. The noise level for each separate measurement of the intensities can then be used to determine an estimate of the error in the products, as outlined below.

Here, we consider the error levels in the DOLP due to the photon statistics. Because the DOLP (and AOP) are both ratios of measured radiances, the uncertainty in the integrating sphere radiance levels is canceled out. Similarly, if S1 and S2 are scaled to S0, those ratios are also not impacted by this particular error source. Additionally, at low values of the DOLP, the error associated with the position of the polarization filters becomes small. In this case, the primary error in the DOLP is that resulting from the photon statistics.

The DOLP is a function of S1, S2, and S0. An approximation of the uncertainty in the DOLP can be written as [20]

$$\sigma_{\mathrm{DOLP}}^2 \approx \sum_{i=0,1,2}\left(\frac{\partial\,\mathrm{DOLP}}{\partial S_i}\,\sigma_i\right)^2, \tag{8}$$
where σ_i is the uncertainty in the measurement of S_1, S_2, or S_0, as appropriate. Of course, the actual measurements are of the intensities at 0°, 45°, 90°, and 135°, and ultimately the uncertainties in those measured quantities are what drive the error levels for the DOLP measurement. The algebra here can be simplified if one realizes the following relationships:
$$\sigma_{S_1}^2 = \sigma_{I_0}^2 + \sigma_{I_{90}}^2 = \sigma_{I_{45}}^2 + \sigma_{I_{135}}^2 = \sigma_{S_2}^2, \tag{9}$$
$$\sigma_{S_0}^2 = \tfrac{1}{4}\left(\sigma_{I_0}^2 + \sigma_{I_{90}}^2 + \sigma_{I_{45}}^2 + \sigma_{I_{135}}^2\right) = \tfrac{1}{2}\sigma_{S_1}^2 = \tfrac{1}{2}\sigma_{S_2}^2. \tag{10}$$

The identities stem from the fact that the noise variance is linear in the intensity [Eq. (7)] and that the variance of a sum of two measurements equals the variance of their difference. Thus, because the components of S1 and S2 (the intensity measurements) sum to the same value (I0 + I90 = I45 + I135), the variances of S1 and S2 (when the components are subtracted) are the same. Additionally, if S0 is written as ½ the sum of the four measured intensities, its variance is one-half the variance of S1 or S2. Using this information, Eq. (8) is rewritten as

$$\sigma_{\mathrm{DOLP}}^2 = \left(\frac{S_1\,\sigma_{S_1}}{S_0\sqrt{S_1^2+S_2^2}}\right)^2 + \left(\frac{S_2\,\sigma_{S_2}}{S_0\sqrt{S_1^2+S_2^2}}\right)^2 + \left(\frac{\sigma_{S_0}\sqrt{S_1^2+S_2^2}}{S_0^2}\right)^2, \tag{11}$$
$$\sigma_{\mathrm{DOLP}}^2 = \left(\frac{\sigma_{S_1}}{S_0}\right)^2 + \frac{1}{2}\left(\frac{\sigma_{S_1}\,\mathrm{DOLP}}{S_0}\right)^2, \tag{12}$$
$$\sigma_{\mathrm{DOLP}} = \frac{\sqrt{\sigma_{I_0}^2 + \sigma_{I_{90}}^2}}{S_0}\left(1 + \frac{\mathrm{DOLP}^2}{2}\right)^{1/2}, \tag{13}$$
where we have also made the substitution σ_S1 = √(σ_I0² + σ_I90²). Numerical modeling was used to check this approximation; it is accurate to within a couple of percent and is typically slightly larger than the numerical value. Note that the formula yields a smaller error for a higher intensity and a larger error for a higher DOLP.

As an example of the error level experienced with this system, consider a single pixel of data taken on 29 August 2014 over the Chesapeake Bay. The radiance measured in the four cameras was 9.88, 9.05, 10.1, and 10.76 mW m⁻² sr⁻¹ nm⁻¹ for I0, I45, I90, and I135, respectively. Using Eq. (7), the uncertainty in the values of the radiance can be calculated. Using Eqs. (4) and (13), the calculated DOLP for this example can be determined to be 0.0867 ± 0.0059. This is an error of 6.8%.
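This example can be reproduced numerically. In the sketch below, the per-channel radiance uncertainty (~0.083 mW m⁻² sr⁻¹ nm⁻¹) is an illustrative value back-computed from the quoted σ_DOLP rather than one taken from the calibration.

```python
import math

i0, i45, i90, i135 = 9.88, 9.05, 10.1, 10.76        # mW m^-2 sr^-1 nm^-1
s0 = 0.5 * (i0 + i45 + i90 + i135)
dolp = math.hypot(i0 - i90, i45 - i135) / s0        # Eq. (4): ~0.0867

sigma_i = 0.083                                      # assumed per-channel sigma
sigma_s1 = math.hypot(sigma_i, sigma_i)              # Eq. (9)
sigma_dolp = (sigma_s1 / s0) * math.sqrt(1 + dolp ** 2 / 2)   # Eq. (13)
print(f"DOLP = {dolp:.4f} +/- {sigma_dolp:.4f}")     # ~0.0867 +/- 0.0059
```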

As explained earlier, the data is heavily binned. The binning is a trade-off between precision and spatial/angular resolution. After binning by 4 pixels × 4 pixels, the uncertainty in the abovementioned case falls to 3.9%. Binning by 16 pixels × 16 pixels would give an error of 0.43% for this particular binned pixel. This is not the only place that averaging can occur. As the data is taken, there is often angular overlap between the frames of data. Thus, for a particular view zenith and azimuth, there may be multiple measurements that can be averaged to further reduce the uncertainty caused by counting statistics. In many cases, four frames are averaged together, further reducing the random noise by a factor of 2. For a given frame rate, the ability to perform this averaging is controlled by the aircraft’s altitude and speed. The binning also reduces the error from misregistration of the focal planes. With the additional binning, this particular example would have an error level of less than 0.25%. It should be noted here that the DOLP error level required for the retrieval of information concerning the water column contents is not firmly established. However, for context, the stated requirement for the polarimetric accuracy of the Glory mission (designed for aerosol retrievals) was less than 0.2% [21].

4. DATA COLLECTION AND PROCESSING

A. Experimental Setup and Execution of Data Collection

The setup in the aircraft is relatively straightforward. The rotation stage is mounted with its base on a vertical surface on the back of the large observation port in the main cabin of the aircraft. Referring back to Fig. 1, the stage is oriented such that the vertical direction is into the page. Due to the physical size of the stage and its location in the aircraft, the stage is limited to roll excursions of ±25° and pitch changes of ±60°.

During the initial deployments, the goal was to examine the variations in polarization caused by hydrosols, the wind-roughened sea surface, the aircraft’s altitude, and solar/view geometry over a variety of uniform waters. We covered the azimuthal space by flying several straight lines with “compass” directions relative to the azimuthal direction to the sun of approximately 0°, 20°, 45°, and 75°, all centered on a fixed, central point (see Figs. 3 and 4). We refer to this configuration as a star pattern. It normally takes about 15 min to collect these data, and together the four lines fill much of the angular space. Flying only four lines was considered a compromise to limit the time of collection so that the environmental conditions remained relatively constant. Even within the 15 min window, the portion of the images that is affected by sun glint changes by a noticeable amount.


Fig. 3. Aircraft is flown in a straight line with the stage pointing the camera’s FOV forward as the interest area is approached. The stage rotates the system downward and then toward the rear of the aircraft after the interest area has been passed. Multiple lines at different azimuth angles are flown, resulting in more than 100 individual frames of data collected for each color filter.



Fig. 4. Gray lines show the position of the plane while data was being collected. Not all of the collected data was used during the processing. The camera/stage system tracks the center location at (38.9031°N, 76.4119°W), which is approximately 10 km south of the Chesapeake Bay Bridge in Maryland.


The behavior of the rotation stage during a flight line is as follows: the stage is pointing forward (pitched up) while approaching the central point; as the aircraft flies along the flight line, the stage rotates the cameras down toward the nadir direction; when directly over the center point, the stage is pointed in the nadir direction; finally, the stage rotates aft as the aircraft moves away from the center point.

The pitch limit of 60° acted as the determining factor for when data acquisition would start and stop. At 60° and a flight altitude of 1525 m above ground level, there was approximately 2.65 km of viewing on either side of the center position, for a total data collection length of 5.3 km. At 90 knots (1 knot = 0.51 m/s), the transit takes about 120 s. During that time, 130 frames of data are taken (at the 1.1 Hz limit of the filter wheel), of which 25 are from each filter position (including the dark fifth filter). The cameras were oriented with the long (4872 pixel) side in the cross-track direction; thus each of these 25 images covers about 26° in the along-track direction. The near-nadir directions are sampled during each leg of the star pattern, but the portions of the angular space more than about 20° from the nadir are often only imaged during one leg of the pattern. However, the angular speed is slowest away from the nadir, and so it is often the case that the angular locations far from the nadir, on a single leg, are sampled by successive frames within that leg.
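The transit arithmetic above can be verified with a few lines; all inputs are values quoted in the text.

```python
import math

alt, pitch_lim = 1525.0, 60.0                         # m, deg
speed = 90 * 0.51                                     # 90 knots in m/s
half_span = alt * math.tan(math.radians(pitch_lim))   # ~2.64 km per side
transit = 2 * half_span / speed                       # ~115 s
frames = transit * 1.1                                # ~127 frames at 1.1 Hz
per_filter = frames / 5                               # ~25 per wheel position
```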

B. Data Processing

The collected image intensity data must be processed into polarimetric information (S1, S2, DOLP, and AOP), and the position and attitude information of the camera systems must be used together with the camera models determined in the calibration step to determine the view angles for each pixel on each of the cameras. The radiometric calibration starts with the data from each camera being dark-subtracted and gain-corrected. The next step rectifies the data onto a common angular grid using the camera models for each camera/spectral filter combination. The first camera with the blue filter is arbitrarily used as the base image, with the center of that camera defining the center of the field of view for the combined system. The rectified data is then used to create the data products of S0, S1, S2, DOLP, and AOP.

Additionally, the time stamp for each frame is extracted and used to determine the aircraft position and the attitude of the camera systems at the time of the collection. The GPS/INS attitude information is recorded at 10 Hz. The camera frames run at about 1.1 Hz. A linear interpolation between the two closest matching times is used to determine the attitude of the cameras when the data was imaged.
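A sketch of that interpolation follows; unwrapping the heading before interpolating (our addition) avoids a spurious jump when the heading crosses the ±180° seam.

```python
import numpy as np

def attitude_at(t_frame, t_nav, roll, pitch, heading):
    """Linearly interpolate 10 Hz attitude records to a frame time (s).
    Angles in degrees; heading is unwrapped before interpolation."""
    h_cont = np.degrees(np.unwrap(np.radians(heading)))
    r = np.interp(t_frame, t_nav, roll)
    p = np.interp(t_frame, t_nav, pitch)
    h = np.interp(t_frame, t_nav, h_cont) % 360.0
    return r, p, h
```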

1. Geometric Processing

The geometric processing uses the recorded attitude and position information together with the camera models to determine the geographic position, and the view azimuth and view zenith angles for each pixel in each frame of the data.

The reference frame (refer back to Fig. 2) of the attitude information from the CMIGITS provides a standard right-handed system with the x axis toward the nose, the y axis toward the right (starboard) wing, and the z axis down. This data gives the stage’s orientation with respect to a NED [north (x), east (y), down (z)] system. The components ux, uy, and uz of a unit vector in the NED system, from the sensor to the earth, are calculated using

$$\begin{bmatrix} u_{x,ij} \\ u_{y,ij} \\ u_{z,ij} \end{bmatrix} = R_3^{T}(\theta_3 + d\theta_3)\, R_2^{T}(\theta_2 + d\theta_2)\, R_1^{T}(\theta_1 + d\theta_1) \begin{bmatrix} \cos\rho_{ij}\sin\pi_{ij} \\ \sin\rho_{ij} \\ \cos\rho_{ij}\cos\pi_{ij} \end{bmatrix}, \tag{14}$$
where
$$R_1(\vartheta) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\vartheta & \sin\vartheta \\ 0 & -\sin\vartheta & \cos\vartheta \end{bmatrix}, \tag{15}$$
$$R_2(\vartheta) = \begin{bmatrix} \cos\vartheta & 0 & -\sin\vartheta \\ 0 & 1 & 0 \\ \sin\vartheta & 0 & \cos\vartheta \end{bmatrix}, \tag{16}$$
$$R_3(\vartheta) = \begin{bmatrix} \cos\vartheta & \sin\vartheta & 0 \\ -\sin\vartheta & \cos\vartheta & 0 \\ 0 & 0 & 1 \end{bmatrix}, \tag{17}$$
describe rotations by the angle ϑ about the x, y, and z axes, respectively, in a right-handed system.

Equation (14) yields the unit pointing vectors of each pixel in the NED system, where the i and j subscripts denote the pixel positions in the cross-track and along-track directions on the focal plane. θ1, θ2, and θ3 are the roll, pitch, and heading angles provided by the CMIGITS, and dθ1, dθ2, dθ3 are the offsets (bore-sighting corrections) needed to account for the offset between what the CMIGITS is reporting and where the cameras are actually looking. The offset values are expected to be small, on the order of a few degrees.

The view zenith angle can then be found from θ_nadir = cos⁻¹(u_z), and the view azimuth angle from φ_azi = tan⁻¹(u_y/u_x). Once this information is known, the polarimetric variables can be displayed on polar-type plots, as shown in the next section. Additionally, that information can be used with the aircraft altitude (and a digital elevation map (DEM), if needed) to determine the geographic position where the pixel projects onto the earth’s surface. The position information would be important if one were limiting the geographic extent to be considered further along in the processing. At the same time, it is straightforward to determine the phase angle, the angle between the view direction and the solar direction. The unit vector pointing from the target toward the sun is given by [sin(SA)·sin(SZ), cos(SA)·sin(SZ), −cos(SZ)], where SA is the solar azimuth and SZ is the solar zenith. The dot product between this vector and the reversed view direction of Eq. (14) (i.e., from the target back toward the sensor) yields the cosine of the phase angle. Similarly, the angle to the specular direction can be calculated by negating the x and y components of the solar vector and then following the same approach as for the phase angle.
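The geometric chain of Eqs. (14)–(17) is compact enough to sketch in full; the function below follows the axis conventions given above (bore-sight offsets omitted for brevity) and is our illustration, not the flight code.

```python
import numpy as np

def rot1(a):  # rotation about x (roll), Eq. (15)
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, s], [0, -s, c]])

def rot2(a):  # rotation about y (pitch), Eq. (16)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, -s], [0, 1, 0], [s, 0, c]])

def rot3(a):  # rotation about z (heading), Eq. (17)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])

def view_angles(rho, pi_, roll, pitch, heading):
    """View zenith/azimuth (deg) for a pixel with stage angles
    (rho, pi_), per Eq. (14). All inputs in radians."""
    u_stage = np.array([np.cos(rho) * np.sin(pi_),
                        np.sin(rho),
                        np.cos(rho) * np.cos(pi_)])
    u = rot3(heading).T @ rot2(pitch).T @ rot1(roll).T @ u_stage
    zenith = np.degrees(np.arccos(u[2]))        # z is down in NED
    azimuth = np.degrees(np.arctan2(u[1], u[0]))
    return zenith, azimuth
```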

The angular offsets are required to make sure that the correct angles are used during the calculations. In a number of images (more than 10), the shadow of the aircraft is visible. The location of the shadows must have a 0° phase angle or equivalently, the view azimuth and zenith must be equal to the solar azimuth and zenith (using the same reference—to the ground or from the ground). An optimization was performed to minimize the phase angle at the center of the shadowed regions by adjusting the angular offsets. The corrections were found to be 0.471°, –0.060°, and –1.313° for the roll, pitch and heading directions, respectively. The average error in the phase angle was 0.069°. There are several advantages of determining the offsets in this manner as opposed to the standard approach of finding ground control points and optimizing the error between the determined location and that provided by an accurate map (or by using tie points). First, there is no need to use a DEM, as the altitude of the surface on which the shadow is projected is not relevant to the calculation. Second, the accuracy of the latitude and longitude for the ground control points, taken from a map, does not limit the accuracy of the corrections.
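A minimal sketch of that optimization, reusing rot1–rot3 from the previous listing, is given below; the shadow_obs observation tuples and the use of SciPy’s Nelder–Mead minimizer are our assumptions about how such a fit could be set up.

```python
import numpy as np
from scipy.optimize import minimize

def mean_phase_angle(offsets_deg, shadow_obs):
    """Mean phase angle (deg) over shadow detections. Each element of
    `shadow_obs` is (u_stage, roll, pitch, heading, sun), with `sun`
    the unit vector from the target toward the sun in NED and all
    angles in radians."""
    d1, d2, d3 = np.radians(offsets_deg)
    angles = []
    for u_stage, r, p, h, sun in shadow_obs:
        u = rot3(h + d3).T @ rot2(p + d2).T @ rot1(r + d1).T @ u_stage
        cos_phase = np.clip(np.dot(-u, sun), -1.0, 1.0)  # -u points back at the sensor
        angles.append(np.degrees(np.arccos(cos_phase)))
    return float(np.mean(angles))

# result = minimize(mean_phase_angle, x0=[0.0, 0.0, 0.0],
#                   args=(shadow_obs,), method="Nelder-Mead")
```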

5. RESULTS

The instrument described in this paper has been deployed three times. The most recent deployment occurred over the Chesapeake Bay on 29 August 2014. In order to validate these initial results, a Monte Carlo radiative transfer code was run that simulates the polarimetric signature expected to be observed by the instrument. In particular, we compare the measured and expected location of the NPs observed in the data. The location of each NP is a sensitive test of both the instrument and the model. The Monte Carlo code is briefly described below.

As shown in Fig. 4, a “star” flight pattern was flown over a location in the Chesapeake Bay just south of the Chesapeake Bay Bridge. Data collection started at 14:32 and ended at 14:47 UTC (local time is 4 h earlier). During that time, the solar zenith angle changed from 44.9° to 42.6°. In-water measurements were taken within 1 h of the flight. The airborne data was processed as explained in Section 4. For each DOLP plot, 115 individual frames of data were corrected for the aircraft and stage pitch, roll, and heading, and overlapping angular regions were averaged where possible. The polar plots of the DOLP for each spectral channel are shown in the first column of Fig. 5.


Fig. 5. Plots show the measured and modeled DOLP for each of the four spectral bands. Here (a) is for 435 nm, (b) 550 nm, (c) 625 nm, and (d) 750 nm. The plots are linear in the DOLP. Note the movement of the center of the NP between the bands.


The azimuth angles for the measured data were adjusted for the solar azimuth so that the sun is at the top of the plot; therefore, the labeled azimuth angles are relative to the solar position. The plots may be understood by considering an observer standing at the center of the area of interest. When considering the line labeled 0° in the azimuth direction, the viewer is looking in the azimuthal direction of the sun. Moving up that 0° azimuth line, the zenith angle ranges from 0° to as much as 70°. The specular point is along that line at the solar zenith angle.

In the modeled plots in the second column (described shortly), the area near the specular point is masked out in order to conserve dynamic range in the figure. No such mask is applied to the measured data; data was collected closer to the specular point, but no result is available there because at least one of the cameras was saturated.

It is possible to collect data closer to the specular point without saturating the system by lowering the integration time. Increasing the frame rate in that situation will maintain the SNR and dynamic range.

NPs are visible off the principal plane for each of the spectral filters. In order to have an NP, the polarimetric contribution from the water and that from the atmosphere must cancel each other out. The random measurement errors in the instrument prevent a measured value of exactly zero. However, for each spectral filter, the minimum measured DOLP is within one standard deviation of zero.

To evaluate this product, we compared it to the results from a coupled atmosphere-ocean polarimetric radiative transfer code. The code is based on the semi-analytic estimation Monte Carlo method and calculates the full Stokes vector of the light field in the atmosphere and ocean. The mathematical details are beyond the scope of this paper, but can be found in Tynes et al. [22] and Zhai et al. [23].

For the results shown here, the model atmosphere includes both molecular (Rayleigh) and aerosol scatterers. Their concentrations decrease exponentially with altitude, with scale heights of 8 km for molecules and 2 km for aerosols. Aerosols are divided into two layers based on the maritime aerosol model known as MAR-1 [24]: continental-type aerosols above 2.5 km and maritime aerosols below. The overall aerosol optical depth and Ångström exponent were set with data taken from the nearest Aeronet site [Easton-MDE (Maryland Department of the Environment)] on the same day, but about 4 h after the aircraft measurements [AOD = 0.059 at 440 nm, Ångström exponent = 1.35]. Scattering from molecules in the atmosphere is described by the Rayleigh Mueller matrix with a depolarization value of ρ = 0.02794 [25]. Aerosol-scattering Mueller matrices are calculated with Mie theory based on the size distributions and indices of refraction for the constituents in each aerosol type [24].

The water surface includes the effects of wind ruffling and is modeled using the Cox–Munk wave-slope distribution. We used a constant wind speed of 3 m/s in all the calculations. The absorption and scattering coefficients in the water were measured with a WETLabs ac-s and a 0.2 μm filtered ac-9. The scattering phase functions for sea water and particles are those from Morel [26] and Petzold [27], respectively. The Mueller matrix for sea water is described with the reduced Mueller matrix for anisotropic particles with a depolarization value of 0.095.

The Mueller matrix for scattering from particles in the ocean is that from Voss and Fry [28]. The code was run with a solar zenith angle of 43.5° and the sensor located at an altitude of 1525 m above the water surface. The Stokes vector parameters were then used to calculate the DOLP.

A comparison of the angular positions of the NPs in the experimental and modeled results is shown in Table 1. The NP is expected to move because of the wavelength dependence of the atmospheric optical depth. The positions are in general agreement in the elevation angle for all bands, with both the theory and the measurements showing the angle increasing with wavelength. The azimuth angles show some discrepancies. For the blue, green, and red bands, both the modeled and measured azimuth angles decrease as the wavelength increases. The largest disagreement is the azimuth angle for the IR channel. Here, the difference is almost 20°, and the measured angle increased, as opposed to the modeled angle decreasing. The reason for the discrepancy is not fully understood and will be addressed in a later manuscript. However, it is important to remember that an NP is only a single point. Looking at the rest of the structure of the DOLP plots, one can see excellent agreement throughout much of the angular space.

Table 1. View Azimuth (First Number in Cell) and View Zenith (Second Number) Positions of the NPs

6. CONCLUSIONS

This paper describes an airborne four-camera system that makes polarimetric measurements at four different spectral bands in the VNIR portion of the spectrum. The instrument is flexible, allowing for changing of the spectral filters to address different scientific goals. Using the system with a rotating stage, which is mated to a GPS/INS system, has allowed for the measurement of the angular behavior of the polarimetric signature over various water bodies.

The instrument is demonstrated using data collected over the Chesapeake Bay. Here, the instrument appears to have sufficient accuracy, both radiometrically (on the order of 0.25% error in DOLP) and in angular resolution (0.125°), to identify and determine the positions of NPs in the data. Where it is possible to compare, the DOLP data largely shows the expected symmetry on both sides of the principal plane.

A coupled ocean–atmosphere code was run to model the expected results of the measurement. The comparison shows good agreement over much of the angular space, with some differences in the IR band, where the atmospheric parameters used in the model are the suspected cause. Future work will address this issue.

The eventual goal of the work is to use a system of this type to provide information about the characteristics of hydrosols present in the water column. Since the polarimetric measurements are also sensitive to atmospheric conditions, additional uses may include aiding the atmospheric correction of contemporaneously collected hyperspectral data.

Funding

Office of Naval Research.

Acknowledgment

The authors would like to acknowledge the efforts of Joe Rhea, who collected and then processed the in-water data used in this paper. We thank Wayne Dulaney and Craig Zinter for their efforts in establishing and maintaining the Easton-MDE Aeronet site.

REFERENCES

1. C. D. Mobley, L. K. Sundman, C. O. Davis, J. H. Bowles, T. V. Downes, R. A. Leathers, M. J. Montes, W. P. Bissett, D. D. R. Kohler, R. P. Reid, E. M. Louchard, and A. Gleason, “Interpretation of hyperspectral remote-sensing imagery by spectrum matching and look-up tables,” Appl. Opt. 44, 3576–3592 (2005). [CrossRef]  

2. C. R. McClain, G. C. Feldman, and S. B. Hooker, “An overview of the SeaWiFS project and strategies for producing a climate research quality global ocean bio-optical time series,” Deep Sea Res. Part II 51, 5–42 (2004). [CrossRef]  

3. G. Chang, K. Mahoney, A. Briggs-Whitmire, D. Kohler, C. Mobley, M. Lewis, M. Moline, E. Boss, M. Kim, W. Philpot, and T. Dickey, “The new age of hyperspectral oceanography,” Oceanography 17, 16–23 (2004). [CrossRef]  

4. G. Chang and A. L. Whitmire, “Effects of bulk particle characteristics on backscatter and optical closure,” Opt. Express 17, 2132–2142 (2009). [CrossRef]  

5. M. Chami, “Importance of the polarization in the retrieval of oceanic constituents from the remote sensing reflectance,” J. Geophys. Res. 112, C05026 (2007).

6. M. Chami and M. D. Platel, “Sensitivity of the retrieval of the inherent optical properties of marine particles in coastal waters to the directional variations and the polarization of reflectance,” J. Geophys. Res. 112, C05037 (2007).

7. J. Chowdhary, B. Cairns, M. Mishchenko, and L. Travis, “Retrieval of aerosol properties over the ocean using multispectral and multiangle photopolarimetric measurements from the Research Scanning Polarimeter,” Geophys. Res. Lett. 28, 243–246 (2001). [CrossRef]  

8. D. J. Diner, F. Xu, M. J. Garay, J. V. Martonchik, B. E. Rheingans, S. Geier, A. Davis, B. R. Hancock, V. M. Jovanovic, M. A. Bull, K. Capraro, R. A. Chipman, and S. C. McClain, “The Airborne Multiangle SpectroPolarimetric Imager (AirMSPI): a new tool for aerosol and cloud remote sensing,” Atmos. Meas. Tech. Discuss. 6, 1717–1769 (2013). [CrossRef]  

9. D. Tanré, F. M. Bréon, J. L. Deuzé, O. Dubovik, F. Ducos, P. François, P. Goloub, M. Herman, A. Lifermann, and F. Waquet, “Remote sensing of aerosols by using polarized, directional and spectral measurements within the A-Train: the PARASOL mission,” Atmos. Meas. Tech. 4, 1383–1395 (2011). [CrossRef]  

10. G. Kattawar, “Genesis and evolution of polarization of light in the ocean,” Appl. Opt. 52, 940–948 (2013). [CrossRef]  

11. K. J. Voss, A. C. R. Gleason, H. R. Gordon, G. W. Kattawar, and Y. You, “Observation of non-principal plane neutral points in the in-water upwelling polarized light field,” Opt. Express 19, 5942–5952 (2011). [CrossRef]  

12. J. T. Adam, D. J. Gray, and S. Rayner, “Observation of non-principal plane neutral points in the upwelling polarized light field above a water surface,” Appl. Opt. 51, 5387–5391 (2012). [CrossRef]  

13. J. T. Adams and G. W. Kattawar, “Neutral points in an atmosphere-ocean system. 1: upwelling light field,” Appl. Opt. 36, 1976–1986 (1997). [CrossRef]  

14. M. Chami and D. McKee, “Determination of biogeochemical properties of marine particles using above water measurements of the degree of polarization at the Brewster angle,” Opt. Express 15, 9494–9509 (2007). [CrossRef]  

15. H. Loisel, L. Duforet, D. Dessailly, M. Chami, and P. Bubuisson, “Investigation of the variations in the water leaving polarized reflectance from the POLDER satellite data over two biogeochemical contrasted oceanic areas,” Opt. Express 16, 12905–12918 (2008). [CrossRef]  

16. B. A. Hooper, B. Van Pelt, J. Z. Williams, J. P. Dugan, M. Yi, C. C. Piotrowski, and C. Miskey, “Airborne spectral polarimeter for ocean wave research,” J. Atmos. Ocean. Technol. 32, 805–815 (2015). [CrossRef]  

17. A. Ivanoff and T. H. Waterman, “Elliptical polarization of submarine illumination,” J. Mar. Res. 16, 255–282 (1958).

18. J. S. Tyo, D. L. Goldstein, D. B. Chenault, and J. A. Shaw, “Review of passive imaging polarimetry for remote sensing applications,” Appl. Opt. 45, 5453–5469 (2006). [CrossRef]  

19. A. Ibrahim, A. Gilerson, T. Harmel, A. Tonizzo, J. Chowdhary, and S. Ahmed, “The relationship between upwelling underwater polarization and attenuation/absorption ratio,” Opt. Express 20, 25662–25680 (2012). [CrossRef]  

20. J. R. Taylor, An Introduction to Error Analysis (University Science Books, 1997).

21. S. Persh, Y. J. Shaham, O. Benami, B. Cairns, M. I. Mishchenko, J. D. Hein, and B. A. Fafaul, “Ground performance measurements of the Glory Aerosol Polarimetry Sensor,” Proc. SPIE 7807, 780703 (2010). [CrossRef]

22. H. H. Tynes, G. W. Kattawar, E. P. Zege, I. L. Katsev, A. S. Prikhach, and L. I. Chaikovskaya, “Monte Carlo and multicomponent approximation methods for vector radiative transfer by use of effective Mueller matrix calculations,” Appl. Opt. 40, 400–412 (2001). [CrossRef]  

23. P. W. Zhai, G. W. Kattawar, and P. Yang, “Impulse response solution to the three-dimensional vector radiative transfer equation in atmosphere-ocean systems. I. Monte Carlo method,” Appl. Opt. 47, 1037–1047 (2008). [CrossRef]  

24. WMO International Association for Meteorology and Atmospheric Physics Radiation Commission, “A preliminary cloudless standard atmosphere for radiation computation,” World Climate Program, WCP-112 WMO/TD-#24 (World Meteorological Organization, 1986).

25. A. T. Young, “Revised depolarization correction for atmospheric extinction,” Appl. Opt. 19, 3427–3428 (1980). [CrossRef]

26. A. Morel, “Optical properties of pure water and pure sea water,” in Optical Aspects of Oceanography, N. G. Jerlov and E. S. Nielsen, eds. (Academic, 1974), pp. 1–24.

27. T. J. Petzold, Volume Scattering Functions for Selected Ocean Waters (Scripps Institution of Oceanography, 1972).

28. K. J. Voss and E. S. Fry, “Measurement of the Mueller matrix for ocean water,” Appl. Opt. 23, 4427–4439 (1984). [CrossRef]  



Figures (5)

Fig. 1. Shown here are the four cameras (indicated by the X's) with the teardrop-shaped filter wheels mounted on a plate, which was then mounted onto the stage. Rotation of the U-shaped yoke moves the FOV in the general “roll” direction, while rotating the plate that holds the cameras is similar to changing the pitch direction. The CMIGITS is not shown in this drawing.

Fig. 2. (a) The angles that are determined in the camera model. (b) The angles reported by the CMIGITS relative to the NED (x, y, z) frame of reference. The view direction is down, to the right, and off the page, as would be seen from the aircraft looking forward. The roll angle is between the projection of the view direction onto the y–z plane and the positive z axis (projections shown as red dotted lines); the pitch angle is between the projection of the view direction onto the x–z plane and the x axis; and the heading angle is between the projection of the view direction onto the x–y plane and the x direction.

Fig. 3. The aircraft is flown in a straight line with the stage pointing the cameras' FOV forward as the area of interest is approached. The stage rotates the system downward and then toward the rear of the aircraft after the area of interest has been passed. Multiple lines at different azimuth angles are flown, resulting in more than 100 individual frames of data collected for each color filter.

Fig. 4. Gray lines show the position of the plane while data was being collected. Not all of the collected data was used during the processing. The camera/stage system tracks the center location at (38.9031°N, 76.4119°W), approximately 10 km south of the Chesapeake Bay Bridge in Maryland.

Fig. 5. Plots show the measured and modeled DOLP for each of the four spectral bands: (a) 435 nm, (b) 550 nm, (c) 625 nm, and (d) 750 nm. The plots are linear in the DOLP. Note the movement of the center of the NP between the bands.


Equations (17)

S_0 = I_0 + I_{90} = I_{45} + I_{135} = \tfrac{1}{2}\left(I_0 + I_{45} + I_{90} + I_{135}\right),
S_1 = I_0 - I_{90},
S_2 = I_{45} - I_{135},
\mathrm{DOLP} = \sqrt{S_1^2 + S_2^2}\,/\,S_0,
\mathrm{AOP} = \tfrac{1}{2}\tan^{-1}\!\left(S_2 / S_1\right).
\hat{u}_{ij,\mathrm{stage}} = \begin{bmatrix} \cos\rho_{ij}\sin\pi_{ij} \\ \sin\rho_{ij} \\ \cos\rho_{ij}\cos\pi_{ij} \end{bmatrix},
\sigma^2 = C \cdot \mathrm{signal} + \sigma_{\mathrm{DL}}^2,
\sigma_{\mathrm{DOLP}}^2 \approx \sum_{i=0,1,2} \left( \frac{\partial\,\mathrm{DOLP}}{\partial S_i}\,\sigma_{S_i} \right)^2,
\sigma_{S_1}^2 = \sigma_{I_0}^2 + \sigma_{I_{90}}^2 = \sigma_{I_{45}}^2 + \sigma_{I_{135}}^2 = \sigma_{S_2}^2,
\sigma_{S_0}^2 = \tfrac{1}{4}\left(\sigma_{I_0}^2 + \sigma_{I_{90}}^2 + \sigma_{I_{45}}^2 + \sigma_{I_{135}}^2\right) = \tfrac{1}{2}\sigma_{S_1}^2 = \tfrac{1}{2}\sigma_{S_2}^2.
\sigma_{\mathrm{DOLP}}^2 = \left( \frac{S_1 \sigma_{S_1}}{S_0 \sqrt{S_1^2 + S_2^2}} \right)^2 + \left( \frac{S_2 \sigma_{S_2}}{S_0 \sqrt{S_1^2 + S_2^2}} \right)^2 + \left( \frac{\sigma_{S_0} \sqrt{S_1^2 + S_2^2}}{S_0^2} \right)^2,
\sigma_{\mathrm{DOLP}}^2 = \left( \frac{\sigma_{S_1}}{S_0} \right)^2 + \frac{1}{2}\left( \frac{\sigma_{S_1}\,\mathrm{DOLP}}{S_0} \right)^2,
\sigma_{\mathrm{DOLP}} = \frac{\sqrt{\sigma_{I_0}^2 + \sigma_{I_{90}}^2}}{S_0} \left( 1 + \frac{\mathrm{DOLP}^2}{2} \right)^{1/2},
\begin{bmatrix} u_{x,ij} \\ u_{y,ij} \\ u_{z,ij} \end{bmatrix} = R_3^T(\theta_3 + d\theta_3)\, R_2^T(\theta_2 + d\theta_2)\, R_1^T(\theta_1 + d\theta_1) \begin{bmatrix} \cos\rho_{ij}\sin\pi_{ij} \\ \sin\rho_{ij} \\ \cos\rho_{ij}\cos\pi_{ij} \end{bmatrix},
R_1(\vartheta) = \begin{bmatrix} \cos\vartheta & -\sin\vartheta & 0 \\ \sin\vartheta & \cos\vartheta & 0 \\ 0 & 0 & 1 \end{bmatrix},
R_2(\vartheta) = \begin{bmatrix} \cos\vartheta & 0 & \sin\vartheta \\ 0 & 1 & 0 \\ -\sin\vartheta & 0 & \cos\vartheta \end{bmatrix},
R_3(\vartheta) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\vartheta & -\sin\vartheta \\ 0 & \sin\vartheta & \cos\vartheta \end{bmatrix}.
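As a worked example of the error-propagation chain above, the following Python sketch evaluates the closed-form DOLP uncertainty from the four channel signals, assuming the signal-dependent noise model σ² = C·signal + σ_DL²; the default constants are placeholders, not the calibrated camera values.

import numpy as np

def sigma_dolp(i0, i45, i90, i135, c_gain=1.0, sigma_dl=2.0):
    """DOLP uncertainty from the per-channel noise model
    sigma^2 = C*signal + sigma_DL^2, propagated through
    sigma_DOLP = sqrt(sigma_I0^2 + sigma_I90^2)/S0 * sqrt(1 + DOLP^2/2).
    c_gain and sigma_dl are placeholder values."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)
    dolp = np.hypot(i0 - i90, i45 - i135) / s0
    var_i0 = c_gain * i0 + sigma_dl ** 2
    var_i90 = c_gain * i90 + sigma_dl ** 2
    return np.sqrt(var_i0 + var_i90) / s0 * np.sqrt(1.0 + 0.5 * dolp ** 2)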