Optica Publishing Group

Spatial light modulator based color polarization imaging

Open Access

Abstract

We describe a compressive snapshot color polarization imager that encodes spatial, spectral, and polarization information using a liquid crystal modulator. We experimentally show that polarization imaging is compressible by multiplexing polarization states and present the reconstruction results. This compressive camera captures the spatial distribution of four polarization and three color channels. It achieves 0.027° angular resolution, an average extinction ratio above 10^3, and reconstruction PSNR above 30 dB.

© 2015 Optical Society of America

1. Introduction

The emergence of imaging spectroscopy and imaging polarimetry has enabled applications in remote sensing [1, 2], biomedical diagnosis [3], and scattering imaging [4]. While the spectral information usually reflects the physical and chemical properties of the object [5], polarimetry analyzes surface properties by comparing different orientations of the electromagnetic field from the object. For example, it has been used in contrast enhancement and material analysis [2]. Since the polarization parameters have little physical correlation with the spectrum, measuring both can better decipher the object. An extension of polarimetry combines it with spectroscopy to achieve spectral polarization imaging, which diagnoses the spectral and polarization distribution of the scene. Recording this four-dimensional (4D) datastream requires an advanced sampling strategy.

Conventional spectral polarization imagers use sequential measurements or parallelize the recording process to sample a high-dimensional data-cube. Sequential measurement samples several two-dimensional (2D) or three-dimensional (3D) sub-datasets by scanning the scene or switching the polarization and/or bandpass filters. Such mechanical movements cause lengthy acquisition times, which limits the application to dynamic scenes. Also, motion-related errors such as beam wandering and jitter noise must be considered and minimized by system optimization [6-8]. Parallel sampling can be achieved by using multiple detectors and beam splitters to record several sub-datasets simultaneously. An alternative parallel method uses thin-film filter arrays with multichannel spectral [9] and polarization filters [10-17], known as division of focal plane (DoFP) sampling. This strategy suffers from technical challenges such as the low extinction ratio of the micro-polarizer array and the fabrication difficulty for more than 10 spectral-polarization channels. Alternatively, the computed tomography imaging channeled spectro-polarimeter (CTICS) uses a computer-generated hologram to project an unfolded Fourier-transformed spectral-polarization data-cube on the detector array [18]. Similarly, the polarization grating imaging spectro-polarimeter uses multiple polarization-sensitive gratings to project the dispersed spectral images based on their polarization [19, 20]. Although well-registered images are captured from the same aperture, the projected data-cube occupies different regions on the image plane, which reduces the effective numerical aperture.

Those strategies follow the dimension conservation of imaging techniques, which allows the detector to record only a limited number of object dimensions from the scene. The recordable dimensions must be equal to or less than the dimensions of the detector. Compressive sensing breaks this dimensional conservation by projecting the object space onto the image space with a designed modulation and then reconstructing the object through computational processing [21]. By adapting hardware to generate amplitude, phase, or temporal modulation, compressive imagers enable signal compression in spectral imaging [22], diffraction tomography [23], x-ray scattering imaging [24], and mass spectroscopy [25].

The multiplexed sampling strategy expands the temporal [26], spectral [22], or polarization [27] sensitivity in compressive optical sensing. By using a coded aperture, this strategy can be viewed as a code division multiple access (CDMA) process, which encodes each channel with a spatially independent code. The detector integrates all the encoded channels, and the information is then recovered into separate channels based on their projected code patterns. This coding strategy amplifies the difference between band-limited channels by projecting them onto their corresponding spatial positions on the detector plane. However, this modulation technique can only sense the difference between two orthogonal polarization states [27].
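As a toy illustration of this CDMA-style multiplexing (a sketch with made-up data and codes, not the paper's actual calibration patterns), each channel can be masked by its own spatially independent pseudo-random code before the detector integrates them into one image:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: four 8x8 "polarization channel" images.
channels = rng.random((4, 8, 8))

# One spatially independent pseudo-random binary code per channel
# (the CDMA analogy: each channel gets its own spreading pattern).
codes = rng.integers(0, 2, size=(4, 8, 8)).astype(float)

# The detector integrates all encoded channels into a single measurement.
measurement = np.sum(channels * codes, axis=0)

print(measurement.shape)  # (8, 8): four channels folded into one image
```

Recovery then exploits the known code patterns to separate the channels again, which is the role of the reconstruction algorithms in Section 5.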

Here we demonstrate a snapshot color polarization imager using a spatial light modulator (SLM) to encode 4D spatial, spectral, and polarization information on a 2D detector with a similar CDMA process. The SLM is an array of micro cells of parallel-aligned nematic liquid crystal on a reflecting layer [28], which provides polarization- and wavelength-dependent transmission patterns to encode the scene and multiplex it on a 2D detector. After the compressed measurement, the polarization and color information are decoded by iterative optimization algorithms. Considering the polarimetry optimization, we isolate the signal into 0°, 90°, 45°, and 135° polarization channels to resolve the linear polarization state. This linear polarimetry suffices for applications without significant circular polarization, such as most natural scenes.

2. Theory

The Stokes vector is a mathematical formalism describing the average irradiance among different polarization subsets to express the properties of the electromagnetic field [29]. Three of the Stokes parameters are related to the linear polarization state; the relationship can be derived from the following equations:

S_0 = (I_0 + I_{90} + I_{45} + I_{135})/4
S_1 = I_0 - I_{90}
S_2 = I_{45} - I_{135},
where I_0, I_{90}, I_{45}, and I_{135} represent the average intensity filtered by an ideal polarizer oriented at the corresponding angle. Conventionally, multiple filtered images with different polarization orientations have been required to recover the polarization information. Compressive sampling, on the other hand, requires only a snapshot to measure the S_0 to S_2 Stokes parameters. Here we propose using a Liquid Crystal on Silicon (LCoS) based SLM to modulate the color and polarization signal to multiplex four intensity subsets in a single measurement. In the SLM, the electrically controllable optical anisotropy of the liquid crystal (LC) is used for signal modulation. Each layer of the LC can be considered a thin birefringent material: its long axis and short axis present different refractive indices to the electromagnetic wave. Since the orientation of the LC molecules can be controlled by the applied voltage, the controllable birefringence makes the SLM a variable wave-plate. The SLM provides up to 3π of programmable phase retardation to create the wavelength-dependent polarization state rotation. We sample the vertical component of the modulated field to transfer the phase modulation into a detector-recognizable amplitude modulation, which can be described as:
I = \tfrac{1}{2}\left(S_0 - S_1\cos(2\beta(\lambda)) + S_2\sin(2\beta(\lambda))\right),
where β(λ) is the variable birefringence generated by the modulator, which is a function of wavelength. Figure 1 illustrates the amplitude modulation in different polarization and color channels. Each sub-image describes the relationship between the transmitted pixel counts and the applied voltage on the SLM. Since the applied voltage determines the birefringence of the modulator, this knowledge can be used to map the transmission code to the color-polarization signal. These polarization-dependent transmission patterns multiplex the polarized images into one encoded intensity measurement. We note that this multiplexing strategy enables the freedom of choosing the basis of the polarization state decomposition: any two orthogonal polarization states can be assigned as one pair of polarization channels to decompose the incident light. This paper chooses linear horizontal, linear vertical, linear 45°, and linear 135° as the decomposing basis to analyze all linear polarization states.
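To make the modulation model concrete, the following sketch (with hypothetical function and variable names) evaluates the amplitude-modulation equation for a given Stokes vector and retardance β:

```python
import numpy as np

def modulated_intensity(s0, s1, s2, beta):
    """Detector intensity after the SLM and polarizer, following
    I = (S0 - S1*cos(2*beta) + S2*sin(2*beta)) / 2, beta in radians."""
    return 0.5 * (s0 - s1 * np.cos(2 * beta) + s2 * np.sin(2 * beta))

# Fully linearly polarized light at azimuth theta: S0=1, S1=cos2θ, S2=sin2θ.
theta = np.deg2rad(90.0)  # vertical polarization
s0, s1, s2 = 1.0, np.cos(2 * theta), np.sin(2 * theta)

# With beta = 0, a vertically polarized input passes at full intensity.
print(modulated_intensity(s0, s1, s2, 0.0))  # 1.0
```

Varying β(λ) pixel by pixel via the applied voltage map is what turns this relation into the spatial transmission codes of Fig. 1.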


Fig. 1 The amplitude modulation tests for three color channels and two orthogonal decompositions. The horizontal axis represents the 8-bit applied voltage address on the SLM. The vertical axis is the average pixel count. (a), (b), and (c) are the responses of linear horizontal and linear vertical for the red, green, and blue channels, respectively. (d), (e), and (f) are the responses of linear 45° and linear 135°. The spectral bands of the red, green, and blue channels are 580 nm to 680 nm, 490 nm to 580 nm, and 400 nm to 490 nm, respectively.


3. System design and mathematical model

A schematic of this imager is shown in Fig. 2. This design includes the optical elements which relay, modulate, and record the characteristics of light. The scene of interest is first imaged by an objective lens (L1). Then it is relayed by the collimating lens (L2) and the imaging lens (L3) onto an intermediate image plane. A pseudo-random voltage map is applied on the SLM to generate phase retardation to encode the image. After the light is reflected by the silicon layer of the SLM and filtered by the polarizer, the applied phase retardation becomes a wavelength- and polarization-dependent amplitude modulation which is added to the scene. Finally, the modulated image is projected by the imaging lens (L4) and recorded by a color detector. The LCoS device has a considerable pre-tilt angle, which causes additional phase retardation and transfers linear polarizations into elliptical polarizations [28]. This artifact reduces the transmission contrast between two orthogonal linearly polarized images after the modulated signal is filtered by the polarizer. Here we apply a quarter wave-plate as a compensator to increase the contrast ratio. The modulation process can be represented by the following mathematical model:

g(x,y,\lambda) = f_h(x,y,\lambda)\,T_h(x,y,\lambda) + f_v(x,y,\lambda)\,T_v(x,y,\lambda),
where g represents the spectral density on the detector plane, f represents the spatial-spectral distribution of the scene, and T is the wavelength- and polarization-dependent transmission code pattern. The horizontal (h) and vertical (v) subscripts represent a given pair of orthogonal states which decompose the incident polarization. The color detector integrates the spectral density into three color channels λ_R, λ_G, and λ_B. Considering a detector array with pixel size Δ and measurement noise w, the discrete form of the measurement model becomes:
g_{mn} = \iiint \left[f_h(x,y,\lambda)T_h(x,y,\lambda) + f_v(x,y,\lambda)T_v(x,y,\lambda)\right] \mathrm{rect}\!\left(\tfrac{x}{\Delta}-m, \tfrac{y}{\Delta}-n\right) dx\,dy\,d\lambda + w_{mn}.

Finally, we denote the source spectral density discretely as f_{mnkp} and the color- and polarization-dependent code patterns as T_{mnkp}. Here m and n denote the (m, n)th spatial location, k indexes the spectral channels defined by the color filter, and p indexes the polarization subsets. We derive this measurement process in matrix form as:

g_{mn} = \sum_{k=1}^{4}\sum_{p=1}^{4} f_{mnkp}\,T_{mnkp} + w_{mn}.
Considering an M×N pixel active area on the detector, we demosaic the Bayer filter by downsampling the detector by a factor of two in each spatial dimension. Therefore, the 4D scene can be discretized as a datastream of size (M/2)×(N/2)×4×4. We depict the 4D discrete sampling function in matrix form as g = Hf + w, where H of size (M/2 · N/2 · 4) × (M/2 · N/2 · 4 · 4) represents the forward matrix of the system, f of size (M/2 · N/2 · 4 · 4) × 1 represents the object datacube discretized at the dimensions of the spectral and polarization compression ratios, and w of size (M/2 · N/2 · 4) × 1 represents the sensor noise. The forward matrix H approximates the sensing process that encodes each color and polarization channel independently to map the 4D datastream f onto the 2D measurement g.
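A minimal numerical sketch of the discrete forward model (random stand-ins for f and T; in the real system T comes from the calibration of Section 4):

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, K, P = 16, 16, 4, 4  # spatial size, color channels, polarization channels

f = rng.random((M, N, K, P))            # discretized 4D scene f_{mnkp}
T = rng.random((M, N, K, P))            # transmission code patterns T_{mnkp}
w = 0.01 * rng.standard_normal((M, N))  # sensor noise w_{mn}

# g_{mn} = sum_k sum_p f_{mnkp} T_{mnkp} + w_{mn}
g = np.einsum('mnkp,mnkp->mn', f, T) + w
print(g.shape)  # (16, 16)
```

The contraction over k and p is exactly where the 4D-to-2D compression happens: sixteen channels collapse onto one detector image.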


Fig. 2 The schematic of the compressive color polarization camera.


4. Experimental setup and system calibration

Figure 3 shows the experimental prototype of this compressive camera. The optics in this camera include a 60 mm commercial objective lens (Jenoptik), a 75 mm achromatic collimating lens (Edmund Optics), two 75 mm imaging lenses (Pentax), a broadband non-polarizing beam splitter (Newport), an achromatic quarter wave-plate (Newport), and a linear polarizer (Newport). This camera has a 25° field of view and a 12.5 mm clear aperture. The detector is a 1280×720 color camera (GuppyPro, AVT) with 4.08 μm square pixels. The SLM used in this setup is a 1920×1080 phase-only liquid crystal modulator (Pluto, Holoeye) with 8 μm pixel pitch. All of these components are aligned on optical rails (Newport). A translation stage and a lab jack provide precise horizontal, vertical, and axial alignment of the SLM to register the coding on the intermediate image plane. The applied voltage map on the SLM is an 8-bit pseudo-random pattern with 16 μm square feature size, which provides a one-to-four mapping between the code patterns and the detector pixels. The SLM has a 60 Hz refresh rate, which has the potential to be adopted in video-rate sensing. However, we use a single pattern during the modulation in order to simplify the system calibration. We note that for effective modulation, the exposure time for each measurement should cover at least one modulation period, i.e., be longer than 16.7 ms.


Fig. 3 Experimental prototype of the compressive spectral polarization camera.


The reconstruction performance is correlated with the reliability of the forward matrix H. A revised H is calibrated experimentally, which accounts for errors that could break the ideal mapping relationship, such as aberrations caused by the relay optics and sub-pixel misalignment. We record the calibrated H matrix by sequentially illuminating the camera with each of four known polarization states and recording the corresponding modulation patterns. The light source is a white-light LED filtered by a rotatable polarizer. We assign linear horizontal, linear vertical, linear 45°, and linear 135° as the calibration states because these two orthogonal pairs form the basis of all linear polarizations.

5. Reconstruction algorithms

Conventional linear inversion requires that the rank of the projection H be equal to the number of object modes f. Also, the magnitude of the noise w should be small enough to achieve a reliable reconstruction. Compressive sampling is designed to use a smaller number of measurement modes to estimate the object; thus, conventional linear inversion methods, such as the pseudo-inverse and least squares, are incapable of reconstructing the object. Modern reconstruction algorithms solve this ill-conditioned sampling problem by using convex optimization, which can effectively use finite measurement modes to estimate higher-dimensional scenes. Here we use Two-step Iterative Shrinkage/Thresholding (TwIST) [30] and Generalized Alternating Projection (GAP) [31]. TwIST usually uses regularization functions such as the L1-norm or total variation to constrain the estimation. GAP, on the other hand, transforms the estimate into a sparse domain to fulfill the sparsity requirement of compressive sampling.

5.1. TwIST

TwIST solves the optimization by using the following estimation:

\hat{f} = \arg\min_f \left\{ \tfrac{1}{2}\|g - Hf\|_2^2 + \tau H_{TV}(f) \right\},
where τ is the weighting factor of the regularization and the total variation (TV) regularizer HTV (f) is defined by:
H_{TV}(f) = \sum_l \sum_{i,j} \sqrt{[f(i+1,j,l)-f(i,j,l)]^2 + [f(i,j+1,l)-f(i,j,l)]^2},
where i, j, and l denote the two spatial dimensions and the joint dimension of color and polarization, respectively. Since the color and polarization channels are usually uncorrelated, we fold these two dimensions into one joint dimension during the convex optimization. Sparsity in the spatial gradient is enforced during the iterative object estimation. Therefore, the TV regularization estimates spatially smooth cases well, since those images gain sparsity by ignoring high-frequency spatial information. In this paper, the TwIST algorithm is used in the reconstruction with regularization weight τ ∈ [0.02, 0.05] and between 50 and 300 iterations.
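The TV regularizer above can be sketched numerically as follows (an illustrative implementation; the boundary handling shown is one of several possible conventions):

```python
import numpy as np

def tv_norm(f):
    """Isotropic total variation summed over the joint color/polarization
    dimension l, per the TwIST regularizer H_TV(f). f has shape (i, j, l)."""
    dx = f[1:, :-1, :] - f[:-1, :-1, :]  # finite differences along i
    dy = f[:-1, 1:, :] - f[:-1, :-1, :]  # finite differences along j
    return np.sum(np.sqrt(dx**2 + dy**2))

# A constant image has zero TV; introducing an edge raises it.
flat = np.ones((8, 8, 12))
edge = np.zeros((8, 8, 12))
edge[:, 4:, :] = 1.0
print(tv_norm(flat), tv_norm(edge) > 0)  # 0.0 True
```

Minimizing this term is what rewards piecewise-smooth reconstructions such as the resolution chart.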

5.2. Generalized alternating projection (GAP)

We adapt GAP, an anytime algorithm, from other applications to reconstruct the compressed spectral and polarization data. GAP produces a sequence of partial solutions that monotonically converge to the true signal (hence, anytime). In [31], no real data or application was considered, and the algorithm was improved for video and depth compressive sensing in [32]. The manner in which the GAP algorithm is employed here, as well as the application considered, is significantly different from [31], and similar to but distinct from [26, 32]. Specifically, the wavelet transformation is used globally in space in [26, 32], while in our application we use a local DCT on different blocks. In the following, we first review the underlying GAP algorithm and then show how to improve it to obtain better results for the data considered here.

GAP is used to exploit the group sparsity of the wavelet/DCT coefficients of the signal to be reconstructed. Let T_x ∈ ℝ^{n_x×n_x}, T_y ∈ ℝ^{n_y×n_y}, T_t ∈ ℝ^{n_t×n_t} be orthonormal matrices defining bases such as wavelets or the DCT. Define v = (T_t^T ⊗ T_y^T ⊗ T_x^T) f and Φ = H (T_t ⊗ T_y ⊗ T_x), where ⊗ denotes the Kronecker product. Then we can write Eq. (7) concisely as g = Φv + w, where Φ ∈ ℝ^{n_x n_y × n_x n_y n_t} with ΦΦ^T = diag(vec(\sum_{k=1}^{n_t} H_k ∘ H_k)), where ∘ denotes the element-wise product. For simplicity, from now on we ignore possible noise w. Note that g reflects one n_x × n_y compressively measured image, and f = (T_t ⊗ T_y ⊗ T_x) v is the n_x × n_y × n_t data-cube we wish to recover.

5.2.1. GAP for CS inversion

GAP solves the following problem

(v^{(t)}, \theta^{(t)}) = \arg\min_{v,\theta} \|v - \theta\|_2^2 + \lambda^{(t)}\|\theta\|_{2,1}^{\mathcal{G},\beta} \quad \text{subject to} \quad \Phi v = g,
where λ^{(t)} ≥ 0 is the Lagrange multiplier uniquely associated with C^{(t)}. Denote by λ* the multiplier associated with C*. It suffices to find a sequence {λ^{(t)}}_{t≥1} such that lim_{t→∞} λ^{(t)} = λ*.

We solve Eq. (10) by alternately projecting between v and θ. Given one, the other is solved analytically: v is a Euclidean projection of θ onto the linear manifold, while θ is the result of applying group-wise shrinkage to v. An attractive property of GAP is that, by using a special rule for updating λ^{(t)}, we only need to run a single iteration of Eq. (10) for each λ^{(t)} to make {λ^{(t)}}_{t≥1} converge to λ*. In particular, GAP starts from θ^{(0)} = 0 and computes two sequences, {θ^{(t)}}_{t≥1} and {v^{(t)}}_{t≥1}:

v^{(t)} = \theta^{(t-1)} + \Phi^T(\Phi\Phi^T)^{-1}(g - \Phi\theta^{(t-1)}),
\theta_{\mathcal{G}_k}^{(t)} = v_{\mathcal{G}_k}^{(t)} \max\left\{1 - \frac{\lambda^{(t)}\beta_k}{\|v_{\mathcal{G}_k}^{(t)}\|_2},\; 0\right\}, \quad k = 1, \ldots, m,
\text{where} \quad \lambda^{(t)} = \|v_{\mathcal{G}_{j_{m^\star+1}^{(t)}}}^{(t)}\|_2 \, \beta_{j_{m^\star+1}^{(t)}}^{-1}, \quad m^\star < m,
with (j_1^{(t)}, j_2^{(t)}, \ldots, j_m^{(t)}) a permutation of (1, 2, \ldots, m) such that \|v_{\mathcal{G}_{j_1^{(t)}}}^{(t)}\|_2 \beta_{j_1^{(t)}}^{-1} \ge \|v_{\mathcal{G}_{j_2^{(t)}}}^{(t)}\|_2 \beta_{j_2^{(t)}}^{-1} \ge \cdots \ge \|v_{\mathcal{G}_{j_m^{(t)}}}^{(t)}\|_2 \beta_{j_m^{(t)}}^{-1}.

The algorithm of Eq. (11) and Eq. (12) is referred to as generalized alternating projection (GAP) to emphasize its difference from alternating projection (AP) in the conventional sense: conventional AP produces a sequence of projections between two fixed convex sets, while GAP produces a sequence of projections between two convex sets that undergo systematic changes over the iterations. In the GAP algorithm shown in Eq. (11) and Eq. (12), the alternating projection is performed between a fixed linear manifold 𝒮_{Φ,g} and a changing weighted-ℓ_{2,1} ball B_{2,1}^{𝒢,β}(C^{(t)}), whose radius C^{(t)} is a function of the iteration number t.
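A minimal sketch of one GAP iteration following these two updates (toy dimensions, and a fixed λ in place of the paper's λ-update rule):

```python
import numpy as np

def gap_step(theta, Phi, g, lam, groups, beta):
    """One generalized alternating projection step:
    (i) Euclidean projection of theta onto the manifold {v : Phi v = g},
    (ii) group-wise shrinkage of v with group weights beta."""
    # v = theta + Phi^T (Phi Phi^T)^{-1} (g - Phi theta)
    v = theta + Phi.T @ np.linalg.solve(Phi @ Phi.T, g - Phi @ theta)
    theta_new = np.zeros_like(v)
    for k, idx in enumerate(groups):
        norm = np.linalg.norm(v[idx])
        scale = max(1.0 - lam * beta[k] / norm, 0.0) if norm > 0 else 0.0
        theta_new[idx] = v[idx] * scale
    return v, theta_new

rng = np.random.default_rng(2)
Phi = rng.standard_normal((6, 12))      # 2x undersampled toy projection
g = rng.standard_normal(6)
groups = [np.arange(i, i + 4) for i in range(0, 12, 4)]  # three groups of 4
v, theta = gap_step(np.zeros(12), Phi, g, lam=0.1,
                    groups=groups, beta=np.ones(3))
print(np.allclose(Phi @ v, g))  # True: the projection lands on the manifold
```

In the camera's case ΦΦ^T is diagonal, so the `solve` reduces to element-wise reciprocals, which is what makes the recovery fast.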

5.2.2. Extension of GAP for the proposed camera

The diagonalization of ΦΦ^T, a consequence of the hardware implementation of the proposed camera, is the key to fast GAP recovery: the inversion of ΦΦ^T in Eq. (11) requires only the reciprocals of the diagonal elements. In our experiments, we use block-wise reconstruction. More specifically, we partition the 3D cube into overlapping 3D blocks (b_x × b_y × 4), invert each block independently, and then average the results. This permits parallel computation. Best results are found with b_x = b_y = 16 and T_x, T_y, T_t corresponding to the DCT. The weights of the DCT are similar to the temporal weights used in [32], and each group in GAP is 2 × 2 × 1. There is no grouping in the polarization domain because the four reconstructed polarization channels do not necessarily share common information.
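The overlapping-block partition and averaging can be sketched as follows (the per-block GAP inversion is replaced by an identity stand-in so the example stays self-contained; block size and step are illustrative):

```python
import numpy as np

def block_average_identity(x, b=16, step=8):
    """Partition x into overlapping b-by-b blocks, 'reconstruct' each
    independently (identity here, standing in for per-block GAP),
    and average the overlapping results back together."""
    out = np.zeros_like(x, dtype=float)
    weight = np.zeros_like(x, dtype=float)
    H, W = x.shape
    for i in range(0, H - b + 1, step):
        for j in range(0, W - b + 1, step):
            block = x[i:i + b, j:j + b]   # per-block inversion goes here
            out[i:i + b, j:j + b] += block
            weight[i:i + b, j:j + b] += 1.0
    return out / np.maximum(weight, 1)

x = np.arange(32 * 32, dtype=float).reshape(32, 32)
print(np.allclose(block_average_identity(x), x))  # True: averaging is consistent
```

Because each block is processed independently, the inner loop parallelizes trivially, which is the time saving noted in the conclusion.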

6. Experimental results

The experimental results are presented in this section. A resolution chart, color bricks, and a parking-lot scene were used as examples to examine the compressive sampling and reconstruction ability. We chose the reconstruction algorithm based on the spatial features of the object: TwIST was used for spatially smooth objects, such as the resolution chart, while GAP was used for objects with spatial complexity, including the toys and the natural scene. The reconstruction recovers the linear horizontal, linear vertical, linear 45°, and/or linear 135° polarization channels with red, green, and blue color channels.

The first experiment measured the color polarization image of a 1951 USAF resolution chart. A linear polarizer was mounted in front of the light source to control the incident polarized white light. Figure 4 shows the detector measurement of the resolution chart. As shown in the image, the multiplexing process encoded the image with the pattern. Figure 5 shows the reconstruction of this compressed measurement in three color and four polarization channels. The noisy modulation pattern which appears in Fig. 4 has been removed, and the sharp edges are now visually reconstructed. The brightness of each image reveals its relative irradiance among the polarization channels, normalized to the maximum value. Since the illumination is linearly polarized in the vertical direction, the linear vertical (90°) channels have the highest brightness, the linear horizontal (0°) channels are close to zero brightness, and the other two channels have half the brightness of the linear vertical channels.


Fig. 4 The detector measurement of a negative 1951 resolution chart which is under a linear vertical polarized illumination. The phase retardation generated by the SLM provided a polarization and wavelength dependent transmission pattern to the scene.



Fig. 5 The spectral and polarization reconstruction of the compressive sampling. The reconstruction includes red, green, and blue colors combined with linear horizontal (0°), linear vertical (90°), linear 45°, and linear 135° polarization channels. Their brightness follows the normalized irradiance. The color representations use MATLAB-generated pseudo-color.


The second experiment validated the camera's reconstruction ability as an imaging polarimeter. We rotated the polarizer to different azimuth angles to change the incident polarized light. Figures 6 and 7 show the distributions of the reconstructed S_1 and S_2 Stokes parameters in the green channel under a series of incident polarizations. The reconstructed values vary with the incident polarization state and follow the theoretical values of the S_1 and S_2 Stokes parameters.


Fig. 6 The S_1 Stokes parameter reconstructions of the green channel under different incident polarization states. The azimuth angles of the polarizer are 0° [(a) upper left, S_1 = 1], 30° [(b) upper middle, S_1 = 0.5], 45° [(c) upper right, S_1 = 0], 60° [(d) bottom left, S_1 = −0.5], 90° [(e) bottom middle, S_1 = −1], and 135° [(f) bottom right, S_1 = 0]. The average values are 0.79, 0.31, 0, −0.35, −0.82, and −0.04, corresponding to the theoretical S_1 = 1, 0.5, 0, −0.5, −1, and 0, respectively.



Fig. 7 The S_2 Stokes parameter reconstructions of the green channel under different incident polarization states. The azimuth angles of the polarizer are 0° [(a) upper left, S_2 = 0], 30° [(b) upper middle, S_2 = 0.87], 45° [(c) upper right, S_2 = 1], 60° [(d) bottom left, S_2 = 0.87], 90° [(e) bottom middle, S_2 = 0], and 135° [(f) bottom right, S_2 = −1]. The average values are 0.03, 0.67, 0.84, 0.64, 0.01, and −0.69, corresponding to the theoretical S_2 = 0, 0.87, 1, 0.87, 0, and −1, respectively.


The experimental results for spatially and spectrally complex scenes are shown in Figs. 8 and 9. Polarization-filtered toys and a parking-lot scene were captured and reconstructed. Each figure includes an unpolarized reference color image, a monochrome coded image representing the compressed measurement, polarized color images serving as references for the polarization channels, and pseudo-color demodulated images depicting the reconstructed channels. Both scenes were reconstructed with patch-based GAP using a spatial DCT basis. In the reconstruction, each color channel was reconstructed separately, and Bayer-filter demosaicing was used to generate the pseudo-color estimations.


Fig. 8 The measurement, the references, and the reconstruction of toys filtered by two orthogonal sheet polarizers. (a) The azimuth angles of the two sheet polarizers; the left polarizer is vertical and the right polarizer is horizontal. (b) An unpolarized reference of the scene. (c) The compressed measurement. (d), (e), and (f) are the references of the linear horizontal, linear vertical, and linear 45° polarized color images measured by the same detector with a rotatable polarizer. (g), (h), and (i) are the reconstructed images of the linear horizontal, linear vertical, and linear 45° polarized color channels. Notice that the left side of the image is linearly vertically polarized and the right side is linearly horizontally polarized.



Fig. 9 The measurement, the references, and the reconstruction of the parking-lot scene. (a) An unpolarized reference of the scene. (b) The compressed measurement. (c), (d), (e), and (f) are the reference images of the linear horizontal, linear vertical, linear 45°, and linear 135° polarized color images measured by a color camera with a sheet polarizer. (g), (h), (i), and (j) are the reconstructed images of the linear horizontal, linear vertical, linear 45°, and linear 135° polarized color channels. Notice that the reflections on the windows and on the rear screens of the cars are polarized.


We note that the errors in the recovered images may be caused by several factors. First, calibration error in the forward matrix H could reduce the quality of the reconstruction. Such errors could come from beam deviation, azimuth-angle misalignment of the rotatable polarizer during calibration, and insufficient light-source uniformity. Using a better-calibrated H matrix could reduce such errors. Second, the patch-based DCT regularization function might generate additional spatial noise to compensate for the color and polarization reconstruction in Figs. 8 and 9. A potential solution to improve the spatial reconstruction is to adopt a 2D TV regularizer to increase the spatial smoothness of the recovered images.

Figure 10 compares the spatial resolution of an unmodulated image and a reconstructed image to estimate the resolution degradation caused by the compressive measurement. The camera's angular resolution can be estimated from the minimum resolvable line width on the resolution chart divided by the object distance. Since the object was placed 323.5 mm in front of the system, the angular resolution of the normal image measured by the camera is 0.024°, while the angular resolution of the compressive measurement is degraded to 0.027°.
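This arithmetic can be reproduced from the USAF-1951 chart convention, where the resolution in line pairs/mm is 2^(group + (element−1)/6); small differences from the quoted 0.024°/0.027° may come from rounding or from the exact line-width convention used:

```python
import math

def usaf_line_pair_width_mm(group, element):
    """USAF-1951 resolution is 2**(group + (element-1)/6) line pairs/mm;
    the line-pair width is its reciprocal."""
    return 1.0 / 2 ** (group + (element - 1) / 6)

distance_mm = 323.5  # object distance from the text

# Finest resolvable elements reported in Fig. 10: group 2 elements 6 and 5.
for g, e in [(2, 6), (2, 5)]:
    width = usaf_line_pair_width_mm(g, e)
    angle = math.degrees(math.atan(width / distance_mm))
    print(f"group {g} element {e}: {angle:.3f} deg")
```

The two computed angles land near 0.025° and 0.028°, consistent in magnitude with the reported 0.024° and 0.027°.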


Fig. 10 The spatial resolution test. The test target is a negative USAF 1951 resolution chart which was illuminated by vertically polarized light. (a) The reference image which is recorded by the same camera without modulation. (b) The reconstructed image in the linear vertical channel. The finest resolvable line pairs are group 2 element 6 and group 2 element 5 in (a) and (b), respectively. The corresponding angular resolution is 0.024° in unmodulated reference image; and 0.027° in reconstructed compressive measurement. Both images have a 60×60 pixels spatial resolution.


The extinction ratio, defined as the ratio of maximum to minimum transmission, is an important parameter of a polarization camera. The measurement in Fig. 5 gives extinction ratios of 2418.5, 664.3, and 556.1 for the red, green, and blue channels, respectively. This camera has a 1075.5 overall extinction ratio, which is about 10 times higher than that of a micropolarizer array. We note that the low extinction ratio in the blue channel might be due to the low visibility and high gain value in that channel, which provide a poor signal-to-noise ratio for the reconstruction.

We present the peak signal-to-noise ratio (PSNR) of all four polarization channels over multiple measurement frames to evaluate the reconstruction stability of the camera (Fig. 11). The stationary object is a resolution chart under vertically polarized illumination. Since the object has a constant spatial, polarization, and color distribution in all measurement frames, the average reconstruction was used as the ground truth. All of the polarization channels provide stable, low-noise reconstructions with PSNR usually higher than 30 dB. This result shows that the reconstruction algorithm is robust enough to provide stable estimations. We note that the linear vertical channel has the highest PSNR since the test target was illuminated by linearly vertically polarized light, so this channel has the best signal-to-noise ratio.
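The PSNR metric can be sketched as follows (a generic definition with an assumed peak value of 1; the paper does not specify its exact normalization):

```python
import numpy as np

def psnr(estimate, reference, peak=1.0):
    """Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE)."""
    mse = np.mean((estimate - reference) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(3)
truth = rng.random((64, 64))
noisy = truth + 0.01 * rng.standard_normal((64, 64))

# Noise with ~1e-4 mean-square power gives a PSNR around 40 dB.
print(psnr(noisy, truth) > 30)  # True
```

Here each reconstructed frame plays the role of `noisy` and the frame-averaged reconstruction plays the role of `truth`.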


Fig. 11 The PSNR for the reconstruction stability. The object is a stationary resolution chart illuminated by vertically polarized light. Each point represents the average PSNR of one reconstructed frame. The mean PSNRs are 32.5, 38.0, 33.8, and 33.7 dB for linear horizontal, linear vertical, linear 45°, and linear 135°, respectively.


7. Conclusion

This single-shot color polarization imager presents an integration of compressive, multiplexed color and polarization sampling. It uniquely encodes and decomposes the scene into polarization images by using the phase modulation of an SLM, which provides extra polarization sensitivity compared to a conventional detector. The color-polarization compression eliminates the mechanical movements that hinder conventional polarimetry. We have also presented a patch-based GAP algorithm combined with a spatial DCT basis to estimate scenes with relatively complex color polarization distributions (Figs. 8 and 9). This algorithm requires no dictionary training or prior information about the object scene. Also, its patch-based character enables the inverse estimation to be processed in parallel, which saves reconstruction time. Finally, the experimental results show high extinction ratios, clear spatial resolution, and stable, low-noise reconstructions. Future applications will use the polarization sensitivity to analyze surface information, such as curvature and roughness.

We note that the spectral compression ratio of this camera could be extended by applying a side color camera to record un-coded, high spatial resolution images [33]. Increasing the thickness of the LC cell or adding dispersive prisms to the system to enhance the complexity of the spectral modulation could also improve the quality of the spectral compression and reconstruction.

Since the SLM is an active modulating device, switching multiple SLM frames per integration may compress the scene in the temporal domain to achieve compressive polarization video [26]. Such polarization-temporal compression must modulate C SLM frames in every measurement to acquire C times the temporal resolution. This temporal coding strategy should provide proof-of-concept estimations describing 5D data-cubes f(x, y, λ, p, t) under the same camera design. Future high-dimensional compressive sampling implementations may also adopt the SLM to gain polarization and/or temporal sensitivity.

Acknowledgments

This work was supported by the Comprehensive Space-Object Characterization Using Spectrally Compressive Polarimetric program at the Air Force Office of Scientific Research, grant FA9550-11-1-0194. Note: LEGO is a trademark of The LEGO Group, which is not overseeing, involved with, or responsible for this activity, product, or service.

References and links

1. W. G. Egan, “Polarization in remote sensing,” Proc. SPIE 1747, 2–48 (1992). [CrossRef]  

2. J. S. Tyo, D. L. Goldstein, D. B. Chenault, and J. A. Shaw, “Review of passive imaging polarimetry for remote sensing applications,” Appl. Opt. 45, 5453–5469 (2006). [CrossRef]   [PubMed]  

3. R. G. Nadeau, W. Groner, J. W. Winkelman, A. G. Harris, C. Ince, G. J. Bouma, and K. Messmer, “Orthogonal polarization spectral imaging: A new method for study of the microcirculation,” Nat. Med. 5, 1209–1212 (1999). [CrossRef]   [PubMed]  

4. J. S. Tyo, M. P. Rowe, E. N. Pugh Jr., and N. Engheta, “Target detection in optically scattering media by polarization-difference imaging,” Appl. Opt. 35, 1855–1870 (1996). [CrossRef]   [PubMed]  

5. W. Smith, D. Zhou, F. Harrison, H. Revercomb, A. Larar, A. Huang, and B. Huang, “Hyperspectral remote sensing of atmospheric profiles from satellites and aircraft,” Proc. SPIE 4151, 94–102 (2001). [CrossRef]  

6. J. E. Ahmad and Y. Takakura, “Error analysis for rotating active Stokes-Mueller imaging polarimeters,” Opt. Lett. 31, 2858–2860 (2006). [CrossRef]   [PubMed]  

7. A.-B. Mahler, D. Diner, and R. Chipman, “Analysis of static and time-varying polarization errors in the multiangle spectropolarimetric imager,” Appl. Opt. 50, 2080–2087 (2011). [CrossRef]   [PubMed]  

8. J. S. Tyo and H. Wei, “Optimizing imaging polarimeters constructed with imperfect optics,” Appl. Opt. 45, 5497–5503 (2006). [CrossRef]   [PubMed]  

9. B. Bayer, “Color imaging array,” U.S. Patent 4,054,906 (20 July 1976).

10. J. J. Peltzer, K. A. Bachman, J. W. Rose, P. D. Flammer, T. E. Furtak, R. T. Collins, and R. E. Hollingsworth, “Ultracompact fully integrated megapixel multispectral imager,” Proc. SPIE 8364, 83640O (2012). [CrossRef]  

11. X. Zhao, F. Boussaid, A. Bermak, and V. G. Chigrinov, “High-resolution thin guest-host micropolarizer arrays for visible imaging polarimetry,” Opt. Express 19, 5565–5573 (2011). [CrossRef]   [PubMed]  

12. G. Myhre, A. Sayyad, and S. Pau, “Patterned color liquid crystal polymer polarizers,” Opt. Express 18, 27777–27786 (2010). [CrossRef]  

13. X. Zhao, F. Boussaid, A. Bermak, and V. G. Chigrinov, “Thin Photo-Patterned Micropolarizer Array for CMOS Image Sensors,” IEEE Circuit. Devic. 21, 805–807 (2009).

14. V. Gruev, A. Ortu, N. Lazarus, J. Van der Spiegel, and N. Engheta, “Fabrication of a dual-tier thin film micropolarization array,” Opt. Express 15, 4994–5007 (2007). [CrossRef]   [PubMed]  

15. J. Guo and D. Brady, “Fabrication of thin-film micropolarizer arrays for visible imaging polarimetry,” Appl. Opt. 39, 1486–1492 (2000). [CrossRef]  

16. J. Guo and D. Brady, “Fabrication of high-resolution micropolarizer array,” Opt. Eng. 36, 2268–2271 (1997). [CrossRef]  

17. X. Zhao, X. Pan, X. Fan, P. Xu, A. Bermak, and V. G. Chigrinov, “Patterned dual-layer achromatic micro-quarter-wave-retarder array for active polarization imaging,” Opt. Express 22, 8024–8034 (2014). [CrossRef]   [PubMed]  

18. D. Sabatke, A. Locke, E. L. Dereniak, M. Descour, J. Garcia, T. Hamilton, and R. W. McMillan, “Snapshot imaging spectropolarimeter,” Opt. Eng. 41, 1048–1054 (2002). [CrossRef]  

19. J. Kim and M. J. Escuti, “Snapshot imaging spectropolarimeter utilizing polarization gratings,” Proc. SPIE 7086, 708603 (2008). [CrossRef]  

20. C. Oh and M. J. Escuti, “Achromatic diffraction from polarization gratings with high efficiency,” Opt. Lett. 33, 2287–2289 (2008). [CrossRef]   [PubMed]  

21. D. J. Brady, Optical imaging and spectroscopy (Wiley-Interscience, 2009). [CrossRef]  

22. M. Gehm, R. John, D. J. Brady, R. Willett, and T. Schulz, “Single-shot compressive spectral imaging with a dual-disperser architecture,” Opt. Express 15, 14013–14027 (2007). [CrossRef]   [PubMed]  

23. D. J. Brady, K. Choi, D. L. Marks, R. Horisaki, and S. Lim, “Compressive Holography,” Opt. Express 17, 13040–13049 (2009). [CrossRef]   [PubMed]  

24. K. MacCabe, K. Krishnamurthy, A. Chawla, D. Marks, E. Samei, and D. Brady, “Pencil beam coded aperture x-ray scatter imaging,” Opt. Express 20, 16310–16320 (2012). [CrossRef]  

25. E. X. Chen, M. Gehm, R. Danell, M. Wells, J. T. Glass, and D. Brady, “Compressive Mass Analysis on Quadrupole Ion Trap Systems,” J. Am. Soc. Mass Spectr. 25, 1295–1307 (2014). [CrossRef]  

26. P. Llull, X. Liao, X. Yuan, J. Yang, D. Kittle, L. Carin, G. Sapiro, and D. J. Brady, “Coded aperture compressive temporal imaging,” Opt. Express 21, 10526–10545 (2013). [CrossRef]   [PubMed]  

27. T. H. Tsai and D. J. Brady, “Coded aperture snapshot spectral polarization imaging,” Appl. Opt. 52, 2153–2161 (2013). [CrossRef]   [PubMed]  

28. W. Osten and N. Reingand, Optical imaging and metrology advanced technologies (Wiley-VCH, 2012). [CrossRef]  

29. D. Goldstein, Polarized Light, 2nd ed. (Marcel Dekker, 2003).

30. J. Bioucas-Dias and M. Figueiredo, “A new twist: two-step iterative shrinkage/thresholding for image restoration,” IEEE T. Image Process. 16, 2992–3004 (2007). [CrossRef]  

31. X. Liao, H. Li, and L. Carin, “Generalized Alternating Projection for Weighted ℓ2,1 Minimization with Applications to Model-based Compressive Sensing,” SIAM J. Imaging Sci. 7(2), 797–823 (2014). [CrossRef]  

32. X. Yuan, P. Llull, X. Liao, J. Yang, D. Brady, G. Sapiro, and L. Carin, “Low-Cost Compressive Sensing for Color Video and Depth,” Proc. CVPR IEEE, (2014).

33. X. Yuan, T. H. Tsai, R. Zhu, P. Llull, D. J. Brady, and L. Carin, “Compressive Hyperspectral Imaging with Side Information,” IEEE J. Sel. Top. Signa. (to be published).

Figures (11)

Fig. 1 The amplitude modulation tests for three color channels and two orthogonal decompositions. The horizontal axis represents the 8-bit applied voltage address on the SLM. The vertical axis is the average pixel count. (a), (b), and (c) are the linear horizontal and linear vertical responses of the red, green, and blue channels, respectively. (d), (e), and (f) are the linear 45° and linear 135° responses. The spectral bands of the red, green, and blue channels are 580 nm to 680 nm, 490 nm to 580 nm, and 400 nm to 490 nm, respectively.
Fig. 2 The schematic of the compressive color polarization camera.
Fig. 3 Experimental prototype of the compressive spectral polarization camera.
Fig. 4 The detector measurement of a negative 1951 resolution chart under linear vertically polarized illumination. The phase retardation generated by the SLM applied a polarization- and wavelength-dependent transmission pattern to the scene.
Fig. 5 The spectral and polarization reconstruction of the compressive sampling. The reconstruction includes red, green, and blue colors combined with linear horizontal (0°), linear vertical (90°), linear 45°, and linear 135° polarization channels. Their brightness follows the normalized irradiance. The color representations use Matlab-generated pseudo-color.
Fig. 6 The S1 Stokes parameter reconstructions of the green channel under different incident polarization states. The azimuth angles of the polarizer are 0° [(a) upper left, S1 = 1], 30° [(b) upper middle, S1 = 0.5], 45° [(c) upper right, S1 = 0], 60° [(d) bottom left, S1 = −0.5], 90° [(e) bottom middle, S1 = −1], and 135° [(f) bottom right, S1 = 0]. The average values are 0.79, 0.31, 0, −0.35, −0.82, and −0.04, corresponding to S1 = 1, 0.5, 0, −0.5, −1, and 0, respectively.
Fig. 7 The S2 Stokes parameter reconstructions of the green channel under different incident polarization states. The azimuth angles of the polarizer are 0° [(a) upper left, S2 = 0], 30° [(b) upper middle, S2 = 0.87], 45° [(c) upper right, S2 = 1], 60° [(d) bottom left, S2 = 0.87], 90° [(e) bottom middle, S2 = 0], and 135° [(f) bottom right, S2 = −1]. The average values are 0.03, 0.67, 0.84, 0.64, 0.01, and −0.69, corresponding to S2 = 0, 0.87, 1, 0.87, 0, and −1, respectively.
Fig. 8 The measurement, the references, and the reconstruction of toys filtered by two orthogonal sheet polarizers. (a) The azimuth angles of the two sheet polarizers: the left polarizer is vertical and the right polarizer is horizontal. (b) An un-polarized reference of the scene. (c) The compressed measurement. (d), (e), and (f) are the references of linear horizontal, linear vertical, and linear 45° polarized color images measured by the same detector with a rotatable polarizer. (g), (h), and (i) are the reconstructed images of linear horizontal, linear vertical, and linear 45° polarized color images. Notice that the left side of the image is linear vertically polarized and the right side is linear horizontally polarized.
Fig. 9 The measurement, the references, and the reconstruction of a scene in a parking lot. (a) An un-polarized reference of the scene. (b) The compressed measurement. (c), (d), (e), and (f) are the reference images of linear horizontal, linear vertical, linear 45°, and linear −45° polarized color images measured by a color camera with a sheet polarizer. (g), (h), (i), and (j) are the reconstructed images of linear horizontal, linear vertical, linear 45°, and linear 135° polarized color images. Notice that the reflections on the windows and on the rear screens of the cars are polarized.
Fig. 10 The spatial resolution test. The test target is a negative USAF 1951 resolution chart illuminated by vertically polarized light. (a) The reference image recorded by the same camera without modulation. (b) The reconstructed image in the linear vertical channel. The finest resolvable line pairs are group 2 element 6 and group 2 element 5 in (a) and (b), respectively. The corresponding angular resolution is 0.024° in the unmodulated reference image and 0.027° in the reconstructed compressive measurement. Both images have a 60×60-pixel spatial resolution.
Fig. 11 The PSNR for the reconstruction stability. The object is a stationary resolution chart illuminated by vertically polarized light. Each point represents the average PSNR of one reconstructed frame. The mean PSNR values are 32.5, 38.0, 33.8, and 33.7 for linear horizontal, linear vertical, linear 45°, and linear 135°, respectively.

Equations (13)


$$S_0 = (I_0 + I_{90} + I_{45} + I_{135})/4$$
$$S_1 = I_0 - I_{90}$$
$$S_2 = I_{45} - I_{135},$$
$$I = \frac{1}{2}\left(S_0 - S_1\cos(2\beta(\lambda)) + S_2\sin(2\beta(\lambda))\right),$$
$$g(x,y,\lambda) = f_{\parallel}(x,y,\lambda)\,T_{\parallel}(x,y,\lambda) + f_{\perp}(x,y,\lambda)\,T_{\perp}(x,y,\lambda),$$
$$g_{mn} = \iiint \left[f_{\parallel}(x,y,\lambda)T_{\parallel}(x,y,\lambda) + f_{\perp}(x,y,\lambda)T_{\perp}(x,y,\lambda)\right] \mathrm{rect}\!\left(\frac{x}{\Delta}-m,\ \frac{y}{\Delta}-n\right)\, dx\, dy\, d\lambda + w_{mn}.$$
$$g_{mn} = \sum_{k=1}^{4}\sum_{p=1}^{4} f_{mnkp}\, T_{mnkp} + w_{mn}.$$
$$\hat{f} = \arg\min_{f}\left\{\frac{1}{2}\|g - Hf\|_2^2 + \tau\, H_{\mathrm{TV}}(f)\right\},$$
$$H_{\mathrm{TV}}(f) = \sum_{l}\sum_{i,j}\sqrt{\left[f(i+1,j,l) - f(i,j,l)\right]^2 + \left[f(i,j+1,l) - f(i,j,l)\right]^2},$$
$$(v^{(t)}, \theta^{(t)}) = \arg\min_{v,\theta}\ \|v - \theta\|_2^2 + \lambda^{(t)}\|\theta\|_{2,1}^{\mathcal{G},\beta} \quad \text{subject to } \Phi v = y,$$
$$v^{(t)} = \theta^{(t-1)} + \Phi^{T}(\Phi\Phi^{T})^{-1}(y - \Phi\theta^{(t-1)}),$$
$$\theta^{(t)}_{\mathcal{G}_k} = v^{(t)}_{\mathcal{G}_k}\, \max\left\{1 - \frac{\lambda^{(t)}\beta_k}{\|v^{(t)}_{\mathcal{G}_k}\|_2},\ 0\right\}, \quad k = 1,\dots,m,$$
where $\lambda^{(t)} = \|v^{(t)}_{\mathcal{G}_{j^{(t)}_{m^{\star}+1}}}\|_2\, \beta^{-1}_{j^{(t)}_{m^{\star}+1}}, \quad m^{\star} < m.$
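The Stokes relations above map directly onto the four reconstructed polarization channels. A minimal sketch, assuming the channel images are NumPy arrays (the function name is illustrative, and the /4 normalization follows Eq. (1) as printed):

```python
import numpy as np

def stokes_from_channels(I0, I45, I90, I135):
    """Linear Stokes parameters from the four reconstructed polarization
    channel images, following Eqs. (1)-(3)."""
    S0 = (I0 + I90 + I45 + I135) / 4.0   # normalization as printed in Eq. (1)
    S1 = I0 - I90                        # horizontal minus vertical
    S2 = I45 - I135                      # 45 deg minus 135 deg
    return S0, S1, S2

# Example: purely vertically polarized light. All power appears in the
# 90 deg channel; the 45 deg and 135 deg channels each see half of it.
I0   = np.zeros((2, 2))
I90  = np.ones((2, 2))
I45  = np.full((2, 2), 0.5)
I135 = np.full((2, 2), 0.5)
S0, S1, S2 = stokes_from_channels(I0, I45, I90, I135)
# S1 = -1 (vertical) and S2 = 0 at every pixel, as in Fig. 6(e) / Fig. 7(e).
```

This per-pixel computation is what produces the S1 and S2 maps reported in Figs. 6 and 7.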