
Eavesdropping of display devices by measurement of polarized reflected light

Open Access

Abstract

Display devices, or displays, such as those utilized extensively in cell phones, computer monitors, televisions, instrument panels, and electronic signs, are polarized light sources. Most displays are designed for direct viewing by human eyes, but polarization imaging of reflected light from a display can also provide valuable information. These indirect (reflected/scattered) photons, which often lie outside the direct field of view and are mixed with photons from the ambient light, can be extracted to infer information about the content on the display devices. In this work, we apply Stokes algebra and Mueller calculus with the edge overlap technique to the problem of extracting indirect photons reflected/scattered from displays. Our method applies to recovering information from linearly and elliptically polarized displays that are reflected by transmissive surfaces, such as glass, and semi-diffuse opaque surfaces, such as marble tiles and wood furniture. The technique can be further improved by applying Wiener filtering.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. INTRODUCTION

When incident light interacts with a surface, the surface's properties (shape, color, refractive index, roughness, etc.) are encoded into the plenoptic function [1] of the reflected and refracted light [2]. The plenoptic function includes spatial, angular, spectral, and polarization degrees of freedom of the optical field. A set of such reflections and refractions from the surface can be used to form an image of a scene through an imaging system, which in turn provides the basis for inferring various properties of the surfaces comprising the scene [3]. In conventional imaging, typically only direct photons that are emitted and/or reflected from an object inside the line of sight of the imaging aperture are exploited; thus, only the properties of the source (the object that emits light) and/or the surface properties of the reflector (the object that reflects light) are studied. For example, an image of a tree under sunlight represents the spatial distribution of the diffuse reflection coefficient over the tree itself. Meanwhile, indirect photons that are reflected off other objects before being reflected by an object inside the field of view (FoV) of the camera contain information about those hidden objects that is not perceivable in conventional imaging.

To separate indirect photons from direct ones, prior work in this area has focused on the separation of transmission and reflection at a transparent window, where both the physical approach (using polarization information) [4,5] and the computational approach (using independent component analysis) [6,7] have been demonstrated to recover useful information. Post-processing has been applied to improve image extraction by decreasing the correlation between the transmitted and reflected images [8]. Meanwhile, transparent windows introduce the unique problem of ghost reflections in the captured image, and several techniques have been developed that either eliminate ghost images as artifacts [9] or take advantage of them to recover better transmitted images [10]. On the other hand, opaque reflective surfaces that also reflect and scatter light (e.g., the surfaces of marble tiles, wood furniture, and painted walls) have received relatively little attention [5,11,12], although they are more common in real-world scenarios than transparent media. Important issues such as surface scattering, volume scattering, and non-shift-invariant point spread functions (PSFs) arise during the reflection from these surfaces, and different physical models and more sophisticated post-processing are necessary to compute the separated images.

While prior work on separation has mainly dealt with reflection from an unpolarized light source, polarized light sources are very common, especially among display devices. Due to the increasing usage of liquid crystal displays (LCDs), the screens of many electronic devices emit predominantly linearly polarized light. Moreover, some displays, such as iPhone screens, emit elliptically polarized light and are designed to remain visible through polarized sunglasses even when the screen is rotated.

In this work, we demonstrate the separation of the reflection of polarized sources from light transmitted through a transparent window, as well as the separation of the reflection of polarized sources from the surface texture of opaque reflective surfaces. First, we assume the Stokes vectors of the different sources, Ssrc, are polarized. Second, based on Mueller calculus, we derive a physical model of the Stokes vector of the light captured by the camera, Scam, originating from multiple types of reflective media. Third, for each type of reflective surface, a closed form of Ssrc is derived from Scam by minimizing the correlation between the reflected photons and the refracted and scattered photons, based on a well-defined pixel-wise metric function over the different spectral channels (e.g., red, green, and blue). We then apply Stokes imaging polarimetry to measure the components of the Stokes vector at each spatial location (pixel) in the image, which provides a full characterization of the polarization information in a scene. Our technique has several distinct advantages: (1) good separation results for different real-life scenes and a variety of display devices, (2) applicability to both grayscale and color images, (3) measurement of reflection from transparent or opaque semi-glossy objects, and (4) the ability to estimate the angle of reflection at the surface, including the orientation of the surface and the light source.

2. PRINCIPLE

A general schematic of the optical measurement process is shown in Fig. 1. A light source, such as an LCD screen, emits polarized light incident on a reflective medium. The light is reflected by the medium, passes through a polarization analyzer, and is captured by an imaging sensor. The interaction of the light with the medium produces photons that undergo specular reflection, surface scattering, or volume scattering. Meanwhile, the medium may also transmit light in the spectrum of interest (in our case the visible spectrum), and the transmitted component can subsequently be reflected as well, for example, as photons from a picture behind a glass window. Ambient light, e.g., light from an overhead lamp, generally acts as unwanted background or noise in the separation, and it is typically unpolarized and uniform in its spatial/angular distribution. In the discussion of the separation methods below, we refer to all photon contributions to the captured image that do not result from specular reflection as "ambient," and we do not discern further between the various "ambient" components. A robust separation method should provide good separation results even when the ambient component is not modeled in detail, because in most cases little is known about this component compared with the specular reflection component.


Fig. 1. Schematic of the scene is shown. Light emitted from a display device (image of a cameraman) propagates to the reflector (gray) at an incident angle of ϕ, is reflected by the reflector, passes through the polarization analyzer (blue), and is detected by an imaging sensor. The imaging lens is not shown. Local coordinates of the display device in x, y are rotated by an angle of χ into the incident local coordinates (s_in, p_in) on the reflector, and the exit local coordinates (s_out, p_out) on the reflector are rotated by an angle of θ into the local coordinates m, n of the sensor. The horizontal direction of the polarization analyzer (in yellow) aligns with m.


A. Stokes Vector of Display Devices

In this section, we compute the Stokes vector of different display devices. The Stokes vector is a 4×1 column vector, S = (S_0, S_1, S_2, S_3)^T, where S_0 is the irradiance, and the superscript T denotes the transpose. Linearly polarized sources can be represented by (S_0, −S_0, 0, 0)^T. The source is assumed to be initially polarized along the vertical direction, as most LCDs are. Other displays, such as the iPhone screen, can be approximately described by a Stokes vector with a circular polarization component, (S_0, s_1 S_0, 0, S_0)^T, where s_1 is the projection of Ssrc along the 0°/90° direction on the Poincaré sphere. The validity of this approximation can be shown as follows.

The iPhone screen is roughly right-handed circularly polarized at a single wavelength, the designed wavelength λ_0 ≈ 550 nm. The screen can be modeled as a quarter-wave plate (QWP) placed on top of a linearly polarized source. The optical path difference between the fast and slow axes of the QWP is OPD_QWP = Δn·d_QWP, where Δn is the difference of refractive indices between the two axes, and d_QWP is the physical thickness of the QWP. For a QWP operating at λ_0, OPD_QWP = (m + 1/4)λ_0, with m ∈ ℤ. From Jones calculus, the electric field of the light emitted from the screen, E_src, is an integral of the single-wavelength Jones vectors over wavelength,

\[ E_{\mathrm{src}} = \int_{\lambda_1}^{\lambda_2} A(\lambda)\, e^{i\omega t}\, R_{45^{\circ}} \cdot \begin{pmatrix} e^{i\pi\,\mathrm{OPD}_{\mathrm{QWP}}/\lambda} & 0 \\ 0 & e^{-i\pi\,\mathrm{OPD}_{\mathrm{QWP}}/\lambda} \end{pmatrix} \cdot R_{-45^{\circ}} \cdot \begin{pmatrix} 1 \\ 0 \end{pmatrix} \mathrm{d}\lambda = \int_{\lambda_1}^{\lambda_2} A(\lambda)\, e^{i\omega t} \cos\!\left(\frac{\pi\,\mathrm{OPD}_{\mathrm{QWP}}}{\lambda}\right) \begin{pmatrix} 1 \\ i\tan\!\left(\dfrac{\pi\,\mathrm{OPD}_{\mathrm{QWP}}}{\lambda}\right) \end{pmatrix} \mathrm{d}\lambda, \tag{1} \]
where λ_1,2 are the limits of the integral over wavelength, A(λ) is the spectrum-dependent field amplitude whose square integrates to a constant (∫_{λ_1}^{λ_2} A²(λ) dλ = S_0), λ is the wavelength of light, ω = 2πc/(λn) is the angular frequency of light, c is the speed of light in vacuum, n is the refractive index at λ, and R_{±45°} are the two-dimensional rotation matrices by angles ±45° about the z axis (the propagation direction). Without loss of generality, the linearly polarized source is also assumed to emit vertically polarized light and is thus represented by the Jones vector (1, 0)^T. Clearly, for applications at the designed wavelength, i.e., A²(λ) = S_0 δ(λ − λ_0), E_src evaluates to (1, i)^T to within a constant phase factor and represents a right-handed circular polarization state.

For numerical calculation of the integral, we focus on the visible spectrum (λ_1 = 400 nm, λ_2 = 700 nm). The change of n, i.e., dispersion, is assumed to be small relative to the absolute change of λ. Direct evaluation of Eq. (1) yields the electric field instead of a Jones vector, as the latter is defined for a single wavelength. For a more intuitive result, the Jones vector being integrated is first converted into its corresponding Stokes vector and then integrated. For a general electric field E_{x,y} that oscillates in the xy plane, the relationship between the Stokes vector and the electric field is

\[ S = \begin{pmatrix} |E_x|^{2} + |E_y|^{2} \\ |E_x|^{2} - |E_y|^{2} \\ 2\,\mathrm{Re}(E_x E_y^{*}) \\ -2\,\mathrm{Im}(E_x E_y^{*}) \end{pmatrix}, \tag{2} \]
such that the Stokes vector corresponding to E_src becomes
\[ S_{\mathrm{src}} = \begin{pmatrix} S_0 \\ \int_{400\,\mathrm{nm}}^{700\,\mathrm{nm}} A^{2}(\lambda)\cos\!\left(\dfrac{\pi\lambda_0}{2\lambda}\right)\mathrm{d}\lambda \\ 0 \\ \int_{400\,\mathrm{nm}}^{700\,\mathrm{nm}} A^{2}(\lambda)\sin\!\left(\dfrac{\pi\lambda_0}{2\lambda}\right)\mathrm{d}\lambda \end{pmatrix}, \tag{3} \]
where we have applied the property of A(λ) and have expanded OPD_QWP in terms of λ_0.

For cases where the spectrum is peaked around λ_0, in other words, where A(λ) is largely concentrated around λ_0, the sinusoidal functions inside the two integrals can be approximated by their first-order terms (the first and second terms of the Taylor series expansion) as cos(πλ_0/2λ) ≈ π(λ − λ_0)/(2λ_0) and sin(πλ_0/2λ) ≈ 1. The cosine term's expansion is of the order of O(λ − λ_0), which is one order smaller than the sine term's expansion. We therefore define s_1 = ∫_{400 nm}^{700 nm} A²(λ)·π(λ − λ_0)/(2λ_0)·dλ, and the iPhone screen can be approximated as an elliptically polarized source described by S_0(1, s_1, 0, 1)^T. Note that this model violates the normalization property of the Stokes vector to second order in λ − λ_0: Σ_{i=1,2,3} S_i²/S_0² = 1 + s_1² = 1 + O[(λ − λ_0)²/λ_0²] > 1.

This approximation generally underestimates the s_1 component and overestimates the s_3 component because the cosine term in s_1 is approximated by a linear term, and the sine term in s_3 is approximated by 1. Deviation from the approximation becomes noticeable for wavelengths far from λ_0. The degree of circular polarization, DoCP = S_3/S_0, is 1 if the light is right-handed circularly polarized. When the source emits only 400 nm light, the DoCP of the source reaches a minimum of 0.83. This deviation has been observed in our experiments and is discussed in Section 3.
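As a quick consistency check of this model, the sketch below (Python with NumPy, assumed available) evaluates the monochromatic limit of Eq. (3) and the resulting DoCP; the 0.83 value quoted above for a 400 nm source and the fully circular state at the designed wavelength both follow directly. The function name and the sampled wavelengths are illustrative choices, not part of the original analysis.

```python
import numpy as np

def qwp_screen_stokes(wavelength_nm, lam0_nm=550.0):
    """Normalized Stokes components (S1/S0, S3/S0) of the QWP-over-LCD model
    of Eq. (3) for a monochromatic source, with the quarter-wave plate
    designed for lam0_nm (lowest order, m = 0)."""
    arg = np.pi * lam0_nm / (2.0 * wavelength_nm)
    return np.cos(arg), np.sin(arg)   # (linear residual, circular component)

for lam in (400.0, 550.0, 700.0):
    s1, s3 = qwp_screen_stokes(lam)
    print(f"lambda = {lam:.0f} nm: S1/S0 = {s1:+.3f}, DoCP = S3/S0 = {s3:.3f}")
# At 400 nm the DoCP drops to about 0.83; at the designed wavelength 550 nm
# the emitted light is fully circularly polarized (DoCP = 1).
```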

B. Reflection, Scattering, and Refraction on Reflective Media

When light from the source encounters the reflective media, it undergoes various processes depending on the physical properties of the media. In this section, we introduce the model for Scam and then explain in detail all the processes and their corresponding derivations:

\[ S_{\mathrm{cam}} = R_{\theta} \cdot \left( R(\phi) \cdot R_{\chi} \cdot S_{\mathrm{src}} + T(\phi) \cdot S_{\mathrm{amb}} \right), \tag{4} \]
where Samb denotes the Stokes vector of the ambient light, R(ϕ) and T(ϕ) are the Mueller matrices for reflection and refraction/scattering on the media (their exact forms depend on the type of media), and R_θ, R_χ are two 4×4 matrices for two-dimensional (2D) rotations in the xy plane. The rotation matrix under the Stokes formalism is
\[ R_{\Theta} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos(2\Theta) & \sin(2\Theta) & 0 \\ 0 & -\sin(2\Theta) & \cos(2\Theta) & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \tag{5} \]
where Θ = θ, χ. The negative sign comes from the fact that θ and χ in our definition refer to the rotation of the reference frame, while the angles originally refer to the physical rotation of a Stokes vector. The ambient light is assumed to be unpolarized, with its Stokes vector given by (S_amb, 0, 0, 0)^T, where S_amb is the irradiance.
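For readers who want to experiment with the forward model, the following sketch (Python/NumPy assumed) builds the rotation matrix of Eq. (5) and evaluates Eq. (4) for one pixel; the placement of the minus sign follows the frame-rotation convention stated above, and the function names are ours, not the authors'.

```python
import numpy as np

def mueller_rotation(angle_rad):
    """Frame-rotation Mueller matrix R_Theta of Eq. (5)."""
    c, s = np.cos(2.0 * angle_rad), np.sin(2.0 * angle_rad)
    return np.array([[1.0, 0.0, 0.0, 0.0],
                     [0.0,   c,   s, 0.0],
                     [0.0,  -s,   c, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

def stokes_at_camera(S_src, S_amb, R_phi, T_phi, chi_rad, theta_rad):
    """Eq. (4): Stokes vector captured by the camera, given the reflection
    matrix R_phi and the refraction/scattering matrix T_phi of the medium."""
    return mueller_rotation(theta_rad) @ (
        R_phi @ mueller_rotation(chi_rad) @ S_src + T_phi @ S_amb)
```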

We consider three cases of reflective media. The first case is a single-surfaced opaque medium, such as a marble tile. A portion of the light is reflected at the surface, while the rest transmits through the surface and is subsequently volumetrically scattered or absorbed. The ratio of reflection to refraction is determined by the Fresnel equations, while the portion of light that undergoes volume scattering depends on the porosity and exact chemical composition of the material and can be unknown to us. We note that volume scattering is assumed to create unpolarized light [13]. Both the reflected light and a portion of the scattered light are captured by the camera. Therefore, in this case, the Mueller matrix for reflection can be expressed as [14]

\[ R(\phi) = \frac{1}{2}\begin{pmatrix} R_{12s} + R_{12p} & R_{12s} - R_{12p} & 0 & 0 \\ R_{12s} - R_{12p} & R_{12s} + R_{12p} & 0 & 0 \\ 0 & 0 & 2\sqrt{R_{12s} R_{12p}} & 0 \\ 0 & 0 & 0 & 2\sqrt{R_{12s} R_{12p}} \end{pmatrix}, \tag{6} \]
where R_{12s} and R_{12p} are the reflectances of s- and p-polarized light on a surface, with the light propagating from material 1 (for air, n_1 = 1) to material 2 (n_2 = n_marble). The reflectances are given by the Fresnel equations [15]. The transmission matrix T(ϕ) is
\[ T(\phi) = a\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}, \tag{7} \]
where a is the portion of light being volume scattered, and T(ϕ) represents the volume scattering process.
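A minimal numerical sketch of this first case is given below (Python/NumPy assumed); it computes the Fresnel reflectances and assembles the matrices of Eqs. (6) and (7). The default n_2 = 1.65 is the marble index assumed in Section 3.A, and the square-root form of the lower-diagonal entries follows Eq. (6) as written.

```python
import numpy as np

def fresnel_reflectances(phi_rad, n1=1.0, n2=1.65):
    """Intensity reflectances (R_s, R_p) for light going from n1 into n2
    at incidence angle phi (Fresnel equations)."""
    sin_t = n1 * np.sin(phi_rad) / n2
    cos_i, cos_t = np.cos(phi_rad), np.sqrt(1.0 - sin_t**2)
    r_s = (n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)
    r_p = (n2 * cos_i - n1 * cos_t) / (n2 * cos_i + n1 * cos_t)
    return r_s**2, r_p**2

def reflection_mueller(phi_rad, n2=1.65):
    """Eq. (6): reflection Mueller matrix of a single dielectric surface."""
    Rs, Rp = fresnel_reflectances(phi_rad, n2=n2)
    c = 2.0 * np.sqrt(Rs * Rp)
    return 0.5 * np.array([[Rs + Rp, Rs - Rp, 0, 0],
                           [Rs - Rp, Rs + Rp, 0, 0],
                           [0, 0, c, 0],
                           [0, 0, 0, c]])

def volume_scattering_mueller(a):
    """Eq. (7): fully depolarizing volume-scattering term with scattered fraction a."""
    T = np.zeros((4, 4))
    T[0, 0] = a
    return T
```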

In the second case, we consider coated opaque media, such as surfaces of wood furniture. The media has two interfaces—the air–coating interface and the coating–substrate interface. The physical processes on the media resemble those in the single-surface opaque media, except that the volume scattering is now replaced by surface scattering. The surface scattering often partially depolarizes the incident light and produces partially polarized light, whose properties depend on the polarized bidirectional reflectance distribution function (pBRDF) of the surface/interface, for example, at the latex–wood interface of the wood furniture. The Mueller matrix BRDF of a general depolarizer is [16]

\[ T(\phi) = \begin{pmatrix} 1 & \mathbf{0}^{T} \\ \mathbf{P}_{\Delta} & \mathbf{m}_{\Delta} \end{pmatrix}, \tag{8} \]
where 0 is a 3×1 zero vector, P_Δ is a 3×1 angle-dependent polarizance vector, and m_Δ is a 3×3 angle-dependent symmetric matrix. As the pBRDF of this interface depends on both the polished condition of the substrate and how the coating is applied, T(ϕ) can vary considerably and cannot be assumed to obey any specific distribution. For the wood surface, the latex–wood interface is generally both a weak polarizer (due to first-surface reflections) and a strong depolarizer (due to multiple surface and subsurface reflections) at oblique incidence [13]. In this case, T(ϕ) has nine independent variables. For simplicity, we treated the latex–wood interface used in our experiment with the pBRDF of a full depolarizer, since all elements in P_Δ and m_Δ should be small based on the properties mentioned above.

For the third case, we consider transparent windows as reflective media [5]. The Mueller matrix is

\[ R(\phi) = \frac{1}{2}\begin{pmatrix} R_{s}^{w} + R_{p}^{w} & R_{s}^{w} - R_{p}^{w} & 0 & 0 \\ R_{s}^{w} - R_{p}^{w} & R_{s}^{w} + R_{p}^{w} & 0 & 0 \\ 0 & 0 & R_{33}^{w} & 0 \\ 0 & 0 & 0 & R_{33}^{w} \end{pmatrix}, \tag{9} \]
where R_{s,p}^w = 2R_{12s,12p}/(1 + R_{12s,12p}), R_{33}^w = 2√(R_{12s}R_{12p})(R_{12s} + R_{12p} − 2)/(1 − R_{12s}R_{12p}), and
\[ T(\phi) = \frac{1}{2}\begin{pmatrix} T_{s}^{w} + T_{p}^{w} & T_{s}^{w} - T_{p}^{w} & 0 & 0 \\ T_{s}^{w} - T_{p}^{w} & T_{s}^{w} + T_{p}^{w} & 0 & 0 \\ 0 & 0 & T_{33}^{w} & 0 \\ 0 & 0 & 0 & T_{33}^{w} \end{pmatrix}, \tag{10} \]
where T_{s,p}^w = T_{12s,12p}/(2 − T_{12s,12p}) and T_{33}^w = 4T_{s}^{w}T_{p}^{w}/(T_{s}^{w} + T_{p}^{w}). R_{12s,12p} have the same definitions as in the first case, and T_{12s,12p} = 1 − R_{12s,12p} are the corresponding transmittances. In this case, Samb corresponds to the light emitted/scattered from the object behind the window, which can be assumed to be unpolarized (for example, a picture behind a photo frame).

For the above studies, we focus on discerning photons that come from specular reflection from those that come from refraction/scattering, while recovering some of the geometrical information of the scene, such as the angles χ, ϕ, and θ. If the reflective medium is in the direct line of sight, the orientation of the medium and the angle θ can be determined. If the rim of the screen is visible, the orientation of the screen and the angle χ can be calculated by line detection algorithms using the Hough transform.

C. Estimation of the Source’s Stokes Vector

To solve Ssrc in terms of Scam, we list below the three sets of equations corresponding to the three cases discussed in Section 2.B. Both Ssrc and Samb are functions of the incident angle ϕ and can be obtained separately by solving the linear equations. As the source and the ambient light component/surface texture are generally uncorrelated, the correlation between the two should ideally reach a minimum for the correct ϕ corresponding to the physical setup of the scene. Note that the reflectance and transmittance are in fact functions of ϕ, and they determine the diattenuation of the reflection/transmission Mueller matrices. In Section 2.D, we describe the metric function used to calculate the correlation. The metric function is minimized to estimate the value of ϕ. In this section, we focus on deriving solvable linear equations between the captured image and the source. In general, the equations are nonlinear, but we show here that the problem can be approximated by linear equations.

For the first case of the single-surfaced opaque media,

\[ S_{ij}^{\mathrm{cam}} = \begin{pmatrix} (A_{12} - B_{12}\cos 2\chi)S_{0;ij} + a_{ij} S_{\mathrm{amb}} \\ C_{12}\sin(2\chi)\sin(2\theta)S_{0;ij} + \sigma_1\cos 2\theta \\ C_{12}\sin(2\chi)\cos(2\theta)S_{0;ij} - \sigma_1\sin 2\theta \\ 0 \end{pmatrix}, \tag{11} \]
where i, j are the pixel indices in the captured image, S_{0;ij} and a_{ij} are the values of S_0 and a [defined in Section 2.B] at the pixel, A_{12} = (R_{12s} + R_{12p})/2, B_{12} = (R_{12s} − R_{12p})/2, C_{12} = √(R_{12s}R_{12p}), and σ_1 = B_{12}S_{0;ij} − A_{12}cos(2χ)S_{0;ij}. There are six unknowns (θ, χ, ϕ, S_{0;ij}, a_{ij}, Samb) and four equations; thus, the inverse problem is ill-posed. We make two assumptions for the subsequent analysis. First, because Samb is a constant and does not affect the spatial distribution of the surface texture, we only focus on recovering the product a_{ij}Samb and not a_{ij} and Samb separately. Second, the reflective medium is assumed to be inside the FoV, such that θ can be determined from the measured image (for example, θ = 180° if the medium is the ceiling). χ is solved from θ and the line detection algorithm, and Eq. (11) becomes linear.
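With θ, χ, and a trial ϕ fixed, Eq. (11) reduces to a per-pixel linear system. The sketch below (Python/NumPy assumed) inverts it using a least-squares combination of the S_1 and S_2 rows; the solver details are our own illustrative choice, not necessarily the authors' implementation, and the default n_2 = 1.65 is the marble index from Section 3.A.

```python
import numpy as np

def separate_case1(S_cam, phi_rad, chi_rad, theta_rad, n2=1.65):
    """Pixel-wise inversion of Eq. (11). S_cam: (4, H, W) Stokes images.
    Returns (S0, texture), where S0 is the recovered screen irradiance and
    texture = a_ij * S_amb is the volume-scattered (surface texture) term."""
    # Fresnel intensity reflectances from air (n = 1) into the medium
    sin_t = np.sin(phi_rad) / n2
    cos_i, cos_t = np.cos(phi_rad), np.sqrt(1.0 - sin_t**2)
    Rs = ((cos_i - n2 * cos_t) / (cos_i + n2 * cos_t)) ** 2
    Rp = ((n2 * cos_i - cos_t) / (n2 * cos_i + cos_t)) ** 2
    A12, B12, C12 = (Rs + Rp) / 2, (Rs - Rp) / 2, np.sqrt(Rs * Rp)
    c2x, s2x = np.cos(2 * chi_rad), np.sin(2 * chi_rad)
    c2t, s2t = np.cos(2 * theta_rad), np.sin(2 * theta_rad)
    sigma = B12 - A12 * c2x                     # sigma_1 per unit S0
    k1 = C12 * s2x * s2t + sigma * c2t          # coefficient of S0 in the S1 row
    k2 = C12 * s2x * c2t - sigma * s2t          # coefficient of S0 in the S2 row
    S0 = (k1 * S_cam[1] + k2 * S_cam[2]) / (k1**2 + k2**2 + 1e-12)  # least squares
    texture = S_cam[0] - (A12 - B12 * c2x) * S0                     # a_ij * S_amb
    return S0, texture
```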

For the second case, we consider the model for a coated opaque medium with a circularly polarized source,

\[ S_{ij}^{\mathrm{cam}} = \begin{pmatrix} (A_{12} + B_{12}s_1\cos 2\chi)S_{0;ij} + a_{ij} S_{\mathrm{amb}} \\ -C_{12}s_1\sin(2\chi)\sin(2\theta)S_{0;ij} + \sigma_1\cos 2\theta \\ -C_{12}s_1\sin(2\chi)\cos(2\theta)S_{0;ij} - \sigma_1\sin 2\theta \\ C_{12}S_{0;ij} \end{pmatrix}, \tag{12} \]
where σ_1 = B_{12}S_{0;ij} + A_{12}s_1cos(2χ)S_{0;ij}. Note that A_{12}, B_{12}, C_{12} have the same definitions as in the first case, but the R_{12s,12p} in them are replaced by R_{s,p}^w calculated from the coating's refractive index (n_2 = n_coat). Compared with the first case, there is one additional unknown, s_1, while the number of equations is the same. Therefore, for the equations to be linear and solvable, aside from the two assumptions made in the first case, we also approximate s_1 by its value when A²(λ) = S_0 δ(λ − λ_{R,G,B}), where λ_{R,G,B} = 610, 530, and 460 nm correspond to the wavelengths of peak quantum efficiency of the red, green, and blue channels of a typical complementary metal-oxide-semiconductor (CMOS) sensor with Bayer filters [17]. Thus, we assume s_1 = 0.1539, 0.0592, and 0.3025 for the red, green, and blue pixels, respectively. Errors from the approximation of s_1 have been observed in the experiments and are discussed in Section 3.
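The quoted s_1 magnitudes follow directly from the monochromatic evaluation of Eq. (3) at these assumed peak wavelengths, as the short check below shows (Python/NumPy assumed); the sign of s_1 depends on the choice of laboratory axes and is left aside here.

```python
import numpy as np

lam0 = 550.0   # QWP design wavelength in nm
for name, lam in (("red", 610.0), ("green", 530.0), ("blue", 460.0)):
    s1 = np.cos(np.pi * lam0 / (2.0 * lam))   # monochromatic S1/S0 from Eq. (3)
    print(f"{name:5s}: |s1| = {abs(s1):.4f}")
# red  : |s1| = 0.1539
# green: |s1| = 0.0592
# blue : |s1| = 0.3025
```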

For transparent windows [5],

\[ S_{ij}^{\mathrm{cam}} = \begin{pmatrix} (\sigma_2 - \sigma_1\cos 2\chi)S_{0;ij} + (A_{t;12}^{2} + B_{t;12}^{2})S_{ij}^{\mathrm{amb}} \\ \left[(\sigma_1 - \sigma_2\cos 2\chi)\cos 2\theta + (C_{t;12}^{2} + 1)C_{12}\sin 2\chi\sin 2\theta\right]S_{0;ij} + 2A_{t;12}B_{t;12}\cos(2\theta)S_{ij}^{\mathrm{amb}} \\ \left[-(\sigma_1 - \sigma_2\cos 2\chi)\sin 2\theta + (C_{t;12}^{2} + 1)C_{12}\sin 2\chi\cos 2\theta\right]S_{0;ij} - 2A_{t;12}B_{t;12}\sin(2\theta)S_{ij}^{\mathrm{amb}} \\ 0 \end{pmatrix}, \tag{13} \]
where S_ij^amb is the value of Samb at pixel (i, j), σ_1 = B_{12} + A_{t;12}(A_{12}B_{t;12} + B_{12}A_{t;12}) + B_{t;12}(A_{12}A_{t;12} + B_{12}B_{t;12}), σ_2 = A_{12} + A_{t;12}(A_{12}A_{t;12} + B_{12}B_{t;12}) + B_{t;12}(A_{12}B_{t;12} + B_{12}A_{t;12}), A_{t;12} = (T_{12s} + T_{12p})/2, B_{t;12} = (T_{12s} − T_{12p})/2, and C_{t;12} = √(T_{12s}T_{12p}). A_{12}, B_{12}, C_{12} and A_{t;12}, B_{t;12}, C_{t;12} are calculated with the window's refractive index (n_2 = n_window). There are five unknowns (θ, χ, ϕ, S_{0;ij}, S_ij^amb) and four equations. Here we assume that θ is known. The equations are then solvable and linear between S_{0;ij}(ϕ) and S_ij^amb(ϕ).

D. Metric Function

We use the edge overlap (EO) [5] as the metric function to evaluate the correlation between photons from specular reflection and from refraction/scattering. By first performing edge detection on the separated images and then computing their pixel-wise correlation, the incident angle ϕ that minimizes the metric function is found. This angle is considered to be the optimal value. The mathematical formula of EO is

\[ \mathrm{EO}(\phi) = \sum_{i,j} \Delta_{ij}\!\left[I_{\mathrm{Re}}(\phi)\right] \cdot \Delta_{ij}\!\left[I_{\mathrm{Tr/Sc}}(\phi)\right], \tag{14} \]
where I_{Re,Tr/Sc}(ϕ) are the separated reflection and transmission/scattering images corresponding to ϕ, and Δ_{ij} takes the value 1 if there is an edge at pixel (i, j) and 0 if no edge is detected at that pixel. Edge detection is performed by the Canny edge detection algorithm.

For each color channel, EO is computed for a set of trial values of ϕ, and the optimal value that minimizes EO is taken as the estimate ϕ^est_{R,G,B}. The three RGB estimates ϕ^est_{R,G,B} are averaged as ϕ^est = (1/3)Σ_{i=R,G,B} ϕ^est_i, and the final separated images are obtained by substituting ϕ^est into Eqs. (11)–(13) for all color channels.
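The angle search can be written compactly as below (Python, assuming NumPy and scikit-image for the Canny detector); the trial range of incidence angles and the generic `separator` callable (e.g., a per-pixel inversion such as the one sketched after Eq. (11)) are illustrative choices rather than the authors' exact implementation.

```python
import numpy as np
from skimage.feature import canny   # Canny edge detector from scikit-image

def edge_overlap(img_a, img_b):
    """Eq. (14): number of pixels flagged as edges in both separated images."""
    e_a = canny(img_a / max(img_a.max(), 1e-9))
    e_b = canny(img_b / max(img_b.max(), 1e-9))
    return int(np.sum(e_a & e_b))

def estimate_phi(S_cam, chi_rad, theta_rad, separator,
                 trial_phis_deg=np.arange(20.0, 81.0, 1.0)):
    """Scan trial incidence angles and return the one minimizing EO."""
    scores = [edge_overlap(*separator(S_cam, np.deg2rad(phi), chi_rad, theta_rad))
              for phi in trial_phis_deg]
    return trial_phis_deg[int(np.argmin(scores))]
```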

3. RESULTS AND DISCUSSION

A. Experimental Setup

Static images were acquired using a Sony DSC-RX10M3 camera with a ZEISS Vario-Sonnar T* 2.4–4/8.8–220 lens. A polarizing filter was attached in front of the camera lens and rotated manually to achieve different polarization modulations of the incoming light. The filter was a HOYA 72 mm linear polarizing filter, and the corresponding orientations of the polarizer were 0°, 45°, 90°, and 135° with respect to the horizontal direction shown in Fig. 1. The polarization properties of the filter were characterized beforehand with an Axometrics AxoScan polarimeter, and the Stokes images of the scene were computed with the conventional data reduction method [18,19].
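The conventional data reduction method referenced above amounts to a per-pixel least-squares inversion of the measurement matrix built from the characterized analyzer states. The sketch below (Python/NumPy assumed) is a generic version of that procedure; the example rows are those of an ideal linear polarizer at the four orientations listed above and are shown only for illustration (note that an ideal linear polarizer alone recovers only S0, S1, and S2; an analyzer state with retardance is required to sense S3).

```python
import numpy as np

def stokes_from_measurements(intensities, analyzer_rows):
    """Conventional data-reduction method: each measured image I_k satisfies
    I_k = a_k . S per pixel, where a_k is the first Mueller-matrix row of the
    k-th analyzer state. Solving the stacked system in a least-squares sense
    yields the Stokes images.
    intensities: (K, H, W) array; analyzer_rows: (K, 4) array."""
    K, H, W = intensities.shape
    reduction = np.linalg.pinv(np.asarray(analyzer_rows, dtype=float))  # (4, K)
    return (reduction @ intensities.reshape(K, -1)).reshape(4, H, W)

# Illustrative analyzer rows for an ideal linear polarizer at 0, 45, 90, 135 deg:
ideal_rows = 0.5 * np.array([[1.0,  1.0,  0.0, 0.0],
                             [1.0,  0.0,  1.0, 0.0],
                             [1.0, -1.0,  0.0, 0.0],
                             [1.0,  0.0, -1.0, 0.0]])
```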

The LCD screens acting as linearly polarized sources were a Dell AS501 monitor [Fig. 2(a)] and the screen of a Toshiba Portégé laptop [Fig. 3(a)]. The screen that acted as a circularly polarized source was an Apple iPhone 6 cell phone [Fig. 4(a)]. Screen resolutions were 1280×800 for both linearly polarized sources and 750×1334 for the circularly polarized source. For all experiments, ambient light was provided by the same overhead fluorescent light tube at the same brightness. The refractive indices of marble, glass, and the latex coating are assumed to be 1.65, 1.52, and 1.4, respectively.


Fig. 2. Reflection of the monitor screen from a marble tile is shown. (a) Overlap of the image of the screen contents and the tile is denoted by a red rectangle. (b) Separated screen contents and (c) separated tile texture are shown. (d) Reflection-removed image emphasizing the continuity of irradiance over the boundary of the overlapping region is also shown.



Fig. 3. Reflection of the laptop screen (displaying ISO 12233 test chart) from a glass-covered picture is shown. (a) Overlap of the image of the screen contents and the picture is denoted by a red rectangle. (b) Separated screen contents and (c) separated picture are shown. (d) Reflection-removed image showing discontinuity of irradiance over the boundary of the overlapping region is also shown.



Fig. 4. Reflection of a cell phone screen (displaying a login screen with the username and the password) from a wood sample (overlapping region shown in red rectangle) is shown. (a) Image of the screen contents is reflected from the wood surface. (b) PSF is measured with a mirror as the reflection surface. (c) PSF is measured with the wood surface as the reflection surface. (d) MAD plot is shown comparing the overlapping/separated images before and after Wiener filtering with the mirror-reflected image. (e) Overlapping unfiltered image, (f) overlapping filtered image, (g) separated unfiltered image, (h) separated filtered image, and (i) mirror-reflected image are shown. Images (e)–(i) are rotated and flipped for a better view of the screen contents. (j) Separated wood surface texture image is shown. (k) Reflection-removed image shows a discontinuity of irradiance and color over the boundary of the overlapping region.


B. Separation Results

To study opaque media with a single surface, a piece of marble tile was used as the reflective medium. The monitor displayed a 1951 USAF resolution target. This target was used as a general object rather than for the purpose of studying the modulation transfer function (MTF) after the separation, and as such, it was not displayed at its designed resolution. For this case, θ was 90° and χ was calculated to be 134°. ϕ was measured to be 63° and recovered through the EO-based correlation algorithm to be 60°. From Eq. (11), the screen contents and tile texture were separated as shown in Figs. 2(b) and 2(c). The model of single reflection and volume scattering is found to work well, and there is no visible artifact in the separated images, especially in terms of the irradiance continuity over the boundary of the screen, as shown in Fig. 2(d).

For the case of a transparent window medium, the glass plate from a photo frame was used on top of a cat photograph. The laptop screen showed an ISO 12233 test chart, which was not displayed at its designated resolution. For this case, θ was 90° and χ was calculated to be 79°. ϕ was measured to be 47° and was recovered by the algorithm to be 46°. The screen contents and picture were separated using Eq. (13), and the results are shown in Figs. 3(b) and 3(c). Clear separation of the test chart and the cat picture is observed. The assumption of the printed paper as a perfect depolarizer does not consider the fact that the light reflected from the printed paper is also weakly polarized. This leads to a visible discontinuity in irradiance over the boundary of the screen in the separated image, as shown in Fig. 3(d).

For the case of a coated opaque medium with a circularly polarized source, a wood sample coated with latex acted as the reflective medium. The cell phone screen showed a login interface with the username "U ARIZONA" and the password "OSC," as shown in Fig. 4(i). For this case, θ was 0°, and χ was calculated to be 1°. ϕ was measured to be 54° and was estimated by the algorithm to be 55°. The screen contents and wood surface texture were separated using Eq. (12), and the results are shown in Figs. 4(g) and 4(j). The separated screen image is bluish due to the approximation of the spectral response of the camera sensor. As discussed in Section 2.A, the s_1 component of the source is most strongly underestimated in the blue channel among the three color channels. Since S_1 = S_0·s_1 is recovered with relatively high accuracy, the irradiance, S_0, of the blue channel is overestimated, resulting in the bluish tone.

The separated contents on the screen were barely recognizable. In order to improve the quality of the images, the point spread function (PSF) of the reflective medium was measured. An 8×8 white square was displayed on the screen under the same geometry, and reflected images were taken using a mirror and the wood sample as the reflective media. The PSFs of the mirror and wood sample are shown in Figs. 4(b) and 4(c). The standard Wiener filter (see Appendix B) using the measured PSF was applied to both the overlapping image and the separated image, and the results are compared with the ideal image shown in Fig. 4(i). Fidelity was measured with the mean absolute deviation (MAD), MAD = (1/3)Σ_{r,g,b}Σ_{i,j}|Ī_{i,j} − Ī_{ideal;i,j}|, where Ī_{i,j} and Ī_{ideal;i,j} are the normalized to-be-compared and ideal images, with each color channel normalized to (0, 1). As shown in Fig. 4(d), the lowest-MAD separated filtered image has an MAD below 0.16 at an SNR of 5, and the quality of the separated filtered image shown in Fig. 4(h) has improved significantly, with better visibility after application of the Wiener filter, though the noise induced by scattering is not fully eliminated and the recovered image remains blurry. Note that the original image displayed on the screen is mainly black and white. The color shift of the recovered images [Figs. 4(e)–4(h)] is caused by the assumption on the spectral response of the color filters in the DSLR camera's sensor (Section 2.C) and by the assumption on the circular polarization state of the display device (Section 2.A).
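For completeness, a small sketch of the fidelity metric as we read it is given below (Python/NumPy assumed); since the quoted MAD values lie between 0 and 1 for images normalized to (0, 1), we interpret the pixel sum as a per-pixel average, which is an assumption on our part.

```python
import numpy as np

def mad(img, img_ideal):
    """Mean absolute deviation between two RGB images of shape (H, W, 3).
    Each channel is first normalized to (0, 1); the per-pixel absolute
    differences are averaged per channel and then over the three channels."""
    def norm(x):
        x = x.astype(float)
        return (x - x.min()) / max(x.max() - x.min(), 1e-9)
    return float(np.mean([np.mean(np.abs(norm(img[..., c]) - norm(img_ideal[..., c])))
                          for c in range(3)]))
```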

4. CONCLUSION

In this final section, we discuss the various sources of error in our technique and how they can potentially be mitigated. Two major sources of inaccuracy in the image separation process are the fully depolarizing model for a general rough surface/interface, Eq. (8), and the assumed transmission spectrum of the camera's Bayer filters. In the former case, the error can be reduced by using a more comprehensive model that incorporates (1) the pBRDF of the surface/interface, (2) the accurate dimensions of the display and its distance to the reflector, which define the incoming rays of the pBRDF, and (3) the aperture size of the camera and its distance to the reflector, which define the outgoing rays of the pBRDF. In practice, the information in (2) and (3) is not known a priori. Time-of-flight methods and light-field imaging can likely provide the necessary depth information in the scene; this information can be utilized to improve the image separation process.

The Bayer filters in the camera sensor generally have a wide spectral response (>100 nm spectral width at 50% quantum efficiency) and color crosstalk. Thus, the Taylor expansion discussed in Section 2.A is not a very accurate approximation. This error can be reduced by using an imaging spectro-polarimeter that has a narrow spectral response for each color channel.

Finally, the presence of ghost images can degrade the recovered image quality. Multiple reflections in the glass cover or in the latex coating induce ghost images that overlap the original image. The visibility of the ghost images is discussed in Appendix A, and our analysis shows that the ghost image can be ignored in certain cases, namely when the thickness of the coating is less than the product of the camera's magnification and its pixel size.

In conclusion, we have successfully demonstrated a method to recover information from display devices that are not in the direct line of sight by using imaging polarimetry. Our technique can be applied to various types of displays and reflective surfaces. The recovered image can be further improved by Wiener filtering or by more sophisticated nonlinear deconvolution filtering techniques, which require characterization of the reflective surface before or after the measurement.

APPENDIX A: ANALYSIS OF GHOST IMAGES

In this section, we analyze the effect of ghost images due to the finite thickness of the multi-layer reflective medium. We study the propagation of a beam of light incident on the multi-layer structure shown in Fig. 5. This model applies to the reflection from a glass cover (Fig. 3) and from a latex-painted wood surface (Fig. 4). The first layer, or coating, can be the glass cover or latex paint, and the substrate can be air or wood. In general, the irradiance of beams that reflect more than once inside the coating (higher-order ghost reflections) is much smaller than that of the two reflected beams (beams 0 and 1) shown in Fig. 5. Consider the combination of a typical transparent dielectric material (n_2 ≈ 1.5) and a typical metallic material (n_3 ≈ 2.5): when a beam of unpolarized light (a mixture of s and p light with equal amplitude) encounters the surface at an incident angle of 45°, the reflected flux ratios of beam 0, beam 1, and a three-times-reflected beam are 5.1%, 5.8%, and 0.02%, respectively. On the other hand, when the substrate is air, the same ratios are 5.3%, 4.4%, and 0.04%, respectively. In both cases, the higher-order reflections are significantly weaker than the first two reflections.


Fig. 5. Schematic shows the ghost image from multiple reflections. The incident beam splits into two beams at the air–coating interface. Beam 0 reflects off the interface toward the camera. Beam 1 transmits through the interface, reflects off the coating–substrate interface, and transmits toward the camera. Only the first-order ghost reflection (beam 1) is shown. Beams 0 and 1 are separated spatially by d at the air–coating interface. ϕ is the angle of incidence, ϕ′ is the angle of refraction, and t is the coating thickness.


We next consider why the shift, d, between the image (resulting from beam 0) and the ghost image (resulting from beam 1), which is induced by the nonzero thickness of the coating, t, is not always observable. We compare d with the size of a pixel, p, of the imaging sensor. For a coating such as latex on wood, t ≈ 200 μm, and d is thus less than 2·t·sin(ϕ_C) ≈ 267 μm, where ϕ_C = 41.8° is the critical angle for n_2 = 1.5. In imaging of an LCD screen, the size of the screen is approximately 100 times the size of its image on the sensor. Thus, d corresponds to approximately 2.67 μm on the sensor, about 1 pixel for a typical commercial digital single-lens reflex camera. The ghost image is hardly observable in this configuration (Fig. 4). On the other hand, the thickness of a typical photo frame cover is approximately 4 mm, which results in approximately 20 pixels of displacement between the ghost image and the image (Fig. 3).
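The arithmetic in this paragraph can be reproduced with the short sketch below (Python/NumPy assumed); the pixel pitch of 2.7 μm and the ×100 demagnification are the representative values implied by the text, not measured camera parameters.

```python
import numpy as np

def ghost_shift_pixels(t_um, n_coat=1.5, demagnification=100.0, pixel_um=2.7):
    """Upper bound on the ghost-image displacement, in sensor pixels:
    d < 2 t sin(phi_c) at the coating surface, then demagnified onto the sensor."""
    phi_c = np.arcsin(1.0 / n_coat)            # critical angle (41.8 deg for n = 1.5)
    d_um = 2.0 * t_um * np.sin(phi_c)          # displacement at the surface
    return d_um / demagnification / pixel_um

print(ghost_shift_pixels(200.0))     # latex on wood (~200 um): about 1 pixel
print(ghost_shift_pixels(4000.0))    # 4 mm photo-frame glass: about 20 pixels
```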

APPENDIX B: APPLICATION OF WIENER FILTERING IN THE IMAGE RECOVERY

Wiener filtering is a common method of image restoration for a spatially invariant system [20]. Our system can be approximated as spatially invariant because spatial variance of PSFs is mainly caused by (1) motion blur of two objects with different velocities, (2) spatially varying scattering, and (3) off-axis aberrations of the imaging system. In the experiment, the display device was stationary, and the off-axis aberrations of the camera were negligible, because the image quality is mainly limited by the pixel size of the sensor. We assumed the scattering property of the wood surface was uniform. Iterative methods that focus on spatially variant PSFs can be utilized to account for the non-uniformity of the surface’s scattering property and to improve the filtering process [21].

The inputs of the Wiener filter are the OTF of the reflecting medium and the OTF of the ideal system, both of which can be characterized beforehand, and the SNR of the measurement. The Wiener filter that includes the PSF of the camera, i.e., the ideal PSF, and that of the reflecting medium is given by

\[ I_{\mathrm{filt}} = \mathrm{IFT}\!\left[ \frac{\mathrm{OTF}_{\mathrm{surf}}^{*}}{|\mathrm{OTF}_{\mathrm{surf}}|^{2} + \mathrm{SNR}^{-1}} \times \mathrm{OTF}_{\mathrm{mirror}} \times \mathrm{FT}(I_{\mathrm{unfilt}}) \right], \tag{15} \]
where I_unfilt and I_filt are the images before and after Wiener filtering, FT and IFT denote the Fourier transform and inverse Fourier transform, and OTF_surf and OTF_mirror are the Fourier transforms of the PSFs taken with the reflecting medium (the wood surface) and with a flat mirror as the reflector, respectively. The SNR is defined as the ratio of the power spectra of the signal and the noise; its value is determined by the object and the ambient light and can be estimated from the specularly reflected and diffusely reflected light components. Using the refractive index of latex and the estimated incident angle, the SNR is estimated to be around 10. This value provides an improved image separation and is consistent with the MAD calculation using the reference image of the screen taken with a mirror as the reflector.
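A direct transcription of Eq. (15) is sketched below (Python/NumPy assumed). Treating the SNR as a single scalar (≈10, as estimated above) corresponds to a white-noise assumption; the code also assumes the PSFs are 2D single-channel arrays stored with their peaks at the array origin (apply np.fft.ifftshift first if they are centered).

```python
import numpy as np

def wiener_restore(img_unfilt, psf_surf, psf_mirror, snr=10.0):
    """Eq. (15): Wiener deconvolution of the separated image by the OTF of the
    scattering surface, re-blurred by the mirror (ideal-system) OTF.
    All inputs are 2D arrays (one color channel)."""
    shape = img_unfilt.shape
    otf_surf = np.fft.fft2(psf_surf / psf_surf.sum(), s=shape)
    otf_mirror = np.fft.fft2(psf_mirror / psf_mirror.sum(), s=shape)
    wiener = np.conj(otf_surf) / (np.abs(otf_surf) ** 2 + 1.0 / snr)
    img_filt = np.fft.ifft2(wiener * otf_mirror * np.fft.fft2(img_unfilt)).real
    return np.clip(img_filt, 0.0, None)
```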

Funding

National Science Foundation (NSF) (1607358); Defense Advanced Research Projects Agency (DARPA) Revolutionary Enhancement of Visibility by Exploiting Active Light-fields (REVEAL HR0011-16-C-0026).

Acknowledgment

Y. D. thanks Dr. Bofan Song for discussions on image processing and Dr. Ori Katz for suggesting the experiment.

REFERENCES

1. M. Levoy and P. Hanrahan, “Light field rendering,” in 23rd Annual Conference on Computer Graphics and Interactive Techniques (ACM, 1996), pp. 31–42.

2. J. S. Tyo, D. L. Goldstein, D. B. Chenault, and J. A. Shaw, “Review of passive imaging polarimetry for remote sensing applications,” Appl. Opt. 45, 5453–5469 (2006). [CrossRef]  

3. B. M. Ratliff and J. S. Tyo, “Moving towards more intuitive display strategies for polarimetric image data,” Proc. SPIE 10407, 104070E (2017). [CrossRef]  

4. Y. Y. Schechner, J. Shamir, and N. Kiryati, “Polarization and statistical analysis of scenes containing a semireflector,” J. Opt. Soc. Am. A 17, 276–284 (2000). [CrossRef]  

5. Y. Ding, A. Ashok, and S. Pau, “Real-time robust direct and indirect photon separation with polarization imaging,” Opt. Express 25, 29432–29453 (2017). [CrossRef]  

6. H. Farid and E. H. Adelson, “Separating reflections from images by use of independent component analysis,” J. Opt. Soc. Am. A 16, 2136–2145 (1999). [CrossRef]  

7. A. M. Bronstein, M. M. Bronstein, M. Zibulevsky, and Y. Y. Zeevi, “Sparse ICA for blind separation of transmitted and reflected images,” Int. J. Imaging Syst. Technol. 15, 84–91 (2005). [CrossRef]  

8. N. Kong, Y. W. Tai, and J. S. Shin, “A physically-based approach to reflection separation: from physical modeling to constrained optimization,” IEEE Trans. Patt. Anal. Mach. Intell. 36, 209–221 (2014). [CrossRef]  

9. Y. Diamant and Y. Y. Schechner, “Overcoming visual reverberations,” in IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2008), pp. 1–8.

10. Y. Shih, D. Krishnan, F. Durand, and W. T. Freeman, “Reflection removal using ghosting cues,” in IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2015), pp. 3193–3201.

11. T.-P. Wu and C.-K. Tang, “Separating specular, diffuse, and subsurface scattering reflectances from photometric images,” in European Conference on Computer Vision (ECCV) (Springer, 2004), pp. 419–433.

12. B. Lamond, P. Peers, A. Ghosh, and P. Debevec, “Image-based separation of diffuse and specular reflections using environmental structured illumination,” in IEEE International Conference on Computational Photography (IEEE, 2009), pp. 1–8.

13. X. D. He, K. E. Torrance, F. X. Sillion, and D. P. Greenberg, “A comprehensive physical model for light reflection,” in SIGGRAPH (ACM, 1991), pp. 175–186.

14. R. A. Chipman, “Mueller matrices,” in Handbook of Optics (McGraw-Hill, 2010), Vol. 1.

15. E. Hecht, Optics (Pearson Education, 2016).

16. S.-Y. Lu and R. A. Chipman, “Interpretation of Mueller matrices based on polar decomposition,” J. Opt. Soc. Am. A 13, 1106–1113 (1996). [CrossRef]  

17. Thorlabs, “CMOS Cameras: USB 2.0 and USB 3.0,” (2017), retrieved https://www.thorlabs.com/newgrouppage9.cfm?objectgroup_id=4024.

18. W. L. Hsu, J. Davis, K. Balakrishnan, M. Ibn-Elhaj, S. Kroto, N. Brock, and S. Pau, “Polarization microscope using a near infrared full-Stokes imaging polarimeter,” Opt. Express 23, 4357–4368 (2015). [CrossRef]  

19. W. L. Hsu, G. Myhre, K. Balakrishnan, N. Brock, M. Ibn-Elhaj, and S. Pau, “Full-Stokes imaging polarimeter using an array of elliptical polarizer,” Opt. Express 22, 3063–3074 (2014). [CrossRef]  

20. M. Sonka, V. Hlavac, and R. Boyle, Image Processing, Analysis, and Machine Vision (Cengage Learning, 2014).

21. J. G. Nagy and D. P. O’Leary, “Restoring images degraded by spatially variant blur,” SIAM J. Sci. Comput. 19, 1063–1082 (1998). [CrossRef]  
