Optimal discrimination of multiple regions with an active polarimetric imager


Abstract

Until now, most studies about polarimetric contrast optimization have focused on the discrimination of two regions (a target and a background). In this paper, we propose a methodology to determine the set of polarimetric measurements that optimizes discrimination of an arbitrary number of regions with different polarimetric properties. We show on real-world examples that, in some situations, a few optimized polarimetric measurements can outperform full Mueller matrix imaging.

© 2011 Optical Society of America

1. Introduction

Active polarimetric imaging systems gather information that is not visible in intensity images, and can be useful in such domains as machine vision, remote sensing, biomedical imaging, and industrial control [1–8]. However, the cost, size and technological complexity of polarimetric imagers depend on the number of images they need to acquire in order to perform their task [9, 10]. Acquisition time can also be an issue when observing rapidly evolving scenes [11]. In this context, a key issue is to evaluate the added value of each acquired image in order to optimize the compromise between complexity, speed, and efficiency of these systems.

In target detection applications, the relevant criterion for quantifying the efficiency of an imaging configuration is the contrast (or discrimination ability) between a target and a background region. Analysis and optimization of the contrast in polarimetric images have been investigated in the radar [12, 13] and optics [14–16] communities. In particular, in [10], three different polarization imaging modalities (scalar, Stokes and Mueller) have been compared, and it has been shown that for discriminating two regions, the optimal strategy consists of performing a single measurement with optimized illumination and analysis polarization states.

Until now, most studies about polarimetric contrast optimization have focused on the discrimination of two regions (a target and a background). In this paper, we address the discrimination of an arbitrary number of regions having different polarimetric properties. This problem is more complex than two-region discrimination since it involves new degrees of freedom. In particular, the optimal number of measurements may no longer be equal to one, since several projections may be needed to optimally discriminate a given number of regions. This adds complexity to the expression of the performance criterion (separability measure) and to its optimization. The present study is, to the best of our knowledge, the first to address these issues theoretically and experimentally.

The paper is organized as follows. In section 2, we define our classification method and illustrate it on full Mueller matrix data. These results will serve as a benchmark for the remainder of the study. In section 3, we address the problem of finding the optimal set of polarimetric measurements when the number of acquisitions is fixed. In particular, we discuss the separability criterion and the numerical method used to optimize it. We then illustrate our methodology on a real-world imaging example, and show that under some experimental conditions, a limited number of projections can yield better discrimination results than full Mueller matrix data.

2. Polarimetric imaging and region discrimination

In this section, we present the basic active imaging principle that we will consider, and the classification algorithm that we will use for multiple target discrimination. We then apply this approach to the classification of full Mueller matrix data. These results will serve as a benchmark for the remainder of the study. Finally, we discuss the limits and possible improvements of this approach.

2.1. Polarimetric measurements

We consider an active polarimetric imaging system that illuminates the scene with light coming from an unpolarized white source. The illumination polarization state is defined by a Stokes vector S generated by a Polarization State Generator (PSG) (see Fig. 1). In the practical implementation we use, the PSG is composed of two Liquid Crystal Variable Retarders and one polarizer. The output beam illuminates the scene uniformly in polarization and intensity. The polarimetric properties of a region of the scene corresponding to a pixel in the image are characterized by its Mueller matrix M. The Stokes vector of the light scattered by this region is S′ = MS. It is analyzed by a Polarization State Analyzer (PSA), which is a generalized polarizer whose eigenstate is the Stokes vector T. As for the PSG, in the experimental setup we use, the PSA is composed of two Liquid Crystal Variable Retarders and one polarizer. The number of photoelectrons measured at a pixel of the sensor is:

$$ i = \frac{\eta I_0}{2}\, \mathbf{T}^T \mathbf{M} \mathbf{S} $$
where the superscript T denotes matrix transposition. In this equation, S and T are unit-intensity, purely polarized Stokes vectors, I0 is the number of illumination photons, and η is the conversion efficiency between photons and electrons. The total field of view of the imager, using a 480 × 640 CCD camera, is about 10°. The measurements are performed in a 10 nm spectral band centered on 640 nm, selected with an interference filter.
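As a concrete illustration of Eq. (1), the following minimal NumPy sketch evaluates the number of photoelectrons for a given pair of illumination and analysis states; the values of η and I0 and the example states are illustrative placeholders, not the actual parameters of our setup.

```python
import numpy as np

def measured_intensity(T, M, S, eta=0.5, I0=1e5):
    """Photoelectron count i = (eta * I0 / 2) * T^T M S of Eq. (1).

    T, S : unit-intensity, fully polarized Stokes vectors, shape (4,).
    M    : 4x4 Mueller matrix of the observed region.
    eta and I0 are placeholder values for the conversion efficiency and photon number.
    """
    return 0.5 * eta * I0 * (T @ M @ S)

# Example: horizontal linear polarization in illumination and analysis,
# observed through an ideal depolarizer (only M00 = 1 is non-zero).
S = np.array([1.0, 1.0, 0.0, 0.0])
T = np.array([1.0, 1.0, 0.0, 0.0])
M = np.diag([1.0, 0.0, 0.0, 0.0])
print(measured_intensity(T, M, S))   # 0.5 * eta * I0
```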

Fig. 1 Polarimetric imaging setup.

2.2. Maximum Likelihood (ML) region classification

Our objective in this section is to classify different regions of the scene by using a series of acquisitions of the type defined in Eq. (1). Let us consider a scene composed of a number K of regions with different polarimetric properties, indexed as k ∈ [1, K]. We assume that we have a database containing sets of Mueller matrices associated with these different regions. One acquires N scalar polarimetric images with N different pairs of illumination Stokes vectors (𝒮 = [S1, ..., SN]) and analysis Stokes vectors (𝒯 = [T1, ..., TN]). For each pixel p of the scene, the measurements can be gathered in a vector xp of dimension N defined as:

$$ \mathbf{x}_p = \left[\, i_{1p} \;\; i_{2p} \;\; \cdots \;\; i_{Np} \,\right] \quad \text{where} \quad i_{np} = \frac{\eta I_0}{2}\, \mathbf{T}_n^T \mathbf{M}_p \mathbf{S}_n $$
where Mp is the Mueller matrix of the region of the scene corresponding to pixel p, and inp, n ∈ [1, N], is the intensity associated with pixel p projected on the polarization states Sn and Tn. Each pixel of the scene is thus represented by a point in an N-dimensional space, and our goal is to discriminate the different regions in this space.
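The construction of the measurement vector xp of Eq. (2) follows directly; in the sketch below, the two projection pairs are arbitrary placeholders rather than the optimized states discussed in section 3.

```python
import numpy as np

def projection_vector(M_p, S_list, T_list, eta=0.5, I0=1e5):
    """Stack the N scalar measurements of Eq. (2) into the vector x_p.

    M_p            : 4x4 Mueller matrix of the region seen by pixel p.
    S_list, T_list : N illumination / analysis Stokes vectors (placeholder states here).
    """
    return np.array([0.5 * eta * I0 * (T @ M_p @ S)
                     for S, T in zip(S_list, T_list)])

# Two hypothetical projection pairs: N = 2, so each pixel becomes a point in a 2-D space.
S_list = [np.array([1.0, 1.0, 0.0, 0.0]), np.array([1.0, 0.0, 1.0, 0.0])]
T_list = [np.array([1.0, 1.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0, 1.0])]
x_p = projection_vector(np.eye(4), S_list, T_list)   # shape (2,)
```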

The choice of the classifier depends on the application and on the statistics of the noise disturbing the images, which can be, for example, detection noise or spatial fluctuations of the scene. For the sake of simplicity, we will assume that the sum of these perturbations leads to Gaussian statistics for the measured data. The Probability Density Function (PDF) associated with class k is thus:

$$ P_k(\mathbf{x}) = \frac{1}{\sqrt{2\pi \det(\Gamma_k)}} \exp\left[ -\frac{1}{2} (\mathbf{x} - \bar{\mathbf{x}}_k)^T \Gamma_k^{-1} (\mathbf{x} - \bar{\mathbf{x}}_k) \right] $$
where det denotes the determinant of a square matrix, and x̄k and Γk are respectively the mean vector and covariance matrix of x associated with class k. From this expression of the PDF, it is possible to define the log-likelihood:
$$ \mathcal{L}_k = \log[P_k(\mathbf{x})] = -\frac{1}{2} \log[2\pi \det(\Gamma_k)] - \frac{1}{2} (\mathbf{x} - \bar{\mathbf{x}}_k)^T \Gamma_k^{-1} (\mathbf{x} - \bar{\mathbf{x}}_k) $$
The Maximum Likelihood (ML) classifier consists of deciding that a pixel of the scene belongs to the class k̂ defined as:
$$ \hat{k} = \arg\max_{k \in [1,K]} \left[ \mathcal{L}_k \right] $$
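A minimal NumPy sketch of this decision rule, assuming the class means and covariance matrices are already available (their estimation is described just below), could read:

```python
import numpy as np

def log_likelihoods(x, means, covs):
    """Gaussian log-likelihoods L_k for one pixel vector x of shape (N,).

    means : list of K class mean vectors; covs : list of K covariance matrices.
    """
    L = []
    for mu, G in zip(means, covs):
        d = x - mu
        L.append(-0.5 * np.log(2 * np.pi * np.linalg.det(G))
                 - 0.5 * d @ np.linalg.solve(G, d))
    return np.array(L)

def ml_classify(x, means, covs):
    """ML decision rule of Eq. (5): pick the class with the highest log-likelihood."""
    return int(np.argmax(log_likelihoods(x, means, covs)))
```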

In practice, the mean and covariance matrix of each class k are estimated from a database composed of sets Ωk containing Pk training Mueller matrices. For each of these datasets, Pk vectors xkp, p ∈ [1, Pk], can be computed as in Eq. (2). The estimates of x̄k and Γk are obtained by the following formulas:

$$ \hat{\bar{\mathbf{x}}}_k = \frac{1}{P_k} \sum_{p \in \Omega_k} \mathbf{x}_{kp} $$
$$ \hat{\Gamma}_k = \frac{1}{P_k} \sum_{p \in \Omega_k} (\mathbf{x}_{kp} - \bar{\mathbf{x}}_k)(\mathbf{x}_{kp} - \bar{\mathbf{x}}_k)^T $$
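These estimators reduce to empirical means and covariances over the training pixels of each class; a possible sketch, using the same 1/Pk normalization as above, is:

```python
import numpy as np

def class_statistics(X_k):
    """Empirical mean and covariance of class k.

    X_k : array of shape (P_k, N), one row per training pixel of class k.
    """
    x_bar = X_k.mean(axis=0)
    D = X_k - x_bar
    Gamma = D.T @ D / X_k.shape[0]   # 1/P_k normalization, as in the text
    return x_bar, Gamma
```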
We will now illustrate this classification method on full Mueller matrix data.

2.3. Classification on full Mueller matrix data and discussion

The full Mueller matrix of the scene is obtained by performing 16 projections of the type defined in Eq. (1) using different pairs of illumination and analysis states. These data are then inverted to yield an estimate of the Mueller matrix associated with each pixel p of the scene, which is thus represented by a vector xp of dimension N = 16 (Eq. (2)). The set of illumination and analysis states is usually chosen to minimize the error propagation during this inversion [17,18]. The Mueller matrix contains all the polarimetric properties of the scene, and such data have often been used for discrimination by applying different classification methods [19, 20].
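For reference, this inversion can be sketched as the solution of a linear system whose rows are the Kronecker products Tn ⊗ Sn used in the linear-projection formulation of section 3; the constants η and I0 are placeholders, and the chosen states must make the system invertible.

```python
import numpy as np

def estimate_mueller(intensities, S_list, T_list, eta=0.5, I0=1e5):
    """Invert 16 measurements of the type of Eq. (1) into a Mueller matrix estimate.

    intensities    : the 16 measured values at one pixel.
    S_list, T_list : the 16 illumination / analysis Stokes vectors; in practice they are
                     chosen to minimize error propagation during the inversion [17,18].
    """
    W = np.stack([np.kron(T, S) for S, T in zip(S_list, T_list)])   # 16 x 16 system matrix
    m = np.linalg.solve(W, np.asarray(intensities) / (0.5 * eta * I0))
    return m.reshape(4, 4)   # lexicographic (row-major) reading of M
```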

Let us illustrate our approach on a real Mueller matrix image. We consider a scene composed of four regions (three regions over a background). The three regions and the background will be denoted ti, i ∈ {1, 2, 3}, and b in the following, and are represented in Fig. 2(a). The intensity image of the scene is presented in Fig. 2(b). We can see that the four regions cannot be discriminated in it, since they have similar intensity reflectances. The Mueller matrices and covariance matrices are estimated from the database, and these estimates are used to design a ML classifier (see Eq. (5)). The classification results are presented in Fig. 3(b). We can see that the different objects are globally well discriminated, although some errors are present.

Fig. 2 (a) Scheme of the scene. (b) Intensity image.

Fig. 3 (a) Mueller image of the scene with an integration time of t0/16 ∼ 5 ms. (b) Results of the classification.

Our purpose in the next section will be to compare the results obtained with full Mueller data to those obtained by using a smaller number of optimized projections. To have a common reference, we will consider that the total amount of time t0 available to perform the acquisition is constant. Consequently, the larger the number of projections, the smaller the integration time for each measurement. For example, for the Mueller matrix acquisition, we have to acquire 16 projections, so the integration time is t0/16 for each image. In Fig. 3(a), it is seen that some Mueller images contain no information useful for discrimination. The time used for acquiring them would thus have been better spent on other, more informative projections. To reduce the number of images that need to be acquired, one solution is to estimate only the coefficients of the Mueller matrix containing relevant information to discriminate the regions. This selection of specific coefficients has been the subject of recent works [9], and it has been shown that, in general, 4 acquisitions are needed to extract one coefficient. In the next section, we will propose an alternative approach that makes it possible to further reduce the number of acquisitions while maintaining, or even improving, the classification results.

3. Discrimination using optimal projections

In this section, our purpose is to determine the set of polarimetric projections that maximizes the discrimination capacity. We can note that Eq. (1) can be put in the form of a linear projection [21]:

$$ i = \frac{\eta I_0}{2}\, \mathbf{q}^T \mathbf{m} $$

where m is the 16-component vector formed by reading the Mueller matrix

$$ \mathbf{M} = \begin{bmatrix} M_{00} & M_{01} & M_{02} & M_{03} \\ M_{10} & M_{11} & M_{12} & M_{13} \\ M_{20} & M_{21} & M_{22} & M_{23} \\ M_{30} & M_{31} & M_{32} & M_{33} \end{bmatrix} $$

in lexicographic order,

$$ \mathbf{m} = \left[\, M_{00} \;\; M_{01} \;\; M_{02} \; \cdots \; M_{33} \,\right], $$

and q = T ⊗ S, where ⊗ denotes the Kronecker product. Using this formulation, each pixel of the scene can be seen as a point in a 16-dimensional space defined by the components of the Mueller matrix. Since all the pixels associated with the same object do not have exactly the same polarimetric properties, each region is represented by a point cloud in this 16-dimensional space. Using a vector q, it is then possible to project these points onto a direction of the space in which the separability of the regions is maximal. The problem is thus to find the optimal set of linear projections that maximizes the separability of the different point clouds. This issue has long been solved in pattern recognition: it is the well-known Linear Discriminant Analysis (LDA) [22]. The optimal projection vectors are the K – 1 generalized eigenvectors associated with the interclass and intraclass covariance matrices. In our case, this would lead to K – 1 vectors q that, after projection of the data, maximize the separability of the different regions. However, this solution is valid only when the domain of definition of the projection vectors is the set of real-valued 16-component vectors with unit norm. In our problem, the vector q = T ⊗ S does not span this space (it has 4 degrees of freedom instead of 15). Consequently, the classical LDA technique cannot be used, and we describe in this section an approach to solve this problem. The key point is to define a tractable separability criterion for multi-region discrimination, which is not obvious. We then briefly describe the optimization algorithm that we use to determine the optimal configuration and illustrate this approach on a real-world imaging scenario.
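The following short NumPy check illustrates the equivalence between the two formulations (TᵀMS versus qᵀm with q = T ⊗ S) and the restricted nature of q; the Stokes vectors below are arbitrary fully polarized states chosen only for the test.

```python
import numpy as np

# Check that T^T M S equals q^T m with q = kron(T, S) and m = M read in lexicographic order.
rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4))
S = np.array([1.0, 0.6, 0.8, 0.0])   # unit-intensity, fully polarized (s1^2 + s2^2 + s3^2 = 1)
T = np.array([1.0, 0.0, 0.6, 0.8])
q = np.kron(T, S)
m = M.flatten()
assert np.isclose(T @ M @ S, q @ m)
# Only 4 angles (two azimuths, two ellipticities) generate q, so q does not span the
# full space of real-valued unit-norm 16-component vectors assumed by classical LDA.
```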

3.1. Separability criterion

The statistics of each class k, and thus the performance of the classifier, depends on the sets of illumination and analysis states 𝒮 = [S1, ..., SN] and 𝒯 = [T1, ..., TN]. Our goal in the following will be to optimize these states in order to maximize the separability of the different classes. For that purpose, one has to define a separability criterion, that is, a function C(𝒮, 𝒯) of the illumination and analysis states that quantifies the discrimination performance. The optimal projection parameters will be obtained as:

$$ (\hat{\mathcal{S}}, \hat{\mathcal{T}}) = \arg\max_{\mathcal{S}, \mathcal{T}} \left[ C(\mathcal{S}, \mathcal{T}) \right] $$
The adequate separability criterion for a multi-class discrimination problem is the Bayesian probability of error, which involves a sum of integrals of the PDF over the decision regions, weighted by the relative importance of the different types of errors (misclassification between pairs of classes) [23]. However, this criterion is difficult to calculate and optimize. This is why we will use a separability criterion that is suboptimal but easier to compute. Recently, it has been shown that for real-world discrimination tasks such as target detection and object segmentation, the Bhattacharyya distance is a good candidate for a separability criterion [24]. We have thus decided to use it as the contrast criterion in our study.

The Bhattacharyya distance (B) is an asymptotic exponent of the probability of error in discrimination problems [25, 26]. Let us consider two probability density functions (pdf) Pk(x) and Pl(x). In our case, these pdfs correspond to the noise statistics of the pixels in regions k and l. If n denotes the size of the sample, the probability of error in deciding whether the observed data have been generated by Pk(x) or Pl(x) behaves as exp[−nB] as n tends to infinity [26]. Considering our two sets of data defined by their pdfs Pk(x) and Pl(x), the Bhattacharyya distance is defined as:

$$ B_{k,l} = -\ln\left[ \int_{\mathcal{D}} \left[ P_k(\mathbf{x}) P_l(\mathbf{x}) \right]^{1/2} d\mathbf{x} \right] $$
with 𝒟 the definition domain of Pk and Pl. The Bhattacharyya distance is thus a scalar value that quantifies the similarity between the pdfs Pk(x) and Pl(x). It belongs to the interval [0, +∞), is equal to zero when the pdfs are identical, and is infinite when their supports do not overlap.

In our case, the Bhattacharyya distance between two classes k and l has the following expression:

$$ B_{k,l}(\mathcal{S}, \mathcal{T}) = \frac{1}{8} (\bar{\mathbf{x}}_k - \bar{\mathbf{x}}_l)^T \left[ \frac{\Gamma_k + \Gamma_l}{2} \right]^{-1} (\bar{\mathbf{x}}_k - \bar{\mathbf{x}}_l) + \frac{1}{2} \log\left[ \frac{\det\left( \frac{\Gamma_k + \Gamma_l}{2} \right)}{\sqrt{\det(\Gamma_k) \det(\Gamma_l)}} \right] $$
We note that, if the average values of the projected pixels in the different classes are sufficiently different, the second term, which depends only on the difference between the covariance matrices Γk and Γl, can be neglected. This simplification is particularly valuable since a large number of iterations has to be performed to compute the optimal sets of images. We have checked that, in the example of Fig. 4, taking into account the second term does not change the sets of optimal polarization states.
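A direct NumPy transcription of this Gaussian Bhattacharyya distance, with an option to drop the covariance term as discussed, could read as follows (a sketch, not the code used for the experiments):

```python
import numpy as np

def bhattacharyya_gaussian(mu_k, G_k, mu_l, G_l, mean_term_only=False):
    """Bhattacharyya distance between two Gaussian classes (means mu, covariances G)."""
    G = 0.5 * (G_k + G_l)
    d = mu_k - mu_l
    B = 0.125 * d @ np.linalg.solve(G, d)
    if not mean_term_only:   # second term, driven by the covariance difference
        B += 0.5 * np.log(np.linalg.det(G)
                          / np.sqrt(np.linalg.det(G_k) * np.linalg.det(G_l)))
    return B
```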

Fig. 4 Optimal sets of N = 1, 2 and 3 projections maximizing the separability of the objects in the scene. Last column: results of classification using a ML classifier. Last row: classification results obtained with full Mueller matrix data. The polarization states in illumination and analysis are represented by their azimuth (α) and ellipticity (ɛ): (αS, ɛS), (αT, ɛT).

When the number of regions to discriminate is larger than two, there are several possible ways of merging the separabilities of the pairs of classes into a single numerical criterion. We have chosen a min-max approach, which consists of maximizing the minimal separation between pairs of classes. It corresponds to maximizing the following criterion:

$$ C(\mathcal{S}, \mathcal{T}) = \min_{(k,l),\, k \neq l} \left[ B_{k,l}(\mathcal{S}, \mathcal{T}) \right] $$
The higher the value of this criterion, the easier the discrimination of all classes in the N-dimensional space. Our goal is now to determine the optimal set of parameters (𝒮, 𝒯) maximizing this criterion.
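A possible sketch of this criterion, with the Gaussian Bhattacharyya distance inlined so that the snippet is self-contained, is:

```python
import numpy as np
from itertools import combinations

def bhattacharyya(mu_k, G_k, mu_l, G_l):
    G = 0.5 * (G_k + G_l)
    d = mu_k - mu_l
    return (0.125 * d @ np.linalg.solve(G, d)
            + 0.5 * np.log(np.linalg.det(G)
                           / np.sqrt(np.linalg.det(G_k) * np.linalg.det(G_l))))

def separability_criterion(means, covs):
    """Min-max criterion: keep the smallest pairwise distance, so that maximizing
    C(S, T) pushes apart the worst-separated pair of classes."""
    return min(bhattacharyya(means[k], covs[k], means[l], covs[l])
               for k, l in combinations(range(len(means)), 2))
```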

3.2. Computational issue for the optimization

Once the optimization criterion has been defined and the number N of projections chosen, one has to search for the optimal combination of projections maximizing the separability. The main characteristic of this problem is the large number of parameters that have to be optimized simultaneously. Indeed, each projection depends on 4 parameters, namely the azimuth and ellipticity of the illumination and analysis states. Searching for N projections thus involves optimizing 4N parameters, which is quite large even for low values of N, and it is thus likely that the separability criterion will have local maxima. It is therefore necessary to use an algorithm robust to the presence of local maxima. After having compared different solutions, we have chosen the Shuffled Complex Evolution (SCE-UA) method [27]. This algorithm generates different sets of N pairs of illumination and analysis polarization states and evolves them within a global evolution framework until it converges to the set of parameters (𝒮, 𝒯) that yields the highest separability. We have verified that, in our applications, it converges rapidly to the global maximum.
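The optimization loop can be sketched as follows. Since SCE-UA is not available in standard scientific Python libraries, the sketch substitutes SciPy's differential evolution as a stand-in global optimizer that is also designed to cope with local maxima; the azimuth/ellipticity parameterization of the Stokes vectors is standard, and criterion stands for a function such as the min-max criterion above, evaluated on the scene statistics.

```python
import numpy as np
from scipy.optimize import differential_evolution

def stokes_from_angles(alpha, eps):
    """Unit-intensity, fully polarized Stokes vector from azimuth alpha and ellipticity eps."""
    return np.array([1.0,
                     np.cos(2 * eps) * np.cos(2 * alpha),
                     np.cos(2 * eps) * np.sin(2 * alpha),
                     np.sin(2 * eps)])

def optimize_states(N, criterion):
    """Search the 4N angles (alpha_S, eps_S, alpha_T, eps_T per projection) that maximize
    criterion(S_list, T_list). The paper uses SCE-UA [27]; differential evolution is used
    here only as a readily available global optimizer."""
    def neg_criterion(theta):
        a = theta.reshape(N, 4)
        S_list = [stokes_from_angles(a[n, 0], a[n, 1]) for n in range(N)]
        T_list = [stokes_from_angles(a[n, 2], a[n, 3]) for n in range(N)]
        return -criterion(S_list, T_list)

    bounds = [(0.0, np.pi), (-np.pi / 4, np.pi / 4)] * (2 * N)   # azimuth, ellipticity
    result = differential_evolution(neg_criterion, bounds, seed=0)
    return result.x.reshape(N, 4), -result.fun
```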

3.3. Application to a real-world imaging example

Let us now apply the proposed approach to the scene represented in Fig. 2. We have considered successively that we can acquire one, two or three polarimetric projections. For each of these configurations, we have computed the discrimination results obtained with the classifier presented in Eq. (5). The obtained images, the associated optimal states and the results of the classification are gathered in Fig. 4, together with the discrimination result and Bhattacharyya distance obtained with full Mueller matrix data (see Fig. 3), which will serve as a benchmark.

As explained in the previous section, for all the acquisition scenarios, the total measurement time is constant and equal to t0. If only one projection is acquired, the integration time is thus t0. It can be seen in Fig. 4(a) that the obtained image is indeed much less noisy than the Mueller images in Fig. 3, since the latter correspond to an integration time that is 16 times smaller for each projection. However, for this 4-region scenario, one projection, although optimal, is clearly insufficient, since many classification errors are observed in Fig. 4(a). To visualize the detection efficiency, we have represented the estimated PDFs (histograms) associated with the four classes of objects in Fig. 5. We can observe that the average values of the four classes are well separated (which explains why we can globally discriminate the objects in the scene), but there is also a large overlap between the different PDFs, which leads to a large number of classification errors, as can be seen in Fig. 4(a). Consequently, the contrast criterion (𝒞 = 2.0) is low compared to that obtained using the full Mueller matrix (𝒞 = 6.6). It is thus necessary to increase the number of projections to enhance the classification performance.

Fig. 5 Estimated PDFs of the four classes in the scene (estimated on samples of around 300 pixels in each class). This projection corresponds to the results in the first row of Fig. 4.

Let us now consider that we can acquire two projections (see Figs. 4(b) and 4(c)). These images are obtained with an integration time equal to t0/2. We observe that the optimal projections are different from those obtained when optimizing the separability on only one projection (Fig. 4(a)). Indeed, in the image of Fig. 4(b), the three regions are separated from the background but are not discriminated from one another. This discrimination is achieved thanks to the second image (Fig. 4(c)). To visualize the detection efficiency, we have plotted, in Fig. 6, the pixel value distributions of the different objects in the 2-dimensional space defined by the two optimal projections. We can see that the different point clouds are well separated, which leads to good discrimination performance, as can be seen in Fig. 4(b). Indeed, the obtained value of the separability criterion (𝒞 = 9.8) is higher than that obtained with full Mueller matrix data. We can now ask the question: is it possible to further increase the contrast by using a third projection?

Fig. 6 Representation of the pixels of the four classes in the 2-dimensional space defined by the two images presented in the second row of Fig. 4.

The set of three projections that maximizes separability is presented in Figs. 4(d), 4(e) and 4(f). The integration time for each image is equal to t0/3. These images are different from all the previously obtained images and correspond to a different way of discriminating the objects. Indeed, the first image discriminates the regions t1 and t3 from the background b and the region t2, the second image separates the region t1 from the region t3, and the third image isolates all the regions from the background. This set of images leads to a high value of the separability criterion (𝒞 = 13.1), which corresponds to a good separability of the classes in the 3-dimensional space (see Fig. 7) and thus to excellent discrimination results, as can be seen in Fig. 4(c). These results are better than those obtained with the full Mueller matrix because we acquire only images that contain information relevant to classification: by decreasing the number of images, we increase the integration time associated with each image and thus increase the signal-to-noise ratio.

Fig. 7 Representation of the pixels of the four classes in the 3-dimensional space defined by the three images presented in the third row of Fig. 4.

The objective is thus to find the number of images that provides all the information necessary for the discrimination while maintaining a good signal-to-noise ratio. Indeed, by increasing the number of images while keeping the same global acquisition time, we may not extract more useful information, and since the signal-to-noise ratio in each image is decreased, this may lead to a decrease of the contrast. To verify this assumption, we have represented in Fig. 8 the evolution of the contrast criterion as a function of the number of optimal images acquired. The integration time for one set of images is kept constant and equal to t0 ∼ 80 ms. We can see in Fig. 8 that the contrast first increases and reaches its maximum for 3 images. This evolution can be explained by the fact that each extra image brings enough new information to compensate for the decrease of the signal-to-noise ratio per image due to the reduction of the integration time. With 3 images, all the information useful for discrimination is gathered.
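This scan over the number of projections can be emulated with the routines sketched in the previous sections, under the simplifying assumption that the per-image noise variance grows like N when the total time t0 is split into N acquisitions; criterion_factory and optimize_states below are hypothetical placeholders for those routines.

```python
def scan_projection_number(criterion_factory, optimize_states, N_max=6):
    """For each candidate number of projections N, rebuild the separability criterion with
    the per-image noise variance scaled by N (integration time t0/N), optimize the
    polarization states, and record the best achievable contrast."""
    contrasts = []
    for N in range(1, N_max + 1):
        criterion = criterion_factory(noise_scale=N)   # shorter integration time, more noise
        _, best_C = optimize_states(N, criterion)
        contrasts.append(best_C)
    return contrasts   # expected to peak near N = K - 1 for K classes, as in Fig. 8
```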

Fig. 8 Evolution of the contrast criterion as a function of the number of optimal images acquired. The integration time for one set of images is kept constant and equal to t0 ∼ 80 ms.

This is in perfect accordance with the standard result in Linear Discriminant Analysis (LDA): if K classes have to be discriminated, K – 1 linear projections are sufficient to obtain optimal discrimination [22]. If the number of acquired images is increased further while keeping the total acquisition time constant, the information brought by these new images is no longer sufficient to compensate for the reduction of the signal-to-noise ratio, and the contrast decreases. We can also notice that with 16 optimal projections, we obtain the same contrast as with the raw Mueller matrix data (𝒞 = 6.6).

4. Conclusion

We have proposed a methodology to determine the set of active polarimetric measurements that optimizes discrimination of an arbitrary number of regions with different polarimetric properties. For that purpose, we have considered systems in which the illumination and analysis states can vary over the whole Poincaré sphere. Of course, this methodology can also be easily applied to systems with reduced sets of accessible polarization states in illumination and/or in analysis. We have demonstrated on a real-world example that a few optimized polarimetric measurements can outperform full Mueller matrix imaging. The optimal number of measurements is of course highly dependent on the nature of the observed scene.

It is clear that the determination of the optimal projections requires some prior knowledge about the polarimetric properties of the scene, i.e., the average and the covariance of the Mueller matrices of each class. It thus cannot be used in all polarimetric imaging scenarios. However, if such prior knowledge is available, we have demonstrated that it may be preferable to use a few well-selected acquisitions. Of course, research on adaptive strategies that can reduce or suppress the need for prior knowledge about the scene is an interesting perspective of the present work.

The authors thank Ryoichi Horisaki for fruitful discussions and the anonymous reviewers for their comments which were very helpful to improve the quality of this paper. Guillaume Anna’s Ph.D thesis is supported by the Délégation Générale pour l’Armement, MRIS domain IMAT.

References and links

1. J. E. Solomon, “Polarization imaging,” Appl. Opt. 20, 1537–1544 (1981).

2. J. S. Tyo, M. P. Rowe, E. N. Pugh, and N. Engheta, “Target detection in optical scattering media by polarization-difference imaging,” Appl. Opt. 35, 1855–1870 (1996).

3. S. Breugnot and P. Clémenceau, “Modeling and performances of a polarization active imager at λ = 806 nm,” Opt. Eng. 39, 2681–2688 (2000).

4. S. L. Jacques, J. C. Ramella-Roman, and K. Lee, “Imaging skin pathology with polarized light,” J. Biomed. Opt. 7, 329–340 (2002).

5. Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, “Polarization-based vision through haze,” Appl. Opt. 42, 511–525 (2003).

6. F. Boulvert, B. Boulbry, G. Le Brun, B. Le Jeune, S. Rivet, and J. Cariou, “Analysis of the depolarizing properties of irradiated pig skin,” J. Opt. A Pure Appl. Opt. 7, 21–28 (2005).

7. J. M. Bueno, J. Hunter, C. Cookson, M. Kisilak, and M. Campbell, “Improved scanning laser fundus imaging using polarimetry,” J. Opt. Soc. Am. A 24, 1337–1348 (2007).

8. A. Pierangelo, B. Abdelali, M.-R. Antonelli, T. Novikova, P. Validire, B. Gayet, and A. De Martino, “Ex-vivo characterization of human colon cancer by Mueller polarimetric imaging,” Opt. Express 19, 1582–1593 (2011).

9. J. S. Tyo, Z. Wang, S. J. Johnson, and B. G. Hoover, “Design and optimization of partial Mueller matrix polarimeters,” Appl. Opt. 49, 2326–2333 (2010).

10. F. Goudail, “Comparison of the maximal achievable contrast in scalar, Stokes and Mueller images,” Opt. Lett. 35, 2600–2602 (2010).

11. M. Dubreuil, S. Rivet, B. Le Jeune, and J. Cariou, “Snapshot Mueller matrix polarimeter by wavelength polarization coding,” Opt. Express 15, 13660–13668 (2007).

12. A. A. Swartz, H. A. Yueh, J. A. Kong, L. M. Novak, and R. T. Shin, “Optimal polarizations for achieving maximal contrast in radar images,” J. Geophys. Res. 93, 15252–15260 (1988).

13. J. Yang, et al., “Numerical methods for solving the optimal problem of contrast enhancement,” IEEE Trans. Geosci. Remote Sens. 38, 965–971 (2000).

14. M. Floc’h, G. Le Brun, C. Kieleck, J. Cariou, and J. Lotrian, “Polarimetric considerations to optimize lidar detection of immersed targets,” Pure Appl. Opt. 7, 1327–1340 (1998).

15. F. Goudail, “Optimization of the contrast in active Stokes images,” Opt. Lett. 34, 121–123 (2009).

16. F. Goudail and A. Bénière, “Optimization of the contrast in polarimetric scalar images,” Opt. Lett. 34, 1471–1473 (2009).

17. J. S. Tyo, “Design of optimal polarimeters: maximization of the signal-to-noise ratio and minimization of systematic error,” Appl. Opt. 41, 619–630 (2002).

18. F. Goudail, “Noise minimization and equalization for Stokes polarimeters in the presence of signal-dependent Poisson shot noise,” Opt. Lett. 34, 647–649 (2009).

19. S. Ainouz, O. Morel, and F. Meriaudeau, “Geometric-based segmentation of polarization-encoded images,” in IEEE International Conference on Signal Image Technology and Internet Based System (2008).

20. J. Ahmad and Y. Takakura, “Improving segmentation maps using polarization imaging,” in IEEE International Conference on Image Processing (2007).

21. J. Zallat, S. Ainouz, and M. P. Stoll, “Optimal configurations for imaging polarimeters: impact of image noise and systematic errors,” J. Opt. A 8, 807–814 (2006).

22. K. Fukunaga, Introduction to Statistical Pattern Recognition (Academic Press, San Diego, 1990).

23. H. L. Van Trees, Detection, Estimation and Modulation Theory (John Wiley and Sons, Inc., New York, 1968).

24. F. Goudail, P. Réfrégier, and G. Delyon, “Bhattacharyya distance as a contrast parameter for statistical processing of noisy optical images,” J. Opt. Soc. Am. A 21, 1231–1240 (2004).

25. T. M. Cover and J. A. Thomas, Elements of Information Theory (John Wiley and Sons, New York, 1991).

26. A. Jain, P. Moulin, M. I. Miller, and K. Ramchandran, “Information-theoretic bounds on target recognition performance based on degraded image data,” IEEE Trans. Pattern Anal. Mach. Intell. 24, 1153–1166 (2002).

27. Q. Y. Duan, V. K. Gupta, and S. Sorooshian, “A shuffled complex evolution approach for effective and efficient global minimization,” J. Optim. Theory Appl. 76, 501–521 (1993).
