Abstract
An algorithm is described to extract two features that represent the chromaticity of a surface and that are independent of both the intensity and correlated color temperature of the daylight illuminating a scene. For mathematical convenience this algorithm is derived using the assumptions that each photodetector responds to a single wavelength and that the spectrum of the illumination source can be represented by a blackbody spectrum. Neither of these assumptions will be valid in a real application. A new method is proposed to determine the effect of violating these assumptions. The conclusion reached is that two features can be obtained that are effectively independent of the daylight illuminant if photodetectors with a spectral response whose full width at half maximum is or less are used.
© 2010 Optical Society of America
1. INTRODUCTION
A well-known problem when imaging naturally illuminated scenes is that shadows and other effects can create a scene with a wide dynamic range that can lead to saturation and/or underexposure of parts of a scene. In some applications an equally important problem is that the spectral composition of the illuminant varies. These variations can arise between regions in shadow and in direct illumination within the same scene. However, even larger variations occur in directly illuminated scenes at different times of day [1, 2] or on different days. These variations make it very difficult, if not impossible, to use otherwise useful color or chromaticity information to find and/or recognize potentially interesting objects within a scene.
Imaging sensors with a high input dynamic range are available from companies including Aptina and Melexis. Since these sensors can image scenes without saturation and/or underexposure, the more subtle problem of illuminant variation can be addressed. The ability to determine the color of a surface independent of the illuminant is known as color constancy. Many methods of achieving color constancy have been proposed, including algorithms that process the logarithms of the responses of narrow-band photodetectors [3, 4, 5]. One of these methods, which has the benefits that it is simple and yet applicable when the illuminant is daylight, is the “color constancy at a pixel” algorithm proposed by Finlayson and co-workers [6, 7]. This algorithm was developed based on the assumption that each of the four photodetectors responds to a single wavelength, with these wavelengths at different positions in the visible spectrum. Since this type of response would severely limit the number of photons reaching each photodetector, it is far from ideal. However, this mathematically “ideal” spectral response was included in the development of the algorithm only for convenience. If the performance of the algorithm is not critically dependent on using photodetectors with a narrow spectral response, it may be possible to create cameras whose outputs are suitable for extracting reliable chromaticity information from a daylight-illuminated scene despite the diurnal variations in the spectrum of daylight.
In this paper the results of an initial investigation into the effect of the photodetectors' spectral response on an algorithm inspired by the work of Finlayson and others [6] are presented. Although this means that our primary concern is the width of the spectral response of the photodetectors, it is necessary to take into account the effect of noise introduced into the data. The paper starts in Section 2 with a mathematical model of the photon flux incident on a photodetector. This model is then simplified, and a method of combining the responses of four photodetectors to create two descriptors, or features, that are independent of the illuminator is described. In Section 3 four example surface reflectances are used to explain why reflectances that correspond to different colors will be projected to different locations in the two-dimensional feature space. This prediction is then confirmed in Section 4 by estimating the responses of photodetectors using numerical simulation. A method is then proposed in Section 5 to assess the impact on the two features of using data from photodetectors that respond to a range of wavelengths. Finally, in Section 6 the effects of subtle changes in both the spectral characteristics of the photodetectors and the reflectance data are presented to ensure that any conclusions are independent of the details used to obtain the results.
2. ALGORITHM
An image is formed when light from an illuminator is reflected from different parts of a scene into an array of detectors. If the intensity of the illuminator is I and its output spectrum is , then the response, , of a photodetector with a spectral sensitivity function that is imaging part of a scene with a reflectance at a point x on a surface is given by [6, 8]
where an underscore denotes a vector quantity. The dot product between the unit vector representing the direction of the light source and the unit vector representing the direction of the surface normal models a geometry factor that influences the amount of reflected light. This expression for the response of the photodetector can be considerably simplified if it is assumed that its spectral sensitivity function is narrow enough to be represented by a Dirac delta function. The sifting property of the Dirac delta function can then be applied to simplify Eq. (1), so that for a photodetector that is effectively sensitive to light only at a wavelength
The different components of Eq. (2) can be separated by taking the logarithm of both sides of Eq. (2) [3, 4, 6]. In particular the logarithm of the response of a photodetector can be written in the form where is the geometry factor. In scenes that are illuminated by daylight there are two potential problems. First, shadows can result in different parts of the scene having significantly different effective illumination intensities I. This can cause saturation or underexposure in parts of some scenes unless a camera with a high enough dynamic range is used. The more subtle effect is that the relative responses of the photodetectors that are sensitive to different wavelengths are influenced by both the reflectance being imaged and the spectrum of the daylight.
In naturally illuminated scenes, changes in the spectral content of daylight can influence the apparent ratio of the responses of different photodetectors and hence the color of an area in a scene. Studies have shown that the spectrum of daylight is quite similar to that of a blackbody [6, 7]. In fact they are sufficiently similar that daylight spectra are often described using the temperature of the blackbody with the most similar spectrum, a parameter known as the correlated color temperature (CCT) for the particular daylight spectrum. This similarity means that the power spectrum of a blackbody is a useful approximation when developing an algorithm to deal with the diurnal changes in the spectrum of daylight. The output spectrum for a blackbody with a temperature T, , can be calculated using Planck’s equation,
where h is Planck’s constant, is the Boltzmann constant, and c is the speed of light. Over the wavelength and CCT ranges of interest the blackbody spectrum can be approximated by the Wien approximation [6], where and . Substituting Eq. (5) into Eq. (3) then gives where the equation is written in this form to emphasize a wavelength-independent component, a component that depends on the reflectance of the surface being imaged, and a component that depends on the CCT of the illuminant. To obtain an illuminant-independent descriptor of the surface reflectance both the first and third terms in Eq. (6) need to be removed. Following a procedure similar to that adopted by Marchant and Onyango [9], the two undesirable components in Eq. (6) can be removed by taking the log-difference between the response of one detector and the weighted sum of the responses of two other detectors to form the descriptor or feature
Substituting Eq. (6) into Eq. (7) will cancel the wavelength-independent components of the photodetector responses: This feature can then be made independent of the CCT of the illuminator if , which simplifies to . A feature that is independent of the illuminator can therefore be obtained using this equation to select the value α and the wavelengths at which the three photodetectors respond. With only one feature, quite different reflectances can be confused [5]. To avoid this confusion a second illuminator-independent feature is required. To obtain this feature the output from a fourth photodetector can be combined with those of two of the existing photodetectors to create a second feature :
As with , this second feature will be independent of the illuminator if . The two illuminator-independent features rely on the choice of six variables (four wavelengths and two mixture coefficients) subject to two constraints [Eqs. (10, 12)]. This means that there are four variables whose values can be chosen independently. When choosing the values for these variables it is sensible to ensure that information from different parts of the visible spectrum is employed. There are different combinations of variable values that are consistent with this aim, including those in Table 1. This set of values has been chosen to cover the wavelength range from . With four photodetectors spread uniformly across this range the difference between the characteristic wavelengths of neighboring photodetectors would be . The photodetectors with the shortest and longest wavelength responses have therefore been placed from each end of the relevant wavelength range, at and . The wavelengths of the other two photodetectors have then been calculated using Eqs. (10, 12), so that and .
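The cancellation described above can be sketched numerically. The following is a runnable illustration, not the paper's exact parameters: the detector wavelengths, reflectances, and intensities are hypothetical, and the reduction of the Eq. (10) constraint to a reciprocal-wavelength condition is an assumption in the style of Finlayson-type derivations.

```python
import math

# Physical constants (SI units)
H = 6.62607015e-34    # Planck's constant, J s
C = 2.99792458e8      # speed of light, m/s
K_B = 1.380649e-23    # Boltzmann constant, J/K
C2 = H * C / K_B      # second radiation constant, m K

def planck(lam, T):
    """Blackbody spectrum from Planck's equation (cf. Eq. (4))."""
    return (2 * math.pi * H * C**2) / (lam**5 * (math.exp(C2 / (lam * T)) - 1.0))

def wien(lam, T):
    """Wien approximation (cf. Eq. (5)): drop the -1 in the denominator."""
    return (2 * math.pi * H * C**2) * lam**-5 * math.exp(-C2 / (lam * T))

def alpha_for(l1, l2, l3):
    """Weight chosen so that 1/l2 = alpha/l1 + (1 - alpha)/l3, which cancels
    the CCT-dependent Wien exponent (assumed form of the Eq. (10) constraint)."""
    return (1.0 / l2 - 1.0 / l3) / (1.0 / l1 - 1.0 / l3)

def response(lam, T, rho, intensity=1.0):
    """Single-wavelength detector response under the Wien approximation."""
    return intensity * rho * wien(lam, T)

def feature(r1, r2, r3, alpha):
    """Eq. (7)-style feature: the log of one response minus the weighted
    sum of the logs of two others."""
    return math.log(r2) - alpha * math.log(r1) - (1.0 - alpha) * math.log(r3)

# Over the visible range at a daylight-like CCT the Wien approximation is
# close to Planck's equation, because the exponent there is much larger than 1.
err = abs(planck(700e-9, 6500.0) - wien(700e-9, 6500.0)) / planck(700e-9, 6500.0)
print(f"Wien vs. Planck relative error at 700 nm, 6500 K: {err:.3f}")

l1, l2, l3 = 450e-9, 540e-9, 620e-9   # hypothetical detector wavelengths
rho = (0.3, 0.6, 0.2)                 # hypothetical surface reflectances
a = alpha_for(l1, l2, l3)             # approx. 0.392 for these wavelengths

# The feature is unchanged when both the CCT and the intensity change.
r_warm = [response(l, 4000.0, p, intensity=1.0) for l, p in zip((l1, l2, l3), rho)]
r_cool = [response(l, 10000.0, p, intensity=5.0) for l, p in zip((l1, l2, l3), rho)]
f_warm = feature(r_warm[0], r_warm[1], r_warm[2], a)
f_cool = feature(r_cool[0], r_cool[1], r_cool[2], a)
print(f"feature at 4000 K: {f_warm:.6f}, at 10000 K: {f_cool:.6f}")  # identical
```

The intensity term cancels because the mixing weights sum to one, and the CCT term cancels by the choice of α, so only the reflectance-dependent and fixed wavelength-dependent terms survive.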
3. TWO-DIMENSIONAL FEATURE SPACE
Applying Eqs. (7, 11) to the photodetector responses will lead to two features that are ideally independent of the illuminator. The perceived color of a surface within an image partly depends upon its relative lightness. Since the relative lightness of a surface is indistinguishable from a change in the local illuminator intensity this information is lost in Eqs. (7, 11). The two features that are obtained from these two equations therefore represent the chromaticity of the surface rather than its color.
To be robust to noise in the photodetector responses the two features should form a space in which very different chromaticities are widely separated. To understand why different chromaticities are widely separated in the two-dimensional feature space consider the four representative reflectances in Fig. 1. With the parameters listed in Table 1, substituting Eq. (6) into Eqs. (7, 11) gives
where and are constants. This shows that for this choice of parameters the features are independent of the illuminator. Equations (13, 14) show that the responses of four photodetectors can be used to extract two illuminant-independent chromaticity features. Any other combination of three of these four photodetector responses yields a linear combination of these two features. In principle more information about a surface can be obtained using photodetectors with more than four different spectral responses. In particular, the process of creating a feature space that is independent of both the intensity and CCT of the illuminant is expected to result in a space that has two dimensions fewer than the number of different types of photodetectors. In some applications these extra dimensions could be useful. For example, to estimate the reflectance of a colored surface in a higher-dimensional space, the algorithm can be adapted to n color channels by taking three responses at a time to form an illuminant-independent feature as given in Eq. (7). In this case the estimated reflectance features will lie in an (n−2)-dimensional space. However, four different types of photodetectors can be easily accommodated in the Bayer color filter pattern commonly used in color cameras, and the resulting two-dimensional feature space is sufficient to represent the chromaticity of a surface.
Assume that is mapped onto the x axis and is mapped onto the y axis of an illuminant-independent chromaticity space. Figure 1 shows that the reflectance of the gray surface is independent of wavelength. This means that all the photodetector responses are identical, and therefore this surface will be mapped to point in the feature space. The relative positions of the other surfaces with respect to the position of the gray surface can then be predicted as follows. Consider the reflectance spectrum of the blue surface in Fig. 1. For this surface photodetector 1 will have the strongest response, photodetector 2 will have a strong response, but the responses of photodetectors 3 and 4 will be weaker. These relative values will mean that for this reflectance is larger than but is smaller than . In the feature space this blue surface will therefore appear to the right of and below the gray surface. The equivalent processes for the green and red surfaces in Fig. 1 lead to the conclusion that green will be to the right of and above gray while red will be to the left of and above gray. These colors will therefore be well separated in the two-dimensional feature space. More importantly, their relative positions have been determined using the ratios of reflectances at different wavelengths. This means that surfaces with similar reflectances will be projected to neighboring parts of the feature space.
4. SIMULATED FEATURE SPACE
For simplicity the discussion so far has been based upon the assumption that the photodetectors respond only to a single wavelength. Technologically it is difficult to make a detector with this type of response. Equally important, photodetectors with a very narrow spectral response will be starved of photons and hence have a very poor sensitivity. To determine whether this algorithm is practical it is therefore critically important to determine the effect of using photodetectors that respond to a range of different wavelengths. The effect of changing the spectral response of the photodetectors has been investigated using a Gaussian function to represent the spectral response of each photodetector. For a range of Gaussian model widths the response of each photodetector has been obtained by numerically evaluating Eq. (1) for a range of different illuminators and surface reflectances.
Although the theory leading to the features has been derived assuming a blackbody illuminator, the numerical integration (simulation) has been performed using CIE standard daylight spectra [10]. These standard spectra are generated from three basis functions whose contributions to the final spectrum are determined by the CCT. Daylight spectra measured at different times and locations suggest that daylight spectra correspond to different ranges of CCT [11, 12, 13]. However, most of the measured daylight spectra fall in the CCT range from and the CIE standard daylight spectra represent measured data quite accurately below [14]. For this investigation 14 different spectra with CCTs between and have therefore been used. The particular values used (, , , , , , , , , , , , and ) were chosen to represent the non-uniform distribution of CCTs from measured daylight spectra [14].
The surface reflectances that were used for this study were the Munsell reflectance samples [15] widely used in color research [16, 17]. These data were sampled at intervals, and the response of each detector to the different Munsell reflectances was obtained by integrating the product of Munsell reflectance, the CIE standard daylight spectra, and the Gaussian photodetector sensitivity model over the wavelength range from .
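The integration described above can be sketched as follows. This is a placeholder illustration, not the paper's computation: the flat illuminant and reflectance, the 5 nm sampling step, and the 550 nm center / 100 nm FWHM detector are all assumed values standing in for the CIE daylight spectra and Munsell samples.

```python
import math

def gaussian_sensitivity(lam, center, fwhm):
    """Gaussian spectral response; FWHM = 2*sqrt(2*ln 2)*sigma."""
    sigma = fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    return math.exp(-0.5 * ((lam - center) / sigma) ** 2)

def detector_response(reflectance, illuminant, center, fwhm, lams):
    """Trapezoidal integration of reflectance * illuminant * sensitivity
    over the sampled wavelengths (cf. Eq. (1), geometry factor omitted)."""
    vals = [reflectance(l) * illuminant(l) * gaussian_sensitivity(l, center, fwhm)
            for l in lams]
    total = 0.0
    for i in range(len(lams) - 1):
        total += 0.5 * (vals[i] + vals[i + 1]) * (lams[i + 1] - lams[i])
    return total

lams = [400.0 + 5.0 * i for i in range(61)]   # 400-700 nm at 5 nm steps (assumed)
flat = lambda l: 1.0                          # gray, wavelength-independent surface
illum = lambda l: 1.0                         # placeholder flat illuminant
r = detector_response(flat, illum, 550.0, 100.0, lams)
print(f"response: {r:.1f}")
```

In the study itself the `reflectance` and `illuminant` callables would be replaced by sampled Munsell and CIE daylight data, with one such integral per photodetector.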
A typical feature space obtained using the 14 CIE standard daylights and 202 Munsell reflectances with similar relative luminance is shown in Fig. 2. In this figure each cross represents the actual color of the Munsell reflectance when illuminated with one of the 14 daylight spectra. The blues, greens, and reds occur in the expected relative positions in the space. Also as expected, most similar colors are near neighbors in the feature space. A closer inspection of the feature space shows that the imperfect cancellation of the changes in the daylight spectra means that each of the Munsell reflectances creates a small cluster of responses in the feature space. The size of these clusters depends on the spectral width of the detectors that are being used. A method is required to determine the significance of the area occupied by each cluster of responses that correspond to the same reflectance. This method can then be used to determine the widths of the spectral responses that may be used to obtain data from which features can be extracted.
5. ASSESSMENT OF THE FEATURE SPACE
Rather than assessing the feature space for a particular application the approach that has been adopted is to compare the size of each cluster of responses with a measure of the perceptual similarity of the colors of reflectances that create neighboring clusters in the feature space. A color space that has been defined so that distances between colors within the space are proportional to their perceptual differences is the CIELab space [5]. In this space colors that are separated by a Euclidean distance of one unit are just noticeably different. However, just noticeable differences are difficult to detect, and the differences between colors that are separated by between 3.0 and 6.0 units have been described as good matches [18, 19]. This suggests that the size of each cluster in the feature space should be compared to the separation of reflectances that are separated in CIELab space by a few units. An important factor to take into account when comparing CIELab coordinates is that the L value of each reflectance spectrum represents its relative luminance when viewed by an observer whose eyes are adapted to a particular light level. The feature space has been designed to be independent of relative lightness and therefore independent of L. The sizes of the clusters of responses in the feature space have therefore been assessed using reflectances with very similar L values. In the CIELab space the value of L varies from 100 for the brightest colors to 0 for absolute black. Examination of the distribution of L values of the 1269 Munsell data set showed that the L values of the reflectances in this data set were close to one of a small number of values. An L value of 50 is used as the reference L value in the CIE standard 1994 color difference model (CIE94) [1], and there are 187 reflectances with L values between 47.8 and 50.2 in the Munsell data set. The Munsell samples with L values in this range were therefore used as the test data to assess the feature space. 
To obtain 100 pairs of test reflectances, the distances between different pairs of these 187 reflectances were compared. If the difference between the L values of a pair was smaller than 0.5 units then the CIELab distance between the pair was calculated. After discarding any pairs that were separated by more than six CIELab units, it was necessary to use pairs that were separated by between 4.6 and 6.0 units in CIELab space to obtain 100 pairs of perceptually similar reflectances.
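The pair-selection procedure can be sketched as below. The Lab coordinates are made up for illustration; the study uses the Munsell samples, and the CIE76 Euclidean distance is assumed as the CIELab difference measure.

```python
import math
import itertools

def delta_e(lab1, lab2):
    """Euclidean distance in CIELab (the CIE76 color difference)."""
    return math.dist(lab1, lab2)

# Hypothetical (L, a, b) samples near L = 50
samples = [
    (49.8, 20.0, 10.0),
    (50.1, 24.0, 12.0),
    (48.0, -30.0, 5.0),
    (49.9, 21.0, 14.5),
]

# Keep pairs whose L values differ by less than 0.5 units and whose CIELab
# separation falls in the 4.6-6.0 unit band used for the test set.
pairs = [
    (p, q)
    for p, q in itertools.combinations(samples, 2)
    if abs(p[0] - q[0]) < 0.5 and 4.6 <= delta_e(p, q) <= 6.0
]
print(pairs)
```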
In earlier work the size of the cluster of responses corresponding to a particular reflectance was characterized using the smallest circle enclosing all the relevant responses [20]. This is a simple method of determining the area covered by a set of responses. However, this method does not take into account the fact that, as Fig. 2 shows, the responses from a particular reflectance are not uniformly distributed around the average position of the relevant responses. Furthermore, in each part of the feature space the responses from each reflectance tend to have similar orientations.
To account for the observed distribution of responses it is more appropriate to use the Mahalanobis distance, rather than the Euclidean distance implied when using a circle, to determine a boundary that ideally encloses all points in a cluster. For a multivariate normal distribution, the Mahalanobis distance between the center of the distribution C and a point P is defined as
where Σ is the covariance matrix of the distribution. The first step in determining a boundary for a particular reflectance was to find the center of each cluster of responses using the average position of all the responses in the cluster. The points on a boundary at a small Mahalanobis distance from the center were then calculated for both reflectances in each pair with very similar CIELab values. The Mahalanobis distance from the cluster center to these boundaries was then gradually increased until the boundaries touched. The typical result in Fig. 3 shows that these boundaries enclosed most of the relevant responses. To assess the dependency of the feature space on the illuminator spectrum, the number of responses that fall inside the correct boundary of the pair was then counted. This test was performed on all 100 pairs of reflectances in the test data set and the percentage of points falling within the boundary was recorded.

The results in Fig. 4 show the effect of varying the FWHM of the photodetectors' spectral response from . The first set of results suggested that if the photodetector responses are represented to infinite precision, the width of the spectral response has very little effect on the usefulness of the extracted features. As a result the performance of the algorithm did not vary much when increasing the FWHM of the photodetectors used to capture the image. The large overlap between wide photodetector spectral responses means that for these photodetectors the extraction of the features relies on small differences between very similar responses. In a real system these small differences could be lost in the system noise. Thus, although the focus of this study is on the effects of changing the photodetectors' spectral response, it is important to model the impact of noise on the effective precision of the available data.
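The Mahalanobis distance used to bound each cluster can be sketched as follows, here for the two-dimensional feature space with an explicit 2x2 matrix inverse. The cluster of feature-space points is made up for illustration.

```python
import math

def mean(points):
    """Average position of a set of 2D points (the cluster center)."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(2)]

def covariance(points):
    """2x2 covariance matrix of a set of 2D points."""
    cx, cy = mean(points)
    n = len(points)
    sxx = sum((p[0] - cx) ** 2 for p in points) / n
    syy = sum((p[1] - cy) ** 2 for p in points) / n
    sxy = sum((p[0] - cx) * (p[1] - cy) for p in points) / n
    return [[sxx, sxy], [sxy, syy]]

def mahalanobis(p, center, cov):
    """sqrt((P - C)^T Sigma^-1 (P - C)) for a 2x2 covariance Sigma."""
    dx, dy = p[0] - center[0], p[1] - center[1]
    det = cov[0][0] * cov[1][1] - cov[0][1] * cov[1][0]
    # Apply the inverse of the 2x2 matrix to the offset vector
    ix = (cov[1][1] * dx - cov[0][1] * dy) / det
    iy = (-cov[1][0] * dx + cov[0][0] * dy) / det
    return math.sqrt(dx * ix + dy * iy)

# Hypothetical cluster of responses for one reflectance
cluster = [(0.0, 0.0), (0.2, 0.1), (-0.2, -0.1), (0.1, -0.05), (-0.1, 0.05)]
c = mean(cluster)
S = covariance(cluster)
print(f"distance of (0.2, 0.1) from the center: {mahalanobis((0.2, 0.1), c, S):.3f}")
```

A boundary at a fixed Mahalanobis distance is an ellipse aligned with the cluster's covariance, which is why it follows the elongated, oriented clusters better than a circle.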
The signal-to-noise ratio (SNR) of the data available from any camera depends on multiple factors, including the charge storage capacity of each pixel, the noise introduced by the readout electronics, and the photon shot noise [21]. The SNRs expected from typical digital cameras have been estimated using the method and parameters described by Fowler [21]. The results in Fig. 5 show the expected SNR of a cell phone camera with a charge storage capacity of 5,000 electrons and a CMOS camera with a charge storage capacity of electrons. These results show that the reduction in pixel size, and hence charge storage capacity, needed to match the price targets in the cost-sensitive cell phone market degrades the available SNR. However, the better-quality CMOS imagers used in cameras give an SNR of more than for all the photocurrents that can be detected when a analog/digital converter is used to represent the response from each pixel.
To obtain a more realistic indication of the impact of varying the photodetector FWHM, Fig. 4 shows the results obtained when the SNR of the data from the photodetectors was increased from . To assess the impact of noise, different levels of Gaussian noise were added to the linear photodetector responses, and at each noise level 100 examples of the nominally identical combination of responses were generated. That is, each Munsell reflectance illuminated by a single daylight spectrum forms 100 points in the chromaticity space, obtained by adding 100 independent samples of Gaussian noise to the original photodetector responses. The results in Fig. 4 show that as the SNR is increased the number of points that fall within the correct boundary increases. This trend arises because the noise reduces the effectiveness of the cancellation of the illuminator-dependent components of the detector responses. Since the feature extraction procedure was based on the assumption of narrow spectral responses, the surprising aspect of the results in Fig. 4 is that with an SNR of good results are obtained for photodetectors with a FWHM of or less. A FWHM of is comparable to the FWHM of the photodetector responses in conventional color cameras such as the DXC930 [7]. The sensitivities of the photodetectors needed to generate the two features are therefore expected to be comparable to the sensitivity of photodetectors in existing cameras.
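The noise model described above can be sketched as follows, assuming the SNR is specified in dB on the linear (amplitude) responses; the signal level and SNR value are illustrative.

```python
import math
import random

def noisy_copies(response, snr_db, n, rng):
    """Generate n noisy versions of a linear response at the given SNR (dB).
    The noise standard deviation is response / 10**(snr_db / 20)."""
    sigma = response / (10.0 ** (snr_db / 20.0))
    return [response + rng.gauss(0.0, sigma) for _ in range(n)]

rng = random.Random(0)  # seeded for reproducibility
copies = noisy_copies(100.0, 40.0, 100, rng)  # 100 copies at 40 dB SNR
spread = max(copies) - min(copies)
print(f"spread of 100 noisy copies at 40 dB: {spread:.2f}")
```

Each set of such copies, pushed through the feature extraction, produces one cluster of 100 points in the chromaticity space.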
6. ROBUSTNESS OF THE FEATURE SPACE
The results that have been obtained are dependent on the assumptions and the data that have been used. To ensure that any conclusions are independent of the assumptions about the details of the responses of the photodetectors and the reflectance data, other results have been obtained.
As pointed out in Section 2, the wavelengths of the peak spectral responses of the photodetectors in Table 1 are only one of a number of possible combinations. Another combination that has been investigated extensively is given in Table 2. This combination of parameters was obtained by spacing the photodetectors uniformly across the wavelength range of interest and then calculating the corresponding values of α and γ. Since these two values are close to 0.5, it appears that this choice of detectors is as good as the choice in Table 1.
Figure 6 shows some of the results obtained with the features calculated using this second choice of detectors, compared with the equivalent results obtained with the parameters in Table 1. This comparison shows that for these two sensible choices of photodetectors the results obtained are very similar. Most importantly, the effects of changing the SNR and varying the FWHM of the detectors are comparable.
The explanation for the impact of varying the SNR on the photodetectors with the wider spectral responses shown in Fig. 4 was based on different amounts of correlation between the different photodetector responses. The degree of correlation between the responses of different photodetectors will be affected by aspects of the shape of the spectral response that are not captured by the FWHM of the photodetector response. To study the possible effects of other aspects of the spectral response of the photodetectors the model of the detector response has been varied. In particular, in addition to the Gaussian model, the photodetector spectral response has also been modeled using a parabola and a Lorentzian function. As shown in Fig. 7 these three sensitivity models have been chosen because for the same FWHM the parabola is sensitive to a narrower range of wavelengths than the Gaussian, while the Lorentzian has a broader response than the Gaussian.
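The three sensitivity models of Fig. 7 can be sketched as below, each parameterized by the same FWHM. The exact parabola used in the paper is not specified here; this version (peak 1, half height at ±FWHM/2, clipped at zero) is an assumed form, and the center wavelength and FWHM are illustrative.

```python
import math

def gaussian(lam, center, fwhm):
    """Gaussian response; FWHM = 2*sqrt(2*ln 2)*sigma."""
    sigma = fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    return math.exp(-0.5 * ((lam - center) / sigma) ** 2)

def parabola(lam, center, fwhm):
    """Inverted parabola, peak 1, value 0.5 at +/- FWHM/2, clipped at 0."""
    half = fwhm / math.sqrt(2.0)
    return max(0.0, 1.0 - ((lam - center) / half) ** 2)

def lorentzian(lam, center, fwhm):
    """Lorentzian response; value 0.5 at +/- FWHM/2, with heavy tails."""
    half = fwhm / 2.0
    return half**2 / ((lam - center) ** 2 + half**2)

center, fwhm = 550.0, 100.0  # illustrative values, in nm
for model in (gaussian, parabola, lorentzian):
    at_half = model(center + fwhm / 2.0, center, fwhm)
    at_tail = model(center + fwhm, center, fwhm)
    print(f"{model.__name__}: value at FWHM/2 = {at_half:.3f}, at FWHM = {at_tail:.3f}")
```

All three agree at the half-height points, but one FWHM away from the center the parabola has fallen to zero while the Lorentzian still passes a fifth of the peak, which is the "narrower than / broader than the Gaussian" ordering described above.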
The results obtained as the FWHM of the three different detector models was varied are shown in Fig. 8. Although the details of the results for the different models differ, the general trend is still that increasing the FWHM of the detectors degrades the illumination independence of the two features. A comparison of the results from the three models at the same FWHM values confirms that increasing the amount of overlap between detector responses degrades the quality of the features. Despite these differences the results are consistent with the conclusion that useful illuminator-independent features can be obtained when the SNR is or higher from photodetectors that have a spectral response with a FWHM of or less.
To check the robustness of the conclusions to changes in the reflectance data, results have also been obtained using the reflectance spectra of different types of flowers from around the world [22]. Again these data were searched to find pairs of measured reflectance spectra that correspond to perceptually similar colors with a CIELab L value of approximately 50. In this case 100 pairs could be obtained using CIELab differences between 4.35 and 6.0 units. Figure 9 shows the results obtained with both sets of reflectance data. The performance of the algorithm with both sets of reflectance data is comparable. This suggests that the detail of the results obtained is independent of the data used. However the most important feature of the results in Fig. 9, and similar results, is that they are consistent with a conclusion that useful illuminator-independent data can be obtained when the SNR is at least and the photodetectors have spectral responses with FWHMs of or less.
7. DISCUSSION AND CONCLUSION
There are two challenges when imaging scenes illuminated by daylight. The first is that shadows and other effects can create a scene with a wide dynamic range. The second, more subtle, problem is that diurnal changes in the spectrum of daylight can cause variations in the apparent color of surfaces within the scene. These diurnal variations make it very difficult, if not impossible, to use otherwise valid color or chromaticity information to find and/or recognize potentially interesting objects within a scene.
An approach for solving both of these problems based on photodetectors with a response to light at a single wavelength has been proposed by Finlayson and co-workers [6, 7]. The assumption that the photodetector responds to a single wavelength was introduced into the algorithm for mathematical convenience. With the resulting simplified mathematical model it is possible to understand how the algorithm creates illumination-independent features for any illuminator whose spectrum can be approximated by that of a blackbody. It is also possible to understand why different colors appear in different positions in the two-dimensional feature space.
The simple algorithm obtained assuming a photodetector that responds to a single wavelength could be very useful. However, there will be very few photons at a particular wavelength, and so a photodetector with such a narrow spectral response will have a very low input signal. It is therefore important to determine how the results from the algorithm are affected when the width of the spectral response of the photodetectors is increased. In addition the algorithm is based on the assumption that the spectrum of the illumination source is a blackbody spectrum. The effects of using CIE standard daylights as the illuminator and varying the width of the spectral response of the photodetectors have been studied. This study showed that these effects mean that the two features that are ideally independent of the illuminator have a residual illuminator dependence. A method of assessing this residual dependence by comparing the illuminator-induced variation in the extracted features to the difference between the features extracted from very similar colors was proposed. Initial results obtained with photodetector responses represented to a very high precision suggested that the algorithm was effective when used on the responses of photodetectors with very wide spectral responses. However, this is only possible if the algorithm is exploiting very small differences between the responses of the different photodetectors. To obtain a more realistic estimate of the effect of changing the width of the spectral responses of the photodetectors, noise was added to the data used in the algorithm. The results that have been presented suggest that with an SNR of better than the two proposed features can be obtained from the responses of photodetectors whose spectral responses have a FWHM of less than . 
In many situations when the SNR from a single pixel is worse than it may be possible to average the responses of neighboring pixels with almost identical extracted features to improve the overall effective SNR. Using this method on as few as nine pixels should increase the SNR by . The results in Fig. 4 suggest that such an increase in SNR can lead to a significant improvement in the usefulness of the extracted features.
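The gain from averaging can be sketched numerically: for independent Gaussian noise, averaging n pixels reduces the noise standard deviation by sqrt(n), an SNR improvement of 10*log10(n) dB (about 9.5 dB for a 3x3 neighborhood). The signal level, noise level, and trial count below are illustrative.

```python
import math
import random

def empirical_std(samples):
    """Sample standard deviation (population form) of a list of values."""
    m = sum(samples) / len(samples)
    return math.sqrt(sum((s - m) ** 2 for s in samples) / len(samples))

rng = random.Random(1)  # seeded for reproducibility
signal, sigma, n_pixels, trials = 100.0, 1.0, 9, 20000

# Noise spread of single pixels vs. 3x3 averages of independent pixels
single = [signal + rng.gauss(0.0, sigma) for _ in range(trials)]
averaged = [
    sum(signal + rng.gauss(0.0, sigma) for _ in range(n_pixels)) / n_pixels
    for _ in range(trials)
]
ratio = empirical_std(single) / empirical_std(averaged)
print(f"noise reduction factor: {ratio:.2f} (sqrt(9) = 3)")
```

This sqrt(n) behavior holds only while the noise in neighboring pixels is uncorrelated and the pixels genuinely view the same reflectance.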
In conclusion, the residual illuminant dependency of the feature space formed by the proposed algorithm for achieving color constancy in daylight-illuminated scenes has been investigated. A mathematical analysis of the feature space has been presented. A method was then proposed to assess the impact of different photodetector spectral responses on the illumination independence of the feature space. The significance of any residual illuminator dependence was tested with perceptually similar colors while varying the illuminant spectra. The results suggest that when the SNR is better than for photodetectors with a FWHM of or less, the illumination dependency of the feature space is small enough to identify colors that the human visual system would describe as good matches. These initial results are promising enough to justify further work on a range of issues, including the impact of the response characteristics of the different types of pixels that could be used to obtain the data required by the algorithm. Our future work will focus on extracting illuminant-independent reflectance images in a higher-dimensional space.
ACKNOWLEDGMENT
This work was supported by the UK Engineering and Physical Sciences Research Council (EPSRC).
REFERENCES
1. H. C. Lee, Introduction to Color Imaging Science (Cambridge Univ. Press, 2005), pp. 46–47, 450–459.
2. S. D. Buluswar and B. A. Draper, “Color machine vision for autonomous vehicles,” Eng. Applic. Artif. Intell. 11, 245–256 (1998). [CrossRef]
3. E. H. Land and J. J. McCann, “Lightness and retinex theory,” J. Opt. Soc. Am. 61, 1–11 (1971). [CrossRef] [PubMed]
4. B. K. P. Horn, “Determining lightness from an image,” Comput. Graph. Image Process. 3, 277–299 (1974). [CrossRef]
5. M. Ebner, Color Constancy, Wiley Series in Imaging Science and Technology (Wiley, 2007).
6. G. D. Finlayson and S. D. Hordley, “Color constancy at a pixel,” J. Opt. Soc. Am. A 18, 253–264 (2001). [CrossRef]
7. G. D. Finlayson and M. S. Drew, “4-sensor camera calibration for image representation invariant to shading, shadows, lighting, and specularities,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2001), pp. 473–480.
8. G. D. Finlayson, B. Schiele, and J. L. Crowley, “Comprehensive colour image normalization,” in H. Burkhard and B. Neumann, eds., Computer Vision—ECCV’98 (Springer, 1998), pp. 475–490.
9. J. A. Marchant and C. M. Onyango, “Shadow-invariant classification for scenes illuminated by daylight,” J. Opt. Soc. Am. A 17, 1952–1961 (2000). [CrossRef]
10. Munsell Color Science Laboratory, “Daylight spectra,” http://mcsl.rit.edu/.
11. V. D. P. Sastri and S. R. Das, “Spectral distribution and color of north sky at Delhi,” J. Opt. Soc. Am. 56, 829–830 (1966). [CrossRef]
12. Y. Nayatani and G. Wyszecki, “Color of daylight from north sky,” J. Opt. Soc. Am. 53, 626–629 (1963). [CrossRef]
13. S. T. Henderson and D. Hodgkiss, “The spectral energy distribution of daylight,” Br. J. Appl. Phys. 14, 125–133 (1963). [CrossRef]
14. J. Hernández-Andrés, J. Romero, J. L. Nieves, and R. L. Lee Jr., “Color and spectral analysis of daylight in southern Europe,” J. Opt. Soc. Am. A 18, 1325–1335 (2001). [CrossRef]
15. Database—“Munsell Colours Matt,” ftp://ftp.cs.joensuu.fi/pub/color/spectra/mspec/.
16. L. T. Maloney, “Illuminant estimation as cue combination,” J. Vision 2, 493–504 (2002). [CrossRef]
17. G. D. Finlayson and M. S. Drew, “White-point preserving color correction,” in Proceedings of IS&T/SID 5th Color Imaging Conference (Society for Imaging Science and Technology, 1997), pp. 258–261.
18. J. Y. Hardeberg, “Acquisition and reproduction of color images: colorimetric and multispectral approaches,” Ph.D. dissertation (Ecole Nationale Supérieure des Télécommunications, 1999).
19. A. Abrardo, V. Cappellini, M. Cappellini, and A. Mecocci, “Art-works colour calibration using the VASARI scanner,” in Proceedings of IS&T and SID’s 4th Color Imaging Conference: Color Science, Systems and Applications (Society for Imaging Science and Technology, 1996), pp. 94–97.
20. S. Ratnasingam, S. Collins, and J. Hernández-Andrés, “A method for designing and assessing sensors for chromaticity constancy in high dynamic range scenes,” in Proceedings of the 17th Color Imaging Conference (Society for Imaging Science and Technology, 2009), pp. 15–20.
21. B. Fowler, “High dynamic range image sensor architectures,” High Dynamic Range Imaging Symposium and Workshop, Stanford University, California (2009), http://scien.stanford.edu/HDR/HDR_files/Conference%20Materials/Presentation%20Slides/Fowler_WDR_sensor_architectures_9_8_2009.pdf.
22. S. E. J. Arnold, V. Savolainen, and L. Chittka, “The floral reflectance spectra database,” Nature Proceedings http://dx.doi.org/10.1038/npre.2008.1846.1.