
Sensory representation of surface reflectances: assessments with hyperspectral images

Open Access

Abstract

Specifying surface reflectances in a simple and perceptually informative way would be beneficial for many areas of research and application. We assessed whether a ${3} \times {3}$ matrix may be used to approximate how a surface reflectance modulates the sensory color signal across illuminants. We tested whether observers could discriminate between the model’s approximate and accurate spectral renderings of hyperspectral images under narrowband and naturalistic, broadband illuminants for eight hue directions. Discriminating the approximate from the spectral rendering was possible with narrowband, but almost never with broadband illuminants. These results suggest that our model specifies the sensory information of reflectances across naturalistic illuminants with high fidelity, and with lower computational cost than spectral rendering.

Published by Optica Publishing Group under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.

1. INTRODUCTION

Specifying the colors of objects and materials (surface colors) is not only important to perceptual research [1–3], but also to applications in art, architecture, and industry where pigments and/or dyes are used to color artworks (e.g., paintings), buildings, and products, such as clothes, cars, or food [4,5]. The appearance of such colors is determined by the interaction between surfaces and the lights illuminating them. The visual system samples light in three broad spectral bands, the L-, M-, and S-cone sensitivities. It is therefore possible to produce paints and pigments from diverse materials (and thus with different reflectance spectra) that yield the same cone excitations under a given illumination, i.e., they are metameric. This metamerism breaks down when the illumination changes and may lead to clearly visible color differences between surfaces that had looked the same under the original illumination (metamer mismatching) [6,7]. For example, what seems like a perfect retouching of a painting under the illumination of the atelier may turn out to show striking mismatches in the light of the museum (for example, see [8], p. 105). Without knowing reflectance properties, we cannot predict how objects and materials look under different illuminations [7,9].

A reflectance spectrum allows for calculating the sensory signal (i.e., the color information at the level of the cone photoreceptors, cf. [10]) under any illumination, following Formula 1 (left column) in Table 1. However, reflectance spectra might be of limited use for color specifications in practice because (1) reflectance spectra involve a lot of information that may be cumbersome to communicate, for example, on an information sheet about a product’s color palette; (2) reflectance spectra are not self-explanatory and require spectral computations following the above formula; and (3) the computations also require knowledge or measurements of illuminant spectra that are not readily available without specialist equipment. Professionals at different levels of a supply chain might have neither the expertise nor the instruments to measure, compute, and communicate reflectance and illuminant spectra. Indeed, most producers of pigments, dyes, colored objects, and materials rely on coordinates in color appearance spaces (e.g., CIELAB, CIELUV, CIECAM02; for review, see [1,2]) or comparisons with reference surfaces under reference lighting (e.g., Munsell, Pantone, [11–13]). These widely used practical approaches are not only subjective, ad hoc, and approximate; they also fail to address the problem of metamer mismatching, which limits their accuracy [9].


Table 1. Spectral and Approximate Rendering

Linear models of surface reflectance [14,15] have shown that natural and Munsell reflectance spectra can be represented by five to seven linear basis functions, and as few as three to four when focusing on the information needed for the broadband cone sensitivities [14,15]. This implies that the sensorily relevant information of those reflectance spectra can be reconstructed by specifying only the three to four surface-specific weights of the basis functions. For practical purposes, the user needs the basis functions to reconstruct the reflectance spectra through those parameters. While this approach works for broadband reflectance and illuminant spectra, it cannot work well for narrowband illuminants and reflectances, for which a larger number of basis functions is necessary.
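To illustrate the linear-model idea, the following minimal sketch (not code from the cited studies; the basis set and weights are placeholders) reconstructs a reflectance spectrum as a weighted sum of a few basis functions. In practice, the basis functions would be derived from measured reflectances (e.g., natural or Munsell sets), and communicating a surface would then only require its three to four weights, provided sender and receiver share the same basis set.

import numpy as np

# Minimal sketch of a linear reflectance model: a spectrum is approximated as
# a weighted sum of a small number of basis functions. B and w are placeholders.
wavelengths = np.arange(400, 701, 10)       # 10-nm sampling, 400-700 nm (31 samples)
B = np.random.rand(len(wavelengths), 3)     # placeholder basis functions (31 x 3)
w = np.array([0.6, -0.2, 0.1])              # surface-specific weights (example values)

reflectance = B @ w                         # reconstructed reflectance spectrum (31 values)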

Instead, we propose an approach introduced in a different context [10,16–18] that allows simpler and easier communication of reflectances. A given surface reflectance produces a unique pattern of sensory signals across all possible illuminations. We have shown that this pattern can be approximated with high precision through a ${3} \times {3}$ matrix that we call a sensory reflectance matrix (matrix A in Formula 2 in the right column of Table 1). Sensory reflectance matrices approximate not only natural but also different types of artificial surfaces and illuminants [10]. While the reflectance spectrum is a physicist’s characterization of a surface, telling us how the whole spectrum of incoming light will be affected by the surface, the sensory reflectance matrix is a sensory characterization of the surface that tells us, for any incoming light, how the measures taken by the three cone photoreceptors will be affected by the surface (cf. [16]).

For practical applications, the sensory reflectance matrix of a given surface has the advantages that (1) it only requires the communication of nine numbers instead of the whole, infinite-dimensional reflectance spectrum; (2) it only requires sensory illuminant signals (LMS or $XYZ$) rather than the spectra of illuminants; and (3) with some basic colorimetric knowledge, a user may understand the weights in the sensory reflectance matrix of the surface. Matrix elements (we consider them as weights) in each row indicate how much of the illuminant signal contributes to the excitation of each kind of cone (LMS) by the light reflected off the surface. Metamerism may be represented by sensory reflectance matrices because two matrices with different weights may produce the same weighted sum in each row under one illuminant. Sensory reflectance matrices can also be produced with tristimulus values (${XYZ}$) instead of cone excitations; these are mathematically equivalent to LMS, more widely used in practical applications, and easier to interpret in terms of luminance and chromaticity [2,10]. Using a simple matrix transformation instead of spectral computations may also have computational advantages when used for real-time rendering of the surfaces of 3D objects in virtual reality.
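As a minimal sketch of how such a matrix is used (the numerical values below are hypothetical, not taken from the study), the approximate rendering of Formula 2 amounts to a single matrix product between the sensory signal of the illuminant and the sensory reflectance matrix:

import numpy as np

# Hypothetical 3x3 sensory reflectance matrix of one surface (illustrative values only)
A = np.array([[0.70, 0.05, 0.01],
              [0.10, 0.55, 0.02],
              [0.01, 0.03, 0.40]])

# Sensory signal of the illuminant (LMS or XYZ)
illuminant_lms = np.array([90.0, 85.0, 70.0])

# Approximate sensory signal of the light reflected off the surface (Formula 2):
# a single matrix product replaces the full spectral computation of Formula 1.
reflected_lms = illuminant_lms @ A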

The sensory reflectance matrices are a perfect model of the sensory signals under daylight illuminants that can be decomposed into three linear basis functions [10,18]. For other illuminants, the sensory reflectance matrices are only an approximation that differs numerically from the ground truth provided by the spectral computations. The question arises whether the small numerical differences between full spectral renderings and approximate renderings based on sensory reflectance matrices are visible to human observers, given that human sensitivity to color differences is limited (e.g., [19]). If these differences are negligible, sensory reflectance matrices can be used for specifying reflectances in an applied context without losing any perceptually relevant information.

Here, we assessed whether those numerical differences between spectral renderings and approximate renderings based on sensory reflectance matrices are perceivable. We consider the colors in digital photographs of real scenes as “virtual surface colors” because observers perceive and interpret them as depicted surfaces, even though their physical source is the monitor primaries (which are lights, not surfaces). So, we used hyperspectral images of different scenes containing natural and man-made surfaces of various colors (cf. [20–25]) and rendered them on a computer monitor as they would appear under different illuminants.

We combined online and lab experimentation, the former to boost statistical power through a high number of participants, and the latter to fully control viewing conditions. Since the results did not differ in important ways, we focus on the online experiment here and provide the results of the lab experiment in Supplement 1. Code and data are provided in a data repository [26].

2. METHOD

The online experiment was conducted using three separate measurements to avoid participants getting demotivated or tired. These measurements only differed in the sets of scenes and illumination colors used as stimuli. See also Table S1 and digital printouts in Supplement 1.

A. Participants

460, 488, and 435 participants provided data for the three online measurements after excluding those with self-reported color vision deficiencies (22, 26, and 21), mistakes in catch trials (6, 4, and 2), and completion times above 1.5 interquartile ranges (38, 37, and 42). For details, see Table S1 in Supplement 1. Participants were recruited through an online recruitment platform (Prolific) and the undergraduate student participation pool (UG-Pool) of the School of Psychology at the University of Southampton. Participants received £2 (Prolific) or 3 credits (UG-Pool). Online and lab experiments were approved by the Ethics Committee at the University of Southampton (ERGO 64353, 65240, 66013, 67105). All participants gave informed consent.

B. Apparatus

The online measurements were implemented through Qualtrics. Chromaticities, luminance, and gamma functions of standard RGB (sRGB) monitors were used for RGB rendering, assuming that sRGB is representative of the varying displays used by participants (see “Lab Experiment” in Supplement 1 for fully controlled conditions).


Fig. 1. Illuminations. (a) Circles show the CIE1931 chromaticity coordinates for illuminations. They are the same for broad- and narrowband illuminants. The curve indicates the daylight locus for reference. (b) Narrowband (dashed line) and broadband (solid) illuminants.


C. Stimuli

We aimed to test the general usefulness of sensory reflectance matrices, rather than the effects of specific hyperspectral images or illumination colors. For this reason, we measured a large variety of combinations instead of systematically combining a small number of images with a small number of illumination colors (see column 1 of Table S1 for an overview). There were seven different images from four databases [20–24] in the first and second measurements (both measurements had one image in common). The four images in the first measurement were rendered with four main illumination colors, and the four images in the second measurement were rendered with four intermediate illumination colors [cf. Fig. 1(a); see Table S2 for numerical details]. The third measurement featured four images of fruits (apple, banana, dragon fruit, orange) rendered with the four main illumination colors [25]. The fruit objects were rendered on a black background.

For each illumination color, the renderings were conducted with two types of illuminant spectra [Fig. 1(b)]. First, smooth broadband illuminants from a previous study [27] were considered naturalistic, i.e., similar to illuminant spectra in the natural environment [15,28,29]. Second, we generated artificial metamers of the broadband spectra that have two narrow bands within the visible spectrum. We considered these narrowband illuminants as artificial, in contrast to the naturalistic broadband illuminants [3]. We set the intensity of the spectra so that scenes were as visible as possible while minimizing image RGB values out of gamut (see Table S3 for details). With four exceptions, all 88 images involved less than 1% of RGB values out of gamut. For the four exceptions, 8%–11% of the RGB values fell out of gamut (two scenes, with red broadband and narrowband illuminants). We did not remove these four images because clipping the out-of-gamut values did not seem to affect the comparison between spectral renderings and those based on sensory reflectance matrices (see Fig. S1 in Supplement 1).

We adapted an existing toolbox for hyperspectral images [9]. The spectrum of each pixel of the hyperspectral images was multiplied with the illuminant spectrum and the CIE1931 color-matching functions (which are linearly related to the cone sensitivities), as indicated in Eq. (1). All spectra were sampled at 10-nm intervals. The resulting $XYZ$ values were transformed into gamma-corrected RGBs.
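The following sketch outlines this spectral rendering pipeline (function and variable names are ours, not the toolbox’s; intensity normalization and the exact piecewise sRGB transfer function are simplified for illustration):

import numpy as np

def spectral_render(reflectances, illuminant, cmf, xyz_to_rgb):
    """Sketch of the spectral rendering of Eq. (1).
    reflectances: (n_pixels, n_wavelengths) reflectance spectra at 10-nm steps
    illuminant:   (n_wavelengths,) illuminant spectral power distribution
    cmf:          (n_wavelengths, 3) CIE1931 color-matching functions
    xyz_to_rgb:   (3, 3) XYZ-to-linear-sRGB matrix"""
    radiance = reflectances * illuminant            # light reflected by each pixel
    xyz = radiance @ cmf                            # tristimulus values per pixel
    rgb_linear = np.clip(xyz @ xyz_to_rgb.T, 0, 1)  # clip out-of-gamut values
    return rgb_linear ** (1 / 2.2)                  # simplified gamma correction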

Our approach based on sensory reflectance matrices [see Eq. (2)] provides the alternative, approximate rendering. The sensory reflectance matrix for each pixel was obtained through regression, using the function “A_maker” provided by a previous study [10] (see “Computation of Reflectance Matrix” in Supplement 1). In preparation, we had compared different ways to calculate sensory reflectance matrices, including the original approach using daylight illuminants [16,17], the same approach but using narrowband, random-spline illuminants (cf. [10]), and Flachot et al.’s [18] illuminant-independent approach (Fig. S2 and rows 1–3 in Fig. S3). The second approach based on narrowband illuminants better approximates reflected sensory signals across a larger range of illuminants, probably because those narrowband illuminants cover a much larger range of illumination colors than daylight illuminants (cf. Fig. S2, panel c). So, we calculated the sensory reflectance matrix (for each pixel) with a set of 367 narrowband, random-spline illuminants (the same number as the original daylight set; increasing the number of illuminants did not change the resulting performance of sensory reflectance matrices).
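A least-squares regression of this kind could look as follows (a sketch under our own naming, not the published “A_maker” function): for each pixel, the matrix is fitted so that it maps the sensory signals of the training illuminants (here, the 367 narrowband, random-spline illuminants) onto the spectrally computed signals of the light reflected by that pixel.

import numpy as np

def estimate_sensory_reflectance_matrix(illum_lms, reflected_lms):
    """illum_lms:     (n_illuminants, 3) sensory signals of the training illuminants
    reflected_lms: (n_illuminants, 3) spectrally computed reflected sensory signals
    Returns the 3x3 matrix A minimizing ||illum_lms @ A - reflected_lms||, cf. Eq. (2)."""
    A, *_ = np.linalg.lstsq(illum_lms, reflected_lms, rcond=None)
    return A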

Figure 2 illustrates the stimulus display. Images were shown four at a time in a ${2} \times {2}$ arrangement (cf. digital printouts). Three of them (comparisons) were spectrally rendered with the illuminant, whereas the rendering of the fourth image (target) was approximated based on sensory reflectance matrices. The width of each image was fixed at 300 pixels (the height changed accordingly, so that the image was resized without distortion) to make sure that all four images in a stimulus display were simultaneously visible.


Fig. 2. Illustration of stimulus display and task. Observers had to “spot the different image.” This example trial is one of the easiest. We added a red arrow to this illustration to help the reader identify the odd one (bottom left image). Other trials were more difficult. Participants indicated their confidence with the slider at the bottom of the display.


Each stimulus display was presented only once. Overall, each of the three online measurements consisted of 32 experimental trials, resulting from the combination of four hyperspectral images, four illumination colors, and two types of illuminant spectra (broadband versus narrowband).

D. Procedure

Participants were asked to identify the image that looked different from the other three in the ${2} \times {2}$ stimulus display (four-alternative forced choice, Fig. 2). When the display was presented in a trial, participants identified the different image by clicking on it. Participants’ choices and response times were recorded. Then, participants indicated their confidence in their choice using a slider that ranged from 0 (not confident at all) to 100 (very confident). Four catch trials were included to identify participants who did not properly engage with the task and responded randomly without paying attention. In catch trials, an obvious stimulus display was used, in which the odd image either had obviously different colors or showed a different scene. The position of the target image within a trial and the sequence of trials were randomized.

At the beginning of an online session, the participant was asked to indicate gender, age, and whether they had color vision deficiencies. Then followed instructions and two practice trials with images other than the main stimuli. We provided feedback on both practice trials by showing a green frame around the target image to make sure that participants fully understood what they were asked to do. Trials with narrowband illuminants were completed in a first main part to avoid initial frustration with the more difficult broadband displays, which followed in the second main part (cf. digital printout). Within each part, trials were presented in random order. Completion took between 6 and 17 min (lower and upper quartiles).

3. RESULTS

For both narrowband and broadband illuminants, we had to discard the data for two stimuli in the third measurement of the online experiment due to an artifact that could have affected participants’ choices. As a result, the third measurement of the online experiment had 30 instead of 32 stimuli. We assumed that more than 50 ms are required for perceptual decisions and motor responses (e.g., [30]) and that 2 min was enough time to thoroughly inspect the images. Hence, to avoid spurious responses, we discarded trials where participants responded below 50 ms (0.11%, 0.11%, and 0.06% of responses in the three measurements of the online experiment) or above 2 min (0.34%, 0.25%, and 0.17%). For the resulting data, we calculated the proportion correct (accuracies) for identifying the target rendered based on sensory reflectance matrices among the three spectrally rendered comparisons.

A. Main Results: Accuracies

Figure 3 shows the accuracy for five example scenes across illumination colors and bandwidths. Detailed results for the other scenes are provided in Supplement 1 (Fig. S4). If the approximation based on our approach is sufficient to perceptually model surface colors, the renderings based on sensory reflectance matrices should be indistinguishable from spectral renderings, and accuracies should be at chance level ($p = {0.25}$). We calculated $z$-tests to compare accuracies with chance (see Table S4 for numerical details).
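For reference, a one-proportion z-test of this kind can be sketched as follows (whether a one- or two-sided test was computed is an assumption here, as is the naming):

import numpy as np
from scipy.stats import norm

def z_test_vs_chance(n_correct, n_trials, p0=0.25):
    """z-test of an observed accuracy against chance level p0 = 0.25 (4AFC)."""
    p_hat = n_correct / n_trials
    z = (p_hat - p0) / np.sqrt(p0 * (1 - p0) / n_trials)
    p_value = 1 - norm.cdf(z)   # one-sided p-value (accuracy above chance)
    return z, p_value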


Fig. 3. Main results for five example scenes. Each column corresponds to the scene shown in the first row. The vertical axis shows the proportion of correct responses (accuracy), and the color of each bar represents the color of the illumination under which the corresponding image was viewed. Error bars show standard errors, and the chance level is shown by the dashed line at 0.25 in each plot. Significant differences from chance are indicated as $^*\!p \lt {0.05}$, $^{**}\!p \lt {0.01}$, and $^{***}\!p \lt {0.001}$. The number of participants in the respective online measurement is given in the third row.


For renderings of narrowband illuminants [Figs. 3(g)–3(k)], accuracies were significantly above chance level (dashed line) for 41 out of the 47 (87%) stimulus displays (all $z \gt {1.96}$, all $p \lt {0.05}$). In contrast, accuracies for 38 of the 47 (80%) stimulus displays with broadband illuminants did not differ from chance [Figs. 3(l)–3(p); $z \lt {1.96}$, $p \gt {0.05}$]. The other nine stimulus displays (20%) yielded significant differences from chance, but those differences were much smaller than those observed with narrowband renderings [e.g., compare Figs. 3(k) and 3(p)]. Discounting for chance performance indicates that only 7% [Fig. 3(l), color B] to 19% [Fig. 3(p), color R-Y] of participants perceived the difference between target and comparisons across the nine stimulus displays with performance above chance level (see Fig. S5 for corresponding results in the lab).

B. Confidence and Response Times

Confidence ratings and response times largely confirmed the results observed with accuracies (Figs. S6 and S7; see also Figs. S8 and S9 for the lab results). Response times were logarithmically transformed to impose an approximately normal distribution. In all three online measurements, confidence ratings were positively correlated with accuracy across stimulus displays [$r({30}) = {0.98}$ and $r({29}) = {0.98}$, all $p \lt {0.001}$, Figs. S10(a), S10(d), and S10(g)]. Logarithmic response times were negatively correlated with both accuracies [all $r({30})$ and $r({29}) \lt - {0.58}$, all $p \lt {0.05}$, Figs. S10(b), S10(e), and S10(h)] and confidence ratings [$r({30})$ and $r({29}) \lt - {0.63}$, all $p \lt {0.05}$, Figs. S10(c), S10(f), and S10(i)]. These correlations indicate that accuracies, response times, and confidence ratings consistently represented task difficulty.
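A minimal sketch of this analysis (variable names are assumptions): response times are log-transformed before Pearson correlations are computed across stimulus displays.

import numpy as np
from scipy.stats import pearsonr

def rt_correlations(mean_rts, accuracies, confidences):
    """mean_rts, accuracies, confidences: one value per stimulus display."""
    log_rts = np.log(mean_rts)                    # impose an approximately normal distribution
    r_acc, p_acc = pearsonr(log_rts, accuracies)  # expected to be negative here
    r_conf, p_conf = pearsonr(log_rts, confidences)
    return (r_acc, p_acc), (r_conf, p_conf)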

Confidence ratings add an interesting observation to the accuracies in that they reflect the observers’ awareness of image differences. Confidence ratings for the nine broadband stimulus displays with accuracies above chance level were similar to those of the other broadband stimuli and much lower than the average ratings for narrowband displays (Figs. S6 and S8). Even though accuracies for those nine broadband stimuli were significantly above chance, the confidence ratings suggest that participants were not necessarily aware of the color differences that distinguished targets from comparisons.

C. Pixelwise Color Differences

Color appearance spaces provide a rough estimate of perceived differences between pairs of colors under a given illumination [1,2]. Those estimates might be indicative of the discrimination between target and comparisons in our experiment. To explore this idea, we represented the pixels of the images in CIELAB, assuming the respective illumination color [cf. Fig. 1(a)] as the white point. We calculated Euclidean distances in CIELAB ($\Delta {{E}_{\rm{Lab}}}$) between the respective spectral version and the one rendered based on sensory reflectance matrices for each pixel and then averaged across pixels. Like accuracies, both average [Fig. 4(a)] and maximum [Fig. 4(b)] $\Delta {{E}_{\rm{Lab}}}$ for renderings with broadband illuminants (circles in Fig. 4) were much lower than those of most renderings with narrowband illuminants (stars in Fig. 4), resulting in strong positive correlations between average and maximum $\Delta {{E}_{\rm{Lab}}}$ and accuracies across stimulus displays [$r({92}) = {0.8}$ and $r({92}) = {0.87}$, respectively; $p \lt {0.001}$].
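The pixelwise color-difference analysis can be sketched as follows (an assumed implementation; the conversion to CIELAB with the illumination color as white point is taken as given):

import numpy as np

def pixelwise_delta_e(lab_spectral, lab_approx):
    """lab_spectral, lab_approx: (n_pixels, 3) CIELAB coordinates of the spectral
    and the approximate rendering. Returns the mean and maximum Euclidean
    distance (Delta E_Lab) across pixels."""
    d = np.linalg.norm(lab_spectral - lab_approx, axis=1)
    return d.mean(), d.max()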


Fig. 4. Relationship between performance and pixelwise color differences. (a) Average and (b) maximum pixelwise differences ($\Delta {{E}_{\rm{Lab}}}$) between the spectral and approximate rendering are shown along the $y$ axis, respectively. The circles and the stars represent broadband and narrowband illuminants, respectively. Symbol colors indicate illumination colors. The correlation between $\Delta {{E}_{\rm{Lab}}}$ and accuracy is given at the top left. $^{***}\!p \lt {0.001}$.


These results did not depend on the difference metric or the color appearance model. S-CIELAB is more appropriate than CIELAB to account for spatial contingencies [31], and maximal color differences might be more salient than averages across the pixels of an image pair. Nevertheless, similar results were obtained when representing colors in S-CIELAB, CIELUV, or CIECAM02 [cf. Figs. 4(a) and S11(a), S11(c), and S11(e)], and when aggregating color differences by maxima rather than averages across pixels [cf. Figs. 4(b) and S11(b), S11(d), and S11(f)]. Details may be found in Fig. S11 of Supplement 1. These results confirm that simple, pixelwise color differences are at least partly predictive of measured discrimination performance: The higher the pixelwise differences between spectral renderings and those based on sensory reflectance matrices, the easier their discrimination.

4. DISCUSSION

In sum, for narrowband illuminants, pixelwise color differences were comparatively large (stars in Figs. 4 and S11) and observers successfully distinguished renderings based on sensory reflectance matrices from spectral renderings with great confidence [Figs. 3(g)–3(k) and S4(a)–S4(d), S4(i)–S4(l) and S4(q)–S4(t)]. This was not the case for broadband-illuminant renderings, where pixelwise differences were much smaller (circles in Figs. 4 and S11), most performance was at chance level [Figs. 3(l)–3(p)], and low confidence ratings indicated that participants did not have the feeling of seeing the differences between renderings based on sensory reflectance matrices and spectral renderings (Figs. S6 and S8).

A. Online and Laboratory Viewing Conditions

The results from the online experiment seem particularly interesting for applications that involve rendering colored objects and materials on websites and in online applications, such as virtual art exhibits, pigment palettes, or products for online shopping. However, the measurements in the online experiment lacked display calibration and control of the observers’ state of adaptation. This implies that performance might have been different under controlled conditions. For this reason, we also conducted measurements with the same stimuli under controlled lab conditions. In addition to the stimuli used in the three measurements of the online experiment, we added other stimuli, including fruits under intermediate illumination colors. Results in the lab (Figs. S5 and S8–S9, Table S4) largely confirmed those from the online experiment (cf. Figs. 3, S4, S6, and S7, Table S4). This suggests that the high number of participants in our online studies compensated for the lower signal-to-noise ratio due to varying displays and states of adaptation. In any case, the consistency of results from online and lab experiments suggests that our findings are valid for a large range of quite different viewing conditions.

B. Realism of Virtual Surfaces

Computational renderings of hyperspectral images might not be fully comparable with real surface colors under three-dimensional viewing conditions. For example, it is known that surface color perception may depend on realistic conditions such as full-field adaptation, three-dimensionality, and interreflections (for a review, see [32]). In addition, we were limited in the range and quality of available hyperspectral images. We needed to carefully select images and illuminant intensities to make sure they did not produce artifacts due to the monitor gamut. As a result, the scene content, the spectral resolution, the variation of (virtual) reflectance, and the range of colors sampled by our images were limited. These limitations are not easily overcome without further technical developments in hyperspectral imaging. Nevertheless, our samples still featured a large variety of scene content and colors that give a first, rough guideline about the usefulness of sensory reflectance matrices to represent surface colors, especially in virtual environments with digital images, such as online applications.

C. Usefulness of Sensory Reflectance Matrix

Participants’ ability to pick out the narrowband renderings based on sensory reflectance matrices in our study indicates that these matrices fail to successfully approximate surface colors under narrowband lighting. In real-life contexts, narrowband illuminants occur, for example, in LED or fluorescent lighting. Since we did not test those specific, real-life illuminants, we cannot say how well sensory reflectance matrices work with each of them, but the usability of sensory reflectance matrices under such lighting is likely to be very limited. Since metamer mismatching is most frequent with narrowband illuminants [22,33], the failure of these matrices with narrowband illuminants undermines their potential to account for metamer mismatching.

Nevertheless, our findings suggest that approximations based on sensory reflectance matrices are barely distinguishable from spectral renderings under broadband illuminants. Tiny differences might be visible for some surface colors and scenes under thorough scrutiny, but for some applications such small differences might not be relevant. Broadband illuminants include not only most lights in the natural environment but also some common artificial lights, such as candles, tungsten bulbs, or advanced LEDs and OLEDs that emulate broadband spectra [3]. Under such conditions, sensory reflectance matrices can be used for faster, compressed specification, computation, and communication of surface colors.

Explorative comparisons (Fig. S3) suggest that sensory reflectance matrices might have an advantage over existing approaches to surface color specification (color appearance models or reference palettes; cf. Introduction). However, a definite response to this question requires systematic comparisons with those other approaches and evaluations that experimentally manipulate metamerism under the illuminants used in those contexts.

5. CONCLUSION

Our simple ${3} \times {3}$ matrix approximation allows specifying, computing, and communicating virtual surface colors across naturalistic broadband illuminants with high fidelity. This implies that this algorithm can be used to represent surface colors on displays using a few numbers rather than full spectra in a broad range of applications that involve naturalistic and other common broadband illuminants. Given the observed limitations with narrowband illuminants, more work needs to be done in order to determine whether sensory reflectance matrices provide an advantage over existing approaches to surface color specification in applied contexts.

Funding

Mayflower Scholarship, School of Psychology, University of Southampton.

Acknowledgment

We thank Irem Ozdemir, Neslihan Ozhan, Veronica Pisu, and Michaela Trescakova for help with piloting, and Alban Flachot and Giulio Palma for comments on the manuscript.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are available in Ref. [26].

Supplemental document

See Supplement 1 for supporting content.

REFERENCES

1. R. W. G. Hunt and M. Pointer, Measuring Colour (Wiley, 2011).

2. M. D. Fairchild, Color Appearance Models, Wiley-IS&T Series in Imaging Science and Technology (Wiley, 2013).

3. A. Hurlbert, “Challenges to color constancy in a contemporary light,” Curr. Opin. Behav. Sci. 30, 186–193 (2019). [CrossRef]  

4. T. Porter, B. Mikellides, and T. Farrell, Colour for Architecture Today (Taylor & Francis, 2009).

5. D. B. MacDougall, “Colour measurement of food: principles and practice,” in Colour Measurement (Woodhead Publication Text, 2010), pp. 312–342.

6. C. Witzel, C. van Alphen, C. Godau, and J. K. O’Regan, “Uncertainty of sensory signal explains variation of color constancy,” J. Vis. 16(15):8 (2016). [CrossRef]  

7. A. D. Logvinenko, B. Funt, H. Mirzaei, and R. Tokunaga, “Rethinking colour constancy,” PLoS ONE 10, e0135029 (2015). [CrossRef]  

8. R. S. Berns and Getty Conservation Institute, Color Science and the Visual Arts: A Guide for Conservators, Curators, and the Curious (The Getty Conservation Institute, 2016).

9. D. H. Foster and K. Amano, “Hyperspectral imaging in color vision research: tutorial,” J. Opt. Soc. Am. A 36, 606–627 (2019). [CrossRef]  

10. C. Witzel, F. Cinotti, and J. K. O’Regan, “What determines the relationship between color naming, unique hues, and sensory singularities: illuminations, surfaces, or photoreceptors?” J. Vis. 15(8):19 (2015). [CrossRef]  

11. M. R. Luo, “Applying colour science in colour design,” Opt. Laser Technol. 38, 392–398 (2006). [CrossRef]  

12. R. C. Pastilha, J. M. M. Linhares, A. I. C. Rodrigues, and S. M. C. Nascimento, “Describing natural colors with Munsell and NCS color systems,” Color Res. Appl. 44, 411–418 (2019). [CrossRef]  

13. A. R. Robertson, “Color order systems—an introductory review,” Color Res. Appl. 9, 234–240 (1984). [CrossRef]  

14. J. L. Dannemiller, “Spectral reflectance of natural objects: how many basis functions are necessary?” J. Opt. Soc. Am. A 9, 507–515 (1992). [CrossRef]  

15. L. T. Maloney, “Evaluation of linear-models of surface spectral reflectance with small numbers of parameters,” J. Opt. Soc. Am. A 3, 1673–1683 (1986). [CrossRef]  

16. D. L. Philipona and J. K. O’Regan, “Color naming, unique hues, and hue cancellation predicted from singularities in reflection properties,” Visual Neurosci. 23, 331–339 (2006). [CrossRef]  

17. J. Vazquez-Corral, J. K. O’Regan, M. Vanrell, and G. D. Finlayson, “A new spectrally sharpened sensor basis to predict color naming, unique hues, and hue cancellation,” J. Vis. 12(6):7 (2012). [CrossRef]  

18. A. Flachot, E. Provenzi, and J. K. O’Regan, “An illuminant-independent analysis of reflectance as sensed by humans, and its applicability to computer vision,” in Proceedings of the 6th European Workshop on Visual Information Processing (EUVIP) (IEEE, 2016).

19. J. Krauskopf and K. Gegenfurtner, “Color discrimination and adaptation,” Vis. Res. 32, 2165–2175 (1992). [CrossRef]  

20. D. H. Foster, K. Amano, and S. M. C. Nascimento, “Time-lapse ratios of cone excitations in natural scenes,” Vis. Res. 120, 45–60 (2016). [CrossRef]  

21. A. Chakrabarti and T. Zickler, “Statistics of real-world hyperspectral images,” in Proceedings Computer Vision and Pattern Recognition Conference (CVPR) (IEEE, 2011), pp. 193–200.

22. D. H. Foster, K. Amano, S. M. C. Nascimento, and M. J. Foster, “Frequency of metamerism in natural scenes,” J. Opt. Soc. Am. A 23, 2359–2372 (2006). [CrossRef]  

23. F. Yasuma, T. Mitsunaga, D. Iso, and S. K. Nayar, “Generalized assorted pixel camera: postcapture control of resolution, dynamic range, and spectrum,” IEEE Trans. Image Process. 19, 2241–2253 (2010). [CrossRef]  

24. D. H. Brainard, “Hyperspectral image data,” http://color.psych.upenn.edu//hyperspectral/.

25. R. Ennis, F. Schiller, M. Toscani, and K. R. Gegenfurtner, “Hyperspectral database of fruits and vegetables,” J. Opt. Soc. Am. A 35, B256–B266 (2018). [CrossRef]  

26. H. Karimipour, J. K. O’Regan, and C. Witzel, “Sensory representation of surface reflectances: assessments with hyperspectral images,” Open Science Framework, 2022, https://doi.org/10.17605/OSF.IO/6C7NU.

27. D. Weiss, C. Witzel, and K. Gegenfurtner, “Determinants of colour constancy and the blue bias,” I-Perception 8, 2041669517739635 (2017). [CrossRef]  

28. L. Chittka and R. Menzel, “The evolutionary adaptation of flower colours and the insect pollinators’ colour vision,” J. Comp. Physiol. 171, 171–181 (1992). [CrossRef]  

29. S. Westland, J. Shaw, and H. Owens, “Colour statistics of natural and man–made surfaces,” Sens. Rev. 20, 50–55 (2000). [CrossRef]  

30. S. J. Thorpe and M. Fabre-Thorpe, “Perspectives: neuroscience—seeking categories in the brain,” Science 291, 260–263 (2001). [CrossRef]  

31. X. Zhang and B. A. Wandell, “A spatial extension of CIELAB for digital color-image reproduction,” J. Soc. Inf. Disp. 5, 61–63 (1997). [CrossRef]  

32. C. Witzel and K. R. Gegenfurtner, “Color perception: objects, constancy, and categories,” Annu. Rev. Vis. Sci. 4, 475–499 (2018). [CrossRef]  

33. A. Akbarinia and K. R. Gegenfurtner, “Color metamerism and the structure of illuminant space,” J. Opt. Soc. Am. A 35, B231–B238 (2018). [CrossRef]  

Supplementary Material (1)

Supplement 1: Supplemental document with items numbered accordingly.





Equations (2)


$${}_{n}{\rm rLMS}_{3} = {}_{n}{\rm I}_{\Lambda}\,{}_{1}{\rm R}_{\Lambda} \times {}_{\Lambda}{\rm S}_{3}\tag{1}$$
$${}_{n}{\rm rLMS}_{3} = {}_{n}{\rm iLMS}_{3} \times {}_{3}{\rm A}_{3}\tag{2}$$
Here, ${\rm rLMS}$ are the reflected sensory signals, ${\rm I}$ the illuminant spectra, ${\rm R}$ the surface reflectance, ${\rm S}$ the cone sensitivities (or color-matching functions), ${\rm iLMS}$ the sensory signals of the illuminants, and ${\rm A}$ the ${3} \times {3}$ sensory reflectance matrix; pre- and post-subscripts indicate matrix dimensions ($n$ samples, $\Lambda$ wavelengths, 3 cone classes).