Optica Publishing Group

Color appearance model for self-luminous stimuli

Open Access

Abstract

A model for brightness and hue perception of self-luminous stimuli surrounded by a self-luminous achromatic background has been developed based on a series of visual experiments. In the model, only the absolute spectral radiance values of the stimulus and background are used as input. Normalized cone excitations are calculated using the 10° Commission Internationale de l’Éclairage (CIE) 2006 cone fundamentals. A von Kries chromatic adaptation transform applied in the CIE 2006 cone space is adopted, and luminance compression and adaptation due to the self-luminous background are included by using a Michaelis–Menten function. Model parameters are determined by fitting the model to the experimental visual data obtained for brightness, hue, and the amount of color versus neutral. The model is validated with additional experimental data. An absolute brightness scale expressed in “bright” is proposed.

© 2018 Optical Society of America

1. INTRODUCTION

The fundamental goal of a color appearance model (CAM) is to predict the color appearance of a stimulus. To some extent, the physiological processes taking place in the eye, retina, and brain are taken into account to calculate the visual attributes of the stimulus [1–3].

Various CAMs dealing with the color perception of surface colors have been developed [1,3]. Their application requires the characteristics of the light source illuminating the target and the background (commonly a neutral 20% grey), the reflectance of the target, and the characteristics of the background and surround. One of them, CIECAM02, is the most widely used [4]. It accounts for chromatic adaptation, luminance adaptation, degree of adaptation, and noise. In 2016, Li et al. [5] revised the CIECAM02 model by merging the chromatic adaptation transform and the cone response transform and by adopting a two-step chromatic adaptation transform. Applying CIECAM02 to a self-luminous stimulus surrounded by a self-luminous background is not easy: the spectral radiance of the stimulus is totally independent of that of the background, the concept of the reference white (considering an ideal diffusely reflecting object) is not straightforward, and the luminance of the background can range from zero (an unrelated stimulus) to values far beyond the luminance of the stimulus. Modelling these kinds of stimuli could be very important when investigating, e.g., the brightness of lamps, LEDs, OLEDs, luminaires, and advertisement billboards.

Unrelated colors are colors seen in complete isolation from any other color [1,3]. Typical examples of unrelated colors are bright light sources seen in a completely dark environment. In 1997, CAM97u was designed by Hunt and Pointer [3]. In 2012, some improvements were introduced by Fu et al. [6], leading to the CAM02u model, which is applicable under mesopic and photopic viewing conditions. In 2013, Withouck et al. [7] showed that both models were unable to accurately predict the perceived brightness of unrelated stimuli, mainly due to underestimation of the Helmholtz–Kohlrausch effect. This effect refers to an increase in perceived brightness as the colorfulness of the stimulus increases, despite keeping its luminance constant. In 2015, Withouck et al. [8] designed a new CAM for unrelated self-luminous colors, CAM15u.

Regarding self-luminous stimuli, a number of models focusing only on brightness perception have been developed in the past. Fechner [9], Stevens [10], and Bartleson and Breneman [11] developed brightness models in which the perceived brightness increases proportionally to the logarithm or to a power function of the luminance. As already pointed out by Arend and Spehar [12], a change in background luminance will influence the perceived brightness of the stimulus. Bodmann and Toison [13] developed a brightness model including the effect of the background. The main problem with these models is that the input parameters are restricted to luminance values and neutral stimuli. In 2002, Guth [14] developed the ATD01 color vision model, but its brightness prediction was found unsatisfactory for colored unrelated stimuli [15]. Nayatani [16] and the Commission Internationale de l’Éclairage (CIE) [17] developed “equivalent luminance” models. The concept of these models is to define a neutral reference stimulus having a photopic luminance that matches the test stimulus in terms of brightness. However, these equivalent luminance models underestimate the Helmholtz–Kohlrausch effect and do not take the effect of a background into account. In 2018, CIE TC1-93 published a grey-scale calculation for self-luminous devices based on Whittle’s logarithmic formula to calculate the number of equally perceptible differences of suprathreshold brightness contrast between the background luminance and the target luminance [18–20]. This model uses the luminance values of the stimulus and background as input and distinguishes between stimuli seen in positive and negative luminance contrast with the background. Recently, a brightness model was developed for neutral self-luminous stimuli and backgrounds by Hermans et al. [21]. The impact of the background was included by calculating the adapted and compressed cone responses using a Michaelis–Menten function [22] and by making the semi-saturation parameter inside the Michaelis–Menten function dependent on the mean cone excitation induced by the background. However, the model was restricted to neutral stimuli, neutral backgrounds, and brightness perception.

In this paper, a CAM is presented, combining the approach used in CAM15u (colored stimuli on a dark background) with the successful implementation of the effect of the background on the brightness of neutral stimuli [21]. In five series of experiments, perceptual and spectral data are gathered for self-luminous stimuli surrounded by, and in positive and negative luminance contrast with, neutral self-luminous backgrounds. Stimulus and background luminance covered a wide photopic range. Based on these data, a CAM capable of predicting the perceived brightness, the hue, and the amount of color versus neutral is presented. An independent validation experiment shows very good prediction accuracy. An absolute brightness scale expressed in “bright” is proposed.

2. EXPERIMENT

A. Experimental Setup

A specially designed experimental room has been set up. The self-luminous 3 m by 5 m background is created by illuminating a diffusor from the back with a series of 40 dimmable tube luminescent (TL) fluorescent lamps. In the center of the background, a circular test stimulus is created by an RGB-LED light source encased in a cylindrical tube placed behind the diffusor (Fig. 1). The fields of view of the self-luminous background and of the central stimulus were approximately 100° by 71° and 10°, respectively. The uniformity of the self-luminous background and the self-luminous stimulus was within 10% of the mean. For more details about the experimental setup, refer to [8,21].

Fig. 1. (left) Experimental room and (right) the central colored stimulus and part of the neutral low-luminance background.

B. Stimuli

Five background luminance levels were selected: 0, 50, 150, 300, and 500 cd/m² (10° luminance values). The CIE 1976 u′, v′ chromaticity of the background was (0.231, 0.492), which corresponds to a correlated color temperature (CCT) of approximately 4000 K. The variation of both u′ and v′ over the luminance range was less than 3%. Regarding the circular test stimulus, 20 chromaticity values (Fig. 2) were combined with three luminance values (50, 150, and 300 cd/m²; CIE 1964 10° luminance values). The neutral achromatic stimulus was also presented at 500 cd/m². The (u′, v′) chromaticity of the neutral stimulus (u′ = 0.232 and v′ = 0.491) was approximately identical to that of the background. Combining each test stimulus (20×3+1 = 61) with each background luminance (5) gives rise to 305 test scenes. The chromaticity gamut of the test stimuli and the luminance range of the stimuli and background cover the widest range available in the experimental setup.

Fig. 2. Chromaticity coordinates of all stimuli plotted in the CIE 1976 u′10, v′10 chromaticity diagram. (+) Saturated red, green, and blue; (triangle) less saturated red, green, and blue; and (circle) white stimuli (see Figs. 5 and 6).

C. Experimental Procedure

Visual data about the brightness perception of the test stimuli were obtained in a magnitude estimation experiment. The stimulus region was recognizable by a line originating from the encased cylindrical tube in contact with the diffusor. Observers were asked to rate the brightness of the test stimulus in comparison to that of a reference stimulus. To this reference stimulus, a brightness value of 100 was attributed. The reference stimulus was presented in temporal juxtaposition to the actual test stimulus. If observers perceived the actual test stimulus twice as bright as the reference, they had to attribute a value of 200 to the brightness. The perceived brightness values could range from 0, representing a dark stimulus, to any other positive value without defining an upper limit. This assessment method produces a ratio scale, and the experimental brightness data are scalable.

The scaling of colorfulness is not evident for an inexperienced observer. However, in view of a broad applicability of the model, the perception by inexperienced observers is certainly most relevant. For this reason, colorfulness was not questioned directly nor trained, but the observers were asked to rate the relative contribution of “colored light” and “neutral light” they perceived to be present within the stimulus. This approach has been used before [8,23–25]. Withouck et al. [8] already concluded that the “amount of neutral” or the “amount of color” of a stimulus is rather easy to assess by inexperienced observers, although it did not lead to more robust data than assessing colorfulness with trained observers. In Fig. 3, an example of the observer response sheet is shown. For every stimulus, the observer had to intuitively put a cross along the bar (from colorful to neutral). The same graphical evaluation sheet was also used for hue, where a mark had to be put on the circumference of the circle.

Fig. 3. Graphical observer response sheet.

The instructions given to the observers were as follows (translated from Dutch):

“You will see 61 test stimuli during the entire experiment. There will be a small 10 min break in the middle of the experiment. First, a reference stimulus will be shown for 15 s followed by a beeping sound. This beeping sound is the indication that the actual test stimulus will next be presented for 15 s. You will be asked to give a value to the brightness of the test stimulus with respect to that of the reference. After 15 s of showing the actual test stimulus, another beeping sound will be presented, indicating you have to answer.

The reference is assigned a brightness value of 100. A value of zero represents a dark stimulus without any brightness. There is no upper limit to the value of brightness; a value of 200 represents a stimulus appearing twice as bright as the reference, and a value of 50 should be given to a stimulus appearing half as bright, etc.

After answering the brightness question, you have to fill in the graphical response sheet. How much white compared to non-white (or color) do you recognize in the stimulus? Put a cross in the column NEUTRAL-COLORFUL. Keep in mind that this represents the degree of neutrality. Grey, dark, or white stimuli can be assigned as neutral. Put the cross completely at the top of the bar when there is only color visible in the stimulus. Put it completely at the lower end if there is no color present, and thus, only a white, a grey, or a black (neutral) stimulus is visible.

Next you have to put a cross on the hue circle. Try to put the cross as closely as possible to the corresponding color. If you see a completely red, yellow, green, or blue stimulus, put the mark on the circumference near the corresponding color. If you see more than one hue, put the cross according to the proportion of each hue present in the stimulus, e.g., in between red and yellow for a particular orange stimulus. If this stimulus contains more red than yellow, put the cross closer to the word RED on the circle.”

The experiment was split up into five separate series; in each series, the background was kept fixed at one luminance level. This approach was adopted because observers found it very difficult to compare the test stimulus with the reference stimulus if the background of both was different. The reference stimulus used in all series was the same, characterized by a luminance level of 300cd/m2 and a chromaticity identical to the chromaticity of the background.

These response sheets were scanned, and the position of the cross in the bar was transferred to a value between 0 and 1. If the cross was at the bottom of the bar (neutral), a value of 1 was returned, indicating that the observer perceived the stimulus as completely neutral. If the cross was at the top of the bar (colorful), a value of 0 was returned, indicating that the observer perceived the stimulus as completely colored. The position on the hue circle was transformed to a value between 0 and 400, corresponding to hue quadrature (100 = yellow, 200 = green, 300 = blue, and 0/400 = red).

Before each experiment, the observers were provided with instructions, and a test set of 10 stimuli was shown. This pre-experiment period (more than 15 min) ensures that the observers are adapted maximally to the luminance level of the background. During the experiment, 10 randomly picked stimuli appeared twice and were used to calculate intra-observer variability. Immediately after each series, the observers were asked a number of questions about their general experience, such as the comfort in the room, the oral and graphical evaluation method, and about the general appearance of the self-luminous stimuli. Observers reported that the use of the graphical response sheet was easy and comfortable.

D. Observers

The panel of test subjects was composed of 10 naïve observers (5 males and 5 females) with normal color vision, as tested by the Ishihara 24-plate test and the Farnsworth–Munsell 100 hue test; all observers had an “average” or “superior” discrimination. Ages ranged from 18 to 30 years, with an average age of 26 years and a median age of 27 years.

E. Observer Variability

For the perception of brightness, a half-open scale was used. The results of the individual observers are converted to an “average observer” by calculating the geometric mean [26]. For hue quadrature and amount of neutral, the arithmetic mean was calculated [26]. The average inter- and intra-observer variability was determined by calculating the arithmetic mean of the standardized residual sum of squares (STRESS) obtained for each observer. These values can be used to analyze the goodness of fit between two sets of data. If two sets have a perfect agreement, the STRESS will be zero [27]:

$$\mathrm{STRESS}=\frac{1}{n}\sum_{i=1}^{n}\sqrt{\frac{\sum_{j=1}^{k}\left(A_{i,j}-fB_{i,j}\right)^{2}}{\sum_{j=1}^{k}\left(fB_{i,j}\right)^{2}}},\quad\text{with}\quad f=\frac{\sum_{j=1}^{k}\left(A_{i,j}\right)^{2}}{\sum_{j=1}^{k}A_{i,j}B_{i,j}}.\tag{1}$$
In Eq. (1), Ai,j and Bi,j represent the perceptual attribute of stimulus j assessed by the individual observer i. For calculating the mean intra-observer variability, Ai,j and Bi,j represent the observer responses to the perceptual attributes of the stimuli that were shown twice. For calculating the mean inter-observer variability, Ai,j represents the response of the individual observer i to the perceptual attribute of stimulus j, and Bi,j represents the perceptual attribute of stimulus j of the average observer. For checking the variability between the model and the individual observer, Bi,j represents the perceptual attribute of stimulus j predicted by the model, which is identical for each observer i.
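Eq. (1) is straightforward to compute. The sketch below (function and variable names are my own) evaluates the STRESS between one observer's responses and a second data set; the per-observer values are then averaged arithmetically:

```python
import numpy as np

def stress(A, B):
    """STRESS between two paired data sets A and B, as in Eq. (1),
    with scaling factor f = sum(A^2) / sum(A*B).  The mean inter- or
    intra-observer variability is the arithmetic mean of the
    per-observer STRESS values."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    f = np.sum(A**2) / np.sum(A * B)
    return float(np.sqrt(np.sum((A - f * B)**2) / np.sum((f * B)**2)))

# Two data sets that agree up to a scale factor give STRESS = 0:
print(stress([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))  # -> 0.0
```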

F. Rescaling

The reference scenes used in the five experimental series are not identical; they are characterized by the same neutral circular stimulus but by a different background luminance. To pool all the brightness data, a rescaling to one general reference scene has to be made. Recently, Hermans et al. [21] developed a brightness model for self-luminous but achromatic stimuli surrounded by a self-luminous achromatic background. The reference scene used in those experiments was a 10° self-luminous stimulus (u′ = 0.232 and v′ = 0.491) and a self-luminous achromatic background (u′ = 0.231 and v′ = 0.492), both with a luminance value of 250 cd/m². To establish a scaling parameter for a background series, the spectral radiances of the four achromatic stimuli of the series and the corresponding background were used as inputs to the model. The model brightness is compared to the perceptual brightness data, and a linear regression (Y = aX) using the least chi-square method [28] between both data sets is used to determine the proportionality factor a. In this way, all brightness data of each background series were rescaled to one unique reference scene characterized by an achromatic stimulus and background of 4000 K and 250 cd/m² (Fig. 4). The average STRESS and R² values of the linear regression varied between 0.05 and 0.10 and between 0.94 and 0.99, respectively, validating the approach.
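The proportionality factor a of the Y = aX regression can be sketched as follows. As an assumption, the least chi-square weighting [28] is taken as inverse squared measurement uncertainties; without uncertainties the fit reduces to ordinary least squares through the origin (function name hypothetical):

```python
import numpy as np

def proportionality_factor(X, Y, sigma=None):
    """Fit Y = a*X through the origin.  With per-point uncertainties
    sigma this is a weighted (chi-square) fit; without them it reduces
    to ordinary least squares."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    w = 1.0 / np.asarray(sigma, float)**2 if sigma is not None else np.ones_like(X)
    return float(np.sum(w * X * Y) / np.sum(w * X**2))
```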

Fig. 4. Rescaled brightness values of the average observer as a function of the predicted brightness value of the achromatic brightness model, together with the proportionality factors a and the R² and STRESS values for each background independently. Error bars are standard errors, and boxplots are included for LB = 0 cd/m².

3. RESULTS

In Fig. 5, the brightness observer data are shown for a selection of the test stimuli at a fixed background luminance (LB = 50 cd/m²). It is clear that stimuli with a higher (CIE 1964 10°) luminance level LS are perceived as brighter if the luminance level of the background is kept constant. Stimuli with the same luminance value but a higher saturation are also perceived as brighter. This is the Helmholtz–Kohlrausch (HK) effect.

Fig. 5. Brightness perception of the average observer as a function of the luminance level of the stimulus (LB = 50 cd/m²). (+) Most saturated red, green, and blue; (triangle) less saturated red, green, and blue; and (circle) white stimuli (see Fig. 2).

In Fig. 6, only the white, the saturated, and the less saturated red, blue, and green stimuli are shown as a function of the background luminance. For all stimuli, the brightness perception decreases whenever the luminance level of the background increases. It is clear that the impact of the background is similar for all stimuli and that the highest impact is observed at low luminance backgrounds. At high luminance backgrounds, the stimuli are observed in negative contrast, and the stimulus brightness converges to a lower value: observers report the white self-luminous stimulus to even become grey. The HK effect is still present but becomes less dominant. In contrast to brightness, hue quadrature values are statistically invariant whenever the luminance level of the background is changed. All stimuli follow the same trend.

Fig. 6. Brightness perception as a function of the background luminance for all three different luminance levels of stimuli. (+) Saturated red, green, and blue; (triangle) less saturated red, green, and blue; and (circle) white stimuli (see Fig. 2). Full lines indicate the brightness values predicted by CAM18sl for the most saturated and white stimuli.

The values of the “amount of neutral” of the average observer for all stimuli for one particular background luminance (50 cd/m²) are given in Fig. 7 as a function of the CIE 1976 u′10, v′10 saturation, suv,10.

Fig. 7. Amount of neutral of the average observer as a function of the CIE 1976 u′10, v′10 saturation. Background luminance level LB = 50 cd/m². Error bars are standard errors.

Obviously, the “amount of neutral” of the white and highly saturated stimuli is close to 1 and 0, respectively. The large variance in the data for intermediate saturation values is striking. Despite its simplicity and familiarity to the observers, “amount of neutral/color” does not lead to a highly robust assessment. The variation of the “amount of neutral” with background luminance is much smaller than the inter-observer variability, although a slight increase can be observed when the background luminance increases.

4. DEVELOPMENT OF THE MODEL

The model presented in this paper combines CAM15u [8] (valid for a large variety of stimuli, but restricted to a dark background) with the earlier brightness model developed for achromatic stimuli and backgrounds [21]. Chromatic adaptation is accounted for by a von Kries transform [29].

The following steps are identified.

A. Calculation of the Absolute Normalized Cone Excitations

In 2006, the CIE provided a set of cone fundamentals l̄10(λ), m̄10(λ), and s̄10(λ), specifically suited to stimuli with an angular extent of 10° [30–32]. These cone fundamentals are used to calculate the cone excitations ρ, γ, and β of the stimulus. The normalization constants have been chosen such that the cone excitations of spectral equal-energy white are identical and nominally equal to the CIE 1964 10° luminance value:

$$\begin{aligned}
\rho &= 676.7\int_{390}^{830} L_{e,\lambda}(\lambda)\,\bar{l}_{10}(\lambda)\,d\lambda,\\
\gamma &= 794.0\int_{390}^{830} L_{e,\lambda}(\lambda)\,\bar{m}_{10}(\lambda)\,d\lambda,\\
\beta &= 1461.5\int_{390}^{830} L_{e,\lambda}(\lambda)\,\bar{s}_{10}(\lambda)\,d\lambda.
\end{aligned}\tag{2}$$
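As a minimal numerical sketch of Eq. (2), assuming the CIE 2006 10° cone fundamentals and the measured spectral radiance are available as arrays on a uniform wavelength grid (all names hypothetical):

```python
import numpy as np

def cone_excitations(wl, Le, lbar10, mbar10, sbar10):
    """Eq. (2): absolute normalized cone excitations.  wl is a uniform
    wavelength grid (nm) spanning 390-830 nm; Le is the spectral
    radiance of the stimulus, and lbar10/mbar10/sbar10 are the CIE 2006
    10-degree cone fundamentals sampled on that grid.  A simple
    rectangle rule approximates the integrals."""
    dwl = wl[1] - wl[0]
    rho   = 676.7  * np.sum(Le * lbar10) * dwl
    gamma = 794.0  * np.sum(Le * mbar10) * dwl
    beta  = 1461.5 * np.sum(Le * sbar10) * dwl
    return rho, gamma, beta
```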

B. Chromatic Adaptation

In a next step, the corresponding colors of the stimuli, when adapted to an equal-energy white background of the same luminance as the 4000 K test white background, are calculated. Chromatic adaptation is traditionally modelled by the von Kries coefficient rule in a suitable cone space [29]. Smet et al. [33] reported that the choice of cone space is not very critical, with prediction errors in the CIE 2006 cone space comparable to those in other cone spaces. The CIE 2006 long, middle, short (LMS) cone space will therefore be used.

Regarding the value of the degree of adaptation, D, there are reasons to believe that a value of 1 (complete adaptation to the white point) is a good choice. During the test phase, before the actual experiment started, observers were already looking at stimuli in the presence of the background for more than 15 min. All observers experienced the background as neutral (“amount of white” W=1), which indicates complete adaptation, probably induced by the large field of view (>70°) and the near-neutral chromaticity (4000 K). Given a D-factor equal to 1, the chromatic adaptation transform can be written as follows:

$$\begin{bmatrix}\rho_c\\ \gamma_c\\ \beta_c\end{bmatrix}=
\begin{bmatrix}\rho_{wr}/\rho_B & 0 & 0\\ 0 & \gamma_{wr}/\gamma_B & 0\\ 0 & 0 & \beta_{wr}/\beta_B\end{bmatrix}
\begin{bmatrix}\rho\\ \gamma\\ \beta\end{bmatrix}.\tag{3}$$
In Eq. (3), (ρwr, γwr, βwr) are the mutually equal cone responses of the equal-energy white (EEW) reference white point at the same luminance as the test white; (ρB, γB, βB) are the cone responses of the background, acting as the test white (4000 K fluorescent tubes in this experimental setup); and (ρc, γc, βc) are the cone responses of the corresponding colors of the stimuli.

One could question the application of this transform for the unrelated stimuli (dark background), but it has been shown that equal-energy white is still a valid reference white for near-dark backgrounds [34]. For this reason, (ρB, γB, βB) are taken as the mutually equal cone responses of the EEW reference white point whenever the stimuli are presented on a dark background, causing the diagonal elements of the adaptation matrix to equal 1.
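A sketch of the transform of Eq. (3), with D = 1, might look as follows (function and argument names hypothetical):

```python
import numpy as np

def von_kries(cones_stim, cones_bg, cones_white_ref):
    """Eq. (3): diagonal von Kries transform with D = 1.  cones_bg are
    the background cone responses (acting as the test white); for a
    dark background, pass cones_bg = cones_white_ref so that the
    diagonal gains become 1."""
    gain = np.asarray(cones_white_ref, float) / np.asarray(cones_bg, float)
    return gain * np.asarray(cones_stim, float)
```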

C. Compressed Cone Responses

Next, the cone excitations are compressed, and adaptive shifts are introduced. Cone responses of primates follow a sigmoidal curve [35], which is shifted by the level of adaptation of the cones in the retina. This sigmoidal compression is implemented using the Michaelis–Menten equation [22], and the numerical values of the parameters are taken identical to those in Ref. [21]. This is a reasonable procedure, since all experimental data have been pooled according to this model:

$$\rho_{c,a}=\frac{\rho_c^{0.58}}{\rho_c^{0.58}+\left(291.20+71.8\,\alpha_{wr}^{0.78}\right)^{0.58}},\quad
\gamma_{c,a}=\frac{\gamma_c^{0.58}}{\gamma_c^{0.58}+\left(291.20+71.8\,\alpha_{wr}^{0.78}\right)^{0.58}},\quad
\beta_{c,a}=\frac{\beta_c^{0.58}}{\beta_c^{0.58}+\left(291.20+71.8\,\alpha_{wr}^{0.78}\right)^{0.58}}.\tag{4}$$
In Eq. (4), ρc,a, γc,a, and βc,a are the adapted cone responses of the corresponding color of the stimulus. The adaptive shift is represented by αwr, which is equal to ρwr=γwr=βwr.
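Eq. (4) can be sketched as a single function applied to each adapted cone signal; alpha_wr is the adapted reference-white excitation (names hypothetical):

```python
def compress(cone, alpha_wr):
    """Eq. (4): Michaelis-Menten compression of a cone signal.  The
    semi-saturation level depends on alpha_wr, the cone excitation of
    the adapted reference white (alpha_wr = rho_wr = gamma_wr = beta_wr).
    The output is a dimensionless response between 0 and 1."""
    sigma = (291.20 + 71.8 * alpha_wr**0.78) ** 0.58
    c = cone ** 0.58
    return c / (c + sigma)
```

Because the semi-saturation level grows with alpha_wr, a brighter background lowers the compressed response of the same stimulus, which is exactly the background effect described above.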

D. Color Opponent Signals

The color opponent signals a and b are defined in a similar way as in other CAMs:

$$a=c_a\left(\rho_{c,a}-\frac{12}{11}\gamma_{c,a}+\frac{\beta_{c,a}}{11}\right),\qquad b=c_b\left(\rho_{c,a}+\gamma_{c,a}-2\beta_{c,a}\right).\tag{5}$$
In this formula, ca and cb are constants that should be determined by fitting the experimental data.
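Eq. (5) can be sketched as below, filled in with the values of ca and cb that the fitting procedure described later in the text produced (function and argument names hypothetical):

```python
def opponent(rho_a, gamma_a, beta_a, ca=0.63, cb=0.12):
    """Eq. (5): color opponent signals from the compressed cone
    responses, with the fitted constants ca and cb from the text."""
    a = ca * (rho_a - 12.0 / 11.0 * gamma_a + beta_a / 11.0)
    b = cb * (rho_a + gamma_a - 2.0 * beta_a)
    return a, b
```

Note that an achromatic input (equal compressed cone responses) yields a = b = 0, as it should.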

E. Perceptual Attributes

The hue angle h(°) can be calculated by taking the inverse tangent of the color opponent signals a and b [3,36] as

$$h=\frac{180}{\pi}\tan^{-1}(b/a).\tag{6}$$
Hue quadrature can be found by linearly transforming the hue angle from a 0° to 360° range to a 0–400 range [3],
$$H=H_i+\frac{100\,(h-h_i)}{h_{i+1}-h_i},\tag{7}$$
with hi the unique hue angles and Hi the unique hue quadrature values, which are listed in Table 1 [8].

Table 1. Unique Hue Data for Calculating the Unique Hue Quadrature (Hi)

The parameters ca and cb were determined from the experimental hue quadrature data by minimizing the mean of the squared residual errors between the experimentally observed hue quadrature Havg and the predicted hue quadrature HCAM18sl. The value of ca was found to be 0.63, and cb was found to be 0.12. Notice that they are almost the same as the ca and cb in the CAM15u model, which confirms that hue perception is not much influenced by the introduction of a background. When using these parameters, the hue quadrature predicted by the model results in a very high coefficient of determination R² = 0.99, a small root-mean-square error (RMSE) of 14, and an average STRESS value of 0.06 (Fig. 8).
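Eqs. (6) and (7) can be sketched as follows. Since the entries of Table 1 are not reproduced above, the well-known CIECAM02 unique-hue angles are used purely as stand-in values; the actual model uses the data of Table 1 (names and the stand-in table are assumptions):

```python
import math

# Stand-in unique-hue table (hue angle, hue quadrature); the CIECAM02
# values are used here ONLY as placeholders for the data of Table 1.
UNIQUE_HUES = [(20.14, 0.0), (90.00, 100.0), (164.25, 200.0),
               (237.53, 300.0), (380.14, 400.0)]

def hue_quadrature(a, b):
    """Eqs. (6)-(7): hue angle from the opponent signals, then linear
    interpolation between neighbouring unique hues onto a 0-400 scale."""
    h = math.degrees(math.atan2(b, a)) % 360.0
    if h < UNIQUE_HUES[0][0]:
        h += 360.0  # wrap reds below the first unique hue
    for (h_i, H_i), (h_j, H_j) in zip(UNIQUE_HUES, UNIQUE_HUES[1:]):
        if h_i <= h < h_j:
            return H_i + 100.0 * (h - h_i) / (h_j - h_i)
    return 400.0
```

Using atan2 rather than a plain inverse tangent keeps the hue angle in the correct quadrant for negative a or b.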

Fig. 8. Hue quadrature of the average observer as a function of the hue quadrature predicted by the model. Error bars are standard errors.

Colorfulness can be calculated as the strength of the color opponent signals a and b:

$$M=c_M\sqrt{a^2+b^2}.\tag{8}$$
The constant cM can be used to anchor the colorfulness scale of this model to any reference value. When using the numerical value of 3260, the colorfulness of stimuli surrounded by a dark background is identical to the values resulting from the application of CAM97u [3] and CAM15u [8].

Brightness Q is modelled as the sum of the achromatic signal A and an HK term including the colorfulness:

$$A=2\rho_{c,a}+\gamma_{c,a}+\frac{\beta_{c,a}}{20},\tag{9}$$
$$Q=c_A\left(A+c_1\,M^{c_{HK2}}\right).\tag{10}$$
cA can be considered a scaling factor determined by the brightness value attributed to the reference stimulus. The parameter values for cA, c1, and cHK2, found by fitting the predicted brightness to the perceptual brightness data for all stimuli by minimizing the mean of the squared residual errors, are 123, 0.0024, and 1.09, respectively. The low value of c1 does not imply a small contribution from the HK effect, because of the high numerical values of M.

When using these parameters, the brightness predicted by the model results in a very high coefficient of determination R² = 0.97, an RMSE of 25, and an average STRESS value of 0.12. Similar values have been obtained when the model was applied to each background separately. In Fig. 9, the brightness perception of the average observer is plotted against the brightness prediction of the CAM18sl model. Yellow, green, blue, red, and black stars indicate the series with background luminances of 500, 300, 150, 50, and 0 cd/m², respectively.
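The brightness computation of Eqs. (9) and (10) can be sketched with the fitted parameter values from the text (function and argument names hypothetical):

```python
def brightness(rho_a, gamma_a, beta_a, M, cA=123.0, c1=0.0024, cHK2=1.09):
    """Eqs. (9)-(10): achromatic signal A plus a Helmholtz-Kohlrausch
    term driven by the colorfulness M, using the fitted parameters
    cA = 123, c1 = 0.0024, and cHK2 = 1.09."""
    A = 2.0 * rho_a + gamma_a + beta_a / 20.0
    return cA * (A + c1 * M**cHK2)
```

For equal colorfulness-free inputs the HK term vanishes, and a nonzero M raises the predicted brightness, as the visual data require.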

Fig. 9. Brightness perception of the average observer as a function of the brightness value predicted by CAM18sl. Error bars are standard errors; RMSE, R², and STRESS values are given for each background separately.

Saturation can be calculated as the colorfulness relative to the brightness:

$$S=\frac{M}{Q}.\tag{11}$$
In Fig. 10, W is plotted versus the CAM18sl saturation S. The large variability in the midrange of W is again present. The goodness of fit between the saturation predicted by the model and the average observer data for amount of white resulted in an R² = 0.71, an RMSE of 0.13, and a STRESS value of 0.70 (Fig. 10).

Fig. 10. Amount of neutral for the average observer as a function of the CAM18sl saturation, S.

Following the suggestion by Withouck et al. [8], a prediction of the amount of white is obtained by minimizing the mean of the squared residual errors between the experimentally observed amount of white and a sigmoidal function of the saturation:

$$W=\frac{1}{1+2.29\,S^{2.09}}.\tag{12}$$
The goodness of fit between the model prediction and the average observer data for amount of white resulted in an R2=0.71, a small RMSE of 0.12, and an average STRESS value of 0.24 (Fig. 11).
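Eq. (12) is a one-liner; a sketch with a hypothetical function name:

```python
def amount_of_white(S):
    """Eq. (12): sigmoid mapping the CAM18sl saturation S = M/Q to the
    predicted amount of white W in [0, 1]."""
    return 1.0 / (1.0 + 2.29 * S**2.09)

# A perfectly neutral stimulus (S = 0) is predicted as fully white:
print(amount_of_white(0.0))  # -> 1.0
```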

Fig. 11. Average observer amount of white against the amount of white predicted by the CAM18sl model.

5. VALIDATION

In the validation experiment, visual data for a total of 100 scenes were collected. Each consecutive test scene consisted of a stimulus and a background, both of which changed. Observers reported that these kinds of assessments are more difficult to perform. For this reason, the experiment was split up into five shorter series of 20 scenes. During each series, one fixed reference scene was used. Test stimuli varied in hue, saturation, and luminance; background luminance levels were identical to the ones used in the test experiment. Brightness, amount of color, and hue were assessed following the same procedure as explained before. The brightness data were rescaled to a common reference as before. The goodness of fit for the brightness, hue quadrature, and amount of neutral data from the validation experiment was assessed by calculating the R², RMSE, and average STRESS values with respect to the model (Table 2). As can be seen, the R², RMSE, and average STRESS values for brightness, hue quadrature, and amount of neutral obtained in the validation experiment are nearly the same as those obtained in the test experiment. The average STRESS values for the model and both sets (test and validation set) are smaller than the inter-observer variability, indicating the good performance of the model.

Table 2. R², RMSE, and Average STRESS Values of the Brightness, Hue Quadrature, and Amount of Neutral Perception of the Test Experiment and of the Validation Experiment

6. ABSOLUTE BRIGHTNESS SCALE

CAM97u and CAM15u (unrelated self-luminous colors) did not introduce a particular unit for brightness. Also, CIECAM02 (object colors) did not use the concept of an absolute brightness unit. In 1949, Hanes [37] attempted to establish a subjective brightness scale (the bril scale) by investigating specific brightness ratios. In 1962, Stevens [10] defined the bril as an absolute unit of brightness, corresponding to the brightness of a 5° self-luminous stimulus with a luminance level of 0.0032 cd/m² on a dark background [38]. The spectral content of the reference is, however, unclear. The luminance value is also somewhat unusual and clearly outside of the photopic region. Regarding the spectral content, an EEW stimulus would be convenient to define a brightness unit. For these reasons, a new absolute brightness unit is proposed: 1 bright corresponds to the apparent brightness of a 10° spectral equal-energy self-luminous stimulus having a CIE 1964 10° luminance of 100 cd/m² and surrounded by a dark background of 0 cd/m². The brightness scale of the CAM18sl model can easily be anchored to this brightness unit by adjusting the parameter cA in Eqs. (10) and (11) (see Appendix A for a full overview). The CAM18sl brightness Q expressed in bright is given by

$$Q = 0.937\left(\left(2\rho_{c,a} + \gamma_{c,a} + \tfrac{1}{20}\beta_{c,a}\right) + 0.0024\,M^{1.09}\right).$$
If an EEW 10° stimulus of 100 cd/m² on a dark background corresponds to 1 bright, increasing the background luminance to 300 cd/m² lowers the brightness to 0.29 bright. A saturated red stimulus of the same luminance (100 cd/m²) on a dark background has a brightness of 3.68 bright due to the Helmholtz–Kohlrausch (HK) effect.
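The anchoring can be verified directly from the model equations in Appendix A, assuming the Step 1 normalization is such that an EEW stimulus of luminance L yields ρ = γ = β = L. For the EEW reference stimulus (100 cd/m², dark background, so α_wr = 0 and a = b = M = 0),

$$\rho_{c,a} = \gamma_{c,a} = \beta_{c,a} = \frac{100^{0.58}}{100^{0.58} + 291.20^{0.58}} \approx \frac{14.46}{14.46 + 26.87} \approx 0.350,$$

$$Q = 0.937\left(2 + 1 + \tfrac{1}{20}\right)(0.350) \approx 0.937 \times 1.067 \approx 1.00\ \text{bright}.$$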

7. MODEL COMPARISON

For stimuli assessed on a dark background, the results can be compared with the outcome of CAM15u [8]. Comparing the brightness predicted by CAM15u with the visual data yields a coefficient of determination R2 = 0.89, an RMSE of 17, and an average STRESS value of 0.23. CAM18sl applied to the same data set yields R2 = 0.93, an RMSE of 51 (due to the expanded scale), and an average STRESS value of 0.11. From this comparison, it is clear that CAM18sl also covers unrelated stimuli.

The brightness model developed by Hermans et al. is only applicable to neutral stimuli [21]. The goodness of fit between this achromatic brightness model and the neutral stimuli from the data set results in R2 = 0.96, an RMSE of 17, and an average STRESS value of 0.08. Applying CAM18sl to the same data set results in R2 = 0.98, an RMSE of 19, and an average STRESS value of 0.10. The two models correspond closely.

Very recently, TC 1-93 published a grey-scale calculation model for self-luminous devices [18]. This model is only applicable to neutral stimuli on neutral backgrounds. It calculates the number of equally perceptible differences of suprathreshold brightness contrast between the background luminance and the target luminance. Applying this model to all neutral self-luminous stimuli results in an encouraging R2 value of 0.71, but it does not outperform CAM18sl.

Although CIECAM02 was developed for related surface colors [4], it was also applied to the self-luminous scenes. Applying CIECAM02 to only the neutral stimuli (excluding the dark background) results in an R2 value of 0.81. Testing the goodness of fit between CIECAM02 and the whole data set (colored and neutral stimuli, again excluding the dark background) results in a low R2 value of 0.07. This is not surprising: as already pointed out by Luo and Li [39], the HK effect is not modelled in CIECAM02, which explains the poor performance.

8. CONCLUSION

The perceived brightness, hue, and amount of neutral for self-luminous stimuli surrounded by a neutral (4000 K) self-luminous background were investigated in a series of visual experiments. Brightness perception of a stimulus was assessed against that of a reference stimulus presented in temporal juxtaposition using magnitude estimation. Hue quadrature and amount of neutral perception were evaluated using a graphical response sheet. Based on the obtained visual data, a color appearance model, CAM18sl, was developed. The main features of this model are the use of the CIE 2006 cone fundamental space, a von Kries chromatic adaptation transformation, and the inclusion of luminance adaptation by using a Michaelis–Menten function and the cone excitations of the background. All model parameters were determined by minimizing the squared residual errors between the experimentally observed data and the model predictions. The performance of this new model was tested on a validation data set and verified by calculating the coefficient of determination R2, the RMSE, and the average STRESS. CAM18sl brightness is expressed in “bright,” which has been defined as the apparent brightness of a 10° equal-energy self-luminous stimulus having a CIE 1964 10° luminance of 100 cd/m² and surrounded by a dark background.

APPENDIX A: CAM18sl STEP BY STEP

Input: Radiance Le,λ(λ) of self-luminous stimulus and background

Step 1: Calculate the normalized cone excitations (CIE 2006) of the stimulus (ρ, γ, and β) and of the background (ρB, γB, and βB):

$$\rho = 676.7\int_{390}^{830} L_{e,\lambda}(\lambda)\,\bar{l}_{10}(\lambda)\,\mathrm{d}\lambda,\qquad \gamma = 794.0\int_{390}^{830} L_{e,\lambda}(\lambda)\,\bar{m}_{10}(\lambda)\,\mathrm{d}\lambda,\qquad \beta = 1461.5\int_{390}^{830} L_{e,\lambda}(\lambda)\,\bar{s}_{10}(\lambda)\,\mathrm{d}\lambda.$$
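Step 1 can be sketched numerically as a discrete approximation of these integrals. The Gaussian cone fundamentals in the test below are placeholders; in practice the tabulated CIE 2006 10° functions l̄10, m̄10, s̄10 would be loaded from the CIE 170-1:2006 data.

```python
import numpy as np

WL = np.arange(390, 831)  # wavelength grid, 1 nm steps from 390 to 830 nm

def cone_excitations(radiance, lbar, mbar, sbar):
    """Step 1: absolute normalized cone excitations (rho, gamma, beta).

    radiance         : spectral radiance L_e,lambda sampled on WL
    lbar, mbar, sbar : CIE 2006 10-deg cone fundamentals sampled on WL
    A rectangular-rule sum approximates the integrals on the 1 nm grid.
    """
    rho   = 676.7  * np.sum(radiance * lbar)   # d(lambda) = 1 nm
    gamma = 794.0  * np.sum(radiance * mbar)
    beta  = 1461.5 * np.sum(radiance * sbar)
    return rho, gamma, beta
```

Because the radiance enters linearly, doubling the spectral radiance doubles all three cone excitations, so the model input is genuinely absolute rather than normalized to a reference white.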

Step 2: Von Kries chromatic adaptation transformation in the CIE 2006 LMS cone space,

$$\begin{bmatrix}\rho_{c}\\ \gamma_{c}\\ \beta_{c}\end{bmatrix} = \begin{bmatrix}\rho_{wr}/\rho_{B} & 0 & 0\\ 0 & \gamma_{wr}/\gamma_{B} & 0\\ 0 & 0 & \beta_{wr}/\beta_{B}\end{bmatrix}\begin{bmatrix}\rho\\ \gamma\\ \beta\end{bmatrix},$$
where (ρwr, γwr, and βwr) represent the cone responses of the EEW reference white point at the same luminance as the test white; (ρB, γB, and βB) are the cone responses of the background, and (ρc, γc, and βc) are the cone responses of the corresponding colors.

Step 3: Luminance adaptation is modelled by the Michaelis–Menten formula,

$$\rho_{c,a} = \frac{\rho_{c}^{0.58}}{\rho_{c}^{0.58}+\left(291.20+71.8\,\alpha_{wr}^{0.78}\right)^{0.58}},\qquad \gamma_{c,a} = \frac{\gamma_{c}^{0.58}}{\gamma_{c}^{0.58}+\left(291.20+71.8\,\alpha_{wr}^{0.78}\right)^{0.58}},\qquad \beta_{c,a} = \frac{\beta_{c}^{0.58}}{\beta_{c}^{0.58}+\left(291.20+71.8\,\alpha_{wr}^{0.78}\right)^{0.58}},$$
where ρc,a, γc,a, and βc,a are the adapted cone responses of the corresponding color of the stimulus and αwr is identical to any of the cone responses of the corresponding color of the background.
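The compressive effect of Step 3, and the way a brighter background suppresses the adapted response, can be illustrated with a small sketch. It assumes (consistent with the Step 1 normalization) that α_wr can be set to the background luminance for an EEW background; the function name is illustrative.

```python
def adapted_response(x_c, alpha_wr):
    """Step 3: Michaelis-Menten compression of a chromatically adapted
    cone signal x_c. alpha_wr is the cone response of the corresponding
    color of the background; a brighter background raises the
    semisaturation level. Output is bounded between 0 and 1."""
    semisat = (291.20 + 71.8 * alpha_wr ** 0.78) ** 0.58
    return x_c ** 0.58 / (x_c ** 0.58 + semisat)
```

For a 100 cd/m² signal, the adapted response drops from about 0.35 on a dark background (α_wr = 0) to about 0.08 when α_wr = 300, mirroring the brightness reduction reported in Section 6.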

Step 4: Color opponent signals, a and b:

$$a = 0.63\left(\rho_{c,a} - \tfrac{12}{11}\gamma_{c,a} + \tfrac{1}{11}\beta_{c,a}\right),\qquad b = 0.12\left(\rho_{c,a} + \gamma_{c,a} - 2\beta_{c,a}\right).$$

Step 5: Perceptual attributes:

Colorfulness, M:

$$M = 3260\sqrt{a^{2}+b^{2}};$$
brightness, Q in bright:
$$Q = 0.937\left(\left(2\rho_{c,a} + \gamma_{c,a} + \tfrac{1}{20}\beta_{c,a}\right) + 0.0024\,M^{1.09}\right);$$
saturation, S:
$$S = \frac{M}{Q};$$
amount of neutral, W:
$$W = \frac{1}{1+2.29\,S^{2.09}}.$$
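The five steps above can be assembled into a single sketch. This is a non-authoritative implementation under two assumptions flagged in the comments: for a completely dark background the von Kries step is skipped (the transform is undefined at ρ_B = 0), and, since the reference white is EEW, its three cone responses are taken as ρ_wr = γ_wr = β_wr = α_wr.

```python
import numpy as np

def cam18sl(rho, gamma, beta, bg, alpha_wr):
    """Steps 2-5 of CAM18sl, starting from the cone excitations of Step 1.

    rho, gamma, beta : cone excitations of the stimulus
    bg               : (rho_B, gamma_B, beta_B) of the background
    alpha_wr         : cone response of the EEW reference white at the
                       background luminance (0 for a dark background)
    """
    rho_B, gamma_B, beta_B = bg
    # Step 2: von Kries chromatic adaptation. For an EEW reference white,
    # rho_wr = gamma_wr = beta_wr = alpha_wr (assumption of this sketch).
    # A dark background (all zeros) leaves the stimulus unadapted.
    if rho_B > 0:
        rho, gamma, beta = (alpha_wr / rho_B * rho,
                            alpha_wr / gamma_B * gamma,
                            alpha_wr / beta_B * beta)
    # Step 3: Michaelis-Menten luminance compression
    semisat = (291.20 + 71.8 * alpha_wr ** 0.78) ** 0.58
    rho_a, gamma_a, beta_a = (x ** 0.58 / (x ** 0.58 + semisat)
                              for x in (rho, gamma, beta))
    # Step 4: color opponent signals
    a = 0.63 * (rho_a - 12 / 11 * gamma_a + beta_a / 11)
    b = 0.12 * (rho_a + gamma_a - 2 * beta_a)
    # Step 5: perceptual attributes
    M = 3260 * np.hypot(a, b)                       # colorfulness
    Q = 0.937 * ((2 * rho_a + gamma_a + beta_a / 20)
                 + 0.0024 * M ** 1.09)              # brightness [bright]
    S = M / Q                                       # saturation
    W = 1 / (1 + 2.29 * S ** 2.09)                  # amount of neutral
    return {"M": M, "Q": Q, "S": S, "W": W}
```

An EEW stimulus of 100 cd/m² on a dark background (ρ = γ = β = 100 under the Step 1 normalization) returns Q ≈ 1 bright, M ≈ 0, and W = 1, reproducing the anchoring of Section 6.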

Funding

Onderzoeksraad, KU Leuven (C24/17/051); Fonds Wetenschappelijk Onderzoek (FWO) (12B4916N).

Acknowledgment

K. Smet thanks the FWO for the support through a postdoctoral fellowship.

REFERENCES

1. M. D. Fairchild, Color Appearance Models, 2nd ed. (Wiley, 2005).

2. C. Fernandez-Maloigne, Advanced Color Image Processing and Analysis (Springer International Publishing, 2013).

3. R. W. G. Hunt and M. Pointer, Measuring Colour, 4th ed. (Wiley, 2011).

4. N. Moroney, M. D. Fairchild, R. W. G. Hunt, C. Li, M. R. Luo, and T. Newman, “The CIECAM02 color appearance model,” in Color Imaging Conference (2002), pp. 23–27.

5. C. Li, Z. Li, Z. Wang, Y. Xu, M. R. Luo, G. Cui, M. Melgosa, and M. Pointer, “A revision of CIECAM02 and its CAT and UCS,” in Color Imaging Conference (2016), Vol. 1, pp. 208–212.

6. C. Fu, C. Li, G. Cui, M. R. Luo, R. W. G. Hunt, and M. R. Pointer, “An investigation of colour appearance for unrelated colours under photopic and mesopic vision,” Color Res. Appl. 37, 238–254 (2012).

7. M. Withouck, K. A. G. Smet, W. R. Ryckaert, M. R. Pointer, G. Deconinck, J. Koenderink, and P. Hanselaer, “Brightness perception of unrelated self-luminous colors,” J. Opt. Soc. Am. A 30, 1248–1255 (2013).

8. M. Withouck, K. A. G. Smet, W. R. Ryckaert, and P. Hanselaer, “Experimental driven modelling of the color appearance of unrelated self-luminous stimuli: CAM15u,” Opt. Express 23, 12045–12065 (2015).

9. G. Fechner, Elemente der Psychophysik, 1st ed. (Breitkopf und Härtel, 1860), Vol. 1.

10. S. S. Stevens, “The psychophysics of sensory function,” in Sensory Communication (Wiley, 1962), pp. 934–940.

11. C. J. Bartleson and E. J. Breneman, “Brightness perception in complex fields,” J. Opt. Soc. Am. 57, 953–957 (1967).

12. L. E. Arend and B. Spehar, “Lightness, brightness, and brightness contrast: 2. Reflectance variation,” Percept. Psychophys. 54, 457–468 (1993).

13. H.-W. Bodmann and M. La Toison, “Predicted brightness-luminance phenomena,” Light. Res. Technol. 26, 135–143 (1994).

14. S. L. Guth, “ATD01 model for color appearances, color differences, and chromatic adaptation,” Proc. SPIE 4421, 303–307 (2002).

15. M. Withouck, K. A. G. Smet, W. R. Ryckaert, G. Deconinck, and P. Hanselaer, “Predicting the brightness of unrelated self-luminous stimuli,” Opt. Express 22, 16298–16309 (2014).

16. Y. Nayatani, “Simple estimation methods for the Helmholtz–Kohlrausch effect,” Color Res. Appl. 22, 385–401 (1997).

17. CIE, “CIE international lighting vocabulary,” Tech. Rep. CIE S 017/E:2011: ILV (Commission Internationale de l’Éclairage, 2011).

18. CIE, “Grey-scale calculation for self-luminous devices,” Tech. Rep. CIE:TC-1.93 (Commission Internationale de l’Éclairage, 2018).

19. P. Whittle and P. D. C. Challands, “The effect of background luminance on the brightness of flashes,” Vision Res. 9, 1095–1110 (1969).

20. R. Carter, “Gray scale and achromatic color difference,” J. Opt. Soc. Am. A 10, 1380–1391 (1993).

21. S. Hermans, K. A. G. Smet, and P. Hanselaer, “Brightness model for neutral self-luminous stimuli and backgrounds,” LEUKOS 14, 231–244 (2018).

22. L. Michaelis and M. L. Menten, “Die Kinetik der Invertinwirkung,” Biochem. Z. 49, 333–369 (1913).

23. D. Xing, A. Ouni, S. Chen, H. Sahmoud, J. Gordon, and R. Shapley, “Brightness-color interactions in human early visual cortex,” J. Neurosci. 35, 2226–2232 (2015).

24. D. Jameson and L. M. Hurvich, “Perceived color and its dependence on focal, surrounding, and preceding stimulus variables,” J. Opt. Soc. Am. 49, 890–898 (1959).

25. L. M. Hurvich and D. Jameson, “An opponent-process theory of color vision,” Psychol. Rev. 64, 384–404 (1957).

26. ASTM, “Standard test method for unipolar magnitude estimation of sensory attributes,” Tech. Rep. E1697-05 e1 (2012).

27. P. A. García, R. Huertas, M. Melgosa, and G. Cui, “Measurement of the relationship between perceived and computed color differences,” J. Opt. Soc. Am. A 24, 1823–1829 (2007).

28. F. Kingdom and N. Prins, Psychophysics: A Practical Introduction, 2nd ed. (Elsevier, 2013).

29. J. von Kries, “Theoretische Studien über die Umstimmung des Sehorgans,” Festschr. Albrecht-Ludwigs-Univ. Freiburg 32, 143–158 (1902).

30. CIE, “Fundamental chromaticity diagram with physiological axes. Part 1,” Tech. Rep. CIE 170-1:2006 (Commission Internationale de l’Éclairage, 2006).

31. A. Stockman, L. T. Sharpe, and C. Fach, “The spectral sensitivity of the human short-wavelength sensitive cones derived from thresholds and color matches,” Vision Res. 39, 2901–2927 (1999).

32. A. Stockman and L. T. Sharpe, “The spectral sensitivities of the middle- and long-wavelength-sensitive cones derived from measurements in observers of known genotype,” Vision Res. 40, 1711–1737 (2000).

33. K. A. G. Smet, Q. Zhai, M. R. Luo, and P. Hanselaer, “Study of chromatic adaptation using memory color matches, Part II: colored illuminants,” Opt. Express 25, 8350–8365 (2017).

34. K. A. G. Smet, Q. Zhai, M. R. Luo, and P. Hanselaer, “Study of chromatic adaptation using memory color matches, Part I: neutral illuminants,” Opt. Express 25, 7732–7748 (2017).

35. J. M. Valeton and D. van Norren, “Light adaptation of primate cones: an analysis based on extracellular data,” Vision Res. 23, 1539–1547 (1983).

36. M. H. Kim, T. Weyrich, and J. Kautz, “Modeling human color perception under extended luminance levels,” ACM Trans. Graph. 28, 27 (2009).

37. R. M. Hanes, “The construction of subjective brightness scales from fractionation data: a validation,” J. Exp. Psychol. 39, 719–728 (1949).

38. S. S. Stevens, G. Stevens, and L. E. Marks, Psychophysics: Introduction to Its Perceptual, Neural, and Social Prospects (Wiley, 1986).

39. M. R. Luo and C. Li, “CIECAM02 and its recent developments,” in Advanced Color Image Processing and Analysis (Springer, 2013).

Figures (11)

Fig. 1. (left) Experimental room and (right) the central colored stimulus and part of the neutral low-luminance background.
Fig. 2. Chromaticity coordinates of all stimuli plotted in the CIE 1976 u′10, v′10 chromaticity diagram. (+) Saturated red, green, and blue; (triangle) less saturated red, green, and blue; and (circle) white stimuli (see Figs. 5 and 6).
Fig. 3. Graphical observer response sheet.
Fig. 4. Rescaled brightness values of the average observer as a function of the brightness predicted by the achromatic brightness model, together with the proportionality factors a and the R2 and STRESS values for each background independently. Error bars are standard errors; boxplots are included for LB = 0 cd/m².
Fig. 5. Brightness perception of the average observer as a function of the luminance level of the stimulus (LB = 50 cd/m²). (+) Most saturated red, green, and blue; (triangle) less saturated red, green, and blue; and (circle) white stimuli (see Fig. 2).
Fig. 6. Brightness perception as a function of the background luminance for all three luminance levels of the stimuli. (+) Saturated red, green, and blue; (triangle) less saturated red, green, and blue; and (circle) white stimuli (see Fig. 2). Full lines indicate the brightness values predicted by CAM18sl for the most saturated and white stimuli.
Fig. 7. Amount of neutral of the average observer as a function of the CIE 1976 u′10, v′10 saturation. Background luminance level LB = 50 cd/m². Error bars are standard errors.
Fig. 8. Hue quadrature of the average observer as a function of the hue quadrature predicted by the model. Error bars are standard errors.
Fig. 9. Brightness perception of the average observer as a function of the brightness predicted by CAM18sl. Error bars are standard errors; RMSE, R2, and STRESS values are given for each background separately.
Fig. 10. Amount of neutral for the average observer as a function of the CAM18sl saturation S.
Fig. 11. Amount of white of the average observer against the amount of white predicted by the CAM18sl model.

Tables (2)

Table 1. Unique Hue Data for Calculating the Unique Hue Quadrature (Hi)
Table 2. R2, RMSE, and Average STRESS Values of the Brightness, Hue Quadrature, and Amount of Neutral Perception of the Test Experiment and of the Validation Experiment

Equations (20)

$$\mathrm{STRESS} = \frac{1}{n}\sum_{i=1}^{n}\sqrt{\frac{\sum_{j=1}^{k}\left(A_{i,j}-f B_{i,j}\right)^{2}}{\sum_{j=1}^{k}\left(f B_{i,j}\right)^{2}}},\qquad \text{with } f=\frac{\sum_{j=1}^{k}A_{i,j}^{2}}{\sum_{j=1}^{k}A_{i,j}B_{i,j}}.$$

$$\rho = 676.7\int_{390}^{830}L_{e,\lambda}(\lambda)\,\bar{l}_{10}(\lambda)\,\mathrm{d}\lambda,\qquad \gamma = 794.0\int_{390}^{830}L_{e,\lambda}(\lambda)\,\bar{m}_{10}(\lambda)\,\mathrm{d}\lambda,\qquad \beta = 1461.5\int_{390}^{830}L_{e,\lambda}(\lambda)\,\bar{s}_{10}(\lambda)\,\mathrm{d}\lambda.$$

$$\begin{bmatrix}\rho_{c}\\ \gamma_{c}\\ \beta_{c}\end{bmatrix} = \begin{bmatrix}\rho_{wr}/\rho_{B} & 0 & 0\\ 0 & \gamma_{wr}/\gamma_{B} & 0\\ 0 & 0 & \beta_{wr}/\beta_{B}\end{bmatrix}\begin{bmatrix}\rho\\ \gamma\\ \beta\end{bmatrix}.$$

$$\rho_{c,a} = \frac{\rho_{c}^{0.58}}{\rho_{c}^{0.58}+\left(291.20+71.8\,\alpha_{wr}^{0.78}\right)^{0.58}},\qquad \gamma_{c,a} = \frac{\gamma_{c}^{0.58}}{\gamma_{c}^{0.58}+\left(291.20+71.8\,\alpha_{wr}^{0.78}\right)^{0.58}},\qquad \beta_{c,a} = \frac{\beta_{c}^{0.58}}{\beta_{c}^{0.58}+\left(291.20+71.8\,\alpha_{wr}^{0.78}\right)^{0.58}}.$$

$$a = c_{a}\left(\rho_{c,a}-\tfrac{12}{11}\gamma_{c,a}+\tfrac{1}{11}\beta_{c,a}\right),\qquad b = c_{b}\left(\rho_{c,a}+\gamma_{c,a}-2\beta_{c,a}\right).$$

$$h = \frac{180}{\pi}\tan^{-1}(b/a).$$

$$H = H_{i} + 100\,\frac{h-h_{i}}{h_{i+1}-h_{i}}.$$

$$M = c_{M}\sqrt{a^{2}+b^{2}}.$$

$$A = 2\rho_{c,a}+\gamma_{c,a}+\tfrac{1}{20}\beta_{c,a},\qquad Q = c_{A}\left(A + c_{1}M^{c_{HK2}}\right).$$

$$S = \frac{M}{Q}.$$

$$W = \frac{1}{1+2.29\,S^{2.09}}.$$

$$Q = 0.937\left(\left(2\rho_{c,a}+\gamma_{c,a}+\tfrac{1}{20}\beta_{c,a}\right)+0.0024\,M^{1.09}\right).$$

$$\rho = 676.7\int_{390}^{830}L_{e,\lambda}(\lambda)\,\bar{l}_{10}(\lambda)\,\mathrm{d}\lambda,\qquad \gamma = 794.0\int_{390}^{830}L_{e,\lambda}(\lambda)\,\bar{m}_{10}(\lambda)\,\mathrm{d}\lambda,\qquad \beta = 1461.5\int_{390}^{830}L_{e,\lambda}(\lambda)\,\bar{s}_{10}(\lambda)\,\mathrm{d}\lambda.$$

$$\begin{bmatrix}\rho_{c}\\ \gamma_{c}\\ \beta_{c}\end{bmatrix} = \begin{bmatrix}\rho_{wr}/\rho_{B} & 0 & 0\\ 0 & \gamma_{wr}/\gamma_{B} & 0\\ 0 & 0 & \beta_{wr}/\beta_{B}\end{bmatrix}\begin{bmatrix}\rho\\ \gamma\\ \beta\end{bmatrix},$$

$$\rho_{c,a} = \frac{\rho_{c}^{0.58}}{\rho_{c}^{0.58}+\left(291.20+71.8\,\alpha_{wr}^{0.78}\right)^{0.58}},\qquad \gamma_{c,a} = \frac{\gamma_{c}^{0.58}}{\gamma_{c}^{0.58}+\left(291.20+71.8\,\alpha_{wr}^{0.78}\right)^{0.58}},\qquad \beta_{c,a} = \frac{\beta_{c}^{0.58}}{\beta_{c}^{0.58}+\left(291.20+71.8\,\alpha_{wr}^{0.78}\right)^{0.58}},$$

$$a = 0.63\left(\rho_{c,a}-\tfrac{12}{11}\gamma_{c,a}+\tfrac{1}{11}\beta_{c,a}\right),\qquad b = 0.12\left(\rho_{c,a}+\gamma_{c,a}-2\beta_{c,a}\right).$$

$$M = 3260\sqrt{a^{2}+b^{2}};$$

$$Q = 0.937\left(\left(2\rho_{c,a}+\gamma_{c,a}+\tfrac{1}{20}\beta_{c,a}\right)+0.0024\,M^{1.09}\right);$$

$$S = \frac{M}{Q};$$

$$W = \frac{1}{1+2.29\,S^{2.09}}.$$