Optica Publishing Group

Colour gamut mapping between small and large colour gamuts: Part I. Gamut compression

Open Access

Abstract

This paper describes an investigation into the performance of different gamut compression algorithms (GCAs) in different uniform colour spaces (UCSs) when mapping between small and large colour gamuts. Gamut mapping is a key component of a colour management system and has drawn much attention in the last two decades. Two new GCAs, vividness-preserved (VP) and depth-preserved (DP), based on the concepts of ‘vividness’ and ‘depth’, are proposed and compared with other commonly used GCAs. Spatial GCAs were excluded, since the goal of this study was to develop an algorithm that could be implemented in real time for mobile phone applications. In addition, UCSs including CIELAB, CAM02-UCS, and a newly developed UCS, Jzazbz, were tested to verify how they affect the performance of the GCAs. A psychophysical experiment was conducted, and the results showed that one of the newly proposed GCAs, VP, gave the best performance among the GCAs tested and that Jzazbz is a promising UCS for gamut mapping.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

In the imaging industry, colour management systems [1] are widely used in colour communication to avoid mismatched colour perceptions. Such a system usually involves three major components: display characterization [2], a uniform colour space [3], and colour gamut mapping. All of these components are essential to achieve a colour fidelity or preference reproduction when communicating colour information between devices and media.

Display characterization models are developed to ensure faithful colour reproduction when transforming colours among different devices. However, an exact match is almost impossible when the devices have different colour ranges. Hence, gamut mapping algorithms (GMAs) are used to minimize the difference in colour perception between, for example, two displays having different gamut volumes.

The overall project is to investigate various existing gamut mapping algorithms when applied between small and large colour gamuts, including gamut compression and gamut extension algorithms (GCAs and GEAs). This article forms the first part of the paper series.

Extensive studies [4] have been carried out to find a generic method for gamut mapping, leading to numerous GMAs, especially gamut compression algorithms (GCAs) [5], which are the focus of this study. Generally, these algorithms can be divided into two broad categories: global GCAs (colour-by-colour) and local GCAs (spatially based). The former can be further divided into two sub-classes that use a clipping method and a compression method, respectively. Clipping is the simplest way to perform gamut mapping: it maps every out-of-gamut colour onto the nearest point on the destination gamut boundary while keeping all in-gamut colours unchanged [6,7]. Though the clipping method provides an easy and fast approach to gamut mapping, it may introduce defects including loss of detail, loss of colourfulness, loss of contrast, and a change of appearance due to the loss of out-of-gamut colours. As a result, its performance largely depends on the difference in gamut shape and on image intent. Hence, more recent investigations have concentrated on compression. Differing from clipping algorithms, which only change source colours outside the destination gamut, compression algorithms render colours both inside and outside the destination gamut. This helps to avoid the blocking issues inherent in the clipping method. The best-known algorithm in this class is the chroma-dependent sigmoidal lightness mapping followed by knee scaling toward the cusp, referred to as SGCK, which is illustrated on the left of Fig. 1. This algorithm, together with the hue-angle-preserving minimum colour difference (HPMINDE) algorithm, was recommended by the International Commission on Illumination (CIE) [8] as an anchor to reconcile the different interval scales used in different experiments. A key aspect of SGCK is to map colours towards the focal point on the lightness axis having the same lightness value as the CUSP, i.e. the colour of maximum chroma in a given hue plane. This mapping makes full use of chroma and makes the reproduction more pleasing [9]. Instead of mapping towards a single focal point, many algorithms first divide the original gamut into regions and then apply a different focal point to each of them. A representative of this kind is the algorithm developed by Kang [10], based on a psychophysical experiment in which observers were asked to adjust each image in a restricted gamut until it was similar to the original. In this model, the source gamut was divided into three sub-regions according to their lightness ranges. In the top and bottom regions, colours were mapped towards the bottom and top boundaries of their sub-regions on the lightness axis, respectively; colours in the middle region were mapped along lines of constant lightness. This method retains more lightness contrast while compromising chroma. A more sophisticated method [11] was proposed by MacDonald and is referred to as the topographic (TOPO) method. It follows topographic mapping routes and is presented on the right of Fig. 1. TOPO first defines a core region in which colours are kept unchanged and then constructs a set of mapping chords along which colours are mapped. The mapping chords are built in terms of the distance to the whitest point in the gamut: any point on the gamut boundary is defined by its distance to the white point along the boundary divided by the total length of the boundary. For example, S29% and C29% are corresponding points in the source and destination gamuts, respectively, each lying 29% of the way along its own boundary. The mapping direction for a colour on that chord then starts from S29% and ends at C29%. Hence, a unique mapping vector can be found for any colour in the source gamut, which helps to preserve the rendering of gradations in lightness and chroma.
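As a concrete illustration of the clipping idea discussed above (a minimal sketch only, not the exact HPMINDE implementation, which additionally restricts candidates to the constant-hue plane; the function name and sampled boundary are hypothetical):

```python
import math

def clip_to_boundary(colour, boundary_samples):
    """Nearest-point clipping: map a source colour to the point on a
    sampled destination gamut boundary with minimum Euclidean colour
    difference. In-gamut colours would be left unchanged by the caller.

    colour           -- (L, a, b) tuple, e.g. CIELAB
    boundary_samples -- list of (L, a, b) points on the destination boundary
    """
    return min(boundary_samples, key=lambda p: math.dist(p, colour))

# Toy boundary of three samples; the source colour is closest to the second.
boundary = [(50.0, 0.0, 0.0), (50.0, 40.0, 0.0), (80.0, 0.0, 0.0)]
print(clip_to_boundary((50.0, 60.0, 5.0), boundary))  # -> (50.0, 40.0, 0.0)
```

Because many distinct out-of-gamut colours can share the same nearest boundary point, this is exactly the mechanism behind the loss-of-detail ("blocking") defect described above.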

Fig. 1 Left: an illustration of the SGCK algorithm, where the source colour o is mapped to the destination colour r; point E is the focal point having the same lightness as the CUSP. Right: an illustration of the TOPO algorithm, with multiple mapping directions such as from S29% to C29% and from S71% to C71%.

Unlike colour-by-colour GCAs, spatial GCAs preserve more of the local detail. This is important because detail that is clearly perceptible in the original image can sometimes be mapped such that it is no longer perceptible, or even lost, by some global GCAs. Hence, many spatial GCAs have been implemented, together with global GCAs, as an effective means of preserving local detail. Bala [12] described a spatial GCA in which a gamut clipping method was first applied to the original image; the result was then subtracted from the original to obtain a luminance-difference image, which was processed with a spatial filter and added back to the gamut-mapped image. Morovic and Wang [13] described a multi-resolution, full-colour spatial GCA in which the original was initially decomposed into different spatial frequency bands. The lowest frequency-band image was processed with a lightness compression and an initial gamut mapping. Then the next higher frequency-band image was added to the gamut-mapped image and the same lightness mapping and gamut mapping were applied again. This procedure was repeated until all the bands had been used. Zamir [14] described a spatial gamut mapping algorithm that relies on a perceptually based variational framework [15]. An image energy function was adopted whose minimization leads to image enhancement and contrast modification; by reducing the image contrast, gamut compression can be achieved while keeping the perceived colours close to the original. Though these spatially based GCAs can often offer better preservation of detail and texture, they are sometimes computationally expensive or based on many assumptions, and may produce halo artifacts [14]. Spatial GCAs were therefore not studied here, since the goal of this study is to develop an algorithm that can be implemented in real time for mobile phone applications.

Two new GCAs are developed in this paper, inspired by the new scales developed by Berns [16]: vividness and depth. These variables represent the Euclidean distance from a colour to the white point for depth, and to the black point for vividness. In a typical three-dimensional colour space such as CIELAB, lightness, hue, and chroma are usually adopted to describe colour appearance. However, ‘chroma’ was found to be somewhat difficult for a naïve observer to understand and led to poor results in colour assessment [17]. Hence, studies were carried out to find more appropriate attributes to describe human colour perception. Cho et al. [18,19] developed four experimental scales, i.e. saturation, vividness, blackness, and whiteness, to predict colour perception, and found that their saturation scale gave close agreement with the Berns depth scale, and their blackness scale agreed well with the Berns vividness scale. It is claimed that changes in these variables are more representative of our daily experience and thus lead to a better description of this aspect of colour appearance. Hence, GCAs have been designed here to preserve these more easily perceived attributes, vividness and depth. All the GCAs introduced above are summarized in Table 1.
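For reference, the Berns-style scales can be written, up to notation, as Euclidean distances in CIELAB. The sketch below is a paraphrase of the description above (distance to the black point for vividness, to the white point for depth), not code from [16]:

```python
import math

def vividness(L, a, b):
    """Berns-style vividness: Euclidean distance from the black point
    (0, 0, 0) in CIELAB (an assumption-laden paraphrase of [16])."""
    return math.sqrt(L ** 2 + a ** 2 + b ** 2)

def depth(L, a, b):
    """Berns-style depth: Euclidean distance from the white point
    (100, 0, 0) in CIELAB."""
    return math.sqrt((100.0 - L) ** 2 + a ** 2 + b ** 2)

# A mid-lightness colour is equidistant from black and white here:
print(round(vividness(50, 30, 40), 3), round(depth(50, 30, 40), 3))  # -> 70.711 70.711
```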


Table 1. Summary of the GCAs introduced.

The contributions of this study are three-fold. Firstly, two new GCAs are proposed. Their differences from previous algorithms mainly lie in 1) a non-linear lightness mapping aimed at keeping as many colours unchanged as possible, and 2) a focal point at the black point (VP) or at the white point (DP) that conforms with human perception. Secondly, different mapping focal points were investigated. Four kinds of focal point are included in this study: at the blackest point (VP), at the whitest point (DP), on the lightness axis with the same lightness as the cusp (SGCK), and multiple mapping focal points (TOPO). This was to find the most effective mapping route for a GCA. Thirdly, the performance of different UCSs and their impact on GCAs was investigated. Three UCSs were tested: the most widely used, CIELAB [20]; the most uniform, CAM02-UCS [21]; and a recently developed UCS for high dynamic range (HDR) and wide-colour-gamut devices, Jzazbz [22]. Note that in the display industry, an HDR display must have either a peak brightness over 1000 cd/m2 and a black level below 0.05 cd/m2 (a contrast ratio of at least 20,000:1) or a peak brightness over 540 cd/m2 and a black level below 0.0005 cd/m2 (a contrast ratio of at least 1,080,000:1). To verify the results, a psychophysical experiment was conducted using a paired-comparison method. All these GCAs, together with a clipping algorithm, HPMINDE, were evaluated in the three UCSs. The results were analyzed to find the best GCA and UCS combination among all the candidates, such that a successful colour reproduction is obtained when transforming images from one device to another.

In summary, our study aimed to develop a generic GCA for all images, to investigate the performance of different GCAs by compression towards different neutral points, and to evaluate the performance of different UCSs for gamut mapping.

2. Development of new GCAs

The steps for the newly proposed algorithms, vividness-preserved and depth-preserved (referred to as VP and DP, respectively), are quite similar except for their different focal points: the blackest point on the lightness scale for VP and the whitest point for DP. The steps to implement these two algorithms are as follows:

Step 1: non-linear lightness mapping

When given original and reproduction gamuts, it is always necessary to implement a lightness mapping because their lightness ranges differ. The linear function described by Buckley [23] was used, as given in Eq. (1):

$$L' = \frac{L - \min(L_o)}{\max(L_o) - \min(L_o)} \,\bigl(\max(L_r) - \min(L_r)\bigr) + \min(L_r) \tag{1}$$
where L and L' represent the lightness value of the source colour and its lightness-mapped result, respectively; the subscripts o and r refer to the original and the reproduction, respectively.
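Eq. (1) amounts to a one-line rescaling of the lightness range; a minimal sketch (the function name is hypothetical):

```python
def linear_lightness_map(L, lo_min, lo_max, lr_min, lr_max):
    """Eq. (1): linearly rescale a source lightness L from the original
    range [lo_min, lo_max] into the reproduction range [lr_min, lr_max]."""
    return (L - lo_min) / (lo_max - lo_min) * (lr_max - lr_min) + lr_min

# Map the endpoints and midpoint of [0, 100] into a narrower range [5, 95]:
print(linear_lightness_map(0, 0, 100, 5, 95),
      linear_lightness_map(50, 0, 100, 5, 95),
      linear_lightness_map(100, 0, 100, 5, 95))  # -> 5.0 50.0 95.0
```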

Although it has been reported to work well in several cases (e.g. by Viggiano [24] and Laihanen [25]), defects may also be introduced, including an undesired loss of image contrast. This is mainly because lightness differences are also compressed by this function. To address this problem, the linear function was not used directly to perform the lightness mapping; instead, only colours on the gamut boundary were processed with it, keeping their chroma values unchanged. These boundary colours were regarded as a new gamut boundary, illustrated in Fig. 2 by the dashed line. The lightness mapping can then be performed between the source gamut and the newly built gamut, called the lightness-mapped gamut, as shown in Fig. 2.

Fig. 2 Illustration of lightness mapping. P is the colour to be mapped; P' is the lightness-mapped point.

For any colour within the original gamut, a nonlinear knee function was applied. If the colour was within the 90% region of the newly constructed gamut (the core region), it was kept unchanged; otherwise it was mapped towards the focal point E, which has the same lightness as the CUSP of the original gamut. The function adopted to perform this mapping is as follows:

$$\overline{EP'} = \begin{cases} \overline{EP}, & \overline{EP} \le 0.9\,\overline{EP_d} \\[4pt] 0.9\,\overline{EP_d} + \dfrac{\overline{EP} - 0.9\,\overline{EP_d}}{\overline{EP_s} - 0.9\,\overline{EP_d}} \cdot \dfrac{\overline{EP_d}}{10}, & \overline{EP} > 0.9\,\overline{EP_d} \end{cases} \tag{2}$$
where E is the focal point, P is the source colour in the original gamut, P' is the mapped output of this step, Ps is the intersection of the line EP with the original gamut boundary, and Pd is the intersection of the line EP with the destination gamut boundary. This nonlinear lightness mapping is quite similar to the last step of the SGCK algorithm, which uses a similar knee function to that described here.
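The knee function of Eq. (2), written in terms of the three distances from the focal point, can be sketched as follows (a sketch with the 90% core exposed as a parameter; names are hypothetical):

```python
def knee_compress(ep, ep_s, ep_d, core=0.9):
    """Knee compression of Eq. (2), in terms of distances from focal point E:
    ep   -- |EP|,  distance of the source colour from E
    ep_s -- |EPs|, distance from E to the source-gamut boundary along EP
    ep_d -- |EPd|, distance from E to the destination-gamut boundary along EP
    Colours inside the core region are unchanged; the remaining shell
    [core*|EPd|, |EPs|] is linearly squeezed into [core*|EPd|, |EPd|]."""
    k = core * ep_d
    if ep <= k:
        return ep  # core region: keep the colour unchanged
    return k + (ep - k) / (ep_s - k) * (ep_d - k)

print(knee_compress(50, 100, 80))   # core region -> 50 (unchanged)
print(knee_compress(100, 100, 80))  # source boundary maps onto destination boundary -> 80.0
```

Note that with `core=0.9` the factor `(ep_d - k)` equals `ep_d / 10`, matching the last term of Eq. (2).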

Step 2: focal knee scaling

After Step 1, colours from the original gamut were mapped into the lightness range of the reproduction. In this step, these colours were further mapped towards the focal point, i.e. the blackest point for VP and the whitest point for DP. Similar to Step 1, a knee function was applied to keep as many points unchanged as possible. Specifically, if EP was smaller than 0.9·EPd, the colour was in the core region; otherwise it was mapped towards the focal point using Eq. (2). Note that if there was no intersection between the gamut boundary of the reproduction and the line EP, the upper side (or the lower side for DP) of the boundary was extended, as illustrated on the right of Fig. 3.

Fig. 3 Illustration of mapping towards the focal point (VP).

Step 3: mapping towards lightness axis

After Step 2, most colours were inside the destination gamut. However, a small number of colours, e.g. P in Fig. 4, in the lower region (or the upper region for DP) still needed a final ‘make-up’ step. Instead of clipping, the same knee function as used in Steps 1 and 2 was adopted to ensure that the final outputs were within the destination gamut. In this step, the focal point was set on the lightness axis at the same lightness as the colour to be mapped. Again, if EP was smaller than 0.9·EPd, the colour was kept unchanged. This is illustrated in Fig. 4.

Fig. 4 Illustration of mapping towards the lightness axis.

3. Experimental

In this experiment, sRGB [26] was designated as the destination gamut. sRGB is the most commonly used colour gamut and has been considered a de facto standard for many years. The conversion between sRGB and XYZ in this step follows the calculation procedure defined in the IEC 61966-4 (1999) standard [26], except for the addition of a small offset (dark noise), which was adopted to better simulate real conditions. The source gamut was set as the display gamut, which was slightly larger (by 6.6%) than the standard DCI-P3 [27] gamut. Figure 5 shows different hue planes of these two gamuts in CIELAB colour space and Fig. 6 shows the gamuts in the u′v′ chromaticity diagram. It should be noted that the destination gamut is fully enclosed by the source gamut, as shown in Fig. 5, and all the GCAs were tested under this particular condition.
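For context, the standard sRGB-to-XYZ conversion referred to above can be sketched as follows. This is the widely published sRGB definition (D65 primaries), not the paper's exact code; the dark-noise offset is exposed as a free parameter because its value is not given in the text:

```python
# Row-major sRGB (linear) -> XYZ matrix for D65, as standardized for sRGB.
M = [(0.4124, 0.3576, 0.1805),
     (0.2126, 0.7152, 0.0722),
     (0.0193, 0.1192, 0.9505)]

def srgb_to_xyz(rgb, offset=0.0):
    """Decode the sRGB transfer function and convert to CIE XYZ.
    offset -- a small black-level term (dark noise), as used in the
    experiment; its actual value is an open parameter here."""
    lin = [c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
           for c in rgb]
    return [sum(m * c for m, c in zip(row, lin)) + offset for row in M]

# White (1, 1, 1) should land on the D65 white point, ~[0.9505, 1.0, 1.089]
print([round(v, 4) for v in srgb_to_xyz((1.0, 1.0, 1.0))])
```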

Fig. 5 The illustration of the original and reproduction gamuts in CIELAB space.

Fig. 6 Illustration of the gamuts of display, standard DCI-P3 and sRGB.

Apart from the traditional CIELAB colour space, this study also investigated two other state-of-the-art uniform colour spaces: CAM02-UCS and a newly proposed HDR UCS named Jzazbz. CAM02-UCS is a further elaborated UCS derived from the CIECAM02 appearance model [28], which gives more accuracy in predicting colour differences of different scales. The Jzazbz space is specially designed to cover a larger dynamic range for both SDR and HDR content, trading off perceptually uniform colour difference against hue linearity. All these UCSs were included to test how the colour space affects the gamut mapping results.

3.1 Psychophysical experiment

The performance of the different GCAs and UCSs was evaluated using the results of a psychophysical experiment. The whole experiment was carried out following the guidelines of CIE Publication No. 156 [8] on a NEC PA272W 27″ LCD display. Colours were measured automatically in a darkened room using a tele-spectroradiometer (TSR), a Jeti Specbos 1211uv. The peak luminance of the display was set to 130 cd/m2 with a CCT close to D65. The spatial uniformity of the display was 0.95 ΔE*ab, calculated as the mean colour difference between the average and 9 indexed locations (the display area was divided into 3×3 segments), and the repeatability was 0.89 ΔE*ab averaged over a duration of 6 hours. A GoG characterization model [29] was implemented to characterize this display; its prediction accuracy was 0.86 ΔE*ab averaged over the 24 colours of the Macbeth ColorChecker chart, with a standard deviation of 0.45 ΔE*ab and a range from 0.42 ΔE*ab to 1.88 ΔE*ab, ensuring good model accuracy.

Six different images were selected; they are shown in Fig. 7. Images 7(1)–7(3) were selected from ISO 12640 [30,31] and are named N3 Fruit basket, N7 Threads and N7 Musician, respectively. Image 7(4) was composed of colour patches specially selected from the Swedish Natural Color System (NCS) [32] unitary hues, including red, green, blue and yellow. Image 7(5), called ‘Picnic’, is from the Sano et al. experiment [33]. Image 7(6) was selected from the CIE Technical Committee 8-3 test images and is named ‘Ski’. Many memory colours are included in Images 7(1) (Fruit basket), 7(3) (Musician) and 7(5) (Picnic), such as fruits, skin, sky and grass. Changes in lightness can be clearly seen in Images 7(2) (Threads) and 7(5) (Picnic), which have dark and bright tones, respectively. Image 7(4) (Colour patches) was included to test hue linearity and colour fidelity. Artificial drawings with sharp lines, fine details, and subtle textures are included in Image 7(6) (Ski), as well as some saturated colour objects. In the experiment, Image 7(1) (Fruit basket) was repeated to reveal intra-observer variability. With 5 GCAs (HPMINDE, SGCK, TOPO, DP and VP) and 3 colour spaces (CIELAB, CAM02-UCS, Jzazbz), there were 15 different reproductions of each image, giving 105 reproductions in total for the 7 images. Each image included a 5% white border to ensure a fixed adapting white point.

Fig. 7 The six test images: (1) Fruit basket, (2) Threads, (3) Musician, (4) Colour patches, (5) Picnic, and (6) Ski.

The procedure to produce these reproductions was as follows: 1) the original image in RGB format was transformed into an XYZ image using the forward display model; 2) the XYZ image was processed using each of the different GCAs and UCSs; 3) the processed XYZ image was transformed back into an RGB image using the reverse display model.
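The three-step pipeline can be sketched with hypothetical callables standing in for the display characterization model and the gamut compression step:

```python
def reproduce(image_rgb, forward_model, gca, reverse_model):
    """Reproduction pipeline sketch with hypothetical callables:
    1) forward display model: RGB -> XYZ
    2) gamut compression in a chosen UCS: XYZ -> XYZ
    3) reverse display model: XYZ -> RGB
    """
    xyz = forward_model(image_rgb)   # step 1
    mapped = gca(xyz)                # step 2
    return reverse_model(mapped)     # step 3

# Toy stand-ins: a linear display model and a 10% compression towards zero.
print(reproduce(0.5, lambda r: 2 * r, lambda x: 0.9 * x, lambda x: x / 2))  # -> 0.45
```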

Figure 8 shows a typical experimental situation using the paired-comparison method. Fifteen observers, nine males and six females with a mean age of 27 and a standard deviation of 4.98, took part in this experiment. Three images were presented side by side within a scene, with the source image in the middle and a reproduction on each side. They were viewed at a distance of approximately 70 cm, giving a viewing field of approximately 65°. The background of the interface was set to a mid grey with an L*a*b* value of [50 0 0], first calculated using the GoG model and further refined using the measured result. Observers were asked to decide which reproduction appeared more similar to the original. Each reproduction was compared with every other reproduction, giving C(15,2) = (15×14)/2 = 105 pair-wise comparisons per image per observer. In total, 105 (comparisons) × 7 (images) × 15 (observers) = 11,025 comparisons were made on the display.

Fig. 8 Experimental setup for comparison of the original image (centre) with reproduction images (left and right).

4. Results and discussion

4.1 Inter- and intra- observer variability

Table 2 shows the inter- and intra-observer variability in terms of wrong decisions (WD) [34], representing the agreement of the observers’ results with themselves and with the mean scores. For intra-observer variability, the wrong-decision rate is defined as the number of wrong decisions divided by the total number of choices, where a wrong decision means that two repeated assessments disagree with each other, e.g. the first answer is ‘yes’ and the second ‘no’. For inter-observer variability, the wrong-decision rate was calculated as the wrong percentage averaged over all pairs of observers, where the wrong percentage is interpreted as the probability of making a wrong decision, i.e. the number of minority choices divided by the total number of choices. A higher wrong-decision rate means poorer observer performance. The mean intra- and inter-observer wrong-decision rates were 17.33% and 25.32%, respectively. The results for Image 7(1) (Fruit basket) and the repeated image are plotted in Fig. 9. The points lie very close to the 45-degree line, indicating that the results of the two runs agreed well with each other.
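The intra-observer wrong-decision rate described above can be sketched as follows (toy data; the function name is hypothetical):

```python
def wrong_decision_rate(first_pass, second_pass):
    """Intra-observer wrong-decision rate: the fraction of repeated paired
    comparisons on which an observer's two answers disagree."""
    disagreements = sum(a != b for a, b in zip(first_pass, second_pass))
    return disagreements / len(first_pass)

# 8 repeated choices, 2 of which changed between the two passes -> 25%
print(wrong_decision_rate("ABABABAB", "ABABBBAA"))  # -> 0.25
```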


Table 2. Wrong decisions of intra-observer and inter-observer in percentage.

Fig. 9 Z-scores of different GCA and UCS combinations (5*3) for the repeated image.

4.2 Result of the psychophysical experiment

The raw results were first calculated as probability data and then transformed into z-scores following Case V of Thurstone’s Law of Comparative Judgment [35]. The standard deviation of the z-score values was assumed to be σ = 1/√2, and the 95% confidence interval (CI) of a z-score value A can therefore be calculated as Eq. (3):

$$CI = A \pm \frac{1.96\,\sigma}{\sqrt{N}} \tag{3}$$
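A minimal sketch of the Case V z-score computation and the Eq. (3) confidence half-width, assuming a matrix of pairwise win counts. Clamping proportions of exactly 0 or 1 is a common practical workaround, not something specified in the text:

```python
from statistics import NormalDist

def zscores(wins, n_obs):
    """Thurstone Case V sketch: wins[i][j] is the number of times stimulus i
    was preferred over stimulus j by n_obs observers. Each proportion is
    converted to a z-score via the inverse normal CDF and averaged per row."""
    nd = NormalDist()
    k = len(wins)
    out = []
    for i in range(k):
        zs = []
        for j in range(k):
            if i == j:
                continue
            p = wins[i][j] / n_obs
            p = min(max(p, 0.5 / n_obs), 1.0 - 0.5 / n_obs)  # clamp 0/1 extremes
            zs.append(nd.inv_cdf(p))
        out.append(sum(zs) / len(zs))
    return out

def ci95_halfwidth(n_obs, sigma=2 ** -0.5):
    """Eq. (3): the 95% CI is A +/- 1.96 * sigma / sqrt(N)."""
    return 1.96 * sigma / n_obs ** 0.5

# Two stimuli, 15 observers: A beats B 12 times -> p = 0.8, z ~= +/-0.84
print([round(z, 3) for z in zscores([[0, 12], [3, 0]], 15)])  # -> [0.842, -0.842]
```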

In this experiment, a higher z-score implies a closer match to the original image; in other words, the higher the z-score, the better the performance of the algorithm. The overall z-scores for the 5 GCAs and 3 UCSs are plotted in Fig. 10 and their rankings are given in Table 3. These results show that HPMINDE and VP performed considerably better than the other GCAs; TOPO and SGCK ranked third and fourth, and DP was evaluated as the worst. HPMINDE ranked highest for Images 7(2) (Threads) and 7(6) (Ski). VP ranked highest for Images 7(1) (Fruit basket), 7(3) (Musician) and 7(4) (Colour patches).

Fig. 10 Overall z-scores for the 5 GCAs and 3 UCSs.


Table 3. Ranking of GCAs and UCSs Performance versus Image. A mean rank of 1 denotes the best and of 15 denotes the worst, L = CIELAB, C = CAM02-UCS, J = Jzazbz.

In general, HPMINDE aims to find the closest match in the destination gamut for any source colour but may lead to lightness or chroma inversions, i.e. colours outside the intersection of the original and reproduction gamuts may be mapped to the same destination. Accordingly, Images 7(2) (Threads) and 7(5) (Picnic) were expected to lose contrast in highly chromatic parts or lose detail in high-spatial-frequency areas, including the ball of yarn, the grass and the hair. However, this was not clearly shown, due to the relatively small difference between the source and destination gamuts. After detailed inspection, it was found that some of the colours were actually within the destination gamut even though they were perceived as highly chromatic. Nevertheless, for high-chroma objects such as the cello in Fig. 11, a great deal of detail was clearly lost on the left (HPMINDE) while it remained unchanged on the right (VP). However, since observers were asked to judge the overall impression of the image, such local differences in small areas could easily be overlooked. This explains why HPMINDE was ranked the highest. Both SGCK and TOPO made full use of the high-chroma region, leading to their similar performance for highly chromatic objects. However, the SGCK method adopted a single destination for all source colours, sometimes making light areas too dark and shadow areas too light, as for the highly saturated green in Image 7(4) (Colour patches); the contrast was therefore reduced. Furthermore, with TOPO the contrast was over-emphasized by the dark grey background in Image 7(3) (Musician), a result of its tendency to preserve gradations of lightness and chroma. DP and VP are the newly proposed algorithms that aim to preserve depth and vividness, which reflect typical visual phenomena of colour perception. In this study, DP performed the worst among all the GCAs tested. This mainly lies in the boost to the lightness of the whole image, which is very obvious for images with a dark tint. In contrast, VP outperformed all the other compression GCAs, indicating the consistency of vividness with human perception. This is particularly obvious for Image 7(4) (Colour patches). Observers even considered VP better than HPMINDE, although only the vividness difference was minimized by VP. A possible explanation is that VP strikes a good balance between preserving colour gradation and preserving colour difference. However, for an image with an overall bright impression, e.g. Image 7(5) (Picnic), VP did not perform well, due to a noticeable change of lightness. Hence, further investigations should be conducted on how to maintain consistent human perception for colours at a high lightness level when using the VP method.

Fig. 11 Loss of detail with HPMINDE (left: HPMINDE, right: VP).

In addition to testing GCAs, the performance of the different UCSs was also investigated. There are many criteria for developing a UCS, such as uniformity, hue linearity, and grey-scale convergence [22]. Among the three UCSs tested, CIELAB gave the worst perceptual uniformity and a serious hue shift in the blue region. However, it has no grey-scale convergence problem, i.e. all chroma scales at different lightness levels converge to a single neutral chromaticity point. CAM02-UCS consistently gave the best perceptual uniformity but was worse for hue linearity and grey-scale convergence. Jzazbz is a trade-off between hue linearity and perceptual uniformity, giving an acceptable level for all three criteria. The overall results showed that no single UCS performed better than the others. Specifically, CAM02-UCS performed best for images having large objects of a single colour, such as Image 7(1) (Fruit basket) and Image 7(4) (Colour patches). Jzazbz ranked first for Image 7(3) (Musician) due to its good hue linearity, especially in the blue region. Surprisingly, CIELAB did not perform badly for the images tested: only some colour shifts in the blue region were observable, and these were not serious when judging the overall impression of an image. For the other images, no significant differences could be found between these UCSs, indicating that the choice of UCS may not be a dominant factor in gamut mapping; it has more to do with image content.

4.3 Image dependency

One of the major limitations of current GCAs is that their performance depends not only on the source and destination gamuts but also on image content: even if an algorithm performs best for some images, it may still fail badly for others. Hence, a good algorithm should have a consistent performance, in terms of z-scores, across different test images. This is the so-called effect of image dependency. In this study, image dependency was evaluated using the range and standard deviation of the z-scores over the different images; a smaller range or standard deviation means weak image dependency, and such an algorithm is more likely to be applicable to different types of images. This is a common method of testing the performance of different UCSs or GCAs; ideally, a better UCS or GCA should perform consistently well across all images. Table 4 summarizes the image dependency of the different GCAs and UCSs. It is obvious that image dependency is strongly related to the UCS: Jzazbz always gave the smallest standard deviation, followed by CAM02-UCS, with CIELAB the largest. Similarly, for the GCAs, TOPO and SGCK showed the least image dependency, followed by HPMINDE and VP; however, their differences were not significant, except for DP, which performed the worst.
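The range/standard-deviation measure of image dependency can be sketched as follows (toy z-scores; names hypothetical):

```python
from statistics import pstdev

def image_dependency(z_by_image):
    """Image dependency of one GCA/UCS combination: the range and the
    standard deviation of its z-scores across the test images.
    Smaller values indicate more consistent, less image-dependent behaviour."""
    return max(z_by_image) - min(z_by_image), pstdev(z_by_image)

# Toy z-scores of one algorithm over four images:
rng, sd = image_dependency([0.5, -0.2, 0.3, 0.1])
print(round(rng, 2), round(sd, 3))  # -> 0.7 0.259
```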


Table 4. Image Dependency for GCAs and UCSs.

5. Conclusion

Overall, when mapping colours from a typical larger display gamut to the traditional sRGB gamut, HPMINDE and VP were clearly better than the other GCAs investigated in this study. This indicates that 1) the more colours that are preserved, the better: both VP and HPMINDE have a large core region within which colours are kept unchanged; and 2) compression towards the black point is significantly better than towards other focal points on the lightness axis. This can be concluded from comparing VP (compression towards the black point), DP (towards the white point), SGCK (towards the point on the lightness axis with the same lightness as the CUSP), and TOPO (towards multiple convergence points). The finding that vividness is an effective scale not only for describing colour appearance but also for gamut compression is both interesting and encouraging.

It is also of interest that Jzazbz is a promising uniform colour space for gamut mapping, because its performance was comparable to CAM02-UCS and it showed the least image dependency.

References and links

1. J. Morovic and J. Lammens, Color Management (John Wiley & Sons, Inc., 2007), pp. 159–206.

2. G. Sharma and R. Bala, Digital Color Imaging Handbook (CRC press, 2002).

3. M. D. Fairchild, Color Appearance Models (John Wiley & Sons, 2013).

4. J. Morovic and M. R. Luo, “Evaluating Gamut Mapping Algorithms for Universal Applicability,” Color Res. Appl. 26(1), 85–102 (2001).

5. J. Morovic, Color Gamut Mapping (John Wiley & Sons, 2008).

6. G. G. Marcu and S. Abe, “Gamut Mapping for Color Simulation on CRT Devices,” in Proceedings of SPIE-The International Society for Optical Engineering (1996), pp. 308–315.

7. J. Morovic and P. L. Sun, “Non-Iterative Minimum Gamut Clipping,” in Proceedings of the 9th Color Imaging Conference (2001), pp. 251–256.

8. Commission Internationale de l’Éclairage (CIE), “Guidelines for the Evaluation of Gamut Mapping Algorithms,” in CIE Publication No.156 (2003).

9. L. MacDonald, J. Morovic, and D. Saunders, “Evaluation of Colour Fidelity for Reproductions of Fine Art Paintings,” Mus. Manag. Curator. 14(3), 253–281 (1995). [CrossRef]  

10. B. H. Kang, M. S. Cho, J. Morovic, and M. R. Luo, “Gamut Compression Algorithm Development Using Observer Experimental Data,” in Proceedings of the 7th Color Imaging Conference (2000), pp. 295–300.

11. L. MacDonald, J. Morovic, and K. Xiao, “A Topographic Gamut Compression Algorithm,” J. Imaging Sci. Technol. 46(3), 228–236 (2002).

12. R. Bala, R. Dequeiroz, R. Eschbach, and W. Wu, “Gamut Mapping to Preserve Spatial Luminance Variations,” J. Imaging Sci. 45(5), 436–443 (2001).

13. J. Morovic and Y. Wang, “A Multi-Resolution, Full-Colour Spatial Gamut Mapping Algorithm,” in Proceedings of Color Imaging Conference (2003), pp. 282–287.

14. S. W. Zamir, J. Vazquez-Corral, and M. Bertalmío, “Gamut Mapping through Perceptually-Based Contrast Reduction,” in Pacific-Rim Symposium on Image and Video Technology (2013), pp. 1–11.

15. M. Bertalmío, V. Caselles, E. Provenzi, and A. Rizzi, “Perceptual Color Correction Through Variational Techniques,” IEEE Trans. Image Process. 16(4), 1058–1072 (2007). [CrossRef]   [PubMed]  

16. R. S. Berns, “Extending CIELAB: Vividness,Vab*Dab*Tab*,” Color Res. Appl. 39(4), 322–330 (2014). [CrossRef]  

17. M. R. Luo, A. A. Clarke, P. A. Rhodes, A. Schappo, S. A. R. Scrivener, and C. J. Tait, “Quantifying Colour Appearance. Part I. LUTCHI Colour Appearance Data,” Color Res. Appl. 16(3), 166–180 (1991). [CrossRef]  

18. Y. J. Cho, L. C. Ou, and R. Luo, “A Cross-Cultural Comparison of Saturation, Vividness, Blackness and Whiteness Scales,” Color Res. Appl. 42(2), 203–215 (2016). [CrossRef]  

19. Y. J. Cho, L. C. Ou, G. Cui, and R. Luo, “New Colour Appearance Scales for Describing Saturation, Vividness, Blackness, and Whiteness,” Color Res. Appl. 42(5), 552–563 (2017). [CrossRef]  

20. Commission Internationale de l’Éclairage (CIE), “Recommendations on Uniform Colour Spaces, Colour Difference Equations, Psychometrics Colour Terms,” in Supplement No. 2 of CIE Publication No. 15 (1971).

21. M. R. Luo, G. Cui, and C. Li, “Uniform Colour Spaces Based on CIECAM02 Colour Appearance Model,” Color Res. Appl. 31(4), 320–330 (2006). [CrossRef]  

22. M. Safdar, G. Cui, Y. J. Kim, and M. R. Luo, “Perceptually Uniform Color Space for Image Signals Including High Dynamic Range and Wide Gamut,” Opt. Express 25(13), 15131–15151 (2017). [CrossRef]   [PubMed]  

23. R. R. Buckley, Reproducing Pictures with Non-Reproducible Colors (Massachusetts Institute of Technology, 1978).

24. J. A. Viggiano and N. M. Moroney, “Color Reproduction Algorithms and Intent,” in Color and Imaging Conference (1995), pp. 152–154.

25. P. Laihanen, “Colour Reproduction Theory Based on the Principles of Colour Science,” in Proceedings of Advances in Printing Science and Technology (1988).

26. International Electrotechnical Commission (IEC), “Multimedia Systems and Equipment-Colour Measurement and Management-Part 2-1: Colour Management-Default RGB Colour Space-sRGB,” in IEC 61966–4(1999).

27. Society of Motion Picture and Television Engineers (SMPTE), “Digital Cinema Quality - Reference Projector and Environment,” in SMPTE RP 431–2(2011).

28. N. Moroney, M. D. Fairchild, R. W. G. Hunt, C. Li, M. R. Luo, and T. Newman, “The CIECAM02 Color Appearance Model,” in Color and Imaging Conference (2002), pp. 23–27.

29. R. S. Berns, “Methods for Characterizing CRT Displays,” Displays 16(4), 173–182 (1996). [CrossRef]  

30. ISO 12640–1:1997. Graphic Technology - Prepress Digital Data Exchange - Part 1: CMYK Standard Colour Image Data (CMYK/SCID), 1997.

31. ISO 12640–2:2004. Graphic Technology - Prepress Digital Data Exchange - Part 2: XYZ/sRGB Encoded Standard Colour Image Data (XYZ/SCID), 2004.

32. A. Hård and L. Sivik, “NCS-Natural Color System: A Swedish Standard for Color Notation,” Color Res. Appl. 6(3), 129–138 (1981). [CrossRef]  

33. C. Sano, T. Song, and M. R. Luo, “Colour Differences for Complex Images,” in Color Imaging Conference (2003), pp. 121–126.

34. K. McLaren, “An Introduction to Instrumental Shade Passing and Sorting and a Review of Recent Developments,” Color. Technol. 92(9), 317–326 (2010).

35. L. L. Thurstone, “A Law of Comparative Judgment,” Psychol. Rev. 34(4), 273–286 (1927). [CrossRef]  



Figures (11)

Fig. 1. Left: illustration of the SGCK algorithm, where the source colour o is mapped to the destination colour r; point E is the focal point having the same lightness as the cusp. Right: illustration of the TOPO algorithm, which uses multiple mapping directions, e.g. from S29% to C29% and from S71% to C71%.
Fig. 2. Illustration of lightness mapping. P is the colour to be mapped; P′ is the point after lightness mapping.
Fig. 3. Illustration of mapping towards the focal point (VP).
Fig. 4. Illustration of mapping towards the lightness axis.
Fig. 5. Illustration of the original and reproduction gamuts in CIELAB space.
Fig. 6. Illustration of the gamuts of the display, standard DCI-P3, and sRGB.
Fig. 7. The six test images: (1) Fruit basket, (2) Threads, (3) Musician, (4) Colour patches, (5) Picnic, and (6) Ski.
Fig. 8. Experimental setup for comparison of the original image (centre) with reproduction images (left and right).
Fig. 9. Z-scores of the different GCA and UCS combinations (5 × 3) for the repeated image.
Fig. 10. Overall z-scores for the 5 GCAs and 3 UCSs.
Fig. 11. Failure case of HPMINDE (left: HPMINDE, right: VP).

Tables (4)

Table 1. Summary of the GCAs introduced.
Table 2. Percentage of wrong decisions for intra-observer and inter-observer variability.
Table 3. Ranking of GCA and UCS performance versus image. A mean rank of 1 denotes the best and 15 the worst; L = CIELAB, C = CAM02-UCS, J = Jzazbz.
Table 4. Image dependency for GCAs and UCSs.

Equations (3)

Equations on this page are rendered with MathJax.

$$L' = \frac{L - \min(L_o)}{\max(L_o) - \min(L_o)} \times \bigl(\max(L_r) - \min(L_r)\bigr) + \min(L_r)$$

$$\overline{EP'} = \begin{cases} \overline{EP}, & \overline{EP} \le 0.9\,\overline{EP_d} \\[4pt] 0.9\,\overline{EP_d} + \dfrac{\overline{EP} - 0.9\,\overline{EP_d}}{\overline{EP_s} - 0.9\,\overline{EP_d}} \cdot \dfrac{\overline{EP_d}}{10}, & \overline{EP} > 0.9\,\overline{EP_d} \end{cases}$$

$$CI = A \pm 1.96\,\frac{\sigma}{\sqrt{N}}$$
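The three equations above (linear lightness rescaling, 90% knee compression towards a focal point, and the 95% confidence interval on a z-score) can be sketched in code. This is a minimal scalar sketch; the function names are our own, and it assumes the distances EP, EPs, and EPd are measured along a single mapping line from the focal point E, whereas in practice the mapping is applied per colour along each mapping direction.

```python
import math

def map_lightness(L, L_o_min, L_o_max, L_r_min, L_r_max):
    """Linear lightness rescaling from the original range
    [min(Lo), max(Lo)] to the reproduction range [min(Lr), max(Lr)]."""
    return (L - L_o_min) / (L_o_max - L_o_min) * (L_r_max - L_r_min) + L_r_min

def compress_knee(EP, EP_s, EP_d):
    """90% knee-function compression towards the focal point E.
    EP:   distance from E to the colour being mapped
    EP_s: distance from E to the source gamut boundary
    EP_d: distance from E to the destination gamut boundary
    Colours within 90% of the destination gamut are left unchanged;
    the remainder are compressed linearly into the outer 10%."""
    knee = 0.9 * EP_d
    if EP <= knee:
        return EP
    return knee + (EP - knee) / (EP_s - knee) * (EP_d / 10)

def confidence_interval(A, sigma, N):
    """95% confidence interval around a mean z-score A from N observers."""
    half = 1.96 * sigma / math.sqrt(N)
    return (A - half, A + half)
```

Note that `compress_knee` returns its input unchanged for any colour inside the 90% core region, which is consistent with the observation in the conclusion that preserving a large core of unmodified colours is beneficial.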