
Application of the S-CIELAB color model to processed and calibrated images with a colorimetric dithering method

Open Access

Abstract

This work uses the S-CIELAB color model to compare images that have been calibrated and processed with a colorimetric dithering method that simulates increases in viewing distance. Firstly, we obtain XYZ calibrated images by applying the appropriate color transformations to the original images; these transformations depend on whether the image is viewed on a display device or encoded by a capture device, for example. Secondly, we use a colorimetric dithering method consisting of a partitive additive mixing of XYZ tristimulus values, where the number of dithered pixels depends on the simulated viewing distance. The dithered tristimulus values are transformed back to digital data to observe the dithering effects in the image. Finally, we predict color differences using the S-CIELAB model as a color appearance model for images. Moreover, this paper proposes some applications of this method to artistic and industrial problems where one must compare two images that appear different at close viewing distance but match when seen from afar.

©2007 Optical Society of America

1. Introduction

When an observer moves farther away from a scene, the phenomenon produced in the visual system is a color mixture. At first the observer can see the difference between two colors because these colors activate two different sensors in the retina. But if the object is moved far enough away, these two colors will activate the same sensor in the retina, and the observer will only be able to see one color, which is the mixture of the original colors.

Digital halftoning and dithering are techniques based on this visual spatio-chromatic mixing. Thanks to digital halftoning techniques, it is possible to print continuous-tone images using only a few color inks [1–3]. In computer graphics, dithering is analogous to the halftoning technique used in printing, and it is a very useful method for reproducing real images with a reduced color palette [4–8]. Many authors have developed algorithms, models or techniques based on digital halftoning or dithering. More recently, some authors have used methods based on segmentation or k-means clustering for selecting objects within an image or classifying texture colors [4, 7].

But almost all these works have a common point: color is characterized by RGB digital levels, CMYK values or similar parameters, and these are not perceptual values. If we use any of these techniques to simulate the appearance of our images, the result will be colorimetrically incorrect, because such values do not predict color appearance. Our alternative is to propagate the RGB digital levels of an image to CIE XYZ tristimulus values. This can be done with different models, depending, for instance, on the display or capture procedure. If the image is viewed on a display device we can use the sRGB model [9]; if it has been captured by a digital camera we may use a polynomial model [10, 11] or an integral model [12] to characterize it colorimetrically. Then, to simulate color dithering, this article proposes averaging the CIE XYZ tristimulus values. We choose the CIE XYZ tristimulus values because they are the basic psychophysical encoding of color science and because the perceptual CIE L*a*b* values are derived from them. The main difference between the method presented here and those referenced above is that other methods usually work with RGB values or other parameters devoid of perceptual meaning, whereas we work with CIE XYZ tristimulus values.

Once the new XYZ tristimulus values are obtained, we are able to evaluate color differences between two images using a color appearance model for images. Several color appearance models could be used, such as S-CIELAB [13] or i-CAM [14–16]. Although the i-CAM model is more complete than S-CIELAB, we use S-CIELAB because it is quicker and easier to implement in our algorithm. In future work we may use other color appearance models (i-CAM or others) and compare the findings.

The colorimetric comparison between two images that are different at near distances but similar at far distances may be interesting for some applications. This is what happens in the marble [17] and ceramic tile industries: two stones (or ceramic tiles) of the same variety, but with different textures due to the presence of veins, spots, defects, etc., are classified as different pieces at a near distance, but could be perceived as equal at intermediate or far distances.

The results will show that if a pair of images is compared, for instance the image of a piece of marble and the same image processed with a colorimetric dithering that simulates a larger viewing distance, the color differences obtained with S-CIELAB increase with the colorimetric dithering factor. On the other hand, if we compare a pair of images that appear different at close distance but similar at far distances, the color differences obtained with S-CIELAB decrease with the colorimetric dithering factor.

This method can be used in any process where correct image visualization is important. Some examples are software for 3D interior design, virtual reality, or color appearance simulation in artificial vision systems. This method could even be used as a tool to determine whether an artistic picture is an original or a forgery.

2. Methods

A digital image capture device (scanner or camera) transforms the spectral information of a scene into three values, RGB. But the RGB digital data associated with a visual stimulus are not the same as those that would be encoded by a human observer, who is characterized by the CIE 1931 XYZ standard observer. A general colorimetric characterization algorithm [12, 17] or a polynomial model [10, 11] can be used to propagate the RGB digital data to CIE XYZ tristimulus values. The situation is similar for a display device: the sRGB model [9] can be used to predict the CIE XYZ tristimulus values from the RGB digital data. Consequently, the conversion from RGB to XYZ values depends on whether the image is visualized through a display device or captured by a digital device. Once the CIE XYZ tristimulus values of all image pixels have been computed, the subsequent processing is the same regardless of how the image was obtained. The first step is to apply colorimetric dithering to the tristimulus image; the theoretical background of this procedure is explained in the following section.
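For the display case, for instance, the forward transformation prescribed by the sRGB standard [9] can be sketched as follows. This is a minimal sketch, not the paper's own code; the function name and the assumption of 8-bit levels are ours.

```python
import numpy as np

# Linear-RGB -> XYZ matrix of the sRGB standard (D65 white, Y of white = 1)
M_SRGB = np.array([[0.4124, 0.3576, 0.1805],
                   [0.2126, 0.7152, 0.0722],
                   [0.0193, 0.1192, 0.9505]])

def srgb_to_xyz(levels):
    """Propagate 8-bit sRGB digital levels (H, W, 3) to CIE XYZ tristimulus values."""
    c = levels.astype(np.float64) / 255.0
    # Undo the sRGB nonlinearity (linear toe + gamma segment) defined in the standard
    lin = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
    return lin @ M_SRGB.T
```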

2.1 Spatiochromatic dithering

From a geometrical viewpoint, both the retina and the sensor plane of a digital device can be modeled as a spatially uniform array of small sensors, each covering a specific solid angle in image space. As the distance between the object and the sensor plane increases, so does the area covered by the same sensor, as shown in Fig. 1.

Fig. 1. Spatial dithering.

Sensor spatial resolution (camera or retina) is determined by the sensor size p and the distance x between object and sensor, according to Eq. (1):

$$u = 2\arctan\!\left(\frac{p}{2x}\right) \qquad (1)$$

The objects we use in this paper are images placed in object space. If an object is near enough to the sensor (camera or retina), the solid angle will be smaller than a pixel of the object and, therefore, the image will be correctly sampled. If the object is too far away from the sensor, the solid angle will be larger than a pixel of the object and the image will no longer be correctly sampled. The borderline situation occurs when the solid angle subtended from the sensor exactly matches the solid angle subtended by a pixel of the image, because this implies that the spatial resolutions of sensor and object are identical. In this situation the object is at the appropriate distance x0, and the area covered by the solid angle u is p0. The distance x0 is fixed by the sensor size and by the focal distance of the optical system, and x0 is the greatest distance from the image that does not cause spatial dithering in the sensor array.
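As a worked example with hypothetical values for the sensor pixel size p, the focal length f and the object pixel size (none of these figures come from the paper), Eq. (1) gives the angular resolution of one sensor pixel, and x0 follows from requiring that one object pixel subtends exactly that angle:

```python
import math

p = 6.0e-6      # sensor pixel size in meters (hypothetical)
f = 16.0e-3     # focal length in meters (hypothetical)
p_obj = 0.5e-3  # size of one object pixel in meters (hypothetical)

u = 2.0 * math.atan(p / (2.0 * f))       # Eq. (1): angle covered by one sensor pixel
x0 = p_obj / (2.0 * math.tan(u / 2.0))   # distance where one object pixel fills u exactly
print(f"angular resolution u = {u:.2e} rad, critical distance x0 = {x0:.3f} m")
# beyond x0 each sensor pixel averages several object pixels: spatial dithering begins
```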

If the object is at a greater distance x, which is k times the initial distance x0, the solid angle u covers k2 pixels of the original image. Given that our aim is to simulate the appearance of an image placed at a greater viewing distance than the original one, without having to recapture the image at that new distance, we average k2 pixels of the original image to calculate each pixel of the simulated image. The averaged quantities could be the digital level values of the image directly, but this would be colorimetrically unsound, because these values do not predict color appearance. Therefore this article proposes to average the CIE XYZ tristimulus values, chosen because they are the basic psychophysical encoding of color science and because the perceptual CIE L*a*b* values are derived from them. Then, if the tristimulus values of the k2 pixels are denoted by $\{X_{i,j}, Y_{i,j}, Z_{i,j}\}_{i=1,\dots,k;\; j=1,\dots,k}$, the new tristimulus values are given by Eq. (2):

$$X = \frac{1}{k^2}\sum_{i,j=1}^{k} X_{i,j}, \qquad Y = \frac{1}{k^2}\sum_{i,j=1}^{k} Y_{i,j}, \qquad Z = \frac{1}{k^2}\sum_{i,j=1}^{k} Z_{i,j} \qquad (2)$$

where all tristimulus values have the same weight, regardless of position. With this method, however, image size is reduced by a factor k, which is sometimes inconvenient, as we will see below.
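As a minimal sketch, Eq. (2) amounts to an equal-weight block average. We assume here that the XYZ image is stored as an (H, W, 3) NumPy array and crop ragged borders to a multiple of k; both choices are ours, not the paper's.

```python
import numpy as np

def colorimetric_dithering(xyz, k):
    """Equal-weight average of non-overlapping k-by-k blocks of an XYZ image (Eq. (2))."""
    h, w, _ = xyz.shape
    h, w = (h // k) * k, (w // k) * k          # crop so both dimensions divide by k
    blocks = xyz[:h, :w].reshape(h // k, k, w // k, k, 3)
    return blocks.mean(axis=(1, 3))            # image shrinks by a factor k per side
```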

Once the new tristimulus values of the image are obtained, they are propagated back to digital levels, so that the image with colorimetric dithering can be visualized. However, if we want to compare the original and the dithered image, both must have the same size; consequently, we have to enlarge the dithered image by exactly a factor of k.

Finally, we quantify the color difference between these two images. For this purpose a color appearance model for images is needed, and among the more recent models we have chosen the S-CIELAB model [13]. Although the original S-CIELAB algorithm was designed for images viewed on a display device, taking Fig. 1 into account we have also adapted the model to the image-capture case.
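To make the pipeline concrete, the following is a simplified, illustrative sketch of an S-CIELAB-style difference map: an opponent transform, per-channel spatial blurring, and a pixelwise CIELAB ΔE*ab. The opponent matrix follows the reference code accompanying [13] and the CIELAB formulas are standard, but the single Gaussian per channel and its assumed widths are a deliberate simplification; the exact sum-of-Gaussians kernels are specified in [13].

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# XYZ -> opponent transform used by S-CIELAB (values from the reference code in [13])
M_OPP = np.array([[ 0.279,  0.720, -0.107],
                  [-0.449,  0.290, -0.077],
                  [ 0.086, -0.590,  0.501]])

def xyz_to_lab(xyz, white):
    """Standard CIE L*a*b* from XYZ, given the white point (Xn, Yn, Zn)."""
    t = xyz / white
    d = 6.0 / 29.0
    f = np.where(t > d**3, np.cbrt(t), t / (3 * d**2) + 4.0 / 29.0)
    L = 116.0 * f[..., 1] - 16.0
    a = 500.0 * (f[..., 0] - f[..., 1])
    b = 200.0 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def scielab_delta_e(xyz1, xyz2, samples_per_degree, white):
    """Simplified S-CIELAB: opponent transform, spatial blur, pixelwise Delta E*ab."""
    # Illustrative blur widths in degrees (luminance channel sharpest); assumed values,
    # standing in for the sum-of-Gaussians kernels of [13]
    sigmas_deg = np.array([0.05, 0.2, 0.4])
    sigmas_px = sigmas_deg * samples_per_degree
    maps = []
    for xyz in (xyz1, xyz2):
        opp = xyz @ M_OPP.T
        for c in range(3):   # blur each opponent channel at its own spatial scale
            opp[..., c] = gaussian_filter(opp[..., c], sigmas_px[c])
        maps.append(xyz_to_lab(opp @ np.linalg.inv(M_OPP).T, white))
    return np.sqrt(((maps[0] - maps[1]) ** 2).sum(axis=-1))   # Delta E map
```

For display viewing, samples_per_degree follows from the pixel pitch and the viewing distance, which is how the model adapts to the geometry of Fig. 1.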

3. Results

3.1 Comparison between an image and the same image with colorimetric dithering

As discussed above, the calculations depend on whether the image is observed through a display device or captured by a digital camera, the only difference being the model used to propagate digital levels to tristimulus values. Next, we show the results corresponding to an image viewed on a display device. Although we have worked with a set of different images, we only show the results obtained with the natural image in Fig. 2 (top left), because the results obtained with the whole data set were very similar.

The digital levels of the image must be transformed to CIE XYZ tristimulus values using the sRGB model [9]. Next, the image goes through the colorimetric dithering model [Fig. 1, Eqs. (1) and (2)] and the map of tristimulus values describing the image observed from farther away is obtained.

Finally, the new CIE XYZ tristimulus values are propagated back to digital levels, so that the resulting image can be displayed. In this last transformation the sRGB model is used again. Now the simulated image can be visualized by an observer.
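A matching sketch of this inverse transformation, using the inverse matrix of the sRGB standard [9]; clipping out-of-gamut values to [0, 1] is our assumption.

```python
import numpy as np

# XYZ -> linear-RGB matrix of the sRGB standard (inverse of the forward matrix)
M_INV = np.array([[ 3.2406, -1.5372, -0.4986],
                  [-0.9689,  1.8758,  0.0415],
                  [ 0.0557, -0.2040,  1.0570]])

def xyz_to_srgb(xyz):
    """Propagate CIE XYZ (Y of white = 1) back to 8-bit sRGB digital levels."""
    lin = np.clip(xyz @ M_INV.T, 0.0, 1.0)     # clip out-of-gamut values (our choice)
    c = np.where(lin <= 0.0031308, 12.92 * lin, 1.055 * lin ** (1.0 / 2.4) - 0.055)
    return np.round(255.0 * c).astype(np.uint8)
```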

However, if we aim to quantify the color difference between the natural and the simulated images, both images must have the same size (in pixels). As the colorimetric dithering model decreases image size by a factor k, it is necessary to rescale the simulated image by a factor k. To do this, each digital level is replicated into a cell of k by k pixels, as sketched below.
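Assuming the image is an (H, W, 3) array, this replication can be written in one line with np.kron:

```python
import numpy as np

def upscale_replicate(img, k):
    """Replicate each pixel into a k-by-k cell so the image sizes match again."""
    return np.kron(img, np.ones((k, k, 1), dtype=img.dtype))
```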

Figure 2 shows the natural image and a set of simulated images obtained with different dithering factors (2, 5 and 10). In other words, these images are simulated and expanded versions of what we would see if the viewing distance were 2, 5 or 10 times the original distance.

Fig. 2. Natural image and simulated images with colorimetric dithering factors 2, 5 and 10.

Finally, our aim was to obtain the color differences between the natural image and the simulated images with colorimetric dithering factors 2, 5 and 10. For this we need a color appearance model for images, such as S-CIELAB [13], which evaluates the color differences between two images and produces a color difference map. In Fig. 3 we show the histograms of the color differences of all pixels.

Fig. 3. Histograms of the color differences obtained when comparing the natural image with the simulated images of factors 2, 5 and 10 (from left to right).

The histograms show that the number of pixels with small color differences decreases and the number of pixels with large color differences increases with the dithering factor. This can be checked against the data in Table 1, where we show the mean, the median and the ΔE value associated with the maximum frequency.
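The three summary statistics reported in Table 1 can be read off a ΔE map as sketched below; the bin count is an assumption.

```python
import numpy as np

def histogram_stats(delta_e, bins=100):
    """Mean, median, and the Delta E at the histogram's maximum frequency."""
    counts, edges = np.histogram(delta_e.ravel(), bins=bins)
    peak = 0.5 * (edges[np.argmax(counts)] + edges[np.argmax(counts) + 1])  # bin center
    return float(delta_e.mean()), float(np.median(delta_e)), float(peak)
```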

Table 1. Mean, median, and color difference with maximum frequency of the histograms for the same image viewed at different simulated distances.

All three values increase when the colorimetric dithering factor increases; consequently, the color differences increase with the dithering factor.

3.2 Comparison between images that appear different when viewed at near distance

In this section, we have worked with samples of natural stone (Fig. 4), because such samples look very different when compared at close distance but tend to match as the viewing distance increases. For these images we use a polynomial model [10, 11] to propagate the digital levels to tristimulus values, because the images were captured by a digital camera (CMOS Pixelink PL-A662).
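As an illustration in the spirit of [10, 11], a polynomial characterization can be fitted by least squares from training patches with known XYZ. The second-order term set below is one common choice in the literature, not necessarily the one used in this work.

```python
import numpy as np

def poly_features(rgb):
    """Second-order polynomial expansion of RGB (one common choice in [10, 11])."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.stack([np.ones_like(r), r, g, b,
                     r * g, r * b, g * b, r * r, g * g, b * b], axis=1)

def fit_camera_model(rgb_train, xyz_train):
    """Least-squares matrix M such that XYZ ~= poly_features(RGB) @ M."""
    M, *_ = np.linalg.lstsq(poly_features(rgb_train), xyz_train, rcond=None)
    return M

def camera_rgb_to_xyz(rgb, M):
    return poly_features(rgb) @ M
```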

Firstly, we use a color appearance model for images, S-CIELAB, to evaluate the color differences between the original images. Results are shown in Fig. 4.

Fig. 4. Comparison between the two original samples, viewed at near distance.

Next, we apply the colorimetric dithering model to the XYZ images before running the S-CIELAB model, in order to evaluate the color difference between the simulated images. Results are shown in Figs. 5–8. The histograms show a decrease in the number of pixels with large color differences and an increase in the number of pixels with small color differences. As the colorimetric dithering factor and the viewing distance are directly related, it follows that the images become more similar when observed from a greater distance.

Fig. 5. Comparison between the two different natural stone samples when the colorimetric dithering factor is 2.

Fig. 6. Comparison between the two different natural stone samples when the colorimetric dithering factor is 5.

Fig. 7. Comparison between the two different natural stone samples when the colorimetric dithering factor is 10.

Fig. 8. Comparison between the two different natural stone samples when the colorimetric dithering factor is 20.

If we calculate the mean, the median and the color difference associated with the maximum frequency of the histograms (Table 2), we see that these values decrease when the colorimetric dithering factor increases.

Table 2. Mean, median, and color difference with maximum frequency of the histograms for the same image pair viewed at different simulated distances.

4. Conclusions

The method presented here is useful for comparing two images at several distances. Firstly, XYZ images are obtained using the sRGB model when the image is visualized on a standard display device, or a polynomial model when the image is captured by a digital camera. Secondly, a colorimetric dithering method, consisting of a partitive additive mixing of XYZ tristimulus values, is applied to the images; the number of dithered pixels depends on the simulated viewing distance. From the dithered tristimulus values, the digital levels are recalculated so that the dithering effects can be observed in the image. Finally, color differences are predicted using the S-CIELAB appearance model for images.

On the one hand, if we compare a natural image with images simulated using several colorimetric dithering factors, the histograms and the calculated values show that the number of pixels with small color differences decreases and the number of pixels with large color differences increases with the colorimetric dithering factor. Remember that the colorimetric dithering factor is directly related to the viewing distance.

On the other hand, if we compare two images which appear different at near distance but look more similar at far distance, the histograms and the calculated values show that the number of pixels with large color differences decreases and the number of pixels with small color differences increases with the colorimetric dithering factor. Consequently, it is verified that the images become more similar when observed from a greater distance.

The results agree with our expectations, so this method can be used in any process where the correct visualization of the image is important. It can become a tool for building an automatic classifier of textured images (such as natural stones, tiles, etc.) based on perceptual color differences. Some examples are software for 3D interior design or color appearance simulation in artificial vision systems.

Acknowledgments

This research was supported by the Spanish Ministry for Education and Science through grant number DPI2005-08999-C02-02. Esther Perales thanks the Spanish Ministry for Education and Science for her PhD grant. Thanks also to Eurostone S.A. (Novelda, Alicante, Spain) for providing the marble samples.

References and links

1. C. Hains, S. G. Wang, and K. Knox, "Digital color halftones," in Digital Color Imaging Handbook, G. Sharma, ed. (CRC Press, New York, 2003), pp. 385–490.

2. V. Ostromoukhov, P. Emmel, N. Rudaz, I. Amidror, and R. D. Hersch, "Multi-level colour halftoning algorithms," Proc. SPIE 2949, 332–340 (1997).

3. H. R. Kang, Digital Color Halftoning (SPIE, 1999).

4. L. Brun and A. Trémeau, "Color quantization," in Digital Color Imaging Handbook, G. Sharma, ed. (CRC Press, New York, 2003), pp. 589–638.

5. K. Man Kim, C. Soo Lee, E. Joo Lee, and Y. Ho Ha, "Color image quantization and dithering method based on human visual system characteristics," J. Imaging Sci. Technol. 40, 502–509 (1996).

6. M. Petrou and P. García-Sevilla, "Non-stationary grey texture images," in Image Processing: Dealing with Texture (Wiley, England, 2006), pp. 297–606.

7. J. Chen, T. N. Pappas, A. Mojsilovic, and B. E. Rogowitz, "Adaptive perceptual color-texture image segmentation," IEEE Trans. Image Process. 14, 1524–1536 (2005).

8. The MathWorks, "Image Processing Toolbox," http://www.mathworks.com/access/helpdesk/help/toolbox/images/index.html?/access/helpdesk/help/toolbox/images/f8-18177.html.

9. International Color Consortium, "A standard default color space for the internet: sRGB," http://www.color.org/sRGB.html.

10. H. R. Kang, "Regression," in Color Technology for Electronic Imaging Devices (SPIE Press, Washington, 1997), pp. 55–63.

11. G. Hong, M. R. Luo, and P. A. Rhodes, "A study of digital camera colorimetric characterization based on polynomial modeling," Color Res. Appl. 26, 76–84 (2000).

12. F. Martínez-Verdú, J. Pujol, and P. Capilla, "Characterization of a digital camera as an absolute tristimulus colorimeter," J. Imaging Sci. Technol. 47, 279–295 (2003).

13. A. Poirson and B. Wandell, "S-CIELAB: a spatial extension to the CIE L*a*b* ΔE color difference metric," http://white.stanford.edu/~brian/scielab/scielab.html.

14. G. M. Johnson and M. D. Fairchild, "Measuring images: differences, quality, and appearance," Proc. SPIE 5007, 51–60 (2003).

15. M. D. Fairchild and G. M. Johnson, "The i-CAM framework for image appearance, image differences and image quality," J. Electron. Imaging 13, 126–138 (2004).

16. G. M. Johnson, "The quality of appearance," in Proceedings of the 10th Congress of the International Colour Association, J. L. Nieves and J. Hernández-Andrés, eds. (Granada, Spain, 2005), pp. 303–308, http://www.cis.rit.edu/people/faculty/johnson/pubs.html.

17. F. Martínez-Verdú, R. Balboa, E. Chorro, J. C. Alcaraz, D. de Fez, and V. Viqueira, "Color measurement of natural stones using a calibrated digital camera," in Proceedings of the 10th Congress of the International Colour Association, J. L. Nieves and J. Hernández-Andrés, eds. (Granada, Spain, 2005), pp. 1267–1270.
