
Point-by-point visual enhancement with spatially and spectrally tunable laser illumination

Open Access

Abstract

Vision is responsible for most of the information that humans perceive of the surrounding world. Many studies attempt to enhance the visualization of an entire scene by optimizing and tuning the overall illumination spectrum. However, with a spatially uniform illumination spectrum for the entire scene, only certain global color shifts with respect to a reference illumination spectrum can be realized, resulting in only moderate visual enhancement. In this paper, a new visual enhancement method is presented that relies on a spatially variable illumination spectrum. Such an approach can target much more dedicated visual enhancements by optimizing the incident illumination spectrum for the surface reflectance at each position. First, a geometric calibration of the projector-camera system is carried out to determine the spatial mapping from the projected pixel grid to the imaged pixel grid. Second, the scene is segmented to implement the visual enhancement approach. Finally, one of three visual enhancement scenarios is applied by projecting the required color image onto the considered segmented scene. The experimental results show that the visual salience of the scene or region of interest can be efficiently enhanced when our proposed method is applied to achieve colorfulness enhancement, hue tuning, or background lightness reduction.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

As a dominant sense, vision is responsible for most of the information that humans perceive of the surrounding world [1]. When light is incident on objects, the reflected light is registered by our eyes, leading to differences in colors and shapes [2]. Color differences are vital for distinguishing objects from one another. The visualization of objects is therefore directly influenced by their surface spectral reflectance and the spectral distribution of the impinging light.

Therefore, based on measurements of the object's spectral reflectance, many studies [3–7] have been carried out to optimize the illumination spectrum in order to enhance the visualization of an object or scene for certain human visual tasks. For example, illumination spectrum optimization methods have been used to enhance the visual distinction between normal and inflamed biological tissues during surgery [3–5], save energy while maintaining high color quality [6,7], minimize the light absorbed by artwork [8,9], and enhance faded colors of museum artworks for artwork restoration [10].

An important limitation of the above-mentioned studies is that they attempt to enhance the visualization of the entire scene by optimizing and tuning a single, spatially uniform illumination spectrum. Such approaches are not very effective because the objects' spectral reflectances vary spatially across a scene [11]. With a spatially uniform illumination spectrum for the entire scene, only certain global color shifts with respect to a reference illumination spectrum can be realized, resulting in only moderate visual enhancement. As a result, such visual enhancement approaches focus on overall scene improvements in terms of contrast enhancement, color entropy enhancement [4] and color difference enhancement [5].

In this paper, a new visual enhancement method is presented that relies on a spatially variable illumination spectrum. Such an approach can target much more dedicated visual enhancement results by optimizing the incident illumination spectrum for the surface reflectance at each position [12,13].

From a color science point of view, the visual property of an object can be divided into three parts: colorfulness, hue, and lightness [14]. With a spatially variable illumination spectrum, these three properties can all be tuned in a flexible manner at each position in the scene. This can result in a more systematic colorimetric visual enhancement of the illuminated scene.

The considered system is fully automated and makes use of a fast and low-cost spectral reflectance measurement approach with a regular camera. This measurement system is combined with a spectrally tunable laser projection system with a wide color gamut. The captured images and projected illumination patterns are calibrated with respect to each other, allowing the pre-measured spatial reflectance information to be used to calculate the spatially varying illumination spectra that are required to achieve specific visual enhancement results.

The paper is organized as follows. In Section 2, the considered visual enhancement setup and methods are described. Section 3 presents the experimental results of visual enhancement for a scene.

2. Setup and methods

In this section, firstly, the visual enhancement setup with the spatially and spectrally tunable illumination and the camera is illustrated. Secondly, the geometric calibration process for obtaining an accurate projector-camera mapping is described. Then, the specific visual enhancement approach is given in detail.

2.1 Visual enhancement setup

The visual enhancement setup that is considered in this paper is conceptually shown in Fig. 1. A colorful scene is illuminated by a laser diode-based lighting system capable of projecting variable spectra S(x,y)(λ) at different positions. These spectra are generated by redirecting parts of the emitted flux of three (RGB) laser diodes with an efficient light-projecting unit (PicoBit, manufactured by Celluon Inc., USA). A portable digital camera images the scene, and the object spectral reflectance R(x,y)(λ) at each position in the illuminated scene is estimated from the captured RGB images. For visual enhancement of the illuminated scene, the spectrum of the incident light at different positions is adapted by generating new input images for the light projection unit based on the spectral reflectance of the complete scene. The setup is placed in the dark.


Fig. 1. Conceptual illustration of the considered visual enhancement setup.


A similar setup for tuning the color rendering properties of an illuminated scene was described in previous work [13]. However, that older setup used a hyperspectral camera and required some manual alignment, in contrast with the system described in this paper.

The visual enhancement process consists of four fully automated steps. First, a geometric calibration of the projector-camera system is carried out to determine the spatial mapping from the projected pixel grid to the imaged pixel grid. This process is described in detail in Section 2.2. Secondly, the scene is segmented and the Region of Interest (ROI) is extracted for implementing the visual enhancement approach; this is described in Section 2.3. Thirdly, the visual enhancement approach and three visual enhancement scenarios are proposed in Section 2.4. The required primary proportions of the laser projector can be obtained at once by numerical calculation. One of the three visual enhancement scenarios is applied by projecting the required color image onto the considered scene. Finally, the radiance calibration of the laser projector primaries, which is needed to generate the required color image, is described in Section 2.5.

2.2 Geometric calibration of the projector-camera system

As shown in Fig. 2, a checkerboard pattern is projected by the laser projector onto the scene. This projected checkerboard pattern is imaged by the camera on its image sensor. In this manner, the spatial mapping from the projected image to the captured image can be described as a mapping relationship between the projection coordinate system and the camera coordinate system. This geometric calibration process can be achieved by control point matching based on the image alignment method [15].


Fig. 2. Illustration of the geometric calibration approach.


The corner points of the checkerboard pattern are considered to be control points. Each control point therefore corresponds to a pair of coordinate values $(X_i, Y_i)$ and $(x_i, y_i)$ in the camera coordinate system and the projected image coordinate system, respectively. Each coordinate value $(X_i, Y_i)$ of a control point in the camera coordinate system can be expressed as a polynomial function of the coordinate values $(x_i, y_i)$ of that control point in the projected image coordinate system:

$$\left\{ \begin{aligned} X_i &\approx \sum\limits_{j = 0}^{n} \sum\limits_{k = 0}^{j} a_{j,k}\, x_i^{k}\, y_i^{\,j-k} \\ Y_i &\approx \sum\limits_{j = 0}^{n} \sum\limits_{k = 0}^{j} b_{j,k}\, x_i^{k}\, y_i^{\,j-k} \end{aligned} \right. \qquad i = 1, \cdots, N$$
where N is the number of control points and the coefficients $a_{j,k}$ and $b_{j,k}$ are fitted by the polynomial least-squares method, which requires $(n+1)^2 \le N$.
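To make this mapping concrete, the following minimal sketch fits the coefficients $a_{j,k}$ and $b_{j,k}$ by linear least squares from matched control points. The function names and the default order n = 3 are illustrative assumptions, not part of the published implementation.

```python
import numpy as np

def poly_terms(x, y, n):
    """Monomials x^k * y^(j-k) for j = 0..n, k = 0..j (one row per control point)."""
    return np.column_stack([x**k * y**(j - k)
                            for j in range(n + 1) for k in range(j + 1)])

def fit_projector_to_camera(xy_proj, XY_cam, n=3):
    """Least-squares fit of the polynomial mapping from projector to camera coordinates.

    xy_proj, XY_cam: (N, 2) arrays of matched control points (checkerboard corners).
    The polynomial order n must be small enough for the available number of points.
    """
    A = poly_terms(xy_proj[:, 0], xy_proj[:, 1], n)              # design matrix
    coeff_a, *_ = np.linalg.lstsq(A, XY_cam[:, 0], rcond=None)   # coefficients for X_i
    coeff_b, *_ = np.linalg.lstsq(A, XY_cam[:, 1], rcond=None)   # coefficients for Y_i
    return coeff_a, coeff_b

def map_to_camera(xy_proj, coeff_a, coeff_b, n=3):
    """Apply the fitted mapping to projector-pixel coordinates."""
    A = poly_terms(xy_proj[:, 0], xy_proj[:, 1], n)
    return np.column_stack([A @ coeff_a, A @ coeff_b])
```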

2.3 Scene segmentation

A scene image captured by the camera can be divided into two parts: the ROI and the surrounding area. The ROI in the scene is segmented automatically after a manual assignment of the starting point. Then, the surrounding area is automatically segmented into several subareas. The detailed steps are described as follows.

The ROI in the image is segmented with the flood fill algorithm [16], which is often used to determine a bounded area connected to a given node in a multi-dimensional array. Firstly, a pixel in the ROI is assigned manually as the starting point. Secondly, the algorithm determines whether the color of the surrounding pixels is similar to that of the starting point by using a color difference threshold value. Thirdly, the color of the similar pixels determined in this way is replaced with a new color (e.g., yellow in our case). Finally, the determined similar pixels are used as new starting points, and steps 2 and 3 are repeated until all boundaries have been scanned. After the flood fill algorithm is executed, the color of the ROI is replaced with yellow, and the ROI can be segmented from the scene without the surrounding background.
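As an illustration of this step, the sketch below uses OpenCV's flood fill with a per-channel color-difference tolerance to replace the connected ROI with yellow. The seed point, tolerance values, and function name are assumptions chosen for illustration.

```python
import cv2
import numpy as np

def segment_roi_flood_fill(image_bgr, seed_xy, tol=(15, 15, 15)):
    """Flood-fill the region connected to the seed point and return the ROI mask.

    image_bgr: camera image (BGR, uint8); seed_xy: manually chosen (x, y) inside the ROI;
    tol: per-channel color-difference tolerance.
    """
    img = image_bgr.copy()
    # floodFill requires a mask with a one-pixel border around the image
    mask = np.zeros((img.shape[0] + 2, img.shape[1] + 2), np.uint8)
    cv2.floodFill(img, mask, seed_xy, newVal=(0, 255, 255),   # replace the ROI color with yellow
                  loDiff=tol, upDiff=tol, flags=cv2.FLOODFILL_FIXED_RANGE)
    roi_mask = mask[1:-1, 1:-1].astype(bool)                  # True inside the segmented ROI
    return img, roi_mask
```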

Afterward, a color threshold segmentation method is used to segment the other parts (the surrounding area) of the scene. Firstly, the RGB color space is converted to the CIELAB color space, so that similar colors at different positions in the image are grouped together. Then the CIELAB color space is segmented into several subspaces of a specific size and shape. Finally, similar colors in the image are spatially segmented, and the surrounding area is thus segmented automatically.
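A minimal sketch of this background segmentation is shown below, using a scikit-image RGB-to-CIELAB conversion. Here k-means clustering of the Lab values is used as a stand-in for the fixed subspace partitioning described above; the cluster count is an illustrative assumption.

```python
import numpy as np
from skimage import color
from sklearn.cluster import KMeans

def segment_background_by_color(image_rgb, n_clusters=6):
    """Group pixels with similar CIELAB colors so that they form spatial subareas.

    image_rgb: float RGB image with values in [0, 1]; returns an (H, W) label map.
    """
    lab = color.rgb2lab(image_rgb)                             # RGB -> CIELAB
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(
        lab.reshape(-1, 3))                                    # cluster similar Lab colors
    return labels.reshape(image_rgb.shape[:2])                 # one subarea label per pixel
```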

2.4 Visual enhancement approach

From a color science point of view, the visual perception of an object in the CAM02-UCS color-appearance space can be specified by three different characteristics: colorfulness, hue, and lightness [14]. The J' value refers to the lightness, the angle that the projected (J', a', b') color point makes in the (a', b') plane with respect to the positive a'-axis indicates the hue, and the distance to the origin indicates the colorfulness. This means that a systematic visual enhancement approach can be implemented by tuning the colorfulness, hue or lightness of the illuminated objects, as shown in Fig. 3(a).
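As a short worked example of reading these three properties off a CAM02-UCS coordinate (the numerical values below are illustrative only):

```python
import numpy as np

# Example (J', a', b') point in CAM02-UCS; the numbers are illustrative only.
Jp, ap, bp = 55.0, -12.0, 20.0

colorfulness = np.hypot(ap, bp)                       # distance to the origin in the (a', b') plane
hue_angle = np.degrees(np.arctan2(bp, ap)) % 360.0    # angle w.r.t. the positive a'-axis
lightness = Jp                                        # J' is used directly as the lightness

print(f"colorfulness = {colorfulness:.1f}, hue = {hue_angle:.1f} deg, J' = {lightness:.1f}")
```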


Fig. 3. Illustration of different visual enhancement scenarios (a. visual enhancement approach in CAM02-UCS color-appearance space, b. object illuminated by D65, c. increasing colorfulness, d. hue tuning, e. lightness tuning).


By considering a spatially and spectrally variable lighting system based on three narrow-band laser diode light sources, almost any color can be obtained for the reflected light from each position in the illuminated scene [13]. This implies that a systematic visual enhancement can be implemented using such a light projection unit, provided we know the surface spectral reflectivity at each position with sufficient accuracy.

As a starting point for the color appearance of the illuminated scene, we consider the situation where the object or scene would be illuminated by the reference illuminant D65. As an example of this D65 reference situation, we show the color appearance of a Tai Chi Diagram in Fig. 3(b). The "yin" part in the Tai Chi Diagram is defined as the region of interest (ROI) that needs to be visually enhanced. Increasing the colorfulness of the "yin" part (Fig. 3(c)), tuning the hue of the "yin" part (Fig. 3(d)), or decreasing the lightness of the "yang" part (Fig. 3(e)) are possible methods to enhance the visual salience of the "yin" part. The key step for these three visual enhancement scenarios is calculating the target color coordinate values ($J'^{T}_{x,y}$, $a'^{T}_{x,y}$, $b'^{T}_{x,y}$) for each position, and the required spectral power distribution (SPD) of the light emitted by the projection light source towards these different positions in the illuminated scene. The required SPDs can then be generated by tuning the RGB pixel values of the input image [17,18].

(1) Calculating target J'a'b’ values

In a first visual enhancement scenario, the colorfulness of the ROI is increased by a specific factor. Hence, the a' and b' values are scaled in the positive radial direction, while the lightness value (J') is left unchanged. The target values for the CAM02-UCS coordinates of the ROI can thus be calculated as

$$\left\{ \begin{aligned} J'^{\,T}_{x,y} &= J'^{\,ref}_{x,y} \\ a'^{\,T}_{x,y} &= k_1 \cdot a'^{\,ref}_{x,y} \\ b'^{\,T}_{x,y} &= k_1 \cdot b'^{\,ref}_{x,y} \end{aligned} \right. \qquad (x,y) \in \textrm{ROI}$$
where $k_1$ indicates the colorfulness enhancement factor ($\geq 1$) and ($J'^{\,ref}_{x,y}$, $a'^{\,ref}_{x,y}$, $b'^{\,ref}_{x,y}$) denote the J'a'b' values at the different positions of the ROI when illuminated by the reference illuminant D65.

As a second visual enhancement scenario, the hue of the ROI can be tuned by rotating the (a', b') values over a specific angle α in the counterclockwise direction, while leaving the lightness value (J') unchanged. The target values for the CAM02-UCS coordinates of the ROI can then be calculated as

$$\left\{ \begin{aligned} J'^{\,T}_{x,y} &= J'^{\,ref}_{x,y} \\ a'^{\,T}_{x,y} &= \cos(\alpha)\, a'^{\,ref}_{x,y} - \sin(\alpha)\, b'^{\,ref}_{x,y} \\ b'^{\,T}_{x,y} &= \sin(\alpha)\, a'^{\,ref}_{x,y} + \cos(\alpha)\, b'^{\,ref}_{x,y} \end{aligned} \right. \qquad (x,y) \in \textrm{ROI}$$

As a third possible visual enhancement scenario, the lightness values (J’) of the area surrounding the ROI could be decreased, while leaving the a’ and b’ values fixed. The target values for the CAM02-UCS coordinates can then be calculated as

$$\left\{ \begin{aligned} J'^{\,T}_{x,y} &= k_\textrm{l} \cdot J'^{\,ref}_{x,y} \\ a'^{\,T}_{x,y} &= a'^{\,ref}_{x,y} \\ b'^{\,T}_{x,y} &= b'^{\,ref}_{x,y} \end{aligned} \right. \qquad (x,y) \notin \textrm{ROI}$$
where $k_\textrm{l}$ denotes the lightness factor (with $k_\textrm{l} < 1$ for a lightness reduction).
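A compact sketch implementing Eqs. (2)-(4) for per-pixel arrays of reference CAM02-UCS coordinates is given below. The parameter names (k1, alpha, kl) follow the text, the default values are illustrative assumptions, and the ROI mask is assumed to come from the segmentation step.

```python
import numpy as np

def target_jab(Jp, ap, bp, roi_mask, k1=1.3, alpha=np.deg2rad(300), kl=0.4):
    """Compute target (J', a', b') maps for the three enhancement scenarios (Eqs. 2-4).

    Jp, ap, bp: (H, W) arrays of reference CAM02-UCS coordinates under D65 illumination.
    roi_mask:   boolean (H, W) mask of the region of interest.
    k1, alpha, kl: colorfulness gain, hue-rotation angle, lightness factor (example values).
    """
    # Scenario 1: increase the colorfulness of the ROI (Eq. 2), J' unchanged
    a1, b1 = ap.copy(), bp.copy()
    a1[roi_mask] *= k1
    b1[roi_mask] *= k1
    scenario1 = (Jp, a1, b1)

    # Scenario 2: rotate the hue of the ROI by alpha (Eq. 3), J' unchanged
    a2, b2 = ap.copy(), bp.copy()
    a2[roi_mask] = np.cos(alpha) * ap[roi_mask] - np.sin(alpha) * bp[roi_mask]
    b2[roi_mask] = np.sin(alpha) * ap[roi_mask] + np.cos(alpha) * bp[roi_mask]
    scenario2 = (Jp, a2, b2)

    # Scenario 3: reduce the lightness outside the ROI (Eq. 4), a' and b' unchanged
    J3 = np.where(roi_mask, Jp, kl * Jp)
    scenario3 = (J3, ap, bp)

    return scenario1, scenario2, scenario3
```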

(2) Calculating RGB values

The target (J', a', b') coordinates for each position in and outside the ROI can be directly transformed to the corresponding target XYZ tristimulus values [19]. The question is then: what is the required spectral power distribution for the illumination of each position, and what are the corresponding RGB pixel values that need to be used for the light projection unit?

The spectral power distribution (SPD) of the laser diode-based illumination system onto a certain spatial area is given by

$$\mathbf{S} = [\,\mathbf{S}_\textrm{b}(\lambda),\ \mathbf{S}_\textrm{g}(\lambda),\ \mathbf{S}_\textrm{r}(\lambda)\,] \begin{bmatrix} p_\textrm{b} \\ p_\textrm{g} \\ p_\textrm{r} \end{bmatrix}$$
where Sb(λ), Sg(λ), and Sr(λ) correspond with the SPD of the maximal amount of light that can be emitted by the projector system’s blue (b), green (g) and red (r) channels towards the considered spatial area. The values pb, pg and pr thus correspond with the fractions of blue, green and red light towards that spatial area. When an object with a spectral reflectance of $r$(λ) is irradiated by a spectrally variable source, the XYZ tristimulus values of the reflected light are given by
$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \left( \textrm{K} \begin{bmatrix} \bar{\mathbf{x}} \\ \bar{\mathbf{y}} \\ \bar{\mathbf{z}} \end{bmatrix} [\,\mathbf{S}_\textrm{b}(\lambda),\ \mathbf{S}_\textrm{g}(\lambda),\ \mathbf{S}_\textrm{r}(\lambda)\,] \begin{bmatrix} p_\textrm{b} \\ p_\textrm{g} \\ p_\textrm{r} \end{bmatrix} \right) \odot \mathbf{r} = \textrm{K} \begin{bmatrix} \bar{\mathbf{x}}\,\mathbf{s}_\textrm{b} \odot \mathbf{r} & \bar{\mathbf{x}}\,\mathbf{s}_\textrm{g} \odot \mathbf{r} & \bar{\mathbf{x}}\,\mathbf{s}_\textrm{r} \odot \mathbf{r} \\ \bar{\mathbf{y}}\,\mathbf{s}_\textrm{b} \odot \mathbf{r} & \bar{\mathbf{y}}\,\mathbf{s}_\textrm{g} \odot \mathbf{r} & \bar{\mathbf{y}}\,\mathbf{s}_\textrm{r} \odot \mathbf{r} \\ \bar{\mathbf{z}}\,\mathbf{s}_\textrm{b} \odot \mathbf{r} & \bar{\mathbf{z}}\,\mathbf{s}_\textrm{g} \odot \mathbf{r} & \bar{\mathbf{z}}\,\mathbf{s}_\textrm{r} \odot \mathbf{r} \end{bmatrix} \begin{bmatrix} p_\textrm{b} \\ p_\textrm{g} \\ p_\textrm{r} \end{bmatrix}$$
where $\bar{\mathbf{x}}$, $\bar{\mathbf{y}}$, $\bar{\mathbf{z}}$ are the three color matching functions of the CIE 1931 standard observer and K is the maximum luminous efficacy coefficient with a value of 683 lm/W. This means that the XYZ values of the reflected light, for specific fractions (pb, pg, pr), can directly be calculated from the XcYcZc values (with c = b, g, r) that are obtained by illuminating the objects with the SPDs Sb(λ), Sg(λ) and Sr(λ) of the individual laser diode peaks. This relation can be expressed as
$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} X_\textrm{b} & X_\textrm{g} & X_\textrm{r} \\ Y_\textrm{b} & Y_\textrm{g} & Y_\textrm{r} \\ Z_\textrm{b} & Z_\textrm{g} & Z_\textrm{r} \end{bmatrix} \cdot \begin{bmatrix} p_\textrm{b} \\ p_\textrm{g} \\ p_\textrm{r} \end{bmatrix}$$

The necessary fractions (pb, pg, pr) to obtain the XYZ target values for each position, can thus be calculated as

$$\begin{bmatrix} p_\textrm{b} \\ p_\textrm{g} \\ p_\textrm{r} \end{bmatrix} = \begin{bmatrix} X_\textrm{b} & X_\textrm{g} & X_\textrm{r} \\ Y_\textrm{b} & Y_\textrm{g} & Y_\textrm{r} \\ Z_\textrm{b} & Z_\textrm{g} & Z_\textrm{r} \end{bmatrix}^{-1} \cdot \begin{bmatrix} X \\ Y \\ Z \end{bmatrix}$$

After a radiance calibration between the RGB values and the output radiance of the projector, the RGB values of the projector can be calculated from the values pr, pg and pb in Eq. (8). These calculations are completed at once by a direct numerical calculation, instead of the multiple iterations required by a traditional optimization process [7,20]; such fast calculations are beneficial for practical applications. The SPDs of the laser projector primaries were characterized as a function of the RGB values of the input image. The radiance calibration process of the laser projector primaries is described in the next subsection.
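A minimal sketch of Eqs. (7)-(8) is given below, solving for the per-position channel fractions from the pre-measured single-channel tristimulus values. Clipping to [0, 1] is an assumption for handling out-of-gamut targets, not part of the published method.

```python
import numpy as np

def channel_fractions(xyz_target, xyz_b, xyz_g, xyz_r):
    """Solve Eq. (8): obtain [p_b, p_g, p_r] from the target XYZ and single-channel XYZ values.

    xyz_target: (..., 3) target tristimulus values per position.
    xyz_b, xyz_g, xyz_r: (..., 3) tristimulus values of the light reflected at each position
    when it is illuminated by the blue / green / red laser channel alone at full power.
    """
    M = np.stack([xyz_b, xyz_g, xyz_r], axis=-1)             # (..., 3, 3) mixing matrix per position
    p = np.linalg.solve(M, xyz_target[..., None])[..., 0]    # (..., 3) channel fractions
    return np.clip(p, 0.0, 1.0)                              # assumption: clip out-of-gamut targets
```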

2.5 Radiance calibration

As shown in Fig. 4, a white calibration board (HSIA-CT, Sichuan Dualix Spectral Image Technology Co. Ltd, China), which behaves as a perfect Lambertian reflector with near-unity reflectance, is used to uniformly reflect the light emitted by the projector. A digital camera then captures the reflected light. The calibration data were obtained by measuring the output response of each color channel of the projector separately, for RGB input values increasing from 0 to 255 with a step size of 1.


Fig. 4. Radiation calibration setup (a. blue channel calibration of the projector, b. green channel calibration of the projector, c. red channel calibration of the projector).


The camera response is determined by the spectral power distribution of the light source, the camera spectral responsivity, and the surface spectral reflectance of the object, and can be expressed as

$$d_i = \int_{\lambda_1}^{\lambda_2} C_i(\lambda)\, S(\lambda)\, r(\lambda)\, d\lambda$$
where di represents the response of the ith channel of the camera (for a color camera, i = r (red), g (green), b (blue)), λ is the wavelength, which ranges from 400 nm to 700 nm within the camera-sensitive wavelength range, Ci(λ) is the sensitivity function of the ith channel of the camera, S(λ) is the spectral power distribution of the light source, and r(λ) is the surface spectral reflectance of the illuminated object.
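For completeness, Eq. (9) can be evaluated numerically from tabulated spectra; a minimal sketch using trapezoidal integration over a sampled 400-700 nm grid is shown below (the sampling step is an assumption).

```python
import numpy as np

def camera_response(sensitivity, spd, reflectance, wavelengths):
    """Numerically evaluate Eq. (9): d_i = integral of C_i(lambda) S(lambda) r(lambda) dlambda.

    All inputs are 1-D arrays sampled on the same wavelength grid (e.g. 400-700 nm, 1 nm steps).
    """
    integrand = sensitivity * spd * reflectance
    # trapezoidal rule over the sampled wavelength grid
    return np.sum(0.5 * (integrand[:-1] + integrand[1:]) * np.diff(wavelengths))
```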

Since the output raw response values of the digital camera are linearly related to the input light, the output value of the camera is linearly related to the relative radiant power of the laser projector primaries. The relative radiant power and the input RGB value of the projector are related as follows:

$${p_i} = {\beta _i}{I_i},\; \; \textrm{with}\; i = \textrm{r},\textrm{g},\textrm{b}$$
where Ii is the input RGB value of the projector, pi is the relative radiant power of the corresponding laser projector primary, and βi is the correction coefficient obtained from the radiance calibration. After βi has been determined from the collected calibration data by an inverse look-up table method, the RGB input values of the laser projector primaries (Ii) can therefore easily be determined for any required relative radiant power pi.
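A sketch of this inverse look-up is given below, assuming the per-channel camera responses measured for input values 0-255 are stored in an array; the use of monotonic interpolation is an illustrative implementation choice.

```python
import numpy as np

def build_inverse_lut(measured_response):
    """Build an inverse look-up: required relative radiant power -> projector input value (0-255).

    measured_response: (256,) camera responses of one projector channel for inputs 0..255
    (assumed monotonically increasing), normalized here so that the value at 255 is full power.
    """
    power = measured_response / measured_response[-1]   # relative radiant power p_i in [0, 1]

    def lut(p_required):
        # interpolate the measured power-vs-input relation and round to an 8-bit input value
        return np.uint8(round(float(np.interp(p_required, power, np.arange(256)))))

    return lut

# Example usage (illustrative): red_lut = build_inverse_lut(red_response); I_r = red_lut(0.58)
```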

3. Experiments and results

3.1 Visual enhancement experiments

Experiments were conducted in order to demonstrate the feasibility of visual enhancement with a spatially variable laser illumination system. Note that the experiments were carried out without ambient light. The test setup used is shown in Fig. 5 and consists of a portable laser projector illuminating a painting. The illuminated scene is captured by a camera. The reflected light is also captured by a spectrometer equipped with an optical probe, such that the spectra of the laser projector primaries for R = G = B = 255 could be measured before using the system. This allows calculating the necessary RGB values for each pixel in the projected image, such that the color coordinates of the light reflected by each patch correspond with the targeted J'a'b' values.


Fig. 5. Experimental setup for visual enhancement tests (1. Illuminated scene, 2. Camera, 3. Optical probe, 4. Laser projector, 5. Spectrometer). The image is taken with ambient lighting for demonstration purposes.


3.2 Geometric calibration

The portable laser projector that is used for the experimental results is a PicoBit (Celluon Inc., USA), which offers a resolution of 1280 pixels by 720 pixels and a wide color gamut. It is used to generate the spatially variable SPD by varying the RGB values of the pixels in the projected image. The camera that is used for the experimental results is a XiQ camera (Ximea Inc., Germany), which offers a resolution of 1280 pixels by 1024 pixels.

A 12 × 14 checkerboard is projected onto the projection plane as shown in Fig. 6(a). Since the camera is not parallel to the projected checkerboard, the checkerboard image captured by the camera is slanted, as shown in Fig. 6(b). In order to realize the geometric calibration, firstly, the green corners (control points) of the checkerboard shown in Fig. 6(b) are detected automatically using a corner detection algorithm [21]. Secondly, the detected control points are geometrically mapped onto the corners of the input checkerboard image of the projector (Fig. 6(a)). In this way, the camera pixels and the projector pixels can be mapped point by point using pixel interpolation. Finally, the captured checkerboard can be geometrically calibrated to be parallel to the input checkerboard image, as shown in Fig. 6(c). This completes the geometric calibration process. A painting is placed on the projection plane, and the original image of the painting captured by the camera is shown in Fig. 6(d). After geometric calibration, the image of the painting is corrected as shown in Fig. 6(e).
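For reference, the corner detection of the captured checkerboard can be performed with OpenCV's chessboard detector, as sketched below; this is a generic illustration (the paper cites a different corner detection algorithm [21]) and assumes a 12 × 14 checkerboard, i.e. an 11 × 13 grid of interior corners.

```python
import cv2

def detect_checkerboard_corners(image_bgr, pattern=(11, 13)):
    """Detect the interior corners of the captured 12 x 14 checkerboard (control points)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        # refine the detected corners to sub-pixel accuracy
        criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
        corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
        return corners.reshape(-1, 2)
    return None
```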


Fig. 6. The geometric correction process (a. projected 12 × 14 checkerboard, b. corner detection of the original image captured by the camera, c. image after geometric correction, d. painting directly captured by the camera, e. image after geometric correction, f. stacked images of Fig. 6(a) and Fig. 6(c)).


To assess the performance of the geometric calibration process, the original input image of the projector (Fig. 6(a)) and the image after geometric correction (Fig. 6(c)) are stacked together, as shown in Fig. 6(f). Figure 6(f) shows that the two images overlap well, which confirms that the projected image coordinate system and the camera coordinate system, as shown in Fig. 2, are geometrically calibrated. The average calibration error is 3.31 pixels, which corresponds to a spatial error of 1.19 mm.

3.3 Scene segmentation for painting

As shown in Fig. 7(a), the painting for visual enhancement is inspired by color-blind test images. The central part of the painting is a geological hammer (defined as the Region of Interest (ROI)), and the surrounding area consists of a blur of circular spots painted with various colors and sizes to reduce the visual salience of the geological hammer. After the flood fill algorithm is executed, the color of the geological hammer is replaced with yellow, as shown in Fig. 7(b). The segmented ROI without the surrounding background is shown in Fig. 7(c). (See Supplement 1 for segmentation details.)


Fig. 7. The image segmentation method that is used in this study (a. original image, b. color of ROI replaced by the flood fill algorithm, c. segmented geological hammer, d. CIELAB color coordinates of the pixels of the original image, e. segmented spatial areas of a specific color, f. segmented spatial areas of another color).


Afterward, the color threshold segmentation method is used to segment the other parts of the painting. Firstly, the RGB color space is converted to the CIELAB color space. Similar colors at different positions in the image are gathered together, as shown in Fig. 7(d). Then the CIELAB color space can be segmented into several subspaces with a specific size and shape. Finally, similar colors in the image can be spatially segmented, e.g., as shown in Fig. 7(e) and Fig. 7(f).

3.4 Visual enhancement

Firstly, the characterized pigments are separately painted on drawing paper. The ground-truth spectral reflectance functions of the characterized pigments are measured with a Flame-S spectrometer (Ocean Optics Inc., USA), and the RGB values of the characterized pigments are captured by the camera under D65 illumination. In this way, a mapping between the spectral reflectance and the RGB values of the pigments under D65 illumination can be built. Then, an artificial painting for visual enhancement is painted on drawing paper with the pure pigments. The image of this artificial painting is captured by the camera under the same D65 illumination. Finally, from the RGB values of the image of the artificial painting, the spatially varying spectral reflectance of the painting is obtained by applying the established pigment reflectance-to-RGB mapping to the image.

After segmenting the captured image described in the previous section, the following visual enhancement steps are executed. Firstly, the RGB values are calculated to ensure that the SPD of the reflected light by the painting, when illuminated by the projector, gives color coordinates that are equal to the color coordinates when the reference illuminant D65 illuminates the painting. Figure 8(a) shows the calculated input image for the projector. Figure 8(b) shows the geometrically corrected camera image of the painting when illuminated with this projected image (benchmark), and Fig. 8(c) shows the incident spectrum on the geological hammer. The input image for the projector is then adapted to realize the targeted colorfulness enhancement, hue tuning, and lightness tuning, respectively. The results of these visual enhancement tests can be seen in Fig. 9.


Fig. 8. The experimental painting (a. the input image of the projector to yield a color appearance equal to illumination with the reference illuminant, b. captured image when illuminated by this laser projected input image, c. incident spectra of the ROI area by the projector).



Fig. 9. Resulting images of the painting after increasing colorfulness, tuning hue, and reducing the lightness of the background ((a) geometrically corrected camera image of the painting when illuminated by the laser projector to have an equal color appearance as when illuminated by the D65 reference illuminant, (b) the input image of the projector for increased colorfulness, (c) resulting image after increasing the colorfulness of the ROI, (d) incident spectra on the ROI for increased colorfulness (the radiant power proportions of the RGB color channels of the projector are also shown), (e) the input image of the projector for hue tuning, (f) the resulting image after hue tuning, (g) incident spectra on the ROI for hue tuning, (h) input image for decreased background lightness, (i) resulting image after decreasing the background lightness, (j) incident spectra on the chosen area (outside the ROI) for decreased background lightness).


Figure 9(c) shows the result when the colorfulness of the geological hammer is increased. Note that the color appearance of the surrounding area (non-ROI) is the same as for the benchmark (i.e., compared with Fig. 9(a)). Such results cannot be obtained with a visual enhancement method in which the complete illumination spectrum is uniformly adapted; this selective visual enhancement is clearly a unique feature of our proposed method. The input image for the projector and the corresponding incident SPD on the ROI for realizing this colorfulness enhancement are shown in Fig. 9(b) and Fig. 9(d), respectively. When comparing Fig. 8(c) and Fig. 9(d), it is clear that the amount of red light in the incident SPD on the geological hammer is strongly reduced from 0.69 to 0.58, while the blue light is slightly increased from 0.55 to 0.59 and the green light is slightly increased from 0.54 to 0.58. The likely reason is that the blue and green light mix to a cyan (blue-green) percept, for which red is the complementary color; reducing the red channel of the light source therefore also enhances the blue-green color of the ROI.

Figure 9(f) shows the geometrically corrected camera image of the painting when the projector illumination is adapted to yield a new hue for the ROI, rotated by 5π/3 rad in the counterclockwise direction compared to the benchmark case. The hue of the geological hammer is changed from blue to green, and in order to do so, the light projected onto the geological hammer needs to be orange.

Figure 9(i) shows the obtained result when the lightness (J') of the background (non-ROI) is decreased to 30. In this case, the surrounding area is almost hidden, such that the visual salience of the geological hammer is significantly improved.

4. Conclusion and discussion

4.1 Conclusion

This paper investigates a new automated approach for point-by-point visual enhancement based on the objects' spatially varying reflectance, which is derived with a regular camera system. First, a geometric calibration of the projector-camera system is carried out to determine the spatial mapping from the projected pixel grid to the imaged pixel grid. Secondly, the scene is segmented, and the Region of Interest (ROI) is extracted for implementing the visual enhancement approach. Thirdly, the visual enhancement approach and three visual enhancement scenarios are proposed. Note that the required primary proportions of the laser projector can be obtained at once by numerical calculation. One of the three visual enhancement scenarios is applied by projecting the required color image onto the considered scene. Finally, the radiance calibration of the laser projector primaries is carried out to generate the required color image.

A painting for visual enhancement, consisting of a geological hammer (ROI) and a surrounding area, is inspired by color-blind test images and artificially fabricated. An experimental setup is built to test the approach. The color-tuning targets can be varied quantitatively and arbitrarily based on the three visual enhancement approaches. The results show that the visual salience of an object or ROI can be efficiently enhanced by our proposed methods for colorfulness enhancement, hue tuning, and background lightness reduction, respectively. Among the three approaches, background lightness reduction yields the strongest visual enhancement, as it almost completely hides the surrounding area.

4.2 Discussion

This paper focuses on point-by-point visual enhancement approaches. Although the application field is limited for now to artificial paintings (as the spatially varying spectral reflectance of the painting is obtained in this work by mapping spectral reflectance to RGB values), our proposed method will become quite powerful when combined with recent, and especially future, developments in low-cost micro hyperspectral sensor technology [22] or in methods for reflectance estimation from digital images [23–25]. With a low-cost and fast reflectance estimation from RGB images, the proposed method could be extended to other application fields (e.g., augmented reality [26], cultural heritage conservation [11]).

In addition, our proposed method is not only applicable to flat objects (paintings, paper, etc.), but can potentially also be used for 3D objects (which are more common in daily life) in combination with newly developed 3D projection technologies [27]. In future work, we would like to study 3D reconstruction [28,29], 3D projection, and visual enhancement methods for 3D object applications.

Funding

Fundamental Research Funds for the Central Universities (CUGL180404).

Acknowledgments

We thank Miss Bobing Zhang for drawing the painting.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but can be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

References

1. I. Rock and J. Victor, “Vision and touch: an experimentally created conflict between the two senses,” Science 143(3606), 594–596 (1964). [CrossRef]  

2. Z. Y. Xuan, J. Y. Li, Q. Q. Liu, F. Yi, S. W. Wang, and W. Lu, “Artificial structural colors and applications,” Innovation 2(1), 100081 (2021). [CrossRef]  

3. H. C. Wang, Y. T. Chen, J. T. Lin, C. P. Chiang, and F. H. Cheng, “Enhanced visualization of oral cavity for early inflamed tissue detection,” Opt. Express 18(11), 11800–11809 (2010). [CrossRef]  

4. J. F. Shen, S. Q. Chang, H. H. Wang, and Z. R. Zheng, “Optimal illumination for visual enhancement based on color entropy evaluation,” Opt. Express 24(17), 19788–19800 (2016). [CrossRef]  

5. J. Shen, S. Chang, H. Wang, and Z. Zheng, “Optimising the illumination spectrum for enhancing tissue visualisation,” Light. Res. Technol. 51(1), 99–110 (2019). [CrossRef]  

6. D. Durmus and W. Davis, “Optimising light source spectrum for object reflectance,” Opt. Express 23(11), A456–A464 (2015). [CrossRef]  

7. J. J. Zhang, R. Hu, B. Xie, X. J. Yu, X. B. Luo, Z. H. Yu, L. J. Zhang, H. Wang, and X. Jin, “Energy-saving light source spectrum optimization by considering object's reflectance,” IEEE Photonics J. 9(2), 1–11 (2017). [CrossRef]  

8. D. Durmus, “Characterizing color quality, damage to artwork, and light intensity of multi-primary LEDs for museums,” Heritage 4(1), 188–197 (2021). [CrossRef]  

9. D. Durmus, D. Abdalla, A. Duis, and W. Davis, “Spectral optimization to minimize light absorbed by artwork,” LEUKOS 16(1), 45–54 (2020). [CrossRef]  

10. F. Vienot, G. Coron, and B. Lavedrine, “LEDs as a tool to enhance faded colours of museums artefacts,” J. Cult. Herit. 12(4), 431–440 (2011). [CrossRef]  

11. D. Vázquez, A. A. Fernández-Balbuena, H. Canabal, C. Muro, and D. Durmus, “Energy optimization of a light projection system for buildings that virtually restores artworks,” Digital Appl. Archaeol. Cultural Heritage 16, e00128 (2020). [CrossRef]  

12. D. Durmus and W. Davis, “Blur perception and visual clarity in light projection systems,” Opt. Express 27(4), A216–A223 (2019). [CrossRef]  

13. J. J. Zhang, K. A. G. Smet, and Y. Meuret, “Tuning color and saving energy with spatially variable laser illumination,” Opt. Express 27(19), 27136–27150 (2019). [CrossRef]  

14. C. Li, M. R. Luo, C. J. Li, and G. H. Cui, “The CRI-CAM02UCS colour rendering index,” Color Res. Appl. 37(3), 160–167 (2012). [CrossRef]  

15. M. Deshmukh and U. Bhosle, “A survey of image registration,” Int. J. Image Process 5, 245 (2011).

16. J. Lee and H. Kang, “Flood fill mean shift: a robust segmentation algorithm,” Int. J. Control Autom. Syst. 8(6), 1313–1319 (2010). [CrossRef]  

17. H. Y. Song, H. F. Li, and X. Liu, “Studies on different primaries for a nearly-ultimate gamut in a laser display,” Opt. Express 26(18), 23436–23448 (2018). [CrossRef]  

18. M. Higlett, J. O’Hagan, and M. Khazova, “Laser display systems: Do we see everything?” J. Laser Appl. 30(2), 022007 (2018). [CrossRef]  

19. Illuminating Engineering Society of North America, “TM-30-20: IES method for evaluating light source color rendition,” (2020).

20. J. J. Zhang, B. Xie, X. Yu, X. Luo, T. Zhang, S. Liu, Z. Yu, L. Liu, and X. Jin, “Blue light hazard performance comparison of phosphor converted LED sources with red quantum dots and red phosphor,” J. Appl. Phys. (Melville, NY, U. S.) 122(4), 043103 (2017). [CrossRef]  

21. A. Geiger, F. Moosmann, O. Car, and B. Schuster, “Automatic Camera and Range Sensor Calibration using a single Shot,” in IEEE International Conference on Robotics and Automation (ICRA, 2012), pp. 3936–3943.

22. “Spectral sensors and camera modules,” https://spectricity.com/product/.

23. S. Tominaga, S. Nishi, R. Ohtera, and H. Sakai, “Improved method for spectral reflectance estimation and application to mobile phone cameras,” J. Opt. Soc. Am. A 39(3), 494–508 (2022). [CrossRef]  

24. W. Y. Zhang, H. Y. Song, and X. He, “Deeply learned broadband encoding stochastic hyperspectral imaging,” Light: Sci. Appl. 10(1), 108–975 (2021). [CrossRef]

25. S. M. Park, M. A. Visbal-Onufrak, M. M. Haque, M. C. Were, V. Naanyu, M. K. Hasan, and Y. L. Kim, “mHealth spectroscopy of blood hemoglobin with spectral super-resolution,” Optica 7(6), 563–573 (2020). [CrossRef]  

26. B. X. Wu, P. Liu, C. Xiong, C. M. Li, F. Zhang, S. W. Shen, P. F. Shao, P. Yao, C. S. Niu, and R. Xu, “Stereotactic co-axial projection imaging for augmented reality neuronavigation: a proof-of-concept study,” Quant. Imag. Med. Surg. 12(7), 3792–3802 (2022). [CrossRef]  

27. P. Kurth, V. Lange, M. Stamminger, and F. Bauer, “Real-time adaptive color correction in dynamic projection mapping,” in IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 174–184 (2020).

28. Y. M. Li, X. H. Qu, F. M. Zhang, and Y. J. Zhang, “Separation method of superimposed gratings in double-projector structured-light vision 3D measurement system,” Opt. Commun. 456, 124676 (2020). [CrossRef]  

29. H. Laga, L. V. Jospin, F. Boussaid, and M. Bennamoun, “A Survey on Deep Learning Techniques for Stereo-Based Depth Estimation,” IEEE Trans. Pattern Anal. Mach. Intell. 44(4), 1738–1764 (2022). [CrossRef]  

Supplementary Material (1)

Supplement 1: painting design and the detailed scene segmentation results
