
Systematic comparison of head mounted display colorimetric performance using various color characterization models


Abstract

The advancement of virtual reality in recent times has seen unprecedented applications in the scientific sphere. This work focuses on the colorimetric characterization of head mounted displays for psychophysical experiments for the study of color perception. Using a head mounted display to present stimuli to observers requires a full characterization of the display to ensure that the correct color is presented. In this paper, a practical display with color channel interactions is simulated, and the simulated data are characterized using the following models: gain offset gamma model, gain offset gamma offset model, gain gamma offset model, piecewise linear model assuming chromaticity constancy, piecewise linear model assuming variation in chromaticity, look-up table model, polynomial regression model, and an artificial neural network model. An analysis showed that the polynomial regression, artificial neural network, and look-up table models were substantially better than the other models in predicting a set of RGB values, which can be passed as input to a head mounted display to output desired target XYZ values. Both the look-up table and polynomial regression models could achieve a just noticeable difference between the actual input and predicted output color of less than 1. The gain offset gamma, gain offset gamma offset, and gain gamma offset models were not effective in colorimetric characterization, performing poorly for the simulations as they do not incorporate color channel interactions. The gain offset gamma model was the best among these three models, but the lowest just noticeable difference it could achieve was over 13, clearly too high for color science experiments.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Virtual reality devices are omnipresent in the current world. The use of head mounted displays (HMDs) dates back to the mid-20th century. Morton Heilig, a cinematographer, designed a telesphere mask in 1960 comprising optical and audio sensory units that could present immersive 3D slides [1]. In 1965, Ivan Sutherland envisioned the ultimate display that merged the real world into the digital to create an immersive space where the existence of matter could be controlled by a computer program. Three years later, he and his student Bob Sproull presented the first-ever HMD that enabled one to interact with the virtual world [2]. Many years later, virtual reality applications are everywhere: in medicine [3–6], architecture [7,8], education [9,10], sports [11–13], and so on. The immersive nature of virtual reality provides an opportunity to conduct complex simulations of real environments, and the scope is immense. Especially in areas like medicine [14], for instance clinical training [15], and applications like virtual prototyping in industry [16,17], high fidelity is extremely important. Low-fidelity image rendering for such applications can lead to wrong decisions [18]. Therefore, high-fidelity applications need accurately color-characterized virtual reality systems.

The monumental development of virtual reality HMDs in recent times provides a lot of promise, as they can provide immersive environments in a compact and economical way [19–23]. This progress makes them an ideal tool for researching the visual perception of complex scenes. Virtual reality allows systematic control of stimuli and the generation of complex scenes. Unlike the real world, where the creation of a new scene takes a lot of effort, a new virtual scene can be created with just a button click. This makes it an efficient and cost-effective tool to study perception and perform psychophysical experiments [24].

Although HMDs have been used for visual psychophysical experiments [25,26], there has not been a lot of research on the colorimetric characterization of HMDs. Pouli et al. used the Unity Engine, a popular game development and rendering software, and OpenVR Software Development Kits (SDK) to create a colorimetric characterization pipeline for a virtual reality HMD [27]. They achieved an average CIELAB color difference ($\Delta$E*ab) of 1.39, 1.23, and 4.42 for the Oculus Rift, HTC Vive, and Samsung S7 respectively, using the piecewise linear assuming chromaticity constancy (PLCC) model. Mehrfard et al. analyzed popular headsets based on various criteria including color accuracy. Their colorimetric results indicate $\Delta$E*ab values for different headsets ranging from 6.5 to 28.6, which is not sufficiently accurate for the study of color perception [28]. Clausen et al. designed a color management system for the HTC Vive Pro and reported that their system can be used for virtual prototyping [18]. They achieved a maximum CIE $\Delta E_{2000}$ color difference ($\Delta E_{00}$) of 2.3 and 3.4 for the HTC Vive Pro and Pimax 5K+ respectively. They used a gain offset gamma (GOG) model for color characterization. Although gamma-based characterization models and PLCC are quite common, the piecewise linear assuming variation in chromaticity (PLVC) model is also used for characterizing displays for which it provides better accuracy [29]. The PLCC and GOG models are easily invertible and do not require a lot of measurements for color characterization, but they are based on assumptions of channel independence and chromaticity constancy, which are not generally true for typical HMDs [27]. These assumptions were applicable to CRTs, but for recent display technologies with complex colorimetric behaviors, such simple models may not be accurate enough to perform color characterization [29–31].

Head-mounted displays can suffer from additivity issues, as has been reported in the literature [26,32]. When a display has additivity problems, the individual outputs of the red, green, and blue channels do not exactly add up to match the output when all three channels are turned on simultaneously. In such a case, a simple display characterization model like the GOG model may not be a good choice [33,34]. On the other hand, in such complex cases, artificial neural networks (ANNs), with their adeptness at handling non-linearities, might be put forward [35–37]. A neural network treats the head-mounted display like a black box and does not take the device’s properties explicitly into consideration.

Neural networks and machine learning methods have been used before to address the characterization of cameras and printers. Cheung et al. used ANNs and polynomial models to characterize cameras and found that both are effective color characterizers: their best polynomial and neural network-based camera characterization models could achieve a median $\Delta$E*ab of 2.57 and 2.89 respectively [35]. Moving from cameras to scanners, Vrhel and Trussell developed a characterization method for scanners using neural networks and could achieve an average $\Delta$E*ab value of 2.26 [36]. Prats-Climent et al. showed that neural networks can be used for the display characterization of LCDs. They achieved a mean $\Delta E_{00}$ value of 2.6 with a neural network having hidden layers of 256 and 64 units [38], which shows the effectiveness of this family of models.

Although characterization models such as PLCC have been shown to have quite good color accuracy for both regular LCD displays and VR HMDs, even higher accuracy might be required for certain high-fidelity applications, such as virtual prototyping or the study of visual perception in virtual environments and how it relates to that in real scenes. Therefore, in this paper, a systematic analysis of various display color characterization methods, such as GOG, PLCC, PLVC, look-up table (LUT), polynomial-based, and ANN-based methods, is performed with the aim of further reducing the color characterization error. The models are trained and tested on simulated colorimetric data and their colorimetric characterization ability is discussed in detail. Simulated data have been used to easily generate training and test data sets and to investigate the impact of the number of training points and other parameter settings, such as the degree of the polynomial in polynomial regression (POR) and the hidden layer sizes in artificial neural networks (ANNs), on the accuracy of the color characterization methods studied.

2. Methodology

First, the various color characterization models and parameters affecting their accuracy are discussed. Next, the generation of the training and test sets for the simulated display are explained. Finally, the quantification of color characterization accuracy is described.

2.1 Color characterization models

This segment introduces all the models tested on colorimetric data simulated using a virtual display.

2.1.1 Gamma function-based models

Gamma function-based models are traditional two-stage models used for color-characterizing displays. The first stage involves a non-linear transformation from RGB display input values to linearized RGB values, while in the second stage, the linearized RGB values are converted to XYZ tristimulus values using a $3 \times 3$ conversion matrix [33,39]. The gain offset gamma offset (GOGO) model is the most general in this family of models [40]. The GOGO model is defined first and then the derivation of the GOG and gain gamma offset (GGO) models is explained.

In the first stage, the linearized RGB values (R, G, B) are obtained from the display input values (r, g, b) using a power function as described in Eqs. (1)–(3), where $a$, $o_i$, $o_o$, and $\gamma$ refer to the gain, internal offset, external offset, and gamma exponent of the model.

$$R = (a \times \frac{r}{2^N-1}+ o_i)^{\gamma}+ o_o$$
$$G = (a \times \frac{g}{2^N-1}+ o_i)^{\gamma}+ o_o$$
$$B = (a \times \frac{b}{2^N-1}+ o_i)^{\gamma}+ o_o$$

Here, N is the number of bits used to encode each color channel information. Next, the XYZ tristimulus values are derived from linearized RGB values using a conversion matrix as defined in Eq. (4).

$$\begin{bmatrix} X \\Y \\Z \end{bmatrix}= \begin{bmatrix} X_{r,max} & X_{g,max} & X_{b,max} \\ Y_{r,max} & Y_{g,max} & Y_{b,max}\\ Z_{r,max} & Z_{g,max} & Z_{b,max} \end{bmatrix} \begin{bmatrix} R \\G \\B \end{bmatrix}$$

Here, ($X_{r,max}, Y_{r,max}, Z_{r,max}$) are the tristimulus values measured with the red channel of the display at its maximum; ($X_{g,max}, Y_{g,max}, Z_{g,max}$) and ($X_{b,max}, Y_{b,max}, Z_{b,max}$) are defined similarly for the maximum green and blue channels respectively.
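As a minimal sketch (not the authors' implementation), the forward direction of this model family, Eqs. (1)–(4), can be written directly in Python; the gain, offsets, gamma, and primary tristimulus values below are illustrative placeholders, not measured data.

```python
import numpy as np

def gogo_forward(rgb, a, o_i, o_o, gamma, M, N=8):
    """Map display input (r, g, b) to XYZ with the GOGO model, Eqs. (1)-(4)."""
    rgb = np.asarray(rgb, dtype=float)
    lin = (a * rgb / (2**N - 1) + o_i) ** gamma + o_o   # linearized R, G, B
    return M @ lin                                       # Eq. (4)

# Illustrative primary matrix and parameters only (assumed, not from the paper):
M = np.array([[41.2, 35.8, 18.0],    # X_r,max  X_g,max  X_b,max
              [21.3, 71.5,  7.2],    # Y_r,max  Y_g,max  Y_b,max
              [ 1.9, 11.9, 95.0]])   # Z_r,max  Z_g,max  Z_b,max
xyz = gogo_forward([128, 200, 64], a=1.0, o_i=0.0, o_o=0.0, gamma=2.2, M=M)
```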

For some devices, there might still be some light output even when there is no input to the device. If the tristimulus values from the black-level estimation are ($X_{k}, Y_{k}, Z_{k}$), then the black-corrected calculation is given by Eq. (5) [29].

$$\begin{bmatrix} X \\Y \\Z \end{bmatrix} - \begin{bmatrix} X_{k} \\Y_{k} \\Z_{k} \end{bmatrix}= \begin{bmatrix} X_{r,max} - X_{k} & X_{g,max} - X_{k} & X_{b,max} - X_{k}\\ Y_{r,max} - Y_{k} & Y_{g,max} - Y_{k} & Y_{b,max} - Y_{k}\\ Z_{r,max} - Z_{k} & Z_{g,max} - Z_{k} & Z_{b,max} - Z_{k} \end{bmatrix} \begin{bmatrix} R \\G \\B \end{bmatrix}$$

The matrix in either Eq. (4) or (5) contains only 9 entries. Although this matrix can easily be created from only three measurements, a more optimized matrix can be obtained by minimizing the root mean square error (RMSE) between the measured XYZ tristimulus values and those predicted by the model for a large number of displayed RGB values. In this work, the conversion matrix elements were optimized by minimizing the RMSE in CIELAB color space. Henceforth, the GOGO models with a fixed (Eq. (5)) and with an optimized transfer matrix are referred to as GOGOmf and GOGOmo respectively.
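A sketch of how such an optimization could be set up, assuming arrays rgb_lin (linearized RGB triplets), xyz_meas (the corresponding measured XYZs), the display white point white, and the fixed matrix M0 of Eq. (5) as the starting point; this only illustrates the idea and is not the exact optimizer used in this work.

```python
import numpy as np
from scipy.optimize import minimize

def xyz_to_lab(xyz, white):
    """Standard CIE 1976 L*a*b* conversion relative to the given white point."""
    t = np.asarray(xyz, dtype=float) / white
    f = np.where(t > (6/29)**3, np.cbrt(t), t / (3 * (6/29)**2) + 4/29)
    return np.stack([116 * f[..., 1] - 16,
                     500 * (f[..., 0] - f[..., 1]),
                     200 * (f[..., 1] - f[..., 2])], axis=-1)

def cielab_rmse(m_flat, rgb_lin, xyz_meas, white):
    """RMSE in CIELAB between matrix predictions and measurements."""
    M = m_flat.reshape(3, 3)
    diff = xyz_to_lab(rgb_lin @ M.T, white) - xyz_to_lab(xyz_meas, white)
    return np.sqrt(np.mean(np.sum(diff**2, axis=-1)))

# res = minimize(cielab_rmse, M0.ravel(), args=(rgb_lin, xyz_meas, white),
#                method="Nelder-Mead")
# M_opt = res.x.reshape(3, 3)
```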

The GOG and GGO models can be derived from Eqs. (1)–(3): if the external offset is set to zero, the GOG model is obtained; if the internal offset is set to zero, the GGO model is obtained. Analogous to the GOGOmf and GOGOmo models, the GOGmf, GOGmo, GGOmf, and GGOmo models can be defined.

2.1.2 PLCC

Unlike the gamma-based models, which use a gamma function to convert RGB display input values to linearized RGB values, the PLCC model uses a set of three piecewise linear functions, one for each channel [39,41]. Let $f_c$ be the piecewise linear function for channel $C \in \{R, G, B\}$, with $c \in \{r, g, b\}$ the corresponding display input value. The transformation from display input RGB to linearized RGB for corresponding elements $C$ and $c$ can then be represented by Eq. (6).

$$C = f_c(c)$$

The equations for converting the linearized RGBs to XYZs, including black correction, are identical to the ones defined earlier in Eqs. (4) and (5) for the GOGO model. Similar to the GOGOmo and GOGOmf models, a PLCC model with an optimized transfer matrix (henceforth referred to as PLCCmo) has also been investigated, in addition to the standard PLCC model with a fixed transfer matrix (referred to as PLCCmf).
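A minimal PLCC sketch, assuming a dictionary ramps that holds, per channel, the measured single-channel ramp (input levels and linearized responses), the black-corrected matrix M of Eq. (5), and the black tristimulus values xyz_k; all names are illustrative.

```python
import numpy as np

def plcc_forward(rgb, ramps, M, xyz_k):
    """PLCC forward prediction: piecewise linear tone curves (Eq. (6))
    followed by the black-corrected matrix of Eq. (5)."""
    lin = np.array([np.interp(v, *ramps[c])       # per-channel linearization
                    for v, c in zip(rgb, "rgb")])
    return M @ lin + np.asarray(xyz_k)            # Eq. (5), rearranged for XYZ
```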

2.1.3 PLVC

Both the gamma-based and PLCC models use a matrix to convert linearized RGB to tristimulus XYZ values under the assumption that the primary chromaticities are constant with increasing drive input [39]. For displays lacking chromaticity constancy, the PLVC model is more robust and works better than the gamma-based and PLCC models [39,40]. The chromaticity errors are also lower for the PLVC model than for the PLCC model, as the latter has its primary colorimetric values set at maximum intensity [29,40].

For a device with N primaries, let $d_i(m_i)$ be the digital input to the $i$th primary, with $m_i$ an integer between 0 and $2^n-1$ for a channel encoded with n bits. Here, $i \in [0, N-1]$.

A color XYZ($\cdots,d_i(m_i)$,…) can then be represented by Eqs. (7)–(9) [39]:

$$X(\cdots,d_i(m_i),\ldots) = \sum_{i = 0, j = m_i}^{i = N - 1} [X(d_i(j)) - X_k] + X_k$$
$$Y(\cdots,d_i(m_i),\ldots) = \sum_{i = 0, j = m_i}^{i = N - 1} [Y(d_i(j)) - Y_k] + Y_k$$
$$Z(\cdots,d_i(m_i),\ldots) = \sum_{i = 0, j = m_i}^{i = N - 1} [Z(d_i(j)) - Z_k] + Z_k$$

$X_k$, $Y_k$, and $Z_k$ are the tristimulus values for black.

Equations (10)–(12) give an example for a device with three primaries R, G, and B, with 8 bits used to encode each channel. Let $d_r(i), d_g(j)$, and $d_b(l)$ be the digital inputs, with $i,j,l \in [0,255]$.

$$X(d_r(i),d_g(j),d_b(l)) = [X(d_r(i)) - X_k] + [X(d_g(j)) - X_k] + [X(d_b(l)) - X_k] + X_k$$
$$Y(d_r(i),d_g(j),d_b(l)) = [Y(d_r(i)) - Y_k] + [Y(d_g(j)) - Y_k] + [Y(d_b(l)) - Y_k] + Y_k$$
$$Z(d_r(i),d_g(j),d_b(l)) = [Z(d_r(i)) - Z_k] + [Z(d_g(j)) - Z_k] + [Z(d_b(l)) - Z_k] + Z_k$$

To obtain $X(d_r(i),d_g(j),d_b(l))$, $Y(d_r(i),d_g(j),d_b(l))$, and $Z(d_r(i),d_g(j),d_b(l))$, a color ramp is measured for each of the red, green, and blue primaries and one-dimensional linear interpolation is performed [39].
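A sketch of this prediction for a three-primary, 8-bit display, assuming ramp_xyz holds a full (measured or interpolated) 256-entry XYZ ramp per channel and xyz_k the black tristimulus values; the names are illustrative.

```python
import numpy as np

def plvc_forward(rgb, ramp_xyz, xyz_k):
    """PLVC prediction of Eqs. (10)-(12): sum the black-corrected channel
    contributions and add the black level back once."""
    xyz_k = np.asarray(xyz_k, dtype=float)
    xyz = xyz_k.copy()
    for value, c in zip(rgb, "rgb"):
        xyz += ramp_xyz[c][int(value)] - xyz_k    # [XYZ(d_c) - XYZ_k]
    return xyz
```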

In this work, as suggested by various authors, a black-level correction has always been adopted for the GOG-, PLCC-, and PLVC-based models [39,42,43].

2.1.4 POR

POR is a special type of regression that is useful for non-linear data, as it does not assume a linear relationship between the dependent and independent variables. POR models the non-linear relationship by adding higher-degree terms to linear regression, which in the multivariate case includes interactions between variables. The advantage of the latter is that these models can therefore, in principle, deal with interactions that might occur between the display primary channels [44–46]. The simplest polynomial, of degree 1, uses only R, G, and B. A degree 2 polynomial adds the squared terms $R^2$, $G^2$, and $B^2$ and the interaction terms RG, RB, and GB. Equations (13) and (14) list the terms used in POR models up to degree 2 [46].

$$p_1 = [1, R, G, B]$$
$$p_2 = [1, R, G, B, R^2, G^2, B^2, RG, RB, GB]$$

An instance of modeling linearized RGB to XYZ using a degree 2 polynomial is shown in Eq. (15) [47].

$$\begin{bmatrix} X \\Y \\Z \end{bmatrix}= \begin{bmatrix} c_{x,1} & c_{x,2} & c_{x,3} & c_{x,4} & c_{x,5} & c_{x,6} & c_{x,7} & c_{x,8} & c_{x,9} \\ c_{y,1} & c_{y,2} & c_{y,3} & c_{y,4} & c_{y,5} & c_{y,6} & c_{y,7} & c_{y,8} & c_{y,9}\\ c_{z,1} & c_{z,2} & c_{z,3} & c_{z,4} & c_{z,5} & c_{z,6} & c_{z,7} & c_{z,8} & c_{z,9} \end{bmatrix} \begin{bmatrix} 1 \\R \\G \\B \\R^2 \\G^2 \\RG \\RB \\GB \end{bmatrix}$$

The $c_{m,i}$ terms represent the polynomial coefficients, where $1 \leq i \leq 9$ and $m \in \{x,y,z\}$. The coefficients are determined by solving a system of linear equations.

To test the impact of the polynomial degree on the accuracy of the POR model, the degree was varied from 1 to 8. The dispcal toolbox from the LuxPy package for color science [48] is used to model the relationship between XYZ and RGBdisplay.
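As a generic sketch of the idea (the actual fits in this work use the LuxPy dispcal toolbox), the polynomial coefficients can be obtained by linear least squares on expanded polynomial features; the mapping is fitted in the XYZ-to-RGB direction used for training (Section 2.2.1), and the function names are illustrative.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

def fit_por(xyz_train, rgb_train, degree=2):
    """Fit XYZ -> RGB with all polynomial terms up to the given degree."""
    poly = PolynomialFeatures(degree=degree)      # 1, X, Y, Z, X^2, XY, ...
    P = poly.fit_transform(xyz_train)             # (n_samples, n_terms)
    coeffs, *_ = np.linalg.lstsq(P, rgb_train, rcond=None)
    return poly, coeffs

def predict_por(poly, coeffs, xyz):
    return poly.transform(np.atleast_2d(xyz)) @ coeffs
```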

2.1.5 LUT

A LUT is a dictionary that contains (key, value) pairs. To find the value for a particular key, one just needs to look up that key in the dictionary. Even before the time of computers, LUTs were used for calculating complex functions like trigonometric and logarithmic functions [49]. In the context of color characterization, an XYZ-to-RGB LUT would have a set of XYZ values as the keys and a corresponding set of RGB values as the values.

However, due to the continuous nature of XYZ tristimulus values, creating such a LUT would be impossible without interpolation between its discrete entry points (keys). Even a LUT containing one (X, Y, Z) key for each of the $256^3$ (R, G, B) combinations (= 16,777,216 entries) of three 8-bit channels would not be of practical value, as such a LUT can take over 450 MB of space when the XYZs are stored as double-precision floats [50]. LUTs for colorimetric characterization are therefore typically composed of only a limited number of entries, with intermediate values derived using interpolation. For a query point, the tetrahedron containing it is located and a barycentric interpolation is done. The tetrahedron is determined using a Qhull (Quickhull)-based technique. If a system has n particles $x_1, x_2, \ldots, x_n$ with weights $w_1, w_2, \ldots, w_n$, the barycenter of the system is the point x that satisfies Eq. (16) [51].

$$\sum_{i=1}^{n} w_i(x-x_i) = 0$$

Solving Eq. (16), one obtains the formula for the barycenter, as in Eq. (17).

$$x = \frac{\sum_{i=1}^{n} w_ix_i}{\sum_{i=1}^{n} w_i}$$
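As an illustration of this interpolation scheme, SciPy's LinearNDInterpolator builds a Qhull-based Delaunay tessellation of the XYZ keys and performs exactly this barycentric interpolation of the stored RGB values inside each simplex; xyz_keys and rgb_values are assumed training arrays.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

def build_lut(xyz_keys, rgb_values):
    """XYZ -> RGB LUT with tetrahedral (barycentric) interpolation."""
    return LinearNDInterpolator(xyz_keys, rgb_values)

# lut = build_lut(xyz_keys, rgb_values)
# rgb_pred = lut(target_xyz)   # returns NaN outside the convex hull of the keys
```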

2.1.6 ANN

A fully connected feed-forward neural network is used as one of the models. It is a simple network with 3 input nodes for the X, Y, and Z input values, N hidden layers, and 3 output nodes for predicting the R, G, and B values. In a feed-forward network, information travels in a single direction, without any loops or cycles. An ANN with one or more hidden layers can approximate any continuous function [52,53], so an ANN can be used to estimate the relation between XYZs and the corresponding RGBs.

To test the impact of the hidden layer size on the accuracy of the ANN, ANNs with hidden layer sizes of 10, 20, 40, 80, 100, 130, 160, 200, 300, 400, 500, 600, 700, and 800 neurons were trained on each of the simulated training data sets (which vary in the number of training points). After testing these models with a single hidden layer, the best-predicting model was chosen. For this model, the single layer was redistributed into multiple layers such that the new ANN had the same overall number of nodes. Thus, the impact of the redistribution of nodes was also tested.

The MLPRegressor module from Scikit-learn was used to implement the model [54]. The MLPRegressor module provides four options for the activation function: identity, logistic, tanh, and ReLU. An identity function, for which the output is the same as the input, would not be suitable here. The logistic activation function is more suitable for classification algorithms, and the tanh activation suffers from a vanishing gradient problem [55,56]. Since the ReLU function has been found to produce superior results in many cases, it was chosen as the activation function [55]. The Adam optimizer was chosen as it is quite effective for relatively large datasets with respect to both training and validation, and it is also quite robust to the choice of hyperparameters [54,57]. An adaptive learning rate helps in minimizing a network’s objective function, choosing the proper rate based on the objective function’s gradient and the parameters of the network. If the learning rate is too high, a model can converge too quickly to a suboptimal solution, while a smaller learning rate can result in extremely slow convergence. Thus, the learning rate was chosen to be adaptive [58].
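A sketch of the scikit-learn configuration described above; the hidden layer sizes shown correspond to one of the tested settings (two layers of 300 neurons), and the training and test arrays are assumed to be available.

```python
from sklearn.neural_network import MLPRegressor

ann = MLPRegressor(hidden_layer_sizes=(300, 300),  # one of the tested settings
                   activation="relu",
                   solver="adam",
                   learning_rate="adaptive",
                   max_iter=2000)
# ann.fit(xyz_train, rgb_train)      # XYZ in, RGB out (see Section 2.2.1)
# rgb_pred = ann.predict(xyz_test)
```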

2.2 Training and test data

2.2.1 Training data

For the purpose of training different color characterization models, training pairs (XYZtrain, RGBdisplay_train) are created. The models are trained with XYZtrain as input and RGBdisplay_train as output. By knowing this relation, one can figure out which RGBdisplay_train should be sent to the display to get a certain target XYZ.

Data simulation is done with the help of a virtual display, a set of mathematical equations that models a typical real display. The advantage of using a virtual display is that it allows quick generation of training sets by changing simple properties [59]. For instance, one can have a densely or sparsely spaced dataset by simply changing the R, G, and B increments of the virtual display model inputs. The virtual display allows the easy creation of large datasets, which would be very time-consuming to do using real measurements.

The designed virtual display is based on the display model by Kwak et al. [60]. The Kwak display model is very comprehensive and takes into account the additivity of primaries and inter-channel dependence, which, as noted above, are major issues in characterizing HMDs.

The process for converting input RGBdisplay values to CIE 1931 $2^{\circ }$ XYZ tristimulus values (with Ymax = 100), termed Forward Display Mode, is shown in Algorithm 1.


Algorithm 1. Forward Display Mode

In the above algorithm, the matrix C = [1 R G B RG GB BR RGB]$'$ and the function f is given by Eq. (18). For red, green, and blue channels, the values of $\{\alpha, \beta, C\}$ in Eq. (18) are $\{3.308,3.157,3.118\}$, $\{10.783,7.166,7.956\}$, and $\{2.394,1.551,1.204\}$ respectively [60].

$$f(x) = \frac{x^\alpha}{x^\beta+C}$$

The derivative of the function $f$ is given by Eq. (19).

$$f'(x) = \frac{(\alpha-\beta)x^{\alpha+\beta-1}+\alpha C x^{\alpha-1}}{(x^\beta+C)^2}$$
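For reference, the tone curve of Eq. (18) and its derivative, Eq. (19), translate directly to code; alpha, beta, and C stand for the per-channel constants quoted above.

```python
def f(x, alpha, beta, C):
    """Channel tone curve, Eq. (18)."""
    return x**alpha / (x**beta + C)

def f_prime(x, alpha, beta, C):
    """Derivative of the tone curve, Eq. (19)."""
    num = (alpha - beta) * x**(alpha + beta - 1) + alpha * C * x**(alpha - 1)
    return num / (x**beta + C)**2
```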

Equation (20) lists the values $A_{ij}$ where i, j $\in \{r, g, b\}$ [60].

$$\begin{bmatrix} A_{rr} & A_{rg} & A_{rb} \\ A_{gr} & A_{gg} & A_{gb}\\ A_{br} & A_{bg} & A_{bb} \end{bmatrix} = \begin{bmatrix} 3.394 & -0.030 & 0.016 \\ 0 & 2.550 & -0.007\\ 0 & 0.002 & 2.203 \end{bmatrix}$$

The matrix S represents the dominant linear relationship between monitor luminance levels and output CIE tristimulus values, and T is the channel inter-dependency matrix. The values for S and T are obtained from [60] and shown in Eqs. (21) and (22).

$$S = \begin{bmatrix} 0.241 & 0.417 & 0.172 \\ 0.129 & 0.814 & 0.056\\ 0.001 & 0.036 & 0.945 \end{bmatrix}$$
$$T = \begin{bmatrix} -0.0023 & 1.0033 & -0.0011 & 0.0032 & 0.0004 & -0.0019 & -0.0043 & 0.0030 \\ -0.0008 & 0.0008 & 1.0011 & 0.0005 & -0.0012 & -0.0007 & -0.0006 & 0.0009\\ 0.0000 & 0.0002 & 0.002 & 1.000 & -0.0021 & -0.0002 & -0.0002 & 0.0023 \end{bmatrix}$$

Various training data sets were generated, each composed of two subsets: the first specifically impacting the accuracy of the tone response curve and the second impacting the accuracy of the transfer matrix. The first subset was generated by ramping each channel separately from 0 to 255 (255 always included) with a specific channel increment $\delta_1$ (called pure (R, G, B)-channel increments). The second subset was composed of an RGB cube with a specific increment $\delta_2$ (called non-pure (R, G, B)-channel increments), from which the RGB triplets with only one channel turned on (single-channel ramps) have been deleted. Training data sets were generated for all the considered models by making all possible combinations of the first and second subsets for increment values of $\delta_1$ and $\delta_2$ equal to {6, 8, 10, 12, 14, 16, 18, 20, 25, 30, 35, 40, 50}. Note that the triplets [255,0,0], [0,255,0], and [0,0,255] were also included in each set (if not already present).
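A sketch of this training-set construction for 8-bit channels, following the description above (single-channel ramps with increment $\delta_1$, an RGB cube with increment $\delta_2$ from which single-channel triplets are removed, and the full-drive primaries always appended); the function name is illustrative.

```python
from itertools import product

def make_training_rgbs(delta1, delta2):
    ramp = sorted(set(range(0, 256, delta1)) | {255})          # pure-channel levels
    pure = [(v, 0, 0) for v in ramp] + \
           [(0, v, 0) for v in ramp] + \
           [(0, 0, v) for v in ramp]                            # first subset
    axis = sorted(set(range(0, 256, delta2)) | {255})
    cube = [t for t in product(axis, repeat=3)
            if sum(x > 0 for x in t) != 1]                      # drop single-channel triplets
    primaries = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]
    return sorted(set(pure) | set(cube) | set(primaries))

# rgb_train = make_training_rgbs(delta1=10, delta2=20)
```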

2.2.2 Test data

Test data are needed to evaluate the performance of the models. Test data are not seen by a model during training, so they can be used to determine the accuracy of a trained model. Test sets of (XYZground truth, XYZmeasured) pairs were generated for each model by generating a number of random XYZ values uniformly distributed over the display’s color gamut (= ground truth), calculating the corresponding RGB values using the trained model, and converting them back to XYZ using the Forward Display Mode for the simulated data.

For each test set, 10000 pairs were generated. All models were tested with the same test sets. For analysis purposes, multiple test sets were generated, each with a different seed for the random number generator.
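A sketch of this evaluation loop, assuming the trained characterization model exposes a predict method (XYZ in, RGB out) and that forward_display_mode implements Algorithm 1; clipping the predicted RGB values to the display range is an added assumption.

```python
import numpy as np

def run_test(model, xyz_ground_truth, forward_display_mode):
    """Return (ground-truth XYZ, measured XYZ) pairs for a test set."""
    rgb_pred = np.clip(model.predict(xyz_ground_truth), 0, 255)
    xyz_measured = np.array([forward_display_mode(rgb) for rgb in rgb_pred])
    return xyz_ground_truth, xyz_measured
```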

2.3 Model accuracy testing

The $\Delta$E*ab color differences between the ground truth XYZ values and the measured XYZ values (corresponding to a predicted RGBdisplay triplet) for the test sets give an idea of the accuracy of each model. A CIELAB color difference with a Euclidean distance of 1 is assumed to correspond to 1 just-noticeable-difference (JND) [61]. CIELAB color space has been shown to be especially useful for image analysis and applications where there are requirements for decision-making regarding color acceptability [62].

Similar to previous research [27,28,35,36], $\Delta$E*ab values were calculated using as white point the XYZ value obtained from the forward display mode with RGB = (255, 255, 255) as input. The color difference algorithm is presented as Algorithm 2.


Algorithm 2. CIELAB color difference calculation
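A sketch of this calculation: both XYZ sets are converted to CIELAB relative to the display white (RGB = (255, 255, 255) passed through the Forward Display Mode) and the Euclidean distance is taken; the helper below simply repeats the standard CIELAB conversion.

```python
import numpy as np

def delta_e_ab(xyz_ground_truth, xyz_measured, xyz_white):
    def to_lab(xyz):
        t = np.asarray(xyz, dtype=float) / xyz_white
        f = np.where(t > (6/29)**3, np.cbrt(t), t / (3 * (6/29)**2) + 4/29)
        return np.stack([116 * f[..., 1] - 16,
                         500 * (f[..., 0] - f[..., 1]),
                         200 * (f[..., 1] - f[..., 2])], axis=-1)
    return np.linalg.norm(to_lab(xyz_ground_truth) - to_lab(xyz_measured), axis=-1)
```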

3. Results and discussion

For obtaining the results using simulated data, the simulations were done multiple times, with different seeds for the training and test data in each simulation. Each time, a model was trained and the 95th percentile error ($\Delta$E*ab) between ground truth and predicted value was calculated for the test set. The 95th percentile of all these errors was then calculated to obtain the error reported for a particular model. The 95th percentile was used to get an estimate of the maximum error while reducing the impact of outliers, to which a max function would be highly susceptible.
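A small sketch of this nested statistic, assuming the ΔE*ab values of each simulation are collected in a list of arrays (one array per seed).

```python
import numpy as np

def nested_p95(delta_e_per_simulation):
    """95th percentile of the per-simulation 95th percentile errors."""
    p95_each = [np.percentile(d, 95) for d in delta_e_per_simulation]
    return float(np.percentile(p95_each, 95))
```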

3.1 Gamma function-based models

The GGO, GOG, and GOGO models were all applied to the simulated data, with fixed and optimized matrices. None of these models performs well on the simulated data. With a fixed matrix, the lowest JND obtained by the GGO model is over 19 and the highest surpasses 23. The situation is similar for the GOG and GOGO models with a fixed matrix: the highest JND obtained by the GOG model goes over 22, and for the GOGO model the JND ranges from around 22 to 27. The GOGO model reports the highest JND of the three models when a fixed matrix is used. With a fixed matrix, variations in the pure or non-pure channel increments do not make much of a difference.

With an optimized matrix, the GGO, GOG, and GOGO models could achieve a minimum JND of around 17, 13, and 17 respectively. All these JNDs are clearly too high for color perception studies.

3.2 PLCC

The minimum JND achieved by the PLCC model is nearly 8, which is quite high. The JNDs go from nearly 9 to 12 when the pure channel increment is greater than or equal to 35.

With an optimized matrix, the best the PLCC model can do colorimetrically is a JND of around 8 at a pure and non-pure RGB increment of 8. For pure channel increments of 20 or less and non-pure channel increments of 35 or below, the JNDs stay below 9. Beyond that, much higher JNDs are obtained, going above 12. The likely cause of the high JNDs obtained by the PLCC model is that this model cannot handle channel interactions and, as mentioned in the introduction, modern display devices are complex, unlike CRTs.

3.3 PLVC

At lower RGB channel increments, the PLVC model performs fairly well and is comparable to models like the ANN. The best JND of 1.3 is obtained when the pure and non-pure channel increments are both 6. For increments of 14 or less, the JND stays around 3. The PLVC model works much better than the PLCC or GOG models, with lower chromaticity errors, in accordance with what has been reported in the literature [39].

3.4 POR

Several polynomial regression models, varying in polynomial degree, were applied to the data. Without black-level correction, they performed very poorly, as seen in Fig. 1. The errors are high, with the minimum JND being almost 8. This confirms the statement in the literature, mentioned earlier, that black-level correction is necessary for this model.


Fig. 1. Color characterization error (95th percentile of 95th percentile) as a function of RGB axes increments for Polynomial Regression model with simulated data, without black level correction.


Therefore, all these models were black-level corrected. With the simplest polynomial model, of degree one, a JND of around 5 can be obtained with a non-pure channel increment of 8 and a pure channel increment of 50. Even if the non-pure channel increment is increased to 16, the JND stays around the same while requiring fewer data points.

As the polynomial degree increases, the models become more complex and can perform better colorimetric characterization. For the same non-pure channel increment of 8 and pure channel increment of 50, a polynomial regression model with a second-degree polynomial obtains an improved JND below 5. The best JND obtained by the second-degree polynomial model is around 2, when the pure and non-pure RGB channel increments are 8. This configuration would need a lot of measurements, as the inner RGB cube is very dense. If the pure and non-pure RGB increments are increased to 18, the JND changes only by a small amount. Moving from a second-degree to a third-degree model does not show a lot of improvement, but JNDs of under 2 can be obtained.

With higher-degree models of fourth and fifth degree, JNDs under 1.5 are obtained. With a degree 6 model, a JND of just under 1 can be obtained with a non-pure channel increment and a pure channel increment of 14. As such, a degree 6 polynomial regression model can perform a very accurate color characterization. This can be seen in Fig. 2.


Fig. 2. Color characterization error (95th percentile of 95th percentile) as a function of RGB axes increments for Polynomial Regression model (equipped with degree 6 polynomial) with simulated data.


Further increasing the degree of the polynomial models does not lead to an improvement of the colorimetric characterization performance. For a seventh-degree model, although the minimum JND obtained is around 3, the maximum is almost 70. It gets worse for an eighth-degree model, for which even the minimum JND is above 20. With an increase in the polynomial degree, the number of parameters used by the model increases, and overfitting may lead to poor predictions.

3.5 LUT

Although this model performs better because it implicitly includes color channel interactions, it needs a lot of measurements. To achieve a JND of just around 1, a non-pure channel increment of 25 and a pure channel increment of 12 are needed. This can be seen in Fig. 3.


Fig. 3. Color characterization error (95th percentile of 95th percentile) as a function of the training set increments for LUT model with simulated data.


3.6 ANN

With hidden layer sizes of 130 or less, the model does not perform well and reports very high JNDs of up to 30. This is evident from Fig. 4. It is understandable, because a model of such low complexity is not powerful enough to learn the relation between the input and output data.


Fig. 4. Color characterization error (95th percentile of 95th percentile) as a function of the training set increment and the hidden layer sizes in the ANN for simulated data.


At a low increment of 6 and a hidden layer of 300 neurons, the neural network can achieve a JND of under 3. The lowest JND for any hidden layer size is always obtained at an increment of 6. At this increment, with 600 or more neurons, a JND of just about 2 is obtained. At larger increments of around 20, which correspond to fewer measurements, JNDs of nearly 4 can be obtained. At increments beyond 30, though, the model does not perform colorimetric characterization well, clearly because the training dataset is too sparse.

The above analysis was done with an ANN having a single hidden layer with a number of neurons in the set $\{10, 20, 40, 80, 100, 130, 160, 200, 300, 400, 500, 600, 700, 800\}$. Since the best JND was obtained with a hidden layer of 600 neurons, an analysis was also done by redistributing the 600 neurons over multiple hidden layers: the first case with 3 hidden layers of 200 neurons each, the next with 2 hidden layers of 300 neurons each, and the final case with 6 hidden layers of 100 neurons each. The results are presented in Fig. 5.


Fig. 5. Color characterization error (95th percentile of 95th percentile) as a function of the training set increment and multiple hidden layers in the ANN for simulated data.


Changing the distribution of nodes in the ANN from a single hidden layer of 600 neurons to 2 layers of 300 neurons each improves the colorimetric performance of the model, as the JND drops from 1.95 to 1.45 at an increment of 6. Figure 5 shows the impact of node redistribution on the colorimetric performance of the model. With 3 hidden layers of 200 neurons each at an increment of 6, the JND stays below 2. With 6 hidden layers of 100 neurons each at the same increment, though, the performance of the network deteriorates, as the JND rises above 2.5. The ANN, given its complexity and adeptness at handling non-linearities, works much better than the simple analytical gamma-based or PLCC models.

4. Conclusion

The current study tested the effectiveness of different color characterization models in performing colorimetric characterization. Using simulations of a practical display with color channel interactions, different datasets were made for training and testing. The performance of different models on the generated data was analyzed.

Firstly, the simulation results showed that the POR, ANN, and LUT models can be effective in performing display characterization. The LUT with a non-pure channel increment of 25 and a pure channel increment of 12 could achieve a JND of just around 1. With more data available, the JNDs obtained by the LUT went down to around 0.5. The POR model with a degree 6 polynomial could also perform accurate colorimetric characterization, with JNDs under 1. Finally, the ANN model with 2 hidden layers of 300 neurons each achieved a JND below 1.5.

Secondly, the gamma function-based models were found to be ineffective in characterizing colors for complex displays with color channel interactions. The minimum JNDs obtained by this family of models were above 13, which is far too high to even consider them for color science experiments. Despite being simple and effective models for older technologies like CRTs, these models are not very useful for characterizing HMDs with more complex colorimetric behavior. The PLCC model was better than the GOG, GGO, or GOGO models, but the best JND it achieved is also too high.

Thirdly, it can be seen that the effectiveness of the models is dictated by the availability of data. The better models, like the POR model and the ANN, can produce more refined colorimetric output when more data are available. There is of course a trade-off between data collection and better colorimetric performance, as data collection is an intensive process.

In future work, measurements will be done using a real HMD and an analysis will be performed to assess the appropriateness of the mentioned color characterization models.

Funding

Fonds Wetenschappelijk Onderzoek (G057021N); KU Leuven.

Disclosures

The authors declare no conflicts of interest.

Data availability

The data used in this work to obtain the results can be readily obtained from the corresponding author.

References

1. G. C. Burdea and P. Coiffet, Virtual reality technology (John Wiley & Sons, 2003).

2. F. Steinicke, “The science and fiction of the ultimate display,” in Being really virtual, (Springer, 2016), pp. 19–32.

3. L. Li, F. Yu, D. Shi, J. Shi, Z. Tian, J. Yang, X. Wang, and Q. Jiang, “Application of virtual reality technology in clinical medicine,” Am. journal of translational research 9, 3867 (2017).

4. G. Székely and R. M. Satava, “Virtual reality in medicine. Interview by Judy Jones,” BMJ (Clinical research ed.) 319(7220), 1305 (1999). [CrossRef]

5. H. Guan, Y. Xu, and D. Zhao, “Application of virtual reality technology in clinical practice, teaching, and research in complementary and alternative medicine,” Evidence-Based Complementary Altern. Med. 2022, 1–12 (2022). [CrossRef]  

6. M. Puri, A. Solanki, T. Padawer, S. M. Tipparaju, W. A. Moreno, and Y. Pathak, “Introduction to artificial neural network (ann) as a predictive tool for drug design, discovery, delivery, and disposition: Basic concepts and modeling,” in Artificial neural network for drug design, delivery and disposition, (Elsevier, 2016), pp. 3–13.

7. M. E. Portman, A. Natapov, and D. Fisher-Gewirtzman, “To go where no man has gone before: Virtual reality in architecture, landscape architecture and environmental planning,” Comput. Environ. Urban Syst. 54, 376–384 (2015). [CrossRef]  

8. E. Gomes, F. Rebelo, N. V. Boas, P. Noriega, and E. Vilar, “Architecture, virtual reality, and user experience,” in Virtual and Augmented Reality for Architecture and Design, (CRC Press, 2022), pp. 138–154.

9. L. Freina and M. Ott, “A literature review on immersive virtual reality in education: state of the art and perspectives,” in The international scientific conference elearning and software for education, vol. 1 (2015), pp. 10–1007.

10. S. Kavanagh, A. Luxton-Reilly, B. Wuensche, and B. Plimmer, “A systematic review of virtual reality in education,” Themes in Science and Technology Education 10, 85–119 (2017).

11. S. Gradl, B. M. Eskofier, D. Eskofier, C. Mutschler, and S. Otto, “Virtual and augmented reality in sports: an overview and acceptance study,” in Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct, (2016), pp. 885–888.

12. B. Bideau, R. Kulpa, N. Vignais, S. Brault, F. Multon, and C. Craig, “Using virtual reality to analyze sports performance,” IEEE Computer Graphics and Applications 30, 14–21 (2009). [CrossRef]  

13. R. Wilson, N. Ferreri, and C. B. Mayhorn, “Game on: Using virtual reality to explore the user-experience in sports media,” in Handbook of Usability and User Experience, (CRC Press, 2022), pp. 157–170.

14. A. Weaver, “High-fidelity patient simulation in nursing education: an integrative review,” Nurs. Educ. Perspect. 32(1), 37–40 (2011). [CrossRef]

15. F. Munshi, H. Lababidi, and S. Alyousef, “Low-versus high-fidelity simulations in teaching and assessing clinical skills,” J. Taibah Univ. Med. Sci. 10(1), 12–15 (2015). [CrossRef]  

16. I. Gibson, Z. Gao, and I. Campbell, “A comparative study of virtual prototyping and physical prototyping,” Int. journal of manufacturing technology and management 6(6), 503–522 (2004). [CrossRef]  

17. C. Chua, S. Teh, and R. Gay, “Rapid prototyping versus virtual prototyping in product design and manufacturing,” Int. J. Adv. Manuf. Technol. 15(8), 597–603 (1999). [CrossRef]  

18. O. Clausen, G. Fischer, A. Furhmann, and R. Marroquim, “Towards predictive virtual prototyping: color calibration of consumer vr hmds,” in 16th GI AR/VR Workshop, (2019), pp. 13–24.

19. Y. Wang, W. Liu, X. Meng, H. Fu, D. Zhang, Y. Kang, R. Feng, Z. Wei, X. Zhu, and G. Jiang, “Development of an immersive virtual reality head-mounted display with high performance,” Appl. Opt. 55(25), 6969–6977 (2016). [CrossRef]  

20. M. C. tom Dieck, T. H. Jung, and S. M. Loureiro, Augmented Reality and Virtual Reality: New Trends in Immersive Technology (Springer Nature, 2021).

21. E.-L. Hsiang, Z. Yang, T. Zhan, J. Zou, H. Akimoto, and S.-T. Wu, “Optimizing the display performance for virtual reality systems,” OSA Continuum 4, 3052–3067 (2021). [CrossRef]  

22. J.-H. Park and B. Lee, “Holographic techniques for augmented reality and virtual reality near-eye displays,” Light: Advanced Manufacturing 3, 1–14 (2022). [CrossRef]  

23. G. Koo and Y. H. Won, “Foveated integral imaging system for near-eye 3d displays,” Optics Continuum 1, 1294–1304 (2022). [CrossRef]  

24. P. Scarfe and A. Glennerster, “Using high-fidelity virtual reality to study perception in freely moving observers,” J. Vis. 15(9), 3 (2015). [CrossRef]  

25. R. Gil Rodríguez, F. Bayer, M. Toscani, D. Guarnera, G. C. Guarnera, and K. R. Gegenfurtner, “Colour calibration of a head mounted display for colour vision research using virtual reality,” SN Computer Science 3, 1–10 (2022). [CrossRef]  

26. H. Ha, Y. Kwak, H. Kim, and Y.-J. Seo, “Discomfort luminance level of head-mounted displays depending on the adapting luminance,” Color Research & Application 45, 622–631 (2020). [CrossRef]  

27. T. Pouli, P. Morvan, S. Thiebaud, and N. Mitchell, “Color management for vr production,” in Proceedings of the Virtual Reality International Conference-Laval Virtual, (2018), pp. 1–8.

28. A. Mehrfard, J. Fotouhi, G. Taylor, T. Forster, N. Navab, and B. Fuerst, “A comparative analysis of virtual reality head-mounted display systems,” CoRR abs/1912.02913 (2019).

29. J.-B. Thomas, J. Y. Hardeberg, I. Foucherot, and P. Gouton, “The PLVC display color characterization model revisited,” Color Research & Application 33, 449–460 (2008). [CrossRef]  

30. Y. Yoshida and Y. Yamamoto, “Color calibration of lcds,” in Color and Imaging Conference, vol. 2002 (Society for Imaging Science and Technology, 2002), pp. 305–311.

31. W. B. Cowan and N. Rowell, “On the gun independence and phosphor constancy of colour video monitors,” Color Research & Application 11, s34–s38 (1986).

32. M. Toscani, R. Gil, D. Guarnera, G. Guarnera, A. Kalouaz, and K. R. Gegenfurtner, “Assessment of oled head mounted display for vision research with virtual reality,” in 2019 15th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS), (IEEE, 2019), pp. 738–745.

33. R. S. Berns, “Methods for characterizing crt displays,” Displays 16, 173–182 (1996). [CrossRef]  

34. J. E. Gibson and M. D. Fairchild, “Colorimetric characterization of three computer displays (lcd and crt),” Munsell color science laboratory technical report 40 (2000).

35. V. Cheung, S. Westland, D. Connah, and C. Ripamonti, “A comparative study of the characterisation of colour cameras by means of neural networks and polynomial transforms,” Coloration technology 120, 19–25 (2004). [CrossRef]  

36. M. J. Vrhel and H. J. Trussell, “Color scanner calibration via a neural network,” in 1999 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings. ICASSP99 (Cat. No. 99CH36258), vol. 6 (IEEE, 1999), pp. 3465–3468.

37. A. Poljicak, J. Dolic, and J. Pibernik, “An optimized radial basis function model for color characterization of a mobile device display,” Displays 41, 61–68 (2016). [CrossRef]  

38. J. Prats-Climent, L. Gòmez-Robledo, R. Huertas, S. García-Nieto, M. J. Rodríguez-Álvarez, and S. Morillas, “A study of neural network-based lcd display characterization,” in London Imaging Meeting, vol. 2021 (Society for Imaging Science and Technology, 2021), pp. 97–100.

39. M. Vazirian, “Colour characterisation of lcd display systems,” Ph.D. thesis, University of Leeds (2018).

40. J.-B. Thomas, “Colorimetric characterization of displays and multi-display systems,” Univ. de Bourgogne (2009).

41. J.-M. Kim and S.-W. Lee, “Universal color characterization model for all types of displays,” Optical Engineering 54, 103103 (2015). [CrossRef]  

42. J. Y. Hardeberg, L. Seime, and T. Skogstad, “Colorimetric characterization of projection displays using a digital colorimetric camera,” in Projection Displays IX, vol. 5002 (SPIE, 2003), pp. 51–61.

43. R. S. Berns, S. R. Fernandez, and L. Taplin, “Estimating black-level emissions of computer-controlled displays,” Color Research & Application 28, 379–383 (2003). [CrossRef]  

44. Jason Brownlee, “How to Use Polynomial Feature Transforms for Machine Learning,” https://machinelearningmastery.com/polynomial-features-transforms-for-machine-learning/.

45. Jason Brownlee, “Deep Learning Models for Multi-Output Regression,” https://machinelearningmastery.com/deep-learning-models-for-multi-output-regression/.

46. G. Hong, M. R. Luo, and P. A. Rhodes, “A study of digital camera colorimetric characterization based on polynomial modeling,” Color Research & Application 26, 76–84 (2001). [CrossRef]  

47. H. R. Kang, Computational color technology (Spie Press Bellingham, 2006).

48. K. A. Smet, “Tutorial: the LuxPy Python toolbox for lighting and color science,” Leukos 16, 179–201 (2020). [CrossRef]  

49. M. Campbell-Kelly, M. Croarken, R. Flood, et al., The history of mathematical tables: from Sumer to spreadsheets (Oxford University Press, 2003).

50. R. Byshko and S. Li, “Characterization of iphone displays: a comparative study,” in 18. Workshop Farbbildverarbeitung, (2012), pp. 49–60.

51. K. Hormann, “Barycentric interpolation,” in Approximation Theory XIV: San Antonio 2013, (Springer, 2014), pp. 197–218.

52. J.-G. Attali and G. Pagès, “Approximations of functions by a multilayer perceptron: a new approach,” Neural networks 10, 1069–1081 (1997). [CrossRef]  

53. scikit-learn, “Neural network models (supervised),” https://scikit-learn.org/stable/modules/neural_networks_supervised.html.

54. scikit-learn, “Multi-layer Perceptron regressor,” https://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPRegressor.html.

55. I. H. Witten, E. Frank, M. A. Hall, and C. J. Pal, “Chapter 10 - deep learning,” in Data Mining (Fourth Edition), I. H. Witten, E. Frank, M. A. Hall, and C. J. Pal, eds. (Morgan Kaufmann, 2017), pp. 417–466, fourth edition ed.

56. X. Wang, Y. Qin, Y. Wang, S. Xiang, and H. Chen, “ReLTanh: An activation function with vanishing gradient resistance for SAE-based DNNs and its application to rotating machinery fault diagnosis,” Neurocomputing 363, 88–98 (2019). [CrossRef]  

57. I. Goodfellow, Y. Bengio, and A. Courville, Deep learning (MIT press, 2016).

58. R. Reed and R. J. MarksII, Neural smithing: supervised learning in feedforward artificial neural networks (Mit Press, 1999).

59. U. Bhaumik, R. Spieringhs, and K. A. Smet, “Color characterization of displays using neural networks,” CVCS2022 Proceedings 3271 (2022).

60. Y. Kwak and L. MacDonald, “Characterisation of a desktop lcd projector,” Displays 21, 179–194 (2000). [CrossRef]  

61. M. Stone, D. A. Szafir, and V. Setlur, “An engineering model for color difference as a function of size,” in Color and Imaging Conference, vol. 2014 (Society for Imaging Science and Technology, 2014), pp. 253–258.

62. C. Connolly and T. Fleiss, “A study of efficiency and accuracy in the transformation from RGB to CIELAB color space,” IEEE transactions on image processing 6, 1046–1048 (1997). [CrossRef]  
