Mahmoud Afifi, Abhijith Punnappurath, Graham Finlayson, and Michael S. Brown, "As-projective-as-possible bias correction for illumination estimation algorithms," J. Opt. Soc. Am. A 36, 71-78 (2019)
Illumination estimation is the key routine in a camera’s onboard auto-white-balance (AWB) function. Illumination estimation algorithms estimate the color of the scene’s illumination from an image in the form of an R, G, B vector in the sensor’s raw-RGB color space. While learning-based methods have demonstrated impressive performance for illumination estimation, cameras still rely on simple statistical-based algorithms that are less accurate but capable of executing quickly on the camera’s hardware. An effective strategy to improve the accuracy of these fast statistical-based algorithms is to apply a post-estimate bias-correction function to transform the estimated R, G, B vector such that it lies closer to the correct solution. Recent work by Finlayson [Interface Focus 8, 20180008 (2018)] showed that a bias-correction function can be formulated as a projective transform because the magnitude of the R, G, B illumination vector does not matter to the AWB procedure. This paper builds on this finding and shows that further improvements can be obtained by using an as-projective-as-possible (APAP) projective transform that locally adapts the projective transform to the input R, G, B vector. We demonstrate the effectiveness of the proposed APAP bias correction on several well-known statistical illumination estimation methods. We also describe a fast lookup method that allows the APAP transform to be performed with only a few lookup operations.
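The core idea above can be made concrete with a minimal sketch: a learned 3×3 matrix re-maps the estimated illuminant, and because only the direction of the R, G, B vector matters to AWB, the corrected vector is simply normalized. The matrix values and function name below are illustrative, not taken from the paper.

```python
import numpy as np

def correct_illuminant(ell_est, M):
    """Apply a 3x3 projective bias-correction matrix M to an estimated
    illuminant. Only the direction of the R, G, B vector matters for
    white balance, so the result is normalized to unit length."""
    ell = M @ np.asarray(ell_est, dtype=float)
    return ell / np.linalg.norm(ell)

# Illustrative correction matrix that nudges a biased estimate.
M = np.array([[1.05, 0.02, 0.00],
              [0.01, 1.00, 0.01],
              [0.00, 0.03, 0.98]])
corrected = correct_illuminant([0.6, 0.7, 0.4], M)
```

The APAP variant proposed in the paper goes further by varying this matrix locally as a function of the input R, G, B vector rather than using a single global transform.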
Algorithm 1. Estimating a Projective Transformation Using Alternating Least Squares
Input: A matrix containing the R, G, B estimates of the illuminant obtained using the chosen illumination estimation algorithm, and a matrix containing the corresponding ground-truth R, G, B values of the light.
Output: The projective bias-correction matrix and an auxiliary variable that compensates for the difference in magnitude between the estimated illuminants and their corresponding ground truths.
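The input/output description above suggests the following alternating least-squares sketch: one step solves for the 3×3 matrix in closed form given per-sample magnitude scales, and the other step updates the scales given the matrix. The variable names and iteration count are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def alternating_ls(E, G, iters=20):
    """Fit a 3x3 matrix M and per-sample scales d such that
    M @ e_i ~= d_i * g_i, where E (N x 3) holds estimated illuminants
    and G (N x 3) the ground-truth illuminants. The scales d absorb
    the magnitude difference between estimates and ground truths."""
    d = np.ones(E.shape[0])
    M = np.eye(3)
    for _ in range(iters):
        # Step 1: with d fixed, solve E @ M.T ~= diag(d) @ G by least squares.
        X, *_ = np.linalg.lstsq(E, d[:, None] * G, rcond=None)
        M = X.T
        # Step 2: with M fixed, each scale is the projection of M e_i onto g_i.
        P = E @ M.T
        d = np.sum(P * G, axis=1) / np.sum(G * G, axis=1)
    return M, d
```

Each step minimizes the same residual, so the objective is non-increasing across iterations, which is the standard convergence argument for alternating least squares.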
The recovery angular error is reported for the statistical-based methods, the learning-based methods, and the proposed projective correction transformations applied to the statistical-based methods. The statistical-based methods are as follows: gray world (GW) [2], shades of gray (SoG) [5], the first-order gray edges (GE-1) and the second-order gray edges (GE-2) [4], and the distribution PCA [8]. The learning-based methods are as follows: Bayesian [11], convolutional color constancy (CCC) [12], deep specialized network (DS-Net) [13], the FC4 method based on AlexNet (FC4-A) and SqueezeNet (FC4-S) [15], and fast Fourier color constancy (FFCC) [16]. The proposed projective bias correction is applied to the statistical-based methods using a downsampled version of the images ( pixels). The term APAP denotes that the as-projective-as-possible transformation is applied. The term APAP-LUT refers to APAP using a -bins lookup table. The bold numbers refer to the state-of-the-art results reported on the dataset.
The recovery angular error is reported for the statistical-based methods, the learning-based methods, and the proposed projective transformations. The statistical-based methods are as follows: gray world (GW) [2], shades of gray (SoG) [5], the first-order gray edges (GE-1) and the second-order gray edges (GE-2) [4], and the distribution PCA [8]. The learning-based methods are as follows: Bayesian [11], color constancy using natural image statistics and scene semantics (CCNIS) [26], exemplar-based color constancy [27], convolutional color constancy (CCC) [12], deep specialized network (DS-Net) [13], Oh and Kim’s method [14], the FC4 method based on AlexNet (FC4-A) and SqueezeNet (FC4-S) [15], and fast Fourier color constancy (FFCC) [16]. The proposed projective bias correction is applied to the statistical-based methods (i.e., GW, SoG, GE [first and second orders], and the distribution PCA methods) using a downsampled version of the images ( pixels). We also applied our transformations to three learning-based methods (i.e., Bayesian [11], CCNIS [26], and exemplar-based [27]).
The recovery angular errors are reported for statistical-based methods with and without our proposed projective transformations. The methods are as follows: gray world (GW) [2], shades of gray (SoG) [5], the first-order gray edges (GE-1) and the second-order gray edges (GE-2) [4], and the distribution PCA [8]. The proposed projective bias correction is applied using a downsampled version of the images ( pixels).
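The tables above report the recovery angular error, the standard color-constancy metric: the angle between the estimated and ground-truth illuminant vectors, which is invariant to the magnitude of either vector. A minimal sketch:

```python
import numpy as np

def recovery_angular_error(ell_est, ell_gt):
    """Angle (in degrees) between the estimated and ground-truth
    illuminant vectors; scaling either vector leaves it unchanged."""
    a = np.asarray(ell_est, dtype=float)
    b = np.asarray(ell_gt, dtype=float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    # Clip guards against arccos domain errors from floating-point noise.
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```

Because the metric ignores magnitude, it pairs naturally with the projective formulation of bias correction discussed in the abstract, where only the direction of the R, G, B vector matters.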