Scene-based dual domain non-uniformity correction algorithm for stripe and optics-caused fixed pattern noise removal

Abstract

Non-uniformity is a long-standing problem that significantly degrades infrared images through fixed pattern noise (FPN). Existing scene-based algorithms for non-uniformity correction (NUC) effectively eliminate stripe FPN under the assumption of consistent inter-frame non-uniformity. However, they are ineffective in handling spatially continuous optical FPN. In this paper, a scene-based dual domain correction approach is proposed to address the non-uniformity problem by simultaneously removing stripe and optics-caused FPN. We achieve this through gain correction in the frequency domain and offset correction in the spatial domain. To remove stripes, we approximate the desired image using a guided filter and iteratively update the bias correction parameters frame by frame. For optics-caused noise removal, we separate low frequency noise from the scene using the Fourier transform and update the gain correction parameters accordingly. To mitigate ghost artifacts, a combined strategy is introduced that adaptively adjusts learning rates and weights during the updating stage. Comprehensive evaluations demonstrate that the proposed approach outperforms the compared methods on both real and simulated infrared videos with non-uniformity.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Infrared cameras suffer from non-uniformity due to factors such as inconsistent pixel responses, variations in readout circuits, and temperature fluctuations in the optical lens [1,2]. This non-uniformity results in fixed pattern noise (FPN). Common forms of non-uniformity, such as stripe noise, primarily arise from inconsistent pixel responses, dark currents, and electronic readout discrepancies. Another kind, induced by temperature shifts in the optical lens, manifests as spatially continuous, low-frequency noise akin to optical vignetting; it differs from stripe FPN, as shown in Fig. 1(b) and (c). These non-uniformities degrade the quality of infrared images, especially in low contrast scenes, and limit the applications of infrared imaging systems in remote sensing, night vision, and autonomous driving [3–5].

Fig. 1. Efficient and accurate solution for stripe and optics-caused NUC. (a) Infrared image before scene-based dual domain NUC. (b) Estimated gain field to compensate the intensity variations caused by optics. (c) Estimated bias field to compensate the stripe FPN. (d) Scene-based dual domain NUC result.

Researchers have developed various methods to address the non-uniformity problem, known as non-uniformity correction (NUC) techniques. Traditional NUC approaches are calibration-based and involve single-point, two-point, and multi-point corrections [6]. These methods use a blackbody reference to calibrate each pixel by estimating gain and offset coefficients based on a linear noise model. In practical scenarios, a mechanical shutter is often used for single-point correction. However, these calibration processes rely heavily on the stability of camera performance. Over time and with temperature changes, the non-uniformity of the camera may drift, rendering the pre-correction parameters ineffective.

Recently, the scene-based paradigm has gained significant attention in the field [7–28]. These approaches work on the assumption that the scene moves while the object's irradiation remains unchanged between adjacent frames. By modeling the spatial and temporal relationship, these methods aim to correct non-uniformity. Currently, various techniques have been developed for scene-based NUC, including constant statistical methods [8,9], temporal high-pass methods [10–12], image-registration-based methods [15–19], neural network methods [20–22], and deep learning methods [23–28].

Constant statistical methods assume that the mean and standard deviation of each pixel are equal, requiring sufficient scene motion and an adequate number of frames [8,9]. Temporal high-pass methods separate the non-uniformity, which is considered low-frequency noise in the temporal domain, from the moving scenes using a high-pass filter [10–12]. However, both constant statistical methods and temporal high-pass methods can result in ghosting artifacts when there is limited motion in the scene.

Recent studies have focused on noise characteristics, commonly employing low-rankness and group sparsity as regularizations. A flexible regularization has been proposed for NUC in real scenarios [13]. Additionally, a novel model incorporating dual sparsity constraints has been introduced to concurrently correct intensity bias and eliminate striping [14].

Image-registration-based methods utilize the motion of the image background between adjacent frames to estimate an error matrix for non-uniformity noise removal. The effectiveness of these methods relies on the accuracy of registration and on scene characteristics. These methods include the column projection method [15] and the cross power spectrum method [16]. Zuo et al. propose a scene-based NUC method that estimates the global translation between two adjacent frames and minimizes the mean square error by aligning the images properly [17]. Seo and Jeon propose a real-time NUC algorithm employing feature pattern matching [18]. An infrared NUC approach tailored for low-contrast scenes leverages feature extraction and image registration [19]. These approaches have low computational complexity and can converge quickly. However, they can be affected by registration errors caused by camera rotation and changes in the foreground, and they may introduce additional ghosting artifacts during the correction stage.

Neural network methods, which are based on early artificial neural networks and are distinct from deep learning, assume that each pixel has the same expected output as its spatial neighbors. These methods apply spatial filtering to obtain the desired image and utilize the mean square error for parameter estimation during the correction process [20]. Song introduces a temporal gating LMS approach that employs distinct temporal and spatial filters and then adaptively integrates the two [21]. A scene-adaptive algorithm improves on this by utilizing multi-scale statistics and Laplacian pyramid frequency traits [22]. However, the neighborhood assumption may not be suitable for addressing low-frequency, low-magnitude optics-caused FPN.

In recent years, deep learning-based methods have emerged for NUC [23–28]. One advantage is their ability to correct multiple types of noise without introducing ghosting artifacts [24]. To preserve image details and edges during correction, Li introduces a long-short term residual network that effectively learns from noise patterns [26]. Guan proposes a cascading CNN with residual connections to mitigate ghosting from scene motion and accelerate convergence for single-frame NUC [27]. A novel NUC technique utilizing a deep multiscale residual network addresses combined stripe and optical FPN noise [28]. While this approach shows promise in simulations, it requires further validation with real-world data. However, deep learning requires a large amount of specialized training data, including images with FPN and corresponding clean images, which can be challenging to acquire. Additionally, simulated noise data may not capture the unique noise characteristics of individual infrared cameras effectively [23–25]. The computational complexity of deep learning-based methods is also higher than that of traditional algorithms, which limits their application in real-time processing tasks.

Despite some success, existing scene-based NUC methods struggle when both stripe noise and optics-caused FPN occur simultaneously. These methods primarily focus on modeling the spatiotemporal correlation of stripe FPN. Even with a good spatiotemporal correlation model or an effective online update mechanism, dealing with spatially continuous FPN remains challenging.

To address these issues, we aim to combine spatiotemporal correlation with the Fourier transform. As optics-caused FPN primarily exhibits low-frequency characteristics in the spatial domain, spatial filters struggle to handle this type of noise effectively. By representing the energy magnitude at different frequencies, the frequency response plays a crucial role in this transformation. Our basic idea is that spatially continuous optics-related FPN is easier to distinguish in the frequency domain than in the spatial domain. Additionally, it is worth noting that optical noise tends to remain relatively constant over short periods of time. This characteristic enables the possibility of correcting optical noise by utilizing its distribution properties in both the frequency domain and the temporal domain.

Based on these ideas, we propose a novel NUC model that consists of three sub-models: stripe removal, optical-caused FPN removal, and ghosting artifacts suppression. The stripe removal model corrects stripe FPN in the spatial domain via a bias matrix. The optical-caused FPN removal model corrects optics-related FPN via a gain matrix, using only frequency-domain information. Our third model, the ghosting artifacts suppression model, effectively suppresses ghosts caused by scene-based NUC and adaptively adjusts the learning rate. Figure 1(d) illustrates the NUC results obtained even when the target scene is heavily affected by both stripe and optics-caused FPN. The summary of our contributions is as follows:

A stripe removal model based on guided filter is proposed to correct bias coefficients in the spatial domain and adaptively suppress ghosting artifacts.

An optical-caused FPN removal model is introduced, which corrects gain coefficients in the frequency domain. This serves as a complement to stripe correction by eliminating spatially continuous FPN.

A unified NUC approach is presented that integrates gain correction in the frequency domain and bias correction in the spatial domain to address the stripe and optics-caused FPN problem.

2. Methodology

2.1 Non-uniformity noise model

After some level of flat-field calibration (such as blackbody calibration or single-point shutter correction), the raw response of IR focal plane array sensors becomes nearly linear. Assuming the focal plane array detector has a linear response during correction [29], the output is expressed as follows:

$${Y_n}\left( {x,y} \right) = {g_n}\left( {x,y} \right) \cdot {X_n}\left( {x,y} \right) + {o_n}\left( {x,y} \right)$$
where X denotes the target infrared radiation received by the detector, Y represents the actual output of the detector, (x, y) indicates the pixel coordinates, g and o denote the gain and offset coefficients of the detector, respectively, and the subscript n denotes the frame index of the infrared image sequence. According to this response model, the output can be corrected by using the inverse transformation of (1):
$${X_n}({x,y} )= {k_n}({x,y} )\cdot {Y_n}({x,y} )+ {b_n}({x,y} )$$
where ${k_n}({x,y} )= 1/{g_n}({x,y} )$, ${b_n}({x,y} )={-} ({o_n}({x,y} )/{g_n}({x,y} ))$.
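As a concrete illustration of this linear model, the following NumPy sketch applies Eq. (1) to a hypothetical frame and recovers the scene with the inverse transformation of Eq. (2); the frame size and the gain and offset values are arbitrary and serve only to show that $k = 1/g$ and $b = -o/g$ invert the response exactly.

```python
import numpy as np

def apply_response(X, g, o):
    """Forward model of Eq. (1): detector output Y from true irradiation X."""
    return g * X + o

def correct_response(Y, k, b):
    """Inverse model of Eq. (2): corrected image X from detector output Y."""
    return k * Y + b

# Hypothetical 4 x 4 frame with per-pixel gain and offset non-uniformity
rng = np.random.default_rng(0)
X = rng.uniform(20.0, 30.0, size=(4, 4))        # true irradiation
g = 1.0 + 0.05 * rng.standard_normal((4, 4))    # gain non-uniformity
o = 2.0 * rng.standard_normal((4, 4))           # offset non-uniformity

Y = apply_response(X, g, o)
X_hat = correct_response(Y, 1.0 / g, -o / g)    # k = 1/g, b = -o/g
assert np.allclose(X, X_hat)
```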

As mentioned earlier, non-uniformity exhibits in various ways. Our analysis now concentrates on two types: vertical stripe FPN and optical-induced FPN, depicted in Fig. 2. Vertical stripe FPN has spatial high-frequency features, whereas optical-induced FPN shows spatial low-frequency features. These fixed pattern noises are stable over short intervals but may vary with time and temperature changes.

Fig. 2. Stripe and optics-caused FPN distribution characteristics on spatial domain. (a) Simulated FPN; (b) corresponding grayscale histogram; (c) corresponding three-dimensional gray value display.

2.2 Algorithm structure

We propose a new NUC model to effectively remove both stripe and optics-caused noise simultaneously. The algorithm structure is illustrated in Fig. 3.

Fig. 3. The algorithm structure.

Phase 1: Stripe removal. We found that stripe FPN in the images is mostly present in the form of bias. To address this, we employ an adaptive retina-like neural network method to calculate a bias correction matrix. This method approximates the desired image using a guided filter and iteratively updates the correction parameters frame by frame (Section 2.3).

Phase 2: Removal of optical-caused FPN. After stripe removal, we found that the neural network method based on the guided filter could not simultaneously remove optical FPN. To overcome this, we propose a frequency domain correction strategy that separates the gain noise from the scene using logarithmic operations. By utilizing a multi-frame superposition algorithm, we estimate the gain correction matrix and generate the corrected image (Section 2.4).

Phase 3: Ghosting artifacts suppression. Suppression of ghosting artifacts is crucial in scene-based correction algorithms. To minimize the influence of scenes on the correction matrices, we introduce a combined mechanism. An adaptive learning rate strategy based on local roughness reduces the impact of scene details on the bias correction matrix. Furthermore, a change detection strategy in the frequency domain is employed to separate scene components from constant noise components, resulting in a purified gain correction matrix (Section 2.5).

2.3 Stripe removal

Stripe FPN, a typical form of non-uniformity noise, usually presents as high-frequency vertical stripes in the spatial domain. The simplest way to remove it is to directly eliminate the high-frequency part of the image. However, the extracted high-frequency part contains both stripe noise and a significant amount of scene detail. According to previous research, a local linear relationship exists between the infrared data and the stripe noise of pixels within a column [30]. This matches the key assumption of the guided filter, namely that a local linear relationship exists between the guidance image and the filtering output within a defined window [31].

To estimate the stripe noise solely using the bias correction matrix, ${k_n}({x,y} )$ is set to 1. Equation (2) is then rewritten as ${X_n}({x,y} )= {Y_n}({x,y} )+ {b_n}({x,y} )$. We apply guided filtering to perform edge-preserving image smoothing, and the calculation formulas are as follows:

$${D_i} = {\bar{p}_k}{G_i} + {\bar{q}_k},\; i \in {w_k}$$
where
$${p_k} = \frac{{cov{{({G,I} )}^{{w_k}}}}}{{var{{(G )}^{{w_k}}} + \varepsilon }}$$
$${q_k} = avg({{{(I )}^{{w_k}}}} )- {p_k}avg({(G )^{{w_k}}}).$$

Here G is the guidance image, I is the input image, D is the filtering output image, ${w_k}$ represents a window centered at pixel k, ɛ is the blurring factor, ${\bar{p}_k}$ and ${\bar{q}_k}$ are the averages of ${p_k}$ and ${q_k}$, cov denotes the covariance operation, $var$ stands for the variance, and avg means the average operation. When the window ${w_k}$ slides over an edge area, the covariance $cov{({G,I} )^{{w_k}}}$ becomes large relative to ɛ, so ${p_k}$ tends toward one and the output ${D_i}$ follows the guidance image, preserving the edge. When sliding over a smooth area, the trend is the opposite and the output approaches the local mean, which contains fewer scene details and makes it easier to separate the scene from the stripe noise.
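For reference, a minimal NumPy/SciPy sketch of this box-window guided filter is given below. The function name, the use of scipy.ndimage.uniform_filter for the window averages, and the default radius and ɛ (taken from the parameter settings in Section 3.1) are choices of the sketch rather than details given in the text.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(G, I, radius=2, eps=1e8):
    """Guided filtering of input I with guidance G over (2*radius+1)^2 windows."""
    size = 2 * radius + 1
    mean_G = uniform_filter(G, size)                         # avg(G)^{w_k}
    mean_I = uniform_filter(I, size)                         # avg(I)^{w_k}
    cov_GI = uniform_filter(G * I, size) - mean_G * mean_I   # cov(G, I)^{w_k}
    var_G = uniform_filter(G * G, size) - mean_G ** 2        # var(G)^{w_k}
    p = cov_GI / (var_G + eps)
    q = mean_I - p * mean_G
    # Average the per-window coefficients before forming the output D_i
    return uniform_filter(p, size) * G + uniform_filter(q, size)
```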

We use guided filter with edge-preserving smoothing property to estimate the expected image, and apply the steepest descent method as the iterative strategy. Its algorithmic framework is shown in Fig. 4.

Fig. 4. Block diagram of stripe removal model based on guided filter.

By using image ${X_n}$ as both the input and the guidance image, we found that the expected image D is estimated more accurately by the guided filter [32]. Equation (3) is rewritten as:

$${D_i} = {\bar{p}_k}{X_i} + {\bar{q}_k},\; i \in {w_k}$$
where
$${p_k} = \frac{{cov{{({X,X} )}^{{w_k}}}}}{{var{{(X )}^{{w_k}}} + \varepsilon }}$$
$${q_k} = ({1 - {p_k}} )avg({(X )^{{w_k}}}).$$

The correction error can be obtained by using mean square error between X and D:

$${E_n}({x,y} )= {[{{X_n}({x,y} )- {D_n}({x,y} )} ]^2}.$$

In order to approach the expected image, we apply the steepest descent (SD) method to sequential inputs of each pixel (x, y) to obtain recursive estimates of b:

$${E_b} = \frac{{\partial E}}{{\partial b}} = 2({X - D} ).$$

By using a frame-by-frame iteration,

$${b_{n + 1}}({x,y} )= {b_n}({x,y} )- 2{\mu _n}[{{X_n}({x,y} )- {D_n}({x,y} )} ]$$
where ${\mu _n}$ represents the learning rate, controlling the step size. As the number of iterations increases, the stripe noise component gradually decreases, with the corrected image given by ${X_n}({x,y} )= {Y_n}({x,y} )+ {b_n}({x,y} )$.
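A per-frame sketch of this stripe-removal step is shown below, reusing the guided_filter helper sketched above with ${X_n}$ as both input and guidance. The function name is hypothetical, and mu may be either a scalar learning rate or the adaptive per-pixel map introduced in Section 2.5; the sketch is an illustrative reading of Eq. (7), not the reference implementation.

```python
def stripe_removal_step(Y_n, b_n, mu=0.03, radius=2, eps=1e8):
    """One frame of bias correction: Eq. (2) with k = 1 and the update of Eq. (7)."""
    X_n = Y_n + b_n                              # corrected frame with current bias
    D_n = guided_filter(X_n, X_n, radius, eps)   # self-guided estimate of the desired image
    b_next = b_n - 2.0 * mu * (X_n - D_n)        # steepest-descent update of the bias
    return X_n, b_next
```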

2.4 Removal of optical-caused FPN

After removing stripe noise, we focus on eliminating optics-caused FPN, specifically through gain correction. Equation (2) is rewritten as ${X_n}({x,y} )= {k_n}({x,y} ){Y_n}({x,y} )$. Figure 5 reveals that spatially continuous noise differs from stripe FPN in the frequency domain. Recognizing that each pixel affected by such noise is closely related to its neighbors, we transfer the gain correction process to the frequency domain. The frequency response represents the magnitude of energy at different frequencies, as shown in Fig. 6. Therefore, changing a particular component in the frequency domain has a significant impact in the spatial domain. However, separating the gain components is still a difficult task for traditional methods.

Fig. 5. FPN and corresponding frequency response. The first row shows the FPN, and the second row shows the corresponding frequency response. (a)-(c) Optical-caused FPN. (d)-(f) Stripe FPN.

Fig. 6. Stripe and optics-caused FPN of a real image. The first row shows infrared images, and the second row shows the corresponding spectrograms. (a) Raw image. (b) Estimated gain field to compensate the optics-caused FPN. (c) Estimated bias field to compensate the stripe FPN. (d) Scene-based dual domain NUC result.

Here, we use a logarithmic operation to preprocess images, converting the multiplicative component into an additive component. The operation is as follows:

$$\log {X_n}({x,{\; }y} )= \log {k_n}({x,{\; }y} )+ \log {Y_n}({x,{\; }y} ).$$

Then, a Fourier transform F is performed on the image:

$$\mathrm{{\cal F}}({\log {X_n}({x,{\; }y} )} )= \mathrm{{\cal F}}({\log {k_n}({x,{\; }y} )} )+ \mathrm{{\cal F}}({\log {Y_n}({x,{\; }y} )} ).$$

Although optical-caused FPN gradually drifts over time and with changes in temperature, it is generally considered constant in the short term during the correction process. This means that, in the frequency domain, optical-caused FPN is unchanged while the moving scene varies. Based on these observations, we design our optical-caused FPN removal model. The proposed model framework is shown in Fig. 7.

Fig. 7. Block diagram of optical-caused FPN removal model.

Based on the idea of accumulation between frames, we estimate the constant noise ${K_n}$ in the frequency domain:

$${K_n} = ({1 - {\alpha_n}} ){K_{n - 1}} + {\alpha _n}\mathrm{{\cal F}}({\log {X_n}} )$$
where ${\alpha _n}$ denotes the weight coefficient. As time goes by, the scene keeps changing, so Eq. (10) accumulates less scene information and an increasingly pure estimate of the constant noise. Optical-caused FPN removal is then performed by:
$${F_n} = \mathrm{{\cal F}}({\log {X_n}} )= \mathrm{{\cal F}}({\log {Y_n}} )- {K_n}.$$

Finally, an inverse Fourier transform and exponential conversion are performed to obtain the corrected image ${X_n}$:

$${X_n} = {e^{{\mathrm{{\cal F}}^{ - 1}}({{F_n}} )}}.$$
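The per-frame sketch below illustrates one plausible reading of this frequency-domain gain correction: the stripe-corrected frame is log-transformed and Fourier transformed, the constant-noise spectrum is accumulated as in Eq. (10), and the corrected frame is recovered by subtraction, inverse transform and exponentiation. The function name, the small constant guarding log(0), and the handling of the weight alpha (scalar or per-frequency map, see Section 2.5) are assumptions of the sketch.

```python
import numpy as np

def optical_fpn_removal_step(Y_n, K_prev, alpha=0.03):
    """One frame of gain correction in the frequency domain (Section 2.4)."""
    eps = 1e-6                                            # guard against log(0); assumption
    spec = np.fft.fft2(np.log(Y_n.astype(float) + eps))   # F(log Y_n)
    K_n = (1.0 - alpha) * K_prev + alpha * spec           # accumulate constant noise, Eq. (10)
    F_n = spec - K_n                                      # remove the constant-noise spectrum
    X_n = np.exp(np.real(np.fft.ifft2(F_n)))              # back to the spatial domain
    return X_n, K_n
```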

2.5 Ghosting artifacts suppression

Filters in the spatial domain often blur image details. Additionally, the error matrix includes both stripe noise and high-frequency details, which can cause ghosting artifacts, particularly around edges and detailed regions of objects in images. To suppress these ghosts, we designed an adaptive learning rate strategy aimed at improving algorithm stability. In this strategy, each pixel has an independent adaptive learning rate ${\mu _n}({x,y} )$ in Eq. (7), which adjusts the learning rate factor based on local roughness in the spatial domain. The formula for this adjustment is as follows:

$${\mu _n}({x,y} )= \begin{cases} \dfrac{{e^{{\theta _b} - \sigma ({x,y} )}} - 1}{{e^{{\theta _b}}} - 1}\,{\mu _{base}}, & \sigma ({x,y} )< {\theta _b}\\ 0, & \sigma ({x,y} )\ge {\theta _b} \end{cases}$$
where
$$\sigma ({x,y} )= \frac{1}{{m \times m}}\mathop \sum \limits_{p,q ={-} 2}^2 |{I({x,y} )- I({x + p,y + q} )} |.$$

In the proposed strategy, ${\mu _{base}}$ denotes the baseline learning rate and ${\theta _b}$ represents a threshold, while $\sigma ({x,y} )$ indicates the local roughness of each pixel within an m × m neighborhood. A higher local roughness value suggests more detailed image information. To adaptively adjust the learning rate, we set ${\mu _n}({x,y} )$ to zero when $\sigma ({x,y} )$ exceeds ${\theta _b}$. This prevents the bias coefficient ${b_n}({x,y} )$ from being updated according to Eq. (7). Samples of the adaptive learning rate curve are presented in Fig. 8(a).
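A minimal NumPy sketch of this adaptive learning rate is given below. The edge-replication padding at the image border and the function name are assumptions; the defaults for ${\mu _{base}}$, ${\theta _b}$ and the 5 × 5 neighborhood follow Section 3.1.

```python
import numpy as np

def adaptive_learning_rate(I, mu_base=0.03, theta_b=70.0, m=5):
    """Per-pixel learning rate from local roughness (Section 2.5)."""
    I = I.astype(float)
    H, W = I.shape
    half = m // 2
    padded = np.pad(I, half, mode="edge")
    sigma = np.zeros((H, W))
    # Local roughness: mean absolute difference to the m x m neighborhood
    for p in range(-half, half + 1):
        for q in range(-half, half + 1):
            neighbor = padded[half + p:half + p + H, half + q:half + q + W]
            sigma += np.abs(I - neighbor)
    sigma /= m * m
    # Smooth pixels get a learning rate that decays with roughness;
    # rough pixels (sigma >= theta_b) get a zero learning rate
    mu = np.where(sigma < theta_b,
                  (np.exp(theta_b - sigma) - 1.0) / (np.exp(theta_b) - 1.0) * mu_base,
                  0.0)
    return mu
```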

Fig. 8. Adaptive curves in ghosting artifacts suppression. (a) Samples of the adaptive learning rate curve, where ${\mu _{base}}$ is set to 1 for clarity. (b) Samples of the adaptive weight coefficient curve, where ${\alpha _{base}}$ is set to 1 for clarity.

When it comes to gain correction in the frequency domain, we observed that it tends to introduce ghosting artifacts more easily than bias correction in the spatial domain. To address this issue, we have developed a strategy that uses change detection in the frequency response to mitigate the presence of ghosts. This strategy determines the adaptive weighting coefficient based on the changes observed in the spectrograms of two consecutive frames. The formula for this strategy is as follows:

$${\alpha _n}({\mu ,v} )= \begin{cases} 0, & {d_n}({\mu ,v} )> {\theta _k}\\ \dfrac{{\theta _k} - {d_n}({\mu ,v} )}{{\theta _k}}\,{\alpha _{base}}, & {d_n}({\mu ,v} )\le {\theta _k} \end{cases}$$
where
$${d_n}({\mu ,v} )= \frac{{|{F_n^X({\mu ,v} )- F_{n - 1}^X({\mu ,v} )} |}}{{|{F_n^X({\mu ,v} )} |}}.$$

Here ${\alpha _n}({\mu ,v} )$ represents the weight coefficient at position $({\mu ,v} )$ in the frequency domain, and ${\alpha _{base}}$ denotes the baseline weight coefficient. Meanwhile, ${\theta _k}$ is a threshold, and ${d_n}({\mu ,v} )$ stands for the change in frequency response between two adjacent frames. Samples of the adaptive weight coefficient curve are presented in Fig. 8(b).
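A minimal sketch of this change-detection weighting is shown below; the use of the complex spectral difference magnitude, the small constant avoiding division by zero, and the function name are assumptions, while the defaults for ${\alpha _{base}}$ and ${\theta _k}$ follow Section 3.1.

```python
import numpy as np

def adaptive_weight(F_curr, F_prev, alpha_base=0.03, theta_k=2.8):
    """Per-frequency weight from spectral change detection (Section 2.5)."""
    eps = 1e-12                                            # avoid division by zero
    d_n = np.abs(F_curr - F_prev) / (np.abs(F_curr) + eps)
    # Strongly changing components (moving scene) get zero weight so that only
    # near-constant components feed the accumulation of Eq. (10)
    return np.where(d_n > theta_k, 0.0, (theta_k - d_n) / theta_k * alpha_base)
```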

3. Results and discussion

3.1 Experiments details

A. Test data

For objective analysis, we use two clean infrared sequences and three real infrared video sequences with true non-uniformity.

We introduced simulated noise into the clean infrared videos (Data1 and Data2) by adding independent FPN to the gain and bias, following Eq. (1). Data1 comprises 1860 frames with a resolution of 384 × 288 pixels. Data2 consists of 218 frames with a resolution of 640 × 480 pixels. In particular, Data1 was captured by a long-wave uncooled infrared detector. Data2 is a subset of the publicly available infrared video dataset [33].

The simulation noise model is introduced in detail below. Since the optical FPN model is relatively complex, our simulation is simplified by using a two-dimensional Gaussian kernel. It is defined as:

$$G = 1 + n \cdot N\left( {\frac{{{e^{\frac{{ - {x^2} - {y^2}}}{{2{\sigma^2}}}}}}}{{2\pi {\sigma^2}}}} \right)$$
where $N(\cdot )$ represents normalization, and x and y denote the distances from the image center point. The simulated stripe noise B is generated by a random function along the image rows and columns. Then, the simulated data Y is obtained by:
$$Y = G \cdot X + B$$
where X denotes the real data without noise.

We set $\sigma $ to 100 and n to 0.05 to make the simulation more consistent with real noise. Below we show a simulated case and compare it with real noise.
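The following sketch generates a simulated noisy frame along these lines. The max-normalization of the Gaussian kernel, the column-wise stripe pattern and its standard deviation are assumptions, since the text only specifies a normalized Gaussian gain field and a random stripe bias.

```python
import numpy as np

def simulate_fpn(X, sigma=100.0, n=0.05, stripe_std=0.02, seed=0):
    """Add a simulated optical gain field and stripe bias to a clean frame X."""
    H, W = X.shape
    y, x = np.mgrid[0:H, 0:W]
    cy, cx = (H - 1) / 2.0, (W - 1) / 2.0
    kernel = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2.0 * sigma ** 2)) \
             / (2.0 * np.pi * sigma ** 2)
    G = 1.0 + n * kernel / kernel.max()                 # normalized Gaussian gain field
    rng = np.random.default_rng(seed)
    stripes = rng.normal(0.0, stripe_std * X.mean(), size=(1, W))
    B = np.tile(stripes, (H, 1))                        # column-wise stripe bias
    return G * X + B                                    # simulated noisy frame Y
```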

In addition, we tested our approach on three real infrared video sequences that contain true non-uniformity. The data were captured by a 320 × 256 HgCdTe IRFPA camera operating in the 3-5 µm range at a rate of 25 FPS. These sequences were captured from a helicopter at midnight and feature different scenes: mountains (Data3), high-rise buildings (Data4), and overpasses (Data5). The samples shown in Fig. 9 illustrate these scenes. All three videos exhibit heavy stripe FPN, optics-related FPN, and a few bad pixels. Data3 and Data4 each consist of 200 frames with a resolution of 320 × 256 pixels, and Data5 consists of 199 frames with the same resolution.

Fig. 9. An example of a simulated case. (a) Real data, (b) stripe noise, (c) optical FPN, (d) simulated data. The first row is the simulated case obtained by adding simulated noise, and the second row is the simulated case obtained by adding real noise extracted from correction.

B. Evaluation metrics

Commonly used metrics for evaluating the quality of infrared images include root mean square error (RMSE) and peak signal-to-noise ratio (PSNR). However, these metrics require a clean reference image for comparison with the corrected result. In practical scenarios where true non-uniformity exists in infrared data, it is not feasible to obtain a noise-free reference image.

For infrared data with true non-uniformity, we utilize the roughness index [34] as a metric to measure the performance of NUC. The roughness index is defined as:

$$R = \frac{{{{\left\| {h*I} \right\|}_1}}}{{{{\left\| I \right\|}_1}}}$$
where h is the discrete Laplacian convolution kernel, I is the image under analysis, ${\|\cdot\|_1}$ denotes the L1 norm, and * denotes discrete convolution.
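A sketch of this metric is given below; the 4-neighbour discrete Laplacian kernel and the border handling are assumptions, as the exact kernel is not spelled out here.

```python
import numpy as np
from scipy.ndimage import convolve

def roughness(I):
    """Roughness index: ||h * I||_1 / ||I||_1 with a discrete Laplacian kernel h."""
    h = np.array([[0.0,  1.0, 0.0],
                  [1.0, -4.0, 1.0],
                  [0.0,  1.0, 0.0]])
    I = I.astype(float)
    return np.abs(convolve(I, h, mode="nearest")).sum() / np.abs(I).sum()
```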

Another metric for evaluating NUC is the non-uniformity evaluation standard (NUES) index [35]. This index measures the root mean square deviation of the pixel responses Vi,j relative to the average response ${V_{avg}}$, expressed as a percentage. It is defined as:

$$NUES = \frac{1}{{{V_{avg}}}}\sqrt {\frac{1}{{MN}}\mathop \sum \limits_{i = 1}^M \mathop \sum \limits_{j = 1}^N {{({{V_{i,j}} - {V_{avg}}} )}^2}} $$
where
$${V_{avg}} = \frac{1}{{MN}}\mathop \sum \limits_{i = 1}^M \mathop \sum \limits_{j = 1}^N {V_{i,j}}.$$

Here, M and N represent height and width of the image, respectively, and Vi,j corresponds to the response of the (i, j)th pixel.
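A direct transcription of the NUES index into NumPy is shown below for reference; the function name is arbitrary.

```python
import numpy as np

def nues(I):
    """NUES index: relative RMS deviation of pixel responses around the frame mean."""
    V = I.astype(float)
    V_avg = V.mean()
    return np.sqrt(np.mean((V - V_avg) ** 2)) / V_avg
```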

To evaluate the performance of NUC on simulation data, we utilize PSNR and RMSE as quantitative evaluation metrics. For real data evaluation, we rely on the roughness and NUES metrics. Additionally, we compare the running efficiency as another aspect of performance assessment.

C. Parameter settings

During the correction stage, we initialize b as a zero matrix with the size of the image in the spatial domain and k as a zero matrix in the frequency domain. In the guided filter, we use a 5 × 5 neighborhood (${w_k}$) and set ɛ to ${10^8}$. In the ghosting artifacts suppression strategy, the baseline learning rate ${\mu _{base}}$ and the threshold ${\theta _b}$ are set to 0.03 and 70, respectively. The local roughness $\sigma ({x,y} )$ of each pixel is calculated within a 5 × 5 neighborhood. For the optical-caused FPN removal module, the baseline weight coefficient ${\alpha _{base}}$ and the threshold ${\theta _k}$ are set to 0.03 and 2.8, respectively.
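For orientation, the sketch below strings the earlier helper sketches together into a simplified per-frame driver using these parameter values. It is only an illustrative reading of the overall flow: the adaptive criterion that governs the progression from the stripe-removal phase to the optical-FPN phase is omitted, and the helper names (guided_filter, adaptive_learning_rate, adaptive_weight, optical_fpn_removal_step) refer to the hypothetical sketches above, not to released code.

```python
import numpy as np

def dual_domain_nuc(frames, mu_base=0.03, theta_b=70.0,
                    alpha_base=0.03, theta_k=2.8, radius=2, eps=1e8):
    """Simplified frame-by-frame dual domain NUC driver (sketch)."""
    b = np.zeros_like(frames[0], dtype=float)          # bias matrix, spatial domain
    K = np.zeros(frames[0].shape, dtype=complex)       # constant-noise spectrum
    F_prev = None
    corrected = []
    for Y in frames:
        # Phase 1: stripe removal with the adaptive learning rate (Eq. (7))
        X = Y + b
        D = guided_filter(X, X, radius, eps)
        mu = adaptive_learning_rate(X, mu_base, theta_b)
        b = b - 2.0 * mu * (X - D)
        # Phase 2: optical FPN removal with spectral change detection (Eq. (10))
        F_curr = np.fft.fft2(np.log(np.clip(X, 1e-6, None)))
        alpha = (adaptive_weight(F_curr, F_prev, alpha_base, theta_k)
                 if F_prev is not None else alpha_base)
        X, K = optical_fpn_removal_step(X, K, alpha)
        F_prev = F_curr
        corrected.append(X)
    return corrected
```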

3.2 Quantitative evaluation

In this section, we present a comprehensive comparison between our method and three state-of-the-art methods from the literature. The methods we compare are inter-frame registration-based NUC (IR-NUC) [17], constant statistical algorithm of adjacent ratios (CSAR) [9], and FPN estimation algorithm (FPNE) [12]. IR-NUC utilizes spatiotemporal information between two adjacent frames for NUC. CSAR uses spatial information for NUC. FPNE estimates FPN based on the temporal filtering results of adjacent pixel difference. All of these methods are scene-based approaches.

Table 1 presents the results of the compared methods in terms of mean PSNR and RMSE on two videos with simulated FPN. Figures 10 and 11 show the PSNR and RMSE curves obtained from testing the simulated data. Our proposed method, denoted "OURS," outperforms the other methods in terms of PSNR and RMSE. We also observe that our method converges within 150 frames. Compared to our method, FPNE and CSAR converge faster. For Data2, the PSNR and RMSE curves of CSAR change drastically. We discuss these issues further in the qualitative evaluation.

Fig. 10. PSNR and RMSE curves of the corrected images on Data1. (a) PSNR; (b) RMSE.

Fig. 11. PSNR and RMSE curves of the corrected images on Data2. (a) PSNR; (b) RMSE.

Table 1. Mean PSNR and RMSE results of Data1 and Data2

Table 2 presents the results of the compared methods in terms of mean roughness and NUES metrics on three videos with real FPN. Our method achieves the best performance across both metrics. Specifically, our method outperforms the other compared methods by a significant margin in terms of NUES metrics. Figure 12 displays the NUES curves obtained from testing the real data.

Fig. 12. NUES curves of the corrected images on three real videos. (a) Data3; (b) Data4; (c) Data5.

Table 2. Mean roughness and NUES results of Data3, Data4 and Data5.

All algorithms were implemented in Matlab R2020b on a machine equipped with a 3.6 GHz CPU and 32GB of RAM. The average processing time per frame was calculated by measuring the runtime of 200 frames, as summarized in Table 3. It is worth noting that all algorithms demonstrated suitability for real-time image processing, considering an image size of 81,920 pixels.

Table 3. Average runtime per frame for each algorithm (×10−3 s)

3.3 Qualitative evaluation

Figure 13 illustrates the correction results of each algorithm on Data1, which contains severe camera shake and rich scene details. Adding simulated stripe and optics-caused FPN makes this a challenging case for NUC methods, as shown in Fig. 13(b). While CSAR corrects these types of noise, it introduces a large number of ghosts. Both FPNE and our method perform well in this challenging scenario, and the corrected results of FPNE appear close to the clean images. Notably, at the 300th frame there is a sky scene. Sky regions in infrared images also exhibit spatially continuous, low-frequency characteristics similar to optics-caused FPN. Our method compensates for the sky region as well, resulting in higher contrast at the 300th and 1500th frames.

Fig. 13. Correction images for the 100th, 300th, 855th, 1500th frames on Data1. (a) Raw clear images; (b) raw images with simulated FPN; (c) IR-NUC [17]; (d) FPNE [12]; (e) CSAR [9]; (f) Ours.

Figure 14 displays the correction results of each algorithm on Data2. Due to slow camera movement, Data2 presents a challenge for all scene-based NUC methods. The camera's slow forward motion and heavy FPN cause IR-NUC to fail. FPNE introduces numerous ghosts, and CSAR exhibits obvious trailing ghosting artifacts. Our method outperforms the other methods in this scenario.

Fig. 14. Correction images for the 50th, 100th, 150th, 200th frames on Data2. (a) Raw clear images; (b) raw images with simulated FPN; (c) IR-NUC [17]; (d) FPNE [12]; (e) CSAR [9]; (f) Ours.

Figure 15 demonstrates the correction results of each algorithm on Data3, an infrared video with true FPN. Due to the lack of distinct details in the mountain scene, IR-NUC fails to handle this situation through image registration. Compared to FPNE and CSAR that both introduce severe tailing ghosting artifacts, our method successfully removes the FPN.

Fig. 15. Correction images for the 50th, 100th, 150th, 200th frames on Data3. (a) Raw images with true FPN; (b) IR-NUC [17]; (c) FPNE [12]; (d) CSAR [9]; (e) Ours. The complete correction video can be seen in Visualization 1.

In Fig. 16, the correction results of each algorithm on Data4 are displayed. Data4 is an aerial infrared video with true FPN, particularly exhibiting sloping thick stripes caused by optical factors. While IR-NUC effectively eliminates the vertical stripe FPN, it struggles to address the optics-caused FPN. Both FPNE and CSAR generate significant amounts of ghosting artifacts. In contrast, our method continues to perform well in this scenario.

Fig. 16. Correction images for the 50th, 100th, 150th, 200th frames on Data4. (a) Raw images with true FPN; (b) IR-NUC [17]; (c) FPNE [12]; (d) CSAR [9]; (e) Ours. The complete correction video can be seen in Visualization 2.

Figure 17 illustrates the correction results of each algorithm on Data5, which is captured by the same infrared camera as Data4. Similar to previous observations, IR-NUC fails to handle optical-caused FPN. FPNE exhibits noticeable trailing ghosting artifacts, while CSAR introduces a large area of shadow ghosting. In contrast, our method consistently performs successfully in all three aerial infrared videos with true FPN.

Fig. 17. Correction images for the 50th, 100th, 150th, 199th frames on Data5. (a) Raw images with true FPN; (b) IR-NUC [17]; (c) FPNE [12]; (d) CSAR [9]; (e) Ours. The complete correction video can be seen in Visualization 3.

Figure 18 displays the gain and bias correction matrices generated by our algorithm on Data3. Initially, the gain matrix at frame 2 shows strong striping, compensating for both stripe and optical FPN. As the frames progress, striping reduces, and low-frequency spatial noise akin to optical vignetting becomes more prominent. This trend suggests our gain matrix increasingly targets optical FPN correction, while the bias matrix consistently addresses striping.

Fig. 18. Estimated bias and gain correction matrices on Data3 at (a) 2nd frame; (b) 60th frame; (c) 70th frame; (d) 115th frame; (e) 200th frame. The first row is bias correction matrices. The second row is gain correction matrices.

3.4 Ablation studies

To evaluate the contribution of each component in our algorithm, we conducted additional investigations and designed several algorithm variants. In order to assess the effectiveness of guided filtering, we used mean filtering to generate the desired image, referred to as "mean-filter". To evaluate the effectiveness of gain correction in the Fourier domain, we performed only bias correction in the spatial domain, referred to as "Ours-bias". Conversely, we performed only gain correction in the frequency domain, referred to as "Ours-gain". For combined gain and bias correction in the frequency domain, we used "Ours-F". Regarding the ghost suppression strategy, we removed the learning rate adjustment ("without-lr") and the change detection in the frequency response ("without-cd"). All ablation studies were evaluated on Data 2.

The comprehensive algorithm, referred to as "Ours", outperformed all the variants, and each component contributed positively to the improved performance. Detailed results are presented in Table 4. The result of without-lr suggests that the learning rate adjustment contributes more to ghost suppression. The effectiveness of the ghost suppression strategy is presented in Fig. 19.

Fig. 19. Ghosting artifacts suppression. (a) Ghosts appear. (b) The result after applying the ghosting artifacts suppression strategy.

Table 4. Mean roughness and NUES results of high-rise buildings video Data2 for ablation studies.

Our method corrects the gain in the frequency domain and the bias in the spatial domain. To rule out any impact of processing in different domains on FPN correction, we also correct the offset and gain simultaneously in the frequency domain. The gain and bias correction matrices estimated by this variant algorithm (Ours-F) at the last frame of Data3 are shown in Fig. 20. We found that the bias correction matrix of Ours-F is a mixture of optical FPN and vertical stripes, while the gain correction matrix is mainly optical FPN. This confirms that our frequency-domain processing readily extracts the optical FPN.

Fig. 20. Estimated bias and gain correction matrices of Ours-F on Data3 at 200th frame. (a) Gain correction matrix, (b) bias correction matrix.

We also reanalyzed the setting of the threshold parameters of the ghosting artifacts suppression strategy, namely the baseline learning rate ${\mu _{base}}$ in bias correction and the baseline weight coefficient ${\alpha _{base}}$ in gain correction, in another ablation study. We set different ${\mu _{base}}$ and ${\alpha _{base}}$ values for Data1 and compare the PSNR and RMSE of the correction results, as shown in Fig. 21. The effects of different learning rates ${\mu _{base}}$ are very close; thus we choose the median value of 0.03 in our experiments. We found that the weight coefficient ${\alpha _{base}}$ significantly affects the convergence speed: a higher ${\alpha _{base}}$ accelerates convergence but also increases the risk of ghosting. Thus, we set ${\alpha _{base}}$ to 0.03 in our experiments to balance performance and robustness.

Fig. 21. A comparison conducted on Data1 with various parameters ${\mu _{base}}$ and ${\alpha _{base}}$. (a) PSNR curves with various ${\mu _{base}}$; (b) RMSE curves with various ${\mu _{base}}$; (c) RMSE curves with various ${\alpha _{base}}$.

4. Conclusions

This paper proposes a scene-based dual domain NUC algorithm that removes stripe and optics-caused noise simultaneously. The study focuses on aerial scenes and demonstrates the algorithm's good scene adaptability. The proposed algorithm begins by analyzing the characteristics of FPN in the spatial and frequency domains. In the first stage, for stripe FPN removal, we optimize the expected-image estimate based on guided filtering to improve correction accuracy. This approach significantly weakens stripe FPN without introducing ghosting artifacts, allowing for adaptive progression to the next phase. The second stage, for optical-caused FPN removal, adopts an iterative correction strategy based on accumulation between frames, separating spatially continuous noise. In the third stage, a combined strategy is introduced to suppress ghosting artifacts, utilizing adaptive learning rates and adaptive weight coefficients. Experimental results validate the algorithm's robust adaptability and its effectiveness in correcting strong stripe and optical-caused FPN. Furthermore, the algorithm converges within a few hundred frames, making it suitable for real-time image processing applications. Improvements to the iterative approach employed in stage 2 could further aid in separating spatially continuous noise from scene details; however, care must be taken to prevent mixing in scene information and introducing ghosting artifacts. Future research should focus on refining this aspect of the algorithm to improve its performance further.

Funding

National Natural Science Foundation of China (62105152, 62301253, 62305163); Scientific Research Foundation for the Introduction of Talent of Nanjing Vocational University of Industry Technology (YK20-02-03).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are available in Ref. [33].

References

1. D. A. Scribner, M. R. Kruer, and J. M. Killiany, “Infrared focal plane array technology,” Proc. IEEE 79(1), 66–85 (1991). [CrossRef]  

2. B. Narayanan, R.C. Hardie, and R.A Muse, “Scene-based nonuniformity correction technique that exploits knowledge of the focal-plane array readout architecture,” Appl. Opt. 44(17), 3482–3491 (2005). [CrossRef]  

3. J.J. Simpson, J.R. Stitt, and D.M. Leath, “Improved finite impulse response filters for enhanced destriping of geostationary satellite data,” Remote Sensing of Environment 66(3), 235–249 (1998). [CrossRef]  

4. H. Shen and L. A. Zhang, “MAP-based algorithm for destriping and inpainting of remotely sensed images,” IEEE Trans. Geosci. Remote Sensing 47(5), 1492–1502 (2009). [CrossRef]  

5. B. Wu, C. Liu, R. Xu, et al., “A Target-Based Non-Uniformity Self-Correction Method for Infrared Push-Broom Hyperspectral Sensors,” Remote Sens. 15(5), 1186 (2023). [CrossRef]  

6. A. Friedenberg and I. Goldblatt, “Nonuniformity two-point linear correction errors in infrared focal plane arrays,” Opt. Eng. 37(4), 1251–1253 (1998). [CrossRef]  

7. B.L. Hu, S.J. Hao, D. X. Sun, et al., “A novel scene-based non-uniformity correction method for SWIR push-broom hyperspectral sensors,” ISPRS J. Photogrammetry and Remote Sensing 131(2), 160–169 (2017). [CrossRef]  

8. C. Zhang and W. Zhao, “Scene-based nonuniformity correction using local constant statistics,” J. Opt. Soc. Am. A 25(6), 1444–1453 (2008). [CrossRef]  

9. D. Zhou, D. Wang, L. Huo, et al., “Scene-based nonuniformity correction for airborne point target detection systems,” Opt. Express 25(13), 14210–14226 (2017). [CrossRef]  

10. W. Qian, Q. Chen, and G. Gu, “Space low-pass and temporal high-pass nonuniformity correction algorithm,” Opt. Rev. 17(1), 24–29 (2010). [CrossRef]  

11. C. Zuo, Q. Chen, G. Gu, et al., “New temporal high-pass filter nonuniformity correction based on bilateral filter,” Opt. Rev. 18(2), 197–202 (2011). [CrossRef]  

12. C. Liu, X. Sui, Y. Liu, et al., “FPN estimation based nonuniformity correction for infrared imaging system,” Infrared Phys. Technol. 96, 22–29 (2019). [CrossRef]  

13. J. Wang, T. Huang, Zhao Xi-Le, et al., “Reweighted block sparsity regularization for remote sensing images destriping,” IEEE J. Sel. Top. Appl. Earth Observations Remote Sensing 12(12), 4951–4963 (2019). [CrossRef]  

14. L. Liu, L. Xu, and H. Fang, “Simultaneous intensity bias estimation and stripe noise removal in infrared images using the global and local sparsity constraints,” IEEE Trans. Geosci. Remote Sensing 58(3), 1777–1789 (2020). [CrossRef]  

15. R.C. Hardie, M.M. Hayat, E. Armstrong, et al., “Scene-based nonuniformity correction with video sequences and registration,” Appl. Opt. 39(8), 1241–1250 (2000). [CrossRef]  

16. C. Zuo, Q. Chen, G. Gu, et al., “Scene-based nonuniformity correction algorithm based on interframe registration,” J. Opt. Soc. Am. A 28(6), 1164–1176 (2011). [CrossRef]  

17. C. Zuo, Y. Zhang, Q. Chen, et al., “A two-frame approach for scene-based nonuniformity correction in array sensors,” Infrared Phys. Technol. 60, 190–196 (2013). [CrossRef]  

18. S. Seo and J. Jeon, “Real-time scene-based nonuniformity correction using feature pattern matching,” 15th International Conference on Ubiquitous Information Management and Communication (IEEE, 2021) 1–6.

19. S. Liu and H. Cui, “Low-contrast scene feature-based infrared nonuniformity correction method for airborne target detection,” Infrared Phys. Technol. 133, 104799 (2023). [CrossRef]  

20. D.A. Scribner, K.A. Sarkady, M.R. Kruer, et al., “Adaptive nonuniformity correction for IR focal-plane arrays using neural network,” Infrared Sensors: Detectors, Electronics, and Signal Processing (SPIE, 1991).

21. L. Song and H. Huang, “Spatial and temporal adaptive nonuniformity correction for infrared focal plane arrays,” Opt. Express 30(25), 44681–44700 (2022). [CrossRef]  

22. T. Liu and X. Sui, “Strong non-uniformity correction algorithm based on spectral shaping statistics and LMS,” Opt. Express 31(19), 30693–30709 (2023). [CrossRef]  

23. X. Kuang, Y. Sui, X. Liu, et al., “Robust destriping method based on data-driven learning,” Infrared Phys. Technol. 94, 142–150 (2018). [CrossRef]  

24. Z. Huang, Z. Zhu, Z. Wang, et al., “D3CNNs: Dual Denoiser Driven Convolutional Neural Networks for Mixed Noise Removal in Remotely Sensed Images,” Remote Sensing 15, 443 (2023). [CrossRef]  

25. Z. Huang, Y. Zhang, Q. Li, et al., “Unidirectional variation and deep CNN denoiser priors for simultaneously destriping and denoising optical remote sensing images,” International J. Remote Sensing 40(15), 5737–5748 (2019). [CrossRef]  

26. T. Li, Y. Zhao, Y. Li, et al., “Non-uniformity correction of infrared images based on improved CNN with long-short connections,” IEEE Photonics J. 13, 1–13 (2021). [CrossRef]  

27. J. Guan and R. Lai, “Fixed pattern noise reduction for infrared images based on cascade residual attention CNN,” Neurocomputing 377, 301–313 (2020). [CrossRef]  

28. Y. Chang and L. Yan, “Infrared aerothermal nonuniform correction via deep multiscale residual network,” IEEE Geosci. Remote Sensing Lett. 16(7), 1120–1124 (2019). [CrossRef]  

29. D. L. Perry and E.L. Dereniak, “Linear theory of nonuniformity correction in infrared staring sensors,” Opt. Eng. 32(8), 1854–1859 (1993). [CrossRef]  

30. Y. Cao, M.Y Yang, and C. L. Tisse, “Effective strip noise removal for low-textured infrared images based on 1-D guided filtering,” IEEE Trans. Circuits Syst. Video Technol. 26(12), 2176–2188 (2016). [CrossRef]  

31. K. He, J. Sun, and X. Tang, “Guided image filtering,” IEEE Trans. Pattern Anal. Mach. Intell. 35(6), 1397–1409 (2013). [CrossRef]  

32. S. Rong, H. Zhou, Z. Wen, et al., “An improved non-uniformity correction algorithm and its hardware implementation on FPGA,” Infrared Phys. Technol. 85, 410–420 (2017). [CrossRef]  

33. M. Felsberg, A. Berg, G. Hager, et al., “The thermal infrared visual object tracking VOT-TIR2015 challenge results,” International Conference on Computer Vision Workshops (IEEE, 2015).

34. T. Svensson, “An evaluation of image quality metrics aiming to validate long term stability and the performance of NUC methods,” Infrared Imaging Systems: Design, Analysis, Modeling, and Testing XXIV (SPIE, 2013).

35. X. Sui, Q. Chen, and G. Gu, “A novel non-uniformity evaluation metric of infrared imaging system,” Infrared Phys. Technol. 60, 155–160 (2013). [CrossRef]  

Supplementary Material (3)

Visualization 1: The complete correction video of the real experiments on Data 3.
Visualization 2: The complete correction video of the real experiments on Data 4.
Visualization 3: The complete correction video of the real experiments on Data 5.
