
Adaptive Wiener filter super-resolution of color filter array images

Open Access

Abstract

Digital color cameras using a single detector array with a Bayer color filter array (CFA) require interpolation, or demosaicing, to estimate the missing color information and provide full-color images. However, demosaicing does not specifically address the fundamental undersampling and aliasing inherent in typical camera designs. Fast super-resolution (SR) based on non-uniform interpolation is an attractive approach to reduce or eliminate aliasing, and its relatively low computational load makes it amenable to real-time applications. The adaptive Wiener filter (AWF) SR algorithm was initially developed for grayscale imaging and has not previously been applied to color SR demosaicing. Here, we develop a novel fast SR method for CFA cameras that is based on the AWF SR algorithm and uses global channel-to-channel statistical models. We apply this new method as a stand-alone algorithm and also use its output as an initialization image for a variational SR algorithm. This paper presents the theoretical development of the color AWF SR approach and compares its performance to other SR techniques for both simulated and real data.

©2013 Optical Society of America

1. Introduction

Many commercial visible-band color cameras are designed using a single focal plane array (FPA) with a color filter array (CFA) that filters for red, green, and blue (RGB). Cameras that divide the focal plane with spatial filter arrays can be called division-of-focal-plane (DoFP) cameras. The most common CFA-based DoFP camera design was developed by Bryce Bayer at Eastman Kodak in 1976 [1]. Figure 1 shows an example Bayer CFA pattern. The pattern repeats a 2×2 arrangement, where each 2×2 filter pattern is one of the phases RGGB, BGGR, GRBG, or GBRG (R = red, G = green, B = blue). This provides color information across the image, but two issues arise. First, color information is not captured simultaneously at the same spatial location for all colors. Each detector element collects photons for only one color, resulting in a mosaic pattern, and the other two colors must be estimated at each location. Extensive research has gone into addressing this demosaicing challenge, and good demosaicing overviews are provided in [2–4]. A second issue with DoFP designs is spatial sampling. Typical grayscale camera designs attempt to balance detector array sampling with other factors, such as signal-to-noise ratio (SNR). This generally results in a camera that is undersampled relative to the Nyquist criterion, producing aliased images [5]. The spatial sampling challenge is further exacerbated by the filter mosaic. As shown in Fig. 1, the color channel sampling interval, δ_s, is twice the sampling interval of the underlying detector array.

Fig. 1 Bayer CFA showing color channel and fundamental detector sampling.
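To make this sampling structure concrete, the short sketch below separates a raw Bayer frame into sparse per-channel maps, leaving unsampled sites as NaN so the doubled per-channel sampling interval δ_s is visible. This is an illustrative sketch only, assuming the RGGB phase of the pattern; the function name split_bayer_rggb is ours, not from the paper.

```python
import numpy as np

def split_bayer_rggb(raw):
    """Split an RGGB Bayer mosaic into sparse R, G, B channel maps.

    Each output keeps the full frame size with NaN at unsampled sites,
    which makes the 2x coarser per-channel sampling explicit.
    """
    r = np.full(raw.shape, np.nan)
    g = np.full(raw.shape, np.nan)
    b = np.full(raw.shape, np.nan)
    r[0::2, 0::2] = raw[0::2, 0::2]  # red: even rows, even columns
    g[0::2, 1::2] = raw[0::2, 1::2]  # green 1: even rows, odd columns
    g[1::2, 0::2] = raw[1::2, 0::2]  # green 2: odd rows, even columns
    b[1::2, 1::2] = raw[1::2, 1::2]  # blue: odd rows, odd columns
    return r, g, b
```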

Extensive multi-frame super-resolution (SR) reconstruction research has been performed to increase effective sampling frequency for grayscale imaging. The fundamental concept is to create a high resolution (HR) image from a series of shifted low resolution (LR) aliased frames. Subpixel motion between each LR image is used to increase the effective image sampling rate. A thorough SR overview is given in [6] and a compendium of recent research is provided in [7]. Most SR methods include three components: multi-frame registration, interpolation, and image restoration. These components may be implemented separately or simultaneously, depending on the approach. Multi-frame registration is a critical step because knowledge of the relative position of each LR frame is key to utilizing multiple frames of data. Because SR reconstruction is generally an ill-posed inverse problem, each method will typically make assumptions regarding prior information of the HR image to be estimated and knowledge of the imaging system parameters.

Many SR methods are based on iterative techniques and tend to be computationally intensive. Such methods may not currently be amenable to real-time implementation. Another category of methods is based on non-uniform interpolation. These tend to be computationally simple and offer potentially fast, possibly real-time, implementation [8–15]. The general concept is to register the observed LR frames and notionally place all frame samples onto a common HR space. Unless the shifts are strictly controlled, this results in non-uniform sampling. The samples are then used to interpolate onto a uniform HR grid. Such methods typically apply a restoration step after interpolation to reduce the effects of the system point spread function (PSF). A fast adaptive Wiener filter (AWF) based SR approach for grayscale imagery was developed by Hardie [16]. Hardie et al. [17] extended and applied this approach to multi-channel microgrid polarimeter data. AWF SR was extended to full affine inter-frame motion in [18]. AWF SR has demonstrated performance comparable to more computationally intensive iterative SR methods. The approach combines interpolation and restoration in a single spatially adaptive weighted sum. This provides computational advantages and robustness to interpolation errors that might otherwise be exacerbated by an independent restoration step. Additionally, some non-uniform interpolation methods simplify registration and interpolation by quantizing observed sample locations to a uniform HR grid [19,20]. In contrast, the AWF SR method can make full use of the registration information by allowing samples to lie at any continuous location in the common HR space.

Color multi-frame SR is closely related to demosaicing and can be viewed as a multi-frame extension that combines demosaicing with SR. Many published approaches are variational, using iterative optimization to minimize a specific cost function. The primary difference among these approaches is the implementation of the data fidelity term and the regularization methods [21–28]. Farsiu et al. [24] use a maximum a posteriori (MAP) estimation framework to create a fused SR and demosaicing method called “robust super-resolution.” The method minimizes a multi-term cost function that includes a data fidelity term and multiple regularization terms. A variation for accelerating the solution is presented for the special case where input images are limited to translational motion and the system has a space-invariant PSF. Results show good overall performance for this iterative method.

In addition to variational approaches for color CFA SR, [29] proposes a single step reconstruction by non-uniform interpolation in the luminance and chrominance domains. The work in [30] develops an artificial neural network for joint multi-frame demosaicing. The development in [31] proposes a spatially adaptive method using partition weighted sum (PWS) filtering for multi-frame CFA demosaicing and grayscale SR, quantizing registration parameters to an HR grid.

In this paper, we extend grayscale fast AWF SR reconstruction [16] to multi-channel color imaging. The proposed method employs novel parametric correlation models to exploit all observed data in multi-frame Bayer CFA images and includes a separate wavelength-dependent system PSF for each of the 3 color channels. One alternative approach would be to apply the grayscale AWF SR method independently to each color channel. However, we show in Section 4 that such independent channel AWF SR processing produces inferior results because it does not exploit key cross-channel correlation.

The proposed method registers multiple Bayer images to populate a common HR grid with samples from all color channels. The sample positions are not quantized to a finite grid, allowing us to exploit the full precision provided by the registration. A weighted sum of samples in a moving window on the HR grid forms the estimate for each output pixel in each channel. The minimum mean square error (MSE) weights are employed based on one of several proposed correlation models. We examine the benefit of placing observed samples onto a continuous HR space versus quantizing sample locations to the HR grid. We also show the potential for using the color AWF SR output as an initialization image for a simple variational method to improve the results. As far as we are aware, this paper represents the first development of AWF SR specifically for Bayer CFA cameras.

The remainder of this paper is organized as follows. In Section 2, the demosaicing SR problem is formulated. We define the observation model and how it incorporates the system PSF with sampling. We develop the AWF SR algorithm for use with CFA cameras in Section 3. Section 4 shows experimental results with comparison to other approaches for both real and simulated input data with translational inter-frame motion. Section 5 provides a summary and conclusions.

2. Problem formulation

2.1 Observation models

We start by presenting the forward system observation model shown in Fig. 2. We assume the imaging system can be represented as a linear system. A wavelength-dependent continuous two-dimensional (2D) scene d(x, y, λ) is imaged by a system that includes motion between frames. Three color filters are applied, creating 3 separate continuous signals. Motion is captured for each of P image frames, resulting in a global geometric transformation of d(x, y, λ), where s_p contains the motion parameters for frame p (out of P total frames). The output is convolved with a wavelength-dependent system PSF, represented by h_j(x, y), where j designates a unique PSF for each color-filtered image. Each of the 3 continuous image outputs is then sampled according to the detector array and CFA patterning, creating f(p), expressed in lexicographical notation, where p denotes the frame number. We assume the frame f(p) is undersampled due to the physical detector array and CFA. Finally, additive noise n(p) accounts for the noise sources, resulting in the observed data g(p) for p = 1, 2, ..., P.

Fig. 2 Forward observation model relating the desired continuous image to the observed data.
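For readers who want to reproduce the model numerically, the following sketch runs one frame through the chain of Fig. 2 for the translational case. It is a minimal illustration under our own assumptions (RGGB phase, bilinear shift interpolation, reflective boundaries); none of the names come from the paper.

```python
import numpy as np
from scipy.ndimage import convolve, shift

def simulate_bayer_frame(scene_rgb, psfs, shift_yx, L, noise_var, rng):
    """One pass through the forward model: translational motion,
    wavelength-dependent PSF blur per channel, L-fold decimation with
    RGGB CFA ("mosaic") sampling, and additive Gaussian noise.

    scene_rgb: (M, N, 3) continuous-scene stand-in on the HR grid.
    psfs: list of three 2D kernels, one per color channel.
    shift_yx: translational motion for this frame, in HR pixels.
    """
    channels = []
    for c in range(3):
        moved = shift(scene_rgb[..., c], shift_yx, order=1, mode='reflect')
        blurred = convolve(moved, psfs[c], mode='reflect')
        channels.append(blurred[::L, ::L])  # detector-grid sampling
    r, g, b = channels
    bayer = np.empty_like(r)
    bayer[0::2, 0::2] = r[0::2, 0::2]  # CFA sampling keeps one color per site
    bayer[0::2, 1::2] = g[0::2, 1::2]
    bayer[1::2, 0::2] = g[1::2, 0::2]
    bayer[1::2, 1::2] = b[1::2, 1::2]
    return bayer + rng.normal(0.0, np.sqrt(noise_var), bayer.shape)
```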

We desire a noiseless full 3-color output image that is not degraded by the system and is spatially sampled at a rate that avoids aliasing. This ideal output is expressed as a 3MN × 1 vector, z, containing the desired color pixel values. We can also view the data as an M × N × 3 data cube, where M and N are the number of samples in the x and y directions, respectively.

To develop the AWF SR method, we employ the alternative observation model in Fig. 3. Here we commute the motion and PSF operations, allowing us to combine the motion and sampling operations into a single non-uniform sampling block, as shown. Commuting motion and PSF holds exactly for translational motion and a spatially invariant PSF [18]. It also holds for rotational motion when the PSF is circularly symmetric, and approximately for full affine motion if zoom and shear are limited [18]. Let us assume that with proper registration for translational [16] or affine motion [18], we have knowledge of the location and color of every pixel in a common HR space, giving f (and hence g = f + n). Accurate knowledge of the system PSF and additive noise statistics is also required.

Fig. 3 Alternate forward observation model combining frame motion and “mosaic” sampling.

2.2 System PSF

The full system transfer function accounts for all imaging system effects, which can include “optical aberrations, diffraction, wavefront irregularities, optical blur, finite sampling width, system stabilization (jitter), pointing system drift, atmospheric effects, and other system-specific modulation transfer function (MTF) limiters” [32]. Here we assume diffraction and spatial detector integration are the primary contributors, and these are the only two factors included in the PSF model.

The FPA detector element causes spatial averaging of the irradiance impinging upon it [33]. For a rectangular element, the detector PSF is

$$h_d(x,y) = \frac{1}{ab}\,\mathrm{rect}\!\left(\frac{x}{a},\frac{y}{b}\right), \tag{1}$$
where a and b are the active detector dimensions in the x and y directions, respectively [34]. Therefore, the detector optical transfer function (OTF) is
$$H_d(u,v) = \mathrm{sinc}(au, bv) = \frac{\sin(\pi a u)\,\sin(\pi b v)}{\pi^2 (a u)(b v)}, \tag{2}$$
where (u, v) are the spatial frequencies in the (x, y) directions, respectively.

The OTF of an unobscured circular aperture, diffraction-limited optical system is

$$H_o(u,v) = \begin{cases} \dfrac{2}{\pi}\left[\cos^{-1}\!\left(\dfrac{\rho}{\rho_c}\right) - \dfrac{\rho}{\rho_c}\sqrt{1 - \left(\dfrac{\rho}{\rho_c}\right)^2}\,\right], & \rho \le \rho_c \\[1ex] 0, & \text{otherwise}, \end{cases} \tag{3}$$
where $\rho = \sqrt{u^2 + v^2}$ and $\rho_c$ is the optics cutoff frequency,
$$\rho_c = \frac{1}{\lambda\, f/\#}\ \ \text{(cyc/mm)}. \tag{4}$$
The parameter λ is the wavelength of light entering the system and f/# = f/D_o is the optical f-number, where f is the effective focal length and D_o is the effective optics diameter.

Figure 4 shows example MTFs for these functions and the resulting system MTF for λ = 0.55 μm, f/# = 8.0, and a = b = 5.0 μm. The image will always be band-limited to ρ_c, even though the overall shape of the system OTF, H(u, v), is impacted by both the diffraction and detector OTFs. The cutoff frequency in this example is ρ_c ≈ 227 cycles/mm.

Fig. 4 MTF of the example detector, optical diffraction, and total system.
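The curves of Fig. 4 can be reproduced along an on-axis frequency cut with a few lines of NumPy. This is a sketch of Eqs. (2)–(4) using the example parameters above (λ = 0.55 μm, f/# = 8, a = b = 5 μm); the variable names are ours.

```python
import numpy as np

lam_mm = 0.55e-3                     # wavelength (mm)
fnum = 8.0                           # optical f-number
a_mm = 5.0e-3                        # detector width (mm)
rho_c = 1.0 / (lam_mm * fnum)        # optics cutoff, ~227 cyc/mm (Eq. 4)

u = np.linspace(0.0, 300.0, 1000)    # spatial frequency axis (cyc/mm), v = 0
H_det = np.abs(np.sinc(a_mm * u))    # detector MTF cut (np.sinc includes pi)
rho = np.clip(u / rho_c, 0.0, 1.0)
H_opt = (2.0 / np.pi) * (np.arccos(rho) - rho * np.sqrt(1.0 - rho ** 2))
H_sys = H_det * H_opt                # total system MTF, zero beyond rho_c
```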

2.3 Sampling, aliasing, and SR

According to the Nyquist Sampling Theorem [35], a band-limited image can be uniquely determined from its discrete samples if it is sampled at

$$\frac{1}{\delta_s} \ge 2\rho_c, \tag{5}$$
where δ_s is the spatial sampling interval and 1/δ_s is the spatial sampling frequency. In the prior example, the Nyquist rate is 2ρ_c ≈ 454 cycles/mm, corresponding to a sample interval of 2.20 μm.

For a single color image using a Bayer CFA, the spatial sampling pitch, δ_s, is set by the detector pitch and the CFA, as shown in Fig. 1. For a 100% fill factor FPA, the detector sampling interval equals the detector element size, a and b, in the x and y directions, respectively. The CFA doubles the sampling interval for the red and blue color channels. In the example, the sampling interval for red and blue is 2a = 10 μm, giving a 100 cycles/mm sampling frequency and a 50 cycles/mm folding frequency. Comparing the physical sampling with that required to avoid aliasing, the system is undersampled by a factor of 10 μm / 2.2 μm ≈ 4.5. Matching the optical f/# to the physical detector sampling to avoid any aliasing would require an f/40 system. High f/# systems are typically undesirable because they deliver lower signal energy at the detector than low f/# systems. In addition, if the high f/# is achieved with a long focal length, the field of view is reduced. For these and other reasons, lower f/# systems are generally preferred and some aliasing is usually tolerated in the design [5]. As this example illustrates, single frame imaging systems are typically undersampled. Producing smaller detector elements is an area of ongoing research, but a smaller detector area reduces signal energy for a given integration time. Even where sufficiently small detector spacing is achievable, it may not be desirable because of signal-to-noise considerations.
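The undersampling arithmetic above is easy to verify; a minimal check using the running example values:

```python
rho_c = 1.0 / (0.55e-3 * 8.0)              # optics cutoff, ~227 cyc/mm
nyquist_interval_um = 1e3 / (2.0 * rho_c)  # required spacing, ~2.20 um
cfa_pitch_um = 2.0 * 5.0                   # red/blue channel pitch, 2a = 10 um
print(cfa_pitch_um / nyquist_interval_um)  # undersampling factor, ~4.5
```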

Multi-frame SR seeks to overcome these sampling issues by using multiple LR images to increase the effective sampling rate. A simple example might be 4 LR images shifted by exactly ½ the sampling interval in both spatial directions. Combining all LR image samples results in a single image with twice the sampling frequency in each direction. Subpixel motion in typical imaging applications is not as well-controlled as this example, so some method of subpixel registration must be used to obtain accurate knowledge of the LR inter-frame motion.
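The idealized half-sample-shift example can be written as a simple interleave, as sketched below; real sequences require estimating the shifts, which is the registration problem just described. The ordering of the four frames assumes a particular sign convention for the shifts.

```python
import numpy as np

def interleave_half_pixel_shifts(f00, f01, f10, f11):
    """Fuse 4 LR frames shifted by exactly (0, 0), (0, 1/2), (1/2, 0), and
    (1/2, 1/2) LR pixels into one image sampled twice as finely in each
    direction. Idealized illustration of multi-frame SR only."""
    m, n = f00.shape
    hr = np.zeros((2 * m, 2 * n))
    hr[0::2, 0::2] = f00
    hr[0::2, 1::2] = f01
    hr[1::2, 0::2] = f10
    hr[1::2, 1::2] = f11
    return hr
```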

3. Color AWF SR algorithm

3.1 Full 12-parameter, 3 PSF AWF SR model

The objective of the color AWF SR algorithm is to use all the LR samples from every color channel within a sliding “observation” window to estimate all the HR color channel samples within an “estimation” window inside the observation window. Figure 5 shows an example for a general 4-channel DoFP camera with three LR input frames and translational motion. Each LR frame is represented by a specific marker shape and each color channel by a specific color (red, blue, orange, green). This example can represent a Bayer CFA if the green samples in the CFA are divided into two separate channels (one shown as green, the other as orange). The HR grid is shown in blue dashed lines. In general, the LR frame samples fall at continuous locations and are not constrained to lie on the HR grid.

Fig. 5 Example observation & estimation windows for 3 LR frames of a 4-channel DoFP sensor.

For multiple frames with motion between frames, the observation window contains samples from each color channel. The observation window covers W_x by W_y HR grid pixels and the estimation window covers D_x by D_y HR grid pixels. We keep W_x and W_y integer multiples of the upsampling factor, L, defined as the ratio of the observed color channel sample spacing to the desired HR grid sample spacing in each spatial direction. All samples of any color contained in the observation window are included in the vector g_i = [g_{i,1}, g_{i,2}, ..., g_{i,K}]^T for the i-th observation window, where K is the number of LR samples (of any color) in the observation window. For translational motion, the observed sample pattern is periodic, repeating for every observation window. For P input frames,

$$K = \frac{P\, W_x W_y}{L^2}, \quad \forall i. \tag{6}$$
We use a weighted sum of the samples in g_i to estimate the HR pixels for all color channels within the estimation window, \(\hat{\mathbf{d}}_i = [d_{i,1}, d_{i,2}, ..., d_{i,D_x D_y C}]^T\), where C is the number of color channels, such that
$$\hat{\mathbf{d}}_i = \mathbf{W}_i^T \mathbf{g}_i. \tag{7}$$
The weighting matrix, W_i, is of size K × D_x D_y C. Each column of W_i (row of W_i^T) contains the weights to estimate one particular HR sample of one color channel within the estimation window from all the observed samples of all color channels in the i-th observation window. We need to find the W_i that produces an optimal estimate for each estimation window. For the AWF, we seek the weights that provide the minimum MSE. The additional challenge for color AWF is that we must consider both the spatial arrangement of the observed samples and their respective color channels.
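Once the weights are known, Eq. (7) is a single matrix product per window. The sketch below applies one precomputed weight matrix to a stack of observation vectors, which is valid for the translational case where the sample pattern, and hence W_i, repeats for every window; the names are ours.

```python
import numpy as np

def awf_apply(obs_vectors, W):
    """Apply Eq. (7) to many windows at once.

    obs_vectors: (num_windows, K) stack of observation vectors g_i.
    W: (K, Dx*Dy*C) weight matrix shared by all windows under
       translational motion.
    Returns (num_windows, Dx*Dy*C); each row is d_hat_i = W^T g_i.
    """
    return obs_vectors @ W
```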

It can be shown through the orthogonality principle [36] that the minimum MSE weights are given by

$$\mathbf{W}_i = \mathbf{R}_i^{-1} \mathbf{P}_i. \tag{8}$$
Here, R_i = E{g_i g_i^T} is the correlation matrix for the observation vector g_i, and P_i = E{g_i d_i^T} is the cross-correlation between the desired vector d_i and g_i. From the alternate observation model, let f_i be the noise-free version of g_i (i.e., g_i = f_i + n_i), where n_i is the random noise vector. We assume that n_i is zero-mean Gaussian noise with independent and identically distributed elements and variance σ_n². We find (see Appendix A)
$$\mathbf{R}_i = E\{\mathbf{f}_i \mathbf{f}_i^T\} + \sigma_n^2 \mathbf{I}, \tag{9}$$
where I is the identity matrix. Also (see Appendix B),
$$\mathbf{P}_i = E\{\mathbf{f}_i \mathbf{d}_i^T\}. \tag{10}$$
Assuming we know or can estimate σ_n², we need to find the elements of E{f_i f_i^T} and E{f_i d_i^T}.

For color AWF, we need to consider the color channel in both f_i and d_i. The elements of E{f_i f_i^T} are samples of the continuous autocorrelation function r_{f_j f_k}(x, y), where j and k designate the specific color channels of the samples involved. For clarity, we refer to the case j = k as the same-channel autocorrelation and j ≠ k as the cross-channel autocorrelation. In a similar manner, the elements of E{f_i d_i^T} are samples of the continuous cross-correlation function r_{d_j f_k}(x, y) between the noise-free observation f_i and the desired data d_i. We call the case j = k the same-channel cross-correlation and j ≠ k the cross-channel cross-correlation.

Since each color channel has a unique PSF, the correlation function development requires application of the appropriate channel PSF. Assuming a stationary process and linear space-invariant system, the autocorrelation function is (see Appendix A)

$$r_{f_j f_k}(x,y) = r_{d_j d_k}(x,y) * h_j(x,y) * h_k(x,y). \tag{11}$$
Note that for symmetric and real PSFs, r_{f_k f_j}(x, y) = r_{f_j f_k}(x, y).

The cross-correlation function is (see Appendix B)

$$r_{d_j f_k}(x,y) = r_{d_j d_k}(x,y) * h_k(x,y). \tag{12}$$
We also determine
$$r_{d_k f_j}(x,y) = r_{d_j d_k}(x,y) * h_j(x,y). \tag{13}$$
Note that r_{d_k f_j}(x, y) ≠ r_{d_j f_k}(x, y) in general for j ≠ k (i.e., for the cross-channel cross-correlation).

If we know the channel PSFs and know, or can estimate, the desired autocorrelation functions r_{d_j d_k}(x, y) for all (j, k), we can calculate the color AWF weights. These autocorrelation functions can be obtained by multiple methods, including empirical estimation from representative training data. Here we use a parametric model. An autocorrelation model often used in image processing [37] is the radially symmetric model

$$r_{d_j d_k}(x,y) = \sigma_{j,k}^2\, \rho_{j,k}^{\sqrt{x^2 + y^2}}, \tag{14}$$
where σ_{j,k}² controls the correlation between the two color channels at the same spatial location and ρ_{j,k} controls the exponential decay with distance between the observed sample of color channel k and the estimated sample of color channel j.
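The sketch below shows how Eqs. (8)–(10) and (14) combine to produce the weights for one observation window. To keep it short, it evaluates the parametric model of Eq. (14) directly in place of the PSF-blurred correlations of Eqs. (11)–(13), i.e., it assumes an impulse-like PSF; sigma2 and rho are symmetric C×C tables of the model parameters, and all names are ours.

```python
import numpy as np

def r_dd(dx, dy, sigma2, rho):
    """Parametric desired autocorrelation of Eq. (14):
    sigma^2 * rho ** sqrt(dx^2 + dy^2), displacements in HR pixels."""
    return sigma2 * rho ** np.sqrt(dx ** 2 + dy ** 2)

def awf_weights(obs_xy, obs_ch, est_xy, est_ch, sigma2, rho, noise_var):
    """Minimum-MSE weights W = R^{-1} P for one window (Eqs. 8-10),
    neglecting PSF blur. obs_xy: (K, 2) continuous sample positions,
    obs_ch: (K,) channel indices; est_xy/est_ch likewise for the
    estimation window."""
    dx = obs_xy[:, None, 0] - obs_xy[None, :, 0]
    dy = obs_xy[:, None, 1] - obs_xy[None, :, 1]
    s2 = sigma2[obs_ch[:, None], obs_ch[None, :]]
    rr = rho[obs_ch[:, None], obs_ch[None, :]]
    R = r_dd(dx, dy, s2, rr) + noise_var * np.eye(len(obs_xy))  # Eq. (9)
    dx = obs_xy[:, None, 0] - est_xy[None, :, 0]
    dy = obs_xy[:, None, 1] - est_xy[None, :, 1]
    s2 = sigma2[obs_ch[:, None], est_ch[None, :]]
    rr = rho[obs_ch[:, None], est_ch[None, :]]
    P = r_dd(dx, dy, s2, rr)                                    # Eq. (10)
    return np.linalg.solve(R, P)                                # Eq. (8)
```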

Figure 6 shows examples of the desired autocorrelation function, the noise-free observed autocorrelation function, and the cross-correlation function. Here j = k = r (red) with σ_{r,r}² = 670 and ρ_{r,r} = 0.45. The PSF parameters are those from the PSF example of Section 4.1. We see how σ_{j,k}² and ρ_{j,k} influence the desired autocorrelation in Fig. 6(a), and the impact that the system PSF has on the observed autocorrelation in Fig. 6(b) and the cross-correlation in Fig. 6(c).

Fig. 6 Examples of the continuous (a) desired autocorrelation, (b) noise-free observed autocorrelation, and (c) cross-correlation models.

We have 3 color channels for the Bayer CFA, so σ_{j,k}² and ρ_{j,k} each have 6 possible channel pairings (RR, GG, BB, RG, RB, and GB). Therefore, we have 12 separate parameters in the desired autocorrelation function models. These 12 parameters can be selected as constants across the entire observed image set (a global statistical model) or as variables that change for each estimation window. The global statistical model is desirable because, along with global translational motion, it requires computing only one set of weights for processing all observation windows in the output SR image [16].

Attempting to optimize 12 parameters in the global statistical model can be challenging for a single set of LR images. Additionally, this large number of parameters would need to be re-optimized as the input image sets change. It is therefore valuable to simplify the autocorrelation model in order to reduce the number of parameters that must be optimized for the global statistical model. These parameter reductions rely on assumptions about the statistical relationships involved in the autocorrelation model.

3.2 Reduced 3-parameter, 1 PSF AWF SR model

To assess the possibility of simplifying the correlation model in Eq. (14) while maintaining reasonable performance, we develop a 3-parameter model. We use a single value for ρ_{j,k},

$$\rho_{j,k} = \rho. \tag{15}$$
Two parameters are used for the correlation of two channels at the same spatial location,
$$\sigma_{j,k}^2 = \begin{cases} \sigma_{\mathrm{corr}}^2, & j = k \\ \alpha\, \sigma_{\mathrm{corr}}^2, & j \ne k. \end{cases} \tag{16}$$
In essence, we assume the spatial decay parameter, ρ, is the same for any color pair. We use σ_corr² to set the maximum correlation for j = k. The term α is a weight, set in the range 0 to 1, that reduces σ_corr² when j ≠ k. When α = 0, the model assumes no correlation between color channels. When α = 1, the model assumes identical correlation for cross-channel and same-channel data; this is analogous to applying grayscale AWF SR to the CFA image as though it were a standard non-mosaiced array.
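In code, the reduced model simply expands three scalars into the full parameter tables used by the correlation model; a minimal sketch under the same notation:

```python
import numpy as np

def three_param_tables(sigma_corr2, alpha, rho, channels=3):
    """Expand the 3-parameter model of Eqs. (15)-(16) into full
    sigma2[j, k] and rho[j, k] tables: one decay rho for all pairs,
    sigma_corr2 on the diagonal, alpha * sigma_corr2 off-diagonal."""
    sigma2 = np.full((channels, channels), alpha * sigma_corr2)
    np.fill_diagonal(sigma2, sigma_corr2)
    rho_table = np.full((channels, channels), rho)
    return sigma2, rho_table
```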

As a further simplification, we assume an identical PSF for all colors. This is an approximation, because the PSF is wavelength dependent, but it simplifies the algorithm and we found that relatively good performance is maintained. Here we model the PSF at a center (green) wavelength, λ = 0.55 μm.

3.3 Reduced 6-parameter, 1 PSF AWF SR model

To examine potential improvements over the simpler 3-parameter AWF SR, we define a 6-parameter, 1-PSF correlation model from Eq. (14) by defining ρ_{j,k} as

$$\rho_{j,k} = \begin{cases} \rho_{\mathrm{corr}}, & j = k \\ \rho_{r,g}, & j = R,\ k = G \\ \rho_{r,b}, & j = R,\ k = B \\ \rho_{g,b}, & j = G,\ k = B. \end{cases} \tag{17}$$
We allow the spatial decay parameter, ρ_{j,k}, to vary depending on the (j, k) pairing, with one parameter for j = k (same-channel) and separate parameters for j ≠ k (cross-channel). We keep the same assumption on σ_{j,k}² as in the 3-parameter model of Eq. (16). Additionally, we use the single-PSF assumption of the 3-parameter, 1-PSF model.

4. Experiments

We present experimental results using both simulated and real data, comparing several SR methods. The proposed color AWF SR is implemented using the full 12-parameter 3-PSF correlation model, as well as the simpler 6-parameter and 3-parameter 1-PSF correlation models. We compare the performance of color AWF SR to the independent color channel AWF SR (where no cross-channel correlation is exploited) and several other SR methods. Single frame demosaic/interpolation is applied using the built-in MATLAB “demosaic” function followed by cubic spline interpolation on each color channel. Demosaicing provides 2× upsampling relative to the individual color channels, so the interpolation is performed to match the output size of the color AWF SR methods.

A variational method using regularized least squares (RLS) is also included. This is a multichannel extension of the method detailed in [34]. Here the only regularization is a spatial smoothness constraint, and RLS is performed independently on each color channel. While this method is simple, it does not fully exploit the inherent cross-channel color correlation. Finally, we present results from two SR methods detailed in [24] and briefly described in Section 1: the one-step Shift & Add SR and the Multi-Dimensional Signal Processing (MDSP) fast variational MAP-based SR (initialized with Shift & Add). Software for these methods is available from [38].

For the experiments, we limit the inter-frame motion to translational motion. This results in a non-uniform sampling pattern that is identical for every observation window, so only one set of weights needs to be computed for all observation windows, increasing the computational efficiency of AWF SR. It also allows comparison with the methods of [24], which assume translational motion. The proposed color AWF SR can be applied to limited affine motion [18], but here we focus on translational motion with emphasis on exploring the correlation models.

4.1 Simulation experiment #1: color AWF SR model complexity study

To quantify and compare AWF SR performance, simulated experiments are presented. We use the full-color (24-bit) Kodak lighthouse image shown in Fig. 7(a). The picket fence in the image provides spatial structure that is useful for examining SR performance.

Fig. 7 Image data used in simulation experiment #1. (a) Full-color (24-bit) high resolution image, (b) sample simulated LR Bayer CFA image, and (c) sample red color channel image.

The image is degraded as follows. It is convolved with a distinct PSF for each color channel, using the PSF parameters shown in Table 1. Based on Eqs. (4) and (5), the channels are undersampled by 8.6×, 10.2×, and 12.4× for the red, green, and blue channels, respectively. We found good results when the SR resolution enhancement is approximately equal to the undersampling level, so the upsampling factor for SR is set to L = 8.

Table 1. PSF parameters for simulation experiment #1

We perform image shifting, decimation, and color channel mosaicing to create P = 10 LR images simulating multi-frame Bayer CFA data. Finally, we add white Gaussian noise with variance σ_n² = 10.0. A sample Bayer image is shown in Fig. 7(b), where the mosaic grid is apparent. Figure 7(c) shows the red channel extracted from the mosaic pattern of a single LR image. The color channel is L = 8 times smaller in each spatial direction than the original HR image.
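As a hypothetical driver, the degradation just described can be generated with the forward-model sketch from Section 2.1 (simulate_bayer_frame), assuming an HR test image hr_rgb and per-channel kernels psfs are already defined:

```python
import numpy as np

rng = np.random.default_rng(0)
hr_shifts = rng.uniform(0.0, 8.0, size=(10, 2))  # P = 10 random shifts, HR pixels
frames = [simulate_bayer_frame(hr_rgb, psfs, s, L=8, noise_var=10.0, rng=rng)
          for s in hr_shifts]                     # sigma_n^2 = 10.0 as above
```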

We include three color AWF reconstruction methods utilizing the global statistical model: the 12-parameter 3-PSF model, the 6-parameter 1-PSF model, and the 3-parameter 1-PSF model. The 12 parameters used for each model, and for the independent channel AWF SR method, in this experiment are shown in Table 2. The brackets designate groups of parameters set to the same value, so that the reduced parameter sets of the 6-parameter and 3-parameter models are readily apparent. The parameters for each model are found by an optimization routine that searches for the parameters producing the lowest MSE. Multiple simulations with varying random shift patterns are performed for each method, and the mean of the best parameters is used in this experiment. The MDSP code requires setting 6 tuning parameters [24]. We found good visual results for β = 0.002, α = 0.9, P = 1, λ' = 0.01, λ'' = 150, and λ''' = 1.0. Figure 8 shows the output of all the methods along with the original HR image for reference.

Table 2. Model parameters for color AWF methods

Fig. 8 Simulated data results for (a) original HR image, (b) demosaic/interpolation, (c) Shift & Add output, (d) MDSP output, (e) RLS with independent color channel regularization, (f) independent color channel AWF SR, (g) 12-parameter 3-PSF color AWF SR, (h) 6-parameter 1-PSF color AWF SR, (i) 3-parameter 1-PSF color AWF SR.

Edge artifacts and a lack of sharpness are seen in the single frame demosaic/interpolation result. These effects are attenuated, but still evident, in the one-step Shift & Add SR method. Iterative MDSP shows a strong improvement in sharpness with reduced, but still apparent, edge artifacts. Both RLS with independent color channel regularization and the independent channel AWF SR method show edge artifacts, best seen in the fence region. The proposed color AWF SR methods appear to provide the best SR reconstruction.

Figure 9 shows the output images cropped to the picket fence region of interest (ROI). The demosaic/interpolation image shows large color fringing and blurring, with Shift & Add providing some improvement. MDSP captures more high spatial frequency content but shows residual color artifacts. This may be due to a number of factors. While several attempts were made to determine a good set of parameters, the 6 tuning parameters may not be optimal. Additionally, both Shift & Add and MDSP quantize the image shift estimates to the reference HR grid. The simulated data use random fractional shifts, so quantizing the shift estimates may produce additional reconstruction errors. Given the level of SR applied and the number of input LR images, there is insufficient data to completely fill the HR grid; Shift & Add therefore performs an additional interpolation step to fill missing HR grid samples.

Fig. 9 Region of interest from the results in Fig. 8. (a) original HR image, (b) demosaic/interpolation, (c) Shift & Add output, (d) MDSP output, (e) RLS with independent color channel regularization, (f) independent color channel AWF SR, (g) 12-parameter 3-PSF color AWF SR, (h) 6-parameter 1-PSF color AWF SR, (i) 3-parameter 1-PSF color AWF SR.

The RLS and independent channel AWF SR results in Figs. 9(e) and 9(f) look similar, with good high frequency reconstruction but residual color fringing. The proposed color AWF methods greatly reduce the color fringing while providing the best high frequency reconstruction. This is because the proposed methods make use of cross-channel correlation in the model.

Simulation experiments provide a means for quantifying performance and comparing methods. MSE and mean absolute error (MAE) are used, and the results are shown in Table 3. We can place the results broadly into three groups. Demosaic/interpolation has the highest errors, with Shift & Add showing a performance improvement. In the next group, MDSP reduces errors significantly relative to Shift & Add; RLS has lower MSE than MDSP, while MDSP performs better in terms of MAE, and the independent channel AWF SR performs slightly better still. The best performing group is the proposed color AWF methods. Note that the 12-parameter 3-PSF AWF SR provides a nearly 30% improvement in MSE over the independent channel AWF.

Table 3. Performance results for simulation experiment #1

The benefit of capturing cross-channel correlation in the proposed color AWF method is enhanced when fewer LR input images are available. The same simulation was run using only 4 LR input frames, and we examined the fence ROI for the 12-parameter 3-PSF AWF model and the independent channel AWF method, shown in Fig. 10. The independent channel AWF method results in an MSE of 285.23, while the proposed color AWF SR produces an MSE of 203.92, a nearly 40% improvement from including cross-channel correlation.

Fig. 10 Region of interest for simulated results using 4 LR input frames. (a) independent color channel AWF SR, (b) 12-parameter 3-PSF color AWF SR.

The proposed color AWF SR methods provided significantly lower MSE and MAE than the other methods. We believe there are three major reasons for this. First, in contrast to the independent channel AWF SR approach, the proposed color AWF SR captures inherent cross-channel correlations, addressing edge artifacts and color fringing. Second, the proposed color AWF SR does not round the shift estimates, so the information contained in the fractional shift estimates is fully used and errors are reduced. Finally, all of the observed data within the LR image set is used. In comparison, Shift & Add SR performs a median operation on redundant observed data: if two or more data samples fall at the same location (after rounding shifts to the nearest HR grid location), Shift & Add retains only the median intensity sample. Since MDSP is initialized with Shift & Add, it is also affected by this median operation. Color AWF SR fully utilizes the observed data via the correlation models and exact shift estimates to reduce reconstruction errors.

Examining the results of the proposed AWF SR methods, it appears that simplifying the model can impact performance. While the 3-parameter 1-PSF model has lower MAE than the 12-parameter 3-PSF model, its MSE is approximately 5% worse. Different shift simulations produce slightly different performance, but the proposed color AWF approaches consistently produce the best results for this simulation. These methods assume a global statistical model, which may limit the benefits of the higher-parameter and 3-PSF AWF models.

4.2 Simulation experiment #2: fractional vs. quantized shift study

To examine the benefits of fully utilizing fractional shift estimates, we constructed a second simulation experiment. A modified version of the 6-parameter 1-PSF color AWF SR approach is created in which the shift estimates found with AWF SR are rounded so that observed samples fall on the discrete HR grid. The same AWF model is then applied to calculate the weights with these discrete shifts. We call this the quantized color AWF SR method.
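The only difference between the two variants is one rounding step, sketched below in our own notation (shift estimates in LR pixel units, HR grid spacing of 1/L LR pixels):

```python
import numpy as np

def quantize_shifts(shifts_lr, L):
    """Round continuous shift estimates to the nearest HR grid location,
    i.e., to multiples of 1/L of an LR pixel, as in quantized color AWF SR."""
    return np.round(np.asarray(shifts_lr) * L) / L
```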

The imaging parameters and HR image from simulation experiment #1 are used, and the same simulated LR image set is provided to each method. Parameters for both the 6-parameter 1-PSF AWF SR and the quantized AWF SR methods are the same as those of the 6-parameter model in simulation experiment #1, shown in Table 2. This keeps the two AWF SR methods identical except for the shift estimates used in calculating the AWF weights. Figure 11 shows the shift estimates produced by both methods and the true shifts used to create the LR image set.

Fig. 11 True shifts, AWF shift estimates, and quantized shift estimates in experiment #2.

We found that the shift estimation error for AWF SR has an MAE of 0.035 (in fractions of the LR grid spacing), while the quantized AWF SR gives an MAE of 0.080. An example of this increase is seen in the lower two shifts and their respective estimates: the true random shifts are spatially close to each other, resulting in AWF shift estimates that are also very close, but quantization rounds them to different HR grid locations, increasing the shift error. In general, quantizing estimated shifts will increase shift errors, and SR methods that place observed samples onto a discrete HR grid may be impacted by this additional error.

Table 4 shows the MAE and MSE results for each method. Using these error metrics, we see the benefit of using fractional shift estimates: with all other parameters equal and the same simulated LR image set, quantizing the estimated shifts increases MSE by nearly 20%.

Table 4. Performance results for simulation experiment #2, randomly shifted input

A different simulation was created in which we quantized the simulated true shifts to lie exactly on the HR grid. This is equivalent to randomly shifting the LR frames but controlling the translational shifts to lie on the discrete HR grid spacing. With quantized true shifts, the quantized AWF SR approach has zero shift estimation error, while the proposed AWF fractional shift estimates produce an MAE of 0.034. Table 5 shows the performance of the two methods. Quantized color AWF SR now outperforms color AWF SR in MSE because of the reduced shift estimation error used in the weights calculation.

Table 5. Performance results for simulation experiment #2, quantized input shifts

4.3 Simulation experiment #3: initializing variational SR with color AWF SR study

We previously found that the RLS variational method performed worse than AWF SR and that independent color channel regularization resulted in color fringing artifacts. That RLS method was initialized with an interpolated image created from one of the input LR images. Here, we initialize the RLS method with the color AWF SR output and examine the potential improvement over AWF SR alone or RLS as previously implemented. This approach is similar to the MDSP method initialized with fast Shift & Add [24]. We use the full-color Kodak parrots image, shown in Fig. 12, and produce the simulated LR image set using the same approach and parameters as in simulation experiment #1. We test three SR methods: 6-parameter 1-PSF color AWF SR, the baseline RLS as previously implemented, and RLS initialized with the 6-parameter 1-PSF AWF output. The AWF SR model parameters were set the same as in simulation experiment #1, shown in Table 2.

Fig. 12 HR image used in the simulation experiment #3 variational SR performance comparison.

Figure 13 shows a cropped region of the images to highlight reconstruction quality. In the black and white region of the parrot’s face, the baseline RLS shows color fringing similar to the previous results. It also lacks the sharpness of the AWF SR output, but appears to provide better noise reduction. RLS initialized with the AWF output reduces the color fringing and improves the noise reduction compared to the AWF SR output while maintaining sharp edges.

Fig. 13 ROI results for simulation experiment #3. (a) input HR image for simulation, (b) 6-parameter 1-PSF color AWF SR, (c) RLS initialized with interpolated LR input image, (d) RLS initialized with color AWF SR result.

Table 6 shows the performance of these methods, with single frame demosaic/interpolation included for comparison. RLS initialized with the AWF SR output has the lowest MSE: a nearly 20% improvement over color AWF SR and a 40% improvement over RLS initialized with an interpolated image. The AWF SR output provides an initial image with reduced color fringing for the simple RLS implementation. Note that MAE is lowest for the baseline RLS method; in this particular simulation, MAE appears to favor the smoother output over reduced color fringing.

Table 6. Performance results for simulation experiment #3

4.4 Real data experiment

We collected a video with an Imaging Source DFK21BU04 color camera that uses a Sony ICX098BQ CCD with a 640 × 480 sensor and 5.6 μm square detector elements. We used a variable lens set to f/4. A set of 10 frames was taken from a video sequence and spatially cropped to include a circular chirp resolution panel. Figure 14 shows an input frame and its red channel image; spatial aliasing is evident in the images. We tested the following methods: demosaic/interpolation, Shift & Add, MDSP initialized with Shift & Add, RLS with independent color channel regularization, 6-parameter 1-PSF color AWF SR, and RLS initialized with color AWF SR. These camera parameters give the same level of undersampling as in simulation experiment #1, so we set the resolution enhancement level to L = 8. Model parameters for the AWF SR and the MDSP SR were set the same as those in Section 4.1.

Fig. 14 Sample input from real data experiment. (a) sample LR Bayer CFA image, (b) red color channel from sample LR Bayer CFA image.

Two ROIs are shown in Figs. 15 and 16. Figure 15 contains a portion of the resolution panel and Fig. 16 shows textbooks. Qualitative performance is much the same as in the prior results. Shift & Add is better than demosaic/interpolation but appears to retain aliasing in the chirp pattern. MDSP sharpens the image compared to Shift & Add, but aliasing is still apparent and some color fringing is seen. Slight color fringing is seen in the baseline RLS, though its edges appear sharper than MDSP. Color fringing is very low in the chirp pattern of the AWF SR output, although edges appear slightly more jagged than the baseline RLS. The RLS method initialized with the color AWF SR output appears to provide the best reconstruction, with sharp but slightly less jagged edges and little to no color fringing in the chirp pattern. The alphanumeric characters at the sides of the chirp pattern and the book lettering also appear the most readable.

Fig. 15 Chirp resolution chart ROI. (a) demosaic/interpolation (Media 1 left), (b) Shift & Add output, (c) MDSP output, (d) RLS with independent channel regularization, (e) 6-parameter 1-PSF color AWF SR (Media 1 right), (f) RLS initialized with color AWF SR result.

Fig. 16 Textbooks ROI from real data experiment. (a) demosaic/interpolation (Media 1 left), (b) Shift & Add output, (c) MDSP output, (d) RLS with independent color channel regularization, (e) 6-parameter 1-PSF color AWF SR (Media 1 right), (f) RLS initialized with color AWF SR result.

5. Conclusions

In this paper, we developed a new AWF SR method specifically for color CFA cameras. The color AWF SR relies on modeling the autocorrelation of the desired HR image. Using a global statistical model, it is readily adaptable to real-time implementation. The color AWF SR implementation captures valuable cross-channel correlations in the model, and results from both simulated and real data showed that color AWF SR outperformed the other methods tested. Under the global statistical assumption, certain model simplifications can be made with little degradation in performance, including using a single PSF for all color channels and reducing the number of model parameters. Additionally, color AWF SR fully utilizes the fractional shift estimates and all observed data samples, leading to reduced reconstruction errors and better performance. We also showed improved performance of a simple variational SR method by using the AWF SR output as the initialization image.

Appendix A: derivation of the autocorrelation functions

The correlation matrix for the observation vector g_i is derived as

$$\begin{aligned} \mathbf{R}_i &= E\{\mathbf{g}_i \mathbf{g}_i^T\} = E\{(\mathbf{f}_i + \mathbf{n}_i)(\mathbf{f}_i + \mathbf{n}_i)^T\} \\ &= E\{(\mathbf{f}_i + \mathbf{n}_i)(\mathbf{f}_i^T + \mathbf{n}_i^T)\} \\ &= E\{\mathbf{f}_i \mathbf{f}_i^T\} + E\{\mathbf{n}_i \mathbf{f}_i^T\} + E\{\mathbf{f}_i \mathbf{n}_i^T\} + E\{\mathbf{n}_i \mathbf{n}_i^T\} \\ &= E\{\mathbf{f}_i \mathbf{f}_i^T\} + \sigma_n^2 \mathbf{I}, \end{aligned} \tag{18}$$
where the cross terms vanish because the noise n_i is zero-mean and independent of the signal.

The elements of E{f_i f_i^T} are samples of the continuous autocorrelation function r_{f_j f_k}(x, y), where j and k designate the specific color channels of the samples involved. The autocorrelation function, given by [36]

$$\begin{aligned} r_{f_j f_k}(x,y) &= E\{ f_j(\alpha,\beta)\, f_k(\alpha+x,\beta+y) \} \\ &= E\{ [d_j(\alpha,\beta) * h_j(\alpha,\beta)]\,[d_k(\alpha+x,\beta+y) * h_k(\alpha+x,\beta+y)] \} \\ &= E\left\{ \iint d_j(\alpha-\gamma,\beta-\eta)\, h_j(\gamma,\eta)\, d\gamma\, d\eta \iint d_k(\alpha+x-\theta,\beta+y-\phi)\, h_k(\theta,\phi)\, d\theta\, d\phi \right\} \\ &= \iint \left[ \iint E\{ d_j(\alpha-\gamma,\beta-\eta)\, d_k(\alpha+x-\theta,\beta+y-\phi) \}\, h_k(\theta,\phi)\, d\theta\, d\phi \right] h_j(\gamma,\eta)\, d\gamma\, d\eta \\ &= \iint \left[ \iint r_{d_j d_k}(x+\gamma-\theta,\, y+\eta-\phi)\, h_k(\theta,\phi)\, d\theta\, d\phi \right] h_j(\gamma,\eta)\, d\gamma\, d\eta \\ &\qquad (\text{let } \xi = \theta - \gamma,\ \psi = \phi - \eta) \\ &= \iint \left[ \iint r_{d_j d_k}(x-\xi,\, y-\psi)\, h_k(\gamma+\xi,\eta+\psi)\, d\xi\, d\psi \right] h_j(\gamma,\eta)\, d\gamma\, d\eta \\ &= \iint r_{d_j d_k}(x-\xi,\, y-\psi) \left[ \iint h_j(\gamma,\eta)\, h_k(\gamma+\xi,\eta+\psi)\, d\gamma\, d\eta \right] d\xi\, d\psi \\ &= \iint r_{d_j d_k}(x-\xi,\, y-\psi)\, [h_j(\xi,\psi) * h_k(\xi,\psi)]\, d\xi\, d\psi \\ &= r_{d_j d_k}(x,y) * [h_j(x,y) * h_k(x,y)], \end{aligned} \tag{19}$$
depends on the PSFs of both the j and k color channels.

Appendix B: derivation of the cross-correlation functions

The cross-correlation, P_i, between the true desired vector d_i and the observation vector g_i is

$$\begin{aligned} \mathbf{P}_i &= E\{\mathbf{g}_i \mathbf{d}_i^T\} = E\{(\mathbf{f}_i + \mathbf{n}_i)\, \mathbf{d}_i^T\} \\ &= E\{\mathbf{f}_i \mathbf{d}_i^T\} + E\{\mathbf{n}_i \mathbf{d}_i^T\} \\ &= E\{\mathbf{f}_i \mathbf{d}_i^T\}. \end{aligned} \tag{20}$$
The elements of E{f_i d_i^T} are samples of the continuous cross-correlation function, r_{d_j f_k}(x, y), between the noise-free observation, f_i, and the desired data, d_i. The cross-correlation function is [36]
$$\begin{aligned} r_{d_j f_k}(x,y) &= E\{ d_j(\alpha,\beta)\, f_k(\alpha+x,\beta+y) \} \\ &= E\{ d_j(\alpha,\beta)\, [d_k(\alpha+x,\beta+y) * h_k(\alpha+x,\beta+y)] \} \\ &= E\left\{ d_j(\alpha,\beta) \iint d_k(\alpha+x-\gamma,\beta+y-\eta)\, h_k(\gamma,\eta)\, d\gamma\, d\eta \right\} \\ &= \iint h_k(\gamma,\eta)\, E\{ d_j(\alpha,\beta)\, d_k(\alpha+x-\gamma,\beta+y-\eta) \}\, d\gamma\, d\eta \\ &= \iint h_k(\gamma,\eta)\, r_{d_j d_k}(x-\gamma,\, y-\eta)\, d\gamma\, d\eta \\ &= r_{d_j d_k}(x,y) * h_k(x,y). \end{aligned} \tag{21}$$
Also,
$$\begin{aligned} r_{d_k f_j}(x,y) &= E\{ d_k(\alpha,\beta)\, f_j(\alpha+x,\beta+y) \} \\ &= E\{ d_k(\alpha,\beta)\, [d_j(\alpha+x,\beta+y) * h_j(\alpha+x,\beta+y)] \} \\ &= E\left\{ d_k(\alpha,\beta) \iint d_j(\alpha+x-\gamma,\beta+y-\eta)\, h_j(\gamma,\eta)\, d\gamma\, d\eta \right\} \\ &= \iint h_j(\gamma,\eta)\, E\{ d_k(\alpha,\beta)\, d_j(\alpha+x-\gamma,\beta+y-\eta) \}\, d\gamma\, d\eta \\ &= \iint h_j(\gamma,\eta)\, r_{d_k d_j}(x-\gamma,\, y-\eta)\, d\gamma\, d\eta \\ &= r_{d_k d_j}(x,y) * h_j(x,y) = r_{d_j d_k}(x,y) * h_j(x,y). \end{aligned} \tag{22}$$

References and links

1. B. E. Bayer, “Color imaging array,” Eastman Kodak Co., U.S. Patent 3,971,065 (Jul. 20, 1976).
2. R. Ramanath, W. Snyder, G. Bilbro, and W. Sander, “Demosaicking methods for the Bayer color arrays,” J. Electron. Imaging 11(3), 306–315 (2002).
3. B. K. Gunturk, J. Glotzbach, Y. Altunbasak, R. W. Schafer, and R. M. Mersereau, “Demosaicking: color filter array interpolation,” IEEE Signal Process. Mag. 22(1), 44–54 (2005).
4. X. Li, B. Gunturk, and L. Zhang, “Image demosaicing: a systematic survey,” Proc. SPIE 6822, 68221J (2008).
5. R. D. Fiete, “Image quality and λFN/p for remote sensing systems,” Opt. Eng. 38(7), 1229–1240 (1999).
6. S. C. Park, M. K. Park, and M. G. Kang, “Super-resolution image reconstruction: a technical overview,” IEEE Signal Process. Mag. 20(3), 21–36 (2003).
7. P. Milanfar, Super-Resolution Imaging (CRC, 2010).
8. J. C. Gillette, T. M. Stadtmiller, and R. C. Hardie, “Aliasing reduction in staring infrared imagers utilizing subpixel techniques,” Opt. Eng. 34(11), 3130–3137 (1995).
9. M. Alam, M. S. Bognar, R. C. Hardie, and B. J. Yasuda, “Infrared image registration and high-resolution reconstruction using multiple translationally shifted aliased video frames,” IEEE Trans. Instrum. Meas. 49(5), 915–923 (2000).
10. S. Lertrattanapanich and N. K. Bose, “High resolution image formation from low resolution frames using Delaunay triangulation,” IEEE Trans. Image Process. 11(12), 1427–1441 (2002).
11. K. Aizawa, T. Komatsu, and T. Saito, “Acquisition of very high resolution images using stereo cameras,” Proc. SPIE 1605, 318–328 (1991).
12. A. M. Tekalp, M. K. Ozkan, and M. I. Sezan, “High-resolution image reconstruction from lower-resolution image sequences and space-varying image restoration,” in Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing 3, 169–172 (1992).
13. H. Shekarforoush and R. Chellappa, “Data-driven multichannel superresolution with application to video sequences,” J. Opt. Soc. Am. A 16(3), 481–492 (1999).
14. T. Q. Pham, L. J. Van Vliet, and K. Schutte, “Robust fusion of irregularly sampled data using adaptive normalized convolution,” EURASIP J. Adv. Signal Process. 2006, 1–12 (2006).
15. J. Shi, S. E. Reichenbach, and J. D. Howe, “Small-kernel superresolution methods for microscanning imaging systems,” Appl. Opt. 45(6), 1203–1214 (2006).
16. R. C. Hardie, “A fast image super-resolution algorithm using an adaptive Wiener filter,” IEEE Trans. Image Process. 16(12), 2953–2964 (2007).
17. R. C. Hardie, D. A. LeMaster, and B. M. Ratliff, “Super-resolution for imagery from integrated microgrid polarimeters,” Opt. Express 19(14), 12937–12960 (2011).
18. R. C. Hardie, K. J. Barnard, and R. Ordonez, “Fast super-resolution with affine motion using an adaptive Wiener filter and its application to airborne imaging,” Opt. Express 19(27), 26208–26231 (2011).
19. B. Narayanan, R. C. Hardie, K. E. Barner, and M. Shao, “A computationally efficient super-resolution algorithm for video processing using partition filters,” IEEE Trans. Circuits Syst. Video Technol. 17(5), 621–634 (2007).
20. M. Elad and Y. Hel-Or, “A fast super-resolution reconstruction algorithm for pure translational motion and common space-invariant blur,” IEEE Trans. Image Process. 10(8), 1187–1193 (2001).
21. M. Shimizu, T. Yano, and M. Okutomi, “Super-resolution under image deformation,” in Proc. Int. Conf. Pattern Recognition 3, 586–589 (2004).
22. T. Gotoh and M. Okutomi, “Direct super-resolution and registration using raw CFA images,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition 2, 600–607 (2004).
23. R. Sasahara, H. Hasegawa, I. Yamada, and K. Sakaniwa, “A color super-resolution with multiple nonsmooth constraints by hybrid steepest descent method,” IEEE Trans. Image Process. 1, 857–860 (2005).
24. S. Farsiu, M. Elad, and P. Milanfar, “Multiframe demosaicing and super-resolution of color images,” IEEE Trans. Image Process. 15(1), 141–159 (2006).
25. M. Gevrekci, B. K. Gunturk, and Y. Altunbasak, “POCS-based restoration of Bayer-sampled image sequences,” in Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing 1, 753–756 (2007).
26. Y. R. Li and D. Q. Dai, “Color superresolution reconstruction and demosaicing using elastic net and tight frame,” IEEE Trans. Circuits Syst. I 55(11), 3500–3512 (2008).
27. M. Trimeche, “Color demosaicing using multi-frame super-resolution,” in Proc. European Signal Processing Conf. (2008).
28. D. A. Sorrentino and A. Antoniou, “Improved hybrid demosaicing and color super-resolution implementation using quasi-Newton algorithms,” in Proc. Canadian Conf. Electrical and Computer Engineering, 815–818 (2009).
29. P. Vandewalle, K. Krichane, D. Alleysson, and S. Süsstrunk, “Joint demosaicing and super-resolution imaging from a set of unregistered aliased images,” Proc. SPIE 6502, 65020A (2007).
30. T. Heinze, M. von Lowis, and A. Polze, “Joint multi-frame demosaicing and super-resolution with artificial neural networks,” in Proc. IWSSIP 2012, 540–543 (2012).
31. M. Shao, K. E. Barner, and R. C. Hardie, “Partition-based interpolation for color filter array demosaicking and super-resolution reconstruction,” Opt. Eng. 44(10), 107003 (2005).
32. M. T. Eismann, Hyperspectral Remote Sensing (SPIE, 2012).
33. G. D. Boreman, Modulation Transfer Function in Optical and Electro-Optical Systems (SPIE, 2001).
34. R. C. Hardie, K. J. Barnard, J. G. Bognar, E. E. Armstrong, and E. A. Watson, “High-resolution image reconstruction from a sequence of rotated and translated frames and its application to an infrared imaging system,” Opt. Eng. 37(1), 247–260 (1998).
35. A. Oppenheim and R. Schafer, Discrete-Time Signal Processing (Prentice Hall, 1989).
36. C. W. Therrien, Discrete Random Signals and Statistical Signal Processing (Prentice Hall PTR, 1992).
37. A. K. Jain, Fundamentals of Digital Image Processing (Prentice Hall, 1989).
38. P. Milanfar, “MDSP resolution enhancement software,” University of California, http://users.soe.ucsc.edu/~milanfar/software/superresolution.html.

Supplementary Material (1)

Media 1: MOV (4486 KB)     




