
Characterization of a spectrograph based hyperspectral imaging system


Abstract

A significant part of the uniformity degradation in the acquired hyperspectral images can be attributed to the coregistration distortions and spectrally and spatially dependent resolution arising from the misalignments and the operation principle of the spectrograph based hyperspectral imaging system. The aim of this study was the development and validation of a practical method for characterization of the geometric coregistration distortions and position dependent resolution. The proposed method is based on modeling the imaging system response to several affordable reference objects. The results of the characterization can be used for calibration of the acquired images or as a tool for assessment of the expected errors in various hyperspectral imaging systems.

©2013 Optical Society of America

1. Introduction

In a basic scheme of a line-scan hyperspectral imaging system (HIS) the observed sample is illuminated by a broadband light source. The diffused or transmitted light, carrying information on the observed sample, is collected by the input optics, passed through a narrow slit, and finally dispersed across the detector array, forming one spatial slice of the hyperspectral image. For proper interpretation of the recorded information, either an absolute relation between the physical properties of the observed sample and the recorded image or at least instrument-level uniformity with respect to the spectral and spatial position is required. Due to the optical properties and aberrations of the integrated lens (e.g. astigmatism, distortion, chromatic aberration) and the aberrations arising from the dispersing element, the imaging system response is strongly spectrally and spatially dependent [1]. In an ideal image, free of geometric coregistration errors, the spectral and the spatial information would be aligned with the image columns (spatial channels) and rows (spectral channels), respectively. However, due to higher-order effects the recorded image is geometrically distorted. In relative terms, the recorded image appears warped in the spectral direction because the dispersion angle changes with the spatial position (smile). A similar effect, arising from the wavelength-dependent magnification, is observed in the spatial direction (keystone). Images can be additionally distorted in both directions by rotational and angular misalignment between the spectrograph and the camera. On top of the geometric distortions, the uniformity of the recorded image is further degraded by a spectrally and spatially dependent point spread function (PSF). Therefore, a simple global linear model (scaling and shifting of the image) is insufficient for proper spectral and spatial calibration of the HIS.

Considering that the artifacts arising from geometric misalignments and a position-variant PSF are well-known issues in the field of remote sensing, the problem is surprisingly rarely addressed in the field of chemical imaging. To the best of our knowledge, there is only one publication [2] studying and dealing with the geometric coregistration problem, conducted in the field of Raman spectroscopy. That study showed that even minor geometric misalignments lead to significant nonlinear intensity errors in the acquired spectra, which degrade the subsequent hyperspectral image analysis. The findings were in agreement with studies made in the field of remote sensing, where numerous methods were proposed to both analyze and alleviate the impact of the problem [3–6]. However, these methods are specific to the recorded data, the employed equipment and the challenges of remote sensing, and usually address only the spectral calibration.

Polder et al. [7] proposed a set of practical procedures suitable for characterization of several laboratory HIS setups. The spectral calibration was performed by finding a polynomial relation between the measured peak locations of several known spectra and the corresponding real spectral line wavelengths. The spectral resolution was reported as the full width at half maximum (FWHM) of a laser spectral line at 670 nm. The spatial resolution was assessed in terms of the FWHM at three wavelengths by several methods. The most reliable results were obtained by calculating the derivative of the b-spline interpolated grey levels of a black-to-white step edge. The position-variant spectral response function (SRF) is usually estimated by stepping through the spectral range by means of a spectrally tunable light source [8, 9]. Unfortunately, this approach requires a calibrated monochromator and a relatively complex measurement setup. An analogous approach was used for the spatial calibration by observing a distant, physically thin light source at predetermined positions [10]. The practical method proposed in [2] for mitigation of the geometric coregistration problem in a Raman imaging spectrograph utilized feature-based image registration. The required control points were extracted from a pair of reference images corresponding to a neon light source and patterned white illumination. Unfortunately, the SRF and the spatial response function (SiRF) were not assessed in that work. The most popular methods for measurement of the optical system line spread function (LSF), closely related to the SiRF, are based on precise sub-pixel shifting of the calibration object [11, 12]. While this approach is straightforward and accurate, it requires a precise mechanical actuator. The latter can pose a problem either when an optical assembly has to be characterized prior to integration into the final setup or when scanning is performed on a system lacking precise positioning, e.g. a conveyor belt.

In summary, the previously published methods either do not consider the complete spatial-spectral dependence of the HIS or require complex measurement setups, which render them impractical. To alleviate this problem, we propose a novel method based on modeling of the HIS response to a scene exhibiting well-defined spectral and spatial features. For a reliable and unambiguous modeling, spatial and spectral features were retrieved by acquisition and analysis of images corresponding to different spectral reference lamps and a spatial reference object. The proposed method requires a relatively simple measurement setup and does not depend on any special equipment.

2. Methodology

Let g(θ,λ), (θ,λ) ∈ ℝ², represent the true object defined in object coordinates, where θ is the spatial position and λ the wavelength. First, the geometric properties of the HIS are captured by mapping the true object into the image space by a forward tuning function T: (θ,λ) → (υ,ω) ∈ ℝ². Second, blur is modeled by convolving the transformed object with the PSF of the HIS. Assuming the 2D PSF is separable, it can be split into a pair of 1D line spread functions, namely the spectral response function (SRF) and the spatial response function (SiRF). The SRF and SiRF account for the properties of the entire HIS, i.e. the lens, the spectrograph and the camera. Therefore, the acquired image I(u,w) of size U × W pixels can be estimated within error ε as:

$$I(u,w)=\iint_{\partial D(u,w)}\bigl[T(g(\theta,\lambda))*\mathrm{SRF}(\upsilon,\omega)*\mathrm{SiRF}(\upsilon,\omega)\bigr](\upsilon,\omega)\,d\upsilon\,d\omega+\varepsilon(u,w),\qquad(1)$$
where * denotes convolution. Assuming a uniform response within the pixel borders, the signal I(u,w) sampled by equally spaced, finite-size pixels {(u,w) ∈ [1,U]×[1,W]} is an integral over the corresponding square region ∂D(u,w) of size δu × δw. Using the two images shown in Fig. 1, each containing information varying predominantly in the direction of either the SRF or the SiRF, Eq. (1) can be expressed as a system of two equations, one describing the spectral and the other the spatial direction:

Fig. 1 Example of a spectral (Ne) (a) and a spatial (b) reference image.

$$S_A(u,w)=\iint_{\partial D(u,w)}\bigl[T_S(g_S(\theta,\lambda))*\mathrm{SRF}(\upsilon,\omega)\bigr](\upsilon,\omega)\,d\upsilon\,d\omega+\varepsilon_S(u,w),\qquad(2)$$
$$Si_A(u,w)=\iint_{\partial D(u,w)}\bigl[T_{Si}(g_{Si}(\theta,\lambda))*\mathrm{SiRF}(\upsilon,\omega)\bigr](\upsilon,\omega)\,d\upsilon\,d\omega+\varepsilon_{Si}(u,w).\qquad(3)$$

The image (SA) of the spectral reference object (gS) provides a good definition of the properties in the spectral direction and is largely insensitive to distortions in the spatial direction, while the opposite holds for the image (SiA) of the spatial reference object (gSi). In general, the SRF and SiRF are aligned with the spatial and the spectral axis of the spectrograph and not necessarily with the columns and rows of the acquired image. However, to lower the computational complexity of the proposed method, alignment of the spatial and spectral axes between the spectrograph and the camera is assumed. The assumption holds if the initial geometric coregistration errors are small, which is realistic, since the proposed method is used after the proper adjustments have been made by other, sufficiently accurate procedures [1]. Furthermore, the SRF and SiRF are position variant. Therefore, Eq. (2) and Eq. (3) should be evaluated directly only over small regions, where local invariance can be assumed.

The first goal of the characterization is the assessment of the forward tuning functions, ω = TS(θ,λ) and υ = TSi(θ,λ), and of the corresponding inverse tuning functions, λ = ZS(υ,ω) and θ = ZSi(υ,ω). The former are used to estimate the image of a given object, while the latter are required for a physically meaningful interpretation of the acquired image. The tuning functions are determined by establishing a correspondence between features in the object space and their positions in the acquired image; for this purpose, the feature locations have to be extracted from the acquired image. Furthermore, since the quality of the HIS is also defined by its resolution, the position-dependent response functions SRF(υ,ω) and SiRF(υ,ω) are determined as well. With respect to the HIS properties and the technological limitations of the reference object design, solving the two equations, namely Eq. (2) and Eq. (3), requires different approaches.

2.1 Spectral feature extraction and SRF characterization

Since the atomic emission spectral lines are significantly narrower than the width of the instrument's SRF, they can be considered as shifted Dirac delta functions. Assuming a locally invariant SRF and considering the shifting property of the delta function under convolution, the measured response to a single spectral line is a good approximation of the SRF. In the majority of publications the SRF is modeled as a Gaussian function. However, for the studied HIS such a model fitted the acquired data poorly. Significantly better results were produced using the symmetric normalized Voigt profile V(ω-w0;γL,γG), defined as a convolution of centered Gaussian and Lorentzian broadening functions. Therefore, the spectral response of the HIS to a single spectral line, centered at a position w0 = TS(θ,λ0), is completely defined by two spectrally and spatially variant SRF parameters (γL,γG). Using the fast approximation formula proposed in [13], this model retains global parameterization and reasonable computational efficiency.
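For illustration, a minimal Python sketch of such a line-shape model is given below. It uses the widely known pseudo-Voigt approximation (a weighted mixture of a Gaussian and a Lorentzian of common effective width) as a stand-in for the exact formula of [13], and it assumes that (γG, γL) denote the Gaussian and Lorentzian half widths at half maximum; the function name and parameterization are illustrative only.

```python
import numpy as np

def pseudo_voigt(w, w0, gamma_g, gamma_l):
    """Approximate, area-normalized Voigt profile V(w - w0; gamma_L, gamma_G).

    gamma_g and gamma_l are assumed to be the Gaussian and Lorentzian half
    widths at half maximum (HWHM).  A common pseudo-Voigt approximation is
    used: an effective width and a mixing fraction eta blend an
    area-normalized Gaussian and Lorentzian of that width.
    """
    # Effective HWHM of the combined profile (homogeneous in the two widths).
    f = (gamma_g**5 + 2.69269*gamma_g**4*gamma_l + 2.42843*gamma_g**3*gamma_l**2
         + 4.47163*gamma_g**2*gamma_l**3 + 0.07842*gamma_g*gamma_l**4
         + gamma_l**5)**0.2
    # Mixing fraction between the Lorentzian and Gaussian components.
    r = gamma_l / f
    eta = 1.36603*r - 0.47719*r**2 + 0.11116*r**3
    x = w - w0
    sigma = f / np.sqrt(2.0*np.log(2.0))                  # Gaussian sigma from HWHM
    gauss = np.exp(-0.5*(x/sigma)**2) / (sigma*np.sqrt(2.0*np.pi))
    lorentz = (f/np.pi) / (x**2 + f**2)
    return eta*lorentz + (1.0 - eta)*gauss
```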

The coverage of the utilized spectral range by a single spectral reference source is limited. Therefore, separate spectra of L different sources, each contributing Jl unique spectral lines, were modeled simultaneously. The j-th spectral line in the spectrum of the l-th reference lamp is defined by its wavelength λ0,l,j and intensity. By modeling the spectral line as a delta function, transforming it by the forward spectral tuning function, and modeling its intensity αl,j, the term TS(gS(θ,λ)) in Eq. (2) is substituted by αl,j·δ(ω-w0,l,j,u). Using the Voigt profile as the SRF model in Eq. (2), and taking into account the convolution rule for the SRF and a shifted delta function, the modeled spectrum SM,l,u of the l-th reference lamp corresponding to spatial channel u of SA,l can be written as a sum of responses to the individual spectral lines, i.e. shifted Voigt profiles αl,j·V(ω-w0,l,j,u,γG(u,w0,l,j,u),γL(u,w0,l,j,u)). Approximating the integral in Eq. (2) over a rectangular area of size δu × δw by the rectangle rule [14], and adding a polynomial model of the baseline shift, the final spectrum model can be written as:

$$S_{M,l,u}(w)=\delta u\,\delta w\sum_{j=1}^{J_l}\alpha_{l,j}\,V\bigl(w-w_{0,l,j,u},\;\gamma_G(u,w_{0,l,j,u}),\;\gamma_L(u,w_{0,l,j,u})\bigr)+\sum_{k=0}^{K}\beta_{l,k,u}\,w^{k},\qquad(4)$$
where βl,k,u are the K + 1 polynomial coefficients of the baseline. Since the baseline is mostly due to the thermal drift of the camera, while the majority of the SRF is attributed to the spectrograph and lens, the baseline term is not convolved with the SRF. The goal of the spectral characterization algorithm is to find the accurate image positions of the spectral lines and the parameters of the SRF by matching the modeled and the acquired spectrum. Unfortunately, the spectral lines of affordable atomic emission reference sources are usually clustered in non-uniformly distributed groups, which leads to an ill-conditioned optimization problem due to poor separation of the instrument response to the individual spectral lines. Furthermore, accurate intensities of the individual spectral lines in the light reaching the optical sensor are elusive, due to (but not limited to) the variability of the reference source properties within manufacturing tolerances, the operating environment, and the spectrally and spatially dependent absorption of the utilized diffusing element. The problem was mitigated by imposing additional physically meaningful constraints: (1) the SRF exhibits smooth variation with respect to wavelength, (2) the relation between the true (λ0,l,j) and the recorded (w0,l,j,u) wavelengths of the spectral lines is smooth and continuous, (3) the intensities of the spectral lines are always non-negative (αl,j ≥ 0). The first constraint (1) was imposed by the global parameterization of the SRF, γG(u,w0,l,j,u) = f(w0,l,j,u;kB,bG(kB,u)) and γL(u,w0,l,j,u) = f(w0,l,j,u;kB,bL(kB,u)), where f is a global, cubic-spline-based model parameterized by bG and bL defined at a set of KB fixed, uniformly spaced breaks kB. Since similar properties are expected of the second constraint (2), an analogous parameterization w0,l,j,u = f(λ0,l,j;kG,bS(kG,u)) with KG breaks was used. It should be noted that KB and KG are significantly smaller than the total number of the modeled spectral lines. The third constraint (3) was imposed by the constrained optimization method. The search for the parameter set defining the modeled spectral characteristics of the HIS was performed separately for each spatial position by minimizing the dissimilarity (χS) between the modeled and the acquired spectrum, {bG,Opt, bL,Opt, bS,Opt} = argmin(χS(bG, bL, bS, αl,j, βl,k,u)). The dissimilarity was defined by combining the weighted correlation coefficient (wcc) and the weighted sum of squared errors (wsse):
$$\chi_{S,u}=\frac{1}{L}\sum_{l=1}^{L}\Bigl\{\bigl[1-\mathrm{wcc}\bigl(S_{M,l,u}(w),S_{A,l,u}(w),\mu_l(w)\bigr)\bigr]+\mathrm{wsse}\bigl(S_{M,l,u}(w),S_{A,l,u}(w),\mu_l(w)\bigr)\Bigr\},\qquad(5)$$
where the weight array µl(w) is used to emphasize the more informative portions of the spectral range. The weighted correlation coefficient [15], based on Fisher's definition of the correlation coefficient between two arrays containing the intensity values of the modeled sM(w) = SM,l,u(w) and the acquired sA(w) = SA,l,u(w) spectra in an arbitrary spatial channel u, can be written as
$$\mathrm{wcc}(s_M,s_A,\mu)=\frac{\sum_{w=1}^{W}\mu(w)\bigl(s_A(w)-\langle s_A\rangle\bigr)\bigl(s_M(w)-\langle s_M\rangle\bigr)}{\Bigl[\sum_{w=1}^{W}\mu(w)\bigl(s_A(w)-\langle s_A\rangle\bigr)^{2}\,\sum_{w=1}^{W}\mu(w)\bigl(s_M(w)-\langle s_M\rangle\bigr)^{2}\Bigr]^{1/2}},\qquad(6)$$
where ⟨sM⟩ = Σ µ(w)·sM(w) / Σ µ(w) and ⟨sA⟩ = Σ µ(w)·sA(w) / Σ µ(w) are the weighted means of the corresponding arrays. In this notation, w is the index of the pixel centers and all summations run from w = 1 to w = W. The wsse can be expressed as the sum of squared differences between the modeled and the acquired spectra multiplied by the normalized weights:

$$\mathrm{wsse}(s_M,s_A,\mu)=\sum_{w=1}^{W}\mu(w)\bigl[s_A(w)-s_M(w)\bigr]^{2}\Big/\sum_{w=1}^{W}\mu(w).\qquad(7)$$
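The dissimilarity of Eqs. (5)–(7) translates directly into a few array operations. The sketch below assumes the modeled spectra, the acquired spectra and the weight arrays are one-dimensional NumPy arrays of equal length; all names are illustrative.

```python
import numpy as np

def wcc(s_m, s_a, mu):
    """Weighted correlation coefficient between modeled and acquired spectra, Eq. (6)."""
    m_m = np.sum(mu*s_m) / np.sum(mu)          # weighted mean of the model
    m_a = np.sum(mu*s_a) / np.sum(mu)          # weighted mean of the acquisition
    num = np.sum(mu*(s_a - m_a)*(s_m - m_m))
    den = np.sqrt(np.sum(mu*(s_a - m_a)**2) * np.sum(mu*(s_m - m_m)**2))
    return num / den

def wsse(s_m, s_a, mu):
    """Weighted sum of squared errors, Eq. (7)."""
    return np.sum(mu*(s_a - s_m)**2) / np.sum(mu)

def dissimilarity(models, acquisitions, weights):
    """Spectral dissimilarity chi_S of Eq. (5), averaged over the L reference lamps."""
    terms = [(1.0 - wcc(s_m, s_a, mu)) + wsse(s_m, s_a, mu)
             for s_m, s_a, mu in zip(models, acquisitions, weights)]
    return float(np.mean(terms))
```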

The three utilized spectral reference lamps (L = 3), shown in Fig. 2, provided 55 useful spectral lines. The most informative spectral range and the corresponding spectral lines were identified for each lamp. Three mercury lines of the Hg(Ar) lamp in the range from 527 nm to 588 nm were used, and the Ne lamp provided a further 28 spectral lines in the range from 565 nm to 772 nm. Since the Hg(Ar) lamp exhibited a large intensity contrast between the argon and the mercury spectral lines, an additional Ar lamp was utilized in the range from 732 nm to 990 nm, providing 24 spectral lines. The accurate wavelengths of all the spectral lines were taken from the NIST spectral database [16].

Fig. 2 Weighted acquired spectra of the three utilized reference sources. The problem of low spectral resolution is clearly demonstrated by the two recorded spectral peaks of the Hg(Ar) lamp. The spectral peak Hg-I corresponds to the single Hg spectral line at 546.07 nm while the Hg-II peak is a response to two Hg spectral lines at 576.96 nm and 579.07 nm, unresolvable due to the limited instrument resolution and discrete sampling. This is the reason for the difference between the number of observed peaks and the number of tabulated spectral lines. Furthermore, at shorter wavelengths peaks are approximately Gaussian. However, towards the NIR end of the observed spectral range, the spectral peaks (e.g. Ar-I at 965.78 nm) become wider and their tails longer than the Gaussian function would model.

Computation of the optimal parameters was performed by a nested approach. The parameters defining the properties of the HIS (bG, bL, bS) were optimized by an outer Broyden-Fletcher-Goldfarb-Shanno (BFGS) [17] nonlinear optimization method. This gradient-based iterative quasi-Newton method exhibits superlinear convergence and employs the observed behavior of the dissimilarity χS and its gradient for construction of the Hessian matrix approximation and step size selection. In each BFGS iteration the optimal auxiliary parameters αl,j and βl,k,u, capturing the unknown experimental variations (independent of the HIS), were computed by constrained linear least-squares imposing the non-negativity of the spectral line intensities. The initial approximation of bS was computed by a linear fit between the real and the acquired positions of distinct spectral peaks (Hg: 546.10 nm, Ne: 671.70 nm, Ar: 763.51 nm and 965.78 nm), while the initial SRF parameters bG and bL were set to values fitting a single Ar peak at the central part of the acquired image (Ar: 763.51 nm). Prior to optimization the parameters were scaled to improve convergence and aid the proper selection of the stopping criterion. Furthermore, convergence to local minima was prevented by iteratively increasing the number of breaks employed by the subsequent runs of the optimization. The numbers of breaks KB and KG act as regularization parameters. For values less than four the model exhibited underfitting; between five and seven the optimization problem was properly regularized and the best results were achieved; further increasing the number of breaks led to overfitting. Since the computation time increases with the number of breaks, both KB and KG were set to five. The optimal parameters for a single spectrum were computed in approximately 11 s on a current generation personal computer.
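A minimal sketch of this nested scheme, simplified to a single reference lamp and a single spatial channel, is given below; it omits the spline parameterization and parameter scaling of the actual implementation. The outer loop uses SciPy's BFGS, the inner step solves a bounded linear least-squares problem so that only the line intensities are forced to be non-negative, and build_design_matrix is a hypothetical helper that evaluates the sampled Voigt responses and baseline terms for the current HIS parameters.

```python
import numpy as np
from scipy.optimize import minimize, lsq_linear

def fit_spectrum(acquired, mu, theta0, build_design_matrix):
    """Nested fit of one spatial channel: outer BFGS over the HIS parameters
    theta ~ (b_G, b_L, b_S); inner bounded linear least squares over the line
    intensities alpha (non-negative) and baseline coefficients beta (free)."""

    def chi(theta):
        # Columns: sampled Voigt responses of the lines, then baseline terms.
        A, n_lines = build_design_matrix(theta)
        lb = np.concatenate([np.zeros(n_lines),
                             np.full(A.shape[1] - n_lines, -np.inf)])
        coef = lsq_linear(A, acquired, bounds=(lb, np.inf)).x
        model = A @ coef
        # Dissimilarity of Eqs. (5)-(7) for a single lamp.
        m_m = np.sum(mu*model) / np.sum(mu)
        m_a = np.sum(mu*acquired) / np.sum(mu)
        wcc = np.sum(mu*(acquired - m_a)*(model - m_m)) / np.sqrt(
            np.sum(mu*(acquired - m_a)**2) * np.sum(mu*(model - m_m)**2))
        wsse = np.sum(mu*(acquired - model)**2) / np.sum(mu)
        return (1.0 - wcc) + wsse

    return minimize(chi, theta0, method='BFGS')
```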

The elements of the weight arrays µl(w) corresponding to the lamp-specific informative spectral range were set to unity and left at zero otherwise. In the HIS under study, the baseline shift was mainly caused by the thermal drift of the camera sensor. The approximate shape of the baseline was assessed by recording an image of a white reference material illuminated by a halogen broadband light and a series of images corresponding to the camera dark current. The frequency at which the dark frames were captured was low enough that the camera drift was measurable. For each frame, the difference between the corresponding dark frame and the first dark frame in the series was computed. This difference was then divided by the difference between the white frame and the first dark frame in the series. The resulting series of images represented the time-lapse of the baseline drift for each spatial channel. By fitting each baseline in the series it was determined that the baseline can be approximated by a second-order polynomial. Since the weights were nonzero only within narrow spectral bands, the baseline was locally modeled by a first-order polynomial (K = 1).
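A minimal sketch of this baseline assessment is shown below, assuming dark_frames is a time-ordered stack of dark images and white is the white reference frame, all as floating point NumPy arrays; the function names are illustrative.

```python
import numpy as np

def baseline_drift(dark_frames, white):
    """Relative baseline drift of each dark frame with respect to the first one,
    normalized by the white-minus-dark dynamic range."""
    ref_dark = dark_frames[0]
    span = white - ref_dark
    return [(frame - ref_dark) / span for frame in dark_frames[1:]]

def fit_baseline(drift_frame, u, w_axis, order=2):
    """Polynomial fit of the drift of one spatial channel (image column u);
    a second-order polynomial was sufficient for the studied HIS."""
    profile = drift_frame[:, u]             # rows are assumed to be spectral channels
    coef = np.polyfit(w_axis, profile, order)
    return np.polyval(coef, w_axis), coef
```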

2.2 Spatial feature extraction and nonparametric SiRF characterization

The spatial reference object can be designed to suit the task at hand and, unlike the spectral lines, the modeled features can be well separated. Due to manufacturing limitations, a delta function approximation of these features is hard to achieve in an affordable reference object. Furthermore, in contrast to the SRF characterization, modeling the spatial resolution is significantly more demanding due to the SiRF complexity. However, reliable and computationally straightforward results were obtained by using a series of black-to-white step edges. Each spatial profile of the acquired spatial reference image at a given spectral channel w may be interpreted as a sequence of blurred edges of alternating orientations. The model of a single edge with the transition located at image position u0 is written as a Heaviside step function H(υ-u0) modulated by a multiplicative term cM and an additive term cA, arising from uneven illumination, temperature drift of the sensor, and the reflected light. Therefore, in Eq. (3), the term TSi(gSi(θ,λ)) is substituted by the edge model at the wavelength λ:

$$T_{Si}(g_{Si}(\theta,\lambda))=c_M\,H\bigl(\tau(\upsilon-u_0)\bigr)+c_A,\qquad(8)$$
where orientation of the given edge is expressed by integer τ taking values ± 1. The complexity of a realistic edge blurring was captured by modeling the SiRF as a normalized mixture of M Gaussian functions, each weighted by a nonnegative ρm:
$$\mathrm{SiRF}(\upsilon)=\sum_{m=1}^{M}\rho_m\,N(\upsilon;\mu_{G,m},\sigma_m).\qquad(9)$$
Each Gaussian function in the mixture is parameterized by its center µG,m and standard deviation σm. The convolution of a single Gaussian and a Heaviside function results in the cumulative distribution function (cdf) of the normal distribution, which can be expressed by the Gauss error function (erf). Considering the convolution properties (distributivity and associativity of scalar multiplication), the convolution of the SiRF and the Heaviside edge model can be viewed as a weighted sum of M separate convolutions and therefore as a weighted sum of M cdfs, scaled by cM and shifted by cA. Taking into account the normalization Σρm = 1, the convolution of the edge model and the SiRF can be written as:
$$\bigl[T_{Si}(g_{Si}(\theta,\lambda))*\mathrm{SiRF}\bigr](\upsilon,\omega)=\frac{1}{2}c_M\Bigl(1+\sum_{m=1}^{M}\rho_m\,\mathrm{erf}\Bigl(\frac{\tau\bigl(\upsilon-(\mu_{G,m}+u_0)\bigr)}{\sqrt{2}\,\sigma_m}\Bigr)\Bigr)+c_A.\qquad(10)$$
The acquired image SiA(u,w) is sampled by sensor elements of finite size. Therefore, to model the recorded intensity of a pixel located at (u,w), Eq. (10) has to be integrated over the rectangular area of size δu × δw. Using the rectangle rule of integral approximation and taking into account the wavelength-invariant intensity, the acquired profile of the e-th edge at spectral channel w can be modeled within error εSi as:
$$Si_{A,e,w}(u)=\delta u\,\delta w\Bigl(\frac{1}{2}c_{M,e}\Bigl(1+\sum_{m=1}^{M}\rho_m\,\mathrm{erf}\Bigl(\frac{\tau_e\bigl(u-(\mu_{G,m}+u_{0,e})\bigr)}{\sqrt{2}\,\sigma_m}\Bigr)\Bigr)+c_{A,e}\Bigr)+\varepsilon_{Si,w}(u).\qquad(11)$$
The ambiguity of the parameters is resolved by imposing constraints arising from the SiRF normalization and by setting the SiRF center of gravity to zero:

$$\sum_{m=1}^{M}\rho_m=1,\qquad\sum_{m=1}^{M}\rho_m\,\mu_{G,m}=0.\qquad(12)$$
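A minimal sketch of the blurred-edge model of Eqs. (10)–(12), with the SiRF represented by the described M-component Gaussian mixture, could look as follows; the names and the explicit re-normalization are illustrative.

```python
import numpy as np
from scipy.special import erf

def edge_model(u, u0, tau, rho, mu_g, sigma, c_m, c_a, du=1.0, dw=1.0):
    """Modeled profile of a single blurred edge, Eq. (11).

    u         : pixel centers along the spatial axis
    u0, tau   : edge position and orientation (+1 or -1)
    rho, mu_g, sigma : weights, centers and widths of the Gaussian-mixture SiRF
    c_m, c_a  : multiplicative and additive illumination terms
    """
    rho = np.asarray(rho, dtype=float)
    rho = rho / rho.sum()                                   # sum(rho) = 1, Eq. (12)
    mu_g = np.asarray(mu_g, dtype=float)
    mu_g = mu_g - np.sum(rho*mu_g)                          # zero center of gravity
    acc = np.zeros_like(u, dtype=float)
    for r, m, s in zip(rho, mu_g, sigma):
        acc += r*erf(tau*(u - (m + u0)) / (np.sqrt(2.0)*s))
    return du*dw*(0.5*c_m*(1.0 + acc) + c_a)
```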

Since the SiRF model is nonparametric, it was, under the approximation of local invariance, fitted to a number NR of small, (2·∆uR + 1) × (2·∆wR + 1) sized image regions Rr centered at (uR,r,wR,r). A region and the corresponding variables are shown in Fig. 3. The two criteria for selecting a region, denoted by index r, were capturing the information relevant to the convolution with the SiRF and keeping the edge er approximately in the center of the region. The spatial region size ∆uR was chosen as the rounded half of the average distance between two adjacent edge approximations ũ0,e,w:

$$\Delta u_R=\mathrm{round}\Bigl(\frac{1}{2}\,\frac{1}{W(E-1)}\sum_{w=1}^{W}\sum_{e=1}^{E-1}\bigl|\tilde{u}_{0,e+1,w}-\tilde{u}_{0,e,w}\bigr|\Bigr),\qquad(13)$$
where E is the number of edges and W is the number of spectral channels. The ũ0,e,w were computed from the zero-crossings of the smoothed first-derivative profile. Using the same initial approximation, the center uR,r of the r-th region was set to the rounded value of ũ0,e,w. Thereby, the spatial range covered by a region r can be expressed as ηr ∈ [uR,r - ∆uR, uR,r + ∆uR]. The size of the region in the spectral direction (∆wR) was chosen to balance the influence of noise against keeping the variation of the SiRF and of the edge position small within the region. The spectral centers wR,r were uniformly distributed across the spectral axis to accommodate a non-overlapping, aligned stack of (2·∆wR + 1) pixel wide regions. Taking into account the limited spatial scope of the region (u = ηr) around the edge e = er, and substituting the mean region profile for SiA,e in Eq. (11), the optimal parameters defining the SiRF and the edge positions were assessed by minimizing the sum of squared fit errors:
$$\{u_{0,\mathrm{Opt},r},\,\boldsymbol{\rho}_{\mathrm{Opt},r},\,\boldsymbol{\mu}_{G,\mathrm{Opt},r},\,\boldsymbol{\sigma}_{\mathrm{Opt},r}\}=\arg\min\Bigl(\sum_{\eta_r}\varepsilon_{Si,r}^{2}\bigl(\eta_r,u_{0,r},\boldsymbol{\rho}_r,\boldsymbol{\mu}_{G,r},\boldsymbol{\sigma}_r,c_{M,r},c_{A,r}\bigr)\Bigr),\qquad(14)$$
where the arrays ρr, µG,r, σr contain the M parameters specifying the corresponding SiRF, u0,r is the location of the edge encompassed by the region, and cM,r, cA,r are the multiplicative and additive coefficients. The summation is performed over all the elements in the region ηr.

Fig. 3 Illustration of the selected spatial region.

The optimization of the main parameters (u0,r, ρr, σr, µG,r) was performed by a nonlinear constrained sequential quadratic programming (SQP) [18] algorithm, while the auxiliary parameters cM,r and cA,r were computed by a pseudo-inverse. At each iteration of the SQP method an approximation of the Hessian of the Lagrangian function is made using the BFGS update method. The step direction is determined by the quadratic programming solution of a subproblem formed using the approximated Hessian of the Lagrangian, and the step size is determined by a line search procedure. The smoothness of the SiRF was implicitly enforced by setting M = 3 and imposing a lower limit on the parameters, σm,r > 0.5. The optimization problem was further constrained by enforcing Eq. (12) and limiting the weights to nonnegative values (ρm,r ≥ 0). The initial values of the edge positions were set to match the corresponding region centers, u0,r = uR,r, while the parameters σm,r, ρm,r and µG,m,r were determined by fitting the given edge with a simplified Gaussian SiRF model. According to the region size selection criterion described earlier, ∆wR and ∆uR were set to 16 and 2, respectively. The optimal parameters for a single region were computed in approximately 0.9 s.
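The region-wise fit can be sketched with SciPy's SLSQP solver as a stand-in for the SQP method of [18]; the equality constraints of Eq. (12), the non-negativity of the weights and the lower bound on σ are stated directly, while cM,r and cA,r are recovered by a pseudo-inverse in every objective evaluation (the pixel-area factor δu·δw is absorbed into them). The packing of the parameter vector and all names are illustrative.

```python
import numpy as np
from scipy.special import erf
from scipy.optimize import minimize

def fit_region(u, profile, u0_init, rho_init, mu_init, sigma_init, tau):
    """Fit the SiRF mixture and the edge position of one image region, Eq. (14)."""
    M = len(rho_init)
    x0 = np.concatenate([[u0_init], rho_init, mu_init, sigma_init])

    def unpack(x):
        return x[0], x[1:1+M], x[1+M:1+2*M], x[1+2*M:]

    def residual(x):
        u0, rho, mu_g, sigma = unpack(x)
        acc = np.zeros_like(u, dtype=float)
        for r, m, s in zip(rho, mu_g, sigma):
            acc += r*erf(tau*(u - (m + u0)) / (np.sqrt(2.0)*s))
        basis = np.column_stack([0.5*(1.0 + acc), np.ones_like(u)])
        c = np.linalg.pinv(basis) @ profile          # c_M,r and c_A,r by pseudo-inverse
        return profile - basis @ c

    objective = lambda x: float(np.sum(residual(x)**2))
    constraints = [
        {'type': 'eq', 'fun': lambda x: np.sum(unpack(x)[1]) - 1.0},          # sum(rho) = 1
        {'type': 'eq', 'fun': lambda x: np.sum(unpack(x)[1]*unpack(x)[2])},   # zero centroid
    ]
    bounds = ([(None, None)]                 # u0 unbounded
              + [(0.0, None)]*M              # rho_m >= 0
              + [(None, None)]*M             # mu_G,m unbounded
              + [(0.5, None)]*M)             # sigma_m > 0.5 enforces smoothness
    return minimize(objective, x0, method='SLSQP',
                    bounds=bounds, constraints=constraints)
```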

2.3 Global modeling of the computed parameters

The parameters were computed only on a mesh of points formed by the spectral lines and the spatial edges. Therefore, the final step of the characterization is regularization and computation of the global tuning functions and of the interpolation functions for the SRF and the SiRF. Starting with the spectral direction, the first step is the definition of the features in the image space. This is accomplished by inserting the values of the optimized parameters into the parameterization of the second constraint for each spatial channel, resulting in a 2D grid of points (u,w0,l,j,u). By assigning the values of the target properties to each point of the grid, three sets of data are formed: one for the spectral tuning function λ(u, w0,l,j,u), and two for the SRF shape parameters γG(u,w0,l,j,u) and γL(u,w0,l,j,u). By fitting 2D polynomial surfaces to each of the data sets, the interpolation functions (ΓG, ΓL) and the inverse spectral tuning function ZS were computed:

$$\gamma_G(\upsilon,\omega)\approx\Gamma_G(\upsilon,\omega),\qquad(15)$$
$$\gamma_L(\upsilon,\omega)\approx\Gamma_L(\upsilon,\omega),\qquad(16)$$
$$\lambda\approx Z_S(\upsilon,\omega).\qquad(17)$$
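Fitting a 2D polynomial surface to such a scattered grid of samples is a linear least-squares problem; the sketch below, with illustrative names, builds a bivariate monomial basis up to a given total degree and would serve equally for ΓG, ΓL, ZS and, later, ZSi, TS and TSi (whether the original fits limited the total or the per-axis degree is not specified here).

```python
import numpy as np

def polyfit2d(x, y, z, order):
    """Least-squares fit of a 2D polynomial surface z ~ f(x, y) using all
    monomials x**i * y**j with i + j <= order.  Returns the coefficients and
    the exponent pairs needed for evaluation."""
    exps = [(i, j) for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.column_stack([x**i * y**j for i, j in exps])
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coef, exps

def polyval2d(x, y, coef, exps):
    """Evaluate the fitted surface at positions (x, y)."""
    return sum(c * x**i * y**j for c, (i, j) in zip(coef, exps))
```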

A similar grid was assembled for the image of the spatial reference object by combining the edge positions and their corresponding region centers. Using the true edge positions (θ0,e), the correspondence pairs θ0,e ↔ (u0,r, wR,r) were formed and the inverse spatial tuning function ZSi was obtained by a procedure analogous to the computation of ZS:

$$\theta\approx Z_{Si}(\upsilon,\omega).\qquad(18)$$
Since the SiRF was in a non-parametric form, its value at a given point was set to the SiRF determined for the nearest image region. Using the computed inverse tuning function, the coordinates of both grids were computed in the object space. The forward tuning function was determined by fitting a 2D polynomial surface to the reversed data sets:
$$\omega\approx T_S(\theta,\lambda),\qquad(19)$$
$$\upsilon\approx T_{Si}(\theta,\lambda).\qquad(20)$$
The polynomial order of ΓG and ΓL was five, while all the tuning functions were modeled by seventh-order polynomials.

2.4 Hyperspectral imaging system and experimental setup

Calibration and validation images were acquired by a hyperspectral imaging system, shown in Fig. 4(a), consisting of a near-infrared enhanced MV1-D1312(l)-160-CL-12 camera (Photonfocus) with 1312 × 1082 light sensitive pixels, an ImSpector V10E spectrograph (400 nm to 1000 nm; Specim Ltd, Finland), and an S23-f/2.4 lens (Specim Ltd, Finland). The broadband illumination source comprised a research QTH lamp housing (Newport, Oriel Instruments, model 66881) and a radiometric power supply (Newport, Oriel Instruments, model 69931). The spectral reference and validation images were recordings of the Hg(Ar), Ar, Ne, and Kr pen spectral reference lamps (Newport, Oriel Instruments; models 6035, 6030, 6032 and 6031, respectively). Since a significant portion of the energy emitted by these lamps is in the form of ultraviolet light, a 525 nm optical long-pass filter (Edmund Optics, model NT64-634) was used to attenuate the higher energy radiation, preventing hazards and unwanted effects of light-induced fluorescence of organic contaminants (mainly cellulose dust particles). The modulated light was diffused by a tilted solid thermoplastic diffuse reflectance standard (T, Spectralon, Labsphere) and a thin diffuser (D) placed on the imaging plane. Despite the modeling of the multiplicative and additive effects in Eq. (4), care was taken to ensure a reasonably homogeneous illumination to satisfy the premise of minimal spatial variation of the spectral reference images. By replacing the filtered gas-emission lamps with the broadband light source and placing the spatial reference object on the imaging plane, the system was prepared for the acquisition of the spatial reference image. The employed spatial reference object was a custom laser-machined, 150 µm thick stainless-steel mask, shown in Fig. 4(b), with an overall manufacturing accuracy of ±8 µm. Prior to modeling or analysis, all the acquired images were corrected by point-to-point subtraction of the dark image and division by the difference between the bright and the dark image. With regard to the position-dependent signal-to-noise ratio, the spectral axis was cropped to the range from 527 nm to 980 nm (W = 581 pixels). Likewise, due to the design of the studied HIS, the spatial image dimension was cropped to cover an approximately 155 mm wide range (U = 1000 pixels).
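The point-to-point image correction described above is the usual dark/white (flat-field) normalization; a minimal sketch, assuming all frames are floating point NumPy arrays of equal shape:

```python
import numpy as np

def correct_image(raw, dark, white, eps=1e-9):
    """Subtract the dark frame and divide by the bright-minus-dark difference,
    guarding against division by zero."""
    return (raw - dark) / np.maximum(white - dark, eps)
```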

Fig. 4 Schematic of the experimental setup (a). Spectralon tile (T), long-pass filter (F), diffuser (D). The spatial reference object (b).

3. Results and validation of the proposed characterization method

In a perfect HIS we expect the spectra at different spatial channels to be mutually aligned and the relation between the image spectral positions ω and their corresponding wavelengths λ to be described by a first-order polynomial model λ = zS,1ω + zS,0 + ES. In such a case, the only contribution to the fit error would be the spatially and spectrally independent noise ES. However, due to the coregistration errors, the first-order polynomial model has to be expanded by a term that captures the spectral and spatial dependence of the inverse tuning function: λ = zS,1ω + zS,0 + ES + δZS(υ,ω). The parameters zS,1 and zS,0 were assessed by a first-order polynomial fit between the true wavelengths of the spectral lines λ0,l,j and their corresponding locations w0,l,j,u in all spatial channels of the acquired image (zS,1 = 0.8004, zS,0 = 524.3097). An example of the estimated spectral line positions is shown in Figs. 5(a) and 5(b), and the position-dependent residual δZS(υ,ω) is shown in Fig. 5(c). Likewise, the relation between the true spatial position θ and the corresponding spatial position υ in the acquired image can be written as θ = zSi,1υ + zSi,0 + ESi + δZSi(υ,ω). The parameters zSi,1 and zSi,0 were assessed by a first-order polynomial fit between the true edge positions θ0,e and their corresponding locations u0,r in all regions Rr of the acquired image (zSi,1 = 0.1525, zSi,0 = 0.6108). The distribution of the extracted edge positions and the residual δZSi are presented in Figs. 5(d)-5(f).
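The global first-order terms and the position-dependent residual are separated by an ordinary linear fit; a minimal sketch, assuming flattened arrays of extracted line positions and their true wavelengths, with illustrative names:

```python
import numpy as np

def linear_spectral_fit(w_positions, wavelengths):
    """First-order fit lambda = z1*w + z0 pooled over all spatial channels;
    returns the coefficients and the residual (delta Z_S) at every sample."""
    z1, z0 = np.polyfit(w_positions, wavelengths, 1)
    residual = wavelengths - (z1*w_positions + z0)
    return z1, z0, residual
```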

Fig. 5 Extracted image positions of two spectral lines (a, b) and the corresponding positions estimated by the computed model, clearly demonstrate subpixel rotation of the image due to misalignment of the camera and the spectrograph slit, and the bending of the spectral lines which is in agreement with previously published work [1]. Furthermore, the spectral axis nonlinearity is evident from spectral dependency of the spectral tuning function residual δZS (c). Extracted positions of the two outermost edges at different spectral channels and the corresponding edge positions estimated by the computed characterization model (d, e) suggest rotation and the wavelength dependent magnification. The higher order spatial tuning function residual δZSi (f).

The SRF, shown in Fig. 6, exhibits a significant variation with wavelength, starting as a Gaussian at short wavelengths and becoming increasingly influenced by the Lorentzian broadening towards the near infrared (NIR). The maximum spectral half width at half maximum (HWHM) of the SRF was 2.17 nm, assuming an average dispersion of 0.8 nm/pixel. The spatial resolution is highest in the central part of the image. While the SiRF, depicted in Fig. 7, shows some variation with spatial position, the main variability is with respect to wavelength. Towards shorter wavelengths the complex structure and asymmetry of the SiRF become evident, while towards longer wavelengths the symmetry of the SiRF improves but the resolution is lower. Both effects can be observed in Fig. 8.

Fig. 6 Plots of the Voigt profile parameter interpolation surfaces ΓL(u,w) (a) and ΓG(u,w) (b) show the increasing influence of Lorentzian broadening with longer wavelength. Spectral resolution in terms of the half width at half maximum (HWHM) of the Voigt profile (c). At the NIR end of the utilized spectral range the resolution is significantly lower than at shorter wavelengths.

Fig. 7 Normalized SiRFs at different image locations, plotted on 32 pixel wide intervals marked by black vertical lines. Best resolution is exhibited in the central part of the acquired image.

Fig. 8 (a) Asymmetry and complexity of the SiRF are easily observed in plots of the SiRF for spatial channel 742 at two wavelengths corresponding to spectral channels 10 and 570. (b) Quality of the proposed model fit and wavelength dependent shifts of the edges due to coregistration errors are evident from overlay of the acquired (black) and the estimated (red) profiles of two edges in the image of the validation object. Plotted profiles correspond to the two depicted SiRFs.

The accuracy of the characterization method was assessed by analyzing the validation image of the Kr spectral reference source. Firstly, the forward tuning function and the SRF model were validated by predicting the acquired validation image, shown in Fig. 9(a), with the computed characterization model. Secondly, the positions of four well resolved spectral lines in the acquired validation image were extracted by fitting the SRF model to the acquired intensity profiles of the emission lines. The extracted positions wV,j,u of the j-th spectral line at spatial channel u were then transformed into the corresponding wavelengths λV,j,u by the inverse tuning function. The wavelength prediction error was calculated as δλV,j,u = λV,j,u - λV,0,j, where λV,0,j are the true wavelengths of the spectral lines. For comparison, an additional spectral calibration was performed according to the method proposed by Polder et al. [7]. The spectral line pixel positions were extracted from each spatial channel of the acquired Hg(Ar) reference lamp image, and the relation between the pixel locations and the true wavelengths was modeled by a third-order polynomial fit. For the purpose of validation, the extracted pixel positions of the four well resolved Kr spectral lines were transformed by the calculated polynomial fit and compared to the corresponding true wavelengths. The prediction error statistics of the two methods for each of the four selected Kr spectral lines are presented in Table 1.

Fig. 9 Accuracy of the forward tuning function and SRF estimation are presented by comparison of the acquired image and the estimated image of the spectral and spatial validation objects. Estimated spectrum in the central part of the Kr validation image and the fit error expressed as a difference between the estimated and acquired spectrum (a). Estimated image profile in the central part of the spatial validation image and the fit error computed by subtracting the estimated and the acquired profile (b).

Table 1. The wavelength prediction error statistics of the four Kr spectral lines.

Likewise, the spatial validation image, shown in Fig. 9(b), was produced by imaging the reference object displaced by a known shift. A quantitative assessment was made by finding the image-wise edge positions and calculating their expected locations in the object space. Analogously to the spectral direction, 30 equally distributed edges covering the entire spatial region were located by locally fitting the modeled edge response in each of the 581 spectral channels, resulting in a total of 17430 validation points. The residual misalignments, shown in Fig. 10, were computed as the difference between the expected and the true edge positions in the object space. The mean residual misalignment over all the validation points was 0.001 mm, with a standard deviation of 0.005 mm. In pixel terms, the mean error was 0.007 pixel and the standard deviation was 0.05 pixel. The validation shows that the expected error was within approximately 1/10 of a pixel, which is comparable to the spatial reference object uncertainty specified by the manufacturer.

Fig. 10 Residual misalignment between the measured and the true edge positions. Centers represent the mean error and the error bars correspond to the standard deviation. Dashed line denotes the reference object uncertainty. The edge locations correspond to the edges in the profile shown in Fig. 9(b).

The computed forward tuning function can be used for calibration of the acquired image by means of a geometric image transform [19]. First, a pixel grid (θq, λq) is formed in the object space. Each pixel q is then transformed by the forward tuning function into the acquired image space, (υq,ωq) = (TSi(θq, λq), TS(θq, λq)). Finally, the intensities of the object-space image are computed by bicubic interpolation. The results of the validation image calibration are displayed in Fig. 11.
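A minimal sketch of this resampling step is given below, using scipy.ndimage.map_coordinates with cubic spline interpolation as a stand-in for bicubic interpolation; T_S and T_Si stand for callables evaluating the fitted forward tuning functions, and the grid construction and names are illustrative.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def calibrate_image(acquired, T_S, T_Si, theta_grid, lambda_grid):
    """Resample the acquired image onto a regular (theta, lambda) object-space grid.

    acquired   : 2D array indexed as [spectral row, spatial column] (assumed)
    T_S, T_Si  : forward tuning functions mapping (theta, lambda) to omega, upsilon
    """
    theta_q, lambda_q = np.meshgrid(theta_grid, lambda_grid)   # object-space pixel grid
    omega_q = T_S(theta_q, lambda_q)       # spectral image coordinate of each pixel
    upsilon_q = T_Si(theta_q, lambda_q)    # spatial image coordinate of each pixel
    # Cubic spline interpolation of the acquired intensities at the mapped coordinates.
    return map_coordinates(acquired, [omega_q, upsilon_q], order=3, mode='nearest')
```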

Fig. 11 The effects of the calibration observed in the spectral and spatial validation images. For better visualization of the small coregistration errors only spectra and spatial profiles limited to image subregions are shown. (a) Spectra of the 760.15 nm and 769.45 nm Kr spectral lines at spatial channels 10 and 990 corresponding to true spatial positions 2.07 mm and 152.16 mm. Due to coregistration errors, the spectra in the acquired image are shifted by approximately 0.25 pixel. (b) After transforming the acquired image into the object space the image is calibrated to real physical units and the coregistration improves. Similar observation holds for the spatial validation image. (c) Plot of the three spatial validation image profiles of a single edge at spectral channels 17, 282 and 555 corresponding to wavelengths 540 nm, 750 nm and 970 nm clearly shows wavelength dependent spatial distortions. (d) After the calibration the profiles are spatially aligned.

4. Conclusion

Prior work has demonstrated the significant impact of errors arising from poor uniformity of the acquired images on the accuracy of the subsequent hyperspectral image analysis. Various methods have been proposed for the assessment and/or correction of the effects arising from the operation principle of the spectrograph based hyperspectral imaging system. However, the majority of these methods require equipment often not present on the application site, or the characterization is based on a small number of image locations, effectively ignoring the spectral-spatial dependency. Moreover, many of the spectral lines emitted by the standard reference sources are unresolvable at the moderate spectral resolution of imaging spectrographs. Consequently, the simple estimation of a spectral line position by its center of gravity is inaccurate. This significantly limits the number of well resolved spectral lines useful for the calibration, and thereby the degree of modeling and, consequently, the accuracy of the fitting. We have alleviated this problem by using a spectral matching method, which takes into account all the significant spectral lines in the observed range as well as the spectral resolution of the device. Furthermore, a simple visual inspection suggested asymmetry of the spatial response function (SiRF). This was an important finding, as edges extracted by the popular gradient-based edge detection algorithms would inevitably be biased. Therefore, we proposed an extension of the SiRF model allowing greater flexibility. The results of the proposed characterization method can be used for calibration of the acquired images and/or as a tool for quantification of the expected errors in various hyperspectral imaging techniques. Furthermore, we expect that the proposed characterization of the SRF and SiRF can be used in future work for the development of hyperspectral image restoration methods based on previously published work on hyperspectral image deconvolution [20, 21].

Acknowledgments

This work was supported by the Slovenian Research Agency (ARRS) under grants L2-4072, J7-2246 and L2-2023, by Sensum, Computer Vision Systems, and by the European Union, European Social Fund.

References and links

1. R. N. Jørgensen, The VTTVIS line imaging spectrometer: principles, error sources, and calibration (Risø National Laboratory, 2002).

2. F. W. L. Esmonde-White, K. A. Esmonde-White, and M. D. Morris, “Minor Distortions with Major Consequences: Correcting Distortions in Imaging Spectrographs,” Appl. Spectrosc. 65(1), 85–98 (2011). [CrossRef]   [PubMed]  

3. L. Guanter, K. Segl, B. Sang, L. Alonso, H. Kaufmann, and J. Moreno, “Scene-based spectral calibration assessment of high spectral resolution imaging spectrometers,” Opt. Express 17(14), 11594–11606 (2009). [CrossRef]   [PubMed]  

4. M. Meroni, L. Busetto, L. Guanter, S. Cogliati, G. F. Crosta, M. Migliavacca, C. Panigada, M. Rossini, and R. Colombo, “Characterization of fine resolution field spectrometers using solar Fraunhofer lines and atmospheric absorption features,” Appl. Opt. 49(15), 2858–2871 (2010). [CrossRef]   [PubMed]  

5. Y. Feng and Y. Xiang, “Mitigation of spectral mis-registration effects in imaging spectrometers via cubic spline interpolation,” Opt. Express 16(20), 15366–15374 (2008). [CrossRef]   [PubMed]  

6. D. Schlapfer, J. Nieke, and K. I. Itten, “Spatial PSF Nonuniformity Effects in Airborne Pushbroom Imaging Spectrometry Data,” IEEE Trans. Geosci. Rem. Sens. 45(2), 458–468 (2007). [CrossRef]  

7. G. Polder, G. van der Heijden, L. Keizer, and I. Young, “Calibration and characterisation of imaging spectrographs,” J. Near Infrared Spectrosc. 11(1), 193–193 (2003). [CrossRef]  

8. J. Zadnik, D. Guerin, R. Moss, A. Orbeta, R. Dixon, C. G. Simi, S. Dunbar, and A. Hill, “Calibration procedures and measurements for the COMPASS hyperspectral imager,” Proc. SPIE 5425, 182–188 (2004). [CrossRef]  

9. T. G. Chrien, R. O. Green, and M. L. Eastwood, “Accuracy of the spectral and radiometric laboratory calibration of the Airborne Visible/Infrared Imaging Spectrometer,” Proc. SPIE 1298, 37–49 (1990). [CrossRef]  

10. D. Kohler, W. Bissett, R. Steward, and C. Davis, “New approach for the radiometric calibration of spectral imaging systems,” Opt. Express 12(11), 2463–2477 (2004). [CrossRef]   [PubMed]  

11. C. D. Claxton and R. C. Staunton, “Measurement of the point-spread function of a noisy imaging system,” J. Opt. Soc. Am. A 25(1), 159–170 (2008). [CrossRef]   [PubMed]  

12. T. Skauli, “An upper-bound metric for characterizing spectral and spatial coregistration errors in spectral imaging,” Opt. Express 20(2), 918–933 (2012). [CrossRef]   [PubMed]  

13. E. E. Whiting, “An empirical approximation to the Voigt profile,” J. Quant. Spectrosc. Radiat. Transf. 8(6), 1379–1384 (1968). [CrossRef]

14. C. D. Cantrell, Modern Mathematical Methods for Physicists and Engineers (Cambridge University Press, Cambridge, 2000).

15. P. R. Griffiths and L. Shao, “Self-Weighted Correlation Coefficients and Their Application to Measure Spectral Similarity,” Appl. Spectrosc. 63(8), 916–919 (2009). [CrossRef]   [PubMed]  

16. A. Kramida, Y. Ralchenko, J. Reader, and NIST ASD Team, “NIST Atomic Spectra Database (version 5.0),” (National Institute of Standards and Technology, Gaithersburg, MD, 2012).

17. C. T. Kelley, Iterative methods for optimization (Society for Industrial Mathematics, Philadelphia, 1999).

18. R. Fletcher, Practical methods of optimization (Wiley, Chichester, 2000).

19. R. A. Schowengerdt, Remote Sensing: Models and Methods for Image Processing, 3rd ed. (Academic Press, 2006).

20. Ž. Špiclin, M. Bürmen, F. Pernuš, and B. Likar, “Characterization and modelling of the spatially- and spectrally-varying point-spread function in hyperspectral imaging systems for computational correction of axial optical aberrations,” Proc. SPIE 8215, 82150R, 82150R-9 (2012). [CrossRef]  

21. J. Katrašnik, F. Pernuš, and B. Likar, “Deconvolution in Acousto-Optical Tunable Filter Spectrometry,” Appl. Spectrosc. 64(11), 1265–1273 (2010). [CrossRef]   [PubMed]  
