
Efficient assessment method of on-board modulation transfer function of optical remote sensing sensors

Open Access

Abstract

Modulation transfer function (MTF) can be used to evaluate the imaging performance of on-board optical remote sensing sensors, as well as to recover and restore images to improve imaging quality. Laboratory measurement approaches for MTF have achieved high precision, but they are not yet suitable for on-board measurement. In this paper, a new five-step approach to calculate the MTF of space optical remote sensing sensors is proposed. First, a pixel motion model is used to extract the conditional sub-frame images. Second, a mathematical morphology algorithm and a correlation–homomorphic filter algorithm are used to eliminate noise and enhance the sub-frame image. Third, image partial differentiation determines the accurate position of edge points. Fourth, a model optical transfer function is used to build a high-resolution edge spread function. Finally, the MTF is calculated by differentiation and Fourier transform. Experiments show that the proposed MTF assessment method is superior to existing methods.

© 2015 Optical Society of America

1. Introduction

Nowadays, high-resolution optical remote sensing sensors are of interest for a wide variety of applications. An optical remote sensing sensor is a powerful tool for realizing remote sensing from a satellite platform [1–3]. The imaging performance of an on-board CCD camera is generally described by the modulation transfer function (MTF). Although the MTFs of on-board CCD cameras are measured in the laboratory before launch, they may change owing to vibration during launch or aging of materials over time [4–9]. For this reason, on-orbit MTF measurement is necessary to monitor the actual performance of on-orbit imagers and is significant for further image restoration.

The majority of existing MTF estimation methods are based on the theoretical methods proposed in [10–13], which calculate the MTF from its knife-edge response. These methods are frequently used because knife-edge or step profiles can easily be found in observed images, such as rooflines, farmlands, roadways, and tarps laid on the ground ahead of time [14, 15]. In astronomical applications, reasonably bright, isolated stars can serve as point sources from which to derive the MTF [16–18]. Similarly, for earth-observing satellites, the MTF can also be obtained by imaging a point source on the ground [19].

In this work, we present an effective MTF acquisition approach for on-board remote sensing images. The method is applicable to both CCD and CMOS cameras but is most useful for CCD-based time delay integration (TDI) cameras, which usually suffer more image motion noise that can degrade dynamic imaging performance. We use a pixel motion model in Section 2 to extract the edge sub-frame image. In this manner, the pixel motion direction becomes a positive factor in imaging: it provides the real motion direction of an object on the focal plane and therefore helps extract edge information. We use mathematical morphology and a correlation–homomorphic filter in Section 3 to denoise the image and enhance edge energy, so that noise does not affect MTF extraction. We use image partial differentiation in Section 4 to extract the sub-pixel positions of edge points. Finally, we use the optical transfer function model of on-board optical sensors to calculate the missing edge points and build a high-precision edge spread function (ESF) in Section 5.

2. Edge sub-frame image extraction based on pixel motion velocity model

Optical remote sensing sensors can be considered a linear system, so the image-capturing process can be modeled as a convolution operation. The relationship between the ground object and the image can be expressed as Eq. (1):

$$g(x,y)=f(x,y)\ast h(x,y),$$
where g(x,y) is the image, f(x,y) is the ground object, h(x,y) is the impulse response function of the optical remote sensing sensor, and ∗ denotes the convolution operation. After performing a Fourier transform on Eq. (1), the result can be expressed as Eq. (2):
$$G(\xi,\eta)=F(\xi,\eta)\,H(\xi,\eta).$$
The MTF can be expressed as Eq. (3):

$$\mathrm{MTF}=\left|\frac{G(\xi,\eta)}{F(\xi,\eta)}\right|=\left|H(\xi,\eta)\,e^{j\theta(\xi,\eta)}\right|=\left|H(\xi,\eta)\right|.$$

When f(x,y) is a point object, h(x,y) is the point spread function, denoted by PSF(x,y). The image g_point(x,y) (shown in Fig. 1) can be expressed as Eq. (4):

$$g_{point}(x,y)=f_{point}(x,y)\ast \mathrm{PSF}(x,y).$$
The MTF can be expressed as Eq. (5):
$$\mathrm{MTF}=\left|\mathcal{F}\{\mathrm{PSF}(x,y)\}\right|.$$
When f(x,y) is a line object, h(x,y) is the line spread function, denoted by LSF(x). The MTF can be expressed as Eq. (6):
$$\mathrm{MTF}=\left|\mathcal{F}\{\mathrm{LSF}(x)\}\right|.$$
The LSF is the derivative of the ESF [20], as expressed in Eq. (7):

Fig. 1 The blurring point image can be seen as the result of the convolution of the point spread function and the original image.

$$\mathrm{LSF}(x)=\frac{d}{dx}\big[\mathrm{ESF}(x)\big].$$
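As a concrete illustration of Eqs. (6) and (7), the sketch below differentiates an oversampled ESF to obtain the LSF and then takes the magnitude of its Fourier transform as the MTF. It is a minimal stand-in, not the full method of this paper; the function and variable names are illustrative.

```python
import numpy as np

def mtf_from_esf(esf, dx=1.0):
    """esf: 1-D oversampled edge spread function; dx: sample spacing in pixels."""
    lsf = np.gradient(esf, dx)               # Eq. (7): LSF(x) = d/dx ESF(x)
    lsf = lsf / np.sum(lsf)                  # normalize so that MTF(0) = 1
    mtf = np.abs(np.fft.rfft(lsf))           # Eq. (6): |F{LSF}|
    freqs = np.fft.rfftfreq(lsf.size, d=dx)  # spatial frequency in cycles/pixel
    return freqs, mtf
```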

The edge method uses an edge sub-frame image to assess the MTF. We use w(x,y) to represent a window function and ω(x,y) to represent a Dirac comb function. The edge sub-frame image can be expressed as Eq. (8):

$$g(x,y)=\big[f(x,y)\ast h(x,y)\big]\,w(x,y)\,\omega(x,y).$$
In the Fourier domain, Eq. (8) can be expressed as Eq. (9):
$$G(\xi,\eta)=\big[F(\xi,\eta)\,H(\xi,\eta)\big]\ast W(\xi,\eta)\ast \Omega(\xi,\eta),$$
where ∗ denotes the convolution operation. In the Fourier domain, aliasing problems tend to occur because the Dirac comb function leads to a convolution. To avoid these problems, the edge is placed at a slant angle to the image row or column direction. The ground edge scene f(x,y) in the x-direction can be expressed as Eq. (10):
$$f(x,y)=a_x\int_{-\infty}^{x}\delta(\tau)\,d\tau+b_x=a_x t(x)+b_x,$$
where ax and bx are edge direction parameters and $t(x)=\int_{-\infty}^{x}\delta(\tau)\,d\tau$. The edge image g(x,y) can be expressed as Eq. (11):
$$g(x,y)=\big[a_x t(x)+b_x\big]\ast h(x,y).$$
By Fourier transform, Eq. (11) yields Eq. (12):
$$G(\xi,\eta)=\big[a_x T(\xi)+b_x\delta(\xi)\big]H(\xi,\eta).$$
The MTF can be expressed as Eq. (13):
$$\mathrm{MTF}=\left|\frac{G(\xi,\eta)}{a_x T(\xi)+b_x\delta(\xi)}\right|.$$
Therefore, the edge direction influences MTF assessment.

To accurately extract the MTF, the precise angle between the edge direction and the vertical track or along-track direction must be determined. During on-orbit imaging of optical remote sensing sensors, several factors, such as the satellite's orbital motion, attitude maneuvers, and platform chatter, lead to complex relative motion between the optical system and the earth's surface. In addition, the target of an optical remote sensing sensor is the ellipsoidal surface of the earth: points in object space cannot be treated as lying on a single object plane perpendicular to the optical axis, so dynamic imaging is easily distorted by the coupling between attitude maneuvers and chatter. The pixel velocity field is anisotropic; it changes in both time and space. To capture clear images, a drift-adjusting mechanism adjusts the vertical track or along-track direction of the sensor in real time during flight. Therefore, the vertical track or along-track direction changes in real time in the process of optical remote sensing.

In this work, we use an image motion velocity model based on coordinate transformation to calculate the real-time vertical track or along-track direction of the optical remote sensing sensor. According to the determined direction parameters, imagery along the edge direction is searched in the remote sensing image. The angle between the edge direction and the vertical track or along-track direction should be as small as possible for MTF assessment. We use a coordinate transformation method to build the relationship between the ground object target and the image position.

Figure 2 shows the instantaneous imaging geometrical relationships of optical remote sensing sensors. We define several coordinate systems from ground target to image plane as follows:

Fig. 2 Imaging geometrical relationships of optical remote sensing sensors.

  • U: Landscape geographic coordinate system U(U1, U2, U3)
  • E: Terrestrial coordinate system E(E1, E2, E3)
  • I: Earth-centered inertial coordinate system I(I1, I2, I3)
  • B: Satellite orbit coordinate system B(B1, B2, B3)
  • S: Satellite coordinate system S(S1, S2, S3)
  • C: Camera coordinate system C(C1, C2, C3)
  • P: Image plane coordinate system P(P1, P2, P3)

The transformation order of the coordinate systems can be expressed as U → E → I → B → S → C → P.

We use i0 to represent the orbit angle, Rgij the distance between the object and the earth's center, and γ0 the orbit central angle between the satellite and the descending node in the orbit plane. The U coordinate system is transformed into the E coordinate system by translating −Rgij along the U3 axis, rotating −γ0 around the U2 axis, and rotating −i0 around the U1 axis. The transformation between U and E can be expressed as Eq. (14):

$$L_{UE}=\begin{bmatrix}\cos i_0 & \sin i_0 & 0 & 0\\ -\sin i_0 & \cos i_0 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\end{bmatrix}\begin{bmatrix}\cos\gamma_0 & 0 & -\sin\gamma_0 & 0\\ 0 & 1 & 0 & 0\\ \sin\gamma_0 & 0 & \cos\gamma_0 & 0\\ 0 & 0 & 0 & 1\end{bmatrix}\begin{bmatrix}1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & -R_{gij}\\ 0 & 0 & 0 & 1\end{bmatrix}.$$
We use ω to represent the earth's rotation angular velocity. The E coordinate system is transformed into the I coordinate system by rotating −ωt around the E2 axis. The transformation between E and I can be expressed as Eq. (15):
$$L_{EI}=\begin{bmatrix}\cos\omega t & 0 & -\sin\omega t & 0\\ 0 & 1 & 0 & 0\\ \sin\omega t & 0 & \cos\omega t & 0\\ 0 & 0 & 0 & 1\end{bmatrix},$$
We use H to represent the orbit height and Ω to denote the angular rate of the orbital motion about the geocenter at time t. The I coordinate system is transformed into the B coordinate system by translating R+H along the I3 axis, rotating γ = γ0 + Ωt around the I2 axis, and rotating i0 around the I1 axis. The transformation between I and B can be expressed as Eq. (16):
$$L_{IB}=\begin{bmatrix}\cos i_0 & -\sin i_0 & 0 & 0\\ \sin i_0 & \cos i_0 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\end{bmatrix}\begin{bmatrix}1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & -(R+H)\\ 0 & 0 & 0 & 1\end{bmatrix}\begin{bmatrix}\cos\gamma & -\sin\gamma & 0 & 0\\ \sin\gamma & \cos\gamma & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\end{bmatrix}.$$
We use ψ, θ, and φ to represent the yaw, pitch, and roll angles between the satellite coordinate system and the orbit coordinate system, respectively. The B coordinate system is transformed into the S coordinate system by rotating ψ around the B3 axis, rotating θ around the B2 axis, and rotating φ around the B1 axis. The transformation between B and S can be expressed as Eq. (17):
$$L_{BS}=\begin{bmatrix}\cos\theta & 0 & \sin\theta & 0\\ 0 & 1 & 0 & 0\\ -\sin\theta & 0 & \cos\theta & 0\\ 0 & 0 & 0 & 1\end{bmatrix}\begin{bmatrix}1 & 0 & 0 & 0\\ 0 & \cos\varphi & -\sin\varphi & 0\\ 0 & \sin\varphi & \cos\varphi & 0\\ 0 & 0 & 0 & 1\end{bmatrix}\begin{bmatrix}\cos\psi & -\sin\psi & 0 & 0\\ \sin\psi & \cos\psi & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\end{bmatrix},$$
We use L to represent the distance between the satellite and the origin of the U coordinate system, and f to represent the focal length of the camera. The S coordinate system is transformed into the C coordinate system by scaling by f/L, and the C coordinate system is transformed into the P coordinate system by translating f along the C3 axis. The combined transformation among S, C, and P can be expressed as Eq. (18):
$$L_{SCP}=\begin{bmatrix}f/L & 0 & 0 & 0\\ 0 & f/L & 0 & 0\\ 0 & 0 & f/L & f\\ 0 & 0 & 0 & 1\end{bmatrix}.$$
From the above discussion, the object–image position relationship can be expressed as Eq. (19):
$$P=\begin{bmatrix}p_1\\ p_2\\ p_3\\ 1\end{bmatrix}=L_{UE}\times L_{EI}\times L_{IB}\times L_{BS}\times L_{SCP}\times\begin{bmatrix}g_1\\ g_2\\ 0\\ 1\end{bmatrix}.$$
By taking the derivative of the object–image position relationship with respect to time t, the image motion velocity vector can be expressed as Eq. (20):
$$V_P=\frac{dP}{dt}=\begin{bmatrix}dP_1/dt\\ dP_2/dt\\ dP_3/dt\\ 0\end{bmatrix}=\begin{bmatrix}V_{p1}\\ V_{p2}\\ V_{p3}\\ 0\end{bmatrix},$$
where Vp1 is the longitudinal velocity and Vp2 is the lateral velocity. The drift angle can be expressed as Eq. (21):
$$\beta=\arctan\!\left(\frac{V_{p2}}{V_{p1}}\right).$$
According to the drift angle position, the along-track direction or vertical track direction can be obtained. We use mathematical morphology to detect the edge along the track direction.
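A hedged sketch of Eqs. (19)–(21): compose the chain of homogeneous transforms into an object-to-image mapping, project a ground point at two nearby instants, and take the finite-difference velocity to obtain the drift angle. The transform_chain callable stands in for the product of Eqs. (14)–(18) and is an assumption; its exact form depends on the orbit and attitude parameters of the specific sensor.

```python
import numpy as np

def drift_angle(transform_chain, ground_point, t, dt=1e-3):
    """transform_chain(t) -> 4x4 object-to-image matrix; ground_point: (g1, g2)."""
    g = np.array([ground_point[0], ground_point[1], 0.0, 1.0])
    p0 = transform_chain(t) @ g           # image position at time t, Eq. (19)
    p1 = transform_chain(t + dt) @ g      # image position at t + dt
    v = (p1 - p0) / dt                    # Eq. (20): image motion velocity
    return np.arctan2(v[1], v[0])         # Eq. (21): beta = arctan(Vp2 / Vp1)
```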

3. Edge sub-frame image preprocessing based on mathematical morphology algorithm and correlation–homomorphic filter

In this work, dilation and erosion operators of mathematical morphology and correlation–homomorphic filter are adopted in the edge sub-frame image preprocessing method. The mathematical morphology is employed to eliminate noise. The correlation–homomorphic filter is used to enhance the contrast ratio of two sides of the edge.

3.1 Denoising

In the actual on-board imaging process, noise exists. Therefore, Eq. (2) can be modified into Eq. (22):

$$G(\xi,\eta)=F(\xi,\eta)H(\xi,\eta)+N(\xi,\eta).$$
The transfer function can be expressed as Eq. (23):
$$\mathrm{TF}=\frac{G(\xi,\eta)}{F(\xi,\eta)}+\frac{N(\xi,\eta)}{F(\xi,\eta)}.$$
The transfer function of an optical remote sensing sensor is composed of a signal transfer function and a noise transfer function. For the edge sub-frame image, F(ξ,η) is a decreasing function of frequency, whereas the noise transfer function increases with frequency. At low frequencies, the MTF of the noise is much smaller than that of the signal and can be neglected. At high frequencies, the MTF of the noise can even be larger than that of the signal, so noise may spoil the MTF assessment at high frequencies. Figures 3 and 4 show the MTF without noise and the MTF with noise, respectively; the noise severely spoils the MTF assessment. Therefore, denoising the edge sub-frame image is essential before further processing.

Fig. 3 MTF without noise: (a) edge image, (b) LSF, (c) PSF, and (d) MTF.

Fig. 4 MTF with noise: (a) edge image, (b) LSF, (c) PSF, and (d) MTF.

Image denoising has been widely studied in the field of image processing, but not all denoising approaches are suitable for the edge image because of different application environments and image characteristics. As the edge position is critical for MTF extraction, the denoising approach must ensure that the edge is not shifted by the processing. Some nonlinear filters, such as histogram equalization and the median filter, can cause unpredictable deviations of the edge position. Therefore, we use mathematical morphology to denoise the remote sensing edge sub-frame image.

In mathematical morphology, erosion is one of the basic operations. Let f(x,y) be the gray value at (x,y), b(x,y) the structuring element, Db the domain of definition of b(x,y), and Df the domain of definition of f(x,y). The process through which f(x,y) is eroded by b(x,y) can be expressed as f Θ b,

$$h(x,y)=(f\ominus b)(x,y)=\min_{(i,j)\in D_b,\;(x+i,\,y+j)\in D_f}\big[f(x+i,y+j)-b(i,j)\big].$$
Dilation is also one of the basic operations in mathematical morphology; it can be expressed as f ⊕ b,
$$h(x,y)=(f\oplus b)(x,y)=\max_{(i,j)\in D_b,\;(x-i,\,y-j)\in D_f}\big[f(x-i,y-j)+b(i,j)\big].$$
The open and close operators can be expressed by the erosion and dilation operators as

Open operator: f∘b = (f Θ b) ⊕ b;

Close operator: f•b = (f ⊕ b) Θ b.

The open and close operators can efficiently remove all impulse points or areas smaller than the structuring element b. For the edge sub-frame image, multi-structural elements have a strong capacity to retain details. The designed operators can be expressed as

Dilation denoising operator: NMD1 = (f•b) ⊕ b − (f•b)•b;

Erosion denoising operator: NMD2 = (f•b)•b − (f•b) Θ b.

For the dilation and erosion denoising operators, the impulse responses are zero, so they can efficiently suppress Gaussian and salt-and-pepper noise [21].

Through the above processing, salt-and-pepper noise and Gaussian noise can be suppressed by the dilation and erosion denoising operators. However, gradually changing background noise remains and affects the edge position and MTF extraction. In this paper, we use the open operator of mathematical morphology to remove the background noise. The open operator o can be calculated by the combination of erosion and dilation operations: o = f∘b = (f Θ b) ⊕ b. When the radius of the structuring element b is larger than the radius of the edge sub-frame image, the open operation can conveniently acquire the initial background of the edge sub-frame image. The background B(x,y) can be expressed as:

$$B(x,y)=\frac{1}{2(K-1)+1}\times\frac{1}{2(L-1)+1}\times\sum_{i=-(K-1)}^{K-1}\sum_{j=-(L-1)}^{L-1}o(x+i,\,y+j).$$
We use the difference T(x,y) between f(x,y) and B(x,y) in the MTF extraction processing to suppress the background noise, which can be expressed as:

$$T(x,y)=f(x,y)-B(x,y).$$
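A minimal sketch of this denoising step, assuming a grayscale image and a flat structuring element. The grey-morphology routines of scipy.ndimage stand in for the dilation/erosion-based operators above, and a large-radius opening estimates the slowly varying background of Eqs. (26) and (27); the window sizes are illustrative.

```python
import numpy as np
from scipy import ndimage

def morphological_denoise(img, small=3, background=51):
    """img: 2-D float array; small/background: structuring-element sizes in pixels."""
    # an open-close pair suppresses impulse (salt-and-pepper) noise
    cleaned = ndimage.grey_closing(ndimage.grey_opening(img, size=small), size=small)
    # a large opening keeps only the smooth background, Eq. (26)
    bg = ndimage.grey_opening(cleaned, size=background)
    return cleaned - bg                   # Eq. (27): T(x, y) = f(x, y) - B(x, y)
```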

3.2 Edge enhancement

A high contrast ratio between the two sides of the edge facilitates MTF extraction, and an enhancement step can improve this contrast ratio. An ideal enhancement method boosts the edge signal while also suppressing noise. In this work, we choose the correlation–homomorphic filter with a Gaussian filter template.

Let f(x,y) be the gray value at (x,y) in the denoised edge sub-frame image; f(x,y) can be expressed as the product of an illumination component i(x,y) and a reflection component r(x,y),

$$f(x,y)=i(x,y)\times r(x,y),$$
$$\ln\!\big(f(x,y)\big)=\ln\!\big(i(x,y)\big)+\ln\!\big(r(x,y)\big).$$
We use h(x,y) as the operator of the correlation filter. Similar to the convolution operation, a one-dimensional correlation exists between ln f and h, as expressed in Eq. (30):
$$c(t)=\ln f(t)\star h(t)=\ln\!\big(i(t)\big)\star h(t)+\ln\!\big(r(t)\big)\star h(t)=\int_{0}^{\infty}\ln i(\tau)\,h(\tau-t)\,d\tau+\int_{0}^{\infty}\ln r(\tau)\,h(\tau-t)\,d\tau.$$
A two-dimensional correlation exists between ln f and h, as expressed in Eq. (31):
$$c(x,y)=\ln f(x,y)\star h(x,y)=\ln\!\big(i(x,y)\big)\star h(x,y)+\ln\!\big(r(x,y)\big)\star h(x,y)=\sum_{m=0}^{M-1}\sum_{n=0}^{N-1}\Big[\ln i(m,n)\,h(m-x,\,n-y)+\ln r(m,n)\,h(m-x,\,n-y)\Big].$$
Correlation–homomorphic filtering of the edge sub-frame image Forg can be expressed as Eq. (32):
$$C_{filter}=\ln F_{org}\star H_{filter}=\ln I_{org}\star H_{filter}+\ln R_{org}\star H_{filter},$$
where ★ denotes the correlation operation. We define F(u,v) = FT(ln Forg), H(u,v) = FT(Hfilter), I(u,v) = FT(ln Iorg), and R(u,v) = FT(ln Rorg), where FT(·) is the Fourier transform. Taking the Fourier transform of both sides of Eq. (32) gives Eq. (33):
$$C(u,v)=F(u,v)\,H(u,v)=I(u,v)\,H(u,v)+R(u,v)\,H(u,v).$$
H(u,v), which adopts a Gaussian filter, can be expressed as Eq. (34):
$$H(u,v)=(\gamma_H-\gamma_L)\Big[1-e^{-c\,D^2(u,v)/D_0^2}\Big]+\gamma_L,$$
where γL = 0.5, γH = 2.0, c is a constant that controls the sharpness of the transition between γL and γH, D0 is the cutoff frequency, and D(u,v) is the distance from the origin in the Fourier domain. The Gaussian high-pass filter enhances edges and detail information, and the correlation–homomorphic filter improves the edge energy without changing the edge position.
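A hedged sketch of the homomorphic enhancement of Eqs. (28)–(34): take the logarithm of the image, apply the Gaussian emphasis filter H(u,v) of Eq. (34) in the Fourier domain, and exponentiate back. This uses the standard multiplicative homomorphic formulation of Eq. (33); the small offset added before the logarithm and the default d0 are illustrative choices, not values from the paper.

```python
import numpy as np

def homomorphic_enhance(img, gamma_l=0.5, gamma_h=2.0, c=1.0, d0=30.0):
    """img: 2-D array with non-negative gray values."""
    rows, cols = img.shape
    u = np.fft.fftfreq(rows)[:, None] * rows
    v = np.fft.fftfreq(cols)[None, :] * cols
    d2 = u ** 2 + v ** 2                                      # squared distance from DC
    h = (gamma_h - gamma_l) * (1.0 - np.exp(-c * d2 / d0 ** 2)) + gamma_l  # Eq. (34)
    log_img = np.log(img + 1e-6)                              # Eq. (29)
    filtered = np.real(np.fft.ifft2(np.fft.fft2(log_img) * h))  # Eq. (33)
    return np.exp(filtered)
```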

4. Determination of sub-pixel edge point position based on partial differentiation

To extract the MTF, the accurate edge point positions must first be determined. We use image partial differentiation to identify the edge point positions. According to the edge position parameter, the sub-pixel position of the edge point in each row can be calculated within a window. A low-frequency component exists along the edge direction, whereas high-frequency components exist in other directions. Using this property, we perform image partial differentiation, i.e., directional high-pass filtering. When the filtering direction coincides with the edge direction, the sum of the gray values of the differential image in that direction is a minimum.

We use c(x,y) to express the edge image to be analyzed. We perform partial differentiation along the horizontal axis to obtain all possible edge points:

$$\Delta c(x,y)=c(x+1,y)-c(x,y).$$
The obtained Δc values are ordered from largest to smallest. The point with the maximum Δc, or one of the other large Δc points, might be the edge point. We use the edge parameter to determine the sub-pixel position of the edge point. We use bicubic interpolation to zoom the local image. Let c'(x,y) be the processed image, which is not the entire edge sub-frame image but a 3×3 window centered on the pixel ξ(i,j) that has the maximum Δc. The zoomed result is represented as g'(u,v). The result of partial differentiation can be expressed as
$$\Delta g'(i,j)=g'(u+1,v)-g'(u,v).$$
The sum of the gray values in the horizontal direction can be expressed as
$$I(\Delta g')=\sum_{i=0}^{2}\sum_{j=0}^{2}\big|\Delta g'(i,j)\big|.$$
The edge direction can be obtained by Eq. (38):
$$\alpha=\mathrm{angular}\big(\min(I(\Delta g'))\big).$$
After obtaining the edge direction α, the edge point lies in the zoomed region, which is denoted P. In P, we use the four points near ξ(i,j) to perform cubic polynomial curve fitting, which can be expressed as
$$a(x)=a_1x^3+a_2x^2+a_3x+a_4.$$
Setting the second-order derivative to zero yields the sub-pixel edge point,

$$a''(x)=6a_1x+2a_2=0,$$
$$x=-\frac{2a_2}{6a_1}.$$
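A minimal sketch of this section for a single image row, assuming a roughly vertical edge: the horizontal difference of Eq. (35) gives the coarse edge pixel, and a cubic fit of the profile around it (Eq. (39)) yields a sub-pixel edge position at the inflection point (Eqs. (40) and (41)). The five-sample window is an illustrative choice, and the bicubic zoom step is omitted here.

```python
import numpy as np

def subpixel_edge_in_row(row):
    """row: 1-D array of gray values across a near-vertical edge."""
    diff = np.diff(row)                                          # Eq. (35)
    i = int(np.clip(np.argmax(np.abs(diff)), 2, row.size - 3))   # coarse edge pixel
    x = np.arange(i - 2, i + 3, dtype=float)                     # five samples around it
    c3, c2, c1, c0 = np.polyfit(x, row[i - 2:i + 3], 3)          # cubic fit, Eq. (39)
    return -c2 / (3.0 * c3)                                      # inflection point, Eqs. (40)-(41)
```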

5. Building process of high-resolution edge spread function

The ESF is the response of the optical system to an ideal edge input. To obtain a high-precision MTF, the built ESF should be as close to the real edge as possible. When building the ESF, adequate samples of the edge are often unavailable. Traditional approaches use spline interpolation to build a high-resolution edge; however, spline interpolation is a numerical statistical method that affects the integrity of the ESF and decreases the precision of the MTF. We instead use the optical transfer function model of the camera to calculate the missing edge points and then build a high-precision ESF.

The imaging link model of on-board optical remote sensing sensors is shown in Fig. 5.

Fig. 5 Optical camera imaging link model.

From Fig. 5, an ideal transfer function of the optical remote sensing sensor can be defined as

$$\mathrm{MTF}_{total}=\mathrm{MTF}_{optics}\cdot \mathrm{MTF}_{defocus}\cdot \mathrm{MTF}_{detector}\cdot \mathrm{MTF}_{image\_motion},$$
$$\mathrm{MTF}_{optics}=\mathrm{MTF}_{diffraction}\cdot \mathrm{MTF}_{optical\_system},$$
$$\mathrm{MTF}_{diffraction}=\frac{2}{\pi}\left\{\arccos\!\left[\frac{\sqrt{f_x^2+f_y^2}}{D/\lambda F}\right]-\frac{\sqrt{f_x^2+f_y^2}}{D/\lambda F}\sqrt{1-\frac{f_x^2+f_y^2}{(D/\lambda F)^2}}\right\},$$
$$\mathrm{MTF}_{optical\_system}=\mathrm{MTF}_{design}\cdot \mathrm{MTF}_{manufacture}\cdot \mathrm{MTF}_{aberrations}=\exp\!\big(-\alpha_x^2f_x^2-\alpha_y^2f_y^2\big),$$
$$\mathrm{MTF}_{defocus}=\frac{2J\!\Big(\pi\Delta N\big(f_x^2+f_y^2\big)^{1/2}\big[1-\lambda N\big(f_x^2+f_y^2\big)^{1/2}\big]\Big)}{\pi\Delta N\big(f_x^2+f_y^2\big)^{1/2}\big[1-\lambda N\big(f_x^2+f_y^2\big)^{1/2}\big]},$$
$$\mathrm{MTF}_{detector}=\sin\!\left(\frac{\pi f_x}{f_{sx}}\right)\times\sin\!\left(\frac{\pi f_y}{f_{sy}}\right),$$
$$\mathrm{MTF}_{motion}=\sin\!\left(\frac{\pi f_y}{f_{sy}}\right),$$
where N is the f-number, D is the aperture diameter, λ is the central wavelength, αx and αy are the aberration coefficients, Δ is the defocus parameter, fx and fy are the row and column spatial frequencies, respectively, and fsx and fsy are the sampling frequencies along the x and y directions. Eq. (44) is taken from [22], Eq. (45) from [23, 24], Eq. (46) from [25], and Eqs. (47) and (48) from [22, 26].
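A hedged sketch of how such a component model can be evaluated numerically. Only the detector and image-motion terms are shown; they are written with the conventional sinc form sin(pi f/fs)/(pi f/fs), which is an assumption on our part, since the printed Eqs. (47) and (48) show only the sine numerator. The other terms of Eq. (42) would be multiplied in the same way.

```python
import numpy as np

def mtf_detector(fx, fy, fsx, fsy):
    """Separable detector MTF; fx, fy spatial frequencies, fsx, fsy sampling frequencies."""
    return np.abs(np.sinc(fx / fsx)) * np.abs(np.sinc(fy / fsy))  # np.sinc(x) = sin(pi x)/(pi x)

def mtf_motion(fy, fsy):
    """Along-track image-motion MTF under the same sinc assumption."""
    return np.abs(np.sinc(fy / fsy))

def mtf_total(fx, fy, fsx, fsy, *other_terms):
    """Eq. (42): product of whichever component MTFs are available (callables of fx, fy)."""
    m = mtf_detector(fx, fy, fsx, fsy) * mtf_motion(fy, fsy)
    for term in other_terms:
        m = m * term(fx, fy)
    return m
```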

The process of building ESF based on optical transfer function model can be summarized as follows.

First, an actual edge is computed from the edge sub-frame image. We use l(x,y) to represent the actual edge.

Second, an ideal transfer function of optical remote sensing sensors MTFtotal is computed.

Third, the inverse Fourier transform is performed on the ideal transfer function MTFtotal to obtain an ideal line spread function, denoted h(x,y).

Fourth, a model edge is obtained by the convolution operation in Eq. (49):

$$i(x,y)=l(x,y)\ast h(x,y),$$
where l(x,y) is the actual edge.

Fifth, the quadratic distance between the actual edge and the model edge is computed and then minimized,

$$\min_{\alpha_x,\alpha_y,\Delta}\ \frac{1}{2}\big\|\mathrm{ESF}_{actual}-\mathrm{ESF}_{model}\big\|_F^2.$$

Finally, the optimal ESF replaces the actual edge to obtain the high-resolution ESF. Figure 6 shows our inverse method for building the high-resolution ESF. First, we extract the initial edge spread function to obtain the direction parameters, which are used to compute the ideal transfer function of the optical remote sensing sensor. The initial edge spread function is extracted through the following steps: (1) determine the accurate sub-pixel position of the edge point according to Eqs. (35) to (41); (2) fit a straight line by the least-squares criterion and adjust the edge point positions to the fitted line; and (3) perform cubic spline interpolation for each image line at a chosen interpolation resolution. In this way, each line has sufficient data points, and the gray-level distribution of each line yields an ESF; the average edge spread function and the corresponding average line spread function are regarded as the initial edge spread function. Then, the ideal transfer function of the optical remote sensing sensor is computed from Eqs. (42) to (48), and the high-resolution ESF is finally built. The building process of the high-resolution ESF follows the five steps above.
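A hedged sketch of the model-fitting idea behind Eq. (50): the aberration coefficients and defocus amount are adjusted so that the model ESF matches the measured ESF in the least-squares sense. The model_esf callable is a placeholder for the Eqs. (42)–(48) forward model followed by Eq. (49); it is not defined in the paper and must be supplied by the user.

```python
import numpy as np
from scipy.optimize import minimize

def fit_model_esf(esf_actual, model_esf, x0=(1.0, 1.0, 0.0)):
    """esf_actual: 1-D measured ESF; model_esf(params) -> model ESF of the same length."""
    def cost(params):                         # Eq. (50): 0.5 * ||ESF_actual - ESF_model||^2
        return 0.5 * np.sum((esf_actual - model_esf(params)) ** 2)
    result = minimize(cost, x0=np.array(x0), method="Nelder-Mead")
    return result.x                           # fitted (alpha_x, alpha_y, delta)
```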

Fig. 6 Building high-resolution ESF.

6. Description of the process

The entire on-board MTF extraction procedure is summarized in the diagram in Fig. 7.

Fig. 7 The entire MTF acquisition procedure.

The entire procedure is classified into six steps.

The first step is to calculate the along-track or vertical track direction based on the image motion velocity model. The image motion velocity model is given by Eqs. (14) to (20), and the track direction is obtained from the drift angle in Eq. (21).

Then, the edge sub-frame image is extracted.

The second step is to denoise the edge sub-frame image using mathematical morphology to effectively suppress the noise. The dilation and erosion denoising operators suppress the salt-and-pepper and Gaussian noise, and the background noise is removed using Eqs. (26) and (27).

Then, the correlation–homomorphic filter algorithm is used to enhance the edge sub-frame image by improving the contrast ratio across the edge. The correlation–homomorphic filter is applied according to Eqs. (28) to (34).

The third step is to determine the accurate edge points based on image partial differentiation. The accurate edge points are determined to obtain the initial edge spread function from Eqs. (35) to (41).

The fourth step is to build a high-resolution ESF based on the optical transfer function of the camera. The building process of the high-resolution ESF is performed as shown in Fig. 6 and Eqs. (42) to (48).

The fifth step is to take the derivative of the ESF once to acquire the LSF according to Eq. (7).

The sixth step is to perform the Fourier transform of the LSF and normalize the result; the final MTF curve is then obtained.
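For orientation, the sketch below is a simplified, self-contained stand-in for steps three to six, assuming a denoised sub-frame image with a near-vertical slanted edge. It replaces the motion model, morphological preprocessing, and model-based ESF reconstruction with a conventional projection-and-binning slanted-edge estimator, so it is not the full method of this paper.

```python
import numpy as np

def slanted_edge_mtf(img, oversample=4):
    """img: 2-D edge sub-frame image with a near-vertical edge."""
    rows, cols = img.shape
    # step three (simplified): coarse edge position per row from the horizontal difference
    edge_x = np.array([np.argmax(np.abs(np.diff(r))) for r in img], dtype=float)
    slope, intercept = np.polyfit(np.arange(rows), edge_x, 1)   # least-squares edge line
    # step four (simplified): distance of every pixel from the fitted edge, binned finely
    dist = np.arange(cols)[None, :] - (slope * np.arange(rows)[:, None] + intercept)
    bins = np.round(dist * oversample).astype(int)
    bins -= bins.min()
    counts = np.bincount(bins.ravel())
    esf = np.bincount(bins.ravel(), weights=img.ravel()) / np.maximum(counts, 1)
    # steps five and six: differentiate, Fourier transform, normalize
    lsf = np.gradient(esf)
    mtf = np.abs(np.fft.rfft(lsf))
    mtf = mtf / mtf[0]
    freqs = np.fft.rfftfreq(lsf.size, d=1.0 / oversample)       # cycles per pixel
    return freqs, mtf
```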

7. Experimental results and analysis

7.1 Verification experiments on the assessment method of MTF

To verify the feasibility of our approach, we used a CCD camera to conduct outdoor imaging experiments. First, we carried out laboratory testing to calibrate the MTF of the camera at the Nyquist frequency. The MTF tester is composed of target stripes, a collimator, and a turntable. The test results are shown in Fig. 8: the MTFs at different CCD locations range from 0.225 to 0.26, and the average MTF is 0.241.

Fig. 8 MTFs at the Nyquist frequency by laboratory testing method.

Second, we use our method to extract the MTF from the edge sub-frame image. The edge sub-frame image is extracted based on the pixel motion velocity model and is shown in Fig. 9. The edge sub-frame image is then denoised and enhanced. The sub-pixel position of the edge is shown in Fig. 10. After model edge reconstruction and gray value normalization, the ESF is extracted as shown in Fig. 11. The derivative of the ESF is computed to obtain the LSF, shown in Fig. 12. Finally, the Fourier transform of the LSF is computed to obtain the MTF, shown in Fig. 13. For comparison, we use other algorithms, namely the knife-edge method [27] and the ISO12233 edge method [28], to extract the MTF from the edge sub-frame image. The extracted average MTFs of the three methods at the Nyquist frequency are shown in Table 1. The MTF of our method at the Nyquist frequency is 0.1876. This result is lower than the laboratory calibration value because the outdoor imaging process is affected by vibration and the environment. The knife-edge method is about as effective as the ISO12233 edge method, and the result of our method is better than those of the other methods.

Fig. 9 Extracted edge sub-frame image.

Fig. 10 Sub-pixel position of edge.

Fig. 11 ESF curve.

Fig. 12 LSF curve.

Fig. 13 MTF curve.

Table 1. Extracted average MTF of the three methods at the Nyquist frequency

7.2 Performance testing experiments on the assessment method of MTF

To test the assessment performance of the MTF efficiently, we apply the MTFs extracted by the different methods to image restoration and use the restoration quality to infer the accuracy of the MTF. First, the MTF is extracted from the remote sensing image. Then, the point spread function (PSF) is reconstructed from the extracted MTF. The reconstructed PSF can be considered the degradation transfer function of the entire optical remote sensing system in the image recording process; the MTF degradation reflects the entire optical imaging chain, including the optical system, detector, satellite platform motion, data transmission, and atmospheric effects. The obtained PSF is then used in a Wiener filter algorithm to obtain the restored image. Therefore, the restoration quality depends on the extraction accuracy of the MTF and can indirectly reflect the performance of MTF extraction. Figure 14(a) shows the original image. The different methods are employed to calculate the corresponding MTFs, the calculated MTFs are used to reconstruct the PSFs, and the Wiener filter algorithm is employed to restore the image. Figures 14(b) and 14(c) show the images restored using the different MTF extraction methods.
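A hedged sketch of this restoration check: an OTF is formed from the measured MTF under a zero-phase assumption, and a standard Wiener filter with a constant noise-to-signal ratio k is applied in the Fourier domain. The value of k is an illustrative tuning parameter, not taken from the paper.

```python
import numpy as np

def wiener_restore(img, mtf_2d, k=0.01):
    """img: degraded image; mtf_2d: 2-D MTF sampled on the same FFT grid as img."""
    otf = mtf_2d.astype(complex)                     # zero-phase assumption
    wiener = np.conj(otf) / (np.abs(otf) ** 2 + k)   # Wiener filter with constant NSR
    restored = np.fft.ifft2(np.fft.fft2(img) * wiener)
    return np.real(restored)
```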

Fig. 14 (a) Original image, (b) restored image by ISO12233 method, and (c) restored image by our method.

We perform statistical analysis on the restored images in terms of the mean gray level, mean gradient, and edge strength. The mean gradient can be defined as:

$$\bar{G}=\frac{1}{M\times N}\sum_{i=1}^{M}\sum_{j=1}^{N}\sqrt{\frac{\big[f(i,j)-f(i+1,j)\big]^2+\big[f(i,j)-f(i,j+1)\big]^2}{2}},$$
where M and N give the size of the image and f(i,j) is the gray value at pixel (i,j). The mean gray level can be defined as:
$$\mu=\frac{1}{M\times N}\sum_{i=1}^{M}\sum_{j=1}^{N}f(i,j).$$
The edge strength can be defined as:
$$g(i,j)=\sqrt{s_x(i,j)^2+s_y(i,j)^2},$$
where sx(i,j) and sy(i,j) are the filtering results of the Sobel operator along the horizontal and vertical directions, respectively.
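A minimal sketch of these three statistics, using scipy.ndimage for the Sobel responses; the per-pixel edge strength of Eq. (53) is summarized here by its mean, which is an illustrative choice.

```python
import numpy as np
from scipy import ndimage

def restoration_statistics(img):
    """img: 2-D float image; returns (mean gray level, mean gradient, mean edge strength)."""
    mean_gray = img.mean()                                         # Eq. (52)
    dx = img[:-1, :-1] - img[1:, :-1]                              # f(i,j) - f(i+1,j)
    dy = img[:-1, :-1] - img[:-1, 1:]                              # f(i,j) - f(i,j+1)
    mean_grad = np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0))        # Eq. (51)
    sx = ndimage.sobel(img, axis=1)                                # horizontal Sobel response
    sy = ndimage.sobel(img, axis=0)                                # vertical Sobel response
    edge_strength = np.mean(np.sqrt(sx ** 2 + sy ** 2))            # Eq. (53), averaged
    return mean_gray, mean_grad, edge_strength
```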

The statistical results are shown in Table 2. Although the mean gray level of the image restored using the MTF of our method is about the same as that of the ISO12233 method, the mean gradient and edge strength of our method are higher than those of the ISO12233 method. Therefore, the restoration based on the MTF extracted by our method is better than that based on the ISO12233 method, which indicates that our method extracts a more accurate MTF. Other remote sensing images with different textures were tested, and the same results were obtained. Using the same statistical analysis, we compared the results of our method with those of the knife-edge method and again reached the same conclusion.

Table 2. Statistical results

7.3 Performance testing experiments of QuickBird satellite images

We use a QuickBird satellite image to further compare the MTF extraction performance of our method with those of the knife-edge method and the spot method. The QuickBird testing image is shown in Fig. 15. We use the three methods to extract MTFs from this image, measuring the corresponding MTFs on different sub-frame images and taking the average MTF as the MTF of the entire testing image. The extracted MTFs of the different methods are shown in Fig. 16 and represent the entire QuickBird testing image. The MTF values at the Nyquist frequency are listed in Table 3. Because we use several key technologies, namely the pixel motion model to extract the conditional sub-frame images, the mathematical morphology algorithm and correlation–homomorphic filter algorithm to eliminate noise and enhance the sub-frame image, image partial differentiation to determine the accurate position of the edge points, and the model optical transfer function to build the high-resolution ESF, our method improves the assessment value of the MTF by 28.71%. From the above discussion, the on-board MTF extraction method is feasible, and the assessment performance of our proposed method is better than those of the other methods.

Fig. 15 QuickBird satellite testing image.

Fig. 16 Extracted MTFs of the different methods.

Table 3. MTF values at Nyquist frequency

7.4 Experiments of other testing images

We use a standard vertical edge target (see Fig. 17(b)) to test our method. The standard vertical edge image was acquired by our camera in the laboratory in order to measure the MTF with our method, the ISO12233 edge method, and the knife-edge method. The testing camera is shown in Fig. 17(a). The camera is a 3 m resolution remote sensor, its spectral band is 500–900 nm, and its designed MTF is greater than 9%. The extracted MTFs of the different methods are shown in Fig. 18, and the MTF values at the Nyquist frequency are listed in Table 4. The MTF results of the other two methods are very similar to each other, and the comparison shows that our method measures the MTF correctly. From Table 4, our method improves the assessment value of the MTF by 18.21% at the Nyquist frequency.

Fig. 17 (a) Experiment camera and (b) standard vertical edge image.

Fig. 18 Extracted MTFs of the different methods for the standard vertical edge image.

Table 4. MTF values at Nyquist frequency

Finally, we use a SPOT5 satellite edge image to perform an additional experiment. SPOT5 is a high-resolution satellite with 5 m panchromatic resolution. The edge image is shown in Fig. 19. We use the three methods to extract MTFs from Fig. 19; the extracted MTFs of the different methods are shown in Fig. 20. Table 5 lists the MTF values of the three methods at the Nyquist frequency of 0.5 cycles/pixel. The MTFs of our method, the knife-edge method, and the ISO12233 edge method at the Nyquist frequency are 0.129, 0.113, and 0.114, respectively. Thus, our method improves the assessment value of the MTF by 13.15% at the Nyquist frequency.

Fig. 19 SPOT satellite edge image.

Fig. 20 Extracted MTFs of the different methods.

Table 5. MTF values at Nyquist frequency

8. Conclusion

In this paper, we propose a new approach to calculate the MTF of space optical remote sensing sensors. A pixel motion model is used to extract the conditional sub-frame images. Mathematical morphology and correlation–homomorphic filter algorithms are used to eliminate noise and enhance the sub-frame image. Image partial differentiation determines the accurate position of the edge points. A model optical transfer function is used to build a high-resolution ESF. Finally, the MTF is calculated by differentiation and Fourier transform. The experiments show that the proposed MTF assessment method is superior to other methods. The extracted MTF can be used to evaluate the imaging performance of on-board optical remote sensing sensors, as well as to recover and restore images to improve imaging quality. The proposed method can provide a reference for future on-board MTF assessment of space CCD cameras.

Acknowledgments

This work is supported by the Foundation for Postdoctoral Science of China (Grant No. 2014M550720) and the National High Technology Research and Development Program of China (863 Program) (Grant No. 2012AA121503).

References and links

1. K. Sun, L. Huang, X. Cheng, and H. Jiang, “Analysis and simulation of the phenomenon of secondary spots of the TDI CCD camera irradiated by CW laser,” Opt. Express 19(24), 23901–23907 (2011). [CrossRef]   [PubMed]  

2. D. Wang, T. Zhang, and H. Kuang, “Clocking smear analysis and reduction for multi phase TDI CCD in remote sensing system,” Opt. Express 19(6), 4868–4880 (2011). [CrossRef]   [PubMed]  

3. J. Jeong and M. Y. Kim, “Adaptive imaging system with spatial light modulator for robust shape measurement of partially specular objects,” Opt. Express 18(26), 27787–27801 (2010). [CrossRef]   [PubMed]  

4. E. Oh and J. K. Choi, “GOCI image enhancement using an MTF compensation technique for coastal water applications,” Opt. Express 22(22), 26908–26918 (2014). [CrossRef]   [PubMed]  

5. L. Y. Cui, B. D. Xue, X. G. Cao, J. K. Dong, and J. N. Wang, “Generalized atmospheric turbulence MTF for wave propagating through non-Kolmogorov turbulence,” Opt. Express 18(20), 21269–21283 (2010). [CrossRef]   [PubMed]  

6. T. Vettenburg, N. Bustin, and A. R. Harvey, “Fidelity optimization for aberration-tolerant hybrid imaging systems,” Opt. Express 18(9), 9220–9228 (2010). [CrossRef]   [PubMed]  

7. S. Liu and H. Hua, “A systematic method for designing depth-fused multi-focal plane three-dimensional displays,” Opt. Express 18(11), 11562–11573 (2010). [CrossRef]   [PubMed]  

8. H. T. Hsieh, H. C. Wei, M. H. Lin, W. Y. Hsu, Y. C. Cheng, and G. D. Su, “Thin autofocus camera module by a large-stroke micromachined deformable mirror,” Opt. Express 18(11), 11097–11104 (2010). [CrossRef]   [PubMed]  

9. I. Klapp and D. Mendlovic, “Improvement of matrix condition of Hybrid, space variant optics by the means of parallel optics design,” Opt. Express 17(14), 11673–11689 (2009). [CrossRef]   [PubMed]  

10. Z. Zhou, F. Gao, H. Zhao, L. Zhang, L. Ren, Z. Li, M. U. Ghani, and H. Liu, “Monotone spline regression for accurate MTF measurement at low frequencies,” Opt. Express 22(19), 22446–22455 (2014). [CrossRef]   [PubMed]  

11. S. M. Backman, A. J. Makynen, T. T. Kolehmainen, and K. M. Ojala, “Random target method for fast MTF inspection,” Opt. Express 12(12), 2610–2615 (2004). [CrossRef]   [PubMed]  

12. F. Viallefont-Robinet, “Edge method for on-orbit defocus assessment,” Opt. Express 18(20), 20845–20851 (2010). [CrossRef]   [PubMed]  

13. K. Masaoka, T. Yamashita, Y. Nishida, and M. Sugawara, “Modified slanted-edge method and multidirectional modulation transfer function estimation,” Opt. Express 22(5), 6040–6046 (2014). [CrossRef]   [PubMed]  

14. S. Horiuchi, S. Yoshida, and M. Yamamoto, “Simulation of modulation transfer function using a rendering method,” Opt. Express 21(6), 7373–7383 (2013). [CrossRef]   [PubMed]  

15. P. W. Nugent, J. A. Shaw, M. R. Kehoe, C. W. Smith, T. S. Moon, and R. C. Swanson, “Measuring the modulation transfer function of an imaging spectrometer with rooflines of opportunity,” Opt. Eng. 49(10), 103201 (2010). [CrossRef]  

16. H. L. Robert, L. Zeljko, E. G. James, K. Gillian, A. S. Michael, and Y. Naoki, “The SDSS imaging pipelines,” Proc. SPIE 4836, 350–356 (2002). [CrossRef]  

17. X. Chen, N. George, G. Agranov, C. Liu, and B. Gravelle, “Sensor modulation transfer function measurement using band-limited laser speckle,” Opt. Express 16(24), 20047–20059 (2008). [CrossRef]   [PubMed]  

18. A. M. Pozo, A. Ferrero, M. Rubiño, J. Campos, and A. Pons, “Improvements for determining the modulation transfer function of charge-coupled devices by the speckle method,” Opt. Express 14(13), 5928–5936 (2006). [CrossRef]   [PubMed]  

19. D. Leger, J. Duffaut, and F. Robinet, “MTF measurement using spotlight,” in Proceedings of IEEE International Geoscience and Remote Sensing Symposium (Pasadena, CA, USA, 1994), pp. 2010–2012.

20. T. Choi, D. Helder, and E. Micijevic, “Edge method” in IKONOS satellite in orbit modulation transfer function (MTF) measurement using edge and pulse method (Electrical Engineering Department South Dakota State University, USA, 2002).

21. G. S. El-tawel and A. K. Helmy, “An edge detection scheme based on least squares support vector machine in a contourlet HMT domain,” Appl. Soft Comput. 26, 418–427 (2015). [CrossRef]  

22. G. D. Boreman, “Diffraction modulation transfer function,” in Modulation Transfer Function in Optical and Electro-Optical Systems (SPIE Press, Bellingham, WA, 2001).

23. S. E. Reichenbach, D. E. Koehler, and D. W. Strelow, “Restoration and reconstruction of AVHRR images,” IEEE Trans. Geosci. Rem. Sens. 33(4), 997–1007 (1995). [CrossRef]  

24. C. Latry, V. Despringre, and C. Valorge, “Automatic MTF measurement through a least square method,” Proc. SPIE 5570, 233–244 (2004). [CrossRef]  

25. W. H. Steel, “The defocused image of sinusoidal gratings,” Opt. Acta (Lond.) 3(2), 65–74 (1956). [CrossRef]  

26. H.-S. Wong, Y. L. Yao, and E. S. Schlig, “TDI charge-coupled devices: design and application,” IBM J. Res. Develop. 36(1), 83–106 (1992). [CrossRef]  

27. F. Viallefont-Robinet and D. Léger, “Improvement of the edge method for on-orbit MTF measurement,” Opt. Express 18(4), 3531–3545 (2010). [CrossRef]   [PubMed]  

28. H. Hwang, Y. Choi, S. Kwak, M. Kim, and W. Park, “MTF assessment of high resolution satellite images using ISO 12233 slanted-edge method,” Proc. SPIE 7109, 710905 (2008). [CrossRef]  
