
Imaging modeling and error analysis of the star sensor under rolling shutter exposure mode

Open Access

Abstract

As the star sensor works under high dynamic conditions, the spot formed by a star on the imaging plane becomes a tail, which directly reduces the accuracy of centroid positioning. In addition, the imaging quality of the star sensor is seriously degraded by the rolling shutter effect in the rolling shutter exposure mode, which further increases the positioning error. Considering the diffusion radius and the dynamic tailing of the star spot, the imaging trajectory and the energy distribution models of the star spot under the rolling shutter exposure mode are established in this paper. Furthermore, based on the proposed models, the influences of the starting positions of stars and of the dispersion of star spots on the centroid positioning error are analyzed by numerical simulation, and the variation laws of the two kinds of errors are obtained. Laboratory experiments are then implemented to evaluate the latter error; the experimental results indicate that its variation is consistent with the simulation results and prove that it cannot be ignored in practical engineering applications. These results can be a valuable reference for developing a high precision star sensor. The models proposed in this paper can effectively describe the star imaging process and evaluate the centroid positioning accuracy under the rolling shutter exposure mode, which lays a foundation for further eliminating the rolling shutter effect in subsequent research and improving the dynamic performance of star sensors.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Under high dynamic conditions, severe star tailing occurs during star sensor imaging, which makes star point extraction more difficult, causes the extraction accuracy to decrease rapidly or even fail, and may prevent star image recognition and attitude calculation from being completed normally [1,2]. The imaging chip of the star sensor plays an important role in dynamic star imaging. Two main types of chips can be used in star sensors: one based on the global shutter exposure mode and the other based on the rolling shutter exposure mode. Many scholars have studied the global shutter exposure mode adopted by the first type of chip. Shen et al. proposed a star image simulation algorithm based on the pinhole imaging model and star point motion blur to test the high dynamic performance of star sensors [3]. Juan Shen et al. studied the parameters affecting the measurement accuracy of the star sensor by using a star point imaging model under high dynamic conditions [4]. References [5–7] reported the distortion of star point imaging in the high dynamic state under global exposure, which seriously degrades the centroid positioning accuracy of the star sensor. Although wavelet decomposition and other image processing methods have been used to correct the deformation, the low signal-to-noise ratio of the star images generated by the star sensor in this mode is still the main factor restricting its accuracy. In contrast, since the complementary metal oxide semiconductor (CMOS) active pixel sensor with rolling shutter exposure mode has strong anti-interference and noise suppression capability, it is gradually being applied to high-precision star sensors to improve their dynamic performance [8–10]. However, the exposure intervals of different rows of the image sensor are not synchronous in the rolling shutter exposure mode, which increases the error of the star sensor [11]. If a complete star point imaging model in the rolling shutter exposure mode can be established and the resulting errors corrected, a high-quality star map can be obtained, which will further improve the dynamic performance of the star sensor. In this regard, relevant scholars have carried out some research. Meingast et al. used camera perspective projection to derive the imaging law of a target object under the rolling shutter exposure mode and described the motion of the object on this basis [12]; the model is applicable when the target object has obvious feature points. Unlike a regular target object, the image of a star is only a spot, which carries little feature information. Enright et al. used multi-frame images to estimate the ideal star point motion under the rolling shutter effect and then obtained the error that the effect introduces on the ideal star point [13]. Since an ideal star point is assumed, this error does not include the influence of the dynamic trailing and the dispersion size of the star spot. Some scholars [14–16] used the correspondence between two-dimensional points in the image and three-dimensional points of the target to obtain the camera pose by simplifying the camera model and then obtaining the matching error. However, only the two-dimensional image information of the star points can be used in star map recognition, and the feature points are single, so this kind of model cannot be used for effective analysis.
Therefore, in order to accurately describe the real imaging of stars under the rolling shutter exposure mode, this paper establishes an imaging model that considers both the dynamic tailing and the dispersion size of the star spot under the rolling shutter exposure mode. In addition, the centroid positioning error of the star point under the rolling shutter exposure mode is analyzed through simulations and laboratory experiments. The model lays a foundation for further correction of the rolling shutter effect.

The content of this paper is arranged as follows. Section 1 is the introduction. The theoretical basis of this study is analysed in Section 2, including the dynamic imaging of the star sensor and the principle of the rolling shutter effect. In Section 3, the star trajectory is modeled by considering the dispersion radius and the dynamic tailing of the star spot; then, by analyzing the imaging characteristics of the star spot under the rolling shutter exposure mode, the energy distribution model of the star trajectory is obtained. In Section 4, numerical simulation experiments are implemented. The dynamic star image under the rolling shutter mode is generated according to the proposed star point trajectory and energy distribution models, and the total centroid positioning error is obtained from the simulation results. Additionally, we thoroughly analyze the errors caused by the different starting positions of stars and by the consideration of star dispersion and dynamic tailing. In Section 5, experiments are carried out using a high precision three-axis turntable and a star simulator. The accuracy of the proposed model is verified through the analysis of the experimental data, and the influence of star dispersion on the centroid positioning is quantified. In Section 6, the conclusions of this research and its significance for improving the dynamic performance of the star sensor are summarized.

2. Theoretical analysis

2.1. Dynamic imaging theory of star sensor

The star sensor is the core device for satellite attitude determination in a celestial navigation system: it photographs stars in the real sky and images them on the image plane. The attitude information of the satellite can be obtained after preprocessing the star images, locating the centroids and matching the star map. Figure 1 shows the dynamic imaging process.

Fig. 1. Schematic diagram of star sensor dynamic imaging.

Assuming that the star vector of a given star point in the star sensor coordinate system at the time of $t_\textrm{0}$ is:

$$W(t_\textrm{0}) = \frac{{{{[\textrm{ - }x(t_\textrm{0})\;\:\textrm{ - }y(t_\textrm{0})\;\:f]}^T}}}{{\sqrt {{{(x(t_\textrm{0}))}^\textrm{2}} + {{(y(t_\textrm{0}))}^\textrm{2}} + {f^2}} }}.$$
After the time of $\Delta t$, the star vector becomes:
$$W(t_\textrm{0} + \Delta t) = C_{t_\textrm{0}}^{t_\textrm{0} + \Delta t} \times W(t_\textrm{0}) = (I - [\omega \times ] \times \Delta t) \times W(t_\textrm{0}),$$
wherein $C_{{t_\textrm{0}}}^{{t_\textrm{0}}\textrm{ + }\Delta \textrm{t}}$ is the rotation matrix and $[\omega \times ] = [0\;\: - \omega_{z}\;\:\omega_{y};\;\omega_{z}\;\:0\;\: - \omega_{x};\: - \omega_{y}\;\:\omega_{x}\;\:0]$ is the antisymmetric matrix of the angular velocity.

When $\omega \cdot \Delta t$ is small enough, the image coordinates at any time during the exposure time can be obtained through the camera model, which is the trajectory model:

$$\left\{ {\begin{array}{l} {x(t + \Delta t) = x(t_\textrm{0}) + (y(t_\textrm{0}) \times \omega_{z} + f \times \omega_{y}) \times \Delta t}\\ {y(t + \Delta t) = y(t_\textrm{0}) - (x(t_\textrm{0}) \times \omega_{z} + f \times \omega_{x}) \times \Delta t.} \end{array}} \right.$$
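To make the trajectory model concrete, the short sketch below propagates a star's image-plane coordinates with Eq. (3); it is only an illustration, and the focal length, coordinates and angular rates are hypothetical values whose units merely need to be mutually consistent.

```python
import numpy as np

def propagate_star(x0, y0, omega, f, dt):
    """Propagate a star's image-plane coordinates over a short time dt
    using the linearized trajectory model of Eq. (3).

    x0, y0 : initial image-plane coordinates (same length unit as f)
    omega  : angular velocity (wx, wy, wz) of the star sensor, rad/s
    f      : focal length
    dt     : propagation time, s
    """
    wx, wy, wz = omega
    x = x0 + (y0 * wz + f * wy) * dt
    y = y0 - (x0 * wz + f * wx) * dt
    return x, y

# Example: 3 deg/s about X and Y, 25 mm focal length, 15 ms exposure
w = np.deg2rad([3.0, 3.0, 0.0])
print(propagate_star(1.2, -0.8, w, 25.0, 15e-3))
```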

2.2. Rolling shutter exposure mode of a star sensor

Different from the global shutter exposure mode, the start and end exposure times of each row in the rolling shutter exposure mode occur at different time points, but the actual exposure time of all rows is equal [17]. Because there is no memory unit in each pixel of an imaging chip with rolling shutter exposure, the signal must be read out immediately after exposure. In addition, the sensor cannot read all the lines at the same time; therefore, the exposure must stop line by line and be read out line by line. To ensure that the exposure time of each row is the same, the start exposure time of each row also needs to be shifted backward.

The rolling shutter exposure imaging process is illustrated in Fig. 2, where the horizontal axis represents time and the vertical axis represents the row number on the image plane of the sensor. $t_{int}$ represents the effective integration time of star imaging, which is equal to the exposure time $T$ of each frame. $t_{rd}$ represents the readout time, which is set equal to the time interval between rows, and $t_{res}$ represents the extremely short reset time, which can be ignored. In order to realize simultaneous exposure and readout, the inter-frame time can be set to zero, so that:

$$t_{int} = T = (\textrm{n - 1}) \cdot t_{rd},$$
wherein ${n}$ is the number of rows of the sensor image plane. It is worth noting that this relationship may change with different definitions of the exposure time.
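As a sketch of this timing relation, the following snippet lists the exposure window of each row under the zero inter-frame-time assumption of Eq. (4); the 1024-row count and 15 ms integration time are illustrative values only.

```python
def row_exposure_windows(n_rows, t_int):
    """Start/end exposure times of each row under rolling shutter,
    with zero inter-frame time so that t_int = T = (n_rows - 1) * t_rd (Eq. (4))."""
    t_rd = t_int / (n_rows - 1)          # inter-row readout interval
    return t_rd, [(r * t_rd, r * t_rd + t_int) for r in range(n_rows)]

t_rd, windows = row_exposure_windows(n_rows=1024, t_int=15e-3)
print(f"t_rd = {t_rd * 1e6:.2f} us, last row exposes during {windows[-1]}")
```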

Fig. 2. Schematic diagram of rolling shutter exposure mode.

3. Modeling for dynamic star point in the rolling shutter exposure mode

3.1. Star velocity model on image plane under three axis rotation of the star sensor

As shown in Fig. 3, $O_{s} - XYZ$ is the star sensor coordinate system and $o - xy$ is the image plane coordinate system of the star sensor. During the rotation of the star sensor, the red line $L_{xy}$ is the trajectory formed on the image plane $xoy$ by rotation about the $X$ and $Y$ axes; similarly, the blue line segment $L_{z}$ is the trajectory formed by rotation about the $Z$ axis. $\theta$ is the incidence angle of the star, i.e., the angle between the star vector and the visual axis $Z$ in the star sensor coordinate system, and $\gamma$ is the initial imaging position angle, i.e., the angle between the line from point $o$ to the projection of the star point on the image plane $xoy$ and the $x$ axis. Supposing the angular velocity of the star sensor is:

$$\bar{\omega }\textrm{ = }\omega [\begin{array}{cccc} {\cos \varphi \cdot \cos \beta }&{\cos \varphi \cdot \sin \beta }&{\sin \varphi } \end{array}],$$
wherein $\varphi$ is the angle between the three-axis angular velocity vector in the star sensor coordinate system and its projection on the plane $XO_{s}Y$, and $\beta$ is the angle between this projection and the $X$ axis.

Fig. 3. Star point trajectory under three-axis rotation.

It is assumed that the star sensor first rotates about the $X$ and $Y$ axes, so that the imaging trajectory on the image plane is $L_{xy}$ in Fig. 3. If the imaging position of the star on the star sensor image plane is $(x_{0},y_{0})$, the velocity components along the two coordinate axes of the image plane are $V_{x1}$ and $V_{y1}$ respectively, and the star point movement time is the exposure integration time $T$, then [18]:

$$\left\{ {\begin{array}{l} {{x_1} = {x_{0}} + {V}_{x1} \cdot T}\\ {{y_1} = {y_{0}} + {V}_{y1} \cdot T,} \end{array}} \right.$$
wherein $V_{x1} = \frac{{f \cdot \omega_{y}}}{{{{\cos }^2}\theta }}$, $V_{y1} = \frac{{f \cdot \omega_{x}}}{{{{\cos }^2}\theta }}$. On this basis, rotating about the $z$ axis by $\omega_{z} \cdot T$ gives [19]:
$$\left\{ {\begin{array}{l} {x_{2} = x_1 \cdot \cos (\omega_{z} \cdot T) + y_{1} \cdot \sin (\omega_{z} \cdot T)}\\ {y_{2} = y_{1} \cdot \cos (\omega_{z} \cdot T) - x_1 \cdot \sin (\omega_{z} \cdot T).} \end{array}} \right.$$
If $\omega_{z} \cdot T$ is very small, the above two parts of the motion can be regarded as uniform motion. By subtracting the abscissas and the ordinates of the starting and ending positions of the star on the image plane and then dividing by the time $T$, the following result is obtained:
$$V_{x} = \frac{{x_{2} - x_{0}}}{T} = \frac{1}{T} \cdot f \cdot \tan \theta \cdot [\cos (\gamma - \omega \cdot T \cdot \sin \varphi ) - \cos \gamma ] + \frac{{f \cdot \omega \cdot \cos \varphi }}{{{{\cos }^2}\theta }}\sin (\beta \textrm{ + }\omega \cdot T \cdot \sin \varphi ).$$
Similarly, the velocity in the $y$ direction can be obtained:
$$V_{y} = \frac{{{y_2} - {y_0}}}{T} = \frac{1}{T} \cdot f \cdot \tan \theta \cdot [\sin (\gamma - \omega \cdot T \cdot \sin \varphi ) - \sin \gamma ] + \frac{{f \cdot \omega \cdot \cos \varphi }}{{{{\cos }^2}\theta }}\cos (\beta - \omega \cdot T \cdot \sin \varphi ).$$
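A direct transcription of Eqs. (8) and (9) is sketched below; all angles are in radians, the focal length is expressed in pixels, and the sample values are purely illustrative rather than the parameters of Table 1.

```python
import numpy as np

def image_plane_velocity(f, theta, gamma, omega, phi, beta, T):
    """Image-plane velocity components of a star under three-axis rotation,
    following Eqs. (8) and (9).

    f      : focal length (pixel units here)
    theta  : incidence angle of the star, rad
    gamma  : initial imaging position angle, rad
    omega  : magnitude of the angular velocity, rad/s
    phi    : elevation of the angular-velocity vector from the X-Os-Y plane, rad
    beta   : azimuth of its projection in the X-Os-Y plane, rad
    T      : exposure (integration) time, s
    """
    dpsi = omega * T * np.sin(phi)       # rotation about the boresight during T
    vx = (f * np.tan(theta) * (np.cos(gamma - dpsi) - np.cos(gamma)) / T
          + f * omega * np.cos(phi) / np.cos(theta) ** 2 * np.sin(beta + dpsi))
    vy = (f * np.tan(theta) * (np.sin(gamma - dpsi) - np.sin(gamma)) / T
          + f * omega * np.cos(phi) / np.cos(theta) ** 2 * np.cos(beta - dpsi))
    return vx, vy

print(image_plane_velocity(f=3000.0, theta=np.deg2rad(5), gamma=np.deg2rad(30),
                           omega=np.deg2rad(3), phi=np.deg2rad(10),
                           beta=np.deg2rad(45), T=0.1))
```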

3.2. Modeling for star trajectory under the rolling shutter exposure mode

When the diffusion radius and the dynamic blur of the star point are ignored, the trajectory on the image plane is as shown in Fig. 4. In the analysis of this section, ${M}$ is defined as the row index on the sensor image plane, corresponding to its $y$ axis, and ${N}$ is defined as the column index, corresponding to its $x$ axis, and the position of the star point is represented here by the combination of row and column. It is assumed that the initial coordinates of the star point on the image plane are $({M_{0},N_{0}})$ and that the imaging position of the star point is $({M_{1},N_1})$ after the exposure is completed. Additionally, the speeds of the star point along the $x$ axis and the $y$ axis of the image plane are denoted $V_{x}$ and $V_{y}$ respectively, and the velocity in an arbitrary direction is $V_{xy}$. Since the rolling shutter exposure is integrated and read out line by line, the initial and final centroid positions along the star point track are denoted $({a,b})$ and $({a^{\prime},b^{\prime}})$ respectively, so that:

$$\begin{array}{cc} {\left\{ {\begin{array}{l} {{a = M_{0} = M_{1}}}\\ {{b = N_{0}}} \end{array}} \right.}&{\left\{ {\begin{array}{l} {{a^{\prime} = a = M_{0} = M_{1}}}\\ {{b^{\prime} = N_{0} + \varDelta N},} \end{array}} \right.} \end{array}$$
$$\left\{ {\begin{array}{l} {a = {M_{0}}}\\ {b = {N_{0}} = {N}_{1}} \end{array}} \right.\begin{array}{cc} {}&{\left\{ {\begin{array}{l} {{a}{^{\prime}} = {M_{0}} + \Delta M}\\ {{b}{^{\prime}} = b = {N_{0}} = {N}_{1},} \end{array}} \right.} \end{array}$$
$$\left\{ {\begin{array}{c} {a = {M_{0}}}\\ {b = {N_{0}}} \end{array}} \right.\begin{array}{cc} {}&{\left\{ {\begin{array}{l} {{a}{^{\prime}} = {M_{0}} + ({M_{1}} - {M_{0}}) \cdot t_{rd} \cdot V_{x}}\\ {{b}{^{\prime}} = {N_{0}} + ({M_{1}} - {M_{0}}) \cdot t_{rd} \cdot V_{y},} \end{array}} \right.} \end{array}$$
$$\left\{ {\begin{array}{l} {\Delta N\textrm{ = }V_{x} \cdot ({M_{1}} - 1) \cdot {t_{rd}}}\\ {\Delta M\textrm{ = }V_{y} \cdot ({M_{1}} - 1) \cdot {t_{rd}}}\\ {\Delta MN\textrm{ = }V_{xy} \cdot ({M_{1}} - 1) \cdot {t_{rd}}.} \end{array}} \right.$$

Fig. 4. Motion track of ideal star point along image plane under rolling shutter exposure mode.

Since the star point here is an ideal point, Eqs. (10)–(13) can be regarded as describing the change of the centroid trajectory of the star point. It can be seen that the effect of the rolling shutter on the star point position depends on the imaging row number ${M_{1}}$ and the readout time $t_{rd}$.
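A minimal numerical illustration of Eq. (13) — the displacement of an ideal star point caused solely by the row delay — might look as follows; the row number, readout time and velocities used here are hypothetical.

```python
def ideal_rs_offsets(vx, vy, m1, t_rd):
    """Column/row displacement of an ideal star point caused purely by the
    rolling shutter row delay (Eq. (13)): the star is only read out when
    row M1 is integrated, (M1 - 1) * t_rd after the first row."""
    delay = (m1 - 1) * t_rd
    return vx * delay, vy * delay        # (delta_N, delta_M) in pixels

# A star imaged on row 800 with t_rd = 14.6 us and 50 px/s motion along each axis
print(ideal_rs_offsets(vx=50.0, vy=50.0, m1=800, t_rd=14.6e-6))
```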

The above analysis ignores the star dispersion radius and dynamic blurring, which introduces a certain error into the actual imaging position of the star point. Therefore, the star point trajectory modeling that considers the dispersion radius and dynamic blurring is analysed in detail below. First, the following definitions are made. The trailing length of the star point is denoted $L$. To ensure that more than 99.7% of the star point energy is included in the diffusion area of the star spot, the spot dispersion radius is taken as $3\rho$ [18]. Additionally, to clearly analyze the real change of the dispersed star point under the rolling shutter exposure mode and to facilitate the energy distribution model established in Section 3.3, a sampling window of size $U \times V$ is defined, i.e., the length and width in pixels of the region of the star point involved in the calculation, which is determined by the diffusion radius and trailing length of the star. As shown in Fig. 5, the gray square is the star spot in the static state, the purple block is the dynamic star spot imaged under the global shutter exposure mode, and the yellow block is the dynamic star spot imaged under the rolling shutter exposure mode; the figure shows the change of the star spot from static to dynamic. The imaging processes of the global shutter exposure mode and the rolling shutter exposure mode are shown in the lower left part of Fig. 5. It can be observed that the star points are imaged at the same time under global exposure, while rolling exposure is delayed by a certain time. If a single row of the imaging chip in rolling exposure mode is considered on its own, it can be regarded as an independent global exposure process with the same exposure time. Therefore, the trailing length of the star spot in a single line of the chip under the rolling shutter exposure mode is equal to that under the global shutter exposure mode, and the lines of the chip are integrated downward sequentially with a certain time interval. After synthesis, the final image of the star spot under the rolling shutter exposure mode is obtained.

Fig. 5. Motion track of real star under rolling shutter exposure mode along x-axis of image plane.

As shown in Fig. 5, the motion of the star spot along the $x$ axis of the image plane is analyzed first, and the star spot imaging under the rolling shutter exposure mode is divided into three steps:

In the first step, if there were no fixed inter-row delay, all pixels would be exposed at the same time and the imaging would be in the global shutter exposure mode, as shown in step one of Fig. 5.

In the second step, the inter-row delay is considered. Since the initial position of the star is not in the first row of the image plane, as shown in the first step, the distance between the initial imaging position and the top of the image plane is $n$ pixels. In the rolling shutter exposure mode, the imaging chip integrates line by line from the top to the bottom of the image plane, so the star spot is not imaged during the integration of the first $n$ lines, while it has a velocity along the $x$ axis of the image plane. $V_{S}$ is the scanning speed of the sensor chip, i.e., the number of lines integrated per second, so the time needed to scan one line is $1/V_{S}$, and $t_{rd}$ is the time required to read out one line. In the rolling shutter exposure mode, a line is read out as soon as its scanning is completed; therefore, $V_{S}$ is set equal to $1/t_{rd}$, and the star spot starts to integrate at time $t_{1}$, as shown in step two of Fig. 5.

In the third step, because of the inter-line delay of the rolling shutter exposure mode, the star point integration on the subsequent rows of the image plane produces distortion, which is related to the readout time and the row number where the star spot is located. The final imaging result is shown in step three of Fig. 5.

Considering the short exposure time of the star sensor, the motion is regarded as uniform linear motion. Similarly, let the centroid coordinates of the initial position (step 1 in Fig. 5) be $({a},{b})$ and let the centroid position of the final star point trajectory (step 3) be $({a^{\prime}},{b^{\prime}})$, then:

$$\left\{ {\begin{array}{l} {{a} = {M_{0}} = {M_{1}}}\\ {{b} = {N_{0}} + \frac{L_{x}}{2},} \end{array}} \right.$$
$$\left\{ {\begin{array}{l} {{a}{^{\prime}} = a = {M_{0}} = {M_{1}}}\\ {{b}{^{\prime}} = {N_{0}} + \frac{L_{x}}{2} + V_{x} \cdot ({M_{1}} - 1) \cdot {t_{rd}},} \end{array}} \right.$$
wherein $L_{x} = V_{x} \cdot t_{int}$, and the maximum deformation of the star spot is as follows:
$$H_{max,x} = V_{x} \cdot 6\rho \cdot t_{rd}.$$
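As a quick numerical check of Eqs. (14)–(16), the sketch below computes the centroid shift and the maximum deformation for a star spot moving purely along the $x$ axis; the initial row, velocity, dispersion and timing values are illustrative assumptions, not the paper's experimental parameters.

```python
def x_axis_rs_centroid(M0, N0, vx, rho, t_int, t_rd):
    """Centroid of a real star spot moving along the x axis under rolling shutter:
    initial centroid (Eq. (14)), imaged centroid (Eq. (15), with M1 = M0) and the
    maximum deformation of the spot (Eq. (16))."""
    Lx = vx * t_int                          # trailing length along x
    b = N0 + Lx / 2                          # Eq. (14): column of the initial centroid
    b_prime = b + vx * (M0 - 1) * t_rd       # Eq. (15): column after the row delay
    h_max_x = vx * 6 * rho * t_rd            # Eq. (16): maximum deformation
    return b, b_prime, h_max_x

b, b_p, h = x_axis_rs_centroid(M0=600, N0=400, vx=60.0, rho=0.8, t_int=0.1, t_rd=1e-4)
print(f"column shift = {b_p - b:.3f} px, max deformation = {h:.4f} px")
```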
In the same way, the motion of the star spot along the $y$ axis is analyzed, as shown in Fig. 6; this analysis is also divided into three steps:

Fig. 6. Motion track of real star under rolling shutter exposure mode along y-axis of image plane.

The first step is the same as the movement of the star point along the x axis.

In the second step, the star point moves along the $y$ axis direction, and its distance from the top of the image plane changes from $n$ to ${m}$; the critical time at which the star integration begins is ${t_2}$.

In the third step, due to the line-by-line integration under the rolling shutter exposure mode, while the star spot continues to move downward, the image of the star point on the image plane is stretched, and the final imaging position corresponds to time ${t_3}$.

Therefore, when the star spot moves along the $y$-axis direction under the rolling shutter exposure mode, the starting and final imaging positions of the star centroid can be calculated as follows:

$$\left\{ {\begin{array}{c} {{a} = {M_{0}} + \frac{{L_{y}}}{2}}\\ {{b} = {N_{0}} = {N}_{1},} \end{array}} \right.$$
$$\left\{ {\begin{array}{l} {{a}{^{\prime}} = \frac{{Ly + 6\rho + H_{max,y} + m}}{2}}\\ {{b}{^{\prime}} = {b} = {N_{0}} = {N}_{1},} \end{array}} \right.$$
wherein $L_{y} = V_{y} \cdot t_{int}$, and the maximum deformation of the star spot and the parameter $m$ in step two are as follows:
$$H_{max,y}\textrm{ = }\frac{{(L_{y} + 6\rho ) \cdot V_{y}}}{{V_{s} - V_{y}}},$$
$$m = n + \frac{{n \cdot V_{y}}}{{V_{s} - V_{y}}} = ({M_{0}} - 3\rho ) + \frac{{({M_{0}} - 3\rho ) \cdot V_{y}}}{{V_{s} - V_{y}}}.$$
When the star point moves in an arbitrary direction on the image plane, the analysis method is the same as that for the two axes alone. Because the star point is stretched along the $y$ axis, the abscissa of the star centroid also changes simultaneously. The centroid coordinates of the initial point and the actual imaging point are as follows:
$$\left\{ {\begin{array}{c} {{a} = {M_{0}} + \frac{{L_{y}}}{2}}\\ {{b} = {N_{0}} + \frac{{L_{x}}}{2},} \end{array}} \right.$$
$$\left\{ {\begin{array}{l} {{a}{^{\prime}} = \frac{{L_{y} + 6\rho + H_{max,y} + m}}{2}}\\ {{b}{^{\prime}}\textrm{ = }{N_{0}} + \frac{L_{x}}{2}\textrm{ + }(\frac{{L_{y} + 6\rho + H_{max,y} + m}}{2} - 1) \cdot V_{x} \cdot {t_{rd}}.} \end{array}} \right.$$
In conclusion, without considering the dispersion radius and tailing, the star point is an ideal particle whose position is the actual position of the star point, as shown in Eqs. (14), (17), and (21). When the imaging of the real star and the tailing caused by the motion are considered, the centroid position of the star is a certain point along its trajectory, as shown in Eqs. (15), (18), and (22). The difference between these two sets of results is an error that cannot be ignored; therefore, without correction, a large error may appear in the output attitude because of wrong star recognition. Furthermore, in Section 5, the centroid positioning accuracy of the star point under the rolling shutter exposure mode is studied in detail.

3.3. Modeling of star energy distribution under the rolling shutter exposure mode

The stars are imaged by the star sensor and converted into digital images through a series of photoelectric and analog-to-digital conversions. In Section 3.1 and Section 3.2, the velocity and the trajectory of the star spot on the image plane have been obtained respectively. Because the integration time and the readout time of each line in the image sensor are fixed under the rolling shutter exposure mode, the integration of a single line can be treated as a global exposure. In addition, under the rolling shutter exposure mode, the integration start times of successive lines are different, and each line is read out immediately after its integration is finished and then turns to the integration of the next frame. Based on the above analysis, by sequentially integrating and reading out each row of the star point with a certain time delay, the imaging model of the star spot under the rolling shutter exposure mode can be built.

The point spread function of optical lens is as follows:

$$h(x,y) = h_{Gau}(x) \cdot h_{Gau}(y) = \frac{1}{{2\pi {\rho ^2}}} \cdot \exp ( - \frac{{{x^2} + {y^2}}}{{2\rho _{}^2}}),$$
wherein $h_{Gau}({\cdot} )$ is the Gaussian distribution function. If the input light source is a line segment, the dispersion distribution function of the optical lens becomes [19]:
$$h_{LSSF}(x) = \frac{1}{L}\int_{x - L}^x {h_{Gau}(\textrm{u})du} = \frac{1}{{2L}}[{erf}(\frac{x}{{\sqrt 2 \rho }}) - {erf}(\frac{{x - L}}{{\sqrt 2 \rho }})],$$
wherein $L$ is the dynamic trailing length of the star point and $erf({\cdot} )$ is the Gaussian error function.
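A direct implementation of Eqs. (23) and (24) might look as follows (a minimal sketch using scipy's error function; the trailing length and dispersion value are illustrative only):

```python
import numpy as np
from scipy.special import erf

def h_gau(x, rho):
    """1-D Gaussian point spread factor used in Eq. (23)."""
    return np.exp(-x**2 / (2 * rho**2)) / (np.sqrt(2 * np.pi) * rho)

def h_lssf(x, L, rho):
    """Smeared (line-spread) function of Eq. (24): the Gaussian PSF averaged
    over a uniform segment of length L along the motion direction."""
    return (erf(x / (np.sqrt(2) * rho)) - erf((x - L) / (np.sqrt(2) * rho))) / (2 * L)

x = np.linspace(-5, 15, 7)
print(h_lssf(x, L=10.0, rho=0.8))
```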

It is assumed that the star point moves at a uniform speed $V_{xy}$ on the image plane, the angle between the movement direction and the $x$-axis of the image plane is $\theta$, and the trajectory conforms to the centroid movement law derived in Section 3.2. Setting the starting point as $({x_{0}},{y_{0}})$ and the center coordinate of the star spot at a certain time as $({{x_{0}} + V_{x} \cdot k \cdot t_{rd} \cdot \cos\theta ,\textrm{ }{y_{0}} + V_{y} \cdot k \cdot t_{rd} \cdot \sin\theta } )$, the energy distribution of the star spot is:

$$\begin{aligned} E_{dyn}(x,y) &= \mathrm{\Phi }\mathop \smallint \limits_0^{\frac{{T \cdot V_{s}}}{{V_{s} - V_{y}}}} h_{Gau}(x - {x_{0}} - V_{x} \cdot \textrm{k} \cdot {t_{rd}} \cdot \cos \theta ) \cdot h_{Gau}(y - {y_{0}} - V_{y} \cdot k \cdot {t_{rd}} \cdot \sin \theta )dt\\ & = \mathrm{\Phi } \cdot \frac{{T \cdot V_{s}}}{{V_{s} - V_{y}}} \cdot h_{LSSF}(u) \cdot h_{Gau}(v), \end{aligned}$$
wherein $k$ is the pixel row index, taking values $1, 2, 3, \ldots, k$. It can be seen that the integration time of the star trajectory in the energy distribution model of the rolling shutter exposure mode is increased. It is worth noting that the exposure time of the star sensor is not really increased here; rather, the motion of the star and the rolling shutter effect stretch the trajectory of the star, and it is necessary to integrate over the stretched length $H_{max}$. The time for the star point to move through this length is the "increased time", which can be obtained according to Section 3.2. This expression is in fact a vivid description of the process that generates the energy distribution of the star trajectory under the rolling shutter exposure mode. It is difficult to calculate the above integral directly; therefore, an orthogonal coordinate rotation transformation is carried out, and the coordinate system $xoy$ is rotated to obtain the coordinate system $XOY$, namely:
$$\left\{ {\begin{array}{l} {{x} = {x_{0}} + {X} \cdot \cos \theta - Y \cdot \sin \theta }\\ {y = {y_{0}} + X \cdot \sin \theta + Y \cdot \cos \theta .} \end{array}} \right.$$
By substituting Eq. (26) into Eq. (25), we obtain [20,21]:
$$E_{dyn}(X,Y) = \mathrm{\Phi } \cdot \frac{{T \cdot V_{s}}}{{V_{s} - V_{y}}} \cdot h_{LSSF}(X - {X_0}) \cdot h_{Gau}(Y - {Y_0}).$$
Discretize and integrate the dynamic energy of Eq. (27) to obtain the energy of each pixel:
$$\begin{array}{l} I_{ij} = \mathop \smallint \limits_{{y_j} - 0.5}^{{y_j} + 0.5} \mathop \smallint \limits_{{x_i} - 0.5}^{{x_i} + 0.5} {\eta _{{QE}}} \cdot K \cdot E_{\textrm{dy}n}^{}(x,y)dxdy\\ \begin{array}{cc} {}& = \end{array}\mathrm{\Phi } \cdot \frac{{T \cdot V_{s}}}{{V_{s} - V_{y}}} \cdot {\eta _{{QE}}} \cdot K\mathop \smallint \limits_{{x_i} - 0.5}^{{x_i} + 0.5} h_{LSSF}(x - {x_{0}})dx\mathop \smallint \limits_{{y_j} - 0.5}^{{y_j} + 0.5} h_{Gau}(y - {y_{0}})dy, \end{array}$$
wherein $\Phi$ is the incident flux of the star on the image plane, ${\eta _{\textrm{QE}}}$ is the quantum efficiency of the image sensor, and the conversion gain $K$ is a constant; in addition:
$$\Phi = {E_{0}} \cdot {2.512^{ - m}} \cdot \frac{{\pi {D^2}}}{4},$$
wherein $E_{0}$ is the irradiance on the outer surface of the earth's atmosphere, whose value is $2.96 \times {10^{ - 14}}\,\textrm{W/mm}^2$; $D$ is the optical lens aperture of the star sensor, and $m$ is the magnitude of the incident star light.

Defining $\Lambda_{j} = \{{j,({i,\textrm{ }j} )} \}$ as the integration region of the function $I_j$ for $j = 1, 2, 3, \ldots, R$, with $i$ taking the same values as $j$, and defining $\{{{\Theta _{j}}} \}_R$ as the $j$-th row of the zero matrix, then:

$$\begin{array}{l} {I_j} = {\mathop \smallint \limits_{\Lambda \textrm{j}}^{}} {\mathop \smallint \limits_{{x_i} - 0.5}^{{x_i} + 0.5}} {\eta _{{QE}}} \cdot K \cdot E_{dyn}(x,y)dxdy\\ = \mathrm{\Phi } \cdot \frac{{T \cdot V_{s}}}{{V_{s} - V_{y}}} \cdot {\eta _{{QE}}} \cdot K {\mathop \smallint \limits_{{{x}_i} - 0.5}^{{x_i} + 0.5}} h_{LSSF}({x - {x_{0}}} )dx{ {\mathop \smallint \limits_{\Lambda \textrm{j}}^{}}}h_{Gau}({y - {y_{0}}} )dy. \end{array}$$
Therefore, the imaging energy distribution model under rolling shutter exposure mode is as follows:
$$\Pi_{ij} = \sum\limits_{j}^R {(I_{j}} + \{ \Theta_{j}\}_R).$$
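The sketch below is a discrete-time approximation of this imaging model: a Gaussian spot moves across the detector while each row accumulates signal only during its own rolling-shutter exposure window, mimicking the row-by-row integration behind Eqs. (25)–(31). The window size, velocities and timing values are assumptions chosen only so that the example runs quickly; they are not the parameters of Table 1.

```python
import numpy as np
from scipy.special import erf

def gauss_cdf(z, mu, rho):
    """Integral of the 1-D Gaussian PSF up to z (used for per-pixel integration)."""
    return 0.5 * (1 + erf((z - mu) / (np.sqrt(2) * rho)))

def rolling_shutter_spot(n_rows=64, n_cols=64, x0=20.0, y0=10.0,
                         vx=80.0, vy=80.0, rho=0.8, t_int=0.05, t_rd=2e-4,
                         flux=1.0, n_steps=200):
    """Accumulate the energy of a moving Gaussian star spot, integrating each
    image row only during its own rolling-shutter exposure window."""
    img = np.zeros((n_rows, n_cols))
    cols = np.arange(n_cols)
    rows = np.arange(n_rows)
    total_t = t_int + (n_rows - 1) * t_rd            # span covering all row windows
    dt = total_t / n_steps
    for s in range(n_steps):
        t = s * dt
        cx, cy = x0 + vx * t, y0 + vy * t            # spot centre at time t
        # rows whose exposure window [r*t_rd, r*t_rd + t_int] contains t
        active = (t >= rows * t_rd) & (t <= rows * t_rd + t_int)
        # per-pixel energy: product of column and row Gaussian integrals
        col_int = gauss_cdf(cols + 0.5, cx, rho) - gauss_cdf(cols - 0.5, cx, rho)
        row_int = gauss_cdf(rows + 0.5, cy, rho) - gauss_cdf(rows - 0.5, cy, rho)
        img += flux * dt * np.outer(active * row_int, col_int)
    return img

spot = rolling_shutter_spot()
print("peak row/col:", np.unravel_index(spot.argmax(), spot.shape))
```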

4. Simulation experiment and result analysis

4.1. Simulation star map analysis

According to the star point trajectory and energy distribution models in Section 3, simulation verification is performed. The optical parameters of the star sensor are shown in Table 1. Stars with magnitude less than 6 are screened out from the SAO60 main star catalog, and binary and variable stars are eliminated; the remaining 5103 stars constitute the star catalog of this study. The angular velocity of the star sensor is set to (3°/s, 3°/s, 0°/s). An optical axis is randomly generated using the Monte Carlo method, and a motion-blurred star image is then simulated, from which the brightest eight stars are selected, as shown in Fig. 7(a); simultaneously, the dynamic star image under the global shutter exposure mode is simulated for the same optical axis, as shown in Fig. 7(b). Because the readout speed is fast in the imaging process, the difference in trail length between the two exposure modes is hard to see directly, as shown in Fig. 8.
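For reference, a uniformly distributed random boresight such as the one used here can be drawn as follows; this is a minimal sketch, and the seed and the right-ascension/declination parameterization are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_boresight():
    """Draw one optical-axis direction uniformly over the celestial sphere
    (Monte Carlo), returned as right ascension and declination in radians."""
    ra = rng.uniform(0.0, 2.0 * np.pi)
    dec = np.arcsin(rng.uniform(-1.0, 1.0))     # uniform sin(dec) -> uniform on the sphere
    return ra, dec

print(np.degrees(random_boresight()))
```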

Fig. 7. Simulation star map based on the proposed model.

Fig. 8. Comparison of centroid position in two modes.

Table 1. Parameter setting of star sensor

The star coordinates $({x}_{s,q},y_{s,q})$ in Fig. 7(a), with $q$ ranging from 1 to 8, are obtained with the traditional centroid positioning method [22,23] and taken as the simulated values. The coordinates at the central moment of star imaging in the global shutter exposure mode, i.e., the exposure mode without the rolling shutter effect, are selected as the reference values $(\widetilde {x}_{q},\widetilde {y}_{q})$, which can be obtained directly (see [20]). The difference between the above coordinates is the total error in the two directions under the rolling shutter exposure mode, including the x-axis error ${\delta _{x,q}}$ and the y-axis error ${\delta _{y,q}}$, whose expression is given in Eq. (32). The simulation results are shown in Table 2.

$$\left\{ {\begin{array}{c} {{\delta_{{x,q}}} = {x}_{s,q} - \widetilde {x}_{q}}\\ {{\delta_{y,q}} = y_{s,q} - \widetilde {y}_{q}\textrm{.}} \end{array}} \right.$$
It can be seen from the statistical results that there is an offset in the star position under the rolling shutter exposure mode compared with that under the global shutter exposure mode. This is because the star moves for a certain distance before its effective integration begins, which generates an error whose value depends on the initial position of the star on the imaging plane. In addition, considering the dispersion of the star point under the rolling shutter exposure mode brings a further offset to the centroid position. The specific influencing factors are studied in detail in the following Sections 4.2 and 4.3.
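A minimal version of gray-weighted centroiding (one common form of the traditional centroid positioning method referred to above) and of the error definition in Eq. (32) is sketched below; the synthetic spot, threshold and reference coordinates are illustrative only and are not the values behind Table 2.

```python
import numpy as np

def weighted_centroid(window, row0=0, col0=0, threshold=0.0):
    """Gray-weighted centroid of a star-spot window; row0/col0 give the
    window origin on the image plane."""
    w = np.clip(window - threshold, 0.0, None)
    rows, cols = np.indices(w.shape)
    s = w.sum()
    return row0 + (rows * w).sum() / s, col0 + (cols * w).sum() / s

def centroid_error(sim_xy, ref_xy):
    """Total rolling-shutter centroid error of Eq. (32): simulated minus reference."""
    return sim_xy[0] - ref_xy[0], sim_xy[1] - ref_xy[1]

# Toy example: a small Gaussian spot offset from the window centre
y, x = np.mgrid[0:11, 0:11]
spot = np.exp(-((x - 5.6) ** 2 + (y - 4.8) ** 2) / (2 * 0.8 ** 2))
row_c, col_c = weighted_centroid(spot)
print(centroid_error((col_c, row_c), ref_xy=(5.0, 5.0)))
```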

Table 2. Statistical results of centroid location

4.2. Centroid error caused by the initial imaging position of a star point

It can be seen from Eq. (20) that, when stars are imaged dynamically, the centroid error of star imaging on the image plane is related to the initial imaging position on the image plane, the angular velocity of the star sensor and the exposure time.

Since all the star points in the same frame have the same velocity and exposure time, the influence of different initial star positions on the centroid positioning error under rolling shutter exposure is analyzed below. To correspond to the subsequent experiments, the parameters of the star sensor are kept constant, as shown in Table 1. In the global shutter exposure mode, a row number is selected every 4 rows along the y-axis direction of the image plane and a column number is selected randomly along the x-axis direction, generating star coordinates that are taken as the true values of the centroid coordinates. Then, the angular velocity is set to 3, 5, 7, and 10°/s and the exposure time to 15, 50, and 100 ms respectively, and the above star points are imaged in the rolling shutter exposure mode without considering star dispersion and trailing. The centroids of the star maps are extracted with the star extraction algorithm, and the error is the difference between the centroid coordinates of the stars in the rolling shutter exposure mode and the true coordinates in the global shutter exposure mode; it is described as the position error, with the first line as reference, caused by the different initial positions of the stars under dynamic conditions. Figures 9 and 10 show the variation of the star centroid positioning error with different initial positions.

Fig. 9. Centroid positioning error in y-axis direction caused by different initial star positions.

Fig. 10. Centroid positioning error in x-axis direction caused by different initial star positions.

The results indicate that the centroid position error due to the different initial star positions increases with the row number of the star point on the image plane. Under a typical working condition with an angular velocity of 3°/s and an exposure time of 100 ms, the maximum centroid error of a single star relative to the first line of the image plane reaches 33.75 pixels and 33.62 pixels along the y-axis and x-axis directions, respectively. This type of error is the main error in the rolling shutter exposure mode and can be compensated by geometric or kinematic methods [11,24]. Because it is strongly affected by the angular velocity, the performance of such correction methods largely depends on the range and accuracy of the angular velocity estimation. In short, this kind of error can be compensated by existing methods, although it is also worth considering compensating it with more efficient approaches such as image processing or other algorithms.
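The linear growth of this error with the row number can be reproduced with a simplified sketch of Eqs. (10)–(13): the image-plane velocity is approximated here as $f \cdot \omega$ in pixels per second, and the focal length in pixels and the number of rows are hypothetical values rather than those of Table 1, so the sketch reproduces the trend of Figs. 9 and 10 rather than their exact magnitudes.

```python
import numpy as np

def initial_position_error(row, omega_deg_s, t_exp, f_px, n_rows):
    """Rolling-shutter centroid offset (pixels) of a star starting on a given row,
    relative to a star on the first row, assuming t_rd = t_exp / (n_rows - 1) and
    a small-angle image-plane velocity of roughly f * omega."""
    t_rd = t_exp / (n_rows - 1)
    v = f_px * np.deg2rad(omega_deg_s)          # px/s, small-angle approximation
    return v * (row - 1) * t_rd

rows = np.array([1, 256, 512, 768, 1024])
print(initial_position_error(rows, omega_deg_s=3.0, t_exp=0.1, f_px=3000.0, n_rows=1024))
```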

4.3. Centroid error considering the star spot dispersion and dynamic tailing

In Section 3.2, the error was thoroughly analyzed with the spot dispersion of stars taken into account. Therefore, the error curves under typical working conditions are obtained by numerical simulation in this section. When the dynamic tailing is considered, the main factors influencing the centroid error are the exposure time and the angular velocity [25,26]. To ensure that the star can be completely imaged on the image plane and that the experimental parameters remain consistent, the angular velocity $\omega_{x}$ is set to 3, 5, 7, and 10°/s respectively, and the variation of the x-axis centroid error with the exposure time from 5 to 100 ms is then analyzed. In addition, considering the star imaging and sensor performance in practice, the simulated star is set to the 4th magnitude and random noise of 8 electrons is added to the simulation. The x-axis error simulation results are shown in Fig. 11. Under the same conditions, the angular velocity $\omega_{y}$ is set to 3, 5, 7, and 10°/s respectively, and the y-axis error analysis results are shown in Fig. 12.

Fig. 11. Variation of x-axis centroid error with exposure time and angular velocity.

Fig. 12. Variation of y-axis centroid error with exposure time and angular velocity.

The results indicate that when the angular velocity is constant, the centroid positioning error first decreases and then increases with increasing exposure time. In the process of star imaging, as the exposure time increases, the energy of the star spot increases rapidly from zero, and the increase in energy is much greater than the increase in noise, so the random error of star spot centroid positioning decreases. However, when the exposure time increases beyond a certain point, the influence of noise on centroid location grows with the trailing length, so the random error of centroid location increases. The results of Figs. 11 and 12 show that under a working condition with an angular velocity of 10°/s and an exposure time of 100 ms, the maximum errors along the x-axis and y-axis of the image plane reach 0.511 pixels and 1.125 pixels, respectively. Additionally, when the exposure time is fixed, the centroid positioning error increases with the angular velocity, and the error along the y-axis of the image plane is larger than that along the x-axis, because the rolling shutter effect makes the image sensor integrate line by line along the y direction and thus increases the error in that direction. By contrast, the error along the x direction of the image plane depends only on the original radius of the star spot, and motion in this direction does not change the star dispersion range in the integration direction of the image sensor, so it is smaller. Although the error caused by star dispersion is smaller than that caused by the initial position of the star, the results show that it cannot be ignored when developing a high-precision star sensor. The existence of this error offers the possibility of further improving the accuracy of the star sensor and provides guidance for improving its dynamic measurement accuracy under the rolling shutter effect.

5. Experiment analysis

Multi-frame continuous exposure is used to analyze the centroid error. The experimental platform is shown in Fig. 13. The specific steps are as follows. First, the single-star simulator and the high-precision three-axis turntable are used to calibrate the camera according to the established procedure. Then, the rotation interval of the outer frame of the high-precision three-axis turntable is set to -20° ∼ +20°, and the acquisition interval between two consecutive images and the exposure time are set to 50 ms and 15 ms respectively. The angular velocity (i.e., the x-axis angular velocity) is set to 3, 5, 7, and 10°/s, and the star sensor is aimed at the 4th-magnitude star produced by the single-star simulator for continuous exposure sampling. The exposure time is then adjusted to 50 ms and 100 ms and the above acquisition is repeated. In total, 12 groups of images with different angular velocities under different exposure times are collected. Similarly, after the outer frame acquisition is completed, the middle frame is rotated to the 20° position, and the corresponding star maps are collected over the same interval.

Fig. 13. The experimental platform.

The collected star maps are not acquired at the same time, so it is necessary to normalize the continuously collected star maps. According to the previous analysis, the centroid positioning error of the star point includes the centroid error $P_{1}$ caused by the different starting positions of the imaged stars and the centroid error $\varepsilon$ caused by the diffusion length of the spot under the rolling shutter exposure mode.

The following analysis takes the exposure time of 50 ms and the x-axis angular velocity of 3°/s as an example. Firstly, the background noise of 50 consecutive images is counted, and the star images are preprocessed to eliminate the background noise; in addition, a median filter is used to remove the interference of other factors. Then, 20 consecutive star images are taken, the star extraction algorithm is used to locate the centroid in the first frame, and its ordinate, marked as $y_{1}$, is used as the fixed point. With a sampling interval $\Delta T$ of 50 ms and without the rolling shutter effect, $V_{y} \cdot \Delta T$ would simply be added to $y_{1}$. In addition, both the error $P_{1}$ caused by the different starting positions of stars and the centroid error $\varepsilon$ caused by the diffusion length of the star spot should be considered. Therefore, the star position in the second frame can be expressed as:

$$y_{2} = y_{1} + {V_{y}} \cdot \Delta T + P_{1} + \varepsilon_{1},$$
wherein, $P_{1}$ can be obtained according to the analysis conclusion in Section 4.2.

Additionally, since the moving distance ${V_{y}} \cdot \Delta T$ of the dynamic star is known and the measurement value of the second frame is also known, the measurement value of $\varepsilon_{1}$ can be obtained. In the same way, the error measurement values $\varepsilon_{2}$ and $\varepsilon_{3}$, caused by the diffusion length of the spot in the following frames, can be obtained, and the error $\varepsilon$ of all images is obtained by analogy. In addition, another 20 consecutive frames are selected, the procedure is repeated 100 times, and the average is taken as the error value. Different angular velocities are then set and the above process is repeated. After finishing the experiments on the influence of different angular velocities on the error under a 50 ms exposure time, exposure times of 15, 30, 70, and 100 ms are selected respectively and the above experiments are repeated.
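In code form, the per-frame dispersion error is simply the residual of Eq. (33); the numbers below are hypothetical stand-ins for the measured centroids, the velocity and the Section 4.2 correction $P_1$, not values from the experiment.

```python
def dispersion_error(y1, y2, vy, dT, p1):
    """Per-frame centroid error caused by star-spot dispersion under rolling shutter,
    solved from Eq. (33): y2 = y1 + Vy*dT + P1 + eps."""
    return y2 - (y1 + vy * dT + p1)

# Hypothetical numbers: two consecutive centroids 50 ms apart, Vy = 157 px/s,
# and a row-position error P1 taken from the Section 4.2 analysis
print(dispersion_error(y1=612.40, y2=620.41, vy=157.0, dT=0.05, p1=0.12))
```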

The centroid error at each point in every group is counted, and the experimental statistical results and the simulation results are shown together in Figs. 14 and 15; the trends of the two sets of results are consistent. When the exposure time is fixed, the centroid error increases with the angular velocity; similarly, the error increases with the exposure time when the angular velocity is fixed. In addition, the experimental centroid error curves agree well with the simulation results, with maximum differences between the two of 0.054 pixels and 0.143 pixels in the x-axis and y-axis directions, respectively. The existence of this kind of error offers the possibility of further improving the accuracy of the star sensor. At the same time, the results can serve as theoretical values of the centroid error under the rolling shutter exposure mode, which is useful for verifying the accuracy of related compensation methods. In follow-up work, we will use the star map in rolling shutter exposure mode to estimate the angular velocity, compensate for this error to improve the accuracy of the star sensor, and then evaluate the compensation method against the experimental results of this paper.

Fig. 14. Centroid error caused by star spot dispersion and dynamic trailing along the x-axis of the image plane.

Fig. 15. Centroid error caused by star spot dispersion and dynamic trailing along the y-axis of the image plane.

6. Conclusion

The main contributions of this paper are summarized as follows:

Firstly, the mathematical models of star distortion and energy distribution under the rolling shutter exposure mode are established, and the quantitative relationship between the star distortion variables and the influencing factors is given. Secondly, the star spot trajectory under the rolling shutter exposure mode is obtained through simulation experiments, and its accuracy is evaluated. Thirdly, the centroid positioning errors influenced by the initial positions and by the dispersion of the stars in the rolling shutter exposure mode are analyzed in detail. The experimental results show that the dispersion affects the centroid positioning accuracy and that this error cannot be ignored. By establishing an accurate imaging model of the star point under the rolling shutter exposure mode and analyzing the centroid error in this mode, this paper lays a foundation for further eliminating the rolling shutter effect.

The specific conclusions are as follows:

Considering the star dispersion radius and dynamic tailing, star trajectory and energy distribution models under the rolling shutter exposure mode are proposed. In addition, the positioning accuracy of the star centroid in the rolling shutter mode is analysed, which is mainly affected by two factors. On the one hand, the initial position of the star point has a significant impact on the centroid positioning; the simulation results show that, under the typical condition with an angular velocity of 3°/s and an exposure time of 100 ms, the errors along the y-axis and x-axis caused by different initial star positions reach 33.75 pixels and 33.62 pixels respectively, with the first line of the sensor as reference. This error gradually increases as the row position of the star on the image plane increases, and it can be compensated by existing geometric and kinematic methods. On the other hand, the accuracy of centroid positioning is affected by the dispersion length of the star spot; this error first decreases and then increases with the exposure time when the angular velocity is fixed, and the maximum errors along the two axes of the image plane reach 0.511 pixels and 1.125 pixels respectively for an angular velocity of 10°/s and an exposure time of 100 ms, which cannot be ignored when developing a high precision star sensor. The experimental results indicate that the centroid positioning errors are in good agreement with the simulation results, and the star map in rolling shutter exposure mode can be effectively generated with the proposed model to analyze the deformation under complex conditions. Moreover, this model lays a foundation for eliminating the rolling shutter effect by using the star map of the rolling shutter exposure mode in future research, which will help to further improve the dynamic performance of the star sensor.

Funding

National Key Research and Development Program of China (2019YFA0706002).

Acknowledgments

The authors thank the Key Laboratory of Precision Opto-Mechatronics Technology, Ministry of Education, Beihang University for their support.

Disclosures

The authors declare no conflicts of interest.

References

1. Z. Jun, H. Yuncai, W. Li, and L. Da, “Studies on dynamic motion compensation and positioning accuracy on star tracker,” Appl. Opt. 54(28), 8417–8424 (2015). [CrossRef]  

2. C. C. Liebe, K. Gromov, and D. M. Meller, “Toward a stellar gyroscope for spacecraft attitude determination,” J. Guid. Control Dyn. 27(1), 91–99 (2004). [CrossRef]  

3. X. Shen, C. Liu, Y. Gao, Z. Zhou, and J. Xu, “An approach on motion blurred star map simulation for star sensor,” Proc. SPIE 108460, 10846–10851 (2018). [CrossRef]  

4. J. Shen, G. Zhang, and X. Wei, “Simulation analysis of dynamic working performance for star trackers,” J. Opt. Soc. Am. A 27(12), 2638–2647 (2010). [CrossRef]  

5. D. Liu, X. Chen, X. Liu, and C. Shi, “Star image prediction and restoration under dynamic conditions,” Sensors 19(8), 1890–1913 (2019). [CrossRef]  

6. S. Wang, S. Zhang, M. Ning, and B. Zhou, “Motion Blurred Star Image Restoration Based on MEMS Gyroscope Aid and Blur Kernel Correction,” Sensors 18(8), 2662–2688 (2018). [CrossRef]  

7. W. Tan, S. Qin, R. M. Myers, E. T. J. Morris, and D. Dai, “Centroid error compensation method for a star tracker under complex dynamic conditions,” Opt. Express 25(26), 33559–33574 (2017). [CrossRef]  

8. S. Zhang, F. Xing, T. Sun, Z. You, and M. Wei, “Novel approach to improve the attitude update rate of a star tracker,” Opt. Express 26(5), 5164–5181 (2018). [CrossRef]  

9. A. Broggi, A. Cionini, F. Ghidini, and P. Zani, “Handling rolling shutter effects on semi-global matching in automotive scenarios,” in Proceedings of IEEE Intelligent Vehicles Symposium (IEEE, 2017), pp. 1134–1139.

10. Y. Zhou, M. Daakir, E. Rupnik, and M. Pierrot-Deseillign, “A two-step approach for the correction of rolling shutter distortion in UAV photogrammetry,” ISPRS J. Photogrammetry and Remote Sensing 160, 51–66 (2020). [CrossRef]  

11. D. Spiller, F. Curti, A. Luigi, B. Simone, and S. Gianfranco, “High Angular Rate Determination Algorithm Based on Star Sensing,” in Proceedings of Advances in the Astronautical Sciences, I. J. Gravseth, ed. (Academic, 2015), pp. 645–656.

12. M. Meingast, C. Geyer, and S. Sastry, “Geometric Models of Rolling-Shutter Cameras,” Computer Science (2005).

13. J. Enright and T. Dzamba, “Rolling Shutter Compensation for Star Trackers,” in Proceedings of AIAA Guidance, Navigation, and Control Conference, Minneapolis, Minnesota (Academic, 2012).

14. C. Albl, Z. Kukelova, V. Larsson, and T. Pajdla, “Rolling Shutter Camera Absolute Pose,” IEEE Trans. Pattern Anal. Mach. Intell. 42(6), 1439–1452 (2020). [CrossRef]  

15. R. M. Haralick, D. Lee, K. Ottenburg, and M. Nolle, “Analysis and solutions of the three point perspective pose estimation problem,” in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE, 1991), pp. 592–598.

16. O. Ait-Aider, N. Andreff, J. M. Lavest, and P. Martinet, “Exploiting Rolling Shutter Distortions for Simultaneous Object Pose and Velocity Computation Using a Single View,” in Proceedings of IEEE International Conference on Computer Vision Systems (IEEE, 2006), p. 35.

17. W. Zhang, W. Quan, and L. Guo, “Blurred star image processing for star sensors under dynamic conditions,” Sensors 12(5), 6712–6726 (2012). [CrossRef]  

18. L. Wan, Y. Zhang, P. Jia, and J. Xu, “Modeling and rectification of rolling shutter effect in CMOS aerial cameras,” J. Harbin Institute Technol. 24(4), 71–77 (2017). [CrossRef]  

19. Z. Wang, J. Jiang, and G. Zhang, “Global field-of-view imaging model and parameter optimization for high dynamic star tracker,” Opt. Express 26(25), 33314–33332 (2018). [CrossRef]  

20. J. Yan, J. Jiang, and G. Zhang, “Dynamic imaging model and parameter optimization for a star tracker,” Opt. Express 24(6), 5961–5983 (2016). [CrossRef]  

21. J. Yan, J. Jiang, and G. Zhang, “Modeling of intensified high dynamic star tracker,” Opt. Express 25(2), 927–948 (2017). [CrossRef]  

22. J. Jiang, K. Xiong, W. Yu, J. Yan, and G. Zhang, “Star centroiding error compensation for intensified star sensors,” Opt. Express 24(26), 29830–29842 (2016). [CrossRef]  

23. X. Wan, G. Wang, X. Wei, and G. Zhang, “Star Centroiding Based on Fast Gaussian Fitting for Star Sensors,” Sensors 18(9), 2836 (2018). [CrossRef]  

24. D. Spiller and F. Curti, “A geometrical approach for the angular velocity determination using a star sensor,” Acta Astronautica, (2020), https://doi.org/10.1016/j.actaastro.2020.11.043.

25. X. Wei, W. Tan, J. Li, and G. Zhang, “Exposure Time Optimization for Highly Dynamic Star Trackers,” Sensors 14(3), 4914–4931 (2014). [CrossRef]  

26. T. Sun, F. Xing, Z. You, and M. Wei, “Motion-blurred star acquisition method of the star tracker under high dynamic conditions,” Opt. Express 21(17), 20096–20110 (2013). [CrossRef]  



