
Method of space object detection by wide field of view telescope based on its following error

Open Access

Abstract

Space objects and stars appear similar in images acquired by a wide field of view (FOV) survey telescope. This work investigates a unique property of the telescope observing a space object in satellite tracking mode, namely that the azimuth and altitude angles of the object and those of the optical axis of the telescope vary, in theory, in the same way. Based on this property, we derive that the movement distance of the object between two adjacent frames is minimal compared to that of any star. With this conclusion, it is possible to detect the object among a large number of background stars. To improve the robustness of the detection, a set of candidate objects is created. Finally, a clustering algorithm is employed to extract the motion trajectory of the object. Unlike traditional detection methods or techniques based on image processing and analysis, the proposed detection is closely related to the trajectory-following performance of the telescope, which provides a more reliable basis for improving the detection rate. The feasibility and accuracy of the algorithm were verified on the 1.2-meter wide FOV survey telescope at the Jilin station of the Changchun observatory, with a detection rate of over 98%. The test results indicate that the method satisfies the demand for detecting the object in open-loop tracking. If the detection method is implemented in hardware, it can also detect the object in closed-loop tracking, giving it a wider scope of application.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Ground-based telescopes are important equipment for the observation of space objects in medium and high Earth orbit. Telescopes can make up for the limited range of radar detection and provide highly accurate astronomical positioning of the object [1,2]. To improve the accuracy of astronomical positioning, the optical system of the telescope is generally designed with a short focal length and a wide FOV [3–6], and it can accommodate a large-area camera platform suited to long-exposure astronomical applications. Such a telescope can capture a sufficient number of reference stars while photographing space objects; as long as the positions of the reference stars are known, the positions of the space objects can be determined [7]. The advantage of this optical measurement method is that the accuracy of astronomical positioning is not affected by errors in the various subsystems of the telescope. However, the short focal length reduces the object to a dot on the image: the object occupies only a small number of pixels, resulting in a lack of detail. At the same time, the wide FOV introduces a vast number of stars into the image, in which the object is easily overwhelmed. Therefore, how to detect space objects among a vast number of stars is a key issue for astronomical positioning.

The algorithms for the detection of space objects have been widely discussed in recent years. There are three types of traditional space object detection methods according to the imaging strategy: the masking technique, the scanning technique, and the stacking method. The masking technique uses a template frame to mask all background stars on the observed frames, and the unmasked parts of the masked frames are then scanned for objects. The essence of the mask is the difference between an observed frame and a template frame; the template frame is used to generate a mask covering every star found on the frame. A template frame can be one frame of a sequence with the same field of view [8], or it can be simulated by accessing a star catalogue [3]. This technique has been applied to the optical tracking subsystem of the ZIMLAT telescope, a laser ranging system at the Zimmerwald observatory [8,9]. The ESA Space-Debris Telescope at the Teide Observatory on Tenerife also uses the technique to detect space debris [10]. The scanning technique fits GEO (geostationary orbit) objects using several characteristics (e.g., shape). To optimize the signal-to-noise ratio (SNR) for the GEO object during the exposures, the telescope tracks an object with its expected motion. For example, during the exposure time, the MODEST (Michigan Orbital Debris Survey Telescope) telescope tracks at the sidereal rate, and the charge on the CCD is shifted in reverse so that GEO objects appear as dots and stars as streaks [11–13]. Unlike MODEST, the TAROT (Rapid Action Telescopes for Transient Objects) telescope remains pointed in a set direction for the exposure time; the photons from the observed object accumulate on the same pixels of the CCD, so the object appears as a dot or smear while the stars appear as streaks [14,15]. Both tracking modes make it possible to distinguish between space objects and background stars in a single frame. Owing to this geometric difference between the two, further methods are used to measure the difference accurately and find potential objects; examples include PSF (point spread function) fitting [16], mathematical morphology [17], and the moment of inertia [18]. The stacking method primarily works for the detection of faint GEO objects that are undetectable on a single frame, especially space debris [19]. A large number of observed frames are cut into sub-frames to match an object's movement, and a median frame is created from these sub-frames. In this method, photons from objects are accumulated on the same pixels and background stars are completely removed by taking the median. At present, the method has succeeded in detecting GEO debris using the 35 cm telescope at Mt. Nyukasa, Nagano Prefecture. The line-identifying technique is complementary to the stacking method [20]; it does not need to presume any particular movement of an object, as the stacking method does, and it enables the detection of near-Earth asteroids and unknown space debris.

Many new algorithms have also been proposed to improve the detection rate of space objects, including optical flow [21], image deconvolution [22], quasi-hypothesis-testing [23,24], support vector machines [25], and hybrid convolutional neural networks [26].

The main topic of this research is the detection of a space object among a vast number of background stars. The subject of study is a wide FOV survey telescope that uses a satellite tracking mode to observe satellites or debris. The mission of such telescopes is to observe objects for which orbit predictions are already available, and to acquire astronomical positioning data for their orbit improvement.

In this research, we propose a detection method for space objects observed by wide FOV telescopes. Unlike the methods and techniques presented above, this method is characterized by the fact that the success rate of detection is closely related to the trajectory-following performance of the telescope. The research addresses three aspects in turn: open-loop tracking, the following error of the telescope, and space object detection. Open-loop tracking is used when tracking the object, and it enables the motion control and the space object detection to be carried out separately. The following error reflects the quality of the motion control; high-quality motion control reduces it. An equivalent sine simulation is used to obtain its value. Space object detection consists of four components: (A) measuring the centroids of the stars, (B) calculating the movement distance between stars in adjacent frames, (C) creating the suspicious objects list, and (D) associating the coordinates of the suspicious objects. The following error serves as the threshold for the detection.

2. Open-loop tracking

The major difference between an optical survey of a space object and astronomical observation is that the object moves fast with respect to the stellar background. Therefore, the telescope needs to observe the object in satellite tracking mode, which can be divided into closed-loop tracking and open-loop tracking.

Closed-loop tracking is a real-time approach. In addition to depending on the ephemeris to keep track of the object, the telescope uses image data to correct its pointing, which brings the object into the center of the FOV [27]. However, as camera technology develops, image data become increasingly large and computers struggle to cope with the volume. Closed-loop tracking then has difficulty satisfying the demand for real-time performance, so the telescope cannot observe an object for a long time.

Open-loop tracking means that only the ephemeris is used to track the object during the observation. At present, the orbit prediction accuracy of cataloged objects is sufficient to support open-loop tracking [28]. Compared with closed-loop tracking, open-loop tracking separates the motion control from the object detection, and its anti-interference capability and robustness are very strong.

As the FOV of the survey telescope is sufficiently large, the object remains within the FOV throughout the observation. Therefore, the telescope can observe the object by means of open-loop tracking. There is no need to satisfy the demand for real-time performance in the object detection, so it can be processed afterwards.

3. Following error of telescope

3.1 Motion control

Based on the previous discussion, the telescope adopts open-loop tracking to observe the object. In general, the complete satellite tracking mode consists of two steps: motion control and object detection. As open-loop tracking allows the two to be implemented separately, open-loop tracking itself includes only the motion control step, whereas closed-loop tracking must perform both. In particular, the trajectory of the telescope can be predicted in advance because the orbit prediction of the object is available. The servo system controller then sends control commands, which include position, velocity, acceleration, and jerk, to its driver in accordance with this trajectory. This process is called program tracking [29]; it is an open-loop control process. Program tracking has poorer pointing accuracy than closed-loop tracking; however, the survey telescope is still able to capture satellites thanks to its wide FOV.

The quality of motion control depends on trajectory generation and PID (proportional–integral–derivative) control: the former is implemented in the controller and the latter in the driver.

The core problem of trajectory generation is obtaining a smooth path for any motion. For a smooth path, not only the position but also the velocity and acceleration must be continuous [30]. At any point where these conditions are not met, jitter will occur in theory, resulting in undesired vibration of the servo system. As the predicted ephemeris of a satellite is given as discrete data points, polynomial interpolation can be employed to generate a motion path that passes through all the given points. In general, a fifth-order polynomial keeps position, velocity, and acceleration continuous, with position and velocity remaining smooth.

The position interpolation polynomial takes the form [30]:

$$s(t) = {C_0} + {C_1}t + {C_2}{t^2} + {C_3}{t^3} + {C_4}{t^4} + {C_5}{t^5}.$$

Then the velocity can be written as:

$$v(t) = \frac{{ds}}{{dt}} = {C_1} + 2{C_2}t + 3{C_3}{t^2} + 4{C_4}{t^3} + 5{C_5}{t^4}.$$

Again, using the second derivative, we obtain the following acceleration:

$$a(t) = \frac{{{d^2}s}}{{d{t^2}}} = 2{C_2} + 6{C_3}t + 12{C_4}{t^2} + 20{C_5}{t^3}.$$

Similarly,

$$j(t) = \frac{{{d^3}s}}{{d{t^3}}} = 6{C_3} + 24{C_4}t + 60{C_5}{t^2},$$
where $j(t)$ is the jerk.
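As a concrete illustration of Eqs. (1)–(4), the sketch below solves for the six coefficients of one quintic segment from position, velocity, and acceleration constraints at both ends; the function name and the numerical boundary values are hypothetical, not taken from the paper.

```python
import numpy as np

def quintic_coeffs(T, p0, v0, a0, p1, v1, a1):
    """Coefficients C0..C5 of s(t) = C0 + C1 t + ... + C5 t^5 such that
    position, velocity and acceleration match at t = 0 and t = T."""
    # The conditions at t = 0 fix the first three coefficients directly.
    C0, C1, C2 = p0, v0, a0 / 2.0
    # The conditions at t = T give a 3x3 linear system for C3, C4, C5.
    A = np.array([
        [T**3,     T**4,      T**5],       # position row
        [3 * T**2, 4 * T**3,  5 * T**4],   # velocity row
        [6 * T,    12 * T**2, 20 * T**3],  # acceleration row
    ])
    b = np.array([
        p1 - (C0 + C1 * T + C2 * T**2),
        v1 - (C1 + 2 * C2 * T),
        a1 - 2 * C2,
    ])
    C3, C4, C5 = np.linalg.solve(A, b)
    return np.array([C0, C1, C2, C3, C4, C5])

# Interpolate between two hypothetical ephemeris points one second apart.
C = quintic_coeffs(T=1.0, p0=10.0, v0=0.5, a0=0.0, p1=10.6, v1=0.7, a1=0.1)
s_mid = np.polyval(C[::-1], 0.5)   # position on the path at t = 0.5 s
```

Chaining such segments across consecutive ephemeris points keeps position, velocity, and acceleration continuous at every joint.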

In addition, the velocity, acceleration, and jerk are used as feedforward terms to eliminate most of the lag and further improve the trajectory-following performance [31]. Specifically, the feedforward control provides an open-loop branch outside the PID control loop, where the velocity, acceleration, and jerk are input to correct the loop. In its simplest form, the feedforward control can be modeled as shown in Fig. 1.

Fig. 1. The diagram of feedforward cascade control.
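The structure of Fig. 1 can be sketched as follows, assuming a textbook discrete PID on the position error plus feedforward gains kv, ka, and kj; the actual controller structure and gain values of the servo system are not specified in the paper.

```python
class PID:
    """Minimal discrete PID controller acting on the following error."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def servo_command(pid, s_ref, v_ref, a_ref, j_ref, s_meas, kv, ka, kj):
    """One servo cycle: feedback on the position error plus feedforward."""
    u_fb = pid.update(s_ref - s_meas)             # closed-loop PID branch
    u_ff = kv * v_ref + ka * a_ref + kj * j_ref   # open-loop feedforward branch
    return u_fb + u_ff
```

Because the feedforward branch injects the planned velocity, acceleration, and jerk directly, the PID branch only has to correct the residual error rather than drive the whole motion.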

3.2 Equivalent sine simulation

An equivalent sine simulation is a practical engineering method for checking the trajectory-following accuracy of the servo system [32]. Any trajectory can be decomposed into sinusoids by the Fourier transform, and therefore one or more sinusoidal motions can be substituted for the object motion. If the maximum velocity ${\dot{\theta }_{\max }}$ and acceleration ${\ddot{\theta }_{\max }}$ of an object are known, the sinusoidal motion $\theta (t)$ can be derived:

$$\theta (t) = {\theta _m}\sin (\omega t),$$
where the sinusoidal amplitude is ${\theta _m} = \dot{\theta }_{\max }^2/{\ddot{\theta }_{\max }}$, the sinusoidal frequency is $\omega = {\ddot{\theta }_{\max }}/{\dot{\theta }_{\max }}$, and the sinusoidal period is $T = 2\pi /\omega$. The sinusoidal motion, from which the following error of the telescope can be measured, is generated according to the maximum velocity and acceleration of the object.

It is assumed that both the peak and RMS (root mean square) values of the following error have been obtained through an equivalent sine simulation.

Assuming that the pixel size of the CCD is $a \times b\;(\mu {\textrm{m}^2})$, the instantaneous FOV of the CCD is [33]

$$\left\{ {\begin{array}{c} {\begin{array}{cc} {\alpha = a/f}&{(\textrm{rad)}} \end{array}}\\ {\begin{array}{cc} {\beta = b/f}&{(\textrm{rad)}} \end{array}} \end{array}} \right.,$$
where f is the focal length of the optical system, α is the field angle in the x-direction of the CCD, and β is the field angle in the y-direction of the CCD. Therefore, the peaks of the following error in the CCD are
$$\left\{ {\begin{array}{cc} {N_{\max }^\alpha = {\delta_A}/(\frac{{{{360}^o}}}{{2\pi }} \times \alpha \times 3600^{\prime\prime})}&{(\textrm{pixel})}\\ {N_{\max }^\beta = {\delta_h}/(\frac{{{{360}^o}}}{{2\pi }} \times \beta \times 3600^{\prime\prime})}&{(\textrm{pixel})} \end{array}} \right.,$$
where ${\delta _A}(")$ and ${\delta _h}(")$ are the peaks of the following error for the azimuth angle A and the altitude angle h, respectively.

The RMS values of the following error in the CCD are

$$\left\{ {\begin{array}{cc} {N_{\textrm{RMS}}^\alpha = {\sigma_A}/(\frac{{{{360}^o}}}{{2\pi }} \times \alpha \times 3600^{\prime\prime})}&{(\textrm{pixel})}\\ {N_{\textrm{RMS}}^\beta = {\sigma_h}/(\frac{{{{360}^o}}}{{2\pi }} \times \beta \times 3600^{\prime\prime})}&{(\textrm{pixel})} \end{array}} \right.,$$
where ${\sigma _A}(")$ and ${\sigma _h}(")$ are the RMS values of the following error for the azimuth angle A and the altitude angle h, respectively.
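To make the unit handling concrete, the sketch below computes the equivalent sine parameters of Eq. (5) and converts a following-error peak from arcseconds to pixels via Eqs. (6)–(7); all numerical values are hypothetical, not the actual parameters of the telescope.

```python
import math

# Equivalent sine from the object's maximum velocity and acceleration, Eq. (5).
v_max, a_max = 1.5, 0.4              # deg/s and deg/s^2 (hypothetical)
theta_m = v_max**2 / a_max           # sinusoidal amplitude (deg)
omega = a_max / v_max                # sinusoidal frequency (rad/s)
T_sine = 2 * math.pi / omega         # sinusoidal period (s)

# Instantaneous FOV of one pixel, Eq. (6), for a hypothetical focal plane.
a_pix, f = 9.0e-6, 1.8               # pixel size (m) and focal length (m)
alpha = a_pix / f                    # field angle of one pixel (rad)

# Peak following error from arcseconds to pixels, Eq. (7).
delta_A = 2.0                        # peak azimuth following error (arcsec)
N_max_alpha = delta_A / (math.degrees(alpha) * 3600.0)
```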

4. Space object detection

4.1 Theoretical foundations

Because the survey telescope employs program tracking to observe the object, the azimuth and altitude angles of the object and those of the optical axis of the telescope vary, in theory, in the same way. The core idea of the algorithm is therefore to compare the motion direction of the object with that of the telescope. If the two directions are the same, the object can be detected against the stellar background, as shown in Fig. 2.

Fig. 2. Schematic diagram of the telescope tracking a space object.

The rectangular coordinate system (x, y, z) and the spherical coordinate system (A, h) for the same star at the same time are related as follows [34]:

$$\left[ {\begin{array}{c} x\\ y\\ z \end{array}} \right] = \left[ {\begin{array}{c} {\rho \cos h\cos A}\\ {\rho \cos h\sin A}\\ {\rho \sin h} \end{array}} \right],$$
where ρ is the distance between the star and the center of the celestial sphere, A is the azimuth angle, and h is the altitude angle. Thus, the motion vector of the star from the (k−1)th epoch to the kth epoch is
$${\boldsymbol K}_{k,k - 1}^s = \left[ {\begin{array}{c} {x_k^s - x_{k - 1}^s}\\ {y_k^s - y_{k - 1}^s}\\ {z_k^s - z_{k - 1}^s} \end{array}} \right] = \left[ {\begin{array}{c} {\rho_k^s\cos h_k^s\cos A_k^s - \rho_{k - 1}^s\cos h_{k - 1}^s\cos A_{k - 1}^s}\\ {\rho_k^s\cos h_k^s\sin A_k^s - \rho_{k - 1}^s\cos h_{k - 1}^s\sin A_{k - 1}^s}\\ {\rho_k^s\sin h_k^s - \rho_{k - 1}^s\sin h_{k - 1}^s} \end{array}} \right].$$

Similarly, the motion vector of the telescope from the (k−1)th epoch to the kth epoch is

$${\boldsymbol K}_{k,k - 1}^t = \left[ {\begin{array}{c} {x_k^t - x_{k - 1}^t}\\ {y_k^t - y_{k - 1}^t}\\ {z_k^t - z_{k - 1}^t} \end{array}} \right] = \left[ {\begin{array}{c} {\rho_k^t\cos h_k^t\cos A_k^t - \rho_{k - 1}^t\cos h_{k - 1}^t\cos A_{k - 1}^t}\\ {\rho_k^t\cos h_k^t\sin A_k^t - \rho_{k - 1}^t\cos h_{k - 1}^t\sin A_{k - 1}^t}\\ {\rho_k^t\sin h_k^t - \rho_{k - 1}^t\sin h_{k - 1}^t} \end{array}} \right].$$

Both have the same changes in the azimuth and altitude angles; thus, $A_k^s = A_k^t$, $A_{k - 1}^s = A_{k - 1}^t$, $h_k^s = h_k^t$ and $h_{k - 1}^s = h_{k - 1}^t$. It can be seen from Fig. 2 that ${\rho ^s} \gg {\rho ^t}$. When the rotation angle α is sufficiently small, it can be approximated that $\rho _k^s \approx \rho _{k - 1}^s$. Based on the above results, we obtain $||{{\boldsymbol K}_{k,k - 1}^s} ||\gg ||{{\boldsymbol K}_{k,k - 1}^t} ||$.

The direction cosine of the two motion vectors is

$$\cos \theta = \frac{{{\boldsymbol K}_{k,k - 1}^s \cdot {\boldsymbol K}_{k,k - 1}^t}}{{||{{\boldsymbol K}_{k,k - 1}^s} ||\,||{{\boldsymbol K}_{k,k - 1}^t} ||}}.$$

Since $||{{\boldsymbol K}_{k,k - 1}^s} ||\gg ||{{\boldsymbol K}_{k,k - 1}^t} ||$, we have

$$\cos \theta = \frac{{{\boldsymbol K}_{k,k - 1}^s \cdot {\boldsymbol K}_{k,k - 1}^t}}{{||{{\boldsymbol K}_{k,k - 1}^s} ||\,||{{\boldsymbol K}_{k,k - 1}^t} ||}} > \frac{{{\boldsymbol K}_{k,k - 1}^t \cdot {\boldsymbol K}_{k,k - 1}^t}}{{||{{\boldsymbol K}_{k,k - 1}^s} ||\,||{{\boldsymbol K}_{k,k - 1}^t} ||}} = \frac{{||{{\boldsymbol K}_{k,k - 1}^t} ||}}{{||{{\boldsymbol K}_{k,k - 1}^s} ||}}.$$

As can be seen from Fig. 2, for a very small rotation angle α, only the motion vector of the object ${{\boldsymbol K}^{s0}}$ is parallel to the motion vector of the telescope ${{\boldsymbol K}^t}$. Therefore, the direction cosine cos θ can be used to detect the object: the larger the value of cos θ, the greater the probability that the motion vector ${{\boldsymbol K}^s}$ equals the motion vector ${{\boldsymbol K}^{s0}}$. With program tracking, α can be made sufficiently small, and the modulus $||{{\boldsymbol K}_{k,k - 1}^t} ||$ can be considered a fixed value, which gives rise to an inverse proportionality:

$$\cos \theta \propto \frac{1}{{||{{\boldsymbol K}_{k,k - 1}^s} ||}}.$$

Therefore, the smaller the value of $||{{\boldsymbol K}_{k,k - 1}^s} ||$, the greater the probability that the motion vector ${{\boldsymbol K}^s}$ equals the motion vector ${{\boldsymbol K}^{s0}}$.

Under ideal conditions, the imaging system of the telescope is equivalent to a pinhole camera, so a dot in the focal plane corresponds to a star on the celestial sphere. As a result, the movement distance $||{{\boldsymbol K}_{k,k - 1}^{s0}} ||$ of the object between two adjacent frames is theoretically minimal compared to the movement distance $||{{\boldsymbol K}_{k,k - 1}^s} ||$ of any star.
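The geometry above can be checked numerically. The sketch below builds motion vectors from (A, h) histories via Eqs. (9)–(11) and evaluates the direction cosine of Eq. (12); all angle values are hypothetical.

```python
import numpy as np

def unit_vector(A, h):
    """Direction vector of Eq. (9) with rho = 1 (A and h in radians)."""
    return np.array([np.cos(h) * np.cos(A),
                     np.cos(h) * np.sin(A),
                     np.sin(h)])

def motion_vector(A0, h0, A1, h1, rho=1.0):
    """Motion vector between two epochs, Eqs. (10)-(11), with constant rho."""
    return rho * (unit_vector(A1, h1) - unit_vector(A0, h0))

def direction_cosine(Ks, Kt):
    """Direction cosine of Eq. (12)."""
    return Ks @ Kt / (np.linalg.norm(Ks) * np.linalg.norm(Kt))

# The tracked object shares the telescope's (A, h) history; a field star
# does not, because it drifts at the sidereal rate while the telescope slews.
K_t  = motion_vector(0.500, 0.800, 0.510, 0.808)            # telescope axis
K_s0 = motion_vector(0.500, 0.800, 0.510, 0.808, rho=4e7)   # object: cos = 1
K_s  = motion_vector(0.498, 0.801, 0.512, 0.806, rho=4e7)   # star:   cos < 1
print(direction_cosine(K_s0, K_t), direction_cosine(K_s, K_t))
```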

4.2 Measuring the centroid of the star

In this section, image processing is first performed on the image sequence obtained by observation to measure the centroids of the stars, and their position features are then extracted to achieve the object detection. The method therefore consists of two functional modules: image processing and pattern recognition. Measuring the centroids of the stars belongs to image processing; the remaining components belong to pattern recognition. The overall framework for the object detection is given in Fig. 3.

Fig. 3. Diagram for the object detection.

The purpose of image processing is to measure the centroid coordinates of the stars in the image. SExtractor (Source Extractor) is open-source software for detecting celestial bodies in sky survey images and extracting information about their magnitude, position, etc. [35]. Compared with similar software, SExtractor is faster in processing and more accurate in measurement, so it is widely used to measure the physical parameters of celestial bodies. To achieve good measurement accuracy, SExtractor is employed here to measure the centroid coordinates of the stars.
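For illustration, the same measurement can be scripted with sep, a Python library built on the SExtractor source-extraction core (the paper itself uses the SExtractor program); the file name and detection threshold below are placeholders.

```python
import numpy as np
import sep                        # Python bindings to the SExtractor core
from astropy.io import fits

def star_centroids(path, sigma=1.5):
    """Return the (x, y) centroid coordinates of all sources in one frame."""
    data = fits.getdata(path).astype(np.float32)   # native byte order for sep
    bkg = sep.Background(data)                     # spatially varying background
    sources = sep.extract(data - bkg, sigma, err=bkg.globalrms)
    return np.column_stack([sources["x"], sources["y"]])

centroids_k = star_centroids("frame_0002.fits")    # hypothetical file name
```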

4.3 Calculating movement distance between stars in adjacent frames

The principal role of pattern recognition is to pick the centroid coordinates of the object out of all the centroid coordinates obtained by the image processing module. It comprises three procedures: calculating the movement distance between stars in adjacent frames, creating the suspicious objects list, and associating the coordinates of the suspicious objects. Pattern recognition is the focus of this research.

The movement distance between stars in two adjacent frames is calculated as follows: let $A_k^{(x,y)} = \{ (x,y)|x \in I,y \in J\}$ be the centroid coordinates of the stars in the kth frame and $B_{k - 1}^{(i,j)} = \{ (i,j)|i \in I,j \in J\}$ those in the (k−1)th frame; then the movement distance between the stars in the kth and (k−1)th frames can be expressed as

$$||{{\boldsymbol K}_{k,k - 1}^s} ||= \sqrt {{{({A_k^{(x,y)} - B_{k - 1}^{(i,j)}} )}^2}}.$$

From the previous discussion, in adjacent frames, the movement distance $||{{\boldsymbol K}_{k,k - 1}^{s0}} ||$ of the object is theoretically minimal compared to the movement distance $||{{\boldsymbol K}_{k,k - 1}^s} ||$ of the star.
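In practice, finding the minimum $||{{\boldsymbol K}_{k,k - 1}^s} ||$ for every centroid of frame k is a nearest-neighbour search between the two centroid sets. A sketch using a k-d tree, an implementation choice not prescribed by the paper, is shown below.

```python
import numpy as np
from scipy.spatial import cKDTree

def min_movement_distances(A_k, B_prev):
    """For each centroid (x, y) in frame k, the distance to its nearest
    neighbour in frame k-1, i.e. the minimum of Eq. (15) per centroid."""
    tree = cKDTree(B_prev)             # centroids of the (k-1)th frame
    dist, _ = tree.query(A_k, k=1)     # nearest-neighbour distances
    return dist
```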

4.4 Creating the suspicious objects list

The object could be detected simply by finding the minimum movement distance $||{{\boldsymbol K}_{k,k - 1}^s} ||$. However, there are other interferences. For example, suppose the coordinates [2048, 2056] are the centroid of a star in the (k−1)th frame and, coincidentally, also the centroid of another star in the kth frame. The two stars have the same centroid coordinates, which yields an erroneous movement distance $||{{\boldsymbol K}_{k,k - 1}^{s0}} ||$ and a false detection. For this reason, the list of suspicious objects is introduced to detect the true object.

There are a large number of stars in each frame (from hundreds to thousands), resulting in a great many values of $||{{\boldsymbol K}_{k,k - 1}^s} ||$ and a huge list. The values of $||{{\boldsymbol K}_{k,k - 1}^s} ||$ are sorted in ascending order and the first n are selected to create a list. For ease of discussion, n = 5 is taken; the format of the list is shown in Fig. 4, which contains four lists. In List 1, n is the number of columns; the coordinates [x, y] are the centroid of a star in the 2nd frame; and the value of $||{{\boldsymbol K}_{2,1}^s} ||$ is the minimum movement distance between the coordinates [x, y] and all stars in the 1st frame.

$$\begin{array}{cc} {||{{\boldsymbol K}_{2,1}^s} ||= \min \left( {\sqrt {{{({({x_m},{y_m}) - B_1^{(i,j)}} )}^2}} } \right)}&{\{{m|m \in 0,1,2,3,4} \}} \end{array}.$$

Fig. 4. Diagram for detecting the object using multiple lists.

Except for the 1st frame, each image corresponds to a list. These lists contain the object, stars, and interference. Although the movement distance of the object in each list is not necessarily the minimum, the lists must contain the coordinates of the object. As the image sequence is processed, the number of appearances of the same coordinates gradually increases, so they are most likely to belong to the object.

Figure 4 shows the process of detecting an object using multiple lists. The coordinates [2036.00, 2197.00] rank first and second in List 1 and List 2, respectively, but drop to third in List 3 and List 4. They appear once in each of the four lists, four times in total, so they are likely to belong to the object.

The values of $||{{\boldsymbol K}_{k,k - 1}^s} ||$ are sorted in ascending order and the first n are selected to create a list. If n is set too small, the coordinates of the object may not be included; if it is set too large, the calculation time increases. The peaks of the following error under extreme conditions were derived in the previous section as $N_{\max }^\alpha$ and $N_{\max }^\beta$.

Therefore, the maximum value of $||{{\boldsymbol K}_{k,k - 1}^s} ||$ in List k is defined as:

$$||{{\boldsymbol K}_{k,k - 1}^{{s_{\max }}}} ||= \lceil{\max ({N_{\max }^\alpha ,N_{\max }^\beta } )} \rceil.$$

The symbol $\lceil{} \rceil$ denotes rounding up to the nearest integer. Since $N_{\max }^\alpha$ and $N_{\max }^\beta$ are the peaks of the following error at maximum velocity and maximum acceleration, the object coordinates must be included in List k. Therefore, the values of $||{{\boldsymbol K}_{k,k - 1}^s} ||$ are sorted in ascending order, and n is set to the number of values not exceeding $||{{\boldsymbol K}_{k,k - 1}^{{s_{\max }}}} ||$; a construction sketch follows.
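The sketch below builds List k under the assumption that the per-centroid minimum distances from the previous step are already available as a NumPy array; the function and variable names are illustrative.

```python
import numpy as np

def suspicious_list(A_k, dist, N_max_alpha, N_max_beta):
    """List k: centroids of frame k whose minimum movement distance does
    not exceed the rounded-up peak following error, Eq. (17)."""
    limit = np.ceil(max(N_max_alpha, N_max_beta))   # threshold in pixels
    order = np.argsort(dist)                        # ascending distances
    keep = order[dist[order] <= limit]              # adaptive choice of n
    return A_k[keep], dist[keep]
```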

The suspicious objects list has the property that the more images are processed, the more lists are obtained, and the more accurate the detection of the object becomes.

4.5 Associating coordinates of the suspicious objects

Although the lists are used to detect the object, the coordinates of the suspicious objects that appear in each list are not necessarily the same. Therefore, these coordinates need to be associated to detect the object.

The association of the suspicious object coordinates can be done using a clustering algorithm based on a similarity threshold T and the Euclidean distance d.

Specifically, the clustering algorithm first determines the similarity threshold T for the specific problem, then calculates the Euclidean distance d from each feature vector to each cluster center, and finally compares d with T. If d exceeds T for every center, the feature vector becomes a new cluster center; otherwise, it is classified into the nearest class and is regarded as that cluster's new center, as shown in the sketch below.
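A minimal sketch consistent with this description, assuming two-dimensional centroid coordinates as the feature vectors (the paper's detailed algorithm is given in Supplement 1, Algorithm S1):

```python
import numpy as np

def threshold_clustering(points, T):
    """Cluster coordinates by a similarity threshold T: a point farther
    than T from every existing centre starts a new cluster; otherwise it
    joins the nearest cluster and becomes that cluster's new centre."""
    centers, clusters = [], []
    for p in points:
        if centers:
            d = np.linalg.norm(np.asarray(centers) - p, axis=1)
            i = int(np.argmin(d))
            if d[i] <= T:
                clusters[i].append(p)
                centers[i] = p   # latest member tracks the moving object
                continue
        centers.append(p)
        clusters.append([p])
    return clusters
```

Letting the newest member become the cluster center allows a cluster to follow a slowly drifting coordinate sequence, which suits a tracked object whose residual motion stays within the following error.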

Since the peaks of the following error under extreme conditions were derived in the previous section as $N_{\max }^\alpha$ and $N_{\max }^\beta$, the similarity threshold T is defined as:

$$T = \lceil{\max ({N_{\max }^\alpha ,N_{\max }^\beta } )} \rceil.$$

The Euclidean distance d between the feature vector and the cluster center is calculated as:

$$d({{x_i},{C_i}} )= \sqrt {\sum\limits_{j = 1}^m {{{({{x_{ij}} - {C_{ij}}} )}^2}} },$$
where ${x_i}$ is the feature vector, ${C_i}$ is the ith cluster center, and m is the dimension of the feature vector.

Since the telescope observes the object in satellite tracking mode, the cluster containing the most elements is likely to be its trajectory. The detailed clustering algorithm can be found in Supplement 1, Algorithm S1. To confirm the object, the following two properties of the trajectory cluster $w_k$ can be adopted.

  • 1. Suppose that $l_i^n \in {w_k}$, and let $\overline {l_i^n}$ be the average value of $l_i^n$; then:
    $$\mathop {\textrm{Max}}\limits_{n = 0}^P |{l_i^n - \overline {l_i^n} } |\le T.$$
  • 2. Suppose that $l_i^n \in {w_k}$ and $H = \lceil{\max ({N_{\textrm{RMS}}^\alpha ,N_{\textrm{RMS}}^\beta } )} \rceil$; then:
    $$\frac{{\sqrt {\sum\limits_{n = 0}^P {{{({l_i^n - \overline {l_i^n} } )}^2}} } }}{{P + 1}} \le H.$$

If T and H are small, the coordinate sequence of the object is stationary and the probability of false alarms is reduced. These two properties can also be used to verify whether the trajectory obtained by the detection method is correct.
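A sketch of the two checks, treating a candidate trajectory cluster as a (P+1) × 2 array of coordinates and applying Eqs. (20)–(21) as printed:

```python
import numpy as np

def is_trajectory(cluster, T, H):
    """Verify properties 1 and 2 for a candidate trajectory cluster."""
    l = np.asarray(cluster, dtype=float)
    dev = l - l.mean(axis=0)                      # deviation from the mean
    prop1 = np.abs(dev).max() <= T                # property 1, Eq. (20)
    stat = np.sqrt((dev ** 2).sum(axis=0)) / len(l)
    prop2 = bool((stat <= H).all())               # property 2, Eq. (21)
    return prop1 and prop2
```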

5. Experiments and analysis

5.1 Performance parameters of the 1.2-meter wide FOV survey telescope

To test the accuracy of the detection method, the 1.2-meter wide FOV survey telescope at the Jilin station of the Changchun observatory was selected as the test equipment. The observation station is located near 126.3° E longitude and 43.8° N latitude in Jilin Province, China.

Fig. 5. The 1.2-meter wide FOV survey telescope.

The telescope has an altitude-over-azimuth (alt-az) mount, as shown in Fig. 5, and a prime-focus configuration with a wide FOV. It combines strong light-gathering power with a wide FOV. A scientific camera with a very large (61.4 × 61.4 mm) detection area is installed at the prime focus to observe objects in medium and high Earth orbit. The important parameters of the telescope are listed in Table 1.

Table 1. The important parameters of the telescope

A fifth-order polynomial interpolation together with feedforward control is used to improve the trajectory-following performance of the telescope, which can be evaluated by the sinusoidal motion constructed with the parameters in Table 1. According to Supplement 1, Table S1, the parameters $||{K_{k,k - 1}^{{s_{\max }}}} ||$, T, and H in the algorithm are set to 2 pixels, 2 pixels, and 1 pixel, respectively. The 1.2-meter survey telescope thus has excellent following accuracy and stability.

5.2 Test for the object detection

In the experiments, three MEO (medium Earth orbit) navigation satellites were selected as experimental objects; their orbital parameters are shown in Table 2.

Table 2. The orbital parameters of satellites

The field angles of these satellites observed from the telescope are less than 1″, as are those of the stars; the field angle of each pixel of the CCD is 1.45″. Under short exposures, both objects and stars occupy very few pixels, as shown in Fig. 6. Therefore, the imaging characteristics of the objects and the stars are similar, and the observations of these satellites are consistent with the premise that the object is submerged in a vast number of background stars.

Fig. 6. Example of a telescope image. The exposure time is 1 s.

The coordinates of the three satellites are associated by the clustering algorithm, and the coordinate sequences reflect the object trajectories. Figure 7 shows the coordinate sequence of the COSMOS 2501 (GLONASS) satellite; the exposure time was set to 1 s and a total of 30 frames were captured. As can be seen from Fig. 7, the object was not detected in the 13th frame, giving a detection rate of 96%. The reason is that a star "collided" with the object and the extraction of the centroid coordinates was off, so the object was not included in the list of suspicious objects and detection failed. Figure 8 shows the coordinate sequence of the GALILEO 7 satellite (1 s exposure, 40 frames), with a detection rate of 100%. Figure 9 shows the coordinate sequence of the GALILEO 8 satellite (1 s exposure, 30 frames), also with a detection rate of 100%. In summary, the total detection rate for the three objects is 99%. Figure 10 shows why the COSMOS 2501 satellite was not detected in the 13th frame.

Fig. 7. The coordinate sequence of the COSMOS 2501 satellite.

Fig. 8. The coordinate sequence of the GALILEO 7 satellite.

Fig. 9. The coordinate sequence of the GALILEO 8 satellite.

Fig. 10. The object collided with a background star.

The properties of the trajectory cluster proposed in this research are used to determine whether the trajectories are correct. The results are shown in Table 3. All the values in row 1 are less than T (2 pixels), so property 1 is satisfied; all the values in row 2 are less than H (1 pixel), so property 2 is also satisfied. Therefore, these three coordinate sequences are object trajectories. Table 3 also shows that the three coordinate sequences are very stationary, which indicates that the trajectory-following performance is the basis of the detection method: the smaller the following error, the lower the probability of false alarms and the easier it is to detect the object.

Table 3. Calculated values for trajectory clustering properties

In addition to the observation data from the three navigation satellites, data from other satellites, both MEO and GEO, were also used to test the method. For these observations, all exposure times were set to 1 s and a total of 400 frames were captured; the experimental results are shown in Table 4. The object was detected in 393 frames and missed in 7 frames, for a total detection rate of 98%. When the object is not detected, it is usually for one of two reasons: 1) a star "collided" with the object; or 2) the object is dark and faint, so its grey values are very close to the sky background and SExtractor fails to measure its coordinates. For the latter problem, the SNR of the object can be boosted by improving the image processing algorithm so that its coordinates can be measured.

Table 4. Statistical results of the detection number

5.3 Assessment of the external accuracy

The image coordinate sequence of the object has no direct physical significance. What does have physical significance is the celestial coordinate sequence of the object, which can be used for orbit determination. The mapping between the image coordinate system and the celestial coordinate system is established using a surface equation [36]:

$$\left\{ {\begin{array}{c} {\alpha = {a_0} + {a_1}x + {a_2}y + {a_3}{x^2} + {a_4}xy + {a_5}{y^2}}\\ {\delta = {b_0} + {b_1}x + {b_2}y + {b_3}{x^2} + {b_4}xy + {b_5}{y^2}} \end{array}} \right.,$$
where (x, y) are the image coordinates and (α, δ) are the equatorial coordinates. The detection method yields the image coordinates of the object (xs, ys), from which the astronomical positioning data (αs, δs) of the object can be derived via Eq. (22).
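Equation (22) is linear in the coefficients a0–a5 and b0–b5, so they can be fitted by least squares from reference stars with known image and sky coordinates; a sketch with illustrative function names:

```python
import numpy as np

def fit_plate_model(x, y, alpha, delta):
    """Least-squares fit of the second-order surface model, Eq. (22),
    from reference stars with image (x, y) and sky (alpha, delta)."""
    M = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    a, *_ = np.linalg.lstsq(M, alpha, rcond=None)   # a0..a5
    b, *_ = np.linalg.lstsq(M, delta, rcond=None)   # b0..b5
    return a, b

def image_to_sky(a, b, xs, ys):
    """Map the object's image coordinates to (alpha_s, delta_s)."""
    m = np.array([1.0, xs, ys, xs**2, xs * ys, ys**2])
    return m @ a, m @ b
```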

External accuracy derives the truth from a source outside the dataset; accuracy is the offset between this truth and the measurement. Because of the high accuracy of satellite laser ranging, the CPF (Consolidated Prediction Format) precision data are used as the theoretical value C for comparison with the astronomical positioning data (observed value O). The residual of the astronomical positioning is therefore $\Delta = O - C$.

For each observed arc of the orbit, we have [37]

$$\sigma = \sqrt {\frac{{\sum\limits_{i = 1}^n {\Delta _i^2} }}{n}},$$
where n is the number of valid frames (i.e., frames in which the object was detected) and $\sigma$ is also called the root mean square error (RMSE). Applying Eq. (23), one can find the RMSE values ${\sigma _\alpha }$ and ${\sigma _\delta }$ for $\alpha$ and $\delta$, respectively. Then, the external accuracy ${\sigma _{\alpha ,\delta }}$ can be obtained by:
$${\sigma _{\alpha ,\delta }} = \sqrt {\sigma _\alpha ^2 + \sigma _\delta ^2}.$$
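A sketch reproducing this assessment from the per-frame residuals, together with the conversion of the angular accuracy to metres at the mean orbital altitude used below:

```python
import numpy as np

def external_accuracy(res_alpha, res_delta):
    """RMSE per axis (Eq. (23)) and combined external accuracy (Eq. (24));
    the inputs are O - C residuals in arcseconds over the valid frames."""
    s_a = np.sqrt(np.mean(np.square(res_alpha)))
    s_d = np.sqrt(np.mean(np.square(res_delta)))
    return s_a, s_d, np.hypot(s_a, s_d)

# Angular accuracy to metres at a 19,130 km mean orbital altitude.
sigma_arcsec = 1.4094
error_m = np.radians(sigma_arcsec / 3600.0) * 19130e3   # about 130.6 m
```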

Table 5 shows the results of applying the method to the COSMOS 2501 satellite.

Table 5. The external accuracy of the COSMOS 2501 satellite

Based on Eqs. (23) and (24), we get ${\sigma _\alpha } = 0.899^{\prime\prime}$, ${\sigma _\delta } = 1.0854^{\prime\prime}$ and ${\sigma _{\alpha ,\delta }} = 1.4094^{\prime\prime}$. Since the COSMOS 2501 satellite has a perigee altitude of 19,090 km and an apogee altitude of 19,169 km, its mean orbital altitude is taken to be 19,130 km. This gives an average measurement error of 130.648 m, which proves that the detected object in the image is a real object in space and illustrates the accuracy of the detection method proposed in this research.

6. Conclusion

The detection method for space objects observed by the wide FOV telescope shows a significant improvement in robustness and detection rate. The reason is that the method is closely related to the trajectory-following performance and makes full use of the following error to reduce the probability of false alarms. To achieve this, the research employs a fifth-order polynomial interpolation to generate smooth paths and feedforward control to eliminate most of the lag. The equivalent sine simulation test shows that the 1.2-meter wide FOV survey telescope at the Jilin station of the Changchun observatory tracks with high accuracy and stability.

The theoretical foundation of this detection method comes from engineering practice and is refined and summarized in this research. In engineering practice, it can also be understood roughly as follows: because the object moves fast with respect to the stellar background, the movement distance of the object between frames is minimal in satellite tracking mode.

In this research, calculating the movement distance between stars in adjacent frames, creating the suspicious objects list, and associating the coordinates of the suspicious objects are classified as pattern recognition. The main reason is that calculating the movement distance is essentially equivalent to the difference method for feature extraction; the suspicious objects list is essentially the set of candidate objects; and associating the coordinates is entirely equivalent to the clustering step.

External accuracy directly reflects the measurement accuracy of the data, and sources of error in the observation can also be discussed through it. However, its most important function in this research is to verify that the detected object is real.

Applications of the method have been limited by the difficulty of detecting the object quickly in high-resolution images. If the detection method is implemented in hardware in the future, for example by accelerating data processing with a GPU (graphics processing unit) or an FPGA (field-programmable gate array), the object can be detected in real time. This would allow the telescope to achieve closed-loop tracking and a firm lock on the object. As a result, the method would have a wider scope of applications, such as laser ranging and spectroscopic observation of space debris.

Funding

Ministry of Science and Technology of the People's Republic of China (2017YFB1002900); National Natural Science Foundation of China (12003052, 61771220).

Acknowledgments

We would like to express our sincere gratitude for the support and help from the technical team at the Jilin station. The presented results would not be so complete and important without the invaluable contribution of the technical team operating the 1.2-meter wide FOV survey telescope.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

References

1. I. Molotov, V. Agapov, V. Titenko, Z. Khutorovsky, Y. Burtsev, I. Guseva, V. Rumyantsev, M. Ibrahimov, G. Kornienko, A. Erofeeva, V. Biryukov, V. Vlasjuk, R. Kiladze, R. Zalles, P. Sukhov, R. Inasaridze, G. Abdullaeva, V. Rychalsky, V. Kouprianov, O. Rusakov, E. Litvinenko, and E. Filippov, “International scientific optical network for space debris research,” Adv.Space.Res. 41(7), 1022–1028 (2008). [CrossRef]  

2. T. Flohrer, T. Schildknecht, and R. Musci, “Proposed strategies for optical observations in a future European Space Surveillance network,” Adv.Space.Res. 41(7), 1010–1021 (2008). [CrossRef]  

3. T. Schildknecht, “Optical surveys for space debris,” Astron Astrophys Rev. 14(1), 41–111 (2007). [CrossRef]  

4. T. Schildknecht, R. Musci, M. Ploner, G. Beutler, W. Flury, J. Kuusela, J. de Leon Cruz, and L. de Fatima Dominguez Palmero, “Optical observations of space debris in GEO and in highly-eccentric orbits,” Adv.Space.Res. 34(5), 901–911 (2004). [CrossRef]  

5. E. Olmedo, N. Sanchez-Ortiz, N. Guijarro, J. Nomen, and H. Krag, “Survey-only optical strategies for cataloguing space debris objects in the future European space surveillance system,” Adv.Space.Res 48(3), 535–556 (2011). [CrossRef]  

6. M. Tumarina, M. Ryazanskiy, S. Jeong, G. Hong, N. Vedenkin, I. H. Park, and A. Milov, “Design, fabrication and space suitability tests of wide field of view, ultra-compact, and high resolution telescope for space application,” Opt. Express 26(3), 2390–2399 (2018). [CrossRef]  

7. K. Kaminski, E. Wnuk, J. Golebiewska, M. Kruzynski, P. Kankiewicz, and M. Kaminska, “High efficiency robotic optical tracking of space debris from PST2 telescope in Arizona,” in 7th European Conference on Space Debris conference (2017).

8. T. Schildknecht, U. Hugentobler, and A. Verdun, “Algorithms for ground based optical detection of space debris,” Adv.Space.Res. 16(11), 47–50 (1995). [CrossRef]  

9. W. Gurtner and M. Ploner, “CCD and SLR dual-use of the Zimmerwald tracking system,” in 15th International Laser Ranging Workshop(ILRW) (2006).

10. W. Flury, A. Massart, T. Schildknecht, U. Hugentobler, J. Kuusela, and Z. Sodnik, “Searching for small debris in the geostationary ring-discoveries with the Zeiss 1-metre telescope,” Third European Conference on Space Debris (2000).

11. P. Seitzer, R. Smith, J. Africano, K. Jorgensen, E. Stansbery, and D. Monet, “MODEST observations of space debris at geosynchronous orbit,” Adv.Space.Res. 34(5), 1139–1142 (2004). [CrossRef]  

12. K. J. Abercromby, P. Seitzer, H. M. Rodriguez, E. S. Barker, and M. J. Matney, “Survey and chase: a new method of observations for the Michigan Orbital DEbris Survey Telescope (MODEST),” Acta Astronaut. 65(1-2), 103–111 (2009). [CrossRef]  

13. K. J. Abercromby, P. Seitzer, H. M. Cowardin, E. S. Barker, and M. J. Matney, “Michigan Orbital DEbris Survey Telescope observations of the geosynchronous orbital debris environment observing years: 2007–2009,” NASA/TP-2011-217350 (2011).

14. F. Alby, M. Boer, B. Deguine, I. Escane, F. Newland, and C. Portmann, “Status of CNES optical observations of space debris in geostationary orbit,” Adv.Space.Res. 34(5), 1143–1149 (2004). [CrossRef]  

15. M. Bourez-Laas, A. Klotz, G. Blanchet, M. Boer, and E. Ducrotte, “Algorithms improvement in image processing for optical observations of artificial objects in geostationary orbit with the TAROT telescopes,” Proc. SPIE 7000, 700020 (2008). [CrossRef]  

16. V. Kouprianov, “Distinguishing features of CCD astrometry of faint GEO objects,” Adv.Space.Res. 41(7), 1029–1038 (2008). [CrossRef]  

17. M. Bourez-Laas, G. Blanchet, M. Boer, E. Ducrotte, and A. Klotz, “A new algorithm for optical observations of space debris with the TAROT telescopes,” Adv.Space.Res. 44(11), 1270–1278 (2009). [CrossRef]  

18. J. Piattoni, A. Ceruti, and F. Piergentili, “Automated image analysis for space debris identification and astrometric measurements,” Acta Astronautica. 103, 176–184 (2014). [CrossRef]  

19. T. Yanagisawa, A. Nakajima, K. I. Kadota, H. Kurosaki, T. Nakamura, F. Yoshida, B. Dermawan, and Y. Sato, “Automatic Detection Algorithm for Small Moving Objects,” Publ. Astron. Soc. Japan 57(2), 399–408 (2005). [CrossRef]  

20. T. Yanagisawa, H. Kurosaki, and A. Nakajima, “Activities of JAXA’s Innovative Technology Center on Space Debris Observation,” in Proceedings of the Twelfth Advanced Maui Optical and Space Surveillance Technologies Conference (AMOS) (2009).

21. K. Fujita, T. Hanada, Y. Kitazawa, and A. Kawabe, “A debris image tracking using optical flow algorithm,” Adv.Space.Res. 49(5), 1007–1018 (2012). [CrossRef]  

22. J. Nunez, A. Nunez, F. J. Montojo, and M. Condominas, “Improving space debris detection in GEO ring using image deconvolution,” Adv.Space.Res. 56(2), 218–228 (2015). [CrossRef]  

23. J. B. Xi, D. S. Wen, O. K. Ersoy, H. W. Yi, D. L. Yao, Z. X. Song, and S. B. Xi, “Space debris detection in optical image sequences,” Appl. Opt. 55(28), 7929–7940 (2016). [CrossRef]  

24. J. Tompkins, S. Cain, and D. Becker, “Near earth space object detection using parallax as multi-hypothesis test criterion,” Opt. Express 27(4), 5403–5419 (2019). [CrossRef]  

25. Y. Du, D. S. Wen, G. Z. Liu, S. Qiu, D. L. Yao, H. W. Yi, and M. Y. Liu, “A novel approach for space debris recognition based on the full information vectors of star points,” J. Vis. Commun. Image R. 71, 102716 (2020). [CrossRef]  

26. X. Yang, T. Wu, N. N. Wang, Y. Huang, B. Song, and X. B. Gao, “HCNN-PSI: A hybrid CNN with partial semantic information for space target recognition,” Pattern Recognition. 108, 107531 (2020). [CrossRef]  

27. H. Yoon, K. Riesing, and K. Cahoy, “Satellite tracking system using amateur telescope and star camera for portable optical ground station,” Proc. of 30th Annual AIAA/USU Conference on Small Satellites (2016).

28. D. Wei and C. Y. Zhao, “An accuracy analysis of the SGP4/SDP4 model,” Chinese Astronomy and Astrophysics. 34(1), 69–76 (2010). [CrossRef]  

29. J. Chen and Y.Q. Huang,“The real-time guide data fusion for an optoelectronic theodolite,” Proc. SPIE 5434, 372 (2004). [CrossRef]  

30. D. R. Smith and K. Souccar, “A polynomial-based trajectory generator for improved telescope control,” Proc. SPIE 7019, 701909 (2008). [CrossRef]  

31. J. S. Allen, J. L. Stufflebeam, and D. Feller, “Development of a feed-forward controller for a tracking telescope,” Proc. SPIE 5430, 1 (2004). [CrossRef]  

32. M. Li and H.B. Gao, “Tracking Error Estimate for Theodolite Based on General Regression Neural Network,” Adv. Mater. Res. 472-475, 1383–1387 (2012). [CrossRef]  

33. J. Watson and O. Zielinski, Subsea Optics and Imaging: Subsea laser scanning and imaging systems, F. M. Caimi, F. R. Dalgleish, and H. Branch, eds.(Woodhead Publishing,2013),pp.327–335.

34. O. Montenbruck and E. Gill, Satellite orbits models methods application (Springer, 2000), Chap. 2.

35. E. Bertin and S. Arnouts, “SExtractor: Software for source extraction,” Astron. Astrophys. Suppl. Ser. 117(2), 393–404 (1996). [CrossRef]  

36. R. Y. Sun, Y. Lu, and C. Y. Zhao, “A method for correcting telescope pointing error in optical space debris surveys,” Chinese Astronomy and Astrophysics. 40(1), 66–78 (2016). [CrossRef]  

37. R. Y. Sun, X. X. Zhang, and C. Y. Zhao, “A method for detecting space debris based on a priori information,” Chinese Astronomy and Astrophysics. 37(4), 464–472 (2013). [CrossRef]  

Supplementary Material (1)

Supplement 1: A clustering algorithm; the test for equivalent sine simulation
