Feng Yu,1 Zhen He,2 Bing Qiao,1 and Xiaoting Yu2
Academic Editor: Gerhard-Wilhelm Weber
1 College of Astronautics, Nanjing University of Aeronautics and Astronautics, P.O. Box 315, 29 Yudao Street, Nanjing 210016, China
2 College of Automation, Nanjing University of Aeronautics and Astronautics, 29 Yudao Street, Nanjing 210016, China
Received 8 July 2014; Accepted 1 November 2014; Published 25 November 2014
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. Introduction
Autonomous on-orbit servicing is expected to be one of the most challenging and exciting space activities of the future [1, 2]. In the past, many expensive satellites went out of service in orbit due to various failures, such as undeployed solar panels and gyro malfunctions. In these cases, however, most of the other parts of the satellites were still functional [3, 4]. On-orbit servicing, that is, the execution of repair, refueling, orbit maintenance, and reorbiting, can extend the life of malfunctioning satellites and save a large amount of expense.
Therefore, increasing attention has been paid to the technologies of autonomous on-orbit servicing. Several on-orbit servicing projects, including on-orbit servicing demonstration experiments and conceptual on-orbit servicing systems, have been carried out [5, 6].
As malfunctioning satellites are generally noncooperative targets tumbling in space with no equipment that can be used for relative pose measurement [7], it is necessary to develop relative pose measurement methods that do not rely on the cooperation of the target satellite. For example, the purpose of the Spacecraft for the Universal Modification of Orbits (SUMO) program is to demonstrate the integration of machine vision, robotics, mechanisms, and autonomous control algorithms to accomplish autonomous rendezvous and grapple of a variety of interfaces traceable to future spacecraft servicing operations [8]. The main concept of the SUMO program is the capture of an unaided target satellite; that is, the target satellite carries no special facilities, such as grapple fixtures or reflectors, that are compatible with SUMO. SUMO was later renamed the Front-End Robotics Enabling Near-term Demonstration (FREND), which developed a series of key components, including a seven-degree-of-freedom (DOF) flight robotic arm demonstrated in a test bed with a stereo photogrammetric imaging system [9]. Similarly, the DEOS (Deutsche Orbitale Servicing Mission) project focuses on finding and evaluating procedures and techniques for the rendezvous, capture, and deorbiting of a noncooperative target satellite [10]. In the automatic mode, a number of images are taken and downlinked to the ground segment, and the target motion is then estimated by the control software in the ground station.
The relative pose information of the two involved satellites is of vital importance for the end-effector to successfully grasp the tumbling satellite. If the dynamic model of the target is known, a filter can be designed to suppress the measurement noise and estimate the states which cannot be measured directly. Furthermore, the dynamic model can also be used to predict the motion of the target satellite, so that optimal path planning can be conducted to guide a robotic manipulator to capture the tumbling target at a rendezvous point with the same velocity [11]. Vision systems are increasingly utilized for relative pose measurement in rendezvous missions due to the low cost, low mass, and low power requirements of CCD/CMOS cameras [12, 13]. Nagamatsu et al. proposed a 12th-order extended Kalman filter to estimate the position and attitude of a torque-free target satellite using a simplified dynamical model [14]. NASA investigated the feasibility of autonomous robotic servicing of the Hubble Space Telescope (HST); a nonlinear estimator was developed to estimate the HST body rates using vision-based sensors that output a relative quaternion [15, 16]. NASA/GSFC went on to develop the Goddard Natural Feature Image Recognition (GNFIR) algorithm, and 6-DOF pose estimation was demonstrated in the Robotic Refueling Mission to augment the traditional overlays [17]. Tweddle described a new approach to solving a SLAM problem for unknown and uncooperative objects that are spinning about an arbitrary axis; an experimental test bed known as "Goggles," with stereo-camera-based vision navigation capability, was used to test the algorithm [18]. Lichter and Dubowsky proposed an important and effective architecture for estimating the dynamic states, geometric shape, and model parameters of objects in orbit [19], in which a team of cooperating 3D vision sensors captures sequences of range images to determine rough target poses before filtering.
The relative positions and orientations between sensors mounted on different satellites must be known with high accuracy, which imposes a cost and operational burden on on-orbit servicing. A model-based pose refinement algorithm was proposed to perform the relative pose estimation of a floating object from the visual information of a stereo-vision system [20]. Du et al. proposed a method based on two collaborative cameras sharing the recognition task to determine the pose of a large noncooperative target [21], and the details of the image processing were also discussed.
In this paper, the problem of relative pose estimation for the final phase of the rendezvous and docking of noncooperative satellites is investigated. Instead of the dense stereo algorithm presented in the literature [17], the relative pose estimation method here is based on a sparse stereo algorithm. The body-fixed coordinate system of the target satellite is reestablished by utilizing the natural features on the target surface. Two relative attitude measurement methods, based on TRIAD and QUEST, are presented. Then, a standard Kalman filter is designed to estimate the translational states and the location of the mass center. An extended Kalman filter (EKF) and an unscented Kalman filter (UKF) are designed, respectively, to estimate the rotational states and the moment-of-inertia ratios, including the ratios of the products of inertia, and their performances are compared with each other.
This paper is structured as follows. Section 2 defines the coordinate systems and transformation matrices used in the relative pose estimation. Section 3 presents two algorithms for relative attitude measurement, including the scheme for updating the target feature coordinate system when the target is tumbling. Then, an approach based on a Kalman filter to estimate the location of the mass center and the translation parameters is proposed in Section 4. In Section 5, the EKF and UKF are designed, respectively, to estimate the moment-of-inertia ratios and the orientation parameters. Numerical simulation is conducted to verify the proposed algorithms, and the results are discussed in Section 6. The last section contains the conclusions.
2. Coordinate Systems and Transformation Matrices
As various physical quantities are usually defined in different coordinate systems, the coordinate systems and coordinate transformation matrices are defined to describe the relative geometrical relationship between the target and the chaser. The relative geometrical relationship and the coordinate systems are illustrated in Figure 1.
Figure 1: Geometrical relationship and coordinate systems.
[figure omitted; refer to PDF]
2.1. Inertial Coordinate System ( [figure omitted; refer to PDF] )
The origin of the inertial coordinate system [figure omitted; refer to PDF] is centered on Earth, the [figure omitted; refer to PDF] axis is aligned with the rotation axis, and the [figure omitted; refer to PDF] axis is defined to point toward the Vernal equinox. The [figure omitted; refer to PDF] axis completes the right-handed orthogonal coordinate system. The attitude of target satellite and service satellite can be described as the rotation from the inertial frame to the body frame.
2.2. Local Orbital Coordinate System ( [figure omitted; refer to PDF] & [figure omitted; refer to PDF] )
The origin of the local orbital coordinate system [figure omitted; refer to PDF] ( [figure omitted; refer to PDF] ) lies at the mass center of the service satellite. The [figure omitted; refer to PDF] axis points away from the Earth's center, the [figure omitted; refer to PDF] axis is in the flight direction, and the [figure omitted; refer to PDF] axis, which completes the right-handed orthogonal coordinate system, is in the direction of the orbital angular momentum.
Similarly, the local orbital coordinate system [figure omitted; refer to PDF] is attached to the target satellite. The rotation matrix between the two satellite orbital frames is denoted as [figure omitted; refer to PDF] . As the two satellites are almost in the same orbit and the service satellite is very close to the target, [figure omitted; refer to PDF] is nearly a 3 × 3 identity matrix. For instance, if the target is the leading satellite in a circular orbit of 700 km altitude and the chaser is 100 m behind the target in the same orbit, the rotation angle between the two orbital frames is only about [figure omitted; refer to PDF] .
2.3. Chaser Body-Fixed Coordinate System ( [figure omitted; refer to PDF] )
The origin of the chaser body-fixed coordinate system [figure omitted; refer to PDF] lies at the mass center of the service satellite, and the three body axes of symmetry are defined as the three coordinate axes [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , and [figure omitted; refer to PDF] . The orientation of [figure omitted; refer to PDF] relative to [figure omitted; refer to PDF] or [figure omitted; refer to PDF] is determined by the attitude and orbit determination system, which is equipped with gyros, star sensors, a GNSS receiver, and so on. The coordinate transformation matrix from the inertial coordinate system [figure omitted; refer to PDF] to [figure omitted; refer to PDF] is denoted as [figure omitted; refer to PDF] , and the coordinate transformation matrix from [figure omitted; refer to PDF] to [figure omitted; refer to PDF] is denoted as [figure omitted; refer to PDF] .
2.4. Stereovision-Fixed Coordinate System ( [figure omitted; refer to PDF] )
The origin of the stereovision-fixed coordinate system [figure omitted; refer to PDF] lies at the stereo vision system. The axis [figure omitted; refer to PDF] is parallel to the optical axis of the cameras, [figure omitted; refer to PDF] is perpendicular to the optical axis, and the direction of the axis [figure omitted; refer to PDF] obeys the right-hand rule. The measurement information (coordinates of feature points) of the stereo vision system is described in this coordinate system.
The rotation matrix [figure omitted; refer to PDF] and the translation vector [figure omitted; refer to PDF] from [figure omitted; refer to PDF] to [figure omitted; refer to PDF] can be acquired by calibration on the ground. Therefore, the relationship between the coordinates of feature points in [figure omitted; refer to PDF] and [figure omitted; refer to PDF] can be described as [figure omitted; refer to PDF] where [figure omitted; refer to PDF] are the coordinates of the [figure omitted; refer to PDF] th feature point in the frame [figure omitted; refer to PDF] and [figure omitted; refer to PDF] are those in [figure omitted; refer to PDF] .
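As a concrete sketch of this calibration relation, the mapping from the vision frame to the chaser body frame is a rotation plus a translation; the function and variable names below are illustrative, not from the paper:

```python
import numpy as np

def vision_to_body(R_cal, t_cal, p_vision):
    """Map a feature point's coordinates from the stereovision frame to
    the chaser body-fixed frame using the ground-calibrated rotation
    R_cal and translation t_cal: p_body = R_cal @ p_vision + t_cal."""
    return R_cal @ np.asarray(p_vision, dtype=float) + t_cal
```

The same one-line relation, applied with each frame pair's calibrated rotation and translation, moves measurements between any two of the frames defined in this section.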
2.5. Target Feature Coordinate System ( [figure omitted; refer to PDF] )
As there are no artificial marks that can provide effective cooperative information to the service satellite, natural features on the target surface are utilized to establish the target feature coordinate system [figure omitted; refer to PDF] . The natural features usually lie on the frameworks of the antenna backboard, the solar panels, and so on. The frame [figure omitted; refer to PDF] is used to determine the relative pose between the target and the chaser satellite. The methods to establish the frame [figure omitted; refer to PDF] are described in Section 3.
2.6. Target Body-Fixed Coordinate System ( [figure omitted; refer to PDF] )
The origin of the target body-fixed coordinate system [figure omitted; refer to PDF] lies at the mass center of the target satellite, and the three coordinate axes [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , and [figure omitted; refer to PDF] are set parallel to [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , and [figure omitted; refer to PDF] , respectively. Obviously, the axes [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , and [figure omitted; refer to PDF] are probably not aligned with the three body axes of symmetry of the target.
2.7. Target Nominal Body-Fixed Coordinate System ( [figure omitted; refer to PDF] )
The origin of the target nominal body-fixed coordinate system [figure omitted; refer to PDF] lies at the approximate mass center of the target satellite, and the three coordinate axes [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , and [figure omitted; refer to PDF] are parallel to [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , and [figure omitted; refer to PDF] , respectively. The approximate location of the mass center is estimated by the ground staff through a teleoperation loop or estimated automatically by the service spacecraft. For example, the approximate mass center of the target satellite can be assumed to lie at the target centroid. Then, the vector [figure omitted; refer to PDF] is known, and the offset [figure omitted; refer to PDF] , which is denoted as [figure omitted; refer to PDF] , will be estimated.
3. Algorithms of Relative Attitude Measurement
Natural feature points are utilized to establish the target feature coordinate system [figure omitted; refer to PDF] . As image processing, including image filtering, edge detection, feature point recognition, and matching, has been discussed in [19], it is assumed in this paper that the coordinates of the natural feature points in the frame [figure omitted; refer to PDF] are output by the stereo vision system.
3.1. Attitude Determination Based on TRIAD
Several feature points are tracked by the stereo vision system, as illustrated in Figure 1. Three points [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , and [figure omitted; refer to PDF] are selected, according to the geometry and imaging quality of the natural feature points, to establish the frame [figure omitted; refer to PDF] by the right-hand rule. The point [figure omitted; refer to PDF] is the origin of [figure omitted; refer to PDF] , the axis [figure omitted; refer to PDF] is in the direction of the vector [figure omitted; refer to PDF] , and the axis [figure omitted; refer to PDF] is perpendicular to the plane containing the three points. The direction of the axis [figure omitted; refer to PDF] obeys the right-hand rule. The rotation matrix [figure omitted; refer to PDF] from the frame [figure omitted; refer to PDF] to [figure omitted; refer to PDF] can be determined by the direction cosine matrix as follows: [figure omitted; refer to PDF] where [figure omitted; refer to PDF] denotes the magnitude and [figure omitted; refer to PDF] denotes the vector cross product. The error of TRIAD can be described as a rotation angle [figure omitted; refer to PDF] about the Euler axis [figure omitted; refer to PDF] where [figure omitted; refer to PDF] is the angle between [figure omitted; refer to PDF] and [figure omitted; refer to PDF] , which yields the minimum error when it is 90°. [figure omitted; refer to PDF] and [figure omitted; refer to PDF] are the pointing errors of [figure omitted; refer to PDF] and [figure omitted; refer to PDF] , respectively, and these pointing errors are reduced as the line segments are extended. It is therefore concluded that, in practice, the three points enclosing the largest area should be selected to obtain better accuracy.
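The TRIAD-style frame construction described above can be sketched as follows; this is an illustrative implementation assuming the three points are given as 3-vectors in the stereovision frame, with the axis conventions stated in the text (x along the first edge, z normal to the plane, y completing the right-handed triad):

```python
import numpy as np

def feature_frame_from_points(p1, p2, p3):
    """Build the target feature frame from three tracked points.
    Axis x: along p1 -> p2; axis z: normal of the plane (p1, p2, p3);
    axis y completes the right-handed triad. Returns the rotation
    matrix whose rows are the frame axes, i.e., it maps vision-frame
    vectors into the feature frame."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    x = p2 - p1
    x = x / np.linalg.norm(x)
    z = np.cross(p2 - p1, p3 - p1)   # plane normal (unnormalized)
    z = z / np.linalg.norm(z)
    y = np.cross(z, x)               # completes the orthonormal triad
    return np.vstack((x, y, z))
```

The returned matrix is orthonormal with determinant +1 by construction, which is why degenerate (nearly collinear) point triples, whose cross product is close to zero, must be rejected during point selection.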
3.2. Attitude Determination Based on QUEST
As there are often more than three feature points available in the view of the stereo vision system, it is better to utilize all available points in view (their number is denoted as [figure omitted; refer to PDF] ) to improve the accuracy of the relative attitude determination. The coordinates of all feature points except [figure omitted; refer to PDF] in the frame [figure omitted; refer to PDF] can then be calculated as follows: [figure omitted; refer to PDF] where [figure omitted; refer to PDF] is the vector inner product. As the measurements are repeated, averaging is applied and more accurate coordinates [figure omitted; refer to PDF] are determined in the frame [figure omitted; refer to PDF] . The components of [figure omitted; refer to PDF] are described as [figure omitted; refer to PDF] where [figure omitted; refer to PDF] is the number of measurements.
A set of observation unit vectors [figure omitted; refer to PDF] are defined as [figure omitted; refer to PDF]
Similarly, a set of reference unit vectors [figure omitted; refer to PDF] are defined as [figure omitted; refer to PDF]
According to the QUEST algorithm, an attitude matrix [figure omitted; refer to PDF] can be found that minimizes the loss function [figure omitted; refer to PDF] where [figure omitted; refer to PDF] is the weight of the [figure omitted; refer to PDF] th vector pair, defined as [figure omitted; refer to PDF]
The algorithm for optimally solving (8) is presented in the literature [22].
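For a runnable reference, the minimization of the Wahba-type loss function above can be solved with Davenport's q-method, a close relative of QUEST that finds the optimal attitude quaternion as the dominant eigenvector of a 4 × 4 matrix; this sketch assumes unit observation and reference vectors and a scalar-first quaternion convention, which may differ from the paper's notation:

```python
import numpy as np

def davenport_q(b_vecs, r_vecs, weights):
    """Solve Wahba's problem via Davenport's q-method: the optimal
    quaternion is the eigenvector of the 4x4 K matrix associated with
    its largest eigenvalue.  b_vecs: unit observation vectors,
    r_vecs: unit reference vectors, weights: per-pair weights."""
    B = sum(w * np.outer(b, r) for w, b, r in zip(weights, b_vecs, r_vecs))
    S = B + B.T
    sigma = np.trace(B)
    z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
    K = np.zeros((4, 4))
    K[0, 0] = sigma
    K[0, 1:] = z
    K[1:, 0] = z
    K[1:, 1:] = S - sigma * np.eye(3)
    eigval, eigvec = np.linalg.eigh(K)      # ascending eigenvalues
    q = eigvec[:, np.argmax(eigval)]        # scalar-first quaternion
    return q / np.linalg.norm(q)
```

QUEST itself avoids the full eigendecomposition by solving the characteristic equation iteratively, but both methods maximize the same quadratic form and return the same optimal attitude.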
3.3. Strategy for Updating the Target Feature Coordinate System
As the target satellite is tumbling in space, some feature points will be lost and new feature points will appear in the view of the stereo vision system. A new target feature coordinate system [figure omitted; refer to PDF] is established dynamically according to the measurement requirements. To keep the pose estimation continuous, it is necessary to determine the offset and orientation between the frames [figure omitted; refer to PDF] and [figure omitted; refer to PDF] ( [figure omitted; refer to PDF] is the original target feature coordinate system, and the rotation angle between the frame [figure omitted; refer to PDF] and [figure omitted; refer to PDF] is 0).
According to (4) and (5), accurate coordinates of the feature points in [figure omitted; refer to PDF] are obtained. Three new feature points [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , and [figure omitted; refer to PDF] are selected from the feature point set to establish [figure omitted; refer to PDF] , with [figure omitted; refer to PDF] as the new origin. The offset of the origin of the frame [figure omitted; refer to PDF] in [figure omitted; refer to PDF] is [figure omitted; refer to PDF] , which is denoted as [figure omitted; refer to PDF] . Similar to (2), the orientation of the frame [figure omitted; refer to PDF] relative to [figure omitted; refer to PDF] can be described as [figure omitted; refer to PDF]
In general, the relationship between the coordinates of the feature points in [figure omitted; refer to PDF] and [figure omitted; refer to PDF] can be described as [figure omitted; refer to PDF] where [figure omitted; refer to PDF] is the index of the established target feature coordinate system, [figure omitted; refer to PDF] are the coordinates of the [figure omitted; refer to PDF] th feature point in the frame [figure omitted; refer to PDF] , [figure omitted; refer to PDF] are the coordinates in [figure omitted; refer to PDF] , and [figure omitted; refer to PDF] is the translation vector between [figure omitted; refer to PDF] and [figure omitted; refer to PDF] .
As the rotation matrix [figure omitted; refer to PDF] and the translation vector [figure omitted; refer to PDF] between the frames [figure omitted; refer to PDF] and [figure omitted; refer to PDF] can be derived according to (11), the coordinates of any natural feature point in the frame [figure omitted; refer to PDF] can be obtained even if the feature points used to establish the frame [figure omitted; refer to PDF] are lost from view.
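The chaining of successive feature-frame transforms can be sketched as follows; it assumes each re-established frame is related to its predecessor by a known frame-to-frame rotation and origin offset (names here are hypothetical):

```python
import numpy as np

def chain_to_original(R_list, t_list, p_k):
    """Express a point measured in the k-th (re-established) feature
    frame in the original frame F(0) by chaining frame-to-frame
    transforms: R_list[j] rotates frame j+1 into frame j, and
    t_list[j] is the origin of frame j+1 expressed in frame j, so
        p_j = R_list[j] @ p_{j+1} + t_list[j]
    is applied for j = k-1, ..., 0."""
    p = np.asarray(p_k, dtype=float)
    for R, t in zip(reversed(R_list), reversed(t_list)):
        p = R @ p + t
    return p
```

Because each hand-off multiplies in one more rotation and offset, errors accumulate with the number of re-establishments, which is why the text recomputes accurate averaged coordinates before each switch.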
4. Estimation of the Location of Mass Center and Translation Parameters
In order to estimate the location of the mass center of the target satellite and the relative position and velocity between the chaser and the target, a standard Kalman filter is designed to process the coordinates of the feature points, with the CW equations as the dynamic model.
4.1. CW Equations
When two satellites are nearly in the same circular orbit and close to each other, the CW equations accurately describe the relative translational dynamics, [figure omitted; refer to PDF] where [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , and [figure omitted; refer to PDF] account for the effects of orbital mechanics, [figure omitted; refer to PDF] and [figure omitted; refer to PDF] are identity and zero matrices, respectively, [figure omitted; refer to PDF] is the matrix dimension, and [figure omitted; refer to PDF] is the disturbance force on a unit mass. [figure omitted; refer to PDF] is the displacement vector from the target mass center to the chaser mass center, expressed in the target orbital frame [figure omitted; refer to PDF] , and [figure omitted; refer to PDF] is the relative velocity between the target and the chaser.
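A discrete state-transition matrix for the CW dynamics can be obtained numerically via the matrix exponential; the sketch below uses a common CW convention (x radial, y along-track, z cross-track, mean motion n), which may differ from the paper's frame definition:

```python
import numpy as np
from scipy.linalg import expm

def cw_transition(n, dt):
    """Discrete transition matrix Phi = expm(A*dt) for the CW equations
    with state [x, y, z, vx, vy, vz] and mean motion n:
        x'' = 3 n^2 x + 2 n vy   (radial)
        y'' = -2 n vx            (along-track)
        z'' = -n^2 z             (cross-track, a decoupled oscillator)."""
    A = np.zeros((6, 6))
    A[:3, 3:] = np.eye(3)
    A[3, 0] = 3 * n**2
    A[3, 4] = 2 * n
    A[4, 3] = -2 * n
    A[5, 2] = -n**2
    return expm(A * dt)
```

The closed-form CW transition matrix exists and is often used instead; the matrix exponential gives the same result while avoiding transcription errors in the closed-form entries.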
4.2. Measurement Model
The estimation of the relative position and velocity is also the process of locating the target mass center. When a series of feature points is continuously tracked, the coordinates of the feature points are output in the frame [figure omitted; refer to PDF] . The relative attitude between the target satellite and the stereo vision system is determined according to the methods in Section 3. According to the geometrical relationship, the displacement vector [figure omitted; refer to PDF] is described as [figure omitted; refer to PDF] where [figure omitted; refer to PDF] is the coordinate value of the [figure omitted; refer to PDF] th natural feature point in the frame [figure omitted; refer to PDF] , [figure omitted; refer to PDF] is the location error of the mass center, illustrated as [figure omitted; refer to PDF] in Figure 1, [figure omitted; refer to PDF] is the vector from the origin of the frame [figure omitted; refer to PDF] to the [figure omitted; refer to PDF] th feature point, expressed in the frame [figure omitted; refer to PDF] , and [figure omitted; refer to PDF] is the rotation matrix from the frame [figure omitted; refer to PDF] to [figure omitted; refer to PDF] . As the target is a malfunctioning, noncooperative satellite, no information, including the attitude parameters, can be provided to the service spacecraft. An available way to measure the target attitude is to combine the relative attitude with the service spacecraft attitude.
It is written as [figure omitted; refer to PDF] where [figure omitted; refer to PDF] is the rotation matrix from the frame [figure omitted; refer to PDF] to the frame [figure omitted; refer to PDF] which will be provided by attitude and orbit control system (AOCS), [figure omitted; refer to PDF] is ignored as it is nearly a 3 × 3 identity matrix, [figure omitted; refer to PDF] is defined in Section 2.4, [figure omitted; refer to PDF] is determined by methods described in Sections 3.1 and 3.2, and [figure omitted; refer to PDF] is rewritten as [figure omitted; refer to PDF]
[figure omitted; refer to PDF] can be described as [figure omitted; refer to PDF] where [figure omitted; refer to PDF] is the coordinates of the [figure omitted; refer to PDF] th natural feature point in the frame [figure omitted; refer to PDF] , which is measured by stereo vision system, and [figure omitted; refer to PDF] and [figure omitted; refer to PDF] are defined in Section 2.4.
There are several natural feature points on the surface of the target spacecraft. If the stereo vision system tracks these feature points simultaneously during the relative pose estimation, better measurements can be achieved by averaging. Substituting (14) into (13), the observation equations can be rewritten as [figure omitted; refer to PDF] where [figure omitted; refer to PDF] is the measurement noise. If the stereo vision system is carefully calibrated, the coordinate error of a feature point can be approximately regarded as white noise. Therefore, [figure omitted; refer to PDF] is approximately modeled as a zero-mean white Gaussian process with covariance matrix [figure omitted; refer to PDF] . Obviously, [figure omitted; refer to PDF]
4.3. Filter Design
In order to suppress the measurement noise and estimate the states that cannot be measured directly, a standard Kalman filter is designed.
Define state variables as follows: [figure omitted; refer to PDF]
Then, the dynamic equations for estimating the relative position and velocity can be written as follows: [figure omitted; refer to PDF]
The model for estimating the mass center location and the translational states is a linear system according to (17) and (20). Putting the linear system (20) into the standard state-space form [figure omitted; refer to PDF] , the corresponding state transition matrix [figure omitted; refer to PDF] and the covariance matrix of the discrete-time process noise [figure omitted; refer to PDF] are as follows: [figure omitted; refer to PDF]
Equation (17) is rewritten as measurement equations: [figure omitted; refer to PDF]
Initialization is [figure omitted; refer to PDF]
Time update is [figure omitted; refer to PDF]
Measurement update is [figure omitted; refer to PDF]
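The time and measurement updates above follow the standard Kalman filter cycle, which can be sketched generically (the matrix names follow the usual KF notation rather than the paper's symbols):

```python
import numpy as np

def kf_step(x, P, Phi, Q, z, H, R):
    """One cycle of the standard Kalman filter: time update (predict)
    with transition Phi and process noise Q, then measurement update
    (correct) with measurement z, model H, and measurement noise R."""
    # time update
    x_pred = Phi @ x
    P_pred = Phi @ P @ Phi.T + Q
    # measurement update
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

For the translational estimator, the state holds the relative position, velocity, and mass-center offset, and the same cycle is repeated at every vision-system sample.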
5. Estimation of the Moment-of-Inertia Ratios and Orientation Parameters
As a noncooperative target satellite is often a malfunctioning satellite tumbling freely at unknown rates in space, the moment-of-inertia ratios are estimated instead of the moments of inertia themselves. Because the axes of the frame [figure omitted; refer to PDF] are not aligned with the three body axes of symmetry, the products of inertia cannot be neglected.
5.1. Measurement Model
The AOCS of the service spacecraft can also output its attitude relative to the inertial system. A measurement of the target attitude is then obtained by combining the relative attitude and the service spacecraft attitude. Then, [figure omitted; refer to PDF] where [figure omitted; refer to PDF] is the attitude of the service satellite relative to the frame [figure omitted; refer to PDF] , [figure omitted; refer to PDF] is described in Section 2, and [figure omitted; refer to PDF] is [figure omitted; refer to PDF] , which is described in (14). The rotation matrix [figure omitted; refer to PDF] can be rewritten in quaternion form [figure omitted; refer to PDF] , and [figure omitted; refer to PDF] can be expressed as [figure omitted; refer to PDF] where [figure omitted; refer to PDF] is the target quaternion measurement, [figure omitted; refer to PDF] is the quaternion which defines the rotation from the inertial frame to the target body frame, and [figure omitted; refer to PDF] is the measurement noise. The symbol [figure omitted; refer to PDF] denotes quaternion multiplication.
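The quaternion multiplication used in this measurement model can be sketched as follows; this assumes a scalar-first, Hamilton-convention quaternion, which may differ from the paper's convention:

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton product p (x) q for scalar-first quaternions
    [w, x, y, z]; composing two rotations this way corresponds to
    multiplying their rotation matrices."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])
```

Quaternion multiplication is not commutative, so the order in which the relative attitude and the service spacecraft attitude are composed must match the chosen convention.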
5.2. System Dynamic Equations
The time derivative of the quaternion can be expressed as follows: [figure omitted; refer to PDF] where [figure omitted; refer to PDF] , and [figure omitted; refer to PDF] is the angular velocity of the target satellite with respect to inertial space, resolved in the target body-fixed coordinate system [figure omitted; refer to PDF] .
According to Euler's equation, dynamics of the rotational motion for a rigid target satellite can be expressed as [figure omitted; refer to PDF] where [figure omitted; refer to PDF] is disturbing torque vector and [figure omitted; refer to PDF] is the inertia matrix, [figure omitted; refer to PDF]
We define the element [figure omitted; refer to PDF] as 1 and then obtain the moment-of-inertia ratio matrix as [figure omitted; refer to PDF]
Then, (29) can be rewritten as [figure omitted; refer to PDF] where [figure omitted; refer to PDF] and the elements [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , and [figure omitted; refer to PDF] are constants. Because the dynamic equations form a nonlinear system, an EKF and a UKF will be employed to estimate the moment-of-inertia ratios and the relative attitude.
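A short sketch illustrates why only the inertia ratios are observable from the attitude motion: the torque-free Euler dynamics are unchanged when the inertia matrix is scaled uniformly, so any common scale factor drops out (names here are illustrative):

```python
import numpy as np

def omega_dot(omega, J_ratio, torque=None):
    """Euler's rotational dynamics written with the moment-of-inertia
    ratio matrix: omega_dot = J^{-1} (T - omega x (J omega)).
    With T = 0 this is invariant to a uniform scaling of J, which is
    the rationale for estimating ratios rather than absolute inertias."""
    omega = np.asarray(omega, dtype=float)
    J = np.asarray(J_ratio, dtype=float)
    T = np.zeros(3) if torque is None else np.asarray(torque, dtype=float)
    return np.linalg.solve(J, T - np.cross(omega, J @ omega))
```

Note that the invariance holds only for the torque-free case; with a known external torque, the absolute inertia scale would become observable.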
5.3. EKF Filter Design
Define the state variables as follows: [figure omitted; refer to PDF] where [figure omitted; refer to PDF] . When the dynamic equations are linearized, [figure omitted; refer to PDF] is ignored, as [figure omitted; refer to PDF] is not an independent variable and its variations are only of second order. Then, the error-state vector [figure omitted; refer to PDF] is described as [figure omitted; refer to PDF] where the components of [figure omitted; refer to PDF] are defined as follows: [figure omitted; refer to PDF] where [figure omitted; refer to PDF] .
The dynamic equations can be given as [figure omitted; refer to PDF]
For simplicity, the process equations (36) and the measurement equations (27) are rewritten as [figure omitted; refer to PDF] where the state noise [figure omitted; refer to PDF] and the measurement noise [figure omitted; refer to PDF] are assumed to be mutually independent white noise.
Define the state transfer matrix [figure omitted; refer to PDF] [figure omitted; refer to PDF] where [figure omitted; refer to PDF] is the Jacobian matrix of partial derivatives of [figure omitted; refer to PDF] with respect to state variable [figure omitted; refer to PDF] ; that is, [figure omitted; refer to PDF] where [figure omitted; refer to PDF]
If (27) is quaternion-multiplied by [figure omitted; refer to PDF] on the left, the linearized measurement equation is obtained by ignoring the high-order terms, [figure omitted; refer to PDF]
For simplicity, (41) is rewritten as [figure omitted; refer to PDF]
Initialization is [figure omitted; refer to PDF]
Time update is [figure omitted; refer to PDF]
Measurement update is [figure omitted; refer to PDF]
Therefore, the estimated states [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , and [figure omitted; refer to PDF] can be updated immediately after the measurement update according to (35).
5.4. UKF Filter Design
Although the EKF is the estimation algorithm most widely used for nonlinear systems, the UKF is more accurate when the model is highly nonlinear [23, 24]. Therefore, a UKF is designed to achieve better estimation.
The initialization of the UKF is the same as that of the EKF. For [figure omitted; refer to PDF] , calculate the sigma points: [figure omitted; refer to PDF] where [figure omitted; refer to PDF] is the number of states and [figure omitted; refer to PDF] is the scaling factor for the sigma points, calculated as [figure omitted; refer to PDF] where [figure omitted; refer to PDF] is the scaling parameter which determines the spread of the sigma points and is usually set to a small positive value ( [figure omitted; refer to PDF] ), and [figure omitted; refer to PDF] is a secondary scaling parameter which is usually set to 0.
Time update is [figure omitted; refer to PDF] where [figure omitted; refer to PDF] and [figure omitted; refer to PDF] are the weighting coefficients, defined as follows: [figure omitted; refer to PDF] where [figure omitted; refer to PDF] is a scaling parameter used to incorporate prior knowledge of the distribution of [figure omitted; refer to PDF] (for a Gaussian distribution, [figure omitted; refer to PDF] is optimal).
Measurement update is [figure omitted; refer to PDF] where [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , and [figure omitted; refer to PDF] are calculated by using the same method described in Section 5.3.
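The sigma-point generation and weights described above can be sketched as follows, using the standard unscented-transform formulas with the parameters alpha, beta, and kappa as defined in the text:

```python
import numpy as np

def sigma_points(x, P, alpha=1e-3, beta=2.0, kappa=0.0):
    """Generate the 2n+1 sigma points and the mean/covariance weights
    of the unscented transform (alpha: spread, beta = 2 is optimal for
    Gaussian priors, kappa: secondary scaling parameter)."""
    n = len(x)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)     # matrix square root
    pts = np.vstack([x, x + S.T, x - S.T])    # rows are sigma points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    return pts, wm, wc
```

By construction, the weighted sigma-point mean reproduces x exactly and the weighted spread reproduces P exactly, so the unscented transform captures the posterior mean and covariance to second order without computing Jacobians, unlike the EKF of Section 5.3.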
6. Simulation and Evaluation
A computer simulation is developed to verify the validity of the algorithms proposed in this paper. The simulation software integrates the orbital and attitude dynamics, the camera photograph model, and the relative pose estimation algorithms. The simulation is implemented in MATLAB, and Figure 2 shows the simulation blocks in the MATLAB environment.
Figure 2: Realization of the simulation software.
[figure omitted; refer to PDF]
It is assumed that the service satellite and the malfunctioned satellite are in the same orbit with different orbital phases. The initial orbit elements of the target and chaser are given in Table 1.
Table 1: Initial orbit elements.
| a (km) | e | i (°) | Ω (°) | ω (°) | f (°) |
Target | 6700 | 0 | 0 | 0 | 0 | 0 |
Chaser | 6700 | 0 | 0 | 0 | 0 | -8.5 × 10^-5 |
Thus, the orbit altitude is about 300 km and the service satellite is initially located about 10 m behind the target satellite.
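These two figures follow directly from Table 1; a quick check (assuming a mean equatorial Earth radius of 6378 km for the altitude):

```python
import math

a = 6700.0                      # semimajor axis [km] (Table 1)
altitude = a - 6378.0           # assumed mean equatorial Earth radius
dphase_deg = 8.5e-5             # phase offset of the chaser [deg] (Table 1)
separation = a * 1e3 * math.radians(dphase_deg)   # along-track arc length [m]
print(altitude)                 # 322.0 -> about 300 km
print(round(separation, 2))     # 9.94  -> about 10 m behind the target
```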
The target satellite undergoes torque-free tumbling motion. It is initially aligned with the orbital frame, and its initial angular velocity is assumed to be [figure omitted; refer to PDF]. The moment-of-inertia matrix in the target body-fixed coordinate system ([figure omitted; refer to PDF]) is set as [figure omitted; refer to PDF]
Therefore, the corresponding matrix of moment-of-inertia ratios is as follows: [figure omitted; refer to PDF]
As image processing steps such as image filtering, edge detection, and feature matching are discussed in many papers, image processing is treated as a solved problem here, and the coordinates of the natural feature points in the image plane are regarded as available for the estimation process. The specifications of the feature points in frame [figure omitted; refer to PDF] are given in Table 2.
Table 2: Feature points specifications in the simulation.
Number | [figure omitted; refer to PDF] (m) | [figure omitted; refer to PDF] (m) | [figure omitted; refer to PDF] (m) |
1 | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] |
2 | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] |
3 | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] |
4 | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] |
5 | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] |
6 | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] |
The two cameras are parallel to each other with a baseline of 0.5 m. The focal length of each camera is set to 2.5 cm. The two cameras of the stereo vision system are mounted at [figure omitted; refer to PDF] m and [figure omitted; refer to PDF] m in frame [figure omitted; refer to PDF]. The CCD is assumed to have 2048 × 2048 pixel elements with a pixel size of 3.2 μm × 3.2 μm. The coordinates of the feature points in the image plane are corrupted by white Gaussian noise of 0.5 pixel ([figure omitted; refer to PDF]) to simulate the image processing error.
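With these camera parameters, the measurement geometry at the 10 m operating range can be sketched by first-order error propagation (the formulas are the standard pinhole stereo relations, not taken from the paper):

```python
f = 0.025            # focal length [m]
b = 0.5              # stereo baseline [m]
pixel = 3.2e-6       # pixel pitch [m]
Z = 10.0             # approximate target range [m]

d = f * b / Z                            # disparity on the sensor [m]
sigma_d = 0.5 * pixel                    # 0.5-pixel matching noise
sigma_depth = Z**2 / (f * b) * sigma_d   # first-order error along the optical axis
sigma_lateral = (Z / f) * sigma_d        # first-order error in the image plane
print(d / pixel)                         # about 390.6 pixels of disparity
print(sigma_depth * 1e3)                 # about 12.8 mm along the optical axis
print(sigma_lateral * 1e3)               # about 0.64 mm laterally
```

This budget shows the stereo depth error is roughly an order of magnitude larger than the lateral error, which is why the errors along the camera optical axis dominate in the simulation results below.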
The location errors of the mass center [figure omitted; refer to PDF] are set as [figure omitted; refer to PDF] mm, and the initial values of the corresponding states are assumed to be [figure omitted; refer to PDF] mm. The initial relative position errors are set as [figure omitted; refer to PDF] mm and the initial relative velocity errors as [figure omitted; refer to PDF] mm/s. The initial relative attitude errors and angular velocity errors are given as [figure omitted; refer to PDF] and [figure omitted; refer to PDF], respectively.
The first three points listed in Table 2 are selected to establish the frame [figure omitted; refer to PDF] by TRIAD, with point 1 used as the origin. It is assumed that point 1 is lost at 10 s, after which points 2-4 are selected to establish the new frame [figure omitted; refer to PDF] with point 4 as the origin. According to (10) and (15), the target attitude with respect to the chaser (the orientation of [figure omitted; refer to PDF] relative to [figure omitted; refer to PDF]) can be measured continuously. The attitude determination errors are plotted in Figure 3. The errors in the period 10 s-300 s vary similarly to those in the period 0 s-10 s, which shows that the method described in Section 3.3 is feasible.
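The frame construction and attitude measurement rely on the classic TRIAD solution [22], which can be sketched as follows (a minimal illustration; the two vector pairs stand for the feature-point difference vectors, e.g. p2 - p1 and p3 - p1, expressed in the target body frame and measured in the camera frame):

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def triad(b1, b2, r1, r2):
    """Attitude matrix A with b = A r, from two noncollinear vector pairs."""
    t1r, t1b = unit(r1), unit(b1)                              # primary axis
    t2r, t2b = unit(np.cross(r1, r2)), unit(np.cross(b1, b2))  # normal axis
    t3r, t3b = np.cross(t1r, t2r), np.cross(t1b, t2b)          # completes the triad
    Tr = np.column_stack([t1r, t2r, t3r])   # triad in the reference frame
    Tb = np.column_stack([t1b, t2b, t3b])   # triad in the observed frame
    return Tb @ Tr.T
```

When point 1 is lost, the same construction is simply repeated with the surviving points, which is what keeps the attitude measurement continuous across the switch at 10 s.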
Figure 3: Attitude determination errors using TRIAD.
[figure omitted; refer to PDF]
Attitude determination errors using QUEST are shown in Figure 4. The QUEST algorithm clearly achieves a higher precision than TRIAD because all tracked feature points are utilized: the standard deviations of TRIAD are 0.27°, 0.11°, and 0.26°, whereas those of QUEST are 0.19°, 0.04°, and 0.18°. The QUEST algorithm is therefore applied in the following simulations.
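QUEST solves Wahba's problem over all tracked points at once. A compact stand-in is Davenport's q-method, which finds the same optimal quaternion by an eigendecomposition (QUEST [22] computes this maximal eigenvector more efficiently); a sketch, with illustrative names:

```python
import numpy as np

def wahba_quaternion(body_vecs, ref_vecs, weights=None):
    """Optimal quaternion [qx, qy, qz, qw] with b = A(q) r (Davenport q-method)."""
    if weights is None:
        weights = np.ones(len(body_vecs))
    # attitude profile matrix B = sum of w_i * b_i r_i^T
    B = sum(w * np.outer(b, r) for w, b, r in zip(weights, body_vecs, ref_vecs))
    S = B + B.T
    sigma = np.trace(B)
    z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
    K = np.zeros((4, 4))                    # Davenport K matrix
    K[:3, :3] = S - sigma * np.eye(3)
    K[:3, 3] = z
    K[3, :3] = z
    K[3, 3] = sigma
    vals, vecs = np.linalg.eigh(K)          # symmetric eigenproblem
    q = vecs[:, np.argmax(vals)]            # eigenvector of the largest eigenvalue
    return q / np.linalg.norm(q)
```

Weighting every observation pair is exactly why this family of methods outperforms TRIAD, which discards all but two vectors.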
Figure 4: Attitude determination errors using QUEST.
[figure omitted; refer to PDF]
The standard Kalman filter for estimating the location of the mass center and the translational parameters is tuned with the filter parameters defined as [figure omitted; refer to PDF], [figure omitted; refer to PDF], and [figure omitted; refer to PDF]. The simulation results are shown in Figures 5 to 11.
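The translational filter is a standard linear Kalman filter; its per-step structure can be sketched generically as follows (the textbook predict/update cycle, not the paper's exact state definition, which stacks the relative position, relative velocity, and mass-center location):

```python
import numpy as np

def kf_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of the standard (linear) Kalman filter."""
    x = F @ x                            # state propagation
    P = F @ P @ F.T + Q                  # covariance propagation
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ y                        # state correction
    P = (np.eye(x.size) - K @ H) @ P     # covariance correction
    return x, P
```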
Figure 5: Estimation errors of relative position.
[figure omitted; refer to PDF]
The estimation errors of the relative position and relative velocity are plotted in Figures 5 and 6, respectively, and the estimation of the mass center location is displayed in Figure 7. The convergence time for recognizing the mass center location and estimating the relative position and velocity is less than 20 s. The accuracy of the relative navigation is fairly high, with triaxial relative position errors less than 2.5 mm, 5.5 mm, and 3.0 mm and triaxial relative velocity errors less than 0.2 mm/s in each axis. The location errors of the mass center are less than 2.0 mm, 6.0 mm, and 1.1 mm in the three axes. It should also be noticed that the [figure omitted; refer to PDF]-axis errors are larger than those of the other two axes, because the positioning errors of the stereo vision system are larger along the optical axis direction.
Figure 6: Estimation errors of relative velocity.
[figure omitted; refer to PDF]
Figure 7: Estimation of the mass center location.
[figure omitted; refer to PDF]
The EKF and UKF are utilized to estimate the moment-of-inertia ratios and the rotational parameters. In order to compare the performance of the two filters, their common parameters are defined identically as [figure omitted; refer to PDF], [figure omitted; refer to PDF], and [figure omitted; refer to PDF]. The remaining UKF parameters are tuned as [figure omitted; refer to PDF], [figure omitted; refer to PDF], and [figure omitted; refer to PDF].
The attitude estimation errors, that is, the differences between the reference attitude of the target and the attitudes estimated by the EKF and UKF, are plotted together in Figure 8. The simulation results show that both estimation algorithms converge. Comparing the UKF output errors with those of the EKF, the UKF is more accurate: in the period 50 s-300 s, the attitude determination errors of the UKF are less than 0.08°, 0.12°, and 0.08°, whereas those of the EKF are less than 0.38°, 0.52°, and 0.34°.
Figure 8: Estimation errors of relative attitude.
[figure omitted; refer to PDF]
Figure 9 shows the estimation errors of the angular velocity. The performance of the two algorithms for angular velocity is similar to that for attitude: over the same period, the estimation errors of the UKF are less than 0.01°/s, 0.025°/s, and 0.01°/s, whereas those of the EKF are less than 0.038°/s, 0.11°/s, and 0.038°/s.
Figure 9: Estimation errors of relative angular velocity.
[figure omitted; refer to PDF]
The estimation errors of the moment-of-inertia ratios are shown in Figures 10 and 11. Although [figure omitted; refer to PDF] is a linear model, the convergence time of the UKF is shorter than that of the EKF because the UKF estimates the angular velocity more accurately. However, the UKF and EKF achieve similar accuracy for the moment-of-inertia ratios in the period 100 s-300 s. It is also noticed that estimating the moment-of-inertia ratios takes longer to converge than estimating the attitude and angular velocity. The reason is that the moment-of-inertia ratios cannot be observed directly, so more time is required to accumulate the attitude errors produced by the errors in the moment-of-inertia ratios. The estimation results, based on the UKF errors in the period 200 s-250 s, are displayed in Table 3.
Table 3: Estimation of moment-of-inertia ratios.
| [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] |
True value | 1.3 | 1.2 | 0.30 | 0.25 | 0.15 |
Estimation | 1.29 | 1.19 | 0.30 | 0.25 | 0.15 |
Figure 10: Estimation of moment-of-inertia ratios.
[figure omitted; refer to PDF]
Figure 11: Estimation of moment-of-inertia ratios.
[figure omitted; refer to PDF]
7. Conclusions
In order to capture a malfunctioned satellite, this paper proposes an algorithm for estimating the relative pose and inertial parameters of the target satellite using a single stereo vision system. Compared to the dense stereo algorithm presented in [17], the proposed method avoids degeneracy when the target has a high degree of axial symmetry and reduces the number of sensors and the complexity of the operation. As the UKF achieves better performance than the EKF, the UKF is recommended for the rotational estimation. The simulation results demonstrate that the developed algorithm can estimate the relative pose, the location of the mass center, and the moment-of-inertia ratios with high precision. However, the proposed method also degenerates when there are no distinct features on the target surface; in that case, a better choice in practice is to use the two methods jointly. Experimental verification of the proposed algorithm should be conducted to ensure its practical applicability, which will be a focus of future study.
Acknowledgments
This study was supported by the National Natural Science Foundation of China (Grant no. 61203197), the Aerospace Science and Technology Innovation Fund of the China Aerospace Science and Technology Corporation, and the Innovation Fund of Shanghai Aerospace Science and Technology (SAST 201308). The authors fully appreciate the financial support.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
[1] A. M. Long, M. G. Richards, D. E. Hastings, "On-orbit servicing: a new value proposition for satellite design and operation," Journal of Spacecraft and Rockets , vol. 44, no. 4, pp. 964-976, 2007.
[2] J. Liang, O. Ma, "Angular velocity tracking for satellite rendezvous and docking," Acta Astronautica , vol. 69, no. 11-12, pp. 1019-1028, 2011.
[3] W. Xu, B. Liang, B. Li, Y. Xu, "A universal on-orbit servicing system used in the geostationary orbit," Advances in Space Research , vol. 48, no. 1, pp. 95-119, 2011.
[4] A. Ellery, J. Kreisel, B. Sommer, "The case for robotic on-orbit servicing of spacecraft: spacecraft reliability is a myth," Acta Astronautica , vol. 63, no. 5-6, pp. 632-648, 2008.
[5] R. B. Friend, "Orbital express program summary and mission over-view," in Sensors and Systems for Space Applications II, 695803, vol. 6958, of Proceedings of SPIE, Orlando, Fla, USA, March 2008.
[6] R. M. Pinson, R. T. Howard, A. F. Heaton, "Orbital Express advanced video guidance sensor: Ground testing, flight results and comparisons," in Proceedings of the AIAA Guidance, Navigation and Control Conference and Exhibit, pp. 1-12, Honolulu, Hawaii, USA, August 2008.
[7] A. Flores-Abad, O. Ma, K. Pham, S. Ulrich, "A review of space robotics technologies for on-orbit servicing," Progress in Aerospace Sciences , vol. 68, pp. 1-26, 2014.
[8] A. B. Bosse, W. J. Barnds, M. A. Brown, N. G. Creamer, A. Feerst, C. G. Henshaw, A. S. Hope, B. E. Kelm, P. A. Klein, F. Pipitone, B. E. Plourde, B. P. Whalen, "SUMO: spacecraft for the universal modification of orbits," in Spacecraft Platforms and Infrastructure, vol. 5419, of Proceedings of SPIE, pp. 36-46, April 2004.
[9] T. J. Debus, S. P. Dougherty, "Overview and performance of the front-end robotics enabling near-term demonstration (FREND) robotic arm," in Proceedings of the AIAA Infotech@Aerospace Conference, pp. 1-12, Seattle, Wash, USA, April 2009.
[10] K. Landzettel, "Ongoing Space Robotics Missions: TECSAS/DEOS," http://www.dlr.de/rm/en/desktopdefault.aspx/tabid-3825/5963_read-8759
[11] F. Aghili, "Optimal control of a space manipulator for detumbling of a target satellite," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '09), pp. 3019-3024, Kobe, Japan, May 2009.
[12] L. Tarabini, J. Gil, F. Gandia, M. Á. Molina, J. M. del Cura, G. Ortega, "Ground guided CX-OLEV rendez-vous with uncooperative geostationary satellite," Acta Astronautica , vol. 61, pp. 312-325, 2007.
[13] C. Lyn, G. Mooney, D. Bush, "Computer vision systems for robotic servicing of the hubble space telescope," in Proceedings of the AIAA SPACE Conference & Exposition, pp. 1-13, Long Beach, Calif, USA, September 2007.
[14] H. Nagamatsu, T. Kubota, I. Nakatani, "Capture strategy for retrieval of a tumbling satellite by a space robotic manipulator," in Proceedings of the 13th IEEE International Conference on Robotics and Automation, pp. 70-75, April 1996.
[15] J. K. Thienel, S. Z. Queen, J. M. VanEepoel, "Hubble space telescope angular velocity estimation during the robotic servicing mission," in Proceedings of the AIAA Guidance, Navigation, and Control Conference and Exhibit, pp. 1-15, San Francisco, Calif, USA, August 2005.
[16] J. K. Thienel, R. M. Sanner, "Hubble space telescope angular velocity estimation during the robotic servicing mission," Journal of Guidance, Control, and Dynamics , vol. 30, no. 1, pp. 29-34, 2007.
[17] B. Roberts, J. Pellegrino, "Robotic servicing technology development," in Proceedings of the AIAA SPACE 2013 Conference & Exposition, pp. 1-10, San Diego, Calif, USA, September 2013.
[18] B. E. Tweddle, Computer Vision-Based Localization and Mapping of an Unknown, Uncooperative and Spinning Target for Spacecraft Proximity Operations, Massachusetts Institute of Technology, Cambridge, Mass, USA, 2013.
[19] M. D. Lichter, S. Dubowsky, "State, shape, and parameter estimation of space objects from range images," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 2974-2979, New Orleans, La, USA, May 2004.
[20] J. M. Kelsey, J. Byrne, M. Cosgrove, "Vision-based relative pose estimation for autonomous rendezvous and docking," in Proceedings of the Aerospace Conference, pp. 1-20, Big Sky, Mont, USA, 2006.
[21] X. Du, B. Liang, W. Xu, Y. Qiu, "Pose measurement of large non-cooperative satellite based on collaborative cameras," Acta Astronautica , vol. 68, no. 11-12, pp. 2047-2065, 2011.
[22] M. D. Shuster, S. D. Oh, "Three-axis attitude determination from vector observations," Journal of Guidance and Control , vol. 4, no. 1, pp. 70-77, 1981.
[23] S. J. Julier, J. K. Uhlmann, H. F. Durrant-Whyte, "A new approach for filtering nonlinear systems," in Proceedings of the American Control Conference, pp. 1628-1632, Seattle, Wash, USA, June 1995.
[24] S. J. Julier, J. K. Uhlmann, "Unscented filtering and nonlinear estimation," Proceedings of the IEEE , vol. 92, no. 3, pp. 401-422, 2004.
Copyright © 2014 Feng Yu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
Autonomous on-orbit servicing is expected to play an important role in future space activities. Acquiring the relative pose information and inertial parameters of the target is one of the key technologies for autonomous capture. In this paper, an estimation method of relative pose based on stereo vision is presented for the final phase of the rendezvous and docking of noncooperative satellites. The proposed estimation method utilizes a sparse stereo vision algorithm instead of a dense stereo algorithm. The method consists of three parts: (1) body frame reestablishment, which establishes the body-fixed frame for the target satellite using the natural features on its surface and measures the relative attitude based on TRIAD and QUEST; (2) translational parameter estimation, which designs a standard Kalman filter to estimate the translational states and the location of the mass center; (3) rotational parameter estimation, which designs an extended Kalman filter and an unscented Kalman filter, respectively, to estimate the rotational states and all the moment-of-inertia ratios. Compared to the dense stereo algorithm, the proposed method can avoid degeneracy when the target has a high degree of axial symmetry and reduce the number of sensors. The validity of the proposed method is verified by numerical simulations.