Abstract
Satellite-based target positioning is vital for applications such as disaster relief and precision mapping. In practice, satellite errors, e.g., thermal deformation and attitude errors, introduce a mix of fixed and random errors into the measured line-of-sight angles, degrading target-positioning accuracy. Motivated by this concern, this study introduces a systematic error self-correction target-positioning method under continuous observations using a single video satellite. After analyzing the error sources and establishing an error-inclusive positioning model, we formulate a dimension-extended equation that estimates both the target position and the fixed biases. Based on this equation, a projection transformation method is proposed to obtain a linearized estimate of the unknown parameters, and an iterative optimization method is then applied to further refine the estimate. In simulation scenarios with large fixed errors, the proposed method improves positioning accuracy by 98.70% compared with state-of-the-art algorithms. Both the simulation and actual data calculation results demonstrate that the proposed algorithm effectively improves target-positioning accuracy under non-ideal error conditions.
1. Introduction
High-precision target positioning is critical in disaster relief, hazard monitoring, unmanned driving, and precision mapping. Target position information is generally obtained by combining global navigation satellite systems, navigation stations, and other navigation systems using onboard navigation devices. However, some targets cannot be equipped with navigation devices or fail to acquire navigation information, requiring external measurement equipment for positioning. In such situations, radio [1,2], radar [3], LiDAR [4,5], and optical approaches [6,7,8] are used to locate targets. Among these approaches, optical positioning is passive and determines target positions without actively emitting electromagnetic waves, offering strong concealment [9]. Additionally, visible light has a short wavelength and enables high observation resolutions, providing high-precision target positioning. Driven by these advantages, this study focuses on target positioning under optical observations.
Common approaches using optical positioning include ground-based [10,11] and satellite [12,13] observations. The latter offers advantages such as a wide field of view, minimal influence from Earth’s curvature and terrain, and broad operational coverage, making satellites a primary measuring platform for target positioning through external observation. However, due to the high relative velocity between the satellite and the target, conventional optical satellites have long imaging intervals, making the continuous observation of targets challenging [14]. In contrast, video satellites, with their short imaging intervals and capability for constant tracking imaging, can effectively overcome the limitations of conventional optical satellites in target positioning [15].
In general, target-positioning methods based on optics include the kinematic estimation method, the image-matching method, and a method that relies entirely on angle measurement data. The kinematic estimation method typically utilizes filtering algorithms to estimate the target position by combining a target kinematic model with observational data [16,17,18,19], and it can be combined with artificial intelligence [20]. However, it cannot be effectively applied when the dynamic model is unknown [21]. Similarly, in practical applications, factors such as the distance between the target and the satellite, camera performance, and target characteristics make it difficult to establish an accurate kinematic model or to perform image matching [22]. In contrast, methods that rely entirely on angle measurement data have wider applicability. This approach determines the target's position using the angular relationship between the satellite and the target under optical observation, that is, employing line-of-sight (LOS) angle intersection to calculate the position [10,23]. Since this method does not require target feature information or other prior data, it overcomes the challenges associated with the kinematic estimation and image-matching methods. Therefore, this study focuses on target positioning under continuous observation using a single video satellite; after the target is continuously observed in stare mode, the fundamental objective is to determine the target position from multiple observations of the LOS angles and the satellite position data.
Practical factors such as satellite manufacturing, launch conditions, and the space environment introduce errors in the measured LOS angles, causing discrepancies between the observed and actual values and further reducing positioning accuracy. In the distributed optical observation positioning scenario considered in this study, the target observation equation is inherently nonlinear; thus, methods such as the Gauss–Newton (GN), Levenberg–Marquardt (LM) [24], and maximum likelihood estimation [25] methods can be used to estimate the target position. On this basis, reference [26] proposes a two-step least squares method to position targets using video satellites, and reference [27] proposes a method for positioning intersecting targets. These methods ensure good positioning accuracy under the assumption that observational errors follow a normal distribution. In more general cases, however, the satellite LOS angle observations are biased relative to the true values, which reduces the positioning accuracy of the above methods and prevents effective target positioning. In engineering applications, calibration is often used to compute error compensation and eliminate the influence of the fixed error. However, short-term fixed errors such as thermal deformation exist in practice; long calibration periods cannot effectively address them, and in some cases satellites lack the conditions for calibration at all.
Typical parameter estimation methods under non-ideal noise conditions include information-assisted correction [28,29], state separation estimation [30], error distribution estimation and compensation [31], factor graph techniques [32], and systematic error self-correction [33,34,35]. Among these methods, the information-assisted correction method requires certain auxiliary prior information, which cannot be obtained in some cases. The state separation estimation method and factor graph techniques involve mathematically transforming the model to separate the fixed biases from the target position or to decompose the observation equation, which is challenging to apply in this positioning situation. Moreover, the error distribution estimation and compensation method needs to compute different fixed errors and estimated quantities separately, leading to high computational complexity. The systematic error self-correction method originates from spacecraft autonomous navigation and relies solely on observational data with relatively low computational complexity. At the same time, this method is also one of the basic principles of augmented Kalman filtering and has shown good performance in system state estimation [36]. Based on the above-mentioned approach, this study proposes an analytical–iterative combined target-positioning method for systematic error self-correction in video satellite systems. Specifically, the state is augmented by introducing dimension-extended unknown variables to treat the fixed biases in the observations as unknown parameters, and a combined approach of initial analytical estimation and iterative accuracy refinement is adopted. More precisely, the projection transformation method provides an initial analytical solution of the dimension-extended unknown parameters. Then, the unknown parameters are iteratively optimized further, ultimately achieving high-precision estimation of the target position.
In this study, the distribution of errors during video satellite observation is analyzed, and a new high-precision target location method considering mixed observation errors is proposed. The main contributions of this study are summarized as follows: (1). Considering the influence of fixed errors on target-positioning accuracy, a fixed error-processing method based on systematic error self-correction is proposed. The fixed errors are considered unknowns in this method, and a projection transformation method is proposed to make the positioning equations linear after dimension extension. (2). Given that the projection transformation method alone provides only limited accuracy from its linearized solution and that iterative computation can be time-consuming when the initial value is poor, this study recommends an analytical–iterative combined algorithm. This scheme uses the projection transformation method to obtain an initial solution, followed by iterative computation for further optimization.
The remainder of this paper is structured as follows: Section 2 presents the target-positioning model under the video satellite stare mode. Section 3 analyzes the errors generated during the target-positioning process and establishes an error model based on this analysis. Section 4 introduces a representation method that treats fixed errors as unknown parameters, along with the linearization approach for this positioning model, and proposes an analytical–iterative combined target-positioning estimation method. Section 5 provides experimental simulations and analyzes the results of the proposed method. Finally, Section 6 concludes this work.
Notation: In this paper, scalars, column vectors, and matrices are represented by italic letters, bold italic lowercase letters, and bold italic uppercase letters, respectively. A tilde over a quantity denotes its observed value, while a hat denotes its estimated value. I denotes the identity matrix, and diag(a) represents a diagonal matrix with a as its diagonal elements. (A)ii denotes the diagonal elements of matrix A, and the evaluation-bar notation represents the value of a function at a given point.
2. Target-Positioning Model in Stare Mode
A video satellite can identify and determine a target through feature extraction and classification algorithms. Based on this identification, the satellite utilizes image information and control methods to achieve stare tracking via its attitude control system, adjusting the target to the desired position within the field of view (FoV) [37,38,39,40]. Target images are captured from different angles during tracking, enabling target position calculation. Thus, to minimize the impact of camera imaging distortion errors, the target is typically positioned at the center of the FoV, which ensures that roll direction errors do not compromise positioning accuracy.
To better represent the positions of the satellite and the target and the relationship between them, this study adopts the Earth-centered, Earth-fixed (ECEF) coordinate system to facilitate calculations.
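For readers reproducing the ECEF setup, the following is a minimal sketch of the standard WGS-84 geodetic-to-ECEF conversion (a textbook formula, not taken from the paper; the function and constant names are illustrative):

```python
import numpy as np

# WGS-84 ellipsoid constants (standard values, not taken from the paper)
WGS84_A = 6378137.0            # semi-major axis [m]
WGS84_F = 1.0 / 298.257223563  # flattening
WGS84_E2 = WGS84_F * (2.0 - WGS84_F)  # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h):
    """Convert geodetic latitude/longitude [deg] and height [m] to ECEF [m]."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    n = WGS84_A / np.sqrt(1.0 - WGS84_E2 * np.sin(lat) ** 2)  # prime-vertical radius
    x = (n + h) * np.cos(lat) * np.cos(lon)
    y = (n + h) * np.cos(lat) * np.sin(lon)
    z = (n * (1.0 - WGS84_E2) + h) * np.sin(lat)
    return np.array([x, y, z])

# Example: ECEF coordinates of a ground point (illustrative values)
print(geodetic_to_ecef(28.0, 113.0, 50.0))
```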
Figure 1 shows a schematic of the relative position between the target and satellite from the Earth and satellite perspectives. As illustrated in Figure 1, at the j-th observation (j = 1, 2, …, m, where m is the total number of observations), the satellite's ECEF coordinates are known, and the observed azimuth angle (Az) and elevation angle (El) of the target are measured. The target position is the quantity to be estimated. The target observation is formulated as follows:
(1)
A single observation from one satellite (Equation (1)) comprises two observation equations, while the target position contains three unknown variables. Therefore, at least two observations are required for an analytical solution, which is given by the following:
(2)
Equation (2) is a nonlinear system of equations that can be solved using the Newton iteration method in combination with the Aitken method [41].
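As a concrete illustration of solving the angle-intersection problem from redundant observations, the sketch below uses a Gauss–Newton iteration with a numerical Jacobian instead of the Newton–Aitken scheme of [41]; the Az/El parameterization assumed here (azimuth in the ECEF x–y plane, elevation from that plane) is only a stand-in for Equation (1), which is not reproduced above:

```python
import numpy as np

def los_angles(target, sat):
    """Predicted Az/El of `target` seen from `sat` (assumed ECEF parameterization)."""
    d = target - sat
    az = np.arctan2(d[1], d[0])
    el = np.arctan2(d[2], np.hypot(d[0], d[1]))
    return np.array([az, el])

def residuals(target, sats, obs):
    """Stacked angle residuals over all observations (obs[j] = measured [Az, El])."""
    return np.concatenate([obs[j] - los_angles(target, sats[j]) for j in range(len(sats))])

def gauss_newton_position(sats, obs, x0, iters=20, eps=1e-3):
    """Estimate the target position from >= 2 angle observations by Gauss-Newton."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        r = residuals(x, sats, obs)
        J = np.zeros((r.size, 3))            # numerical Jacobian of the residual vector
        for k in range(3):
            dx = np.zeros(3)
            dx[k] = eps
            J[:, k] = (residuals(x + dx, sats, obs) - r) / eps
        x += np.linalg.lstsq(J, -r, rcond=None)[0]   # Gauss-Newton correction step
    return x
```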
3. Error Analysis Affecting Positioning Accuracy
Various factors introduce errors into the observed LOS angles during satellite manufacturing, launch, and operation in the space environment, causing deviations from the true values. These errors ultimately degrade target-positioning accuracy. Therefore, analyzing the factors that influence satellite observation errors and establishing an observation model that accounts for them is essential, laying the foundation for high-precision positioning.
3.1. Sources of Errors
As illustrated in Figure 2, the errors in optical remote sensing satellite target positioning can be categorized into internal and external camera errors [42]. Internal camera errors include assembly, thermal deformation, vibration-induced, and optical distortion errors, while external errors include satellite orbit and satellite attitude determination errors.
3.1.1. Analysis of Internal Camera Errors
Camera assembly errors arise from inaccuracies in the installation position of the camera during actual engineering applications [43]. Such inaccuracies prevent the precise determination of the camera’s optical axis orientation after stare tracking control, leading to significant positioning errors. These errors mainly stem from satellite manufacturing and vibrations during launch, which misalign the camera’s actual and theoretical coordinate systems.
Thermal deformation errors result from the thermal expansion and contraction properties of materials. During satellite operations in orbit, variations in the satellite’s position relative to the Earth and the Sun and changes in satellite attitude cause fluctuations in thermal conditions. The primary sources of satellite heating include solar radiation, the Earth’s albedo, and the Earth’s infrared radiation, with solar radiation contributing the most and exhibiting the most significant annual variation. The average annual solar radiation flux is approximately 1367 W/m2, reaching a minimum of 1322 W/m2 during the summer solstice and a maximum of 1414 W/m2 during the winter solstice [44]. The intensity of Earth’s albedo radiation is correlated with solar radiation [45]. Additionally, the effect of solar radiation on onboard materials is influenced by satellite attitude, solar El, and Earth shadowing, presenting both diurnal and annual periodic thermal deformations.
Vibration-induced errors arise because components such as the camera mount and secondary mirror support structures are not perfectly rigid. When the satellite undergoes attitude maneuvers or is affected by the space environment, structural vibrations cause deviations in the camera’s optical axis from its intended state, introducing errors.
Optical distortion errors originate from the nonlinear distortions introduced when the camera lens focuses light during imaging, resulting in image warping [46]. Self-calibration data from ZY-3 satellite imagery indicate that CCD deformation errors exceed 0.5 pixels at the image edges. Although image correction techniques such as universal mathematical imaging models and image–space affine transformation methods can be used to rectify distortions in severe imaging errors, the effectiveness of these compensation techniques is limited [47].
3.1.2. Analysis of External Camera Errors
Satellite orbit determination errors arise when determining the satellite’s orbital position. Typically, when using the BeiDou Navigation Satellite System for positioning, the orbit determination accuracy is about 10 m [48]. When employing orbit extrapolation with periodic ground-based observations (once per orbit), errors can reach the hundred-meter level [49]. Although ground-based tracking enables high-precision orbit determination, the limited observation arcs in practical applications lead to error divergence over time.
Satellite attitude determination errors originate from inaccuracies in the satellite's attitude determination process. For instance, when star sensors are used for attitude determination, errors in the sensor's optical system can introduce inaccuracies. Similarly, when satellite navigation-based attitude determination is used, positioning errors can propagate into attitude errors. In general, satellite attitude determination errors exceed 2.4 arcseconds [50], while specialized platforms can achieve a precision of 0.0001 degrees (approximately 0.36 arcseconds) [51].
3.2. Error Model Establishment
Based on their distribution characteristics, errors can be categorized into systematic (fixed) and random errors. Fixed errors remain constant across multiple observations, whereas random errors vary randomly from observation to observation. Camera assembly and optical distortion errors are produced during satellite manufacturing and launch and no longer change once the satellite is in orbit; therefore, these errors can be regarded as fixed errors. Satellite attitude determination, satellite orbit determination, and vibration errors are mainly caused by complex conditions during satellite operation, such as signal propagation and the space environment; they change rapidly and without an obvious pattern and can thus be considered random errors.
The modeling of thermal deformation errors requires separate consideration. The magnitude of thermal deformation errors is mainly affected by the structural temperature, and the structural temperature of a satellite is mainly influenced by its position relative to the Sun and the Earth, as well as its angular relationship with solar radiation, in addition to its own characteristics [52,53]. During satellite operation, the above conditions exhibit long-period changes. Figure 3 shows the simulation results for changes in the angle between the satellite optical axis and solar radiation during a typical target observation process. As shown in Figure 3, during a period of target tracking and observation, the changes in the angle between the satellite optical axis and solar radiation are relatively small. Meanwhile, according to the in-orbit conditions of the Huanjing-2A satellite, the maximum rate of change in the angle error caused by thermal deformation is on the order of 10⁻⁷ rad/s [54]. Under the most drastic changes, the change in the camera thermal deformation angle during the observation tracking time is shown in Figure 4 as an approximation (assuming a thermal deformation angle of 0.01 rad at the initial observation time). Thus, the change in the thermal deformation error during a tracking window can be ignored, and the thermal deformation error can be considered a fixed error.
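To make this order-of-magnitude argument explicit (the window length here is an assumption; the simulations in Section 5 use a 1 s observation interval and at most a few tens of observations), the thermal drift accumulated over a tracking window of roughly 20 s is

$$\Delta\theta_{\mathrm{thermal}} \approx 10^{-7}\,\mathrm{rad/s} \times 20\,\mathrm{s} = 2 \times 10^{-6}\,\mathrm{rad} \ll 10^{-2}\,\mathrm{rad},$$

i.e., several orders of magnitude smaller than the assumed initial thermal deformation angle, which is why the error is treated as constant within an observation window.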
Considering the above analysis, under continuous observation via a single satellite, the relationship between the satellite’s observed LOS angles and their true values is expressed as follows:
(3)
where Equation (3) relates, for the j-th observation, the LOS angle measurement, the fixed angular error of the satellite, and the random angular error. The random errors of all observations are collected into a single vector; under practical conditions, these errors can be assumed to follow a zero-mean normal distribution with a known covariance matrix.
4. Target Positioning Under Non-Ideal Error Conditions
Due to observation errors, redundant observations are typically used to estimate the target position and improve positioning accuracy. Since the target-positioning equation in stare mode is nonlinear, the equation for estimation is either linearized or solved iteratively using GN or LM methods. However, when non-ideal errors are present, directly applying these algorithms may fail to achieve accurate target positioning. Since fixed errors remain constant across multiple observations, they can be considered unknowns during parameter estimation. The target position and fixed error are estimated simultaneously using dimension-extended unknowns. In general, compared with linearized estimation methods, iterative estimation provides higher accuracy. Furthermore, when initial conditions are favorable, iterative positioning estimation reduces the number of iterations, thereby decreasing computational complexity and enhancing computational efficiency. Therefore, this study first linearizes the dimension-extended positioning model, and thus, the analytical linear estimate of the target position is obtained as a favorable initial value. Subsequently, the LM iterative method refines the target position estimation to improve accuracy.
4.1. Analytical Linear Estimation of Dimension-Extended Unknowns
The relationship between the angle observations and the target position to be estimated (Equation (1)) contains squared terms, square root terms, and inverse trigonometric functions of the unknowns, which make the parameter estimation highly nonlinear and difficult to solve directly by linear estimation. To obtain an analytical initial value of the target position, a linearized representation of the positioning model is required. To facilitate the formula transformation, the linearization method and the corresponding estimation result are presented first; the linearization method for the dimension-extended unknowns is then presented.
4.1.1. Projection Transformation for Target-Positioning Equation
Considering the linearization of Equation (1), the squared and square root terms of the unknowns mainly appear in the second term. To reduce the nonlinearity and to preserve the completeness of the observation information, this study adopts the projection transformation method for the El in the second term of Equation (1). Figure 5 depicts the projection of the El β onto the xOz plane, which yields the projection angle γ (the red and blue parts are the El β and the projection angle γ, respectively). The relationship between the projection angle γ, the Az, and the El β is given by the following:
(4)
Replacing the El β with the projection angle γ, we eliminate the square root and squared terms containing the unknowns in Equation (1) while maintaining the completeness of the observation information. This strategy reduces the nonlinearity of the equation. Moreover, after substituting Equation (4) into Equation (1), we obtain the following:
(5)
This derivation transforms the equation containing the square root and square terms into a form that is more conducive to linearization and the derivation of a closed-form solution. Regarding the inverse trigonometric functions of the unknowns in Equation (5), we expand them to obtain the following linearized equation:
(6)
To facilitate further derivation, Equation (6) is converted into a matrix form:
(7)
where
(8)
In order to improve target-positioning accuracy, we explore the positioning problem under redundant observations, as presented in Figure 6. Notably, the least squares method can solve the target position for multiple observations. Based on Equation (7), the expression for the target position under multiple observations is given by the following:
(9)
where
(10)
In practice, the true values of the LOS angles cannot be obtained, so the observed LOS angles are used here. Thus,
(11)
where the two additional terms represent the random and fixed errors in Equation (9), respectively. Therefore, based on the least squares method, the estimated target position is given by the following:
(12)
The above method is known as the projection transformation method for target position estimation.
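The exact entries of the stacked coefficient matrix and right-hand side follow Equations (8)–(11), which are not reproduced above; the following sketch therefore only illustrates the (optionally weighted) least-squares solve of Equation (12) under the hypothetical names H, y, and W:

```python
import numpy as np

def linear_ls_estimate(H, y, W=None):
    """Least-squares solution of the stacked linear positioning system H p ≈ y.

    H : (2m, 3) coefficient matrix from the projected observation equations
    y : (2m,)   stacked right-hand side from the observed LOS angles and satellite positions
    W : optional (2m, 2m) weight matrix, e.g. the inverse observation covariance
    """
    if W is None:
        p_hat, *_ = np.linalg.lstsq(H, y, rcond=None)
        return p_hat
    # weighted normal equations: (H^T W H) p = H^T W y
    return np.linalg.solve(H.T @ W @ H, H.T @ W @ y)
```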
4.1.2. Linear Transformation After Dimension Extension
The projection transformation method presented in Section 4.1.1 can effectively improve the accuracy of target localization in the presence of random errors, but it cannot effectively address the fixed errors in the target-positioning model. Based on the characteristic that fixed errors do not change during each observation process, they can be considered fixed values and introduced into Equation (3). Furthermore, considering that fixed errors cannot be directly known, they are considered unknowns. According to Equation (3) and the modeling of errors, when facing the problem studied in this article, fixed errors can be regarded as two angle unknowns and introduced into the observation equation. At this point, by simultaneously estimating the target position and fixed error, the influence of the fixed error on the target position estimation can be eliminated.
Therefore, introducing the fixed errors of Equation (3) into Equation (5) as the unknowns α and β,
(13)
leads to the following projection transformation:
(14)
where
(15)
(1). Fixed Error Linearization Approximation Extraction
Notably, the linearization used in Equation (6) can no longer be completed directly, and thus, the fixed errors to be estimated are first extracted from the trigonometric terms. Since the fixed errors are relatively small, Equation (14) can be approximated by the following linearization:
(16)
Based on Equation (15), the corresponding term in Equation (14) is determined by the fixed errors, so it can be approximated linearly as follows:
(17)
where the coefficient matrix in Equation (17) is the Jacobian with respect to the elements of the fixed error vector. It is expanded as follows:
(18)
where
(19)
The approximately linearized equation is then expanded as follows:
(20)
The formula transformation process in this section is shown in Figure 7.
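The extraction step above relies only on the fixed errors being small, so that first-order expansions of the trigonometric terms are accurate. A minimal numerical check of this assumption (the angle values below are arbitrary, not from the paper):

```python
import numpy as np

alpha, d_alpha = 0.7, 0.01          # nominal angle and a small fixed bias [rad]
exact = np.cos(alpha + d_alpha)
approx = np.cos(alpha) - np.sin(alpha) * d_alpha   # first-order expansion
# The truncation error is second order in the bias (at most d_alpha**2 / 2 ≈ 5e-5),
# which is why the fixed errors can be pulled out of the trigonometric terms.
print(exact - approx)
```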
(2). Differential Elimination of High-Order Terms
After extracting the fixed errors, Equation (20) still contains a quadratic term involving the target position and fixed errors, which requires further linearization to derive the closed-form solution. Since the quadratic term in the target position to be estimated is the same across multiple observations, the multiple observation equations can be differenced to eliminate the quadratic terms. The quadratic terms in Equation (20) have different forms and quantities and must therefore be handled separately. First, a first-order differencing operation is performed for the first equation in Equation (20), which contains a single quadratic term. By performing differencing on the equations for two adjacent observations, we obtain the following:
(21)
By subtraction, the term containing the common quadratic unknown is eliminated, and we obtain the following:
(22)
The detailed development of Equation (22) can be found in Appendix A.
Next, two differencing operations are required for the second equation in Equation (20), which has two quadratic terms with different coefficients. By combining two adjacent observation equations, we obtain the following:
(23)
Through subtraction, we eliminate the terms involving the first quadratic unknown:
(24)
The detailed development of Equation (24) can be found in Appendix A. After the first quadratic term is eliminated, the same method can be used to eliminate the terms involving the remaining quadratic unknown, resulting in the following:
(25)
The detailed development of Equation (25) can be found in Appendix A. By combining Equations (22) and (25), we obtain the following equation:
(26)
where
(27)
and where ξ represents x, y, z, α, β, and 0. The formula transformation process in this section is shown in Figure 8.
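Because the coefficient structure of Equations (20)–(27) is not reproduced above, the following is only a generic illustration of why differencing adjacent equations removes a term that is common to all of them (the names and row layout are assumptions):

```python
import numpy as np

def first_difference(A, y):
    """Difference adjacent rows of a stacked linear system.

    If every row obeys y_j = a_j^T theta + c, where c is an identical (possibly
    quadratic) nuisance term, then y_{j+1} - y_j = (a_{j+1} - a_j)^T theta,
    which is linear in theta alone.
    """
    m = len(y)
    D = np.eye(m, k=1)[:-1] - np.eye(m)[:-1]   # (m-1, m) first-difference operator
    return D @ A, D @ y
```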
(3). Summary of the Linearization Process
Thus far, a linear equation has been derived, which includes the target position and fixed errors as unknowns. A linearization flowchart is depicted in Figure 9 to visualize the linearization process. In Figure 9, the observation equations represent the equations for Equation (13), the linearized equations represent the equations for Equation (20), the difference equations represent the equations for Equation (24), and the final equations represent the equations for Equations (22) and (25). In the numbering of the equations, the part before the decimal point represents the number of the observation that provides the equation, while 1 after the decimal point denotes the azimuth angle observation equation and its transformed equations, and 2 denotes the elevation angle and projection angle observation equation and its transformed equations. The process begins by extracting the fixed errors from the trigonometric terms through an approximate expansion (the blue line in Figure 9), followed by differencing between equations to eliminate the higher-order terms (the red line in Figure 9), resulting in a linear system of equations.
Based on Equation (26), substituting the observed LOS angles for the true values leads to the following:
(28)
where the additional term is the random error. The dimension-extended linear estimates of the target position and fixed errors can then be calculated as follows:
(29)
The above method is known as the dimension-extended linear method for target position estimation.
4.2. Iterative Solution of Dimension-Extended Unknowns
Estimating the target position using the dimension-extended linear method presented in Section 4.1 amplifies the effect of random errors on positioning accuracy due to the differencing operations and mathematical transformations used to eliminate the second-order terms. This causes the dimension-extended linear method to underperform when random errors are significant. Hence, this study proposes a method that iteratively optimizes the unknowns, using the dimension-extended linear estimate as the initial value, to improve the parameter estimation accuracy.
Given that the LM method converges quickly and achieves high estimation accuracy, it is adopted here to further improve the target-positioning accuracy through iterative optimization. For ease of representation, Equation (13) after dimension extension is simplified into the following:
(30)
which is then referred to as follows:
(31)
This process involves the following steps:
Step 1: Determine the initial iteration value, accuracy, and damping factor. The estimated result from Equation (29) is used as the initial value, and the iteration accuracy σ and the initial damping factor are set, where the latter is determined using the Nielsen strategy, as follows:
(32)
Here, the value range of τ is [10⁻⁸, 1], and the Jacobian matrix of the observation function with respect to θ is expanded as follows:
(33)
Step 2: Calculate the correction factor. A first-order Taylor expansion of the observation function is performed at the i-th iteration result:
(34)
The expansion of Equation (33) is provided in Appendix A. The correction factor and residual vector are defined as follows:
(35)
The residual vector is used to calculate the correction factor and to assess the iteration stop condition, with the correction factor adjusting the previous iteration result. Substituting Equation (35) into Equation (34) and separating the random errors yields the following:
(36)
For faster convergence, a damping term is added into Equation (36), and the observed values are substituted for the true values to estimate the (i + 1)-th iteration correction factor:
(37)
Step 3: Compute the iteration result and residual vector. The correction factor from Step 2 is used to adjust the i-th iteration estimate and to calculate the (i + 1)-th iteration estimate as follows:
(38)
Next, the residual vector at the updated estimate is calculated as follows:
(39)
Step 4: Termination check and damping factor adjustment. When the stopping criterion defined by the iteration accuracy σ is satisfied, the iteration process is terminated; otherwise, the damping factor is adjusted and the iteration continues. The damping factor affects the step size and direction of each LM iteration. It is updated at each iteration, by comparing the residual vectors before and after the iteration, to ensure both estimation accuracy and convergence speed. If the residual does not decrease compared with the previous iteration, the iteration step size is too large or the direction is incorrect, resulting in an over-corrected estimate. In this case, the iteration step size must be reduced, and the damping factor is recalculated as follows:
(40)
where the negative feedback coefficient takes values in the (0, 1) range. At the same time, the iteration result is not updated, i.e.,
(41)
Then, the process returns to Equation (37) for recalculation. If the residual decreases compared with the previous iteration, the estimate is not over-corrected, and the iteration step size is allowed to increase:
(42)
where the positive feedback coefficient takes values in the (1, +∞) range. At the same time, the iteration result is updated:
(43)
Then, the process returns to Step 2 and continues the iteration.
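A compact sketch of Steps 1–4 is given below. The residual function is assumed to stack the differences between the observed LOS angles and the model of Equation (31) for the dimension-extended unknowns θ = [x, y, z, α, β]; the numerical Jacobian and the factor-of-two feedback coefficients are illustrative substitutes for Equations (33) and (40)–(42):

```python
import numpy as np

def num_jac(fun, x, eps=1e-6):
    """Forward-difference Jacobian of a vector-valued function at x."""
    f0 = fun(x)
    J = np.zeros((f0.size, x.size))
    for k in range(x.size):
        dx = np.zeros_like(x)
        dx[k] = eps
        J[:, k] = (fun(x + dx) - f0) / eps
    return J

def lm_refine(res_fun, theta0, tau=1e-3, tol=1e-10, max_iter=100):
    """LM refinement of the dimension-extended unknowns theta = [x, y, z, alpha, beta].

    res_fun(theta) must return the stacked observation residuals; theta0 is the
    analytical initial value from the dimension-extended linear method.
    """
    theta = np.asarray(theta0, dtype=float).copy()
    r = res_fun(theta)
    J = num_jac(res_fun, theta)
    lam = tau * np.max(np.diag(J.T @ J))      # Nielsen-style initial damping
    for _ in range(max_iter):
        # damped normal equations: (J^T J + lam * I) * delta = -J^T r
        delta = np.linalg.solve(J.T @ J + lam * np.eye(theta.size), -J.T @ r)
        if np.linalg.norm(delta) < tol:       # stopping criterion on the correction factor
            break
        r_trial = res_fun(theta + delta)
        if np.linalg.norm(r_trial) < np.linalg.norm(r):
            theta, r = theta + delta, r_trial # residual decreased: accept the step
            J = num_jac(res_fun, theta)
            lam /= 2.0                        # enlarge the next step (assumed feedback factor)
        else:
            lam *= 2.0                        # residual grew: shrink the step, keep theta
    return theta
```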
Figure 10 presents a flowchart of the proposed dimension-expanded combination method, which first derives the dimension-extended equations based on the positioning equation and error model. Then, the target position is estimated through the analytical solving steps and the iterative parameter-estimation optimization. The former provides a good initial value for the latter, which, in turn, yields a more accurate estimation result. The blue part of Figure 10 represents the linearized initial-value solution portion of the method, and the red part represents the iterative optimization of the estimation result. The gray paths in Figure 10 represent the data or equations that are input or output, and the white paths represent the processing methods.
5. Method Validation and Analysis
In order to effectively verify the advantages of the method proposed in this paper, simulation experiments and actual data verification were conducted separately.
5.1. Simulation Experiment
The simulated experiment involved continuous observation of the target from a single satellite whose orbital elements at the initial observation time are reported in Table 1. The observation interval was set to 1 s, each observation was affected by the same noise distribution, and the target coordinates were [−2210.8, 5012.5, 3253.0] km. The experimental setup included 64-bit Windows 11 Home Edition, an AMD R7-7840HS CPU with integrated Radeon 780M graphics, and 16 GB of RAM (Advanced Micro Devices, Inc., Santa Clara, CA, USA).
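For reference, the observer positions for such a scenario can be generated from the elements in Table 1 with a simple two-body propagation. The paper does not state its propagator, so the following sketch is only an assumed setup (positions are returned in an inertial frame; the additional Earth-rotation step needed to obtain ECEF coordinates is omitted):

```python
import numpy as np

MU_EARTH = 398600.4418  # km^3/s^2, standard gravitational parameter of the Earth

def kepler_position(a, e, inc, raan, argp, m0, dt=0.0):
    """Inertial position [km] from classical orbital elements (angles in rad) after dt seconds."""
    n = np.sqrt(MU_EARTH / a**3)                  # mean motion [rad/s]
    M = m0 + n * dt
    E = M
    for _ in range(20):                           # Newton iteration for Kepler's equation
        E -= (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
    nu = 2.0 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2), np.sqrt(1 - e) * np.cos(E / 2))
    r = a * (1.0 - e * np.cos(E))
    r_pf = r * np.array([np.cos(nu), np.sin(nu), 0.0])   # perifocal position
    cO, sO = np.cos(raan), np.sin(raan)
    ci, si = np.cos(inc), np.sin(inc)
    cw, sw = np.cos(argp), np.sin(argp)
    R = np.array([[cO*cw - sO*sw*ci, -cO*sw - sO*cw*ci,  sO*si],
                  [sO*cw + cO*sw*ci, -sO*sw + cO*cw*ci, -cO*si],
                  [sw*si,             cw*si,             ci]])
    return R @ r_pf

# Example: one-second sampling of the observer trajectory using the Table 1 elements
elems = dict(a=6739.080, e=0.000587, inc=np.radians(97.1204),
             raan=np.radians(300.9754), argp=np.radians(203.0697), m0=np.radians(157.0301))
positions = [kepler_position(**elems, dt=t) for t in range(12)]
```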
The positioning accuracy of the proposed method was evaluated under three different scenarios against the Aitken method [41], projection transformation method (Equation (12)), LM method [24], combined method (with the results calculated based on the projection transformation method as initial values of the LM algorithm without dimension-extended unknowns), dimension-extended linear method (Equation (29)), dimension-expanded LM method (with the results calculated based on the Aitken method as initial values of the LM algorithm with dimension-extended unknowns), and dimension-expanded combined method (with the results calculated based on the dimension-extended linear method as initial values of the LM algorithm with dimension-extended unknowns). The experimental scenarios were as follows: (1) The satellite observation counts and random error distribution remained unchanged while analyzing the effect of different fixed errors on positioning accuracy. (2) The satellite observation counts and magnitude of fixed errors remained unchanged while analyzing the effect of different random errors on positioning accuracy. (3) The fixed errors and magnitude distribution of random errors remained unchanged while analyzing the effect of different satellite observation counts on positioning accuracy.
Legend: The pink line represents Aitken acceleration; blue, the projection transformation method; cyan, the LM method; black, the combination method; red, the dimension-extended linear method; yellow, the dimension-extended LM method; green, the dimension-extended combination method.
These simulations aimed to assess the performance of different methods in improving positioning accuracy across various scenarios. Additionally, since noise in a single trial is highly random, a Monte Carlo simulation with 10,000 iterations was performed to better illustrate the methods' effectiveness. The fixed error magnitude is denoted as dg and the random error magnitude as ds to describe the simulation results clearly. The mean absolute error (MAE) and deviation were used as parameters to characterize the accuracy of position estimation. The formulas for the MAE and deviation are as follows:
(44)
where mc is the number of Monte Carlo simulations and the summand is the result of the i-th simulation.
Figure 11 and Figure 12 present the simulation results for 12 satellite observations. Specifically, Figure 11 illustrates the variation in target-positioning error with dg when ds = 1 × 10⁻⁵ rad (Figure 11a illustrates the variation in MAE, and Figure 11b, the variation in deviation), and Figure 12 depicts the variation in target-positioning error with ds when dg = 1 × 10⁻² rad (Figure 12a illustrates the variation in MAE, and Figure 12b, the variation in deviation). According to Figure 11, as the fixed error increases, the positioning error of the methods that do not consider fixed errors gradually increases. In contrast, the positioning error of the dimension-extended linear method increases only slightly, and the positioning errors of the dimension-extended LM and dimension-extended combination methods remain unchanged as the fixed error increases. Under dg = 1 × 10⁻³ rad, the dimension-extended combination method improves the positioning accuracy by 98.70% compared to LM. At this point, these two methods can be concluded to have identical positioning accuracies, unaffected by changes in fixed errors, and they demonstrate superior performance compared with the other methods. Additionally, the positioning accuracy of the dimension-extended linear method is slightly affected by changes in fixed errors. At the same time, the deviation in the methods that do not consider fixed errors gradually increases with the fixed error, whereas the methods that consider fixed errors always remain unbiased. According to Figure 12, as the random error increases, the positioning error of all methods increases slightly, except for the dimension-extended linear method, whose positioning error increases significantly. The primary reason for this exception is that the differencing calculation in the linearization process of the dimension-extended linear method amplifies the impact of random errors. Meanwhile, the positioning accuracy of the dimension-extended LM and dimension-extended combination methods remains identical and superior to that of the other methods. Under ds = 9.1 × 10⁻⁵ rad, the dimension-extended combination method improves the positioning accuracy by 89.38% compared to LM. Analyzing both figures reveals that the dimension-extended linear method effectively suppresses the influence of fixed errors on positioning accuracy but is sensitive to random errors. In contrast, the dimension-extended LM and dimension-extended combination methods effectively suppress the influence of fixed errors on positioning accuracy without amplifying the impact of random errors. At the same time, the methods that do not consider fixed errors show deviations, though the deviations do not vary with the random error. The deviation in the dimension-extended linear method increases with random errors, whereas the dimension-extended LM and dimension-extended combination methods always remain unbiased. However, according to the trend analysis of the various methods with respect to random errors in Figure 12a, the estimation accuracy of the dimension-extended methods is more sensitive to random errors. In cases where the random error is large and the fixed error is small, the estimation accuracy of the dimension-extended methods may be lower than that of the non-extended methods.
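As one assumed reading of the metrics in Equation (44) (the equation itself is not reproduced above), the MAE and deviation over the Monte Carlo runs can be computed as follows:

```python
import numpy as np

def mae_and_deviation(estimates, truth):
    """estimates: (mc, 3) Monte Carlo position estimates; truth: (3,) true target position."""
    errors = np.linalg.norm(estimates - truth, axis=1)
    mae = errors.mean()                                         # mean absolute position error
    deviation = np.linalg.norm(estimates.mean(axis=0) - truth)  # bias of the mean estimate
    return mae, deviation
```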
Overall, the dimensionality expansion method is more suitable for situations with relatively large fixed errors, while LM methods are more suitable for situations with relatively large random errors.
Figure 13 illustrates the variation in the positioning accuracy of the aforementioned methods with respect to the number of observations. The simulation was conducted with ds = 1 × 10⁻⁵ rad and dg = 1 × 10⁻² rad. According to Figure 13, except for the Aitken acceleration method (which only utilizes the first two observations), the positioning accuracy of all methods improves as the number of satellite observations increases. When the number of observations is small, the positioning accuracy of the dimension-extended linear method is relatively low. As the number of observations increases, the accuracy of the dimension-extended linear method improves significantly, eventually surpassing the accuracy of the algorithms that do not account for fixed errors. At the 20th observation, the proposed method improved the positioning accuracy by 99.56% compared to LM. Notably, the dimension-extended LM and dimension-extended combination methods consistently demonstrate high positioning accuracy. Meanwhile, the positioning deviation of each method does not change with the number of observations. The main reason for the significant improvement in the positioning accuracy of the dimension-extended methods as the number of observations increases is that the additional observation data effectively reduce the impact of random errors on estimation accuracy, so that fixed errors gradually dominate the positioning error; the dimension-extended methods can effectively eliminate this effect. Therefore, when more observations are available, the dimension-extended methods are more likely to achieve better positioning accuracy.
Table 2 compares the processing speed of the methods evaluated here, indicating that linear estimation methods (including the projection transformation and dimension-extended linear methods) are the fastest, approximately an order of magnitude faster than the iterative methods. Combination algorithms (including the combination and dimension-extended combination methods) are approximately twice as fast as the purely iterative methods (including the LM and dimension-extended LM methods). Moreover, within combination algorithms, the impact of an extended dimension on computational speed is minimal. The main reason why linear methods have the fastest computing speed is that they do not require iterative calculations and have a smaller computational load. Similarly, combination algorithms reduce the number of iterations, thus reducing computation time compared to simple LM algorithms.
5.2. Actual Data Verification
To further validate the method proposed in this paper, the accuracy of target estimation was analyzed using measured data. The actual data used for verification were obtained by the "Tiantuo" team at the National University of Defense Technology. Satellites continuously observed the targets to obtain position-calculation data for two targets, and the observation results are shown in Figure 14 (the target-tracking method from reference [40] was used for target tracking and correlation). The target and satellite data are shown in Table 3, and the positioning accuracy is shown in Table 4. Based on the actual data, it can be concluded that the method proposed in this paper effectively improves the positioning accuracy compared with the other methods.
6. Conclusions
This study addressed the target-positioning problem with non-ideal errors in camera LOS angles. A combined analytical–iterative target-positioning method with systematic error self-correction was proposed. The simulation and actual data calculation results demonstrated that the proposed dimension-extended combined algorithm effectively improves the positioning accuracy under non-ideal errors compared with state-of-the-art algorithms, while also ensuring that the positioning accuracy remains unaffected by variations in fixed errors. The proposed method can improve the positioning accuracy by 98.70% in simulation scenarios with large fixed errors. Notably, the combined algorithm achieved approximately twice the processing speed compared to an iterative algorithm. Currently, the proposed method may have the drawback of poor performance in scenarios with relatively large random errors and fewer observations. Therefore, in future work, the method will be further improved to expand its application scope. This method can also be further applied to high-precision, time-sensitive tasks such as emergency response and precision mapping, enabling video satellites to play a more effective role in relevant applications.
Author Contributions: Conceptualization, X.B. and H.S.; methodology, X.B.; software, X.B. and H.S.; validation, X.B. and L.H.; formal analysis, X.B. and L.H.; investigation, X.B. and C.F.; resources, C.F. and Y.Y.; data curation, H.S.; writing—original draft preparation, X.B.; writing—review and editing, H.S. and C.F.; visualization, X.B. and Y.Y.; supervision, H.S.; project administration, C.F.; funding acquisition, C.F. All authors have read and agreed to the published version of the manuscript.
Data Availability Statement: The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.
Conflicts of Interest: The authors declare no conflicts of interest.
The following abbreviations are used in this manuscript:
| LOS | Line of sight; |
| GN | Gauss–Newton algorithm; |
| LM | Levenberg–Marquardt algorithm; |
| FoV | Field of view; |
| ECEF | Earth-centered, Earth-fixed coordinate system; |
| Az | Azimuth angle; |
| El | Elevation angle; |
| RAAN | Right ascension of ascending node; |
| MAE | Mean absolute error. |
Figure 1 Schematic of the relative position between the target and the satellite. (a) Earth perspective; (b) Satellite perspective.
Figure 2 Primary error sources in the video satellite target-positioning process.
Figure 3 Changes in the angle between the satellite optical axis and solar radiation.
Figure 4 Schematic of camera thermal deformation angle change.
Figure 5 The angle γ of El β projected onto the xOz plane.
Figure 6 Target-positioning diagram based on LOS intersection in single-satellite stare mode.
Figure 7 Formula transformation process.
Figure 8 Formula transformation process.
Figure 9 Equation transformation process.
Figure 10 Flowchart of the dimension-expanded combination method.
Figure 11 Effect of fixed error on positioning accuracy. (a) MAE; (b) Deviation.
Figure 12 Effect of random error on positioning accuracy. (a) MAE; (b) Deviation.
Figure 13 Effect of observation count on positioning accuracy. (a) MAE; (b) Deviation.
Figure 14 Satellite image. (a) Target 1; (b) Target 2.
Initial orbital elements of the simulated satellite.
| Parameter | Value |
|---|---|
| Semi-major axis | 6739.080 km |
| Inclination | 97.1204 deg |
| Eccentricity | 0.000587 |
| RAAN | 300.9754 deg |
| Argument of perigee | 203.0697 deg |
| Mean anomaly | 157.0301 deg |
Comparison of computational speeds.
| Method | Computation Time (s) |
|---|---|
| Aitken method | 3.42 × 10⁻⁴ |
| LM method | 6.67 × 10⁻⁴ |
| Projection transformation method | 3.46 × 10⁻⁵ |
| Combination method | 3.14 × 10⁻⁴ |
| Dimension-extended linear method | 5.71 × 10⁻⁵ |
| Dimension-extended LM method | 6.79 × 10⁻⁴ |
| Dimension-extended combination method | 3.51 × 10⁻⁴ |
Target and satellite data at the initial moment.
| Parameter | Target 1 | Target 2 |
|---|---|---|
| Target name | Luohe Railway Station | Jinggangshan Huangyangjie |
| Target coordinate | [−2161.6, 4846.4, 3521.9] km | [−2326.4, 5169.7, 2854.2] km |
| Initial satellite coordinate | [−2393.2, 4996.9, 3961.6] km | [−4268.9, 4304.0, 3106.9] km |
| Number of observations | 10 | 10 |
Target-positioning MAE.
| Method | Target 1 (km) | Target 2 (km) |
|---|---|---|
| Aitken method | 6.6620 | 3.2690 |
| LM method | 1.3356 | 1.8987 |
| Projection transformation method | 4.5041 | 2.1031 |
| Combination method | 1.3356 | 1.8987 |
| Dimension-extended linear method | 12.8983 | 11.2250 |
| Dimension-extended LM method | 1.0322 | 1.8806 |
| Dimension-extended combination method | 1.0322 | 1.8806 |
Appendix A
Equation (22) can be expanded as follows:
Equation (24) can be expanded as follows:
Equation (25) can be expanded as follows:
Equation (33) can be expanded as follows:
The partial derivatives can be further expanded as follows:
1. Li, Y.B. Foreign Aerospace Electronic Reconnaissance Equipment: Development and Enlightenment. Telecommun. Eng.; 2023; 63, pp. 598-604. [DOI: https://dx.doi.org/10.20079/j.issn.1001-893x.220821001]
2. Pan, X.; Wu, Y. Modeling and simulations of ECCM of ocean surveillance satellite electronic intelligence. Proceedings of the 2012 5th International Conference on BioMedical Engineering and Informatics; Chongqing, China, 16–18 October 2012; pp. 1476-1480.
3. Chen, Y.Z. Primary analysis of location error sources synthetic aperture radar satellite. Aerospace Shanghai; 1998; 3, pp. 16–20+34. [DOI: https://dx.doi.org/10.19328/j.cnki.1006-1630.1998.03.003]
4. Qiao, P.; Lv, X.N.; Zhao, J.S.; Xia, Y.L.; Li, J.M.; Zhou, Y. Space Target Tracking and positioning Algorithm Using Multi-satellites. Spacecr. Eng.; 2021; 30, pp. 9-15. [DOI: https://dx.doi.org/10.3969/j.issn.1673-8748.2021.05.002]
5. Lerro, D.; Bar-Shalom, Y. Tracking with debiased consistent converted measurements versus EKF. IEEE Trans. Aerosp. Electron. Syst.; 1993; 29, pp. 1015-1022. [DOI: https://dx.doi.org/10.1109/7.220948]
6. Chen, C.P.; Li, H.; Wei, Y.; Xia, T.; Tang, Y.Y. A local contrast method for small infrared target detection. IEEE Trans. Geosci. Remote Sens.; 2013; 52, pp. 574-581. [DOI: https://dx.doi.org/10.1109/TGRS.2013.2242477]
7. Chen, Y.; Zhang, G.; Ma, Y.; Kang, J.U.; Kwan, C. Small infrared target detection based on fast adaptive masking and scaling with iterative segmentation. IEEE Geosci. Remote Sens. Lett.; 2021; 19, pp. 1-5. [DOI: https://dx.doi.org/10.1109/LGRS.2020.3047524]
8. Northrop Grumman. Hypersonic & Ballistic Tracking Space Sensor; Northrop Grumman: Washington, DC, USA, 2022.
9. Liu, J. Research on Single-Base Angle Measurement Passive positioning Technology. Master’s Thesis; University of Electronic Science and Technology of China: Chengdu, China, 2020.
10. Long, H.; Li, Z.J. Study and Simulation Analysis of Geometry Localization Based on LOS Observation. Comput. Simul.; 2010; 27, pp. 14-17. [DOI: https://dx.doi.org/10.3969/j.issn.1006-9348.2010.08.004]
11. Qiu, P.; Zou, S.M.; Zhang, X.M.; Wang, J.F.; Lin, Q.; Jiang, X.J. Study on performance testing techniques for astronomical optical cameras. Infrared Laser Eng.; 2023; 52, pp. 197-209. [DOI: https://dx.doi.org/10.3788/IRLA20230316]
12. Cao, H.; Gao, W.; Zhang, X.; Liu, X.; Fan, B.; Li, S. Overview of ZY-3 satellite research and application. Proceedings of the 63rd IAC (International Astronautical Congress); Naples, Italy, 1–5 October 2012; pp. 1-5.
13. Wang, Z.; Luo, J.-A.; Zhang, X.-P. A novel location-penalized maximum likelihood estimator for bearing-only target localization. IEEE Trans. Signal Process.; 2012; 60, pp. 6166-6181. [DOI: https://dx.doi.org/10.1109/TSP.2012.2218809]
14. Fan, L.J.; Wang, Y.; Yang, W.T.; Yu, L.J.; Zhang, G.B. GFDM-1 Satellite System Design and Technical Characteristics. Spacecr. Eng.; 2021; 30, pp. 10-19. [DOI: https://dx.doi.org/10.3969/j.issn.1673-8748.2021.03.002]
15. Zhang, X.Y. Study on Moving Objects Intelligent Sensing and Tracking Control for Video Satellite. Ph.D. Thesis; National University of Defense Technology: Changsha, China, 2017.
16. Gong, B.C. Research on Angles-only Relative Orbit Determination Algorithms for Spacecraft Autonomous Rendezvous. Ph.D. Thesis; Northwestern Polytechnical University: Xi’an, China, 2016.
17. Wang, D.Y.; Hou, B.W.; Wang, J.Q.; Ge, D.M.; Li, M.D.; Xu, C.; Zhou, H.Y. State estimation method for spacecraft autonomous navigation: Review. Acta Aeronaut. Astronaut. Sin.; 2021; 42, pp. 72-89. [DOI: https://dx.doi.org/10.7527/S1000-6893.2020.24310]
18. Garg, S.K. Initial Relative-Orbit Determination Using Second-Order Dynamics and Line-of-Sight Measurements. Master’s Thesis; Auburn University: Auburn, AL, USA, 2015.
19. Gong, B.; Wang, S.; Li, S.; Li, X. Review of space relative navigation based on angles-only measurements. Astrodynamics; 2023; 7, pp. 131-152. [DOI: https://dx.doi.org/10.1007/s42064-022-0152-2]
20. Gong, B.; Liu, Y.; Ning, X.; Li, S.; Ren, M. RBFNN-based angles-only orbit determination method for non-cooperative space targets. Adv. Space Res.; 2024; 74, pp. 1424-1436. [DOI: https://dx.doi.org/10.1016/j.asr.2024.05.012]
21. Zhang, Z.; Shu, L.; Zhang, K.; Zhu, Z.; Zhou, M.; Wang, X.; Yin, W. Orbit Determination and Thrust Estimation for Noncooperative Target Using Angle-Only Measurement. Space Sci. Technol.; 2023; 3, 0073. [DOI: https://dx.doi.org/10.34133/space.0073]
22. Lei, T.; Guan, B.; Liang, M.; Liu, Z.; Liu, J.; Shang, Y.; Yu, Q. Motion measurements of explosive shock waves based on an event camera. Opt. Express; 2024; 32, pp. 15390-15409. [DOI: https://dx.doi.org/10.1364/OE.506662]
23. Lei, T.; Guan, B.; Liang, M.; Li, X.; Liu, J.; Tao, J.; Shang, Y.; Yu, Q. Event-based multi-view photogrammetry for high-dynamic, high-velocity target measurement. arXiv; 2025; arXiv: 2506.00578
24. Liu, J.; Xia, Z.X. Modeling and Identification of Dynamic Systems; National University of Defense Technology Press: Changsha, China, 2007.
25. Yi, W.; Zhou, T.; Ai, Y.; Blum, R.S. Suboptimal Low Complexity Joint Multi-Target Detection and Localization for Non-Coherent MIMO Radar With Widely Separated Antennas. IEEE Trans. Signal Process.; 2020; 68, pp. 901-916. [DOI: https://dx.doi.org/10.1109/TSP.2020.2968282]
26. Bai, X.; Fan, C.; Song, H.; Zhang, Y. Space Target Positioning Under Gaze Mode of Collaborative Distributed Video Satellite. Proceedings of the 2024 International Symposium on Intelligent Robotics and Systems (ISoIRS); Changsha, China, 14–16 June 2024; pp. 227-231.
27. Chen, S.; Liu, H.; Liu, X.; Yu, Q. Non-cooperative maritime target position and velocity measuring method based on monocular trajectory intersection for video satellite. Proc. Inst. Mech. Eng. Part G J. Aerosp. Eng.; 2019; 233, pp. 44-56. [DOI: https://dx.doi.org/10.1177/0954410017727023]
28. Yang, J.; Wang, K.; Xiong, K. In-orbit error calibration of star sensor based on high resolution imaging payload. Proceedings of the 2015 IEEE Sensors; Busan, Republic of Korea, 1–4 November 2015; pp. 1-4.
29. Liu, Z.; Liang, S.; Guan, B.; Tan, D.; Shang, Y.; Yu, Q. Collimator-assisted high-precision calibration method for event cameras. Opt. Lett.; 2025; 50, pp. 4254-4257. [DOI: https://dx.doi.org/10.1364/OL.564294] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/40591293]
30. Kim, K.H.; Lee, J.G.; Park, C.G. Adaptive two-stage extended Kalman filter for a fault-tolerant INS-GPS loosely coupled system. IEEE Trans. Aerosp. Electron. Syst.; 2009; 45, pp. 125-137. [DOI: https://dx.doi.org/10.1109/TAES.2009.4805268]
31. Liu, H.-B.; Wang, J.-Q.; Tan, J.-C.; Yang, J.-K.; Jia, H.; Li, X.-J. Autonomous on-orbit calibration of a star tracker camera. Opt. Eng.; 2011; 50, pp. 023604-023608. [DOI: https://dx.doi.org/10.1117/1.3542039]
32. Yu, Z.; Li, J.; Guo, Q.; Sun, T. Message passing based robust target localization in distributed MIMO radars in the presence of outliers. IEEE Signal Process. Lett.; 2020; 27, pp. 2168-2172. [DOI: https://dx.doi.org/10.1109/LSP.2020.3042456]
33. Wei, C.L.; Zhang, B.; Zhang, C.Q. An Attitude Maneuvering Aided Self-calibration Algorithm for Celestial Autonomous Navigation System. J. Astronaut.; 2010; 31, pp. 93-97.
34. Zhang, C.Q.; Liu, L.D.; Li, Y. Observability Analysis for Biased Satellites Autonomous Orbit Determination Systems. Chin. Space Sci. Technol.; 2006; pp. 1–7+13. [DOI: https://dx.doi.org/10.3321/j.issn:1000-758X.2006.06.001]
35. Jia, T.; Ke, X.; Liu, H.; Ho, K.C.; Su, H. Target Localization and Sensor Self-Calibration of Position and Synchronization by Range and Angle Measurements. IEEE Trans. Signal Process.; 2025; 73, pp. 340-355. [DOI: https://dx.doi.org/10.1109/TSP.2024.3520909]
36. Zhao, X.; Liu, G.; Wang, L.; He, Z.; Yao, Z. Augmented Cubature Kalman Filter/Kalman Filter Integrated Algorithm; Department of Control Engineering, The Second Artillery Engineering University: Xi’an, China, 2014; Volume 43, pp. 647-653.
37. Fan, X.; Liu, Z.; Wu, Y.P.; Hu, G.H. Study on Algorithms of Target Recognition and Target Tracking Based on Video Sequence. Fire Control Command Control; 2014; 39, pp. 116-119. [DOI: https://dx.doi.org/10.3969/j.issn.1002-0640.2014.z1.039]
38. Wang, H.R.; Liu, X. Study on recognition and tracking algorithm for air vehicle infrared image. Laser Infrared; 2021; 51, pp. 1097-1103. [DOI: https://dx.doi.org/10.3969/j.issn.1001-5078.2021.08.020]
39. Avidan, S. Ensemble tracking. IEEE Trans. Pattern Anal. Mach. Intell.; 2007; 29, pp. 261-271. [DOI: https://dx.doi.org/10.1109/TPAMI.2007.35]
40. Wu, D.; Song, H.; Fan, C. Object tracking in satellite videos based on improved kernel correlation filter assisted by road information. Remote Sens.; 2022; 14, 4215. [DOI: https://dx.doi.org/10.3390/rs14174215]
41. Shi, J.L.; Liu, S.Z.; Chen, G.Z. Computer Numerical Method; Higher Education Press: Beijing, China, 2009; 282.
42. Zang, W.C. Research on Major Errors in Sight Determination of Medium and Low Orbit Optical Satellites. Master’s Thesis; National University of Defense Technology: Changsha, China, 2018.
43. Song, C.; Fan, C.; Song, H.; Wang, M. Spacecraft Staring Attitude Control for Ground Targets Using an Uncalibrated Camera. Aerospace; 2022; 9, 283. [DOI: https://dx.doi.org/10.3390/aerospace9060283]
44. Huang, H.L. On-Orbit External Heat Flow Calculation and Internal Thermal Analysis of Satellites. Master’s Thesis; Jilin University: Changchun, China, 2019.
45. Li, Q.; Kong, L.; Zhang, L.; Wang, Z.C. Thermal design and validation of multispectral max width optical remote sensing satellite. Opt. Precis. Eng.; 2020; 28, pp. 904-913. [DOI: https://dx.doi.org/10.3788/OPE.20202804.0904]
46. Li, H.H.; Cao, H.; Shi, J. High Precision positioning Technology and Practice for High Resolution Optical Satellite Images. Geo Space Inf.; 2018; 16, pp. 1–8+137. [DOI: https://dx.doi.org/10.3969/j.issn.1672-4623.2018.05.001]
47. Li, L.; Xie, J.H.; Wang, H.; Li, Y.J.; Pen, L.Y. Study on Image Distortion Correction Method of CCD Large Field of View Lens in Down view System. Mod. Inf. Technol.; 2021; 5, pp. 168–170+173. [DOI: https://dx.doi.org/10.19850/j.cnki.2096-4706.2021.19.043]
48.
49. Ji, W.; Bai, T.; Wu, G.Q.; Lin, B.J. The accuracy and error analysis of satellite autonomous celestial navigation orbit determination. Electron. Des. Eng.; 2017; 25, pp. 90–93+97. [DOI: https://dx.doi.org/10.14022/j.cnki.dzsjgc.2017.15.023]
50. Zhang, C.Q.; Wang, S.Y.; Chen, C. A High-Precision Relative Attitude Determination Method for Satellite. Aerosp. Control Appl.; 2014; 40, pp. 19-24. [DOI: https://dx.doi.org/10.3969/j.issn.1674-1579.2014.03.004]
51. Liu, S.; Zhang, W.; Liao, B.; Tang, Z.X.; Zhu, M.; Xie, J.J.; Yao, C. A Sailboard Sunshade High Pointing Accuracy and Stability Satellite Platform System for Morning and Dusk Orbits. Patent; CN112977884B, 27 June 2023.
52. Wang, J.; Chen, Z.; Fan, C.; Wu, G.; Luo, J.; Feng, M. On-orbit validation of thermal control subsystem for microsatellite with integrated configuration of platform and payload. Therm. Sci. Eng. Prog.; 2022; 34, 101442. [DOI: https://dx.doi.org/10.1016/j.tsep.2022.101442]
53. Silva, D.F.; Muraoka, I.; Garcia, E.C. Thermal Control Design Conception of the Amazonia-1 Satellite. Instituto Nacional de Pesquisas Espaciais—INPE São José dos Campos/SP; Instituto Tecnológico de Aeronáutica—ITA São José dos Campos/SP. J. Aerosp. Technol. Manag.; 2014; 6, pp. 169-176. [DOI: https://dx.doi.org/10.5028/jatm.v6i2.320]
54. Fang, H.; Zhu, J.; Ma, L.; Xu, Z.; Dong, S. Research on optimization of 16 m camera installation mode based on whole-satellite thermal distortion analysis. Adv. Small Satell. Technol.; 2025; 2, pp. 63-69. [DOI: https://dx.doi.org/10.12470/ASST20250007]
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).