Introduction
Pose estimation is the fundamental problem of determining the position and attitude (or orientation) of one object's coordinate system with respect to another. In computer vision, the problem becomes one of estimating the pose between an object and the camera observing it. There has been significant progress in determining the pose of six-degrees-of-freedom platforms, particularly by integrating inertial measurement units with monocular or stereo cameras for navigation and mapping [1–5].
For crewed aircraft, studies have shown that a lack of positional awareness is a major cause of accidents [6]. Estimating the pose of a runway relative to a monocular camera therefore offers substantial benefits: it can be used to reduce errors in the on-board navigation system and adds a level of redundancy to the system. A considerable amount of research has addressed vision-based landing scenarios [7, 8], including pose estimation for landings on aircraft carriers [9–11]. Most of this work has focused on vertical-take-off-and-landing platforms rather than fixed-wing types, with a few recent exceptions such as [12]. The reason is the small margin in the approach angle in most ship-landing scenarios and the resulting difficulty of obtaining precise navigational information from cameras relative to the runway.
In the application presented here, we consider an airport runway as the object on which the aircraft is to land. Several characteristics of runways make them highly suitable as navigation aids, especially when an aircraft is on its final approach [13, 14]. First, the precise geodetic coordinates of most runways are well known, allowing them to be used as absolute navigation references. Second, the dimensions of runways must adhere to strict protocols, as must the white markers placed on them. Third, runways are designed to be highly visible, so they can serve as a navigation aid even if the aircraft does not intend to land at the runway it is observing. Fourth, there is no limit to the number of aircraft that can observe a given runway for navigation. The emphasis in this work is on increasing the level of situational awareness for an aircraft on its final approach.
We propose a novel optimisation algorithm that parameterises the runway pose as a unit dual quaternion (UDQ). The UDQ has been widely applied to pose estimation problems such as strap-down inertial navigation [15], coordination of multiple rigid bodies [16], and pose-graph simultaneous localisation and mapping (SLAM) [17]. Unlike Euler angles, the UDQ is free of singularities, and it converges quickly because the non-Euclidean rotation and the translation vector are optimised jointly. It should be noted that reliable detection of the runway (for example, detecting the piano-key-like markers on the runway) is important and has been the subject of separate studies [18, 19], but it is not the focus of this work. The unit vectors describing the four runway corners in the camera frame are used to formulate a cost function, derived from the geometric relationships between the observations and the runway and transformed into the dual quaternion algebra. The pose is then estimated through Levenberg–Marquardt (LM) optimisation. The contributions of this work are
UDQ parameterisation to estimate the runway pose from monocular images;
camera-in-the-loop demonstration of visual-inertial navigation using an open-source flight simulator.
To the authors' knowledge, there is no prior literature investigating UDQ-based optimisation for the runway pose estimation problem. We also utilise the FlightGear simulator to demonstrate the method; Fig. 1 illustrates an example of image observations of a runway using the simulator, together with the extracted visual features. The simulator is open source and flexible, and it can be used effectively for camera-in-the-loop simulation, which we believe is a valuable tool for vision-based research.
[IMAGE OMITTED. SEE PDF]
The remainder of this paper is organised as follows. Section 2 provides a brief overview of the UDQ algebra, and Section 3 details the UDQ parametrisation of the runway observations obtained from a monocular camera, followed by the LM optimisation. Section 4 describes the integration of the runway observations with the inertial navigation system. Section 5 presents experimental results, covering the flight simulator interface and analysing the optimisation and filtering performance. Section 6 concludes with future directions.
UDQ algebra
The UDQ is attractive for pose estimation because the dual quaternion can be represented as a single vector that is amenable to implementation in a statistical estimation process. In addition, the rotational and translational parts can be optimised simultaneously in a cost-function setting, which is separable in many cases and expandable to include any number of observations. It also admits a closed-form Taylor series expansion for arbitrary quaternions q and p.
The dual quaternion comprises two parts that can describe the rigid-body transformation of one coordinate frame with respect to another and is given by
$$\check{q} = q_r + \epsilon q_d,$$
where $q_r$ is the real part, $q_d$ is the dual part, and $\epsilon$ is the dual unit satisfying $\epsilon^2 = 0$, $\epsilon \neq 0$.
Similar to the unit quaternion, a UDQ satisfies the unit-norm condition
$$\check{q} \otimes \check{q}^* = 1, \quad \text{equivalently} \quad \|q_r\| = 1, \quad q_r^\top q_d = 0.$$
Property 1
Let $t$ and $q$ be the translation vector and quaternion of a rigid body with respect to a global frame, respectively; then the rigid-body transformation can be represented as the following UDQ [17]:
$$\check{q} = q + \frac{\epsilon}{2}\, \bar{t} \otimes q, \qquad \bar{t} = [0,\ t^\top]^\top.$$
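To make Property 1 concrete, the following is a minimal numerical sketch (not from the paper) of building and checking a UDQ in Python, assuming the Hamilton product with [w, x, y, z] ordering; all function names are illustrative.

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions in [w, x, y, z] order."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([pw*qw - px*qx - py*qy - pz*qz,
                     pw*qx + px*qw + py*qz - pz*qy,
                     pw*qy - px*qz + py*qw + pz*qx,
                     pw*qz + px*qy - py*qx + pz*qw])

def qconj(q):
    """Quaternion conjugate."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

def udq_from_pose(q, t):
    """Property 1: real part q, dual part 0.5 * [0, t] ⊗ q,
    with t the translation expressed in the global frame."""
    q_d = 0.5 * qmul(np.array([0.0, *t]), q)
    return q, q_d

# Unit-norm conditions: ||q_r|| = 1 and q_r . q_d = 0
q_r, q_d = udq_from_pose(np.array([np.cos(0.3), 0.0, 0.0, np.sin(0.3)]),
                         np.array([10.0, -2.0, 0.5]))
assert np.isclose(np.linalg.norm(q_r), 1.0)
assert np.isclose(q_r @ q_d, 0.0)
```

The second assertion holds for any pose built this way, which is why the unit-norm condition can be imposed as a constraint rather than checked after the fact.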
Runway observation using UDQ
The pose measurement of the runway is parametrised in UDQ form:
[IMAGE OMITTED. SEE PDF]
The problem is formulated by choosing any two of the camera-to-corner unit vectors, the plane defined by them, and the unit normal vector to this plane, obtained by taking the cross product of the two vectors and normalising the result. Also required is the unit vector describing the direction of the edge created by subtracting one corner position vector from the other, expressed relative to the runway frame. By construction, the normal is orthogonal to both observation vectors; since the runway edge, once rotated into the camera frame, lies in the plane they define, its inner product with the normal must be zero. This provides the orthogonality constraint used in the optimisation.
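As an illustration of this constraint, the sketch below (with hypothetical bearing vectors, not data from the paper) forms the plane normal from two corner observations and evaluates the scalar residual that the optimisation drives to zero; quat_to_rot converts a quaternion to a rotation matrix.

```python
import numpy as np

def quat_to_rot(q):
    """Rotation matrix from a unit quaternion [w, x, y, z]."""
    w, x, y, z = q
    return np.array([[1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
                     [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
                     [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

# Hypothetical camera-to-corner bearing vectors (unit vectors)
v1 = np.array([0.10, 0.05, 0.99]); v1 /= np.linalg.norm(v1)
v2 = np.array([-0.08, 0.05, 0.99]); v2 /= np.linalg.norm(v2)

# Unit normal of the plane spanned by the two bearings
n = np.cross(v1, v2); n /= np.linalg.norm(n)

# u: runway-edge direction in the runway frame. With the true
# camera-to-runway rotation, the rotated edge lies in the bearing
# plane, so the scalar n . (R u) is the residual driven to zero.
u = np.array([0.0, 1.0, 0.0])
def edge_residual(q_att):
    return n @ (quat_to_rot(q_att) @ u)
```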
In this application, the solutions are obtained by reformulating (16) and (17) using a UDQ as the error functions of the LM optimisation. The reformulation requires transforming unit vectors and rotation matrices into their quaternion equivalents: a vector $v$ maps to the pure quaternion $\bar{v} = [0,\ v^\top]^\top$, and the rotation of a vector becomes $q \otimes \bar{v} \otimes q^*$. By utilising the matrix form of the quaternion product, the alternative representations of (16) and (17) become linear in the UDQ components.
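The matrix form mentioned above refers to the standard left- and right-multiplication matrices of the quaternion product; a short sketch (illustrative names) follows. These make each constraint linear in one quaternion factor, which simplifies the Jacobians used by LM.

```python
import numpy as np

def Lmat(q):
    """Left-multiplication matrix: q ⊗ p = Lmat(q) @ p."""
    w, x, y, z = q
    return np.array([[w, -x, -y, -z],
                     [x,  w, -z,  y],
                     [y,  z,  w, -x],
                     [z, -y,  x,  w]])

def Rmat(q):
    """Right-multiplication matrix: p ⊗ q = Rmat(q) @ p."""
    w, x, y, z = q
    return np.array([[w, -x, -y, -z],
                     [x,  w,  z, -y],
                     [y, -z,  w,  x],
                     [z,  y, -x,  w]])

# e.g. the rotation q ⊗ v̄ ⊗ q* equals Lmat(q) @ Rmat(qconj(q)) @ v̄,
# a matrix acting linearly on the pure quaternion v̄ = [0, v].
```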
The inclusion of the unit-magnitude constraints of the dual quaternion (in vector form) yields the cost functional, which is minimised by the LM method.
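A sketch of the resulting minimisation is shown below, reusing quat_to_rot from the earlier sketch. It uses SciPy's Levenberg–Marquardt solver rather than the authors' own implementation, estimates only the real (attitude) part for brevity, and applies the weighting of 100 on the unit-norm constraint term mentioned in Section 5; the synthetic data are purely illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(q_att, normals, edges, weight=100.0):
    """Orthogonality residuals plus a weighted unit-norm constraint."""
    r = [n @ (quat_to_rot(q_att) @ u) for n, u in zip(normals, edges)]
    r.append(weight * (q_att @ q_att - 1.0))
    return np.array(r)

# Synthetic constraints consistent with a known attitude q_true
rng = np.random.default_rng(0)
q_true = np.array([0.9, 0.1, 0.2, 0.1]); q_true /= np.linalg.norm(q_true)
edges = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]),
         np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
normals = []
for u in edges:
    e = quat_to_rot(q_true) @ u              # edge seen in camera frame
    n = np.cross(e, rng.standard_normal(3))  # any vector ⊥ e
    normals.append(n / np.linalg.norm(n))

# 'lm' needs at least as many residuals (here 5) as parameters (4)
sol = least_squares(residuals, x0=np.array([1.0, 0.0, 0.0, 0.0]),
                    args=(normals, edges), method='lm')
q_est = sol.x / np.linalg.norm(sol.x)  # attitude consistent with the data
```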
The estimate of the dual part of the quaternion can be obtained by substituting the final attitude estimate into the Jacobian of (20) or (21). Alternatively, it can be obtained through its relationship with the real part and the translation vector given in Property 1.
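For the second route, Property 1 can be inverted directly; a one-line sketch (reusing qmul and qconj from the earlier sketch) is:

```python
def translation_from_udq(q_r, q_d):
    """Invert Property 1: q_d = 0.5 * [0, t] ⊗ q_r  ⟹  t = 2 (q_d ⊗ q_r*)."""
    return 2.0 * qmul(q_d, qconj(q_r))[1:]   # scalar part is zero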
Runway-aided inertial navigation
To demonstrate the effectiveness of the runway pose observation, a runway-aided inertial navigation system is designed. The nominal inertial navigation model is a simplified one utilising a local-fixed, local-tangent navigation frame, which is suitable for low-quality inertial sensor applications. The integration extended Kalman filter uses an inertial state vector with the state kinematic model
$$\dot{p}^n = v^n, \qquad \dot{v}^n = C_b^n f^b + g^n + w_1, \qquad \dot{q} = \tfrac{1}{2}\, q \otimes \bar{\omega}^b + w_2,$$
where
$p^n$ is the position in the navigation frame;
$v^n$ is the velocity in the navigation frame;
$f^b$ is the accelerometer measurement;
$g^n$ is the gravitational acceleration;
$C_b^n$ is the matrix transforming a vector from the body to the navigation frame;
$q$ is the attitude quaternion;
$\bar{\omega}^b$ is the gyroscope measurement (in quaternion form);
$w_1$ and $w_2$ are the white Gaussian process noises with strength matrices $Q_1$ and $Q_2$ for the translational and rotational kinematics, respectively.
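A minimal propagation sketch of this kinematic model is given below, reusing qmul and quat_to_rot from the earlier sketches. It uses simple Euler integration over a step dt and omits covariance propagation and process noise, so it is illustrative only.

```python
import numpy as np

def ins_predict(p, v, q, f_b, w_b, g_n, dt):
    """One Euler step of the simplified local-tangent strapdown model."""
    C_bn = quat_to_rot(q)                        # body -> navigation DCM
    p_new = p + v * dt                           # position kinematics
    v_new = v + (C_bn @ f_b + g_n) * dt          # velocity kinematics
    # Attitude kinematics: q_dot = 0.5 * q ⊗ [0, w_b]
    q_new = q + 0.5 * qmul(q, np.array([0.0, *w_b])) * dt
    return p_new, v_new, q_new / np.linalg.norm(q_new)

# Example step with local gravity along the down axis (NED-like frame)
g_n = np.array([0.0, 0.0, 9.81])
```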
The runway pose solutions from the previous section are subsequently used as observations in the extended Kalman filter; they are related to the inertial navigation state through the known geodetic position and orientation of the runway.
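The fusion itself follows the standard EKF measurement update; the sketch below is the generic textbook form, not the paper's exact formulation (in practice the attitude error is usually handled multiplicatively). Here z denotes the runway-derived pose observation, h(x) the observation predicted from the navigation state and the known runway pose, and H the Jacobian of h.

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """Generic EKF measurement update with observation z = h(x) + noise."""
    y = z - h(x)                          # innovation
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x + K @ y                     # corrected state
    P_new = (np.eye(len(x)) - K @ H) @ P  # corrected covariance
    return x_new, P_new, y, S             # y, S feed the innovation plots
```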
[IMAGE OMITTED. SEE PDF]
Experiment results
FlightGear interface
The FlightGear package is a continuously developing open-source flight simulator. During runtime, rendered images were extracted from FlightGear and saved in an image format, while the instantaneous state information was stored in a large text file. The images were also displayed in the open-source GUI display tool EZWGL, which allowed the algorithms to be monitored during run time. A typical output is illustrated in Fig. 1, which shows two images: on the left is the raw grey-scale image extracted from FlightGear, and on the right is the same image with its corners highlighted. The four runway corners are clearly marked, in addition to a multitude of white runway-marking corners. Although only the corners marking the extremities of the runway were used in this application, the additional runway corners would serve to improve the state estimate. During the simulation, the aircraft was controlled by the user to make several lengthwise passes over the runway while ensuring the complete runway appeared in the image for as long as possible. Corner extraction was performed using the SUSAN corner detector [20], chosen for its high performance, adaptability, and robustness to noise. For realism, noise was added to the body-frame inertial measurements. Whenever the complete runway was contained in the image, the set of camera-to-corner unit vectors was computed. The discretisation and rendering process of the simulator already perturbs these measurements, so no additional noise was inserted on them.
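The paper does not spell out how detected corner pixels become camera-to-corner unit vectors; a plausible pinhole-model conversion is sketched below, with the intrinsics (fx, fy, cx, cy) as assumed parameters and lens distortion ignored.

```python
import numpy as np

def pixel_to_bearing(u, v, fx, fy, cx, cy):
    """Unit line-of-sight vector in the camera frame for pixel (u, v),
    assuming an ideal pinhole camera with the given intrinsics."""
    d = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    return d / np.linalg.norm(d)

# e.g. a corner detected at (412, 280) with assumed 640x480 intrinsics
b = pixel_to_bearing(412, 280, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
```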
Results
Fig. 4 compares the estimated and actual 3D and 2D paths traversed by the aircraft, showing the corrective effect of the observations when the runway was in full view of the camera. Fig. 5 (left) further compares the estimated aircraft xyz-position with the actual position obtained during the experiment. The effect of adding Gaussian noise to the accelerometer and gyroscope data is most noticeable in the position plot, particularly in the aircraft's y-position, which is affected the most because of its low magnitude relative to the other two axes. Fig. 5 (right) illustrates the number of iterations required for convergence of the LM process through the usable range of images. Convergence was assumed when the residual fell below a preset threshold; lower thresholds were not guaranteed to produce better results. A weighting of 100 was placed on the constraint term in the cost function for faster convergence. Fig. 6 shows the estimated errors in velocity and attitude (converted to Euler angles for display) with their uncertainty bounds. Fig. 7 provides the filter's innovation sequences for position and attitude with their uncertainty bounds, showing consistent operation of the filter.
[IMAGE OMITTED. SEE PDF]
[IMAGE OMITTED. SEE PDF]
[IMAGE OMITTED. SEE PDF]
[IMAGE OMITTED. SEE PDF]
Conclusions
This work presented the development of an algorithm that allows visual runway observations to be used as zero-bias, absolute navigation references. Runways are suitable for this task because their geodetic coordinates are known precisely, as are their dimensions and markings, and because they are designed to be highly visible. The emphasis in this application was placed on increasing the level of situational awareness for an aircraft during its landing approach, providing additional redundancy when navigating in the vicinity of a runway and thereby reducing reliance on the global positioning system. The algorithm constructs a UDQ-based cost function from the geometric relationships between the observations and the runway, and minimises it iteratively using the LM technique. The algorithm was then integrated with the inertial navigation system and demonstrated in a camera-in-the-loop setup using a flight simulator. Future work will extend the method to state-of-the-art non-linear observer-based SLAM techniques combined with the UDQ to enhance stability and efficiency.
Acknowledgments
The work presented here is an extract from the primary author's PhD thesis, Vision-Aided Aircraft Navigation and Trajectory Optimisation, completed at the Australian National University in 2007.
References
1. Kim, J., Byun, H., Guivant, J.: 'Compressed pseudo-SLAM: pseudorange-integrated generalised compressed SLAM'. Australasian Conf. on Robotics and Automation, Brisbane, Australia, December 2020
2. Usenko, V., Demmel, N., Schubert, D., et al.: 'Visual-inertial mapping with non-linear factor recovery', IEEE Robot. Autom. Lett., 2020, 5, (2), pp. 422–429
3. Liang, Q., Liu, M.: 'A tightly coupled VLC-inertial localization system by EKF', IEEE Robot. Autom. Lett., 2020, 5, (2), pp. 3129–3136
4. Forster, C., Carlone, L., Dellaert, F., et al.: 'On-manifold preintegration for real-time visual–inertial odometry', IEEE Trans. Robot., 2017, 33, (1), pp. 1–21
5. Müller, M.G., Steidle, F., Schuster, M.J., et al.: 'Robust visual-inertial state estimation with multiple odometries and efficient mapping on an MAV with ultra-wide FOV stereo vision'. 2018 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), Madrid, Spain, October 2018, pp. 3701–3708
6. Ashford, R.: 'A study of fatal approach-and-landing accidents worldwide 1980–1996', Flight Safety Digest, 1998, 17, (2/3)
7. Serra, P., Cunha, R., Hamel, T., et al.: 'Landing of a quadrotor on a moving target using dynamic image-based visual servo control', IEEE Trans. Robot., 2016, 32, (6), pp. 1524–1535
8. Xu, L., Luo, H.: 'Towards autonomous tracking and landing on moving target'. 2016 IEEE Int. Conf. on Real-time Computing and Robotics (RCAR), Angkor Wat, Cambodia, June 2016, pp. 620–628
9. Tang, D., Jiao, Y., Chen, J.: 'On automatic landing system for carrier plane based on integration of INS, GPS and vision'. 2016 IEEE Chinese Guidance, Navigation and Control Conf. (CGNCC), Nanjing, China, August 2016, pp. 2260–2264
10. Nguyen, K.D., Ha, C.: 'Vision-based hardware-in-the-loop simulation for unmanned aerial vehicles', in 'Intelligent computing theories and application' (Springer International Publishing, Wuhan, China, 2018), pp. 72–83
11. Coutard, L., Chaumette, F., Pflimlin, J.-M.: 'Automatic landing on aircraft carrier by visual servoing'. 2011 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, San Francisco, USA, 2011, pp. 2843–2848
12. Kus, M.: 'Autonomous carrier landing of a fixed-wing UAV with airborne deck motion estimation'. PhD dissertation, 2019
13. Gibert, V., Plestan, F., Burlion, L., et al.: 'Visual estimation of deviations for the civil aircraft landing', Control Eng. Pract., 2018, 75, pp. 17–25
14. Moore, A.J., Schubert, M., Dolph, C., et al.: 'Machine vision identification of airport runways with visible and infrared videos', J. Aerosp. Inf. Syst., 2016, 13, pp. 266–277
15. Wu, Y., Hu, X., Hu, D., et al.: 'Strapdown inertial navigation system algorithms based on dual quaternions', IEEE Trans. Aerosp. Electron. Syst., 2005, 41, (1), pp. 110–132
16. Wang, X., Yu, C., Lin, Z.: 'A dual quaternion solution to attitude and position control for rigid-body coordination', IEEE Trans. Robot., 2012, 28, (5), pp. 1162–1170
17. Cheng, J., Kim, J., Shao, J., et al.: 'Robust linear pose graph-based SLAM', Robot. Auton. Syst., 2015, 72, pp. 71–82
18. Han, P., Cheng, Z., Chang, L.: 'Automatic runway detection based on unsupervised classification in PolSAR image'. 2016 Integrated Communications Navigation and Surveillance (ICNS), Herndon, USA, April 2016, pp. 6E3-1–6E3-8
19. Abu-Jbara, K., Alheadary, W., Sundaramorthi, G., et al.: 'A robust vision-based runway detection and tracking algorithm for automatic UAV landing'. 2015 Int. Conf. on Unmanned Aircraft Systems (ICUAS), Denver, USA, 2015, pp. 1148–1157
20. Smith, S.M., Brady, J.M.: 'SUSAN – a new approach to low level image processing', Int. J. Comput. Vis., 1997, 23, (1), pp. 45–78
Abstract
This study addresses the pose estimation problem of an aircraft runway using visual observations in a landing-approach scenario. The authors utilise the fact that the geodetic coordinates of most runways are known precisely and that their markers are highly visible. Thus, runway observations can increase the level of situational awareness during the landing approach, providing additional navigation redundancy and reducing reliance on the global positioning system. A novel pose optimisation algorithm is proposed, utilising a unit dual quaternion for the runway corner observations obtained from a monocular camera. The estimated runway pose is further fused with an inertial navigation system in an extended Kalman filter. An open-source flight simulator is used to collect and process the visual and flight data during the landing approach, demonstrating reliable runway pose estimates and an improved inertial navigation solution.