Abstract

Accurate estimation of accelerometer biases in Inertial Measurement Units (IMUs) is crucial for reliable Unmanned Aerial Vehicle (UAV) navigation, particularly in GPS-denied environments. Uncompensated biases lead to an unbounded accumulation of position error and increased velocity error, resulting in significant navigation inaccuracies. This paper examines the effects of accelerometer bias on UAV navigation accuracy and introduces a vision-aided navigation system. The proposed system integrates data from an IMU, altimeter, and optical flow sensor (OFS), employing an Extended Kalman Filter (EKF) to estimate both the accelerometer biases and the UAV position and velocity. This approach reduces the accumulation of velocity and positional errors. The efficiency of this approach was validated through simulation experiments involving a UAV navigating in circular and straight-line trajectories. Simulation results show that the proposed approach significantly enhances UAV navigation performance, providing more accurate estimates of both the state and accelerometer biases while reducing error growth through the use of vision aiding from an Optical Flow Sensor.


1. Introduction

In robotics, localization refers to determining an object’s position and attitude in 2D or 3D space. This is a very important capability, especially in aerospace applications [1]. Using sensors, robots can maintain an estimate of their state, including velocity, attitude, and position [2]. Accelerometers within IMUs, which also include gyroscopes, are commonly used to calculate position by double integrating acceleration data [3]. However, IMU sensor readings inherently contain errors such as random noise and sensor bias, which degrade navigation performance [4]. Additionally, numerical integration introduces extra noise due to time discretization, exacerbating drift [5]. For instance, Euler integration results in local truncation errors, similar to those found in Taylor series approximations [6]. Thus, the double integration of accelerometer data to generate velocity and position leads to compounded drift errors [7], posing a significant challenge in autonomous UAV navigation.

Recent experiments have shown the potential of fusing various sensors with IMUs to match the accuracy of IMU/GNSS systems [8]. This includes novel hardware combinations and algorithmic approaches to enhance sensor fusion accuracy in UAVs. Another approach in GPS-denied environments is the use of multiple UAVs, each with a range sensor and an IMU [9]. Drift is corrected by comparing distances from other UAVs in a swarm using trilateration. Cameras in UAVs can improve navigation through object recognition and tracking, utilizing landmarks. However, this approach is computationally demanding, and issues like unknown obstacles or lighting changes can cause data inaccuracies. To address these issues, SLAM (simultaneous localization and mapping) uses multiple cameras and LIDAR sensors to handle environmental lighting changes [10]. SLAM combines multiple sensors to generate a 3D map of the surroundings [11], offering accurate position estimation but at a high computational cost. Optical flow sensors are increasingly being explored for relative positioning. Effective in short-term stabilization, their applicability for long-term positioning is an area of active research [12]. Studies have indicated the potential of optical flow sensors in positional estimation, particularly when fused with other sensor systems [13,14]. These advancements lay the foundation for the growing significance of optical flow sensors in UAV systems.

The authors in [15] introduced a vision-aided approach to estimate the horizontal velocity of a UAV without accounting for biases. The current paper extends that work to estimate both the UAV velocity and accelerometer biases, employing an Extended Kalman Filter (EKF) to fuse data from an on-board IMU, an altitude sensor, and an optical flow sensor. This sensor integration greatly improves the accuracy of the EKF estimates for the UAV’s horizontal velocity, particularly during intermittent GPS signal loss. As a result, the system significantly reduces navigation errors compared to relying on dead reckoning alone. Our method uses pixel displacement measurements from the optical flow sensor to calculate horizontal velocity in the platform’s body frame. These measurements are then transformed into a North-East-Down (NED) locally level navigation frame, utilizing the platform’s attitude, which is derived from the IMU’s rate sensor, accelerometer, and magnetometer data.
The resulting velocity measurements in the NED frame are used in the EKF to estimate both the UAV’s horizontal velocity and IMU accelerometer biases. To assess the effectiveness of this aided inertial navigation approach, we applied it to various simulated scenarios, including straight and circular trajectories. Estimation performance was evaluated by calculating errors in the estimated platform location and velocity. Our results demonstrate that the proposed approach reduces state estimation errors and improves UAV trajectory control during GPS signal outages. Section 2 presents the problem formulation. Section 3 describes the sensors that are involved, including a detailed development of the Optical Flow Sensor model. Section 4 describes the navigation filter. Section 5 describes the simulation model and the numerical parameters used in the generation of results. Section 6 contains the simulation results and discussion. Finally, Section 7 gives conclusions and future work.

2. Problem Formulation

2.1. Coordinate System

The coordinate system used as the navigation frame (N-frame) is the North-East-Down (NED) system. This coordinate system is fixed relative to the Earth, and its origin is typically on the surface of the Earth. The N-frame is located near the region in which the drones operate, with the xN-axis pointing towards geographic north, the yN-axis pointing towards the east, and the zN-axis pointing downwards towards the center of the Earth. The UAV platform moves within this frame, and the navigation solution being sought is the platform’s location and velocity within this frame, and possibly also its attitude, likewise defined relative to this frame.

Another coordinate frame is defined, one attached to and moving with the UAV platform body and designated the platform body frame (B-frame). Its xB-axis is the platform roll axis and is nominally aligned with the platform’s direction of flight. This frame’s yB-axis points out of the right side of the platform, and its zB-axis is directed out of the bottom of the platform, downward when the vehicle is oriented horizontally. A critical part of the process of generating a strapdown inertial navigation solution is the coordinate transformation step that converts the input acceleration vector measured by the IMU in the B-frame to the N-frame where the navigation solution is generated. A coordinate transformation matrix designated as CNB premultiplies a vector given in the B-frame to transform that vector to the N-frame. A description of the generation of this orientation matrix using the body angular rates and initial attitude can be found in Titterton [16]. This process is depicted in Figure 1 in the block receiving rate-gyroscope signals and producing the platform orientation. Another matrix designated CBN and called the attitude or orientation matrix is directly related to CNB. The matrix CBN defines the orientation of the platform body frame relative to the navigation frame. The platform’s orientation can be reached by three successive rotations starting with a frame initially coincident with the N-frame. The first rotation ψ occurs about the N-frame’s zN axis, also called the yaw axis. This rotation is followed by rotation θ about the new pitch axis that results from the yaw rotation. Finally, rotation ϕ is performed about the new roll axis that results from the pitch rotation. These three rotations are through angles referred to as Euler angles, those being the roll angle ϕ, the pitch angle θ, and the yaw angle ψ. The three corresponding rotation matrices are multiplied in succession to form the orientation matrix CBN:

(1) $C_{BN} = R_x(\phi)\,R_y(\theta)\,R_z(\psi)$

The individual rotation matrices are given here:

(2) $R_z(\psi) = \begin{bmatrix} \cos\psi & \sin\psi & 0 \\ -\sin\psi & \cos\psi & 0 \\ 0 & 0 & 1 \end{bmatrix}$

(3) $R_y(\theta) = \begin{bmatrix} \cos\theta & 0 & -\sin\theta \\ 0 & 1 & 0 \\ \sin\theta & 0 & \cos\theta \end{bmatrix}$

(4) $R_x(\phi) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\phi & \sin\phi \\ 0 & -\sin\phi & \cos\phi \end{bmatrix}$

Each of the matrices Rz(ψ), Ry(θ), and Rx(ϕ) is orthogonal; hence, CBN is orthogonal and has the property

(5) $C_{NB} = (C_{BN})^{-1} = (C_{BN})^{T}$

Matrix CNB plays a critical role in the implementation of a strapdown navigation system and is assumed to be provided in this study by an Attitude and Heading Reference System (AHRS). The matrix appears in Figure 1 and in the equations herein involving transformation from the B-frame to the N-frame.
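As a concrete check of this construction, a minimal NumPy sketch follows (the function names and the numerical orthogonality test are ours, added for illustration); it builds CBN from the Euler angles per Equation (1) and recovers CNB per Equation (5).

    import numpy as np

    def C_BN(phi, theta, psi):
        """Attitude matrix of Eq. (1): C_BN = Rx(phi) Ry(theta) Rz(psi), mapping N-frame to B-frame."""
        Rz = np.array([[ np.cos(psi),  np.sin(psi), 0.0],
                       [-np.sin(psi),  np.cos(psi), 0.0],
                       [ 0.0,          0.0,         1.0]])
        Ry = np.array([[ np.cos(theta), 0.0, -np.sin(theta)],
                       [ 0.0,           1.0,  0.0],
                       [ np.sin(theta), 0.0,  np.cos(theta)]])
        Rx = np.array([[1.0,  0.0,          0.0],
                       [0.0,  np.cos(phi),  np.sin(phi)],
                       [0.0, -np.sin(phi),  np.cos(phi)]])
        return Rx @ Ry @ Rz

    def C_NB(phi, theta, psi):
        """B-frame to N-frame transformation of Eq. (5): the transpose (inverse) of C_BN."""
        return C_BN(phi, theta, psi).T

    # Orthogonality check implied by Eq. (5): C_BN^T C_BN should equal the identity.
    C = C_BN(0.1, -0.05, 1.2)
    assert np.allclose(C.T @ C, np.eye(3))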

2.2. Strapdown Inertial Navigation

A navigation system based on the use of an IMU that is rigidly affixed to the mobile platform body (i.e., strapped down) is given in the block diagram of Figure 1. This diagram is indicative of the process for generating a navigation solution in the ideal case occurring when the input signals are error-free and initial conditions are known. This diagram illustrates that the solution is derived from the body angular rates and body linear acceleration time histories, which are expressed as three-dimensional vectors along the platform body axes entering from the left. These vector quantities undergo a series of integration processes to generate the navigation solution given the platform’s initial orientation, velocity, and position. In the absence of errors in the initial conditions and inputs, and assuming perfect integration, the navigation solution would be error-free. However, due to input errors, numerical integration errors, and uncertainties in the initial state, output errors inevitably occur.

If aB is the acceleration vector provided by the IMU expressed in the B-frame and containing accelerometer bias ba, then the corresponding acceleration vector in the N-frame, corrected for gravity and denoted as aN, can be obtained by transforming aB using the transformation matrix CNB:

(6) $a_N = C_{NB}\,(a_B - b_a) + g + w_a$

where

  • aN is the acceleration in the navigation frame,

  • aB is the acceleration measured in the body frame,

  • CNB is the transformation matrix from the body frame to the navigation frame,

  • ba is the bias in the acceleration measurement,

  • g is a three-element vector containing the acceleration of gravity, $[0\ 0\ g]^T$,

  • g is the acceleration due to gravity, considered constant and acting in the positive direction in the navigation frame,

  • wa is the random noise present in the acceleration measurement.
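A brief sketch of Equation (6) is given below (variable and function names are ours; the random noise term wa is omitted since it is not something the navigator computes).

    import numpy as np

    def accel_body_to_nav(a_B, b_a, C_NB, g=9.81):
        """Eq. (6): subtract the accelerometer bias, rotate the body-frame measurement to the
        N-frame, and restore gravity (the random noise term w_a is not modeled here)."""
        g_N = np.array([0.0, 0.0, g])       # gravity along +z_N (down) in the NED frame
        return C_NB @ (np.asarray(a_B) - np.asarray(b_a)) + g_N

    # Hover check: a level, stationary platform measures roughly -g on its z_B axis,
    # and the corrected navigation-frame acceleration comes out near zero.
    a_N = accel_body_to_nav([0.0, 0.0, -9.81], [0.0, 0.0, 0.0], np.eye(3))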

2.3. Processing Data from Inertial Sensors

IMUs, consisting of gyros and accelerometers, sense angular rates and linear accelerations. One might consider using these signals as observations within the navigation filter, with the model shown in Figure 1 as the plant; however, that is not what is typically done [17]. One issue with this approach is that these observations would be functions of the inputs to the plant, which are normally treated as known signals. To handle the case in which they are unknown, one would have to introduce additional state dynamics at the inputs so that the sensor outputs become functions of the states to be estimated.

The more commonly accepted approach, however, is to inject the outputs from the IMU into the plant model of Figure 1 after those inputs are adjusted to remove sensor error, a process mentioned in [17] as a “Model Replacement Mode”. The approach taken is to consider the outputs from the IMU as consisting of the true rates and accelerations plus error source terms. These errors are functions of fixed or slowly time-varying unknown parameters. The filter then estimates these parameters, allowing these error sources to be computationally removed from the output of the IMU, resulting therefore in an estimate of the true body rate and acceleration. These corrected outputs are then injected into the navigation filter dynamic model.

2.4. IMU Model

There are many error sources present in a model of an IMU. All of the vectors in these models are defined relative to the IMU body frame, which is coincident with the B-frame, ignoring any misalignment that may be present. We consider first the following angular rate measurement error model:

(7) $\omega_m = \omega_T + R\,\omega_T + U\,a_T + b_\omega + w_\omega$

In this model, we define

  • ωT—true angular rates in the body frame,

  • ωm—measured angular rates,

  • R—scale factor and misalignment error matrix,

  • U—acceleration sensitivity matrix,

  • bω—angular rate bias,

  • wω—angular rate random noise.

This angular rate model is provided here for completeness, although it is not used in this paper, whose focus is the accelerometer bias error source and the removal of its impact. The study of the impact of rate errors on attitude determination, as provided by the AHRS, and of their impact on navigation performance is a planned topic for future research.

The other is the acceleration sensor error model also expressed in the body frame:

(8) $a_m = a_{TB} + Q\,a_{TB} + L\,\omega_T + b_a - C_{BN}\,g + w_a$

where

  • aTB—true acceleration,

  • am—measured acceleration,

  • Q—scale factor and misalignment error matrix,

  • L—angular rate sensitivity matrix,

  • ba—acceleration bias,

  • g—the three-element gravity vector in the N-frame, [00g]T,

  • wa—acceleration random noise.

In the problem being considered, the scale factor, misalignment, and sensitivity terms and matrices, R,U,Q, and L, are assumed to be either fixed or varying slowly enough such that they can be estimated during a production or preflight calibration process. They then remain sufficiently constant during the time period when in use, i.e., the mission time, so that compensation and removal of the effects of these errors can occur in real time without the need for online recalibration. This situation is often true in practice and it is what is assumed here.

Acceleration and rate biases, on the other hand, do typically vary with time quickly enough that their current values need to be estimated as they evolve. These estimated values are then used to remove them from the output of the IMU. Angular rate biases, unlike acceleration bias, can be estimated and removed when the platform is moving if the bias time constants are large relative to the times associated with platform maneuvers. This allows for the assumption that the true angular rates are zero mean. The slowly time-varying bias is estimated by computing the mean of the sensor output over a time period that is relatively long compared to vehicle maneuvers.

Angular rate sensor random walk errors can also be bounded in the roll and pitch axes using the gravity vector included in the IMU accelerometer measurement. Gravity provides a measure of the vertical direction. When platform accelerations do not include a fixed acceleration due to, for example, a steady, unending turn, then the body accelerations are on average dominated by the components of gravity that fall on the body axes, and these components are related to the body pitch and roll angles. This, in turn, enables the roll and pitch drift angles to be constrained. Similarly, a compass heading sensor can be used as an angle measurement to constrain and limit heading drift.
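To illustrate the leveling idea described above, the sketch below recovers roll and pitch from the time-averaged body-frame specific force; these are the standard accelerometer-leveling relations under the stated assumption of zero mean platform acceleration, not formulas taken from this paper.

    import numpy as np

    def roll_pitch_from_gravity(f_B):
        """Standard accelerometer-leveling relations (illustrative): given the time-averaged
        body-frame specific force f_B ~ (g*sin(th), -g*cos(th)*sin(ph), -g*cos(th)*cos(ph)),
        recover the roll and pitch angles."""
        fx, fy, fz = f_B
        roll = np.arctan2(-fy, -fz)                        # phi
        pitch = np.arctan2(fx, np.sqrt(fy**2 + fz**2))     # theta
        return roll, pitch

    # Level platform: averaged specific force is (0, 0, -g), giving zero roll and pitch.
    roll, pitch = roll_pitch_from_gravity([0.0, 0.0, -9.81])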

There is no mechanism, however, to bound the impact of IMU accelerometer bias error. If left uncompensated, these biases can cause velocity error to grow without bound, which then subsequently causes position estimation errors to grow as well. The Optical Flow Sensor is applied herein to measure platform horizontal velocity, which then enables time-varying accelerometer biases on the xB and yB axes to be estimated. Similarly, the Baro-Altimeter provides a measure of the platform altitude, which thereby enables the time-varying accelerometer bias on the zB axis to be estimated.

2.5. Continuous-Time Plant Equations

A block diagram of the strapdown navigation system model including the process and observation noise signals used in defining the navigation filter is shown in Figure 2. The process noises, wa and wb, represent the noise in the accelerometer data and the noise driving the bias model, respectively. The observation noises, wv and wPN, are the noise signals in the velocity and altitude observations, respectively.

From this model, we have the following continuous-time plant equations:

(9) $\dot{v} = C_{NB}\,(a_m - b_a) + g + w_a$

(10) $\dot{p} = v$

(11) $\dot{b}_a = w_b$

where p and ba are the platform position and bias vectors shown in Figure 2.

2.6. Discrete-Time Plant Equations

The equations above are discretized for time step δT:

(12) $v_{k+1} = v_k + C_{NB}\,(a_m - b_k)\,\delta T + g\,\delta T + w_a\,\delta T$

(13) $p_{k+1} = p_k + v_k\,\delta T$

(14) $b_{k+1} = b_k + w_b\,\delta T$

The term $a_m\,\delta T$ represents the change in velocity over time interval δT due to the acceleration and angular rotation that occur during that time interval. This term is derived in Titterton [16] (see Equation (31) therein) and is shown to be

(15) $a_m\,\delta T = \Delta V_k + \tfrac{1}{2}\,(\Delta\theta_k \times \Delta V_k)$

where the angular rate terms are given by

(16) $\omega_m\,\delta T = \Delta\theta_k$

One should note that variables Δθk and ΔVk are often directly available as outputs from the IMU. Inserting (15) into (12) results in the following discrete-time state-space equations:

(17) $v_{k+1} = v_k + C_{NB}\left(\Delta V_k + \tfrac{1}{2}\,(\Delta\theta_k \times \Delta V_k)\right) - C_{NB}\,b_k\,\delta T + g\,\delta T + \eta_v$

(18) $p_{k+1} = p_k + v_k\,\delta T$

(19) $b_{k+1} = b_k + \eta_b$

where $\eta_v$ and $\eta_b$ are the discrete-time noise processes given by

(20) $\eta_v = w_a\,\delta T, \qquad \eta_b = w_b\,\delta T$

2.7. State Vector Definition

We define the following nine-state state vector:

(21) $x_k = \begin{bmatrix} v_k \\ p_k \\ b_k \end{bmatrix} = \begin{bmatrix} v_N & v_E & v_D & p_N & p_E & p_D & b_{ax} & b_{ay} & b_{az} \end{bmatrix}^{T}_{k}$

State vector xk includes the position and velocity of the platform in the N-frame, where the NED velocities are designated vN,vE,vD, the NED positions are designated pN,pE,pD, and the accelerometer biases are bax,bay,baz. Platform orientation is assumed to be estimated using the Attitude and Heading Reference System (AHRS) capability provided with many IMU devices, such as the VN-100 by VectorNav Technologies (Dallas, TX, USA).

2.8. Discrete-Time State-Space Model

Given this state vector, we can formulate the following discrete-time state-space model of the plant:

(22) $x_{k+1} = F_k\,x_k + \bar{u}_k$

where

(23) $\bar{u}_k = C_{NB}\left(\Delta V_k + \tfrac{1}{2}\,(\Delta\theta_k \times \Delta V_k)\right) + g\,\delta T$

(24) $F_k = \begin{bmatrix} I_3 & 0_{3\times 3} & -C_{NB}\,\delta T \\ \delta T\,I_3 & I_3 & 0_{3\times 3} \\ 0_{3\times 3} & 0_{3\times 3} & I_3 \end{bmatrix}$

(25) $g = \begin{bmatrix} 0 \\ 0 \\ g \end{bmatrix}$

where I3 represents the 3×3 identity matrix, 03×3 represents the 3×3 zero matrix, and g is the acceleration due to gravity acting in the zN direction.
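The discrete-time model of Equations (22)–(25) can be assembled directly. The sketch below (helper names are ours; it assumes CNB, the IMU increments, and δT are available at each step) forms Fk and ūk and performs one propagation step.

    import numpy as np

    I3, Z3 = np.eye(3), np.zeros((3, 3))

    def make_Fk(C_NB, dT):
        """State-transition matrix of Eq. (24) for the state ordering [v, p, b_a]."""
        return np.block([[I3,      Z3, -C_NB * dT],
                         [dT * I3, I3,  Z3],
                         [Z3,      Z3,  I3]])

    def make_uk(C_NB, dV, dTheta, dT, g=9.81):
        """Deterministic input of Eq. (23), built from the IMU increments dV and dTheta."""
        u = np.zeros(9)
        u[0:3] = C_NB @ (dV + 0.5 * np.cross(dTheta, dV)) + np.array([0.0, 0.0, g]) * dT
        return u

    def propagate(x, C_NB, dV, dTheta, dT):
        """One noise-free propagation step of Eq. (22): x_{k+1} = F_k x_k + u_k."""
        return make_Fk(C_NB, dT) @ x + make_uk(C_NB, dV, dTheta, dT)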

3. Sensor Measurement Models

3.1. Altitude Measurement

A barometer is used as an altitude sensor. It measures atmospheric pressure and applies the principle that pressure decreases with altitude. This approach provides an estimate of altitude above sea level, typically using the barometric formula. Altitude above the N-frame origin is computed by subtracting the current barometer altitude from the barometric altitude recorded at take-off. Ground level is, in this case, also assumed to be the xN-yN horizontal plane of the N-frame. The equation for this observation as a function of the platform state is thus

(26) $y_a = p_D + w_{PD}$

where wPD is the altitude sensor measurement noise.

3.2. Optical Flow Sensor Model

An Optical Flow Sensor is an imaging device that generates the incremental change in the angular location of the scene in view from frame to frame. Each pixel represents a quantized angular displacement. By comparing successive image frames, the direction and speed of motion of the image can be determined. This method of calculating motion from scene changes can be achieved using smart cameras along with software libraries or with embedded light-tracking optical sensors. An optical flow sensor, however, is simpler than these camera- and software-based approaches, having image and signal processing capability resident with the sensor to enable it to convert the captured images to an incremental angular shift quantized to a pixel. The OFS produces a two-element vector representing the shifts about the sensor’s x and y axes, shifts that we designate ΔX and ΔY. Typically, the OFS is oriented to look downward in the zB direction with its x and y axes nominally aligned with the xB and yB axes of the platform body.

The optical flow sensor model PMW3901, made by Pimoroni (Sheffield, UK), is an example of a scene-tracking optical sensor with embedded processing. It examines a collection of pixel displacement values known as a “flow field”, averaging that field to produce a two-element “flow vector”. The sensor data sheet indicates that it has a Field of View (FOV) of 42°. From a series of tests measuring the sensor’s output at various velocities, it was observed that the PMW3901 must move far enough to produce an angular rotation of 0.0015 radians to register a change of 1 pixel count in its output. This angular increment is the OFS angular quantization level Q, approximately 0.09°.

A key feature of an optical flow sensor is that it does not measure velocity but instead measures a change in the location of the scene. We assume for this discussion that the scene is moving by several pixels in distance from one frame to the next, and that the frame time is 20 ms (i.e., 50 Hz). Then, the delta-angle output from the sensor is an integer count value representing the angular rotation measured between the two frame images, quantized to the pixel size. The fractional part of the motion is removed from the output by quantization. However, it is not lost. We consider the case when the motion is very slow so that it takes several frames, e.g., 10, for the scene to move by a single pixel. In this case, the output sits at zero (0) counts until the scene finally reaches a distance of one pixel; at that sample time, it outputs a change of 1 pixel, then it drops back to outputting zeros while moving to the next pixel quantum distance. This has a distinct advantage over a sensor that measures angular velocity and outputs a quantized value of that measurement. A velocity sensor could experience an angular rate that remains within one quantum and that motion would never be evident in the sensor’s output. But the optical flow sensor instead is sensing angular displacement, or the integral of the angular rate, and producing the change in that angle from frame to frame, so that even the smallest angular rate input is eventually detected when it reaches an angular displacement the size of one pixel. Thus, when the sensor is moving slowly and the scene is moving slowly over the image plane, the sensor produces a stream of zeros punctuated with a periodic value of one count that occurs when the accumulated distance traversed reaches a quantization step equaling one pixel size.

The model created to represent the optical flow sensor output is therefore one that involves the integral of the angular rate of the scene. That angular rate equation as defined earlier is

(27) $\Omega = \omega - \dfrac{V}{A}$

where Ω is scene angular velocity, ω is the angular velocity of the sensor body, V is the sensor translational velocity, and A is the sensor distance (or altitude) from the scene being observed (i.e., the ground). Integrating this to produce an angle that accumulates over the time interval t0 to t:

(28) $\phi(t) = \int_{t_0}^{t} \left(\omega - \frac{V}{A}\right) dt$

where ω, V, and A are functions of the continuous time variable t. To produce the sensor output, we integrate the angular rate over one sample interval, from the OFS frame time (k−1)T to the next frame time kT at frame index k, quantize the accumulated angle to the quantization step size Q, and generate the difference in the quantized result:

(29) $\Delta\phi_k = \int_{(k-1)T}^{kT} \left(\omega - \frac{V}{A}\right) dt$

(30) $X_k = \operatorname{floor}\!\left(\dfrac{\phi_k}{Q}\right)$

(31) $\Delta X_k = X_k - X_{k-1}$

This value, ΔXk, is what comes out of the sensor to represent motion of the scene in one direction. It is an integer value and has units of counts. There is an equivalent equation for ΔYk, representing motion in the other orthogonal direction.
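A compact simulation of the quantized OFS output described by Equations (28)–(31) is sketched below, assuming piecewise-constant rates over each frame; the function name and the numerical values in the example are illustrative, with Q taken as the 0.0015 rad step noted earlier.

    import numpy as np

    def ofs_frame(omega, V, A, T, Q, phi_prev, X_prev):
        """One frame of the OFS model, Eqs. (28)-(31), with rates held constant over the frame.
        omega: body angular rate [rad/s]; V: translational velocity [m/s]; A: height [m];
        T: frame time [s]; Q: quantization step [rad/count]. Returns the count output plus
        the accumulated angle and count so the caller can carry the state forward."""
        phi = phi_prev + (omega - V / A) * T      # accumulated scene angle, Eq. (28)
        X = np.floor(phi / Q)                     # quantize to whole pixels, Eq. (30)
        return int(X - X_prev), phi, X            # frame-to-frame change, Eq. (31)

    # Slow-motion example: sub-pixel motion per frame still surfaces as an occasional 1-count output.
    phi, X, counts = 0.0, 0.0, []
    for _ in range(20):
        dX, phi, X = ofs_frame(omega=0.02, V=0.02, A=2.0, T=0.02, Q=0.0015, phi_prev=phi, X_prev=X)
        counts.append(dX)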

3.3. Computation of Velocity Observation

To convert from the sensor’s incremental change in scene angle ΔXk to linear velocity, we evaluate the integral in Equation (29), assuming that the angular and linear velocities are constant over the frame time T, leading to

(32) $\Delta\phi_k = \omega_k\,T - \dfrac{V_k}{A_k}\,T$

Angular rate ωk is the pitch angular velocity and is provided by the IMU. The altitude Ak is assumed to be known, provided by an altitude sensor, which can be a barometric altimeter measuring the distance above the take-off altitude under the assumption that the ground is relatively flat. The output of Equation (31) is converted from units of counts to units of radians using the quantization step size:

(33) $\Delta\phi_k = \Delta X_k\,Q$

Combining Equations (32) and (33) and solving for velocity Vk, we obtain

(34) $V_k = \left(\omega_k - \dfrac{\Delta X_k\,Q}{T}\right) A_k$

The velocity observations along the xB and yB body axes are therefore:

(35) $v_{xB} = \left(\omega_{yk} - \dfrac{\Delta X_k\,Q}{T}\right) A_k$

(36) $v_{yB} = \left(\omega_{xk} - \dfrac{\Delta Y_k\,Q}{T}\right) A_k$

where ΔXk and ΔYk are the OFS outputs at sample k.

The velocity measurements derived from the OFS and IMU provide a measure of the velocity of the platform in the horizontal plane under the assumption that the platform roll and pitch angles are small, i.e., less than 20°. These velocities are converted from the B-frame to the N-frame using the yaw coordinate transformation matrix Rz(ψ), that is, the transformation matrix obtained by assuming that the roll and pitch of the platform are zero:

(37) $C_{NB}(0,0,\psi) = R_z^{T}(\psi)$

Thus, we generate the horizontal velocity observations:

(38) $\begin{bmatrix} y_{vN} \\ y_{vE} \\ 0 \end{bmatrix}_k = R_z^{T}(\psi) \begin{bmatrix} v_{xB} \\ v_{yB} \\ 0 \end{bmatrix}_k$

The velocity observations are therefore given by

(39) $y_{1k} = y_{vN} = c_\psi\,v_{xB} - s_\psi\,v_{yB}$

(40) $y_{2k} = y_{vE} = s_\psi\,v_{xB} + c_\psi\,v_{yB}$

where sψ=sin(ψ) and cψ=cos(ψ), and where ψ is the heading angle provided by the AHRS.
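Putting Equations (35)–(40) together, the velocity-observation computation can be sketched as follows; the sign conventions reflect our reconstruction of Equations (34)–(36) and would need to be checked against the sensor mounting in a real implementation.

    import numpy as np

    def ofs_velocity_obs(dX, dY, omega_x, omega_y, A, psi, T=0.02, Q=0.0015):
        """Velocity observations of Eqs. (35)-(40) (sketch; signs follow our reading of
        Eqs. (34)-(36)). dX, dY: OFS counts; omega_x, omega_y: body roll/pitch rates [rad/s];
        A: height above ground [m]; psi: heading from the AHRS [rad]."""
        v_xB = (omega_y - dX * Q / T) * A         # Eq. (35)
        v_yB = (omega_x - dY * Q / T) * A         # Eq. (36)
        c, s = np.cos(psi), np.sin(psi)
        y_vN = c * v_xB - s * v_yB                # Eq. (39)
        y_vE = s * v_xB + c * v_yB                # Eq. (40)
        return y_vN, y_vE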

In addition, we have the baro-altimeter observation

(41) $y_{3k} = p_D$

4. Aided Inertial Navigation: The Estimation Problem

The EKF is a widely used technique for state estimation in nonlinear systems, making it particularly suitable for drone navigation where the process and measurement models are often nonlinear. The EKF works by linearizing the nonlinear model around the current state estimate and then applying the standard Kalman Filter update equations. The prediction and update steps of the EKF are represented by the following equations:

  • Prediction step:

(42) $\bar{x}_{k+1} = f(\hat{x}_k, u_k)$

(43) $\bar{P}_{k+1} = F_k\,\hat{P}_k\,F_k^{T} + W_k$

  • Update step:

(44) $K_k = \bar{P}_k H_k^{T}\,(H_k \bar{P}_k H_k^{T} + M_k)^{-1}$

(45) $\hat{x}_k = \bar{x}_k + K_k\,(z_k - h(\bar{x}_k))$

(46) $\hat{P}_k = (I - K_k H_k)\,\bar{P}_k$

In these equations, x¯k and x^k represent the predicted and updated state estimates, respectively, while P¯k and P^k denote the predicted and updated state covariance matrices. The process and measurement models are represented by f(·) and h(·), with Fk and Hk being the corresponding Jacobian matrices. Wk and Mk are the process and measurement noise spectral density matrices, and Kk is the Kalman gain. The system dynamics model and the state transition matrix Fk are adjusted accordingly:

  • Prediction step:

(47) $\bar{u}_k = C_{NB}\left(\Delta V_k + \tfrac{1}{2}\,(\Delta\theta_k \times \Delta V_k)\right) + g\,\delta T$

(48) $\bar{x}_{k+1} = F_k\,\hat{x}_k + \bar{u}_k$ (State Update)

(49) $\bar{P}_{k+1} = F_k\,\hat{P}_k\,F_k^{T} + W_k$ (Covariance Update)

where

(50) $F_k = \begin{bmatrix} I_3 & 0_{3\times 3} & -C_{NB}\,\delta T \\ \delta T\,I_3 & I_3 & 0_{3\times 3} \\ 0_{3\times 3} & 0_{3\times 3} & I_3 \end{bmatrix}$

(51) $W_k = \begin{bmatrix} W_v & 0_{3\times 3} & 0_{3\times 3} \\ 0_{3\times 3} & W_p & 0_{3\times 3} \\ 0_{3\times 3} & 0_{3\times 3} & W_{b_a} \end{bmatrix}$

with I3 representing the 3 × 3 identity matrix, 03×3 representing the 3 × 3 zero matrix, and Wv, Wp and Wba being the velocity, location, and accelerometer bias process noise spectral density matrices, respectively.

Given Equations (39), (40) and (41), the observation matrix H is generated:

(52) $H = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \end{bmatrix}$

Note that the observation update can occur at a lower sample rate than the predict step, which runs at the IMU sample rate. The measurement update can occur irregularly, whenever a new measurement is available. In this study, we performed a prediction step at 100 Hz with each new IMU output and an update step at 50 Hz when new velocity and altitude observations are available. The output of the prediction step $\bar{x}_{k+1}$ becomes the input to the update step in Equations (53) and (57):

  • Update step:

(53) $r_k = z_k - H\,\bar{x}_k$

(54) $S_k = H_k\,\bar{P}_k\,H_k^{T} + M_k$

(55) $K_k = \bar{P}_k\,H_k^{T}\,S_k^{-1}$

(56) $\hat{P}_k = (I - K_k H_k)\,\bar{P}_k$

(57) $\hat{x}_k = \bar{x}_k + K_k\,r_k$

where

(58) $M_k = \begin{bmatrix} w_{vN} & 0 & 0 \\ 0 & w_{vE} & 0 \\ 0 & 0 & w_{PD} \end{bmatrix}$

with wvN and wvE being the velocity measurement noise spectral densities, and wPD the altitude measurement noise spectral density.
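A sketch of the measurement update of Equations (52)–(58) is given below (function and variable names are ours).

    import numpy as np

    # Observation matrix of Eq. (52): its rows pick v_N, v_E, and p_D out of the 9-element state.
    H = np.zeros((3, 9))
    H[0, 0] = H[1, 1] = H[2, 5] = 1.0

    def ekf_update(x_bar, P_bar, z, M):
        """Measurement update of Eqs. (53)-(57) for z = [y_vN, y_vE, y_3] with noise matrix M, Eq. (58)."""
        r = z - H @ x_bar                          # innovation, Eq. (53)
        S = H @ P_bar @ H.T + M                    # innovation covariance, Eq. (54)
        K = P_bar @ H.T @ np.linalg.inv(S)         # Kalman gain, Eq. (55)
        P_hat = (np.eye(9) - K @ H) @ P_bar        # covariance update, Eq. (56)
        x_hat = x_bar + K @ r                      # state update, Eq. (57)
        return x_hat, P_hat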

5. Simulations

Simulation tests were performed to assess the effectiveness of accelerometer bias estimation when integrating the OFS, altimeter, and IMU. In the evaluation process, we conducted simulations involving a drone moving in a straight motion and in a circular motion. To assess performance, we computed the error in the estimated platform states (position and velocity) and the estimated accelerometer biases. To establish a baseline for evaluating the performance of our algorithm, we first conducted simulations without biases. Then, simulations with biases were performed, both without and with bias estimation.

A block diagram of the system being simulated is given in Figure 3 and shows that the model includes the motion of the drone in the horizontal plane, an Optical Flow Sensor model, a Velocity Generator that generates velocity observations in the B-frame, an IMU model, and the Kalman estimator. The IMU model generates the angular rate and acceleration sensor outputs in accordance with Equations (7) and (8). This is followed by the integration and calculation of the change in these signals over the previous time step. In the case of the accelerometer, the IMU model adds bias and random noise ηv. The IMU accelerometer measurement noise standard deviation in these simulations is set to σacc = 0.014 m/s². This corresponds to a noise density of approximately 0.14 mg/√Hz over a bandwidth of 100 Hz. All other error sources present in the IMU model (scale factor error, misalignments, angular rate bias, and random noise error) are set to zero. Doing so enables the simulation to specifically identify the impact that accelerometer bias and bias estimation have on navigation performance. The IMU functions at a frequency of 100 Hz, whereas the optical flow sensor operates at a frequency of 50 Hz. We assume that an AHRS provides the platform orientation matrix CBN for use in the navigator.

The Optical Flow Sensor has a quantization step size Q of 0.09°/count. The accelerometer bias true values along the xB-axis (bax), yB-axis (bay), and zB-axis (baz), when applied, have the levels specified in Table 1 (in m/s²).

The initial position and velocity state estimate errors are assumed to be zero. The Kalman Filter prediction step runs at 100 Hz, with new velocity observations entering at a rate of 50 Hz. The filter process noise spectral density matrices were set to Wv = 0.001 × I3, Wp = 0 × I3, and Wba = 10⁻⁷ × I3.
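For reference, the simulation settings listed above can be gathered in one place; the dictionary below simply restates those values (the layout and key names are ours).

    import numpy as np

    # Simulation settings restated from Section 5 (dictionary layout and key names are ours).
    SIM_PARAMS = {
        "imu_rate_hz": 100.0,                      # prediction-step rate
        "obs_rate_hz": 50.0,                       # OFS/altimeter update rate
        "sigma_acc": 0.014,                        # accelerometer noise std. dev. [m/s^2]
        "ofs_quant_rad": np.deg2rad(0.09),         # OFS quantization step, ~0.0015 rad/count
        "accel_bias": np.array([0.8, 0.4, -0.8]),  # true biases from Table 1 [m/s^2]
        "W_v": 0.001 * np.eye(3),                  # velocity process noise
        "W_p": 0.0 * np.eye(3),                    # position process noise
        "W_ba": 1e-7 * np.eye(3),                  # accelerometer bias process noise
    }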

Computational Complexity

The computations involved in producing state estimate $\hat{x}_k$ occur in the blocks following the Optical Flow Sensor (OFS) and IMU blocks. The OFS and IMU perform internal computations to produce their outputs and therefore do not impose any additional computational demands in the realization of the filter. The Velocity Generator block, the Conversion to N-frame block, and the Equation (47) block require a small number of multiplies and adds as well as computation of sin(ψ) and cos(ψ), which could be generated in a computationally tractable fashion using a table lookup function. The bulk of the computational requirements occur, therefore, in the Kalman Filter, specifically Equations (47)–(57). The most onerous computational task in the filter is likely forming $S_k^{-1}$, a 3 × 3 inverse that involves one divide. There are several matrix–vector multiplications to be performed within the filter, but many of the elements in these matrices are zero and can be skipped. Consequently, the overall computational load is similar to that of a typical linear time-invariant Kalman Filter. In essence, the computational demands required to implement the filter are fairly modest.

6. Estimation Schemes and Results

Table 2 contains a definition of the conditions applied in each of the simulation schemes.

6.1. Scenario 1: Straight-Line Motion

Schemes 1 through 4 describe a straight-line motion in which the UAV begins accelerating along the x-axis at t = 60 s, pitching forward to an angle of about 15° and accelerating to a steady forward velocity. Before moving northward, the platform remains stationary. Once in motion, the UAV maintains a steady northward trajectory. The simulation continues for a total of 600 s (10 min).

6.1.1. Scheme 1: Dead-Reckoning (Noisy Measurements Without State Estimation)

In this scheme, velocity and position are determined by integrating IMU outputs. The measurements contain noise but are free of acceleration bias. No bias estimation or correction is applied, meaning the system does not incorporate data from the OFS or Barometric Altimeter. The resulting velocity and position data are shown in Figure 4 and Figure 5. Clearly, there is a random walk type of process present and the position estimates are drifting steadily, reaching approximately 12 m of error.

6.1.2. Scheme 1a: Dead Reckoning (Biased Measurements Without State Estimation)

In this scheme, the accelerometer bias is turned on, and so accelerometer noise and bias error are present. The UAV’s position and velocity are estimated using a dead-reckoning approach; thus, filter corrections are not applied. As a result, sensor biases lead to significant drift in both the velocity and position estimates, causing large position errors over time. With bias present and no Kalman corrections applied, the dead-reckoning error becomes extremely large, with position errors on the order of 150 km after 10 min. The performance of this approach is illustrated in Figure 6 and Figure 7.

6.1.3. Scheme 2: Baseline: EKF (Noisy Measurements with State Estimation)

In this scheme, accelerometer biases are off but the Kalman estimator is on. The absence of sensor bias results in more accurate velocity and position estimates compared to Scheme 1. The performance of this approach is illustrated in Figure 8 and Figure 9.

6.1.4. Scheme 3: Biased Noisy Measurements (No Bias Estimation)

In this scheme, accelerometer biases remain on and the Kalman estimator is partially engaged, with corrections applied to position and velocity but without bias estimation. The result is bounded velocity errors but position errors that continue to grow over time in an unbounded fashion. Figure 10 and Figure 11 highlight the impact of uncorrected biases.

6.1.5. Scheme 4: Biased Noisy Measurements (with Bias Estimation)

In this scheme, bias estimation is added. Accelerometer bias and noise are on, and bias estimation is performed simultaneously with position and velocity estimation. In other words, the complete filter defined above is applied. This significantly improves the accuracy of both the velocity and position estimates, greatly mitigating the effect of sensor biases. Additionally, turning Bias Estimation on also enables Altimeter observations, leading to an improved altitude estimate. This is demonstrated in Figure 12 and Figure 13 where the scale of the figures is the same as that of Scheme 3 in order to more easily compare the degree of improvement. Furthermore, Figure 14 illustrates the estimated bias.

Note that the position error does not converge to zero as the velocity error does. The OFS provides only a relative displacement measure and does not provide absolute position. In other words, the xN and yN locations of the platform are unobservable. However, the zN-axis downward location, as measured by the barometer, is observable, as demonstrated by the zN position error converging to zero.

6.2. Scenario 2: Circular Motion

In the circular motion scenario, the drone flies along a circular trajectory for about 20 min. We start with the case in which the accelerometer biases are present and the Kalman estimator is on with bias estimation off. In the second scheme, we use biased measurements and simultaneously estimate the UAV position, velocity, and sensor biases. Bias estimation yields significantly improved velocity errors in the presence of biases. The biases are introduced at 100 s into the simulation. At this point, the Kalman Filter (KF) is actively engaged, correcting the velocity and position states using observations from the barometer and the Optical Flow Sensor (OFS). This dynamic adjustment helps to maintain the accuracy of the navigation solution despite the introduced biases.

6.2.1. Scheme 5: Biased Noisy Measurements (No Bias Estimation)

Biased sensor measurements were utilized, focusing only on estimating the UAV’s position and velocity without correcting for the bias. Over time, this led to the accumulation of position and velocity errors due to uncorrected sensor biases. The effects are illustrated in Figure 15 and Figure 16.

6.2.2. Scheme 6: Biased Noisy Measurements (with Bias Estimation)

While biased measurements were again used, bias estimation was engaged within the filter along with the position and velocity estimation. The result is a significant improvement in estimation accuracy, with velocity error converging to zero. The improvements are demonstrated in Figure 17 and Figure 18.

The OFS is crucial for generating horizontal velocity observations, essential for mitigating velocity errors caused by random walk and bias-induced drift. Additionally, the OFS facilitates the estimation of accelerometer bias along the xB and yB axes of the IMU. It is evident that the EKF is indispensable for eliminating the adverse effects of accelerometer bias on velocity and position estimates.

7. Conclusions

This paper presented an approach for improving UAV navigation by integrating IMUs with optical flow sensors, with a particular focus on bias estimation. While IMUs provide essential velocity, orientation, and acceleration data, they are prone to drift and systematic errors, such as accelerometer biases, which degrade navigation accuracy over time. Optical flow sensors, which measure relative UAV-ground motion, help mitigate this drift and enhance dead reckoning, particularly in GPS-denied environments.

A mathematical model for optical flow sensors was introduced, describing how sensor outputs relate to UAV motion. Unlike conventional sensors, optical flow sensors measure angular displacement changes directly rather than angular velocity, which affects velocity estimation. Bias estimation techniques were incorporated to correct accelerometer errors, significantly improving overall navigation accuracy. However, challenges such as limited operating range and sensitivity to platform angles must be considered in practical applications.

The findings of this research demonstrate the effectiveness of bias estimation in enhancing UAV state estimation and reducing drift. While this paper primarily utilizes a simulation-based validation approach, future research will involve both simulation and real-world testing across a broader range of conditions, including more complex trajectories and external factors like wind and obstacles. Future work will also aim to further refine the integration of IMU and optical flow data, as well as investigate advanced filtering techniques to improve robustness in real-world navigation scenarios.

Author Contributions

Methodology, D.B. and D.H.; Software, D.B. and D.H.; Validation, D.B.; Writing—original draft, D.B.; Writing—review & editing, D.B. and D.H. All authors have read and agreed to the published version of the manuscript.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

Author David Haessig was employed by the company AuresTech Inc. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Footnotes

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Figures and Tables
Figure 1. Basic block diagram for a strapdown inertial navigation system, courtesy [16].
Figure 2. Strapdown inertial navigation with IMU acceleration input corrected for IMU bias.
Figure 3. System and Simulation Block Diagram.
Figure 4. Position Errors, Dead-Reckoning, biases not present.
Figure 5. Velocity Errors, Dead-Reckoning, biases not present.
Figure 6. Position Errors, Dead-Reckoning, biases present.
Figure 7. Velocity Errors, Dead-Reckoning, biases present.
Figure 8. Position Errors, Kalman Est. on, biases not present, bias est. off.
Figure 9. Velocity Errors, Kalman Est. on, biases not present, bias est. off.
Figure 10. Position Errors, Kalman est. on, biases present, bias est. off.
Figure 11. Velocity Errors, Kalman est. on, biases present, bias est. off.
Figure 12. Position Errors, Kalman est. on, biases present, bias est. on.
Figure 13. Velocity Errors, Kalman est. on, biases present, bias est. on.
Figure 14. Bias Estimates during Scheme 4.
Figure 15. Position Errors, Kalman est. on, biases present, bias est. off.
Figure 16. Velocity Errors, Kalman est. on, biases present, bias est. off.
Figure 17. Position Errors, Kalman est. on, biases present, bias est. on.
Figure 18. Velocity Errors, Kalman est. on, biases present, bias est. on.

Table 1. Sensor Biases (m/s²).

bax     bay     baz
0.8     0.4     −0.8

Table 2. Simulation Schemes and Conditions.

Scheme Accelerometer Bias Bias Estimation Kalman Filter Corrections
1—Straight Line: Dead Reckoning, bias off OFF OFF OFF
1a—Straight Line: Dead Reckoning, bias on ON OFF OFF
2—Straight Line: bias off, Kalman Est. on, bias est off OFF OFF ON
3—Straight Line: bias on, Kalman Est. on, bias est off ON OFF ON
4—Straight Line: bias on/All est. on ON ON ON
5—Circle: bias on/Kalman Est. on, bias est off ON OFF ON
6—Circle: bias on/All est. on ON ON ON

References

1. Huang, S.; Dissanayake, G. Robot localization: An introduction. Wiley Encyclopedia of Electrical and Electronics Engineering; Wiley: Hoboken, NJ, USA, 2016; pp. 1-3. [DOI: https://dx.doi.org/10.1002/047134608x.w8318]

2. Barfoot, T.D. State Estimation for Robotics; 2nd ed. Cambridge University Press: Cambridge, UK, 2024; pp. 1-4.

3. Yuan, Q.; Chen, I.M. Localization and velocity tracking of human via 3 IMU Sensors. Sens. Actuators Phys.; 2014; 212, pp. 25-33. [DOI: https://dx.doi.org/10.1016/j.sna.2014.03.004]

4. Joshi, S.M. Adaptive Control in the Presence of Simultaneous Sensor Bias and Actuator Failures; NASA Technical Report NASA/TM-2012-217231; National Aeronautics and Space Administration, Langley Research Center: Hampton, VA, USA, 2012.

5. Wang, B. Reduced Integration Time Improves Accuracy in Dead Reckoning Navigation Systems. 2023; Available online: https://www.analog.com/en/analog-dialogue/articles/reduced-integration-time-improves-accuracy.html (accessed on 10 November 2024).

6. Abell, M.L.; Braselton, J.P. Chapter 2–First Order Ordinary Differential Equations. Introductory Differential Equations; 5th ed. Academic Press: Cambridge, MA, USA, 2018; [DOI: https://dx.doi.org/10.1016/B978-0-12-814948-5.00002-1]

7. Gilbert, H.B.; Celik, O.; O’Malley, M.K. Long-term double integration of acceleration for position sensing and frequency domain system identification. Proceedings of the 2010 IEEE/ASME International Conference on Advanced Intelligent Mechatronics; Montreal, QC, Canada, 6–9 July 2010; pp. 453-458. [DOI: https://dx.doi.org/10.1109/AIM.2010.5695927]

8. Tong, P.; Yang, X.; Yang, Y.; Liu, W.; Wu, P. Multi-UAV Collaborative Absolute Vision Positioning and Navigation: A Survey and Discussion. Drones; 2023; 7, 261. [DOI: https://dx.doi.org/10.3390/drones7040261]

9. Belfadel, D.; Haessig, D.; Chibane, C. Relative navigation of UAV swarm in a GPS-denied environment. Proceedings of the Signal Processing, Sensor/Information Fusion, and Target Recognition XXXII; Orlando, FL, USA, 1–3 May 2023; SPIE: Bellingham, WA, USA, 2023; Volume 12547, pp. 152-161.

10. Santoso, F.; Garratt, M.A.; Anavatti, S.G. Visual–Inertial Navigation Systems for Aerial Robotics: Sensor Fusion and Technology. IEEE Trans. Autom. Sci. Eng.; 2017; 14, pp. 260-275. [DOI: https://dx.doi.org/10.1109/TASE.2016.2582752]

11. MathWorks. Simultaneous Localization and Mapping—MATLAB. What Is SLAM (Simultaneous Localization and Mapping)—MATLAB; 2023; Available online: https://www.mathworks.com/discovery/slam.html (accessed on 10 November 2023).

12. Braber, T.; De Wagter, C.; de Croon, G.; Babuška, R. Optical-flow-based Stabilization of Micro Air Vehicles Without Scaling Sensors. Proceedings of the 10th International Micro-Air Vehicles Conference; Melbourne, Australia, 22–23 November 2018; pp. 289-297.

13. Lee, S.; Song, J.B. Robust mobile robot localization using optical flow sensors and encoders. Proceedings of the IEEE International Conference on Robotics and Automation; New Orleans, LA, USA, 26 April–1 May 2004; Volume 14, pp. 1039-1044. [DOI: https://dx.doi.org/10.1109/ROBOT.2004.1307287]

14. Kendoul, F.; Fantoni, I.; Nonami, K. Optic flow-based vision system for autonomous 3D localization and control of small aerial vehicles. Robot. Auton. Syst.; 2009; 57, pp. 591-602. [DOI: https://dx.doi.org/10.1016/j.robot.2009.02.001]

15. Belfadel, D.; Haessig, D. Optical Flow for Drone Horizontal Velocity Estimation without GPS. Proceedings of the 27th International Conference on Information Fusion; Venice, Italy, 7–11 July 2024.

16. Titterton, D.; Weston, J. Strapdown Inertial Navigation Technology; 2nd ed. Institute of Electrical Engineers: London, UK, 2004; [DOI: https://dx.doi.org/10.1049/PBRA017E]

17. Carpenter, J.; D’Souza, C. Navigation Filter Best Practices; Technical Report TP-2018-219822; NASA: Hampton, VA, USA, 2018.

© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).