In this review, optimal control designs for unmanned vehicles via adaptive dynamic programming (ADP) are investigated. Various complex tasks in unmanned systems are formulated as fundamental optimal regulation and tracking control problems for the position and attitude of the vehicles. The optimal control policy is obtained by solving the Hamilton-Jacobi-Bellman (HJB) equation with ADP-based control methods. Neural network implementations and policy-iteration ADP algorithms are the common approaches in these methods, enabling online updates and partially model-free control for unmanned vehicles with various structures. To handle model complexities and uncertain disturbances in unmanned vehicle dynamics, robust ADP-based control methods have been proposed, including robust ADP control for matched and unmatched uncertainties, robust guaranteed cost control with ADP, and ADP-based H-infinity control.
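The policy-iteration scheme described above alternates policy evaluation (solving the Bellman/HJB equation for a fixed policy) with policy improvement (a greedy update of the policy). A minimal sketch of this idea on a linear-quadratic regulation problem, where both steps have closed forms, is shown below; the system matrices, costs, and initial gain are illustrative assumptions, not taken from the review, and general unmanned-vehicle dynamics would instead use neural network approximators as the abstract notes.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov, solve_discrete_are

# Illustrative discrete-time double-integrator model (A, B, Q, R are
# assumptions for this sketch, not taken from the review).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)          # state cost weight
R = np.array([[1.0]])  # control cost weight

def policy_iteration_lqr(A, B, Q, R, K0, iters=50, tol=1e-10):
    """Policy-iteration ADP for the linear-quadratic regulator.

    Policy evaluation solves a Lyapunov equation (the linear analogue
    of solving the Bellman/HJB equation for a fixed feedback gain);
    policy improvement performs a greedy update of the gain.
    """
    K = K0
    for _ in range(iters):
        Acl = A - B @ K
        # Policy evaluation: Acl^T P Acl - P + Q + K^T R K = 0
        P = solve_discrete_lyapunov(Acl.T, Q + K.T @ R @ K)
        # Policy improvement: greedy gain w.r.t. the evaluated cost P
        K_new = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        if np.max(np.abs(K_new - K)) < tol:
            K = K_new
            break
        K = K_new
    return K, P

# Any stabilizing initial gain works; this one is chosen by hand.
K0 = np.array([[10.0, 10.0]])
K, P = policy_iteration_lqr(A, B, Q, R, K0)

# Cross-check against the direct Riccati (DARE) solution.
P_star = solve_discrete_are(A, B, Q, R)
K_star = np.linalg.solve(R + B.T @ P_star @ B, B.T @ P_star @ A)
print(np.allclose(K, K_star))
```

For nonlinear vehicle dynamics the value function and policy have no closed form, which is why the ADP methods surveyed here replace the two exact steps above with neural network approximations updated online.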
Keywords
Dynamic programming;
Robust control;
Neural networks;
Artificial intelligence;
Tracking control;
Task complexity;
Bellman theory;
Robots;
Computer engineering;
Design;
Unmanned aerial vehicles;
Algorithms;
Adaptive control;
Unmanned vehicles;
Methods;
Optimal control;
Control methods;
H-infinity control;
Robotics;
Vehicles;
Control systems