1. Introduction
The “Model Predictive Controller (MPC)” takes into consideration often contradictory requirements by minimizing a composite cost function under constraints that precisely describe the dynamic model (i.e., the “abilities”) of the controlled system. The mathematical background of this approach corresponds to the optimization of functionals, for the implementation of which the computationally very greedy “dynamic programming” was suggested in the 1950s [1,2]. By the application of a finite time grid and Lagrange’s “reduced gradient method” [3], referred to as “nonlinear programming”, the “Receding Horizon Controller (RHC)” [4] was introduced in 1978. Though the resolution of the grid must be fine enough to allow Euler integration over it, RHC normally needs considerably less computational resources than dynamic programming. It was successfully applied for the control of relatively slow processes such as crystallization [5] and biomedical processes in the artificial pancreas [6]. The increase in the computational power of modern computers later allowed its application in various fields (e.g., [7,8,9]); however, in robotics, where fast motion of the robot arms is normally required, the optimal control framework was found to be too complicated, and the available dynamic model of the robot was directly applied for computing the necessary driving torque in the “Computed Torque Control (CTC)” approach (e.g., [10,11,12]). In the 1990s, it became clear that it is difficult to obtain precise dynamic robot models (e.g., [13]). The friction effects that are not very well modeled by classical mechanics generally cause problems in model identification even with the relatively simple Stribeck model [14], while more sophisticated friction descriptions such as the “LuGre” model have to introduce new, dynamically coupled system variables [15].
Since the CTC controller can guarantee asymptotically zero tracking error for arbitrary initial conditions only if the exact dynamic model is available, more sophisticated methods were developed to achieve precise tracking. The fundamental idea came from Lyapunov’s Ph.D. thesis [16,17], in which he considered the stability of the equilibrium points of certain physical systems described by coupled nonlinear systems of ordinary differential equations. Since these equations of motion normally do not have closed-form analytical solutions, their subtle details were not discovered; however, by Lyapunov’s ingenious method it became possible to define and prove various stability properties of these equilibrium points. In control applications, the “zero tracking error” was placed in the role of the equilibrium point, and in the 1990s, the first paradigms such as the “Adaptive Inverse Dynamics Controller (AIDC)” and the “Slotine–Li Adaptive Robot Controller” appeared [18]. This approach works with the introduction of a special metric tensor in the space of the tracking error, its time-derivative, and its time-integral, by the use of which the Lyapunov function, as the square of an error metric, can stagnate, monotonically or strictly monotonically decrease, or asymptotically converge to zero. This metric tensor depends on the feedback gains applied in the control rule. If the exact dynamic model is available, it is possible to prove by solving the Lyapunov equation that this special metric converges to zero; in this case, the “arbitrary components” of the metric tensor are not present in the control law. If the available dynamic model is imprecise, the arbitrary components of this metric tensor appear in the adaptive feedback, too. The result is normally not a particular controller but a whole set of stable adaptive controllers whose elements produce different error relaxation features. Since in several applications, e.g., in the life sciences, the pure fact of stability cannot guarantee the avoidance of “lethal states”, complementary tuning of the parameters of the stable solutions became necessary, as e.g., in [6]. The main problem is that only the original components of the tracking error have a clear phenomenological interpretation and physical significance; as a result, guaranteeing the behavior of these components would be really significant. However, guaranteeing the decrease of the norm obtained by the use of the particular metric tensor cannot guarantee the decrease of the physically interpreted error components, because they are “mixed” in this metric. In practice, the decrease of the individual components would be desirable.
The adaptive controllers can be divided into two large subsets: the “parameter adaptive approach”, as in [18], utilizes the ab ovo exactly known formal properties of the dynamic model and realizes continuous refinement of the model parameters. In the “signal adaptive approach”, as in the “Model Reference Adaptive Controller (MRAC)” [19], fast complementary control signals are applied that make the behavior of the controlled system identical to that of a reference model. This reference model can normally be chosen as a stable “Linear Time-Invariant (LTI)” system that can easily be controlled. This approach also uses a Lyapunov function in its design. Though appropriate “candidate Lyapunov functions” are available for different problem classes (e.g., [20]), this approach is mathematically very difficult and needs well-educated, innovative control designers. Furthermore, the failure to find an appropriate Lyapunov function for a given particular problem does not allow any conclusion regarding its stability.
To tackle the problem of the “transient behavior” of the controlled system, which is not clearly addressed in approaches that guarantee only the stability of the solution, a novel iterative controller was suggested in [21] in which the task of computing the appropriate control signal was transformed into finding the fixed point of a contractive map. Its structure for a second-order physical system is sketched in Figure 1.
It is a “flexible framework” that can be filled in with various particular contents. In the “Kinematic Block”, an arbitrary tracking strategy can be formulated that drives the trajectory tracking error to 0 as $t\to\infty$. For instance, in a “Proportional, Integral, Derivative (PID)” design, the following quantities can be introduced with a positive constant $\Lambda$ in (1). Its stability can be proved without the use of any Lyapunov function.
$e_{int}(t) := \int_{t_0}^{t}\left[q^{N}(\xi)-q(\xi)\right]d\xi$ (1a)
$e(t) := q^{N}(t)-q(t)$ (1b)
$\left(\Lambda+\frac{d}{dt}\right)^{3}e_{int}(t)=0$ (1c)
$\ddot{q}^{Des}(t)=\ddot{q}^{N}(t)+3\Lambda\dot{e}(t)+3\Lambda^{2}e(t)+\Lambda^{3}e_{int}(t)$ (1d)
It is evident that the functions $t^{n}e^{-\Lambda t}$ ($n=0,1,2$) have the property in (2)
$\left(\Lambda+\frac{d}{dt}\right)\left[t^{n}e^{-\Lambda t}\right]=n\,t^{n-1}e^{-\Lambda t}$ (2)
therefore the linear space of the general solutions of the LTI system in (1c) can be spanned by three basis functions, each of which converges to 0 as $t\to\infty$ if $\Lambda>0$, in the form given in (3)
$e_{int}(t)=\left(A+Bt+Ct^{2}\right)e^{-\Lambda t}$ (3)
in which the constants $A$, $B$, and $C$ are determined by the arbitrary initial conditions. The basis function $e^{-\Lambda t}$ is mapped to zero by the operator $\left(\Lambda+\frac{d}{dt}\right)$, $te^{-\Lambda t}$ is mapped to zero by $\left(\Lambda+\frac{d}{dt}\right)^{2}$, and finally $t^{2}e^{-\Lambda t}$ is mapped to zero by $\left(\Lambda+\frac{d}{dt}\right)^{3}$. It is evident that not only $e_{int}(t)$ converges to zero: $e(t)$ and $\dot{e}(t)$ also have to converge to zero, because a sequence of implications can be obtained stating that “after a while” (practically a few times $1/\Lambda$) the various components become 0 as in (4)
$\left(\Lambda+\frac{d}{dt}\right)^{3}e_{int}(t)=0$ (4a)
$\left(\Lambda+\frac{d}{dt}\right)^{2}e_{int}(t)=2Ce^{-\Lambda t}\to 0$ (4b)
$\left(\Lambda+\frac{d}{dt}\right)e_{int}(t)=\left(B+2Ct\right)e^{-\Lambda t}\to 0$ (4c)
$e_{int}(t)=\left(A+Bt+Ct^{2}\right)e^{-\Lambda t}\to 0$ (4d)
Evidently, (4b) can be rewritten in the form of (5), which corresponds to a stable inhomogeneous system in which, due to (4d), the inhomogeneous “driving term” vanishes after a while, therefore $e(t)\to 0$ as $t\to\infty$
$\dot{e}(t)=-2\Lambda e(t)-\Lambda^{2}e_{int}(t)+2Ce^{-\Lambda t}$ (5)
In a similar manner, (4a) can be rewritten as (6)
$\ddot{e}(t)=-3\Lambda\dot{e}(t)-3\Lambda^{2}e(t)-\Lambda^{3}e_{int}(t)$ (6)
that again is a stable inhomogeneous LTI system in which the inhomogeneous driving terms vanish after a while, therefore $\dot{e}(t)\to 0$ as $t\to\infty$.
In the case of digital controllers, the box “Delay” normally corresponds to the cycle time of the digital controller. In the box “Deformation”, the “Response Function” of the system can be introduced; its value depends on further variables (denoted by the symbol “…”) that can vary only slowly, while the controller can modify its direct input very quickly. The box contains a function $G$ that is so constructed that if its input already corresponds to the solution of the control task, its output reproduces this input, that is, the solution of the control task is the fixed point of the function $G$. For the construction of $G$, various proposals were put forward and investigated in [21,22,23] by the use of functions given in analytical form. For proving the convergence of the iteration, Banach’s fixed point theorem [24] was used, according to which, in a linear, normed, complete metric space (“Banach Space”), the iterative sequence $x_{n+1}=G(x_{n})$ created by a contractive mapping $G$ converges to the unique fixed point of the function $G$. For this purpose, the response function generally has to meet the condition that the real parts of the eigenvalues of its derivative must be simultaneously negative or simultaneously positive in the vicinity of the fixed point. In this case, it is possible to introduce appropriate parameters in the function $G$ that can guarantee its contractivity [25]. Though during one digital control step only one step of this iteration can be performed, since the slowly varying quantities change only very little in comparison with the iterated signal, according to ample simulation investigations, this method showed good convergence properties in many cases. The proofs were based on Taylor series approximation of the function values in the vicinity of the fixed point. By realizing a similar deformation in the field of the control forces, the fixed point iteration-based method was found applicable in a novel design of MRAC controllers [26]. It was shown that this novel technique could be interpreted from the point of view of the Lyapunov function, too [27,28].
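To make the structure of Figure 1 more tangible, a minimal sketch of one digital control cycle is given below in Julia (the language later used for the simulations). The plant response `f_response`, the rough inverse model `rough_model`, and the particular deformation map `G` are hypothetical stand-ins for the corresponding blocks, not the constructions of [21,22,23]; only the “Kinematic Block” follows (1d).

```julia
# Minimal illustrative sketch of one digital control cycle of the fixed point
# iteration-based scheme of Figure 1 for a single controlled coordinate.
# f_response, rough_model, and G are hypothetical stand-ins.

const LAM = 6.0   # positive constant Λ of the PID-like kinematic block in (1)

# "Kinematic Block", Eq. (1d): desired 2nd time-derivative from the tracking errors
kin_block(ddq_nom, e, de, e_int) = ddq_nom + 3*LAM*de + 3*LAM^2*e + LAM^3*e_int

f_response(Q) = 0.8*Q - 0.3    # hypothetical realized acceleration for a torque Q
rough_model(x) = 1.2*x         # imprecise inverse model: torque guess for an acceleration

# A simple deformation map: the deformed signal is shifted by a fraction of the
# observed response error; near the fixed point such a map is contractive if the
# response "reacts in the right direction" to the input modification.
G(x_def, ddq_des, ddq_realized) = x_def + 0.5*(ddq_des - ddq_realized)

# One control cycle performs only a single step of the iteration ("Delay" box)
function control_cycle(x_def, ddq_nom, e, de, e_int, ddq_realized_prev)
    ddq_des = kin_block(ddq_nom, e, de, e_int)
    x_def   = G(x_def, ddq_des, ddq_realized_prev)  # "Deformation" box
    Q       = rough_model(x_def)                    # approximate dynamic model
    ddq     = f_response(Q)                         # response of the real system
    return x_def, Q, ddq
end
```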
In [29] a simple, geometrically interpreted method was introduced. The two arrays to be transformed into each other can be augmented with dummy components to arrays $\hat{x}$ and $\hat{y}$ so that they obtain a common Frobenius norm: $\|\hat{x}\|=\|\hat{y}\|=R$. If these vectors are not parallel to each other, they span a 2-dimensional plane in $\mathbb{R}^{n}$ that is also spanned by the orthogonal unit vectors $e_{1}=\hat{x}/R$ and $e_{2}$ that is made of the component of $\hat{y}$ that is orthogonal to $e_{1}$. In this case, the skew-symmetric matrix $A=e_{2}e_{1}^{T}-e_{1}e_{2}^{T}$ and the angle $\varphi$ determine an orthogonal matrix $O(\varphi)$ that makes rotations in the space so that the “axis of rotation” is the $(n-2)$-dimensional orthogonal subspace of the vectors $e_{1}$ and $e_{2}$
$O(\varphi)=\exp(\varphi A)=I+A\sin\varphi+A^{2}\left(1-\cos\varphi\right)$ (7)
that corresponds to the generalization of the Rodrigues formula [30] (this fact can easily be proved by utilizing that $A^{3}=-A$). If $\varphi$ is calculated from the scalar product as $\cos\varphi=\hat{x}^{T}\hat{y}/R^{2}$, $O(\varphi)$ just rotates the augmented vector $\hat{x}$ into the other augmented vector $\hat{y}$. Consequently, the physically interpreted projection of $\hat{x}$ is exactly transformed into that of $\hat{y}$. By the introduction of an interpolation factor $0<\lambda<1$, the rotation with the angle $\lambda\varphi$ makes $\hat{x}$ approach $\hat{y}$. When this function is applied in the block called Deformation, via setting a great value to $R$ and a small one to $\lambda$, the convergence of the iteration can be guaranteed if the angle formed between the necessary modification of the system’s input and the desired modification of its response is acute. The steering systems of bicycles and cars work accordingly: if the driver wishes to achieve a “sharper turn” by a given amount, the necessary modification of the rotational angle of the steering wheel forms an acute angle with this desired change. In other words, if the driver wishes to turn to the left, the steering wheel has to be turned to the left, too. This simple property can be well utilized in practice in the development of steering algorithms.
With regard to modeling issues, the pioneering discovery by Weierstraß has to be mentioned. In a lecture in 1872, he gave the first example of an everywhere continuous function that was nowhere differentiable [31]. He highlighted the fact that the class of continuous functions is so complicated that we cannot even “imagine” the graph of a general continuous function. We can “see” only the graphs of “smooth” functions whose derivatives may have jumps at a few discrete points only. His discovery anticipated difficulties in the field of approximation of continuous functions; however, in his inaugural lecture at the Academy of Berlin in 1885, he showed that polynomials can be regarded as universal approximators of single-variable continuous functions [32]. His theorem was extended from polynomials to other approximators by Stone in 1948 [33]. The approximation of multivariable continuous functions was found to be a more difficult task. Around 1900, Hilbert in one of his conjectures “guessed” that it was impossible to construct continuous multivariable functions by the use of single-variable ones [34]. Though in 1927 Volterra introduced a series model [35] that is a sequence of approximations of continuous functions using a polynomial functional expansion [36] for dealing with integro-differential equations, the first rigorous rebuttals of Hilbert’s conjecture were published only in 1957 by Arnold for functions of three variables [37], and by Kolmogorov for continuous functions of many variables [38]. Kolmogorov’s constructive proof was simplified and made more elegant in the 1960s by Sprecher in 1965 [39] and Lorentz in 1966 [40,41], and served as the mathematical background of the feedforward neural networks that can be regarded as technical realizations of the universal approximators that appeared in the 1990s [42,43]. In 1990, the idea of “Convolutional Neural Networks (CNN)” was suggested for image recognition applications by Le Cun et al. [44], in which a convolutional layer was the fundamental building block. The early convolutional layers were linear systems: their outputs were affine transformations of their inputs. To take into account nonlinear effects in face recognition, the Volterra polynomials appeared in 2009 in [45], and later were built into CNNs [36], too. While the feedforward neural networks can be taught by examples, Kohonen’s “self-organizing map” [46] was able to automatically find categories in samples.
For modeling dynamic effects, recurrent neural networks appeared [47,48] and were introduced in the CNN’s convolutional layer [49], too.
For the mathematically rigorous tackling of uncertainties of non-statistical nature, Zadeh introduced the concept of “fuzzy sets” in 1965 [50]. By now, the concept has been extended to “type 3” sets [51]. In the 1990s, it became clear that fuzzy systems can be regarded as universal approximators, too [52,53], in which the Weierstraß–Stone approximation theorem can also be used [54]. For a practically satisfactory description of the physical state of certain machines, e.g., turbojet engines, various parameters such as temperature, pressure, speed, and vibrations are used that can be revealed by complicated diagnostic methods (e.g., [55,56,57]); however, for control purposes, only certain aspects of the “complete” physical model can be used (e.g., [58,59]) in the form of very “incomplete” models.
From the point of view of function approximation, the various realizations of the universal approximators can be regarded as huge structures that contain numerous free parameters that have to be fitted. For this purpose, numerous methods can be mentioned, such as the gradient descent-type “error backpropagation” [60] that can be made more efficient by using appropriate activation functions in the neurons, its combination with genetic algorithms [61], other stochastic optimization methods such as simulated annealing [62], memetic and bacterial memetic algorithms [63,64,65,66], the simplex algorithm [67,68,69], the “particle swarm algorithm” [70,71], etc.
Rigorous theoretical investigations soon revealed the phenomenon called the “curse of dimensionality”, due to which the “universal approximator” ability of the soft computing tools was doubted at least from a practical point of view (e.g., [72,73,74]). Many of these problems, expressed in terms of “nowhere denseness”, stem from the “irregular nature” of the continuous functions discovered by Weierstraß in [31]. It can be expected that the situation is not so hard in the case of “smooth functions”. In this view, the idea of “polytopic models” was introduced in 2006 [75,76], in which the function values were sampled over some grid points of a multidimensional space (i.e., in the vertices of polytopes). By the use of the higher-order generalization of Golub’s and Kahan’s “Singular Value Decomposition (SVD)” [77] by Lathauwer et al. in 2000 [78], these models can be simplified, under well-controlled conditions, by keeping the most important contributions belonging to the larger singular values. Further, by also taking into account control possibilities via “Linear Matrix Inequalities (LMI)”, the controller design methodology of the “tensor product model” was proposed in [79]. This approach renders the program announced by Boyd et al. in 1994 [80] generally applicable. By the use of LMIs, a wide set of control problems can be solved by a Lyapunov function-based design, for which efficient MATLAB packages were developed [81]. In this approach, a different metric tensor can be applied in the Lyapunov function in each cell. Passing the cell limits can give rise to certain problems that can be tackled either by the use of “switching controllers” (e.g., [82,83,84]) or by using a redundant system of coordinates obtained from the vertices of a polytope within a convex hull to deal with a continuous problem (e.g., [85]).
To tackle the problem of the curse of dimensionality, and to evade the need for massive parallelism and sophisticated data synchronization in the learning and operating phases of the traditional structures, a novel soft computing structure was suggested in [86] that is more or less akin to a coarse resolution grid or a fuzzy model in which very approximate and simple rules are applied for control purposes. It broke with the Lyapunov function-based design that generally was present in the switching controllers and in the “Linear Parameter Varying (LPV)” or “Quasi LPV” designs, by the application of simple smoothed tracking of the jumps in the control force. At first, its modeling ability was checked for the description of the behavior of the free and the controlled van der Pol oscillator [87], a popular benchmark system that makes nonlinear oscillations in a limit cycle. For modeling the motion of the free system, the function determining the acceleration $\ddot{q}$ from the state $(q,\dot{q})$, and for control purposes, the function determining the necessary control force $Q$ from $(q,\dot{q},\ddot{q})$, were approximated in a dynamic range that was “filled in” during the control process. Its main properties were as follows:
1. For the free system, the two-dimensional input and output arrays were augmented to three-dimensional ones of identical Frobenius norms, exactly as it was performed in the rotation-based abstract deformations applied in [29]. In the case of the controlled model, the three-dimensional input and output arrays were similarly augmented to produce four-dimensional ones of identical norms, in which the appended “dummy components” had no physical interpretation; their role was only to guarantee equal norms.
2. Coarse resolution grids were introduced for the input values of the free model and of the controlled model, respectively. In the center points of the grid cells, the appropriate $\ddot{q}$ and $Q$ values were computed from the available exact model of the van der Pol oscillator. Following that, the abstract rotations defined in (7) were calculated that rotated the augmented input vectors into the corresponding augmented output vectors. Each grid cell was associated with a “neuron” whose “activation function” executed the rotations according to (7) and had the following parameters:
Its cell limits (“range of competence”): an interval for each input variable, i.e., two intervals for the free motion and three for the controlled system, respectively;
The orthogonal unit vectors $e_{1}$ and $e_{2}$, from which the generator $A$ of the rotation in (7) can be computed, and the angle of the necessary rotation, $\varphi$.
3. These neurons were arranged in a single layer in which, in the “teaching process”, each neuron obtained as its input the state of the free system or the appropriate arguments of the dynamic model, respectively. If the input signal belonged to the “range of competence” of the given neuron, $e_{1}$, $e_{2}$, and $\varphi$ were computed and stored. During “normal operation”, the neuron used the same type of input both in the free system modeling mode and in the “use for control” mode. If the input data belonged to its range of competence, it formed the augmented input vector, computed its rotated image according to (7), and as its output it provided the first component of the rotated vector that corresponded to the modeled value of $\ddot{q}$ or $Q$, respectively.
4. The last layer of the novel neural structure consisted of a single neuron that summed the calculated outputs. Since the cell limits were determined in such a way that the model had only disjoint cells, the output of the summing layer was the result of the “soft model”.
5. To reduce the effects of the jumps in the control signal at the cell boundaries, the actually applied generalized force $Q_{appl}$ was smoothed by the tracking rule based on a positive constant $\beta$ in (8)
$\dot{Q}_{appl}(t)=\beta\left[Q(t)-Q_{appl}(t)\right]$ (8)
that, for a stationary “driving term” $Q$, has the stationary solution $Q_{appl}=Q$; furthermore, for a time-varying $Q$, two different solutions of (8) can differ from each other only by a term $\delta(t)$ that satisfies the differential equation $\dot{\delta}=-\beta\delta$, that is, the differences can stem only from the initial conditions and converge to 0 as $t\to\infty$. Consequently, the smoothed signal tracks well the actual $Q$ if its variation is not significant during a time interval of duration $1/\beta$, and it “smooths well” the signals that have faster variation.
6. The data representation made it possible to apply real-time modification (“step-by-step learning”) of the neuron’s previously learned parameters: the unit vectors $e_{1}$, $e_{2}$ and the angle $\varphi$ were refreshed, on the basis of the augmented input and output vectors $\hat{x}^{new}$ and $\hat{y}^{new}$ of the freshly observed data, according to a learning rule determined by a learning parameter $\mu$ as
$e_{1}^{fresh}=\hat{x}^{new}/R$ (9a)
$e_{2}^{fresh}=\dfrac{\hat{y}^{new}-\left(\hat{y}^{new\,T}e_{1}^{fresh}\right)e_{1}^{fresh}}{\left\|\hat{y}^{new}-\left(\hat{y}^{new\,T}e_{1}^{fresh}\right)e_{1}^{fresh}\right\|}$ (9b)
$\varphi^{fresh}=\arccos\left(\hat{x}^{new\,T}\hat{y}^{new}/R^{2}\right)$ (9c)
$e_{1}^{new}=\dfrac{(1-\mu)e_{1}^{old}+\mu e_{1}^{fresh}}{\left\|(1-\mu)e_{1}^{old}+\mu e_{1}^{fresh}\right\|}$ (9d)
$e_{2}^{new}=\dfrac{(1-\mu)e_{2}^{old}+\mu e_{2}^{fresh}}{\left\|(1-\mu)e_{2}^{old}+\mu e_{2}^{fresh}\right\|}$ (9e)
$\varphi^{new}=(1-\mu)\varphi^{old}+\mu\varphi^{fresh}$ (9f)
$A^{new}=e_{2}^{new}e_{1}^{new\,T}-e_{1}^{new}e_{2}^{new\,T}$ (9g)
It must be noted that even if $e_{1}^{old}$ and $e_{2}^{old}$ were orthogonal to each other, the new unit vectors $e_{1}^{new}$ and $e_{2}^{new}$ will not be exactly orthogonal. Consequently, the new skew-symmetric matrix $A^{new}$ can generate rotations in the form $\exp\left(\varphi A^{new}\right)$, but, because $\left(A^{new}\right)^{3}\neq-A^{new}$, instead of (7) we can state only that
$\exp\left(\varphi A^{new}\right)\approx I+A^{new}\sin\varphi+\left(A^{new}\right)^{2}\left(1-\cos\varphi\right)$ (10)
i.e., by keeping the right-hand side of (10), which is easy to compute, linear transformations can be used instead of exact rotations; these are good approximations of rotations.
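To illustrate the above construction, a minimal sketch of a single “rotational neuron” is given below in Julia. It is only an assumption-laden illustration, not the authors’ program code: the input and the output are represented by vectors of equal length, the augmentation by a single dummy component that guarantees the common norm is the sketch’s own choice, as are all names; the firing rule follows (7) and (10), and the learning rule follows the blending scheme of (9).

```julia
# Sketch of a single "rotational neuron" of the coarse grid-based soft model.
using LinearAlgebra

mutable struct RotNeuron
    lo::Vector{Float64}    # lower cell limits ("range of competence")
    hi::Vector{Float64}    # upper cell limits
    e1::Vector{Float64}    # stored unit vectors spanning the plane of rotation
    e2::Vector{Float64}
    phi::Float64           # stored rotational angle
    R::Float64             # common norm of the augmented vectors
end

in_cell(n::RotNeuron, u) = all(n.lo .<= u .<= n.hi)

# Augmentation with one dummy component so that the norm equals R
# (an assumption of this sketch; it requires ||u|| < R)
augment(u, R) = vcat(u, sqrt(R^2 - dot(u, u)))

# "Normal operation": the output is the first component of the rotated augmented
# input, obtained with the right-hand side of (10); outside the cell it is 0, so
# the single summing output neuron collects only the competent cell's result.
function fire(n::RotNeuron, u)
    in_cell(n, u) || return 0.0
    x = augment(u, n.R)
    A = n.e2 * n.e1' - n.e1 * n.e2'                    # skew-symmetric generator
    O = I + A * sin(n.phi) + A^2 * (1 - cos(n.phi))    # right-hand side of (10)
    return (O * x)[1]
end

# "Step-by-step learning" in the spirit of (9): blend the old parameters with the
# ones computed from a freshly observed input/output pair, then re-normalize.
function learn!(n::RotNeuron, u_new, y_new, mu)
    xh, yh = augment(u_new, n.R), augment(y_new, n.R)
    e1f  = xh / n.R
    e2f  = normalize(yh - dot(yh, e1f) * e1f)
    phif = acos(clamp(dot(xh, yh) / n.R^2, -1.0, 1.0))
    n.e1  = normalize((1 - mu) * n.e1 + mu * e1f)      # no longer exactly orthogonal
    n.e2  = normalize((1 - mu) * n.e2 + mu * e2f)      #   to each other, cf. (10)
    n.phi = (1 - mu) * n.phi + mu * phif
    return n
end
```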
In the present paper, the above-outlined idea is extended, and it contains the following novelties:
1. The controlled system is an underactuated two degree of freedom construction in which the directly controlled subsystem is dynamically coupled with a non-controllable one acting as “parasite dynamics”.
2. Instead of the simple CTC control and its robust variable structure/sliding mode-based correction (e.g., [88,89,90]), the “fixed point iteration-based adaptive control scheme” depicted in Figure 1 is applied to compensate the effects of the imprecisions of the coarse grid-based model, with the application of the rotation-based adaptive controller announced in [29].
3. The effects of the measurement noises are investigated and reduced by a smoothing technique that is similar to the solution published in [91].
4. The computation time of the controller was measured for the hardware and software environment that was used in the simulations.
In Section 2, the dynamic model of the controlled system is discussed. Section 3 presents the rotational network-based soft model; in Section 4, simulation results are presented. The paper is closed with Section 5 in which the results are discussed and further possible research is outlined.
2. The Dynamic Model of the Controlled System
The controlled system is a wheel that is rotated around a horizontal axle. In one of its spokes, a mass point is built in: it is located between two springs, and its motion is damped by viscous friction. The Euler–Lagrange equations of motion of this model, completed with the friction term, are given in (11) and can be used for simulation purposes,
$\left(\Theta+mq_{2}^{2}\right)\ddot{q}_{1}+2mq_{2}\dot{q}_{1}\dot{q}_{2}+mgq_{2}\cos q_{1}=Q_{1}$ (11a)
$m\ddot{q}_{2}-mq_{2}\dot{q}_{1}^{2}+d\dot{q}_{2}+k\left(q_{2}-r\right)+mg\sin q_{1}=0$ (11b)
in which $q_{1}$ denotes the rotational angle of the wheel (measured from the horizontal direction), $q_{2}$ is the radial distance of the mass point along the spoke, measured from the rotational axle, $r$ is the zero-force position of the spring connecting the axle with the mass point, $k$ is the spring constant, $d$ denotes the viscous damping coefficient of the mass point as it moves along the spoke, $m$ is its inertia, $\Theta$ corresponds to the inertia momentum of the wheel, $g$ is the gravitational acceleration, and $Q_{1}$ denotes the driving torque. For control purposes, the necessary control torque can be obtained by rearranging (11a) as
$Q_{1}=\left(\Theta+mq_{2}^{2}\right)\ddot{q}_{1}^{Des}+2mq_{2}\dot{q}_{1}\dot{q}_{2}+mgq_{2}\cos q_{1}$ (12)
The numerical data used in the simulations are given in Table 1.
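For illustration, a compact Julia sketch of how (11) can be integrated by simple Euler steps for simulation purposes, and how (12) yields the control torque, is given below; the structure `WheelParams`, all names, and the handling of the desired acceleration are this sketch’s own choices, and no numerical values of Table 1 are assumed.

```julia
# Sketch: Euler integration of the model (11) and the control torque of (12).
struct WheelParams
    Θ::Float64    # inertia momentum of the wheel
    m::Float64    # inertia of the mass point
    k::Float64    # spring constant
    r::Float64    # zero-force position of the spring
    d::Float64    # viscous damping coefficient along the spoke
    g::Float64    # gravitational acceleration
end

# Accelerations from Eq. (11) for a given driving torque Q1
function accelerations(p::WheelParams, q1, q2, dq1, dq2, Q1)
    ddq1 = (Q1 - 2*p.m*q2*dq1*dq2 - p.m*p.g*q2*cos(q1)) / (p.Θ + p.m*q2^2)
    ddq2 = (p.m*q2*dq1^2 - p.k*(q2 - p.r) - p.d*dq2 - p.m*p.g*sin(q1)) / p.m
    return ddq1, ddq2
end

# Control torque from Eq. (12) for a desired (adaptively deformed) angular acceleration
torque(p::WheelParams, q1, q2, dq1, dq2, ddq1_des) =
    (p.Θ + p.m*q2^2)*ddq1_des + 2*p.m*q2*dq1*dq2 + p.m*p.g*q2*cos(q1)

# One Euler step of the simulation over the cycle time dt
function euler_step(p::WheelParams, q1, q2, dq1, dq2, Q1, dt)
    ddq1, ddq2 = accelerations(p, q1, q2, dq1, dq2, Q1)
    return q1 + dt*dq1, q2 + dt*dq2, dq1 + dt*ddq1, dq2 + dt*ddq2
end
```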
3. The Rotational Neural Model Structure Tailored to the Controlled System
The structure of the rotational neural model is outlined in Figure 2.
The nodes corresponded to grids of 11 elements for each of the input variables of the soft dynamic model, with appropriate grid widths covering the dynamic range of interest. For smoothing the torque signal obtained from (8), a fixed positive parameter $\beta$ was in use. The augmented model vectors were rotated in their augmented space with a prescribed common vector norm $R$. The adaptive rotations in Figure 1 were achieved with their own prescribed common vector norm and interpolation factor $\lambda$. A fixed learning parameter $\mu$ was used in (9) for incremental learning purposes. In the “Kinematic Block” of Figure 1, the PID term of (1) with a fixed positive $\Lambda$ was applied.
With regard to noise issues, it was assumed that the generalized coordinates and their first time-derivatives were measurable in each digital cycle with additive Gaussian noises of fixed standard deviations. For filtering the measurement noises, the measured noisy signal $x$ was tracked by the filtered one, $x_{f}$, according to the tracking rule given in (13)
$\dot{x}_{f}(t)=\beta_{f}\left[x(t)-x_{f}(t)\right]$ (13)
and the filtered values were used in the “Kinematic Block” and in the fixed point iteration, with an appropriately chosen positive filtering parameter $\beta_{f}$. The operation of this filter can be analyzed exactly as it was done in the case of (8).
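A possible discrete-time realization of the tracking rules (8) and (13), assuming simple Euler discretization over the cycle time, is sketched below; the function and parameter names, as well as the numerical values, are this sketch’s own placeholders.

```julia
# Euler-discretized form of the tracking/filtering rule  dxf/dt = β (x − xf)
filter_step(xf, x, β, dt) = xf + β * (x - xf) * dt

# Usage sketch: filtering a sequence of noisy measurements cycle by cycle
function filter_signal(samples, β, dt; xf0 = 0.0)
    xf  = xf0
    out = Float64[]
    for x in samples
        xf = filter_step(xf, x, β, dt)
        push!(out, xf)
    end
    return out
end

noisy    = 1.0 .+ 0.05 .* randn(200)        # hypothetical noisy samples
filtered = filter_signal(noisy, 50.0, 1e-3) # placeholder β and cycle time
```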
4. Simulation Results
Numerical simulations were performed by the use of the Julia language Version 1.5.1 (25 August 2020) working under the operating system Linux 5.10.53-1-MANJARO x86_64 21.1.0 Pahvo on a Dell Inspiron 15R laptop, using the program code that is available at the address given in the Section “Sample Availability”.
4.1. Comparative Analysis of the Performances of the Neural and the Exact Models
In this subsection, the figures also contain simulation results obtained when the exact dynamic model was in use, to support comparative analysis. In Figure 3, two 2-dimensional subspaces of the stored “soft dynamic model” are exemplified by giving the stored abstract rotational angles of the rotations in the augmented space. The non-adaptive control corresponds to the simple CTC controller. In Figure 4, the trajectory tracking properties can be compared.
Figure 5 reveals that the adaptive control evades the greater tracking errors.
According to Figure 6, the adaptation makes the phase trajectory of the controlled system “more canonical” and reduces the excitation of the coupled parasite dynamical subsystem. According to Figure 7, the adaptive approach produces more even control torque contributions in spite of the noisy signals. In the non-adaptive results, the effects of changing the cell that is competent to fire can be identified. The dynamic range of the adaptive control forces is considerably narrower than that of the non-adaptive control. Figure 8 reveals the operation of the adaptive controller: due to the adaptive deformation, the realized 2nd time-derivative values approximate well the desired ones. This effect can be observed even better in the absence of measurement noises in Figure 9.
To reveal the behavior of other cells, nominal motion of smaller amplitude and higher circular frequency was investigated in Figure 10, Figure 11, Figure 12 and Figure 13.
Finally, simulation results are displayed for nominal motion of small amplitude and small circular frequency in Figure 14, Figure 15 and Figure 16.
4.2. Estimation of the Computational Time of the Operations in the Control Cycles
The computational complexity of the suggested method and the duration of the necessary computations within the control cycles are interesting practical questions. The latter cannot be estimated simply by counting the necessary mathematical operations, because it depends on various external factors, such as the operating system that manages the controller and the properties of the program that realizes the control task. Normally, a multitasking controller is available that uses a single central processor unit whose capacities are shared by various cooperating processes. If the operating system is not specifically designed to manage real-time tasks, the actual duration of a given computation depends on the other duties of the computer. For instance, if some video player is in use during the calculation of the simulations, its effects can be well observed in the duration of each actually executed computation.
In a similar manner, if the programming language performs “garbage collection” in memory, such an activity may need considerably longer time than a “normal computation” within the control cycle. These longer sessions, though not very frequent, sometimes appear during the simulation. Julia is a typical garbage-collected language that produces such effects.
However, Julia offers a simple way to measure the time needed for the operations defined in a given command line; its built-in timing macros (e.g., “@elapsed”) return the execution time of an expression. A code segment measuring the duration of the computations of each control cycle was inserted into the simulation program; the measured values are displayed in Figure 17 and Figure 18.
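A hypothetical instrumentation sketch of such a per-cycle measurement (not the authors’ exact code; the dummy workload stands in for the computations of one control cycle) is:

```julia
# Measuring the duration of the computations of each control cycle with Julia's
# built-in @elapsed macro; the collected values can be plotted as in Figures 17 and 18.
n_cycles    = 10
cycle_times = Float64[]
for k in 1:n_cycles
    t = @elapsed begin
        # the model evaluation, adaptive deformation, and smoothing of one control
        # cycle would stand here; a dummy numerical load is used in this sketch:
        sum(sin.(rand(1_000)))
    end
    push!(cycle_times, t)
end
```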
It is evident that during one digital control step the non-adaptive computations needed less time than the adaptive ones, and both fitted well into the duration of the control cycle. This observation indicates that the suggested method can be implemented by the use of common hardware/software tools. Figure 19 and Figure 20 compare the trajectory tracking performances. In Figure 21 and Figure 22, the phase trajectories of the controlled system and of the coupled “parasite system” can be seen. These figures well reveal the effects of the necessary centripetal force that increases with the amplitude of the nominal motion.
Figure 23, Figure 24 and Figure 25 reveal the details of the adaptation mechanism that worked according to the expectations.
5. Discussion
In this paper, the idea of “rotational neural networks” introduced in [86] was applied to bring about a coarse grid-based soft dynamic model for control applications. The jumps in the model output at the cell boundaries are smoothed by a first-order tracking controller, and the effects of the model inaccuracies are compensated by a fixed point iteration-based adaptive controller. Both the adaptive and the modeling mechanisms are based on the same mathematical background that applies simple rotations in higher dimensional spaces. It was shown that the network is able to work in a “step-by-step” learning mode in which the information stored in a visited cell can be refined, depending on which parts of the coarse resolution cells are visited during the motion of the controlled system.
The suggested network has a very simple topological structure, and it is easy to simulate, too. It was shown that the modeling and control methods can be combined with simple noise filtering techniques. The operation of this controller seems to be simpler than that of the traditional fuzzy controller that has to compute an ample number of “generalized” maximum and minimum operations over wide ranges.
The most important advantage of this approach is that it breaks with the Lyapunov function-based design in which the phenomenologically well-interpreted error components, their integrals, and derivatives are “mixed” by the use of very special metric tensors, so that these metrics as such do not have a clear interpretation and physical meaning. In the novel approach, kinematically well-interpreted behavior can be prescribed for each error component.
The applicability of the suggested method was demonstrated in the case of a two degree of freedom underactuated paradigm in which the control of the motion of a wheel is perturbed by a directly not controllable, dynamically coupled parasite system within one of its spokes. The sum of the durations of the necessary computations within each control cycle was calculated for the hardware/software system that ran the simulation. It was concluded that, at the level of our prevailing possibilities, the suggested approach is realistic.
In the simulations, a special PID-like kinematic behavior was required that does not result in a monotonic decrease of the error components; however, this method can be combined with various fractional order derivative-based kinematic requirements (e.g., [92,93,94]) that can produce monotonic variation.
In the future, we wish to compare the operation of this idea with that of traditional fuzzy controllers based on approximate and “coarse” rules. For this purpose, an electromechanical testbed was built and used for testing the operation of the traditional PID and CTC controllers in [95]. The first successful preliminary results regarding the implementation of the fixed point iteration-based adaptive controller using abstract rotations on the same testbed were recently obtained by Árpád Varga. The planned future investigations are based on this instrument.
Author Contributions
Conceptualization, J.F.B. and I.J.R.; methodology, J.F.B. and I.J.R.; software, Á.V.; validation, Á.V.; formal analysis, J.K.T.; writing—original draft preparation, J.K.T. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by the Doctoral School of Applied Informatics and Applied Mathematics of Óbuda University, Budapest, Hungary.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
The study did not report any data.
Acknowledgments
The authors wish to express their thanks to Krzysztof Kozłowski, with whom they have been in active working connection and cooperation since the 1990s in bilateral cooperation projects as well as in relation to the RoMoCo and MMAR conferences. Professor Kozłowski was an external member of the Antal Bejczy Center for Intelligent Robotics of Óbuda University, Budapest, Hungary.
Conflicts of Interest
The authors declare no conflict of interest.
Sample Availability
Sample program “Wheel_Applied_Sciences_Noisy.jl” is available online.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Figures and Table
Figure 1. The structure of the fixed point iteration-based adaptive controller for a second-order dynamical system (after [21]).
Figure 3. Examples of the abstract rotational angles stored in various grid points.
Figure 4. Trajectory tracking for the maximal amplitude and circular frequency of the nominal trajectory for which the adaptive controller was able to use only the originally learned data. (a): Non-adaptive, (b): adaptive control.
Figure 5. Trajectory tracking error for the maximal amplitude and circular frequency of the nominal trajectory for which the adaptive controller was able to use only the originally learned data. (a): Non-adaptive, (b): adaptive control.
Figure 6. The phase trajectories for the maximal amplitude and circular frequency of the nominal trajectory for which the adaptive controller was able to use only the originally learned data (note that $\delta q_{1}\approx 0.628\,\mathrm{rad}$ and $\delta\dot{q}_{1}=2.0\,\mathrm{rad\cdot s^{-1}}$). (a): Non-adaptive phase trajectory of the controlled variable $q_{1}$, (b): adaptive phase trajectory of the controlled variable $q_{1}$. (c): Non-adaptive phase trajectory of the coupled mass point. (d): Adaptive phase trajectory of the coupled mass point.
Figure 7. The control torque Q1 for the maximal amplitude and circular frequency of the nominal trajectory for which the adaptive controller was able to use only the originally learned data. (a): Non-adaptive, (b): adaptive control.
Figure 8. The operation of the adaptive controller. (a): The desired, deformed, and realized 2nd time-derivatives. (b): The angle of the adaptive abstract rotation.
Figure 9. The operation of the adaptive controller in the absence of measurement noises. (a): The desired, deformed, and realized 2nd time-derivatives. (b): The angle of the adaptive abstract rotation.
Figure 10. Trajectory tracking for the small amplitude and high circular frequency of the nominal trajectory for which the adaptive controller was able to use only the originally learned data. (a): Non-adaptive, (b): adaptive control.
Figure 11. Trajectory tracking error for the small amplitude and high circular frequency of the nominal trajectory for which the adaptive controller was able to use only the originally learned data. (a): Non-adaptive, (b): adaptive control.
Figure 12. The control torque Q1 for the small amplitude and high circular frequency of the nominal trajectory for which the adaptive controller was able to use only the originally learned data. (a): Non-adaptive, (b): adaptive control.
Figure 13. The operation of the adaptive controller for smaller amplitude and high circular frequency of the nominal motion. (a): The desired, deformed, and realized 2nd time-derivatives. (b): The angle of the adaptive abstract rotation.
Figure 14. Trajectory tracking for the small amplitude and circular frequency of the nominal trajectory. (a): Non-adaptive, (b): adaptive control.
Figure 15. Trajectory tracking error for the small amplitude and circular frequency of the nominal trajectory. (a): Non-adaptive, (b): adaptive control.
Figure 16. The control torque Q1 for the small amplitude and circular frequency of the nominal trajectory. (a): Non-adaptive, (b): adaptive control.
Figure 17. The computational time of the control cycles. (a): Adaptive. (b): Non-adaptive.
Figure 18. The computational time of the control cycles in ms units (zoomed in excerpts). (a): Adaptive. (b): Non-adaptive.
Figure 22. The phase trajectory of the coupled mass-point. (a): Adaptive. (b): Non-adaptive.
Figure 23. The adaptively deformed 2nd time-derivatives. (a): Adaptive. (b): Non-adaptive.
Table 1. The dynamic parameters of the controlled system.
| Parameter | Measurement Unit | Numerical Value |
|---|---|---|
| Inertia momentum of the wheel Θ | | |
| Inertia of the mass-point m | | |
| Spring constant k | | |
| Spoke length r | | |
| Gravitational acceleration g | | |
| Damping constant along the spoke d | | |
© 2021 by the authors.
Abstract
Model-based controllers generally suffer from the lack of precise dynamic models. The need for reliable analytical models can be evaded by soft modeling techniques, while the consequences of modeling imprecisions are tackled by either robust or adaptive techniques. In robotics, the prevailing adaptive techniques are based on Lyapunov’s “direct method” that normally uses special error metrics and adaptation rules containing fragments of the Lyapunov function. The soft models and controllers need massive parallelism and suffer from the curse of dimensionality. A different adaptive approach based on Banach’s fixed point theorem and using special abstract rotations was recently suggested. Similar rotations were suggested to develop particular neural network-like soft models, too. In the present paper, by integrating these approaches, a uniform adaptive control and modeling methodology is suggested, with special emphasis on the effects of the measurement noises. Its applicability is investigated via simulations for a two degree of freedom mechanical system in which one of the generalized coordinates is under control, while the other one belongs to a coupled parasite dynamical system. The results are promising for allowing the development of relatively coarse soft models and a simple adaptive rule that can be implemented in embedded systems.