1. Introduction
Quadratic minimization (QM) is a widely studied branch of optimization theory, with applications in various fields such as image processing [1,2], communication engineering [3], robot kinematics [4], and energy system design [5]. While numerical algorithms can efficiently solve static QM problems, they are unable to handle large-scale time-varying quadratic minimization (TVQM) problems with real-time requirements due to their serial processing mechanism. Thus, it is crucial to develop a more effective solution framework to address TVQM problems.
Previous studies have shown that traditional gradient-based neural network (GNN) models can effectively solve static or time-invariant matrix problems by designing a scalar-valued error function and driving it toward zero. For example, Nikolova and Chan proposed a static QM model for image restoration [1] and used the gradient linearization iterative method to solve it. However, this method cannot trace the exact solution of a time-varying quadratic minimization (TVQM) problem online, and is therefore inadequate for handling TVQM problems.
To address the limitations of traditional algorithms in handling time-varying problems, Zhang et al. developed the original zeroing neural network (OZNN) model [6]. The OZNN model uses derivative information to predict the evolution direction of the problem to be solved, resulting in high accuracy [7,8]. As a result, the OZNN model has found extensive use in automatic control and signal processing [9,10]. However, the OZNN model is sensitive to noise interference, which reduces its solution accuracy when solving time-varying problems [11]. Moreover, the OZNN model requires manual setting and adjustment of the scale factor, leading to time-consuming iterative tuning in practical engineering applications [12]. To overcome these challenges, Yan et al. proposed a noise-tolerant zeroing neural network (NTZNN) model [13] that can effectively resist various noise disturbances. Additionally, Wang et al. investigated the robustness of a bounded ZNN model [14].
The NTZNN model has been further developed to improve its convergence performance. For instance, Xiao et al. introduced the limited-time robust neural network (LTRNN) model [15], which not only has noise-resistant capabilities but can also converge in finite time, unlike its predecessors. Li et al. proposed the finite-time convergent and noise-rejection recurrent neural network (FTNRZNN) model [16], which has similar performance to the LTRNN model and can solve other time-varying nonlinear equations. In addition, a design framework with finite-time and predefined-time convergence for the zeroing neural network was proposed by Xiao et al. [17]. Liao et al. proposed the prescribed-time convergent and noise-tolerant Z-type neural dynamics (PTCNTZND) model [18], which not only converges in finite time but also accelerates the process of reaching the optimal solution and has anti-noise capabilities. Furthermore, a parameter-changing and complex-valued zeroing neural network (PC-CVZNN) model was introduced by Xiao et al. [19], which can solve time-varying complex linear matrix equations in finite time and achieves superior performance due to the integration of a new parameter-changing function. However, these models share a common limitation: they cannot adaptively adjust their convergence speed. To address this, Jia et al. proposed an adaptive fuzzy control strategy for the zeroing neural network (AFT-ZNN) model [20], which utilizes an adaptive fuzzy control value to adjust its convergence speed based on the calculated error, resulting in superior performance and faster convergence.
Recently, some researchers have also proposed robust and noise-tolerant ZNN models with applications to dynamic complex matrix equation solving [21,22] and mobile manipulator path tracking [23,24]. Additionally, improved recurrent neural networks have been proposed for text classification and dynamic Sylvester equation solving [25].
Inspired by the residual learning framework [26], we combine its advantages with those of the above zeroing neural network models and propose an adaptive zeroing neural network with non-convex activation (AZNNNA) model design framework for solving the TVQM problem. Related models have been applied to robotics and acoustic source localization, as demonstrated by Jin et al. [27], and to resisting linear noise while solving dynamic Sylvester equation problems, as shown by Han et al. [28]. Overall, the main advantage of the AZNNNA model over existing zeroing neural network models is its ability to adaptively adjust its convergence speed and improve its representation capability for solving complex and nonlinear problems. The main contributions of this paper can be summarized as follows:
An adaptive zeroing neural network with non-convex activation (AZNNNA) model design framework is proposed and investigated for the first time. Compared with existing zeroing neural network models, our proposed AZNNNA model performs well in terms of convergence and robustness.
Theoretical analyses and conclusions are made from the perspective of Lyapunov stability theory, and the global convergence and robustness of the proposed AZNNNA model for solving the TVQM problem are theoretically verified under noise disturbance.
We perform relevant quantitative digital experiments to demonstrate the performance of the AZNNNA model in solving the TVQM problem under different noise interferences.
The rest of this paper is structured into five sections. Section 2 presents the problem description and method. Section 3 outlines the design framework of the adaptive coefficients, non-convex activation function, and evolution scheme of the proposed Adaptive Zeroing Neural Network with Non-convex Activation (AZNNNA) model. Section 4 presents theoretical analyses of the AZNNNA model from the perspective of Lyapunov stability theory, verifying its global convergence and robustness against noise disturbance for solving the TVQM problem. Section 5 presents the relevant quantitative simulation experiments and result survey. Finally, Section 6 summarizes the conclusions.
2. Problem Description and Related Solution Formula
Generally, the time-varying quadratic minimization (TVQM) problem can be written as

min_{x(t) ∈ R^n} f(x(t), t) = (1/2) x^T(t) A(t) x(t) + b^T(t) x(t), (1)

where the given positive-definite matrix A(t) ∈ R^{n×n} and coefficient vector b(t) ∈ R^n are smooth and time-varying, and for any time t ≥ 0, matrix A(t) is positive definite. Moreover, x(t) ∈ R^n is an unknown time-varying vector to be solved, and the superscript ^T denotes the transpose of a vector or matrix. For the convenience of statement, a function is defined as f(x(t), t) = (1/2) x^T(t) A(t) x(t) + b^T(t) x(t). Thus, the gradient of function f(x(t), t) can be described in the following form:

∇f(x(t), t) = ∂f(x(t), t)/∂x(t) = A(t) x(t) + b(t). (2)
It is worth noting that by zeroing the above gradient at each time instant t, the theoretical solution of TVQM problem (1) can be obtained in real time. Therefore, the following equation is equivalent to the TVQM problem (1):

A(t) x(t) + b(t) = 0. (3)
Specifically, it can be seen from the above that the theoretical time-varying solution x*(t) to (1), as the minimum point at any time instant t, satisfies ∇f(x*(t), t) = 0. The following error function is arranged to monitor and revise the development direction of the solving system:

e(t) = A(t) x(t) + b(t). (4)
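As a quick numerical illustration of error function (4), the following sketch freezes time at one instant and checks that the point zeroing e(t) is exactly the minimizer of (1). The 2×2 coefficients A(t) and b(t) here are hypothetical choices made only for this example:

```python
import math

# Hypothetical smooth, positive-definite A(t) and coefficient vector b(t).
def A(t):
    return [[3.0 + math.sin(t), 1.0],
            [1.0, 3.0 + math.cos(t)]]

def b(t):
    return [math.cos(t), math.sin(t)]

def error(x, t):
    # Error function e(t) = A(t) x + b(t), i.e. the gradient (2) of objective (1).
    M, v = A(t), b(t)
    return [M[0][0] * x[0] + M[0][1] * x[1] + v[0],
            M[1][0] * x[0] + M[1][1] * x[1] + v[1]]

def solve2x2(M, rhs):
    # Cramer's rule for the 2x2 linear system M x = rhs.
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(rhs[0] * M[1][1] - M[0][1] * rhs[1]) / det,
            (M[0][0] * rhs[1] - rhs[0] * M[1][0]) / det]

t = 1.2
x_star = solve2x2(A(t), [-b(t)[0], -b(t)[1]])   # theoretical solution at time t
e = error(x_star, t)
assert max(abs(e[0]), abs(e[1])) < 1e-12         # gradient vanishes at the minimizer
```

Any other point yields a nonzero e(t), which is what the neural dynamics below exploit to steer x(t) toward x*(t).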
On the basis of the original zeroing neural network (OZNN) construction framework, the evolution direction of the error function (4) should satisfy ė(t) = de(t)/dt = −λΦ(e(t)), where λ > 0 represents the scale factor and Φ(·): R^n → R^n denotes the activation function. Therefore, the OZNN model for solving the TVQM problem (1) can be designed as

A(t) ẋ(t) = −Ȧ(t) x(t) − ḃ(t) − λΦ(A(t) x(t) + b(t)), (5)

where Ȧ(t), ḃ(t), and ẋ(t) represent the time derivatives of A(t), b(t), and x(t), respectively.

3. AZNNNA Model Construction
In this section, aiming at the shortcomings of the existing zeroing neural network models, an adaptive zeroing neural network with non-convex activation (AZNNNA) model is formulated as follows:

ė(t) = −λ(t) Φ(e(t)) − k ∫_0^t Φ(e(τ)) dτ, (6)

where the time-varying parameter λ(t) > 0 represents the adaptive scale coefficient, and k > 0 is a scaling coefficient adjusted to control the influence of the integral item. The other item is the integral ∫_0^t Φ(e(τ)) dτ, which penalizes the integration of the error towards zero. Next, the following method can be used to construct the adaptive coefficient:

λ(t) = ||e(t)||₂^γ + a, (7)

where the parameters γ > 0 and a > 0, and ||·||₂ denotes the 2-norm of a vector. Next, we define the non-convex activation function Φ(·) through a projection operator P_S(·), where G and S denote two sets. Therefore, P_S(·) can be defined as a projection from set G to set S. The following two examples can be utilized to explain the construction method of the non-convex activation function:
Bounded situation with a saturation activation function:

Φ(u) = P_S(u), S = {u ∈ R^n : |u_i| ≤ w, i = 1, …, n}, where the parameter w > 0.

Non-convex situation with a saturation activation function:

Φ(u) = P_S(u), where each component of S satisfies u_i ≤ d₁ or d₂ ≤ u_i ≤ d₃ or u_i ≥ d₄, and the parameters d₁, d₂, d₃, and d₄ satisfy such relationships: d₁ < d₂ ≤ 0 and 0 ≤ d₃ < d₄.
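A minimal scalar sketch of the two saturation choices can make the distinction concrete. The bound w and the interval endpoints below are illustrative placeholders rather than the paper's settings; each function returns the nearest point of the corresponding set:

```python
def saturate(u, w=1.0):
    # Bounded (convex) case: project u onto the interval [-w, w].
    return max(-w, min(w, u))

def nonconvex_saturate(u, w1=0.5, w2=1.0):
    # Non-convex case (illustrative set): project u onto [-w2, -w1] U [w1, w2],
    # i.e. clamp the magnitude into [w1, w2] while keeping the sign.
    s = 1.0 if u >= 0 else -1.0
    return s * max(w1, min(w2, abs(u)))

assert saturate(2.3) == 1.0 and saturate(-0.4) == -0.4
assert nonconvex_saturate(0.1) == 0.5    # pushed out to the nearest interval
assert nonconvex_saturate(-3.0) == -1.0  # magnitude capped at w2
```

The target set of the second function is a union of two disjoint intervals, hence non-convex, yet the nearest-point projection is still cheap to evaluate componentwise.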
Therefore, it can be concluded that the proposed AZNNNA model for solving the TVQM problem (1) can be written as follows:

A(t) ẋ(t) = −Ȧ(t) x(t) − ḃ(t) − λ(t) Φ(e(t)) − k ∫_0^t Φ(e(τ)) dτ, with e(t) = A(t) x(t) + b(t). (8)
In addition, the AZNNNA model (8) is inevitably perturbed by various noises in the real-time solution system. Therefore, the AZNNNA model (8) used to solve the TVQM problem (1) under noise is described as

A(t) ẋ(t) = −Ȧ(t) x(t) − ḃ(t) − λ(t) Φ(e(t)) − k ∫_0^t Φ(e(τ)) dτ + n(t), (9)

where the noise interference item n(t) ∈ R^n. The characteristic comparison between the existing recurrent neural network models and the proposed AZNNNA model in solving the TVQM problem (1) is shown in Table 1.
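To see why the integral item matters under noise, the following scalar toy simulation (an illustration with made-up parameters, a linear activation, and a fixed coefficient standing in for λ(t), not the full model) compares the plain zeroing dynamics with the integral-enhanced dynamics under a constant disturbance:

```python
def simulate(use_integral, lam=5.0, k=5.0, delta=0.5, dt=1e-3, steps=10000):
    # Plain dynamics:    de/dt = -lam*e + delta
    # Integral-enhanced: de/dt = -lam*e - k*integral(e) + delta
    e, z = 1.0, 0.0                       # error and its running integral
    for _ in range(steps):                # explicit Euler integration
        de = -lam * e + delta - (k * z if use_integral else 0.0)
        z += e * dt
        e += de * dt
    return e

e_plain = simulate(use_integral=False)
e_integral = simulate(use_integral=True)
assert abs(e_plain - 0.5 / 5.0) < 1e-3   # plain dynamics stall at delta/lam
assert abs(e_integral) < 1e-3            # integral term removes the offset
```

The plain dynamics settle at the nonzero offset delta/lam, while the accumulated integral term cancels the constant disturbance exactly, mirroring the noise-tolerance claim for model (9).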
Considering that convergence is generally a key criterion of the AZNNNA model (8), the following theorem and its corresponding proof are presented to analyze the global convergence of the AZNNNA model (8).
Theorem 1. Given any solvable TVQM problem (1), the calculated solution vector x(t) of the proposed AZNNNA model (8) globally converges from any random initial state x(0) to the theoretical solution x*(t) of the TVQM problem (1).
Proof. The ith subsystem of the AZNNNA model evolution formula can be depicted as

ė_i(t) = −λ(t) φ(e_i(t)) − k ∫_0^t φ(e_i(τ)) dτ, (10)

where φ(·) denotes the element-wise form of the activation function Φ(·).
The following Lyapunov candidate function is given for investigating the global convergence of the system (10) [33]:

v_i(t) = e_i²(t)/2 + k (∫_0^t φ(e_i(τ)) dτ)²/2. (11)

Obviously, when e_i(t) ≠ 0 or ∫_0^t φ(e_i(τ)) dτ ≠ 0, the Lyapunov candidate function v_i(t) > 0; if and only if e_i(t) = 0 and ∫_0^t φ(e_i(τ)) dτ = 0, v_i(t) = 0. Thus, the Lyapunov candidate function is positive definite. Then, we take the time derivative of the function (11) as follows:

v̇_i(t) = e_i(t) ė_i(t) + k φ(e_i(t)) ∫_0^t φ(e_i(τ)) dτ
       = e_i(t) (−λ(t) φ(e_i(t)) − k ∫_0^t φ(e_i(τ)) dτ) + k φ(e_i(t)) ∫_0^t φ(e_i(τ)) dτ
       = −λ(t) e_i(t) φ(e_i(t)) ≤ 0.

Obviously, since e_i(t) φ(e_i(t)) > 0 for e_i(t) ≠ 0, the time derivative v̇_i(t) is negative definite with respect to e_i(t). Therefore, on the basis of the Lyapunov theory, the function v_i(t) and the error function e(t) globally converge to zero as time goes on. That is to say, the proposed AZNNNA model globally converges to the theoretical solution to the TVQM problem. The proof is complete. □
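The Lyapunov argument can be sanity-checked numerically for one subsystem. This is a scalar sketch with tanh as a monotonically increasing odd activation and an assumed adaptive-coefficient form |e|^γ + a; all constants are illustrative, and the tolerance absorbs the explicit-Euler discretization error:

```python
import math

def lam(e, gamma=1.0, a=2.0):
    # Assumed adaptive coefficient form, for illustration only.
    return abs(e) ** gamma + a

e, z, k, dt = 1.5, 0.0, 4.0, 1e-4
v_prev = 0.5 * e * e + 0.5 * k * z * z   # Lyapunov candidate v = e^2/2 + k z^2/2
for _ in range(100000):                  # 10 simulated seconds
    phi = math.tanh(e)
    de = -lam(e) * phi - k * z           # subsystem dynamics with phi = tanh
    z += phi * dt
    e += de * dt
    v = 0.5 * e * e + 0.5 * k * z * z
    assert v <= v_prev + 1e-6            # v never increases (up to Euler error)
    v_prev = v
assert abs(e) < 1e-2                     # the error is driven toward zero
```

Along the simulated trajectory v decays monotonically and the error approaches zero, matching the analytical claim v̇ = −λ(t) e φ(e) ≤ 0.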
4. Robustness of AZNNNA Model under Different Noise Situations
In this section, we propose three theorems to prove the robustness of the AZNNNA model (8) under constant noise, linear noise, and bounded random noise interferences, respectively.
Theorem 2. The calculated solution vector x(t) of the AZNNNA model (9) used to solve the TVQM problem (1) perturbed by the constant noise n(t) = δ ∈ R^n will globally converge to the theoretical solution of the problem (1).
Proof. For further analysis, according to the Laplace transform method [32], with a linear activation function, the ith sub-element of the AZNNNA model (9) perturbed by the constant noise n_i(t) = δ_i is written as

s e_i(s) − e_i(0) = −λ e_i(s) − (k/s) e_i(s) + δ_i/s, (12)

where e_i(s) denotes the Laplace transform of e_i(t). Reformulating Equation (12) can be performed as:

e_i(s) = (s e_i(0) + δ_i) / (s² + λ s + k). (13)

According to the construction method of the adaptive coefficient λ(t) in (7), we can draw a conclusion: λ(t) ≥ a > 0. Therefore, considering the worst case, Equation (13) can be written as

e_i(s) = (s e_i(0) + δ_i) / (s² + a s + k). (14)

In the end, it can be seen that the poles of the transfer function are s₁ = (−a + √(a² − 4k))/2 and s₂ = (−a − √(a² − 4k))/2. Due to the parameters a > 0 and k > 0, we can conclude that the two poles are located in the left half-plane, which shows the stability of the solution system. Therefore, applying the final value theorem to Equation (14), we can obtain:

lim_{t→∞} e_i(t) = lim_{s→0} s e_i(s) = lim_{s→0} s (s e_i(0) + δ_i) / (s² + a s + k) = 0. (15)
In summary, the residual error ||e(t)||₂ of the AZNNNA model (8) used to solve the TVQM problem (1), no matter how large the constant noise, will globally converge to zero. □

Next, we provide Theorem 3 to study and prove the robustness of the proposed AZNNNA model (8) under linear noise interference.
Theorem 3. The residual error of the AZNNNA model (8) perturbed by linear noise n(t) = δt for solving the TVQM problem (1) will eventually converge to ||δ||₂/k, where k is the scale coefficient in (8). Notably, the steady-state residual error approaches zero as the parameter k → ∞.
Proof. The Laplace transform of the ith sub-element of the AZNNNA model (8) under linear noise can be written as:

s e_i(s) − e_i(0) = −λ e_i(s) − (k/s) e_i(s) + δ_i/s², (16)

where the term δ_i/s² is obtained by the Laplace transform of the linear noise n_i(t) = δ_i t. Similar to what is mentioned in Theorem 2, Equation (16) can be rearranged as

e_i(s) = (s e_i(0) + δ_i/s) / (s² + a s + k).

Next, on the basis of the final value theorem, we can obtain the following equation:

lim_{t→∞} e_i(t) = lim_{s→0} s e_i(s) = lim_{s→0} (s² e_i(0) + δ_i) / (s² + a s + k) = δ_i/k.

It can be seen from the above that when t → ∞, the error of the ith subsystem of the AZNNNA model (8) converges to a fixed value δ_i/k. All in all, the error of the AZNNNA model (8) perturbed by linear noise will eventually converge to ||δ||₂/k. Therefore, it can be concluded that

lim_{t→∞} ||e(t)||₂ = ||δ||₂/k, which approaches zero as k → ∞.
The proof is thus completed. □
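The steady-state value δ/k can be checked with a scalar simulation. As before, this is a sketch under simplifying assumptions (linear activation, a fixed coefficient a standing in for λ(t), made-up constants):

```python
def steady_error(delta=2.0, a=4.0, k=5.0, dt=1e-3, steps=20000):
    # de/dt = -a*e - k*integral(e) + delta*t   (linear noise n(t) = delta*t)
    e, z, t = 0.0, 0.0, 0.0
    for _ in range(steps):                # explicit Euler over 20 seconds
        de = -a * e - k * z + delta * t
        z += e * dt
        e += de * dt
        t += dt
    return e

assert abs(steady_error() - 2.0 / 5.0) < 1e-2         # settles near delta/k = 0.4
assert abs(steady_error(k=10.0) - 2.0 / 10.0) < 1e-2  # larger k shrinks the error
```

Doubling k halves the steady-state residual, consistent with the ||δ||₂/k bound of Theorem 3.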
In the above content, we described the robustness of the proposed AZNNNA model (8) under constant noise and linear noise. However, the influence of nonlinear dynamic noise on the solution system cannot be ignored. It is worth noting that one commonly encountered nonlinear dynamic noise can be regarded as a special kind of fast-changing random noise. The AZNNNA model (8) can achieve anti-interference ability against random noise and avoid the limitation of convex activation functions, while avoiding redundant preprocessing procedures. In order to analyze and prove the robustness of the AZNNNA model (8) under bounded random noise interference, we propose the following theorem.
Theorem 4. Suppose the upper and lower bounds of the bounded random noise n(t) are δ⁺ and δ⁻, respectively; then, the steady-state error of the AZNNNA model (8) perturbed by the bounded random noise n(t) remains bounded.
Proof. The ith subsystem of the AZNNNA model (8) perturbed by bounded random noise n_i(t) can be expressed as

ė_i(t) = −λ(t) φ(e_i(t)) − k ∫_0^t φ(e_i(τ)) dτ + n_i(t). (17)
As known from Theorem 2, we have λ(t) ≥ a. Therefore, with a linear activation function, differentiating Equation (17) yields the following second-order form:

ë_i(t) + a ė_i(t) + k e_i(t) = ṅ_i(t), (18)

where ṅ_i(t) denotes the time derivative of the noise n_i(t). The roots of the characteristic equation of the second-order differential Equation (18) can be expressed as r₁ = (−a + √(a² − 4k))/2 and r₂ = (−a − √(a² − 4k))/2. Next, we can divide the proof process into three situations according to the values of a and k.

(1) The first case is a² > 4k: combining Equation (18) with the second-order differential equation solving framework, the general solution can be expressed in terms of the two distinct real roots r₁ and r₂.
Next, according to the triangular inequality in [34], we can bound the magnitude of e_i(t), and therefore draw the following conclusion:

(19)

where n represents the dimension number of the bounded random noise n(t).

(2) The second case is a² < 4k: this situation is similar to the first one, so the corresponding general solution can be directly given, where the parameter ω = √(4k − a²)/2. According to Theorem 1 proposed in [34], we can formulate the following inequality:

(20)
where the constants depend on the noise bounds. Therefore, combining inequality (20) with the triangular inequality and simplifying the resulting expression, we can obtain the following formula:

(21)
(3) The third case is a² = 4k: in this case, the characteristic equation has the repeated real root r = −a/2, and Formula (18) can be converted into the corresponding form:

(22)

This situation is similar to the first one, so we have:

(23)
Combining the above three situations, the error of the AZNNNA model (8) eventually converges to a bounded range under the interference of the bounded random noise n(t). So far, the theoretical proof is complete. □
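The case analysis above depends only on the sign of a² − 4k in the roots r = (−a ± √(a² − 4k))/2 of the characteristic polynomial s² + a s + k; with a, k > 0 both roots always have negative real part. A quick check (with arbitrary parameter values) confirms the three regimes:

```python
import cmath

def char_roots(a, k):
    # Roots of s^2 + a*s + k = 0 via the quadratic formula.
    sq = cmath.sqrt(a * a - 4.0 * k)
    return (-a + sq) / 2.0, (-a - sq) / 2.0

r1, r2 = char_roots(5.0, 4.0)  # a^2 > 4k: two distinct real roots
assert r1.imag == 0 and r2.imag == 0 and r1.real < 0 and r2.real < 0

r1, r2 = char_roots(2.0, 4.0)  # a^2 < 4k: complex-conjugate pair
assert r1.imag != 0 and r1.real < 0 and r2.real < 0

r1, r2 = char_roots(4.0, 4.0)  # a^2 = 4k: repeated real root -a/2
assert r1 == r2 == -2.0
```

In every regime the homogeneous response decays, so the steady-state error is dominated by the bounded forcing term, which is the substance of Theorem 4.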
5. Experiments and Results
In this section, we first summarize and visualize the simulation experiments of the AZNNNA model (8) proposed to solve the TVQM problem (1). Second, we compare the performance of the AZNNNA model (8) with state-of-the-art neural network models, namely the GNN model [29], the non-convex and bound constraint zeroing neural network (NCZNN) model [30], and the modified zeroing neural network (MZNN) model [32]. All simulations are implemented using MATLAB R2016a on a computer equipped with an Intel Core i5-12400F 2.50 GHz CPU and 16 GB RAM.
5.1. Time-Varying Quadratic Minimization Example
The following time-varying quadratic minimization example, with the bounded and non-convex saturation activation functions, is applied in this simulation part.
Next, the time-varying matrix A(t) and vector b(t) in the TVQM problem (1) are constructed as follows:
The adaptive coefficient and scale coefficient k of the proposed AZNNNA model are set accordingly. The corresponding quantitative simulation results of the example are arranged in Figure 1, Figure 2, Figure 3, Figure 4 and Figure 5. Note that the scale parameters of the MZNN model [32], the GNN model [29], and the NCZNN model [30] are all set to 5. Additionally, the parameter of the MZNN model [32] is set as 5.
5.2. Results in Noise-Free Case
The visualization results of the AZNNNA model for solving the TVQM problem example without noise are presented in Figure 1 and Figure 2. Figure 1a displays the theoretical solution x₁*(t) and the calculated solution x₁(t), and Figure 1b shows the theoretical solution x₂*(t) and the calculated solution x₂(t). It is worth noting that the theoretical and calculated solutions are represented by the red and blue lines, respectively. As seen in Figure 1a,b, starting from randomly generated initial values, the computed solution trajectory of the AZNNNA model (8) converges to the theoretical solution within 3 s.
As shown in Figure 2a, starting from stochastically generated initial values, the residual error ||e(t)||₂ of the proposed AZNNNA model (8) rapidly approaches zero. This shows that the solving system promptly converges to the theoretical solution. Compared with the other three models, the AZNNNA model (8) has the fastest convergence speed. The residual error logarithm is shown in Figure 2b, which depicts the model solution precision in detail. As demonstrated in Figure 2b, the AZNNNA model (8) achieves higher precision in solving the noise-free TVQM problem than the GNN model and the NCZNN model. Although the AZNNNA model (8) and the MZNN model reach the same solution accuracy when solving the TVQM problem (Section 5.1) in the noise-free case, the convergence speed of the proposed AZNNNA model (8) is faster. Therefore, the proposed AZNNNA model not only converges faster than the other three models but also has higher accuracy.
5.3. Results in Different Noise Cases
Figure 3, Figure 4 and Figure 5 present a comparison of the visualization results of the GNN, MZNN, NCZNN, and proposed AZNNNA (8) models, which were used to solve the TVQM problem under each type of noise. The proposed AZNNNA model employs the same adaptive coefficient and scale coefficient k as in the noise-free case. In this section, we analyze and discuss the results obtained for the following three different types of noise.
-
In the case of constant noise: as shown in Figure 3, starting from a randomly generated initial value, even if the AZNNNA model (8) is perturbed by the constant noise, its system residual error can still accurately converge to zero. Moreover, although the accuracy of the MZNN model under constant noise interference is the same as that of the AZNNNA model (8), the convergence speed of the AZNNNA model (8) is better than that of the MZNN model.
-
In the case of linear noise: the quantitative experimental simulation results of the AZNNNA model (8) for solving the TVQM problem example (Section 5.1) under linear time-varying noise interference are shown in Figure 4. Under the interference of linear noise, the AZNNNA model (8) proposed in this paper has the highest solution accuracy among the four models and the fastest convergence speed.
-
In the case of bounded random noise: the quantitative experimental simulation results of the AZNNNA model (8) for solving the TVQM problem example (Section 5.1) under bounded random noise interference are shown in Figure 5. It can be seen from Figure 5 that the AZNNNA model (8) proposed in this paper has the highest solution accuracy among the four models and the fastest convergence speed.
Therefore, we can conclude that the proposed AZNNNA model (8) has higher robustness and stability than the other ZNN models under different noise conditions. A further performance comparison between the existing ZNN models and the proposed AZNNNA model for solving the TVQM problem (1) under different noise conditions is shown in Table 2.
6. Conclusions
The AZNNNA model proposed in this paper differs from previous zeroing neural network models in that it is an adaptive zeroing neural network with a non-convex activation function. By adopting a non-convex activation function and an adaptive coefficient, the AZNNNA model overcomes the limitation of convex activation functions and can adaptively change its convergence speed based on the error. This not only improves the robustness of the model but also results in faster convergence and higher accuracy than previously proposed zeroing neural network models. The paper proposes four theorems and provides corresponding proofs based on Lyapunov stability theory [35,36] to analyze the global convergence and robustness of the AZNNNA model under different noise interferences. Numerical experiments are also conducted to demonstrate the advantages of the proposed AZNNNA model. In the future, we plan to develop more adaptive zeroing neural network models and explore more combinations of non-convex activation functions and adaptive coefficients to further improve the robustness and convergence speed of the models. Meanwhile, we will further study the global convergence and robustness of zeroing neural network models under different noise disturbances to provide stronger theoretical support for practical application scenarios. We are also exploring the application of such adaptive zeroing neural networks with non-convex activation to prediction, description, classification and regression, the detection and auxiliary diagnosis of lesions in multimodal medical imaging, image processing, text recognition, and other fields.
Conceptualization, H.Y. and W.P.; methodology, H.Y. and W.P.; software, H.Y. and W.P.; validation; X.X., S.F. and Y.Z.; formal analysis, H.Y. and W.P.; investigation, X.X. and Y.Z.; resources, H.Y. and W.P.; writing—original draft preparation, H.Y. and W.P.; writing—review and editing, X.X., S.F., H.Z. and Y.Z.; supervision, X.X. and Y.Z.; project administration, X.X. and Y.Z.; funding acquisition, X.X. and Y.Z. All authors have read and agreed to the published version of the manuscript.
Not applicable.
The authors declare no conflict of interest.
Figure 1. The visualization results of the AZNNNA model for solving the TVQM problem example under the noise-free case. (a) The theoretical solution x₁*(t) (red line) and calculated solution x₁(t) (blue line). (b) The theoretical solution x₂*(t) (red line) and calculated solution x₂(t) (blue line).
Figure 2. The simulation comparison results of the GNN model, MZNN model, NCZNN model and the proposed AZNNNA model (8) for solving the TVQM problem (1) under the noise-free case. (a) Residual error ||e(t)||₂. (b) The logarithmic graph of the residual error ||e(t)||₂.
Figure 3. The simulation comparison results of the GNN model, MZNN model, NCZNN model and the proposed AZNNNA model (8) for solving the TVQM problem (1) under the constant noise case. (a) Residual error ||e(t)||₂. (b) The logarithmic graph of the residual error ||e(t)||₂.
Figure 4. The simulation comparison results of the GNN model, MZNN model, NCZNN model and the proposed AZNNNA model (8) for solving the TVQM problem (1) under the linear time-varying noise case. (a) Residual error ||e(t)||₂. (b) The logarithmic graph of the residual error ||e(t)||₂.
Figure 5. The simulation comparison results of the GNN model, MZNN model, NCZNN model and the proposed AZNNNA model (8) for solving the TVQM problem (1) under the bounded random noise case. (a) Residual error ||e(t)||₂. (b) The logarithmic graph of the residual error ||e(t)||₂.
Table 1. Comparison of various neural network models for the TVQM problem (1).

| Model | Non-Convex Activation | Adaptive Coefficient | Anti-Perturbation | Integral Information Involved |
|---|---|---|---|---|
| OZNN model in [6] | No | No | No | No |
| GNN model in [29] | No | No | No | No |
| NCZNN model in [30] | No | No | Yes | No |
| PTCZNN model in [31] | No | No | No | No |
| MZNN model in [32] | No | No | Yes | Yes |
| The proposed AZNNNA model (8) | Yes | Yes | Yes | Yes |
Table 2. Comparison on the robustness of different neural network models for solving the TVQM problem (1) under different noise interferences.

| Model | Noise-Free | Constant Noise | Random Noise | Linear Noise |
|---|---|---|---|---|
| OZNN model in [6] | Negligible | Bounded | Bounded | |
| GNN model in [29] | Bounded | Bounded | Bounded | |
| NCZNN model in [30] | Negligible | Bounded | Bounded | |
| PTCZNN model in [31] | Negligible | Bounded | Bounded | |
| MZNN model in [32] | Negligible | Negligible | BS | BS |
| The proposed AZNNNA model (8) | Negligible | Negligible | BS | BS |

BS: bounded and small steady-state residual error.
References
1. Nikolova, M.; Chan, R.H. The equivalence of half-quadratic minimization and the gradient linearization iteration. IEEE Trans. Image Process.; 2007; 16, pp. 1623-1627. [DOI: https://dx.doi.org/10.1109/TIP.2007.896622] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/17547139]
2. Johansen, T.A.; Fossen, T.I.; Berge, S.P. Constrained nonlinear control allocation with singularity avoidance using sequential quadratic programming. IEEE Trans. Control. Syst. Technol.; 2004; 12, pp. 211-216. [DOI: https://dx.doi.org/10.1109/TCST.2003.821952]
3. Fantacci, R.; Forti, M.; Marini, M.; Tarchi, D.; Vannuccini, G. A neural network for constrained optimization with application to CDMA communication systems. IEEE Trans. Circuits Syst. II Analog. Digit. Signal Process.; 2003; 50, pp. 484-487. [DOI: https://dx.doi.org/10.1109/TCSII.2003.814805]
4. Zhang, Z.; Li, Z.; Zhang, Y.; Luo, Y.; Li, Y. Neural-dynamic-method-based dual-arm CMG scheme with time-varying constraints applied to humanoid robots. IEEE Trans. Neural Netw. Learn. Syst.; 2015; 26, pp. 3251-3262. [DOI: https://dx.doi.org/10.1109/TNNLS.2015.2469147] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/26340789]
5. Killian, M.; Zauner, M.; Kozek, M. Comprehensive smart home energy management system using mixed-integer quadratic-programming. Appl. Energy; 2018; 222, pp. 662-672. [DOI: https://dx.doi.org/10.1016/j.apenergy.2018.03.179]
6. Zhang, Y.; Mu, B.; Zheng, H. Link between and comparison and combination of Zhang neural network and quasi-Newton BFGS method for time-varying quadratic minimization. IEEE Trans. Cybern.; 2013; 43, pp. 490-503. [DOI: https://dx.doi.org/10.1109/TSMCB.2012.2210038]
7. Xiao, X.; Jiang, C.; Lu, H.; Jin, L.; Liu, D.; Huang, H.; Pan, Y. A parallel computing method based on zeroing neural networks for time-varying complex-valued matrix Moore-Penrose inversion. Inf. Sci.; 2020; 524, pp. 216-228. [DOI: https://dx.doi.org/10.1016/j.ins.2020.03.043]
8. Lu, H.; Jin, L.; Luo, X.; Liao, B.; Guo, D.; Xiao, L. RNN for solving perturbed time-varying underdetermined linear system with double bound limits on residual errors and state variables. IEEE Trans. Ind. Inform.; 2019; 15, pp. 5931-5942. [DOI: https://dx.doi.org/10.1109/TII.2019.2909142]
9. Wang, G.; Li, Q.; Liu, S.; Xiao, H.; Zhang, B. New zeroing neural network with finite-time convergence for dynamic complex-value linear equation and its applications. Chaos Solitons Fractals; 2022; 164, 112674. [DOI: https://dx.doi.org/10.1016/j.chaos.2022.112674]
10. Qi, Y.; Jin, L.; Wang, Y.; Xiao, L.; Zhang, J. Complex-valued discrete-time neural dynamics for perturbed time-dependent complex quadratic programming with applications. IEEE Trans. Neural Netw. Learn. Syst.; 2019; 31, pp. 3555-3569. [DOI: https://dx.doi.org/10.1109/TNNLS.2019.2944992]
11. Wei, L.; Jin, L.; Yang, C.; Chen, K.; Li, W. New noise-tolerant neural algorithms for future dynamic nonlinear optimization with estimation on Hessian matrix inversion. IEEE Trans. Syst. Man Cybern. Syst.; 2019; 51, pp. 2611-2623. [DOI: https://dx.doi.org/10.1109/TSMC.2019.2916892]
12. Xie, Z.; Jin, L.; Du, X.; Xiao, X.; Li, H.; Li, S. On generalized RMP scheme for redundant robot manipulators aided with dynamic neural networks and nonconvex bound constraints. IEEE Trans. Ind. Inform.; 2019; 15, pp. 5172-5181. [DOI: https://dx.doi.org/10.1109/TII.2019.2899909]
13. Yan, J.; Xiao, X.; Li, H.; Zhang, J.; Yan, J.; Liu, M. Noise-tolerant zeroing neural network for solving non-stationary Lyapunov equation. IEEE Access; 2019; 7, pp. 41517-41524. [DOI: https://dx.doi.org/10.1109/ACCESS.2019.2907746]
14. Wang, G.; Hao, Z.; Zhang, B.; Jin, L. Convergence and robustness of bounded recurrent neural networks for solving dynamic Lyapunov equations. Inf. Sci.; 2022; 588, pp. 106-123. [DOI: https://dx.doi.org/10.1016/j.ins.2021.12.039]
15. Xiao, L.; Dai, J.; Lu, R.; Li, S.; Li, J.; Wang, S. Design and comprehensive analysis of a noise-tolerant ZNN model with limited-time convergence for time-dependent nonlinear minimization. IEEE Trans. Neural Netw. Learn. Syst.; 2020; 31, pp. 5339-5348. [DOI: https://dx.doi.org/10.1109/TNNLS.2020.2966294]
16. Li, W.; Xiao, L.; Liao, B. A finite-time convergent and noise-rejection recurrent neural network and its discretization for dynamic nonlinear equations solving. IEEE Trans. Cybern.; 2019; 50, pp. 3195-3207. [DOI: https://dx.doi.org/10.1109/TCYB.2019.2906263] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/31021811]
17. Xiao, L.; Cao, Y.; Dai, J.; Jia, L.; Tan, H. Finite-time and predefined-time convergence design for zeroing neural network: Theorem, method, and verification. IEEE Trans. Ind. Inform.; 2020; 17, pp. 4724-4732. [DOI: https://dx.doi.org/10.1109/TII.2020.3021438]
18. Liao, B.; Wang, Y.; Li, W.; Peng, C.; Xiang, Q. Prescribed-time convergent and noise-tolerant Z-type neural dynamics for calculating time-dependent quadratic programming. Neural Comput. Appl.; 2021; 33, pp. 5327-5337. [DOI: https://dx.doi.org/10.1007/s00521-020-05356-x]
19. Xiao, L.; Tao, J.; Dai, J.; Wang, Y.; Jia, L.; He, Y. A parameter-changing and complex-valued zeroing neural-network for finding solution of time-varying complex linear matrix equations in finite time. IEEE Trans. Ind. Inform.; 2021; 17, pp. 6634-6643. [DOI: https://dx.doi.org/10.1109/TII.2021.3049413]
20. Jia, L.; Xiao, L.; Dai, J.; Qi, Z.; Zhang, Z.; Zhang, Y. Design and application of an adaptive fuzzy control strategy to zeroing neural network for solving time-variant QP problem. IEEE Trans. Fuzzy Syst.; 2020; 29, pp. 1544-1555. [DOI: https://dx.doi.org/10.1109/TFUZZ.2020.2981001]
21. Jin, J.; Zhao, L.; Chen, L.; Chen, W. A robust zeroing neural network and its applications to dynamic complex matrix equation solving and robotic manipulator trajectory tracking. Front. Neurorobotics; 2022; 16, 1065256. [DOI: https://dx.doi.org/10.3389/fnbot.2022.1065256] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/36457416]
22. Gerontitis, D.; Behera, R.; Shi, Y.; Stanimirović, P.S. A robust noise tolerant zeroing neural network for solving time-varying linear matrix equations. Neurocomputing; 2022; 508, pp. 254-274. [DOI: https://dx.doi.org/10.1016/j.neucom.2022.08.036]
23. Jin, J.; Gong, J. A noise-tolerant fast convergence ZNN for dynamic matrix inversion. Int. J. Comput. Math.; 2021; 98, pp. 2202-2219. [DOI: https://dx.doi.org/10.1080/00207160.2021.1881498]
24. Jin, J.; Gong, J. An interference-tolerant fast convergence zeroing neural network for dynamic matrix inversion and its application to mobile manipulator path tracking. Alex. Eng. J.; 2021; 60, pp. 659-669. [DOI: https://dx.doi.org/10.1016/j.aej.2020.09.059]
25. Chen, W.; Jin, J.; Gerontitis, D.; Qiu, L.; Zhu, J. Improved Recurrent Neural Networks for Text Classification and Dynamic Sylvester Equation Solving. Neural Process. Lett.; 2023; pp. 1-30. [DOI: https://dx.doi.org/10.1007/s11063-023-11176-6]
26. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); Las Vegas, NV, USA, 27–30 June 2016; pp. 770-778. [DOI: https://dx.doi.org/10.1109/CVPR.2016.90]
27. Jin, L.; Yan, J.; Du, X.; Xiao, X.; Fu, D. RNN for solving time-variant generalized Sylvester equation with applications to robots and acoustic source localization. IEEE Trans. Ind. Inform.; 2020; 16, pp. 6359-6369. [DOI: https://dx.doi.org/10.1109/TII.2020.2964817]
28. Han, L.; He, Y.; Liao, B.; Hua, C. An Accelerated Double-Integral ZNN with Resisting Linear Noise for Dynamic Sylvester Equation Solving and Its Application to the Control of the SFM Chaotic System. Axioms; 2023; 12, 287. [DOI: https://dx.doi.org/10.3390/axioms12030287]
29. Chen, Y.; Yi, C.; Qiao, D. Improved neural solution for the Lyapunov matrix equation based on gradient search. Inf. Process. Lett.; 2013; 113, pp. 876-881. [DOI: https://dx.doi.org/10.1016/j.ipl.2013.09.002]
30. Jiang, C.; Xiao, X.; Liu, D.; Huang, H.; Xiao, H.; Lu, H. Nonconvex and bound constraint zeroing neural network for solving time-varying complex-valued quadratic programming problem. IEEE Trans. Ind. Inform.; 2020; 17, pp. 6864-6874. [DOI: https://dx.doi.org/10.1109/TII.2020.3047959]
31. Li, W.; Ma, X.; Luo, J.; Jin, L. A strictly predefined-time convergent neural solution to equality-and inequality-constrained time-variant quadratic programming. IEEE Trans. Syst. Man Cybern. Syst.; 2019; 51, pp. 4028-4039. [DOI: https://dx.doi.org/10.1109/TSMC.2019.2930763]
32. Jin, L.; Zhang, Y.; Li, S.; Zhang, Y. Modified ZNN for time-varying quadratic programming with inherent tolerance to noises and its application to kinematic redundancy resolution of robot manipulators. IEEE Trans. Ind. Electron.; 2016; 63, pp. 6978-6988. [DOI: https://dx.doi.org/10.1109/TIE.2016.2590379]
33. Li, X.; Yu, J.; Li, S.; Ni, L. A nonlinear and noise-tolerant ZNN model solving for time-varying linear matrix equation. Neurocomputing; 2018; 317, pp. 70-78. [DOI: https://dx.doi.org/10.1016/j.neucom.2018.07.067]
34. Jin, L.; Zhang, Y.; Li, S. Integration-enhanced Zhang neural network for real-time-varying matrix inversion in the presence of various kinds of noises. IEEE Trans. Neural Netw. Learn. Syst.; 2015; 27, pp. 2615-2627. [DOI: https://dx.doi.org/10.1109/TNNLS.2015.2497715] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/26625426]
35. Singkibud, P.; Mukdasai, K. Robust passivity analysis of uncertain neutral-type neural networks with distributed interval time-varying delay under the effects of leakage delay. J. Math. Comput. Sci.; 2022; 26, pp. 269-290. [DOI: https://dx.doi.org/10.22436/jmcs.026.03.06]
36. Kumar, P.; Panwar, V. Wavelet neural network based controller design for non-affine nonlinear systems. J. Math. Comput. Sci.; 2022; 24, pp. 49-58. [DOI: https://dx.doi.org/10.22436/jmcs.024.01.05]
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
Time-varying quadratic minimization (TVQM) has attracted increasing interest in position tracking control and communication engineering. Although traditional zeroing neural network (ZNN) models solve TVQM problems effectively, their convergence rate is limited by the commonly used convex activation functions. To address this issue, we propose an adaptive non-convex activation zeroing neural network (AZNNNA) model in this paper. Using Lyapunov theory, we theoretically analyze the global convergence and noise-immune characteristics of the proposed AZNNNA model under both noise-free and noise-perturbed scenarios. Computer simulations further illustrate the effectiveness and superiority of the proposed model: in the simulation experiments of this article, the AZNNNA model outperforms existing ZNN models in efficiency, accuracy, and robustness.