
Abstract

Process control systems are subject to external factors such as changes in working conditions and perturbation interference, which can significantly affect the system's stability and overall performance. The application and promotion of intelligent control algorithms with self-learning, self-optimization, and self-adaption characteristics have thus become a challenging yet meaningful research topic. In this article, we propose a novel approach that incorporates the deep deterministic policy gradient (DDPG) algorithm into the control of a double-capacity water tank-level system. Specifically, we introduce a fully connected layer on the observer side of the critic network to enhance its expression capability and processing efficiency, allowing for the extraction of important features for water-level control. Additionally, we optimize the node parameters of the neural network and use the ReLU activation function to ensure the network's ability to continuously observe and learn from the external water tank environment while avoiding the issue of vanishing gradients. We enhance the system's feedback regulation ability by adding the PID controller output, computed from the liquid-level deviation and height, to the observer input. This integration with the DDPG control method effectively leverages the benefits of both, resulting in improved robustness and adaptability of the system. Experimental results show that our proposed model outperforms traditional control methods in terms of convergence, tracking, anti-disturbance, and robustness performance, highlighting its effectiveness in improving the stability and precision of double-capacity water tank systems.
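To make the observer-augmentation idea concrete, the following is a minimal sketch (not the authors' implementation) of how a PID controller's output could be appended to the DDPG agent's observation vector alongside the liquid level and its deviation from the setpoint. The `PID` class, gain values, and observation layout are illustrative assumptions, not taken from the paper.

```python
import numpy as np


class PID:
    """Textbook positional PID controller (illustrative gains, not from the paper)."""

    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0       # accumulated error for the I term
        self.prev_error = 0.0     # previous error for the D term

    def step(self, error: float) -> float:
        """Advance one sample period and return the PID control output."""
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def build_observation(level: float, setpoint: float, pid: PID) -> np.ndarray:
    """Assemble the DDPG observation: [liquid level, level deviation, PID output].

    The abstract describes feeding the PID output, derived from the level
    deviation and height, into the observer input; this hypothetical layout
    is one way to do that.
    """
    error = setpoint - level
    u_pid = pid.step(error)
    return np.array([level, error, u_pid], dtype=np.float32)


if __name__ == "__main__":
    pid = PID(kp=1.0, ki=0.1, kd=0.01, dt=0.1)
    obs = build_observation(level=0.4, setpoint=0.5, pid=pid)
    print(obs)  # 3-element observation fed to the actor/critic networks
```

In a full agent, `build_observation` would be called once per environment step, and the resulting vector would pass through the critic's added fully connected layer with ReLU activations before value estimation.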

Details

Title
Optimization control of the double-capacity water tank-level system using the deep deterministic policy gradient algorithm
Author
Ye, Likun 1; Jiang, Pei 2

1 Shien-Ming Wu School of Intelligent Engineering, South China University of Technology, Guangzhou, Guangdong Province, China
2 School of Instrument Science and Optoelectronic Engineering, Beihang University, Beijing, China
Section
RESEARCH ARTICLES
Publication year
2023
Publication date
Nov 2023
Publisher
John Wiley & Sons, Inc.
e-ISSN
2577-8196
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2886719639
Copyright
© 2023. This work is published under http://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.