Machine learning is a rapidly evolving interdisciplinary field that draws on computer science, robotics, statistics, psychology, and other related areas. Its main focus is identifying patterns in data in order to predict unknown trends. The intersection of robotics and psychology in machine learning has had a significant effect on the development of reinforcement learning (RL).1
Reinforcement learning surpasses traditional control methods thanks to its ability to learn and improve through an interactive trial-and-error approach that relies on observations obtained from the dynamic environment.2 In recent years, there has been a trend toward implementing comprehensive intelligence in industrial production. The surge of cloud computing and communication networks connects related industries, unleashing the full potential of industrial networks orchestrated by machine learning and AI, owing to their capability to collect and generate large volumes of network data.3 Moreover, AI-based control algorithms, characterized by robust autonomous learning and the ability to handle complexity, have revitalized various control schemes and opened new research avenues.4 Consequently, a very important technological orientation of industrial intelligence is the systematic study of intelligent control.5 As a significant subdivision of intelligent control, the incorporation of RL into system control technology is anticipated to pave the way for novel research paths, presenting tremendous research potential and promising application prospects.
Reinforcement learning and traditional control methods are commonly employed in the development of system control strategies for industrial production and management processes. However, certain proposed models may be susceptible to variations in working conditions, leading to performance limitations. Abe et al.6 employed RL in the continuous decision-making process for optimizing the phase of microperistaltic pumps. The findings indicated that RL performed effectively in the optimization of the pump actuation sequence. Nevertheless, the real-time operational delay in the actuation sequences affected the state transition in RL. Dey et al.7 achieved high precision and stable convergence in controlling the water level in a tank system using a fuzzy logic method. However, the method was not able to accurately detect errors caused by delays in the pump environment. Efheij et al.8 implemented a PID controller based on an Arduino ATmega328P to monitor an industrial water-level system. The response of the controller showed slight overshoot and almost zero steady-state error. However, in the face of dynamic changes in the water level and associated characteristics, the controller required frequent parameter adjustments.
Another critical factor for evaluating a successful RL control method is its robustness. The ability to withstand and adapt to changes in the environment is essential for training and upgrading the actor network and the critic network.9 Harish and Peter10 proposed a solution to enhance the robustness of RL training by imposing random perturbations on the system input, known as the Linear Quadratic Gaussian (LQG) method. While this method was shown to effectively improve robustness during RL training, it came at the cost of reduced model performance. Model convergence is also a critical measure of a system's stability and ability to resist interference. While Q-learning is a widely used model-free RL algorithm, it can have limited convergence capacity. Qi et al.11 developed a real-time energy management system using a deep Q-network, which combines Q-learning with a deep neural network to provide optimal control decisions in a continuous environment. Nevertheless, the model's convergence performance is suboptimal, resulting in system instability and susceptibility to crashes. Cheng et al.12 proposed a multi-agent deep deterministic policy gradient (MADDPG) offloading algorithm for mobile devices that maximizes long-term utility in terms of execution latency and energy consumption. However, the large number of mobile devices can make the training process unstable, which limits the effectiveness of the proposed algorithm. In summary, numerous existing RL algorithms and traditional control methods are limited by weak anti-interference capability, poor convergence performance, and low robustness.
Recent studies have highlighted the potential of RL algorithms, including the deep deterministic policy gradient (DDPG) algorithm, for the control of nonlinear systems. Mendiola-Rodriguez et al.13 conducted a study on the anaerobic digestion systems of Tequila vinasses, utilizing the DDPG algorithm to reduce chemical oxygen demand (COD). Their findings demonstrated the algorithm's ability to handle plant-model mismatch, as well as its robustness against disturbances and uncertainties, indicating its potential for applications in nonlinear system control. Yoo et al.14 introduced a pioneering methodology that integrates RL and optimal control techniques to address the non-stationary and irreversible nature of batch processes. The approach employs a Monte-Carlo deep deterministic policy gradient with phase segmentation (MC-DDPG) and has demonstrated remarkable efficacy in managing substantial uncertainties and intricate nonlinear dynamics. While these studies successfully applied DDPG and its variants to directly control nonlinear systems, our work focuses on employing the DDPG algorithm to control a linearized version of the nonlinear double-capacity water tank system. The linearization of the system may lead to a more computationally efficient implementation, while still capturing the essential dynamics of the nonlinear system.
This article focuses on the control of a double-capacity tank-level system, a complex system that exhibits nonlinearity and time delay. This system finds wide application in industries such as chemical and power plants, where even minor deviations can lead to significant financial loss and potential accidents.7 However, owing to the underlying complexity of the control mechanism of the double-capacity water tank, classic physical models or traditional control methods are difficult to apply effectively to control system parameters such as the water level and level deviation.
Given the challenges associated with controlling complex and dynamic systems, this article proposes the use of an advanced RL model, specifically the DDPG algorithm, to effectively regulate the double-capacity water tank system. By leveraging the power of artificial intelligence and machine learning, we aim to address the limitations of traditional control methods and achieve optimal control of system parameters such as the water level and level deviation. The double-capacity water tank is classified as a continuous action-space system, making DDPG an ideal control method due to its deterministic action output. This property contributes to stabilizing policy updates and enhances the efficiency of directional exploration within the tank environment. In our work, to improve the processing efficiency of the system, a fully connected layer is incorporated into the observer side during the construction of the critic network, enhancing its feature-extraction performance. Considering the scenario of a continuous water-tank system, the node parameters are optimized and a ReLU activation function is added to the actor-critic network design, ensuring that the network can continuously react to changes in the environment while minimizing gradient loss. Robustness and convergence are key concerns in a water tank system, so we incorporate the PID controller output into the observer side of the DDPG pure control system to achieve better feedback performance.
In this study, we have developed DDPG pure control and DDPG adaptive compensation control systems for the control of a double-capacity water tank. Through a comparative analysis with proportion-integration-differentiation (PID) control and Fuzzy PID control, our results demonstrate that DDPG surpasses these traditional control methods, showcasing superior adaptability, tracking performance, disturbance resistance, and robustness. The results show that the DDPG adaptive compensation control system has the best control effect, combining the adaptability and convergence of the pure DDPG method with the robustness of the PID method. Overall, the DDPG algorithm demonstrates superior performance metrics and holds promising potential for application in industrial process control systems.
The rest of this article is organized as follows. Section 2 demonstrates the design process of the DDPG-based control methods, including the construction of a proper tank environment and an innovative network structure. The present study focuses on the double-capacity water tank system and entails the development of two control systems through Simulink simulation. Specifically, the DDPG pure control system and the DDPG adaptive compensation control algorithm are employed to evaluate their respective performance in controlling the water tank system. Section 3 analyzes the design logic of the refined DDPG framework and demonstrates the control process of the DDPG algorithm. Section 4 conducts a comprehensive comparative analysis of four distinct control methods, namely PID control, Fuzzy control, DDPG pure control, and DDPG adaptive compensation control. This comparative study encompasses four fundamental aspects: convergence, tracking, anti-disturbance, and robustness performance. The principal objective of this analysis is to discern and appraise the control efficacy of the different methods. Section 5 summarizes the main work of this article and outlines future research prospects for DDPG-based control methods.
CONTROL MODEL BASED ON DDPG
Due to its special properties in the water-level control scenario, the double-capacity water tank makes a good controlled object for researching DDPG control algorithms. The inflow and outflow rates of the tank, which are continuous in nature, are managed by the system in an effort to adjust the water level inside the tank. Meanwhile, the water level also serves as a continuous state variable. Therefore, DDPG, an RL technique suited to continuous state and action spaces, is appropriate for this problem. In our study, we have linearized the double-capacity tank model to enable a more thorough exploration of its performance in the context of continuous process control.
In the simulation, it is assumed that the tank model satisfies the conditions listed in Table 1.
TABLE 1 Parameters of double-capacity water tank-level system.
| | Parameter | Symbol | Unit | Condition |
|---|---|---|---|---|
| Tank 1 | Level height | | dm | [0, 20] |
| | Initial level | | dm | 0 |
| | Balanced level | | dm | 10 |
| | Expected level | | dm | 10 |
| | Sectional area | | dm² | 1 |
| Tank 2 | Level height | | dm | [0, 20] |
| | Initial level | | dm | 1 |
| | Balanced level | | dm | 10 |
| | Expected level | | dm | 10 |
| | Sectional area | | dm² | 1.2 |
| Valve | Loading factor | | dm² | 0.2828 |
| | Loading factor | | dm² | 0.2828 |
| | Adjustment factor | | dm² | 1 |
| Flow rate | Inflow rate | | dm³/s | – |
| | Middle flow rate | | dm³/s | – |
| | Outflow rate | | dm³/s | – |
| Other | Delay time | | s | 0.5 |
Figure 1 shows a double-capacity tank-level control system composed of two single tanks in series, with the input quantity being the valve-opening variation of the regulating valve and the output quantity being the level increment of tank 2.
According to the material balance equation, the following relationship can be obtained: [Image Omitted. See PDF]
Based on the continuity equation, the following relationship can be derived through first-order Taylor expansion: [Image Omitted. See PDF]
Combining the equations in (2), the differential equation of the double-capacity water tank takes the following form: [Image Omitted. See PDF]
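The equation images above are not reproduced in the extracted text. For reference, the standard mass-balance model of two non-interacting tanks in series takes the following form; the symbols used here ($A_i$ for sectional areas, $h_i$ for levels, $R_i$ for valve resistances, $q$ for flow rates) are our own notation, not necessarily that of the original figures:

```latex
A_1 \frac{dh_1}{dt} = q_{\mathrm{in}} - q_{12}, \qquad
A_2 \frac{dh_2}{dt} = q_{12} - q_{\mathrm{out}},
```

and, after linearizing the valve flows around the operating point ($q_{12} \approx h_1/R_1$, $q_{\mathrm{out}} \approx h_2/R_2$), eliminating $h_1$ gives a second-order differential equation of the form

```latex
T_1 T_2 \frac{d^2 h_2}{dt^2} + (T_1 + T_2)\frac{dh_2}{dt} + h_2 = K\, q_{\mathrm{in}},
\qquad T_1 = A_1 R_1,\; T_2 = A_2 R_2,\; K = R_2 .
```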
Under zero initial conditions, and considering that there is a delay time between the change in the opening of the regulating valve and the resulting change in water volume, the transfer function of the double-capacity tank-level system can be obtained via the Laplace transform: [Image Omitted. See PDF]
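In the notation introduced above (again ours, not necessarily the authors'), this transfer function takes the familiar second-order-plus-dead-time form, with $\tau$ denoting the delay:

```latex
G(s) = \frac{\Delta H_2(s)}{\Delta U(s)}
     = \frac{K\, e^{-\tau s}}{(T_1 s + 1)(T_2 s + 1)} .
```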
Drawing from the assumptions outlined in Table 1, the transfer function of the double-capacity tank-level system is obtained by considering the operating condition in which the level height of tank 2 is maintained at its expected final value (10 dm, per Table 1). [Image Omitted. See PDF]
Based on the transfer function, a double-capacity water tank-level system model is built in Simulink and encapsulated in the “water tank system” module, as shown in Figure 2.
In the architecture of the DDPG pure control system, the DDPG intelligent agent is applied to the control loop as a pure controller, while the components other than the intelligent agent and the double-capacity water tank serve as the external environment. The observer inputs are the tank-level height and the level deviation, which reflect the state of the control system. The key component of the DDPG pure control system design is the neural network, which can effectively train on and utilize multiple input parameters. This helps to reduce the impact of disturbances and enhance control accuracy. The DDPG pure control method is applied to the control system and its structural block diagram is shown in Figure 3.
In the environmental model, the observer inputs are the initialized tank level height and the level deviation. The range of the tank level height is from 0 to 20 dm.
The termination symbol is designed to reflect whether training is completed. When the level height exceeds the upper limit of 20 dm or falls below the lower limit of 0 dm, the termination symbol equals 1. In all other cases, the termination symbol equals 0.
The reward value requires consideration of several parameters, mainly the level deviation and the termination sign, where the absolute value of the deviation is used in the calculation. The level height is used as the control target, and its deviation from the set value is the priority in the design of the reward value. The reward is designed to be higher when the deviation is smaller, which incentivizes the agent to bring the level height closer to the target value. The specific reward function is shown below: [Image Omitted. See PDF]
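The exact reward expression in the omitted image is not reproduced here; a minimal Python sketch of a deviation-based reward with a termination penalty, consistent with the description above (the penalty value and the default bounds are illustrative assumptions), might look like this:

```python
def compute_reward(level, setpoint, lower=0.0, upper=20.0, penalty=-10.0):
    """Deviation-based reward with termination handling (illustrative values)."""
    deviation = abs(setpoint - level)       # absolute level deviation
    done = level < lower or level > upper   # termination symbol
    reward = -deviation                     # smaller deviation -> higher reward
    if done:
        reward += penalty                   # extra penalty when training terminates
    return reward, done
```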
Network model
The DDPG control model consists of four deep neural networks: two critic networks and two actor networks, with networks of the same type having identical structures.15 To simplify the system, a single critic network and a single actor network can be constructed, respectively. The critic network model consists of two parts, the observer side and the action side. We add an additional fully connected layer on the observer side of the critic network and use ReLU as the activation function. The input layer of the critic network receives inputs from both the observer side (s) and the action side (a). Behind the fully connected layer lies the hidden layer, while the superposition layer combines the outputs of the fully connected layers on both sides. Finally, the output layer produces the evaluation value of the current policy. The actor network, which has a similar structure, receives state information from the environment and outputs the corresponding action policy. The overall network model is shown in Figure 4.
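As an illustration of this structure, a minimal PyTorch sketch of the critic (with the extra fully connected layer and ReLU activations on the observer side, merged with the action path in a superposition layer) and of the actor is given below; the layer widths and dimensions are assumptions for illustration, not the authors' exact settings:

```python
import torch.nn as nn

class Critic(nn.Module):
    """Critic Q(s, a): observer path with an extra fully connected layer,
    merged with the action path in a superposition layer."""
    def __init__(self, state_dim=2, action_dim=1, hidden=64):
        super().__init__()
        # Observer side: the additional FC layer improves feature extraction.
        self.obs_path = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Action side: a single FC layer.
        self.act_path = nn.Linear(action_dim, hidden)
        # Superposition of both paths, then the output layer (Q-value).
        self.head = nn.Sequential(nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, state, action):
        merged = self.obs_path(state) + self.act_path(action)  # superposition layer
        return self.head(merged)

class Actor(nn.Module):
    """Actor mu(s): maps the observation to a bounded deterministic action."""
    def __init__(self, state_dim=2, action_dim=1, hidden=64, action_scale=1.0):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh(),  # bounded output
        )
        self.action_scale = action_scale

    def forward(self, state):
        return self.action_scale * self.net(state)
```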
Using the DDPG pure control as the control method and “water tank system” as the controlled object, the model of double-capacity water tank level control system is built in Simulink and the control system model is shown in Figure 5.
In the DDPG adaptive compensation control system, the parts other than the intelligent agent and the water tank are regarded as the external environment, while the DDPG intelligent agent is used as the front controller. To increase the system's capacity for self-regulation, a PID controller is used as the feedback controller in the control loop. The observer inputs become the tank level height, the level deviation, and the output value of the feedback controller, which together reflect the state of the control system. The DDPG adaptive compensation control method is applied to the control system and its structural block diagram is shown in Figure 6.
The construction method of the environment model is roughly the same as that described in Section 2.2.2. The main differences are (i) the observer inputs have three channels, namely the tank level height, the level deviation, and the output value of the feedback controller; and (ii) during the reward calculation, the environment imposes an additional penalty on the intelligent agent, in the form of a negative reward, when the corresponding penalty condition is met.
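A minimal sketch of the three-channel observation vector used in this scheme (the function and argument names are illustrative):

```python
import numpy as np

def build_observation(level, setpoint, pid_output):
    """Three-channel observer input: tank level, level deviation, feedback (PID) output."""
    deviation = setpoint - level
    return np.array([level, deviation, pid_output], dtype=np.float32)
```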
Network model
The construction of the network model is roughly the same as that described in Section 2.2.3, with two key differences: (i) the inputs of the critic network model are the input on the observer side and the input on the action side, with the additional component of the feedback controller output; and (ii) the inputs of the actor network model become a combination of the state on the observer side and the action on the action side. The overall network model is shown in Figure 7.
Using DDPG adaptive compensation control as the control method and “water tank system” as the controlled object, a double-capacity water tank-level control system model is built in Simulink and the control system model is shown in Figure 8.
Deep deterministic policy gradient was proposed by the DeepMind team in 2016 as a policy algorithm that incorporates deep neural networks into the deterministic policy gradient (DPG) framework.16 The DDPG algorithm employs an actor-critic network to approximate the policy function and utilizes the DQN algorithm to train the Q network, which enables the computation of temporal-difference errors and the implementation of gradient updates from the online network to the target network.17
The Q function in the critic network represents the expected return obtained after executing the action output by the actor network under the current policy in a given state, with a given discount factor: [Image Omitted. See PDF]
The Q network in DDPG is obtained by approximating the Q function with the critic network, parameterized by its network weights.
The performance of the policy is measured by the objective function J, which is defined as follows: [Image Omitted. See PDF] where the expectation is taken over the environmental state and its distribution function.
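The two definitions referenced above were lost with the omitted images; in the standard DDPG notation (which may differ from the authors' symbols) they read:

```latex
Q^{\mu}(s_t, a_t) = \mathbb{E}\!\left[\sum_{k=0}^{\infty} \gamma^{k}\, r_{t+k} \,\middle|\, s_t,\, a_t = \mu(s_t)\right],
\qquad
J(\theta^{\mu}) = \mathbb{E}_{s \sim \rho}\!\left[\, Q\big(s, \mu(s \mid \theta^{\mu}) \mid \theta^{Q}\big) \right].
```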
The design block diagram of the DDPG control algorithm is shown in Figure 9.
DDPG is an RL algorithm that uses a neural network to learn a policy function, which outputs deterministic actions based on the agent's observations. The agent then interacts with the environment to receive feedback in the form of a reward signal, which is used to update the policy network and the Q-value function.18 It aims to maximize the expected return while minimizing the loss of the Q network, finally completing the data mapping from input to output19 through continuous policy-based and value-based training within the actor-critic network. It creates an experience replay buffer, similar to that used in the DQN method, to store the experiences of the intelligent agent during previous time steps. At the same time, it randomly samples from the experience memory to facilitate the transfer of gradient information from the evaluation network (critic network) to the action network (actor network), aiming to update the parameters of the online network and the target network through backpropagation while avoiding overfitting. The efficiency of experience replay and gradient descent is improved by introducing a fully connected layer on the observer side of the critic network so that it can better extract the key features for water-level control from the tank environment. Refined node parameters and the use of the ReLU function prevent the issue of vanishing gradients, guaranteeing the quality of network parameter updates.
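A minimal sketch of the experience replay buffer described above (capacity and method names are illustrative):

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity experience pool; the oldest samples are overwritten when full."""
    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size=64):
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)
```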
The algorithm design process is shown as follows (Algorithm 1):
1: Randomize the reference signal and the initial level height
2: Initialize the random noise signal
3: Initialize the experience pool and its capacity
4: Initialize the actor and critic parameters of the online network
5: Initialize the actor and critic parameters of the target network
6: Initialize the environment state
7: for each episode = 1, …, N (or until the maximum-reward termination condition is met) do
8:   Read the control action
9:   Execute the action
10:  Obtain the final control action
11:  Obtain the reward, the state at the next moment, and the terminator
12:  …
13:  …
14:  Record the sample in the experience pool, overwriting the oldest records if the capacity is insufficient
     for each time step do
15:    Randomly sample data from the experience pool and input them to the actor-critic network
16:    Calculate the online value
17:    Calculate the target value
18:    Minimize the loss function and update the parameters of the critic network
19:    Maximize the reward function and update the parameters of the actor network
20:    Update the target-network parameters every few steps using a running (soft) average
21:  end for (time step)
22: end for (episode)
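To make the learning step in Algorithm 1 concrete, a minimal PyTorch sketch of one DDPG update (target computation, critic loss, actor loss, and soft target update), using the Critic/Actor classes sketched earlier and the standard DDPG formulation with illustrative hyperparameters, is shown below:

```python
import torch
import torch.nn.functional as F

def ddpg_update(batch, actor, critic, actor_t, critic_t,
                actor_opt, critic_opt, gamma=0.95, tau=1e-3):
    """One DDPG update on a sampled minibatch of (s, a, r, s_next, done) tensors."""
    s, a, r, s_next, done = batch

    # Target value y = r + gamma * Q'(s', mu'(s')) for non-terminal transitions.
    with torch.no_grad():
        y = r + gamma * (1.0 - done) * critic_t(s_next, actor_t(s_next))

    # Critic: minimize the mean-squared temporal-difference error.
    critic_loss = F.mse_loss(critic(s, a), y)
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()

    # Actor: maximize Q(s, mu(s)), that is, minimize its negative.
    actor_loss = -critic(s, actor(s)).mean()
    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()

    # Soft (running-average) update of the target networks.
    for target, online in ((critic_t, critic), (actor_t, actor)):
        for p_t, p in zip(target.parameters(), online.parameters()):
            p_t.data.mul_(1.0 - tau).add_(tau * p.data)
```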
The effectiveness of a control system is generally evaluated through quantitative measures. To ensure that the desired performance criteria are met, it is essential to compare and analyze different control algorithms through simulation experiments under the same initial conditions. Here, we assume that the target water level of tank 2 is fixed at its expected value of 10 dm (Table 1). In the simulation experiments, a traditional PID controller and a Fuzzy controller are compared with the DDPG control algorithm to verify the performance of the DDPG-based control framework. The PID controller is an extensively employed regulator in industry.20 To eliminate the static difference of the system and, ultimately, stabilize the system output, it works with pre-defined parameters and applies the results of its operation to the controlled object through the actuator. Fuzzy controllers are considered by researchers to be an excellent choice for studying complex systems, as opposed to ordinary linear controllers.21 To achieve better control outcomes, Fuzzy control involves fuzzifying the precise values of the PID controller parameters, creating inference rules, and performing defuzzification.
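For reference, a minimal discrete (positional-form) PID controller of the kind compared here and used as the feedback element in the adaptive compensation scheme might look as follows; the gains and sampling time are placeholders, not the tuned values used in the experiments:

```python
class PID:
    """Simple positional PID controller with a fixed sampling time."""
    def __init__(self, kp=1.0, ki=0.1, kd=0.05, dt=1.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # Proportional + integral + derivative terms.
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```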
General parameters
The training parameters of the intelligent agent under the DDPG pure control algorithm (a) and the DDPG adaptive compensation control algorithm (b) are designed to satisfy the conditions listed in Table 2:
TABLE 2 Parameters of intelligent agent in DDPG control algorithm.
| Parameters | Unit | Condition (a) | Condition (b) |
|---|---|---|---|
| Sampling time | s | 1 | 1 |
| Maximum segment | step | 2000 | 500 |
| Training time | step | 600 | 600 |
| Sampling batch | pcs/package | 64 | 64 |
| Actor learning rate | – | | |
| Critic learning rate | – | | |
| Target smoothing factor | – | | |
| Discount factor | – | 0.95 | 1 |
| Experience length | – | | |
| Noise variance | – | 0.15 | 0.15 |
| Variance decay rate | – | | |
| Average reward threshold | – | 200 | 4050 |
Server and programming configurations are presented in Table 3:
TABLE 3 Server and programming configurations.
| Parameter | Configuration |
|---|---|
| Central processing unit (CPU) | Intel(R) Core(TM) i5-8300H CPU @ 2.30 GHz |
| Operating system (OS) | Win10 64-bit |
| Programming language | Python 3.8 |
The efficacy of RL training is directly measured by its convergence performance, which determines whether a system has a provable convergence guarantee to a globally optimal and feasible strategy.22 To assess this characteristic, we utilize the reward curves obtained from the training of the intelligent agent.
During the training of the intelligent agent, the training ends when either of the following conditions is met: (i) the simulation length exceeds 30 steps; (ii) the cumulative average reward of the intelligent agent is higher than 2500. Assuming that the training is performed under the same parameters, the cumulative reward curves of the DDPG-based methods are shown in Figure 10.
As shown in Figure 10, under the same cumulative average reward threshold of 1200, the DDPG pure control method requires 30 training steps and takes 5 min 43 s, while the DDPG adaptive compensation control method requires 24 training steps and takes 4 min 18 s. This confirms that the DDPG adaptive compensation control method reaches the cumulative reward threshold more quickly, converges faster, and exhibits better convergence performance.
Upon examining the cumulative reward curves, we observe that both the DDPG pure control and DDPG adaptive compensation control methods exhibit a consistent upward trend, characterized by a smaller fluctuation range and greater stability. The difference between the two is whether their exploration is directional. The exploration of the DDPG pure control method is non-directional, with constant trial-and-error tolerance in the early stages of training, while the exploration of the DDPG adaptive compensation control method is directional, which guarantees that its total compensation is always positive and allows it to continuously approach the desired value. The positive reward values lead to a shorter training time, allowing the intelligent agent to quickly obtain the optimal strategy.
Tracking performance testing
Tracking performance is a crucial parameter that has been extensively investigated in various fields, and its optimization in system control holds great significance.23 Achieving exceptional quality in a system implies that the actual value of the system index should quickly follow the set value. In this section, we explore the tracking performance of the double-capacity water tank system by adjusting the input conditions.
Assuming that the level set-point is changed at two instants during system operation, we conduct simulations under the same initial conditions and present the level output curve in Figure 11.
As shown in Figure 11, when the set value of the tank level changes, the DDPG-based control method responds faster than the PID-based control method, with a smaller fluctuation range and consequently superior tracking performance.
Anti-disturbance performance testing
In the actual control process, the system is often affected by disturbances from the external tank environment. Anti-disturbance performance, a crucial control-system parameter, provides a direct indication of the water tank system's robustness and its capacity to operate in a highly dynamic control setting.24 In this part, internal and external disturbances are imposed on the tank system to assess its performance.
Assume that at one moment during operation an internal level disturbance is applied between the controller and the double-capacity tank-level system, and that at a later moment an external level disturbance is applied between the double-capacity tank-level system and the feedback loop. Simulation is performed under the same initial conditions and the level output curve is shown in Figure 12.
As shown in Figure 12, the DDPG-based control method takes less time to return to the steady-state value and has a smaller fluctuation range than the PID-based control method for the same disturbance. In the DDPG-based control method, the intelligent agent is trained to converge to the optimal value with minimal fluctuation, even in the presence of internal and external disturbances. This approach effectively alleviates the impact of stochastic external factors and significantly enhances the self-adaptive performance of the double-capacity tank system.
Robust performance testing
Robustness is a critical parameter that measures a system's ability to adapt to the external environment and handle a wide range of testing scenarios.25 Scholars have been actively involved in constructive endeavors aimed at improving the robustness of system control. Mendiola-Rodriguez et al.26 emphasize the importance of employing an integrated approach to improve sustainability in control processes. By considering multiple aspects of the process and using suitable sustainability metrics, decision-making can be streamlined. The study underscores the advantages of implementing an integrated approach to increase the controllability of the system. In the development of our DDPG adaptive compensation control model, we incorporated a PID controller to enhance the system's robustness against the uncertainties imposed by the tank environment. In addition, a fully connected layer is introduced on the observer side of the critic network to mitigate the impact of uncertainty by improving the system's feature-extraction capabilities.
Random factors, like sudden changes in valve coefficients, can affect the dynamic characteristics of the tank system and thus impact its robustness in practical working conditions. In this section, to evaluate the robustness of the system, we introduce variations in the load valve coefficient and retrain the agent accordingly.
Assume that the condition of the system changes, that is, the coefficient of the load valve in the double-capacity tank system varies at a certain moment, which increases the outflow rate. The simulation is performed under the same initial conditions with the changed valve condition, and the level output curve is shown in Figure 13.
As shown in Figure 13, comparing the system state before and after the environmental change, the DDPG pure control method exhibits a sluggish response and inadequate robustness when the valve coefficient fluctuates within a specific range, which can be exacerbated in scenarios with higher degrees of uncertainty; specifically, after running for 40 s, a residual error remains. The PID-based control method takes less time to recover to the steady-state value, and the system responds faster with a smaller fluctuation range, reflecting its good robustness. It is worth highlighting that the DDPG adaptive compensation control algorithm demonstrates exceptional robustness, which can be attributed to our deliberate efforts to design it in a way that accommodates uncertainties.
CONCLUSION
In this article, DDPG pure control and DDPG adaptive compensation control methods are proposed to adjust the water level of a double-capacity tank-level system by building a comprehensive tank system model with enhanced feedback-regulation capability and by training DDPG-based intelligent agents using an actor-critic network with a refined structure and parameters. The performance of the DDPG-based control methods is compared with that of a traditional PID controller and a Fuzzy controller in terms of the system's convergence, tracking, anti-disturbance, and robustness performance. Simulation results indicate that our proposed DDPG adaptive compensation control algorithm combines the adaptive performance of the DDPG method with the robustness of the PID method, outperforming conventional schemes in the control of the double-capacity tank-level system. Under the same conditions, it converges faster, responds more quickly, and tracks the target water level more precisely. Furthermore, compared with traditional control methods, the DDPG adaptive compensation control system demonstrates superior self-adaptive performance in the presence of random factors in the water tank environment, integrating the robustness of the PID method and the adaptive capability of the DDPG algorithm.
The increasing performance demands and wider application scenarios of process control have prioritized the use of DDPG adaptive compensation control, which has been found to efficiently address complex problems with continuous actions and to provide data-driven control of dynamic systems with strong robustness.27 In our upcoming research, we will use the DDPG adaptive compensation control method to investigate multi-tank-level systems that involve sophisticated nonlinear constraints. Moreover, we hope to achieve autonomous control of tank-level systems with the DDPG algorithm. One potential way to maximize the strengths of the DDPG adaptive compensation control algorithm is to perform supervised learning on the optimized control actions to build new neural-network controllers, allowing for algorithm migration. The self-updating of controllers after migration is likely to enhance the effectiveness of the DDPG adaptive compensation control approach.
AUTHOR CONTRIBUTIONS
Likun Ye: Conceptualization (equal); data curation (equal); formal analysis (equal); methodology (equal); validation (equal); visualization (equal); writing – original draft (equal). Pei Jiang: Data curation (equal); formal analysis (equal); investigation (equal); validation (equal); visualization (equal); writing – original draft (equal).
CONFLICT OF INTEREST STATEMENT
We declare that we have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this article.
PEER REVIEW
The peer review history for this article is available at
The data that support the findings of this study are available from the corresponding author upon reasonable request.
© 2023. This work is published under http://creativecommons.org/licenses/by/4.0/ (the “License”).
Abstract
Process control systems are subject to external factors such as changes in working conditions and perturbation interference, which can significantly affect the system's stability and overall performance. The application and promotion of intelligent control algorithms with self-learning, self-optimization, and self-adaptation characteristics have thus become a challenging yet meaningful research topic. In this article, we propose a novel approach that incorporates the deep deterministic policy gradient (DDPG) algorithm into the control of a double-capacity water tank-level system. Specifically, we introduce a fully connected layer on the observer side of the critic network to enhance its expression capability and processing efficiency, allowing for the extraction of important features for water-level control. Additionally, we optimize the node parameters of the neural network and use the ReLU activation function to ensure the network's ability to continuously observe and learn from the external water tank environment while avoiding the issue of vanishing gradients. We enhance the system's feedback-regulation ability by adding the PID controller output to the observer input based on the liquid-level deviation and height. This integration with the DDPG control method effectively leverages the benefits of both, resulting in improved robustness and adaptability of the system. Experimental results show that our proposed model outperforms traditional control methods in terms of convergence, tracking, anti-disturbance, and robustness performance, highlighting its effectiveness in improving the stability and precision of double-capacity water tank systems.