1. Introduction
The rapid development of process industries has increased the number of factories and machines, making it difficult for humans to monitor and operate all of them. Therefore, it is essential to develop technologies that automatically control parameters such as pressure, velocity, temperature, and flow. The most widely used controller is the proportional–integral–derivative (PID) controller because it is effective despite its simplicity; it adjusts the control output based on the error between the response and the target value [1,2,3]. In PID control, the response characteristics of the system vary according to the magnitudes of the proportional, integral, and derivative terms, called the PID parameters. When the PID parameters are set to optimal values for the control object, the system response closely tracks the target value. However, if the system changes owing to deterioration or environmental changes, the PID parameters must also be changed dynamically to maintain a good response.
In general, to set appropriate PID parameters, an expert familiar with the system adjusts them by trial and error, checking the response of the system to select optimal values. Experts commonly use methods such as the Ziegler–Nichols step response method, the Chien–Hrones–Reswick method, ITAE-based tuning, and robust PID tuning [4,5,6,7,8,9,10,11]. However, a major drawback of these methods is that new PID parameters must be tuned manually whenever the characteristics of the system change; hence, complete automation of the tuning process remains a challenge.
Despite the development of relevant software and hardware to assist tuning, the process remains time consuming [12,13,14,15,16]. Therefore, for complete automation, the PID parameters must be set quickly without human intervention.
Methods have been developed for auto-tuning, such as the fuzzy logic controller. The fuzzy PID controller is a combination of classical PID control and fuzzy logic based on human knowledge and expertise [17], and it has been successfully applied in many nonlinear systems [18,19,20]. However, fuzzy logic has drawbacks: the accuracy of the response depends on human knowledge and expertise, the fuzzy rules must be updated over time, and there is no standard procedure for designing the fuzzy controller. To compensate for these shortcomings of the fuzzy PID controller, the adaptive neuro-fuzzy inference system (ANFIS) has been proposed. ANFIS is a combination of a neural network and a fuzzy logic controller [21,22,23]; it can successfully control a system by finding the values for the fuzzy logic through a neural network without expert knowledge. Active disturbance rejection control (ADRC) is also a robust controller based on the cancellation of disturbances [24,25,26]. However, ANFIS and ADRC cannot be applied in conservative industries that, for safety reasons, prefer the traditional PID controller with explicit PID parameters, because these control systems operate without PID parameters.
Artificial intelligence (AI) can also be used to automate PID control. Research on PID control using AI has proceeded in two directions: replacing the PID controller with AI [27,28,29] and automating the PID parameter setting [30,31,32,33,34]. The advantage of replacing the PID controller with AI is that the tuning of PID parameters is automated. However, certain industries still prefer the traditional PID controller, especially in conservative systems where safety is essential. Therefore, automation of the PID parameter setting is indispensable for complete system automation. Previous studies on automating PID parameter settings with AI have drawbacks: some merely assist manual tuning of the PID controller rather than fully automating it, and others, trained by reinforcement learning, require multiple attempts to tune the parameters even after learning [30,31,32,33,34].
In this study, we developed a practical neural network method for conservative industries that automates the tuning of PID parameters, dramatically reduces the number of tuning attempts, and can be trained with a practically achievable small amount of data. The proposed method first identifies the target system and then recommends PID parameters for it. It derives an understanding of the target system from the response characteristics and determines the PID parameters in a minimal number of attempts, addressing the concerns of conservative industries that prefer PID controllers. In addition, the number of samples required for training was examined, the stability of the response to noise was confirmed, and the variation in response when the target position was changed during operation was studied.
2. Methods
2.1. Simulator for PID Control
A simulator was designed to evaluate the output response and its characteristics for a given PID parameter and to check whether a proper response was produced when the AI method recommended the PID parameters. Additionally, it was also used to accumulate the response data according to the PID parameters for machine learning. The simulator was composed of a second-order system based on a simple mass–spring–damper model with the goal of position control. Most real-time systems can be approximated as second-order systems by model reduction [35,36,37,38,39]; hence, the mass–spring–damper model was selected (Figure 1).
The mass (m), spring (k), and damper (c) values were set as variables with SI units of kg, N/m, and N∙s/m, respectively. When the PID parameters were set, the position of the response along with its characteristics, such as overshoot, overshoot ratio, rise time, and settling time, were evaluated.
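As an illustrative sketch (not the authors' implementation; the function name, the semi-implicit Euler scheme, and the example gains are assumptions), the simulator can be reproduced by applying a PID force to the mass–spring–damper equation m·x'' + c·x' + k·x = f:

```python
def simulate(m, k, c, kp, ki, kd, target=1.0, dt=0.001, t_end=5.0):
    """Semi-implicit Euler integration of m*x'' + c*x' + k*x = f,
    where f is the PID force computed from the error (target - x)."""
    x, v = 0.0, 0.0                 # initial position and velocity
    integral = 0.0                  # accumulated error for the I term
    prev_err = target - x
    positions = []
    for _ in range(round(t_end / dt)):   # 1000 Hz sampling for 5 s
        err = target - x
        integral += err * dt
        deriv = (err - prev_err) / dt
        force = kp * err + ki * integral + kd * deriv
        a = (force - c * v - k * x) / m  # Newton's second law
        v += a * dt
        x += v * dt
        prev_err = err
        positions.append(x)
    return positions
```

With moderate gains on a unit mass (e.g., kp = 100, ki = 50, kd = 20 for m = k = c = 1), the position converges to the 1 m target well within the 5 s window.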
Shorter settling times and smaller overshoots indicate better response characteristics, i.e., quick response and fast stabilization. The acceptable response characteristics differ for each system; for example, the level of water stored in a reservoir is controlled slowly, whereas the position of a robotic arm is controlled quickly. Hence, the acceptable settling time and overshoot values in this study were referenced from similar research [40,41,42,43]: a settling time with a 5% error band of less than 1.5 s and an overshoot of less than 10% (Figure 2).
2.2. Data Acquisition for Learning and Testing
The data for PID learning were acquired as follows. In the simulator, the m, k, and c values and the PID parameters were each set to random values between 0 and 1000. The initial position was 0 m, the target position was fixed at 1 m, and the response was sampled at 1000 Hz for 5 s. The neural network was trained to infer the system information (the values of m, k, and c) from the PID parameters, the response, and its characteristics. If the response did not settle within 5 s, the settling time was recorded as 5.001 s. A total of 10,000,000 data sets were stored; data creation took approximately 120 h with an Intel Xeon Gold 5220 CPU and an NVIDIA Titan RTX GPU.
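The acquisition loop described above can be sketched as follows; `simulate_fn` stands in for the simulator of Section 2.1, and the record layout is an assumption for illustration:

```python
import random

def generate_dataset(simulate_fn, n_samples, seed=0):
    """Draw a random plant (m, k, c) and random PID gains, each uniform
    in (0, 1000), and store the simulated response for each combination."""
    rng = random.Random(seed)
    dataset = []
    for _ in range(n_samples):
        m, k, c = (rng.uniform(0, 1000) for _ in range(3))
        kp, ki, kd = (rng.uniform(0, 1000) for _ in range(3))
        # response sampled at 1000 Hz for 5 s -> 5000 position values
        response = simulate_fn(m, k, c, kp, ki, kd)
        dataset.append({"mkc": (m, k, c), "pid": (kp, ki, kd),
                        "response": response})
    return dataset
```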
In addition, for the learning data used to recommend optimal PID parameters for the identified system, the acceptable PID parameters satisfying the settling-time and overshoot criteria of Section 2.1 were stored separately.
2.3. Learning Process
The originality of this study is the use of an AI method to recommend PID parameters after identifying the target system. In the proposed method, two neural networks were configured (Figure 3). One was configured to identify the type of system from the PID parameters and response characteristics, and the other to find the acceptable PID parameters for the identified system. These two neural networks were configured to work in series and recommend PID parameters quickly.
The first neural network was implemented in two ways: one used a simple artificial neural network (ANN), and the other used a long short-term memory (LSTM) network capable of handling order dependence in sequential data. The ANN had three hidden layers; the LSTM had three hidden layers and one flattening layer. The ANN received as inputs n position samples at 0.1 s intervals (Figure 4), the response characteristics, and the PID parameters. The response characteristics (RC) comprise the rise time, when the position first reaches 95% of the target position; the settling time, when the position settles within a 5% error band; the overshoot, when the position reaches its highest value; and the second peak, when the position reaches its second local maximum. The LSTM received inputs similar to those of the ANN, converted to two-dimensional data to handle the sequence. The outputs of both neural networks were m, k, and c.
The neural network inputs were given via eight approaches (Table 1). Two types of neural network were considered. The values of n were 11 and 21, indicating that the response was sampled from 0 to 1 s or from 0 to 2 s at 0.1 s intervals, to compare the effect of the amount of sampled input data. The response characteristics could be included or excluded to compare their effect as an input.
The second neural network generated the acceptable PID parameters from the values of m, k, and c. As it produced three outputs from three inputs, it was configured as a simple ANN with four hidden layers of multiple neurons each. This ANN learned the acceptable PID parameters for the system.
For training, the Adam optimizer with a learning rate of 0.0005 and the mean squared logarithmic error loss were used. The activation functions for the LSTM network and the multilayer perceptron (MLP) were tanh and ReLU, respectively. To prevent overfitting, 10% dropout and L1/L2 regularization were used.
In this study, the data for learning were generated by simulation; hence, 10 million samples could be produced at once. On a real system, however, this is impractical, and it is important to achieve acceptable performance from the minimum amount of data. To determine how much data is required, the AI was trained with the number of learning data reduced from 10 million to 10,000 and then 1000.
2.4. Inference and Performance Evaluation
The inference and performance evaluation proceeded as follows. Random PID parameters were applied initially; then the two neural networks recommended new PID parameters from the response, the response characteristics, and the random initial PID parameters. The recommended parameters were given to the simulator, and the response and its characteristics were checked to confirm whether acceptable tuning had been achieved. The acceptance criteria were a settling time with a 5% error band of less than 1.5 s and an overshoot of less than 10%. If the response with the recommended PID parameters was not acceptable, the neural networks recommended new PID parameters based on the current response and the previous parameter values, repeating until tuning was complete. If tuning did not succeed within 20 attempts, it was treated as a failure. Performance was evaluated by the number of failures and the number of attempts to success (Figure 5).
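The evaluation loop can be summarized as follows; `identify_system` and `recommend_pid` are hypothetical stand-ins for the first and second trained networks, and `simulate` for the simulator:

```python
MAX_ATTEMPTS = 20   # tuning is treated as a failure beyond this

def acceptable(settling_time, overshoot, target=1.0):
    # Acceptance criteria from the paper: 5%-band settling time
    # under 1.5 s and overshoot under 10% of the target step.
    return settling_time < 1.5 and overshoot < 0.10 * target

def tune(simulate, identify_system, recommend_pid, pid0):
    """Iteratively refine PID parameters until the response is acceptable."""
    pid = pid0
    for attempt in range(1, MAX_ATTEMPTS + 1):
        response, settling_time, overshoot = simulate(pid)
        if acceptable(settling_time, overshoot):
            return pid, attempt                    # tuning succeeded
        m, k, c = identify_system(response, pid)   # first network
        pid = recommend_pid(m, k, c)               # second network
    return None, MAX_ATTEMPTS                      # treated as a failure
```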
In addition, the performance of the system with noise was examined: random noise drawn from a normal distribution with a standard deviation of 20% of the settling condition was added to the position every 0.01 s (Equation (1)). The performance when the target position changed during operation was also examined.
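A minimal sketch of this noise injection (an assumed implementation; with a 1 m target, the settling condition is 0.05 m, so the noise standard deviation is 0.01 m):

```python
import random

def add_noise(positions, dt=0.001, noise_dt=0.01, target=1.0):
    """Add Gaussian noise with sigma = 20% of the settling condition
    (5% of the target position), refreshed every noise_dt seconds."""
    sigma = 0.20 * 0.05 * target        # 0.01 m for a 1 m target
    hold = round(noise_dt / dt)         # samples per noise value
    noisy, n = [], 0.0
    for i, x in enumerate(positions):
        if i % hold == 0:               # draw a new noise value every 0.01 s
            n = random.gauss(0.0, sigma)
        noisy.append(x + n)
    return noisy
```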
x_noisy(t) = x_PID(t) + n(t),  n(t) ~ N(μ, σ²)  (1)

where x_noisy(t) is the position at time t, x_PID(t) is the position from the PID controller, μ is the mean of the noise (= 0), σ is the standard deviation (= 20% of the settling condition = 0.2 × 0.05 × target position), and the settling condition is 5% of the target position.

3. Results
A total of 1000 systems with random m, k, and c values were tested using the two proposed neural network models. The results indicated that PID parameter tuning was completed in fewer than 1.6 attempts on average for every method, because the methods output acceptable PID parameters after identifying the system from its response. An example of a response graph changing as tuning was repeated is shown in Figure 6.
Comparing the learning methods classified by network type, number of sampling data, and inclusion of response characteristics, the LSTM networks performed better in both the first and second neural networks (Table 2 and Table 3). The relative error between the real and predicted model parameters from the first neural network was between 14.5% and 37.6% (Table 2). The accuracy of the first neural network also affected the performance of the second: when the relative error of the first network was low, the final results were also good. The LSTM networks achieved approximately 98% tuning success (~980 successes per 1000 cases) (Table 3), whereas the ANN with 21 or 11 sampling data points and no response characteristics performed poorly. The LSTM network and ANN performed similarly when the inputs included the response characteristics, but differed significantly when they did not: without response characteristics, the LSTM network performed 53% better than the ANN.
Based on the input features, the methods whose inputs included both the sampled data and the response characteristics achieved approximately 98% success in tuning, whereas the ANN without response characteristics performed unacceptably. Therefore, an LSTM network with 11 input samples is recommended: with this minimal input, only 1 s of response data suffices to tune the PID parameters.
The advantages of using a neural network to identify the system are fast tuning of the PID parameters and easy creation of learning data. Except for reinforcement learning, methods that map a response directly to new PID parameters require the optimal PID parameters as training labels; they achieve high accuracy only after learning many cases pairing responses with optimal parameters for systems with different m, k, and c values. To create such learning data, researchers would have to determine the optimal PID parameters for many responses of many random systems. In contrast, when one network identifies the system from the response and another recommends the optimal PID parameters for the identified system, there is no need to label every response with optimal PID values: the first network only needs the m, k, and c values corresponding to each response and PID parameter set, and the second only needs the optimal PID values for each system. This makes data generation and learning more efficient than training a single network that connects responses directly to optimal PID values. If the system is variable and its variation is well defined, the two-network approach is therefore more effective for generating learning data.
However, the PID setting failed for certain combinations of m, k, and c, such as 0.1 kg, 900 N/m, and 900 N∙s/m, which do not occur in real systems; this combination is akin to attaching a feather to the spring and damper of heavy equipment. In other failed cases, the mass, spring, and damper values lay at the edges of the training data range. Because the training data were generated randomly, the edges of the range were only sparsely represented or never sampled, reducing accuracy there. This could be prevented by generating training data over a wider range.
The performance of the LSTM networks as the number of learning data was reduced from 10 million to 10,000 and then 1000 is shown in Table 4. The number of successful tunings decreased and the average number of attempts to success increased as the number of learning data decreased. Nevertheless, all cases achieved a success rate of more than 93%. Even with only 1000 training data, the L-11 method achieved a 92.9% success rate, completing tuning in an average of 2.94 attempts; 1000 samples can realistically be generated on a real PID system.
When random noise with a standard deviation of 20% of the settling condition was added, the average number of attempts to success and the number of failed tunings tended to increase (Figure 7 and Table 5). The position became sensitive to noise fluctuations, which effectively narrowed the settling band; hence, even a successfully tuned PID parameter set could produce a response judged as a failure. It is therefore suggested that noisy situations be evaluated with looser criteria, such as widening the error band from 5% to 10%.
Additionally, the response with the PID parameters recommended by the neural networks was examined when the target position changed during operation. Figure 8 plots the response of the system with m, k, and c of 70.3796 kg, 930.6824 N/m, and 313.7844 N∙s/m and AI-recommended Kp, Ki, and Kd of 40.3243, 922.2207, and 122.4163. The set position started at 0 m and changed to 1, 4, 2, −3, 5, −5, and 0 m at 2 s intervals. Even when the set position changed during operation, the response quickly followed the new set position.
4. Conclusions
In this study, we proposed an AI method to automate the tuning of PID parameters. We designed a series of two AI systems to automate the tuning process and recommend acceptable PID parameters based on the response of the system. These models dramatically reduced the number of tuning attempts by identifying the target system and recommending PID parameters for the system.
Both the ANN and LSTM networks showed satisfactory results; however, an LSTM network with 21 or 11 sampling data points and no response characteristics is most recommended, because it can predict the next PID parameters from only 1 or 2 s of response data. Even with only 1000 training data, the L-11 method achieved a 92.9% success rate, with tuning completed in an average of 2.94 attempts. The robustness of the proposed method was evaluated under added noise and target-position changes.
Additionally, this method can be used effectively even in conservative industries that prefer using traditional PID controllers. In the future, it will be necessary to use a more complex simulator such as a higher-order system to evaluate the performance of this method.
Author Contributions
Y.-S.L.: data acquisition, writing—original draft preparation, and editing; D.-W.J.: investigation, methodology, writing—original draft preparation, and editing. Both authors have read and agreed to the published version of the manuscript.
Funding
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (No. NRF-2020R1G1A1101591).
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Data sharing not applicable.
Conflicts of Interest
The authors declare no conflict of interest.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Figures and Tables
Figure 1. Mass–spring–damper system for simulation. Symbols m, k, and c indicate the values of the mass, spring, and damper, respectively.
Figure 4. Sampling method for using the data as inputs when the number of samples was 11 or 21.
Figure 6. Response graphs with the changes as tuning was repeated when m = 187.77 kg, k = 318.47 N/m, and c = 88.65 N∙s/m.
Figure 7. Response graphs of L-11-1000 with noise that changed as tuning was repeated for m = 38.08 kg, k = 80.73 N/m, and c = 807.74 N∙s/m.
Figure 8. Position tracking of the performance of the system with PID parameters recommended by the neural networks.
Table 1. Learning methods classified by the type of network, number of sampling data, and whether response characteristics were included as inputs. ANN and LSTM are abbreviated as "A" and "L", respectively. The numbers "21" and "11" indicate the number of sampling data. "RC" indicates that the input included the response characteristics; if the name of the method does not end with "RC", the input of that method did not include response characteristics.

| Method | Type of Network | Number of Sampling Data (N) | Response Characteristics (RC) |
|---|---|---|---|
| A-21-RC | ANN | 21 | Included |
| A-11-RC | ANN | 11 | Included |
| A-21 | ANN | 21 | Excluded |
| A-11 | ANN | 11 | Excluded |
| L-21-RC | LSTM | 21 | Included |
| L-11-RC | LSTM | 11 | Included |
| L-21 | LSTM | 21 | Excluded |
| L-11 | LSTM | 11 | Excluded |
Table 2. Relative error between the real and predicted model parameters from the first neural network for model identification after training on 10 million data.

| Method | Mass (%) | Spring (%) | Damper (%) | Average (%) |
|---|---|---|---|---|
| A-21-RC | 4.37 | 9.90 | 29.3 | 14.5 |
| A-11-RC | 4.07 | 12.2 | 36.8 | 17.6 |
| A-21 | 32.5 | 30.0 | 26.6 | 29.7 |
| A-11 | 34.3 | 52.3 | 26.1 | 37.6 |
| L-21-RC | 5.81 | 13.4 | 26.5 | 15.2 |
| L-11-RC | 6.50 | 25.0 | 15.3 | 15.6 |
| L-21 | 15.3 | 17.9 | 17.6 | 16.9 |
| L-11 | 10.0 | 25.7 | 34.6 | 23.3 |
Table 3. Number of successful cases and average number of tuning attempts to achieve success using the eight methods after training on 10 million data.

| Method | Number of Successful Cases | Average Number of Tuning Attempts to Achieve Success |
|---|---|---|
| A-21-RC | 985 | 1.047 |
| A-11-RC | 991 | 1.115 |
| A-21 | 738 | 1.344 |
| A-11 | 552 | 1.001 |
| L-21-RC | 982 | 1.085 |
| L-11-RC | 979 | 1.048 |
| L-21 | 992 | 1.004 |
| L-11 | 992 | 1.571 |
Table 4. Number of successful tunings and average number of tuning attempts to success using LSTM networks as the number of learning data is reduced. "# of Succ." means the number of successful tunings (out of 1000 cases), and "# of Tune." means the average number of tuning attempts to success.

| Method | # of Succ. (10 Million) | # of Tune. (10 Million) | # of Succ. (10,000) | # of Tune. (10,000) | # of Succ. (1000) | # of Tune. (1000) |
|---|---|---|---|---|---|---|
| L-21-RC | 982 | 1.185 | 983 | 1.265 | 963 | 2.478 |
| L-11-RC | 979 | 1.048 | 913 | 1.685 | 964 | 2.621 |
| L-11 | 992 | 1.571 | 982 | 1.864 | 929 | 2.940 |
Table 5. Number of successful tunings and average number of tuning attempts to success using LSTM networks when random noise was added. The suffix "1000" in L-21-RC-1000 and L-11-1000 means that these methods were trained on only 1000 of the total data.

| Method with Noise | Number of Successful Tunings (Total 1000) | Average Number of Tuning Attempts to Success |
|---|---|---|
| L-21-RC | 974 | 2.82546 |
| L-21-RC-1000 | 934 | 3.15096 |
| L-11 | 996 | 1.85241 |
| L-11-1000 | 944 | 3.29131 |
© 2021 by the authors.
Abstract
The feasibility of a neural network method was discussed in terms of a self-tuning proportional–integral–derivative (PID) controller. The proposed method was configured with two neural networks to dramatically reduce the number of tuning attempts with a practically achievable small amount of data acquisition. The first network identified the target system from response data, previous PID parameters, and response characteristics. The second network recommended PID parameters based on the results of the first network. The results showed that it could recommend PID parameters within 2 s of observing responses. When the number of trained data was as low as 1000, the performance efficiency of these methods was 92.9%, and the tuning was completed in an average of 2.94 attempts. Additionally, the robustness of these methods was determined by considering a system with noise or a situation when the target position was modified. These methods are also applicable for traditional PID controllers, thus enabling conservative industries to continue using PID controllers.
Details
1 Department of Mechanical Engineering, School of Industrial and Mechanical Engineering, The University of Suwon, 17, Wauan-gil, Bongdam-eup, Hwaseong 18323, Korea;
2 Department of Mechanical Engineering, Myongji University, Yongin 17058, Korea