1. Introduction
In the COVID-19 era, brought about in part by severe climate change and ecosystem destruction, the automobile industry is replacing vehicles with internal combustion engines with eco-friendly vehicles. Among such vehicles, the number of electric vehicles (EVs) has increased explosively [1, 2]. In 2021, the number of registered vehicles in South Korea was 25,010,000, of which 1,200,000 were eco-friendly. Of these, 240,000 were battery electric vehicles (BEVs), about 1% of all registered vehicles [3–5]. The rapid increase in BEVs that use large batteries creates demand for charging infrastructure and specialized vehicle maintenance, and it also creates an environmental problem: the treatment of spent batteries. Many such batteries (used in the BEVs and hybrid cars that became popular in the 2010s) must now be managed in the 2020s. However, there is as yet no eco-friendly treatment; the batteries are simply stored [6–8].
One (partial) solution is to maximize battery efficiency and thereby extend the working life; another is eco-friendly regeneration. Battery life is prolonged by efficient charging and efficient driving [9, 10]. Efficient charging is the driver's responsibility and cannot be controlled by the vehicle developer. Driving with consideration of the vehicle ahead, by contrast, can be implemented by the developer, and it also prolongs battery life. This paper therefore presents a method that extends battery life by creating an efficient driving profile that considers the vehicle ahead. In particular, driving profile optimization is a practical way for car manufacturers to respond directly to the environmental constraints they face when selling cars.
Several studies have presented eco-friendly driving profiles for BEVs. The following papers focus on batteries. Piao et al. [11] improved battery efficiency through a battery management system based on a cell-balancing algorithm; that work differs from ours in that it focused on improving the efficiency of the electrical energy already stored, via the battery management system. Ramkumar et al. [12] reviewed the function of batteries in EVs and argued for the introduction of battery management systems to improve battery performance and efficiency; again, that work concerns efficient use of the battery on hand through a battery management system. Wang et al. [13] optimized the eco-driving of hybrid electric vehicle (HEV) queues; our method differs in that it predicts battery life on the basis of driving style and does not consider HEVs. Sun et al. [14] predicted HEV speed using an exponentially varying stochastic Markov chain and a neural-network-based model; our paper differs in that the battery life aspect is not considered in [14].
Next, the following papers incorporate techniques other than machine learning. Krasopoulos et al. [15] developed a multiobjective optimization method for the speed and torque trajectories of a light EV traveling on a predefined route; our work differs in that it searches for driving profiles that optimize battery life on an arbitrary road. Bozorgi et al. [16] generated an EV speed profile using a two-choice routing algorithm that either reduces driving time through data mining or improves battery energy efficiency; that work is similar in that it generates a velocity profile, but it differs in that it combines data-mining techniques and targets energy efficiency. Zhang et al. [17] developed a cloud-based velocity profile optimizer that determined the driving profile and charge status using a genetic algorithm and dynamic programming, targeting plug-in hybrid buses; our work differs in that it estimates battery life using reinforcement learning (a form of artificial intelligence [AI]). Finally, Song et al. [18] used machine-learning methods for HEV energy management. However, none of the cited authors developed a BEV driving profile that considered battery life using reinforcement learning.
In addition, research applying reinforcement learning to vehicles has been presented. Terapaptommakol et al. [19] proposed a deep Q-network (DQN) method for an autonomous vehicle control system that achieves trajectory design and collision avoidance with respect to obstacles on the road in a virtual environment. Mohammed et al. [20] employed deep reinforcement learning to help unmanned aerial vehicles find air pollution plumes in an equal-sized grid space. Zheng et al. [21] modeled the dynamic scheduling of automated guided vehicles as a Markov decision process (MDP) with mixed decision rules based on a DQN to generate the optimal policy. Although the cited authors applied reinforcement learning, they did not develop a BEV driving profile that considered battery life.
Here, this paper presents a driving profile optimization method that increases BEV battery life. Among the approaches that can extend the battery life of a BEV, driving profile optimization is the most practical from the developer's point of view. At the same time, it offers automakers, which face tightening environmental constraints, a concrete alternative. The profile is generated with a DQN (a reinforcement learning method). Using a DQN instead of a conventional optimization algorithm to optimize the BEV driving profile also broadens the applicability of reinforcement learning methods such as the DQN to the automotive field. This paper evaluates the method using simulations, which verify its applicability.
The present paper is organized into five sections. In Section 2, machine-learning methods including reinforcement learning are described. In Section 3, the proposed, reinforcement learning-based driving profile model is explained. In Section 4, the environment used for performance evaluation of the model, and the results, are described. Finally, in Section 5, conclusions are presented.
2. Machine Learning
Machine learning can solve problems effectively using data-based experience generated in a specific field. A machine-learning method automatically learns rules from data and makes decisions based on those rules; no explicit human programming is required. AI renders computers intelligent, allowing them to learn and infer as humans do; thus, AI includes machine learning. For example, AI systems for autonomous driving follow learned rules when driving.
Machine learning is broadly divided into supervised, unsupervised, and reinforcement learning, depending on the signals received and the feedback required for learning [22–24]. Supervised learning allows predictions, estimations, and classifications using training data. It features both independent and dependent variables and (under supervision) generalizes the relationships between them. Unsupervised learning, on the other hand, searches for hidden patterns or rules in observed data; it generalizes a hidden pattern across a large amount of data. The only variables are input variables; there are no dependent variables and no need for supervision, and there is often no obvious "correct" way to solve a problem and no way to check whether learning is appropriate. Reinforcement learning determines the actions that are optimal under the current conditions (Figure 1) [25, 26]. A reward is given (by an external environment) whenever an agent takes an action, and learning proceeds in directions that maximize the reward. A reward may not be given immediately after an action is taken, so a credit assignment problem may occur; the reward stays the same even if the difficulty of the current problem suddenly exceeds that of the preceding problems.
[figure(s) omitted; refer to PDF]
In reinforcement learning, an agent consists of a policy, a value function, and a model [27]. The policy is an action pattern that determines what to do in a given environment; it thus links the environment to an action. A policy may be deterministic (a certain action is taken in a given state) or stochastic (a probability distribution over actions is considered). The value function predicts the extent of the future reward by reference to the state and the action. The model predicts the next state to be encountered and the size of the reward; both environment and reward models exist. Reinforcement learning algorithms can therefore be divided into those with and without environment models, and those with or without value functions and policies; both model-based and model-free methods have been described. If the policy is perfect, the value function and the intermediate calculations used to form the policy are not required. When the agent learns only a policy (and not a value function), this is termed policy-based learning or policy optimization. By contrast, if the value function is perfect, the agent simply selects the action of highest value in each state, so an optimal policy is readily attained. When an agent learns only value functions (the policy being implicit), this is termed value-based learning or Q-learning. A value-based agent uses data more efficiently, whereas a policy-based agent learns more reliably because it directly optimizes what it prefers.
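For reference, value-based learning of this kind is commonly summarized by the tabular Q-learning update rule (the DQN replaces the table with a neural-network approximator):

$$Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \left[ r_{t+1} + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \right],$$

where $\alpha$ is the learning rate and $\gamma$ the discount factor.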
When applying the reinforcement learning framework to the EV velocity profile optimization problem, the agent is the EV, and the rewards are energy efficiency and a longer battery life. Defining the state, the action, and the reward is critical in reinforcement learning. In general, the state includes features such as the demanded power, the velocity, the state-of-charge deviation, and the torque; here, the state comprises the velocity, the safe distance, and the relative velocity. Our reinforcement learning algorithm employs a DQN with a value-based agent.
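As an illustration only (the container and variable names below are ours, not the paper's), the state and the discrete action set used by such an agent could be represented as follows:

```python
from dataclasses import dataclass

@dataclass
class EVState:
    """State observed by the agent at each step (illustrative names)."""
    velocity_kmh: float           # current vehicle speed
    safe_distance_m: float        # required safe distance to the vehicle ahead
    relative_velocity_kmh: float  # own speed minus the speed of the vehicle ahead

# The action is an acceleration command that changes the vehicle speed,
# discretized so that a DQN (which needs discrete actions) can be used.
ACTIONS_MPS2 = [-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0]  # assumed discretization
```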
3. The Reinforcement Learning-Based EV Driving Profile Model
3.1. Electric Vehicle Model
When a vehicle moves, it experiences resistance in the direction opposite to the direction of travel, including rolling, air, grade, and inertial resistances; all cause energy loss [28]. The rolling resistance is the energy loss attributable to repeated tire rolls (associated with tire deformation and recovery; Equation (1)):
The air resistance may be drag, lift, or a lateral force. The drag force is the principal cause of energy loss. The drag force is horizontal (in the direction opposite of travel) and is caused by shear stress and pressure generated by the vehicle body because of the viscosity of air (Equation (3)) [28, 29]:
The grade resistance is the force acting in the slope-descending direction, that is, the component of the vehicle's weight parallel to the road surface:
The inertial resistance is the force required to increase the vehicle’s speed. All rotating parts in engines and the drive shafts and wheels, and the vehicle per se, experience different rotatory accelerations in the travel direction. The equivalent mass of the rotating parts must thus be considered. The inertial resistance is Equation (5):
The total running resistance of the vehicle is the sum of the rolling, air, grade, and inertial resistance (Equation (6)):
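The numbered equations themselves are not reproduced in this excerpt; the standard textbook forms of these resistances, which the description above follows, are

$$F_{roll} = \mu_r\, m g \cos\theta,\qquad F_{air} = \tfrac{1}{2}\,\rho\, C_d A v^2,\qquad F_{grade} = m g \sin\theta,\qquad F_{inertia} = (m + m_{eq})\,a,$$

$$F_{total} = F_{roll} + F_{air} + F_{grade} + F_{inertia},$$

where $\mu_r$ is the rolling resistance coefficient, $m$ the vehicle mass, $\theta$ the road slope, $\rho$ the air density, $C_d$ the drag coefficient, $A$ the frontal area, $v$ the velocity, $m_{eq}$ the equivalent mass of the rotating parts, and $a$ the acceleration. The exact symbols and coefficients in Equations (1)–(6) of the original may differ.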
The experiments that derive the total running resistance are performed on roads that are not sloped; the total running resistance determined in this way therefore excludes the grade resistance.
3.2. The Battery Life Model
The equation for the battery life model is that of Meng [30]. The model can be expressed as Equation (9) [31, 32]:
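Equation (9) itself is not reproduced in this excerpt; for reference, the semi-empirical LiFePO4 cycle-life model of [32], on which battery life models of this kind are based, has the form

$$Q_{loss} = B \cdot \exp\!\left(\frac{-E_a + \alpha\, C_{rate}}{R\,T}\right) \cdot (A_h)^{z},$$

where $Q_{loss}$ is the percentage capacity loss, $B$ is a pre-exponential factor that depends on the C-rate, $E_a$ is an activation energy, $R$ is the gas constant, $T$ is the absolute temperature, $A_h$ is the ampere-hour throughput, and $z$ is a power-law exponent (reported as about 0.55 in [32]). Whether Equation (9) uses exactly these symbols is not shown in this excerpt.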
In earlier studies on battery design, the discharge rates were set to 0.5, 2, 6, and 10 C when estimating the model parameters. However, our battery life model must handle continuously changing discharge rates. Thus, our model interpolates the battery life equations over a range of discharge rates (0.1–10 C). The model is expressed by Equation (10) after substituting
Figure 2 shows the battery capacity loss over time at various discharge rates using the interpolated battery model. As this estimates the capacity loss at various discharge rates (C-rates), it reflects different EV driving patterns.
[figure(s) omitted; refer to PDF]
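A minimal sketch of the interpolation idea, assuming the pre-exponential factor B has been fitted at the reference C-rates and is interpolated in between (the reference B values below are placeholders, not the paper's fitted numbers; the activation-energy and C-rate coefficients are approximate values from [32]):

```python
import numpy as np

R = 8.314          # gas constant, J/(mol*K)
E_A = 31_700.0     # activation energy, J/mol (approximate, from [32])
ALPHA = 370.3      # C-rate coefficient (approximate, from [32])
Z = 0.55           # power-law exponent (from [32])

# Pre-exponential factor B fitted at reference discharge rates (placeholder values).
REF_C_RATES = np.array([0.5, 2.0, 6.0, 10.0])
REF_B = np.array([30_000.0, 22_000.0, 13_000.0, 15_000.0])

def capacity_loss_percent(c_rate: float, ah_throughput: float, temp_k: float = 288.15) -> float:
    """Interpolate B at the requested C-rate, then evaluate the cycle-life model."""
    b = np.interp(c_rate, REF_C_RATES, REF_B)
    return b * np.exp((-E_A + ALPHA * c_rate) / (R * temp_k)) * ah_throughput ** Z

# Example: capacity loss after 5,000 Ah of throughput at a 1 C discharge rate and 15 degC.
print(capacity_loss_percent(c_rate=1.0, ah_throughput=5_000.0))
```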
3.3. Reinforcement-Based Driving Profile Model
To optimize the driving profile via reinforcement learning, the problem is viewed as a sequential decision-making problem, for which an MDP model is appropriate. The MDP model is given by Equation (14):
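Equation (14) is not reproduced in this excerpt; an MDP is conventionally written as the tuple

$$\mathcal{M} = (S, A, P, R, \gamma),$$

where $S$ is the state space (here the velocity, safe distance, and relative velocity), $A$ the action space (acceleration commands), $P$ the state-transition probability, $R$ the reward function, and $\gamma$ the discount factor.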
The action taken by the agent is acceleration, that is, a change in vehicle speed. The agent accelerates or decelerates within the physically possible range (0–100 km/hr), considering the current speed and the vehicle specifications. The reward function balances the energy efficiency of the vehicle, the battery life, and the distance to the vehicle ahead, and is given by Equation (15):
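Equation (15) is not shown in this excerpt; an illustrative weighted-sum form consistent with the description (the weights and the penalty term are our assumptions, not the paper's) is

$$r_t = w_1\,\eta_t - w_2\,\Delta Q_{loss,t} - w_3\,\max\!\left(0,\; d_{safe,t} - d_t\right),$$

where $\eta_t$ is the instantaneous energy efficiency, $\Delta Q_{loss,t}$ the incremental battery capacity loss, $d_t$ the actual gap to the vehicle ahead, $d_{safe,t}$ the required safe distance, and $w_1, w_2, w_3$ tuning weights.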
4. Evaluation of the Reinforcement-Based EV Driving Profile Model
The selected vehicle is Model A EV of Company H; the specifications are listed in Table 1. Model A features a permanent-magnet synchronous motor that yields 204 PS (metric horsepower) at 3,600 RPM or higher and a maximum torque of 395 N·m from 0 to 3,600 RPM. The torque and RPM scales were modified to reflect the performance of the KONA motor using the motor efficiency map data on hand, and the regenerative-mode efficiency map was obtained by reversing it. The KONA EV battery is of the lithium-ion polymer type, but the battery used in our battery life model was a LiFePO4 battery, because reference data were available.
Table 1
Vehicle specifications.
Group | Kind | Value
Vehicle | Equivalent test weight | 1,814 kg
Vehicle | Front area | 2.00 m²
Vehicle | Drag coefficient | 0.29
Vehicle | L/W/H | 4.18/1.8/1.55 m
Vehicle | Gear ratio (single gear) | 7.981
Electric motor | Electric motor type | IPMSM
Electric motor | Max power | 150 kW
Electric motor | Max torque | 395 N·m (40.27 kg·m)
Electric motor | Max rotational speed | 11,200 rpm
Electric motor | Cooling method | Water cooled
Tire | Aspect ratio | 55
Tire | Radius | 334.15 mm
Tire | Tire pressure | 33 psi
Battery | Voltage | 356 V
Battery | Capacity | 180 Ah
Battery | Energy | 64 kWh
Environment | Temperature | 15°C
Environment | Gradient | 0 rad
Environment | Air density | 1.2 kg/m³
The vehicle was modeled using AVL's Cruise M vehicle simulation software, and the power generated during driving was calculated. The simulations accounted for the loss caused by the total running resistance (rolling, air, grade, and inertial resistances) and for the power recharged by regenerative braking when decelerating. Figure 3 shows the vehicle model built in AVL Cruise M, from which this paper derived the energies consumed and the powers generated. The simulation time step was 10 ms, and the results were collected over 100 runs. The motor controller and inverter were included in the motor block, and the inverter efficiency was set to 92% by reference to the manufacturer's data.
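As a simple illustration of how consumed energy can be accumulated from the 10 ms simulation output (the function below is ours, not part of Cruise M):

```python
def consumed_energy_kwh(power_w_samples, dt_s: float = 0.01) -> float:
    """Integrate battery power over time.

    Positive samples are power drawn from the battery; negative samples are
    power returned by regenerative braking, which reduces net consumption.
    """
    return sum(p * dt_s for p in power_w_samples) / 3.6e6  # J -> kWh

# Example: 120 s of driving sampled every 10 ms at a constant 15 kW draw.
print(consumed_energy_kwh([15_000.0] * 12_000))  # -> 0.5 kWh
```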
[figure(s) omitted; refer to PDF]
The method maintained a safe distance from the vehicle ahead, which helps optimize energy efficiency and battery life. According to the safe-distance standards of KoROAD, when the speed limit is 80 km/hr or above and the vehicle speed is A (km/hr), the safe distance is A m; when the speed limit is 80 km/hr or less and the vehicle speed is B (km/hr), the safe distance is (B − 15) m. In addition, a time-to-collision (TTC) of 1.6 s was used to give the safe distance some flexibility [33]. Of these two standards, the KoROAD standard (the larger safe distance) was followed when the speed was 25 km/hr or more; otherwise, the TTC standard (the smaller safe distance) was followed. Finally, a safe distance of at least 2 m was assumed at low speeds and when stopped.
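A minimal sketch of this safe-distance rule, under the stated assumptions (KoROAD rule at 25 km/hr and above, TTC of 1.6 s below, 2 m floor; the function name and signature are ours):

```python
def safe_distance_m(speed_kmh: float, speed_limit_kmh: float, ttc_s: float = 1.6) -> float:
    """Safe following distance combining the KoROAD rule and a TTC rule."""
    if speed_kmh >= 25.0:
        # KoROAD: the speed value in meters when the limit is 80 km/hr or above,
        # otherwise (speed - 15) meters.
        distance = speed_kmh if speed_limit_kmh >= 80.0 else speed_kmh - 15.0
    else:
        # Below 25 km/hr, fall back to the time-to-collision rule.
        distance = (speed_kmh / 3.6) * ttc_s
    return max(distance, 2.0)  # never less than 2 m, even when stopped
```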
The hardware used for the simulations was a desktop computer with an AMD 3600X processor, 32 GB of main memory, and a GeForce GTX 1080 Ti graphics processing unit. Some sections of the Federal Test Procedure-75 (FTP-75) cycle were used as the driving profile of the vehicle ahead. During reinforcement learning, the test vehicle was assumed to follow this profile while learning how to optimize energy efficiency and battery life and maintaining the aforementioned safe distances. Simulations were conducted to compare energy consumption efficiencies and battery lives when driving on these FTP-75 sections. The energy consumption efficiency (km/kWh) was calculated by dividing the distance traveled (km) by the electrical energy consumed (kWh). In addition, because battery life does not change rapidly, the battery simulations were conducted over a 1-year cycle.
The performance of Q-learning and the DQN (representative reinforcement learning methods) was compared. For a 120 s segment of the FTP-75 driving profile, the Q-learning exploration and total episode counts were 15,000 and 11,000 steps, respectively. Q-learning could not find an optimal value; even when the episodes exceeded 10,000 steps, it frequently produced values that did not fully converge, although the resulting driving profiles were similar. We conclude that Q-learning requires a great deal of exploration and learning to fill a Q-table large enough for the number of possible cases. The DQN was then trialed on the same driving profile sample. Whereas Q-learning showed similar profiles only after more than 10,000 episode steps, the DQN began to show similar profiles after about 400 steps, once enough samples had been gathered in the replay memory. Thus, the DQN attained an optimized value faster and more accurately than Q-learning. In other words, for problems of high complexity, the DQN's replay-memory approach is more effective than simply extending Q-learning.
In DQN learning, the learning rate was 0.001, the target update frequency was 3, the maximum number of episodes was 11,000, the discount factor γ was 0.9, the mini-batch size was 256, and the gradient threshold was 1; these values were selected through a tuning process. In addition, ReLU was selected as the activation function.
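A minimal PyTorch-style sketch of a DQN configured with the hyperparameters reported above (the network width, state and action dimensions, and environment interface are assumptions, not the paper's):

```python
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 3, 7          # assumed: (velocity, safe distance, relative velocity); 7 acceleration levels
GAMMA, LR, BATCH = 0.9, 1e-3, 256    # reported discount factor, learning rate, mini-batch size
TARGET_UPDATE, GRAD_CLIP = 3, 1.0    # reported target update frequency and gradient threshold

def make_net() -> nn.Module:
    # Small fully connected network with ReLU activations, as described in the paper.
    return nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                         nn.Linear(64, 64), nn.ReLU(),
                         nn.Linear(64, N_ACTIONS))

policy_net, target_net = make_net(), make_net()
target_net.load_state_dict(policy_net.state_dict())
optimizer = torch.optim.Adam(policy_net.parameters(), lr=LR)
replay = deque(maxlen=100_000)  # replay memory of (state, action, reward, next_state, done)

def select_action(state, epsilon: float) -> int:
    """Epsilon-greedy action selection over the discrete acceleration set."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(policy_net(torch.as_tensor(state, dtype=torch.float32)).argmax())

def train_step() -> None:
    """One DQN update from a mini-batch sampled out of replay memory."""
    if len(replay) < BATCH:
        return
    batch = random.sample(replay, BATCH)
    s, a, r, s2, done = map(lambda x: torch.as_tensor(x, dtype=torch.float32), zip(*batch))
    q = policy_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + GAMMA * target_net(s2).max(1).values * (1.0 - done)
    loss = nn.functional.smooth_l1_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    nn.utils.clip_grad_norm_(policy_net.parameters(), GRAD_CLIP)  # gradient threshold of 1
    optimizer.step()

# Every TARGET_UPDATE episodes: target_net.load_state_dict(policy_net.state_dict())
```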
Figure 4 shows the energy efficiency results for Model A of Company H (the test model) with and without DQN reinforcement learning in the aforementioned simulation environment. Numbers 1–6 on the x-axis of Figure 4 refer to (arbitrary) Sections 1–6 of roughly 120 s in the FTP-75 profile, and the y-axis is the energy efficiency (km/kWh). All of Cases 1–6 except Case 2 improved when the DQN was applied. The least improved case was Case 4 (4.92% in terms of energy efficiency) and the most improved was Case 3 (15.39%). The energy efficiency of Case 4 was 12.50 km/kWh without the DQN and 13.11 km/kWh with it; the energy efficiency of Case 3 was 10.71 km/kWh without the DQN and 12.35 km/kWh with it. By contrast, Case 2 exhibited better energy efficiency without the DQN: 8.52 km/kWh without versus 8.40 km/kWh with, a decrease of 1.48%. The reason is that, in Case 2, the speed increased or decreased rapidly, so DQN learning did not significantly improve energy efficiency.
[figure(s) omitted; refer to PDF]
Figure 5 shows the battery capacity loss results for Model A of Company H (the test model) with and without DQN reinforcement learning. Numbers 1–6 on the x-axis of Figure 5 refer to the same (arbitrary) Sections 1–6 of roughly 120 s in the FTP-75 profile, and the y-axis is the battery capacity loss (kWh/1,000 km). All of Cases 1–6 except Case 2 improved when the DQN was applied. The least improved case was Case 4 (13.00%) and the most improved was Case 3 (29.14%). The battery capacity loss in Case 4 was 0.055 kWh/1,000 km without the DQN and 0.048 kWh/1,000 km with it; in Case 3, it was 0.094 kWh/1,000 km without the DQN and 0.066 kWh/1,000 km with it. By contrast, Case 2 showed a lower battery capacity loss without the DQN: 0.056 kWh/1,000 km without versus 0.059 kWh/1,000 km with, a worsening of 4.64%. The reason, as before, is that in Case 2 the speed increased or decreased rapidly, so DQN learning had little effect. If the driving profile information were learned sufficiently through long-term driving, the BEV in Case 2 would also be expected to show improved energy consumption efficiency and battery capacity loss; likewise, in the other cases, sufficient learning of the BEV driving profile should yield better energy efficiency and battery capacity loss rates than those obtained in the simulations.
[figure(s) omitted; refer to PDF]
The method for optimizing the driving profile, which assumes there is a vehicle ahead, is activated after the driver gets into the vehicle and starts it. The method is deactivated at the driver's discretion or when the 2 m safe distance cannot be maintained. It cannot be applied when the vehicle changes lanes, overtakes the vehicle ahead, or reverses. When the driving profile optimization method is active, it is effective in improving the energy efficiency and battery life of the BEV.
5. Conclusions
This paper presents a method that optimizes the driving profile to increase BEV battery life. The BEV driving profile employed a DQN reinforcement learning method. This paper verified the applicability of the method using simulations. Our conclusions are:
First, BEV battery life varies with the driving profile. In the simulation results with the proposed optimization method, the improvement in battery capacity loss ranged from 29.14% to −4.64% across cases. In particular, optimizing the driving profile of a BEV is a method of improving battery life that is available from the BEV developer's viewpoint.
Second, the proposed reinforcement-learning-based driving profile optimization method was effective in improving energy efficiency and battery life. Energy efficiency improved by 7.99% on average, and battery capacity loss was reduced by 16.84% on average.
However, the method did not improve energy efficiency or battery life when the speed changed rapidly; this result verified that rapid speed changes negatively affect both the energy efficiency and the battery life of a BEV.
Also, only some FTP-75 sections were used; more profiles should be evaluated to enhance reliability under dynamic and extreme conditions. In addition, other reinforcement learning algorithms featuring approximation techniques, such as Double DQN, Dueling DQN, and D3QN, should be tested. Finally, to apply the method to an actual BEV, the driver's sense of unfamiliarity and driving mode changes (e.g., 4WD to 2WD) must be considered.
Authors’ Contributions
All the authors contributed significantly to this work.
Acknowledgments
This research was supported by the Ministry of Trade, Industry & Energy (MOTIE), Korea Institute for Advancement of Technology (KIAT) through the Core Technology Development Program for The Industries of Vehicle (N20006869, 2019).
[1] Q. Chen, Y. Tian, S. Kang, Y. Yu, J. Ding, Y. Xie, "Sensorless control of permanent magnet synchronous motor for electric vehicle based on phase locked loop," International Journal of Automotive Technology, vol. 22, pp. 1409-1414, DOI: 10.1007/s12239-021-0122-3, 2021.
[2] A. Dhand, K. Pullen, "Review of battery electric vehicle propulsion systems incorporating flywheel energy storage," International Journal of Automotive Technology, vol. 16, pp. 487-500, DOI: 10.1007/s12239-015-0051-0, 2015.
[3] S. Sato, Y. J. Jiang, R. L. Russell, J. W. Miller, G. Karavalakis, T. D. Durbin, K. C. Johnson, "Experimental driving performance evaluation of battery-powered medium and heavy duty all-electric vehicles," International Journal of Electrical Power & Energy Systems, vol. 141,DOI: 10.1016/j.ijepes.2022.108100, 2022.
[4] Z. Wang, J. Zhou, G. Rizzoni, "A review of architectures and control strategies of dual-motor coupling powertrain systems for battery electric vehicles," Renewable and Sustainable Energy Reviews, vol. 162,DOI: 10.1016/j.rser.2022.112455, 2022.
[5] L. Schärtel, B. Reick, M. Pfeil, R. Stetter, "Analysis and synthesis of architectures for automotive battery management," Applied Sciences, vol. 12 no. 21,DOI: 10.3390/app122110756, 2022.
[6] J. Wan, J. Lyu, W. Bi, Q. Zhou, P. Li, H. Li, Y. Li, "Regeneration of spent lithium-ion battery materials," Journal of Energy Storage, vol. 51,DOI: 10.1016/j.est.2022.104470, 2022.
[7] C. Yi, L. Zhou, X. Wu, W. Sun, L. Yi, Y. Yang, "Technology for recycling and regenerating graphite from spent lithium-ion batteries," Chinese Journal of Chemical Engineering, vol. 39, pp. 37-50, DOI: 10.1016/j.cjche.2021.09.014, 2021.
[8] X. Jiang, "Research on electric vehicle charging scheduling strategy based on the multiobjective algorithm," Mathematical Problems in Engineering, vol. 2022,DOI: 10.1155/2022/1959511, 2022.
[9] A. Maheshwari, S. Nageswari, "Real-time state of charge estimation for electric vehicle power batteries using optimized filter," Energy, vol. 254,DOI: 10.1016/j.energy.2022.124328, 2022.
[10] C. Kacperski, R. Ulloa, S. Klingert, B. Kirpes, F. Kutzner, "Impact of incentives for greener battery electric vehicle charginga field experiment," Energy Policy, vol. 161,DOI: 10.1016/j.enpol.2021.112752, 2022.
[11] C. Piao, Z. Wang, J. Cao, W. Zhang, S. Lu, "Lithium-ion battery cell-balancing algorithm for battery management system based on real-time outlier detection," Mathematical Problems in Engineering, vol. 2015,DOI: 10.1155/2015/168529, 2015.
[12] M. S. Ramkumar, C. S. R. Reddy, A. Ramakrishnan, K. Raja, S. Pushpa, S. Jose, M. Jayakumar, "Review on Li-ion battery with battery management system in electrical vehicle," Advances in Materials Science and Engineering, vol. 2022,DOI: 10.1155/2022/3379574, 2022.
[13] S. Wang, P. Yu, D. Shi, C. Yu, C. Yin, "Research on eco-driving optimization of hybrid electric vehicle queue considering the driving style," Journal of Cleaner Production, vol. 343,DOI: 10.1016/j.jclepro.2022.130985, 2022.
[14] C. Sun, X. Hu, S. J. Moura, F. Sun, "Velocity predictors for predictive energy management in hybrid electric vehicles," IEEE Transactions on Control Systems Technology, vol. 23 no. 3, pp. 1197-1204, DOI: 10.1109/TCST.2014.2359176, 2015.
[15] C. T. Krasopoulos, M. E. Beniakar, A. G. Kladas, "Velocity and torque limit profile optimization of electric vehicle including limited overload," IEEE Transactions on Industry Applications, vol. 53 no. 4, pp. 3907-3916, DOI: 10.1109/TIA.2017.2680405, 2017.
[16] A. M. Bozorgi, M. Farasat, A. Mahmoud, "A time and energy efficient routing algorithm for electric vehicles based on historical driving data," IEEE Transactions on Intelligent Vehicles, vol. 2 no. 4, pp. 308-320, DOI: 10.1109/TIV.2017.2771233, 2017.
[17] Z. Zhang, H. He, J. Guo, R. Han, "Velocity prediction and profile optimization based real-time energy management strategy for plug-in hybrid electric buses," Applied Energy, vol. 280,DOI: 10.1016/j.apenergy.2020.116001, 2020.
[18] C. Song, K. Kim, D. Sung, K. Kim, H. Yang, H. Lee, G. Y. Cho, S. W. Cha, "A review of optimal energy management strategies using machine learning techniques for hybrid electric vehicles," International Journal of Automotive Technology, vol. 22, pp. 1437-1452, DOI: 10.1007/s12239-021-0125-0, 2021.
[19] W. Terapaptommakol, D. Phaoharuhansa, P. Koowattanasuchat, J. Rajruangrabin, "Design of obstacle avoidance for autonomous vehicle using deep Q-network and CARLA simulator," World Electric Vehicle Journal, vol. 13 no. 12,DOI: 10.3390/wevj13120239, 2022.
[20] A. F. Y. Mohammed, S. M. Sultan, S. Cho, J.-Y. Pyun, "Powering UAV with deep Q-network for air quality tracking," Sensors, vol. 22 no. 16,DOI: 10.3390/s22166118, 2022.
[21] X. Zheng, C. Liang, Y. Wang, J. Shi, G. Lim, "Multi-AGV dynamic scheduling in an automated container terminal: a deep reinforcement learning approach," Mathematics, vol. 10 no. 23,DOI: 10.3390/math10234575, 2022.
[22] Y. Luo, J. Dai, H. Li, "Research on intelligent decision based on compound traffic field," International Journal of Automotive Technology, vol. 22, pp. 1023-1034, DOI: 10.1007/s12239-021-0092-5, 2021.
[23] Z. Chen, H. Hu, Y. Wu, R. Xiao, J. Shen, Y. Liu, "Energy management for a power-split plug-in hybrid electric vehicle based on reinforcement learning," Applied Sciences, vol. 8 no. 12,DOI: 10.3390/app8122494, 2018.
[24] Z. Du, Q. Miao, C. Zong, "Trajectory planning for automated parking systems using deep reinforcement learning," International Journal of Automotive Technology, vol. 21, pp. 881-887, DOI: 10.1007/s12239-020-0085-9, 2020.
[25] R. S. Sutton, A. G. Barto, Reinforcement Learning: An Introduction, 2018.
[26] Y. Liu, W. Chen, Z. Huang, "Reinforcement learning-based multiple constraint electric vehicle charging service scheduling," Mathematical Problems in Engineering, vol. 2021,DOI: 10.1155/2021/1401802, 2021.
[27] I. S. Oh, Machine Learning, 2019.
[28] D. S. Puma-Benavides, J. Izquierdo-Reyes, R. Galluzzi, J. de Dios Calderon-Najera, "Influence of the final ratio on the consumption of an electric vehicle under conditions of standardized driving cycles," Applied Sciences, vol. 11 no. 23,DOI: 10.3390/app112311474, 2021.
[29] P. A. Tuan, V. D. Quang, "Estimation of car air resistance by CFD method," Vietnam Journal of Mechanics, vol. 36 no. 3, pp. 235-244, DOI: 10.15625/0866-7136/36/3/4176, 2014.
[30] J. S. Meng, Fluid Mechanics, 2003.
[31] Y. Zhao, C. Li, M. Zhao, S. Xu, H. Gao, L. Song, "Model design on emergency power supply of electric vehicle," Mathematical Problems in Engineering, vol. 2017,DOI: 10.1155/2017/9697051, 2017.
[32] J. Wang, P. Liu, J. Hicks-Garner, E. Sherman, S. Soukiazian, M. Verbrugge, H. Tataria, J. Musser, P. Finamore, "Cycle-life model for graphite-LiFePO4 cells," Journal of Power Sources, vol. 196 no. 8, pp. 3942-3948, DOI: 10.1016/j.jpowsour.2010.11.134, 2011.
[33] S. Jeong, C. Ahn, "Risk statistics based target vehicle steering control for collision avoidance in car-to-car," KSAE 2017 Annual Autumn Conference, pp. 662-664, 2017.
Copyright © 2023 Jihoon Kwon et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. https://creativecommons.org/licenses/by/4.0/
Abstract
In the COVID-19 era, automobiles with internal combustion engines are being replaced by eco-friendly vehicles, and the demand for battery electric vehicles (BEVs) has increased explosively. The treatment of spent batteries has received much attention. Battery life can be extended through both efficient charging and efficient driving, and considering the vehicle ahead when driving a BEV effectively prolongs battery life. Several studies have presented eco-friendly driving profiles for BEVs, but the cited authors did not develop a BEV driving profile that considered battery life using reinforcement learning. Here, this paper presents a driving profile optimization method that increases BEV battery life; it does not address how to regenerate spent batteries in an eco-friendly manner. The BEV driving profile is optimized using a deep Q-network (a reinforcement learning method). This paper uses simulations to evaluate the effect of the driving profile on BEV battery life; these verified the applicability of our model. Finally, the speed profile optimization method provided only limited improvement of energy efficiency and battery life in sections with rapid speed changes.
Author Affiliations
1 School of Mechanical Engineering, Pusan National University, Busan 46241, Republic of Korea
2 Department of Electric Vehicle, Dong-Eui Institute of Technology, Busan 47230, Republic of Korea
3 Department of Future Automotive Engineering, Gyeongsang National University, Jinju 52828, Republic of Korea
4 Research and Development Team, Ecoenergy Research Institute Company, Busan 46703, Republic of Korea