1. Introduction
In recent years, wireless communication has attracted considerable research effort from both academia and industry [1, 2], inspiring many practical applications such as the Internet of Things (IoT) and video monitoring [3]. A key feature of these applications is that a massive amount of computation is involved due to the massive number of access nodes [4]. To handle this computational load, cloud computing has been proposed, which assists task computation through wireless transmission [5, 6]. A major limitation is that the latency and power consumption (PoC) become prohibitively high under poor channel conditions, which severely limits the development and application of cloud computing.
To resolve the above disadvantages of cloud computing, mobile edge computing (MEC) has been proposed, which deploys computing resources at the edge nodes (ENs) of the network [7–9]. In this way, users can offload their tasks to a nearby EN through wireless transmission, which leads to lower delay and PoC than cloud computing. A key design issue in an MEC system is the offloading ratio [10, 11], which determines the share of the tasks to be computed at the EN. The fundamental principle of offloading is to jointly utilize the communication and computing resources by achieving a fine trade-off between computation and wireless transmission. Moreover, some advanced wireless techniques have been proposed to further reduce the delay and PoC of computation and transmission [12, 13].
Another emerging technique to assist computation and communication in IoT networks is the deployment of unmanned aerial vehicles (UAVs), which are easy to deploy and offer high flexibility. Moreover, UAVs are becoming increasingly affordable, which has inspired many practical applications [14, 15]. In MEC systems, UAVs can serve high-priority data computation through intelligent path planning and scheduling, exploiting the additional system resources that UAVs bring. The integration of UAVs into MEC systems has attracted much attention from researchers in academia and industry, which motivates this article.
Motivated by the above literature review, this article studies an MEC system with one EN, where multiple UAVs act as users with heavy computation tasks. As the users generally have limited computing capability and power supply, the EN can help compute the tasks and meanwhile supply power to the users through energy harvesting. We optimize the system by proposing a joint offloading and energy-harvesting strategy. Specifically, a deep reinforcement learning (DRL) algorithm is employed to solve the offloading problem, while several analytical solutions are given for the power allocation of energy harvesting among multiple users. In particular, criterion I is equal power allocation, criterion II is designed for equal data rate, and criterion III is based on equal transmission delay. We finally present simulation results to verify the joint strategy for the UAV-aided multiuser MEC system with energy harvesting.
2. System Model
In this paper, we consider the offloading system model in Figure 1, which consists of an edge node (EN) (the notation "CAP" is used in some literature and "EN" in other literature; both have the same meaning and can be used interchangeably) surrounded by multiple UAVs that act as users.
[figure(s) omitted; refer to PDF]
2.1. Local Computation Model
The local computation delay of the
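The equation itself is not reproduced in this version. As a hedged illustration only, a commonly used local computation delay model, assuming UAV $k$ keeps a fraction $1-\alpha_k$ of a task of $D_k$ bits, each bit requires $C$ CPU cycles, and the local CPU frequency is $f_k^{\mathrm{loc}}$ (these symbols are assumptions, not necessarily the paper's notation), is

```latex
T_k^{\mathrm{loc}} = \frac{(1 - \alpha_k)\, D_k\, C}{f_k^{\mathrm{loc}}} .
```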
2.2. Offloading Computation Model
In this paper,
From (3), the transmission power at the
The transmission rate between the
The computation delay at the
The computation delay of all UAVs is
From (8) and (9), the offloading computation delay of the whole system is
Therefore, the optimization objective of the considered MEC network is
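Since equations (3)–(9) and the objective are not reproduced in this version, the following is a hedged sketch of a typical energy-harvesting offloading formulation, given only to make the later discussion interpretable; all symbols ($\eta$: harvesting efficiency, $P_k^{\mathrm{ch}}$ and $t^{\mathrm{ch}}$: charging power and time, $t_k^{\mathrm{tx}}$: transmission time, $h_k$: channel gain, $B$: bandwidth, $\sigma^2$: noise variance, $f^{\mathrm{EN}}$: EN CPU frequency) are assumptions and may differ from the authors' definitions.

```latex
p_k = \frac{\eta\, P_k^{\mathrm{ch}}\, t^{\mathrm{ch}}}{t_k^{\mathrm{tx}}}, \qquad
r_k = B \log_2\!\left(1 + \frac{p_k h_k}{\sigma^2}\right), \qquad
T_k^{\mathrm{off}} = \frac{\alpha_k D_k}{r_k} + \frac{\alpha_k D_k C}{f^{\mathrm{EN}}},
\qquad
\min_{\{\alpha_k\},\, \{P_k^{\mathrm{ch}}\}} \; \max_{k} \, \max\!\left(T_k^{\mathrm{loc}},\, T_k^{\mathrm{off}}\right)
\quad \text{s.t.} \quad \sum_{k=1}^{K} P_k^{\mathrm{ch}} \le P^{\mathrm{EN}}, \quad 0 \le \alpha_k \le 1 .
```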
3. System Optimization
In this section, we demonstrate our optimization scheme for the considered system objective. Specifically, we first utilize the deep Q-network (DQN) algorithm to obtain the task offloading strategy, and then we propose three methods to allocate the charging power to the UAVs in the considered system. The details of our optimization scheme are given as follows.
3.1. Task Offloading Scheme
Due to the complexity of the wireless links in the system, it is difficult to dynamically offload the UAV tasks with traditional methods. Therefore, we exploit the DQN algorithm to obtain the task offloading strategy. Different from the Q-learning algorithm, DQN uses an experience replay pool and two neural networks, namely an evaluation network and a target network, to interact with the training environment and break the correlation of the training data. Moreover, we use a Markov decision process (MDP) to model the considered task offloading problem. In particular, the MDP consists of the state set
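To make the offloading component concrete, the following is a minimal DQN sketch in Python (using PyTorch), assuming a generic vector state and a discrete set of offloading actions. The state, action, and reward definitions of the paper's MDP are not reproduced above, so QNet, state_dim, n_actions, and all hyperparameters are illustrative assumptions rather than the authors' implementation.

```python
# Minimal DQN sketch with an evaluation network, a target network, and experience replay.
# State/action/reward encodings are placeholders, not the paper's exact MDP.
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn


class QNet(nn.Module):
    def __init__(self, state_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)


class DQNAgent:
    def __init__(self, state_dim, n_actions, gamma=0.99, lr=1e-3,
                 buffer_size=10000, batch_size=64, sync_every=100):
        self.eval_net = QNet(state_dim, n_actions)      # evaluation network
        self.target_net = QNet(state_dim, n_actions)    # target network
        self.target_net.load_state_dict(self.eval_net.state_dict())
        self.optim = torch.optim.Adam(self.eval_net.parameters(), lr=lr)
        self.buffer = deque(maxlen=buffer_size)          # experience replay pool
        self.gamma, self.batch_size = gamma, batch_size
        self.sync_every, self.steps = sync_every, 0
        self.n_actions = n_actions

    def act(self, state, eps=0.1):
        # Epsilon-greedy selection over the discrete offloading actions.
        if random.random() < eps:
            return random.randrange(self.n_actions)
        with torch.no_grad():
            q = self.eval_net(torch.as_tensor(state, dtype=torch.float32))
        return int(q.argmax().item())

    def remember(self, s, a, r, s_next, done):
        self.buffer.append((s, a, r, s_next, done))

    def learn(self):
        if len(self.buffer) < self.batch_size:
            return
        batch = random.sample(self.buffer, self.batch_size)
        s, a, r, s2, d = map(np.array, zip(*batch))
        s = torch.as_tensor(s, dtype=torch.float32)
        a = torch.as_tensor(a, dtype=torch.int64).unsqueeze(1)
        r = torch.as_tensor(r, dtype=torch.float32)
        s2 = torch.as_tensor(s2, dtype=torch.float32)
        d = torch.as_tensor(d, dtype=torch.float32)
        q = self.eval_net(s).gather(1, a).squeeze(1)
        with torch.no_grad():
            q_next = self.target_net(s2).max(1).values
        target = r + self.gamma * (1.0 - d) * q_next     # Bellman target from the target network
        loss = nn.functional.mse_loss(q, target)
        self.optim.zero_grad()
        loss.backward()
        self.optim.step()
        self.steps += 1
        if self.steps % self.sync_every == 0:            # periodically sync the target network
            self.target_net.load_state_dict(self.eval_net.state_dict())
```

In use, the state would typically encode the channel gains and remaining task sizes, the discrete actions would encode the offloading decisions, and the reward would be the negative system delay; these mappings are likewise assumptions.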
3.2. Charging Power Allocation Methods
In this part, we describe three methods for allocating the charging power from the EN to
(1) Equal-charge-power allocation method
First, we allocate the charging power to
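The expression is omitted above; with $K$ UAVs and a total charging power budget $P^{\mathrm{EN}}$ (assumed notation), equal-charge-power allocation presumably reduces to

```latex
P_k^{\mathrm{ch}} = \frac{P^{\mathrm{EN}}}{K}, \qquad k = 1, \ldots, K .
```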
(2) Equal-transmission-rate allocation method
Second, we allocate the charging power to
From (16) and (5), we can obtain
By removing the common term of
From (4), we can obtain
Moreover, from (3) and (22), we can obtain
After removing the common term of
Then, by further removing the common term of
For simplicity, we assume the charging time of each
Therefore, from (26), we can obtain
By removing the common term of
From this equation, we have
Then, we can further obtain
By using the relationship of
From this equation, we obtain the charging power allocation result of method 2 as
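The intermediate steps and the final expression are omitted above. As a hedged reconstruction only, under the simple Shannon-rate model sketched in Section 2 and assuming equal charging and transmission times across UAVs, equal rates require equal received SNRs, i.e., $P_k^{\mathrm{ch}} h_k$ must be the same for all UAVs, which together with the total-power constraint would give

```latex
r_1 = \cdots = r_K \;\Longrightarrow\; P_k^{\mathrm{ch}} h_k = P_j^{\mathrm{ch}} h_j \;\; \forall\, k, j
\;\Longrightarrow\;
P_k^{\mathrm{ch}} = \frac{1/h_k}{\sum_{j=1}^{K} 1/h_j}\, P^{\mathrm{EN}} .
```

This is offered only as a plausibility check; the closed form derived in the omitted equations may contain additional factors (e.g., the harvesting efficiency or unequal charging times) that are not visible here.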
(3) Equal-charge-energy allocation method
Third, we allocate the charging power to
From (3), we can obtain
By removing the common term of
Then, by removing the common term of
Since we assume that the charging time of each
Then, we can further obtain
By using the relationship of
From this equation, we obtain the charging power allocation result of method 3 as
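Because the closed-form results of methods 2 and 3 are not reproduced above, a generic numerical sanity check can be useful: allocate the charging budget so that a chosen per-UAV metric (here, the transmission delay, in line with criterion III as described in the abstract) is equalized. The Python sketch below uses nested bisection; it is not the authors' derivation, and `equalizing_allocation`, `delay_of_power`, the example channel gains, and the budget are illustrative assumptions.

```python
# Hedged numerical sketch: equalize a per-UAV delay metric under a total charging budget.
# The delay functions and parameters below are illustrative, not the paper's model.
import math
from typing import Callable, List


def equalizing_allocation(delay_of_power: List[Callable[[float], float]],
                          p_total: float, p_max: float = 1e6,
                          iters: int = 200) -> List[float]:
    """Return per-UAV powers with (approximately) equal delay and sum <= p_total.

    Each delay_of_power[k] must be strictly decreasing in the allocated power.
    """
    def power_for_delay(f: Callable[[float], float], target: float) -> float:
        # Invert a decreasing delay function by bisection on the power axis.
        lo, hi = 0.0, p_max
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            if f(mid) > target:
                lo = mid  # delay still too large: more power is needed
            else:
                hi = mid
        return hi

    # Bisect on the common delay target: a smaller target needs more total power.
    d_lo, d_hi = 1e-12, max(f(1e-9) for f in delay_of_power)
    for _ in range(iters):
        d_mid = 0.5 * (d_lo + d_hi)
        needed = [power_for_delay(f, d_mid) for f in delay_of_power]
        if sum(needed) > p_total:
            d_lo = d_mid  # target too ambitious for the budget: relax it
        else:
            d_hi = d_mid
    return [power_for_delay(f, d_hi) for f in delay_of_power]


if __name__ == "__main__":
    # Illustrative example: Shannon-rate transmission delay with assumed gains and task size.
    gains, noise, bandwidth, bits = [1.0, 0.5], 0.1, 1.0, 50.0
    delays = [
        (lambda p, h=h: bits / (bandwidth * math.log2(1.0 + p * h / noise) + 1e-12))
        for h in gains
    ]
    print(equalizing_allocation(delays, p_total=2.0))
```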
In the next section, we perform some simulations to demonstrate the effectiveness of our proposed scheme on task offloading and charging power allocation.
4. Simulation
In this section, we perform some simulations to demonstrate our proposed scheme on task offloading and charging power allocation. Specifically, the wireless channels in the considered MEC network are modeled as Gaussian channels, and the average channel gain of the wireless links from the UAVs to the EN is set to 1. The variance of the AWGN at the EN is set to 0.1. Moreover, the number of UAVs is set to 2, and the task size of each UAV is set to 50 MB. We set the computing capability of the UAVs to
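For reference, the stated settings can be collected into a small configuration sketch; entries that are truncated or not stated above are left as placeholders rather than guessed.

```python
# Simulation settings as stated in the text; None marks values not reproduced here.
sim_config = {
    "num_uavs": 2,                     # number of UAV users
    "task_size_bits": 50 * 8 * 10**6,  # 50 MB per UAV, assuming 1 MB = 10^6 bytes
    "avg_channel_gain": 1.0,           # average gain of the UAV-to-EN links
    "noise_variance": 0.1,             # AWGN variance at the EN
    "uav_cpu_freq_hz": None,           # UAV computing capability (truncated in the text)
    "en_cpu_freq_hz": None,            # EN computing capability (not stated above)
}
```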
Figure 2 shows the convergence of the proposed strategy with method 1. We can find that the system delay declines rapidly and converges after 15 epochs; for example, the system delay of method 1 decreases from 35 to less than 5. Similarly, Figures 3 and 4 show the convergence of the proposed strategy with methods 2 and 3, respectively. We can find that the system delay converges after 15 epochs and eventually stabilizes below 5. These results demonstrate that the proposed DRL-based optimization strategy can effectively reduce the system delay and find its minimum value.
[figure(s) omitted; refer to PDF]
Figure 5 shows the performance of the proposed strategy with method 1, where the value of
[figure(s) omitted; refer to PDF]
Figure 8 shows the performance of the proposed strategy with method 1, where the number of UAVs ranges from 1 to 5. When the task size of each UAV is 100 MB or 50 MB, the system delay increases as the number of UAVs increases. This is because a larger number of UAVs increases the system burden and the computation delay. For example, the system delay when
[figure(s) omitted; refer to PDF]
5. Conclusions
This article studied an MEC system with one EN, where multiple unmanned aerial vehicles (UAVs) acted as users with heavy computation tasks. As the users generally had limited computing capability and power supply, the EN could help compute the tasks and meanwhile supply power to the users through energy harvesting. We optimized the system by proposing a joint offloading and energy-harvesting strategy. Specifically, a deep reinforcement learning algorithm was implemented to solve the offloading problem, while several analytical solutions were given for the power allocation of energy harvesting among multiple users. In particular, criterion I was equal power allocation, criterion II was designed for equal data rate, and criterion III was based on equal transmission delay. We finally gave some results to verify the joint strategy for the UAV-aided multiuser MEC system with energy harvesting.
Acknowledgments
This work was supported by the Key-Area Research and Development Program of Guangdong Province (No. 2018B010124001).
References
[1] J. Xia, F. Zhou, X. Lai, H. Zhang, H. Chen, Q. Yang, X. Liu, J. Zhao, "Cache aided decode-and-forward relaying networks: from the spatial view," Wireless Communications and Mobile Computing, vol. 2018, DOI: 10.1155/2018/5963584, 2018.
[2] B. Wang, F. Gao, S. Jin, H. Lin, G. Y. Li, "Spatial- and frequency-wideband effects in millimeter-wave massive MIMO systems," IEEE Transactions on Signal Processing, vol. 66, no. 13, pp. 3393-3406, DOI: 10.1109/TSP.2018.2831628, 2018.
[3] X. Hu, C. Zhong, Y. Zhang, X. Chen, Z. Zhang, "Location information aided multiple intelligent reflecting surface systems," IEEE Transactions on Communications, vol. 68, no. 12, pp. 7948-7962, DOI: 10.1109/TCOMM.2020.3020577, 2020.
[4] H. Yan, L. Hu, X. Xiang, Z. Liu, X. Yuan, "PPCL: privacy-preserving collaborative learning for mitigating indirect information leakage," Information Sciences, vol. 548, pp. 423-437, DOI: 10.1016/j.ins.2020.09.064, 2021.
[5] Z. Su, F. Biennier, Z. Lv, Y. Peng, H. Song, J. Miao, "Toward architectural and protocol-level foundation for end-to-end trustworthiness in cloud/fog computing," IEEE Transactions on Big Data, vol. 8, no. 1, pp. 35-47, DOI: 10.1109/TBDATA.2017.2705418, 2022.
[6] M. T. Islam, S. Karunasekera, R. Buyya, "Performance and cost-efficient spark job scheduling based on deep reinforcement learning in cloud computing environments," IEEE Transactions on Parallel and Distributed Systems, vol. 33, no. 7, pp. 1695-1710, DOI: 10.1109/TPDS.2021.3124670, 2022.
[7] X. Lai, L. Fan, X. Lei, Y. Deng, G. K. Karagiannidis, A. Nallanathan, "Secure mobile edge computing networks in the presence of multiple eavesdroppers," IEEE Transactions on Communications, vol. 70, no. 1, pp. 500-513, DOI: 10.1109/TCOMM.2021.3119075, 2022.
[8] J. Zhao, X. Sun, Q. Li, X. Ma, "Edge caching and computation management for real-time internet of vehicles: an online and distributed approach," IEEE Transactions on Intelligent Transportation Systems, vol. 22, no. 4, pp. 2183-2197, DOI: 10.1109/TITS.2020.3012966, 2021.
[9] L. Chen, R. Zhao, K. He, Z. Zhao, L. Fan, "Intelligent ubiquitous computing for future UAV-enabled MEC network systems," Cluster Computing, vol. 2021, no. 1, DOI: 10.1007/s10586-021-03434-w, 2021.
[10] F. Zhou, R. Q. Hu, "Computation efficiency maximization in wireless-powered mobile edge computing networks," IEEE Transactions on Wireless Communications, vol. 19, no. 5, pp. 3170-3184, DOI: 10.1109/TWC.2020.2970920, 2020.
[11] F. Wang, H. Xing, J. Xu, "Real-time resource allocation for wireless powered multiuser mobile edge computing with energy and task causality," IEEE Transactions on Communications, vol. 68, no. 11, pp. 7140-7155, DOI: 10.1109/TCOMM.2020.3011990, 2020.
[12] W. Zhou, D. Deng, J. Xia, Z. Shao, "The precoder design with covariance feedback for simultaneous information and energy transmission systems," Wireless Communications and Mobile Computing, vol. 2018, DOI: 10.1155/2018/8472186, 2018.
[13] Q. Tao, J. Wang, C. Zhong, "Performance analysis of intelligent reflecting surface aided communication systems," IEEE Communications Letters, vol. 24, no. 11, pp. 2464-2468, DOI: 10.1109/LCOMM.2020.3011843, 2020.
[14] S. Arzykulov, A. Celik, G. Nauryzbayev, A. M. Eltawil, "UAV-assisted cooperative & cognitive NOMA: deployment, clustering, and resource allocation," IEEE Transactions on Cognitive Communications and Networking, vol. 8, no. 1, pp. 263-281, DOI: 10.1109/TCCN.2021.3105133, 2022.
[15] R. Akbar, S. Prager, A. R. Silva, M. Moghaddam, D. Entekhabi, "Wireless sensor network informed UAV path planning for soil moisture mapping," IEEE Transactions on Geoscience and Remote Sensing, vol. 60, DOI: 10.1109/TGRS.2021.3088658, 2022.
[16] J. Zhao, Q. Li, Y. Gong, K. Zhang, "Computation offloading and resource allocation for cloud assisted mobile edge computing in vehicular networks," IEEE Transactions on Vehicular Technology, vol. 68, no. 8, pp. 7944-7956, DOI: 10.1109/TVT.2019.2917890, 2019.
Copyright © 2022 Changyu Wang et al. This work is licensed under the Creative Commons Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/).
Abstract
This article studies a mobile edge computing (MEC) system with one edge node (EN), where multiple unmanned aerial vehicles (UAVs) act as users with heavy computation tasks. As the users generally have limited computing capability and power supply, the EN can help compute the tasks and meanwhile supply power to the users through energy harvesting. We optimize the system by proposing a joint offloading and energy-harvesting strategy. Specifically, a deep reinforcement learning (DRL) algorithm is implemented to solve the offloading problem, while several analytical solutions are given for the power allocation of energy harvesting among multiple users. In particular, criterion I is equal power allocation, criterion II is designed for equal data rate, and criterion III is based on equal transmission delay. We finally give some results to verify the joint strategy for the UAV-aided multiuser MEC system with energy harvesting.
Author Affiliations
1 Aviation University Air Force, Changchun, Jilin, China
2 Guangdong New Generation Communication and Network Innovative Institute (GDCNi), Guangzhou, China
3 AI Sensing Technology, Foshan, Guangdong, China
4 Starway Communication, Guangzhou, China