1. Introduction
Driven by the rapid development of the Internet of Things and the mobile Internet, many novel applications are emerging [1]. However, most of these applications are computing-intensive and delay-sensitive, e.g., augmented reality, face recognition, and healthcare [2]. Because of their limited resources, it is very challenging for smart mobile devices (SMDs) to run these applications locally while ensuring users' quality of experience (QoE). How to complete such applications while guaranteeing users' QoE has therefore become a focus of the academic and industrial communities. Mobile edge computing (MEC), which endows the radio access network with computation and storage capabilities, is a promising technique to solve this problem. To improve users' QoE, MEC helps SMDs complete applications by performing some tasks at the edge nodes of the network, which reduces the latency and energy consumption of task execution thanks to the close proximity of edge nodes to SMDs [3, 4].
Extensive research on MEC has been conducted from many perspectives, e.g., single-server MEC models and multiserver MEC models. Regarding single-server MEC models, much work has been done on both single-user models [5–9] and multiuser models [10–15]. For a single-user MEC model, the authors in [5] considered a binary computation offloading model and derived a data consumption rate threshold that determines whether to offload the whole task or execute it entirely locally. Building on that work, partial offloading was introduced into the single-user model to further reduce the energy consumption of SMDs: the task was partitioned into two parts, one of which was offloaded [6, 7]. Considering the stochastic arrival of tasks, the optimal task scheduling policy was derived to minimize the weighted sum of energy consumption and latency [8]. In addition, the energy harvesting technique was incorporated into the MEC model and a Lyapunov optimization-based dynamic computation offloading algorithm was proposed in [9]. For a multiuser MEC model, to satisfy the requirements of as many users as possible in a channel environment with wireless interference, the multiuser offloading system was formulated as a game and shown to admit a Nash equilibrium [10]. Considering inelastic computation tasks and non-negligible task execution durations, the authors in [11] proposed an energy-efficient resource allocation scheme. To deal with the arbitrary arrival of tasks in multiuser MEC systems, task scheduling techniques were utilized in [12, 13]. To reduce the redundant execution of identical tasks and minimize the energy consumption, the storage resource of the base station was utilized in [14]. To further improve users' QoE, wireless power transfer was added to the multiuser MEC model and an access point energy minimization problem was formulated [15].
Regarding multiserver MEC models, many edge cloud architectures are emerging, e.g., flat edge cloud architectures [16–19] and hierarchical edge cloud architectures [20–22]. In flat edge cloud architectures, MEC servers are located at the same tier, whereas in hierarchical edge cloud architectures, MEC servers are located at different tiers, and servers in different tiers have distinct computation and storage capabilities [3, 23]. For a flat edge cloud architecture, the geographic information of SMDs and MEC servers was used to reduce task execution delays in [16]. To maximize the revenue of service providers, resources from different providers were centralized into a resource pool and the revenue was allocated using the core and Shapley values [17]. To minimize the communication latency, a cloudlet selection model based on mixed integer linear programming was developed in [18]. Furthermore, by utilizing the idle computing resources of vehicles, the authors in [19] proposed a decentralized framework named Autonomous Vehicular Edge to increase the computational capabilities of vehicles. For a hierarchical edge cloud architecture, a three-tier MEC model was built on the basis of an LTE-advanced mobile backhaul network [20]. To improve the cost efficiency of network operators, the authors in [21] took the cost disparity of the edge tiers into account. Under a three-tier MEC model, a Stackelberg game was used to allocate the limited computing resources of edge servers to data service subscribers [22].
Hierarchical MEC has been further studied in combination with heterogeneous networks, where the small base station (SBS) and macro base station (MBS) are equipped with MEC servers to serve SMDs. In particular, in [24], offloading decisions and radio resources were jointly optimized to minimize the system energy cost. The framework was then developed further: SBSs were endowed with computing capabilities, and a resource allocation problem was formulated to minimize the energy consumption of mobile users and MEC servers [25]. Based on a heterogeneous network powered by hybrid energy, user association and resource allocation were optimized to maximize the network utility [26]. Considering the variability of mobile devices' capabilities and user preferences, offloading decisions and resource allocation were optimized to maximize the system utility [27]. In addition, a novel information-centric heterogeneous network framework was designed and a virtual resource allocation problem was formulated in [28].
1.1. Motivations and Contributions
Hierarchical architectures of edge servers have an advantage over flat architectures in serving peak loads [23, 29]. Under three-tier MEC architectures, previous studies have focused on system construction [20–22] and maximization of the system utility [26–28]. However, it is equally important to allocate computation and communication resources energy-efficiently under a three-tier MEC architecture so as to improve users' QoE. In this paper, we investigate a multiuser three-tier computing model under heterogeneous networks, in which the SBS, integrated with a relatively small computation capability, and the MBS, integrated with a large computation capability, jointly execute tasks. Based on this hierarchical MEC model, an energy-efficient resource allocation (EERA) scheme is proposed, in which the computation and radio resources are jointly optimized to minimize the energy consumption of all SMDs. The main contributions of this paper are summarized as follows:
(1)
Based on heterogeneous networks, we establish a three-tier computing model, including local computing, SBS computing, and MBS computing. An energy-efficient optimization problem is formulated. Workload placement strategy, transmit power, and computation capability allocation are optimized to minimize SMDs’ energy consumption under task delay constraints.
(2)
We propose an EERA scheme based on the variable substitution technique. In this scheme, the optimal workload distribution and computation capability allocation are first obtained. Then, the optimal SMDs’ transmit power is derived through the variable substitution.
(3)
Numerical simulation experiments are conducted. Simulation results are presented to validate that EERA outperforms other baseline schemes and effectively reduces the SMDs’ energy consumption.
1.2. Organization
The rest of this paper is organized as follows. In Section 2, the three-tier computing model is presented and the energy-efficient optimization problem is formulated. In Section 3, the EERA scheme based on the variable substitution technique is proposed, where the workload distribution across the three tiers, the computation capability allocation of the SBS, and the SMDs' transmit power are jointly optimized to minimize the SMDs' energy consumption. Numerical results are provided in Section 4, and conclusions are presented in Section 5.
2. System Model and Problem Formulation
As shown in Figure 1, the SBS and the MBS are equipped with MEC servers and help the SMDs perform tasks. The SMDs, SBS, and MBS execute tasks together, establishing a three-tier computing architecture. In the first tier, there is
2.1. Local Computing and Transmitting Model
2.1.1. Local Computing Model
The number of bits needed to be processed locally is
We consider a low voltage task execution model and the energy consumed by one CPU cycle is denoted as
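Under the low-voltage model commonly assumed in the MEC literature (see, e.g., [6]), the energy per CPU cycle is often taken to be proportional to the square of the CPU frequency. As a hedged illustration with assumed notation (κ being the effective capacitance coefficient, f_k the local CPU frequency, C_k the CPU cycles required per bit, and l_k^{loc} the locally processed bits, none of which are necessarily the paper's own symbols), the local execution energy and delay would then read

E_k^{loc} = \kappa f_k^{2} C_k l_k^{loc}, \qquad t_k^{loc} = \frac{C_k l_k^{loc}}{f_k}.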
2.1.2. Local Transmitting Model
The channels between the SMDs and the SBS are assumed to be Rayleigh fading channels [6]. We assume that the channel coherence time is larger than the task deadline
The offloading energy consumption is the product of the offloading time and transmit power as
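As a hedged illustration of this statement with assumed notation (B being the channel bandwidth, p_k the transmit power, h_k the channel gain, σ² the noise power, and l_k^{off} the offloaded bits), the uplink rate is commonly modeled by the Shannon formula, so the offloading time and energy would take the form

r_k = B \log_2\!\left(1 + \frac{p_k h_k}{\sigma^2}\right), \qquad t_k^{off} = \frac{l_k^{off}}{r_k}, \qquad E_k^{off} = p_k t_k^{off} = \frac{p_k l_k^{off}}{r_k}.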
2.2. Computation Model
2.2.1. SBS Computing Model
Compared with the MBS, the SBS has limited computation capability because of its smaller physical volume.
The SBS workload from
The total delay of SBS computing is made up of offloading delay and execution delay, which is given by
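In the same assumed notation (with f_k^{SBS} the SBS computation capability allocated to the kth SMD and l_k^{SBS} its SBS workload, both hypothetical symbols rather than the paper's own), this decomposition would read

t_k^{SBS} = t_k^{off} + t_k^{exe} = \frac{l_k^{off}}{r_k} + \frac{C_k l_k^{SBS}}{f_k^{SBS}}.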
2.2.2. MBS Computing Model
The backhaul link delay
The MBS execution latency can be ignored. Therefore, the delay of MBS computing
2.3. Problem Formulation
Based on equations (3) and (6), the energy consumption of the kth SMD
The task of
The energy-efficient resource allocation problem under task delay constraints is formulated as
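Structurally, the problem described above and in the abstract, i.e., minimizing the sum energy consumption of all SMDs over the workload split, the SBS computation capability allocation, and the transmit powers, subject to per-task delay and resource constraints, can be sketched as follows; all symbols here are assumed notation rather than the paper's own:

\mathbf{P1}:\ \min_{\{l_k^{loc},\, l_k^{SBS},\, l_k^{MBS},\, f_k^{SBS},\, p_k\}} \ \sum_{k=1}^{K}\left(E_k^{loc} + E_k^{off}\right)
\quad \text{s.t.}\quad \max\{t_k^{loc},\, t_k^{SBS},\, t_k^{MBS}\} \le T_k,\quad l_k^{loc}+l_k^{SBS}+l_k^{MBS}=L_k,\quad \sum_{k} f_k^{SBS} \le F^{SBS},\quad 0 \le p_k \le p_k^{\max}.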
3. Problem Solution
In this section, to gain some engineering insights, an EERA scheme based on the variable substitution technique [6, 32] is proposed to solve problem P1. First, we fix
According to equations (3), (6), and (12),
Substituting equation (14e) into (15),
3.1. Problem Decomposition
Fixing transmission power
3.2. Energy-Efficient Resource Allocation Scheme
We define the transmission energy consumption per bit as
Lemma 1.
Proof.
See Appendix A.
Define
Lemma 2.
Based on Lemma 1,
(1)
Proof.
The derivative of
Based on Lemma 1 and Lemma 2, we can determine whether problem P1 has a solution, which leads to Lemma 3.
Lemma 3.
Problem P1 is feasible.
Proof.
See Appendix B.
Remark 4.
When
Remark 5.
According to Lemma 1,
Substituting equation (13) into inequality (14b), we get
In order to simplify problem P2,
When
When
According to Lemma 2, three cases are dealt with, respectively, to solve problem P1.
(1)
Lemma 6.
Both problems P2.1 and P2.2 have the same optimal local task load
Proof.
From inequalities (22b) and (23b),
Remark 7.
According to equation (24), the local workload is related to the local computation capability and the task delay constraint. A larger local computation capability brings a larger local workload: to save energy, an SMD processes as many bits as possible locally, provided the processing latency meets the task delay constraint. A looser delay constraint also brings the SMD a larger local workload, since the local device then has more time to execute the task and can thus process more bits locally to save energy.
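A hedged reading of this remark, in the assumed notation of the local computing model above, is that the delay constraint is met with equality, giving a local workload of roughly

l_k^{loc} \approx \frac{f_k T_k}{C_k},

which grows with both the local CPU frequency f_k and the deadline T_k, consistent with the two observations above.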
Lemma 8.
Define
Proof.
See Appendix C.
Remark 9.
According to equation (25),
When
Theorem 10.
The optimal workload distribution
Proof.
By substituting equations (24)–(26) into equation (17d), the optimal allocation of the SBS computation capability and the optimal workload distribution can be obtained.
In the light of Remark 5, the optimal transmission rate
Lemma 11.
Problem P2.1 and problem P2.2 have the same optimal transmission rate
Proof.
According to inequalities (C.3) and (C.9), we choose the lower boundary of
Then, substituting equation (29) into equation (4), we attain the optimal solution of problem P3 by Theorem 12.
Theorem 12.
The optimal transmission power
Remark 13.
As can be seen from equation (30), smaller
(2)
Considering problem P2.1, we have the optimal local workload as Lemma 14.
Lemma 14.
The optimal
Proof.
We have
Similarly to Lemma 14, we obtain the optimal local workload of problem P2.2 as Lemma 15 using inequality (23c).
Lemma 15.
The optimal
Lemma 16.
When
Proof.
Considering problem P2.1, we obtain
According to equation (32), smaller
Remark 17.
There always exists
Remark 18.
Based on Lemma 14, Lemma 15, and Lemma 16, we easily find that problem P2.1 and problem P2.2 have the same optimal
Remark 19.
In the second case of Lemma 2, problem P2.1 and problem P2.2 have the same optimal local workload
Based on Remark 19, the solution of problem P2 can be obtained by Theorem 20.
Theorem 20.
When
Proof.
Substituting equations (31) and (33) into equation (17d), the optimal workload distribution
Considering problem P3, we substitute equations (34) and (35) into
Theorem 21.
When
Proof.
See Appendix D.
It is difficult to solve
(3)
Algorithm 1: Binary search for
Input: error
Output:
Initialization:
1: while
2: if
3:
4: else
5:
6:
7:
8: return
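As an illustrative sketch of the bisection pattern that Algorithm 1 follows, the Python snippet below halves a search interval on a monotone indicator function until its width falls below the error tolerance and returns the midpoint; the function g, the bounds, and the tolerance are hypothetical placeholders, not the paper's actual quantities.

def binary_search(g, lo, hi, eps=1e-6):
    """Generic bisection: find x in [lo, hi] with g(x) close to 0,
    assuming g is monotonically increasing on the interval."""
    while hi - lo > eps:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:      # midpoint overshoots the target value
            hi = mid
        else:               # midpoint undershoots the target value
            lo = mid
    return 0.5 * (lo + hi)

# Usage with a hypothetical monotone function:
# root = binary_search(lambda x: x**3 - 2.0, 0.0, 2.0)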
Theorem 22.
(1) When
(2) When
In (1) and (2), the latency of executing the task entirely locally is denoted as
Proof.
See Appendix E.
Thus far, the optimal solution of problem P1 has been given by the above theorems, and the overall procedure is summarized in Algorithm 2.
3.3. Analysis of Special Cases
In the first four theorems, we consider not only energy minimization but also the delay constraint. That is why resources are still allocated even after the case with the least energy consumption is identified.
Algorithm 2: The Main Process of the Energy-Efficient Resource Allocation Scheme
Step 1: According to Theorem 10 and Theorem 12, calculate
Step 2: Based on equation (19), compute
the results of Step 1.
Step 3:
if
recompute
and Theorem 21.
else if
recompute
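The following Python skeleton mirrors the control flow of Algorithm 2 as a sketch only; every helper name and predicate is a hypothetical placeholder rather than the paper's notation, and the callables must be supplied by the caller.

def eera_main(params, allocate_case1, allocate_case2, allocate_case3,
              case_indicator, in_second_case, in_third_case):
    # Step 1: closed-form allocation for the first case (Theorems 10 and 12).
    alloc = allocate_case1(params)
    # Step 2: evaluate the case indicator of equation (19) from the Step 1 result.
    indicator = case_indicator(alloc, params)
    # Step 3: recompute the allocation if another case of Lemma 2 applies.
    if in_second_case(indicator):
        alloc = allocate_case2(params)   # Theorems 20 and 21
    elif in_third_case(indicator):
        alloc = allocate_case3(params)   # presumably the third case (Theorem 22)
    return alloc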
In Theorem 22, we consider only the latency. In this case, the energy consumed per bit by offloading equals the energy consumed per bit by local execution, i.e., offloading does not reduce the energy consumption of task execution and thus cannot be used to lower the SMDs' energy consumption. However, we can still choose the solution with the least delay to improve users' QoE. Therefore, the task is executed either locally or remotely, depending on which of the two yields the smaller execution latency.
4. Numerical Results
In this section, numerical results are given to evaluate the performance of the proposed EERA scheme, as compared with the following baseline schemes.
(i)
Local Computing Only: all SMDs perform their own tasks by only local computing
(ii)
Full Offloading: all SMDs accomplish their own tasks by fully offloading
(iii)
Computing without MBS: tasks are performed only by the local devices and the SBS server. Resource allocation for minimizing all SMDs' energy consumption takes place only on the local devices and the SBS server
Some parameters are set as follows unless stated otherwise. The task models of all SMDs are set to be identical, i.e.,
4.1. Performances of EERA
In this subsection, we analyze the performance of EERA compared with local-computing-only, full-offloading, and computing-without-MBS. Figures 2–5 present the energy consumption of SMDs under different conditions. The proposed EERA achieves the lowest energy consumption among the four schemes.
[Figures 2–5 omitted; refer to PDF]
Figure 2 plots the sum energy consumption of all SMDs versus the user number
Figure 3 depicts the sum energy consumption of all SMDs versus the computation task size
Figure 4 shows the sum energy consumption versus the channel bandwidth
Figure 5 shows the sum energy consumption versus the distance from the SBS to the users. The energy consumption of all schemes except local-computing-only rises as the distance becomes larger. Similar to Figure 4, local-computing-only is independent of the communication distance. A longer distance leads to a larger path loss, which requires a higher transmit power to meet the time delay constraint. The energy consumption of EERA is less than that of computing-without-MBS because the MBS server lowers the execution latency and the transmit power. Moreover, the gap between EERA and full-offloading widens, which indicates that fewer bits are offloaded as the communication distance grows.
4.2. Impacts of Backhaul Time Delay Coefficient
In this subsection, we analyze the energy consumption with respect to the backhaul time delay coefficient under different conditions, e.g., a varying latency constraint, a varying user number, and a varying computation task size.
Figure 6 plots the sum energy consumption of all SMDs under different backhaul time delay coefficients
Figure 7(a) shows the energy consumption versus user number under different
[figures omitted; refer to PDF]
5. Conclusion
In this paper, we investigated resource allocation mechanisms for a three-tier MEC architecture in heterogeneous networks. Both the MBS and the SBS are integrated with MEC servers and combined with the local devices to form a three-tier computing architecture. Each task from an SMD can be divided into three parts, and the SMD, SBS, and MBS each perform one part. We formulated an optimization problem to minimize the energy consumption of all SMDs under time delay constraints. To improve the efficiency of resource allocation, we proposed an EERA mechanism based on the variable substitution technique, which jointly optimizes the computation and radio resources. The optimal workload placement among SMDs, SBS, and MBS, the optimal computation capability allocation, and the optimal SMDs' transmit power were derived. Finally, numerical simulation results were presented; compared with the benchmark schemes, the proposed EERA scheme reduces the SMDs' energy consumption significantly.
Appendix
A. Proof of Lemma 1
Substituting equation (4) into equation (19), we rewrite
The derivative of
Based on equation (A.2), the derivative of
Define
Furthermore, we get the derivative of Z as
Obviously,
B. Proof of Lemma 3
The energy consumption should be nonnegative, i.e.,
(1)
In the first case of Lemma 2, i.e.,
From equation (16), we assume
Then, we obtain
Define
It is easy to get
(2)
In the second case of Lemma 2, i.e.,
(3)
In the third case of Lemma 2, i.e.,
Based on the above cases (1), (2), and (3), problem P1 is feasible. The proof is completed.
C. Proof of Lemma 8
(1)
Problem P2.1
Substituting equations (5), (8), and (9) into inequality (22c), we obtain
According to equation (17d), we substitute
Then, we get the inequality about
In the light of Lemma 1 and Remark 5, smaller
Considering
Eliminating
We take
(2)
Problem P2.2
Substituting equation (11) into inequality (23c), we get
From equations (5) and (10), we rewrite (C.6) as
Based on equation (17d), we substitute
And thus, the lower boundary of
We take
According to the continuity of
Given above cases (1) and (2), both problems P2.1 and P2.2 have
D. Proof of Theorem 21
Based on Theorem 20, we substitute
For simplifying equation (D.1) and getting the optimal transmission power
In equation (D.1), a smaller
Furthermore, for simplifying the expression of
Then, the derivative of
Define
The second derivative of
Obviously, the second derivative of
In the light of equation (D.6), equation (D.7) shows that the first derivative of
For simplifying expressions, define
(1) When
According to
From equations (4) and (19), we obtain
For simplifying the expression, we set
(2) When
Therefore, from Lemma 1,
E. Proof of Theorem 22
When tasks are executed entirely by local devices, substituting
When tasks are offloaded entirely, according to
Then, the offloading latency
We decompose equation (E.3) into two cases, i.e.,
When
According to equation (8), a smaller
Substituting equation (E.5) into equation (17d), we get
When
Substituting equations (E.5) and (E.6) into equation (E.3), we obtain the optimal latency of total offloading
Therefore, (1) when
[1] W. Shi, J. Cao, Q. Zhang, Y. Li, L. Xu, "Edge computing: vision and challenges," IEEE Internet of Things Journal, vol. 3 no. 5, pp. 637-646, DOI: 10.1109/JIOT.2016.2579198, 2016.
[2] N. Abbas, Y. Zhang, A. Taherkordi, T. Skeie, "Mobile edge computing: a survey," IEEE Internet of Things Journal, vol. 5 no. 1, pp. 450-465, DOI: 10.1109/JIOT.2017.2750180, 2018.
[3] Y. Mao, C. You, J. Zhang, K. Huang, K. B. Letaief, "A survey on mobile edge computing: the communication perspective," IEEE Communications Surveys & Tutorials, vol. 19 no. 4, pp. 2322-2358, DOI: 10.1109/COMST.2017.2745201, 2017.
[4] Y. Ai, M. Peng, K. Zhang, "Edge computing technologies for internet of things: a primer," Digital Communications and Networks, vol. 4 no. 2, pp. 77-86, DOI: 10.1016/j.dcan.2017.07.001, 2018.
[5] W. Zhang, Y. Wen, K. Guan, D. Kilper, H. Luo, D. O. Wu, "Energy-optimal mobile cloud computing under stochastic wireless channel," IEEE Transactions on Wireless Communications, vol. 12 no. 9, pp. 4569-4581, DOI: 10.1109/TWC.2013.072513.121842, 2013.
[6] Y. Wang, M. Sheng, X. Wang, L. Wang, J. Li, "Mobile-edge computing: partial computation offloading using dynamic voltage scaling," IEEE Transactions on Communications, vol. 64 no. 10, DOI: 10.1109/TCOMM.2016.2599530, 2016.
[7] L. Li, Z. Kuang, A. Liu, "Energy efficient and low delay partial offloading scheduling and power allocation for MEC," ICC 2019 - 2019 IEEE International Conference on Communications (ICC), DOI: 10.1109/ICC.2019.8761160, 2019.
[8] T. Q. Thinh, J. Tang, Q. D. La, T. Q. S. Quek, "Offloading in mobile edge computing: task allocation and computational frequency scaling," IEEE Transactions on Communications, vol. 65 no. 8, DOI: 10.1109/TCOMM.2017.2699660, 2017.
[9] Y. Mao, J. Zhang, K. B. Letaief, "Dynamic computation offloading for mobile-edge computing with energy harvesting devices," IEEE Journal on Selected Areas in Communications, vol. 34 no. 12, pp. 3590-3605, DOI: 10.1109/JSAC.2016.2611964, 2016.
[10] X. Chen, L. Jiao, W. Li, X. Fu, "Efficient multi-user computation offloading for mobile-edge cloud computing," IEEE/ACM Transactions on Networking, vol. 24 no. 5, pp. 2795-2808, DOI: 10.1109/TNET.2015.2487344, 2016.
[11] J. Guo, Z. Song, Y. Cui, Z. Liu, Y. Ji, "Energy-efficient resource allocation for multi-user mobile edge computing," GLOBECOM 2017 - 2017 IEEE Global Communications Conference, DOI: 10.1109/GLOCOM.2017.8254044, 2017.
[12] Y. Mao, J. Zhang, S. H. Song, K. B. Letaief, "Power-delay tradeoff in multi-user mobile-edge computing systems," 2016 IEEE Global Communications Conference (GLOBECOM), DOI: 10.1109/GLOCOM.2016.7842160, 2016.
[13] X. Wang, Y. Cui, Z. Liu, J. Guo, M. Yang, "Optimal resource allocation for multi-user MEC with arbitrary task arrival times and deadlines," ICC 2019 - 2019 IEEE International Conference on Communications (ICC), DOI: 10.1109/ICC.2019.8761684, 2019.
[14] Y. Cui, W. He, C. Ni, C. Guo, Z. Liu, "Energy-efficient resource allocation for cache-assisted mobile edge computing," 2017 IEEE 42nd Conference on Local Computer Networks (LCN), pp. 640-648, DOI: 10.1109/LCN.2017.112, 2017.
[15] F. Wang, J. Xu, X. Wang, S. Cui, "Joint offloading and computing optimization in wireless powered mobile-edge computing systems," IEEE Transactions on Wireless Communications, vol. 17 no. 3, pp. 1784-1797, DOI: 10.1109/TWC.2017.2785305, 2018.
[16] R. Yu, J. Ding, S. Maharjan, S. Gjessing, Y. Zhang, D. H. K. Tsang, "Decentralized and optimal resource cooperation in geo-distributed mobile cloud computing," IEEE Transactions on Emerging Topics in Computing, vol. 6 no. 1, pp. 72-84, DOI: 10.1109/TETC.2015.2479093, 2018.
[17] R. Kaewpuang, D. Niyato, P. Wang, E. Hossain, "A framework for cooperative resource management in mobile cloud computing," IEEE Journal on Selected Areas in Communications, vol. 31 no. 12, pp. 2685-2700, DOI: 10.1109/JSAC.2013.131209, 2013.
[18] L. Liu, Q. Fan, "Resource allocation optimization based on mixed integer linear programming in the multi-cloudlet environment," IEEE Access, vol. 6, pp. 24533-24542, DOI: 10.1109/ACCESS.2018.2830639, 2018.
[19] J. Feng, Z. Liu, C. Wu, Y. Ji, "AVE: autonomous vehicular edge computing framework with ACO-based scheduling," IEEE Transactions on Vehicular Technology, vol. 66 no. 12, pp. 10660-10675, DOI: 10.1109/TVT.2017.2714704, 2017.
[20] A. Kiani, N. Ansari, "Toward hierarchical mobile edge computing: an auction-based profit maximization approach," IEEE Internet of Things Journal, vol. 4 no. 6, pp. 2082-2091, DOI: 10.1109/JIOT.2017.2750030, 2017.
[21] E. El Haber, T. M. Nguyen, C. Assi, "Joint optimization of computational cost and devices energy for task offloading in multi-tier edge-clouds," IEEE Transactions on Communications, vol. 67 no. 5, pp. 3407-3421, DOI: 10.1109/TCOMM.2019.2895040, 2019.
[22] H. Zhang, Y. Xiao, S. Bu, D. Niyato, F. R. Yu, Z. Han, "Computing resource allocation in three-tier IOT fog networks: a joint optimization approach combining stackelberg game and matching," IEEE Internet of Things Journal, vol. 4 no. 5, pp. 1204-1215, DOI: 10.1109/JIOT.2017.2688925, 2017.
[23] L. Tong, Y. Li, W. Gao, "A hierarchical edge cloud architecture for mobile computing," IEEE INFOCOM 2016 - The 35th Annual IEEE International Conference on Computer Communications, DOI: 10.1109/INFOCOM.2016.7524340, 2016.
[24] K. Zhang, Y. Mao, S. Leng, Q. Zhao, L. Li, X. Peng, L. Pan, S. Maharjan, Y. Zhang, "Energy-efficient offloading for mobile edge computing in 5G heterogeneous networks," IEEE Access, vol. 4, pp. 5896-5907, DOI: 10.1109/ACCESS.2016.2597169, 2016.
[25] Y. Dai, D. Xu, S. Maharjan, Y. Zhang, "Joint computation offloading and user association in multi-task mobile edge computing," IEEE Transactions on Vehicular Technology, vol. 67 no. 12, pp. 12313-12325, DOI: 10.1109/TVT.2018.2876804, 2018.
[26] Q. Han, B. Yang, G. Miao, C. Chen, X. Wang, X. Guan, "Backhaul-aware user association and resource allocation for energy-constrained hetnets," IEEE Transactions on Vehicular Technology, vol. 66 no. 1, pp. 580-593, DOI: 10.1109/tvt.2016.2533559, 2017.
[27] X. Lyu, H. Tian, C. Sengul, P. Zhang, "Multiuser joint task offloading and resource optimization in proximate clouds," IEEE Transactions on Vehicular Technology, vol. 66 no. 4, pp. 3435-3447, DOI: 10.1109/TVT.2016.2593486, 2017.
[28] Y. Zhou, F. R. Yu, J. Chen, Y. Kuo, "Resource allocation for information-centric virtualized heterogeneous networks with in-network caching and mobile edge computing," IEEE Transactions on Vehicular Technology, vol. 66 no. 12, pp. 11339-11351, DOI: 10.1109/TVT.2017.2737028, 2017.
[29] Y. Lan, X. Wang, C. Wang, D. Wang, Q. Li, "Collaborative computation offloading and resource allocation in cache-aided hierarchical edge-cloud systems," Electronics, vol. 8 no. 12, DOI: 10.3390/electronics8121430, 2019.
[30] C. You, K. Huang, H. Chae, "Energy efficient mobile cloud computing powered by wireless energy transfer," IEEE Journal on Selected Areas in Communications, vol. 34 no. 5, pp. 1757-1771, DOI: 10.1109/JSAC.2016.2545382, 2016.
[31] S. Bi, Y. J. Zhang, "Computation rate maximization for wireless powered mobile-edge computing with binary computation offloading," IEEE Transactions on Wireless Communications, vol. 17 no. 6, pp. 4177-4190, DOI: 10.1109/TWC.2018.2821664, 2018.
[32] S. Boyd, L. Vandenberghe, Convex Optimization, DOI: 10.1017/CBO9780511804441, 2004.
[33] P. Zhao, H. Tian, C. Qin, G. Nie, "Energy-saving offloading by jointly allocating radio and computational resources for mobile edge computing," IEEE Access, vol. 5, pp. 11255-11268, DOI: 10.1109/ACCESS.2017.2710056, 2017.
[34] C. You, K. Huang, H. Chae, B.-H. Kim, "Energy-efficient resource allocation for mobile-edge computation offloading," IEEE Transactions on Wireless Communications, vol. 16 no. 3, pp. 1397-1411, DOI: 10.1109/TWC.2016.2633522, 2017.
Copyright © 2020 Yongsheng Pei et al. This work is licensed under the Creative Commons Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/).
Abstract
Mobile edge computing (MEC) is a promising technique to meet the demands of computing-intensive and delay-sensitive applications by providing computation and storage capabilities in close proximity to mobile users. In this paper, we study energy-efficient resource allocation (EERA) schemes for hierarchical MEC architecture in heterogeneous networks. In this architecture, both small base station (SBS) and macro base station (MBS) are equipped with MEC servers and help smart mobile devices (SMDs) to perform tasks. Each task can be partitioned into three parts. The SMD, SBS, and MBS each perform a part of the task and form a three-tier computing structure. Based on this computing structure, an optimization problem is formulated to minimize the energy consumption of all SMDs subject to the latency constraints, where radio and computation resources are considered jointly. Then, an EERA mechanism based on the variable substitution technique is designed to calculate the optimal workload distribution, edge computation capability allocation, and SMDs’ transmit power. Finally, numerical simulation results demonstrate the energy efficiency improvement of the proposed EERA mechanism over the baseline schemes.