1. Introduction
Mobile edge computing (MEC) has prevailed in recent years as a means of deploying computing resources at the network edge, in proximity to end-user devices [1,2]. End users request task offloading to improve their service experience [3]. However, the limited resources deployed at the edge can be overwhelmed by the ever-increasing number of user devices (UDs). Furthermore, the data size of different tasks ranges from tens of kilobytes to hundreds of megabytes, and the satisfactory completion time of these tasks ranges from tens of milliseconds to several seconds. Therefore, an important research topic is how to effectively utilize the limited resources at the edge to provide satisfactory service quality for tasks with varied requirements.
Task offloading combined with resource allocation has garnered significant research attention in recent years [4]. Ensuring that critical tasks can be processed in a timely manner in delay-sensitive scenarios [5,6], such as automated driving [7], industrial manufacturing [8] and smart cities [9], is of paramount importance. As such, the allocation of bandwidth and computing resources should be biased towards tasks with higher requirements and/or importance. While previous research has focused on minimizing task execution time [10,11,12,13] and energy consumption [14], there have been relatively few studies that focus on resource allocation among tasks with significant differences in data size. Naouri et al. [15] differentiated tasks into high-computation and high-communication tasks and proposed processing high-communication tasks at the edge or nearby peer devices, while offloading high-computation tasks to the cloud. Some prior work [10,11,16] derived closed-form solutions for bandwidth and computing resource allocation in time-division multiple access (TDMA) MEC systems, indicating that the share of bandwidth allocated to an offloaded task is proportional to its data size. However, these studies have not thoroughly examined the impact of significant differences in data size on resource allocation, or how to address this issue if necessary.
While some articles [10,11] have attempted to differentiate the weights of mobile devices (tasks) to emphasize the differences in task requirements, to our knowledge, they, like other existing works, have overlooked the fact that tasks with small data sizes may be crowded out of resource allocation by tasks with immense sizes, thereby losing the opportunity to be offloaded. In this paper, we investigate the offloading decision and resource allocation mechanism among tasks with significant differences in data size, and we propose a scheme to prevent tasks with immense sizes from monopolizing system resources, while still allowing tasks with small sizes to contend for system resources. The main contributions of this paper are as follows:
To address the issue of tasks with immense sizes monopolizing system resources, we introduce the concept of an emergency factor to support tasks with small sizes in contending for system resources. The joint optimization of offloading decisions and edge resource allocation among tasks with significant differences in data size is formulated as a mixed-integer nonlinear programming problem.
We decompose the MINLP problem into two subproblems and propose a linear-search-based coordinate descent method and a bisection-search-based resource allocation algorithm to address the offloading decision and resource allocation subproblems, respectively.
Simulation results demonstrate the effectiveness of our proposed scheme in regulating offloading decisions and resource allocation when there is a significant difference in the data size of the offloaded tasks. When the tasks are of regular size, our scheme achieves the same minimum delay as the compared baseline schemes.
The remainder of this paper is organized as follows. Section 2 discusses the related work. Section 3 shows the details of the proposed system model. Section 4 introduces the optimal solution based on the KKT conditions and CD. Finally, Section 5 presents the simulation results and analyses, and we conclude our work in Section 6.
2. Related Work
Existing research on task offloading and resource allocation has focused on various objectives. Some studies aim to minimize task completion time in the system. Ren et al. [11] designed a subgradient-based algorithm to reduce latency for mobile devices with divisible compression tasks. Xing et al. [17] minimized task execution time with the help of helpers in a TDMA system, using relaxation-based and decoupling-based approaches to obtain a suboptimal solution. Zhao et al. [18] jointly optimized beamforming and resource allocation to minimize the maximal delay encountered by users in the mmWave MEC system. Ning et al. [19] incorporated cloud and mobile edge computing and formulated a computation delay minimization problem with limited bandwidth resources. Li et al. [20] minimized service delay with a user-mobility prediction model in heterogeneous networks. Chen and Hao [21] minimized total task duration in software-defined ultradense networks. Tang and Wong [22] proposed a deep reinforcement learning (DRL) method to decide on the task offloading issue and introduced computation and transmission queues to model delays encountered in the MEC system. Edge computing resources were equally allocated for tasks at edge nodes, which implied that the computing resources allotted to current tasks would be reduced with the arrival of new tasks.
In addition, part of the current literature focuses on designs that minimize energy consumption in MEC systems. You et al. [23] studied an energy-efficient wireless resource allocation policy for computation offloading in both TDMA and orthogonal frequency-division multiple access (OFDMA) systems. Chen et al. [24] jointly optimized bandwidth and computation resource allocation to minimize UDs’ expected energy consumption, considering caching. The initial problem was formulated as an MINLP, and the caching decision subproblem was decoupled and solved by a learning-based deep neural network. Dai et al. [25] designed a DRL method to learn a joint offloading decision and edge-computing resource allocation policy to minimize energy consumption. Chen et al. [26] incorporated the Monte Carlo tree search (MCTS) algorithm with a deep neural network to learn the optimal bandwidth and computing resource allocation policy. Yan et al. [27] investigated the offloading and resource allocation problem for tasks under the general dependency model. An actor–critic-based DRL method was proposed to generate the offloading actions.
Furthermore, there have been several efforts to design task offloading and resource allocation schemes for other optimization goals. Chen et al. [28] established a Stackelberg-game-based incentive mechanism to motivate the BS to allocate resources more reasonably. Bi and Zhang [16] modeled the computation rate maximization problem in wireless-powered TDMA edge networks as an MINLP problem, which was further decoupled and solved with an ADMM-based method and the coordinate descent (CD) method. Huang et al. [29] decoupled the computation rate maximization problem into a computation offloading decision subproblem and a wireless resource allocation subproblem. They solved the offloading decision subproblem with a DNN method and the wireless resource allocation subproblem with a one-dimensional bisection search method. Furthermore, Bi et al. [30] adopted Lyapunov optimization theory to decompose the maximization of the long-term weighted sum of the computation rates of all devices into a single-step optimization problem solved with an actor–critic-based deep reinforcement learning method. While some existing research has considered caching [2,24,31] in edge networks and user mobility issues [13,20,32], these topics fall outside the scope of this paper.
The characteristics of part of the discussed works are summarized in Table 1. However, they all ignore that, in resource allocation, the share allocated to small-data-volume tasks can be crowded out by large-data-volume tasks. Therefore, this paper reveals how this happens and presents our solution to eliminate this effect. Since this paper focuses on tasks with significant differences in data size, the resulting explosive state space would pose a substantial challenge to model training for deep reinforcement learning methods based on neural networks. Therefore, deep reinforcement learning algorithms are not considered in this paper.
3. System Model
As shown in Figure 1, the system works in an OFDMA manner and consists of a base station (BS) serving M UDs. The BS is endowed with a bandwidth B (in hertz) and connected to an edge server with a given computing capacity (in CPU cycles per second). The terms AP, BS and edge are used interchangeably in the remainder of this article. Multiple UDs undertaking several types of computation tasks contend for resources to shorten task completion time. In this paper, we classify computation tasks into four categories based on their data size: tasks of small size, tasks of regular size, tasks of large size and tasks of immense size.
A computation task is characterized by a quadruplet comprising: the number of CPU cycles required to process one bit of the task on UD m; the data size of the task (in bits); the emergency factor of the task, which can be utilized to regulate resource allocation and offloading decisions; and the maximum acceptable processing delay. It is worth mentioning that although the emergency factor is described as an inherent part of the task, it can also be defined as a configurable parameter managed by the BS. The BS collects the task information from all UDs requesting task offloading. Instead of an arbitrarily divisible task processing model, a binary task processing model is considered in this paper: a task is either completed locally or at the AP, according to the offloading decision of UD m. Once a task is offloaded, the AP has to allocate a share of its wireless bandwidth and a share of its computing resources to it. The resources allocated to UDs must not exceed the AP’s capacity,
$\sum_{m=1}^{M} x_m b_m \le B$, (1)
$\sum_{m=1}^{M} x_m \alpha_m \le F$, (2)
where $x_m \in \{0,1\}$ denotes the offloading decision of UD $m$, $b_m$ and $\alpha_m$ denote the bandwidth and edge computing resources allocated to UD $m$'s task, and $F$ denotes the edge server's computing capacity.
In this study, we focused on the offloading decision and resource allocation within a single scheduling slot. Each UD was assumed to have at most one task to process, and the channel between each UD and the base station was assumed to be quasi-static.
3.1. Local Computing
When a task has to be processed locally, UD m exploits its own computing resources to process the task. The local computing speed $f_m$ should not violate the capacity constraint,
$f_m \le f^{\max}$, (3)
where $f^{\max}$ is the maximum computing speed (in CPU cycles per second) of the UDs in the system. Then, with $C_m$ denoting the CPU cycles required per bit and $L_m$ the data size of the task in bits, the local processing delay can be written as
$t_m^{l} = \frac{C_m L_m}{f_m}$. (4)
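As a quick sanity check of the local computing model, the delay in Eq. (4) can be computed directly; the symbol names and numbers in this sketch are illustrative:

```python
def local_delay(cycles_per_bit, data_bits, cpu_freq_hz):
    """Local processing delay, Eq. (4): total required CPU cycles
    divided by the UD's local computing speed."""
    return cycles_per_bit * data_bits / cpu_freq_hz

# e.g. a 1-megabit task at 500 cycles/bit on a 1 GHz local CPU
delay = local_delay(500, 1e6, 1e9)  # 0.5 s
```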
3.2. Edge Computing
UD m utilizes the allocated bandwidth $b_m$ to upload the task data for edge computing. Hence, the maximum achievable transmission rate can be calculated by [10]
$r_m = b_m \log_2\!\left(1 + \frac{p_m h_m}{\sigma^2}\right)$, (5)
where $p_m$ represents UD m's transmit power, $h_m$ denotes the channel gain between UD m and the AP, and $\sigma^2$ indicates the background noise power. Accordingly, with $L_m$ denoting the data size of the task in bits, the corresponding transmission delay can be expressed as
$t_m^{tx} = \frac{L_m}{r_m}$. (6)
The BS allocates $\alpha_m$ of its computing resources (in CPU cycles per second) to process the task after the transmission. With $C_m$ denoting the CPU cycles required per bit, the corresponding computation delay can be denoted as
$t_m^{c} = \frac{C_m L_m}{\alpha_m}$. (7)
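The edge-side delay model of Eqs. (5)–(7) can be sketched as follows; the numeric values in the usage example are illustrative, not the paper's settings:

```python
import math

def uplink_rate(bw_hz, tx_power_w, channel_gain, noise_w):
    # Shannon-capacity bound over the allocated bandwidth, cf. Eq. (5)
    return bw_hz * math.log2(1.0 + tx_power_w * channel_gain / noise_w)

def transmission_delay(data_bits, rate_bps):
    # Upload time of the task data, cf. Eq. (6)
    return data_bits / rate_bps

def edge_compute_delay(data_bits, cycles_per_bit, edge_freq_hz):
    # Computation time on the allocated edge CPU share, cf. Eq. (7)
    return cycles_per_bit * data_bits / edge_freq_hz

# 1 MHz of bandwidth, SNR of 3 (i.e. log2(4) = 2 bits/s/Hz):
r = uplink_rate(1e6, 1.0, 3e-7, 1e-7)          # ~2e6 bps
t = transmission_delay(1e6, r) + edge_compute_delay(1e6, 500, 1e9)
```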
3.3. Problem Formulation
We aim to maximize the processing time gain harvested from task offloading. Hereafter, the terms revenue and reward are used interchangeably to denote this objective. The joint task offloading and resource allocation problem at the edge with constrained bandwidth and computing resources is formulated as a mixed-integer nonlinear programming (MINLP) problem, which is denoted as
(8)
(P0)
Constraints (1) and (2) bound the bandwidth and computing resource allocation at the edge, while the remaining constraints cap each UD's maximum local computing speed and maximal transmit power.
The formulated problem (P0) is intractable due to the coupling of the offloading decisions with the bandwidth and computing resource allocation variables. However, once the offloading decision vector is determined, (P0) reduces to a convex optimization problem.
4. Decoupled Computation Offloading and Resource Allocation with Coordinate Descent (CD)
Inspired by [17], we adopted the CD method [16] to obtain the offloading scheme, in which a binary variable indicates whether each UD m offloads or not. The core idea of the CD-based scheme is to iteratively fix all but one coordinate of the offloading vector (that is, to keep the values from the ith iteration) and find the local optimum over the remaining coordinate. With the generated offloading scheme, the initial problem (P0) can be divided into two parts, i.e., a local processing part (P1) and an edge resource allocation part (P2). The whole procedure is summarized in Algorithm 1.
Algorithm 1: Linear CD-Aided Optimal Resource Allocation |
Input: task information of all UDs, sorted in ascending order. Output: offloading decision and corresponding resource allocation scheme.
|
For each candidate offloading decision, we solve the corresponding (P1) and (P2) and obtain a feasible solution to (P0). The computation complexity of our proposed bisection-search-based resource allocation scheme in Algorithm 2 is logarithmic in the required search accuracy [16]. In the worst case, the CD method repeatedly solves (P1) and (P2) with Algorithm 2 to search for the offloading decision scheme with maximized system gain. For comparison, we used the brute-force search method against our CD-based algorithm. The brute-force method enumerates all offloading schemes and solves the corresponding (P1) and (P2) for each. It is never a time-friendly solution, however, because the computation time grows exponentially with the number of UDs M.
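A minimal sketch of the one-bit coordinate descent idea: the `reward` callback stands in for solving (P1)/(P2) under a fixed offloading vector, and the toy reward in the usage is only for demonstration:

```python
def coordinate_descent(x0, reward):
    """One-bit coordinate descent over binary offloading decisions: repeatedly
    flip the single decision that improves the reward, until no flip helps.
    `reward(x)` stands in for solving (P1)/(P2) under the fixed decision x."""
    x = list(x0)
    best = reward(x)
    improved = True
    while improved:
        improved = False
        for m in range(len(x)):
            x[m] ^= 1                      # tentatively flip UD m's decision
            r = reward(x)
            if r > best:
                best, improved = r, True   # keep the improving flip
            else:
                x[m] ^= 1                  # revert
    return x, best

# toy reward that peaks when exactly two UDs offload
x, best = coordinate_descent([0, 0, 0, 0], lambda x: -abs(sum(x) - 2))
```

Each accepted flip can only increase the reward, so the loop terminates at a local optimum of the binary offloading vector.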
Algorithm 2: Bisection-Search-Based Resource Allocation |
Input: the offloading decision and task information of the offloading UDs. Output: the optimal bandwidth and computing resource allocation.
|
4.1. Local Processing Part
Once the offloading decision is determined, the subproblem for the tasks processed locally can be extracted and further expressed as:
(P1)
The constraint in (P1) represents the local processing capacity. It is quite intuitive to infer from (P1) that a UD will greedily utilize all of its computing resources to process its task locally, so the best local computing resource allocation for each locally processing UD is its full computing capacity. The UDs are thereby partitioned into a local processing set and an offloading set. Thus, given the offloading decision, (P1) is solved directly. The remaining problem is how to solve (P2), which is described in the next section.

4.2. Edge Processing Part
For tasks offloaded to the edge, the BS allocates its available resources to accommodate these requests. The optimal resource allocation problem between offloading UDs can be denoted as:
(9)
(P2)
(P2) is a convex optimization problem in the resource allocation variables for a given offloading decision.
Please see the detailed proof in Appendix A. □
To obtain the optimal allocation scheme for (P2), Lagrange multipliers are introduced for the bandwidth and computing resource constraints, and the Lagrangian function is formulated as:
The Karush–Kuhn–Tucker (KKT) conditions are denoted as:
(10)
(11)
(12)
(13)
(14)
(15)
(16)
(17)
(18)
(19)
(20)
At the optimum, the edge resources are exhausted, because any vacant bandwidth or computing resources are always reallocated among the offloading UDs. The optimal allocation scheme for each offloading task under the optimal multipliers is:
(21)
where the allocation share of each UD is proportional to a task-dependent weight determined by the optimal multipliers.
Please see the detailed proof in Appendix B. □
For the sake of illustration, two auxiliary functions used to obtain the optimal multipliers are introduced and denoted as
(22)
(23)
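The optimal multipliers are obtained from the roots of these auxiliary functions; for a monotonically decreasing function, a simple bisection search suffices. A generic sketch (the example function and bracket are illustrative):

```python
def bisect_root(g, lo, hi, tol=1e-9):
    """Bisection search for the root of a monotonically decreasing function g
    on [lo, hi], assuming g(lo) > 0 > g(hi) -- the setting of Algorithm 2."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid   # root lies to the right of mid
        else:
            hi = mid   # root lies to the left of mid
    return 0.5 * (lo + hi)

# e.g. g(t) = 4 - t is decreasing with root at t = 4
root = bisect_root(lambda t: 4.0 - t, 0.0, 10.0)
```

Each iteration halves the bracket, giving the logarithmic complexity in the search accuracy noted for Algorithm 2.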
Both auxiliary functions are monotonically decreasing in their respective multipliers. Thus, the optimal multipliers can be obtained by a bisection search on the auxiliary functions. Accordingly, the proposed resource allocation scheme is summarized in Algorithm 2.

5. Simulation and Results
In this section, we compare the performance of our proposed linear CD-based algorithm (LCD) with existing schemes and demonstrate the role of the emergency factor in offloading decisions and resource allocation. Additionally, we compare our approach to a DRL-based scheme [29], where the data size of a task was drawn from a distribution with probability p on regular size and the remaining probability spread over small, large and immense sizes. Our scheme penalized tasks of immense size by assigning them an emergency factor smaller than one, computed from the average regular data size, a penalty coefficient and the task's data volume. Conversely, we supported tasks of small size by assigning them an emergency factor larger than one, computed from an enhancement coefficient and the task's data volume. The baseline schemes used in this paper included:
All offload (AO): all tasks are processed at the edge server.
All local (AL): all tasks are processed locally.
Random offload (RO): the offloading decision is randomly generated and the resource allocation decisions are obtained with Algorithm 2.
Brute-force search method (BF): searches all the offloading schemes and selects the one with the highest reward as the final solution.
Naive coordinate descent (NCD): directly enters the “while loop” [16] of Algorithm 1 with a randomly initialized offloading decision.
Deep-reinforcement-learning-based scheme (DRL): uses channel conditions and task data sizes to make offloading decisions and utilizes the critic module to obtain the resource allocation scheme with minimum delay; this is slightly different from [29].
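The emergency-factor settings described above can be illustrated with a small sketch. The power-law form below is an assumed stand-in for the paper's exact formula (which uses the average regular data size and separate penalty/enhancement coefficients); only the direction of the bias is taken from the text:

```python
def emergency_factor(data_bits, regular_mean_bits, exponent=1.0):
    """Illustrative emergency factor (assumed form, NOT the paper's exact
    formula): tasks larger than the average regular size get a factor < 1
    (penalized), smaller tasks get a factor > 1 (supported). `exponent`
    plays the role of the penalty/enhancement coefficient."""
    return (regular_mean_bits / data_bits) ** exponent

# a 100-Mb immense task against a 1-Mb regular average -> factor 0.01
# a 0.1-Mb small task against a 1-Mb regular average   -> factor 10
```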
5.1. Simulation Setting
By default, there were 10 UDs in our system. The channel gain followed the large-scale fading model of [30], parameterized by the antenna gain of a UD, the carrier frequency, the distance in meters between UD m and the BS, and a path loss exponent, with small-scale fading following a Rayleigh distribution with unit variance. The BS had a given bandwidth and computing capacity by default, and each UD had a maximal transmission power (in watts) and a local computing capacity within a given range (in cycles/second). The emergency factor was set to 1 by default, along with a maximal acceptable service delay. We considered tasks of four categories, and the per-bit processing density was randomly taken from a given range (in cycles/bit).
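The channel model can be sketched as follows; the path-loss form and all parameter values are placeholders, since the paper's exact settings were not fully recoverable here:

```python
import math
import random

def large_scale_gain(dist_m, antenna_gain=4.11, carrier_hz=915e6, path_loss_exp=2.8):
    """Illustrative free-space-style path loss (assumed form and placeholder
    parameter values, not the paper's exact settings)."""
    return antenna_gain * (3e8 / (4 * math.pi * carrier_hz * dist_m)) ** path_loss_exp

def channel_gain(dist_m, rng=random):
    """Large-scale gain times Rayleigh small-scale fading: the squared
    magnitude of a unit-variance Rayleigh variate is exponentially
    distributed with unit mean."""
    return large_scale_gain(dist_m) * rng.expovariate(1.0)
```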
5.2. Result Discussion
In Figure 2, we varied the number of UDs in the system. Under the reduced local computing capacity of Figure 2b, tasks processed locally timed out. The results in Figure 2a demonstrate that our proposed LCD algorithm could effectively converge to the optimal scheme (the results from BF), while the NCD method deviated slightly from the optimal solution. Furthermore, the resources deployed at the edge could support the simultaneous task offloading of six to eight UDs (with a data size of one megabit); beyond that threshold, the overall revenue of the system would be expected to decline significantly. However, in Figure 2a, the overall rewards remained unchanged and even slightly increased as the number of UDs increased. This was because the local computing resources were sufficient to process the tasks locally without incurring negative rewards, and the system could even enhance revenue by offloading tasks from UDs with more competitive conditions (e.g., better channel conditions). This was no longer the case in Figure 2b, where the revenue declined as the number of UDs increased: processing tasks locally resulted in negative rewards due to the timeout caused by the inadequate local processing capacity.
Figure 3, Figure 4, Figure 5, Figure 6 and Figure 7 show how the offloading decision and resource allocation for all tasks varied with the emergency factor. We tested the emergency factor of a randomly selected task (the task of UD 1 was selected) over a group of values while keeping the other factors constant. We can see that, for a task that failed in the task offloading competition, setting a higher emergency factor not only improved its likelihood of being offloaded but also increased its share in the resource allocation phase (once it was offloaded to the edge).
According to Figure 3, the optimal offloading decision under the default settings was to process the tasks of UD 1, UD 4 and UD 9 locally and to offload the tasks of the other seven UDs to the edge. When the emergency factor of UD 1's locally processed task took a small value, nothing happened except that the corresponding weight in the allocation rule changed. We can see that, as UD 1's emergency factor took on successive small values, UD 1 still processed its task locally, and the bandwidth allocation (shown in Figure 4) and edge computing resource allocation (shown in Figure 5) for UD 2, UD 3, UD 5, UD 6, UD 7, UD 8 and UD 10 remained unchanged. However, once the emergency factor became large enough, UD 1 started to offload its task and was allocated some bandwidth and computing resources; meanwhile, UD 3 and UD 10 were crowded out of the resources and processed their tasks locally. As the emergency factor continued to increase, more and more devices started to process their tasks locally. When it became extremely large, UD 1 monopolized all resources in the system.
It is noteworthy that, when UD 1's emergency factor shifted from one to two, not all the bandwidth released by UD 3 and UD 10 was allocated to UD 1. This can be explained by Figure 6. We know from Equation (21) that the bandwidth allocation share of a UD is proportional to its weight. When the emergency factor took the value two, UD 1's normalized weight was 1.180; therefore, the bandwidth released by UD 3 and UD 10 was reallocated to UD 1 and the remaining offloading UDs (UD 2, UD 5, UD 6, UD 7 and UD 8). It is also worth noting that, as UD 1's emergency factor grew, the existing offloading UDs with larger weights were the first to fall back to local processing. UD 9, with the largest weight among the local UDs, processed its task locally all along, while UD 1, with the smallest weight, could initially only process its task locally. Fortunately, once UD 1 obtained a larger weight due to a larger emergency factor, it could not only offload its task to the edge for processing but also obtain a large share of resources. This indicates that the emergency factor can effectively regulate resource allocation and offloading decisions among UDs.
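The proportionality invoked above implies that bandwidth freed by UDs falling back to local processing is automatically redistributed among the remaining offloading UDs; a minimal sketch, where the weights are illustrative stand-ins for the task-dependent shares of Equation (21):

```python
def allocate_bandwidth(weights, total_bw):
    """Split the total bandwidth among offloading UDs in proportion to their
    weights (the proportional rule implied by Eq. (21))."""
    s = sum(weights)
    return [w / s * total_bw for w in weights]

# removing a UD from the list redistributes its share over the survivors
shares = allocate_bandwidth([1, 1, 2], 4.0)  # [1.0, 1.0, 2.0]
```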
Figure 7 shows the processing delay of each task in the system and the total revenue as UD 1's emergency factor takes different values. When UD 1 began to offload its task for edge processing (i.e., its emergency factor took the value two), both the total delay across tasks and the system revenue increased. This was because a larger emergency factor indicated that the system favored UD 1 in the resource allocation and received a larger reward for prioritizing UD 1; as a result, other UDs lost the opportunity to offload their tasks to the edge for processing. As the emergency factor grew further, the completion times of all other UDs reached their maxima because their tasks were processed locally. Although the reward increased significantly as the emergency factor varied from 16 to 256, the total delay of all UDs increased because the edge resources were exclusively occupied by UD 1.
The results in Figure 7 and Figure 8 share the same offloading decision, bandwidth allocation and edge computing resource allocation schemes. The distinction is that, in Figure 8, the emergency factor was set to the default value for all tasks and remained unchanged. System rewards and processing delays were obtained as the data size of the selected task shifted from the default data size (1 megabit) up to 256 times that size. With equal emergency factors, tasks of large data size were offloaded in preference to tasks of small data size, even monopolizing the edge resources at the largest sizes; tasks of large data size were simply more advantageous in the offloading decision. When all tasks were of the same data size, the selected task could only be processed locally; as its data size grew, the system preferred to process it at the edge. This mirrors what takes place in existing research works. From Figure 8, we can conclude that tasks of extremely large size will be offloaded to the edge if no restrictions are imposed. This is undesirable because it stops UDs with limited computing resources from offloading their tasks to the edge. Fortunately, we can prevent a task of extremely large size from monopolizing edge resources by setting a sufficiently small emergency factor for that data-intensive task.
Figure 9 illustrates how the emergency factor impacts data-intensive tasks. We randomly sampled a large data size and set it as the data size of a randomly selected task, keeping the data size of the other tasks at the default value. Setting a sufficiently small emergency factor for the task with the large data size prevented it from monopolizing system resources. When its emergency factor took the default value, like the others, only the selected task was offloaded to the edge, and its processing delay was less than 10 s. As we set smaller and smaller values of its emergency factor, more and more UDs could offload their tasks to the edge for processing (first UD 9 and UD 10, then UD 3, UD 9 and UD 10). When the emergency factor took the value 0.008, the selected task started to be processed locally. We can conclude that when the emergency factor of a task with a large data volume is small enough, it loses its advantage in task offloading.
In Figure 10, we compare the performance of “DRL” [29], “LCD”, “RO” and “AL” (results are organized in this order) under different sampling probabilities. We tested four types of tasks of different data sizes: regular, small, large and immense. We can see that both our LCD scheme and the DRL scheme achieved the minimum delay when all tasks were of regular size. However, when tasks of immense size (the task from UD 8) coexisted with tasks of regular size, our scheme penalized the immense tasks by setting sufficiently small emergency factors for them, which in turn prevented our scheme from obtaining the minimum delay. Similarly, when tasks of small size (the tasks from UD 2 and UD 8) emerged, our scheme failed to obtain the minimum delay as well. However, our scheme succeeded in excluding tasks of immense size from monopolizing edge resources and in supporting tasks of small size to contend for edge resources: for example, the tasks from UD 2 and UD 8, as well as the tasks from UD 2, UD 5 and UD 10 in a further setting, obtained shorter delays when compared with the DRL scheme.
6. Conclusions
Current task-offloading schemes targeting minimum delay tend to prioritize tasks of large data size, which prevents tasks of small data size from being offloaded. When coexisting with tasks of large data size, tasks of small data size may lose the opportunity to be offloaded to the edge for processing. In this paper, we introduced the emergency factor to penalize tasks of immense size for monopolizing system resources and to support tasks of small size in contending for system resources. The joint task offloading and resource allocation issue was formulated as an MINLP problem that aimed to maximize the processing time reward. A bisection-search-based resource allocation algorithm combined with a CD-based method was proposed to solve the problem. Simulation results validated the effectiveness of our proposed scheme in regulating offloading decisions and resource allocation when there was a significant difference in the data size of the offloaded tasks.
In future work, we will study resource allocation based on a more fine-grained task classification scheme and explore the use of state-of-the-art deep reinforcement learning methods [29,33] for efficiency. We may also consider schemes for different objectives, such as profit [28] and QoS, and may also consider deploying caching [24,31] at the edge.
Conceptualization, L.D. and H.Y.; methodology, H.Y.; software, L.D.; validation, L.D. and W.H.; formal analysis, L.D.; investigation, L.D.; resources, H.Y.; data curation, L.D. and W.H.; writing—original draft preparation, L.D.; writing—review and editing, L.D. and W.H.; visualization, L.D.; supervision, H.Y.; project administration, H.Y.; funding acquisition, H.Y. All authors have read and agreed to the published version of the manuscript.
The authors declare no conflict of interest.
Figure 2. Rewards versus the number of UDs in the system. (a) Default setup with the default maximal local computing frequency. (b) Same as the default setup, except for a reduced maximal local computing frequency.
Figure 5. Computing resource allocation of UDs versus the emergency factor of UD 1.
Table 1. Summary of part of the discussed works.

| Work | Offloading Mode | Objective | Methodology |
|---|---|---|---|
| [ ] | Partial | D | Decomposition and Karush–Kuhn–Tucker conditions |
| [ ] | Partial | D | Lagrange multiplier method |
| [ ] | Partial | D | Successive convex approximation |
| [ ] | Binary | E | Branch-and-bound |
| [ ] | Binary | R | The alternating direction method of multipliers and CD |
| [ ] | Binary | E | Deep deterministic policy gradient (DDPG) |
| [ ] | Binary | D, E | Monte Carlo tree search, DNN and replay memory |
| [ ] | Binary | D + E | Actor–critic-based DRL |
| [ ] | Binary | R | Lyapunov optimization and DRL |
| Our work | Binary | Revenue maximization | CD and Lagrange multiplier method |

D stands for latency/delay minimization; E stands for energy consumption minimization; R stands for computation rate maximization. x denotes the offloading decision vector; b denotes the communication resource allocation vector; α denotes the edge-computing resource allocation vector; λ denotes the splitting ratio; p denotes the transmission power.
Appendix A
The first terms in the stationarity Equations (10)–(12) are positive, which results in the positiveness of the Lagrangian multipliers.
Then, going back to (10) with the multipliers fixed, the optimal allocation can be derived.
Appendix B
References
1. Mach, P.; Becvar, Z. Mobile edge computing: A survey on architecture and computation offloading. IEEE Commun. Surv. Tutor.; 2017; 19, pp. 1628-1656. [DOI: https://dx.doi.org/10.1109/COMST.2017.2682318]
2. Wang, X.; Han, Y.; Wang, C.; Zhao, Q.; Chen, X.; Chen, M. In-edge ai: Intelligentizing mobile edge computing, caching and communication by federated learning. IEEE Netw.; 2019; 33, pp. 156-165. [DOI: https://dx.doi.org/10.1109/MNET.2019.1800286]
3. Chen, Y.; Zhang, N.; Zhang, Y.; Chen, X. Dynamic computation offloading in edge computing for internet of things. IEEE Internet Things J.; 2018; 6, pp. 4242-4251. [DOI: https://dx.doi.org/10.1109/JIOT.2018.2875715]
4. Wu, Y.; Ni, K.; Zhang, C.; Qian, L.P.; Tsang, D.H. NOMA-assisted multi-access mobile edge computing: A joint optimization of computation offloading and time allocation. IEEE Trans. Veh. Technol.; 2018; 67, pp. 12244-12258. [DOI: https://dx.doi.org/10.1109/TVT.2018.2875337]
5. Raza, S.; Wang, S.; Ahmed, M.; Anwar, M.R.; Mirza, M.A.; Khan, W.U. Task offloading and resource allocation for IoV using 5G NR-V2X communication. IEEE Internet Things J.; 2021; 9, pp. 10397-10410. [DOI: https://dx.doi.org/10.1109/JIOT.2021.3121796]
6. Yousefpour, A.; Ishigaki, G.; Gour, R.; Jue, J.P. On reducing IoT service delay via fog offloading. IEEE Internet Things J.; 2018; 5, pp. 998-1010. [DOI: https://dx.doi.org/10.1109/JIOT.2017.2788802]
7. Yang, B.; Cao, X.; Xiong, K.; Yuen, C.; Guan, Y.L.; Leng, S.; Qian, L.; Han, Z. Edge intelligence for autonomous driving in 6G wireless system: Design challenges and solutions. IEEE Wirel. Commun.; 2021; 28, pp. 40-47. [DOI: https://dx.doi.org/10.1109/MWC.001.2000292]
8. Qiu, T.; Chi, J.; Zhou, X.; Ning, Z.; Atiquzzaman, M.; Wu, D.O. Edge computing in industrial internet of things: Architecture, advances and challenges. IEEE Commun. Surv. Tutor.; 2020; 22, pp. 2462-2488. [DOI: https://dx.doi.org/10.1109/COMST.2020.3009103]
9. Peng, K.; Huang, H.; Liu, P.; Xu, X.; Leung, V.C. Joint Optimization of Energy Conservation and Privacy Preservation for Intelligent Task Offloading in MEC-Enabled Smart Cities. IEEE Trans. Green Commun. Netw.; 2022; 6, pp. 1671-1682. [DOI: https://dx.doi.org/10.1109/TGCN.2022.3170146]
10. Ren, J.; Yu, G.; He, Y.; Li, G.Y. Collaborative cloud and edge computing for latency minimization. IEEE Trans. Veh. Technol.; 2019; 68, pp. 5031-5044. [DOI: https://dx.doi.org/10.1109/TVT.2019.2904244]
11. Ren, J.; Yu, G.; Cai, Y.; He, Y. Latency optimization for resource allocation in mobile-edge computation offloading. IEEE Trans. Wirel. Commun.; 2018; 17, pp. 5506-5519. [DOI: https://dx.doi.org/10.1109/TWC.2018.2845360]
12. Kai, C.; Zhou, H.; Yi, Y.; Huang, W. Collaborative cloud-edge-end task offloading in mobile-edge computing networks with limited communication capability. IEEE Trans. Cogn. Commun. Netw.; 2020; 7, pp. 624-634. [DOI: https://dx.doi.org/10.1109/TCCN.2020.3018159]
13. Saleem, U.; Liu, Y.; Jangsher, S.; Li, Y.; Jiang, T. Mobility-aware joint task scheduling and resource allocation for cooperative mobile edge computing. IEEE Trans. Wirel. Commun.; 2020; 20, pp. 360-374. [DOI: https://dx.doi.org/10.1109/TWC.2020.3024538]
14. El Haber, E.; Nguyen, T.M.; Assi, C. Joint optimization of computational cost and devices energy for task offloading in multi-tier edge-clouds. IEEE Trans. Commun.; 2019; 67, pp. 3407-3421. [DOI: https://dx.doi.org/10.1109/TCOMM.2019.2895040]
15. Naouri, A.; Wu, H.; Nouri, N.A.; Dhelim, S.; Ning, H. A novel framework for mobile-edge computing by optimizing task offloading. IEEE Internet Things J.; 2021; 8, pp. 13065-13076. [DOI: https://dx.doi.org/10.1109/JIOT.2021.3064225]
16. Bi, S.; Zhang, Y.J. Computation rate maximization for wireless powered mobile-edge computing with binary computation offloading. IEEE Trans. Wirel. Commun.; 2018; 17, pp. 4177-4190. [DOI: https://dx.doi.org/10.1109/TWC.2018.2821664]
17. Xing, H.; Liu, L.; Xu, J.; Nallanathan, A. Joint task assignment and resource allocation for D2D-enabled mobile-edge computing. IEEE Trans. Commun.; 2019; 67, pp. 4193-4207. [DOI: https://dx.doi.org/10.1109/TCOMM.2019.2903088]
18. Zhao, C.; Cai, Y.; Liu, A.; Zhao, M.; Hanzo, L. Mobile edge computing meets mmWave communications: Joint beamforming and resource allocation for system delay minimization. IEEE Trans. Wirel. Commun.; 2020; 19, pp. 2382-2396. [DOI: https://dx.doi.org/10.1109/TWC.2020.2964543]
19. Ning, Z.; Dong, P.; Kong, X.; Xia, F. A cooperative partial computation offloading scheme for mobile edge computing enabled Internet of Things. IEEE Internet Things J.; 2018; 6, pp. 4804-4814. [DOI: https://dx.doi.org/10.1109/JIOT.2018.2868616]
20. Li, J.; Zhang, X.; Zhang, J.; Wu, J.; Sun, Q.; Xie, Y. Deep reinforcement learning-based mobility-aware robust proactive resource allocation in heterogeneous networks. IEEE Trans. Cogn. Commun. Netw.; 2019; 6, pp. 408-421. [DOI: https://dx.doi.org/10.1109/TCCN.2019.2954396]
21. Chen, M.; Hao, Y. Task offloading for mobile edge computing in software defined ultra-dense network. IEEE J. Sel. Areas Commun.; 2018; 36, pp. 587-597. [DOI: https://dx.doi.org/10.1109/JSAC.2018.2815360]
22. Tang, M.; Wong, V.W. Deep reinforcement learning for task offloading in mobile edge computing systems. IEEE Trans. Mob. Comput.; 2020; 21, pp. 1985-1997. [DOI: https://dx.doi.org/10.1109/TMC.2020.3036871]
23. You, C.; Huang, K.; Chae, H.; Kim, B.H. Energy-efficient resource allocation for mobile-edge computation offloading. IEEE Trans. Wirel. Commun.; 2016; 16, pp. 1397-1411. [DOI: https://dx.doi.org/10.1109/TWC.2016.2633522]
24. Chen, J.; Xing, H.; Lin, X.; Nallanathan, A.; Bi, S. Joint resource allocation and cache placement for location-aware multi-user mobile edge computing. IEEE Internet Things J.; 2022; 9, pp. 25698-25714. [DOI: https://dx.doi.org/10.1109/JIOT.2022.3196908]
25. Dai, Y.; Zhang, K.; Maharjan, S.; Zhang, Y. Edge intelligence for energy-efficient computation offloading and resource allocation in 5G beyond. IEEE Trans. Veh. Technol.; 2020; 69, pp. 12175-12186. [DOI: https://dx.doi.org/10.1109/TVT.2020.3013990]
26. Chen, J.; Chen, S.; Wang, Q.; Cao, B.; Feng, G.; Hu, J. iRAF: A deep reinforcement learning approach for collaborative mobile edge computing IoT networks. IEEE Internet Things J.; 2019; 6, pp. 7011-7024. [DOI: https://dx.doi.org/10.1109/JIOT.2019.2913162]
27. Yan, J.; Bi, S.; Zhang, Y.J.A. Offloading and resource allocation with general task graph in mobile edge computing: A deep reinforcement learning approach. IEEE Trans. Wirel. Commun.; 2020; 19, pp. 5404-5419. [DOI: https://dx.doi.org/10.1109/TWC.2020.2993071]
28. Chen, Y.; Li, Z.; Yang, B.; Nai, K.; Li, K. A Stackelberg game approach to multiple resources allocation and pricing in mobile edge computing. Future Gener. Comput. Syst.; 2020; 108, pp. 273-287. [DOI: https://dx.doi.org/10.1016/j.future.2020.02.045]
29. Huang, L.; Bi, S.; Zhang, Y.J.A. Deep reinforcement learning for online computation offloading in wireless powered mobile-edge computing networks. IEEE Trans. Mob. Comput.; 2019; 19, pp. 2581-2593. [DOI: https://dx.doi.org/10.1109/TMC.2019.2928811]
30. Bi, S.; Huang, L.; Wang, H.; Zhang, Y.J.A. Lyapunov-guided deep reinforcement learning for stable online computation offloading in mobile-edge computing networks. IEEE Trans. Wirel. Commun.; 2021; 20, pp. 7519-7537. [DOI: https://dx.doi.org/10.1109/TWC.2021.3085319]
31. Fang, C.; Liu, C.; Wang, Z.; Sun, Y.; Ni, W.; Li, P.; Guo, S. Cache-assisted content delivery in wireless networks: A new game theoretic model. IEEE Syst. J.; 2020; 15, pp. 2653-2664. [DOI: https://dx.doi.org/10.1109/JSYST.2020.3001229]
32. Fang, C.; Yao, H.; Wang, Z.; Wu, W.; Jin, X.; Yu, F.R. A survey of mobile information-centric networking: Research issues and challenges. IEEE Commun. Surv. Tutor.; 2018; 20, pp. 2353-2371. [DOI: https://dx.doi.org/10.1109/COMST.2018.2809670]
33. Fang, C.; Xu, H.; Yang, Y.; Hu, Z.; Tu, S.; Ota, K.; Yang, Z.; Dong, M.; Han, Z.; Yu, F.R. et al. Deep-reinforcement-learning-based resource allocation for content distribution in fog radio access networks. IEEE Internet Things J.; 2022; 9, pp. 16874-16883. [DOI: https://dx.doi.org/10.1109/JIOT.2022.3146239]
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
Edge computing enables devices with insufficient computing resources to offload their tasks to the edge, improving the service experience. Some existing work has observed that the data size of an offloaded task influences its share of allocated resources, but has not examined further how data size shapes resource allocation. Among offloaded tasks, those with larger data sizes consume a larger share of system resources and may even monopolize them if the data size is large enough. As a result, tasks of small or regular size lose the opportunity to be offloaded to the edge precisely because of their limited data size. To address this issue, we introduce an emergency factor that penalizes tasks of immense size for monopolizing system resources while helping tasks of small size contend for them. The joint offloading decision and resource allocation problem is formulated as a mixed-integer nonlinear programming (MINLP) problem and decomposed into an offloading decision subproblem and a resource allocation subproblem. Using the Karush–Kuhn–Tucker (KKT) conditions, we design a bisection-search-based algorithm to find the optimal resource allocation scheme. Additionally, we propose a linear-search-based coordinate descent (CD) algorithm to identify the optimal offloading decision. Numerical results show that our proposed algorithm converges to the delay-minimal scheme when tasks are of regular size. Moreover, when tasks of immense, small, and regular size coexist in the system, our scheme excludes tasks of immense size from edge resource allocation while still enabling tasks of small size to be offloaded.
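To make the decomposition in the abstract concrete, the sketch below illustrates the two building blocks on a deliberately simplified delay model (it is not the paper's exact formulation, which includes the emergency factor and computing-resource allocation): a bisection search on the KKT multiplier that splits a bandwidth budget among offloaded tasks with transmit delay d_i/b_i, and a coordinate-descent loop that flips one binary offloading decision at a time. All function names, the local processing rate, and the single-resource objective are illustrative assumptions.

```python
import math

def kkt_bandwidth(data_sizes, total_bw):
    """Bisection on the KKT multiplier lam.  For transmit delay d_i/b_i,
    stationarity gives b_i = sqrt(d_i / lam); pick lam so sum(b_i) = total_bw.
    Note the closed form b_i = B*sqrt(d_i)/sum_j sqrt(d_j): larger tasks
    get larger shares, which is the behavior the paper's emergency factor tempers."""
    if not data_sizes:
        return []
    lo, hi = 1e-12, 1e12           # bracket for the multiplier
    for _ in range(200):           # 200 halvings exceed float64 precision
        lam = 0.5 * (lo + hi)
        used = sum(math.sqrt(d / lam) for d in data_sizes)
        if used > total_bw:        # allocations too generous: raise the price
            lo = lam
        else:
            hi = lam
    lam = 0.5 * (lo + hi)
    return [math.sqrt(d / lam) for d in data_sizes]

def system_delay(sizes, mask, total_bw, local_rate):
    """Total delay: offloaded tasks (mask[i] True) share bandwidth via the
    KKT rule; the rest run locally at a fixed rate (toy local model)."""
    off = [d for d, m in zip(sizes, mask) if m]
    bw = kkt_bandwidth(off, total_bw)
    delay = sum(d / b for d, b in zip(off, bw))
    delay += sum(d / local_rate for d, m in zip(sizes, mask) if not m)
    return delay

def coordinate_descent(sizes, total_bw, local_rate, sweeps=20):
    """Flip one offloading bit at a time, keeping a flip only when it lowers
    the total delay; stop when a full sweep changes nothing."""
    mask = [False] * len(sizes)
    best = system_delay(sizes, mask, total_bw, local_rate)
    for _ in range(sweeps):
        improved = False
        for i in range(len(sizes)):
            mask[i] = not mask[i]
            cand = system_delay(sizes, mask, total_bw, local_rate)
            if cand < best:
                best, improved = cand, True
            else:
                mask[i] = not mask[i]   # revert the flip
        if not improved:
            break
    return mask, best
```

For two tasks with data sizes 1 and 4 and a budget of 3, the KKT rule yields bandwidths 1 and 2 (proportional to the square roots of the sizes), and with a slow local rate the CD loop offloads both tasks. The key property the paper targets is visible here: the allocation grows with data size, so an immense task would crowd out the others unless penalized.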