Abstract
In this study, we propose a novel cloud-edge collaborative task assignment model for smart farms consisting of a cloud server, m edge servers, and n sensors. The edge servers rely solely on limited solar-generated energy, whereas the cloud server has access to a limitless supply of energy from the smart grid. We consider each task to be unsplittable: every task from a sensor is processed in its entirety by either an edge server or the cloud server. Building on an algorithm for the multimachine job scheduling problem, we develop a corresponding approximation algorithm. In addition, we propose a new discrete heuristic based on the dwarf mongoose optimization algorithm, named the discrete dwarf mongoose optimization algorithm, and we use the proposed approximation algorithm to improve the heuristic's convergence speed while yielding better solutions. We also consider task sets containing heavy tasks separately, where a heavy task is one that requires many computing resources to process. If such tasks are assigned as ordinary tasks, the assignment results may be poor; we therefore propose a new method for this kind of problem.
Introduction
Over the last decade, many companies and businesses have benefited from the advantages offered by cloud computing. However, in this digital age, the amount of data that must be computed and communicated is increasing exponentially. This not only puts enormous pressure on cloud servers and the entire network but also weakens the advantages of cloud computing. Therefore, in recent years, computing tasks have begun to shift to edge devices. Our model is designed for this scenario.
Figure 1 depicts our model's framework, which includes n sensors, m edge servers, a remote cloud server (data center), solar panels and a smart grid. All edge servers can use only energy from the solar panels, whereas the cloud server uses energy from the smart grid. The issue of adjusting the sensor power need not be considered because the energy needed by each sensor is supplied by its built-in solar panel and the distance between each sensor and the nearest base station (BS) is very short [1, 2]. The energy from the solar panels is first stored in a battery and subsequently used by the edge servers, which helps all of them operate steadily, similar to the access point (AP) described by Zhang et al. [3]. The BS in the figure communicates directly with the sensors, edge servers, and cloud server, which do not communicate directly with each other. Before operation, the BS must be initialized with parameters such as the edge server set, the sensor set, and the time slot size. The BS periodically collects tasks from the sensors and dispatches them to the different servers (edge and cloud) in accordance with a certain assignment strategy. Each task is associated with two values: the computing resources it needs and the cloud computing cost (if the task is performed by the cloud server).
In this study, we consider each task to be unsplittable: each task must be processed by exactly one edge server or by the cloud server. If the number of CPU cycles to be executed by each edge server in a time slot were known before assignment, the concept of the multiknapsack problem could be used to solve this problem. However, this is generally not the case. In our research, we found that the multimachine job scheduling problem is similar to ours: we can regard the servers as machines and the tasks as jobs. However, in the multimachine scheduling problem each job is characterized by only one value, whereas in our problem each task has two associated values. We address this difference and derive our own approximation algorithm for task assignment based on an existing approximation algorithm for the multimachine job scheduling problem.
Nevertheless, although our approximation algorithm can quickly find a satisfactory result, it still does not meet our expectations. We hope to obtain a better solution and thereby greater benefits. Therefore, we further introduce a heuristic algorithm and use the approximation algorithm to improve its convergence speed and to let it return solutions better than either method produces alone. To this end, we must carefully consider how to incorporate the approximation algorithm into the heuristic algorithm.
The dwarf mongoose optimization algorithm was chosen in this study because it possesses several advantages that other heuristic algorithms do not. Other discrete heuristic algorithms, such as the discrete particle swarm algorithm, the discrete artificial bee colony algorithm, and the discrete bat algorithm, rely on various parameters. In contrast, the discrete dwarf mongoose optimization algorithm requires only a specified population size and is not influenced by other parameters. Moreover, in the discrete dwarf mongoose optimization algorithm, different methods are used to update members of different groups, which increases population diversity and prevents the algorithm from becoming trapped in a local optimum. The discrete dwarf mongoose optimization algorithm improved with our approximation algorithm not only converges faster than other improved discrete heuristic algorithms but also produces better results than both the other discrete heuristics and a greedy algorithm. Finally, the experiments in [4] showed that the dwarf mongoose algorithm converges significantly faster than other heuristic algorithms, which is another reason for its selection.
This study establishes a mathematical model for the cloud-edge collaborative task assignment problem on a smart farm, which is helpful in proposing solutions. From this mathematical model, it is easily seen that the problem is NP-hard, meaning that obtaining the optimal solution would take considerable time and computing resources. Moreover, the problem is nonconvex, so mathematical programming solvers cannot be used. Therefore, an approximation algorithm and a discrete heuristic algorithm are designed in this research, with the approximation algorithm used to improve the performance of the discrete heuristic. The final algorithm outperforms both traditional approximation algorithms and existing discrete heuristics. A special kind of problem also arises in this study: the heavy task problem. Directly applying the algorithm proposed above to instances with heavy tasks may yield poor results; therefore, a method specifically targeted at such problems is proposed.
Related work
Traditional farms require considerable labor and provide relatively low food yields, whereas smart farms rely on a vast number of sensors, services and control systems to minimize labor requirements while increasing crop yields. These sensors collect information about the surrounding environment and the growth of crops. The tasks built from this information are handled by processing equipment, which returns control information to the farm's control system [5]. Based on these data, which typically describe the smart farm's temperature, humidity, and light intensity, the control system alters the environment [5]. Some sensors produce data involved in many complex tasks, meaning that these tasks cannot be processed on the corresponding local devices. Ghosh et al. [6] propose uploading these tasks to a data center (cloud server) for processing. However, this model places ever-increasing demands on the communication and computational infrastructure, with inevitable adverse effects on quality of service and experience [7], and cloud servers are relatively poor at handling time-sensitive tasks [8]; hence, such tasks should be processed by edge servers as much as possible. Nevertheless, a small number of tasks must still be processed on the cloud server because the edge servers' access to resources is constrained. As a result, we must compromise between the edge servers' efficiency and energy consumption to reap greater benefits, as also noted by Li et al. [9].
In research on cloud-edge collaborative computing, resource allocation is currently one of the most popular topics. Although the system architecture and constraints may differ between scenarios, existing works can be broadly divided into three categories according to their optimization goals: task delay minimization [10–12], energy consumption minimization [13, 14], and weighted delay and energy minimization [15, 16]. The present study focuses on energy consumption minimization. The traditional energy minimization problem for cloud-edge collaboration considers either minimizing the energy of all edge servers or maximizing the total revenue of all tasks processed by edge servers while limiting the maximum energy used by each edge server [17–19]. In contrast, we assume that the total energy of all edge servers is limited while the energy of each individual edge server is not, an issue not considered in the abovementioned articles.
Traditional methods for solving cloud-edge collaborative task assignment problems include neural networks, deep reinforcement learning, and mathematical programming formulations that are then solved with specialized tools (CPLEX, Gurobi, etc.). Although a trained neural network [20–22] or deep reinforcement learning method [23–25] can give an appropriate solution within an acceptable time, a model must be trained on the actual problem before either method can be used, and such training consumes many resources (which is unacceptable for edge servers). Tao et al. [25] propose training the model on the cloud server first and then deploying it to the edge servers by means of transfer learning and fine-tuning, which can greatly reduce the resources consumed for training. However, running trained deep learning and deep reinforcement learning models on edge servers also consumes considerable memory. Moreover, CPLEX and Gurobi can solve only convex optimization problems, whereas in practice some combinatorial optimization problems are nonconvex, such as the energy optimization problem in [19]. Therefore, we consider a heuristic algorithm to solve this nonconvex optimization problem.
Because of the discrete nature of our problem, among heuristic algorithms we prefer those that solve discrete problems over those that solve continuous problems, that is, discrete counterparts of algorithms such as the gray wolf algorithm [26], the dwarf mongoose algorithm [4], genetic algorithms, and the particle swarm algorithm [27]. In Kashan et al. [28], a discrete particle swarm optimization algorithm is proposed for parallel machine assignment; the authors redefine the operators and use binary vectors to represent some parameters. In Pan et al. [29], a discrete artificial bee colony algorithm is proposed for the lot-streaming flow shop scheduling problem. Liu et al. [30] propose multipoint shortest path planning based on an improved discrete bat algorithm. To improve the speed at which the optimal solution is found, they use two kinds of neighborhood operators, 2-opt and a mutation operator, where 2-opt is a common neighborhood operator for the traveling salesman problem.
Fig. 1 [Images not available. See PDF.]
The task dispatch model for a smart farm
Overview of the problem
In this section, we propose a task assignment model for a smart farm. The set of sensors, , consists of n sensors of the same or different types. The set of edge servers, which consists of m homogeneous machines, is . Each edge server has only one CPU and works at a frequency . Since the sensors continuously generate tasks, to capture the dynamics of our model, such as the time-varying number of tasks, we simplify the assignment operations by adopting a time slot model [31]. The tasks collected in time slot t-1 are executed in time slot t, so every task is eventually executed. We assume that all time slots are of equal size ().
We mathematically characterize the amount of computational resources (CPU cycles) required to process the task produced by sensor at time slot t by . We consider the assignment of tasks within a single time slot only.
Let be the task produced by sensor at time slot t. Each task is divided into multiple CPU cycles before being processed by a server. When the amount of computing resources required for is , the number of CPU cycles is [12]
1
As described in [32, 33], is a coefficient related to the task j and the processor (CPU) type i. In this model, all edge servers are homogeneous, so this coefficient is related only to the task type and . In the following descriptions, (CPU cycles) represents the computing resources required for . After a server has processed , it sends a feedback message to the farm. Based on this feedback, the controller on the farm creates a new growth environment for the crops, which helps the crops grow better and thus increases the benefit to the farmer. Because the edge servers can consume only a limited amount of resources, they may be unable to handle all tasks; therefore, some tasks may need to be processed by the cloud server. To represent the relationship between each task and each server, variables x are introduced. represents how much of is executed on edge server ; thus, . indicates that is executed entirely on the cloud server, and indicates that is executed entirely on an edge server. In this study, each task is unsplittable; hence, . Depending on the amount of computing resources required by each task and the variables , the number of CPU cycles executed by edge server at time slot t is
2
The run time of a processor (server) is related not only to the number of tasks it processes but also to its frequency. The time is proportional to and inversely proportional to , as follows [19]:
3
In this model, all tasks should be processed in the current time slot. In other words, the run time of each edge server should be less than or equal to , . Thus,
4
According to (2) and (4), the relationship between the amount of computing resources needed for each task and the working frequency of edge server can be expressed as
5
When edge server executes CPU cycles at frequency , the energy it consumes is [12]
6
where is the energy consumption coefficient of edge server . To enable the edge servers to work stably, the energy generated by the solar panels is first stored in a battery and then used by the edge servers. represents the energy stored in the battery before time slot (time slot t=0 is the first time slot). represents the energy generated by the solar panels in time slot k, and represents the energy consumed by all edge servers in time slot k. Thus, at the beginning of time slot t, the energy stored in the battery is
7
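To make the battery recursion in (7) concrete, the following is a minimal sketch; the function and variable names are ours, not the paper's:

```python
def battery_energy(t, e0, generated, consumed):
    """Energy stored in the battery at the start of time slot t,
    following (7): the initial charge e0, plus everything the solar
    panels generated in slots 0..t-1, minus everything the edge
    servers consumed in those slots."""
    return e0 + sum(generated[k] - consumed[k] for k in range(t))
```

For example, with an initial charge of 10 units, generation of 5 units per slot and consumption of 3 and 4 units in the first two slots, the battery holds 13 units at the start of slot 2.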
The energy consumed by all edge servers in time slot t cannot exceed the energy stored in the battery, so
8
where represents the energy consumed by edge server at time slot t. In accordance with (6), when the energy to be used by edge server is given, its frequency should be reduced as much as possible so that the number of tasks processed () can be increased. In other words, is desired to be as large as possible. Therefore, according to (4), takes the maximum value . Substituting into (6) yields
9
According to (8) and (9), the relationship between the total energy E(t) stored in the battery and the frequency of each edge server () is
10
Due to energy constraints, the edge servers can process only a limited number of tasks. Therefore, when the number of tasks is large, we need to rent the services of a cloud server. Cloud service providers charge based on the type of task. We use to denote the fee charged by the cloud server for processing task . Our goal is to pay as small a cloud fee as possible, that is, to minimize , which is equivalent to . Thus, the following model can be derived from the above description.
11
11a
11b
11c
11d
where (11a) is obtained from (10). Because all edge servers are homogeneous, the coefficient associated with server is equal to the coefficient associated with server , . Therefore, in (11a).
Algorithm based on longest processing time (LPT)
Because our problem is nonconvex and cannot be solved with CPLEX or other existing tools, we consider other solutions. Before discussing our approach, we first prove the following theorem:
Theorem 1
Let us consider m sets. The set of tasks processed by edge server is . The number of CPU cycles for all tasks in set is . A new task needs to be added to one of these sets. Let . When this new task is processed by edge server , all edge servers together consume less energy in processing all tasks than when the task is processed by any other edge server.
Proof
Consider two edge servers and , and a new task. The numbers of CPU cycles to be executed by and are and , respectively, where , and the number of CPU cycles needed for the new task is q. For convenience of description, let and . When we assign this task to or , the total energy consumed by the two edge servers in each case is as follows:
12
13
According to formulas (12) and (13), it is easy to obtain when .

We cannot use the concept of the multiknapsack problem to address our problem because the energy that each edge server will consume in the current time slot is unknown before the assignment operation. However, the problem addressed in this study has several similarities with the multimachine job scheduling problem: 1. In the multimachine job scheduling problem, the number of jobs that each machine will complete is also unknown at the beginning of the scheduling process. 2. In the longest processing time (LPT) algorithm for multimachine job scheduling, a job to be scheduled is placed on the machine whose current load (= total processing time of scheduled jobs) is the smallest, which coincides with our Theorem 1. Despite these similarities, there is also an important difference: in our problem, task j is represented by two values, and , whereas in the multimachine job scheduling problem, only one characteristic value (run time) is associated with job j. To bridge this difference, we characterize task j by the ratio . Our algorithm implementation is shown in Algorithm 1.
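Theorem 1 can be checked numerically. Assuming the standard dynamic-power model of (6) with the substitution of (9), each server's energy grows as the cube of its CPU-cycle load, so placing the new task on the least-loaded server is the cheapest option by convexity. A quick check with illustrative values (not taken from the paper):

```python
# Numeric check of Theorem 1 under the assumed cubic energy model
# E_i = kappa * q_i**3 / tau**2 (from substituting (9) into (6)).
def total_energy(loads, kappa=1e-9, tau=1.0):
    return sum(kappa * q ** 3 / tau ** 2 for q in loads)

loads = [50_000, 80_000, 120_000]   # illustrative CPU-cycle loads
new_task = 30_000                   # CPU cycles of the incoming task

options = []                        # total energy per candidate server
for i in range(len(loads)):
    trial = list(loads)
    trial[i] += new_task
    options.append(total_energy(trial))

best = min(range(len(loads)), key=options.__getitem__)
# The least-loaded server is the cheapest choice, as Theorem 1 states.
assert best == loads.index(min(loads))
```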
Fig. 2 [Images not available. See PDF.]
An example of Algorithm 1
In Algorithm 1, all tasks are first sorted in nonincreasing order by . Then, each task is sequentially assigned to the edge server whose current load (= total number of CPU cycles of all assigned tasks) is the lowest. An example is shown in Fig. 2, with 10 tasks and 3 edge servers. The 10 tasks are sorted in nonincreasing order by and then assigned to the three edge servers following Algorithm 1. cannot be assigned because executing on would cause the energy consumed by all edge servers to exceed E(t).
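The assignment loop can be sketched as follows. This is not the paper's Algorithm 1 verbatim: the function names are ours, we assume the sort ratio is the cloud cost per CPU cycle v/q (one plausible reading of the elided ratio), and the energy check uses the cubic per-server model obtained by substituting (9) into (6):

```python
def lpt_assign(tasks, m, energy_budget, kappa=1.0, tau=1.0):
    """LPT-style sketch: sort tasks by an assumed ratio v/q in
    nonincreasing order, then place each on the least-loaded edge
    server unless that would push the total edge energy
    (kappa * q_i**3 / tau**2 per server) beyond energy_budget;
    such tasks remain on the cloud (assignment value 0).

    tasks: list of (q, v) pairs -- CPU cycles and cloud cost."""
    order = sorted(range(len(tasks)),
                   key=lambda j: tasks[j][1] / tasks[j][0],
                   reverse=True)
    loads = [0.0] * m
    assignment = [0] * len(tasks)   # 0 = cloud, i > 0 = edge server i

    def energy(ls):
        return sum(kappa * q ** 3 / tau ** 2 for q in ls)

    for j in order:
        i = loads.index(min(loads))             # least-loaded server
        trial = loads[:i] + [loads[i] + tasks[j][0]] + loads[i + 1:]
        if energy(trial) <= energy_budget:
            loads = trial
            assignment[j] = i + 1
        else:
            assignment[j] = 0                   # stays on the cloud
    return assignment, loads
```

For instance, with tasks (q, v) = (4, 8), (2, 2), (3, 3), two edge servers and a generous budget, the first task goes to server 1 and the other two share server 2, mirroring the load-balancing behavior shown in Fig. 2.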
Discrete dwarf mongoose algorithm
Agushaka et al. [4] proposed a new metaheuristic algorithm, the dwarf mongoose optimization algorithm (DMO), to solve classical and CEC 2020 benchmark functions as well as 12 continuous/discrete engineering optimization problems. Although this algorithm can solve some discrete problems, it is designed mainly for continuous problems and does not work very well on discrete ones. Because our problem is discrete, applying the algorithm directly may yield poor results; therefore, the algorithm needs to be discretized.
The authors of [4] designed the DMO algorithm to simulate the compensatory behavioral adaptation of the dwarf mongoose, stratifying the dwarf mongoose social structure into the alpha group, the scouts, and the babysitters. The members of the alpha group elect an alpha female, and all members of the group maintain a corresponding relationship with the alpha female when hunting and scouting. The members of the scouting group search for the next sleeping mound based on each individual's location. The babysitters stay home to care for the young while the other mature members hunt and scout. An individual cannot be a babysitter forever; after scouting and hunting, the group can reselect the individuals that play the role of babysitters.
To obtain a discrete version of the DMO algorithm, the arithmetic operations in the algorithm should first be discretized. Then some parameters used in the algorithm need to be changed into vectors. The discrete arithmetic operations in Kashan et al. [28] are used.
The add operator () is a crossover operator, as shown in Fig. 3. Two positions are randomly selected from the particle chain, and the dimensions between these two positions are exchanged. Two particles are thus obtained, and one of them is randomly selected as the result of the add operator [28].
The subtract operator () acts as follows: If the elements in the corresponding positions of the subtrahend and minuend are the same, then the difference at this position is 0. If the elements in the corresponding positions of the subtrahend and minuend are different, the difference at this position is equal to the value of the minuend [28]. The subtract operator is illustrated in Fig. 4.
The multiply operator () is equivalent to the Hadamard product [28], as shown in Fig. 5. The elements in the corresponding positions of the two particles are multiplied together to produce a new particle. When the product of two corresponding elements exceeds the allowed maximum value, it is replaced by the remainder of dividing it by the maximum value.
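The three discrete operators can be rendered as follows (a minimal sketch of the descriptions above; the function names are ours, and particles are lists of server indices as in Fig. 6):

```python
import random

def op_add(a, b):
    """Crossover-style 'add': swap the segment between two random
    cut points and return one of the two offspring at random [28]."""
    i, j = sorted(random.sample(range(len(a)), 2))
    child1 = a[:i] + b[i:j] + a[j:]
    child2 = b[:i] + a[i:j] + b[j:]
    return random.choice([child1, child2])

def op_sub(a, b):
    """'Subtract': 0 where minuend and subtrahend agree, else the
    minuend's element [28]."""
    return [0 if x == y else x for x, y in zip(a, b)]

def op_mul(a, b, max_val):
    """'Multiply': elementwise (Hadamard) product; values above the
    allowed maximum wrap to the remainder modulo that maximum [28]."""
    return [(x * y) % max_val if x * y > max_val else x * y
            for x, y in zip(a, b)]
```

For example, op_sub([1, 2, 0], [1, 3, 0]) yields [0, 2, 0]: positions that agree become 0, and the disagreeing position keeps the minuend's value.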
Discrete arithmetic operations may cause tasks that could be processed by edge servers to instead be dispatched to the cloud server. After the subtraction operation, Algorithm 1 can be used to move these tasks from the cloud server back to the edge servers as much as possible. The specific method is as follows: after an individual performs a subtraction operation, the tasks whose elements in this individual are 0 are placed into the unassigned task set, and the tasks whose elements are not 0 are placed into the assigned task set. The unassigned task set is taken as the input to Algorithm 1. In step 3 of Algorithm 1, the value of for each server is calculated based on the set of assigned tasks. Finally, a new individual is generated.
Fig. 3 [Images not available. See PDF.]
Illustration of the manner in which the operator performs
Fig. 4 [Images not available. See PDF.]
Illustration of the manner in which the operator performs
Fig. 5 [Images not available. See PDF.]
Illustration of the manner in which the operator performs
Fig. 6 [Images not available. See PDF.]
An individual representing a discrete solution
Even once the arithmetic operations are discretized, the final solution produced by the algorithm is likely to be continuous if the individuals represent solutions to a continuous problem. The value of each element of an individual must therefore come from a finite set; in the problem of this study, this set is . Thus, each individual represents a discrete solution. Figure 6 depicts an individual representing a discrete solution to our problem. The figure shows an individual for a task assignment problem with five tasks, three edge servers and one cloud server, where a number greater than 0 in a rectangle indicates that the task is processed by the corresponding edge server, and 0 indicates that the task is processed by the cloud server. It can easily be seen from the figure that this finite set is .
Let X be the set of candidate solutions in the current population, as presented in (14).
14
where represents a feasible solution and denotes the position in the jth dimension of the ith individual, indicating which server processes . We initialize by using (15), where uniformint(Varmin, Varmax) is a function that returns a uniformly distributed random integer between Varmin and Varmax.
15
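The initialization of (14)-(15) can be sketched as follows, with each element drawn uniformly from {0, ..., m}, where 0 denotes the cloud server (function and variable names are ours):

```python
import random

def uniformint(varmin, varmax):
    # Uniformly distributed random integer in [varmin, varmax], as in (15).
    return random.randint(varmin, varmax)

def init_population(pop_size, n_tasks, m_servers):
    """Initialize the candidate population X of (14): element x[i][j]
    in {0, ..., m} names the server that processes task j for
    individual i (0 = cloud server)."""
    return [[uniformint(0, m_servers) for _ in range(n_tasks)]
            for _ in range(pop_size)]
```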
The dwarf mongoose social structure is divided into three groups: the alpha group, the babysitter group and the scout group. Each member of the population may belong to multiple groups; for example, an individual can belong to both the alpha group and the scout group simultaneously. However, an individual belonging to either the alpha group or the scout group cannot also belong to the babysitter group. The positions of the individuals in the three groups are updated using different methods.
Alpha group
The update operation for the alpha group focuses on the individual. The number of individuals in the alpha group is L. One individual in the alpha group is selected as the alpha female with a corresponding probability [4], which is calculated using formula (16). The alpha female's vocalization, which keeps the family within a path, is denoted by peep [4].
16
where is the fitness value of . The fitness value of each individual can be calculated according to (17).17
In our problem, can be obtained via Algorithm 2. In this algorithm, is the cost of processing task j on the cloud server. In Algorithm 2, if the weight of the current individual is greater than B, then the fitness value of this individual is 0; otherwise, the fitness value is less than 0. The algorithm is designed to return a value less than or equal to 0 because our problem is a maximization problem, whereas the heuristic algorithm is designed to solve minimization problems. The method of selecting the alpha female is shown in Algorithm 3, in which pro denotes the set of all . In each iteration, the alpha group's food position is updated while maintaining a corresponding relationship with the alpha female, as expressed in equation (18).
18
where iter is the number of iterations and phi is a uniformly distributed random number . is a 1-by-n array comprising elements of 0 or 1 generated from a uniform distribution with probability |phi| of generating 1. In Agushaka et al. [4], the authors simulated this behavior by computing the average sleeping mound value in each iteration and using this average to determine the next step of the dwarf mongoose population. The sleeping mound and the average sleeping mound value are given by (19) and (20), respectively. In formula (19), if the updated value of individual i is better than the value before the update, then , and if it is worse, then . in formula (20) can be used to judge the development trend of the group: if has decreased after an iteration, the group is tending toward the optimal value, and if has increased, the group is moving away from it.
19
20
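The fitness evaluation described for Algorithm 2 can be sketched as below. This is our reading, not the paper's pseudocode: we take an individual's weight to be the total CPU cycles it assigns to edge servers and B the corresponding budget (Algorithm 2 may instead use the energy of (10)); the sign convention follows the text, with 0 for infeasible individuals and negative values otherwise so that the minimizing heuristic maximizes the true objective:

```python
def fitness(individual, q, v, B):
    """Fitness sketch: individual[j] == 0 means task j runs on the
    cloud; a value > 0 names an edge server. q[j] is the CPU cycles
    of task j and v[j] its cloud cost. An over-budget individual
    scores 0; a feasible one scores minus the value it earns on the
    edge, so smaller fitness means a better assignment."""
    weight = sum(q[j] for j, s in enumerate(individual) if s > 0)
    if weight > B:
        return 0
    return -sum(v[j] for j, s in enumerate(individual) if s > 0)
```

For example, an individual placing tasks with costs 5 and 4 on edge servers within budget scores -9, while the same individual over budget scores 0.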
Scout group
The update operation for the scout group focuses on the group. The scout group consists of the entire population and the scouts look for the next sleeping mound as follows:
21
where is the vector that determines the movement of the mongoose group to the new sleeping mound, CF denotes the parameter that controls the collective-volitive movement of the mongoose group [4], and is a 1-by-n array comprising elements of 0 or 1 generated from a uniform distribution, with the probability of generating 1 equal to . is calculated by Algorithm 4. This formula chooses the update method based on whether increases or decreases, which helps the algorithm converge.
Steps 6 to 8 of Algorithm 4 guarantee that will change. means that the fitness of the scout group in generation is worse than that of the population in generation iter; otherwise the fitness of the scout group in generation is better than that of the population in generation iter.
The babysitters
In the work of Agushaka et al. [4], the babysitters were a set of randomly selected individuals. In our research, the babysitters are the set of individuals that have not been updated for at least k consecutive iterations, where k is a constant. After each iteration, the population elects new babysitters, and the previous babysitters are updated through reinitialization.
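The babysitter exchange just described can be sketched as follows (a minimal rendering with our own names; stale[i] plays the role of the counter C[i] used in Algorithm 5):

```python
import random

def replace_babysitters(population, stale, k, m_servers):
    """Reinitialize any individual that has not improved for at
    least k consecutive iterations, mirroring the babysitter
    rotation: stale[i] counts iterations without an update, and a
    reinitialized individual gets fresh server choices in
    {0, ..., m_servers} (0 = cloud) with its counter reset."""
    for i, count in enumerate(stale):
        if count >= k:
            population[i] = [random.randint(0, m_servers)
                             for _ in range(len(population[i]))]
            stale[i] = 0
    return population, stale
```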
The pseudocode for the discrete dwarf mongoose optimization (DDMO) algorithm is given in Algorithm 5. Steps 1 to 6 represent the initialization process, including using formula (15) to initialize the population, calculating the initial fitness value Cost[i] of each individual, using formulas (19) and (20) to calculate the initial value of , and setting the counter C[i] of consecutive iterations for which an individual has not been updated and the babysitter exchange parameter k. Steps 10 to 26 give the specific details of the alpha group update using formula (18). Steps 27 to 28 select babysitters from the population based on the value of C[i]; the selected individuals are reinitialized and play the role of babysitters. Steps 35 to 47 are the specific steps of updating the individuals in the scout group using formula (21).
Special case
Algorithm 5 works well when solving problems with , but performs poorly on problems with ; therefore, we propose another method for the latter kind of problem. For example, consider a single edge server and a task set . The numbers of CPU cycles of the tasks in this set are 3q, q, q, q and 1, and the costs are 3v, v, v, v and 1. The total energy that this edge server can use is . Directly using the above algorithm may cause task to be executed on the edge server, so that the edge server can process only this one task, with a value of 3v. However, if task is not assigned to this edge server, then all tasks except can be processed on it, with a total value of . The value gained in the latter case is significantly greater than in the former.
Fig. 7 [Images not available. See PDF.]
An example of H
Let denote the set of heavy tasks. If a task from is to be executed on an edge server, it must occupy that edge server alone. We describe which heavy tasks occupy edge servers by using a binary vector H of dimensionality . Figure 7 presents an example of H, in which , and are executed on edge servers. The energy consumed in processing the heavy tasks is
22
where represents the number of CPU cycles needed for and represents the value of vector H in the jth dimension. Therefore, when the edge servers are processing light tasks, they cannot consume more energy than .

To address this kind of problem, a method called the combination method of DDMO and the algorithm based on LPT (CMDL) is proposed, which performs better than either Algorithm 1 or DDMO alone. In this method, the heavy tasks to be processed by edge servers are first selected by the heuristic algorithm, and the light tasks to be executed by edge servers are subsequently selected by Algorithm 1. To achieve this, only a small modification of the above heuristic algorithm is needed. Each individual here represents a feasible H, where H is feasible if . H is initialized via (15) with and . Algorithm 2 takes only negative values for individual as the fitness values; in this case, the fitness of an individual includes both the value of the selected heavy tasks and the value of all light tasks processed by edge servers. The weight of each individual is equal only to the total number of CPU cycles of the selected heavy tasks and does not depend on the selected light tasks.
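The two-stage CMDL fitness can be sketched as below. This is our own construction, not the paper's pseudocode: we assume the cubic per-server energy of (9), an H is treated as infeasible if its heavy tasks exceed the energy budget or the server count, and the light-task pass is a greedy Algorithm-1-style loop over the remaining servers and energy:

```python
def cmdl_fitness(H, heavy_q, heavy_v, light_q, light_v, E_total,
                 kappa=1.0, tau=1.0, m=3):
    """CMDL fitness sketch: H[j] == 1 means heavy task j occupies an
    edge server alone, as in (22). The energy left after the heavy
    tasks bounds a greedy light-task pass; the fitness is minus the
    total value gained (heavy plus light), or 0 if H is infeasible."""
    # Energy consumed by the selected heavy tasks, following (22).
    e_heavy = sum(kappa * q ** 3 / tau ** 2
                  for q, h in zip(heavy_q, H) if h)
    if e_heavy > E_total or sum(H) > m:
        return 0                        # infeasible occupation vector
    budget = E_total - e_heavy
    servers = m - sum(H)                # servers left for light tasks

    loads = [0.0] * max(servers, 1)
    value = sum(v for v, h in zip(heavy_v, H) if h)
    # Greedy light-task pass in nonincreasing value density v/q.
    order = sorted(range(len(light_q)),
                   key=lambda j: light_v[j] / light_q[j], reverse=True)
    for j in order:
        if servers == 0:
            break
        i = loads.index(min(loads))
        trial = loads.copy()
        trial[i] += light_q[j]
        if sum(kappa * q ** 3 / tau ** 2 for q in trial) <= budget:
            loads = trial
            value += light_v[j]
    return -value
```

As a small example, with one heavy task (10 cycles, value 100) occupying one of two servers, two light tasks of one cycle each (values 5 and 1) and enough energy for everything, the fitness is -106; halving the energy budget below the heavy task's consumption makes H infeasible and the fitness 0.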
The analysis of results
Table 1. Experimental data (no heavy tasks)
dataset1 | Index | - | - | - |
U(1,10) | U(10,20) | U(5,50) | ||
U(1,10) | U(2,20) | U(2,50) | ||
dataset2 | Index | - | - | - |
U(1,10) | U(10,20) | U(5,50) | ||
U(1,10) | U(2,20) | U(2,50) | ||
dataset3 | Index | - | - | - |
U(1,10) | U(10,20) | U(5,50) | ||
U(1,10) | U(2,20) | U(2,50) | ||
dataset4 | Index | - | - | - |
U(1,10) | U(10,20) | U(5,50) | ||
U(1,10) | U(2,20) | U(2,50) | ||
dataset5 | Index | - | - | - |
U(1,10) | U(10,20) | U(5,50) | ||
U(1,10) | U(2,20) | U(2,50) | ||
dataset6 | Index | - | - | - |
U(1,10) | U(10,20) | U(5,50) | ||
U(1,10) | U(2,20) | U(2,50) | ||
dataset7 | Index | - | - | - |
U(10,100) | U(100,200) | U(50,500) | ||
U(1,10) | U(2,20) | U(2,50) | ||
dataset8 | Index | - | - | - |
U(100,1000) | U(1000,2000) | U(500,5000) | ||
U(1,10) | U(2,20) | U(2,50) | ||
dataset9 | Index | - | ||
U(1,10) | ||||
U(1,10) |
Table 2. Experimental data (including heavy tasks)
dataset10 | Index | - | - |
U(1,10) | U(50,200) | ||
U(1,10) | U(20,500) | ||
dataset11 | Index | - | - |
U(1,10) | U(50,200) | ||
U(1,10) | U(20,500) | ||
dataset12 | Index | - | - |
U(1,10) | U(50,200) | ||
U(1,10) | U(20,500) | ||
dataset13 | Index | - | - |
U(1,10) | U(50,200) | ||
U(1,10) | U(20,500) | ||
dataset14 | Index | - | - |
U(1,10) | U(50,200) | ||
U(1,10) | U(20,500) |
Table 3. Full names of the heuristic algorithms
Abbreviation | Full name |
|---|---|
DDMO | Discrete Dwarf Mongoose Optimization Algorithm |
DPSO | Discrete Particle Swarm Optimization Algorithm |
GA | Genetic Algorithm |
DABC | Discrete Artificial Bee Colony Algorithm |
DBA | Discrete Bat Algorithm |
DMO | Dwarf Mongoose Optimization Algorithm |
In our experiments, the data were generated by a program that draws random numbers from a uniform distribution U(a, b). We created two types of datasets, without and with heavy tasks, comprising 9 and 5 datasets, respectively. The detailed characteristics of the two groups are shown in Tables 1 and 2. Each of the first three datasets in Table 1 contains only a small number of tasks, so the optimal solution can be obtained via traditional methods within an acceptable time. As seen by comparing the results obtained on datasets 1, 2 and 3, our algorithm is more advantageous than either the optimal-solution algorithm based on backtracking and pruning (OPT) or a greedy algorithm. Among the remaining datasets in Table 1, the numbers of sensors in datasets 4 and 5 are 50 and 100, respectively, whereas there are 150 sensors in datasets 6, 7, 8 and 9. In datasets 4–8, the task parameters obey three different uniform distributions, whereas in dataset 9 they obey a single uniform distribution. By comparing the results on datasets 4, 5 and 6, we can observe the influence of the number of sensors. By comparing the results on datasets 6, 7 and 8, we can observe the effect of the number of CPU cycles needed for each task. By comparing the results on datasets 6 and 9, we can observe the influence of the number of distributions describing the number of CPU cycles needed per task. Table 2 lists five datasets, each with a different number of heavy tasks; from dataset 10 to dataset 14, the number of heavy tasks gradually increases.
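The data generation described above can be sketched in a few lines; the function name and tuple layout are our own illustrative choices, with each task drawing its CPU-cycle demand and cloud-computing cost from the stated U(a, b) distributions.

```python
import random

def make_dataset(n, cycles_range, cost_range, seed=0):
    """Generate one synthetic task set: each task is a (cycles, cost) pair
    drawn from the uniform distributions U(a, b) given as (a, b) tuples."""
    rng = random.Random(seed)
    return [(rng.uniform(*cycles_range), rng.uniform(*cost_range))
            for _ in range(n)]

# e.g. a dataset-7-style instance: 150 tasks, cycles ~ U(10, 100), costs ~ U(1, 10)
tasks = make_dataset(150, (10, 100), (1, 10))
```

Fixing the seed makes each of the 20 generated instances per dataset reproducible across the compared algorithms.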
Table 4. Comparison of Discrete Dwarf Mongoose Algorithm, Backtracking and pruning (OPT) and Greedy Algorithm
Dataset | m | B | OPT | DDMO | Greedy | m | B | OPT | DDMO | Greedy | m | B | OPT | DDMO | Greedy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
dataset1 | 2 | 27.50 | 27.50 | 25.80 | 2 | 53.15 | 53.15 | 49.15 | 2 | 78.55 | 78.55 | 76.10 | |||
5 | 46.80 | 46.80 | 45.55 | 5 | 90.80 | 90.80 | 90.05 | 5 | 119.20 | 118.95 | 114.55 | ||||
8 | 54.05 | 54.05 | 54.05 | 8 | 104.40 | 104.00 | 8 | 129.00 | 129.00 | ||||||
dataset2 | 2 | 58.45 | 58.45 | 54.75 | 2 | 99.15 | 98.95 | 97.00 | 2 | 131.50 | 131.50 | 127.45 | |||
5 | 102.45 | 102.45 | 98.40 | 5 | 173.15 | 163.95 | 5 | 212.55 | 206.65 | ||||||
8 | 127.70 | 125.00 | 8 | 206.65 | 202.10 | 8 | 221.40 | 221.40 | |||||||
dataset3 | 2 | 86.10 | 86.05 | 83.05 | 2 | 135.75 | 135.75 | 132.85 | 2 | 172.95 | 169.45 | ||||
5 | 147.10 | 142.45 | 5 | 223.70 | 216.10 | 5 | 248.90 | 247.30 | |||||||
8 | 185.85 | 179.85 | 8 | 249.45 | 247.45 | 8 | 249.85 | 249.85 |
Table 5. Values of B
dataset1 | dataset2 | dataset3 | dataset4 | dataset5 | dataset6 | dataset7 | dataset8 | dataset9 |
|---|---|---|---|---|---|---|---|---|
Fig. 8 [Images not available. See PDF.]
Run time of Discrete Dwarf Mongoose Algorithm and Backtracking and pruning (OPT) on dataset 1
Fig. 9 [Images not available. See PDF.]
Run time of Discrete Dwarf Mongoose Algorithm and Backtracking and pruning (OPT) on dataset 2
Fig. 10 [Images not available. See PDF.]
Run time of Discrete Dwarf Mongoose Algorithm and Backtracking and pruning (OPT) on dataset 3
Fig. 11 [Images not available. See PDF.]
Relationship between the run time of Discrete Dwarf Mongoose Algorithm and Backtracking and pruning (OPT) and the number of tasks
Fig. 12 [Images not available. See PDF.]
Relationship between the run time of Discrete Dwarf Mongoose Algorithm and Backtracking and pruning (OPT) and the value of B
Fig. 13 [Images not available. See PDF.]
Relationship between the run time of Discrete Dwarf Mongoose Algorithm and Backtracking and pruning (OPT) and the number of edge servers
For the experiments without heavy tasks, the number of edge servers was set to three levels, namely 2, 5 and 8 servers, and the values of B corresponding to the different datasets are presented in Table 5. For the experiments with heavy tasks, the number of edge servers was likewise set to three levels, namely 8, 12 and 16 servers, and the value of B was . The reason why the numbers of edge servers in the experiments with heavy tasks were greater than those in the experiments without heavy tasks is that a single heavy task occupies an entire edge server.
The specific experimental process involved computing 20 sets of data from each dataset and then averaging the solutions.
On the first three datasets in Table 1, we compare the results of our algorithm with the optimal solution obtained by the backtracking method and with the greedy method. As the number of tasks increases, however, the time needed for the backtracking method to find the optimal solution grows exponentially; therefore, for the remaining datasets in Table 1, an optimal solution cannot be found this way within an acceptable time. Our algorithms are compared with five other heuristic algorithms: DPSO, GA, DABC, DBA and DMO, whose full names are given in Table 3. DPSO, DABC and DBA are discrete algorithms. When the GA is used to solve a discrete problem, only the individuals need to be discretized. The DMO algorithm can be applied to discrete problems by rounding its results [4].
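A minimal sketch of the rounding step that turns a continuous DMO position into a discrete assignment (the encoding 0 = cloud, 1..m = edge-server index is our assumption for illustration; [4] defines DMO itself, not this mapping):

```python
def discretize(position, m):
    """Round each coordinate of a continuous position vector to the nearest
    admissible server index: 0 means the cloud, 1..m an edge server.
    Coordinates outside [0, m] are clamped before use."""
    return [min(m, max(0, round(x))) for x in position]

assignment = discretize([-0.4, 1.6, 2.2, 7.3], m=3)
```

After rounding, the resulting vector is evaluated with the same fitness function as the natively discrete algorithms, so all six heuristics are compared on equal terms.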
Fig. 14 [Images not available. See PDF.]
Convergence rate comparison of DDMO, DMO, DBA, GA, DPSO and DABC
Fig. 15 [Images not available. See PDF.]
Convergence rate comparison of DDMO, DBA, DPSO and DABC
Table 6. Comparison of Discrete Dwarf Mongoose Algorithm, Discrete Particle Swarm Optimization Algorithm and Genetic Algorithm (Third method)
Dataset | m | B | DDMO | DPSO | GA | m | B | DDMO | DPSO | GA | m | B | DDMO | DPSO | GA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
dataset4 | 2 | 254.65 | 253.85 | 160.4 | 2 | 386.15 | 386.15 | 271.95 | 2 | 487.5 | 487.55 | 385.2 | |||
5 | 438.1 | 437.75 | 324.05 | 5 | 636.9 | 636.55 | 562.3 | 5 | 740.45 | 740.45 | 730.6 | ||||
8 | 568.5 | 567.55 | 463.85 | 8 | 748.75 | 748.6 | 743.3 | 8 | 755.85 | 755.85 | 755.85 | ||||
dataset5 | 2 | 515.6 | 515.6 | 273.3 | 2 | 779.0 | 778.65 | 478.25 | 2 | 984.4 | 984.05 | 691.25 | |||
5 | 885.35 | 885.25 | 578.4 | 5 | 1288.45 | 1288.4 | 1067.55 | 5 | 1481.8 | 1481.75 | 1448.6 | ||||
8 | 1152.5 | 1151.75 | 875.55 | 8 | 1498.6 | 1498.25 | 1477.15 | 8 | 1509.6 | 1509.6 | 1509.6 | ||||
dataset6 | 2 | 709.3 | 709.0 | 362.85 | 2 | 1086.8 | 1086.6 | 640.4 | 2 | 1392.45 | 1392.05 | 937.4 | |||
5 | 1246.7 | 1246.1 | 783.6 | 5 | 1845.75 | 1845.4 | 1499.75 | 5 | 2134.9 | 2134.7 | 2094.2 | ||||
8 | 1642.05 | 1641.0 | 1208.5 | 8 | 2152.85 | 2152.45 | 2145.85 | 8 | 2157.25 | 2156.25 | 2150.25 | ||||
dataset7 | 2 | 363.45 | 363.05 | 161.05 | 2 | 560.9 | 560.7 | 274.6 | 2 | 721.6 | 721.45 | 365.35 | |||
5 | 644.0 | 643.6 | 331.2 | 5 | 984.5 | 983.75 | 559.25 | 5 | 1254.9 | 1254.3 | 784.85 | ||||
8 | 859.65 | 858.35 | 480.6 | 8 | 1302.8 | 1301.15 | 834.85 | 8 | 1642.75 | 1641.25 | 1224.55 | ||||
dataset8 | 2 | 351.85 | 351.45 | 152.75 | 2 | 546.4 | 545.85 | 267.75 | 2 | 702.6 | 702.5 | 356.45 | |||
5 | 627.4 | 626.9 | 326.5 | 5 | 964.9 | 964.4 | 555.05 | 5 | 1241.2 | 1240.7 | 778.75 | ||||
8 | 838.1 | 836.7 | 473.6 | 8 | 1290.75 | 1289.2 | 829.85 | 8 | 1639.65 | 1638.25 | 1213.0 | ||||
dataset9 | 2 | 111.5 | 111.35 | 56.7 | 2 | 173.8 | 173.65 | 82.15 | 2 | 222.2 | 222.1 | 109.5 | |||
5 | 199.1 | 198.9 | 102.6 | 5 | 298.2 | 297.85 | 159.55 | 5 | 374.25 | 373.8 | 215.2 | ||||
8 | 261.75 | 260.85 | 139.05 | 8 | 387.15 | 386.45 | 224.75 | 8 | 481.85 | 481.45 | 309.8 |
Bold indicates the best of the three results
Table 7. Comparison of Discrete Dwarf Mongoose Algorithm, Discrete Artificial Bee Colony Algorithm and Discrete Bat Algorithm (Third method)
Dataset | m | B | DDMO | DABC | DBA | m | B | DDMO | DABC | DBA | m | B | DDMO | DABC | DBA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
dataset4 | 2 | 254.65 | 251.45 | 253.65 | 2 | 386.15 | 383.2 | 383.0 | 2 | 487.5 | 485.45 | 481.65 | |||
5 | 438.1 | 430.3 | 433.2 | 5 | 636.9 | 633.15 | 627.8 | 5 | 740.45 | 738.7 | 735.7 | ||||
8 | 568.5 | 560.65 | 554.65 | 8 | 748.75 | 747.55 | 744.85 | 8 | 755.85 | 755.85 | 755.85 | ||||
dataset5 | 2 | 515.6 | 513.25 | 510.5 | 2 | 779.0 | 776.6 | 764.15 | 2 | 984.4 | 982.45 | 965.4 | |||
5 | 885.35 | 882.05 | 873.25 | 5 | 1288.45 | 1287.2 | 1267.1 | 5 | 1481.8 | 1481.1 | 1475.35 | ||||
8 | 1152.5 | 1148.5 | 1129.55 | 8 | 1498.6 | 1497.65 | 1494.3 | 8 | 1509.6 | 1509.6 | 1509.1 | ||||
dataset6 | 2 | 709.3 | 707.4 | 700.05 | 2 | 1086.8 | 1085.25 | 1065.5 | 2 | 1392.45 | 1391.15 | 1370.6 | |||
5 | 1246.7 | 1243.85 | 1227.9 | 5 | 1845.75 | 1844.65 | 1822.05 | 5 | 2134.9 | 2134.55 | 2129.35 | ||||
8 | 1642.05 | 1638.0 | 1614.35 | 8 | 2152.85 | 2152.3 | 2151.9 | 8 | 2157.25 | 2157.25 | 2156.95 | ||||
dataset7 | 2 | 363.45 | 361.55 | 361.15 | 2 | 560.9 | 559.7 | 554.7 | 2 | 721.6 | 720.75 | 712.75 | |||
5 | 644.0 | 638.15 | 637.1 | 5 | 984.5 | 980.5 | 971.4 | 5 | 1254.9 | 1252.95 | 1237.1 | ||||
8 | 859.65 | 852.9 | 846.5 | 8 | 1302.8 | 1297.55 | 1283.05 | 8 | 1642.75 | 1639.25 | 1618.1 | ||||
dataset8 | 2 | 351.85 | 348.8 | 350.25 | 2 | 546.4 | 544.6 | 540.2 | 2 | 702.6 | 701.5 | 691.8 | |||
5 | 627.4 | 622.4 | 620.0 | 5 | 964.9 | 961.85 | 953.5 | 5 | 1241.2 | 1237.85 | 1222.0 | ||||
8 | 838.1 | 826.6 | 825.3 | 8 | 1290.75 | 1284.5 | 1270.7 | 8 | 1639.65 | 1635.9 | 1615.95 | ||||
dataset9 | 2 | 111.5 | 110.8 | 111.35 | 2 | 173.8 | 172.7 | 171.2 | 2 | 222.2 | 221.5 | 218.5 | |||
5 | 199.1 | 196.45 | 196.3 | 5 | 298.2 | 296.55 | 293.85 | 5 | 374.25 | 372.8 | 368.6 | ||||
8 | 261.75 | 259.7 | 258.6 | 8 | 387.15 | 385.8 | 382.35 | 8 | 481.85 | 480.6 | 475.3 |
Bold indicates the best of the three results
Table 8. Experiments with heavy tasks in the task set
Dataset | m | B | A1 | DDMO | CMDL | m | B | A1 | DDMO | CMDL | m | B | A1 | DDMO | CMDL |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
dataset10 | 8 | 245.1 | 245.1 | 245.1 | 12 | 484.1 | 484.5 | 484.5 | 16 | 564.35 | 564.4 | 564.35 | |||
dataset11 | 8 | 226.15 | 226.15 | 239.25 | 12 | 525.1 | 525.5 | 525.5 | 16 | 598.15 | 598.15 | 600.2 | |||
dataset12 | 8 | 192.7 | 192.7 | 224.9 | 12 | 581.75 | 582.3 | 582.85 | 16 | 621.85 | 623.25 | 623.9 | |||
dataset13 | 8 | 157.6 | 157.6 | 187.2 | 12 | 589.25 | 589.25 | 589.5 | 16 | 591.35 | 591.5 | 596.95 | |||
dataset14 | 8 | 154.7 | 154.7 | 196.35 | 12 | 536.8 | 538.0 | 538.0 | 16 | 594.75 | 594.95 | 597.45 |
Bold indicates the best of the three results
Fig. 16 [Images not available. See PDF.]
Result of DDMO- and DDMO on dataset 1
Fig. 17 [Images not available. See PDF.]
Result of DDMO- and DDMO on dataset 2
Fig. 18 [Images not available. See PDF.]
Result of DDMO- and DDMO on dataset 3
As mentioned earlier, the numbers of tasks in datasets 1–3 in Table 1 are small, so we conducted experiments on these three datasets to compare the performance of our algorithm with those of the OPT method and the greedy method. The detailed results are shown in Table 4. The table shows that the result of our algorithm is essentially the same as the optimal solution (OPT), whereas the solution found by the greedy algorithm is worse than both. Note that for some cases in this table, no value is given for the OPT solution because it could not be found within the allowed time (5 s). Figures 8, 9 and 10 show the time consumed by the DDMO and the backtracking and pruning algorithm on each of these three datasets. These figures show that in the smallest settings, the time consumption of the backtracking and pruning algorithm on dataset 1 is less than that of DDMO. We also experimentally examined the effects of the number of tasks, the value of B and the number of edge servers on the run times of our algorithm and the backtracking and pruning method. As shown in Figs. 11, 12 and 13, when the number of tasks, the value of B and the number of edge servers are all small, the backtracking and pruning method consumes less time than DDMO. However, as these quantities increase, the time consumed by the backtracking and pruning method increases exponentially, whereas the run time of DDMO changes little.
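The run-time behaviour discussed here is easy to see in a minimal sketch of backtracking with pruning for the single-edge-server case (the instance is hypothetical, and this generic branch-and-bound only stands in for the OPT implementation): each task either runs on the edge server or goes to the cloud, so the search tree has up to 2^n leaves, and pruning discards a branch only when even taking every remaining task could not beat the incumbent.

```python
def opt_backtrack(cycles, values, budget):
    """Backtracking with pruning: maximize the total value of tasks run on a
    single edge server without exceeding its energy budget."""
    n = len(cycles)
    # suffix[i] = total value of tasks i..n-1, an optimistic bound for pruning
    suffix = [0] * (n + 1)
    for i in range(n - 1, -1, -1):
        suffix[i] = suffix[i + 1] + values[i]
    best = 0

    def go(i, used, val):
        nonlocal best
        best = max(best, val)
        if i == n or val + suffix[i] <= best:
            return                          # prune: bound cannot beat incumbent
        if used + cycles[i] <= budget:      # branch 1: run task i on the edge server
            go(i + 1, used + cycles[i], val + values[i])
        go(i + 1, used, val)                # branch 2: send task i to the cloud

    go(0, 0, 0)
    return best

# Hypothetical five-task instance with a heavy first task
opt_val = opt_backtrack([30, 10, 10, 10, 1], [30, 15, 15, 15, 2], 31)
```

Pruning helps on easy instances, but in the worst case the number of explored nodes still grows exponentially with the number of tasks, which matches the trends in Figs. 11, 12 and 13.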
Figures 14 and 15 illustrate the convergence of each algorithm. Figure 14 shows that the convergence speeds of the GA and DMO are slower than those of the other four algorithms, and their results after 1000 iterations are not as good as those of the other four algorithms. From Fig. 15, it is additionally evident that the results of the DABC and the DBA are not as good as those of DPSO and DDMO, and the convergence speed of DDMO is significantly faster than that of DPSO.
To clearly show that the discrete dwarf mongoose algorithm improved by the approximation algorithm performs better than it does without such improvement, we also ran the DDMO without the improvement provided by Algorithm 1. In this experiment, the two algorithms were tested on datasets 1, 2 and 3. The results are shown in Figs. 16, 17 and 18, in which DDMO- and DDMO denote the DDMO without and with the improvement via the approximation algorithm, respectively. These figures show that DDMO always outperforms DDMO-.
In Table 6, with the exception of one result on dataset 4 with m = 2, where DPSO is slightly better than DDMO, DDMO is no weaker than either of the other two algorithms (DPSO and GA). Table 7 similarly shows the superiority of DDMO over DABC and DBA. The results in Tables 6 and 7 show that DDMO is not noticeably affected by changes in the distribution of the data, the number of edge servers, or the value of B.
In Table 8, CMDL denotes the combination method of DDMO and Algorithm 1, and A1 denotes the LPT-based algorithm. In traditional cloud-edge collaborative task dispatching, a heavy task is always placed on the cloud server because no edge server can execute it. In the scenario considered in this study, however, such a task can be processed by an edge server, although doing so requires that edge server's full resources. We have not found an existing algorithm designed specifically for this type of problem; therefore, the table compares the proposed CMDL method with its two component algorithms, namely, Algorithm 1 and the improved DDMO. Although neither component is designed specifically for such problems, both can handle them. Table 8 demonstrates that CMDL performs better than either of the other two algorithms.
Conclusion
In this research, we propose a smart-farm task assignment model based on a cloud-edge collaboration framework. The mathematical model considered in this study describes a nonconvex mathematical programming problem, which cannot be solved using traditional tools for combinatorial optimization (CPLEX, Gurobi, etc.). The problem is also NP-hard (even with only one edge server), meaning that considerable time and resources are needed to find the optimal solution directly. Therefore, we design an approximation algorithm (Algorithm 1) inspired by the LPT algorithm for the multimachine job scheduling problem. To obtain better solutions, we additionally modify the dwarf mongoose optimization algorithm into a discrete version (DDMO) and use Algorithm 1 to improve it. The experiments show that DDMO can obtain the optimal solution for assignment problems with small numbers of tasks. Moreover, whereas the run time of the backtracking and pruning method increases exponentially with the number of tasks, the number of edge servers and the energy budget B, the run time of DDMO changes little; this is the main advantage of DDMO. In particular, the DDMO algorithm can obtain results that the backtracking and pruning method cannot obtain within the tolerated time, and its results are always better than those of the greedy algorithm. In addition, we demonstrate that the results of the discrete dwarf mongoose algorithm optimized by Algorithm 1 are superior to those of the unimproved version and of the other heuristic algorithms considered (DPSO, GA, DABC, DBA and DMO). Task sets that include heavy tasks are treated separately: for such sets, Algorithm 1 selects light tasks after the heavy tasks have been assigned via DDMO. Through testing, we verify that the performance of this combined CMDL method is superior to that of either DDMO or Algorithm 1 alone.
Author contributions
Conceptualization, WL; methodology, MX and WL; software, MX; validation, MX; formal analysis, QS and MX; investigation, MX; resources, QS and MX; data curation, QS and MX; writing-original draft preparation, MX; writing-review and editing, QS and MX; visualization, MX; supervision, XZ and WL; project administration, XZ; funding acquisition, XZ. All authors have read and agreed to the published version of the manuscript.
Funding
This work was supported in part by the National Natural Science Foundation of China [Grant nos. 62266051, 12071417, and 62062065] and the Basic Research Project of Yunnan Province [Grant no. 202401AT070471].
Data availability
Enquiries about data availability should be directed to the authors.
Declarations
Conflict of interest
The authors declare no competing interests.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
1. Liu, X; Li, W; Xie, R. A primal-dual approximation algorithm for the k-prize-collecting minimum power cover problem. Optim. Lett.; 2022; 16,
2. Liu, X; Li, W; Dai, H. Approximation algorithms for the minimum power cover problem with submodular/linear penalties. Theoret. Comput. Sci.; 2022; 923, pp. 256-270.MathSciNet ID: 4436572[DOI: https://dx.doi.org/10.1016/j.tcs.2022.05.012]
3. Zhang, Q., Li, W., Su, Q., Zhang, X.: A local-ratio-based power control approach for capacitated access points in mobile edge computing. In: Proceedings of the 6th International Conference on High Performance Compilation, Computing and Communications, pp. 175–182 (2022)
4. Agushaka, JO; Ezugwu, AE; Abualigah, L. Dwarf mongoose optimization algorithm. Comput. Methods Appl. Mech. Eng.; 2022; 391,MathSciNet ID: 4372742[DOI: https://dx.doi.org/10.1016/j.cma.2022.114570]
5. Choi, J., Lim, D., Choi, S., Kim, J., Kim, J.: Light control smart farm monitoring system with reflector control. In: 2020 20th International Conference on Control, Automation and Systems (ICCAS), pp. 69–74 (2020). IEEE
6. Ghosh, S., Sayyed, S., Wani, K., Mhatre, M., Hingoliwala, H.A.: Smart irrigation: a smart drip irrigation system using cloud, android and data mining. In: 2016 IEEE International Conference on Advances in Electronics, Communication and Computer Technology (ICAECCT), pp. 236–239 (2016). IEEE
7. Varghese, B., Wang, N., Barbhuiya, S., Kilpatrick, P., Nikolopoulos, D.S.: Challenges and opportunities in edge computing. In: 2016 IEEE International Conference on Smart Cloud (SmartCloud), pp. 20–26 (2016). IEEE
8. Yang, C; Lan, S; Wang, L; Shen, W; Huang, GG. Big data driven edge-cloud collaboration architecture for cloud manufacturing: a software defined perspective. IEEE Access; 2020; 8, pp. 45938-45950. [DOI: https://dx.doi.org/10.1109/ACCESS.2020.2977846]
9. Li, W; Liu, X; Cai, X; Zhang, X. Approximation algorithm for the energy-aware profit maximizing problem in heterogeneous computing systems. J. Parallel Distrib. Comput.; 2019; 124, pp. 70-77. [DOI: https://dx.doi.org/10.1016/j.jpdc.2018.10.013]
10. Zhao, Y., Zhou, S., Zhao, T., Niu, Z.: Energy-efficient task offloading for multiuser mobile cloud computing. In: 2015 IEEE/CIC International Conference on Communications in China (ICCC), pp. 1–5 (2015). IEEE
11. Yang, L; Liu, B; Cao, J; Sahni, Y; Wang, Z. Joint computation partitioning and resource allocation for latency sensitive applications in mobile edge clouds. IEEE Trans. Serv. Comput.; 2019; 14,
12. Dinh, TQ; Tang, J; La, QD; Quek, TQ. Offloading in mobile edge computing: Task allocation and computational frequency scaling. IEEE Trans. Commun.; 2017; 65,
13. Sardellitti, S., Barbarossa, S., Scutari, G.: Distributed mobile cloud computing: joint optimization of radio and computational resources. In: 2014 IEEE Globecom Workshops (GC Wkshps), pp. 1505–1510 (2014). IEEE
14. Chen, M.-H., Dong, M., Liang, B.: Joint offloading decision and resource allocation for mobile cloud with computing access point. In: 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3516–3520 (2016). IEEE
15. Du, J; Zhao, L; Feng, J; Chu, X. Computation offloading and resource allocation in mixed fog/cloud computing systems with min-max fairness guarantee. IEEE Trans. Commun.; 2017; 66,
16. Zhang, G; Zhang, W; Cao, Y; Li, D; Wang, L. Energy-delay tradeoff for dynamic offloading in mobile-edge computing system with energy harvesting devices. IEEE Trans. Ind. Inform.; 2018; 14,
17. Zhang, W; Wen, Y; Wu, DO. Collaborative task execution in mobile cloud computing under a stochastic wireless channel. IEEE Trans. Wirel. Commun.; 2014; 14,
18. Zhang, W., Wen, Y., Wu, D.O.: Energy-efficient scheduling policy for collaborative execution in mobile cloud computing. In: 2013 Proceedings IEEE Infocom, pp. 190–194 (2013). IEEE
19. Li, G., Cai, J., Chen, X., Su, Z.: Nonlinear online incentive mechanism design in edge computing systems with energy budget. IEEE Trans. Mob. Comput. (2022)
20. Xu, S; Zhang, Z; Kadoch, M; Cheriet, M. A collaborative cloud-edge computing framework in distributed neural network. EURASIP J. Wirel. Commun. Netw.; 2020; 2020,
21. Zhong, J; Xiong, X. An orderly EV charging scheduling method based on deep learning in cloud-edge collaborative environment. Adv. Civil Eng.; 2021; 2021, pp. 1-12. [DOI: https://dx.doi.org/10.1155/2021/6690610]
22. Ma, J; Zhou, H; Liu, C; Mingcheng, E; Jiang, Z; Wang, Q. Study on edge-cloud collaborative production scheduling based on enterprises with multi-factory. IEEE Access; 2020; 8, pp. 30069-30080. [DOI: https://dx.doi.org/10.1109/ACCESS.2020.2972914]
23. Wang, J; Zhao, L; Liu, J; Kato, N. Smart resource allocation for mobile edge computing: a deep reinforcement learning approach. IEEE Trans. Emerg. Top. Comput.; 2019; 9,
24. Jia, Q; Chen, S; Yan, Z; Li, Y. Optimal incentive strategy in cloud-edge integrated demand response framework for residential air conditioning loads. IEEE Trans. Cloud Comput.; 2021; 10,
25. Tao, Y; Qiu, J; Lai, S. A hybrid cloud and edge control strategy for demand responses using deep reinforcement learning and transfer learning. IEEE Trans. Cloud Comput.; 2021; 10,
26. Mirjalili, S; Mirjalili, SM; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw.; 2014; 69,
27. Poli, R; Kennedy, J; Blackwell, T. Particle swarm optimization. Swarm Intell.; 2007; 1,
28. Kashan, AH; Karimi, B. A discrete particle swarm optimization algorithm for scheduling parallel machines. Comput. Ind. Eng.; 2009; 56,
29. Pan, Q-K; Fatih Tasgetiren, M; Suganthan, PN; Chua, TJ. A discrete artificial bee colony algorithm for the lot-streaming flow shop scheduling problem. Inf. Sci.; 2011; 181,
30. Liu, L; Luo, S; Guo, F; Tan, S. Multi-point shortest path planning based on an improved discrete bat algorithm. Appl. Soft Comput.; 2020; 95, [DOI: https://dx.doi.org/10.1016/j.asoc.2020.106498]
31. Zhou, Z; Liu, F; Chen, S; Li, Z. A truthful and efficient incentive mechanism for demand response in green datacenters. IEEE Trans. Parallel Distrib. Syst.; 2018; 31,
32. Miettinen, A.P., Nurminen, J.K.: Energy efficiency of mobile clients in cloud computing. In: 2nd USENIX Workshop on Hot Topics in Cloud Computing (HotCloud 10) (2010)
33. Melendez, S., McGarry, M.P.: Computation offloading decisions for reducing completion time. In: 2017 14th IEEE Annual Consumer Communications & Networking Conference (CCNC), pp. 160–164 (2017). IEEE
© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2024. Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.