1. Introduction
The Internet of Things (IoT) has expanded rapidly, providing services in domains such as traffic management, vehicular networks, energy management, healthcare, and smart homes [1,2,3]. Meeting these diverse requirements means connecting end devices such as sensors, smartphones, actuators, advanced vehicles, smart appliances, and smart meters. Real-time tasks demand heterogeneous resources for processing, and processing them on resource-limited end devices degrades performance, which forces the tasks to be shifted to other computing environments. Cloud computing, with its large resource centers, can process these tasks with on-demand resource provisioning.
Cloud servers are usually located far from the end devices. As the number of end devices grows, so does the volume of offloaded tasks. This excessive data transfer is likely to create network congestion and degrade network performance. Many applications cannot afford the delay of processing tasks in the cloud [4,5], as it harms their delay sensitivity.
This tension is addressed by fog computing [6], which operates as a middle tier between the cloud and end devices. Being closer to end devices, fog computing provides high-quality services by satisfying the requirements of delay-sensitive tasks and reducing the workload of the cloud server.
Devices with computation, storage, and communication capabilities (routers, gateways, embedded servers, controllers, etc.) are treated as fog nodes. With their limited resources and computation capability, these nodes may not satisfy the heterogeneous resource requirements of multiple tasks executing at a time [7,8]. Improper resource allocation may change the execution order of tasks, leading to low throughput and missed deadlines.
The majority of research on IoT applications explores fog computing or cloud computing individually. A relatively unexplored dimension is a hybrid environment that handles both delay-sensitive and non-sensitive data with equal efficacy. This hybrid environment, termed the cloud–fog model, combines the cloud and fog environments, and comparatively few studies have been carried out on it. Handling real-time heterogeneous tasks with different features, such as deadline, data size, arrival time, and execution time, is a further challenge in the cloud–fog model. In the present work, the first task at hand is to process heterogeneous tasks through multiple queues. Because fog nodes are limited in resources and resource allocation is an NP-hard problem [9,10], we are motivated to use meta-heuristic techniques for allocating resources optimally. The recent whale optimization algorithm (WOA) gives near-optimal results in many complex situations; our second motivation is therefore to employ WOA to explore optimal resource allocations. Energy consumption is another issue, contributing to the worldwide carbon emissions problem; the third motivation is thus to minimize energy consumption in the cloud–fog model [11,12,13].
The primary objective is resource allocation for heterogeneous real-time tasks in the cloud–fog model within the deadline requirements of tasks, improving makespan, task completion ratio, cost, and energy consumption. In this paper, a three-tier cloud–fog model with a parallel virtual queue architecture is considered.
The significant contributions of this work are as follows:
1. The task classification and buffering (TCB) module is designed for classifying tasks into different types using dynamic fuzzy c-means clustering, and these classified tasks are buffered in parallel virtual queues based on enhanced least laxity time scheduling.
2. Another module, named task offloading and optimal resource allocation (TOORA), is modeled to decide whether a task is offloaded to the cloud or the fog, and uses WOA to allocate the resources of the fog nodes.
3. The approach is evaluated on metrics such as makespan, cost, energy consumption, and tasks successfully completed within the deadline, and is compared with other algorithms, such as SJF, MOMIS, and FLRTS, for performance evaluation.
4. When 100 to 700 tasks are executed on 15 fog nodes, the results show that the WORA algorithm saves 10.3% of the average cost of MOMIS and 21.9% of the average cost of FLRTS. In terms of energy consumption, WORA consumes 18.5% less than MOMIS and 30.8% less than FLRTS. WORA also performs 6.4% better than MOMIS and 12.9% better than FLRTS in terms of makespan, and 2.6% better than MOMIS and 4.3% better than FLRTS in terms of successful task completion.
The remainder of this paper is organized as follows. Section 2 surveys resource allocation in different environments. Section 3 describes the system model and formulates the problem. Section 4 describes the optimal resource allocation algorithm for fog nodes. Section 5 presents the performance evaluation of the proposed algorithm. Finally, Section 6 concludes and outlines future work.
2. Related Work
Fog computing is currently a highly active research area in service management, and many researchers have focused on its concept, architecture, and resource management issues. The fog computing paradigm as a virtual platform was introduced by Bonomi et al. [14]. Refs. [15,16,17] highlighted open issues and challenges in fog computing that remain to be solved.
The cloud contains massive storage and processors with high-speed network connectivity and various application services [18,19,20]. For assigning services to suitable service nodes with an appropriate distribution of workload across nodes, Chen et al. [21] proposed RQCSA and FSQSM, which improved efficiency and minimized queue waiting time and makespan. Behzad et al. [22] proposed a queue-based hybrid scheduling algorithm that stores jobs in the queue in priority order; the job with the lowest quantum time is allocated the CPU and executed. Venkataramanan et al. [23] studied the queue-overflow problem in wireless scheduling algorithms. In [24], queue stability was achieved by applying a reinforcement learning approach to Lyapunov optimization for resource allocation in edge computing. Similarly, Eryilmaz and Srikant [25] showed that the queue length is bounded by setting the Lyapunov function drift; the Lyapunov function is therefore important for controlling the virtual queue length. Some researchers have also applied queuing theory to fog computing. Iyapparaja et al. [26] designed a queueing theory-based cuckoo search (QTCS) model to improve the QoS of resource allocation. Li et al. [4] considered heterogeneous tasks placed in parallel virtual queues, with task offloading decided by the urgency of the task based on laxity time.
In the real world, continuously streaming data are generated that require online analysis. Most adaptive clustering is application-specific, so Sandhir and Kumar [27,28] proposed a modified fuzzy c-means clustering technique called dynamic fuzzy c-means (dFCM) clustering, validated with a synthetic dataset. Most of the researchers in [1,4,29] considered laxity time for prioritizing tasks: the task with the lowest laxity time is executed first. Laxity time is estimated from the deadline, execution time, and current time, and also decides task offloading. Ali et al. [30] proposed a fuzzy logic task scheduling algorithm for deciding whether tasks execute in a fog node or the cloud center. Tasks with constraints such as deadline and data size exploited the heterogeneous resources of fog nodes, improving makespan, average turnaround time, delay rate, and successful task completion ratio. According to Pham et al. [10], resource allocation is a non-linear programming problem and NP-hard. Such problems can be solved with heuristic, meta-heuristic, or hybrid methods. As heuristic methods offer no optimality guarantee, one is led to adopt a meta-heuristic method such as the whale optimization algorithm, a recent efficient optimizer; in [10], WOA was used for allocating power, securing throughput, and offloading in mobile edge computing. Hosseini et al. [31] used WOA for optimal resource allocation and minimized the total runtime of requested services in a cloud center. Several optimization techniques on other platforms have also been studied [32,33,34].
Several studies have addressed resource allocation problems in different networks. Table 1 summarizes the methodologies, policies, and limitations of this work, some of which is discussed here. Li et al. [4] combined fuzzy c-means clustering and particle swarm optimization in a new resource scheduling algorithm that improved user satisfaction. Rafique et al. [9] proposed a novel bio-inspired hybrid algorithm (NBIHA) for task scheduling and resource allocation at fog nodes, which reduced average response time. Sun et al. [35] designed a resource scheduling model using an improved non-dominated sorting genetic algorithm (NSGA-II) within fog clusters, which improved task execution and reduced service latency. In [36], Taneja and Davy mapped the modules of the fog–cloud model with a module mapping algorithm that outperformed traditional cloud infrastructure in energy consumption, network usage, and end-to-end latency. In [11], Mao et al. designed separate energy-aware and time-aware algorithms for handling tasks in a heterogeneous environment and developed the combined algorithm ETMCTSA, which manages and controls cloud performance through an algorithm parameter. Bharti and Mavi [37] adopted ETMCTSA and found that underutilized cloud resources can increase resource usage. Anu and Singhrova [38] modeled a P-GA-PSO algorithm that allocates resources efficiently in fog computing, reducing delay, waiting time, and energy consumption compared to round-robin and genetic algorithms. In a three-layer computing network, Jia et al. [39] presented an extension of the deferred acceptance algorithm called the double-matching strategy (DA-DMS), a cost-efficient resource allocation in which a paired partner cannot be changed unilaterally for more cost-efficiency. In [40], an algorithm applying a Pareto-domination mechanism to particle swarm optimization searched for multi-objective optimal solutions. Ni et al. [41] modeled a dynamic algorithm based on priced timed Petri nets (PTPN), in which users can autonomously choose appropriate resources from the available pool, considering both price and cost of task completion. Many other resource allocation algorithms for different systems are found in [42,43,44,45,46,47,48].
Most of the research discussed above addresses resource allocation in the fog environment, the cloud environment, or wireless networks, and tries to improve metrics such as response time, makespan, energy consumption, and overhead. This paper adopts WOA to allocate resources optimally. Metrics such as cost, makespan, task completion ratio, and energy consumption are improved and compared with recent studies. Table 2 lists all abbreviations used in this paper.
3. System Model
Considering end devices, a fog layer, and a cloud layer, a three-tier cloud–fog model is designed, as shown in Figure 1.
End devices: The end devices include sensors, actuators, mobile vehicles, smart cameras, etc. They generate tasks with different resource requirements, and these tasks are classified and buffered in the fog node for further execution.
Fog layer: Fog nodes are network devices (e.g., controllers, routers, gateways, embedded servers). Every fog node consists of a set of containers. Tasks require different resources (e.g., CPU, bandwidth, memory, and storage configuration) to process their data; accordingly, each container contains a set of resource blocks. Due to the limited resources of fog nodes, not all tasks can be processed at fog nodes simultaneously, which necessitates buffering tasks in queues.
Cloud layer: This layer has a cloud server with effectively unlimited resources. The cloud is placed far from the fog nodes, causing data transmission latency. Even so, once a task is transferred to the cloud, it completes processing without waiting for resources.
In the fog layer, two modules are designed as follows:
Task classification and buffering (TCB): On arrival of tasks at the fog node, similar types of tasks are gathered and buffered in parallel virtual queues according to their execution order.
Task offloading and optimal resource allocation (TOORA): Not all tasks can be assigned fog resources by their deadlines. A task may wait in a queue so long that its execution fails; such tasks can be transferred to the cloud layer to meet their deadlines. Since transferring tasks increases transmission cost, TOORA tries to assign as many tasks as possible to fog resources. Table 3 lists all notations used in this paper.
3.1. Process Flow Model
The process flow model shows how tasks are executed in the cloud–fog model by assigning the limited resources of fog nodes. The steps, shown in Figure 2, are as follows.
1. Step-1: The end devices collect data and send task requests to the nearest fog node.
2. Step-2: The task requests are transferred from the fog node to the TCB.
3. Step-3: The resource usage, data size, arrival time, deadline, etc., are estimated.
4. Step-4: Tasks are classified into different types in the TCB so they can be buffered in the waiting queues by an ordering algorithm.
5. Step-5: Tasks are transferred to the waiting queues for buffering.
6. Step-6: A set of tasks from the queues is transferred to the TOORA for further processing.
7. Step-7: TOORA makes the offloading decision, so each task may execute in the cloud server or a fog node.
8. Step-8: Tasks meant for offloading are transferred to the cloud server; tasks that cannot achieve their deadlines are sent back to the end devices.
9. Step-9: An optimal resource allocation scheduler runs in the TOORA module to optimally assign fog node resources to the tasks.
10. Step-10: As the result of the algorithm, tasks are assigned to fog nodes.
11. Step-11: Each task is processed in its respective node.
12. Step-12: After a task completes execution, the result is sent back to the end device through the fog node.
3.2. Problem Formulation
We consider a set of fog nodes, where every fog node consists of a set of containers and each container contains a set of resource blocks; each resource block is a collection of CPU, bandwidth, and memory capacities. A fog node has limited resource capacity. The total resource of a fog node is
(1)
The allocated resources of a fog node cannot exceed its total resources. Let a set of tasks be processed at time t in a fog node, where each task has a different resource requirement configuration. The total resource requirement is
(2)
The constraint of resource allocation can be represented as follows:
(3)
Example: Suppose a fog node has three containers, and each container has three resource blocks with different configurations. A task whose resource requirement fits within some block can be allocated the resources of the fog node, and in general there are several feasible allocations; considering more fog nodes increases the available resources. Another task whose requirement exceeds every available block cannot be allocated in the fog nodes and is offloaded to the cloud server. Therefore, our problem is to decide whether each task is executed in the cloud server or a fog node, and to optimally allocate the resources of the fog nodes to the tasks processed at time t. A minimal feasibility check is sketched below.
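To make the allocation decision concrete, the following minimal Python sketch checks whether a task's (CPU, bandwidth, memory) demand fits any resource block of any fog node; if not, the task becomes a candidate for cloud offloading. The data layout and names (`ResourceBlock`, `find_feasible_block`) are illustrative assumptions, not the paper's implementation.

```python
from typing import List, Optional, Tuple

# A resource block is a (cpu_mips, bandwidth_mbps, memory_mb) capacity triple;
# a fog node is a list of containers, each holding several resource blocks.
ResourceBlock = Tuple[int, int, int]
FogNode = List[List[ResourceBlock]]

def find_feasible_block(fog_nodes: List[FogNode],
                        demand: ResourceBlock) -> Optional[Tuple[int, int, int]]:
    """Return (node, container, block) indices of the first resource block
    that satisfies the task's demand, or None if the task must be offloaded."""
    for n, node in enumerate(fog_nodes):
        for c, container in enumerate(node):
            for r, block in enumerate(container):
                if all(cap >= req for cap, req in zip(block, demand)):
                    return (n, c, r)
    return None  # no fog resources fit -> candidate for cloud offloading
```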
4. Proposed Work
To solve the above problem, two modules—task classification and buffering (TCB) and task offloading and optimal resource allocation (TOORA)—are modeled. Their working processes are given below.
4.1. Task Classification and Buffering (TCB)
Because end devices lack computation capability, tasks are transferred to the nearest fog node, and latency-sensitive tasks need to be processed first. As noted above, fog nodes are limited in resources, and resource allocation cannot be predicted immediately, which forces tasks to be buffered in a queue. If the queue is long, the time complexity is high. Similar to [4], parallel virtual queues are considered, buffering the same type of tasks into separate virtual queues, which helps to reduce the time complexity, as shown in Figure 3.
Parallel virtual queues reduce the time complexity.
If a single queue of length M is considered for buffering tasks, the time complexity of buffering all tasks grows with M. If the four types of tasks are buffered in four separate virtual queues, each queue has length M/4, so the time complexity decreases accordingly. □
Real-time tasks stream continuously from end devices and are transferred to fog nodes. Each task is represented by its arrival time, execution lower bound time, execution upper bound time, data size, number of instructions, response time, and deadline. We assume tasks arrive at fog nodes in equal time intervals. Several attributes of a task cannot be known before its arrival, and the exact execution time of a task is also unknown before its completion; however, the upper and lower bounds of execution time can be estimated using the machine learning algorithms proposed in [49], and the estimated execution time should not exceed the upper bound. Given these parameters, tasks can be classified into different types, and similar tasks can be grouped using a clustering algorithm. Because tasks overlap, the FCM clustering algorithm is applied so that each task has a strong or weak association with each cluster. For the set of tasks T, the association with the clusters is governed by the objective function
J_m = \sum_{i=1}^{n} \sum_{j=1}^{C} u_{ij}^{m} \lVert x_i - v_j \rVert^2 (4)
where n is the total number of tasks, m > 1 is the fuzziness index, x_i is the feature vector of the ith task, v_j is the jth cluster center, and u_{ij} represents the membership of the ith task to the jth cluster center. J_m is minimized when u_{ij} \in [0, 1] and \sum_{j=1}^{C} u_{ij} = 1 for all i and j. Then u_{ij} is
u_{ij} = 1 \Big/ \sum_{k=1}^{C} \left( \lVert x_i - v_j \rVert / \lVert x_i - v_k \rVert \right)^{2/(m-1)} (5)
The cluster center can be calculated as
v_j = \sum_{i=1}^{n} u_{ij}^{m} x_i \Big/ \sum_{i=1}^{n} u_{ij}^{m} (6)
The iteration is repeated until J_m reaches a minimum or the error criterion is satisfied; with error threshold \varepsilon, the stopping condition is \max_{i,j} |u_{ij}^{(k+1)} - u_{ij}^{(k)}| < \varepsilon. The tasks are assigned to clusters using a validity index. The Xie–Beni index [27,28], one of the most widely used validity indices, is used here and is defined as
V_{XB} = \sum_{j=1}^{C} \sum_{i=1}^{n} u_{ij}^{m} \lVert x_i - v_j \rVert^2 \Big/ \left( n \cdot \min_{j \neq k} \lVert v_j - v_k \rVert^2 \right) (7)
FCM can classify the tasks arriving in a given time interval t. As tasks stream continuously, dFCM [27] is used to update cluster centers adaptively, generating a new cluster center automatically whenever a new cluster is needed. Initially, c clusters are generated. On arrival of new tasks, their memberships in the present clusters are calculated. If the maximum membership value of a task reaches or exceeds the membership threshold, the task is absorbed into an existing cluster; otherwise, the task deviates from all existing clusters, and a new cluster center is generated for it. The membership threshold thus avoids evaluating cluster validity every time tasks arrive: if the tasks satisfy the cluster membership, there is no need to check for other, better clusters. The validity index is evaluated only when new centers are dissimilar to old ones, i.e., when the condition
(8)
holds.
Let there be C clusters at time t; when the maximum membership value of a task is lower than the threshold, the validity of the current C clusters is compared with that of alternative cluster counts. Clusters are generated using FCM, their validity indices are evaluated, and new cluster centers are generated for deviated tasks. This process repeats, keeping the cluster set with the best validity index, until tasks stop arriving. Algorithm 1 presents task classification using dFCM and is discussed as follows; algorithms in this paper are presented as numbered lines, and we refer to each line number as a step. The parameters, such as the error threshold, the membership threshold, and the range of c (i.e., the number of clusters), are initialized in step-1. Since tasks arrive at equal time intervals, the last interval is fixed in advance. The time interval t is initialized to 0 in step-2, and the initial number of clusters c in step-3. The following steps are computed until t reaches the last interval:
In step-5, take all the tasks T arriving in time interval t.
In steps 6 and 7, calculate the c cluster centers and the memberships using Equations (6) and (5).
In steps 8–20, check whether the maximum membership value of each task is greater than or equal to the membership threshold. If true, update and continue; otherwise, perform steps 11–18 over the range of candidate cluster counts. If the generated clusters do not differ from the previous ones, store the values; otherwise, generate new clusters for the deviated tasks and update c. Then update in step-20.
In step-21, compute the validity index using Equation (7); select the clusters with the best validity and assign them in step-22.
Update the time interval t to t+1 in step-23.
Finally, return the clusters of tasks in step-25.
Algorithm 1 dFCM for task classification.
Input: Continuous streaming tasks
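As a companion to Algorithm 1, here is a minimal Python sketch of one dFCM time slot, assuming tasks are encoded as numeric feature vectors (e.g., arrival time, data size, deadline): it applies the standard FCM membership and center updates of Equations (5) and (6), and spawns a new cluster center from the most deviated task when the membership threshold is not met. The function names and threshold value are illustrative assumptions; the full algorithm additionally compares candidate cluster counts with the Xie–Beni index of Equation (7).

```python
import numpy as np

def fcm_memberships(tasks, centers, m=2.0):
    """Membership matrix u[i][j] of task i to center j (Eq. (5)).
    tasks: (n, d) array, centers: (c, d) array."""
    d = np.linalg.norm(tasks[:, None, :] - centers[None, :, :], axis=2) + 1e-12
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)

def update_centers(tasks, u, m=2.0):
    """Cluster-center update of Eq. (6)."""
    w = u ** m
    return (w.T @ tasks) / w.sum(axis=0)[:, None]

def dfcm_step(tasks, centers, mem_threshold=0.8, m=2.0):
    """One dFCM time slot: keep existing clusters if every task's best
    membership clears the threshold; otherwise spawn a new center from
    the most deviated task."""
    u = fcm_memberships(tasks, centers, m)
    if u.max(axis=1).min() >= mem_threshold:
        return update_centers(tasks, u, m)       # all tasks fit existing clusters
    outlier = tasks[u.max(axis=1).argmin()]       # most deviated task
    centers = np.vstack([centers, outlier])       # new cluster center
    u = fcm_memberships(tasks, centers, m)
    return update_centers(tasks, u, m)
```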
Based on the number of clusters, an equal number of virtual queues is modeled for buffering the tasks. A task is buffered in its queue according to its level of urgency, i.e., how long it can wait. The level of urgency can be determined in multiple ways; here we consider deadline and laxity time, which directly capture the maximum waiting time from the current time. Since the actual execution time of a task cannot be predicted before its completion, the upper bound execution time is used. The waiting time of a task is calculated using laxity time as follows:
l_i = d_i − t − e_i^{ub} (9)
where d_i is the deadline of the ith task, t is the current time, and e_i^{ub} is the execution upper bound time.
Tasks are buffered into the different queues in ascending order of laxity time. However, some tasks may have the same laxity time; those tasks are grouped, and the earliest deadline first (EDF) time is used to break ties in the waiting order. The EDF time of task i is calculated as follows:
(10)
The algorithm for buffering tasks into the different queues is given below. Algorithm 2 presents the task buffering in the queues and is discussed here. The output of Algorithm 1 (i.e., the clusters of tasks) is fed as this algorithm's input. According to the number of clusters, that number of queues Q is created in step-1. The following steps are computed for each cluster.
- Compute the laxity time of each task using Equation (9) in step-4.
- Sort all tasks in ascending order of laxity time in step-6.
- If any tasks have the same laxity time, group them and store them in step-8.
- For each grouped task, compute the EDF time using Equation (10) and sort the tasks in ascending EDF order in steps 10–13.
- Insert all tasks into the queue according to their laxity and EDF times in step-14.
- Finally, return the queues Q in step-16.
Algorithm 2 Buffering task in queues.
Input: Cluster of tasks
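A minimal sketch of the queue-ordering idea of Algorithm 2, assuming each task is a (task_id, deadline, upper-bound execution time) triple: tasks are ordered by laxity time (Equation (9)) with the deadline as the EDF tie-break. The names are illustrative, not the paper's code.

```python
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class QueuedTask:
    laxity: float             # Eq. (9): deadline - current time - upper-bound exec time
    deadline: float           # tie-break: earliest deadline first (Eq. (10))
    task_id: int = field(compare=False)

def buffer_cluster(cluster, now):
    """Order one cluster's tasks into its virtual queue by (laxity, deadline)."""
    queue = []
    for task_id, deadline, exec_ub in cluster:
        laxity = deadline - now - exec_ub
        heapq.heappush(queue, QueuedTask(laxity, deadline, task_id))
    return queue  # heapq pops lowest laxity first; ties fall back to EDF
```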
4.2. Task Offloading and Optimal Resource Allocation (TOORA)
The tasks buffered in the virtual queues will execute in either the cloud or a fog node. The head tasks of all virtual queues are checked in parallel to decide whether each will execute in the cloud server or a fog node, or will fail to achieve its deadline. The laxity time of a task determines how many tasks of each queue participate in further operations.
The laxity times of the tasks in each queue are compared with the maximum laxity time among the head tasks of the queues; tasks whose laxity time is at most this maximum are fetched for further processing, which can be represented as follows:
l_{\max} = \max_{1 \le j \le C} l_{head,j} (11)
l_{i,j} \le l_{\max} (12)
The fetched tasks are further processed in TOORA to decide whether each task is offloaded or has failed due to a long waiting time, according to three conditions:
When l_i = 0, the deadline and the executable upper bound time are nearly the same, so the task cannot wait any longer to execute in a fog node; it must be moved to the cloud server for successful completion.
When l_i < 0, the executable upper bound time exceeds the deadline; the task cannot complete before the deadline and is sent back to the end device with a request to extend the deadline.
When l_i > 0, the task has enough time to execute successfully at the fog node before the deadline.
Algorithm 3 can be represented as follows for task offloading:
Algorithm 3 Task offloading at fog node.
Input: Tasks in the c-type queues
Algorithm 3 presents task offloading at the fog node, distinguishing the tasks of the different queue types. It takes all tasks of the c-type queues and selects those eligible for processing at the current time. The number of tasks taken from each queue is determined in steps 1–11: first, the maximum laxity time of the head tasks of the queues is computed in steps 1–3; next, the tasks of all queues whose laxity time is at most this maximum are selected and stored in a list in steps 4–11. In steps 12–20, for each task in the list: if the laxity time of the ith task equals zero, the task is sent to the cloud server; if it is less than zero, the task is marked as failed and sent back to the end device with a request to extend the deadline; otherwise, it will execute in a fog node. Finally, the sets of tasks for the fog nodes, the cloud, and failure are returned in step-21.
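A compact sketch of the three-way offloading decision of Algorithm 3, under the same illustrative task encoding as above:

```python
def dispatch(tasks, now):
    """Split fetched tasks by the three laxity conditions of TOORA:
    zero laxity -> cloud, negative -> failure (return to device),
    positive -> eligible for fog resource allocation."""
    to_cloud, to_fog, failed = [], [], []
    for task_id, deadline, exec_ub in tasks:
        laxity = deadline - now - exec_ub
        if laxity == 0:
            to_cloud.append(task_id)   # no slack left: offload immediately
        elif laxity < 0:
            failed.append(task_id)     # cannot meet the deadline even now
        else:
            to_fog.append(task_id)     # can still wait for fog resources
    return to_fog, to_cloud, failed
```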
Under the parallel virtual queues, let Q_c(t) be the number of tasks of the type-c queue in time slot t. Tasks leave the queue when they are allocated resources at a fog node or moved to the cloud server. The current length of the type-c queue can be evaluated from the tasks that arrived at and were removed from the queue in the previous time slot. If A_c(t) is the total number of type-c tasks that arrived, the queue length evolves as
Q_c(t+1) = Q_c(t) + A_c(t) − [M_c(t) + R_c(t) + F_c(t)] (13)
where M_c(t), R_c(t), and F_c(t) are the numbers of tasks moved to the cloud, allocated resources at fog nodes, and failed at time slot t, respectively. To improve throughput and avoid starvation of tasks, the queue length can be controlled using a Lyapunov function as follows:
L(Q(t)) = \frac{1}{2} \sum_{c} Q_c(t)^2 (14)
The Lyapunov drift, the difference of the Lyapunov function across two slots, can be defined as follows:
\Delta(Q(t)) = L(Q(t+1)) − L(Q(t)) (15)
Applying Equations (13)–(15), we can rewrite the drift as
\Delta(Q(t)) = \frac{1}{2} \sum_{c} \left[ (A_c(t) − B_c(t))^2 + 2 Q_c(t) (A_c(t) − B_c(t)) \right], where B_c(t) = M_c(t) + R_c(t) + F_c(t).
The conditional expected Lyapunov drift can be represented as follows:
\bar{\Delta}(Q(t)) = \mathbb{E}\left[ L(Q(t+1)) − L(Q(t)) \mid Q(t) \right] (16)
Based on Lyapunov drift theory, if the drift is zero or non-positive, the queue length is stable. Although M_c(t), R_c(t), and F_c(t) all influence the value of Equation (16), the numbers of tasks in M_c(t) and F_c(t) are independent of R_c(t); the stability of the queue therefore depends on R_c(t). The tasks of R_c(t) are allocated to the available resources of the fog nodes and satisfy the following:
(17)
where the constraint accounts for the ongoing tasks that have not yet released their resources at time t. The objective of our work is to satisfy Equation (17) while allocating the resources optimally. Meta-heuristic algorithms usually give near-optimal solutions to the resource allocation problem [15,17]; here, we adopt the whale optimization algorithm (WOA) [50]. WOA models the hunting behavior of humpback whales, which use a unique feeding method, bubble-net feeding, to circle their prey and release bubbles so that the prey moves toward the ocean surface, as shown in Figure 4. WOA reaches an optimum using the encircling, bubble-net, and exploration methods. In WOA, a randomly generated whale population is considered for optimization. The whales try to explore the location of the prey and enclose it with a bubble net. During the encircling method, each whale updates its location with respect to the best agent (i.e., the target prey) as follows:
D = | C ⊗ X*(t) − X(t) | (18)
X(t+1) = X*(t) − A ⊗ D (19)
where D is the position-vector difference between the best agent X*(t) and a whale X(t), t is the present iteration, ⊗ denotes element-wise multiplication, and A and C are coefficient vectors computed as
A = 2a ⊗ r − a (20)
C = 2 ⊗ r (21)
where a decreases linearly from 2 to 0 over the iterations, and r is a random vector with values in [0, 1]. The control parameter can be written as a = 2(1 − t/t_max), where t_max is the maximum number of iterations. Equations (20) and (21) balance exploration and exploitation: exploration occurs when |A| ≥ 1, and exploitation when |A| < 1. During exploitation, premature convergence to local solutions can be avoided by taking the parameter C as a random value in [0, 2].
The bubble-net method has two approaches: shrinking encircling and spiral updating. Shrinking encircling is achieved by taking A in [−1, 1] as a decreases linearly in each iteration. Spiral updating, inspired by the helix-shaped movement of humpback whales, updates the position between the best agent and a whale as follows:
X(t+1) = D′ ⊗ e^{bl} cos(2πl) + X*(t) (22)
D′ = | X*(t) − X(t) | (23)
where l is a randomly generated value in [−1, 1] and b is a constant defining the logarithmic spiral shape. Shrinking encircling and spiral updating are performed simultaneously, as whales move around the prey using both approaches. This behavior is modeled by taking each approach with 50% probability as follows:
X(t+1) = X*(t) − A ⊗ D if p < 0.5; X(t+1) = D′ ⊗ e^{bl} cos(2πl) + X*(t) if p ≥ 0.5 (24)
where p is a random number in [0, 1]. When the coefficient vector satisfies |A| > 1, the exploration method is applied, in which the whale's reference is a randomly chosen whale rather than the best agent. Thus, the algorithm extends the search to a global search:
D = | C ⊗ X_rand(t) − X(t) | (25)
X(t+1) = X_rand(t) − A ⊗ D (26)
The bubble-net attack exploits a local solution around the current solution, whereas the exploration method seeks a global solution from the population. Here, we apply WOA to allocate the resources of fog nodes. Our whale optimized resource allocation (WORA) algorithm begins by generating a population of whales, each denoting a random solution to the resource allocation problem. The fitness of each whale is calculated using a fitness function, and the best solution, with minimum fitness value, is selected as the current best agent. The whales then search for the global solution by updating the values of A, C, a, l, and p in each iteration, where A and C are random coefficients, a decreases linearly from 2 to 0, p lies in [0, 1], and l in [−1, 1]. The distance function is the most important function in WOA and is designed for continuous problems; as resource allocation is a discrete problem, the distance function must be modified. Whale creation, the fitness function, and the distance function in our model are discussed below.
- Whale creation: In our algorithm, each whale denotes a solution to the resource allocation problem. Given a set of resource blocks and a set of requesting tasks, a whale is a random pairing of resource blocks with tasks. A resource block is described by its fog node f, container c, resource block r, CPU usage, bandwidth, and available memory; a task is described by its identification number and its required CPU usage, bandwidth, and memory. A whale is then generated as a random combination of such resource–task pairs, and all the whales of the population are generated in the same fashion.
- Fitness function: For each whale, the fitness measures the quality of the resource allocation to the tasks and is calculated as
(27)
The whale with minimum fitness is the optimum solution; hence, the goal of the algorithm is to minimize the fitness function.
The population is the collection of whales with their corresponding fitness values.
(28)
- Distance function: The most important function of WOA is the distance function. As three parameters (i.e., CPU usage, bandwidth, and memory) are considered, the distance function is redefined per dimension as follows:
(29)
(30)
A sketch of the whale encoding and fitness evaluation follows.
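The following Python sketch shows one plausible reading of the whale encoding and of the fitness function as the average leftover capacity between allocated and requested resources (our interpretation of the description around Equation (27)); the tuple layouts and helper names are assumptions for illustration.

```python
import random

def random_whale(resources, tasks):
    """A whale = one candidate allocation: each task paired with a random
    resource block that can satisfy its (cpu, bw, mem) demand.
    resources: list of (cpu, bw, mem); tasks: list of (task_id, cpu, bw, mem).
    Assumes every task has at least one feasible block."""
    return [(random.choice([r for r in resources
                            if all(cap >= need for cap, need in zip(r, t[1:]))]), t)
            for t in tasks]

def fitness(whale):
    """Average leftover capacity across the three resource dimensions:
    smaller leftovers mean a tighter, better allocation (to be minimized)."""
    total = sum((r[0] - t[1]) + (r[1] - t[2]) + (r[2] - t[3])
                for r, t in whale)
    return total / len(whale)
```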
The WORA algorithm is given below.
Algorithm 4 presents the assignment of fog resources to the tasks selected for fog execution. We initialize the whale population, set time t to 0, and set the maximum number of iterations in step-1. The best search agent, with minimum fitness value, is identified in step-2. While t is less than the maximum number of iterations, steps 3–21 are performed as follows:
For each whale, steps 4–16 are performed. The values of A, C, a, l, and p are computed in step 5.
If p is less than 0.5, check the absolute value of A in steps 6 and 7. If |A| is less than 1, update D and the position using Equations (18), (19), and (29) in step 8. Otherwise, select a random whale and update D and the position using Equations (25), (26), and (29) in steps 10 and 11.
Otherwise (p ≥ 0.5), update the position using Equations (22), (23), and (30) in step 14.
After updating, repair any whale that goes beyond the search space in step 17. Then compute the fitness of all whales and update the best search agent with the minimum fitness in steps 18 and 19.
Increment t by 1 in step 20.
Finally, return the best search agent, which holds the optimal resource allocation for the tasks, in step 22.
The complexity of an algorithm covers both space and time. Space complexity is the amount of memory the algorithm occupies; in WORA it is determined by the population size P and the problem dimension D, giving a space complexity of O(P × D).
For time complexity, three major processes are considered: initialization of the best whale, the main update loop, and returning the best solution. Let t_max be the maximum number of iterations in WORA.
Initializing the best whale takes O(P) time. The main loop updates the parameters, repairs whales that go beyond the search space, and updates the optimum solution. The time complexities of these stages are as follows:
Time required for updating the parameters and positions is O(P × D);
Time required for repairing whales beyond the search space is O(P × D);
Time for updating the optimal solution is O(P);
The time required for the main loop is the sum of the above over t_max iterations, i.e., O(t_max × P × D) once lower-order terms are ignored;
The time required for the last step is O(1).
Therefore, the total time complexity of the WORA algorithm is O(t_max × P × D).
Algorithm 4 Whale optimized resource allocation (WORA) algorithm.
Input: Set of resources R and tasks selected for the fog node
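For orientation, here is a generic WOA main loop over real-valued position vectors, following Equations (18)–(26) with the spiral constant b fixed to 1. It is a sketch, not the WORA implementation, which additionally uses the discrete distance functions of Equations (29) and (30) and repairs positions to valid resource assignments.

```python
import math
import random

def woa_minimize(fitness, init_population, t_max=100):
    """Generic WOA loop (Eqs. (18)-(26)); positions are lists of floats.
    For WORA, positions would encode resource-block choices per task and
    be rounded back to valid assignments after each update."""
    pop = [list(w) for w in init_population]
    best = min(pop, key=fitness)
    for t in range(t_max):
        a = 2.0 * (1.0 - t / t_max)               # decreases linearly from 2 to 0
        for i, x in enumerate(pop):
            r, p = random.random(), random.random()
            A, C = 2 * a * r - a, 2 * random.random()
            if p < 0.5:
                if abs(A) < 1:                     # exploit: encircle the best agent
                    ref = best
                else:                              # explore: follow a random whale
                    ref = random.choice(pop)
                pop[i] = [rj - A * abs(C * rj - xj) for rj, xj in zip(ref, x)]
            else:                                  # spiral update (b = 1 assumed)
                l = random.uniform(-1, 1)
                pop[i] = [abs(bj - xj) * math.exp(l) * math.cos(2 * math.pi * l) + bj
                          for bj, xj in zip(best, x)]
        best = min(pop + [best], key=fitness)      # keep the best-so-far agent
    return best
```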
The following lemmas [51] are required for optimal convergence of the algorithm:
The population of WOA forms a finite and homogeneous Markov chain.
The population of WOA is an absorbing Markov process.
If an individual of WOASU is stuck in a local optimum at the tth iteration, the transition probability of the population is
(31)
The WOASU algorithm cannot converge in probability to the global optimal solution.
If an individual of WOAEP is stuck in a local optimum at the tth iteration, the transition probability of the population is
(32)
WOAEP can converge in probability to the global optimum.
WOA can converge in probability to the global optimum.
In the WORA algorithm, each whale represents a random pairing of resource blocks with tasks. Only valid whales, in which the capacity of each paired resource block exceeds the request of its task, are considered when generating the population. The fitness function, Equation (27), calculates the average difference between requested and allocated resources; the best whale is thus the one with the minimum fitness value. Using Lemmas 1–7, it is proved that WOA, applying the spiral updating and encircling methods each with 50% probability, can converge to a global optimum: even if the spiral updating mechanism traps WOA in a local optimum, the encircling mechanism allows it to escape. Since the WORA algorithm also adopts both spiral updating and encircling with 50% probability, it converges in probability to the global optimum as the number of iterations tends to infinity.
The whole process of Algorithms 1–4 is shown in the flowchart in Figure 5.
5. Performance Evaluation
This section provides the simulation setup, the performance metrics, and an evaluation of WORA against other algorithms.
5.1. Simulation Setup
We used Python to implement and evaluate the proposed algorithm. The hardware and software used for the simulation are given in Table 4. We assumed different resource configurations for the different containers of each fog node; each fog node has a different resource configuration, and hence the resources of its containers also differ. The tasks are configured randomly. Table 5 gives the detailed configuration of the cloud–fog infrastructure and tasks.
We performed extensive simulations with varied numbers of tasks and fog nodes. The results of WORA are compared with SJF, FLRTS [30], and MOMIS [4]. We considered 3 to 20 fog nodes and 8 to 700 tasks.
5.2. Performance Metrics
We consider cost, energy consumption, makespan, and task completion ratio as the performance metrics, defined below.
- Cost: Cost is the monetary cost of processing the tasks in the cloud and fog nodes. The cloud charges for both processing and communication, whereas a fog node charges only for communication [1]. The cost of the system is defined as follows:
(33)
- Energy consumption: This metric represents the total energy consumed to execute all tasks of the system. The total energy consumed at fog nodes is the sum of the energy for executing tasks and the energy used while fog nodes are idle. When tasks are executed in the cloud, the total energy is the sum of the energy for executing the tasks and the energy for transferring the tasks and their data. The total consumed energy is as follows:
(34)
- Makespan: The time required to complete all tasks in the system [30]. It is computed as
(35)
- Task completion ratio: The ratio of tasks successfully completed within their deadlines to the total number of tasks.
(36)
The parameters used to evaluate the metrics are given in Table 6; a small sketch of the last two metrics follows.
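Cost and energy (Equations (33) and (34)) depend on the per-unit parameters of Table 6, so we sketch only makespan and task completion ratio here, assuming per-task finish times and deadlines are recorded in dictionaries keyed by task id:

```python
def makespan(finish_times):
    """Makespan: completion time of the last task in the system (Eq. (35))."""
    return max(finish_times.values())

def completion_ratio(finish_times, deadlines):
    """Fraction of tasks that finished within their deadlines (Eq. (36))."""
    met = sum(1 for tid, ft in finish_times.items() if ft <= deadlines[tid])
    return met / len(finish_times)
```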
5.3. Performance Analysis
Several experiments were carried out under different scenarios. When three fog nodes are considered, each with three containers and each container with three resource blocks, Figure 6 shows the cost, energy consumption, makespan, and task completion ratio for varying numbers of tasks.
The proposed WORA algorithm is analyzed and compared with the other three algorithms on the chosen metrics. Figure 7 compares the cost of WORA with the other three algorithms for different numbers of fog nodes with 500 tasks. As fog nodes increase, the number of resource blocks increases; hence, more tasks are assigned to the fog nodes and fewer tasks are transferred to the cloud for execution, which reduces cost. The SJF algorithm forwards tasks to the cloud when the required resource is unavailable in the fog layer. FLRTS considers both the deadline and the transmission delay of each task, and tasks with a soft deadline or minimal latency are forwarded to the cloud; fewer tasks therefore execute at the fog nodes under FLRTS, which increases cost. Most tasks are assigned fog resources under the MOMIS algorithm, so its system cost is close to that of our WORA algorithm. The proposed WORA algorithm saves 23.89% of the average cost of FLRTS and 17.24% of the average cost of MOMIS.
Figure 8 shows the energy consumption for different numbers of fog nodes handling 500 tasks. Increasing the number of fog nodes reduces energy consumption, because most tasks are executed in fog nodes and fewer tasks are moved to the cloud. Comparing average energy consumption, the WORA algorithm consumes 23.8% less energy than MOMIS and 30.76% less energy than FLRTS.
Considering makespan for different numbers of fog nodes handling 500 tasks in Figure 9, the makespan decreases as fog nodes increase: instead of waiting for resources, tasks execute as soon as nodes become available. Our WORA algorithm performs 6.8% better than MOMIS and 9% better than FLRTS in terms of makespan.
When 500 tasks are executed on 5 to 20 fog nodes, Figure 10 shows that our WORA algorithm performs 3.51% better than MOMIS and 5.4% better than FLRTS in terms of successful task completion ratio.
When 15 fog nodes are considered with tasks varying from 100 to 700, Figure 11 shows that cost increases with the number of tasks. Our WORA algorithm saves 10.3% of the average cost of MOMIS and 21.9% of the average cost of FLRTS. Similarly, the WORA algorithm saves 18.57% of the average energy of MOMIS and 30.8% of the average energy of FLRTS, as shown in Figure 12. Figure 13 shows that WORA performs 6.4% better than MOMIS and 12.9% better than FLRTS in terms of makespan. The successful completion of tasks within the deadline is shown in Figure 14, where WORA is 2.6% better than MOMIS and 4.3% better than FLRTS.
In our WORA algorithm, the whale optimization algorithm is used for resource allocation, with tasks arriving at different time intervals. Figure 15 shows the minimum fitness value over different time intervals for various numbers of tasks on three fog nodes.
6. Conclusions
In this work, two modules—task classification and buffering (TCB) and task offloading and optimized resource allocation (TOORA)—are modeled: tasks are buffered in several queues according to their types and, using the enhanced least laxity time, are directed to the cloud or fog. Considering the resource demands and deadline constraints of the tasks, WOA is applied to assign each task to the optimal resource block of a fog node. The simulation results of our WORA algorithm are evaluated on metrics such as cost, energy consumption, makespan, and successful task completion ratio, and compared with the standard SJF algorithm and the existing MOMIS and FLRTS algorithms. When 500 tasks are executed on 5 to 20 fog nodes, the results show that the WORA algorithm saves 23.89% of the average cost of FLRTS and 17.24% of the average cost of MOMIS; consumes 23.8% less energy than MOMIS and 30.76% less than FLRTS; performs 6.8% better than MOMIS and 9% better than FLRTS in terms of makespan; and performs 3.51% better than MOMIS and 5.4% better than FLRTS in terms of successful task completion ratio. Similarly, when 100 to 700 tasks are executed on 15 fog nodes, the WORA algorithm saves 10.3% of the average cost of MOMIS and 21.9% of that of FLRTS, saves 18.57% of the average energy of MOMIS and 30.8% of that of FLRTS, performs 6.4% better than MOMIS and 12.9% better than FLRTS in terms of makespan, and performs 2.6% better than MOMIS and 4.3% better than FLRTS in terms of successful task completion ratio. In the future, we will consider further metrics, such as throughput and delay rate, to evaluate the performance of the algorithm, and we will extend our research to virtual machine (VM) migration for balancing resource allocation.
Conceptualization, R.S., S.K.B. and N.P.; methodology, R.S.; software, R.S.; validation, R.S., S.K.B. and N.P.; formal analysis, R.S.; investigation, S.K.B., N.P.; resources, R.S., S.K.B., N.P. and K.S.S.; data curation, R.S.; writing—original draft preparation, R.S.; writing—review and editing, S.K.B., N.P., K.S.S., N.J., M.A.A.; visualization, K.S.S., N.J., and M.A.A.; supervision, S.K.B., N.P.; project administration, N.J., M.A.A.; funding acquisition, N.J., M.A.A. All authors have read and agreed to the published version of the manuscript.
Data and materials are available on request.
Taif University Researchers Supporting Project number (TURSP-2020/98), Taif University, Taif, Saudi Arabia. We want to thank BPUT Rourkela (Govt.), Odisha, India for providing adequate facility and infrastructure for conducting this research work.
The authors declare no conflict of interest.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Figure 6. Cost, energy consumption, makespan, and task completion ratio in three fog nodes.
Figure 10. Computation of successful task completion ratio for fog nodes with 500 tasks.
Related work on resource allocation in different systems.

Article | Ideas | Target System | Improved Criteria | Limitations
---|---|---|---|---
Li et al. [4] | Laxity time and Lyapunov optimization | Fog computing | Throughput and task completion ratio | No other parameters are considered
Bae et al. [24] | Reinforcement learning and Lyapunov optimization | Edge computing | Time-average penalty cost | Operates with general non-convex and discontinuous penalty functions
Iyapparaja et al. [26] | Queueing theory-based cuckoo search | Fog computing | Response time and energy consumption | Resource allocation to the edge node is challenging
Ali et al. [30] | Fuzzy logic | Cloud–fog environment | Makespan, average turnaround time, success ratio of the tasks, and delay rate | Large-scale network
Pham et al. [10] | Whale optimization algorithm | Wireless network | System utility, overhead | Small dataset of users
Li et al. [4] | Fuzzy clustering with particle swarm optimization | Fog computing | User satisfaction | Small dataset of tasks
Rafique et al. [9] | Novel bio-inspired hybrid algorithm (NBIHA) | Fog computing | Average response time | Small dataset of tasks
Sun et al. [35] | Non-dominated sorting genetic algorithm (NSGA-II) | Fog computing | Reduced service latency and improved stability of task execution | Other parameters such as cost are not considered
Taneja and Davy [36] | Module mapping algorithm | Fog–cloud infrastructure | Energy consumption, network usage, and end-to-end latency | Only compared with traditional cloud infrastructure
Mao et al. [11] | Energy-performance trade-off multi-resource cloud task scheduling algorithm (ETMCTSA) | Green cloud computing | Energy consumption, execution time, overhead | Small task dataset
Bharti and Mavi [37] | ETMCTSA for underutilized resources | Cloud computing | Energy consumption, overhead | Used 100 cloudlets
Anu and Singhrova [38] | Hybridization of priority, genetic algorithm, and PSO | Fog computing | Reduced energy consumption, waiting time, execution delay, and resource wastage | Considered end devices
Jia et al. [39] | Double-matching strategy based on deferred acceptance (DA-DMS) | Three-tier architecture (cloud data center, fog node, and users) | High cost efficiency | Large-scale network
Feng et al. [40] | Particle swarm optimization with Pareto dominance | Cloud computing | Large-, middle-, and small-scaled instances | Did not use complex tasks and resources
Ni et al. [41] | Priced timed Petri nets strategy | Fog computing | Makespan, cost | Did not consider average completion time and fairness
Abbreviations and description.

Abbreviation | Description
---|---
TCB | Task classification and buffering
TOORA | Task offloading and optimal resource allocation
WORA | Whale optimized resource allocation
SJF | Shortest job first
MOMIS | Multi-objective monotone increasing sorting-based
FLRTS | Fuzzy logic-based real-time task scheduling
FCM | Fuzzy c-means
dFCM | Dynamic fuzzy c-means
EDF | Earliest deadline first
WOA | Whale optimization algorithm
WOASU | Whale optimization algorithm spiral updating
WOAEP | Whale optimization algorithm encircling prey
Notations and description.

Sl. No. | Notation | Description
---|---|---
1 | | Represents end devices
2 | | Represents fog nodes
3 | | Containers of a fog node
4 | | Resources of a container
5 | | Individual task
6 | | Arrival time of ith task
7 | | Execution lower bound time of ith task
8 | | Execution upper bound time of ith task
9 | | Data size of ith task
10 | | Response time of ith task
11 | | Deadline time of ith task
12 | | Number of instructions of ith task
13 | | Membership of ith task to jth cluster center
14 | | Cluster center
15 | | Error threshold
16 | | Xie–Beni index
17 | | Membership threshold
18 | | Laxity time of ith task
19 | | Earliest deadline first time of ith task
20 | | 
21 | | Maximum laxity time of head task of the queues
22 | | Laxity time of ith task of jth queue
23 | | Best agent
24 | | Coefficient vectors
25 | | Random vector with values in [0, 1]
26 | | Parameter controller
27 | b | Constant used for logarithmic spiral shape
28 | l | Random value in [−1, 1]
29 | | Represents a whale
30 | | Processing cost per time unit for cloud
31 | | Communication cost per time unit for cloud
32 | | Communication cost per time unit for fog
33 | | Energy per unit for execution of a task in fog
34 | | Energy used when a fog node is idle
35 | | Energy per unit for execution of a task in cloud
36 | | Energy per unit for transmission of data
Hardware/software specification.

Sl. No. | Hardware/Software | Configuration
---|---|---
1 | System | Intel® Core™ i5-4590 CPU @ 3.30 GHz
2 | Memory (RAM) | 4 GB
3 | Operating System | Windows 8.1 Pro
Resource configuration of cloud–fog infrastructure and tasks.

Name | Values
---|---
CPU rate of cloud | 44,800 MIPS
Bandwidth of cloud | 15,000 Mbps
Memory of cloud | 40,000 MB
CPU rate of fog | 22,800 MIPS
Bandwidth of fog | 10,000 Mbps
Memory of fog | 10,000 MB
Arrival time of tasks | [0, 10] ms
Execution lower bound of task | [1, 6] ms
Execution upper bound of task | 
Execution time | 
Data size of task | [10, 500] MB
Deadline | 
Response time | 
No. of instructions | [10, 1700] MI
Bandwidth required for task | [10, 1800] Mbps
Memory required for task | [10, 1800] MB
CPU required for task | [10, 2200] MIPS
Simulation parameters and values setup.

Parameters | Values
---|---
Processing cost per time unit for cloud | 0.5 G$/s
Communication cost per time unit for cloud | 0.7 G$/s
Communication cost per time unit for fog | [0.3, 0.7] G$/s
Energy per unit for execution of a task in fog | [1, 5] W
Energy used when fog node is idle | 0.05 W
Energy per unit for execution of a task in cloud | 10 W
Energy per unit for transmission of data | 2 W
References
1. Pham, X.Q.; Man, N.D.; Tri, N.D.T.; Thai, N.Q.; Huh, E.N. A cost- and performance-effective approach for task scheduling based on collaboration between cloud and fog computing. Int. J. Distrib. Sens. Netw.; 2017; 13, pp. 1-16. [DOI: https://dx.doi.org/10.1177/1550147717742073]
2. Sahoo, K.S.; Tiwary, M.; Luhach, A.K.; Nayyar, A.; Choo, K.K.R.; Bilal, M. Demand–Supply-Based Economic Model for Resource Provisioning in Industrial IoT Traffic. IEEE Internet Things J.; 2021; 9, pp. 10529-10538. [DOI: https://dx.doi.org/10.1109/JIOT.2021.3122255]
3. Lin, Z.; Lin, M.; De Cola, T.; Wang, J.B.; Zhu, W.P.; Cheng, J. Supporting IoT with Rate-Splitting Multiple Access in Satellite and Aerial-Integrated Networks. Internet Things J.; 2021; 8, pp. 11123-11134. [DOI: https://dx.doi.org/10.1109/JIOT.2021.3051603]
4. Li, L.; Guan, Q.; Jin, L.; Guo, M. Resource allocation and task offloading for heterogeneous real-time tasks with uncertain duration time in a fog queueing system. IEEE Access; 2019; 7, pp. 9912-9925. [DOI: https://dx.doi.org/10.1109/ACCESS.2019.2891130]
5. Bhoi, S.K.; Panda, S.K.; Jena, K.K.; Sahoo, K.S.; Jhanjhi, N.; Masud, M.; Aljahdali, S. IoT-EMS: An Internet of Things Based Environment Monitoring System in Volunteer Computing Environment. Intell. Autom. Soft Comput.; 2022; 32, pp. 1493-1507. [DOI: https://dx.doi.org/10.32604/iasc.2022.022833]
6. Fog Computing and the Internet of Things: Extend the Cloud to Where the Things Are. Available online: https://studylib.net/doc/14477232/fog-computing-and-the-internet-of-things–extend (accessed on 2 September 2022).
7. Sahoo, K.S.; Sahoo, B. Sdn architecture on fog devices for realtime traffic management: A case study. Proceedings of the International Conference on Signal, Networks, Computing, and Systems; Springer: Berlin/Heidelberg, Germany, 2017; pp. 323-329.
8. Nayak, R.P.; Sethi, S.; Bhoi, S.K.; Sahoo, K.S.; Nayyar, A. ML-MDS: Machine Learning based Misbehavior Detection System for Cognitive Software-defined Multimedia VANETs (CSDMV) in smart cities. Multimed. Tools Appl.; 2022; pp. 1-21. [DOI: https://dx.doi.org/10.1007/s11042-022-13440-8]
9. Rafique, H.; Shah, M.A.; Islam, S.U.; Maqsood, T.; Khan, S.; Maple, C. A Novel Bio-Inspired Hybrid Algorithm (NBIHA) for Efficient Resource Management in Fog Computing. IEEE Access; 2019; 7, pp. 115760-115773. [DOI: https://dx.doi.org/10.1109/ACCESS.2019.2924958]
10. Pham, Q.V.; Mirjalili, S.; Kumar, N.; Alazab, M.; Hwang, W.J. Whale Optimization Algorithm with Applications to Resource Allocation in Wireless Networks. IEEE Trans. Veh. Technol.; 2020; 69, pp. 4285-4297. [DOI: https://dx.doi.org/10.1109/TVT.2020.2973294]
11. Mao, L.; Li, Y.; Peng, G.; Xu, X.; Lin, W. A multi-resource task scheduling algorithm for energy-performance trade-offs in green clouds. Sustainable Computing: Informatics and Systems; Elsevier Inc.: Amsterdam, The Netherlands, 2018; Volume 19, pp. 233-241. [DOI: https://dx.doi.org/10.1016/j.suscom.2018.05.003]
12. Nayak, R.P.; Sethi, S.; Bhoi, S.K.; Sahoo, K.S.; Jhanjhi, N.; Tabbakh, T.A.; Almusaylim, Z.A. TBDDosa-MD: Trust-based DDoS misbehave detection approach in software-defined vehicular network (SDVN). CMC-Comput. Mater. Contin.; 2021; 69, pp. 3513-3529. [DOI: https://dx.doi.org/10.32604/cmc.2021.018930]
13. Ravindranath, V.; Ramasamy, S.; Somula, R.; Sahoo, K.S.; Gandomi, A.H. Swarm intelligence based feature selection for intrusion and detection system in cloud infrastructure. Proceedings of the 2020 IEEE Congress on Evolutionary Computation (CEC); Glasgow, UK, 19–24 July 2020; pp. 1-6.
14. Bonomi, F.; Milito, R.; Zhu, J.; Addepalli, S. Fog Computing and Its Role in the Internet of Things Characterization of Fog Computing. Proceedings of the MCC’ 12; Helsinki, Finland, 17 August 2012; pp. 13-15.
15. Lahmar, I.B.; Boukadi, K. Resource Allocation in Fog Computing: A Systematic Mapping Study. Proceedings of the 2020 5th International Conference on Fog and Mobile Edge Computing; Paris, France, 20–23 April 2020; pp. 86-93. [DOI: https://dx.doi.org/10.1109/FMEC49853.2020.9144705]
16. Ahmed, K.D.; Zeebaree, S.R.M. Resource Allocation in Fog Computing: A Review. Int. J. Sci. Bus.; 2021; 5, pp. 54-63. [DOI: https://dx.doi.org/10.5281/zenodo.4461876]
17. Ghobaei-Arani, M.; Souri, A.; Rahmanian, A.A. Resource Management Approaches in Fog Computing: A Comprehensive Review. J. Grid Comput.; 2020; 18, [DOI: https://dx.doi.org/10.1007/s10723-019-09491-1]
18. Mishra, S.K.; Mishra, S.; Alsayat, A.; Jhanjhi, N.; Humayun, M.; Sahoo, K.S.; Luhach, A.K. Energy-aware task allocation for multi-cloud networks. IEEE Access; 2020; 8, pp. 178825-178834. [DOI: https://dx.doi.org/10.1109/ACCESS.2020.3026875]
19. Bhoi, A.; Nayak, R.P.; Bhoi, S.K.; Sethi, S.; Panda, S.K.; Sahoo, K.S.; Nayyar, A. IoT-IIRS: Internet of Things based intelligent-irrigation recommendation system using machine learning approach for efficient water usage. PeerJ Comput. Sci.; 2021; 7, e578. [DOI: https://dx.doi.org/10.7717/peerj-cs.578] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/34239972]
20. Rout, S.; Sahoo, K.S.; Patra, S.S.; Sahoo, B.; Puthal, D. Energy efficiency in software defined networking: A survey. SN Comput. Sci.; 2021; 2, pp. 1-15.
21. Chen, C.L.; Chiang, M.L.; Lin, C.B. The high performance of a task scheduling algorithm using reference queues for cloud-computing data centers. Electronics; 2020; 9, 371. [DOI: https://dx.doi.org/10.3390/electronics9020371]
22. Behzad, S.; Fotohi, R.; Effatparvar, M. Queue based Job Scheduling algorithm for Cloud computing. Int. Res. J. Appl. Basic Sci.; 2013; 4, pp. 3785-3790.
23. Venkataramanan, V.J.; Lin, X. On the queue-overflow probability of wireless systems: A new approach combining large deviations with Lyapunov functions. IEEE Trans. Inf. Theory; 2013; 59, pp. 6367-6392. [DOI: https://dx.doi.org/10.1109/TIT.2013.2268918]
24. Bae, S.; Han, S.; Sung, Y. A Reinforcement Learning Formulation of the Lyapunov Optimization: Application to Edge Computing Systems with Queue Stability. arXiv; 2020; pp. 1-14. arXiv:2012.07279
25. Eryilmaz, A.; Srikant, R. Asymptotically tight steady-state queue length bounds implied by drift conditions. Queueing Syst.; 2012; 72, pp. 311-359. [DOI: https://dx.doi.org/10.1007/s11134-012-9305-y]
26. Iyapparaja, M.; Alshammari, N.K.; Kumar, M.S.; Krishnan, S.S.R.; Chowdhary, C.L. Efficient resource allocation in fog computing using QTCS model. CMC-Comput. Mater. Contin.; 2022; 70, pp. 2225-2239. [DOI: https://dx.doi.org/10.32604/cmc.2022.015707]
27. Sandhir, R.P.; Kumar, S. Dynamic fuzzy c-means (dFCM) clustering for continuously varying data environments. Proceedings of the 2010 IEEE World Congress on Computational Intelligence; Barcelona, Spain, 18–23 July 2010; [DOI: https://dx.doi.org/10.1109/FUZZY.2010.5584333]
28. Sandhir, R.P.; Muhuri, S.; Nayak, T.K. Dynamic fuzzy c-means (dFCM) clustering and its application to calorimetric data reconstruction in high-energy physics. Nucl. Instrum. Methods Phys. Res. Sect. A; 2012; 681, pp. 34-43. [DOI: https://dx.doi.org/10.1016/j.nima.2012.04.023]
29. Xu, J.; Hao, Z.; Zhang, R.; Sun, X. A Method Based on the Combination of Laxity and Ant Colony System for Cloud-Fog Task Scheduling. IEEE Access; 2019; 7, pp. 116218-116226. [DOI: https://dx.doi.org/10.1109/ACCESS.2019.2936116]
30. Ali, H.S.; Rout, R.R.; Parimi, P.; Das, S.K. Real-Time Task Scheduling in Fog-Cloud Computing Framework for IoT Applications: A Fuzzy Logic based Approach. Proceedings of the 2021 International Conference on COMmunication Systems and NETworkS, COMSNETS 2021; Bengaluru, India, 5–9 January 2021; Volume 2061, pp. 556-564. [DOI: https://dx.doi.org/10.1109/COMSNETS51098.2021.9352931]
31. Hosseini, S.H.; Vahidi, J.; Tabbakh, S.R.K.; Shojaei, A.A. Resource allocation optimization in cloud computing using the whale optimization algorithm. Int. J. Nonlinear Anal. Appl.; 2021; 12, pp. 343-360. [DOI: https://dx.doi.org/10.22075/ijnaa.2021.5188]
32. Lin, Z.; Niu, H.; An, K.; Wang, Y.; Zheng, G.; Chatzinotas, S.; Hu, Y. Refracting RIS-Aided Hybrid Satellite-Terrestrial Relay Networks: Joint Beamforming Design and Optimization. IEEE Trans. Aerosp. Electron. Syst.; 2022; 58, pp. 3717-3724. [DOI: https://dx.doi.org/10.1109/TAES.2022.3155711]
33. Lin, Z.; An, K.; Niu, H.; Hu, Y.; Chatzinotas, S.; Zheng, G.; Wang, J. SLNR-based Secure Energy Efficient Beamforming in Multibeam Satellite Systems. IEEE Trans. Aerosp. Electron. Syst.; 2022; pp. 1-4. [DOI: https://dx.doi.org/10.1109/TAES.2022.3190238]
34. Lin, Z.; Lin, M.; Wang, J.B.; De Cola, T.; Wang, J. Joint Beamforming and Power Allocation for Satellite-Terrestrial Integrated Networks with Non-Orthogonal Multiple Access. IEEE J. Sel. Top. Signal Process.; 2019; 13, pp. 657-670. [DOI: https://dx.doi.org/10.1109/JSTSP.2019.2899731]
35. Sun, Y.; Lin, F.; Xu, H. Multi-objective Optimization of Resource Scheduling in Fog Computing Using an Improved NSGA-II. Wirel. Pers. Commun.; 2018; 102, pp. 1369-1385. [DOI: https://dx.doi.org/10.1007/s11277-017-5200-5]
36. Taneja, M.; Davy, A. Resource aware placement of IoT application modules in Fog-Cloud Computing Paradigm. Proceedings of the 2017 IFIP/IEEE International Symposium on Integrated Network and Service Management (IM 2017); Lisbon, Portugal, 8–12 May 2017. [DOI: https://dx.doi.org/10.23919/INM.2017.7987464]
37. Bharti, S.; Mavi, N.K. Energy efficient task scheduling in cloud using underutilized resources. Int. J. Sci. Technol. Res.; 2019; 8, pp. 1043-1048.
38. Anu; Singhrova, A. Prioritized GA-PSO algorithm for efficient resource allocation in fog computing. Indian J. Comput. Sci. Eng.; 2020; 11, pp. 907-916. [DOI: https://dx.doi.org/10.21817/indjcse/2020/v11i6/201106205]
39. Jia, B.; Hu, H.; Zeng, Y.; Xu, T.; Yang, Y. Double-matching resource allocation strategy in fog computing networks based on cost efficiency. J. Commun. Netw.; 2018; 20, pp. 237-246. [DOI: https://dx.doi.org/10.1109/JCN.2018.000036]
40. Feng, M.; Wang, X.; Zhang, Y.; Li, J. Multi-objective particle swarm optimization for resource allocation in cloud computing. Proceedings of the 2012 IEEE 2nd International Conference on Cloud Computing and Intelligence Systems (IEEE CCIS 2012); Hangzhou, China, 30 October–1 November 2012; Volume 3, pp. 1161-1165. [DOI: https://dx.doi.org/10.1109/CCIS.2012.6664566]
41. Ni, L.; Zhang, J.; Yu, J. Priced timed petri nets based resource allocation strategy for fog computing. Proceedings of the 2016 International Conference on Identification, Information and Knowledge in the Internet of Things, IIKI 2016; Beijing, China, 20–21 October 2016; Volume 2018, pp. 39-44. [DOI: https://dx.doi.org/10.1109/IIKI.2016.87]
42. Wang, Z.; Deng, H.; Zhu, X.; Hu, L. Application of improved whale optimization algorithm in multi-resource allocation. Int. J. Innov. Comput. Inf. Control.; 2019; 15, pp. 1049-1066. [DOI: https://dx.doi.org/10.24507/ijicic.15.03.1049]
43. Alsaffar, A.A.; Pham, H.P.; Hong, C.S.; Huh, E.N.; Aazam, M. An Architecture of IoT Service Delegation and Resource Allocation Based on Collaboration between Fog and Cloud Computing. Mob. Inf. Syst.; 2016; 2016, 6123234. [DOI: https://dx.doi.org/10.1155/2016/6123234]
44. Talaat, F.M. Effective prediction and resource allocation method (EPRAM) in fog computing environment for smart healthcare system. Multimed. Tools Appl.; 2022; 81, pp. 8235-8258. [DOI: https://dx.doi.org/10.1007/s11042-022-12223-5]
45. De Vasconcelos, D.R.; Andrade, R.M.D.C.; De Souza, J.N. Smart shadow—An autonomous availability computation resource allocation platform for internet of things in the fog computing environment. Proceedings of the IEEE International Conference on Distributed Computing in Sensor Systems, DCOSS 2015; Fortaleza, Brazil, 10–12 June 2015; pp. 216-217. [DOI: https://dx.doi.org/10.1109/DCOSS.2015.25]
46. Wu, C.G.; Wang, L. A Deadline-Aware Estimation of Distribution Algorithm for Resource Scheduling in Fog Computing Systems. Proceedings of the 2019 IEEE Congress on Evolutionary Computation, CEC 2019; Wellington, New Zealand, 10–13 June 2019; pp. 660-666. [DOI: https://dx.doi.org/10.1109/CEC.2019.8790305]
47. Bian, S.; Huang, X.; Shao, Z. Online task scheduling for fog computing with multi-resource fairness. Proceedings of the IEEE Vehicular Technology Conference 2019; Honolulu, HI, USA, 21–25 September 2019. [DOI: https://dx.doi.org/10.1109/VTCFall.2019.8891573]
48. Zhang, H.; Xiao, Y.; Bu, S.; Niyato, D.; Yu, F.R.; Han, Z. Computing Resource Allocation in Three-Tier IoT Fog Networks: A Joint Optimization Approach Combining Stackelberg Game and Matching. IEEE Internet Things J.; 2017; 4, pp. 1204-1215. [DOI: https://dx.doi.org/10.1109/JIOT.2017.2688925]
49. Pham, T.P.; Durillo, J.J.; Fahringer, T. Predicting Workflow Task Execution Time in the Cloud Using a Two-Stage Machine Learning Approach. IEEE Trans. Cloud Comput.; 2017; 8, pp. 256-268. [DOI: https://dx.doi.org/10.1109/TCC.2017.2732344]
50. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw.; 2016; 95, pp. 51-67. [DOI: https://dx.doi.org/10.1016/j.advengsoft.2016.01.008]
51. Feng, W. Convergence Analysis of Whale Optimization Algorithm. J. Phys. Conf. Ser.; 2021; 1757, pp. 1-10. [DOI: https://dx.doi.org/10.1088/1742-6596/1757/1/012008]
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
Fog computing is preferred over cloud computing for latency-sensitive Internet of Things (IoT) services. We consider a resource-limited fog system in which real-time tasks with heterogeneous resource requirements must be allocated within their execution deadlines. Two modules are designed to handle real-time continuous streaming tasks. The first module, task classification and buffering (TCB), classifies task heterogeneity using dynamic fuzzy c-means clustering and buffers tasks into parallel virtual queues according to enhanced least laxity time. The second module, task offloading and optimal resource allocation (TOORA), decides whether to offload each task to the cloud or the fog and optimally assigns fog-node resources using the whale optimization algorithm, which provides high throughput. The simulation results of our proposed algorithm, called whale optimized resource allocation (WORA), are compared with those of other models, such as shortest job first (SJF), the multi-objective monotone increasing sorting-based (MOMIS) algorithm, and the Fuzzy Logic based Real-time Task Scheduling (FLRTS) algorithm. When 100 to 700 tasks are executed on 15 fog nodes, the results show that WORA saves 10.3% of the average cost of MOMIS and 21.9% of the average cost of FLRTS. In terms of energy consumption, WORA consumes 18.5% less than MOMIS and 30.8% less than FLRTS. WORA also performs 6.4% better than MOMIS and 12.9% better than FLRTS in terms of makespan, and 2.6% better than MOMIS and 4.3% better than FLRTS in terms of successful completion of tasks.
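The abstract names the whale optimization algorithm as the engine behind TOORA's resource assignment. For orientation, the sketch below is a minimal Python implementation of the canonical WOA of Mirjalili and Lewis [50]; it is not the authors' WORA pipeline, which additionally performs task classification, laxity-ordered queueing, and the cloud–fog offloading decision. The function name woa_minimize, the parameter defaults, and the box-constraint handling are illustrative assumptions, not details specified in the paper.

    import numpy as np

    def woa_minimize(cost, dim, bounds, n_whales=30, n_iter=200, b=1.0, seed=0):
        # Canonical whale optimization algorithm (Mirjalili and Lewis [50]).
        # cost   : objective mapping a length-dim vector to a scalar (lower is better)
        # bounds : (low, high) box constraints applied after every position update
        rng = np.random.default_rng(seed)
        low, high = bounds
        X = rng.uniform(low, high, size=(n_whales, dim))   # whale positions
        best = min(X, key=cost).copy()                     # X*, best solution so far
        for t in range(n_iter):
            a = 2.0 - 2.0 * t / n_iter                     # decreases linearly from 2 to 0
            for i in range(n_whales):
                A = 2.0 * a * rng.random() - a             # coefficient A, Eq. (2.3) in [50]
                C = 2.0 * rng.random()                     # coefficient C, Eq. (2.4) in [50]
                if rng.random() < 0.5:
                    if abs(A) < 1.0:                       # exploit: encircle the best whale
                        X[i] = best - A * np.abs(C * best - X[i])
                    else:                                  # explore: move toward a random whale
                        rand = X[rng.integers(n_whales)]
                        X[i] = rand - A * np.abs(C * rand - X[i])
                else:                                      # spiral (bubble-net) update, Eq. (2.5)
                    l = rng.uniform(-1.0, 1.0)
                    X[i] = np.abs(best - X[i]) * np.exp(b * l) * np.cos(2 * np.pi * l) + best
                X[i] = np.clip(X[i], low, high)
            cand = min(X, key=cost)                        # keep the best position found so far
            if cost(cand) < cost(best):
                best = cand.copy()
        return best, cost(best)

    # Toy usage: a 5-dimensional sphere function stands in for a resource-allocation cost.
    best, f = woa_minimize(lambda x: float(np.sum(x * x)), dim=5, bounds=(-10.0, 10.0))

In a WORA-style setting, one plausible encoding (again an assumption on our part) is a continuous vector with one entry per task that the cost function rounds to a fog-node index, so that a single evaluation can score the makespan, energy consumption, and deadline misses of that assignment.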
Author Affiliations
1 Faculty of Engineering (Computer Science and Engineering), BPUT, Rourkela 769015, Odisha, India
2 Department of Computer Science and Engineering, Parala Maharaja Engineering College (Govt.), Berhampur 761003, Odisha, India
3 Department of Computer Science and Engineering, SRM University, Amaravati 522502, AP, India; Department of Computing Science, Umeå University, 901 87 Umeå, Sweden
4 School of Computer Science, SCS Taylor’s University, Subang Jaya 47500, Malaysia
5 Department of Information Technology, College of Computers and Information Technology, Taif University, Taif 21944, Saudi Arabia