Abstract
The fog computing paradigm has emerged as a widely adopted computing technology to support the execution of internet of things applications. The paradigm introduces a distributed, hierarchical layer of nodes that collaborate as the Fog layer. User devices connected to Fog nodes are often non-stationary. The location-aware attribute of Fog computing makes it necessary to provide uninterrupted services to users, irrespective of their locations. Migration of user application modules among the Fog nodes is an efficient solution to tackle this issue. In this paper, an autonomic framework, MAMF, is proposed to perform migrations of containers running user modules while satisfying the Quality of Service requirements. The hybrid framework, employing MAPE loop concepts and a Genetic Algorithm, addresses the migration of containers in the Fog environment while ensuring application delivery deadlines. The approach uses the predicted value of the user location at the next time instant to initiate the migration process. The framework was modelled and evaluated in the iFogSim toolkit. The re-allocation problem was also mathematically modelled as an Integer Linear Programming problem. Experimental results indicate that the approach offers an improvement in terms of network usage, execution cost and request execution delay over the existing approaches.
Introduction
The proliferation of smart devices and gadgets has contributed to the ever-increasing popularity of internet of things (IoT) applications. Most IoT applications leverage artificial intelligence (AI) capabilities to accomplish tasks which include processing the streams of big data generated by the connected systems of smart devices. Considering the low-power, resource-constrained nature of devices at the network edge, IoT applications urge the use of a supporting distributed computing system. In the previous decade, this role was enacted by the Cloud paradigm (Verma et al. 2018).
Following the upsurge in the number of IoT applications developed, and in their requirements, the quasi-central nature of the Cloud environment strained the inextricable relation between the Cloud and the IoT (Barcelo et al. 2016). Contemplating the delay-sensitive and real-time processing needs of IoT applications, the Fog Computing paradigm was introduced. The Fog paradigm conforms to a hierarchical and distributed structure. Devices with sufficient processing capabilities, coupled with storage resources, enable the fruition of the vision of Fog Computing. These devices are termed Fog nodes or Fog devices (Marín-Tordera et al. 2017). The Fog paradigm reinforces IoT application processing by synchronizing itself with the Cloud paradigm. Thus, the Fog paradigm extends the Cloud functionalities to the edge of the network.
In order to harness the distributed nature of the Fog paradigm, IoT applications are modelled as a collection of application modules or services. Each module may be deployed and executed independently. The application modules communicate between themselves, in the course of their request processing (Skarlat et al. 2017; Martin et al. 2019). To ensure negligible performance overhead, the Fog Computing environments may opt for lightweight virtualization, where the application modules are hosted in containers, rather than on virtual machines (Bellavista and Zanni 2017). The containers encapsulate the user data and application modules (Martin et al. 2018).
The user devices sense the environment and generate streams of data to be processed by the application modules. Each request for processing is received by a Fog node, which determines the most apt course of action. The user devices submitting these requests may not be stationary. This implies that the access points that the devices use to communicate with the Fog nodes may change. This results in an increase in the hop count, which adversely impacts the deadline constraints and delay requirements. In order to ensure timely service delivery, it is essential that the control of processing is relinquished to a Fog node closer to the access point to which the user device is currently connected. This process involves transferring the data and the application context encased in the containers. In container-based systems, this can be realized by migrating the containers corresponding to the user to a Fog node situated closer to the end device. However, approaches that keep migrating containers along the trail of the mobile user may result in several unwanted migrations. Each migration action incurs an overhead on the system, owing to resource constraints and network bandwidth constraints. Thus, migration is opted for only in situations where it is not possible to prolong the execution on the current node. In the event of no viable options to transfer the processing, the application may be offloaded to the Cloud to ensure uninterrupted service delivery.
The proposed framework addresses the problem of migrating the containers corresponding to the user application modules across the Fog nodes. The migration of containers is done in an autonomic manner, by adopting the autonomic control loop. The conceptual framework introduces agents which perceive the environment and relay information about the environmental context to determine the execution plan. The control loop is called the Monitor–Analyze–Plan–Execute (MAPE) loop. The Monitor phase constantly monitors the mobility of the user device by observing the change in distance between the user device and the Fog node to which it is currently connected. Information about the resource availability and resource usage at the Fog nodes is also gathered in this phase. The information collected in this phase is used by the subsequent phases to determine whether the distance between the Fog device and the user device is within acceptable limits. If the distance is outside the limits, the connection between the Fog device and the user device may be dropped, leading to a denial of service. To prevent this, the container corresponding to the user may be migrated from the current Fog device. The Analyze phase applies forecasting techniques to derive the possible location of the user in the next time step. Rather than checking whether the current user location is outside the limits, it is proposed to consider the forecasted location of the mobile device. The Plan phase performs the checking operation to determine whether the forecasted location of the mobile user device falls outside the range of the connected Fog node. In such cases, the Plan phase identifies a suitable Fog node, closest to the forecasted location of the mobile user, to which the control of the user mobile device can be transferred. This transfer decision is taken based on the Fog node location and the availability of resources. In the event of no suitable Fog nodes being available, the decision is to offload the application to the Cloud. A genetic algorithm (GA)-based approach is adopted to determine the destination/target Fog node that can accommodate the migrating container. The Execute phase ensures that the decision taken by the previous phase is put into action. The iFogSim toolkit was extended to support container virtualization and container migration, to evaluate the performance of the proposed approach.
To summarize, the major contributions of this work are as follows.
Designed a conceptual framework for the autonomic delay-driven migration of application modules, which performs the migration of application containers based on the autonomic MAPE loop.
Developed a mathematical model representing the optimization problem in the re-allocation phase. The problem is formulated as a (0/1) integer linear programming problem.
Proposed a Genetic Algorithm based approach to determine the target node for the migration of the user application containers.
Developed a baseline method based on the distance metric and compared the results of the MAMF with that of the baseline.
Performed a series of experiments to analyze the performance of the proposed approach using real-world mobility traces.
Table 1. Summary of existing approaches for migration

| Research article | Factors influencing decision | Virtualization level | Environment | Solution approach |
|---|---|---|---|---|
| Bi et al. (2018) | Mobility | Virtual machine | Fog computing environment | SDN-based controller |
| Islam et al. (2016) | User mobility, increased/decreased load at cloudlet | Virtual machine | Mobile cloud computing | Genetic algorithm based solution |
| Lopes et al. (2017) | User mobility | Container/virtual machine | Mobile edge computing | Layered incremental file synchronization |
| Bittencourt et al. (2015) | Mobility, latency | Virtual machine | Fog computing environment | Possible solutions suggested |
| Our work | Mobility, resource capacity, latency and processing time | Container | Fog computing environment | Predetermined autonomic migration |
Related works
In this section, some fundamental and conceptual works in related areas are discussed. Fog computing extends the Cloud capabilities towards the network edge. There exists a large body of research highlighting the importance of Fog in the modern era (Dastjerdi et al. 2016). Bonomi et al. (2014) proposed a hierarchical distributed architecture for Fog. The data from the end devices are sent to the Fog layer, rather than sending the entire data to the Cloud. Depending on the requirements and availability, the data may be processed either in the Fog layer or in the Cloud.
The recent developments in the internet of everything (IoE) demand delay-sensitive processing of requests (Mishra et al. 2019). The Fog nodes located near the network edge enable fast processing of the data, thus catering to the needs of delay-critical applications as well.
Many researchers came up with solutions for the initial placement of application modules in heterogeneous distributed Fog environments. The placement solutions are either centralised or decentralised (Guerrero et al. 2019). Many of these solutions were focussed on reducing service delivery latency (Mahmud et al. 2019a) and increasing user quality of experience (Mahmud et al. 2019b).
The users or the end devices requesting the services may not be stationary. Mobility support is one of the important characteristics of the Fog. Maintaining continuity in service delivery across different locations is a challenging task. Bi et al. (2018) proposed a Fog computing architecture to support mobility. Their architecture decouples mobility control and data forwarding using software defined networks (SDN). They also developed an efficient route optimization algorithm which improves the performance of handover mechanisms. Islam et al. (2016) proposed a virtual machine migration model for the Mobile Cloud Computing (MCC) environment. Their approach is based not only on user mobility but also on the load in the cloudlet. They used a genetic algorithm to identify the optimal cloud server and also tried to reduce the total number of migrations. Machen et al. (2018) developed a framework to support user mobility in Mobile Edge Computing (MEC) environments. In order to provide the service without interruption for mobile users, they have used migration of services across MECs. This approach aims to reduce the service downtime and the overall time for the migration. Bittencourt et al. (2017) emphasise the need for mobility-aware scheduling in Fog computing environments. They accentuate the important metrics to be considered by a Fog scheduler while taking decisions in a mobility-supported environment. The priority level of the applications and user mobility patterns can be considered to take efficient scheduling decisions. However, user mobility has not been taken into consideration in the existing scheduling approaches. Similar to the Cloud, Fog computing also makes use of virtualization technology to provide services to the end users. Lopes et al. (2017) enhanced the basic iFogSim simulator to support mobility through the migration of virtual machines. They developed migration policies and migration strategies for mobile users. Even though many Fog-based applications are based on container virtualization, their extensions to the iFogSim simulator support only VM-based migrations and do not support container migration. They have not considered any user attributes (such as user location, user mobility pattern, etc.) for making migration decisions. Bittencourt et al. (2015) proposed an architecture to enable VM migration in the Fog computing environment. Their approach is based on the concept that each user has a virtual machine running on the cloudlet, which serves as the endpoint for the users to access services. The decision of migration is based on user location, direction of movement, applications running, capacity of the cloudlet and capacity of the network. The authors only outlined the framework and the mandatory components for VM migration in the Fog environment, and have not tested the suitability of the framework for real-time Fog environments. Liao et al. (2019) proposed a vehicle mobility based migration model for the vehicular Fog computing environment. The load is balanced across the different nodes based on a resource-pricing-based incentive scheme. However, the limited computing resources in the vehicles hinder the system from achieving a perfect load balance. Talaat et al. (2020) proposed a reinforcement learning based load balancing approach which allocates the incoming request based on the task and the available resource capacity of Fog devices. Zhu et al. (2018) proposed a task allocation approach for vehicular Fog computing. The task allocation is modelled as an optimisation problem with constraints on service latency, quality loss and Fog capacity. However, this approach may not be feasible in scenarios with an increased number of Fog nodes and users. A comparison of the existing approaches in the literature for performing migration is provided in Table 1.
Recent developments in Fog show that container virtualization technology provides better performance compared to the traditional virtual machine based approach (Kaur et al. 2017). Hoque et al. (2017) proposed a framework for the orchestration of containers in Fog computing environments. They also presented a detailed analysis of the drawbacks of the existing orchestration tools. However, they have not considered the migration of container-based applications.
Motivation
There is a dearth of research that considers the mobility support feature in Fog environments. The research community has largely not considered the migration of containers running the applications to support the mobility of users. There is a need for research approaches for the efficient migration of containers in Fog environments. Though a few approaches discuss the migration of virtual machines, the same techniques are not directly applicable in the context of container virtualization.1 Designing approaches for migration involves challenges in identifying the situations in which migrations are required, and also in identifying the subset of containers and Fog nodes to be considered for the migration process.
Fig. 1 [Images not available. See PDF.]
Problem scenario
System model
The decentralized deployment model of Fog varies according to the scenario in which it is posited. In this paper, a hierarchical structure is considered. The data from the IoT devices are received by the Fog nodes, rather than transporting the bulk volume of data to the Cloud. The Fog nodes receiving this data process the data and take decisions which are communicated to the edge devices. The data required for future analysis, and those which cannot be processed by the Fog layer, are transported to the Cloud. The IoT devices are considered to exhibit mobility. When the users or mobile devices change their location with respect to time, data and processing-related information also need to be transferred in a timely manner, to avoid intermittent delays or interruptions in the service. A hybrid approach based on autonomic computing and a genetic algorithm is proposed in this article for the migration of application modules in the Fog environment.
In this paper, lightweight virtualization technology is considered, which can be leveraged in the form of containers for deploying the application modules and user data. Containers are placed on the Fog nodes and the mobile users are connected to these nodes for accessing the services. To provide lower latency values and a better quality of experience (QoE), containers must be migrated to Fog nodes which are closer to the current position of the user. Determining the instant at which migration actions must be initiated and identifying a suitable destination for the module concerned are the two major issues in the context of migration in Fog environments.
Table 2. Notations used in the system model

| Notation | Element | Description |
|---|---|---|
| $F$ | Fog node | Set of all available Fog nodes in the Fog environment |
| $N$ | | Number of Fog nodes available in the system |
| $f_j$ | | Fog node $j$ |
| $CPU_j$ | | Number of CPU cores available on Fog node $f_j$ |
| $MEM_j$ | | Amount of memory available on Fog node $f_j$ |
| $BW_j$ | | Bandwidth available on Fog node $f_j$ |
| $AM$ | Application module | Set of all application modules to be migrated in the Fog environment |
| $M$ | | Number of migrating application modules in the system |
| $am_i$ | | Application module $i$ |
| $cpu_i$ | | Number of CPU cores requested by application module $am_i$ |
| $mem_i$ | | Amount of memory requested by application module $am_i$ |
| $bw_i$ | | Amount of bandwidth requested by application module $am_i$ |
| $D_i$ | | Tolerable delay of application module $am_i$ |
| $TTC_i$ | | Time to completion of application module $am_i$ |
| $PT_{ij}$ | | Processing time of application module $am_i$ on Fog node $f_j$ |
| $MigT_{ij}$ | | Migration time for application module $am_i$ to migrate to Fog node $f_j$ |
| $dist_{ik}$ | | Distance between the current position of the user submitting requests for application module $am_i$ and the Fog node $f_k$ on which it is deployed |
| $d_{kj}$ | | Distance between the current Fog node $f_k$ and the target candidate Fog node $f_j$ |
The system under consideration is portrayed in Fig. 1. User devices connected to the Fog form a layer represented as the 'Sensor Layer'. The Fog layer basically acts as the arbitrator between the Sensor and Cloud layers. The Fog layer consists of different Fog nodes that provide Fog services. Fog nodes may be gateways, routers, switches or dedicated physical servers. The data collected by the sensors in the Sensor layer are sent to the Fog nodes. We consider a scenario in which the Fog environment is used to provide low-latency services for users whose locations vary with respect to time. The Fog environment considered in this paper consists of a set of Fog nodes $F = \{f_1, f_2, \ldots, f_N\}$, where the number of Fog nodes in the environment is denoted as $N$. The characteristics of a Fog node $f_j$ can be defined as a tuple $\langle CPU_j, MEM_j, BW_j \rangle$, where $CPU_j$ represents the number of CPU cores available on the Fog node, and $MEM_j$ and $BW_j$ are the amount of memory available and the bandwidth available, respectively. The mobile users access the services from the application modules which are deployed as containers on various Fog nodes or in the Cloud. An application module $i$ is denoted as $am_i$. There may exist a set of $M$ modules running in the Fog environment, and the set of all application modules is represented as $AM$. Each $am_i$ is allocated to a suitable Fog node which satisfies the resource requirements of the corresponding module. The requirements of the module can be denoted by a tuple $\langle cpu_i, mem_i, bw_i \rangle$, where $cpu_i$ is the number of CPU cores requested by the module, and $mem_i$ and $bw_i$ are the amount of memory and bandwidth resources requested by the module, respectively. The elements in the Fog system and their representations are summarized in Table 2.
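To make the notation concrete, the following sketch captures the Fog node and application module tuples as simple data structures; the class and field names (e.g. `FogNode`, `AppModule`, `fits`) are illustrative choices of ours and not part of the MAMF implementation.

```python
from dataclasses import dataclass

@dataclass
class FogNode:
    """Fog node f_j with its available resource capacities (Table 2)."""
    node_id: int
    cpu_cores: int         # CPU_j: number of CPU cores available
    memory_mb: float       # MEM_j: amount of memory available
    bandwidth_mbps: float  # BW_j: bandwidth available
    location: tuple        # (x, y) co-ordinates of the node

@dataclass
class AppModule:
    """Application module am_i and the resources it requests."""
    module_id: int
    cpu_req: int               # cpu_i: CPU cores requested
    mem_req: float             # mem_i: memory requested
    bw_req: float              # bw_i: bandwidth requested
    tolerable_delay: float     # D_i: deadline for serving a request

def fits(module: AppModule, node: FogNode, used: dict) -> bool:
    """Check whether a node still has enough free capacity for a module."""
    u = used.get(node.node_id, (0, 0.0, 0.0))
    return (u[0] + module.cpu_req <= node.cpu_cores
            and u[1] + module.mem_req <= node.memory_mb
            and u[2] + module.bw_req <= node.bandwidth_mbps)
```

Such a feasibility check is the building block used by the re-allocation formulation in the next subsection.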
Optimization model
The migration of application modules to destination Fog nodes from the currently allocated set of Fog nodes involves three steps.
Identifying when a migration action is to be initiated.
Identifying which application modules are to be migrated.
Determining destination nodes for each of the migrating modules.
$$\min \; \sum_{i=1}^{M} TTC_i, \qquad TTC_i = \sum_{j=1}^{N} x_{ij}\,\bigl(MigT_{ij} + PT_{ij}\bigr) \tag{1}$$

The decision variable of the problem denotes the Fog node to which an application module can be migrated. The binary decision variable can be defined as

$$x_{ij} = \begin{cases} 1, & \text{if application module } am_i \text{ is migrated to Fog node } f_j\\ 0, & \text{otherwise} \end{cases} \tag{2}$$

subject to

$$\sum_{i=1}^{M} x_{ij}\, cpu_i \le CPU_j \qquad \forall j \in \{1,\ldots,N\} \tag{3}$$

$$\sum_{i=1}^{M} x_{ij}\, mem_i \le MEM_j \qquad \forall j \in \{1,\ldots,N\} \tag{4}$$

$$\sum_{i=1}^{M} x_{ij}\, bw_i \le BW_j \qquad \forall j \in \{1,\ldots,N\} \tag{5}$$

$$x_{ij}\, dist_{ij} \le R_j \qquad \forall i \in \{1,\ldots,M\},\; \forall j \in \{1,\ldots,N\} \tag{6}$$

$$TTC_i \le D_i \qquad \forall i \in \{1,\ldots,M\} \tag{7}$$

$$\sum_{j=1}^{N} x_{ij} = 1 \qquad \forall i \in \{1,\ldots,M\} \tag{8}$$

Here, $dist_{ij}$ denotes the distance between the (forecasted) position of the user of application module $am_i$ and the candidate Fog node $f_j$, and $R_j$ denotes the coverage radius of $f_j$.
The optimization problem tries to reduce the time to completion of the migrating modules and thus reduces the latency experienced by users for processing the requests. Equations 3, 4 and 5 represent the constraints on processing, memory and bandwidth resources: the sum of the resources requested by all the application modules on a Fog node should not exceed the total resource capacity of that Fog node. Equation 6 reduces unwanted migrations by permitting a migration only if the user falls completely within the range of the target Fog node. Equation 7 ensures that the delay constraints are not violated due to the migration process. Equation 8 makes sure that every application module is placed on one and only one Fog node.

Table 3. Mathematical model solution
| Parameter | Optimization value |
|---|---|
| Objective value | 6.41 |
| $x_{11}$ | 1 |
| $x_{12}$ | 0 |
| $x_{21}$ | 0 |
| $x_{22}$ | 1 |
| $x_{31}$ | 0 |
| $x_{32}$ | 1 |
Model example
Consider a Fog computing environment with two Fog nodes and three mobile devices whose application modules need to be migrated. Each Fog node has a location co-ordinate and a resource capacity, and each mobile user has a current location co-ordinate and a resource requirement for its corresponding container. Identification of the suitable destination for migration can be formulated as a (0/1) integer programming problem. The objective function formed is given in Eq. 9

$$\min \; \sum_{i=1}^{3} \sum_{j=1}^{2} x_{ij}\,\bigl(MigT_{ij} + PT_{ij}\bigr) \tag{9}$$

subject to the constraints of Eqs. 3–8 instantiated for this example.

Fig. 2 [Images not available. See PDF.]
MAMF framework based on control MAPE loop
The optimization problem can be solved using any integer linear programming solver. The solution obtained using the IBM CPLEX engine is given in Table 3. IBM CPLEX is a commercial solver that consistently ranks among the top-performing solvers in terms of speed and capability, and it has been used as a performance benchmark in comparisons of linear programming solvers (Gearhart et al. 2013). Classical optimization techniques provide accurate results for small problem spaces. However, with an increase in the number of Fog nodes and application modules, this may not be feasible. The subsequent sections discuss metaheuristic techniques that can be used to obtain near-optimal solutions in such cases.
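For small instances such as the example above, the (0/1) formulation in Eqs. 1–8 can be handed to an off-the-shelf solver. The sketch below uses the open-source PuLP library rather than CPLEX, and the TTC, demand and capacity values are made-up illustrative numbers, so it only shows the shape of the model, not the authors' exact experiment.

```python
import pulp

# Illustrative data: 3 migrating modules, 2 candidate Fog nodes.
M, N = 3, 2
ttc = [[2.1, 3.0], [2.5, 1.9], [3.2, 2.3]]      # TTC_ij = MigT_ij + PT_ij (assumed values)
cpu_req, mem_req = [1, 2, 1], [256, 512, 128]
cpu_cap, mem_cap = [4, 4], [1024, 1024]
deadline = [5.0, 5.0, 5.0]                      # D_i

prob = pulp.LpProblem("module_reallocation", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (range(M), range(N)), cat="Binary")   # Eq. 2

# Objective (Eq. 1): minimise the total time-to-completion of migrating modules.
prob += pulp.lpSum(x[i][j] * ttc[i][j] for i in range(M) for j in range(N))

for j in range(N):  # capacity constraints (Eqs. 3-4; bandwidth, Eq. 5, is analogous)
    prob += pulp.lpSum(x[i][j] * cpu_req[i] for i in range(M)) <= cpu_cap[j]
    prob += pulp.lpSum(x[i][j] * mem_req[i] for i in range(M)) <= mem_cap[j]

for i in range(M):
    for j in range(N):  # delay constraint (Eq. 7)
        prob += x[i][j] * ttc[i][j] <= deadline[i]
    prob += pulp.lpSum(x[i][j] for j in range(N)) == 1   # Eq. 8: exactly one destination

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for i in range(M):
    for j in range(N):
        if x[i][j].varValue == 1:
            print(f"module {i} -> fog node {j}")
```

The same model could equally be fed to CPLEX; the library is only swapped here to keep the sketch self-contained.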
Proposed conceptual framework
The proposed approach adopts the autonomic computing paradigm for the effective orchestration of the Fog computing environment. Fog computing environments face challenges in supporting applications demanding mobility support. The mobile nature of the users and the heterogeneous nature of the resources available raise the need for migration of application modules from one Fog device to another. Migration of application modules minimizes the latency experienced by the users in motion, thereby ensuring that the delay requirements of the applications are met. Autonomic systems can easily adapt themselves to fluctuations in the environment, and this feature serves as a promising concept for the management of various distributed infrastructures. The proposed approach uses the MAPE (Jacob et al. 2004; Ghobaei-Arani et al. 2016) autonomic control loop for the management of migrating application modules and their mapping to Fog nodes in the Fog environment. In the MAPE loop, M represents Monitoring, A stands for Analyser, P stands for Planner and E is for Execution.
A framework called the Mobility aware Autonomic Migration Framework (MAMF) is proposed for the autonomic orchestration of mobility-aware migration of application modules in the Fog computing environment. As depicted in Fig. 2, the MAMF considers three layers, namely the Cloud layer, the Fog layer and the Sensor layer. The Sensor layer consists of end devices which may or may not be in motion. The Sensor layer generates the requests to be processed. It must be noted that the devices in this layer generally do not have any processing abilities. These requests are forwarded to those nodes in the higher layers which possess sufficient infrastructure and computing resources to process the data. The nodes may be either from the Fog computing layer or the Cloud computing layer. The Fog layer falls between the Sensor layer and the Cloud layer. The Fog layer receives the processing requests from the Sensor layer. The first phase of the processing of requests is carried out at the Fog node, rather than merely relaying the requests, as received, to the Cloud. The Fog layer processes the majority of the requests and sends the results back to the Sensor layer. The requests which are delay tolerant and cannot be processed at the Fog layer due to lack of resources are forwarded to the Cloud layer. In the proposed MAMF, the autonomic algorithm based on the MAPE autonomic loop runs in the Fog layer.
A model for a smart vehicle system was considered, which uses the Fog computing environment for hosting services. Each vehicle poses as a client availing services from the Fog nodes, and the Fog nodes providing the services are deployed along the roads. The communication protocol used between the mobile clients and the Fog nodes is IEEE 802.11p. The protocol is an enhanced version of IEEE 802.11, permitting wireless access in vehicular environments (Gozálvez et al. 2012). This protocol supports communication for transmission frequencies in the band of 5.85–5.9 GHz. Each Fog node/server controls and co-ordinates the mobile users located within its coverage area. The actual coverage area of each Fog server depends on several factors such as geographical conditions, propagation conditions and terrain types. The Friis transmission equation (Friis 1946) may be applied to calculate the power received by the mobile user and to calculate the coverage area of each Fog server. The Friis equation is provided in Eq. (10)
$$P_r = P_t \, G_t \, G_r \left(\frac{\lambda}{4\pi R}\right)^{2} \tag{10}$$

where $P_r$ is the power at the receiver, $P_t$ is the output power of the transmitting antenna, $G_t$ is the gain of the transmitting antenna, $G_r$ is the gain of the receiving antenna, $\lambda$ is the wavelength and $R$ is the distance between the antennas. The use of an omnidirectional antenna is assumed, implying that $G_t = G_r = 1$, and the antenna is assumed to have 100% aperture efficiency. The value of $P_t$ is taken as 1 W.
Considering the minimum power requirements and the transmission range of IEEE 802.11p, the lower and upper thresholds for migration were fixed at the received-power levels corresponding to distances of 600 m and 1000 m, respectively.
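The received-power levels can be derived directly from Eq. 10. The snippet below is a small sketch of that calculation under the stated assumptions ($P_t = 1$ W, unit antenna gains, a carrier at the upper end of the 5.85–5.9 GHz band); the helper names are ours, and the threshold distances are the 600 m and 1000 m values reported above.

```python
import math

C = 3.0e8            # speed of light (m/s)
FREQ = 5.9e9         # IEEE 802.11p carrier frequency (Hz)
WAVELENGTH = C / FREQ

def friis_received_power(distance_m, p_tx_w=1.0, g_tx=1.0, g_rx=1.0):
    """Eq. 10: received power (W) for an omnidirectional link at a given distance."""
    return p_tx_w * g_tx * g_rx * (WAVELENGTH / (4 * math.pi * distance_m)) ** 2

def to_dbm(power_w):
    """Convert watts to dBm."""
    return 10 * math.log10(power_w * 1000.0)

# Received power at the lower (600 m) and upper (1000 m) migration thresholds.
for d in (600.0, 1000.0):
    print(f"{d:6.0f} m -> {to_dbm(friis_received_power(d)):.1f} dBm")
```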
MAPE control loop
MAMF follows the MAPE control loop concept and provides mobility support through the efficient orchestration of application modules. The MAPE loop running in the Fog layer consists of four phases, namely Monitor, Analyze, Plan and Execute. The operations in each phase are discussed in this section. An overview of the MAPE loop is given in Algorithm 1. Initially, the system boots the required number of Fog nodes. For each arriving user request, the MAPE control loop is executed at specified time intervals. The loop includes a check to determine whether the migration of application modules must be initiated or not. If migrations are to be initiated, the MAMF proceeds to identify suitable destination Fog nodes for the migrating application modules.
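The loop itself can be expressed as a small orchestration skeleton. The sketch below is a minimal structural outline, not Algorithm 1 itself: the four phase functions are supplied as callables (their behaviour is detailed in the following subsections), and the interval and number of rounds are illustrative parameters.

```python
import time

def mape_loop(modules, interval_s, monitor, analyse, plan, execute, rounds=3):
    """One controller iteration per interval: Monitor -> Analyse -> Plan -> Execute."""
    for _ in range(rounds):
        knowledge = monitor(modules)           # Algorithm 2: user locations + node resources
        forecasts = analyse(knowledge)         # Algorithm 3: location at t+1 (DES)
        actions = plan(forecasts, knowledge)   # Algorithm 4: stay / migrate / offload decisions
        execute(actions)                       # Algorithm 5: enact the migration plan
        time.sleep(interval_s)
```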
Monitoring phase
In an autonomic environment, the monitoring phase is responsible for sensing the managed process and its operating context (refer Algorithm 2). In this phase, the location history of the users submitting requests for the application module is monitored by the user monitor (line 1) and the status of the resources is monitored by the resource monitor (line 2). The knowledge base collects the information from the monitoring phase and stores it for use by the subsequent phases.
Analyse phase
The Analyse phase is devoted to processing the data collected by the monitor. The Analyser module (refer Algorithm 3) reads the time-based location history of the mobile devices from the knowledge base up to time t and forecasts the location at time t+1 using the Double Exponential Smoothing (DES) method.
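Double Exponential Smoothing maintains a level and a trend component and extrapolates them one step ahead. The following is a minimal sketch of how the Analyser could forecast the next location from the stored history, applied per coordinate; the smoothing factors are illustrative values, not the parameters tuned for MAMF.

```python
def des_forecast(history, alpha=0.6, beta=0.3):
    """Holt's double exponential smoothing: forecast the value at time t+1
    from a 1-D series observed up to time t."""
    if len(history) < 2:
        return history[-1]
    level, trend = history[0], history[1] - history[0]
    for y in history[1:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)          # smoothed level
        trend = beta * (level - prev_level) + (1 - beta) * trend   # smoothed trend
    return level + trend                                           # one-step-ahead forecast

def forecast_location(xs, ys):
    """Forecast the (x, y) position at t+1 by smoothing each coordinate separately."""
    return des_forecast(xs), des_forecast(ys)

# Example: a user moving roughly north-east.
xs = [0.0, 10.0, 21.0, 33.0, 44.0]
ys = [0.0,  5.0,  9.0, 15.0, 19.0]
print(forecast_location(xs, ys))
```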
Fig. 3 [Images not available. See PDF.]
Chromosome representation example for a system with four fog nodes and six application components
Planning phase
The planning phase decides the actions to be taken to achieve the goals of the MAMF (refer Algorithm 4). The Planning phase uses the forecasted location from the Analyse phase and calculates the distance between the user submitting requests for an application module and the Fog node on which it is deployed. Equation 11 is used to analyse whether migration is required or not. Considering the minimum power requirements and transmission ranges of the IEEE 802.11p protocol, the lower and upper thresholds for migration were fixed at 600 m and 1000 m. The lower threshold indicates when the MAMF starts the check for a suitable Fog node for the migration of the application module. If a suitable Fog node with a lower value of TTC than the current Fog node on which the module is deployed is identified, then the module is migrated to the identified Fog node. Otherwise, the module continues to be deployed on the same node. The upper threshold indicates the maximum limit above which it is no longer feasible to continue the execution of the module on the same Fog node, since the location of the user submitting the request falls outside the coverage area. This implies that a migration of the application module is inevitable. Thus, our framework checks for a suitable Fog node to migrate to. If no Fog node satisfying the requirements of the application module can be found, then the module is deployed to the Cloud, thereby ensuring no service interruptions in the future
$$\text{action}(dist_{ik}) = \begin{cases} \text{continue on the current Fog node}, & dist_{ik} < Th_{lower}\\ \text{search for a suitable target Fog node}, & Th_{lower} \le dist_{ik} \le Th_{upper}\\ \text{migrate (or offload to the Cloud)}, & dist_{ik} > Th_{upper} \end{cases} \tag{11}$$

where $Th_{lower} = 600$ m, $Th_{upper} = 1000$ m and $dist_{ik}$ is computed from the forecasted user location.
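A compact sketch of this planning decision is given below; `find_target_via_ga` is only a placeholder name standing in for the GA of Algorithm 6, and the distance is computed from the forecasted location produced by the Analyse phase.

```python
import math

TH_LOWER, TH_UPPER = 600.0, 1000.0   # migration thresholds (metres)

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def plan(forecast_pos, current_node_pos, find_target_via_ga):
    """Eq. 11: decide whether the module stays, migrates, or is offloaded to the Cloud."""
    d = distance(forecast_pos, current_node_pos)
    if d < TH_LOWER:
        return ("stay", None)                    # user remains well inside coverage
    target = find_target_via_ga(forecast_pos)    # Algorithm 6: search candidate Fog nodes
    if target is not None:
        return ("migrate", target)
    # Beyond the upper threshold with no suitable Fog node: offload to the Cloud.
    return ("offload_to_cloud", None) if d > TH_UPPER else ("stay", None)
```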
Check migration in lines 7 and 15 is a procedure that collects all the application modules requesting migration at the current time instant. It then invokes the GA procedure to check for suitable destinations and returns the results to the invoking procedure. The Genetic Algorithm (refer Algorithm 6) is used to identify the target candidate Fog nodes for the migration process.

Execution phase
The Execution phase provides mechanisms to enact the plan provided by the Planner module. The action plan developed in the Plan phase is enacted in the Execution phase (refer Algorithm 5). If the migration destination is a Fog node, then the application module is deallocated from the current Fog node and re-allocated to the destination Fog node. If the Cloud is obtained as the possible destination, then the application module is deallocated from the current Fog node and sent to the Cloud.
Genetic algorithm
Genetic algorithm (GA) is a classical metaheuristic approach for solving optimization problems. GA can be effectively used to provide near-optimal solutions for NP-hard problems (Mitchell 1998; Guerrero et al. 2018). To ensure better results using GA, choosing suitable genetic operators is paramount. Algorithm 6 discusses the adapted genetic algorithm used by the MAMF to identify a suitable Fog node to which the application module can be migrated. The parameter values chosen are provided in the Algorithm. The parameter values were fixed after performing several trials with different values.
Chromosome representation
In GA, the set of possible solutions is encapsulated in the population, and each individual solution in the population is called a 'chromosome'. The chromosome is encoded to represent the set of possible re-allocations, as shown in Fig. 3. The representation of possible re-allocations is an array where the indices of the array elements correspond to the identifiers of the application modules to be migrated and the element values are the identifiers of the Fog nodes to which the application modules can be migrated. The length of the chromosome corresponds to the number of modules which are to be migrated.
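As a concrete illustration of this encoding (mirroring Fig. 3, with module and node counts chosen to match the caption and the identifiers chosen arbitrarily here), a chromosome is simply an array indexed by migrating module and valued by a candidate Fog node:

```python
import random

# Six migrating application modules, four candidate Fog nodes (ids 0-3), as in Fig. 3.
NUM_MODULES, NUM_NODES = 6, 4

def random_chromosome():
    """index = migrating module id, value = Fog node chosen as its destination."""
    return [random.randrange(NUM_NODES) for _ in range(NUM_MODULES)]

# e.g. [2, 0, 3, 3, 1, 0] means module 0 -> node 2, module 1 -> node 0, and so on.
print(random_chromosome())
```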
Crossover and mutation operator
In GA, the solutions from one population are transformed to generate the next population in each iteration. As the generations advance, the chromosomes in each generation draw closer to the near-optimal solution. GA progresses through the biological evolution concepts of crossover and mutation, where the better or stronger solutions are selected for reproduction and the weaker ones are eliminated.

Crossover provides a structured yet randomized exchange of genetic material among the solutions. Crossover combines two solutions to generate new solutions. The crossover operator is fuelled by the idea that the combination of two good solutions gives rise to better offspring. The combination can be interpreted, based on the chromosome representation, as taking pieces/portions from good solutions to generate new solutions. In the MAMF, the uniform crossover operator was used. Uniform crossover exchanges individual genes of the parents rather than dividing the array into separate segments. Each gene of the new offspring is inherited from one of its parents, the choice being governed by a random real number $u$. The uniform crossover operator selects two parents and generates two children from the parents such that the random number $u$ decides whether the $i$th gene of a child is inherited from the first parent or the second parent.
The mutation operator is used to maintain diversity among the populations across the generations and is applied on an individual-by-individual basis. The purpose of the mutation is to avoid the local minima thereby improving the chances of obtaining better results. Random resetting was used as the mutation operator. Here, a random Fog node from a set of possible Fog nodes is selected to replace one of the Fog nodes in the solution.
Fitness function, selection operator
In GA, fitness functions are identified to quantify the quality of the solutions represented as chromosomes. Generally, fitness functions are directly derived from the objective function. Fitness functions play a vital role in ensuring the convergence of the GA. It assigns a score to every solution and enables one to identify the best solutions in each of the populations. The score of fitness function indicates how close the obtained solution is to the desired solution.
The MAMF employs the TTC as the GA fitness function. It represents the time to completion of the application module in the event of migration to a particular destination Fog node. For the application module $am_i$ migrating to Fog node $f_j$, the TTC is computed as provided in Eq. 1, that is, as the sum of the migration time $MigT_{ij}$ and the processing time $PT_{ij}$.
Stopping criteria
The Genetic Algorithm is said to have converged if there is no significant improvement in the solutions generated from one generation to the next. In the proposed MAMF, the genetic algorithm is stopped when successive iterations no longer produce better values for the fitness function.
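Putting the pieces together, a minimal sketch of an adapted GA in the spirit of Algorithm 6 is shown below. The fitness is the total TTC of the encoded re-allocation; the population size, tournament size, mutation rate and stagnation-based stopping rule are illustrative choices of ours rather than the tuned MAMF parameters, and `ttc` is assumed to be a precomputed module-by-node matrix.

```python
import random

def total_ttc(chrom, ttc):
    """Fitness: total time-to-completion of the re-allocation encoded by the chromosome."""
    return sum(ttc[i][node] for i, node in enumerate(chrom))

def uniform_crossover(a, b):
    """Each gene of the children comes from either parent, decided by a random real u."""
    picks = [random.random() < 0.5 for _ in a]
    return ([x if p else y for p, x, y in zip(picks, a, b)],
            [y if p else x for p, x, y in zip(picks, a, b)])

def random_reset_mutation(chrom, nodes, rate=0.1):
    """Random resetting: occasionally replace a gene by a random candidate Fog node."""
    return [random.choice(nodes) if random.random() < rate else g for g in chrom]

def run_ga(ttc, nodes, pop_size=40, stall_limit=30, max_gen=400):
    """Search for a low-TTC mapping of migrating modules to destination Fog nodes."""
    n_modules = len(ttc)
    pop = [[random.choice(nodes) for _ in range(n_modules)] for _ in range(pop_size)]
    best, stall = min(pop, key=lambda c: total_ttc(c, ttc)), 0
    for _ in range(max_gen):
        nxt = [best]                                                         # elitism
        while len(nxt) < pop_size:
            pa = min(random.sample(pop, 3), key=lambda c: total_ttc(c, ttc))  # tournament
            pb = min(random.sample(pop, 3), key=lambda c: total_ttc(c, ttc))
            ca, cb = uniform_crossover(pa, pb)
            nxt += [random_reset_mutation(ca, nodes), random_reset_mutation(cb, nodes)]
        pop = nxt[:pop_size]
        cand = min(pop, key=lambda c: total_ttc(c, ttc))
        if total_ttc(cand, ttc) < total_ttc(best, ttc):
            best, stall = cand, 0
        else:
            stall += 1                        # no improvement in this generation
        if stall >= stall_limit:              # convergence-based stopping criterion
            break
    return best

# Example with an assumed 3x2 TTC matrix (modules x candidate nodes).
print(run_ga([[2.1, 3.0], [2.5, 1.9], [3.2, 2.3]], nodes=[0, 1]))
```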
Experimental evaluation
The experiments in our evaluation were designed to analyze the performance of the MAMF in various scenarios. The Fog simulation toolkit iFogSim was used to simulate the Fog environment and Cloud resources. A real-world mobility dataset was used in order to mimic the movement of users.
Evaluation environment
The evaluation of our framework was done by considering a city with a smart vehicle system which uses the Fog computing environment for accessing services. The simulation was done in the iFogSim simulator (Gupta et al. 2017). iFogSim allows the modelling and simulation of Fog computing environments and creates an evaluation platform to demonstrate the capabilities of various resource management policies. Though the toolkit provides the necessary features to model the environment depicted in Sect. 3, a few modifications were required. iFogSim extensions were developed to support the deployment of application modules in lightweight virtualization entities called containers. Migration of application modules hosted in containers was enabled to support user mobility. Location co-ordinates were added for the mobile devices to enable recording of the user location. The application modules are initially deployed according to the Edgeward placement policy (Gupta et al. 2017) in all the considered scenarios.
Table 4. Experimental details

| Configuration number | Number of modules | Number of Fog nodes |
|---|---|---|
| Config_1 | 4 | 2 |
| Config_2 | 6 | 3 |
| Config_3 | 8 | 4 |
| Config_4 | 10 | 5 |
| Config_5 | 20 | 10 |
| Config_6 | 30 | 15 |
| Config_7 | 50 | 25 |
Simulation parameter settings
The performance of the MAMF was evaluated in the Fog computing environment taking various combinations of application modules and Fog nodes. The details of the configurations are provided in Table 4.
Mobility trace description
In order to validate the efficiency of our MAMF approach, a real-world vehicular mobility dataset was used. The dataset was provided by the General Departmental Council of Val de Marne, situated in the town of Creteil, France (Lèbre et al. 2015). The General Departmental Council is the regional agency that handles the control and co-ordination of the transportation system in the region. The dataset contains the traces of the vehicle flows in the city for a period of two morning peak hours and two evening peak hours. The traces for different types of vehicles were considered in our experiments.
Baseline approach
As there are no known policies for the migration of container based application modules in Fog computing environments, it is difficult to find a baseline approach to compare and evaluate the relative performance of the proposed framework. When the user moves away from the coverage of a Fog node, in order to provide effective user mobility support, the application modules must be migrated from the present Fog node to another Fog node, closer to the current position of the user and thus provide better coverage for the user along with his/her movement. Selection of a suitable target node for the application module when a number of Fog nodes are available is challenging. An interesting baseline is introduced which selects the destination Fog node as the one with the minimum distance among the various Fog nodes that can provide coverage for the current location of the user. The baseline method is called Shortest Distance Policy (denoted as SDP in the remainder of the paper).
A migration strategy involves three activities: identification of when containers are to be migrated, selecting the containers to be migrated and re-allocating the migrating containers to appropriate Fog nodes. For the SDP migration strategy, containers are migrated when the users travel beyond the coverage area of the Fog nodes to which they are connected. The coverage area bounds are calculated using Eq. 10 and also considering the minimum power requirements and transmission range of IEEE802.11p. Container migrations are initiated when the distance between the user position and the Fog node falls between 600 and 1000 m.
The SDP assumes that the user is within complete coverage of the Fog node when the distance between the current position of the user submitting requests for application module i and the Fog node on which the module is deployed is less than 600 m. When this distance is between 600 and 1000 m, the probability that the user will travel beyond the coverage region in the next time step is higher. Thus, SDP checks for a suitable migration destination by calculating the distance between the current user location and the different Fog nodes with sufficient resources available to host the application module. The Fog node closest to the user location is returned as the migration target node. When the distance is greater than 1000 m, the user is already beyond the coverage area, and SDP tries to find a suitable destination Fog node. If no suitable destination is obtained, then the module is migrated to the Cloud. The SDP algorithm is given in Algorithm 7.
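A minimal sketch of this baseline decision is shown below; the node structure and the resource-feasibility check are simplified stand-ins for the simulator's entities, and the example values are illustrative only.

```python
import math

TH_LOWER, TH_UPPER = 600.0, 1000.0   # coverage bounds derived from Eq. 10 / IEEE 802.11p

def sdp_decide(user_pos, current_node, nodes, module, has_capacity):
    """Shortest Distance Policy: pick the closest feasible Fog node, else the Cloud."""
    d_current = math.dist(user_pos, current_node["pos"])
    if d_current < TH_LOWER:
        return ("stay", current_node)              # user is still fully covered
    feasible = [n for n in nodes if has_capacity(n, module)]
    if feasible:
        target = min(feasible, key=lambda n: math.dist(user_pos, n["pos"]))
        return ("migrate", target)
    return ("offload_to_cloud", None)              # no Fog node can host the module

# Example usage with illustrative nodes and a trivial capacity check.
nodes = [{"id": 0, "pos": (0.0, 0.0), "free_cpu": 2},
         {"id": 1, "pos": (900.0, 100.0), "free_cpu": 4}]
decision = sdp_decide((850.0, 60.0), nodes[0], nodes,
                      module={"cpu": 1},
                      has_capacity=lambda n, m: n["free_cpu"] >= m["cpu"])
print(decision)
```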
Evaluation metrics
In order to assess the performance of the proposed MAMF, we have used several metrics. The first metric is the Total Time to Completion (TTC). Along with this metric, we have also considered the widely used evaluation metrics from the simulation environment (Gupta et al. 2017).
The TTC is the Time To Completion of a migrating application module. When the distance between the user submitting the request for an application and the Fog node on which the application is running exceeds a bound, the proposed framework MAMF starts to check for a suitable destination to which the module can be migrated. In order to choose one among the possible destination Fog nodes satisfying the resource requirements, we introduced the TTC metric. It is calculated as the sum of the time required for the migration of the application module $am_i$ to the destination Fog node $f_j$ and the time required to process a request to the application module on the destination $f_j$. The TTC is calculated as given in Eq. 1.
Network usage This metric indicates the overall network usage of the application modules. As the number of devices connected to the application increases, the overall network usage increases. Uncontrolled use of the network may lead to congestions in the network which results in performance degradation of the applications.
Average loop delay The processing of a user request may involve processing by a series of application modules or a loop of application modules. Users receive the response only after the complete execution of the loop of modules. A lag in any connection in this loop will impact the response time experienced by user. Thus, proper monitoring of average loop delay is essential to avoid the violations in user Service Level Agreements (SLAs).
Average tuple execution delay It is the average of the time taken to complete the execution of user requests in a particular Fog computing environment. Requests to a Fog application module contain values according to a defined data format. Each request can thus be considered as a tuple of values to be processed. Increased delays in processing the requests may cause violations of the delay requirements. Thus, in a Fog computing environment, the average tuple execution delay can be considered as a measure to assess the conformance level of quality-oriented services.
Monetary cost of execution in cloud In the hierarchical Fog computing environment, Fog nodes receive and pre-process data from the end devices rather than transporting the entire data into the Cloud. Thus, Fog reduces the need for increased bandwidth and also helps in reducing the congestion in the network. The data which cannot be processed by the Fog or which demand long-term storage are sent to the Cloud. An efficient orchestration framework for the Fog should balance the effective utilization of the Fog infrastructure with the optimum usage of the Cloud. The monetary cost of execution includes the cost required for the execution of application modules in the Cloud.
Results
The performance of the proposed framework was evaluated using real-world mobility traces, based on the metrics described in Sect. 5.5. The values of these metrics were collected by averaging across several runs. We compare the proposed MAMF approach with the baseline SDP and also with the commonly used Void migration approach (denoted as VoMig), which does not offer migration support for container-based application modules.
Network usage
Figure 4 shows the overall network usage of all the application modules. The mobility of the user may cause an increase in the distance between the user and the Fog node on which the application is deployed. When the user travels beyond the coverage of the Fog node, the service may be interrupted. The traditional approach, VoMig, offloads the application module to the Cloud to avoid service denial. This causes a tremendous increase in network usage, which grows with the number of modules migrating to the Cloud.
The baseline SDP approach migrates the application module to the Fog node, among the set of available Fog nodes, that is at the shortest distance from the current user location. The proposed framework, MAMF, tries to find a suitable destination Fog node to which the application module can be migrated; the module is offloaded to the Cloud only if a suitable destination which meets the requirements is not obtained. The migration of modules within the Fog layer does not create a significant impact on network usage. Thus, MAMF reduces the overall network usage in the Fog environment.
Fig. 4 [Images not available. See PDF.]
Network usage
Average loop delay
In Fog environments, users are in motion. Once the user has travelled across the boundaries of the coverage area of the Fog node where the application is currently running, the user can experience an increased delay in the execution completion time. This can be attributed to the increase in the number of hops that a request has to travel to arrive at the Fog node hosting the application. A single user request may consist of the execution of a sequence of such application modules. Figure 5 shows the average loop delay in the various Fog computing configurations considered. The application loop delay includes the total time taken for a request to be processed by all the modules that form a loop in the application. It is observed that in the VoMig approach, which does not offer migration support, more modules are offloaded to the Cloud than necessary. This creates additional communication latencies, as communications with the Fog are significantly faster than communications with the distant Cloud servers. The SDP approach, which supports migration based only on the shortest distance, tremendously reduces the offloading of data into the Cloud. Our proposed approach, MAMF, outperforms both of the other approaches by choosing the most appropriate migration destination based on a number of parameters.
Fig. 5 [Images not available. See PDF.]
Average loop delay
Average tuple execution delay
IoT data streams emitted from the end devices are in the form of a sequence of values; this is referred to as a tuple. Figure 6 shows the average execution delay of requests. The distance between the user and the Fog node imposes some additional communication delay on the response time for each request. Enabling migration drastically reduces the execution delay in SDP. The proposed MAMF further reduces this delay by migrating the application module to a suitable Fog node which is not only located nearer to the user but also capable of efficiently executing the requests and sending quick responses to the user.
Fig. 6 [Images not available. See PDF.]
Average tuple delay
Execution cost in cloud
The data which cannot be processed in the Fog are ultimately transferred to the Cloud. An efficient orchestrator for Fog environments must try to reduce the amount of data transferred to the Cloud and thus reduce the execution cost in the Cloud. Figure 7 shows the comparison of the cost of execution in the Cloud for all three approaches. The proposed MAMF makes effective utilization of the Fog environment through the timely migration of application modules to support user mobility, rather than transporting all the data to the Cloud as in the case of VoMig. The MAMF offloads modules to the Cloud only when there is absolutely no possibility of hosting the application module in the Fog.
Fig. 7 [Images not available. See PDF.]
Execution cost in cloud
Fig. 8 [Images not available. See PDF.]
MAE, MAPE and RMSE value comparison for different prediction techniques
The SDP approach also tries to migrate the application module to the nearest available Fog node, thus reducing the data transfer to the Cloud. But since it determines the destination based only on the distance, without regard for the processing rate, new requests arriving at the destination node after the migration may suffer from longer processing delays.
Discussion
In this section, we present the inferences and analysis results based on the experiments conducted.
Location value at time t+1
In order to forecast the location at time t+1 from the history of location values up to time t, a time series forecasting method called Double Exponential Smoothing (DES) was used. Several methods, namely Moving Average (MA), Exponential Smoothing (ES), Double Exponential Smoothing (DES), AutoRegressive Integrated Moving Average (ARIMA) and the AutoRegressive (AR) model, were compared. The different methods were evaluated using the metrics given in the subsequent section. The comparison results are shown in Fig. 8. It is noted that the ARIMA and DES techniques fare better when compared to all the other approaches, and among these two, DES performs exceptionally well. Thus, in MAMF, DES is used to forecast the location value at time t+1.
Evaluation criteria
The accuracy of the forecasted location values was evaluated based on several metrics, namely Mean Absolute Percentage Error(MAPE), Mean Absolute Error(MAE) and Root Mean Squared Error(RMSE).
Mean absolute error
MAE is defined as the mean of the absolute errors. It is one of the simplest measures of accuracy. Each absolute error is measured as the absolute difference between the forecasted value and the actual value (Shcherbakov et al. 2013).
Mean absolute percentage error
MAPE gives the accuracy of the forecasted values (Makridakis et al. 1982). The accuracy is expressed as a percentage, and it is defined in Eq. 12

$$MAPE = \frac{100\%}{n} \sum_{t=1}^{n} \left| \frac{A_t - F_t}{A_t} \right| \tag{12}$$

where $A_t$ indicates the actual value, $F_t$ indicates the forecasted value and $n$ is the number of observations based on which the predictions are made. A lower MAPE value indicates better accuracy.

Root mean squared error
RMSE (Chai and Draxler 2014) is a common measure to quantify the difference between two sets of values. It is defined in Eq. 13.

$$RMSE = \sqrt{\frac{1}{n} \sum_{t=1}^{n} \left( F_t - A_t \right)^{2}} \tag{13}$$

where $F_t$ indicates the forecasted value, $A_t$ indicates the actual value and $n$ is the number of observations used for prediction.
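The three error measures can be computed as below. This is a plain NumPy sketch following the definitions in Eqs. 12 and 13 (with illustrative data), not code from the MAMF implementation.

```python
import numpy as np

def mae(actual, forecast):
    """Mean absolute error."""
    actual, forecast = np.asarray(actual), np.asarray(forecast)
    return np.mean(np.abs(actual - forecast))

def mape(actual, forecast):
    """Mean absolute percentage error (Eq. 12), in percent."""
    actual, forecast = np.asarray(actual), np.asarray(forecast)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

def rmse(actual, forecast):
    """Root mean squared error (Eq. 13)."""
    actual, forecast = np.asarray(actual), np.asarray(forecast)
    return np.sqrt(np.mean((forecast - actual) ** 2))

# Example: comparing a forecasted location trace against the observed one.
observed   = [100.0, 180.0, 260.0, 350.0]
forecasted = [ 95.0, 188.0, 255.0, 342.0]
print(mae(observed, forecasted), mape(observed, forecasted), rmse(observed, forecasted))
```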
Fig. 9 [Images not available. See PDF.]
Convergence of time to completion (TTC) across generations for different configurations
Fig. 10 [Images not available. See PDF.]
Evolution of time to completion with distance across generations
Fig. 11 [Images not available. See PDF.]
Variation of time to completion and deadline (delay) with number of migrating modules
Evaluation of GA
This section analyzes the performance of GA in the MAMF.
Evolution of objective function
The optimal value of the objective function is obtained at the termination of the optimization algorithm. The number of generations required for the GA to converge to an optimum was fixed after several trials. The convergence of the objective function with a varying number of generations is plotted in Fig. 9. It is evident that stabilization of the objective function is achieved in different generations for the different configurations of the Fog computing environment. For all four configurations, convergence was attained before the 400th generation. It is observed that configuration 1 shows an early convergence, which may be due to the small size of its search space. Convergence in configuration 4 occurs later, only after 300 generations, which indicates that the increase in the number of Fog nodes and application modules to migrate enlarges the search space, thus requiring more generations to converge.
Figure 10 shows the evolution of the population across generations. Initially, the individuals were in random positions. It is observed that, after 200 generations, the individuals have shifted from random positions towards the corresponding solution space. After 300 generations, the individuals do not undergo many changes. The distance parameter is directly proportional to the time to completion. Across the generations, it is observed that the distance keeps varying until the population converges to the final solution. The distance decreases as the number of generations increases. All the solutions are concentrated mostly at the same values of distance and TTC from the 300th generation onwards.
Figure 11 shows the variation of the TTC of the application modules against their delay bounds in the four different configurations. The red line indicates the tolerable delay of each application module for processing the user request. TTC is the response time of a migrated application module, which includes the time to migrate and the time to process a request on the destination Fog node. Fog computing environments with heterogeneous Fog nodes and application modules are considered. All the configurations completed the processing of requests within the tolerable delay. Thus, through the migration of application modules, MAMF ensures that the Fog computing environment fulfils the delay requirements of the users. When the total amount of resources available in the Fog layer is not sufficient to satisfy the requirements of the users, the processing is transferred to the Cloud. The requests are immediately executed in the Cloud because of the virtually unlimited resource capacity of the Cloud. However, this imposes additional overheads in the form of communication delay (between the user and the Cloud) due to the distance, as well as execution cost. This may lead to violation of the delay requirements of the user.
ILP and heuristic approach
In order to identify a suitable destination Fog node for a migrating application module, we have proposed approaches based on Integer Linear Programming and metaheuristics. A real example scenario is modelled as an optimization problem, and the mapping of application modules to suitable Fog nodes is solved using (0/1) Integer Linear Programming in Sect. 3. The same problem scenario is also solved using the well-known metaheuristic approach, the Genetic Algorithm, and the results are shown in Table 5.
Table 5. Comparison of solutions

| Parameter | Solution based on ILP | Solution based on metaheuristic |
|---|---|---|
| $x_{11}$ | 1 | 1 |
| $x_{12}$ | 0 | 0 |
| $x_{21}$ | 0 | 0 |
| $x_{22}$ | 1 | 1 |
| $x_{31}$ | 0 | 0 |
| $x_{32}$ | 1 | 1 |
We can see that the heuristic approximates the ILP very well, providing the same results for the considered scenario. When the problem size increases, that is, when the Fog environment consists of a large number of Fog nodes and application modules, it is difficult to model the scenario using the linear programming approach. It takes a lot of time to model and solve such scenarios as an integer programming problem. Therefore, in MAMF, we used a metaheuristic approach based on the genetic algorithm.
Conclusions and future work
In this paper, the problems and challenges faced by Fog computing environments in providing mobility support are investigated. A mobility-aware autonomic migration framework based on a combination of autonomic computing and a genetic algorithm has been proposed. MAMF ensures that the quality of service requirements of the end users are satisfied through the migration of container-based application modules, working on the concepts of the MAPE control loop. The approach can be used to predetermine the migration of application modules and thus reduce the service downtime. An integer linear programming model was developed to find the destination Fog nodes for the migrating modules. The performance of the proposed approach was evaluated under real-world mobility traces using different metrics in the iFogSim toolkit. The results show that the average loop delay, network usage, execution delay per tuple and the cost of execution can be significantly reduced by using the proposed approach.
The MAMF may be extended to use more precise techniques, such as mobility prediction models, to determine the location of the user at time t+1. In order to evaluate the practical feasibility of the proposed approach, the evaluation may be done in real-time Fog environments. To enhance the robustness of the MAMF, security and energy considerations may also be incorporated into the MAPE loop.
Acknowledgements
This research is an outcome of the R&D project work undertaken under the Visvesvaraya PhD Scheme of Ministry of Electronics & Information Technology, Government of India, being implemented by Digital India Corporation. We would like to thank the reviewers for the suggestions and comments made which have helped us to improve our work.
1https://www.virtuozzo.com/connect/details/blog/view/live-migration-in-virtuozzo-7.html.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
Barcelo, M; Correa, A; Llorca, J; Tulino, AM; Vicario, JL; Morell, A. Iot-cloud service optimization in next generation smart environments. IEEE J Selected Areas Commun; 2016; 34,
Bellavista P, Zanni A (2017)Feasibility of fog computing deployment based on docker containerization over Raspberrypi. In: Proceedings of the 18th international conference on distributed computing and networking. ACM, p 16
Bi, Y; Han, G; Lin, C; Deng, Q; Guo, L; Li, F. Mobility support for fog computing: an SDN approach. IEEE Commun Mag; 2018; 56,
Bittencourt, LF; Diaz-Montes, J; Buyya, R; Rana, OF; Parashar, M. Mobility-aware application scheduling in fog computing. IEEE Cloud Comput; 2017; 4,
Bittencourt LF, Lopes MM, Petri I, Rana OF (2015) Towards virtual machine migration in fog computing. In: P2P, parallel, grid, cloud and internet computing (3PGCIC), 2015 10th international conference on. IEEE, pp 1–8
Bonomi, F; Milito, R; Natarajan, P; Zhu, J. Fog computing: a platform for internet of things and analytics. Big data and internet of things: a roadmap for smart environments; 2014; Cham, Springer: pp. 169-186. [DOI: https://dx.doi.org/10.1007/978-3-319-05029-4_7]
Chai, T; Draxler, RR. Root mean square error (RMSE) or mean absolute error (MAE)? Arguments against avoiding RMSE in the literature. Geosci Model Dev; 2014; 7,
Dastjerdi AV, Gupta H, Calheiros RN, Ghosh SK, Buyya R (2016) Fog computing: principles, architectures, and applications. In: Internet of things. Morgan Kaufmann, pp 61–75
Friis, HT. A note on a simple transmission formula. Proc IRE; 1946; 34,
Gearhart JL, Adair KL, Detry RJ, Durfee JD, Jones KA, Martin N (2013) Comparison of open-source linear programming solvers. Tech Rep SAND2013-8847
Ghobaei-Arani, M; Jabbehdari, S; Pourmina, MA. An autonomic approach for resource provisioning of cloud services. Cluster Comput; 2016; 19,
Gozálvez, J; Sepulcre, M; Bauza, R. IEEE 802.11 p vehicle to infrastructure communications in urban environments. IEEE Commun Mag; 2012; 50,
Guerrero, C; Lera, I; Juiz, C. Genetic algorithm for multi-objective optimization of container allocation in cloud architecture. J Grid Comput; 2018; 16,
Guerrero, C; Lera, I; Juiz, C. A lightweight decentralized service placement policy for performance optimization in fog computing. J Ambient Intell Human Comput; 2019; 10,
Gupta, H; Vahid Dastjerdi, A; Ghosh, SK; Buyya, R. ifogsim: a toolkit for modeling and simulation of resource management techniques in the internet of things, edge and fog computing environments. Softw Pract Exp; 2017; 47,
Hoque S, de Brito MS, Willner A, Keil O, Magedanz T (2017) Towards container orchestration in fog computing infrastructures. In: Computer software and applications conference (COMPSAC), 2017 IEEE 41st annual. IEEE, pp 294–299
Islam M, Razzaque A, Islam J (2016) A genetic algorithm for virtual machine migration in heterogeneous mobile cloud computing. In: Networking systems and security (NSysS), 2016 international conference on. IEEE, pp 1–6
Jacob, B; Lanyon-Hogg, R; Nadgir, DK; Yassin, AF. A practical guide to the ibm autonomic computing toolkit. IBM Redbooks; 2004; 4, 10.
Kaur, K; Dhand, T; Kumar, N; Zeadally, S. Container-as-a-service at the edge: trade-off between energy efficiency and service availability at fog nano data centers. IEEE Wireless Commun; 2017; 24,
Lèbre M-A, Le Mouël F, Ménard E (2015) Microscopic vehicular mobility trace of europarc roundabout, creteil, france. Open data trace, v1.0, Creative Commons Attribution-NonCommercial 4.0 International License
Liao, S; Li, J; Wu, J; Yang, W; Guan, Z. Fog-enabled vehicle as a service for computing geographical migration in smart cities. IEEE Access; 2019; 7, pp. 8726-8736. [DOI: https://dx.doi.org/10.1109/ACCESS.2018.2890298]
Lopes MM, Higashino WA, Capretz MA, Bittencourt LF (2017) Myifogsim: a simulator for virtual machine migration in fog computing. In: Companion proceedings of the10th international conference on utility and cloud computing. ACM, pp 47–52
Machen, A; Wang, S; Leung, KK; Ko, BJ; Salonidis, T. Live service migration in mobile edge clouds. IEEE Wireless Commun; 2018; 25,
Mahmud, R; Ramamohanarao, K; Buyya, R. Latency-aware application module management for fog computing environments. ACM Trans Internet Technol (TOIT); 2019; 19,
Mahmud, R; Srirama, SN; Ramamohanarao, K; Buyya, R. Quality of experience (QOE)-aware placement of applications in fog computing environments. J Parallel Distrib Comput; 2019; 132, pp. 190-203. [DOI: https://dx.doi.org/10.1016/j.jpdc.2018.03.004]
Makridakis, S; Andersen, A; Carbone, R; Fildes, R; Hibon, M; Lewandowski, R; Newton, J; Parzen, E; Winkler, R. The accuracy of extrapolation (time series) methods: results of a forecasting competition. J Forecast; 1982; 1,
Marín-Tordera, E; Masip-Bruin, X; García-Almiñana, J; Jukan, A; Ren, G-J; Zhu, J. Do we all really know what a fog node is? Current trends towards an open definition. Computer Commun; 2017; 109, pp. 117-130. [DOI: https://dx.doi.org/10.1016/j.comcom.2017.05.013]
Martin, JP; Kandasamy, A; Chandrasekaran, K. Exploring the support for high performance applications in the container runtime environment. Hum Centric Comput Inf Sci; 2018; 8,
Martin, JP; Kandasamy, A; Chandrasekaran, K; Joseph, CT. Elucidating the challenges for the praxis of fog computing: an aspect-based study. Int J Commun Syst; 2019; 32,
Mishra M, Roy SK, Mukherjee A, De D, Ghosh SK, Buyya R (2019) An energy-aware multi-sensor geo-fog paradigm for mission critical applications. J Ambient Intell Human Comput 1–19
Mitchell, M. An introduction to genetic algorithms; 1998; Cambridge, MIT Press:zbMath ID: 0906.68113[DOI: https://dx.doi.org/10.7551/mitpress/3927.001.0001]
Shcherbakov, MV; Brebels, A; Shcherbakova, NL; Tyukov, AP; Janovsky, TA; Kamaev, VA. A survey of forecast error measures. World Appl Sci J; 2013; 24,
Skarlat, O; Nardelli, M; Schulte, S; Borkowski, M; Leitner, P. Optimized iot service placement in the fog. Service Oriented Comput Appl; 2017; 11,
Talaat FM, Saraya MS, Saleh AI, Ali HA, Ali SH (2020) A load balancing and optimization strategy (LBOS) using reinforcement learning in fog computing environment. J Ambient Intell Human Comput 1–16
Verma, P; Sood, SK; Kalra, S. Cloud-centric iot based student healthcare monitoring framework. J Ambient Intell Human Comput; 2018; 9,
Zhu C, Pastor G, Xiao Y, Li Y, Ylae-Jaeaeski A (2018) Fog following me: Latency and quality balanced task allocation in vehicular fog computing. In: 2018 15th annual IEEE international conference on sensing, communication, and networking (SECON). IEEE, pp 1–9
© Springer-Verlag GmbH Germany, part of Springer Nature 2020.