Abstract
Mobile cloud computing (MCC) integrates mobile devices with cloud computing to enhance performance and provide scalable services. It enables the partitioning of computationally intensive mobile applications, utilizing cloud resources for remote execution. This process is vital for optimizing resource utilization and energy efficiency in distributed computing environments. Initially, static application partitioning was developed, dividing system resources into fixed-sized partitions. Though it is simple and reduces operational overhead, it lacks adaptability to dynamic computing environments caused by mobility. Modern frameworks use runtime partitioning, dynamically allocating resources based on runtime profiling, which increases computational overhead and energy consumption. Despite various studies on MCC, there is a notable oversight in the numerical analysis of application partitioning. This paper presents a comparative numerical study of existing partitioning techniques and proposes a Cost-Efficient Partitioning (CEP) model. The CEP model combines the low overhead of static partitioning with the adaptability of dynamic partitioning to address runtime challenges. This research addresses the shortcomings of existing approaches, potentially transforming computational offloading into an energy-saving solution. Our results show that the execution time (ET) of a task in the CEP model is 0.826 s, which is longer than the ET of static partitioning but shorter than the 0.962 s of dynamic partitioning models. Similarly, the CEP's energy consumption in a test scenario is 0.609 J, slightly higher than the 0.587 J of static partitioning but lower than the 0.711 J of dynamic partitioning.
Introduction
Application partitioning refers to the process of dividing a mobile application into smaller components or modules that can be executed or deployed across different computing resources. The goal of application partitioning is often to optimize performance, resource utilization, or energy efficiency in distributed computing environments [1]. Application partitioning becomes essential in the mobile computing environment because of the resource scarcity of mobile devices. Different studies over the last two decades have sought to overcome the resource insufficiency of mobile devices by partitioning their complex tasks [2, 3]; these tasks are then offloaded for remote execution to reduce execution time and save the battery power of the mobile device.
Subsequently, the main challenge is how to partition a mobile application and decide which module should be executed locally and which remotely. For a better understanding of application partitioning in computational offloading, a cost estimation model should be developed. Cost estimation models are abstract representations of application partitioning in computational offloading that aim to estimate the processing or communication cost in terms of the computational time or battery power of mobile devices [4]. These are the costs a mobile device must pay when offloading complex tasks for remote execution [5–7]. Different cost estimation models have been developed in previous studies based on the application partitioning techniques [8–10].
Static application partitioning is the initial technique, developed in the early stages of computational offloading for mobile devices, and it is quite simple: the division of the application's components is determined at design or compile time and remains fixed during runtime [11]. Static partitioning takes less effort and less computational time [12]; however, it cannot cope with the dynamic changes of the computational environment caused by mobility.
On the other hand, runtime (dynamic) partitioning makes partitioning decisions at run time based on the current system conditions or performance metrics. The runtime parameters play an important role in determining the execution cost of each component in terms of execution time/delay and energy consumption [13–15]. Dynamic component division is considered more efficient in mobile computing, as it copes far better with the dynamic changes of the computational environment [16–18]; yet it becomes a computation-intensive task itself, because it increases the amount of data to be processed and adds a library that keeps contextual information for tracking changes in the dynamic execution environment. Thus, dynamic partitioning turns into a resource-intensive approach due to the additional computation "C" and the additional information processing [19]. According to [20], this execution overhead in some cases occupies half of the total execution time. The computation "C" at the mobile device is significant because the whole offloading concept is built on reducing "C": the mobile device must first determine where executing the code will take less computational effort, locally or remotely. A precise application partitioning decision for local or remote execution then reduces "C", which ultimately saves energy [21]. Any procedure at the partitioning stage that fails to reduce "C", or that increases it instead, will therefore not yield efficient results. Further, dynamic partitioning of mobile applications in a distributed environment is resource intensive, i.e. it increases "C" and energy consumption [22–26].
Although the dynamic partitioning approach is essential in distributed application processing for coping with computational changes and for simultaneous execution on different platforms, partitioning the mobile application at runtime requires adaptation to the conditions of the network, the server and the available bandwidth, which increases the overhead "C" [23, 27, 28]. This implicitly means that the application needs to partition itself autonomously, based on previously offloaded patterns or on the context sensed from the execution environment. As a result, runtime (dynamic) partitioning of applications increases the utilization of mobile resources through the additional computation "C", and hence becomes resource intensive.
Keeping in view the shortcomings of both techniques, it is vital to have a mechanism that copes with the dynamic changes of the computational environment while reducing the computational load "C". In this paper, our contributions are as follows:
Conducted a numerical analysis of existing application partitioning techniques.
Performed a comparative analysis of these techniques.
Proposed a Cost-Efficient Partitioning (CEP) model to address the limitations of existing approaches. Our findings indicate that the CEP model outperforms dynamic partitioning in terms of execution time and energy consumption. Additionally, unlike static partitioning, the CEP model effectively manages random changes during mobility, thus mitigating challenges inherent in mobile computing.
The rest of the paper is structured as follows: Sect. "Extant Literature on Partitioning Techniques" presents the extant literature. Sect. "Analysis of Extant Approaches" elaborates on the analysis of the existing approaches, while Sect. "Experimentation" highlights the experiments. Sect. "Results Evaluation" sheds light on the results and evaluation. The discussion and comparison are presented in Sect. "Discussion".
Extant Literature on Partitioning Techniques
This article discusses the mathematical modelling of mobile application partitioning techniques in computational offloading. A brief review of the existing techniques is presented to show how they developed throughout the years. Computational offloading is a capability-augmenting technique adopted to enhance the limited resources of mobile devices by distributing tasks to a more powerful computing environment; it is thus different from the traditional client-server architecture [29]. The initial approach, static partitioning, was developed around labelling predefined tasks. It then evolved into more recent developments including context awareness, VM migration [10], service migration [10], process migration [30], mobility-aware offloading [31–33], and the use of cloudlets or edge computing [34–36]. A few of the techniques/frameworks developed over time are presented in the table below.
| Name of the framework/model | Developed in | Technique | Focus to achieve | Results | Limitations |
|---|---|---|---|---|---|
| MTCOM [32] | 2024 | Lagrange interpolation equations; mobility-aware offloading | Minimize TT and energy consumption | Reduced TT by 42% and energy consumption by 10% | |
| MADRLOM [35] | 2024 | Cloud-edge computing power network architecture | Latency, energy and adaptability to dynamic changes | Significant reduction of the computation overhead | |
| Opportunistic access fog-cloud computing network (OFCN) | 2023 | Optimization problem considering both resource allocation and computation offloading under the constraints of users' quality of service (QoS) requirements | Reduce latency and energy consumption of data transmission and computation | Lower latency and energy consumption than conventional fog-cloud computing networks | Opportunistic approach to the fog nodes compromises consistency |
| Heuristic mobility-aware offloading algorithm (HMAOA) [33] | 2020 | Decomposed into two sub-problems: a convex computation allocation and a non-linear integer programming (NLIP) offloading decision | Optimal utilization of resources and reduced latency | Improves performance in terms of latency and resource utilization | Precision of the problem-decomposition decision |
| Conceptual framework of MEC based on artificial intelligence [37] | 2020 | Offloading strategy for mobile devices based on task prediction and task migration | Optimizing the edge computing offloading model | Data size of the subtasks and delays reduced | Complex coupling between computation task offloading and caching |
| Distributed collaborative data caching and computing offloading (CDCCO) [38] | 2021 | Cache capacity at APs and the computing capability of MEC servers | Maximize storage utilization while reducing service latency and energy consumption | Reduces network costs | |
| Collaborative computation offloading and resource allocation optimization (CCORAO) [39] | 2019 | Optimizing the computation offloading decision and computation resource allocation | Utilization of resources and execution time | Highest system utility at low velocity | At high speed the utility drops abruptly |
| Multi-agent proximal policy optimization (MAPPO) [40] | 2023 | Markov decision process (MDP) with multiple types of agents | Reduce energy and speed up resource utilization | Significant decrease in energy | Increasing MUs reduces efficiency in terms of utilization |
While application partitioning has improved computationally over the years, precise partitioning must be ensured before identifying the offloadable and locally executable tasks. In the most recently developed partitioning techniques/algorithms, a numerical analysis of the partitioning models/frameworks is missing. The aim of this research is to develop a mathematical model that identifies the cost of partitioning and the utilization of resources prior to any computational offloading. The execution cost can be depicted using a consumption call graph (CG). The following sections describe the call graphs for computational offloading costs, where the cost estimation models of static and dynamic application partitioning are analysed based on the scenario considered in the CG and WCG.
A task execution incurs two types of cost. The first is the computational cost of the task (node), which includes the memory cost, processing cost, and time cost [41, 42]. The second is the communication cost, which includes the transmission of data and the node calls or requisite messages. The communication cost of all non-offloadable nodes (annotated 0) is zero, as those nodes execute locally; all offloadable nodes (annotated 1) incur the computation cost plus the communication cost. The same task can have different costs depending on where it executes: executed on the mobile side, it costs much more than on the cloud side, owing to the cloud's high-speed processing units, so a significant amount of energy and execution time can be saved by offloading [26, 43, 44]. However, runtime (dynamic) partitioning of the tasks increases the interactions between the mobile side and the cloud side, which increases the communication cost [45–47]. To construct the model, a call graph is used to describe the data dependency: each vertex of the graph represents a task (method), while each edge represents a call from one task to another (a caller-callee relationship).
Figure 1a, b represent the CG, which consists of seven tasks (nodes). The computation cost of each task is represented by its vertex, while the communication cost between tasks is represented by the edges.
Fig. 1 [Images not available. See PDF.]
a Construction of consumption graph (CG). b Construction of weighted consumption graph (WCG)
The dependency of tasks on each other is represented by a directed acyclic graph $G = (V, E)$, where $V$ is the set of vertices denoting the application tasks and $E$ is the set of edges representing the invocations and data access between neighbouring tasks $v_i$ and $v_j$. Each task is characterized by the following parameters.
The offloadability flag (offloadable or non-offloadable); the memory consumed by the task; the size of the task (code size); the input data size sent from task $v_i$ to task $v_j$; and the output data size returned from $v_j$ to $v_i$.
Further, a weighted call graph (WCG) was constructed, as shown in Fig. 1b, to identify the costs of each vertex as well as each edge. Each vertex is annotated with two costs, $(C^m_{v_i}, C^c_{v_i})$, where $C^m_{v_i}$ is the computation cost of vertex $v_i$ on the mobile device and $C^c_{v_i}$ is its computation cost on the cloud. After partitioning, each vertex of the WCG is assigned a value from the tuple $(0, 1)$, identifying whether the vertex is to be executed locally or offloaded for remote execution. The edge set $E$ carries the communication costs between the tasks. The weighted communication cost between two tasks $v_i$ and $v_j$ is denoted as:

$$C_{comm}(v_i, v_j) = \frac{d_{i,j}}{B_{up}} + \frac{d_{j,i}}{B_{down}} \qquad (1.1)$$

where $d_{i,j}$ is the amount of data transferred from node $v_i$ to node $v_j$, $d_{j,i}$ is the size of the data returned by node $v_j$ to node $v_i$, $B_{up}$ is the uploading bandwidth while the two nodes communicate, and $B_{down}$ is the downloading bandwidth while accumulating the results.
Fig. 2 [Images not available. See PDF.]
a Consumption graph (CG) in dynamic partitioning. b Weighted consumption graph (WCG) in dynamic partitioning
Equation 1.1 gives the communication cost of two nodes when they are placed on different sides of the execution environment (cloud side and mobile side): the mobile node sends input and gets the result back from the server node. Nodes that call each other on the same side (both local or both remote) incur no communication cost. The partitioning decision, which divides the application into offloadable and non-offloadable (remote and local) components, can be static or dynamic [48]. This decision must be precise to avoid unnecessary utilization of mobile resources during offloading, and thus to keep the communication and computation costs at this stage to a minimum. The partitioning of applications through static and dynamic techniques is analysed and modelled in the next section. Figure 2b gives a simple illustration of the separation of tasks into local and offloadable tasks: two sets $V_l$ and $V_c$ denote the local tasks and the cloud tasks respectively, so that $V_l \cup V_c = V$ and $V_l \cap V_c = \emptyset$.
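The per-edge cost of Eq. (1.1) can be sketched in code. This is a minimal illustration, assuming a small WCG with hypothetical data sizes, bandwidths, and node numbering; none of these values come from the paper.

```python
def comm_cost(d_send, d_recv, b_up, b_down):
    """Eq. (1.1)-style cost: upload the input data, download the result."""
    return d_send / b_up + d_recv / b_down

# Edges of a small weighted consumption graph (WCG):
# (caller, callee) -> (data sent in KB, data returned in KB)
edges = {
    (1, 3): (120.0, 40.0),
    (2, 4): (80.0, 25.0),
}

# Placement tuple values: 1 = offloaded (cloud), 0 = local.
placement = {1: 0, 2: 0, 3: 1, 4: 1}

b_up, b_down = 500.0, 1000.0   # KB/s, assumed link speeds

total = 0.0
for (i, j), (d_send, d_recv) in edges.items():
    # Only edges crossing the local/cloud cut incur communication cost.
    if placement[i] != placement[j]:
        total += comm_cost(d_send, d_recv, b_up, b_down)

print(round(total, 3))   # 0.465
```

Edges whose endpoints sit on the same side contribute nothing, matching the zero-cost case described above.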
A General Cost Model of Application Partitioning
Application partitioning in computational offloading aims to find the best possible execution environment for each task of the application in order to reduce execution delay and battery consumption. Some applications are non-partitionable (having a single module) [49] and must be executed either entirely locally or entirely remotely. Some applications are not offloadable, such as activities that utilize the local mobile resources (GPS, camera, microphone, sensors) [50]. Statically partitioned applications need to be annotated at compile time, while dynamic partitioning is based on the parameters of the execution environment. For example, dynamic partitioning of an application has to read the bandwidth, data size, processing load of the mobile device, application needs, user preferences, and the remote execution environment [47]. Although dynamic partitioning brings automation and flexibility to computational offloading techniques, it increases the data size, the local computation, and the frequency of calls (invocations) to the remote environment [18, 51, 52].
In order to test every type of partitioning technique, we develop a generic model which estimates the cost of local execution, remote execution and communication; every partitioning technique incurs these three types of execution cost. As shown in Fig. 2b, the overall execution cost of an application $A$ can be calculated by aggregating the computation costs of the vertex sets and the communication cost of the edges. The computational cost, in terms of the execution time of one single vertex (task), can be calculated by multiplying the cycles per instruction of the processor by the instruction count of the task (in millions of instructions) and dividing the result by the processor speed in million instructions per second (MIPS), such as:
$$ET_{v_i} = \frac{CPI \times IC}{MIPS} \qquad (1.2)$$
where $CPI$ is the cycles per instruction of the processor, $IC$ is the instruction count of a task, and $MIPS$ is the overall processing efficiency of the mobile device. To calculate MIPS, the CPI or clock rate of the processor must be identified in advance: $MIPS = f / (CPI \times 10^6)$, where $f$ is the clock rate. If the CPI of the mobile processor is taken as 2 cycles, then the MIPS of the processor will be $f / (2 \times 10^6)$. Similarly, the energy consumption of one single task (vertex) depends on the execution time of the task and the frequency of the processor, and varies with the processor's operating frequency [47]. If $\alpha$ is the energy coefficient constant, then the total energy consumption of the task is:

$$E_{v_i} = \alpha \times ET_{v_i} \qquad (1.3)$$

where $\alpha$ is the energy coefficient of the frequency and $ET_{v_i}$ is the execution time of the task $v_i$. Further, the computational cost of all the local vertices can be calculated as:
$$C_{local} = \sum_{v_i \in V_l} C^m_{v_i} \qquad (1.4)$$
Similarly, the computational cost of all offloadable vertices (Cloud vertices) can be calculated as:
$$C_{cloud} = \sum_{v_j \in V_c} C^c_{v_j} \qquad (1.5)$$
The communication cost of offloading a task from a local node to a remote node is given by Eq. (1.1) for a pair of nodes. The total communication cost for all the offloadable nodes can therefore be calculated as:

$$C_{comm}^{total} = \sum_{(v_i, v_j) \in E_{cut}} C_{comm}(v_i, v_j) \qquad (1.6)$$

where $C_{comm}(v_i, v_j)$ is the cost between two nodes and $E_{cut}$ is the set of edges crossing the local/remote cut. The total offloading cost of an application is obtained by aggregating the three costs of Eqs. (1.4), (1.5) and (1.6), such as:

$$C_{total}(A) = C_{local} + C_{cloud} + C_{comm}^{total} \qquad (1.7)$$
The partitioning of vertices into local and remote gives an arbitrary tuple of partitions from the set of vertices and a cut-off edge set in the following way:

$$V_l \cup V_c = V, \quad V_l \cap V_c = \emptyset$$

and

$$E_{cut} = \{(v_i, v_j) \in E \mid v_i \in V_l, v_j \in V_c \ \text{or} \ v_i \in V_c, v_j \in V_l\}$$
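The general cost model can be illustrated end to end. The sketch below is a hypothetical instance under assumed values: it derives MIPS from the standard relation $MIPS = f/(CPI \times 10^6)$, computes per-task execution times, and aggregates local, cloud and communication costs in the manner of Eqs. (1.4) to (1.7). Task sizes, clock rates, and the fixed communication total are all illustrative assumptions.

```python
def mips(clock_hz, cpi):
    """Standard relation: MIPS = clock rate / (CPI * 10^6)."""
    return clock_hz / (cpi * 1e6)

def exec_time(mi, mips_val):
    """Execution time (s) of a task of `mi` million instructions."""
    return mi / mips_val

def energy(alpha, et):
    """Eq. (1.3)-style energy: frequency coefficient times execution time."""
    return alpha * et

# A 1 GHz mobile CPU with CPI = 2 gives 500 MIPS (the CPI = 2 example above).
m_mips = mips(1e9, 2)      # mobile device
c_mips = mips(3e9, 1)      # an assumed faster cloud core

tasks_mi  = {1: 200, 2: 150, 3: 400, 4: 600}   # million instructions per task
placement = {1: 0, 2: 0, 3: 1, 4: 1}           # 0 = local, 1 = cloud

local_cost = sum(exec_time(mi, m_mips)
                 for t, mi in tasks_mi.items() if placement[t] == 0)  # Eq. (1.4)
cloud_cost = sum(exec_time(mi, c_mips)
                 for t, mi in tasks_mi.items() if placement[t] == 1)  # Eq. (1.5)
comm_total = 0.12   # Eq. (1.6): assumed aggregate communication cost (s)

total_cost = local_cost + cloud_cost + comm_total   # Eq. (1.7)
print(round(local_cost, 4), round(cloud_cost, 4), round(total_cost, 4))
```

Swapping a task's placement value moves its cost between the local and cloud sums, which is exactly the trade-off the partitioning decision controls.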
Cost Model for Static and Dynamic Application Partitioning
Static application partitioning takes place at the time of compilation of the source code [12]. The complex methods (tasks) are annotated as remote using a special keyword. Once the compiler reads the keyword, the method is separated and assigned to the set $V_c$, which is a subset of $V$. Similarly, all the remaining methods are combined into a set $V_l$, again a subset of $V$, such that $V_l \cup V_c = V$ and $V_l \cap V_c = \emptyset$. Static partitioning of the application does not involve any additional computation [53]: the partitioning is settled at compile time and then execution of the tasks takes place. Therefore, the model presented in Eq. (1.7) can be used as the cost estimation model for static application partitioning. If $C_{static}$ denotes the total estimated cost of static application partitioning, then:

$$C_{static} = C_{local} + C_{cloud} + C_{comm}^{total} \qquad (1.8)$$
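The "special keyword" annotation described above can be sketched with Python decorators. This is purely illustrative; the paper does not name a language, and the decorator names and registries here are hypothetical.

```python
# Sketch of compile-time annotation for static partitioning: methods marked
# @offloadable go to the remote set (V_c), everything else stays local (V_l).
REMOTE_SET = set()   # V_c: annotated (offloadable) methods
LOCAL_SET = set()    # V_l: the remaining methods

def offloadable(func):
    """Static annotation: recorded once at definition ('compile') time."""
    REMOTE_SET.add(func.__name__)
    return func

def local(func):
    LOCAL_SET.add(func.__name__)
    return func

@offloadable
def render_scene():       # compute-heavy: marked for remote execution
    ...

@local
def read_gps():           # bound to device sensors: must run locally
    ...

print(sorted(REMOTE_SET), sorted(LOCAL_SET))
```

Because the sets are fixed once the module loads, no runtime profiling or re-partitioning cost is incurred, which mirrors the zero-overhead property of static partitioning.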
In the dynamic application partitioning approach, an algorithm must be deployed to cope with the changes occurring in the surrounding environment. Based on some predefined parameters, the application's partitioning pattern changes many times during execution [54]. Dynamic application partitioning therefore involves additional computation, required to collect the contextual information on which the partitioning decision is based. It also has to deploy an additional node, a pattern node, to store all this information, which costs the partition both energy and time [12]. Two main additional costs are thus involved beyond the three basic costs of Eq. (1.7). The first arises from the increased invocation frequency between the nodes [47]; the second from the additional computation required to read the parameters and decide the partitioning pattern. These two additional costs of dynamic application partitioning are elaborated below.
A. Frequency Cost
After partitioning takes place, data and requisite calls are exchanged between neighbouring nodes. Once a pattern of node execution is decided, computation starts at both ends (mobile and cloud). Nodes invoke each other to exchange data: a node injects input into its neighbour and gets the output back in a similar way. The interaction between the nodes continues until the last node is processed. However, in the case of dynamic partitioning, the pattern of execution alters as changes occur in the execution environment. As node positions change, some locally executable nodes are transferred to the remote environment for execution and vice versa, as shown in Fig. 2a, b. Due to this changing of positions, the invocation calls between the nodes for fetching input and output data increase, which is a time-consuming as well as processing-intensive process [45, 47].
The path of each node in the CG is recorded in an additional node $d$, called the decision node. Each node of the graph has to return to the decision node to update the path after any alteration observed in the execution environment. The number of invocation calls a node makes to the decision node is $n^2$, where $n$ is the node's level: the square of the level of a node gives the total number of calls it makes to the decision node. The level of a node is defined as its depth plus one, where the depth is the number of connections between the node and the root. A node directly connected to the decision node is at level 1 and therefore makes $(1)^2 = 1$ call; a node connected through a node at level 2 makes $(2)^2 = 4$ calls, and so on. Node 4 of Fig. 2b, for example, is reached through a node at level 2.

The frequency in this case increases. Frequency is the number of calls the local nodes make to remote nodes to inject input, plus the number of calls the remote nodes return to the local nodes with output. In Fig. 2b, nodes 1, 2, 6, 8 and 9 are local nodes while 4, 7, 5 and 3 are remote nodes. The number of calls from local nodes to remote nodes is 4 (1–3, 2–4, 6–7 and 8–5), and the number of calls from remote nodes to local nodes is 3 (4–6, 7–8 and 5–9). The frequency between nodes increases when nodes change position rapidly due to changes in the surroundings. If, for example, the bandwidth becomes high, more nodes are expected to be offloaded, and the remote nodes will then call back to the local nodes for input more often, which increases the invocations. The total frequency cost of all the remote nodes can be calculated as:
$$C_{freq} = \sum_{v_j \in V_c} f_{v_j} \times w_{f_{v_j}} \times t_{v_j} \qquad (1.9)$$

where $V_c$ is the set of all nodes migrated to the cloud for execution, $f_{v_j}$ is the calling frequency of a node across the network, $w_{f_{v_j}}$ is the weighted value of the frequency of a remote node, and $t_{v_j}$ is the tuple value $(0, 1)$ by which it is multiplied.
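The level-squared call count described above can be computed directly from the call graph's parent links. The tree shape below is an assumed toy example, not the graph of Fig. 2b.

```python
# Decision-node invocation count: level = depth + 1, and a node at level L
# makes L**2 calls to the decision node "d".

# parent map of a small call graph; node 1 is the root under the decision node
parent = {1: None, 2: 1, 3: 1, 4: 2, 5: 2}

def depth(node):
    """Number of connections between the node and the root."""
    d = 0
    while parent[node] is not None:
        node = parent[node]
        d += 1
    return d

def decision_calls(node):
    level = depth(node) + 1
    return level ** 2

print([decision_calls(n) for n in sorted(parent)])   # [1, 4, 4, 9, 9]
```

The quadratic growth with level is what makes deep call chains expensive under dynamic partitioning: every environmental change forces each node to refresh its path with the decision node.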
B. The Additional Cost of Node
Similarly, there is an additional cost due to configuring the decision node $d$: the computation required to read the parameters and update the pattern of execution inside the node. The concept of CART (Classification And Regression Trees) was used to create the model that predicts the execution pattern of the CG, as shown in Fig. 3. Classification works where nodes are divided into homogeneous classes (local and remote in this case), while regression is used to predict the value of a particular variable based on existing values. The CART tree of all the possible parameters of a node is elaborated below.
Fig. 3 [Images not available. See PDF.]
The CART tree for decision node
Every parameter of the CART tree is checked against its threshold value. C.R is the computation required for a node; based on it the first decision is made. If the required computation is very low, the node is considered for local execution; otherwise, the next-level parameters are checked to decide the node's position. B.W is the minimum bandwidth for offloading the node, P.E is the processing efficiency of the device, W.L is the workload on the device, and B.L is the battery level. Based on all of the observed parameters, the pattern of execution is decided. This whole process of gathering information and configuring the decision node for the execution of the CG requires some computation that is not involved in static partitioning. This additional computation cost of node $d$ can be calculated as:
$$C_d = w_d \qquad (2.0)$$

where $w_d$ is the weighted computation cost of node $d$ for reading the parameters and configuring the pattern of execution. By adding Eqs. (1.9) and (2.0) to Eq. (1.8), the total cost of dynamic partitioning in computational offloading is obtained as:

$$C_{dynamic} = C_{static} + C_{freq} + C_d \qquad (2.1)$$
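The threshold cascade of the CART decision node can be sketched as nested checks over the parameters named above (C.R, B.W, P.E, W.L, B.L). The thresholds, units, and the ordering of the lower-level checks are assumptions for illustration; the paper only specifies that C.R is tested first.

```python
# Sketch of the decision node "d": each parameter is checked against a
# threshold to classify a task as LOCAL or REMOTE.

def decide(cr, bw, pe, wl, bl,
           cr_min=50.0, bw_min=1.0, wl_max=0.8, bl_min=0.2):
    """cr: required computation (MI), bw: bandwidth (Mbps),
    pe: processing efficiency (MIPS), wl: workload (0-1), bl: battery (0-1)."""
    if cr < cr_min:          # C.R very low -> run locally
        return "LOCAL"
    if bw < bw_min:          # no usable link -> forced local
        return "LOCAL"
    if wl > wl_max or bl < bl_min:
        return "REMOTE"      # device overloaded or battery low -> offload
    # device can afford it: keep locally only if it is efficient enough
    return "LOCAL" if cr <= pe else "REMOTE"

print(decide(cr=30, bw=5, pe=500, wl=0.4, bl=0.9))   # tiny task -> LOCAL
print(decide(cr=900, bw=8, pe=500, wl=0.9, bl=0.5))  # overloaded -> REMOTE
```

Every invocation of such a routine is precisely the extra computation $C_d$ that static partitioning avoids.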
Analysis of Extant Approaches
Response Time Estimation
The aim of computational offloading is to find the optimal partitioning solution that leads to the minimum execution cost, making the best trade-off between time/energy and communication cost/delay. The cost of application partitioning can be measured in terms of two parameters: execution time (response time) and energy consumption. Therefore, the cost models of Eqs. (1.8) and (2.1) can be re-written to estimate execution time and energy consumption in both static and dynamic computational offloading.
Minimum Response Time in Static Partitioning
The response time (execution time) of a node depends on the communication cost, while the communication cost depends on the size of the data to transfer/receive and the available bandwidth. The communication time cost is therefore equal to the size of the data divided by the available bandwidth, i.e.

$$T_{comm} = \frac{d_{send}}{B_{up}} + \frac{d_{recv}}{B_{down}}$$

where $d_{send}$ and $d_{recv}$ are the sizes of the data to be transferred and received, and $B_{up}$ and $B_{down}$ are the available uploading and downloading bandwidths. Equation 1.7 of static partitioning can be re-written for the time cost as:

$$T_{static} = \sum_{v_i \in V_l} T^m_{v_i} + \sum_{v_j \in V_c} T^c_{v_j} + T_{comm} \qquad (2.2)$$

where $T^m_{v_i} = S \times T^c_{v_i}$: the computation time of a task on the mobile device when executed locally equals the speed constant $S$ of the cloud server multiplied by the execution time $T^c_{v_i}$ of the task on the cloud server. A cloud server is always equipped with a high-speed processor compared to the processing capacity of a mobile device, therefore $S > 1$. $T_{comm}$ is the communication time required to transfer and receive the data between the nodes.
To find the time saved between local and remote execution of the same task, we calculate the percentage difference between the local and remote times. The percentage difference of two values is the absolute change divided by the average of the two original values. If $T^m$ is the execution time of a task executed locally and $T^c$ is the execution time of the task on the cloud, then the percentage difference, or time saved, is:

$$\text{Time save} \ (\%) = \frac{|T^m - T^c|}{(T^m + T^c)/2} \times 100$$
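The percentage-difference calculation is a one-liner; the sample times below are illustrative assumptions.

```python
def pct_difference(t_local, t_remote):
    """Absolute change divided by the average of the two original values."""
    return abs(t_local - t_remote) / ((t_local + t_remote) / 2) * 100

# A task taking 0.9 s locally vs 0.3 s on the cloud (assumed values):
print(round(pct_difference(0.9, 0.3), 1))   # 100.0
```

Note that, unlike a plain relative change, this symmetric measure gives the same result regardless of which execution side is taken as the baseline.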
Minimum Response Time in Dynamic Partitioning
In dynamic application partitioning the response time cost can be re-written based on Eq. 2.1. It includes the time consumed by the increased invocation calls and the time consumed configuring the decision node $d$. The time consumed by the frequent calls made between the nodes is

$$T_{freq} = \sum_{v_j \in V_c} f_{v_j} \times T_{call_{v_j}} \times t_{v_j}$$

where $T_{call_{v_j}}$ is the time consumed calling each cloud or local node, multiplied by the tuple value $t_{v_j} \in (0, 1)$. Similarly, the time cost of configuring node $d$ is

$$T_d = w_{T_d}$$

where $w_{T_d}$ is the time consumed reading the parameters and deciding the execution pattern inside node $d$. The total time cost of Eq. 2.2 during dynamic partitioning can now be calculated as:

$$T_{dynamic} = T_{static} + T_{freq} + T_d \qquad (2.3)$$
Minimum Energy Consumption in Static Partitioning
The energy consumption cost of partitioning an application through static partitioning can be calculated by re-writing Eq. 1.8, the total cost consumption, as:

$$E_{static} = \sum_{v_i \in V_l} E^m_{v_i} + \sum_{v_j \in V_c} \left( E_{idle_{v_j}} + E_{tr_{v_j}} \right) \qquad (2.4)$$

where $E^m_{v_i} = P_{v_i} \times T^m_{v_i}$ is the energy consumed by each single node executed locally on the mobile device, $P_{v_i}$ being the power consumption of the node and $T^m_{v_i}$ the execution time the node requires locally. $P_{v_i}$ is taken as an energy coefficient based on the frequency of the processor; its value differs between mobile devices due to the design of their processors. If a node is offloaded for remote execution, the device has to wait and consumes some amount of energy, denoted as the idle energy. The energy consumption of a node executing in the cloud is therefore $E_{idle_{v_j}} = P_{idle} \times T^c_{v_j}$, where $P_{idle}$ is the power drawn during idle time and $T^c_{v_j}$ is the time required at the cloud to execute the node. Some energy is also consumed while offloading, called the transmission energy, $E_{tr_{v_j}} = P_{tr} \times T_{tr_{v_j}}$, where $P_{tr}$ is the power drawn while transmitting/receiving and $T_{tr_{v_j}}$ is the time required to transmit and receive the edge data. Based on Eq. 1.14, the energy efficiency in percentage between executing nodes locally and remotely can be calculated as:

$$\text{Energy save} \ (\%) = \frac{|E^m - E^c|}{(E^m + E^c)/2} \times 100$$
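The static energy terms can be sketched with an assumed device power profile; the three power figures below (active, idle, transmit) are hypothetical values, not measurements from the paper.

```python
# Static energy model terms: local nodes draw active power; offloaded nodes
# cost idle waiting plus transmission energy.
P_ACTIVE, P_IDLE, P_TR = 0.9, 0.3, 1.3   # watts, assumed device profile

def e_local(t_exec):
    """Local node: active power times local execution time."""
    return P_ACTIVE * t_exec

def e_remote(t_cloud, t_transfer):
    """Offloaded node: wait idle while the cloud computes, plus
    transmit/receive the edge data."""
    return P_IDLE * t_cloud + P_TR * t_transfer

local_energy = e_local(0.5)          # J, for a 0.5 s local task
remote_energy = e_remote(0.1, 0.2)   # J, 0.1 s cloud run + 0.2 s transfer
print(round(local_energy, 3), round(remote_energy, 3))
```

With these assumed figures, offloading wins despite the high transmit power, because the cloud finishes quickly and the device mostly idles.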
Minimum Energy Consumption in Dynamic Partitioning
The equation for the minimum energy consumption while partitioning an application dynamically can be obtained by re-writing the cost consumption Eq. 2.1 for energy:

$$E_{dynamic} = E_{static} + E_{freq} + E_d \qquad (2.5)$$

where, as before, $E^m_{v_i} = P_{v_i} \times T^m_{v_i}$ is the energy each single node consumes locally, $P_{v_i}$ being the power consumption of the node and $T^m_{v_i}$ the execution time the node requires locally; $E_{idle_{v_j}} = P_{idle} \times T^c_{v_j}$ is the energy consumption of a node executing at the cloud, with $P_{idle}$ the power drawn during idle time and $T^c_{v_j}$ the time required at the cloud to execute the node; and $E_{tr_{v_j}} = P_{tr} \times T_{tr_{v_j}}$ is the transmission energy consumed while offloading, with $P_{tr}$ the power drawn while transmitting/receiving and $T_{tr_{v_j}}$ the time required to transmit and receive the edge data. $E_{freq}$ is the energy cost of executing the invocation calls, calculated as $E_{freq} = P_{tr} \times T_{freq}$. Similarly, $E_d$ is the energy consumed while executing the additional node $d$, $E_d = P_d \times T_d$, where $T_d$ is the execution time of node $d$.
Overall Execution Time and Energy Efficiency Gain of the Application
The optimal distribution of mobile application components plays an important role in the overall execution time and energy efficiency gain. As discussed earlier, not every component of the application is offloadable, because of dependencies on the local resources (sensors, camera, microphone, GPS) and the user interface (UI). Some components can be placed locally or offloaded based on the required computation. The overall gain in the performance of an application therefore depends on the precise distribution of the components.
Let V be the set of all components (vertices) of an application, V_l the set of mobile (local) components of the application and V_c the set of cloud components of the application. Let n, n_l and n_c be the number of components of the whole application, of the local part and of the cloud part respectively, whereas n = n_l + n_c. Let T_m(v) be the time of executing a component v on the mobile device and T_c(v) be the execution time of a component at the cloud; then the overall performance gain in execution time can be calculated as:

G_T = Σ_{v ∈ V} T_m(v) − ( Σ_{v ∈ V_l} T_m(v) + Σ_{v ∈ V_c} T_c(v) ), where V = V_l ∪ V_c.

Similarly, if E_m(v) is the energy consumption of a component v on the mobile device and E_c(v) is the energy consumption of a component at the cloud, then the overall performance gain in energy can be calculated as:

G_E = Σ_{v ∈ V} E_m(v) − ( Σ_{v ∈ V_l} E_m(v) + Σ_{v ∈ V_c} E_c(v) ), where V = V_l ∪ V_c.
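As a sketch, the gains above can be computed by comparing an all-local execution with the partitioned placement. The dictionaries, component IDs and function name below are illustrative assumptions.

```python
# Sketch of the overall gain computation, assuming the gain is
# (cost of running everything on the mobile) minus
# (cost of the partitioned run: local parts on mobile, rest at cloud).

def time_gain(t_mobile, t_cloud, local_ids, cloud_ids):
    """t_mobile[i]: mobile execution time of component i;
    t_cloud[i]: cloud execution time of component i."""
    all_local = sum(t_mobile[i] for i in local_ids | cloud_ids)
    partitioned = (sum(t_mobile[i] for i in local_ids)
                   + sum(t_cloud[i] for i in cloud_ids))
    return all_local - partitioned
```

The same function applies to energy by passing per-component energy values instead of times.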
Experimentation
To test the mathematical models of the static, dynamic and proposed CEP models, this section presents the entities, the parameters and their values. The computational time and energy consumption cost of the developed model were observed after calculating all the listed parameters. The operational flow of CEP is shown in the flowchart of Fig. 4. CEP is deployed in two modules and is capable of executing a task locally as well as at a remote server node. During execution, the model profiler continuously monitors resources such as the battery level, the execution time of a task and the available bandwidth. These are the predefined conditions which profile the resources before any partitioning takes place. After profiling the resources, the first attempt of any mobile application is to execute the task locally, in order to eliminate the remote execution time and avoid the resource consumption of remote processing.
Fig. 4 [Images not available. See PDF.]
Operational flow of the proposed CEP model
If the profiler identifies a task as suitable to offload, based on its complexity level and the predefined conditions for remote execution, then the resource monitoring for remote processing starts searching for an available remote server (i.e., a surrogate) and establishes a connection to it.
If the conditions have not been met, then local processing starts inspecting again. If local execution is not possible, then the execution chain turns back to the first step of the execution flow of the mobile application and subsequently stops further processing. If the conditions are satisfied based on the defined values, then resource monitoring takes place and the middleware component is activated to check the available services. Similarly, it allows the application component to employ static partitioning of the application into lightweight parts and computationally intensive parts, where both start execution concurrently.
If a component of a mobile application is lightweight, then that particular component proceeds to local processing. After completion of the local processing, the result is sent back to the mobile application. Otherwise, if a component is not lightweight, then it passes to the Offload Trigger component, which activates the synchronizer and offloads the components (i.e., methods) to the surrogate for execution. On completion of the remote execution of the application, the results are sent back to the mobile execution manager component, which combines the results of the local execution with those of the remote execution. Finally, the collective results of both the local and the remote execution are presented.
Entities, Parameters and their Values
The computational offloading models are based on five main entities: the mobile device, the cloud server, the network, the task (mobile application) and the cloud application. Each entity has multiple parameters which are used in the developed model. Table 1 lists all the entities and their respective parameters, which are used to calculate the relative variables presented in Table 2. To run the model and observe the partitioning efficiency of static versus dynamic partitioning, the values of the parameters must be known. The accuracy of the model depends on the values of the parameters used in it. Some of these values are not available officially and have been taken from published research; some are provided by vendors, such as the processor speed. The bandwidth values are taken from speedtest.net, while the remaining values are calculated using statistical and mathematical methods, both for static and for dynamic partitioning.
Table 1. Entities and parameters
Entities | Parameters |
|---|---|
Task | Instruction count |
| Code size of task i |
| Memory consumption of task i |
| Data size of input from node i to j |
| Data size of output from node j to i |
Mobile | Processor speed at mobile |
| Processing efficiency at mobile |
| Cycles per instruction |
Network | Uploading bandwidth |
| Downloading bandwidth |
| Uploading/downloading bandwidth |
Cloud | Processor speed at cloud |
Table 2. Notations of computed variables
Variables | Descriptions |
|---|---|
Weight of local vertex | |
Weight of remote vertex | |
Weight of edge between vi and vj | |
Weight of configuring node d | |
Energy cut of CG between local and remote | |
Tuple variable for edges | |
Tuple variable for vertices | |
Total cost of local vertices | |
Total cost of remote vertices | |
Total cost of communication between local and remote | |
Cost total | |
Cost total of static partitioning | |
Cost total of dynamic partitioning | |
Frequency calls between local and remote nodes | |
Additional computational cost of node d | |
Calling cost of single node at remote | |
Computation required | |
Bandwidth available | |
Processing efficiency | |
Workload | |
Battery Level | |
Data size transfer and receive | |
Time cost of transfer and receive of an edge | |
Total time cost of static partitioning | |
Total time cost of dynamic partitioning | |
Time cost of local vertices | |
Time cost of remote vertices | |
Speed factor of remote processor | |
Time difference between two values of time | |
Absolute value of time difference | |
Time save between local and remote execution of the same node | |
Time cost of configuring d | |
Total Energy cost of static partitioning | |
Total Energy cost of dynamic partitioning | |
Energy cost of local vertices | |
Energy cost of remote vertices | |
Energy cost of transfer and receive of the edges | |
Power consumption at mobile | |
Power consumption in idle time | |
Power consumption in communications | |
Energy consumption in configuring node d | |
Performance | |
Total time of local execution at mobile | |
Total time of execution at cloud | |
Energy performance | |
Total energy consumption while executing at mobile | |
Total energy consumption while executing at cloud |
Working of the Model
This section presents the working of both static and dynamic partitioning models by taking an example application.
Example 1.
We now assign weights, in code size, to each node of the CG graph in Fig. 2b. Based on the weighted value of each node, the execution time, both local and remote, and the energy consumption of the tasks, both local and remote, will be calculated. Each local node has half the weight of the corresponding remote node, as more instructions mean more computation is required. Suppose Node1 = 50 KB, Node2 = 60 KB, Node3 = 100 KB, Node4 = 150 KB, Node5 = 130 KB, Node6 = 70 KB, Node7 = 200 KB, Node8 = 80 KB, Node9 = 70 KB and Node d = 100 KB. If we assume that each 1 KB of code contains 150,000 instructions, then Table 3 gives the values of all the parameters for each node (task) in the model.
Table 3. Number of instructions of each node against the code size
Node | Code size (KB) | Number of instructions (MI) | Memory consumption |
|---|---|---|---|
1 | 50 | 150,000 × 50 KB = 7,500,000 (7.5) | 50 |
2 | 60 | 150,000 × 60 KB = 9,000,000 (9.0) | 60 |
3 | 100 | 150,000 × 100 KB = 15,000,000 (15) | 100 |
4 | 150 | 150,000 × 150 KB = 22,500,000 (22.5) | 150 |
5 | 130 | 150,000 × 130 KB = 19,500,000 (19.5) | 130 |
6 | 70 | 150,000 × 70 KB = 10,500,000 (10.5) | 70 |
7 | 200 | 150,000 × 200 KB = 30,000,000 (30) | 200 |
8 | 80 | 150,000 × 80 KB = 12,000,000 (12) | 80 |
9 | 70 | 150,000 × 70 KB = 10,500,000 (10.5) | 70 |
d | 100 | 150,000 × 100 KB = 15,000,000 (15) | 100 |
Before finding the execution cost of the dynamic and static partitioning of the CG graph, we find the individual cost of each node executed locally and remotely. Using Eqs. 1.2 and 1.3 to calculate the execution time and the energy consumption cost of each node respectively, Tables 5 and 6 below give the observed costs for both local and remote nodes. This aggregated cost of the local and remote nodes precisely shows the difference in execution time and energy cost between the two execution platforms. Further, it provides the specific cost in terms of time and energy for understanding the inclusive cost of both static and dynamic partitioning. To calculate the execution time cost of each node, we need to know the instruction count (CI), the cycles per instruction (CPI) and the processing efficiency (PE). The CI of each node is calculated as given in Table 3. PE is the processing efficiency of the mobile processor, which can be calculated as PE = S/CPI, where S is the processor clock speed. With a processor clock frequency of 400 MHz, substituting the values into this equation gives a PE of 200 MIPS. The PE of every processor can be adjusted (increased or decreased) as per the application demand. We consider a constant CPI = 2 in the working of this model. Next, using Eq. 1.2, the execution costs are calculated in seconds and, using Eq. 1.3, the energy consumption costs are calculated in joules for all the local nodes, as shown in Table 5.
Table 4. The parameters of relative entities and their values
Entities | Parameters | Values | Units |
|---|---|---|---|
Mobile | Processor speed | 400 | MHz |
| Processing efficiency | 200 | MIPS |
| Cycles per instruction | 2 | |
| Energy coefficient | 0.75 | Processing constant |
Network | Input data size | 5 | KB |
| Output data size | 10 | KB |
| Uploading bandwidth | 5.91 | Mb/s |
| Downloading bandwidth | 28.02 | Mb/s |
| Uploading/downloading bandwidth | 33.93 | Mb/s |
Cloud | Processor speed | F × 400 | MHz |
Table 5. Execution time and energy consumption cost of all local nodes (Vl). The bold values represent the combined execution time cost (in seconds) and energy cost (in joules) for all local nodes
Vertices | Data size (KB) | CI (millions) | CPI | PE (MIPS) | Execution time cost T (s) | Energy cost E (J) |
|---|---|---|---|---|---|---|
| 50 | 7.5 | 2 | 200 | 0.075 | 0.056 |
| 60 | 9 | 2 | 200 | 0.090 | 0.067 |
| 70 | 10.05 | 2 | 200 | 0.100 | 0.075 |
| 80 | 10.05 | 2 | 200 | 0.100 | 0.075 |
| 70 | 5 | 2 | 200 | 0.050 | 0.037 |
Aggregate | | | | | 0.415 | 0.310 |
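Assuming the cost model T = CI × CPI / PE and E = P_m × T with P_m = 0.75 W (the processing constant of Table 4), the per-node values of Table 5 can be reproduced to within rounding. The sketch below is an illustrative check, not the authors' implementation.

```python
# Reproduce the per-node local costs of Table 5.
# Assumed model: T = CI * CPI / PE (s), E = P_M * T (J), P_M = 0.75 W.
CPI, PE, P_M = 2, 200, 0.75

rows = [  # (CI in millions, T from Table 5, E from Table 5)
    (7.5,   0.075, 0.056),
    (9.0,   0.090, 0.067),
    (10.05, 0.100, 0.075),
    (10.05, 0.100, 0.075),
    (5.0,   0.050, 0.037),
]

for ci, t_tab, e_tab in rows:
    t = ci * CPI / PE   # execution time in seconds
    e = P_M * t         # energy in joules
    # each computed value matches the table to within 1 ms / 1 mJ
    assert abs(t - t_tab) < 1e-3 and abs(e - e_tab) < 1e-3
```

Summing the tabulated columns reproduces the aggregates of 0.415 s and 0.310 J used later for the static cost.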
Next, the execution time cost of a task at a remote server depends on many factors. Apart from the actual processing time of the task at the remote server, transmission time is involved. Collectively, the transmission time and the actual execution time of the task at the remote location are called the idle time, because during the transmission and remote execution of the task the mobile resources stay idle. The idle time of the remote task increases if either the transmission time or the processing time at the remote location increases. Therefore, to calculate the execution cost of a task in the cloud, it is important to know the transmission time of the remote task apart from its actual execution time cost. The transmission (communication) time T_tr can be calculated by dividing the total transmitted/received data size D_tr by the uploading/downloading bandwidth B_w, i.e. T_tr = D_tr / B_w. Similarly, to calculate the actual execution of the task at the cloud, the values of S_c, CI and PE_c are significant, where S_c is the clock cycles per second of the remote server, CI is the number of instructions to be processed and PE_c is the processing efficiency of the cloud server in MIPS. Thus, the actual execution cost of the task is T_exec = (CI × CPI) / PE_c. Based on T_tr and T_exec, the collective execution cost of the task at the cloud will be T_c = T_tr + T_exec.
Further, the execution time cost of the remote nodes will be significantly lower than the execution time of the same tasks on the mobile, due to the speed of the cloud server. Therefore, the value of PE_c will be higher than the PE value of the mobile processor, where F is the speed constant of the remote server. If the processor of the cloud is F times faster than the mobile processor, i.e. S_c = F × S_m, and F is set to 2, then PE_c = 400 MIPS. The value of F is considered 2 for the remote server.
Similarly, the energy consumption cost of executing a task remotely involves the energy consumed in transmission and the energy consumed during the actual remote processing of the task. The transmission energy E_tr can be calculated by multiplying the energy coefficient constant of the mobile P_tr by the transmission/receiving time cost T_tr; thus, E_tr = P_tr × T_tr. The energy E_exec of the task can be obtained by multiplying the idle power consumption P_i by the execution time cost of the task at the remote server: E_exec = P_i × T_exec. The collective energy consumption of the task while executing at a remote server is E_c = E_tr + E_exec.
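The remote cost model described above can be sketched as two small functions; the parameter names and the sample values in the usage note are illustrative assumptions, not measurements from the paper.

```python
# Sketch of the remote (cloud) execution cost model:
# T_c = transmission time + cloud execution time,
# E_c = transmission energy + idle energy during cloud execution.

def remote_time(d_kb, bw_kb_per_s, ci, cpi, pe_c):
    """Return (T_tr, T_exec, T_c) for a task offloaded to the cloud."""
    t_tr = d_kb / bw_kb_per_s     # transmission of input/output data
    t_exec = ci * cpi / pe_c      # actual execution at the cloud server
    return t_tr, t_exec, t_tr + t_exec

def remote_energy(p_tr, t_tr, p_idle, t_exec):
    """E_c = P_tr * T_tr + P_i * T_exec."""
    return p_tr * t_tr + p_idle * t_exec
```

For example, a 15 KB payload over an assumed 5000 KB/s link with CI = 22.5 MI, CPI = 2 and PE_c = 400 MIPS gives T_tr = 0.003 s and T_exec = 0.1125 s.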
Based on Eqs. 1.2 and 1.3, the execution time cost and energy consumption cost of all the remote nodes are calculated and given in Table 6.
Table 6. Execution time and energy consumption cost of all the remote nodes (Vc). The bold values represent the combined execution time cost (in seconds) and energy cost (in joules) for all remote nodes
Vertices | Data size (KB) | CI (millions) | CPI | PE (MIPS) | T_tr (s) | T_exec (s) | T_c (s) | E_tr (J) | E_exec (J) | E_c (J) |
|---|---|---|---|---|---|---|---|---|---|---|
| 100 | 15 | 2 | 400 | 0.003 | 0.08 | 0.083 | 0.002 | 0.06 | 0.062 |
| 150 | 22.5 | 2 | 400 | 0.003 | 0.11 | 0.113 | 0.002 | 0.08 | 0.082 |
| 130 | 19.5 | 2 | 400 | 0.005 | 0.09 | 0.095 | 0.003 | 0.06 | 0.063 |
| 200 | 30 | 2 | 400 | 0.003 | 0.06 | 0.063 | 0.002 | 0.05 | 0.052 |
Aggregate | | | | | | | 0.354 | | | 0.259 |
Next, to find the weighted value of an edge between two vertices in terms of communication time, the size of the input data from node i to node j and the size of the output data from node j to node i must be found first. Also, the size of the request between the two nodes needs to be observed before finding the weighted value of the edge. The size of GET requests in Web Services ranges from 2 to 8 KB for different browsers as per the IETF RFCs (Requests for Comments). The size of a POST request is almost the same as that of a GET. Here, the average size of GET and POST is considered, which is 4 KB. The uploading and downloading bandwidths, whose values for a broadband network are retrieved from speedtest.net, are 5.91 Mb/s and 28.02 Mb/s respectively. Now, if the input data is 5 KB and the output data is 10 KB, then the cost in terms of communication time can be calculated as:
Using Eq. 1.1, the communication time cost of an edge is:
As the weight of the edge is calculated here in terms of communication time, the edge weight can be replaced with the communication time cost in the above equation:
Also, based on Eq. 1.3, the energy consumption of a single edge can be calculated as:
Similarly, if we assume the input and output size of data for every edge in Fig. 2b, invocation (communication) cost in terms of execution time and energy consumption can be calculated as depicted in Table 7.
Table 7. Communication cost in terms of execution time and energy consumption, where the bold values represent the combined execution time (in seconds) and energy consumption (in joules) for the set of edges
Edges | Input data size (KB) | Output data size (KB) | Execution time cost (s) | Energy cost (J) |
|---|---|---|---|---|
| 3 | 6 | 0.002 | 0.001 |
| 8 | 16 | 0.003 | 0.002 |
| 7 | 14 | 0.003 | 0.002 |
| 4 | 8 | 0.002 | 0.001 |
| 8 | 16 | 0.003 | 0.002 |
| 6 | 12 | 0.002 | 0.002 |
| 8 | 18 | 0.003 | 0.002 |
| 4 | 8 | 0.002 | 0.001 |
| 5 | 10 | 0.002 | 0.002 |
| 7 | 14 | 0.003 | 0.002 |
| 4 | 8 | 0.002 | 0.001 |
Aggregate | | | 0.027 | 0.018 |
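The per-edge costs in Table 7 appear to follow the T = D/B_w relation. A sketch under stated assumptions, namely the combined 33.93 Mb/s bandwidth of Table 4 and a 0.75 transmission-power coefficient (both assumptions, since the paper's exact per-edge arithmetic is not shown), is:

```python
# Sketch of the per-edge communication cost.
# Assumptions: combined up/down bandwidth 33.93 Mb/s, data sizes in KB
# converted to kilobits (x 8), transmission power coefficient 0.75.

BW_KBPS = 33.93 * 1000   # 33.93 Mb/s expressed in kb/s
P_TR = 0.75              # assumed transmission power coefficient (W)

def edge_time(d_in_kb, d_out_kb):
    """Communication time of one edge in seconds."""
    return (d_in_kb + d_out_kb) * 8 / BW_KBPS

def edge_energy(d_in_kb, d_out_kb):
    """Transmission energy of one edge in joules."""
    return P_TR * edge_time(d_in_kb, d_out_kb)
```

With 5 KB in and 10 KB out this yields roughly 0.0035 s per edge, of the same order as the values tabulated above.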
The aggregate values of the time cost and energy consumption of the local nodes, remote nodes and edges were calculated using a prototype. Next, to find the total dynamic partitioning and total static partitioning costs of the computational offloading model, we substitute the aggregate values of the time cost and energy cost into Eqs. 2.2, 2.3, 2.4 and 2.5. Finally, an evaluation of the results is presented to compare and contrast the efficiency of both types of partitioning approaches in terms of execution time and energy.
(A) Static Partitioning Cost in Terms of Time and Energy
(a) Time Cost
To obtain the aggregate value of the execution time cost in static partitioning, the time cost values of all three entities from Tables 5, 6 and 7 are substituted into Eq. 2.2:

T_sp = 0.415 + 0.354 + 0.027 = 0.796 s
(b) Energy Cost
Similarly, to obtain the energy cost of static partitioning in computational offloading, the aggregate energy consumption values from Tables 5, 6 and 7 are substituted into Eq. 2.4 to get the total energy cost:

E_sp = 0.310 + 0.259 + 0.018 = 0.587 J
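The static totals follow directly from the three table aggregates; the variable names in the sketch below are illustrative.

```python
# Static partitioning totals (Eqs. 2.2 and 2.4), substituting the
# aggregate values from Tables 5, 6 and 7.

t_local, t_remote, t_edges = 0.415, 0.354, 0.027   # seconds
e_local, e_remote, e_edges = 0.310, 0.259, 0.018   # joules

t_static = t_local + t_remote + t_edges   # 0.796 s
e_static = e_local + e_remote + e_edges   # 0.587 J
```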
(B) Dynamic Partitioning Cost in Terms of Time and Energy
(a) Time Cost
To obtain the execution cost and energy cost of dynamic partitioning, the aggregate value of each of the five entities (local, remote, communication, frequency and additional node) must first be found. The values of three of the entities, i.e. the local execution cost, the remote execution cost and the communication cost of the CG graph, are already calculated in Tables 5, 6 and 7. The aggregate costs of the calling frequency and of configuring the additional node are obtained here.
Based on the execution environment conditions, the nodes quickly switch positions and therefore the execution pattern changes. Thus, with every new change there is a new execution pattern, which ultimately increases the computation due to the calling frequency and the updating of the decision node. The calling frequency is the set of invocation calls between nodes for requesting input or delivering output. The calling frequency between the remote and local nodes of Fig. 2b is considered for this example. If the remote nodes are required to call the local nodes for the exchange of data, then the calling pattern will be as follows.
As every single edge here is one complete cycle of invocation between the local and remote nodes, the frequency of the existing pattern is 6. Referring to Eq. 1.15, the summation of the transmission costs of all the edges gives the total frequency cost. The calculated transmission time and energy consumption values of the above 6 edges are already presented in Table 7. By substituting the values of the mentioned 6 edges into Eq. 1.15, the aggregate time cost will be T_f = 0.016 s.
Similarly, based on Eq. 1.3, the energy consumption cost will be E_f = 0.012 J.
Next, to observe the execution cost of the additional node d, it is important to know the values of CI, CPI and PE. The CPI value is taken as 2 in the example, the number of instructions of node d (15 MI) is given in Table 3 and PE is the processing efficiency of the mobile in MIPS, which is 200. Substituting the values into Eq. 1.2, the obtained execution time cost of node d is T_d = (15 × 2)/200 = 0.15 s.
After acquiring the values of the five entities of dynamic application partitioning and substituting them into Eq. 1.17, the execution time cost of dynamic application partitioning is T_dp = 0.415 + 0.354 + 0.027 + 0.016 + 0.15 = 0.962 s.
(b) Energy Cost
Next, to obtain the energy consumption cost of dynamic application partitioning, the energy consumption cost of node d is calculated using Eq. 1.3: E_d = 0.75 × 0.15 ≈ 0.112 J.
The values of all five entities are substituted into Eq. 1.20, and the resulting cost in terms of energy is E_dp = 0.310 + 0.259 + 0.018 + 0.012 + 0.112 = 0.711 J.
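The dynamic totals, combining the three table aggregates with the frequency and additional-node costs, can be checked in the same way; variable names are illustrative.

```python
# Dynamic partitioning totals (Eqs. 1.17 and 1.20): local + remote
# + communication + calling frequency + additional node d.

t_components = [0.415, 0.354, 0.027, 0.016, 0.150]   # seconds
e_components = [0.310, 0.259, 0.018, 0.012, 0.112]   # joules

t_dynamic = sum(t_components)   # 0.962 s
e_dynamic = sum(e_components)   # 0.711 J
```

These are the dynamic-partitioning figures (0.962 s, 0.711 J) quoted in the abstract.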
Results Evaluation
Five basic parameters observed so far have an adequate impact on the computational time and energy consumption. An increase in any of these five parameters leads to an increase in the computational time, which in turn consumes more power.
All five parameters are directly proportional to the computational time: if any one of them increases, the computational time increases. Here we evaluate only two of the parameters, the number of instructions and the input/output data size. Both significantly affect the computational time difference between static and dynamic partitioning, while the impact of the remaining three parameters is the same for both static and dynamic partitioning.
Impact of CI and Data Size on Computational Time and Energy Consumption
Local Execution
The first parameter is the number of instructions (CI). As CI increases, the computational time of the task always increases, as shown in Fig. 5a. Plotting the computational cost in terms of the execution time of the different local nodes from Table 5 shows that with each increase in instruction intensity the computational time increases significantly, which confirms that a higher number of instructions leads to a higher computational time. For instance, the computational time at the lowest intensity, CI = 5 MI, is about 0.05 s, and it reaches up to 0.10 s once the intensity reaches its highest point (CI = 10 MI). The increasing computational time also increases the energy consumption, as observed in the analysis of this model. The energy consumption cost of locally executed tasks is plotted in Fig. 5b: with each increase in instruction intensity, more power is drained, mirroring the computational time cost.
Fig. 5 [Images not available. See PDF.]
a The impact of intensity in instructions (CI) and data size in (KBs) over the computational time in local execution. b The impact of intensity in instructions (CI) and data size in (KBs) over the energy consumption in local execution
Remote Execution
Similarly, Fig. 6a depicts the impact of CI on the computational time while executing the task at the remote server. If the number of instructions increases, the computational time increases in both local and remote execution. For instance, a computational intensity of CI = 12 MI requires about 0.063 s to complete execution, and the execution time reaches up to 0.095 s when the intensity rises to 19.5 MI. As the efficiency of the remote server is better than that of the mobile, regardless of the other factors (transmission time, bandwidth, size of data) the execution time at the remote server is less than the mobile's execution time, and it hence leads to lower energy consumption provided the bandwidth and data size also give optimal results, as shown in Fig. 6b. For CI = 12 MI, the energy consumption is about 0.053 J, which reaches up to 0.082 J for the highest intensity, CI = 22.5 MI.
Fig. 6 [Images not available. See PDF.]
a The impact of intensity in instructions (CI) and data size in (KBs) over the computational time in remote execution. b The impact of intensity in instructions (CI) and data size in (KBs) over the energy consumption in remote execution
With each increase in the number of instructions, more resources of the mobile device are utilized. Moreover, if the instruction count increases, it leads to more execution time at the remote location, which ultimately increases the idle time of the mobile device. Consequently, the extended idle period leads to increased power drain, known as idle power consumption. So far, we have observed that an increasing computational intensity (CI) of a task ultimately results in higher computational time and energy consumption.
Further, the size of the input/output data plays a significant role in determining the computational cost in terms of both execution time and energy consumption, especially during remote task execution. In the case of local execution, the data is exchanged only between the local nodes, which does not require a long wait for the turnaround response. On the other hand, the exchange of input/output data between the local and remote execution environments always prolongs the execution time. Based on T_tr = D_tr/B_w, as the data size D_tr increases, the transmission time also increases, which ultimately increases the execution time of the remote task; conversely, if the bandwidth B_w increases, the transmission time is reduced, which reduces the overall execution time of the task. We consider the edges of Fig. 2b which exchange data between the local and remote nodes; with an available bandwidth of 33.93 Mb/s and the data sizes of these edges, substituting the values into the equation gives the execution time between the nodes.
The relationship between data size and computational time is plotted in Fig. 6a. As the size of the data exchanged between the local and remote nodes increases, the execution time rises: the smallest data size yields the lowest computational time, while the biggest data size pushes the computational time to its highest value. Furthermore, the increase in data size also has a direct impact on the energy consumption of the mobile device. It is already established that the bigger the data size, the longer the computational time, which ultimately drains the battery more quickly. The impact of data size on energy consumption is shown in Fig. 6b.
Comparison of CI and Data Size in Static and Dynamic Partitioning
In continuation of the previous section, if the size of the instructions of a task increases, it ultimately affects the computational time of the task. Initially, the size of the instructions of a task and the size of the data to be processed locally or remotely are fixed. Once partitioning takes place, it sometimes escalates the size of the instructions and the size of the data. In static partitioning, the pattern of execution is known in advance and partitioning takes place once the execution starts. In dynamic partitioning, the pattern of execution is decided at runtime, so additional instructions must be processed to decide the execution pattern of the original task. In the above example, the number of instructions and the size of the data of both static and dynamic partitioning are observed after the calculations. The total number of instructions in static partitioning is the sum of the total local instructions, the total remote instructions and the total communication instructions, i.e. CI_sp = CI_l + CI_c + CI_e.
Similarly, the total size of data in static partitioning is equal to the total data size of the local nodes, the total data size of the remote nodes and the total size of the input/output data: D_sp = D_l + D_c + D_e.
In dynamic partitioning, the instructions of the additional node d and the instructions of the invocation frequency f increase the total data size and the total number of instructions; i.e. the total number of instructions of an application after dynamic partitioning takes place will be CI_dp = CI_l + CI_c + CI_e + CI_f + CI_d.
Similarly, the total size of data in dynamic partitioning is the total size of the local, remote and input/output data with the addition of the data size of node d and of the invocation calls: D_dp = D_l + D_c + D_e + D_f + D_d.
We consider five different scenarios of node sizes with fixed bandwidth, both for static and for dynamic partitioning. The comparison of the total number of instructions for static and dynamic partitioning is shown in Fig. 7a, and the comparison of the total data size in Fig. 7b.
Fig. 7 [Images not available. See PDF.]
a Comparison of number of instruction (CI) for the same task both in static and dynamic. b Comparison of data size (KB) for the same task both in static and dynamic
The comparison shows that the number of instructions and the total data size of static partitioning are, in each scenario, less than those of dynamic partitioning. This is due to the additional information and data necessary for the additional invocation calls (f) and for configuring the additional node d in dynamic partitioning.
Cost Estimation of the Proposed Cost-Efficient Partitioning Model
Based on the analysis, we propose a Cost-Efficient Partitioning (CEP) model to cope with the shortcomings of both existing techniques. Static partitioning is efficient in eliminating the overhead "C" but less efficient in coping with the dynamic changes occurring in the computing environment. Dynamic partitioning techniques, on the other hand, are more efficient in coping with the dynamic changes observed during the execution period, but ultimately add the additional computation "C". The computation "C" at the mobile device is significant because the whole offloading concept is built on reducing "C" at the mobile end. Thus, it is essential that the mobile device first estimates the cost of executing the code in terms of computational effort (time, energy). The comparative cost analysis of local and remote execution enables a precise partitioning decision between local and remote execution, which reduces "C" and ultimately saves energy [21]. The proposed CEP model combines the features of both static and dynamic partitioning to overcome the limitations of the two existing techniques. CEP adopts the static partitioning feature to reduce the additional computation "C". The network changes that are expected to occur during task execution can be dealt with efficiently by deploying an algorithm that reads the basic parameters, i.e. the battery level, the network type, the network connection and the execution time of the executing environment. If all the parameters produce a "Yes" outcome, a method-level partitioning module is activated to partition the application, followed by the transfer of control to the offloading component, which offloads the remote parts of the application for execution.
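The parameter check described above can be sketched as a small decision function. The thresholds, parameter names and function name below are illustrative assumptions, not part of the CEP specification.

```python
# Sketch of CEP's lightweight runtime check: a fixed set of parameters
# is read once, and offloading proceeds only if all checks pass.
# Threshold values are assumed for illustration.

def should_offload(battery_pct, bandwidth_mbps, local_time_s,
                   remote_time_s, min_battery=20, min_bandwidth=1.0):
    """Return True when remote execution is expected to pay off."""
    if battery_pct < min_battery:       # too little energy to transmit
        return False
    if bandwidth_mbps < min_bandwidth:  # network too slow to offload
        return False
    # offload only when remote execution is estimated to be faster
    return remote_time_s < local_time_s
```

Because the check runs once per task with a fixed number of parameters, its cost is constant, which is the property the CEP cost model relies on.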
The Cost-Efficient Partitioning (CEP) model is based on the static partitioning technique and therefore costs the same as a statically partitioned application for the local tasks and the remote tasks. Deploying the additional algorithm that copes with the dynamic changes costs a constant amount, because the algorithm checks a fixed number of parameters each time. This additional cost is less than the additional cost of dynamic partitioning, because the additional node of the dynamic partitioning model requires constant feedback from the local and remote nodes and then changes the execution pattern accordingly; the invocation calls and the internal operation of the execution pattern form a computationally intensive process. In the CEP model, on the other hand, the execution of the additional algorithm takes place only once, after which the remote nodes are executed.
Execution Time Cost of the Cost-Efficient Partitioning Model
The execution time cost of the CEP model is equivalent to the execution time cost of static partitioning plus the additional execution cost T_A of deploying the algorithm. Thus, from Eq. 2.2, the execution time cost of CEP is calculated as:
T_cep = T_l + T_c + T_e + T_A
2.6
where T_A is the total cost of the deployed algorithm. The algorithm runs once for every single remote node; as a result, the total cost of the deployed algorithm is T_A = Σ_{v ∈ Vc} T_a(v), where v is a remote node and T_a(v) is the per-node cost of the algorithm.
Working Example of the Time Cost in CEP
If we run the model with the calculated values of Example 1 in Fig. 2b, then the estimated execution time cost of the CEP model is:
T_cep = 0.796 + T_A
2.7
If the weighted cost of the additional algorithm is 5 KB (i.e. 0.75 MI), then the execution time cost of the algorithm for a single node, based on Eq. 1.2, is:
By substituting the values of CI, CPI and PE into the above equation, the resulting time cost of the algorithm for a single node is T_a = (0.75 × 2)/200 = 0.0075 s.
The total time cost for all the remote nodes is T_A = 4 × 0.0075 = 0.03 s. By substituting the values of T_sp and T_A into Eq. 2.7, the total time cost of the hybrid application partitioning will be T_cep = 0.796 + 0.03 = 0.826 s.
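The CEP time total can be checked in a few lines; the constant names are illustrative, and the 4 remote nodes come from Example 1.

```python
# CEP execution-time total (Eq. 2.6): static cost plus one run of the
# lightweight algorithm per remote node (4 remote nodes in Example 1).

T_STATIC = 0.796          # s, static partitioning total
T_ALG = 0.75 * 2 / 200    # s, 5 KB algorithm = 0.75 MI at CPI=2, PE=200
N_REMOTE = 4

t_cep = T_STATIC + N_REMOTE * T_ALG   # 0.826 s
```

This reproduces the 0.826 s quoted in the abstract, between the static (0.796 s) and dynamic (0.962 s) totals.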
Energy Cost in Cost-Efficient Partitioning Model
The energy cost of the CEP model builds on the cost of the static partitioning technique. Based on Eq. 1.18, the energy consumption cost of the hybrid partitioning of an application is the total energy cost of the local nodes, the total energy cost of the remote nodes and the total energy cost of the communications. To this whole energy consumption, the additional energy consumption cost E_A of the deployed algorithm is added. Therefore, the total energy consumption of the CEP model is:
E_cep = E_l + E_c + E_e + E_A
2.8
where E_A is the energy consumption cost of executing the algorithm for each remote node.
Working Example of the Energy Cost in CEP
The estimated energy cost can similarly be calculated by aggregating the total energy cost of the static partitioning with the energy cost of the deployed algorithm:
2.9
For a single node with an execution time cost of 0.075 s, the energy cost of the node, based on Eq. 1.3, is computed as follows.
By substituting the values in the above equation, the energy consumption cost of the algorithm for a single node is 0.0056 J. The cost of all the remote nodes is the per-node cost multiplied by the number of remote nodes. Similarly, to obtain the total energy cost of the CEP model, these values are substituted in Eq. 2.9.
Discussion
Efficiency Comparison of the Cost-Efficient Partitioning (CEP) Model Against the Static and Dynamic Approaches
To compare the performance effectively, Table 8 presents the key metrics of static, dynamic, and the proposed CEP partitioning in computational offloading. The table highlights the trade-offs among the three approaches: static partitioning is simpler and less resource-intensive but less adaptable, while dynamic partitioning offers greater flexibility and optimization at the cost of higher complexity and overhead. In contrast, the proposed CEP model combines the static and dynamic natures of the existing models and thus efficiently addresses the shortcomings of both approaches.
Table 8. Key metrics of the static, dynamic, and proposed CEP models
| Metric | Static Partitioning | Dynamic Partitioning | CEP Model |
|---|---|---|---|
| Definition | Predefined partitioning of tasks before execution | Task partitioning decisions are made at runtime | Runtime partitioning with the static nature of predefined parameters |
| Flexibility | Low: fixed partitioning plan, not adaptable to changes | High: adapts to real-time conditions and system state | High: adapts to dynamic changes |
| Complexity | Low: easier to implement, less computational overhead | High: requires real-time monitoring and decision-making | High: flexibility toward dynamic changes requires additional computation compared with static |
| Response to changing conditions | Poor: cannot adapt to changes in network, resources, or load | Good: adjusts to changes in network, resources, or load | Excellent: adapts to abrupt changes in the computing environment |
| Overhead | Minimal: no runtime decision-making required | Very high: requires continuous monitoring and re-evaluation | High: higher than static and lower than dynamic due to its hybrid nature |
| Scalability | High: scales well due to simplicity and low overhead | Moderate: complexity increases with the number of tasks/devices | Moderate: complexity increases with the number of tasks/devices |
| Performance optimization | Limited: optimized for specific, known conditions | Optimal: can optimize performance based on current conditions | Ideal: can optimize in both static and dynamic environments, handling tasks in online and offline modes |
| Energy efficiency | Moderate: may not be optimal for all scenarios | High: can choose the most energy-efficient partitioning at runtime | Highest: energy efficient due to runtime partitioning as well as eliminated overhead |
| Use case suitability | Best for predictable, stable environments | Good for dynamic, unpredictable environments with fluctuating resources | Ideal for unpredictable computing environments with fluctuating resources |
| Implementation time | Short: faster to develop and deploy | Long: requires more development time for real-time decision mechanisms | Long: with both static and dynamic natures, it takes time to implement |
| Dependency handling | Static: must predefine dependencies; not adaptable | Dynamic: can adjust task dependencies at runtime | Hybrid: can have both features |
| Latency sensitivity | Fixed: does not account for latency changes post-deployment | Adaptive: can account for and respond to latency changes | Adaptive: can respond to latency changes during execution |
| Example scenario | Embedded systems, IoT devices with stable conditions | Mobile applications, cloud computing with varying conditions | |
The proposed cost-efficient partitioning (CEP) mathematical model is based on the evaluated results of the existing application partitioning models. The costs of the existing models, in terms of task execution time (seconds) and energy consumption (joules), are calculated and then compared against the proposed model. The execution cost comprises: the time to save the states of the running application; the time taken to partition the application; the time required to offload the executable codes to the remote server; the time to process the remote instance of the application; the time required to upload the state files; the time required to resume the states of the mobile application; and the time consumed in communicating the result back to the mobile device. The sum of these times is termed the turnaround time (TT). TT is an essential parameter in a mobile cloud computing environment. Besides, TT depends on many other factors, discussed in the previous sections; the most important are the length of the offloadable codes, the bandwidth of the transmission line, the size of the codes transmitted between the remote server and the mobile device, the current status of the mobile device, and the processing efficiency of the device.
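The TT composition described above can be sketched as a simple sum over the offloading phases; all component names and values here are illustrative placeholders, not the paper's measurements:

```python
# Phases of computational offloading that make up the turnaround
# time (TT); values are illustrative placeholders in seconds.
tt_components = {
    "save_app_states": 0.020,
    "partition_app": 0.050,
    "offload_code_to_server": 0.150,
    "remote_processing": 0.400,
    "upload_state_files": 0.080,
    "resume_app_states": 0.030,
    "return_results_to_device": 0.070,
}

# TT is simply the sum of all phase durations.
turnaround_time = sum(tt_components.values())
print(f"TT = {turnaround_time:.3f} s")  # TT = 0.800 s
```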
The above parameters are held constant or excluded so that the results reflect the application partitioning only. The execution cost/TT results are observed by running tasks of different intensities in the three partitioning models. The calculated execution time cost efficiency of the proposed CEP model against static and dynamic partitioning is presented below:
The recorded average execution cost of the proposed CEP model is 0.826 s, which is greater than the average execution cost of 0.796 s in static partitioning but less than the 0.962 s of dynamic partitioning, as shown in Fig. 8. Static partitioning divides system resources such as memory, CPU time, or storage into fixed-sized partitions in advance; this predefined division of components saves additional computation during partitioning and thus reduces the operational time. On the other hand, the execution time of a task in dynamic application partitioning is greater than in both the static and the proposed models, owing to the additional cost incurred to cope with dynamic changes and to support mobility. By countering the weak points of both the static and dynamic approaches, the proposed CEP model reduces the execution time to 0.826 s for tasks of similar intensities. CEP partitioning thus gains in execution time, as it can efficiently cope with dynamic changes while reducing the execution time relative to dynamic application partitioning.
Fig. 8 [Images not available. See PDF.]
The execution cost efficiency comparison of all three techniques
Further, the execution time/TT of a task has a direct impact on the energy consumption of the mobile device, since E = α · T, where E is the energy consumption, T is the execution time of the task, and α is the constant of proportionality. Thus, if the execution time of a task prolongs during computational offloading, the energy consumption at the mobile end increases, and vice versa.
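Under this proportionality, the reported figures can be checked with a one-line calculation; the value of α below is a back-of-envelope fit (roughly 0.609 J divided by 0.826 s), not a constant given in the paper:

```python
ALPHA = 0.737  # J/s, hypothetical proportionality constant

def energy_joules(execution_time_s: float, alpha: float = ALPHA) -> float:
    """E = alpha * T: energy consumption grows linearly with the
    execution time of the offloaded task."""
    return alpha * execution_time_s

# CEP's reported 0.826 s maps to roughly its reported 0.609 J:
print(round(energy_joules(0.826), 3))  # 0.609
```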
Now, the average energy efficiency comparison of the CEP model against static and dynamic application partitioning is presented below:
It is established that energy consumption increases whenever the execution time of a task increases, as observed during the evaluation of all three partitioning techniques. The energy consumption of the proposed CEP model is 0.609 J, which is higher than the 0.587 J of the static partitioning approach; this is due to the additional cost of deploying the two additional nodes discussed in the analysis chapter, which increases the execution time of the proposed CEP. Conversely, the execution time cost of the proposed CEP model is less than that of dynamic partitioning, and therefore the 0.711 J energy cost of dynamic partitioning is higher than that of the CEP model. As observed and discussed previously, resource allocation at the mobile device increases during the execution of a task in the dynamic approach; the additional utilization of resources and the runtime monitoring of every single parameter make the approach time-intensive, and ultimately energy-intensive. The above working example is iterated seven times to calculate the execution and energy costs of all three techniques, ensuring an effective and fair comparison among the discussed application partitioning techniques. The average efficiency of the CEP model against the static and dynamic partitioning approaches is plotted in Fig. 9.
Fig. 9 [Images not available. See PDF.]
The energy efficiency comparison of all three techniques
It is established that the CEP model performs better than the traditional static and dynamic approaches, owing to the precise partitioning of the mobile application and the consideration of the predefined conditions of a mobile computing environment. Furthermore, the CEP model is designed to avoid any additional processing and offloading overhead that could either prolong the turnaround time (TT) or engage the mobile resources in longer operations. By eliminating the additional processing load and precisely partitioning the executing tasks, the proposed CEP model thus becomes an efficient solution in terms of execution time and energy consumption.
Conclusion
After evaluating the results, it is evident that static partitioning is more efficient in terms of execution time and energy consumption compared to the other two approaches. However, its applicability is limited to specific types of applications and scenarios where dynamic changes are not expected to occur. In mobile computing, frequent dynamic changes are expected due to the device's mobility between locations. Dynamic partitioning introduces automation that relieves programmers of the burden of partitioning; nevertheless, the increased computational demands on mobile devices counteract the goal of offloading the execution load, as the dynamic approach adds to the computational load. This increase in computational load thus makes the dynamic partitioning technique energy-intensive.
A Cost-Efficient Partitioning model is proposed based on the numerical analysis of both the static and dynamic approaches. The CEP model aims to address the shortcomings of both and potentially transform computational offloading into an energy-saving solution. A test scenario was developed with a fixed data size and identical computational parameters in terms of bandwidth and the processing efficiency of the computing devices at the mobile and cloud ends. The evaluated results show that the CEP model is more efficient than the dynamic partitioning approach in terms of execution time and energy consumption. Compared with static partitioning, CEP can efficiently deal with random changes during mobility and cope with the shortcomings that arise in mobile computing.
Limitation and Future Works
A limitation of the model arises when no remote execution resources are available. This can be handled using locally available computing resources, yet the privacy and security of the devices could be compromised. Further, a few recent studies propose an additional cloudlet (a server at a single hop), which can benefit the execution time and the availability of a local server for remote execution. However, any additional installation of computing resources incurs additional cost as well as administrative overhead. It is therefore proposed as future work to have a hybrid access point that can provide internet access as well as offer an additional processing unit for executing the offloaded tasks.
Acknowledgements
This research was supported by Research Ireland under grant number 18/CRT/6223 (SFI Centre for Research Training in Artificial Intelligence) and CSE UG.
Authors Contribution
Conceptualization, MA and DM; methodology, MA, OK and RH; software, DM, MA and RH; validation, MA and OK; formal analysis, DM; investigation, OK and MA; resources, MA and DM; data curation, RH, OK and MA; writing – original draft preparation, MA and OK; writing – review and editing, RH and DM; visualization, OK; supervision, MA; project administration, DM. All authors have read and agreed to the published version of the manuscript.
Funding
The funding has been received from Research Ireland with Grant no. 18/CRT/6223.
Data Availability
Not Applicable.
Declarations
Conflict of Interest
The authors declare no potential conflict of interest.
Research Involving Humans and Animals
Not Applicable.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
1. Cook, H; Moreto, M; Bird, S; Dao, K; Patterson, DA; Asanovic, K. A hardware evaluation of cache partitioning to improve utilization and energy-efficiency while preserving responsiveness. ACM SIGARCH Comput Arch News; 2013; 41,
2. Ali, M; Zolkipli, MF; Zain, JM; Anwar, S. Mobile cloud computing with SOAP and REST web services. J Phys Conf Ser.; 2018; 1018,
3. Liu, J; Ahmed, E; Shiraz, M; Gani, A; Buyya, R; Qureshi, A. Application partitioning algorithms in mobile cloud computing: taxonomy, review and future directions. J Netw Comput Appl; 2015; 48, pp. 99-117. [DOI: https://dx.doi.org/10.1016/j.jnca.2014.09.009]
4. Barrameda, J; Samaan, N. A novel statistical cost model and an algorithm for efficient application offloading to clouds. IEEE Trans Cloud Comput; 2015; 6,
5. Lu, J et al. A multi-task oriented framework for mobile computation offloading. IEEE Trans Cloud Comput; 2019; 10,
6. Akherfi, K; Gerndt, M; Harroud, H. Mobile cloud computing for computation offloading: Issues and challenges. Appl Comput Inform; 2018; 14,
7. Fan, Y; Zhai, L; Wang, H. Cost-efficient dependent task offloading for multiusers. IEEE Access; 2019; 7, pp. 115843-115856. [DOI: https://dx.doi.org/10.1109/ACCESS.2019.2936208]
8. Liu X, Guo S, Yang Y. Task offloading with execution cost minimization in heterogeneous mobile cloud computing. In: Mobile Ad-hoc and Sensor Networks: 13th International Conference, MSN 2017, Beijing, China, December 17-20, 2017, Revised Selected Papers 13, Springer, pp. 509–522. 2018.
9. Shi C, Habak K, Pandurangan P, Ammar M, Naik M, Zegura E. COSMOS: computation offloading as a service for mobile devices. In: Proceedings of the 15th ACM international symposium on Mobile ad hoc networking and computing, pp. 287–296. 2014.
10. Shiraz, M; Gani, A. A lightweight active service migration framework for computational offloading in mobile cloud computing. J Supercomput; 2014; 68, pp. 978-995. [DOI: https://dx.doi.org/10.1007/s11227-013-1076-7]
11. Grewe D, O’Boyle MF. A static task partitioning approach for heterogeneous systems using OpenCL. In: Compiler Construction: 20th International Conference, CC 2011, Held as Part of the Joint European Conferences on Theory and Practice of Software, ETAPS 2011, Saarbrücken, Germany, March 26–April 3, 2011. Proceedings 20, Springer, pp. 286–305. 2011.
12. Wu, H; Knottenbelt, WJ; Wolter, K. An efficient application partitioning algorithm in mobile environments. IEEE Trans Parallel Distrib Syst; 2019; 30,
13. Wu H, Knottenbelt W, Wolter K, Sun Y. An optimal offloading partitioning algorithm in mobile cloud computing. In: Quantitative Evaluation of Systems: 13th International Conference, QEST 2016, Quebec City, QC, Canada, August 23-25, 2016, Proceedings 13. Springer, pp. 311–328. 2016.
14. Li, G; Lin, Q; Wu, J; Zhang, Y; Yan, J. Dynamic computation offloading based on graph partitioning in mobile edge computing. IEEE Access; 2019; 7, pp. 185131-185139. [DOI: https://dx.doi.org/10.1109/ACCESS.2019.2960887]
15. Ou S, Yang K, Liotta A. An adaptive multi-constraint partitioning algorithm for offloading in pervasive systems. In: Fourth Annual IEEE International Conference on Pervasive Computing and Communications (PERCOM'06), IEEE, pp. 10–125. 2006.
16. Rakocevic V, Griffiths JM, Cope G. Dynamic partitioning of link bandwidth in IP/MPLS networks. In: ICC 2001. IEEE International Conference on Communications. Conference Record (Cat. No. 01CH37240), vol. 9: IEEE, pp. 2918–2922. 2001.
17. Lakhan, A et al. Dynamic application partitioning and task-scheduling secure schemes for biosensor healthcare workload in mobile edge cloud. Electronics; 2021; 10,
18. Chen, H; Qin, W; Wang, L. Task partitioning and offloading in IoT cloud-edge collaborative computing framework: a survey. J Cloud Comput; 2022; 11,
19. Haibeh, LA; Yagoub, MC; Jarray, A. A survey on mobile edge computing infrastructure: design, resource management, and optimization approaches. IEEE Access; 2022; 10, pp. 27591-27610. [DOI: https://dx.doi.org/10.1109/ACCESS.2022.3152787]
20. Kovachev D, Yu T, Klamma R. Adaptive computation offloading from mobile devices into the cloud. In: 2012 IEEE 10th International Symposium on Parallel and Distributed Processing with Applications, IEEE, pp. 784–791. 2012.
21. Flores, H; Hui, P; Tarkoma, S; Li, Y; Srirama, S; Buyya, R. Mobile code offloading: from concept to practice and beyond. IEEE Commun Mag; 2015; 53,
22. Kwon, M et al. Development and validation of a smartphone addiction scale (SAS). PLoS One; 2013; 8,
23. Shiraz, M; Abolfazli, S; Sanaei, Z; Gani, A. A study on virtual machine deployment for application outsourcing in mobile cloud computing. J Supercomput; 2013; 63,
24. Yang X-S. Bat algorithm: literature review and applications. 2013. arXiv:1308.3900
25. Shiraz, M; Gani, A; Shamim, A; Khan, S; Ahmad, RW. Energy efficient computational offloading framework for mobile cloud computing. J Grid Comput; 2015; 13,
26. Sinha K, Kulkarni M. Techniques for fine-grained, multi-site computation offloading. In: 2011 11th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing. IEEE, pp. 184–194. 2011.
27. Shuja J, et al. Towards native code offloading based MCC frameworks for multimedia applications: a survey. J Netw Comput Appl. 2016.
28. Jungum, NV; Mohamudally, N; Nissanke, N. Application partitioning for offloading in mobile pervasive environments. Proc Comput Sci; 2019; 160, pp. 206-213. [DOI: https://dx.doi.org/10.1016/j.procs.2019.09.472]
29. Kumar, K; Liu, J; Lu, Y-H; Bhargava, B. A survey of computation offloading for mobile systems. Mob Netw Appl; 2013; 18, pp. 129-140. [DOI: https://dx.doi.org/10.1007/s11036-012-0368-0]
30. Yousafzai, A; Yaqoob, I; Imran, M; Gani, A; Noor, RM. Process migration-based computational offloading framework for IoT-supported mobile edge/cloud computing. IEEE Internet Things J; 2019; 7,
31. Qin, W; Chen, H; Wang, L; Xia, Y; Nascita, A; Pescapè, A. MCOTM: mobility-aware computation offloading and task migration for edge computing in industrial IoT. Fut Gen Comput Syst; 2024; 151, pp. 232-241. [DOI: https://dx.doi.org/10.1016/j.future.2023.10.004]
32. Huang H, Zhan W, Min G, Duan Z, Peng K. Mobility-aware computation offloading with load balancing in smart city networks using MEC federation. IEEE Trans Mob Comput. 2024.
33. Zhan, W; Luo, C; Min, G; Wang, C; Zhu, Q; Duan, H. Mobility-aware multi-user offloading optimization for mobile edge computing. IEEE Trans Veh Technol; 2020; 69,
34. Peng, P et al. A survey on computation offloading in edge systems: from the perspective of deep reinforcement learning approaches. Comput Sci Review; 2024; 53,
35. Guo, Y; Xu, X; Xiao, F. MADRLOM: a Computation offloading mechanism for software-defined cloud-edge computing power network. Comput Netw; 2024; 245, [DOI: https://dx.doi.org/10.1016/j.comnet.2024.110352] 110352.
36. Hasan MK, et al. Federated learning for computational offloading and resource management of vehicular edge computing in 6G-V2X network. IEEE Trans Consum Electron. 2024.
37. Miao, Y; Wu, G; Li, M; Ghoneim, A; Al-Rakhami, M; Hossain, MS. Intelligent task prediction and computation offloading based on mobile-edge cloud computing. Fut Gen Comput Syst; 2020; 102, pp. 925-931. [DOI: https://dx.doi.org/10.1016/j.future.2019.09.035]
38. Feng, H; Guo, S; Yang, L; Yang, Y. Collaborative data caching and computation offloading for multi-service mobile edge computing. IEEE Trans Veh Technol; 2021; 70,
39. Zhao, J; Li, Q; Gong, Y; Zhang, K. Computation offloading and resource allocation for cloud assisted mobile edge computing in vehicular networks. IEEE Trans Veh Technol; 2019; 68,
40. Liu, W; Li, B; Xie, W; Dai, Y; Fei, Z. Energy efficient computation offloading in aerial edge networks with multi-agent cooperation. IEEE Trans Wirel Commun; 2023; 22,
41. Kumar, MS; Karri, GR. Eeoa: cost and energy efficient task scheduling in a cloud-fog framework. Sensors; 2023; 23,
42. Khademi Dehnavi M, Broumandnia A, Hosseini Shirvani M, Ahanian I. A hybrid genetic-based task scheduling algorithm for cost-efficient workflow execution in heterogeneous cloud computing environment. Cluster Comput. 2024;1–26
43. Wu Y, et al. Google's neural machine translation system: bridging the gap between human and machine translation. 2016. arXiv:1609.08144.
44. Teka FA. Seamless live virtual machine migration for cloudlet users with. Carleton University Ottawa. 2014.
45. Khan, SH; Hayat, M; Bennamoun, M; Sohel, FA; Togneri, R. Cost-sensitive learning of deep feature representations from imbalanced data. IEEE Trans Neural Netw Learn Syst; 2017; 29,
46. Othman, M; Khan, AN; Shuja, J; Mustafa, S. Computation offloading cost estimation in mobile cloud application models. Wirel Pers Commun; 2017; 97,
47. Othman, M; Madani, SA; Khan, SU. A survey of mobile cloud computing application models. IEEE Commun Surv Tutor; 2013; 16,
48. Cong, P; Zhou, J; Li, L; Cao, K; Wei, T; Li, K. A survey of hierarchical energy optimization for mobile edge computing: a perspective from end devices to the cloud. ACM Comput Surv (CSUR); 2020; 53,
49. Bi, S; Huang, L; Wang, H; Zhang, Y-JA. Lyapunov-guided deep reinforcement learning for stable online computation offloading in mobile-edge computing networks. IEEE Trans Wirel Commun; 2021; 20,
50. Mach, P; Becvar, Z. Mobile edge computing: a survey on architecture and computation offloading. IEEE Commun Surv Tutor; 2017; 19,
51. Jiang, C; Cheng, X; Gao, H; Zhou, X; Wan, J. Toward computation offloading in edge computing: a survey. IEEE Access; 2019; 7, pp. 131543-131558. [DOI: https://dx.doi.org/10.1109/ACCESS.2019.2938660]
52. Islam, A; Debnath, A; Ghose, M; Chakraborty, S. A survey on task offloading in multi-access edge computing. J Syst Architect; 2021; 118, [DOI: https://dx.doi.org/10.1016/j.sysarc.2021.102225] 102225.
53. Almutairi, J; Aldossary, M. Modeling and analyzing offloading strategies of IoT applications over edge computing and joint clouds. Symmetry; 2021; 13,
54. Ismayilov, G; Topcuoglu, HR. Neural network based multi-objective evolutionary algorithm for dynamic workflow scheduling in cloud computing. Fut Gen Comput Syst; 2020; 102, pp. 307-322. [DOI: https://dx.doi.org/10.1016/j.future.2019.08.012]
Copyright Springer Nature B.V. Jan 2025