Abstract
The demand for cloud-enabled computing is rising, which has motivated researchers to develop various computing paradigms such as mobile cloud computing, edge computing, transparent computing, fog computing, and the federated cloud. This paper discusses different distributed remote computing techniques and their related aspects. It proposes a new paradigm for distributed remote computing named EVACON (Evaporation–Condensation)-Rainsnow Computing. As the name suggests, the term EVACON-Rainsnow represents the environmental phenomena of evaporation, condensation, rain, and snow. How the proposed distributed computing relates to these environmental phenomena is discussed in detail in this manuscript. The proposed work presents a comparative analysis of the new computing paradigm with existing computing technologies. It also demonstrates the detailed architecture, features, and benefits of EVACON-Rainsnow Computing, and explains the principle, components, working architecture, functionality of the different layers, advantages, applications, and challenges involved with the proposed computing. In this work, the existing SKYR framework for mobile cloudlet-based distributed computing is further improved by incorporating the proposed Task-Segregation and Scalability algorithms to accommodate the federated cloud and dew computing, which makes it well suited for the proposed computing. The working flow and architecture of this improved framework for executing the proposed computing, and its comparison with different frameworks, are also illustrated in this paper.
Introduction
The demand for advanced mobile devices and their sophisticated software has generated a boom in highly data-intensive, remote, and distributed computing. Consequently, the scope for cloud computing and the paradigms it has inspired, such as mobile cloud computing, edge computing, fog computing, transparent computing, and dew computing, is on the rise. Among these, edge computing is in high demand as it facilitates local computation in the vicinity of the user and hence reduces latency and improves performance. In this research work, all the mentioned distributed computing techniques are thoroughly covered and compared to analyse their pros and cons, and a new computing paradigm along with its framework is proposed.
Cloud Computing
The concept of cloud computing was first proposed by IBM in 2007 [1] and later promoted by various commercial companies such as Google, IBM, Microsoft, Amazon, and others. Since then, the use of cloud computing has risen drastically, and different commercial companies have developed various tools and applications for it. Cloud computing has changed the fortunes of various technology-based companies; this can be perceived from the fact that Amazon's cloud computing revenue was 12.2 billion USD in 2016 and rose to 45.3 billion USD in 2020 [2]. In the past 14 years, computer-based communication technologies have flourished and developed drastically, which motivated the development of cloud computing. However, the overwhelming growth of cloud computing has exposed some inherent deficiencies and flaws, which prompted researchers to consider and examine network computing paradigms for the post-cloud-computing era.
The traditional cloud computing architecture is a two-tier hierarchy, as shown in Fig. 1. In this figure, the top tier comprises the cloud data center, which provides remote cloud services to end users or mobile devices. The lower tier includes all devices, such as mobile phones and tablets, that want to use the services of the remote cloud. It is therefore a two-way communication between two entities: the cloud and the user devices [3].
[See PDF for image]
Fig. 1
Traditional cloud architecture
Although the cloud is computation intensive and holds a large volume of data, at times it is unable to fulfill the task and service requirements of end users, in which case it must look for alternatives. This challenge in cloud computing can be resolved using the concept of the federated cloud, whose architecture is shown in Fig. 2. In this figure, there are three clouds, of which two are public clouds and one is a private cloud. Public clouds are open to all users, whereas a private cloud is limited to the users of a particular organization or institution. The three clouds interact with each other via Internet connectivity and abide by a service level agreement (SLA) among themselves [4].
[See PDF for image]
Fig. 2
Advanced federated cloud architecture
Zhu et al. [5] have discussed various federated cloud computing frameworks such as modified artificial bee colony (MABC), hybrid chaotic particle search (HCPS), and modified cuckoo search (MCS), and proposed their own framework named matching and multi-round allocation (MMA). These frameworks can also work in standalone mode at the cloud. MMA is our main reference framework for execution at the cloud and federated cloud; the other frameworks are discussed for a more illustrative and effective comparative analysis of results. The MMA framework uses a novel scheduling method that works effectively in heterogeneous multi-cloud environments and optimizes time and total cost for all submitted tasks subject to security and reliability constraints. The authors claim that this framework is more stable and efficient than other existing frameworks. Therefore, the proposed work compares all these frameworks to demonstrate the effectiveness of our proposed framework in conjunction with the proposed computing. Cloud computing is remote computation, which causes high latency and limited bandwidth for end users; post-cloud paradigms such as mobile cloud computing and edge computing address these challenges.
In [6], the M/M/n queuing model is used to formulate a task scheduling problem for a cloud computing environment. The waiting-time matrix is a new type of data structure used by a priority assignment algorithm to determine the order of importance of individual jobs as they arrive. Additionally, to extract the task with the highest priority, the waiting queue uses a special idea based on the Fibonacci heap principle. Considering the non-preemptive and preemptive nature of tasks, the work provides a parallel method for task scheduling in which priority assignment to tasks and heap construction are conducted concurrently. The suggested approach is illustrated step by step with a suitable number of tasks. To assess the effectiveness of the suggested algorithms, the performance of the model is evaluated against various existing strategies such as BATS, IDEA, and BATS + BAR in terms of total waiting time and CPU time [6]. Three different scenarios have also been considered to show how well the task scheduling approach handles tasks with various priorities. Furthermore, the scheduling method is applied in a dynamic cloud computing environment.
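To make the scheduling idea concrete, the sketch below (a minimal Python illustration, not the implementation of [6]) pushes arriving tasks into a heap keyed by a waiting-time-based priority and repeatedly extracts the most urgent one; Python's heapq is used as a stand-in for the Fibonacci heap, and the priority formula is an illustrative assumption.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Task:
    priority: float                      # lower value = dispatched earlier
    task_id: int = field(compare=False)
    burst_time: float = field(compare=False)

def assign_priority(expected_wait: float, burst_time: float) -> float:
    # Illustrative rule only: tasks that have waited long relative to their
    # service demand are promoted; the waiting-time matrix of [6] is richer.
    return burst_time / (1.0 + expected_wait)

heap = []                                # binary heap standing in for a Fibonacci heap
arrivals = [(1, 4.0, 2.0), (2, 1.0, 9.0), (3, 2.5, 3.0)]   # (id, burst, expected wait)
for task_id, burst, wait in arrivals:
    heapq.heappush(heap, Task(assign_priority(wait, burst), task_id, burst))

while heap:                              # extract-min yields the highest-priority task
    nxt = heapq.heappop(heap)
    print(f"dispatch task {nxt.task_id} (priority {nxt.priority:.2f})")
```

In the parallel variant described above, priority assignment and heap construction would run concurrently as tasks arrive; the serial loop here only keeps the idea easy to follow.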
Mobile Cloud Computing
Mobile cloud computing solves the problem of high latency and poor bandwidth available to end users by introducing an intermediate device between the remote cloud and the end users. As smartphones become more sophisticated and advanced in terms of storage, battery life, and computational power, they are suitable to act as intermediaries between the remote cloud and thin mobile devices that lack such resources [7]. A resource-rich mobile device must have efficient software applications to serve the requirements of thin mobile devices [8]. Mobile cloud computing can be defined as the collaboration of cloud computing with mobile devices to provide the latter with high computational power, storage, memory, context awareness, and energy. In other words, it is an interdisciplinary approach comprising cloud computing and mobile computing; hence, this transdisciplinary domain can be named mobicloud computing [9]. Computation offloading is the process of migrating resource-intensive computation from mobile devices to the cloud or to nearby resource-rich intermediate devices. This computation offloading reduces battery power consumption and enhances application performance. However, computation offloading differs between the cloud and resource-rich intermediate devices. Therefore, the computation offloading techniques used in cloud computing cannot be directly employed in mobile cloud computing, as they are energy unaware and consume a lot of bandwidth [10, 11–12]. Hence, smartphones require an application model that supports computation offloading and is optimized for the mobile cloud environment in terms of heterogeneity, context awareness, application partitioning overhead, network data cost, bandwidth, and energy consumption [13].
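Offloading decisions of this kind are usually framed as a comparison of local versus remote latency and energy. The following is a hedged sketch of one such common criterion, not a specific mechanism from the cited works; all parameter names and numbers are illustrative assumptions.

```python
def should_offload(cycles, data_bits, f_local_hz, f_remote_hz,
                   uplink_bps, e_per_cycle_j, tx_power_w):
    """Offload only if it is faster AND cheaper in energy for the device.
    Real mobile cloud frameworks also weigh context awareness, partitioning
    overhead, monetary cost, and link variability."""
    t_local = cycles / f_local_hz
    e_local = cycles * e_per_cycle_j
    t_offload = data_bits / uplink_bps + cycles / f_remote_hz
    e_offload = tx_power_w * (data_bits / uplink_bps)   # device only pays for the upload
    return t_offload < t_local and e_offload < e_local

# Example: a 4-gigacycle task with a 5 Mbit input, 0.5 GHz phone vs 2 GHz cloudlet
print(should_offload(cycles=4e9, data_bits=5e6, f_local_hz=0.5e9,
                     f_remote_hz=2e9, uplink_bps=10e6,
                     e_per_cycle_j=1e-9, tx_power_w=0.5))   # True in this setting
```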
Satyanarayanan et al. [14] have proposed an intermediate resource-rich mobile device that can act as a cloud in proximity to the end user, providing cloud services with low bandwidth consumption and low latency. This resource-rich mobile device is named a cloudlet. A cloudlet is defined as [15] "a trusted, resource rich computer or cluster of computers that is well-connected to the internet and available for use by nearby mobile devices". The ideal location for a cloudlet in the network is one wireless hop away from the mobile device it serves; that is, the cloudlet should be positioned at a cellular base station or a Wi-Fi access point. With this cloudlet, a three-tier architecture of mobile cloud computing is designed, as shown in Fig. 3. The cloudlet provides both data offloading and computation offloading [16]. In this architecture, the top layer is composed of the federated cloud, the bottom layer includes different heterogeneous mobile devices, and the middle layer comprises heterogeneous cloudlets that facilitate local computation near the end user. These cloudlets can perform inter-cloudlet communication and also interact with the central federated cloud to better serve the mobile devices [17]. There are many frameworks that provide cloudlet-based mobile cloud computing. To facilitate inter-cloudlet communication, the intercloudlet communication framework (ICCF) was proposed [18]. This framework provides efficient communication among the various cloudlets available in local proximity and can also interact with the cloud for centralized processing. It was further improved into the framework of mobile cloudlet-based computing (FMCC) [19], which resolves the shortcoming of the former, namely its lack of centralized control and management of resources. These two effective and efficient frameworks are considered for task execution with the proposed framework in scenario 2, which includes the result comparison at the primary cloudlet, as mentioned in Sect. 5.3. Although MCC is very useful in providing services in proximity to mobile devices, it faces the challenges of limited Wi-Fi access to the cloudlet and the transient nature of the cloudlet, which can go offline quite often. These challenges must be resolved to provide efficient and uninterrupted services to mobile devices [4].
[See PDF for image]
Fig. 3
Architecture of mobile cloud computing
On today's Arm application (APP) processors, which dominate the smartphone industry, a trusted execution environment (TEE) is a system-on-chip and CPU-wide security solution [20]. To process sensitive data, such as payment processing or message encryption, mobile APPs typically build a trusted application (TA) in the TEE, which is transparent to the APPs running in the rich execution environment (REE). In more detail, using the interface that the TA provides, the REE and TEE communicate with one another and ultimately convey the results back to the APP in the REE. Such a procedure undoubtedly increases the overhead of mobile APPs. In [20], the authors first give a thorough evaluation of open-source TEE encrypted-text performance and then propose ETS-TEE, an extremely energy-efficient task scheduling technique. The approach estimates the complexity of TA tasks using a deep learning method, and tasks are dynamically scheduled between execution on the local device and offloading to an edge server. The strategy is tested using a Jetson TX2 as the edge server and a Raspberry Pi 3B as the local mobile device. The results demonstrate that the solution achieves an average 38.0% energy reduction and a 1.6 times speedup over the local device's default scheduling strategy. This significantly decreases the performance loss incurred on mobile devices while safeguarding the secure execution of programs and ensuring that the trusted execution environment offers both security and good performance.
Mobile Edge Computing
In the era of remote distributed computing paradigms, the computing that comes after cloud computing and mobile cloud computing is mobile edge computing [21, 22]. Mobile edge computing (MEC) is supported by mobile cellular networks such as 3G, 4G, and 5G, or via Wi-Fi-enabled networks. The prime objective of MEC is to address the challenges of cloud computing and mobile cloud computing (MCC) [23, 24–25]. The former addresses the challenges of the latter by deploying resource-rich devices with enhanced storage and processing capacity within the range of the radio access network. This provides the end user with prompt computing, mobility, energy efficiency, storage capacity, and location and context awareness support [26, 27]. In MCC, cloudlets are deployed at the edge of the network to serve the end user, but they suffer from limited Wi-Fi coverage. Although a cloudlet can process computationally intensive tasks, MEC is better equipped to offload tasks with low latency and high bandwidth [28, 29].
Mobile edge computing is best suited for edge-oriented computation, but it is a relatively new technology and considerable research is needed to overcome the challenges involved and uncover its potential strengths [30, 31–32]. The term mobile edge computing was introduced by the European Telecommunications Standards Institute (ETSI) Industry Specification Group (ISG). This ISG includes reputed telecommunication and technology companies such as IBM, Intel, Vodafone, NTT, Nokia Networks, Huawei, and DOCOMO [33]. The standard definition provided by ETSI for mobile edge computing is [33]: "Mobile edge computing provides an IT service environment and cloud computing capabilities at the edge of the mobile network, within the radio access network (RAN) and in close proximity to mobile subscribers".
MEC is a three-tier hierarchy in which the innermost layer is the core data center at the cloud, which is the centralized service provider. The next layer comprises small edge servers that are located in the proximity of end users and provide remote cloud services with reduced latency and high bandwidth [34]. The outermost layer consists of edge devices: end-user mobile devices or any sensor-based devices that use the services of the edge server in their proximity. This MEC architecture is shown in Fig. 4. The important characteristics of MEC published in the ETSI white paper are: proximity, on-premises operation, lower latency, location awareness, and network context information [35]. There are many frameworks that execute sensor-enabled tasks and handle other aspects related to them. Some of the main frameworks are the energy allocation optimization (EAO) framework, the flow split optimization (FSO) framework, and the profit maximization multi-round auction (PMMRA) framework. These frameworks are recent and very effective for sensor-enabled task execution in mobile edge computing. Among these, the EAO framework is the main reference framework for result comparison at edge-level computing, as shown in scenario 4 of Sect. 5.3. The EAO framework addresses the energy-averaging and energy-minimization problem of edge-enabled wireless sensor networks [36].
[See PDF for image]
Fig. 4
Architecture of mobile edge computing
In [37], the researchers examine task offloading scheduling in a dynamic MEC system. They suggest a hybrid energy supply paradigm that incorporates energy harvesting technology into IoT devices. To reduce the system cost, they jointly optimize local computing, offloading time, and edge computing choices. Based on stochastic optimization theory, they develop an online dynamic task offloading algorithm for MEC with a hybrid energy supply, dubbed DTOME. DTOME can decide which tasks to offload by balancing system cost and queue stability, and dynamic programming theory is used to find the best task offloading method. The effectiveness of DTOME is confirmed by simulation results, which demonstrate that it achieves a lower system cost than two standard task offloading schemes.
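As a rough illustration of how a queue-stability-aware offloading loop can look, the sketch below uses a generic drift-plus-penalty rule from stochastic optimization: each time slot picks the action that minimizes a weighted sum of cost and queue backlog. This is a simplified, hypothetical analogue, not the DTOME algorithm of [37]; the action set, cost model, and weight V are assumptions.

```python
import random

def online_offloading(slots=10, V=5.0):
    """Each slot: choose the action minimising V*cost - Q*service, i.e. trade
    off spending (energy/price) against keeping the task backlog Q stable."""
    Q = 0.0                                    # task backlog in the device queue
    actions = {"local": (2.0, 1.0),            # name -> (cost per slot, service rate)
               "edge":  (1.0, 3.0),
               "idle":  (0.0, 0.0)}
    for t in range(slots):
        arrivals = random.uniform(0.5, 2.5)    # new work arriving this slot
        best = min(actions, key=lambda a: V * actions[a][0] - Q * actions[a][1])
        Q = max(Q - actions[best][1], 0.0) + arrivals   # queue update
        print(f"slot {t}: action={best:5s} backlog={Q:5.2f}")

online_offloading()
```

When the backlog Q grows, the Q-weighted term dominates and the loop starts paying for edge offloading; when Q is small, it stays idle or local to save cost, which is the intuition behind balancing system cost and queue stability.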
Fog Computing
The term fog computing was initially suggested by a researcher at Cisco Systems in 2012 [38]. Processing data at the edge is not a new concept: the edge computing principle first surfaced around 2000 [39, 40] and the cloudlet in 2009 [41]. All three concepts (edge, cloudlet, and fog) revolve around computation at the edge of the local network, near the end users. Cloudlets are applied in mobile networks, whereas edge servers and fog servers are applied to connected things such as the IoT [42]. Fog computing provides both virtualized and non-virtualized computing that allows access to storage, networking, and computation services between the cloud server and end-user IoT devices [38, 43]. However, fog is not always located at the edge of the network near the end-user devices. It is a distributed computing approach that provides lower latency than the cloud and also supports services that are not latency sensitive [44]. It also facilitates the use of idle computation resources near mobile devices to improve the overall service performance of the computing system [45]. Fog computing includes various heterogeneous devices, such as sensors and actuators, which are connected within a network [38]. Time-sensitive computations are performed directly by the fog processing devices without the involvement of a third party. Yi et al. [46] have suggested that fog devices execute new services and basic applications, including network functions, in a sandboxed environment, as is done by cloudlet devices.
Fog computing is a distributed computing paradigm that integrates with cloud computing to provide services at the edge of the network [47]. It facilitates latency-sensitive applications such as IoT environments. By 2020, 50 million things were already connected to form the IoT network [48, 49]. Processing the data of all these connected devices requires huge bandwidth for transmission and large storage capacity for the processed data [50]. All these devices are controlled by some controller via IP using IoT industrial protocols. The definition of fog computing proposed in [51] is: "Fog computing is a scenario where a huge number of heterogeneous (wireless and sometimes autonomous) ubiquitous and decentralised devices communicate and potentially cooperate among them and with the network to perform storage and processing tasks without the intervention of third parties. These tasks can be for supporting basic network functions or new services and applications that run in a sandboxed environment. Users leasing part of their devices to host these services get incentives for doing so."
The basic architecture of fog computing is shown in Fig. 5. In this architecture, the main fog computing components are the cloud server, the core IP network gateway, and the fog devices. In addition, there are end devices that request the tasks and services executed in fog computing. The cloud fog server is a centralized entity that imparts services to the various fog devices and manages them. The core IP network gateway helps to establish the connection between the cloud fog server and the fog devices, and is also responsible for translation services among the different heterogeneous fog devices as well as among the IoT, fog, and cloud layers. The fog device is the main computing component, serving end-user mobile devices in its proximity; it is also called a fog server. Any resource-rich device that provides access to networking, computation, and storage capabilities can act as a fog device [44], for example a proxy server, switch, router, base station, set-top box, or any other computing device. Various new challenges have surfaced in this computing paradigm in the past few years, and researchers are working to make fog computing highly efficient and performance oriented.
Table 1. Basic background comparison of different computing paradigms
Parameter | Cloud computing | Mobile cloud computing | Mobile edge computing | Fog computing | Dew computing | Transparent computing |
|---|---|---|---|---|---|---|
Original proposer | John McCarthy at MIT [52] | M. Satyanarayanan [14] | Proposed by ETSI ISG MEC and a group of six companies: IBM, Vodafone, NTT DoCoMo, Huawei, Intel, Nokia [53] | Cisco [54] | Wang [55] | Zhang [56] |
Promoting organizations | Open Cloud Consortium (OCC), Cloud Computing Interoperability Forum (CCIF), Distributed Management Task Force (DMTF), etc. [57]; Salesforce and Amazon have made it commercially popular | Managed by various cloud-promoting organizations | ETSI ISG MEC [58] | OpenFog Consortium [59] | No specific organization | No specific organization |
Motivating drivers | Easy access to resources via the internet; resources as SaaS, PaaS, IaaS, etc. | Context awareness, augmented reality, StaaS, DaaS, NaaS, etc. | 5G networks, context awareness, IoT, augmented reality, etc. | IoT, wireless sensor and actuator networks, etc. | Internet web access is the main driving factor | Availability of computation at the local level |
Aim | Delivery of data, services, and resources via the internet on demand; its services include SaaS, PaaS, etc. | Mobile cloud computing provides cloud services to mobile devices with the help of remotely deployed mobile apps that efficiently handle user requests | It helps to reduce latency by shifting computation and storage from the remote distributed core network to the wireless network at the edge | Its main purpose is to provide low latency, mobility support, location awareness, and geographical distribution for the IoT | Its objective is to provide services to users using on-premises computers without collaborating with the remote distributed cloud | It allows the user to access heterogeneous OSs and apps and run them at the terminal without knowing the underlying hardware and its implementation details |
Popularity (millions of Google search results as of October 2021) | 341 | 369 | 219 | 28.5 | 5.5 | 338 |
[See PDF for image]
Fig. 5
Architecture of fog computing
By deploying fog-layer devices close to edge devices, fog computing offers customers data storage, processing, and other services. Task and resource scheduling in fog computing has grown in popularity as a research topic. In [60], the researchers use an adaptive multi-objective optimization task scheduling approach for fog computing (FOG-AMOSM), which is well suited to the multi-objective task-scheduling problem in fog computing. A multi-objective task scheduling model is created that uses the total execution time and task resource cost in the fog network as the optimization targets of resource allocation. Since the target model is a Pareto-optimal solution problem, an enhanced multi-objective evolutionary heuristic algorithm and multi-objective optimization theory can be used to find the global optimal solution. Additionally, the neighborhood is adaptively changed in accordance with the current task scheduling population to achieve a better distribution; this avoids the issue where the neighborhood value imposed by the neighborhood policy in the multi-objective algorithm degrades the distribution of the task scheduling population. The algorithm attempts to solve the multi-objective cooperative optimization problem in fog computing task scheduling by resolving the non-inferior solution set of the utility function index. The findings demonstrate that, in terms of total job execution time, resource cost, and load dimensions, the suggested method performs better than previous methods [60].
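Since the scheduling objective above is Pareto-based (execution time versus resource cost), a small sketch of the underlying dominance test may help. This is a generic non-dominated filter, not the FOG-AMOSM algorithm itself; the candidate task-to-node plans and their objective values are made-up illustrations.

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly better
    in at least one; objectives here are (total time, resource cost), minimised."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(schedules):
    """Keep only the non-dominated (Pareto-optimal) scheduling plans."""
    return [s for s in schedules
            if not any(dominates(o["obj"], s["obj"]) for o in schedules if o is not s)]

candidates = [
    {"plan": "all-to-cloud", "obj": (12.0, 3.0)},
    {"plan": "all-to-fog",   "obj": (6.0, 7.0)},
    {"plan": "split-50-50",  "obj": (7.5, 4.5)},
    {"plan": "split-80-20",  "obj": (9.0, 6.5)},   # dominated by split-50-50
]
for s in pareto_front(candidates):
    print(s["plan"], s["obj"])
```

An evolutionary algorithm such as the one referenced above searches the space of plans while maintaining an archive of this Pareto front and adapting the neighborhood to spread solutions along it.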
Transparent Computing
The past 15 years of development in remote distributed computing led to the emergence of cloud computing, big data, IoT, and edge computing, which have changed the core functions of the computer and the Internet from computing and communicating to fetching, storing, examining, and using various data and services. With the rapid growth of mobile devices and communication technology, users access different services via the mobile internet, such as 4G, 5G, Wi-Fi, cellular, and ad hoc networks. The drastic growth in mobile devices and software services has therefore motivated the computing paradigm to shift its focus from PCs to resource-rich mobile devices. However, these mobile devices and their networks pose new challenges such as privacy, user dependency, mobility, portability, and immediacy [61, 62]. In the era of the mobile internet, server-centric computing paradigms such as cloud computing offer only a partial solution to some problems; in other words, cloud computing and other server-centric paradigms solve problems only from the server and network perspective, not from the perspective of users and services.
Transparent computing, proposed in 2004 [63], can be one of the solutions to mitigate the problems of server-centric computing. In this computing, all data and software, including user information and different OSs and apps, are stored at the server, but computing is performed at the user terminals [64, 65]. It allows the user to access heterogeneous OSs and apps and run them at the terminal without knowing the underlying hardware and its implementation details [66]. Therefore, in transparent computing, data storage happens on the server but data computation is performed at the terminal node, whereas in cloud computing both data storage and computation are performed at the server. Transparent computing provides four main advantages: it reduces the complexity and cost of terminals, improves the user experience, provides high security for terminals, and facilitates cross-platform capability [67, 68]. Figure 6 compares the traditional computer architecture with the transparent computing architecture. Server-centric computing paradigms such as cloud computing are based on parallel virtualization and keep data in the cloud, whereas transparent computing emphasizes data storage on servers and computation on terminals, with streamed execution and heterogeneous service support for heterogeneous terminals. In essence, transparent computing extends the bus transmission found in traditional computer architecture to the network, as shown in Fig. 6.
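The storage-on-server, execution-on-terminal split can be pictured with a very small sketch: the terminal streams program blocks from the server, caches them, and runs them locally. The repository URL, block names, and the use of plain HTTP are hypothetical; real transparent computing streams OS and application images below the file-system level rather than Python source.

```python
import urllib.request

REPO = "http://ts-server.example.com/blocks/"   # hypothetical block repository
block_cache = {}                                # terminal-side cache of streamed blocks

def load_block(name: str) -> bytes:
    """Stream a program/data block from the server, caching it locally so that
    repeated execution does not need another network round trip."""
    if name not in block_cache:
        with urllib.request.urlopen(REPO + name, timeout=5) as resp:
            block_cache[name] = resp.read()
    return block_cache[name]

def run_app(entry_block: str) -> None:
    """Execute the fetched block on the local terminal: storage stays on the
    server, while CPU cycles are spent on the user's own device."""
    code = load_block(entry_block).decode("utf-8")
    exec(compile(code, entry_block, "exec"), {"__name__": "__main__"})

# run_app("calculator.py")   # illustration only
```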
[See PDF for image]
Fig. 6
Comparison of architecture of traditional computing and transparent computing
Dew Computing
The idea of dew computing was introduced in 2012 [69], but its first fully described architecture was proposed in 2015 [55]. The architecture of dew computing is shown in Fig. 7. The basic idea behind dew computing is to keep websites available even when no internet connection is available to the user. In this client–server architecture, a web server runs on the local user machine and acts as the dew server, allowing access to the website without the internet. This is possible because the user's data is stored not only at the remote cloud but also on the local user machine. In dew computing, the processing is distributed between the local computer and the centralized server. The formal definition proposed in [70] is: "Dew computing is a software organization model for PCs in the cloud computing era, which strives to fully realize the potential of PCs and cloud computing services. In the dew computing paradigm, software is organized according to the cloud-dew architecture. Local computers can provide rich functionality independently of cloud services and can also collaborate with cloud services".
[See PDF for image]
Fig. 7
Architecture of dew computing
Skala et al. [71] have proposed a hierarchical dew computing architecture in which the dew layer is located at the ground layer, under the cloud and fog layers, but works in parallel with the edge layer. The dew layer serves PCs or other computing devices and user data, whereas the edge layer mostly handles sensor-generated data or data from light mobile devices, which is processed at the edge server, a resource-rich mobile device. Dew computing mainly focuses on three areas: high equipment efficiency, high productivity with respect to user requests, and information processing. The working architecture of dew computing is shown in Fig. 7.
In this architecture, the cloud server is connected to locally installed dew servers via the internet. The client machine accesses content from the remote cloud if the internet is available; otherwise, the content is accessible via the local web server deployed on the machine. Both the cloud and the local web server hold identical database copies. Hence, the main advantage of dew computing is that websites and other content remain easily available without the internet [3]. As discussed above, the dew layer works in parallel with the edge layer in proximity to end users, and at this layer a cloudlet can be used as a dew server. There are many frameworks that support computing at the dew server, such as the profit maximization incentive mechanism (PMIM) framework, the efficient multi-user computation offloading (EMUCO) framework, and the combinational auction service provider selection (CASPS) framework. Among these, the PMIM framework is the latest and is considered the main reference framework for comparison with the proposed framework at the dew-server level of computing. These frameworks are considered for result comparison with the proposed framework, as shown in scenario 3 of Sect. 5.3 [72].
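The cloud-dew access pattern described here (try the remote cloud first, fall back to the identical local copy when offline) can be sketched in a few lines. The URLs below are placeholders (a hypothetical cloud site and a dew server assumed to listen on localhost); the sketch is not taken from any of the cited frameworks.

```python
import urllib.request

CLOUD_URL = "https://cloud.example.com/site/index.html"   # hypothetical remote copy
DEW_URL   = "http://127.0.0.1:8080/site/index.html"       # local dew-server copy

def fetch_page() -> str:
    """Prefer the cloud copy; if the internet (or the cloud) is unreachable,
    serve the synchronised copy from the locally installed dew web server."""
    for url in (CLOUD_URL, DEW_URL):
        try:
            with urllib.request.urlopen(url, timeout=3) as resp:
                return resp.read().decode("utf-8")
        except OSError:
            continue        # no connectivity, or this server is not running
    raise RuntimeError("page unavailable from both the cloud and the dew server")
```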
Comparative Analysis
Figure 8 shows the architectural comparison of various post-cloud computing paradigms, including cloud computing. This figure also indicates the different tiers of computing involved in serving the user. Tables 1 and 2 show the basic background and a generalized comparison of these computing paradigms, respectively.
[See PDF for image]
Fig. 8
Architectural comparison of different computing paradigms
Table 2. Generalized comparison of different computing paradigms
Parameter | Cloud computing | Mobile cloud computing | Mobile edge computing | Fog computing | Dew computing | Transparent computing |
|---|---|---|---|---|---|---|
Architecture layer | Two layers | Three layers | Three layers | Three layers | Three layers | Two layers |
Computation or storage models | Centralized | Partially centralized | Partially centralized or device to device | Partially centralized | Device to device | Device to device |
Execution of computation | Data center | At devices like cloudlet, cloud | Network edge, adjacent device, edge server | Network device like router etc. | Adjacent device | Adjacent device |
Sequence of execution of computation | Serial | Serial or parallel | Serial or parallel | Serial or parallel | Serial or parallel | Serial or parallel |
Target user | Internet users | Thin mobile device users | Sensors and mobile device users | Mobile device users | Internet users | Mobile device user |
Distance of computation from users | Far and remote from users | Close to user | Very close to user | Close to user | Very close to user | Close to user |
Resources | High storage and powerful computational capacities resources | Limited storage and moderate computing resource | Little storage and moderate computing resource like radio access points, base stations etc. | Limited storage and moderate computing resource like: gateways, access points, routers etc. | Little storage and low computing resource of premises computers | Limited storage and moderate computing resource of an organizations |
Operating environment | Large warehouse with cooling infrastructure | May be indoor or outdoor | May be indoor or outdoor | May be indoor or outdoor | May be indoor or outdoor | May be indoor or outdoor |
Geographical distribution | Centralized | Partially centralized/Distributed | Distributed | Distributed | Small centralized | Distributed |
Coverage area | Global | Global | Local near to edge of network | Local | Local | Local or wider |
Types of services | Facilitates global information | Facilitates global and local information | Facilitates local information | Facilitates global and local information | Facilitates local information | Facilitates local information |
Service location | Across internet | Within local network like LAN etc. | Within the edge network of sensor and other thin mobile devices | Within wider localized network | Very localized network | Within local network like LAN etc. |
Detailed Comparison
This section presents the detailed technical comparison of the different computing paradigms with respect to various parameters such as resources, time, storage, mobility, computation, connectivity, users, application, distance, and services. The comparison is given in Table 3, which helps to identify the drawbacks of the existing computing techniques.
Table 3. Detailed technical comparison of different computing paradigms
Parameter | Cloud computing | Mobile cloud computing | Mobile edge computing | Fog computing | Dew computing | Transparent computing |
|---|---|---|---|---|---|---|
Resources | ||||||
Bandwidth | No | Yes | Yes | Yes | Yes | Yes |
Participating nodes | Variable | Transient | Constantly dynamic | Constantly dynamic | Transient | Transient |
Power source | Direct power | Battery, direct power, or any green energy | Battery, any green energy | Battery, direct power, or any green energy | Battery, direct power | Battery, direct power |
Power consumption | High | Moderate | Very low | Low | Very low | Low |
Space required for deployment | Warehouse-size building | Manageable space required; can be installed outdoors | Very little space required; installable in the existing environment | Very little; can also be installed outdoors on existing infrastructure | Very little space required; installable in the existing environment | Very little; can also be installed outdoors on existing infrastructure |
Number of users/devices | Tens of millions to billions | A few billion | Tens to hundreds of billions | Tens of billions | A few million | A few million |
Number of server nodes | Few | Large | Very large | Very large | Large | Large |
Storage | ||||||
Storage capacity | High | Medium | Limited | Moderate | Limited | Limited |
Permanency of storing data | Permanent | Partially permanent | Transient | Partially permanent | Partially permanent | Transient |
Time | ||||||
Latency | High | Moderate | Low | Medium | Low | Moderate |
Delayed-jitter | High | Moderate | Low | Moderate | Low | Low |
Real-time interaction | Limited | Limited | Supported | Supported | Limited | Limited |
Response time | Seconds to minutes | Seconds to minutes | Milliseconds | Milliseconds | Milliseconds | Seconds to minutes |
Network latency | High | Less | Very less | Less | Very less | Less |
Mobility | ||||||
Support for mobility | Limited | Partially supported | Supported | Supported | Moderate support | Moderate support |
Node mobility | Very low | Frequent | Slightly frequent | Highly frequent | Slightly frequent | Frequent |
Computation device mobility | No | Yes | Yes | Yes | Yes/No | Yes |
Control | ||||||
Management | Centralized | Distributed/ Centralized | Distributed | Distributed/ Centralized | Distributed | Distributed |
Nature of failure | Predictable | Partially predictable | Highly diverse | Highly diverse | Manageable | Manageable |
Control mode | Centralized/ Layering | Distributed/ Layering | Distributed/ Layering | Distributed/ Layering | Distributed | Distributed |
Attack on data enroute | High probability | Low probability | Very low probability | Very low probability | Very low probability | Low probability |
Security | Yes | Yes | Yes/No | Yes | Yes/No | Yes/No |
Cost and energy | ||||||
Computation cost | High | Moderate | Very low | Low | Very low | Moderate |
Cooling cost | High | Moderate | Very low | Very low | Very low | Moderate |
Price of each device | 1300–1500 USD | 50–200 USD | 10–100 USD | 50–200 USD | 50–200 USD | 50–200 USD |
Bandwidth utilization cost | No | Yes | Yes | Yes | Yes | Yes |
Device energy consideration | No | Yes | Yes | Yes | Yes | No |
Computation | ||||||
Computation device | Powerful server system | Resource rich mobile devices & remote cloud | Edge server | Any device with computation power | Dew server | Any mobile device with computation power |
Computation capacity | High | Moderate | Very low | Moderate | Low | Low |
Virtualization technology | Hypervisor/Container | Hypervisor/Container | Hypervisor/Container | Hypervisor/Container | Container | Container |
Main computation element | Cloud | Base station server, cloudlet etc. | MEC server | Any device with the capabilities of computation, storage, and network adapter | Any computing device like mobile phones, dew server etc. | Any computing device |
Connectivity | ||||||
Connectivity from users with internet | High speed with wireless and wired combination | Wired/wireless | Mostly wireless | Mostly wireless | Wired/ wireless | Wired/ wireless |
Deployment environment | Centralized remote server | Within the network | Network edge | Edge and near edge | User device | Within the network |
Network connectivity | No | Yes | Yes | Yes | Yes/No | Yes |
Connection to the cloud | Yes | Yes | Yes or No | Yes | No | Yes or No |
Application | ||||||
Application type | Non-latency aware | Partially latency aware | Latency aware | Latency aware | Latency aware | Partially latency aware |
Real time application handling | Difficult | Manageable | Smartly handle | Achievable | Achievable | Difficult |
Support company | Large internet service company | ISP with local vendors | ISP with local vendors | Small operator and equipment manufacturers | Small operator | Small operator and equipment manufacturers |
Application | Central repository, data storage, SaaS, PaaS, IaaS etc. | Distributed access of content and facilitates, NaaS, DaaS, StaaS etc. | Augmented reality, intelligent video acceleration, IoT | IoT, smart grid, internet of vehicles | Web browsing | Local computation |
Service | ||||||
Service access | Through the center | Through the center and mobile devices | On the edge or handheld devices | On the edge or handheld devices | Handheld devices | Handheld devices |
Main content generator | Humans | Humans/Devices | Sensors/Devices | Devices/Sensor/Humans | Humans/Devices | Humans/Devices |
Content generation | Central location | Distributed | Any where | Anywhere | Distributed | Distributed |
Content consumption | End devices | End devices | End users | End devices | End devices | End devices |
Availability | 99.99% | Transient and volatile | Transient and volatile | Highly volatile/highly redundant | Transient and volatile | Less than cloud but better than the other computing paradigms |
Context awareness | No | Yes | Yes | Yes | Yes | Yes |
Location awareness | No | Yes | Yes | Yes | Yes | Yes/No |
Distance | ||||||
Distance between client and server | Multiple hops | Multiple hops | Mostly one hop | Mostly one hop | One hop | Mostly one hop |
Number of intermediate hop | Multi | Multi hop | One hop | One/few | One hop | One/few |
Users | ||||||
Types of users | Stationary/Mobile | Mobile | Smart sensor-based devices | Stationary/Mobile | Stationary/Mobile | Stationary/Mobile |
Target user | Common internet users | Mobile users | Sensors, mobile users | Mobile users | Mobile users/ PC users | Mobile Users |
Usage of virtualized environment | Yes | Yes | Yes | Yes | No | No |
Usage of end device | Yes | Yes | No | Yes | Yes | Yes |
Miscellaneous | ||||||
Choice of computation | No | No | No | No | No | No |
Scalability | No | Yes/No | Yes/No | Yes/No | No | No |
Motivation | No | Yes/No | Yes/No | No | No | No |
Utility driven | No | No | No | No | No | No |
Drawbacks of Existing Distributed Computing Paradigms Inherited from the Cloud
The detailed comparison discussed in the previous subsection points out the weak areas of these computing paradigms; the most significant ones are considered here, and they are addressed by the proposed EVACON-Rainsnow computing and its SKYR framework [73]. The SKYR framework is improved in this work to fully support the aforesaid computing. The major drawbacks of the existing paradigms are as follows:
No single computing paradigm supports both localized and remote distributed global computation. Hence, the user has no choice of computation mode, that is, whether the computation is performed locally or remotely at the global level.
No single computing paradigm and its executing framework collectively supports both sensor-based and user-based tasks.
The discussed paradigms do not have a pricing model embedded in their executing frameworks to encourage resource-rich mobile devices to offer services as resource providers.
Existing distributed computing paradigms support neither horizontal nor vertical scalability.
Some of the distributed computing paradigms neither support heterogeneous devices as consumers of computing nor allow them to act as service providers.
The discussed distributed computing paradigms are not utility driven and energy efficient at every level of computation.
None of the discussed computing architectures has four levels of computation.
Most of these seven points are lacking in the six referenced computing paradigms, and a few of the points are lacking in all of them; this motivated us to propose a new four-tier architecture and its improved SKYR framework, as discussed in Sect. 3. Addressing the above seven points is therefore the prime objective of this research work.
Proposed Work
The proposed work is divided into seven subsections: the first presents the complete idea and principle of EVACON-Rainsnow computing, and the second covers the components of the proposed computing. The third presents the working architecture of EVACON-Rainsnow computing, and the fourth discusses the working flow, comprising user-based and sensor-based task execution. The fifth subsection describes the various functions performed by the different layers in the proposed computing. The sixth and seventh subsections discuss the improved SKYR framework [73] and its working flow for executing EVACON-Rainsnow computing, respectively.
Principle of EVACON-Rainsnow Computing
The working principle of the proposed computing is based on the idea of the water cycle, which has four stages: evaporation, condensation, precipitation, and collection. In evaporation, water is converted into vapour by sunlight and moves upward in the environment. Condensation is the process by which the vapour cools from gas to liquid; in the precipitation stage, the cooled water falls from the sky to the earth's surface, and in the collection stage it is collected in water bodies and as groundwater. In the same way, data behaves like water. Accordingly, we propose a data cycle for EVACON-Rainsnow computing whose stages resemble the water cycle: data evaporation, data condensation, data precipitation, and data collection. The first three stages are the most important for the proposed computing, and the paradigm is named EVACON-Rainsnow computing after them. EVA denotes the data evaporation process, in which data is collected from different data storage and computing bodies such as mobile devices, PCs, servers, or sensor-based devices. CON denotes the second stage, the data condensation process, in which the gathered data is processed as required at different levels according to the processing capability of the computing and storage devices. The third important stage is data precipitation, in which the data and services desired by end mobile users are delivered by the federated cloud, cloud, primary cloudlet, secondary cloudlet, and edge server; Rainsnow denotes this third stage. The fourth stage is data collection, which stores data at different computing and storage devices such as the cloud, primary cloudlet, and secondary cloudlet for future use. Hence, there are three main steps, namely data evaporation, data condensation, and data precipitation, and based on these steps the proposed computing is named EVACON-Rainsnow computing. This idea motivated us to propose the mentioned computing and its framework.
The conceptual architecture of EVACON-Rainsnow computing is shown in Fig. 9, in which the five layers denote, from top to bottom, the federated cloud, the cloud, the primary cloudlet, the secondary cloudlet acting as edge server and dew server, and the edge sensor equipment and mobile users. The dotted line in the upward direction denotes data evaporation, in which edge sensor devices, secondary cloudlets, primary cloudlets, and the cloud acquire data and process it for future use. The stage at which data is processed depends on the data and on the processing capability of the computing device. Highly data-intensive computing services evaporate to the cloud, slightly lighter ones are managed by the primary cloudlet and secondary cloudlet, and very light sensor-based data can easily be processed by the edge server, which is a secondary cloudlet. Figure 9 shows four levels of computation: level 1 denotes the secondary cloudlet, which acts as both edge and dew server, level 2 represents the primary cloudlet, which acts as fog server, and levels 3 and 4 denote the cloud and federated cloud computation, respectively.
[See PDF for image]
Fig. 9
Conceptual architecture of EVACON-Rainsnow computing
A user request (marked with an upward purple arrow) is made directly to the primary cloudlet, which is the main computing and controlling device at the local level; it assigns the secondary cloudlet or dew server to process lighter data and service requests. Intensive computing tasks and data requests are forwarded to the cloud and federated cloud at the global level. Solid downward arrows in different colors denote data precipitation. The very first layer that serves the user is therefore the dew and edge layer, in which the secondary cloudlet acts as edge server and dew server. The next layer is the primary cloudlet, which acts as fog server to handle the slightly more complex tasks and sensing requests that are not processed by its subordinates. The primary cloudlet processes the request and assigns an edge server, dew server, or sensor node, based on their capability and availability, to execute the requested task of the end user. If a task cannot be handled or executed at the local level by the primary cloudlet or its subordinates, it is forwarded to the cloud. If the cloud is also unable to process it, other clouds are contacted, forming a federated cloud system to process the request; the processed result is delivered to the end user via the primary cloudlet. Hence, in this four-tier computation, most of the tasks are executed at the local level, which improves latency, bandwidth usage, energy efficiency, performance, and other important parameters. Only tasks that cannot be handled at the local level are forwarded to the global level, at the remote distributed cloud and federated cloud.
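A minimal sketch of this four-level escalation is given below. The capacity thresholds and the notion of a single scalar task load are illustrative assumptions used only to show the routing order; the improved SKYR framework decides on richer criteria such as resource availability, task complexity, and feedback.

```python
def route_task(task_load, sc_capacity=10, pc_capacity=50, cloud_capacity=500):
    """Escalate a task through the EVACON-Rainsnow tiers, preferring the most
    local unit that can still handle it (illustrative thresholds only)."""
    if task_load <= sc_capacity:
        return "level 1: secondary cloudlet (edge/dew server)"
    if task_load <= pc_capacity:
        return "level 2: primary cloudlet (fog server)"
    if task_load <= cloud_capacity:
        return "level 3: remote cloud"
    return "level 4: federated cloud"

for load in (5, 30, 200, 900):
    print(load, "->", route_task(load))
```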
The precipitation of data, by analogy with water, therefore comes in the form of snow, rain, drizzle, fog, and dew. When the federated cloud serves the task request it is considered snow; when the cloud processes it, it is denoted rain; when the primary cloudlet does so, it is marked drizzle; and when the secondary cloudlet handles it, the result is considered fog when it acts as dew server and dew when it acts as edge server. This correlates with the precipitation forms of water: snowfall is a rare event worldwide and occurs at higher altitude, whereas rain and drizzle occur quite often, form at lower altitude, and happen in many parts of the world. Similarly, the fog and dew concepts are already familiar in distributed computing and mean that data precipitates very close to the surface: near the surface and in the air it is considered fog, and on the surface it is considered dew. The same hierarchy of water precipitation is used directly in the proposed work, and the precipitation step is named Rain-Snow; the overall computing is therefore termed EVACON-Rainsnow computing. Figure 10 represents this hierarchy of data precipitation and also shows the relations among vital parameters such as latency and intensity of computation.
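For reference, the naming convention just described can be captured as a simple mapping from the executing tier to its precipitation form in the analogy; this is purely the paper's terminology restated as data.

```python
# Tier that serves the request  ->  its "precipitation" name (Fig. 10)
PRECIPITATION = {
    "federated cloud":                   "snow",     # rarest, farthest from the user
    "cloud":                             "rain",
    "primary cloudlet":                  "drizzle",
    "secondary cloudlet (dew server)":   "fog",
    "secondary cloudlet (edge server)":  "dew",      # closest to the "surface"/user
}
```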
[See PDF for image]
Fig. 10
Forms of precipitates of data in EVACON-Rainsnow computing
Components of EVACON-Rainsnow Computing
As discussed in the previous section, EVACON-Rainsnow computing is an amalgamation of cloud computing, mobile cloud computing, fog computing, edge computing, and dew computing. Cloud computing represents both the cloud and the federated cloud; mobile cloud computing can be replaced with fog computing, as both are considered the same in the proposed work and are catered for by the primary cloudlet. Dew computing is represented by the secondary cloudlet, and edge computing is performed by the secondary cloudlet acting as edge server for sensor-based tasks. This amalgamated proposed computing therefore has the following components or stakeholders.
Federated cloud Federated clouds are remote clouds that interact with each other to serve requests and hence form a federation of clouds. These clouds perform the jobs/tasks of user requests forwarded to them via the primary cloudlet of the respective local network.
Cloud The cloud is a remote distributed data center that serves the end users belonging to a specific service provider. Every service provider has its own cloud, with which the primary cloudlet of a specific local edge network communicates directly. In the proposed work, the remote distributed cloud is introduced with two components: the cloud service unit (CSU) and the cloud broker unit (CBU). The cloud service unit provides services to the user to accomplish their tasks and delivers the desired result via the primary cloudlet, whereas the cloud broker unit helps to establish communication between the cloud service unit and the primary cloudlet of the same internet service provider (ISP), and also facilitates communication among the cloud service providers of different ISPs via their respective cloud broker units. Therefore, the cloud broker unit can be considered the intermediary that establishes communication between the primary cloudlet and its own or another cloud service unit in EVACON-Rainsnow computing.
Primary cloudlet The primary cloudlet is a resource-rich mobile device available in proximity to the end user within its own local edge network to provide tasks and services. Although the cloudlet belongs to mobile cloud computing, it can also be considered the computing element for fog computing; it therefore manages the functionality of both mobile cloud computing and fog computing.
Secondary cloudlet This is one of the most important elements in the proposed computing; it has efficient computing capability but is inferior to the primary cloudlet. The secondary cloudlet acts as an edge server or a dew server according to the requirements of the system: if a request from a sensing device needs to be processed, it acts as an edge server; otherwise, for any other user task, it acts as a dew server to process the service requests of PCs or mobile devices. The primary cloudlet controls and manages the working of these secondary cloudlets and assigns user tasks and sensor-based tasks to them for execution. This working is the same as in the SKYR framework [73]. The secondary cloudlets and primary cloudlets are heterogeneous devices and collectively form the edge network in proximity to the end users.
Sensor devices These are the sensing components installed in the environment to sense real-time parameters relevant for computing. These devices are directly under the control of a secondary cloudlet, which acts as an edge server for them. The sensors sense the data and forward it for processing at the edge server, which processes it and either stores it or forwards it to the primary cloudlet or the centralized remote cloud for permanent storage and future use.
User The end user comprises thin mobile devices that request services and tasks from the primary cloudlet, which assigns the best-matched secondary devices, in this scenario acting as dew servers. The latter process the task request and hand over the desired result to the end-user mobile devices. In response, the end user submits feedback to the primary cloudlet on the services provided by the secondary devices. This feedback is retained by the primary cloudlet for consideration in future tasks, as it helps in picking the best-suited secondary cloudlet [73]. If a user task cannot be accomplished by either the secondary or the primary cloudlet, the primary cloudlet requests the centralized remote cloud to execute it. If that cloud is also unable to handle the request, it contacts other remote centralized clouds and hence forms the federated cloud. These federated clouds collectively execute the requested task, and the end result is forwarded to the end user via the respective primary cloudlet of the local network.
Working Architecture of EVACON-Rainsnow Computing
This section elaborates on the EVACON-Rainsnow computing discussed in Sect. 3.1. Figure 11 shows the component-level working architecture of EVACON-Rainsnow computing. This architecture shows components such as the cloud service unit, the cloud broker unit, the primary cloudlets denoted PC1, PC2, …, PCn, the secondary cloudlets denoted SC1, SC2, …, SCn with respect to their respective primary cloudlets, the sensor devices represented by green circles with respect to the secondary cloudlets, and the end user. All these components are discussed in detail in Sect. 3.2. In Fig. 11 there are two broad clouds, one represented in gray and the other in blue. The gray cloud represents the remote distributed federated cloud, which is a network of the various clouds of different ISPs.
[See PDF for image]
Fig. 11
Component level working architecture of EVACON-Rainsnow computing
These clouds can use each other's services to fulfil the tasks and requests of end users of different ISPs. Similarly, the blue cloud represents the local network cloud of the different networks; these can also communicate with each other via the primary cloudlet, the main controlling unit of the local network cloud. Initially, the end user sends the task request to its nearest or best-suited primary cloudlet. The primary cloudlet then checks whether the task can be computed locally by its secondary cloudlets; if not, it checks with other primary cloudlets of the same ISP. If the task request cannot be computed locally, it is forwarded to the respective cloud. The primary cloudlet is the delivery provider to the end user, and in return it asks for payment and feedback for the task done. The payment is shared with the entity that served the task, either a secondary cloudlet or the remote distributed cloud, and the feedback is stored for future use in selecting the best-matched resource provider. In a sensor-based task, however, the event is initiated by the primary cloudlet, either under the guidance of the remote cloud or on its own. For such tasks, the primary cloudlet designates the best-suited secondary cloudlet as the edge server to process the sensed data of the various associated sensor devices. The processed data is stored by the primary cloudlet or by the cloud for future use. Sometimes an end user at a particular location asks for some sensor data analysis, which is performed by the edge server (secondary cloudlet); the primary cloudlet then delivers the synthesised and processed sensor data to the end user and, in return, asks for payment and feedback.
Therefore, EVACON-Rainsnow computing supports both sensor-based computing and end-user task/service-request-based computing. It also facilitates remote distributed computing by extending cloud and federated cloud computing, and provides local computing using the principles of fog computing, mobile cloud computing, mobile edge computing, and dew computing. Hence, it improves bandwidth usage, reduces latency, promotes real-time computing, increases reliability and scalability, and handles pricing issues. Figure 12 represents the working at the federated cloud level, in which the cloud broker unit (CBU) of one ISP's cloud contacts the cloud broker unit of another ISP's cloud, and that CBU then contacts its cloud service unit (CSU). Finally, the finalized task is handed over to the CBU of the first cloud for delivery to the end user via the primary cloudlet. The main task or service request, along with its requirements, is provided by the primary cloudlet to its CBU, as shown in Fig. 12. A CBU has three subcomponents: a cloud scheduler, a deployment plan, and a cloud manager. The cloud scheduler schedules the primary cloudlet's task, based on its requirements, to the associated cloud service unit using a definite set of algorithms. The deployment plan ensures the efficient and smooth delivery of the finalized task to the primary cloudlet. The cloud manager manages the working of the associated CSU and handles requests to, and deliveries from, the CBUs of other ISPs.
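The brokering behaviour of the CBU can be sketched as below: serve the task on the own CSU if possible, otherwise ask peer CBUs of other ISPs. Class and method names, and the single capacity number, are illustrative assumptions rather than the paper's API.

```python
class CloudBrokerUnit:
    """Toy CBU: the 'cloud scheduler' role maps to handle(), the 'cloud manager'
    role to the peer lookup, and the 'deployment plan' to returning the result."""

    def __init__(self, name, csu_capacity, peers=None):
        self.name = name
        self.csu_capacity = csu_capacity      # what the attached CSU can execute
        self.peers = peers or []              # CBUs of other ISPs' clouds

    def handle(self, task_load):
        if task_load <= self.csu_capacity:    # schedule on the own CSU
            return f"{self.name}: executed on own CSU"
        for peer in self.peers:               # otherwise broker to the federation
            result = peer.handle(task_load)
            if result is not None:
                return f"{self.name}: brokered -> {result}"
        return None                           # nobody in the federation can serve it

cbu_b = CloudBrokerUnit("ISP-B CBU", csu_capacity=800)
cbu_a = CloudBrokerUnit("ISP-A CBU", csu_capacity=300, peers=[cbu_b])
print(cbu_a.handle(600))    # too large for ISP-A's CSU, served via ISP-B
```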
[See PDF for image]
Fig. 12
Intercommunication at Federated cloud in EVACON-Rainsnow computing
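The brokering flow of Fig. 12 can be summarized by the following minimal sketch, in which the three CBU sub-components are modelled as three small methods. The class layout and method names are assumptions for this example, not the framework's actual API.

```python
class CloudBrokerUnit:
    """Sketch of a CBU with the three sub-components named in the text:
    cloud scheduler, cloud manager, and deployment plan."""

    def __init__(self, csu, peer_cbus):
        self.csu = csu              # associated cloud service unit (CSU)
        self.peer_cbus = peer_cbus  # CBUs of clouds belonging to other ISPs

    def schedule(self, task):
        # Cloud scheduler: place the task on the local CSU if it can serve it.
        if self.csu.can_execute(task):
            return self.csu.execute(task)
        return self.manage_peer_request(task)

    def manage_peer_request(self, task):
        # Cloud manager: delegate the task to the CBU of another ISP's cloud.
        for broker in self.peer_cbus:
            if broker.csu.can_execute(task):
                return broker.csu.execute(task)
        raise RuntimeError("no cloud service unit available for this task")

    def deploy(self, task, primary_cloudlet):
        # Deployment plan: return the finalised result to the requesting
        # primary cloudlet for delivery to the end user.
        result = self.schedule(task)
        primary_cloudlet.deliver(result)
```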
Working Flow of EVACON-Rainsnow Computing
The computing principle discussed above and the logical workflow of EVACON-Rainsnow computing are shown in Fig. 13a and b. Two types of tasks are handled by the proposed computing: user-based tasks and sensor-based tasks. A user-based task is marked 'U' and is initiated by the user to the primary cloudlet, whereas a sensor-based task is initiated either by the primary cloudlet itself or by the CBU of the cloud and is marked with the prefix 'S' in the step-numbering sequence. In both cases, the primary cloudlet analyses aspects of the task such as resource availability, computational complexity, and other factors to decide whether the task should be handled by local devices or by the remote cloud. Sensor-based tasks are mostly executed at the local level, whereas user-based tasks may be executed either locally or remotely at the cloud.
[See PDF for image]
Fig. 13
a Working flow of EVACON-Rainsnow computing for user-based task. b Working flow of EVACON-Rainsnow computing for sensor-based task
The nomenclature for marking the task-execution steps is as follows: the first field (reading left to right) is the sequence number of the execution step, the second field indicates whether the task is user based (U) or sensor based (S), the third field indicates local execution at the primary cloudlet (P) or remote execution at the cloud (C), and the fourth field indicates whether the step is executed by the unit's own executing component (a) or by the executing component of another provider (b), at either the local or the remote level. For example, 4.S.P.(a) denotes step 4 of a sensor-based task, analysed at the primary cloudlet and executed at a secondary cloudlet associated with that primary cloudlet.
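The nomenclature is mechanical enough that it can be parsed automatically; the small helper below illustrates the four fields just described. It is only a reading aid, not part of the proposed framework.

```python
import re

def parse_step(code: str) -> dict:
    """Parse a step label such as '4.S.P.(a)' into its four fields
    (sequence, task type, execution level, executing component)."""
    m = re.fullmatch(r"(\d+)\.([US])(?:\.([PC]))?(?:\.\(?([ab])\)?)?", code)
    if m is None:
        raise ValueError(f"unrecognised step label: {code}")
    seq, kind, level, executor = m.groups()
    return {
        "sequence": int(seq),
        "task": "user" if kind == "U" else "sensor",
        "level": {"P": "primary cloudlet (local)", "C": "cloud (remote)"}.get(level),
        "executor": {"a": "own component", "b": "other provider"}.get(executor),
    }

# Example: parse_step("4.S.P.(a)")
# -> {'sequence': 4, 'task': 'sensor',
#     'level': 'primary cloudlet (local)', 'executor': 'own component'}
```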
User Based Task
Task execution is explained in this subsection one case at a time: first the user-based task and then the sensor-based task. In the case of a user-based task, the user sends a request to the primary cloudlet (step 1.U in Fig. 13). The primary cloudlet then checks whether the task can be executed locally by its associated secondary cloudlets (step 2.U); if not, it checks secondary cloudlets associated with other primary cloudlets within its reach and availability. If the task still cannot be executed locally, it is assigned to the cloud broker unit of the remote distributed cloud of the same ISP; in other words, localized execution always has priority. The CBU then checks whether its cloud service unit can execute the assigned task (step 3.U); if not, it searches for another cloud broker unit whose CSU can facilitate the task.
Local execution and remote execution follow the same steps at the unit's own secondary cloudlet, another secondary cloudlet, the unit's own CSU, and another CSU, denoted 4.U.P.(a), 4.U.P.(b), 4.U.C.(a), and 4.U.C.(b) respectively, following the nomenclature discussed above. Parallel steps 4, 5, 6, 7, 8, and 9 are performed at the respective computing units for localized and remote computation and represent decomposing the task, estimating the task, scheduling the task, sending the task to the computing unit for execution, executing the task, and sending the result to the receiving unit, respectively. Parallel execution means that different tasks are executed in parallel, each following the steps above. If the task is executed at the cloud, its CBU delivers the result to the primary cloudlet; this step is marked 10.U.C.a. The next step, marked 11.U.P, is combining the results if some sub-tasks were executed locally by different secondary cloudlets and others by the remote cloud; if the task was executed by a single unit, this step can be skipped. The next step, marked 12.U.P, is returning the result to the end user via the primary cloudlet. Steps 13 and 14 represent the payment and feedback steps, which the end user performs after the task is completed. Feedback can be retained at both the primary cloudlet and the remote cloud for future use, while the payment is distributed by the primary cloudlet among all service-providing units that facilitated the completed task.
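A compact sketch of the per-unit pipeline (steps 4–9) is given below; the method names mirror the step names in the text and are illustrative assumptions only.

```python
def run_steps_4_to_9(task, unit):
    """Sketch of steps 4-9 as performed at one computing unit
    (a secondary cloudlet or a CSU)."""
    subtasks = unit.decompose(task)                     # step 4: decompose task
    estimates = [unit.estimate(t) for t in subtasks]    # step 5: estimate task
    plan = unit.schedule(subtasks, estimates)           # step 6: schedule task
    results = [unit.execute(t) for t in plan]           # steps 7-8: dispatch and execute
    return unit.send_result(results)                    # step 9: send result to receiving unit
```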
Sensor Based Task
The same nomenclature is used to mark the execution steps of a sensor-based task. The major difference between a user-based task and a sensor-based task is the origin of the task: a sensor-based task originates from the primary cloudlet or from the remote distributed cloud in most scenarios, although in a few cases it may be requested by an end user. In all three scenarios (primary cloudlet, cloud, or end user), the assignment of the task to the edge server and sensor devices can only be done by the primary cloudlet. The initial step of a sensor-based task is therefore 1.S.P, in which the primary cloudlet receives a request, from the cloud, from an end user, or from itself, to initiate the sensor-based task. In the second step, marked 2.S.P, the primary cloudlet analyses the sensor-based task and identifies a suitable secondary cloudlet to execute it. The third step assigns the task to a secondary cloudlet that acts as the edge server; it is marked 3.S.P.a for a secondary cloudlet associated with this primary cloudlet and 3.S.P.b for a secondary cloudlet associated with some other primary cloudlet. Steps 4, 5, 6, and 7 denote data sensing by the sensor devices attached to the respective edge server, data collection from the different sensors by the edge server, synthesis of the collected data, and, finally, data processing by the edge server. After edge processing, step 8 denotes the transfer of the data to the primary cloudlet, which performs the analysis of the processed data in step 9. After the analysis, an action may be taken, the processed data may be saved at the primary cloudlet or at the cloud, or it may be forwarded to the end user if the sensor-based task was requested by one; this is marked as step 10. The feedback and payment steps are required only when the task was requested by a user; if the sensor-based task was initiated by the primary cloudlet or the cloud, they are skipped. Therefore, EVACON-Rainsnow computing handles both user-based and sensor-based tasks efficiently, improving the prospects of both remote distributed computing and localized mobile computing, and it also supports horizontal and vertical scalability when executed with the proposed improved SKYR framework.
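The sensor-side pipeline (steps 4–9) can likewise be summarized in a short sketch; all method names are assumptions made for this example.

```python
def run_sensor_task(edge_server, primary_cloudlet, sensors):
    """Sketch of the sensor-based pipeline handled by the edge server
    (a secondary cloudlet) and the primary cloudlet."""
    readings = [s.sense() for s in sensors]            # step 4: data sensing
    collected = edge_server.collect(readings)          # step 5: data collection
    synthesised = edge_server.synthesise(collected)    # step 6: data synthesis
    processed = edge_server.process(synthesised)       # step 7: data processing
    primary_cloudlet.receive(processed)                # step 8: transfer to primary cloudlet
    return primary_cloudlet.analyse(processed)         # step 9: data analysis
```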
Considering the stakeholders involved, this is a four-tier architecture with four levels of computation, as shown in Fig. 14. These levels are marked lower, medium, higher, and highest: the lower level represents both edge and dew computing, the medium level denotes fog and mobile cloud computing, the higher level represents cloud computing, and the highest level represents federated cloud computing. Edge computing and dew computing are placed at the same (lowest) level because the computing element in both is the secondary cloudlet, which cannot be involved with a user-based task and a sensor-based task simultaneously. In the proposed work, edge computing is mainly considered for sensor-based tasks and dew computing for user-based tasks; hence these two are placed at the lower level of the hierarchy. At the medium level, the primary cloudlet is the computing element, supporting both fog computing and mobile cloud computing; it is considered medium because it is still local to the end user but not in the user's immediate vicinity. The higher level denotes the cloud as its computing element and supports remote distributed computing. The highest level of the hierarchy is the federated cloud, which executes complex tasks that cannot be managed by a single central cloud.
[See PDF for image]
Fig. 14
Four tier hierarchical structure of EVACON-Rainsnow computing
A detailed example of the four-tier hierarchy discussed above is shown in Fig. 15. In this figure, the edge layer of edge servers and the dew layer of dew servers are kept at the same level, and both interact with the primary cloudlet at the fog/MCC layer. These two layers are considered local service providers: the edge layer serves sensor-based tasks and the dew layer serves user-based tasks. The fog/MCC layer manages the lower-layer services and provides more complex services that the lower layers cannot handle; hence it also provides system-management services. The cloud layer and the federated cloud layer provide the most complex services. The services provided by the different layers of the EVACON-Rainsnow hierarchy are shown in Table 4.
[See PDF for image]
Fig. 15
Working example of EVACON-Rainsnow computing
Table 4. Services in different layers [74]
Edge services | Dew services | Fog services | Cloud services | Federated cloud services |
|---|---|---|---|---|
Data sensing | Establish communication with device and PC | Interaction with SC and devices | Virtualization | Virtualization |
Data cleansing | Request evaluation | Data acquisition | Data integration | Data integration |
Edge storage | Fetch data | Data reduction | Cloud storage | Cloud storage |
Edge analysis | Facilitate data | Fog storage | Big data analysis | Big data analysis |
Iterative interaction | Task scheduling | Asset optimization | Task management | Task management |
Simulation | Execute services | Process planning | Supply demand matching | Supply demand matching |
Real time monitoring | Store result | Proactive MRO | Personalized customization | Intercloud interaction |
Disturbance found | Iterative interaction | Smart scheduling | Smart design collaboration | Facilitates CBU request |
Real time control | Monitoring services | Asset management | Supply chain collaboration | Resource chain management |
Fault control | | Resource delegation | Commerce collaboration | Commerce collaboration |
Algorithm to Execute User-Based and Sensor-Based Task
The tasks discussed above are first analysed by the Task-Segregation () algorithm and then by the Scalability () algorithm. The Task-Segregation () algorithm checks the granularity (size) of a task based on its execution time, computational complexity, and the other factors taken into consideration. This filtration helps segregate tasks for the different computing environments: coarse-grained tasks are forwarded to the mobile computing or cloud computing environment, while fine-grained tasks are executed in the edge computing environment. The Task-Segregation () algorithm is outlined below.
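The algorithm listing itself appears as a figure in the original article; the following is only a minimal Python sketch consistent with the description above. The thresholds, attribute names, and the particular granularity measure are assumptions made for this illustration.

```python
def task_segregation(tasks, size_threshold=1.0, yield_threshold=0.5):
    """Sketch of Task-Segregation(): coarse-grained tasks go to the remote
    cloud / federated cloud, fine-grained tasks stay in the local cloud
    (MCC, MEC, fog, dew). Threshold values are illustrative only."""
    remote_cloud, local_cloud = [], []
    for task in tasks:
        # Granularity estimated from expected execution time and complexity.
        granularity = task.exec_time * task.complexity
        if granularity > size_threshold and task.yield_availability >= yield_threshold:
            remote_cloud.append(task)   # coarse grain: cloud / federated cloud
        else:
            local_cloud.append(task)    # fine grain: MCC, MEC, fog, dew
    return remote_cloud, local_cloud
```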
In this algorithm, the remote cloud denotes execution in the cloud and federated cloud computing environments, and the local cloud denotes execution in the MCC, MEC, fog, and dew computing environments. The yield factor of availability, already discussed in the SKYR framework, denotes the availability of a resource in a given computing environment.
The Scalability () algorithm helps in analysing and picking potential resources to execute a task. It considers all the parameters of a resource and calculates its scalability factor (Sf) together with its elasticity factor (Є). A job request, when generated, also carries its requirements, so the algorithm calculates (Sf, Є) both for the task and for the available resources. Finally, it generates the list of potential resources that are nearly the best match, scalability-list[i], which the SKYR framework uses for the subsequent execution of the task. The Scalability () algorithm is outlined below; these two algorithms are executed in collaboration with the SKYR framework's algorithms and modules, which improves the framework further.
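As with Task-Segregation (), the full listing is given as a figure in the original article; the sketch below only illustrates the idea of scoring the task and the candidate resources and keeping the closest matches. The scoring formulas and attribute names are assumptions for this example.

```python
def scalability(task, resources, k=5):
    """Sketch of Scalability(): compute (Sf, elasticity) for the task
    requirement and for each candidate resource, then return the nearly
    best-matching resources (scalability_list) for the SKYR framework."""
    # Assumed requirement-side factors, derived from the job request.
    task_sf = task.required_capacity
    task_el = task.required_elasticity

    def distance(resource):
        # Assumed resource-side factors: capacity headroom and elasticity.
        res_sf = resource.available_capacity
        res_el = resource.elasticity
        return abs(res_sf - task_sf) + abs(res_el - task_el)

    scalability_list = sorted(resources, key=distance)[:k]
    return scalability_list
```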
Functions Performed in EVACON-Rainsnow Computing
The prime objective of EVACON-Rainsnow computing is to address the challenges/problems involved with different distributed computing as discussed in detail in Sect. 2.2 and compared in Tables 1, 2 and 3. So, to cater its prime objective, following are the main functions performed in the aforesaid proposed computing as shown in Fig. 16.
[See PDF for image]
Fig. 16
Functionality involved in EVACON-Rainsnow computing
At the most basic level, that of the sensors, the computing manages every aspect of the physical and virtual sensors. At the edge and dew server level, that is, at the secondary cloudlet level, it performs device configuration, storage configuration, connectivity, and the setting up of computation requirements.
The edge and dew server levels are the same; only the execution scenario distinguishes them. At the primary cloudlet level, which acts as the fog server and as the mobile cloudlet facilitating MCC, it performs different functions under different categories. Under device management, it performs device configuration, storage configuration, connectivity, and computational requirements. In the monitoring domain, it facilitates system monitoring, resource demand, and performance prediction. In the pre- and post-processing domain, it executes functions such as data analysis, data filtering, data flow, data trimming, and data reconstruction. At the top level, which covers computation at the cloud and the federated cloud, it performs functions in four categories: storage, resource management, security, and applications. For storage at the cloud, EVACON-Rainsnow computing performs data backup and storage virtualization; for resource management, it performs resource allocation, scheduling, energy saving, reliability, and scalability. Under the security category, it facilitates encryption, privacy, and authentication. Finally, in the application domain, it supports IoT, WSN, CDN, autonomous systems, and traffic networks. This proposed computing therefore performs a broad set of functions, which makes it highly efficient and performance oriented compared with its peer computing paradigms.
Improved SKYR Framework for Executing the EVACON-Rainsnow Computing
This paper discusses the proposed computing and its various aspects in detail, and a framework or model is needed to execute it. This section therefore discusses the improved SKYR framework, which is best suited to executing the proposed computing. The SKYR framework [73, 75, 76] handles user-based and sensor-based tasks at the local level, in edge computing and mobile cloud computing, and at the global level, in the remote distributed cloud. In [76], we proposed a SKYR framework that executes tasks at the federated cloud in the global scenario. The present work adds dew computing at the secondary level of localized execution while retaining the features of the previous versions of the SKYR framework. The proposed improved SKYR framework therefore fully accommodates the proposed EVACON-Rainsnow computing. The detailed architecture of the improved SKYR framework is shown in Fig. 17. In this figure, three remote distributed clouds are connected with each other to form a federated cloud. Each cloud has a few primary cloudlets, each primary cloudlet is connected to many secondary cloudlets, and each secondary cloudlet is associated with various sensors; their marking scheme is shown in Fig. 17. The working of the improved SKYR framework is the same as that of the previous version, the only difference being that the secondary cloudlet also acts as a dew server for user-based task requests and behaves accordingly, while for sensor-based tasks it acts as an edge server.
[See PDF for image]
Fig. 17
Improved SKYR framework to execute EVACON-Rainsnow computing
Logical Flow of Improved SKYR Framework
The working flow of the improved SKYR framework is shown in Fig. 18. This detailed flow covers every execution scenario of EVACON-Rainsnow computing. The model is self-explanatory, as its various functionalities have already been discussed in detail.
[See PDF for image]
Fig. 18
Logical flow of EVACON-Rainsnow computing
Features, Application, and Challenges of EVACON-Rainsnow Computing
This section discusses the features, applications, and challenges of the proposed EVACON-Rainsnow computing. As discussed in the proposed work, the aforesaid computing has various features and advantages and is suitable for different applications, which are discussed here. Table 5 shows a comparative feature analysis of the proposed computing against its peers.
Table 5. Comparative feature analysis of EVACON-Rainsnow computing with its peers
Parameters | Cloud computing | Mobile cloud computing | Edge computing | Fog computing | Dew computing | Transparent computing | EVACON-Rainsnow computing |
|---|---|---|---|---|---|---|---|
Localized execution | No | Yes | Yes | Yes | Yes | No | Yes |
Latency | High | Moderate | Low | Moderate | Low | Moderate | Low |
Complex computation | Allowed | Not allowed | Not allowed | Partially allowed | Not allowed | Not allowed | Allowed |
Heterogeneous devices | Not allowed | Not allowed | Allowed | Allowed | Not allowed | Not allowed | Allowed |
Choice of execution | No choice | No choice | No choice | No choice | No choice | No choice | Choice allowed |
Resource utilization | Allows utilization at cloud | Allows utilization at cloudlet | Allows utilization at edge cloudlet | Allows utilization at base station | Allows utilization at cloudlet | Allows utilization at cloudlet | Allows resource utilization at both edge and cloud |
Controlling of task | Centralized controlled by cloud | Centralized controlled by cloud | Controlled by edge server | Controlled by base station | Controlled by dew server | Centralized controlled | Controlled by edge but managed by federated cloud |
Reliability | Highly reliable | Less reliable | Moderately reliable | Moderately reliable | Moderately reliable | Less reliable | Highly reliable |
On demand services | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
Scalability (Vertical) | No | Yes | No | No | No | No | Yes |
Scalability (Horizontal) | No | Yes | Yes | Yes | Yes | No | Yes |
Ubiquitous and pervasive computation | Yes | Yes | Yes | Yes | Yes | Yes/No | Yes |
Security | Highly secure | Less secure | Less secure | Less secure | Less secure | Less secure | Moderately secure |
IoT enabled sensing task | Not allowed | Not allowed | Allowed | Not allowed | Not allowed | Not allowed | Allowed |
Motivation to resource devices | No motivation | Partial motivation | Partial motivation | No motivation | Partial motivation | No motivation | Have intrinsic pricing model which motivates resource rich devices |
Utility driven | No | No | No | No | No | No | Yes |
Energy efficiency | Efficient for complex Task | Efficient for simple and light task | Efficient for simple and light task | Efficient for complex task | Efficient for simple and light task | Efficient for simple and light task | Efficient for both complex and simple task |
Features of EVACON-Rainsnow Computing
The following are the twelve most important features of EVACON-Rainsnow computing.
Localized execution: This feature allows users to get their tasks executed in their proximity by a secondary cloudlet associated with the respective network provider and ISP. It reduces bandwidth and energy consumption and helps provide a prompt response.
Improved latency: Since the majority of tasks are executed in proximity to the user by a highly capable cloudlet, execution time and bandwidth consumption are reduced, which improves latency.
Reliable and effective execution of complex computing tasks at cloud: As discussed in the architecture of the proposed computing, the cloud and the federated cloud work together to provide highly reliable computation services. Because most tasks are executed by cloudlets in the local network, the cloud and its associated federated cloud have ample capacity to service complex user task requests.
Accommodates heterogeneous devices: This feature allows the proposed computing to use heterogeneous devices both to impart services as cloudlets and to avail services as end users. There is no strict device-compatibility requirement, as this issue is taken care of in the proposed framework.
Choice of execution: The end user can choose whether a task is executed at the local level or at the global remote level, which gives the user greater satisfaction in task execution.
Better resource utilization: Coordinated communication between the cloud and the federated cloud at the remote level, and inter-cloudlet communication at the edge level, allow better resource sharing and task execution, improving overall resource utilization.
Improved control over task execution: Tasks are controlled and managed locally by the primary cloudlet and remotely by the cloud broker unit, which take care of every aspect of a task such as scheduling, resource allocation, and delivery.
Improved reliability: The proposed computing and its executable framework look for capable cloudlets, and if a failure occurs at a local network cloudlet, it is quickly replaced with a new cloudlet to execute the task. Several other such features make it a more reliable computing environment.
On demand: Cloud computing is a metered service, and the proposed improved SKYR framework provides metered service in the same way at the primary cloudlet in the local network. Hence, the proposed computing can be considered an on-demand service, with metering at both the local and the remote level.
Scalability: Scalability is of two types, horizontal and vertical. Horizontal scalability means adding more physical resources or machines to the system, whereas vertical scalability refers to adding features or modules (for example, pricing or utility modules) to the existing machines and resources to improve efficiency. Both types can be achieved: allowing more cloudlets to impart services at the edge and at the remote federated cloud promotes horizontal scalability, while adding more modules to the framework promotes vertical scalability at the federated cloud (see the sketch after this list).
Ubiquitous and pervasive computing: EVACON-Rainsnow computing allows users to avail computing anywhere and anytime, either in the local network or in the global network. A choice of execution mode is also provided, which enhances the user's computing experience.
Provision of IoT-enabled sensing tasks and user-based tasks: The intrinsic pricing model of the proposed framework motivates resource-rich devices to impart services as cloudlets to facilitate both sensor-enabled IoT tasks and user-based tasks. This is a unique computing paradigm that facilitates both kinds of computation and provides four levels of computation. There are many other features, but the ones mentioned here are the most important for global and local computation.
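The distinction between the two kinds of scalability can be illustrated with a toy sketch; the class and method names are assumptions made for this example and do not represent the framework's actual interface.

```python
class ImprovedSKYRDeployment:
    """Toy illustration of horizontal vs. vertical scalability."""

    def __init__(self):
        self.cloudlets = []   # physical resources / machines imparting services
        self.modules = []     # framework modules (e.g. pricing, utility)

    def scale_horizontally(self, new_cloudlet):
        # Horizontal: add more machines/cloudlets to the system.
        self.cloudlets.append(new_cloudlet)

    def scale_vertically(self, new_module):
        # Vertical: add features/modules to the existing deployment
        # to improve its efficiency.
        self.modules.append(new_module)
```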
Applications of EVACON-Rainsnow Computing
There are many applications of the proposed computing and some valuable and significant applications are mentioned here:
Latency-sensitive tasks: The proposed work supports latency-sensitive applications such as online gaming, live meetings, and online trading, which work well only with zero or very low latency.
Highly computation-intensive tasks: Computation-intensive tasks such as weather forecasting, image processing, and video processing are handled effectively by the proposed computing and its framework.
Real-time applications: Real-time applications require prompt action after an event is triggered. Examples include traffic control systems, networked multimedia systems, command-and-control systems, and defense radar systems. The proposed computing is well suited to such applications, as it supports both sensor-based and user-based computation with minimal latency.
IoT-based sensing tasks: The involvement of sensors and sensor-oriented tasks allows IoT-based computation at the edge level, and the synthesized data can be stored at the remote cloud for future use. Examples of IoT-based applications are activity trackers, AR glasses, smart farming, industrial security and safety, and motion detection.
Incentive-oriented environment: The intrinsic pricing model of the proposed computing and framework motivates resource-rich mobile devices to impart services and encourages thin mobile devices to use these services at the local level. Hence, computation tasks such as data as a service, software as a service, and network as a service can be executed by the proposed computing and its framework.
Scalable computing environment: The scalable computing environment allows an effectively unlimited number of thin mobile devices to avail services and resource-rich devices to provide them. These are some of the significant applications of EVACON-Rainsnow computing.
Challenges Involved with EVACON-Rainsnow Computing
The following are the main research areas and challenges associated with the proposed computing, which we will address in our future research. These challenges can be categorized as independent and dependent, as shown in Fig. 19. The independent challenges are: standard architecture, deployment issues, service management, and user participation. The dependent challenges are: resource management, pricing issues, edge sensing problems, scheduling concerns, security and privacy issues, scalability, quality of service, service level agreements between provider and seeker, interlayer communication among different entities, and utility-based challenges.
[See PDF for image]
Fig. 19
Research issues in EVACON-Rainsnow computing
Comparative Result Analysis
This section provides a comparative analysis based on fuzzy logic, as shown in Table 6; based on it, a graph is plotted to show the comparison among the various computing paradigms.
Table 6. Comparative analysis of EVACON-Rainsnow computing with its peers for various parameters
Parameters | CC | MCC | MEC | FC | DC | TC | EVACON-Rainsnow Computing |
|---|---|---|---|---|---|---|---|
Values to these parameters on the scale of 10 | |||||||
Resources | |||||||
Operating environment | 7 | 9 | 9 | 9 | 9 | 8 | 9 |
Participating nodes | 8 | 6 | 7 | 7 | 7 | 8 | 9 |
Power source | 7 | 9 | 8 | 9 | 9 | 9 | 9 |
Power consumption | 5 | 6 | 9 | 8 | 9 | 8 | 9 |
Space required for deployment | 7 | 8 | 9 | 9 | 8 | 8 | 9 |
Number of server nodes | 7 | 8 | 9 | 9 | 8 | 8 | 9 |
Number of user devices | 7 | 8 | 9 | 8 | 7 | 7 | 9 |
Bandwidth | 8 | 9 | 9 | 9 | 9 | 9 | 9 |
Average total on scale of 10: | 7 | 7.87 | 8.65 | 8.5 | 8.25 | 8.12 | 9 |
Storage | |||||||
Storage model | 8 | 8 | 9 | 8 | 7 | 7 | 9 |
Storage capacity | 9 | 7 | 5 | 8 | 6 | 6 | 9 |
Permanency of storing data | 8 | 6 | 5 | 6 | 6 | 5 | 8 |
Average total on scale of 10: | 8.33 | 7 | 6.33 | 7.33 | 6.33 | 6 | 8.66 |
Time | |||||||
Latency | 5 | 6 | 8 | 7 | 8 | 6 | 8 |
Delayed jitter | 5 | 6 | 8 | 6 | 8 | 8 | 8 |
Real time interaction | 7 | 7 | 8 | 8 | 7 | 7 | 9 |
Response time | 7 | 8 | 9 | 9 | 9 | 8 | 9 |
Network latency | 6 | 7 | 8 | 7 | 8 | 7 | 8 |
Average total on scale of 10: | 6 | 6.8 | 8.2 | 7.4 | 8 | 7.2 | 8.4 |
Mobility | |||||||
Support for mobility | 6 | 8 | 8 | 8 | 7 | 7 | 8 |
User node mobility | 5 | 7 | 6 | 8 | 6 | 7 | 8 |
Computing node mobility | 4 | 7 | 8 | 7 | 7 | 6 | 8 |
Average total on scale of 10: | 5 | 7.33 | 7.33 | 7.66 | 6.66 | 6.66 | 8 |
Distance | |||||||
Distance of computation to users | 6 | 8 | 9 | 8 | 9 | 8 | 9 |
Coverage area | 9 | 8 | 6 | 7 | 7 | 8 | 9 |
Distance between client and server | 7 | 8 | 9 | 9 | 9 | 9 | 9 |
Number of intermediate hops | 6 | 6 | 8 | 7 | 8 | 7 | 8 |
Average total on scale of 10: | 7 | 7.5 | 8 | 7.75 | 8.25 | 8 | 8.75 |
Control | |||||||
Management | 8 | 9 | 8 | 9 | 8 | 8 | 9 |
Nature of failure | 9 | 8 | 7 | 7 | 8 | 8 | 8 |
Control mode | 9 | 8 | 8 | 8 | 7 | 7 | 9 |
Attack on data enroute | 6 | 8 | 9 | 9 | 9 | 8 | 8 |
Security | 9 | 9 | 7 | 9 | 8 | 7 | 9 |
Average total on scale of 10: | 8.2 | 8.4 | 7.8 | 8.4 | 8 | 7.6 | 8.6 |
Computation | |||||||
Computation models | 7 | 8 | 9 | 8 | 7 | 7 | 9 |
Execution of computation | 8 | 9 | 8 | 8 | 7 | 7 | 9 |
Sequence of execution of computation | 8 | 9 | 9 | 9 | 9 | 9 | 9 |
Computation device | 9 | 8 | 7 | 8 | 7 | 7 | 9 |
Computation capacity | 9 | 8 | 6 | 8 | 7 | 7 | 9 |
Virtualization technology | 9 | 9 | 9 | 9 | 8 | 8 | 9 |
Main computation element | 8 | 7 | 7 | 7 | 7 | 7 | 9 |
Average total on scale of 10: | 8.28 | 8.28 | 7.85 | 8.14 | 7.42 | 7.42 | 9 |
Connectivity | |||||||
Connectivity from user with internet | 8 | 8 | 7 | 7 | 8 | 8 | 8 |
Deployment environment | 8 | 7 | 6 | 6 | 6 | 7 | 7 |
Network connectivity | 6 | 8 | 8 | 8 | 7 | 8 | 8 |
Connection to the cloud | 9 | 9 | 7 | 8 | 6 | 7 | 9 |
Average total on scale of 10: | 7.75 | 8 | 7 | 7.25 | 6.75 | 7.5 | 8 |
Application | |||||||
Application type | 6 | 7 | 8 | 8 | 8 | 7 | 8 |
Real time application handling | 6 | 7 | 9 | 8 | 8 | 6 | 9 |
Support company | 8 | 9 | 9 | 7 | 6 | 7 | 9 |
Application | 8 | 8 | 8 | 8 | 6 | 6 | 8 |
Average total on scale of 10: | 7 | 7.75 | 8.5 | 7.75 | 7 | 6.5 | 8.5 |
Cost and energy consideration | |||||||
Computation cost | 6 | 7 | 9 | 8 | 9 | 7 | 9 |
Cooling cost | 6 | 7 | 9 | 9 | 9 | 7 | 8 |
Price of each device | 7 | 8 | 9 | 8 | 8 | 8 | 8 |
Device energy consideration | 7 | 9 | 9 | 9 | 9 | 7 | 9 |
Bandwidth utilization cost | 7 | 9 | 9 | 9 | 9 | 9 | 9 |
Average total on scale of 10: | 6.6 | 8 | 9 | 8.6 | 8.8 | 7.6 | 8.6 |
Users | |||||||
Target user | 8 | 7 | 8 | 7 | 8 | 7 | 8 |
Types of users | 8 | 7 | 7 | 8 | 8 | 8 | 8 |
Usage of virtualized environment | 8 | 8 | 8 | 8 | 6 | 6 | 8 |
Usage of end device | 8 | 8 | 6 | 8 | 8 | 8 | 8 |
Average total on scale of 10: | 8 | 7.5 | 7.25 | 7.75 | 7.5 | 7.25 | 8 |
Services | |||||||
Types of services | 7 | 9 | 7 | 9 | 7 | 7 | 9 |
Service location | 8 | 8 | 8 | 8 | 7 | 8 | 8 |
Service access | 8 | 9 | 8 | 8 | 7 | 7 | 9 |
Main content generator | 8 | 9 | 8 | 9 | 9 | 9 | 9 |
Content generation | 8 | 8 | 8 | 8 | 8 | 8 | 8 |
Content consumption | 8 | 8 | 8 | 8 | 8 | 8 | 8 |
Availability | 9 | 7 | 7 | 6 | 7 | 8 | 9 |
Context awareness | 7 | 9 | 9 | 9 | 9 | 9 | 9 |
Location awareness | 7 | 9 | 9 | 9 | 9 | 8 | 9 |
Average total on scale of 10: | 7.77 | 8.44 | 8 | 8.22 | 7.88 | 8 | 8.66 |
Architecture | |||||||
Architecture layer | 7 | 8 | 8 | 8 | 8 | 7 | 9 |
Geographical distribution | 7 | 8 | 8 | 8 | 7 | 8 | 9 |
Average total on scale of 10: | 7 | 8 | 8 | 8 | 7.5 | 7.5 | 9 |
Comparative Analysis of Features of Different Computing Paradigms
The parameters of Table 3 are used to compare the different computing paradigms. A value on a scale of 10 is assigned to every computing paradigm, including the proposed EVACON-Rainsnow computing, for each parameter, as shown in Table 6. For example, under the resources category, power consumption is assigned values of 5, 6, 9, 8, 9, 8, and 9 for CC, MCC, MEC, FC, DC, TC, and the proposed EVACON-Rainsnow computing, respectively. These values are assigned comparatively with respect to the previous literature; being comparative, they must not be treated as absolute. These fuzzy-set values help in drawing the comparative graph of the different computing paradigms.
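The per-category averages reported in Table 6 are simple means of the parameter scores within each category; the short sketch below reproduces them for the resources category (up to small rounding differences from the published values).

```python
# Scores from the 'Resources' rows of Table 6, one list per computing paradigm.
resources_scores = {
    "CC":     [7, 8, 7, 5, 7, 7, 7, 8],
    "MCC":    [9, 6, 9, 6, 8, 8, 8, 9],
    "MEC":    [9, 7, 8, 9, 9, 9, 9, 9],
    "FC":     [9, 7, 9, 8, 9, 9, 8, 9],
    "DC":     [9, 7, 9, 9, 8, 8, 7, 9],
    "TC":     [8, 8, 9, 8, 8, 8, 7, 9],
    "EVACON": [9, 9, 9, 9, 9, 9, 9, 9],
}

# Average total on a scale of 10 for each paradigm.
category_average = {name: round(sum(v) / len(v), 2) for name, v in resources_scores.items()}
# e.g. {'CC': 7.0, 'MCC': 7.88, 'MEC': 8.62, 'FC': 8.5, 'DC': 8.25, 'TC': 8.12, 'EVACON': 9.0}
```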
The proposed computing improves computation by allowing four levels of computation, i.e., at the secondary cloudlet (dew server/edge server), the primary cloudlet (fog server), the cloud, and the federated cloud. It also facilitates localized as well as remote globalized computing and provides an intrinsic pricing model that encourages resource-rich mobile devices to act as cloudlets. The proposed framework of EVACON-Rainsnow computing supports module-based execution, which makes it vertically scalable: modules can be added to or removed from it. Earlier computing paradigms focus on the level at which data and services are imparted to the user; for example, in dew computing, data and services are computed very near the user, while in cloud computing they are computed at a remote location. The proposed computing, by contrast, considers the complete process cycle of data and services, i.e., how they evolve and are imparted from the resource provider to the seeker. It also follows horizontal scaling, meaning that any number of devices can be added or removed based on conditions and requirements.
Graphical Representation of Comparative Analysis
With the help of Table 6, a graph is plotted across all the distributed computing paradigms together with the proposed EVACON-Rainsnow computing; it is shown in Fig. 20. The graph shows that, for all major factors, the proposed computing performs better than its counterparts. Although it performs better on the main parameters, it still poses some challenges, which are discussed in Sect. 4.3 and which we will work on in future to improve it.
[See PDF for image]
Fig. 20
Comparative analysis of EVACON-Rainsnow computing with others on various parameters
Execution Based Result Analysis
This sub-section is divided into four scenarios. Scenario 1 considers the comparison of frameworks or models that work at the cloud and federated cloud level. Scenario 2 covers frameworks that work at the primary cloudlet or fog server level. Similarly, scenarios 3 and 4 compare frameworks that execute at the dew level (secondary cloudlet level) and at the edge level (sensor level), respectively. Each scenario includes the simulation parameters, the factors used to compare the different frameworks with the improved SKYR framework, and a graphical representation of the comparative results.
Scenario 1: This scenario considers the execution of mobile users' tasks at the cloud and federated cloud. The analysis takes the reference frameworks PBHA, MMA, MCS, HCPS, and MABC and compares them with the proposed improved SKYR framework; all are executed on CloudSim. Four main parameters are compared: load balance, VM utilization, total cost, and makespan, with the results shown in Figs. 21, 22, 23 and 24. The results show that, for makespan and total cost, the proposed improved SKYR framework performs best among its peers, since the tasks reaching the cloud have already been filtered for cloud execution and many tasks are executed locally, which reduces makespan and cost. In terms of load balancing and VM utilization, however, it lags behind the MMA framework, which uses an optimized scheduling algorithm for heterogeneous tasks and devices.
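For the two metrics on which the improved SKYR framework performs best, the following toy sketch shows one common way of computing makespan and total cost from a schedule. These definitions follow the usual CloudSim-style conventions and are an assumption made here for illustration, not a description of the referenced frameworks.

```python
def makespan_and_cost(schedule, cost_per_second):
    """`schedule` maps each VM to the list of task execution times (seconds)
    assigned to it; cost is charged per second of execution."""
    vm_finish_times = {vm: sum(times) for vm, times in schedule.items()}
    makespan = max(vm_finish_times.values())                  # longest-running VM
    total_cost = sum(t * cost_per_second
                     for times in schedule.values() for t in times)
    return makespan, total_cost

# Example: makespan_and_cost({"vm0": [3.0, 2.5], "vm1": [4.0]}, cost_per_second=0.01)
# -> (5.5, 0.095)
```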
[See PDF for image]
Fig. 21
Variation of load balance w.r.t number of task
[See PDF for image]
Fig. 22
Variation of utilization of VM (%) w.r.t number of task
[See PDF for image]
Fig. 23
Variation of total cost w.r.t number of tasks
[See PDF for image]
Fig. 24
Variation of makespan w.r.t number of tasks
Hence, the overall performance of the proposed improved SKYR framework for EVACON-Rainsnow computing is well optimized at the cloud and federated cloud level: it gives the best results for cost and makespan but lags slightly behind its peers on the load balancing and VM utilization parameters. The simulation parameters used for execution at the cloud and federated cloud are shown in Table 7.
Table 7. Specifications used while execution at cloud
Entity | Parameter | Specification |
|---|---|---|
Task | CPU demand | (0.06–1.36) GHz |
Task | RAM demand | (0.009–0.013) GB |
Task | MIPS | (0.25–2.35) × 10⁶ |
VM | CPU frequency | (0.15–1.60) GHz |
VM | MIPS | (0.08–0.76) × 10⁴ |
VM | RAM | (0.011–0.85) GB |
VM | Instance | Single-multiple |
Scenario 2: This scenario considers the ETS-TEE, ICCF, FMCC, and SKYR frameworks, which are compared with the proposed improved SKYR framework in terms of parameters such as CPU utilization, energy consumption, overall latency, and efficiency. These reference frameworks execute at the primary cloudlet or fog server level. Table 8 lists the simulation parameters used for execution on CloudSim.
Table 8. Specification used while execution at primary cloudlet/fog server
Parameter | Specification |
|---|---|
Number of primary cloudlets | 3–10 |
Number of users | 5–55 |
CPU clock frequency of each user | 1–1.5 GHz |
Transmission power | 257–325 mW |
Bandwidth | 10–20 Mbps |
Total CPU cycles of task offloaded by users | 200–2000 Mega Cycles |
Required data size of task by users | 10 KB to 1 MB |
Frequency of cloudlets when idle | 300 MHz |
Frequency at high CPU usage | 1.9 GHz |
CPU | Cores 2 |
RAM | 2 GB |
Million Instructions Per Second | 10,000 MIPS |
Storage size | 16 GB |
Battery capacity | 2000 mAh |
All these experiments were performed in CloudSim, and the results show that the proposed improved SKYR framework performs better than its peers. In terms of CPU utilization, energy consumption, and overall latency, it shows improved results over its counterparts because it employs a highly reliable, efficient, and capable cloudlet as the primary cloudlet, which manages task scheduling, task delegation, and the other aspects of effective execution through various algorithms and mechanisms [73, 75, 76]. The detailed result comparison is discussed for the SKYR framework in [73] and is used here for comparative analysis. These results are shown in Figs. 25, 26 and 27. The results in Fig. 28 show the utility efficiency perceived by the primary cloudlet while imparting services to users in exchange for rewards and monetary incentives. The SUPMAR pricing model of the SKYR framework, discussed in [76], shows that the primary cloudlet and its service users are far more satisfied with the improved SKYR framework than with the previously available frameworks.
[See PDF for image]
Fig. 25
Variation of CPU utilization w.r.t number of cloudlets
[See PDF for image]
Fig. 26
Variation of energy consumption w.r.t number of cloudlets
[See PDF for image]
Fig. 27
Variation of overall latency w.r.t number of cloudlets
[See PDF for image]
Fig. 28
Variation of utility efficiency for various cloudlets
Scenario 3: In this scenario, four frameworks are considered, namely the Fog-AMOSM, PMIM, EMUCO, and CASPS frameworks, which execute at the secondary cloudlet or dew server level. These frameworks are compared with the improved SKYR framework on the parameters of CPU utilization, energy consumption, overall delay, and utility efficiency. Table 9 lists the simulation parameters set for execution in CloudSim. The result comparison for the secondary cloudlet is similar to that for the primary cloudlet with respect to CPU utilization, energy consumption, and overall latency, so the results in Figs. 25, 26 and 27 can also be read for the secondary cloudlet. The utility efficiency perceived by the user, however, is different in this scenario and is shown in Fig. 29, while the utility efficiency with respect to the cloudlet remains as shown in Fig. 28. The SKYR framework uses the SUPMAR pricing model for cloudlet and user, which helps both perceive higher utility from their course of action [76].
Table 9. Specification used while execution at secondary cloudlet/dew server
Parameters | Specification |
|---|---|
Number of dew server/secondary cloudlet | 3–10 |
Number of users | 5–55 |
CPU clock frequency of each user | 1–1.5 GHz |
Transmission power | 257–325 mW |
Bandwidth | 5–10 Mbps |
Total CPU cycles of task offloaded by user | 100–1000 Mega cycles |
Required data size of task by user | 10–500 KB |
[See PDF for image]
Fig. 29
Variation of utility efficiency for various users
Scenario 4: This scenario considers five frameworks, namely the DTOME, EAO, FSO, PMMRA, and COA frameworks, which execute at the edge server level. These frameworks are compared with the improved SKYR framework in terms of network throughput, latency, energy consumption, and sustainability of the sensor network. Table 10 shows the simulation parameters used during execution in edge cloud-sim. The proposed improved SKYR framework uses a different algorithm for heterogeneous sensor-based task scheduling, one that specifically focuses on energy consumption, latency, and efficient bandwidth utilization compared with its counterparts. It also uses algorithms to remove noise and redundant data from the sensed data, which improves efficiency and reduces latency. Because of these factors, the performance of the improved SKYR framework is better in terms of network throughput, overall latency, energy consumption, and network lifetime, as can be seen from Figs. 30, 31, 32 and 33.
Table 10. Specification used while execution at edge server and sensor node
Parameters | Specification |
|---|---|
Network area | 100 × 100 m² |
Total number of sensor nodes | 200 |
Number of edge servers | 20 |
Initial energy of a normal node | 0.5 J |
Energy factor of edge server | 1 |
Packet size of data | 512 bytes |
Dissipated energy | 50 nJ/bit |
Energy of aggregation by edge server | 5 nJ/bit/message |
Radius of sensor nodes | 150 m |
Radius of edge server | 300 m |
Data transmission rate | 250 kbit/s |
Simulation time | 10 min |
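The radio parameters in Table 10 are of the kind used by the first-order radio model common in WSN simulations; the sketch below shows how per-packet energy could be estimated from them. The amplifier constant `E_AMP` and the model itself are assumptions made for illustration and are not stated in the paper.

```python
E_ELEC = 50e-9    # dissipated energy per bit (Table 10), J/bit
E_DA = 5e-9       # aggregation energy at the edge server (Table 10), J/bit/message
E_AMP = 100e-12   # free-space amplifier energy, J/bit/m^2 (assumed, not in Table 10)

def tx_energy(bits: int, distance_m: float) -> float:
    """Transmission energy for one packet: electronics plus amplifier term."""
    return bits * E_ELEC + bits * E_AMP * distance_m ** 2

def aggregation_energy(bits: int, messages: int) -> float:
    """Energy spent by the edge server aggregating `messages` packets."""
    return bits * E_DA * messages

packet_bits = 512 * 8                       # 512-byte data packet (Table 10)
print(tx_energy(packet_bits, 150.0))        # ~0.0094 J to send one packet over 150 m
print(aggregation_energy(packet_bits, 10))  # ~0.0002 J to aggregate 10 packets
```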
[See PDF for image]
Fig. 30
Relation of network throughput w.r.t simulation time
[See PDF for image]
Fig. 31
Relation of average delay w.r.t simulation time
[See PDF for image]
Fig. 32
Variation of energy consumption w.r.t number of nodes
[See PDF for image]
Fig. 33
Relation of network lifetime w.r.t simulation time
From the comparative result analysis it can be seen that the improved SKYR framework executing the proposed EVACON-Rainsnow computing performs best at the fog server (primary cloudlet) and dew cloudlet levels, that is, in scenarios 2 and 3. Its performance in scenarios 1 and 4, which cover execution at the federated cloud and at the edge server level respectively, is also well optimized. Hence, the proposed four-level computing architecture offers users an optimized solution with a varying choice of execution.
Conclusion and Future Work
The proposed work discusses in detail various distributed remote computing paradigms, their strengths, weaknesses, scope, and usage, along with a comparative analysis. Keeping all these parameters in consideration, a new paradigm named EVACON-Rainsnow computing is proposed, which addresses the main concerns of the earlier paradigms. This research work discusses in detail the four-tier computing architecture, its principle, components, working flow (covering both user-based and sensor-based tasks), features, applications, and the functions performed. We have also proposed the improved SKYR framework, which executes this four-tier computing architecture and incorporates the proposed Task-Segregation () and Scalability () algorithms, simplifying and streamlining task execution at the different computing levels. The benefit of including these algorithms in the SKYR framework can be seen in the results. This research work presents a comparative analysis of the proposed computing with its peers, and the results show that the proposed paradigm has an edge over the others.
In the future, we will work on different aspects of the proposed computing paradigm, such as task scheduling and task sensing, and will propose a new framework that caters to all these parameters and further improves the proposed computing. We will further improve the working coordination between the edge resources and the federated cloud so that the complete EVACON-Rainsnow computing system becomes more robust and flexible in serving user tasks. We will also try to improve the performance efficiency and energy efficiency of the framework associated with the proposed computing. Our future work will emphasize the challenges discussed in Sect. 4.3, improve the SKYR framework by adding new modules to it, and provide a comprehensive comparison with the various frameworks of peer distributed computing paradigms.
Funding
No funding has been received from any source for this research.
Availability of Data and Materials
The data and material associated with the manuscript are available from the corresponding author, upon reasonable request.
Code Availability
The code that supports the findings of this study is available from the corresponding author upon reasonable request.
Declarations
Conflict of interest
The authors have no conflicts of interest to declare. All co-authors have seen and agree with the contents of the manuscript and there is no financial interest to report. We certify that the submission is original work and is not under review at any other publication.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
1. IBM, IBM introduces ready-to-use cloud computing. Retrieved 2007, from https://www-03.ibm.com/press/us/en/pressrelease/22613
2. Bort, J. (2017) Amazon’s massive cloud business hit over $12 billion in revenue and $3 billion in profit in 2016. Retrieved 2007, from http://www.businessinsider.com/amazons-cloud-businesshits-over-12-billion-in-revenue-2017-2
3. Zhou, Y; Zhang, D; Xiong, N. Post-cloud computing paradigms: A survey and comparison. Journal of Tsinghua Science and Technology; 2017; 22,
4. Luong, NC; Wang, P; Niyato, D; Yonggang, W; Han, Z. Resource management in cloud networking using economic analysis and pricing models: A survey. Journal of IEEE Communications Surveys & Tutorials; 2017; 19,
5. Zhu, Q; Tang, H; Huang, J; Hou, Y. Task scheduling for multi-cloud computing subject to security and reliability constraints. Journal of IEEE/CAA Automatica Sinica.; 2021; 8,
6. Lipsa, S; Dash, RK; Ivkovic, N; Cengiz, K. Task scheduling in cloud computing: A priority-based heuristic approach. Journal of IEEE Access; 2023; 11,
7. Wright, A. Get smart. Journal of Communication, ACM; 2009; 52,
8. Kemp, R., Palmer, N., Kielmann, T., Seinstra, F., Drost, N., Maassen, J., & Bal, H. (2009) Eyedentify: Multimedia cyber foraging from a smartphone. In 11th IEEE international symposium on multimedia ISM’09 (pp. 392–399). IEEE.
9. Huang, D. Mobile cloud computing. Journal of IEEE COMSOC Multimedia Communications Technical Committee (MMTC); 2011; 6,
10. Wang, C; Ren, K; Lou, W; Li, J. Toward publicly auditable secure cloud data storage services. Journal of IEEE Network; 2010; 24,
11. Chang, F; Dean, J; Ghemawat, S; Hsieh, WC; Wallach, DA; Burrows, M; Chandra, T; Fikes, A; Gruber, RE. Bigtable: A distributed storage system for structured data. Journal of ACM Transactions of Computer Systems (TOCS); 2008; 26,
12. Sakr, S; Liu, A; Batista, DM; Alomari, M. A survey of large-scale data management approaches in cloud environments. Journal of IEEE Communication Surveys & Tutorials; 2011; 13,
13. Fan, X; Cao, J; Mao, H. A survey of mobile cloud computing. Journal of ZTE Communications; 2011; 9,
14. Satyanarayanan, M; Bahl, P; Caceres, R; Davies, N. The case for vm-based cloudlets in mobile computing. Journal of IEEE Pervasive Computing; 2009; 8,
15. Satyanarayanan, M., Chen, Z., Ha, K., Hu, W., Richter, W., & Pillai, P. (2014). Cloudlets: At the leading edge of mobile-cloud convergence. In 6th IEEE international conference on mobile computing, applications and services (MobiCASE) (pp. 1–9).
16. Verbelen, T., Simoens, P., De Turck, F., & Dhoedt, B. (2012). Cloudlets: Bringing the cloud to the mobile user. In Proceedings of the third ACM workshop on mobile cloud computing and services (pp. 29–36). ACM.
17. Li, Y., & Wang, W. (2014). Can mobile cloudlets support mobile applications. In Proceedings of IEEE INFOCOM (pp. 1060–1068).
18. Rawadi, J. M., Artail, H., & Safa, H. (2014). Providing local cloud service to mobile devices with intercloudlet communication. In Proceedings of 17th IEEE Mediterranean electrotechnical conference, Beirut, Lebanon (pp. 134–138).
19. Artail, A., Frenn, K., Artail, H., & Safa, H. (2015). A framework of mobile cloudlet center based on the use of mobile devices as cloudlets. In Proceedings of 29th IEEE international conference on advanced information networking and applications (pp. 777–784).
20. Wang, H; Cai, L; Hao, X; Ren, J; Ma, Y. ETS-TEE: An energy-efficient task scheduling strategy in a mobile trusted computing environment. Journal of Tsinghua Science and Technology; 2023; 28,
21. Kai, K; Cong, W; Tao, L. Fog computing for vehicular ad-hoc networks: Paradigms, scenarios, and issues. The Journal of China Universities of Posts and Telecommunications; 2016; 23,
22. Abbas, N; Zhang, Y; Taherkordi, A; Skeie, T. Mobile edge computing: A survey. Journal of IEEE Internet of Things; 2018; 5,
23. Jararweh, Y., Doulat, A., AlQudah, O., Ahmed, E., Al-Ayyoub, M., & Benkhelifa, E. (2015). The future of mobile cloud computing: Integrating cloudlets and mobile edge computing. In 23rd international conference on telecommunications (ICT) (pp. 1–5).
24. Kitanov, S., Monteiro, E., & Janevski, T. (2016). 5g and the fog 2014: Survey of related technologies and research directions. In 18th Mediterranean electrotechnical conference (MELECON) (pp. 1–6).
25. Beck, M. T., Werner, M., Feld, S., & Schimper, S. (2014). Mobile edge computing: A taxonomy. In Proceedings of the sixth international conference on advances in future internet, Citeseer.
26. Yi, S., Li, C., & Li, Q. (2015). A survey of fog computing: Concepts, applications and issues. In Proceedings of the workshop on mobile big data, ser. Mobidata ’15 (pp. 37–42). ACM. [Online]. https://doi.org/10.1145/2757384.2757397
27. Jararweh, Y., Doulat, A., Darabseh, A., Alsmirat, M., Al-Ayyoub, M., & Benkhelifa, E. (2016). Sdmec: Software defined system for mobile edge computing. In IEEE International Conference on Cloud Engineering Workshop (IC2EW) (pp. 88–93).
28. Roman, R., Lopez, J., & Mambo, M. (2016). Mobile edge computing, fog et al.: A survey and analysis of security threats and challenges. Future Generation Computer Systems, [Online]. http://www.sciencedirect.com/science/article/pii/S0167739X16305635
29. Ahmed, A., & Ahmed, E. (2016). A survey on mobile edge computing. In 10th international conference on intelligent systems and control (ISCO) (pp. 1–8).
30. Borgia, E., Bruno, R., Conti, M., Mascitti, D., & Passarella, A. (2016). Mobile edge clouds for information-centric IoT services. In IEEE symposium on computers and communication (ISCC) (pp. 422–428).
31. Marotta, MA; Faganello, LR; Schimuneck, MAK; Granville, LZ; Rochol, J; Both, CB. Managing mobile cloud computing considering objective and subjective perspectives. Journal of Computer Networks.; 2015; 93,
32. Dinh, HT; Lee, C; Niyato, D; Wang, P. A survey of mobile cloud computing: Architecture, applications, and approaches. Journal of Wireless Communications and Mobile Computing; 2013; 13,
33. Hu, YC; Patel, M; Sabella, D; Sprecher, N; Young, V. Mobile edge computing a key technology towards 5G. ETSI White Paper; 2015; 11,
34. Asrani, P. Mobile cloud computing. International Journal of Engineering and Advanced Technology (IJEAT).; 2013; 2,
35. Patel, M., Naughton, B., Chan, C., Sprecher, N., Abeta, S., & Neal, A. (2014). Mobile-edge computing introductory technical white paper. White Paper, Mobile-edge computing (MEC) industry initiative.
36. Li, G; Xu, Y. Energy consumption averaging and minimization for the software defined wireless sensor networks with edge computing. Journal of IEEE Access; 2019; 7,
37. Chen, Y; Zhao, F; Lu, Y; Chen, X. Dynamic task offloading for mobile edge computing with hybrid energy supply. Journal of Tsinghua Science and Technology; 2023; 28,
38. Bonomi, F., Milito, R., Zhu, J., & Addepalli, S. (2012). Fog computing and its role in the Internet of Things. In Proceeding 1st Ed. MCC workshop mobile cloud computing (pp. 13–16).
39. Xie, X., Zeng, H. J., & Ma, W. Y. (2002). Enabling personalization services on the edge. In Proceedings of 10th ACM international conference on multimedia (pp. 263–266).
40. Gelsinger, P. P. (2001). Microprocessors for the new millennium: Challenges, opportunities, and new frontiers. In Proceedings IEEE international solid-state circuits conference (pp. 22–25).
41. Ibrahim, S., Jin, H., Cheng, B., Cao, H., Wu, S., & Qi, L. (2009). CLOUDLET: Towards MapReduce implementation on virtual machines. In Proceedings of 18th ACM international symposium on high perform. Distributed computing (pp. 65–66).
42. Gonzalez, N. M. (2016). Fog computing: Data analytics and cloud distributed processing on the network edges. In Proceedings of 35th international conference on Chilean computer science society (SCCC) (pp. 1–9).
43. Dastjerdi, A. V., Gupta, H., Calheiros, R. N., Ghosh, S. K., & Buyya, R. (2016). Fog computing: Principles, architectures, and applications. In Proceedings of Internet of Things: Principle & paradigms. San Mateo, CA, USA.
44. Li, J., Zhang, T., Jin, J., Yang, Y., Yuan, D., & Gao, L. (2017). Latency estimation for fog-based Internet of Things. In Proceedings of 27th international telecommunication network application conference (ITNAC) (pp. 1–6).
45. Hu, P; Dhelim, S; Ning, H; Qiu, T. Survey on fog computing: Architecture, key technologies, applications, and open issues. Journal of Networks and Computer Application; 2017; 98,
46. Yi, S., Li, C., & Li, Q. (2015). A survey of fog computing: Concepts, applications, and issues. In Proceedings of workshop mobile big data (pp. 37–42).
47. Perera, C; Qin, Y; Estrella, JC; Reiff-Marganiec, S; Vasilakos, AV. Fog computing for sustainable smart cities: A survey. Journal of ACM Computing Surveys; 2017; 50,
48. Mouradian, C; Naboulsi, D; Yangui, S; Glitho, RH; Morrow, MJ; Polakos, PA. A comprehensive survey on fog computing: State-of-the art and research challenges. Journal of IEEE Communication Surveys and Tutorials; 2018; 20,
49. Huang, C; Lu, R; Choo, KKR. Vehicular fog computing: Architecture, use case, and security and forensic challenges. Journal of IEEE Communication Magazine; 2017; 55,
50. Fog computing and the Internet of Things: Extend the cloud to where the things are. Cisco, San Jose, CA, USA, White Paper (2015). [Online]. https://www.cisco.com/c/dam/en_us/solutions/trends/iot/./computing-overview.pdf
51. Vaquero, LM; Rodero-Merino, L. Finding your way in the fog: Towards a comprehensive definition of fog computing. ACMSIGCOMM Computer Communication Review.; 2014; 44,
52. Garfinkel, S. Architects of the information society: 35 years of the Laboratory for Computer Science at MIT; 1999; MIT Press: [DOI: https://dx.doi.org/10.7551/mitpress/1341.001.0001]
53. Mobile edge computing: Introductory technical white paper (2014). ETSI. https://portal.etsi.org/Portals/0/TBpages/MEC/Docs/Mobile-edge%20Computing-IntroductoryTechnicalWhitePaper-V1%2018-09-14.pdf
54. Bonomi, F. (2011). Connected vehicles, the internet of things, and fog computing. In Proceeding of 8th ACM international workshop on vehicular inter-networking, Las Vegas, NV, USA.
55. Wang, YW. Cloud dew architecture. International Journal of Cloud Computing; 2015; 4,
56. Zhang, Y. Transparence computing: Concept, architecture, and example. Journal of Acta Electronica Sinica; 2004; 32,
57. Hurwitz, JS; Bloor, R; Kaufman, M; Halper, F. Cloud computing for dummies; 2009; Wiley:
58. Mobile edge computing (2017). ETSI. https://portal.etsi.org/tb.aspx?tbid=826&SubTB=826,835
59. Open Fog Consortium (2015). Open Fog. http://www.openfogconsortium.org
60. Yang, M; Ma, H; Wei, S; Zeng, Y; Chen, Y; Hu, Y. A multi-objective task scheduling method for fog computing in cyber-physical-social services. Journal of IEEE Access; 2020; 8,
61. Branch, R. Cloud computing and big data: A review of current service models and hardware perspectives. Journal of Software Engineering and Applications; 2014; 7,
62. Zhou, Y; Zhang, Y; Xie, Y; Zhang, H; Yang, LT; Min, G. TransCom: A virtual disk-based cloud computing platform for heterogeneous services. Journal of IEEE Transactions on Network & Service Management; 2014; 11,
63. Zhang, Y. Transparence computing: Concept, architecture and example. Journal of Acta Electronica Sinica; 2004; 32,
64. Zhang, Y. (2008). The challenges and opportunities in transparent computing. In Proceedings of IEEE/IFIP EUC, Shanghai, China.
65. Kuang, W. NSAP: A network storage access protocol for transparent computing. Journal of Tsinghua University Science and Technology; 2009; 49,
66. Zhang Y., & Zhou, Y. (2006). Transparent computing: A new paradigm for pervasive computing. In Proceedings of international conference on ubiquitous intelligence and computing (pp. 1–11). Springer.
67. Zhou, Y; Zhang, Y; Liu, H; Xiong, N; Vasilakos, AV. A bare-metal and asymmetric partitioning approach to client virtualization. Journal of IEEE Transaction on Services Computing; 2014; 7,
68. Wu, M. Analysis and a case study of transparent computing implementation with UEFI. International Journal of Cloud Computing; 2012; 1,
69. Wang, Y. W. (2015). The relationships among cloud computing, fog computing, and dew computing. http://www.dewcomputing.org/index.php/2015/11/12/therelationships-among-cloud-computingfogcomputingand-dew-computing
70. Wang, Y. W. (2015). The initial definition of dew computing. http://www.dewcomputing.org/index.php/2015/11/10/theinitial-definition-of-dew-computing
71. Skala, K; Davidovic, D; Afgan, E; Sovic, I; Sojat, Z. Scalable distributed computing hierarchy: Cloud, fog and dew computing. Open Journal of Cloud Computing; 2015; 2,
72. Wang, Q., Guo, S., Liu, J., Pan, C., & Yang, L. (2019). Profit maximization incentive mechanism for resource providers in mobile edge computing. In Proceedings of IEEE transactions on services computing (pp. 1–12).
73. Kumar, R; Yadav, SK. Scalable key parameter yields of resources model for performance enhancement in mobile cloud computing. Springer Journal of Wireless Personal Communications; 2017; 95,
74. Qi, Q; Tao, F. A smart manufacturing service system based on edge computing, fog computing, and cloud computing. Journal of IEEE Access; 2019; 7,
75. Yadav, SK; Kumar, R. A mobile cloud computing framework for execution of data as a service using cloudlet. Kuwait Journal of Science; 2021; 48,
76. Yadav, SK; Kumar, R. A scalable and utility driven profit maximized auction of resources model for cloudlet based mobile edge computing. Springer Journal of Wireless Personal Communications; 2021; 119,
© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2023.