1. Introduction
Novel applications of 5G services are characterized by three classes of quality-of-service (QoS) features: massive machine-type communication (mMTC), enhanced mobile broadband (eMBB), and ultra-reliable low-latency communication (uRLLC) [1]. Two system architectures have been proposed to ensure the scalability and flexibility of resource allocation and scheduling for these diverse QoS requirements. The first widely accepted approach is the cloud radio access network (C-RAN), in which baseband processing and networking functions are virtually centralized in a resource pool for resource scalability and flexibility [2]. The other is mobile edge computing (MEC), which supports interactive and real-time applications on nearby cloud hosts to achieve the latency required by urgent and distributed applications [3]. To implement C-RAN and MEC, software-defined networking (SDN) and network function virtualization (NFV) have been integrated to develop network slicing technologies in 5G [4]. Network slices are end-to-end (E2E) mutually separate sets of programmable infrastructure resources with independent control. A slice is a logical network adapted to particular applications according to their diverse QoS requirements [1].
Although SDN and NFV have paved the way for the slicing concept, the network slice-as-a-service (NSaaS) strategy still faces several problems and challenges [4,5]. In particular, the failure of virtualized network functions (VNFs) may degrade the QoS of service provisioning in the control plane (e.g., the MME) and the data plane (e.g., S-GWs or PDN-GWs), respectively [4]. These factors should be collected and addressed to design an orchestrator that manages resources for the VNFs efficiently and effectively, covering resource allocation, load balancing, and high availability (HA), so as to build a high-performance and robust slicing network.
Furthermore, HA is a stringent requirement for sustaining business and emergency response applications that demand reliable performance with zero downtime. Technologies such as redundant arrays of independent disks, node replication, and master-slave database redundancy can be adopted to offer data protection in an HA-equipped system [6,7,8]. From the network operator's perspective, the resources of 5G network slices are limited, which complicates simultaneous profit maximization. Load balancing is therefore typically adopted by an orchestrator to achieve resource planning and provisioning and to enhance performance [4,9].
In this paper, efficient elastic mechanisms for resource management of virtual machines (VMs) hosting VNFs are proposed, integrating admission control, load balancing, resource allocation, and HA arrangement in 5G network slices. The system architecture is shown in Figure 1. An integer programming problem is formulated to maximize system utilization, subject to quality-of-service, resource-requirement, and HA constraints. The resource allocation problem combines the knapsack and bin-packing problems. The VNFs, hereafter called tasks, with various amounts of resource requirements are emulated as VMs that must be packed into a finite number of servers. Keeping only a few servers active and powering on additional servers as demand increases can reduce operating costs, which corresponds to a bin-packing problem. Selecting a set of application tasks with diverse weights and values so as to stay within the resource limit while maximizing the benefit is the well-known knapsack problem. The knapsack problem is a combinatorial nondeterministic polynomial time (NP)-hard problem; one practical solution approach is the Lagrangian relaxation (LR) method. Near-optimal solutions provide practical means to overcome the bottleneck imposed by the limited computing resources of slices. Virtualization, parallel computing, and mathematical programming techniques are also employed in this paper to make near-optimal decisions from an NSaaS provider's perspective. The proposed resource management algorithms ensure that computing resources are equitably distributed among the slices of cloudlets and core clouds. Computation in the edge and core clouds is performed to achieve maximum resource utilization with HA and minimal cost. Moreover, an admission control scheme with additional HA requirements is devised to admit the maximum number of jobs, where each job requires various types of resources.
The remainder of this paper is organized as follows: Section 2 reviews related works. Section 3 introduces the mathematical model and problem description. The LR-based solution approach is developed, as shown in Section 4. Section 5 presents computational experiments. Conclusions are drawn in Section 6.
2. Literature Review
A cloud computing environment in 5G network slices supports share-based services in a pay-as-you-go approach and provides operators with a flexible architecture. Resource management under complex on-demand traffic in such a share-based cloud computing environment is a considerable challenge. Inappropriately designed resource management solutions may increase the network's costs and lead to unsatisfactory QoS and system reliability, reflected in penalties on user experience and in call-blocking probability due to virtualized network function failures. Table 1, Table 2 and Table 3 compare the resource management problems investigated in this study with those in related studies in terms of (i) resource allocation, (ii) load balancing, and (iii) admission control.
2.1. Resource Allocation
For the maximization of the benefits of the cloud system and service quality enhancement, Liu et al. [10] developed a joint multi-resource allocation model based on a semi-Markov decision process (SMDP). They solved the optimization problem using linear programming to attain wireless resource allocation and near-optimal decisions in cloud computing. Cloud service providers must also fulfill individual requirements to achieve a win-win situation for both parties in terms of computational efficiency, budget balance, and truthfulness. Jin et al. [11] designed an incentive-compatible auction mechanism to appropriately allocate matching among cloud resources and user demands. In this auction model, the buyers are mobile devices, and the sellers are cloudlets. The nearest control center can adopt the role of the auctioneer to reduce transmission costs and latency. Furthermore, by assuming that all cloudlets can provide the same resources with distinct rewards, Liu and Fan [12] presented a two-stage optimization strategy to achieve optimal cloudlet selection in a multi-cloudlet environment and optimal resource allocation in response to the request in the cloudlet. Studies on resource allocation in a cloud computing environment are shown in Table 1. Resource allocation and scheduling algorithms have been proposed in numerous research areas, such as transportation management, industrial management, operational research, computer science, and particularly in real-time operating systems [13,14]. For instance, the earliest deadline first (EDF) is a dynamic scheduling algorithm used in real-time operating systems to allocate computing resources in central processing units (CPUs) using a priority queue. The queue searches for the task with the closest deadline; if a job cannot be completed within its time frame, the operating system must release the job. The main idea is to maximize resource use for several competing interests while balancing the load.
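For illustration, the EDF policy described above can be sketched with a deadline-keyed priority queue. The following Python fragment is an illustrative sketch only; the task tuples and time units are hypothetical and not taken from the cited works:

```python
import heapq

def edf_schedule(tasks, now=0):
    """Earliest-deadline-first: always run the pending task with the
    closest deadline; release (drop) any task that can no longer meet
    its deadline.  `tasks` is a list of (name, duration, deadline)."""
    queue = [(deadline, name, duration) for name, duration, deadline in tasks]
    heapq.heapify(queue)  # min-heap ordered by deadline
    completed, released = [], []
    while queue:
        deadline, name, duration = heapq.heappop(queue)
        if now + duration > deadline:  # cannot finish in time
            released.append(name)      # the OS must release the job
        else:
            now += duration
            completed.append(name)
    return completed, released
```

Running the sketch on three hypothetical tasks shows the release behavior: a task whose deadline cannot be met once earlier-deadline tasks have run is dropped rather than scheduled late.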
A weighted priority for each data flow can be assigned inversely proportional to the respective flow's anticipated resource consumption [15,16]. Ding et al. [17] proposed a Linux scheduling policy with a priority queue rather than a first-in-first-out (FIFO) queue to improve kernel-based virtual machine performance. Zhao et al. [18] proposed online VM placement algorithms to cost-efficiently allocate resources to VMs to increase revenues in a managed server farm. First-fit (FF), FF migration (FFM), least reliable first (LRF), and decreased density greedy (DDG) algorithms are packing strategies relevant to task optimization for achieving desirable performance.
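Among the packing strategies above, first-fit is the simplest; a minimal Python sketch is given below. The single scalar capacity per bin is a simplifying assumption of this illustration, whereas the servers in this paper have multidimensional capacities:

```python
def first_fit(items, capacity):
    """First-fit bin packing: place each item into the first open bin
    with enough residual capacity; open a new bin otherwise."""
    bins = []  # each bin is a list of item sizes
    for size in items:
        for b in bins:
            if sum(b) + size <= capacity:
                b.append(size)
                break
        else:  # no open bin could hold the item
            bins.append([size])
    return bins
```

For example, packing sizes [4, 3, 5, 2, 6] into bins of capacity 7 opens three bins, since each later item is placed into the earliest bin that still fits it.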
Based on the analysis of previous studies, this paper regards offloading tasks as virtual machines with distinct rewards and various levels of demand in the cloudlets and core clouds. A Lagrangian relaxation-based approach is proposed to maximize system revenue, subject to constraints such as computing capacity, assignments, and quality-of-service requirements. This approach is adopted because it can find near-optimal solutions without the limited-state restrictions of SMDP. The objective is to maximize the total value of tasks using the proposed heuristics in polynomial time. Resources are also appropriately rearranged using the next-fit algorithm to allocate tasks and satisfy application requirements.

2.2. Load Balancing
Load balancing is a crucial technology that optimizes mobile application performance. Several studies have proposed cloudlet technology solutions [19] to achieve load balancing for mobile devices (Table 2). Jia et al. [19] devised a load-balancing algorithm to balance the workload among multiple cloudlets in wireless metropolitan area networks. A redirection scheme shortened the average offload task response time and improved user experience. Yao et al. [20] studied load balancing for cloudlet nodes. The task allocation problem was formulated as an integer linear problem. A two-step centralized appointment-driven strategy was used to obtain a solution with minimal average response time and a balanced load. From a network operator perspective, time-varying and periodically changing demands are challenges in providing on-demand services. Furthermore, cloud service providers face load-balancing problems in cloud computing environments. The system availability of on-demand access should be considered using an adjustable resource assignment mechanism to satisfy QoS. If resource demands are stochastic, forecasting them accurately is difficult. Historical usage data are measured to estimate demand, determine the load distribution, and evaluate the proposed algorithm [21].
In general, as many tasks as possible are admitted to maximize total revenue; however, the supply of and demand for resources are unbalanced during peak traffic hours. This study determines the orchestrator's decisions on which tasks are selected and where they are dispatched to achieve the maximum objective value, subject to task assignment and limited capacity. The problem is classified as a combined bin-packing and 0/1 knapsack problem. Herein, the orchestrator's decisions are based on resource utilization functions, including the central processing unit, random access memory, hard drive, and bandwidth for each server type. The orchestrator's assignment strategy is also designed to fit QoS requirements, such as those of computing and transmission under varying traffic loads, from a global view adopted by the bin-packing heuristics.

2.3. Admission Control
Preventing server overload and ensuring application performance are the goals of admission control [22]. This mechanism decides whether to admit a particular service request to the server. It is implemented to meet QoS requirements and achieve the service provider's expected revenue [23]. Hoang et al. [24] developed a semi-Markov decision process (SMDP)-based optimization model for mobile cloud computing that considers constraints such as resources, bandwidth, and QoS to perform admission control tasks. This framework ensures QoS performance and maximizes the reward. Xia et al. [25] aimed to optimize cloud system throughput. An effective admission algorithm, based on a proposed resource cost paradigm that models various resource consumptions, was devised to handle online requests according to the current workload. Table 3 presents the comparison of the admission control methods.
The SMDP-based approach relies on partial data, and its computing time is inefficient for jobs with time constraints, such as delay-sensitive and delay-tolerant applications. The priority of user information is unknown when decisions must be made under a computing time constraint. In this paper, call admission control mechanisms under non-priority conditions and QoS constraints are jointly considered. A Lagrangian relaxation-based approach is proposed to maximize system revenue, combined with the proposed resource allocation methods, to appropriately admit tasks and satisfy application requirements.

2.4. Research Scope

This research focuses on resource management in various scenarios in 5G network slices. An optimization-based approach (LR) is used to solve the mathematical programming problem of maximizing system revenue. In our proposed cloud computing framework (cloudlets and core clouds) in 5G network slices, two primary research questions are considered. What is the most efficient algorithm for resource allocation within admission control, resource scheduling, and load-balancing policies in a limited-resource cloud computing environment? How considerably is the system influenced by the HA of VMs? Additionally, the problem is addressed under rapidly increasing data traffic and a limited resource pool to obtain near-optimal policies using a combination of LR approaches. The solutions satisfy QoS-related constraints from both transmission and computation perspectives to fulfill throughput, delay, and delay-jitter requirements, and they are compared with other resource management schemes for efficiency and effectiveness.

3. Mathematical Formulation
A mathematical model is proposed for the optimization programming problem. It focuses on developing a well-designed algorithm in a parallel computing environment for admission control, resource scheduling, and HA arrangement to maximize cloud service provider profits in 5G network slices. Based on the combined cloudlet and core cloud network architecture (Figure 1), VMs with VNFs are requested by various applications and require specific resources. They should be allocated to a server $k$ with computation and transmission capacities concerning CPU, RAM, storage, and internal or shared bandwidth, expressed as $P_k$, $M_k$, $H_k$, $B^n_k$, and $B^c$, respectively, in the cloudlets and core clouds. The proposed model in the orchestrator supports network resource management across the cloudlet and core cloud environments, assuming that the resource management strategies are followed when facing a batch of requests. Table 4 and Table 5 present lists of the given parameters and decision variables.
The system's profit is the combination of the rewards of admitting applications, the penalties of rejecting the other applications, and the cost of turning on the servers.
The objective function $Z_{IP}$ is shown in Equation (1); its goal is to maximize system profits over all VMs requested by the applications.
$$Z_{IP} = \sum_{i \in I} R_i\, y_i - \sum_{i \in I} P_i (1 - y_i) - \sum_{s \in S} \beta_s\, z_s$$
To acquire the highest revenue, the optimization problem is shown as:
Objective function:
$$\max Z_{IP} = \min (-Z_{IP})$$
subject to:
$$\begin{aligned}
&\text{C1:} && (|W_i| + |T_i|)\, y_i \le \sum_{j \in W_i} \sum_{s \in S} a_{ijs} + \sum_{\ell \in T_i} \sum_{s \in S} b_{i\ell s}, && \forall i \in I,\\
&\text{C2:} && \sum_{s \in S} a_{ijs} \le 1, && \forall i \in I, \forall j \in W_i,\\
&\text{C3:} && \sum_{s \in S} b_{i\ell s} \le 1, && \forall i \in I, \forall \ell \in T_i,\\
&\text{C4:} && \sum_{i \in I} \sum_{j \in W_i} a_{ijs} + \sum_{i \in I} \sum_{\ell \in T_i} b_{i\ell s} \le V_s, && \forall s \in S,\\
&\text{C5:} && \sum_{i \in I} \sum_{j \in W_i} D_{ij}\, a_{ijs} + \sum_{i \in I} \sum_{\ell \in T_i} D_{i\ell}\, b_{i\ell s} \le E_s, && \forall s \in S,\\
&\text{C6:} && \sum_{i \in I} \sum_{j \in W_i} G_{ij}\, a_{ijs} + \sum_{i \in I} \sum_{\ell \in T_i} G_{i\ell}\, b_{i\ell s} \le M_s, && \forall s \in S,\\
&\text{C7:} && \sum_{i \in I} \sum_{j \in W_i} Q_{ij}\, a_{ijs} + \sum_{i \in I} \sum_{\ell \in T_i} Q_{i\ell}\, b_{i\ell s} \le H_s, && \forall s \in S,\\
&\text{C8:} && \sum_{i \in I} \sum_{j \in W_i} \sum_{\ell \in T_i} X_{ij\ell}\, f_{ij\ell s} \le B^n_s, && \forall s \in S,\\
&\text{C9:} && \sum_{i \in I} \sum_{j \in W_i} \sum_{\ell \in T_i} X_{ij\ell}\, (1 - f_{ij\ell s}) \le B^c, && \forall s \in S,\\
&\text{C10:} && a_{ijs} \le z_s, && \forall i \in I, \forall j \in W_i, \forall s \in S,\\
&\text{C11:} && b_{i\ell s} \le z_s, && \forall i \in I, \forall \ell \in T_i, \forall s \in S,\\
&\text{C12:} && a_{ijs} + b_{i\ell s} \le f_{ij\ell s} + 1, && \forall i \in I, \forall j \in W_i, \forall \ell \in T_i, \forall s \in S.
\end{aligned}$$
where C1–C3 belong to the admission control and assignment constraints, C4–C9 belong to the capacity constraints, and C10–C12 belong to the HA constraints.
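To make the notation concrete, the objective of Equation (1) can be evaluated directly from the decision variables. The following Python sketch illustrates this; the dictionary-keyed parameter containers are an assumption of this illustration, not the paper's implementation:

```python
def objective_zip(R, P, beta, y, z):
    """Z_IP = sum_i R_i*y_i - sum_i P_i*(1 - y_i) - sum_s beta_s*z_s:
    rewards of admitted applications, minus penalties of rejected
    applications, minus the cost of powered-on servers."""
    reward  = sum(R[i] * y[i] for i in R)
    penalty = sum(P[i] * (1 - y[i]) for i in P)
    cost    = sum(beta[s] * z[s] for s in beta)
    return reward - penalty - cost
```

For instance, admitting application 1 (reward 10), rejecting application 2 (penalty 3), and powering on one server (cost 4) yields a profit of 3.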
For the admission control and assignment constraints, $(|W_i| + |T_i|)$ is the total number of standard and additional HA VMs required by an application, in which the two kinds of VMs form disjoint sets, i.e., $W_i \cap T_i = \emptyset$, in constraint C1. $\sum_{s \in S} a_{ijs}$ is the number of standard VMs that are admitted and allocated to servers, and $\sum_{s \in S} b_{i\ell s}$ is the number of admitted and allocated HA VMs. The total number of allocated standard and HA VMs should be greater than or equal to the requirement, as shown in constraint C1. That is, when application $i$ is completely served, the sum of $a_{ijs}$ and $b_{i\ell s}$ is greater than or equal to the demand term of constraint C1.
Constraints C2 and C3 restrict $\sum_{s \in S} a_{ijs}$ and $\sum_{s \in S} b_{i\ell s}$ to be less than or equal to one, which means each VM is assigned to at most one server. In other words, a virtual machine cannot be partially allocated across servers.
For the capacity constraints, the resources offered by a server are defined as a set of four factors: the maximum number of VMs in server $s$ ($V_s$), the processing capacity of each CPU core ($E_s$), the RAM capacity ($M_s$), and the storage capacity ($H_s$). The total resources required by the VMs on each server cannot exceed its available resources, as formulated in constraints C4–C7.
We also set the internal bandwidth rate ($B^n_s$) in server $s$ for internal transmission between VMs. For example, consider two applications, $i$ and $i+1$, whose index sets of VMs are $W_i = \{1_i, 2_i\}$, $T_i = \{3_i\}$, $W_{i+1} = \{1_{i+1}, 2_{i+1}\}$, and $T_{i+1} = \{3_{i+1}\}$, as shown in Figure 2. The link between VM $3_i$ and VM $2_{i+1}$ represents the internal bandwidth required by VM $3_i$ to connect to VM $2_{i+1}$ in server $s$. Constraint C8 indicates that the total bandwidth required by all VMs should not exceed the internal bandwidth $B^n_s$ of server $s$. The external bandwidth rate ($B^c$) is set for the transmission between servers in the cloud computing environment (cloudlets and core clouds), as illustrated in Figure 2. The link between server $s-1$ and server $s$ represents the bandwidth requested by VM $1_i$ to connect to VM $3_i$. Constraint C9 indicates that the total bandwidth required by all VMs accepted by the servers in a cloud should not exceed the external bandwidth.
In this work, we assume that the two kinds of VMs of the same application, standard VMs and HA VMs, cannot be assigned to the same server, as shown in Figure 2. In the beginning, all servers contain no VMs. Application $i$ arrives and needs two standard VMs and one HA VM, as mentioned in the previous paragraph. First, we place the two standard VMs, VM $1_i$ and VM $2_i$, in server $s-1$; since the two kinds of VMs cannot share a server, VM $3_i$ must be assigned to server $s$. Next, application $i+1$ arrives, and we place its standard VMs $1_{i+1}$ and $2_{i+1}$ in server $s-1$ and server $s$, respectively, after which server $s-1$ and server $s$ have no residual capacity for one more VM. Meanwhile, the HA VM $3_{i+1}$ cannot be placed in server $s-1$ or server $s$ due to our exclusivity assumption. Thus, VM $3_{i+1}$ is assigned to server $s+1$.
For the HA configuration constraints, the decision variable $f_{ij\ell s}$ for application $i$ is set according to the VM assignments. The VMs indexed by $j \in W_i$ and $\ell \in T_i$ are of different kinds and must be separated into different servers; this relationship is expressed in constraint C12. Constraints C10 and C11 indicate the server power status: if any VM is admitted and assigned to server $s$, server $s$ must be powered on and $z_s$ set to one. Constraint C12 ensures that when VMs $j$ and $\ell$ are assigned to the same server $s$, i.e., when $a_{ijs}$ and $b_{i\ell s}$ are both one, the exclusive setting $f_{ij\ell s}$ must also be one. In other words, a standard VM and an HA VM requested by the same application cannot be allocated to the same server $s$.
4. Lagrangian Relaxation-Based Solution Processes
The Lagrangian relaxation method is proposed for solving large-scale mathematical programming problems, including optimization problems with linear, integer, and nonlinear programming problems in many practical applications [26]. The key idea is to relax complicated constraints into a primal optimization problem and extend feasible solution regions to simplify the primal problem. Based on the relaxation, the primal problem is transformed into an LR problem associated with Lagrangian multipliers [27,28,29]. Figure 3 illustrates the six procedural steps in the LR method.
4.1. Procedures of Step 1: Relaxation
To separate the feasible region of the primal problem into several subproblems with independent sets of decision variables, the primal problem is transformed into an LR problem. The relaxation method associated with Lagrangian multipliers is applied to constraints C1 and C4–C12 in Step 1, as presented in Figure 3. Then, constraints C1 and C4–C12, weighted by their multipliers, are added to the primal problem in Equation (1), as shown in Equation (2) and denoted as $Z_{LR}$.
$$\begin{aligned}
Z_{LR} = {} & -\sum_{i \in I} R_i\, y_i + \sum_{i \in I} P_i (1 - y_i) + \sum_{s \in S} \beta_s\, z_s\\
& + \sum_{i \in I} \mu^1_i \Big[(|W_i| + |T_i|)\, y_i - \sum_{j \in W_i} \sum_{s \in S} a_{ijs} - \sum_{\ell \in T_i} \sum_{s \in S} b_{i\ell s}\Big]\\
& + \sum_{s \in S} \mu^2_s \Big[\sum_{i \in I} \sum_{j \in W_i} a_{ijs} + \sum_{i \in I} \sum_{\ell \in T_i} b_{i\ell s} - V_s\Big]\\
& + \sum_{s \in S} \mu^3_s \Big[\sum_{i \in I} \sum_{j \in W_i} D_{ij}\, a_{ijs} + \sum_{i \in I} \sum_{\ell \in T_i} D_{i\ell}\, b_{i\ell s} - E_s\Big]\\
& + \sum_{s \in S} \mu^4_s \Big[\sum_{i \in I} \sum_{j \in W_i} G_{ij}\, a_{ijs} + \sum_{i \in I} \sum_{\ell \in T_i} G_{i\ell}\, b_{i\ell s} - M_s\Big]\\
& + \sum_{s \in S} \mu^5_s \Big[\sum_{i \in I} \sum_{j \in W_i} Q_{ij}\, a_{ijs} + \sum_{i \in I} \sum_{\ell \in T_i} Q_{i\ell}\, b_{i\ell s} - H_s\Big]\\
& + \sum_{s \in S} \mu^6_s \Big[\sum_{i \in I} \sum_{j \in W_i} \sum_{\ell \in T_i} X_{ij\ell}\, f_{ij\ell s} - B^n_s\Big]\\
& + \sum_{s \in S} \mu^7_s \Big[\sum_{i \in I} \sum_{j \in W_i} \sum_{\ell \in T_i} X_{ij\ell}\, (1 - f_{ij\ell s}) - B^c\Big]\\
& + \sum_{i \in I} \sum_{j \in W_i} \sum_{s \in S} \mu^8_{ijs}\, \big[a_{ijs} - z_s\big] + \sum_{i \in I} \sum_{\ell \in T_i} \sum_{s \in S} \mu^9_{i\ell s}\, \big[b_{i\ell s} - z_s\big]\\
& + \sum_{i \in I} \sum_{j \in W_i} \sum_{\ell \in T_i} \sum_{s \in S} \mu^{10}_{ij\ell s}\, \big[a_{ijs} + b_{i\ell s} - f_{ij\ell s} - 1\big]
\end{aligned}$$
Then, the optimization problem can be reformulated as:
Objective function: $\min Z_{LR}$, subject to: C2, C3,
where $y_i \in \{0,1\}$, $a_{ijs} \in \{0,1\}$, $b_{i\ell s} \in \{0,1\}$, $f_{ij\ell s} \in \{0,1\}$, and $z_s \in \{0,1\}$.
4.2. Procedures of Steps 2 and 3: Decomposition and Solving Subproblems

The LR problem can be decomposed into several independent subproblems, each with its related decision variables. A divide-and-conquer approach is used to solve the subproblems correspondingly.
4.2.1. Subproblem 1 (Related to $y_i$)
By extracting items with decision variable $y_i$, the optimization problem of Subproblem 1 can be developed as:
Objective function:
$$\min \sum_{i \in I} \big(-R_i - P_i + \mu^1_i |W_i| + \mu^1_i |T_i|\big)\, y_i$$
subject to: $y_i \in \{0,1\}$.
Equation (3) can be divided into $|I|$ independent subproblems. For each application $i \in I$, the decision variable $y_i$ is set to one when the coefficient $(-R_i - P_i + \mu^1_i |W_i| + \mu^1_i |T_i|)$ is less than zero; otherwise, $y_i$ is set to zero. The run time is $O(|I|)$, and the pseudocode is illustrated in Algorithm 1.
Algorithm 1: Subproblem 1.
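Since the rule of Algorithm 1 reduces to a sign test on each application's coefficient, it can be sketched in a few lines of Python; the dictionary-keyed parameter containers are an assumption of this illustration:

```python
def solve_subproblem1(R, P, mu1, W, T):
    """Set y_i = 1 iff the coefficient
    (-R_i - P_i + mu1_i * (|W_i| + |T_i|)) is negative; O(|I|) overall."""
    y = {}
    for i in R:
        coeff = -R[i] - P[i] + mu1[i] * (len(W[i]) + len(T[i]))
        y[i] = 1 if coeff < 0 else 0
    return y
```

For example, with $R_1 = 10$, $P_1 = 2$, $\mu^1_1 = 3$, $|W_1| = 2$, and $|T_1| = 1$, the coefficient is $-12 + 9 = -3 < 0$, so $y_1 = 1$.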
4.2.2. Subproblem 2 (Related to $a_{ijs}$)
By extracting items with decision variable $a_{ijs}$, the optimization problem of Subproblem 2 can be developed as:
Objective function:
$$\min \sum_{i \in I} \sum_{j \in W_i} \sum_{s \in S} \Big(-\mu^1_i + \mu^2_s + \mu^3_s D_{ij} + \mu^4_s G_{ij} + \mu^5_s Q_{ij} + \mu^8_{ijs} + \sum_{\ell \in T_i} \mu^{10}_{ij\ell s}\Big)\, a_{ijs}$$
subject to C2: $\sum_{s \in S} a_{ijs} \le 1, \forall i \in I, \forall j \in W_i$, and $a_{ijs} \in \{0,1\}$.
Equation (4) can be divided into $|I||W_i||S|$ cases. The decision variable $a_{ijs}$ is set to one when the coefficient $(-\mu^1_i + \mu^2_s + \mu^3_s D_{ij} + \mu^4_s G_{ij} + \mu^5_s Q_{ij} + \mu^8_{ijs} + \sum_{\ell \in T_i} \mu^{10}_{ij\ell s})$ is negative and minimal over the subindex $s$ for the given $i$ and $j$. To satisfy constraint C2, the decision variable is set to one only for the server $s$ with the minimum coefficient, which requires a sorting (or minimum-search) process over $s$; otherwise, $a_{ijs}$ is set to zero. The run time is $O(|I||W_i||S||T_i|)$. The pseudocode is illustrated in Algorithm 2.
Algorithm 2: Subproblem 2.
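The rule of Algorithm 2 amounts to a minimum search over servers followed by a sign test; it can be sketched as follows in Python, where the nested-dictionary layout of the multipliers and parameters is an assumption of this illustration:

```python
def solve_subproblem2(I, W, T, S, mu, D, G, Q):
    """For each (i, j), pick the server s with the minimum coefficient;
    set a_ijs = 1 only if that minimum is negative, so constraint C2
    (at most one server per VM) is satisfied by construction."""
    a = {}
    for i in I:
        for j in W[i]:
            best_s, best_c = None, 0.0  # only coefficients < 0 qualify
            for s in S:
                c = (-mu['1'][i] + mu['2'][s] + mu['3'][s] * D[i][j]
                     + mu['4'][s] * G[i][j] + mu['5'][s] * Q[i][j]
                     + mu['8'][(i, j, s)]
                     + sum(mu['10'][(i, j, l, s)] for l in T[i]))
                if c < best_c:
                    best_s, best_c = s, c
            for s in S:
                a[(i, j, s)] = 1 if s == best_s else 0
    return a
```

With two candidate servers whose coefficients are both negative, the VM is assigned only to the server with the smaller (more negative) coefficient.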
4.2.3. Subproblem 3 (Related to $b_{i\ell s}$)
By extracting items with decision variable $b_{i\ell s}$, the optimization problem of Subproblem 3 can be developed as:
Objective function:
$$\min \sum_{i \in I} \sum_{\ell \in T_i} \sum_{s \in S} \Big(-\mu^1_i + \mu^2_s + \mu^3_s D_{i\ell} + \mu^4_s G_{i\ell} + \mu^5_s Q_{i\ell} + \mu^9_{i\ell s} + \sum_{j \in W_i} \mu^{10}_{ij\ell s}\Big)\, b_{i\ell s}$$
subject to C3: $\sum_{s \in S} b_{i\ell s} \le 1, \forall i \in I, \forall \ell \in T_i$, and $b_{i\ell s} \in \{0,1\}$.
The solution process of Equation (5) is similar to that of Equation (4) and can also be divided into $|I||T_i||S|$ subproblems. The decision variable $b_{i\ell s}$ is set to one when the coefficient $(-\mu^1_i + \mu^2_s + \mu^3_s D_{i\ell} + \mu^4_s G_{i\ell} + \mu^5_s Q_{i\ell} + \mu^9_{i\ell s} + \sum_{j \in W_i} \mu^{10}_{ij\ell s})$ is negative and minimal over the subindex $s$ for the given $i$ and $\ell$; otherwise, $b_{i\ell s}$ is set to zero. The run time of this subproblem is $O(|I||T_i||S||W_i|)$. The pseudocode is illustrated in Algorithm 3.
Algorithm 3: Subproblem 3.
4.2.4. Subproblem 4 (Related to $f_{ij\ell s}$)
By extracting items with decision variable $f_{ij\ell s}$, the optimization problem of Subproblem 4 can be developed as:
Objective function:
$$\min \sum_{i \in I} \sum_{j \in W_i} \sum_{\ell \in T_i} \sum_{s \in S} \big(\mu^6_s X_{ij\ell} - \mu^7_s X_{ij\ell} - \mu^{10}_{ij\ell s}\big)\, f_{ij\ell s}$$
subject to: $f_{ij\ell s} \in \{0,1\}$.
Equation (6) can be divided into $|I||W_i||S||T_i|$ cases. For the subindices $i$, $j$, $\ell$, and $s$, the decision variable $f_{ij\ell s}$ is set to one when the coefficient $(\mu^6_s X_{ij\ell} - \mu^7_s X_{ij\ell} - \mu^{10}_{ij\ell s})$ is less than zero; otherwise, $f_{ij\ell s}$ is set to zero. The run time is $O(|I||W_i||S||T_i|)$. The pseudocode is illustrated in Algorithm 4.
Algorithm 4: Subproblem 4.
4.2.5. Subproblem 5 (Related to $z_s$)
By extracting items with decision variable $z_s$, the optimization problem of Subproblem 5 can be developed as:
Objective function:
$$\min \sum_{s \in S} \Big[\beta_s - \sum_{i \in I} \Big(\sum_{j \in W_i} \mu^8_{ijs} + \sum_{\ell \in T_i} \mu^9_{i\ell s}\Big)\Big]\, z_s$$
subject to: $z_s \in \{0,1\}$.
Equation (7) can be divided into $|S|$ independent cases. In each case with subindex $s$, the decision variable $z_s$ is set to one when the coefficient $[\beta_s - \sum_{i \in I}(\sum_{j \in W_i} \mu^8_{ijs} + \sum_{\ell \in T_i} \mu^9_{i\ell s})]$ is less than zero; otherwise, $z_s$ is set to zero. Evaluating the coefficients requires summing over all applications and their VM sets, so the run time is $O(|S||I||W_i|)$ or $O(|S||I||T_i|)$, and the pseudocode is illustrated in Algorithm 5.
Algorithm 5: Subproblem 5.
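The rule of Algorithm 5 compares each server's fixed cost against the accumulated multipliers of the VMs that could use it; an illustrative Python sketch (data-structure layout assumed) is:

```python
def solve_subproblem5(S, I, W, T, beta, mu8, mu9):
    """Turn server s on (z_s = 1) iff its fixed cost beta_s is
    outweighed by the sum of multipliers mu8 and mu9 over all VMs."""
    z = {}
    for s in S:
        gain = sum(sum(mu8[(i, j, s)] for j in W[i])
                   + sum(mu9[(i, l, s)] for l in T[i]) for i in I)
        z[s] = 1 if beta[s] - gain < 0 else 0
    return z
```

For example, a server with fixed cost 5 and accumulated multipliers totaling 7 is powered on, since the coefficient $5 - 7 = -2$ is negative.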
4.3. Procedure of Step 4: Dual Problem and the Subgradient Method
According to the weak Lagrangian duality theorem [30], the objective values of the LR problem $Z_{LR}$ are lower bounds (LBs) of the primal problem $Z_{IP}$ for any multipliers $\mu^1_i, \mu^2_s, \mu^3_s, \mu^4_s, \mu^5_s, \mu^6_s, \mu^7_s, \mu^8_{ijs}, \mu^9_{i\ell s}, \mu^{10}_{ij\ell s} \ge 0$, $\forall i \in I, \forall j \in W_i, \forall \ell \in T_i, \forall s \in S$. The dual problem (D) is constructed to calculate the tightest LB ($\max Z_D$), where $\max Z_D = \min Z_{LR}$. The dual problem can then be formulated as:
Objective function: $\max Z_D$, subject to: $\mu^1 \ge 0$, $\mu^2 \ge 0$, $\mu^3 \ge 0$, $\mu^4 \ge 0$, $\mu^5 \ge 0$, $\mu^6 \ge 0$, $\mu^7 \ge 0$, $\mu^8 \ge 0$, $\mu^9 \ge 0$, $\mu^{10} \ge 0$.
The subgradient method is commonly used for solving the dual problem by iteratively updating the Lagrangian multipliers [26,31,32].
First, let the vector $S$ be a subgradient of $Z_D$. In the $q$th iteration of the subgradient optimization procedure, the multiplier vector $\pi^q = (\mu^{1,q}, \mu^{2,q}, \mu^{3,q}, \mu^{4,q}, \mu^{5,q}, \mu^{6,q}, \mu^{7,q}, \mu^{8,q}, \mu^{9,q}, \mu^{10,q})$ is updated by $\pi^{q+1} = \pi^q + t^q S^q$. The step size $t^q$ is determined by $t^q = \lambda\,(Z^h_{IP} - Z_D(\pi^q)) / \|S^q\|^2$, where the subgradient $S^q$ is obtained by evaluating the relaxed constraints at the decision-variable values of the $q$th iteration. $Z^h_{IP}$ is the primal objective value in the $h$th iteration, $Z_D(\pi^q)$ is the objective value of the dual problem in the $q$th iteration, and $\lambda$ is a constant with $0 \le \lambda \le 2$. Accordingly, the optimal objective value of the dual problem is obtained iteratively.
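One iteration of this update can be sketched as follows in Python. The projection onto the nonnegative orthant reflects the requirement that all multipliers remain nonnegative; the flat-list representation of $\pi$ and $S^q$ is an assumption of this sketch:

```python
def subgradient_step(pi, subgrad, z_primal, z_dual, lam=2.0):
    """One multiplier update: pi <- max(0, pi + t * subgrad), with step
    size t = lam * (Z_IP^h - Z_D(pi)) / ||subgrad||^2."""
    norm_sq = sum(g * g for g in subgrad)
    if norm_sq == 0:
        return pi  # subgradient vanished; multipliers need no update
    t = lam * (z_primal - z_dual) / norm_sq
    # projection keeps the multipliers nonnegative
    return [max(0.0, p + t * g) for p, g in zip(pi, subgrad)]
```

With $\pi^q = (1, 0)$, $S^q = (1, -1)$, $Z^h_{IP} = 4$, $Z_D = 2$, and $\lambda = 1$, the step size is $t = 1$ and the projected update gives $\pi^{q+1} = (2, 0)$.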
4.4. Procedure of Step 5: Obtaining the Primal Feasible Solutions
Applying the LR and subgradient methods to solve the LR and dual problems yields a theoretical LB on the primal objective, and crucial information for constructing primal feasible solutions can be identified [28]. The feasible region of a mathematical programming problem is defined by the solutions that satisfy all constraints; the set of primal feasible solutions to $Z_{IP}$ is a subset of the solutions to $Z_{LR}$ or $Z_D$, which may violate the relaxed constraints. Several alternative methods can be used to obtain primal feasible solutions from observations of these relaxed solutions. For example, Figure 4 presents an experimental case. The green curve represents the process of obtaining the primal feasible solutions iteratively; the objective is to identify the minimum value of the primal problem ($Z^h_{IP}$). The LBs are then determined using the subgradient method to iteratively obtain the tightest LB ($\max Z_D$), represented by the purple line. The proposed resource management approach for obtaining primal feasible solutions is called the drop-and-add algorithm. It is a heuristic-based allocation mechanism for optimizing the objective function, searching for a solution that satisfies not only all the user demands but also the constraints. The initial solution is obtained using next-fit (NF) or first-fit (FF), which also serve as baselines for evaluating solution quality [33]. The proposed algorithm, FF, and NF are implemented simultaneously for result comparison.
4.5. Procedure of Step 6: Drop-and-Add Algorithm
Lagrangian multipliers determined from the dual problem provide significant information for evaluating the sensitivity of objective value improvement [26,28,30,32]. Through the subproblems, the integrated weighting factor of an application is represented in (3) as $(-R_i - P_i + \mu^1_i |W_i| + \mu^1_i |T_i|)$; the sum of these values can therefore be used as an index of the significance of application $i$. The corresponding multipliers determine the ordering for the admission, assignment, and scheduling decision variables. The other decision variables, $a_{ijs}$ and $b_{i\ell s}$, and the assignment of indices $j$, $\ell$, and $s$ can be regarded as two bin-packing problems: VM $j$ can be packed into server $s$, whereas VM $\ell$ can be packed into any server except server $s$ to comply with the HA conditions for $b_{i\ell s}$. For performance evaluation, NF is adopted to sequentially perform the assignment for $a_{ijs}$ and then $b_{i\ell s}$. The flowchart of the drop-and-add algorithm is shown in Figure 5.
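As a reference for the NF baseline used in the assignment step, a minimal Python sketch of next-fit packing is given below; the single scalar capacity per bin is a simplifying assumption of this illustration:

```python
def next_fit(items, capacity):
    """Next-fit bin packing: keep only the current bin open; when an
    item does not fit, close the bin and open a new one."""
    bins, current = [], []
    for size in items:
        if sum(current) + size > capacity:
            bins.append(current)  # close the current bin
            current = []
        current.append(size)
    if current:
        bins.append(current)
    return bins
```

Unlike first-fit, next-fit never revisits a closed bin, so it is faster but may open more bins for the same item sequence.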
5. Computational Experiments
A cloud service provider's resource parameters and the applications' demand attributes were simulated in a cloud computing experimental environment and are presented in Table 6. $U(x, y)$ denotes a number uniformly distributed between the parameters $x$ and $y$.
The algorithms were constructed and implemented to analyze solution quality and improvement ratios (IRs) in several simulation cases. Solution quality is defined as the objective value gap (denoted as GAP) between the proposed algorithm and the LR problem, expressed as $GAP = \frac{|V_{Drop\&Add} - V_{LR}|}{|\max(V_{Drop\&Add},\, V_{LR})|} \times 100\%$, where $V_{Drop\&Add}$ is the objective value of the drop-and-add algorithm and $V_{LR}$ is the objective value of the LR problem. The improvement ratios are expressed as $IR_{NF} = \frac{V_{Drop\&Add} - V_{NextFit}}{|\max(V_{NextFit},\, V_{Drop\&Add})|} \times 100\%$ and $IR_{FF} = \frac{V_{Drop\&Add} - V_{FirstFit}}{|\max(V_{FirstFit},\, V_{Drop\&Add})|} \times 100\%$, where $V_{NextFit}$ and $V_{FirstFit}$ are the objective values of the NF and FF algorithms, respectively. The experiments were developed in Python and run in a VM on a workstation with a quad-core CPU, 8 GB RAM, and Ubuntu 14.04. The traffic loads of tasks and their arrival time intervals were randomly generated. The experimental environment was initialized as a data center including edge and core clouds. VMs were requested by applications to represent computing requirements. Based on the resource admission control strategies and scheduling models, the VMs were packed into the corresponding servers in the cloud computing environments.
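The GAP and IR metrics translate directly into code; the following Python sketch is illustrative, with hypothetical objective values in the example:

```python
def gap(v_drop_add, v_lr):
    """Solution-quality gap (%) between drop-and-add and the LR bound."""
    return abs(v_drop_add - v_lr) / abs(max(v_drop_add, v_lr)) * 100.0

def improvement_ratio(v_drop_add, v_baseline):
    """Improvement ratio (%) of drop-and-add over an NF or FF baseline."""
    return (v_drop_add - v_baseline) / abs(max(v_baseline, v_drop_add)) * 100.0
```

For instance, a drop-and-add value of 80 against an LR bound of 100 gives a GAP of 20%, and against a baseline value of 40 it gives an IR of 50%.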
5.1. Performance Evaluation Case: Traffic Load
This experiment was designed to analyze the VMs requested by applications in a time slot, which can be interpreted as a snapshot of the system load. First, the trend of the objective function with the number of applications as the control variable was examined. The result in Figure 6 indicates that the objective value of (IP) increases with the number of applications. The drop-and-add and LB values were almost the same in some cases (20–80 applications), indicating that a near-optimal solution was determined, with a GAP of less than 0.80% (the minimum was 0.60%). Table 7 compares drop-and-add with NF and FF (higher values are preferable) in the primal problem. The maximum IR was 89.48%, with 180 applications, for both FF and NF. With NF or FF, the penalty of unsatisfied applications significantly worsened the objective value when arriving applications exceeded the system capacity, in the cases with more than 100 applications. In contrast, the drop-and-add algorithm selects valuable applications to pack into servers, which results in a higher objective value than NF or FF in those cases.
5.2. Performance Evaluation Case: Number of Servers
The cost function is modeled as the capital expenditure (CAPEX) of the cloudlets and core clouds, related to deploying appropriate levels of servers in the network infrastructure, where the budget is a significant constraint. The cost difference between high- and low-end servers is also typically significant. Service providers generally conduct a comprehensive system analysis to determine how to manage, control, and operate a system appropriately; plans such as turning servers on or off provide sufficient QoS to applications efficiently and effectively. In this experiment, the purchased servers were deployed at four levels with different capacities; nonhomogeneous servers were deployed under the same limited budget. Rapid increases in data traffic were represented as traffic loads to determine which method delivered superior QoS to applications at affordable costs while preserving revenue. Figure 7 and Table 8 present the results of the compared methods (drop-and-add, LB, NF, and FF). The drop-and-add algorithm attained the best objective value (higher is preferable) compared with NF and FF in the case of 40 servers.
Regarding the GAP, values in the other scenarios were calculated to quantify the difference between drop-and-add and LB. The minimum GAP (44.62%) occurred in one of the cases with numerous servers (40). With many servers, the data center has sufficient resource space for the drop-and-add, NF, and FF algorithms. The maximum improvement ratio of the feasible solutions was 193.73%, also with numerous servers (i.e., 40). These results reveal that the resource allocation algorithm has a significant impact on system performance under limited resource constraints.

5.3. Performance Evaluation Case: Effect of HA
As far as crucial business requirements are concerned, cloud service providers should offer high-quality, highly available, high-performance, and reliable services. In this case, applications are divided into levels by using VM replication for HA requests. The VMs in the set T_i, requested with HA by application i, must be mutually exclusively allocated to different physical servers. The degree of exclusivity is called the dual-rate, and it was the parameter configured for HA VMs in the subsequent experiment. A dual-rate of 0.5 indicates that the number of HA VMs, |T_i|, requested by application i equals half of |W_i|, and these VMs are allocated to different servers; a dual-rate of 0.3 indicates that 30% of the standard VMs required by application i request HA capability. Figure 8 shows that the dual-rate significantly affected the application satisfaction rate. The drop-and-add algorithm offers more benefit (higher values are preferable) than NF or FF in all dual-rate cases. The improvement ratios increased significantly when the dual-rate exceeded 0.3, and the maximum IR was 608.39%. The drop-and-add algorithm achieved the flexibility and efficiency to obtain the maximum objective value under HA requests. As observed in Table 9, FF and NF performed poorly when the dual-rate was beyond 0.3: tasks were not assigned to appropriate servers with HA, which forced more servers to be turned on and generated additional cost.
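The exclusivity constraint described above (an application's HA VMs on pairwise-distinct servers) can be expressed as a simple placement check. The helper names below are illustrative, not from the paper:

```python
def ha_placement_valid(placement, ha_vms):
    """placement: dict mapping VM id -> server id; ha_vms: VM ids in T_i.
    Returns True iff all HA VMs of the application occupy distinct servers."""
    servers = [placement[v] for v in ha_vms]
    return len(servers) == len(set(servers))

def ha_count(num_standard_vms, dual_rate):
    """|T_i| = |W_i| * r per the dual-rate definition (rounded down)."""
    return int(num_standard_vms * dual_rate)
```

A higher dual-rate enlarges T_i and thus the number of anti-affinity constraints, which is why more servers must be powered on when the dual-rate grows.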
5.4. Performance Evaluation Case: Time Complexity Comparison
Table 10 shows the time complexity of the LR-based solution for resource management in a sliced network, together with an existing scheme, brute force, added for comparison; the brute-force complexity is far higher than that of the proposed LR-based algorithm, and the corresponding explanations in the table support this statement. The time complexity of the proposed LR-based solution is O(N|I||W_i||T_i||S|). The Lagrange dual solution is determined by Sub-problems (4)–(6), each solved by finding the minimum coefficient among the |S| servers; each sub-problem requires O(|I||W_i||T_i|) time to find that minimum. Since the sub-problems are solved individually by divide-and-conquer, the worst-case complexity among them dominates each iteration. The dual solutions are obtained after at most N iterations, giving O(N|I||W_i||T_i||S|) overall. Based on the experimental results in Figure 4, approximately 600 iterations were predefined for convergence. Fortunately, the given parameters and multipliers need not start from the same initial values: warm-starting from previous execution results achieves convergence in a small number of iterations. Furthermore, the proposed algorithm can output results at any completed iteration, so the running time can be controlled in practice.
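The iteration structure described here is the standard subgradient update for Lagrangian multipliers. The following generic sketch (the step-size rule and callback interfaces are illustrative assumptions, not the paper's exact formulation) shows why the total cost is the per-iteration subproblem cost times the iteration cap N:

```python
def subgradient_ascent(solve_dual, subgradient, num_multipliers,
                       max_iters=600, step0=1.0):
    """Generic LR multiplier update: mu_{k+1} = max(0, mu_k + t_k * g_k).

    solve_dual(mu)  -> dual objective value after solving the subproblems
    subgradient(mu) -> constraint-violation vector at the subproblem solution
    """
    mu = [0.0] * num_multipliers
    best = float("-inf")
    for k in range(1, max_iters + 1):
        value = solve_dual(mu)      # dominant cost: the decomposed subproblems
        best = max(best, value)     # keep the tightest dual bound seen so far
        g = subgradient(mu)
        step = step0 / k            # diminishing step size
        mu = [max(0.0, m + step * gi) for m, gi in zip(mu, g)]
    return best, mu
```

Because each iteration pays only the subproblem cost, bounding the iterations at N (about 600 here) caps the running time, and results can be emitted at any completed iteration.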
5.5. Discussion
To test the proposed algorithm, we designed three experiments involving changes in application demands, server costs, and dual-rates for HA with application requests, as summarized in Table 11. The research problem was formulated as a mathematical programming problem addressing both admission control and resource scheduling from a service provider perspective. Overall, the drop-and-add algorithm is the most effective for obtaining a near-optimal solution in the fewest iterations, and the optimization-based, energy-efficient admission control and resource allocation algorithms offer significant benefits in cloud computing systems. The significance of applications was inspired by the coefficient of (3), which sorts applications by a composition of application weights, client satisfaction, and high availability. Using the LR-based approach with multipliers to obtain primal feasible solutions therefore allocates VMs to servers efficiently and effectively. The mathematical formulation was decomposed into five subproblems, which were solved optimally with parallel computation techniques to substantially reduce computation time. The experimental results in Table 7, Table 8, Table 9, Table 10 and Table 11 confirm that the drop-and-add algorithm completes within a few minutes across the sliced-network scenarios. The following problems and limitations of this paper can be studied further: QoS requirements can be classified into more categories; resource sharing between multiple service providers is a new research problem that necessitates considering related issues, such as the sharing economy, from multiple network service provider perspectives; hence, operator pricing policies could be a worthwhile topic.
6. Summary and Conclusions

A mathematical programming model is used to develop resource management by simulating the cloud service provider role in cloud computing systems within 5G network slices. The model is solved using an LR-based approach. The proposed algorithm increases the flexibility of the cloud computing network infrastructure, including cloudlets and core clouds, to maximize rewards by admitting as many applications as possible. The gaps between upper and lower bounds in the computational experiments demonstrate the solution quality of the drop-and-add heuristic. The main contribution is demonstrating that the orchestrator's resource management algorithm can use Lagrangian multipliers to indicate task significance. A promising and straightforward resource allocation approach is proposed that combines client satisfaction and high availability for network planning in a sliced network. The developed optimization-based, efficient admission control and resource allocation algorithms are confirmed through various experimental cases; based on the experimental results, the proposed method shows excellent effectiveness and efficiency compared with the LB, FF, and NF solutions. The resource management mechanism enables network slicing as a service to operate efficiently and maximizes system revenue in 5G networks.
Notation | Description |
---|---|
S | Index set of physical servers in the cloud computing system (cloudlets and core clouds), where S = {1, 2, …, s, …, |S|}. |
I | Index set of applications on the cloud computing system, where I = {1, 2, …, i, …, |I|}. |
W | Index set of standard VMs, W = ⋃_{i=1}^{|I|} W_i and W_i = {1, 2, …, j, …, |W_i|}, where W_i is the index set of standard VMs required by application i, i ∈ I. |
T | Index set of VMs with high availability (HA), T = ⋃_{i=1}^{|I|} T_i and T_i = {1, 2, …, ℓ, …, |T_i|}, where T_i is the index set of VMs with HA required by application i, i ∈ I. |
N | Set of all VMs, N = ⋃_{i=1}^{|I|} N_i and N_i = W_i ∪ T_i, where N_i is the set of standard VMs and VMs with HA required by application i, i ∈ I. |
r | Dual-rate; the ratio of VMs with HA to standard VMs, formulated as |T_i| = |W_i| × r, where i ∈ I. |
B_s^n | Internal transmission bandwidth of server s, where s ∈ S. |
B_c | Shared transmission bandwidth within the cloud computing system (cloudlets and core clouds). |
E_s | Processing capability of each central processing unit (CPU) core in server s, where s ∈ S. |
M_s | Total random access memory (RAM) capacity of server s, where s ∈ S. |
H_s | Total storage capacity of server s, where s ∈ S. |
V_s | Maximum number of VMs allowable on server s, where s ∈ S. |
β_s | Cost rate for opening server s, where s ∈ S. |
R_i | Reward for admitting application i (application i can be admitted only if its demands on all resource types are fully satisfied), where i ∈ I. |
P_i | Penalty for rejecting application i (application i is rejected only if its requirements on all resource types cannot be fully satisfied), where i ∈ I. |
D_ij | Total CPU processing capability required by application i on VM j, where i ∈ I, j ∈ W_i. |
G_ij | Total RAM capacity required by application i on VM j, where i ∈ I, j ∈ W_i. |
Q_ij | Total storage capacity required by application i on VM j, where i ∈ I, j ∈ W_i. |
X_ijℓ | Total transmission channel capacity required by application i between VMs j and ℓ, where i ∈ I, j ∈ W_i, ℓ ∈ T_i. |
Notation | Description |
---|---|
y_i | Binary variable: 1 if application i is completely allocated to and served in the computing system, and 0 otherwise, where i ∈ I. |
a_ijs | Binary variable for task admission control and assignment: 1 if standard VM j of application i is admitted to the cloud computing network and allocated to server s, and 0 otherwise, where i ∈ I, j ∈ W_i, s ∈ S. |
b_iℓs | Binary variable for task admission control and assignment: 1 if HA VM ℓ of application i is admitted to the cloud computing network and allocated to server s, and 0 otherwise, where i ∈ I, ℓ ∈ T_i, s ∈ S. |
f_ijℓs | Binary variable for the exclusivity setting: 1 if VMs j and ℓ of application i are both allocated to server s, and 0 otherwise, where i ∈ I, j ∈ W_i, ℓ ∈ T_i, s ∈ S. |
z_s | Binary variable for server power status: 1 if server s is turned on, and 0 otherwise, where s ∈ S. |
Given Parameter | Value |
---|---|
Number of servers, |S| | 56–84 |
Number of applications, |I| | 24–52 |
Dual-rate, r | 0.5 |
Number of standard VMs for application i, |W_i| | |W_i| ∼ U(1, 10), ∀i ∈ I |
Number of VMs with HA for application i, |T_i| | |T_i| = |W_i| × r, ∀i ∈ I |
Total number of VMs for application i, |N_i| | |N_i| = |W_i| + |T_i|, ∀i ∈ I |
Host internal bandwidth capacity, B_s^n (Mbps) | 120–225 |
Shared transmission bandwidth within the cloud computing system, B_c (Mbps) | 1000 |
Host CPU processing capacity, E_s (GHz) | 480–900 |
Host memory capacity, M_s (GB) | 120–225 |
Host storage capacity, H_s (GB) | 1200–2250 |
Maximum number of VMs allowable on server s, V_s | 8–15 |
Cost rate for opening server s, β_s | β_s = (E_s + M_s + H_s + V_s)/40,000 |
Reward rate of each application, R_i | R_i ∼ Σ_j (D_ij/50), ∀i ∈ I |
Penalty rate of each application, P_i | P_i ∼ Σ_j (D_ij/100), ∀i ∈ I |
CPU requests of a task, D_ij (GHz) | D_ij ∼ U(1, 120), ∀i ∈ I, ∀j ∈ W_i |
Memory requests of a task, G_ij (GB) | G_ij ∼ U(1, 30), ∀i ∈ I, ∀j ∈ W_i |
Storage requests of a task, Q_ij (TB) | Q_ij ∼ U(1, 300), ∀i ∈ I, ∀j ∈ W_i |
Total transmission channel capacity, X_ijℓ (Mbps) | X_ijℓ ∼ U(0, 30), ∀i ∈ I, ∀j ∈ W_i, ∀ℓ ∈ T_i |
Bandwidth requests of a task, C_ijs (Mbps) | C_ijs ∼ U(1, 2000), ∀i ∈ I, ∀j ∈ W_i, ∀s ∈ S |
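For reproducibility, the application-side rows of the parameter table can be drawn with a small instance generator. The distribution bounds are copied from the table; the data structure and function name are illustrative, and the reward/penalty formulas follow the table's R_i ∼ Σ_j (D_ij/50) and P_i ∼ Σ_j (D_ij/100):

```python
import random

def generate_instance(num_apps, dual_rate=0.5, seed=0):
    """Draw one experiment instance following the parameter table's distributions."""
    rng = random.Random(seed)
    apps = []
    for _ in range(num_apps):
        w = rng.randint(1, 10)                          # |W_i| ~ U(1, 10)
        t = int(w * dual_rate)                          # |T_i| = |W_i| * r
        cpu = [rng.uniform(1, 120) for _ in range(w)]   # D_ij ~ U(1, 120) GHz
        mem = [rng.uniform(1, 30) for _ in range(w)]    # G_ij ~ U(1, 30) GB
        sto = [rng.uniform(1, 300) for _ in range(w)]   # Q_ij ~ U(1, 300) TB
        reward = sum(d / 50 for d in cpu)               # R_i ~ sum_j D_ij / 50
        penalty = sum(d / 100 for d in cpu)             # P_i ~ sum_j D_ij / 100
        apps.append({"W": w, "T": t, "cpu": cpu, "mem": mem,
                     "storage": sto, "reward": reward, "penalty": penalty})
    return apps
```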
Number of Applications | Drop-and-Add | LB | GAP (%) | NF | FF | IR_NF (%) | IR_FF (%) |
---|---|---|---|---|---|---|---|
20 | 123.743 | 124.760 | 0.80 | 123.687 | 123.687 | 0.05 | 0.05 |
40 | 337.910 | 340.080 | 0.64 | 337.910 | 337.910 | 0.00 | 0.00 |
60 | 506.729 | 509.780 | 0.60 | 506.729 | 506.729 | 0.00 | 0.00 |
80 | 623.692 | 627.760 | 0.65 | 623.523 | 623.523 | 0.02 | 0.02 |
100 | 772.445 | 780.680 | 1.06 | 736.595 | 736.595 | 4.87 | 4.87 |
120 | 934.295 | 987.845 | 5.73 | 670.535 | 670.535 | 39.33 | 39.33 |
140 | 916.985 | 1099.91 | 19.94 | 593.945 | 593.945 | 54.39 | 54.39 |
160 | 886.925 | 1110.70 | 25.23 | 537.905 | 537.905 | 64.89 | 64.89 |
180 | 860.705 | 1112.32 | 29.23 | 454.235 | 454.235 | 89.48 | 89.48 |
In some cases (20–80 applications), the lower bound (LB) value is extremely close to, but not equal to, that of the drop-and-add algorithm. In the other cases, the drop-and-add algorithm achieves a significant improvement ratio compared with NF or FF.
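The GAP and IR columns in the tables are consistent with the following definitions, reconstructed from the reported values (treat them as a reconstruction, not the paper's stated formulas); IR uses the baseline's absolute value so that negative NF/FF objectives, as in Table 8, still yield positive ratios:

```python
def gap_percent(primal, lower_bound):
    """GAP between the drop-and-add (primal feasible) value and the LB bound."""
    return (lower_bound - primal) / primal * 100

def ir_percent(drop_and_add, baseline):
    """Improvement ratio of drop-and-add over a baseline (NF or FF)."""
    return (drop_and_add - baseline) / abs(baseline) * 100
```

For example, the 180-application row reproduces GAP = 29.23% and IR = 89.48%, and the 20-server row of Table 8 reproduces IR = 145.55% despite the negative NF/FF objective.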
Number of Servers | Drop-and-Add | LB | GAP (%) | NF | FF | IR_NF (%) | IR_FF (%) |
---|---|---|---|---|---|---|---|
20 | 34.439 | 137.926 | 300.5 | −75.601 | −75.601 | 145.55 | 145.55 |
40 | 291.198 | 421.117 | 44.62 | 99.138 | 99.138 | 193.73 | 193.73 |
60 | 291.880 | 498.201 | 70.69 | 108.01 | 108.01 | 170.23 | 170.23 |
80 | 312.825 | 528.898 | 69.07 | 116.41 | 116.41 | 168.73 | 168.73 |
100 | 307.056 | 540.088 | 75.89 | 113.894 | 113.894 | 169.6 | 169.6 |
Dual-Rate | Drop-and-Add | LB | GAP (%) | NF | FF | IR_NF (%) | IR_FF (%) |
---|---|---|---|---|---|---|---|
0.1 | 384.827 | 387.200 | 0.61 | 384.827 | 384.827 | 0.00 | 0.00 |
0.2 | 416.531 | 419.040 | 0.60 | 416.531 | 416.531 | 0.00 | 0.00 |
0.3 | 441.730 | 452.618 | 2.46 | 351.926 | 351.926 | 25.52 | 25.52 |
0.4 | 438.900 | 480.493 | 9.48 | 202.224 | 202.224 | 117.04 | 117.04 |
0.5 | 383.708 | 492.960 | 28.47 | 158.026 | 158.026 | 142.81 | 142.81 |
0.6 | 269.938 | 315.366 | 16.83 | 38.106 | 38.106 | 608.39 | 608.39 |
Algorithm | Time Complexity | Annotation |
---|---|---|
Equation (3) | O(|I|) | |I| subproblems determine the value of the decision variable for each application i. |
Equation (4) | O(|I||W_i||S||T_i|) | |I||W_i| subproblems with summation over ℓ ∈ T_i, where i ∈ I, determine the binary decision variable a_ijs whose coefficient is minimized among the |S| servers. |
Equation (5) | O(|I||T_i||S||W_i|) | |I||T_i| subproblems with summation over j ∈ W_i, where i ∈ I, determine the binary decision variable b_iℓs whose coefficient is minimized among the |S| servers. |
Equation (6) | O(|I||W_i||S||T_i|) | |I||W_i||S||T_i| subproblems determine the decision variable f_ijℓs. |
Equation (7) | O(|S||I||W_i|) or O(|S||I||T_i|) | |S| subproblems with summation of μ_ijs^8 or μ_iℓs^9 determine the decision variable z_s. |
Dual problem (D) | O(N|I||W_i||T_i||S|) | N times the maximum complexity of Sub-problems (4)–(6). |
Drop-and-add algorithm | O(N|I||W_i||T_i||S|) | The algorithm adjusts the values of the decision variables y_i, a_ijs, b_iℓs, f_ijℓs, and z_s based on the dual problem; the complexity is the worst case over the subproblems' decision variables. |
FF | O(|I||W_i||T_i||S|) | Resource allocation. |
NF | O(|I||W_i||T_i||S|) | Resource allocation. |
Brute force | O(2^(|I|^4 |W_i|^2 |T_i|^2 |S|^4)) | The total number of combinations of decision variables. |
Traffic load (number of servers: 80, dual-rate: 0.5) | |||
---|---|---|---|
Number of applications | 60 | 120 | 180 |
min (FF, NF) | 0.0418 s. | 0.0826 s. | 0.1067 s. |
Proposed method | 373 s. | 744 s. | 1117 s. |
Number of servers (number of applications: 80, dual-rate: 0.5) | | | |
Number of servers | 40 | 80 | 100 |
min (FF, NF) | 0.213 s. | 0.159 s. | 0.269 s. |
Proposed method | 236 s. | 478 s. | 604 s. |
Effect of HA (number of applications: 80, number of servers: 80) | |||
Dual-rate | 0.2 | 0.4 | 0.6 |
min (FF, NF) | 0.059 s. | 0.246 s. | 0.430 s. |
Proposed method | 284 s. | 437 s. | 583 s. |
Author Contributions
Conceptualization, C.-H.H. and F.Y.-S.L.; Formal analysis, C.-H.H. and F.Y.-S.L.; Methodology, C.-H.H., F.Y.-S.L., Y.-C.S., Y.-S.W. and H.-Y.K.; Supervision, F.Y.-S.L., Y.-F.W. and Y.H.; Visualization, C.-H.H. and Y.-F.C.; Writing review and editing, C.-H.H., E.S.-H.F., Y.-F.C. and Y.-F.W. All authors have read and agreed to the published version of the manuscript.
Funding
This work was supported in part by Ministry of Science and Technology (MOST), Taiwan, under Grant Number MOST 109-2221-E-002-144 and MOST 110-2222-E-001-002.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Not applicable.
Conflicts of Interest
The authors declare no conflict of interest.
© 2021. This work is licensed under http://creativecommons.org/licenses/by/3.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
Abstract
A combined edge and core cloud computing environment is a novel solution in 5G network slices. Clients' high availability requirements are a challenge because they limit the possible admission control in front of the edge cloud. This work proposes an orchestrator with a mathematical programming model, from a global viewpoint, to solve resource management problems while satisfying clients' high availability requirements. A Lagrangian relaxation-based approach is adopted to solve the problems at a near-optimal level and increase system revenue. A promising and straightforward resource management approach and several experimental cases are used to evaluate efficiency and effectiveness. Preliminary results are presented as performance evaluations to verify the proposed approach's suitability for edge and core cloud computing environments. The proposed orchestrator significantly enables network slicing services and efficiently enhances clients' satisfaction with high availability.