Energy-Efficient Computation Offloading and Resource Allocation in Fog Computing for Internet of Everything

China Communications, March 2019

Qiuping Li, Junhui Zhao*, Yi Gong, Qingmiao Zhang

1 School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China

2 School of Information Engineering, East China Jiaotong University, Nanchang 330013, China

3 Shenzhen Engineering Laboratory of Intelligent Information Processing for IoT, Southern University of Science and Technology, Shenzhen 518055, China

Abstract: With the dawning of the Internet of Everything (IoE) era, more and more novel applications are being deployed. However, resource-constrained devices cannot fulfill the resource requirements of these applications. This paper investigates the computation offloading problem arising from the coexistence and synergy of fog computing and cloud computing in the IoE, by jointly optimizing the offloading decisions, the allocation of computation resources, and the transmit power. Specifically, we propose an energy-efficient computation offloading and resource allocation (ECORA) scheme to minimize the system cost. The simulation results verify that the proposed scheme can effectively decrease the system cost by up to 50% compared with existing schemes, especially when the computation resource of the fog is relatively small or the number of devices increases.

Keywords:fog computing; cloud computing; resource allocation; computation offloading; IoE

I. INTRODUCTION

With the rapid development of technologies in wireless communication and mobile sensing, the Internet of Everything (IoE) and the Internet of Things (IoT) have been proposed. The IoT is comprised of many different types of applications, devices, and items connected to the global Internet [1] [2]. When people themselves become nodes on the Internet, the IoT grows into the IoE [3]. Moreover, along with the advent of the IoE, more and more novel mobile applications are introduced and facilitated, such as automatic driving [4], smart grid [5], and augmented reality [6]. However, these novel applications usually consume extensive computation resources, which exceed the computing capabilities of IoE devices.

Cloud computing supplies a huge amount of computation resources. Devices can transfer their computation tasks to the cloud through mobile networks [7], which alleviates the computing stress on devices. However, offloading to the cloud also causes some problems, e.g., unacceptable latency [8] [9], extra transmission energy consumption [3] [10], and data breaches [11]. These deficiencies may challenge the future development of the IoE and degrade the user experience.

As a complement to cloud computing, edge computing has been proposed and has caught the attention of researchers. Edge computing provides cloud computing capacity in close proximity to mobile users, and in practical deployments it includes mobile edge computing (MEC) [7] and fog computing [13]. Since the difference is slight, and in line with most related literature, fog computing and MEC are not distinguished in this paper. Compared with cloud computing, fog computing can further reduce the energy consumption, delay, and data breaches in the process of computation offloading. Several works have studied energy-efficient resource management schemes for coordinating computation offloading among devices in fog computing [14]-[16]. In [14], the total cost of users is minimized by an evolutionary game. Ma et al. designed a computation offloading algorithm to minimize the system cost [15]. The authors in [16] minimized the energy consumption of users by designing an energy-efficient computation offloading scheme in 5G heterogeneous networks.

However, the limited computation capacity of a fog node cannot fully meet the growing computation offloading requirements. In particular, with the number of IoT devices expected to reach 24 billion by 2020 [3], this phenomenon will become more obvious. Therefore, fog computing and cloud computing are highly complementary [17], and utilizing both resources is important and necessary. In addition, with the widespread deployment of WLAN, devices can usually reach the Internet through more than one wireless access point (WAP) [18], e.g., in heterogeneous networks. In heterogeneous networks, devices not only need to decide whether or not to offload, but also need to choose a proper WAP to obtain a high data rate for offloading.

In this paper, we study the computation offloading and resource allocation problem for cloud collaborated fog computing in heterogeneous networks, which aims to minimize the system cost by jointly optimizing the computation offloading strategy, the transmission power, and the computation resource allocation. Moreover, to solve this problem, an energy-efficient computation offloading and resource allocation (ECORA) scheme is proposed. The proposed ECORA scheme decouples the computation offloading problem into sub-problems of resource allocation and offloading decisions. That is, the ECORA scheme works iteratively between offloading decisions and resource allocation to obtain the optimal computation offloading strategy, transmission power, and computation resource allocation. We also present numerical results to demonstrate the performance improvement achieved by the proposed ECORA scheme.

The rest of this paper is organized as follows. In Section II, we establish the system model and formulate an optimization problem to minimize the system cost. The proposed energy-efficient computation offloading and resource allocation (ECORA) scheme is presented in Section III. In Section IV, we give the simulation results of the proposed scheme. Finally, conclusions are presented in Section V.

Fig. 1. Three-layer integration architecture of the cloud computing, the fog computing, and the IoE.

II. SYSTEM MODEL AND PROBLEM FORMULATION

2.1 System model

As shown in figure 1, a novel three-layer integration architecture is established, including the cloud computing, the fog computing, and the IoE, where the IoE devices offload their computation tasks to the fog or the cloud via heterogeneous networks. Based on this architecture, the computation offloading problem for cloud collaborated fog computing is analyzed.

In this paper, we consider that the heterogeneous network includes a macro base station (MBS) acting as a fog node and M small base stations (SBSs). The service area of each SBS is overlaid by that of the MBS. The set of the MBS and SBSs is denoted as M = {1, 2, …, M, M+1}, where M1 = {1, 2, …, M} denotes the set of SBSs and M+1 represents the MBS. Between the SBSs and the MBS, there is a backhaul which relays the transmission from an SBS to the MBS. The MBS and SBSs operate in different frequency bands. Moreover, the spectrum is divided into K channels denoted as K = {1, 2, …, K}, where each channel is orthogonal to the others and each sub-channel can be assigned to at most one user to avoid interference among users. The bandwidth of each channel of BS m is denoted as B_m. Let N = {1, 2, …, N} denote the set of devices, where each device n has a computation task T_n = {D_n, L_n, t_n^max}, in which D_n represents the computation resource required by the task, L_n is the size of the input data, and t_n^max represents the maximum processing delay.

In this paper, we consider that the computation task is indivisible. Each task can be processed locally, offloaded to the fog, or accomplished on the cloud. Accordingly, we define the offloading decision set as Θ = {−M−1, −M, …, −1, 0, 1, …, M, M+1}, and the computation offloading strategy of the devices as S = {s_n | s_n = i, i ∈ Θ, n ∈ N}, where s_n = 0 if device n decides to execute its computation task locally, s_n = m (m ∈ M1) or s_n = −m (m ∈ M1) if device n offloads the task to the fog or the cloud through SBS m, respectively, and s_n = M+1 or s_n = −M−1 if device n offloads the task to the fog or the cloud through the MBS, respectively.

In the following, we discuss the system cost of local computing, of offloading the computation task to the fog, and of migrating the computation task to the cloud, respectively.

2.2 Cost under different offloading decisions

Local execution model: Let f_n^l represent the computation capability of device n, and let κ be the effective switched capacitance, which depends on the chip architecture [15]. Moreover, we assume κ = 0.55 × 10^-9. Subsequently, the processing time of task T_n when computed locally is

The energy consumption of local execution can be calculated as [14], [15]
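The two local-execution expressions above follow the model commonly used in [14], [15]; a minimal reconstruction, assuming f_n^l is the local CPU frequency defined above and D_n is counted in CPU cycles, is

t_n^l = D_n / f_n^l,    E_n^l = κ (f_n^l)^2 D_n.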

Fog computing execution model: Note that when device n offloads its task to the fog or the cloud, the input data L_n needs to be transmitted to the fog or the cloud via BS m, m ∈ M.

The uplink transmission rate of device n is denoted as r_nm, where p_nm^k represents the power of device n transmitting data to BS m on sub-channel k, N_0 is the noise power, g_nm denotes the channel power gain between device n and BS m, I_nm is the interference from the other BSs to device n of BS m on the same sub-channel, I(x) is an indicator function which equals 1 if x is true and 0 otherwise, and n_m is the number of orthogonal sub-channels assigned to device n by BS m. Specifically, for analytical tractability, the uniform zero frequency reuse method [19] is applied. Additionally, the time of receiving the computation results is ignored, since the amount of output data is much smaller than that of the input data.
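The rate expression itself is not reproduced above; a plausible reconstruction, assuming equal transmit power p_nm over the n_m assigned orthogonal sub-channels and the standard Shannon-capacity form (an assumption, not necessarily the authors' exact expression), is

r_nm = n_m B_m log2( 1 + p_nm g_nm / (N_0 + I_nm) ).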

Then, the delay and energy consumption of the fog processing device can be given respectively by
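A sketch of these two quantities, assuming the fog allocates computation resource f_n^f to device n and the offloading overhead is dominated by the uplink transmission of L_n (both assumptions), is

t_n^f = L_n / r_nm + D_n / f_n^f,    E_n^f = p_nm L_n / r_nm.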

Additionally, similar to the existing work [19], the monetary cost of the fog needs to be considered in the computation offloading, expressed as

where ς_f denotes the unit cost of the computation resource of the fog, and η is the unit price of the transmission rate of BS m.

According to (3), (4) and (5), the total cost of the fog processing device can be defined as

where β_n and α_n are the impact factors of the energy consumption and the monetary cost, respectively.
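A plausible form of the monetary cost and of the weighted total cost, assuming the monetary cost is linear in the allocated fog resource and the obtained rate and that the delay enters the total cost with unit weight (both assumptions), is

φ_n^f = ς_f f_n^f + η r_nm,    c_n^f = t_n^f + β_n E_n^f + α_n φ_n^f.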

Cloud computing execution model: When device n offloads its task to the cloud (s_n < 0), the input data needs to be transmitted to the cloud, thousands of miles away, via the fiber and core networks. Moreover, the cloud always has sufficient computation resources, hence the computation requirements of a cloud processing device can be well satisfied. In this case, the total duration time and energy consumption of device n can be computed respectively as

Analogously, the total cost of the cloud processing device n can be formulated as
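A sketch of the cloud-execution quantities, assuming a fixed wide-area transport delay t_wan over the fiber and core networks, a cloud computation capability f_c that is never the bottleneck, and a monetary cost φ_n^c defined analogously to the fog case (all assumptions), is

t_n^c = L_n / r_nm + t_wan + D_n / f_c,    E_n^c = p_nm L_n / r_nm,    c_n^c = t_n^c + β_n E_n^c + α_n φ_n^c.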

2.3 Problem formulation

In this section, we minimize the system cost by optimizing the offloading decisions, the computation resource allocation, and the uplink transmission power allocation. The optimization problem can be written as

where C2 is the range of the uplink transmission power, C3 implies the non-negativity of the computation resources, C4 means that the allocated computation resource should not exceed the maximum processing capability of the fog, C5 ensures that only one offloading decision can be chosen for each device, C6 represents that each computation task should be processed before its tolerable deadline, and c_n is formulated in the following.
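A compact sketch of the objective and of the per-device cost c_n, assuming c_n simply selects the local, fog, or cloud cost according to the sign of s_n (with c_n^0 denoting the local-execution cost, as in Algorithm 1), is

min over (S, p, f) of Σ_{n∈N} c_n, subject to the constraints above,
c_n = I(s_n = 0) c_n^0 + I(s_n > 0) c_n^f + I(s_n < 0) c_n^c.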

The problem in (10) is a mixed-integer programming problem and it is difficult to solve. Therefore, we propose an ECORA scheme to obtain the optimal solution in the next section.

III. ENERGY-EFFICIENT COMPUTATION OFFLOADING AND RESOURCE ALLOCATION

In this section, we utilize the ECORA scheme to obtain the optimal computation offloading strategy, transmit power, and computation resource allocation. This scheme consists of two parts. One is the potential game, which determines the current optimal offloading decisions. The other is the resource allocation, which achieves the optimal power control and computation resource allocation for the fog and cloud processing devices. They are introduced in the following, respectively.

3.1 Resource allocation

In the case where the computation offloading decision of each device is given, the optimal resource allocation needs to be solved to minimize the respective cost. For the fog processing devices, problem (10) can be converted to

where N_f is the set of fog processing devices, N_f^M denotes the set of devices that offload their computation tasks to the fog via the MBS, and N_f^S represents the set of devices that offload their computation tasks to the fog via an SBS.
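A sketch of this sub-problem, assuming the transmit powers and the fog computation resources of the fog processing devices are optimized jointly under the deadline, power, and fog-capacity constraints (with p_max denoting the maximum transmit power, a notation assumed here, and F the computation capacity of the fog node), is

min over (p, f^f) of Σ_{n∈N_f} c_n^f,  s.t.  t_n^f ≤ t_n^max,  0 ≤ p_nm ≤ p_max,  f_n^f ≥ 0,  Σ_{n∈N_f} f_n^f ≤ F.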

Lemma 1: The optimal solution of problem (12) is achieved when the delay constraint is satisfied with equality, i.e., when the total delay of each fog processing device equals its maximum tolerable delay t_n^max.

Proof: For device n offloading its task to the fog through SBS m, i.e., s_n = m, we first calculate the first derivative of the cost with respect to the allocated computation resource f_n^f, given by

According to (13), it can be observed that this derivative is positive. Hence, the cost is monotonically increasing with f_n^f. Assume that the optimal solution of device n allocates more computation resource than is needed to meet the deadline, so that the delay constraint holds with strict inequality. Then there exists a smaller feasible allocation that still satisfies the deadline, and since the cost is monotonically increasing with f_n^f, this smaller allocation yields a lower cost, which contradicts our hypothesis. Therefore, the optimal solution is obtained when the delay constraint holds with equality. In this regard, for s_n = M+1, we can obtain a similar conclusion. Hence, the optimal solution of (12) is achieved when the delay constraint of each fog processing device is tight.

Based on Lemma 1, we can derive the optimal computation resource f_n^f allocated to device n when it offloads its computation task to the fog node via SBS m, and likewise when s_n = M+1. Moreover, we can obtain the following lemma by substituting the resulting f_n^f into (12).
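Under Lemma 1 and the fog delay model sketched above (uplink transmission plus fog processing, with any backhaul delay folded into the transmission term), the deadline-tight allocation would take the form

f_n^f = D_n / ( t_n^max − L_n / r_nm ),

which is valid whenever L_n / r_nm < t_n^max.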

Lemma 2: The problem in (12) is convex.

Proof: Based on Lemma 1, for device n offloading its task to the fog via SBS m, the cost c_n^f can be rewritten as a function of the transmission variables only.

First, if g(x) is concave and h(y) is convex and non-increasing, then f(x) = h(g(x)) is convex. Since the achievable rate is a concave function of the transmit power, and x(2^(1/x) − 1) is convex and decreasing as x increases, each of the resulting terms of the cost is convex. The summation of convex functions preserves convexity. Moreover, for device n offloading its task to the fog via the MBS, we can draw a similar conclusion. Therefore, problem (12) is convex.

Based on Lemma 2, the optimal resource allocation can be obtained by standard convex optimization tools, e.g., CVX. Moreover, for the cloud processing devices, we can come to the same conclusion.
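To make the per-device sub-problem concrete, the following is an illustrative sketch (not the authors' CVX implementation): for a single fog processing device it applies the deadline-tight rule of Lemma 1 to fix f_n^f and then minimizes the remaining one-dimensional cost over the transmit power with a bounded scalar solver. The cost structure follows the hedged expressions above, and all parameter values are hypothetical placeholders.

import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical task, channel, and pricing parameters (placeholders only)
D = 1e9            # required computation resource of the task [CPU cycles]
L = 500e3 * 8      # size of the input data [bits]
t_max = 1.0        # maximum processing delay [s]
B = 10e6           # sub-channel bandwidth of the serving BS [Hz]
g = 1e-6           # channel power gain between device and BS
N0 = 1e-13         # noise power [W]
beta, alpha = 0.5, 0.5          # impact factors of energy and monetary cost
sigma_f, eta = 1e-9, 1e-7       # unit prices of fog computation and transmission rate
p_max = 0.5        # maximum transmit power [W]

def rate(p):
    # Uplink rate for transmit power p (single sub-channel, no interference assumed)
    return B * np.log2(1.0 + p * g / N0)

def cost(p):
    # Total cost of a fog processing device for transmit power p, with Lemma 1 applied
    r = rate(p)
    t_tx = L / r                        # uplink transmission delay
    if t_tx >= t_max:                   # deadline cannot be met at this power
        return 1e12                     # large penalty keeps the solver in the feasible region
    f = D / (t_max - t_tx)              # deadline-tight computation resource (Lemma 1)
    t_total = t_tx + D / f              # equals t_max by construction
    energy = p * t_tx                   # transmission energy of the device
    monetary = sigma_f * f + eta * r    # assumed form of the fog monetary cost
    return t_total + beta * energy + alpha * monetary

res = minimize_scalar(cost, bounds=(1e-4, p_max), method="bounded")
print("optimal transmit power: %.4f W, minimum cost: %.4f" % (res.x, res.fun))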

3.2 Game formulation

The offloading decision of device n depends not only on its own offloading requirement, but also on the offloading strategies of the other devices. Thus, we can model the offloading decision process as a strategic game in which the players are all devices. The strategic game can be denoted as G = (N, {S_n}, {c_n}), where S_n is the set of offloading strategies of device n, c_n(s_n, s_-n) is the cost function of device n, and s_-n denotes the offloading decisions of all devices apart from device n.

Theorem 1: This computation offloading game has a Nash Equilibrium (NE) and possesses the finite improvement property (FIP).

Proof: Since every exact potential game has an NE [18], to prove Theorem 1 we only need to demonstrate that this game is an exact potential game. Based on that, we construct the potential function as [19]

Similar to the proofs in [7], [18] and [19], we can demonstrate that the computation offloading game is an exact potential game with the potential function shown in (14); the details are therefore omitted.

3.3 Algorithm description

In this section, we describe the details of the ECORA scheme. An iterative approach is used to obtain the solution. At first, the computation offloading strategies of all devices are initialized, and each device has the opportunity to update its offloading decision. In each iteration, every device that holds the update opportunity chooses the offloading strategy that minimizes its own cost and contends for the update opportunity. Then, the device with the greatest reduction in system cost wins the competition and updates its computation offloading decision. When no strategy update changes the system cost, the iteration process is terminated. More details are summarized in Algorithm 1. In each iteration, the computational complexity of obtaining the offloading decisions is O(N(M+1)). The while-loop in Algorithm 1 needs V iterations to converge. Therefore, the computational complexity of Algorithm 1 is O(VN(M+1)).
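The following is an illustrative skeleton (not the authors' code) of the best-response iteration behind Algorithm 1. The per-device fog and cloud costs come from a hypothetical evaluate_costs(n, m) oracle that would stand in for the convex sub-problem of Section 3.1; here it is stubbed with random values so the control flow can be executed end to end.

import random

N, M = 10, 2                        # number of devices and SBSs (hypothetical values)
BS = list(range(1, M + 2))          # SBSs 1..M plus the MBS M+1

def evaluate_costs(n, m):
    # Stub for the CVX-based sub-problem: returns (fog cost, cloud cost) of device n via BS m
    return random.uniform(1.0, 10.0), random.uniform(1.0, 10.0)

local_cost = {n: random.uniform(1.0, 10.0) for n in range(N)}   # c_n^0
current_cost = dict(local_cost)
strategy = {n: 0 for n in range(N)}       # s_n = 0: local execution
active = {n: True for n in range(N)}      # chi_n = 1: device may still update

while any(active.values()):
    proposals = {}                        # n -> (candidate strategy, candidate cost)
    for n in range(N):
        if not active[n]:
            continue
        best_s, best_c = 0, local_cost[n]
        for m in BS:
            c_fog, c_cloud = evaluate_costs(n, m)
            if c_fog < best_c:
                best_s, best_c = m, c_fog        # offload to the fog via BS m
            if c_cloud < best_c:
                best_s, best_c = -m, c_cloud     # offload to the cloud via BS m
        if best_c < current_cost[n]:
            proposals[n] = (best_s, best_c)
        else:
            active[n] = False                    # no beneficial update for device n
    if not proposals:
        break                                    # no device can reduce its cost: an NE is reached
    # The device with the greatest cost reduction wins the update opportunity
    winner = max(proposals, key=lambda n: current_cost[n] - proposals[n][1])
    strategy[winner], current_cost[winner] = proposals[winner]
    active[winner] = False

print("offloading strategies at convergence:", strategy)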

IV. SIMULATION RESULTS

In this section, we present some representative numerical results to evaluate the proposed computation offloading and resource allocation scheme for cloud collaborated fog computing in heterogeneous networks. We assume that there are one MBS and one SBS. The detailed simulation parameters are shown in table 1.

Algorithm 1. The ECORA scheme.
Input: T_n = {D_n, L_n, t_n^max}, c_n^0
Output: f*, S*, p*
Initialization: each device has the opportunity to update its offloading decision, χ_n^0 = 1; each device initially computes locally, s_n = 0; the offloading decision set B.
1: while there exists a device n ∈ N with χ_n^t ≠ 0 do
2:   for each device n ∈ N do
3:     for BS m = 1, 2, …, M, M+1 do
4:       compute c_n^c and c_n^f by using CVX
5:       if c_n^0 ≤ c_n^f and c_n^0 ≤ c_n^c then
6:         s_n^t = s_n^(t-1), χ_n^t = 0
7:       else if c_n^c ≤ c_n^f then
8:         χ_n^t = χ_n^(t-1)
9:         update the offloading decision set B, b_nm = −m
10:      else
11:        χ_n^t = χ_n^(t-1)
12:        update the offloading decision set B, b_nm = m
13:      end if
14:    end for
15:    choose the offloading strategy in B that minimizes the cost of device n
16:  end for
17:  all devices with χ_n^t = 1 contend for the wireless link
18:  if device n wins the contention then
19:    update the offloading strategy of device n, s_n^t = b_nm, and set χ_n^t = 0
20:  else
21:    s_n^t = s_n^(t-1)
22:  end if
23:  t = t + 1
24: end while

Table I. Simulation parameters.

Firstly, we evaluate the NE and the convergence of ECORA through simulation. We then further evaluate the performance of our proposed ECORA scheme in comparison with the following two schemes: (1) the local computation scheme, where all tasks are executed locally; (2) the two-tier collaborative computation offloading (TCCO) scheme [19], where tasks are completed either by computing locally or by offloading to the fog.

Figure 2 depicts the system average cost versus the number of iterations for different sizes of the input data L_n, which verifies the convergence performance of Algorithm 1. It is observed that the system cost keeps decreasing after each iteration until convergence. This is because the ECORA scheme minimizes the system cost in each iteration by performing the computation offloading decision and the resource allocation, so the system cost is decreased by proper resource allocation and offloading decisions. Moreover, as shown in figure 2, the system converges to a stable state after 15 iterations, at which point an NE and the optimal resource allocation are obtained.

Figure 3 shows the relationship between the system average cost and the computation capacity of the fog. From this figure, it can be observed that the system average cost keeps decreasing as the computation capacity of the fog node increases for the ECORA and TCCO schemes. Furthermore, the cost of our proposed ECORA scheme is lower than that of the other schemes when the computation resource of the fog is relatively small; e.g., the ECORA scheme obtains 10% and 15% improvement over the TCCO scheme and the local computation scheme, respectively, when F = 1000 GHz. This is because, when the computation resource of the fog is relatively small, the fog is unable to fully satisfy the computation requirements, and a task can obtain more computation resource when it is carried out on the cloud. Therefore, the system average cost can be effectively reduced by offloading tasks to the cloud. When F exceeds 4000 GHz, the fog can meet the computation demands of the devices; the devices then prefer to offload their computation tasks to the fog, and the system average cost is the same for the ECORA and TCCO schemes. Additionally, after a certain point, such as F = 6000 GHz, the fog can meet all computation requirements of the devices, and the system average cost becomes constant.

Fig. 2. The convergence of Algorithm 1.

Fig. 3. The system average cost versus the computation capacity of the fog node.

Fig. 4. The system average cost versus the number of IoE devices.

Figure 4 shows the relationship between the number of IoE devices and the system average cost. As shown in figure 4, the ECORA scheme and the TCCO scheme obtain a lower system cost than the local computation scheme. Moreover, the ECORA scheme outperforms the TCCO scheme. When N = 40, the ECORA scheme can effectively decrease the system average cost by up to 10% and 20% compared with the TCCO scheme and the local computation scheme, respectively. This is mainly because each fog processing device obtains fewer computation resources as the number of devices increases, whereas the cloud always has powerful computation capacity to meet the computation requirements of the devices. Hence, fog processing leads to a higher cost than cloud processing.

Figure 5 shows the impact of the required computation resource of the task on the system average cost of the three schemes. As depicted in figure 5, it is quite obvious that the system average cost is identical among the three schemes when the required computation resource of the task is small. However, with the increase of D_n, our proposed ECORA scheme achieves an obvious performance improvement over the other schemes, since the ECORA scheme can well satisfy the computation demands of the tasks. When D_n reaches 1000 KB, the ECORA scheme can apparently decrease the system cost by up to 30% and 50% compared with the TCCO scheme and the local computation scheme, respectively. Above all, we can see the necessity of investigating the coexistence and synergy between fog computing and cloud computing, especially when there are many resource-hungry, computation-intensive tasks to be completed.

Fig. 5. The system average cost versus the required computation resource of the task.

V. CONCLUSIONS

In this paper, we investigate the computation offloading and resource allocation problem of cloud collaborated fog computing in the IoE. This problem is formulated as a constrained optimization for minimizing the system cost. In this regard, an ECORA scheme is proposed to solve the problem, which jointly optimizes the computation offloading strategy, the transmission power, and the computation resource allocation to minimize the system cost. Our simulation results indicate that the proposed ECORA scheme can obtain a lower system average cost compared with other existing schemes.

ACKNOWLEDGEMENTS

This work was supported by the Fundamental Research Funds for the Central Universities (No. 2018YJS008), the National Natural Science Foundation of China (61471031, 61661021, 61531009), the Beijing Natural Science Foundation (L182018), the Open Research Fund of the National Mobile Communications Research Laboratory, Southeast University (No. 2017D14), the State Key Laboratory of Rail Traffic Control and Safety (Contract No. RCS2017K009), the Science and Technology Program of Jiangxi Province (20172BCB22016, 20171BBE50057), and the Shenzhen Science and Technology Program (No. JCYJ20170817110410346).