A Deep Learning Based Energy-Efficient Computational Offloading Method in Internet of Vehicles

China Communications, March 2019

Xiaojie Wang, Xiang Wei, Lei Wang*

1 School of Software, Dalian University of Technology, Dalian 116620, China

2 School of Computer Science and Engineering, Beihang University, Beijing 100191, China

Abstract: With the emergence of advanced vehicular applications, the challenge of satisfying the computational and communication demands of vehicles has become increasingly prominent. Fog computing is a potential solution for improving advanced vehicular services by enabling computational offloading at the edge of the network. In this paper, we propose a fog-cloud computational offloading algorithm in the Internet of Vehicles (IoV) to minimize the power consumption of both the vehicles and the computational facilities. First, we establish the system model, and then formulate the offloading problem as an optimization problem, which is NP-hard. After that, we propose a heuristic algorithm to solve the offloading problem in stages. Specifically, we design a predictive combination transmission mode for vehicles, and establish a deep learning model for the computational facilities to obtain the optimal workload allocation. Simulation results demonstrate the superiority of our algorithm in energy efficiency and network latency.

Keywords: computational offloading; fog computing; deep learning; Internet of Vehicles

I. INTRODUCTION

With the development of the Internet of Things (IoT) and wireless communication technologies, vehicles are becoming more intelligent and can provide better services than before [1][2]. Consequently, various computation-intensive applications, such as autonomous driving and image recognition, are emerging. However, the limited computational resources of vehicles make such advanced applications difficult to deploy in reality [3]. Without powerful computational support, many applications remain in the concept phase and cannot be applied in our daily life [4].

Cloud computing can improve computation performance by offloading tasks to the cloud with its abundant computational resources [5]. However, the long transmission distance of data to the cloud not only places a heavy burden on wireless communication links but also results in intolerable latency, which significantly degrades application performance. Fog computing is a promising paradigm to overcome this problem, since it extends computational facilities to the network front-end. It is also promising for relieving the heavy computational burden of vehicles [6]. On one hand, fog nodes decrease power consumption by relieving the workload of the cloud. On the other hand, geo-distributed fog devices can reduce the message transmission delay. However, it is impossible to offload all tasks to the fog layer, because the computation capability of the fog-only model cannot cope with the delay growth under high workload, and some complex computational tasks should be offloaded to remote cloud servers [7]. Thus, it is critical to make efficient offloading decisions that minimize the power consumption of the computational facilities and of the vehicles simultaneously under a delay constraint [8].

In this paper, we propose a fog-cloud computational offloading algorithm in the Internet of Vehicles (IoV) to minimize the power consumption of both the vehicles and the computational facilities. Specifically, the contributions are summarized as follows:

1) We establish a fog-cloud offloading system, and then formulate a mathematical framework to optimize the power consumption under delay constraints.

2) We decompose the whole system into two parts, i.e., the front-end and the back-end. We then develop a combination transmission algorithm in vehicular networks and a deep learning model to solve the optimization problem in the front-end and the back-end, respectively.

3) We conduct simulations to validate the effectiveness of our fog-cloud computational offloading algorithm. Simulation results show that it can significantly reduce the power consumption while satisfying the latency requirements.

The organization of the paper is as follows: In Section II, we introduce the related work. The fog-cloud model and problem formulation are described in Section III. In Section IV, we elaborate on the designed heuristic offloading algorithm. Numerical results and analyses are presented in Section V. We summarize the paper in Section VI.

II. RELATED WORK

With its abundant computational resources, cloud computing has attracted great attention in IoV systems [9]. The authors in [10] propose a promising network paradigm with predictive offloading to the cloud to minimize the delay and power consumption in IoV systems. Nevertheless, cloud computing has the following deficiencies [11]: first, the cloud server is far away from the vehicles, making it difficult to meet the demands of delay-sensitive applications; second, the power consumption cost of the data center is quite high; finally, cloud computing performs poorly in supporting vehicular applications. Similar to mobile cloud, cloudlet, and edge computing, fog computing is a promising paradigm that extends cloud computing to the network edge [12]. A mobile cloud offloading model is proposed to meet multi-user computation offloading requirements [13], so that the mobile cloud can compensate for these deficiencies. Studies such as [14], [15] mainly consider the power consumption at the client side.

Edge computing is promising for real-time traffic management by making full use of idle resources in vehicles, whether on the move or in a parking lot [16]. However, the computational and storage capacities of vehicular edge computing are still limited compared with cloud or cloudlet computing [17]. In addition, fog nodes generally lack the resources to become intelligent through self-learning. Furthermore, accurate prediction of vehicles' mobility largely impacts the utilization of their computing resources and energy [18]. One approach is to mine traffic flow according to the position, direction, and velocity of vehicles. As electric vehicles become popular, vehicle-to-grid technology has been investigated to charge electric vehicles and monitor their power status in the smart grid. A hybrid computing model for vehicle-to-grid networks is designed in [19], including a permanent cloud or cloudlet and temporary vehicular fog nodes. In order to provide secure communications and services through vehicular fog computing (VFC), intelligence is indispensable for network control and management in IoV [20]. In [21], the authors design a hierarchical architecture enabled by fog computing to provide prompt responses for neighborhood, community, and city-wide traffic management.

Different from existing research, we aim to minimize the power consumption of both the vehicles and the computational facilities. In the front-end (the vehicle side), we design a combination transmission algorithm to save energy. In addition, we develop a deep learning method to optimize the workload allocation and minimize the power consumption in the back-end (fog and cloud facilities). To the best of our knowledge, this is the first work to provide a detailed design of how to minimize power consumption with a deep learning method in an offloading model for IoV systems.

III. SYSTEM MODEL AND PROBLEM FORMULATION

As shown in figure 1, the system architecture comprises a set of Roadside Units (RSUs), fog devices, and cloud servers. The RSUs receive requests from vehicles and send them to the fog devices. In the model, we assume that the RSUs can cover all vehicles in the experimental area. Vehicles can also access the network through other facilities such as cellular base stations, which is not discussed in this paper. Fog nodes can process requests and forward redundant ones to cloud servers through a Wide Area Network (WAN). Each cloud server hosts a number of virtual machines [22]. Since the WAN covers a large geographical area between the fog nodes and the cloud servers, the corresponding transmission delay cannot be ignored (compared with a local area network) [23]. Moreover, the computational delay of fog nodes and cloud servers should also be considered. In addition, the power consumption and delay of the front-end, from vehicles to RSUs, are important factors for the system performance [24][25]. Hence, we mainly consider the power consumption and delay of four components, i.e., the cloud layer, the fog layer, the dispatch over the WAN, and the transmission process from vehicles to RSUs.

3.1 System model

We aim to minimize the power consumption of the vehicles and the computational infrastructures in IoV systems with a guaranteed delay. In the following, we elaborate on the designed system model.

1) Power Consumption of Fog Devices: The power consumption of fog device $i$ can be calculated as a monotonically increasing function of its CPU frequency $f_i$. For simplicity and without loss of generality, the power consumption $p_i^{fog}$ of device $i$ is defined as follows:

$$p_i^{fog} = A_i f_i^3 + B_i,$$

where $f_i$ is the CPU frequency of fog device $i$, and $A_i$ and $B_i$ are two non-negative, pre-determined parameters.

2) Communication Delay of Fog Devices: We model request processing at each fog device as an M/M/1 queueing system. For device $i$, its computational delay $d_i^{fog}$, including the waiting time and the service time, is:

$$d_i^{fog} = \frac{1}{v_i - x_i},$$

where $x_i$ is the task arrival rate and $v_i$ is the service rate.

Fig. 1. The architecture of the fog-cloud computing system.

3) Power Consumption of Cloud Servers: As mentioned before, each cloud server hosts numerous computing machines. We simply assume that the CPU frequencies of these machines are equal within a cloud server. The number of machines in cloud server $j$ is denoted by $n_j$. In this situation, the power consumption of cloud server $j$ can be expressed as the product of the per-machine consumption and the number of machines in server $j$. We approximate the power consumption of each machine in server $j$ by a function of its CPU frequency $f_j$, i.e., $A_j f_j^3 + B_j$, where $A_j$ and $B_j$ are positive constants. Hence, the power consumption $P_j^{cloud}$ of cloud server $j$ can be calculated by:

$$P_j^{cloud} = n_j \left( A_j f_j^3 + B_j \right).$$

4) Communication Delay of Cloud Servers: The delay of a cloud server can be further divided into waiting delay and processing delay. We model the system as a queueing network, in which each cloud server is modeled as an M/M/n queue. Thus, the total computational delay of cloud server $j$ is given by:

$$d_j^{cloud} = \frac{C(n_j, y_j K_j / f_j)}{n_j f_j / K_j - y_j} + \frac{K_j}{f_j},$$

where $y_j$ is the request arrival rate at server $j$, $K_j$ denotes the average number of CPU cycles required by a request, and $C(n_j, y_j K_j / f_j)$ is the Erlang C formula [26].
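To make the queueing delays concrete, the following Python sketch evaluates the fog (M/M/1) and cloud (M/M/n with the Erlang C formula) delay expressions above. The function names and the example parameter values are illustrative assumptions, not taken from the paper.

```python
import math

def fog_delay(x, v):
    """M/M/1 sojourn time (waiting plus service); valid only when x < v."""
    assert x < v, "arrival rate must be below service rate"
    return 1.0 / (v - x)

def erlang_c(n, rho):
    """Erlang C formula: probability that an arriving request must wait
    in an M/M/n queue with offered load rho = lambda/mu (requires rho < n)."""
    top = rho**n / math.factorial(n) * (n / (n - rho))
    bottom = sum(rho**k / math.factorial(k) for k in range(n)) + top
    return top / bottom

def cloud_delay(n, y, K, f):
    """M/M/n sojourn time for a cloud server with n machines, request
    arrival rate y, K CPU cycles per request, and CPU frequency f."""
    mu = f / K                       # per-machine service rate
    rho = y / mu                     # offered load; must satisfy rho < n
    return erlang_c(n, rho) / (n * mu - y) + 1.0 / mu

# Illustrative values (not from the paper's simulation setup):
print(fog_delay(x=40, v=50))                     # fog node near saturation
print(cloud_delay(n=20, y=50, K=0.8, f=3.0))     # lightly loaded cloud server
```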

5) Communication Delay of Transmission: It can be divided into two parts: the communication delay of dispatch and that between vehicles and RSUs. For the communication delay of dispatch, we use $w_{ij}$ to denote the bandwidth from fog device $i$ to cloud server $j$. The communication delay of dispatch is then computed by:

$$d_{ij}^{disp} = \frac{q_{ij}}{w_{ij} \log_2 (1 + n_0)},$$

where $q_{ij}$ is the size of the data dispatched from fog device $i$ to cloud server $j$, and $n_0 = S/N$ is the signal-to-noise ratio.

For the communication delay between vehicles and RSUs, we consider multi-hop V2I and V2V modes. For the V2I mode, the corresponding delay contains two parts, i.e., the upload time and the transmission time from the source RSU to the destination RSU. For the V2V mode, the transmission delay is larger than that in the V2I mode, and is the summation of the upload time and the V2V relaying delay among multi-hop vehicles. A practical prediction scheme is presented in [10] to predict the delay in multi-hop transmissions. Therefore, the delay between vehicles and RSUs can be obtained by combining the delays of the V2I and V2V modes.

6) Power Consumption of Transmission: In the front-end, the physical infrastructures mainly include RSUs and vehicles. Generally speaking, the size of an uploaded request packet is much larger than that of the returned result packet. Thus, we mainly focus on the power consumption and delay of the upload link and ignore the return link. Let $\mathcal{R}$ and $\mathcal{U}$ be the sets of RSUs and vehicles, respectively, where $\mathcal{R} = \{1, \ldots, R\}$ and $\mathcal{U} = \{1, \ldots, U\}$. There are two transmission modes from vehicles to RSUs: the direct V2I mode and the V2V predictive transmission mode. In the V2I mode, the power consumption of a task from vehicle $u$ to RSU $r$ is the sum of the traffic uploading cost and the relaying cost among the RSUs from the source to the destination. In the V2V mode, the consumed energy is lower than that in the V2I mode, and is the summation of the traffic uploading cost and the relaying cost among multi-hop vehicles. Thus, the power consumption between vehicles and RSUs is obtained by combining the V2I and V2V modes while accounting for the corresponding transmission delay and cost.

3.2 Problem formulation

The objective of our fog-cloud integration framework is to minimize power consumption while satisfying the network delay constraint. As analyzed above, the total network delay is the combination of $d_i^{fog}$, $d_j^{cloud}$, $d_{ij}^{disp}$, and the vehicle-to-RSU transmission delay. System energy consumption consists of three parts, i.e., the power consumption of fog nodes $p_i^{fog}$, that of cloud servers $P_j^{cloud}$, and the transmission power between vehicles and RSUs. The optimization problem is to minimize the total power consumption of fog nodes, cloud servers, and the vehicle-to-RSU transmission. The following constraints should be satisfied:

1) The total network delay is less than the delay threshold;

2) The processing ability of fog nodes is limited, i.e., their arrival rates and the required CPU cycles should not exceed the upper bounds of their processing ability;

3) Similarly, for cloud servers, the arrival rate at each cloud server and the required CPU cycles of the requests assigned to it should be within the maximum arrival rate and the maximum required CPU cycles of that server.

4) The number of machines within one cloud server is an integer, and the operating state of a cloud server is binary.

5) The network traffic processed by fog nodes and cloud servers is no less than the total network traffic to be handled.

The above problem is a mixed-integer non-linear programming (MINLP) problem, which has been shown to be NP-hard. Thus, we propose a heuristic algorithm for offloading in fog-based IoV systems to effectively solve the formulated problem with acceptable computational complexity. A compact sketch of the formulation is given below.
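For concreteness, the optimization problem can be written compactly as follows. This is a reconstruction under the notation of Section 3.1; the delay bound $D$, the capacity bounds $x_i^{\max}$ and $y_j^{\max}$, the operating-state variable $z_j$, and the total traffic $\Lambda$ are symbols we introduce here, since the original equations were not preserved.

```latex
\begin{align}
\min_{\{x_i\},\,\{y_j\},\,\{n_j\},\,\{z_j\}} \quad
  & \sum_i p_i^{fog} + \sum_j z_j P_j^{cloud} + P^{tran} \\
\text{s.t.} \quad
  & d^{fog} + d^{cloud} + d^{disp} + d^{tran} \le D
    && \text{(delay threshold)} \\
  & x_i \le x_i^{\max} \;\; \forall i
    && \text{(fog processing ability)} \\
  & y_j \le y_j^{\max} \;\; \forall j
    && \text{(cloud processing ability)} \\
  & n_j \in \mathbb{Z}^{+},\; z_j \in \{0, 1\} \;\; \forall j
    && \text{(integer and binary states)} \\
  & \sum_i x_i + \sum_j y_j \ge \Lambda
    && \text{(all traffic served)}
\end{align}
```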

IV. DEEP LEARNING BASED ENERGY-EFFICIENT OFFLOADING SCHEME

We divide the whole system into two parts: the front-end and the back-end. The front-end consists of vehicles and RSUs. Its cost is mainly the power consumption and delay of the communication between vehicles and RSUs, which we aim to minimize. We adopt the predictive combination transmission mode to solve this problem. For the back-end, containing the fog nodes and cloud servers, we propose a deep learning model to minimize its cost.

4.1 Greedy algorithm

In the greedy algorithm, the requests are queued and processed successively. In each step, the server with the minimum power consumption under the constraints is chosen to process the queued request. For each request, we place it at the current optimal position. As a result, we obtain the total power consumption and delay of these requests.
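A minimal sketch of this greedy placement is given below, assuming a simplified placeholder cost model in which each server exposes its marginal power cost and current delay; the `Server` class and `greedy_offload` function are our own illustrative names.

```python
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    load: int = 0                       # requests currently assigned

    def marginal_power(self) -> float:
        # Placeholder cost model: power grows with load.
        return 1.0 + 0.1 * self.load

    def delay(self) -> float:
        # Placeholder delay model: delay grows with load.
        return 0.05 * (self.load + 1)

def greedy_offload(requests, servers, delay_bound):
    """Assign each request to the feasible server with minimum marginal power."""
    placement, total_power = [], 0.0
    for req in requests:
        feasible = [s for s in servers if s.delay() < delay_bound]
        if not feasible:
            raise RuntimeError("no server satisfies the delay constraint")
        best = min(feasible, key=lambda s: s.marginal_power())
        best.load += 1
        total_power += best.marginal_power()
        placement.append((req, best.name))
    return placement, total_power

servers = [Server("fog-1"), Server("fog-2"), Server("cloud-1")]
plan, power = greedy_offload(range(10), servers, delay_bound=1.5)
```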

4.2 Deep learning model

1) Input and Output Design: Our considered system model is depicted in figure 1. In total, we have $n$ optional fog nodes and cloud servers to handle requests from the front-end. Each node holds a record of the number of requests and the average power consumption in the last $H$ periods. The value of $H$ is determined by the CPU frequency and the average required CPU cycles of each task, and we choose the $H$ value with the minimum loss in the CNN model. We adopt these records as the input of our deep learning model. Similar to [27], we also adopt the Convolutional Neural Network (CNN) structure. In order to train our CNN model, labeled data (i.e., sets of $(x, y)$ pairs) are required to perform supervised training.

Fig. 2. The considered CNN-based deep learning model.

Our objective is to identify features of the network traffic by constructing a CNN-based system. As demonstrated in figure 2(a), low-level features of the input data are filtered in the feature extraction part. The objective of the pooling layer is to reduce the size of the features and the number of parameters, and to speed up network computation. From the features extracted by the convolution and pooling layers, the classification output is computed by the fully connected layers. In figure 2(b), the recorded information of the servers over multiple intervals is the input data. It can be represented as a three-dimensional matrix, i.e., channel, network feature, and serving nodes. The deep learning structure is utilized to compute the candidate servers. Therefore, we choose the server number as the output of our deep learning model. Thus, the output value is in the range $[0, N-1]$, indicating the server number.
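The following PyTorch sketch shows one plausible instantiation of this design. The layer sizes, channel counts, the per-record feature count, and the choice to treat the $H$ history intervals as input channels are our own assumptions; the paper fixes only the three-dimensional input and the $N$-way output, and (per Section V) omits the pooling layer.

```python
import torch
import torch.nn as nn

N_SERVERS = 20    # number of candidate fog/cloud nodes (from Section V)
H = 4             # history intervals per record (from Section V)
N_FEATURES = 2    # assumed: request count and average power per interval

class OffloadCNN(nn.Module):
    """CNN mapping server records (channel x feature x server) to a server index."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            # Treat the H intervals as input channels, as figure 2(b) suggests.
            nn.Conv2d(H, 16, kernel_size=(N_FEATURES, 3), padding=(0, 1)),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=(1, 3), padding=(0, 1)),
            nn.ReLU(),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * N_SERVERS, 128),
            nn.ReLU(),
            nn.Linear(128, N_SERVERS),   # one logit per candidate server
        )

    def forward(self, x):
        # x: (batch, H, N_FEATURES, N_SERVERS)
        return self.classifier(self.features(x))

model = OffloadCNN()
records = torch.randn(8, H, N_FEATURES, N_SERVERS)
logits = model(records)                  # shape (8, N_SERVERS)
server = logits.argmax(dim=1)            # predicted offloading target
```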

Algorithm 1. Simulated annealing algorithm.
Require: The set of requests $Q = \{q_1, q_2, \ldots, q_m\}$; the list of servers $S = \{s_1, s_2, \ldots, s_n\}$; the tolerable total delay $D$
Ensure: The minimum power consumption; the serial numbers of the server nodes for the requests
1. Initialize temperature $T$, temperature threshold $T'$, iteration count $L$
2. Initial solution $s \leftarrow$ GreedyAlgorithm$(Q, S, D)$
3. while $T > T'$ do
4.   repeat
5.     Randomly displace some requests from high-performance to low-performance servers
6.     if Delay $< D$ then
7.       Obtain new solution $s'$
8.     end if
9.     Calculate $\Delta t \leftarrow C(s') - C(s)$
10.    if random$[0, 1] < e^{-\Delta t / T}$ then
11.      Accept new solution $s'$
12.      Record the serial list and $C(s')$
13.    end if
14.    if the final condition is satisfied then
15.      return $C(s')$, serial list
16.    end if
17.    $K \leftarrow K + 1$
18.  until $K == L$
19.  $T \leftarrow \alpha \cdot T$, where $\alpha$ is an attenuation factor
20. end while
21. return $C(s')$, serial list

Algorithm 2. Training algorithm.

2) Initialization Phase: As described in the input and output design, we need labeled data to train our CNN model, and the purpose of the initialization phase is to obtain it, i.e., the input vectors and the corresponding output results [28]. It would be best to train our CNN model with globally optimal solutions. However, the time complexity of obtaining the optimal solution is $O(n^m)$, which is intolerable. We therefore choose a compromise method to get the labeled data, where a heuristic algorithm is leveraged to obtain a near-optimal solution. As shown in Algorithm 1, the simulated annealing (SA) algorithm mainly consists of two key steps: generating a new solution with some functions and accepting the new solution with a certain probability [29]. We take the solution of our greedy algorithm as the initial solution of the SA algorithm.

The SA algorithm iterates $L$ times at a certain temperature to search for the global optimal solution. In the searching process, the SA algorithm accepts a worse solution with probability $\exp(-\Delta t / T)$, where $\Delta t$ is the evaluation difference between the new solution and the original one. With the SA algorithm, we can obtain the labeled data for our CNN model. The input of our CNN model is the record information of the servers in the last $H$ intervals. Thus, we obtain the record information of the servers and the corresponding offloading results.

3) Training Phase: We use the data obtained in the initialization phase to train the CNN model. The training phase consists of two steps: initializing the parameters of our designed CNN and fine-tuning the parameters with the back-propagation algorithm. Initializing the parameters properly helps accelerate convergence. The CNN parameters are initialized from a zero-mean Gaussian distribution.

For feed-forward neural networks, parameter optimization depends on error back-propagation. The optimization can be done by the stochastic gradient descent algorithm or the Adam algorithm. Since the Adam algorithm is adaptive, we choose it as the optimization algorithm in our training phase, as shown in Algorithm 2. In the training phase, we take the cross-entropy cost function as the loss function. Thus, the output is a scalar, namely the index of the neuron with the maximum value in the output layer.
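Since the body of Algorithm 2 is not reproduced here, the sketch below shows a conventional Adam plus cross-entropy training loop for the `OffloadCNN` sketch above; the learning rate and epoch count are assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

def train(model, loader, epochs=50, lr=1e-3):
    """Supervised training on SA-labeled data: inputs are server records,
    targets are the server indices chosen by the SA algorithm."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for epoch in range(epochs):
        for records, target_server in loader:
            optimizer.zero_grad()
            logits = model(records)
            loss = loss_fn(logits, target_server)
            loss.backward()               # error back-propagation
            optimizer.step()              # Adam parameter update
    return model
```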

4) Running Phase: In the running phase, all servers record the number of received requests and the average power consumption over a period of time before sending these records to the edge computing nodes. In this way, each edge node can take these records as input to calculate the offloading result. The computational complexity of the running phase is relatively low compared to the training phase. The training process is periodically conducted offline to update the weights of the CNN model. However, it is possible to obtain an inappropriate result that does not meet the constraints in the CNN running phase. In such a situation, we fall back to the greedy algorithm.
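One possible shape for this run-time fallback is sketched below, reusing the illustrative `OffloadCNN`, `Server`, and `greedy_offload` sketches above; `meets_constraints` is a stand-in for the paper's delay check.

```python
import torch

def meets_constraints(server, delay_bound):
    # Stand-in for the paper's delay check (see Section 3.2).
    return server.delay() < delay_bound

def choose_server(model, records, servers, delay_bound):
    """Use the CNN to pick a server; fall back to the greedy algorithm
    of Section 4.1 when the CNN's choice violates the delay constraint."""
    with torch.no_grad():
        server_idx = int(model(records.unsqueeze(0)).argmax(dim=1))
    if meets_constraints(servers[server_idx], delay_bound):
        return server_idx
    placement, _ = greedy_offload([0], servers, delay_bound)  # greedy fallback
    chosen_name = placement[0][1]
    return next(i for i, s in enumerate(servers) if s.name == chosen_name)
```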

Fig. 3. CNN model structure.

Fig. 4. Cost comparison of the direct V2I and the combination transmission modes.

V. PERFORMANCE EVALUATION

In order to validate the performance of our model, we conduct simulations on the front-end and the back-end. We set $N = 20$, $M = 5$, $n_j$ between 20 and 25, $f_i$ between 4.5 and 5.5, $f_j$ between 2.5 and 3.5, the packet size between 5 and 15 MB, and the required CPU cycles per request between 0.7 and 0.9.

Fig. 5. Performances of fog+cloud computing, cloud computing, and fog computing.

Fig. 6. Power consumption and delay of the deep learning, SA, and greedy methods.

We use the predictive combination transmission mode in the front-end. Before sending requests, broadcast packets are sent to ask the nearby vehicles or the RSU for the back-end processing delay. With the delay result and the vehicle speed, a vehicle can calculate which RSU it will have reached when the request result returns. In the combination mode, the power consumption and delay of the V2I and V2V modes are estimated to choose the optimal one. In the simulation, we assume that vehicles travel in a straight line. Since the size of the input data is manageable, we do not use the pooling layer. The features of the deep learning based model are illustrated in figure 3. The records of the last four time intervals are leveraged as the network input, and the time interval is set to 2 seconds. The average number of requests in each time interval is 50.
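A simplified version of this front-end decision might look as follows; the straight-line RSU prediction and the mode-selection inputs are our own placeholder assumptions, since the paper does not give closed-form front-end cost expressions.

```python
def predicted_rsu(position_m, speed_mps, backend_delay_s, rsu_spacing_m=500):
    """Predict which RSU the vehicle will be under when the result returns,
    assuming straight-line travel and evenly spaced RSUs (our assumptions)."""
    return int((position_m + speed_mps * backend_delay_s) // rsu_spacing_m)

def choose_mode(v2i_cost, v2v_cost, v2i_delay, v2v_delay, delay_bound):
    """Pick the cheaper of V2I and V2V among the modes meeting the delay bound."""
    options = [(cost, mode) for cost, delay, mode in
               [(v2i_cost, v2i_delay, "V2I"), (v2v_cost, v2v_delay, "V2V")]
               if delay < delay_bound]
    if not options:
        return "V2I"   # default to direct upload when neither bound is met
    return min(options)[1]
```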

Figure 4 illustrates the cost performance under different vehicle speeds and densities. At higher vehicle speeds, the cost of the combination transmission mode is much lower than that of the V2I mode. For the V2I mode, the cost grows significantly as the speed increases from 100 to 120 km/h. This is because most vehicles are at the edge of the RSU coverage when the request result returns; when the speed exceeds a threshold, the vehicles drive into another RSU's coverage, and the cost of the V2I mode increases significantly. In addition, the effect of the combination transmission mode is pronounced under heavy traffic density, where the delay and power consumption of each hop in the V2V mode can be largely reduced.

As shown in figure 5(a), the power consumption of the fog-only model is extremely low, but it can process no more than 80 requests. The fog+cloud model is much better than the cloud-only model. It is observed that the power consumption of the fog+cloud model is almost the same as that of the fog-only model with the workload between 20 and 80 requests. Under such a situation, the fog layer is not saturated and most requests can be processed by the fog nodes. When the workload exceeds 80 requests, the fog layer becomes saturated, and the growth trend of power consumption in the fog+cloud model is similar to that in the cloud-only model. It should be noticed that the maximum workload of our fog+cloud model is much larger than those of the other two models, demonstrating that it can well cope with varying traffic offloading requirements.

Figure 5(b) illustrates the total network delay with different numbers of requests. Overall, the average delay of our fog+cloud model is less than 1.5 seconds, which satisfies the delay constraint. For the cloud-only and fog-only offloading strategies, the total delay skyrockets as the number of requests increases, while that of our fog+cloud model grows gently. Furthermore, these two baselines can offload network traffic only to a limited extent due to the network delay constraint.

As mentioned before, we take the last $H$ interval records of the servers as the input of the CNN model. We take the records of the last four $\Delta t$ intervals as input, where $\Delta t$ is equal to 2 seconds. In each interval $\Delta t$, the average number of requests is 50. The training data are obtained by the SA algorithm over 20 consecutive intervals.

After training, we obtain the offloading results of requests by the CNN model. The power consumption of the different algorithms is shown in figure 6. From the simulation results, the simulated annealing algorithm has the best overall performance. However, it is intolerably time-consuming in execution. Our deep learning model can effectively reduce the computational complexity in the running phase, and its performance is much better than that of the greedy algorithm. For delay, the greedy algorithm performs better than the other algorithms due to its strict delay constraint on each request. As shown in figure 6(b), the delay of all three algorithms is less than 1 second.

VI. CONCLUSION

In this paper, we propose a fog-cloud model based on deep learning, which is a feasible solution to reduce the power consumption and delay in the back-end. The offloading optimization problem is formulated, in which the fog nodes are modeled as M/M/1 queues and the cloud servers as M/M/n queues according to queueing theory. In addition, we propose a predictive combination transmission mode to minimize the cost in the front-end. From the simulations, we can conclude that the fog-cloud model performs well compared with the cloud-only mode and the fog-only mode. Our deep learning model is an approximate approach to resolving the formulated problem.

ACKNOWLEDGEMENTS

This work is supported by the National Natural Science Foundation of China under Grants No. 61733002 and No. 61842601, the National Key Research and Development Plan under Grant No. 2017YFC0821003-2, and the Fundamental Research Funds for the Central Universities under Grants No. DUT-17LAB16 and No. DUT2017TB02.