Qingqing Tang, Zesong Fei,* Bin Li
1 School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China
2 School of Computer and Software, Nanjing University of Information Science and Technology, Nanjing 210044, China
3 Key Lab of Broadband Wireless Communication and Sensor Network Technology, Nanjing University of Posts and Telecommunications, Ministry of Education, Nanjing 210003, China
Abstract: Low earth orbit (LEO) satellite networks are an important development trend for future mobile communication systems and can truly realize "ubiquitous connection" across the whole world. In this paper, we present a cooperative computation offloading scheme for LEO satellite networks with a three-tier computation architecture, leveraging the vertical cooperation among ground users, LEO satellites, and the cloud server, as well as the horizontal cooperation between LEO satellites. To improve the quality of service for ground users, we optimize the computation offloading decisions to minimize the total execution delay of ground users subject to the limited battery capacity of ground users and the computation capability of each LEO satellite. However, the formulated problem becomes a large-scale nonlinear integer programming problem as the number of ground users and LEO satellites increases, which is difficult to solve with general optimization algorithms. To address this challenging problem, we propose a distributed deep learning-based cooperative computation offloading (DDLCCO) algorithm, where multiple parallel deep neural networks (DNNs) are adopted to learn the computation offloading strategy dynamically. Simulation results show that the proposed algorithm can achieve near-optimal performance with low computational complexity compared with other computation offloading strategies.
Keywords: LEO satellite networks; computation offloading; deep neural networks
The rapid development of mobile communication technology has brought many emerging applications such as Augmented Reality (AR) and Virtual Reality (VR), which pose new challenges to current networks [1–3]. Firstly, the limited coverage of traditional terrestrial communication networks makes it difficult to meet the needs of ground users to access the network anytime and anywhere, especially in rural areas, isolated islands, and sea areas without terrestrial communication infrastructure. Secondly, traditional terrestrial communication infrastructure is vulnerable to damage from natural disasters such as earthquakes, causing ground users to lose communication connections with each other. To overcome these shortcomings of terrestrial communication networks, satellite communication networks have emerged. Compared with terrestrial communication networks, satellite communication networks have a wide coverage area and can achieve ubiquitous global coverage. In recent years, satellite communication networks have made great progress, especially for low earth orbit (LEO) satellites. LEO satellite networks are deemed the most promising satellite mobile communication systems due to their low orbital height, short transmission delay, and small path loss.
However, emerging applications such as intelligent transportation and games are computation- and energy-intensive [4, 5], which means the LEO satellite network must not only provide ground users with ubiquitous connections around the world but also support them with computing services. In general, ground users in remote mountainous areas without terrestrial communication infrastructure can only offload computation tasks to the remote cloud server for processing through bent-pipe transmission [6]. However, bent-pipe transmission requires ground users to offload computation tasks to the LEO satellite network first, after which the LEO satellite forwards the received computation tasks to the cloud server for processing. As a result, bent-pipe transmission increases the processing delay of computation tasks and may not satisfy the low-latency requirements of ground users. Inspired by terrestrial multi-access edge computing (MEC) technology [7–9], MEC is introduced into the LEO satellite network to sink the rich computing resources of the cloud server to the edge of the LEO satellite network [10]. Therefore, the LEO satellite network can directly process computation tasks from ground users, reducing the task processing delay of ground users.
Recent years have witnessed research progress on computation offloading in LEO satellite networks. The work in [11] proposed a space-ground-sea integrated network architecture, where LEO satellites and unmanned aerial vehicles (UAVs) provide users with edge computing services to optimize the offloading decisions of users. The authors of [12] proposed a network framework that uses ground base stations, high altitude platforms (HAPs), and LEO satellites to provide offloading services for ground users. In [13], the authors proposed a satellite-ground integrated network with dual-edge computing capabilities to reduce the energy consumption and delay of ground users, where the Hungarian algorithm was used to solve the computation offloading problem. Although LEO satellite networks with edge computing have been preliminarily studied, several problems in such networks remain unresolved. Firstly, LEO satellites can only be equipped with lightweight MEC servers due to load limitations. Therefore, when a large number of computation tasks from ground users are offloaded to the same LEO satellite at the same time, the LEO satellite may become computationally overloaded. Secondly, existing research uses traditional optimization algorithms to deal with the computation offloading problem in LEO satellite networks. However, traditional optimization algorithms require multiple iterations to adjust the offloading decision to the optimum [14], which leads to high computational complexity and is not suitable for real-time computation offloading in LEO satellite networks with time-varying environments. Thirdly, existing research only considers LEO satellite networks with local and edge computing while ignoring remote cloud computing with its abundant computing resources.
Inspired by the above challenges, this paper proposes a cooperative computation offloading scheme for LEO satellite networks with a three-tier computation architecture, which has the following advantages. Firstly, considering that LEO satellites can exchange information through inter-satellite links (ISLs), we design an inter-satellite cooperative computation offloading strategy in this network. Under this framework, the computation tasks of overloaded LEO satellites can be forwarded to other lightly-loaded LEO satellites for processing, which can balance the computation load of LEO satellite networks and achieve better resource utilization. Secondly, we propose a distributed deep learning-based cooperative computation offloading (DDLCCO) algorithm to solve the problem of real-time computation offloading in LEO satellite networks with a time-varying environment. The DDLCCO algorithm can dynamically adjust the offloading decisions according to the requirements of ground users. Compared with traditional optimization algorithms, this algorithm has low computational complexity and is more suitable for computation offloading in a real network environment. Thirdly, to make full use of the computing resources in the LEO satellite network, we consider not only the horizontal cooperation between LEO satellites but also the vertical cooperation among ground users, LEO satellites, and the cloud server.
Based on the proposed network, we formulate an optimization problem for minimizing the total execution delay of ground users subject to the constraints of the limited battery capacity of ground users and the computation capability of each LEO satellite. However, the formulated problem becomes a large-scale nonlinear integer programming problem as the number of ground users and LEO satellites increases, which is difficult to solve with general optimization algorithms. To address this challenging problem, we propose a DDLCCO algorithm that uses $K$ parallel deep neural networks (DNNs) to quickly and efficiently generate offloading decisions that yield suboptimal solutions to the formulated optimization problem. Compared with a single DNN, these $K$ parallel DNNs use different training parameters such as weights, resulting in large differences in the DNNs' outputs, which accelerates the convergence of the algorithm. The main contributions of this paper are summarized as follows:
1) For better utilization of computation resources, this paper proposes a cooperative computation offloading scheme for LEO satellite networks with a three-tier computation architecture. In this network, the formulated optimization problem minimizes the total execution delay of ground users under the constraints of the limited battery capacity of ground users and the computation capability of each LEO satellite.
2) The formulated optimization problem is a large-scale nonlinear integer programming problem, and its computational complexity increases dramatically as the number of LEO satellites and ground users increases. To this end, we propose a DDLCCO algorithm to find a near-optimal solution, where multiple parallel DNNs are used to generate offloading decisions effectively in a distributed fashion.
3) Simulation results show that the convergence of the DDLCCO algorithm is accelerated by using multiple parallel DNNs compared with a single DNN. In addition, the gap between the proposed algorithm and the enumeration algorithm is relatively small, which means that the proposed algorithm achieves near-optimal performance.
The remainder of this paper is organized as follows. The related works are presented in Section II. In Section III, the system model of the three-tier cooperative computation offloading network is presented. Section IV describes the formulated optimization problem. Section V introduces the DDLCCO algorithm. Section VI presents the simulation results with the proposed algorithm. Finally, this paper is concluded in Section VII.
Currently, a range of literature concerns the computation offloading problem in LEO satellite networks with the aim of reducing the energy consumption or execution delay of ground users [15–19].
To elaborate a little further, in [15], the authors proposed a hybrid cloud and edge computing LEO satellite network to reduce the energy consumption of ground users, where the alternating direction method of multipliers algorithm was used to solve the computation offloading problem. The authors of [16] proposed a computation offloading strategy based on game theory in LEO satellite networks to minimize the response time and energy consumption of computation tasks. The work in [17] used a dynamic network virtualization technology to integrate the computation resources within the coverage of LEO satellites to minimize user-perceived delay and energy consumption. In addition, considering the heterogeneity of resources in LEO satellite networks, the authors of [18] proposed a satellite-ground integrated network to dynamically manage the computing resources and spectrum resources of this network, and a deep learning algorithm was adopted to solve the joint resource allocation optimization problem. The work in [19] proposed a space-air-ground integrated computing architecture, where the computation tasks from ground/air users can be processed on HAPs or offloaded to LEO satellites. Furthermore, a joint user association and offloading decision optimization problem was studied with the goal of maximizing the sum rate of ground users.
From the above analysis, the existing works mainly focus on the vertical cooperation among ground users, LEO satellites, or the cloud server while ignoring the horizontal cooperation between LEO satellites. Moreover, most of the existing works use general optimization algorithms to solve the problem of computation offloading or resource allocation, which leads to high computational complexity and is not suitable for real-time computation offloading in LEO satellite networks with time-varying environments. In this paper, we consider not only the vertical cooperation among ground users, LEO satellites, and the cloud server but also the horizontal cooperation between LEO satellites. In addition, a DDLCCO algorithm is proposed for computation offloading in LEO satellite networks with time-varying environments.
In this section, we present the system model of the three-tier cooperative computation offloading network, which includes the network model, coverage model, communication model, and computation model.
We consider cooperative computation offloading in a LEO satellite network with a three-tier computation architecture as shown in Figure 1, which includes $M$ LEO satellites, $I$ ground users, and a remote cloud server. The sets of all LEO satellites and ground users are denoted as $\mathcal{M} = \{1, 2, 3, ..., M\}$ and $\mathcal{I} = \{1, 2, 3, ..., I\}$, respectively. Each LEO satellite is equipped with a lightweight MEC platform such as Docker and can be considered an edge computing node. Furthermore, each LEO satellite is connected to the remote cloud server via feeder links, and multiple neighboring LEO satellites can communicate with each other via ISLs.
Figure 1. The system model of cooperative computation offloading in LEO satellite networks with a three-tier computation architecture.
In the considered network, each ground user has a computation task $W_i \triangleq (D_i, X_i)$ to be processed either by itself, by LEO satellites, or by the remote cloud server. $D_i$ represents the size of the input computation task, and $X_i$ denotes the central processing unit (CPU) cycles required to accomplish the computation task $W_i$. Specifically, when LEO satellite $m$ receives an offloading request from a ground user, the LEO satellite can process the computation task by itself, forward it to other LEO satellites with remaining computation resources, or further forward it to the cloud server for processing. Note that the computation tasks of ground users cannot be partitioned [20], and the size of the input computation tasks changes over time; that is, the requirements of ground users change over time. The notations used in the rest of this paper are summarized in Table 1.
Table 1. Notation.
LEO satellites are characterized by high-speed movement, and thus the communication between ground users and LEO satellites differs from that of terrestrial communication networks. According to [21], a LEO satellite can only communicate with a ground user during a certain period, which can be characterized by the elevation angle between the ground user and the LEO satellite. The elevation angle between a ground user and a LEO satellite can be calculated by
$$\theta = \arccos\left(\frac{(R_e + h)\sin\gamma}{s}\right),$$
where $h$ denotes the distance between the ground user and the LEO satellite orbit, $R_e$ expresses the radius of the earth, $s$ is the distance between the ground user and the LEO satellite, and $\gamma$ stands for the geocentric angle.
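As a hedged sketch of this spherical-earth geometry, the slant range and elevation angle can be computed as follows (the helper names and numeric values are illustrative, not from the paper; the 784 km altitude matches the simulation setup used later):

```python
import math

R_E = 6371.0   # earth radius in km (standard approximate value)
H = 784.0      # assumed orbit altitude in km

def slant_range(gamma):
    """Distance s between ground user and satellite for geocentric angle gamma (rad),
    via the law of cosines on the earth-center/user/satellite triangle."""
    return math.sqrt(R_E**2 + (R_E + H)**2 - 2 * R_E * (R_E + H) * math.cos(gamma))

def elevation_angle(gamma):
    """Elevation angle (rad) from cos(theta) = (R_e + h) * sin(gamma) / s."""
    s = slant_range(gamma)
    return math.acos((R_E + H) * math.sin(gamma) / s)
```

For a satellite directly overhead (gamma = 0), the slant range reduces to the orbit altitude and the elevation angle is 90 degrees, as expected.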
Considering that the MEC server equipped on a LEO satellite is a lightweight computing platform, when a large number of ground users send computation offloading requests to the same LEO satellite, the LEO satellite may be overloaded. Since LEO satellites can exchange information through ISLs to obtain the remaining computing resource status, the computation tasks of ground users can be completed through the cooperation between LEO satellites. Specifically, the computation tasks of ground users received by overloaded LEO satellites can be forwarded to other lightly-loaded LEO satellites for processing through the ISLs, which can balance the computation load of the LEO satellite network and achieve better resource utilization.
We assume that each ground user is associated with only one LEO satellite within a time slot, and each ground user has only one computation task to be offloaded in each time slot. Furthermore, we consider that the spectrum used by ground users is overlapped, which implies that there exists interference between ground users. According to [22], the uplink transmission rate of ground user $i$ that chooses to offload its computation task to LEO satellite $m$ through a wireless link can be denoted as
$$r_{i,m} = B\log_2\left(1 + \frac{p_i g_{i,m}}{\sum_{j\in\mathcal{I}, j\neq i} p_j g_{j,m} + \sigma^2}\right),$$
where $g_{i,m}$ expresses the channel gain between ground user $i$ and LEO satellite $m$, $B$ is the available spectrum bandwidth, $p_i$ denotes the uplink transmit power of ground user $i$, and $\sigma^2$ represents the additive white Gaussian noise (AWGN) power.
In general, the size of the input computation task is much larger than the size of the computation results [23]. Thus, the delay caused by transmitting computation results to ground users is ignored in this paper.
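A minimal sketch of the Shannon-type uplink rate described above, with the interference from other users lumped into one argument (all names are illustrative; powers in watts, bandwidth in Hz):

```python
import math

def uplink_rate(p, g, interference, bandwidth, noise_power):
    """Uplink rate B * log2(1 + p*g / (I + sigma^2)) in bits/s, where
    `interference` is the summed received power of all other ground users."""
    return bandwidth * math.log2(1 + p * g / (interference + noise_power))
```

With unit power, unit gain, no interference, unit noise, and 1 Hz of bandwidth, the rate is exactly 1 bit/s (log2 of 2).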
Through the above analysis, there are three schemes for ground users to process their computation tasks. Let $a_i \in \{0,1\}$ denote whether the computation task $W_i$ of ground user $i$ is processed locally, where $a_i = 1$ denotes that the computation task $W_i$ is computed by ground user $i$; otherwise, $a_i = 0$. Let $b_{i,m} \in \{0,1\}$ indicate whether the computation task $W_i$ of ground user $i$ is processed by the associated LEO satellite $m$, where $b_{i,m} = 1$ denotes that the computation task $W_i$ is offloaded to LEO satellite $m$; otherwise, $b_{i,m} = 0$. Similarly, let $c_i \in \{0,1\}$ express whether the computation task $W_i$ is processed by the cloud server, where $c_i = 1$ denotes that the computation task $W_i$ is executed by the cloud server; otherwise, $c_i = 0$. Considering that each computation task has only one offloading decision in each time slot, the offloading decision of ground user $i$ needs to satisfy the following constraint,
$$a_i + \sum_{m\in\mathcal{M}} b_{i,m} + c_i = 1, \quad \forall i \in \mathcal{I}.$$
In Subsection 4.1, the computation cost for the different offloading schemes is discussed. Then, the formulated optimization problem for minimizing the sum execution delay of ground users is studied in Subsection 4.2.
According to the different offloading schemes, the computation costs in terms of energy consumption and delay for ground users are different.
1) Local computing: For local computing, we define $f_i^{l}$ as the local computation capability of ground user $i$. Thus, the execution time of computation task $W_i$ processed by ground user $i$ can be calculated by
$$t_i^{l} = \frac{X_i}{f_i^{l}},$$
and the corresponding energy consumption of local computing can be calculated by
$$E_i^{l} = \varepsilon \left(f_i^{l}\right)^2 X_i,$$
where $\varepsilon$ expresses the energy coefficient, whose value depends on the chip architecture [24].
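The local delay/energy model above can be sketched in a few lines (a hedged illustration; the numeric values below are assumptions, not the paper's parameters):

```python
def local_cost(X, f_local, eps):
    """Local execution delay t = X / f and energy E = eps * f^2 * X for a task
    of X CPU cycles at f_local cycles/s with energy coefficient eps."""
    t = X / f_local
    e = eps * f_local**2 * X
    return t, e

# Example: a 1e9-cycle task on a 0.1 Gcycles/s device with eps = 1e-27.
t, e = local_cost(1e9, 1e8, 1e-27)
```

Note the trade-off the model encodes: a faster local CPU lowers delay linearly but raises energy quadratically in the clock rate.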
2) LEO satellite computing: For LEO satellite computing, we set $f_{i,m}$ as the computation capability (CPU cycles/s) allocated to ground user $i$ by LEO satellite $m$. Therefore, the computation delay of computation task $W_i$ computed by LEO satellite $m$ can be denoted as
$$t_{i,m}^{comp} = \frac{X_i}{f_{i,m}}.$$
When a large number of computation tasks from ground users are offloaded to the same LEO satellite at the same time, the LEO satellite will be overloaded. Hence, the computation tasks on the overloaded LEO satellite need to be forwarded to other LEO satellites for processing. Let $t_{m,k}^{isl}$ stand for the average round trip time for transferring computation task $W_i$ from LEO satellite $m$ to LEO satellite $k$. The round trip time can be estimated using the average values of historical information [25]. Moreover, $t_{m,k}^{isl} = 0$ when $m = k$, since there is no computation task to transfer within the same LEO satellite. Therefore, if the computation task $W_i$ is finally computed at LEO satellite $k$, the total delay consists of the transmission delay between ground user $i$ and LEO satellite $m$, the propagation delay between ground user $i$ and LEO satellite $m$, the transfer delay between LEO satellite $m$ and LEO satellite $k$, and the computing delay at LEO satellite $k$. The total delay of computation task $W_i$ executed by LEO satellite $k$ can be denoted as
$$t_{i,k}^{leo} = t_{i,m}^{tr} + t_{i,m}^{prop} + t_{m,k}^{isl} + \frac{X_i}{f_{i,k}},$$
where the transmission delay between ground user $i$ and LEO satellite $m$ can be obtained by
$$t_{i,m}^{tr} = \frac{D_i}{r_{i,m}},$$
and the propagation delay between ground user $i$ and LEO satellite $m$ can be calculated by
$$t_{i,m}^{prop} = \frac{s_{i,m}}{v},$$
where $v$ is the speed of light, and $s_{i,m}$ denotes the distance between ground user $i$ and LEO satellite $m$, which can be obtained by
$$s_{i,m} = \sqrt{R_e^2 + (R_e + h)^2 - 2R_e(R_e + h)\cos\gamma}.$$
Furthermore, the energy consumption $E_{i,m}^{off}$ of ground user $i$ for offloading computation task $D_i$ to LEO satellite $m$ can be calculated by
$$E_{i,m}^{off} = p_i t_{i,m}^{tr} = \frac{p_i D_i}{r_{i,m}}.$$
3) Cloud computing: For cloud computing, the computation task $W_i$ is processed by the remote cloud server. Specifically, if the computation task $W_i$ is offloaded to the cloud server, ground user $i$ first transmits the computation task $W_i$ to LEO satellite $m$ via a wireless link. Then, LEO satellite $m$ forwards the received computation task $W_i$ to the cloud server through a feeder link. Let $f_i^{c}$ denote the computation capability (CPU cycles/s) allocated to ground user $i$ by the cloud server [26]. The total delay $t_i^{cloud}$ of computation task $W_i$ processed by the cloud server includes the transmission delay between ground user $i$ and LEO satellite $m$, the propagation delay between ground user $i$ and LEO satellite $m$, the delay of transmitting computation task $W_i$ from LEO satellite $m$ to the cloud server, and the computing delay at the cloud server. Thus, the total delay of computation task $W_i$ processed by the cloud server can be expressed as
$$t_i^{cloud} = t_{i,m}^{tr} + t_{i,m}^{prop} + t_{m,c}^{tr} + t_i^{c,comp},$$
where the computing delay of computation task $W_i$ processed by the cloud server can be denoted as
$$t_i^{c,comp} = \frac{X_i}{f_i^{c}},$$
and the delay of transmitting computation task $W_i$ from LEO satellite $m$ to the cloud server can be calculated by
$$t_{m,c}^{tr} = \frac{D_i}{r},$$
where $r$ is the rate for transmitting computation task $W_i$ from LEO satellite $m$ to the cloud server. Note that the delay caused by transmitting computation results from the cloud server to LEO satellite $m$ is ignored in this paper. Furthermore, it can be seen that the energy consumption of a ground user is the same whether its computation task is offloaded to the cloud server or to a LEO satellite, since in both cases the ground user only transmits the task to its associated LEO satellite.
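The cloud-computing delay chain described above can be sketched in the same style (illustrative names and numbers; the feeder-link propagation delay is folded into the transfer rate, as in the text):

```python
def cloud_delay(D, X, r_im, s_im, r_feeder, f_cloud, v=3e8):
    """Total cloud-processing delay: user-to-satellite transmission and
    propagation, satellite-to-cloud feeder-link transfer, and cloud computing."""
    return D / r_im + s_im / v + D / r_feeder + X / f_cloud
```

The extra feeder-link term is exactly why cloud offloading only pays off when the cloud's larger compute capability outweighs the additional transfer delay.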
To improve the quality of service for ground users, we formulate the cooperative computation offloading problem in LEO satellite networks to minimize the total execution delay of ground users while considering the limited battery capacity of ground users and the computation capability of LEO satellites. Let $\mathbf{X}_i = \{a_i, b_{i,1}, b_{i,2}, ..., b_{i,M}, c_i\}$ denote the computation offloading vector of ground user $i$ and $\mathbf{X} = \{\mathbf{X}_i, i \in \mathcal{I}\}$ express the computation offloading decisions of all ground users. Mathematically, the problem of interest reads
$$\min_{\mathbf{X}} \sum_{i\in\mathcal{I}} \left( a_i t_i^{l} + \sum_{m\in\mathcal{M}} b_{i,m} t_{i,k}^{leo} + c_i t_i^{cloud} \right),$$
subject to the single-decision constraint on each $\mathbf{X}_i$, the battery capacity constraint of each ground user, and the computation capability constraint of each LEO satellite.
However, the formulated optimization problem is a large-scale nonlinear integer programming problem as the number of ground users and LEO satellites increases. In addition, since the objective function and constraints of the formulated optimization problem contain binary variables, the problem is NP-hard. In general, this challenging problem can be reformulated by traditional relaxation methods and then solved by using convex optimization techniques [14]. However, traditional optimization algorithms require a large number of iterations to adjust the offloading decision to the optimum, which leads to high computational complexity and is not suitable for real-time computation offloading in LEO satellite networks with time-varying environments. To effectively solve this problem, we propose a DDLCCO algorithm to obtain suboptimal solutions in the following section.
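To make the combinatorial blow-up concrete, a brute-force search over per-user choices (local, satellite, or cloud) already visits $3^I$ combinations. The toy sketch below illustrates this under simplifying assumptions (per-satellite capacity coupling is omitted, and all delay/energy numbers are made up for illustration):

```python
from itertools import product

def enumerate_offloading(delays, energies, batteries):
    """Brute-force search over per-user offloading choices (0 = local,
    1 = satellite, 2 = cloud) minimizing total delay subject to each user's
    battery budget. `delays` and `energies` hold one 3-tuple per user."""
    users = len(delays)
    best, best_choice = float("inf"), None
    for choice in product(range(3), repeat=users):
        # Discard any joint decision that violates a user's battery budget.
        if any(energies[i][c] > batteries[i] for i, c in enumerate(choice)):
            continue
        total = sum(delays[i][c] for i, c in enumerate(choice))
        if total < best:
            best, best_choice = total, choice
    return best, best_choice
```

Even this stripped-down version scales exponentially in the number of users, which is the motivation for the learning-based approach that follows.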
To find a satisfactory solution to the formulated optimization problem, we propose a DDLCCO algorithm, which includes offloading decision generation and deep learning. Specifically, we first give an introduction to DNNs in Subsection 5.1. Then, an overview of the DDLCCO algorithm is described in Subsection 5.2. Finally, offloading decision generation and deep learning are described in Subsections 5.3 and 5.4, respectively.
Before introducing the DNN model, we first give a brief introduction to the perceptron, since the DNN model is an extension of the perceptron. As shown in Figure 2(a), the perceptron consists of three inputs, a neuron, and an output. Through this neuron, the linear relationship between output and input is learned to get an intermediate output (but not the final output), which can be denoted as
$$z = \sum_{i=1}^{3} w_i x_i + b,$$
Figure 2. The perceptron and DNN model.
where $w_i$ and $b$ denote the weights and bias, respectively. Then, the output of the perceptron can be obtained by
$$y = \delta(z),$$
where $\delta(\cdot)$ is the activation function. The choice of the activation function mainly depends on what kind of result we want to output, e.g., if we need the output to satisfy $y \in \{-1, 1\}$, then we can choose $\mathrm{sign}(z)$ as the activation function.
The neural network is an extension of the perceptron, and a DNN can be interpreted as a neural network with multiple hidden layers, as shown in Figure 2(b). The layers of a DNN are fully connected, which means that any neuron in the $i$-th layer is connected to every neuron in the $(i+1)$-th layer [27, 28]. The learning process of a DNN is composed of the forward propagation process and the back propagation (BP) process. In the forward propagation process, the training samples are first fed to the input layer, then pass through the hidden layers, and finally reach the output layer, which outputs a result. Since there is an error between the outputs of the DNN and the actual values of the samples, we need to calculate the error between the output values and the actual values and then propagate this error from the output layer back to the input layer. In the BP process, we continuously adjust the values of the weights to minimize this error. In general, the error between the output values and the actual values can be expressed as a loss function. The purpose of DNN training is to minimize the loss function so as to obtain the model that we need.
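A minimal forward/backward pass can make the two processes concrete. The sketch below uses toy dimensions (not the paper's architecture) and, for brevity, applies a single gradient step on the output-layer weights only, with a squared-error derivative; all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# A 3-input, one-hidden-layer network with 4 hidden neurons and 1 output.
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)
relu = lambda z: np.maximum(z, 0.0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def forward(x):
    h = relu(W1 @ x + b1)       # hidden layer (forward propagation)
    y = sigmoid(W2 @ h + b2)    # output layer
    return h, y

x, target = rng.normal(size=3), np.array([1.0])
h, y = forward(x)

# Back propagation: push the output error through the sigmoid derivative and
# take one gradient step on the output weights.
err = y - target
grad_W2 = np.outer(err * y * (1 - y), h)
W2 -= 0.1 * grad_W2
```

Repeating this forward/backward cycle over many samples, and extending the gradient computation to the hidden layers, is exactly the training loop the text describes.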
The structure of the DDLCCO algorithm is shown in Figure 3, which consists of offloading decision generation and deep learning. The generation of offloading decisions mainly depends on $K$ parallel DNNs, which are characterized by their embedded parameters, such as the weights of connected hidden neurons. Let $\theta_k$ denote the embedded parameters of DNN $k$. At the $t$-th time slot, each DNN first takes the offloading data $D_t$ of ground users as input and then outputs a relaxed offloading decision $\hat{x}_{k,t}$ (a continuous variable between 0 and 1). To meet the objective function and constraints of problem (15), we need to map these continuous output variables to binary variables. Finally, the offloading decision $x_t^*$ that minimizes problem (15) is chosen as the final output of the offloading decision generation stage, and the newly obtained data $(D_t, x_t^*)$ is stored in the replay memory.
Figure 3. The structure of the DDLCCO algorithm.
Figure 4. The ratio versus different training steps.
Figure 5. The learning loss versus different learning rates.
As for the deep learning stage in the $t$-th time slot, a batch of samples is taken from the memory to train these $K$ parallel DNNs. Meanwhile, the parameters of these $K$ parallel DNNs are updated according to the loss function. Then, the above steps are repeated to train these $K$ parallel DNNs until the entire network reaches a steady state. The specific processes of these two stages are introduced in the following subsections.
For the input offloading data $D_t$ in the $t$-th time slot, the parameters $\theta_{k,t}$ of these $K$ parallel DNNs are randomly initialized. Note that these $K$ parallel DNNs have the same structure, but their parameters $\theta_{k,t}$ are different. Correspondingly, each DNN outputs a relaxed offloading decision $\hat{x}_{k,t} = f_{\theta_{k,t}}(D_t)$ according to the parameterized function $f_{\theta_{k,t}}$, where $\hat{x}_{k,t}$ represents the output of DNN $k$ in the $t$-th time slot. Furthermore, we adopt the Rectified Linear Unit (ReLU) as the activation function in the hidden layers to correlate the output of each neuron with its input [29]. In the output layer, we use the sigmoid function as the activation function, i.e., $y = 1/(1 + e^{-x})$. However, the output of each DNN is a continuous variable. To solve problem (15) effectively, we need to map these continuous variables to binary variables. In this paper, we adopt the binary mapping method of [30], which can be expressed as
$$x_{k,t} = \begin{cases} 1, & \hat{x}_{k,t} > 0.5, \\ 0, & \hat{x}_{k,t} \leq 0.5. \end{cases}$$
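The relaxed-to-binary mapping is a simple elementwise threshold; a one-line sketch (the 0.5 threshold follows the mapping described in the text, the function name is illustrative):

```python
def binarize(x_hat, threshold=0.5):
    """Map each relaxed DNN output in [0, 1] to a binary offloading action:
    1 if the relaxed value exceeds the threshold, otherwise 0."""
    return [1 if v > threshold else 0 for v in x_hat]
```

Each of the $K$ DNN outputs is binarized this way before being scored against problem (15).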
Algorithm 1. DDLCCO algorithm for computation offloading.
1: Input offloading data $D_t$ at the $t$-th time slot;
2: Initialize the $K$ parallel DNNs with random parameters $\theta_{k,t}$ and an empty memory;
3: Let $\sigma$ represent the training interval.
4: for $t = 1, 2, 3, ..., T$ do
5:   Input offloading data $D_t$ to all $K$ DNNs;
6:   Generate a relaxed offloading decision $\hat{x}_{k,t} = f_{\theta_{k,t}}(D_t)$ from DNN $k$;
7:   Map the continuous offloading decisions into binary actions $x_{k,t}$;
8:   Compute $Q^*(D_t, x_{k,t})$ according to $x_{k,t}$;
9:   Choose the best offloading decision according to $x_t^* = \arg\min Q^*(D_t, x_{k,t})$;
10:  Update the memory by adding $(D_t, x_t^*)$;
11:  if $t \bmod \sigma = 0$ then
12:    Randomly select $K$ batches of training samples from the memory;
13:    Train the DNNs and update $\theta_{k,t}$ using the Adam algorithm;
14:  end if
15: end for
Therefore, the continuous output variables of these $K$ parallel DNNs are mapped to binary variables. After obtaining the outputs of the $K$ parallel DNNs, these variables are substituted into problem (15), and the best offloading decision can be chosen by the following formula,
$$x_t^* = \arg\min_{x_{k,t}} Q^*(D_t, x_{k,t}).$$
The optimal offloading decision $x_t^*$ obtained by (19) and its corresponding input offloading data $D_t$, i.e., the sample $(D_t, x_t^*)$, will be saved in an initially empty memory with limited capacity. When the memory is full, the newly generated sample replaces the oldest sample.
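The fixed-capacity, oldest-out memory described here maps directly onto a bounded deque; a minimal sketch (class and method names are illustrative):

```python
from collections import deque

class ReplayMemory:
    """Fixed-capacity sample store: once full, appending a new (data, decision)
    pair silently evicts the oldest one, matching the behavior described above."""
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)

    def add(self, data, decision):
        self.buffer.append((data, decision))

    def __len__(self):
        return len(self.buffer)
```

Using `deque(maxlen=...)` keeps both insertion and eviction O(1), which matters when a sample is stored every time slot.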
We use experience replay technology [31] to train these $K$ parallel DNNs with samples stored in the memory. Firstly, we randomly select a batch of training samples from the memory. Then, the Adam algorithm [32, 33] is used to update the parameters of the $K$ parallel DNNs to minimize the cross-entropy loss. The cross-entropy loss is calculated by
$$L(\theta_{k,t}) = -\left[(x_t^*)^{\mathsf{T}} \log f_{\theta_{k,t}}(D_t) + (1 - x_t^*)^{\mathsf{T}} \log\left(1 - f_{\theta_{k,t}}(D_t)\right)\right].$$
In this paper, we use the cross-entropy function as the loss function since it can effectively accelerate the convergence of the algorithm compared to other loss functions such as the mean square error. The detailed process of the DDLCCO algorithm is shown in Algorithm 1.
In this section, we evaluate the performance of the proposed DDLCCO algorithm through simulations and compare it with the following algorithms:
1) Vertical Cooperation: For vertical cooperation, the computation task of a ground user can only be processed by itself, by its associated LEO satellite, or by the cloud server, without inter-satellite forwarding.
2) Greedy: Since MEC can usually provide a lower computation delay, each ground user offloads all computation tasks to its associated LEO satellite.
3) Enumeration: The enumeration algorithm is a traditional optimization algorithm that finds the optimal solution by searching all possible offloading decisions of ground users. However, the computational complexity of this algorithm is very high, and thus we only evaluate its performance in a small network.
In the simulation, the software environment is Python 3.6 with TensorFlow and Matlab 2018b, and the hardware environment is a GPU-based server. We assume that there are 3 LEO satellites and 24 ground users in the network, where the 3 LEO satellites are in an orbit of 784 kilometers (km). Furthermore, we assume that ground users are randomly deployed in a fixed area, and each ground user has only one computation task to be processed in each time slot. The transmit power of each ground user is 23 dBm, and the channel bandwidth is 20 MHz. For the computation task, we consider that the size of the input computation task is randomly distributed between 1,000 kilobits (kb) and 5,000 kb, and the required CPU cycles to accomplish the computation task is 1,000 Megacycles (Mcycles). In addition, the computation capability of ground users is 0.1 Gigacycles per second (Gcycles/s). The computation capability allocated by LEO satellites and the cloud server to each ground user is 3 Gcycles/s and 10 Gcycles/s [13, 34], respectively.
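As a quick sanity check on these parameters, the pure computing delays (transmission excluded) for a 1,000-Mcycle task at the three tiers work out as follows:

```python
# Computing delay X / f at each tier, using the stated simulation parameters.
X = 1_000e6                              # required CPU cycles (1,000 Mcycles)
f_local, f_leo, f_cloud = 0.1e9, 3e9, 10e9  # cycles/s at user, satellite, cloud

t_local = X / f_local   # 10 s on the ground user
t_leo = X / f_leo       # about 0.33 s at a LEO satellite
t_cloud = X / f_cloud   # 0.1 s at the cloud server
```

This gap between local and remote computing delays is what leaves room for offloading to help, once transmission and propagation delays are added back in.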
The layers of the DNNs in our proposed DDLCCO algorithm are fully connected and consist of one input layer, two hidden layers, and one output layer. In addition, the first hidden layer has 120 neurons and the second hidden layer has 80 neurons. The training interval $\sigma$ and memory size are set to 10 and 1024, respectively. Next, we will illustrate the advantages of the proposed DDLCCO algorithm through simulation.
We use the ratio of the suboptimal solution obtained by the proposed DDLCCO algorithm to the optimal solution obtained by the enumeration algorithm as the ordinate of Figure 4. To show that using multiple DNNs to generate offloading decisions performs better than a single DNN, we compare the changes in the value of the ratio under different numbers of DNNs. Intuitively, the higher the value of the ratio, the closer the solution of the proposed DDLCCO algorithm is to the optimal solution. It can be seen from Figure 4 that the value of the ratio increases as the number of DNNs increases and gradually approaches 1. Another observation is that as the number of DNNs increases, the convergence speed becomes faster. This is because, by using DNNs with different parameters, the output results of different DNNs are different, and this difference in output results accelerates the convergence of the algorithm. In this paper, the proposed DDLCCO algorithm uses 3 parallel DNNs to generate offloading decisions, which not only speeds up the convergence of the algorithm but also obtains a solution that is closest to the optimal solution.
Figure 5 shows the relationship between the learning loss and the number of training steps of the proposed DDLCCO algorithm for learning rates of 0.01, 0.001, 0.0001, and 0.00001. It can be observed from Figure 5 that the learning rate affects the learning performance, since the learning rate is the step length used to minimize the loss function. The higher the learning rate, the faster the loss function converges, which indicates that the algorithm approaches the suboptimal solution more quickly. As a result, this paper chooses a learning rate of 0.01 to train the DNN model, as it yields the best learning performance.
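The step-length intuition can be seen on a toy quadratic loss: under plain gradient descent, a larger (but still stable) learning rate shrinks the loss faster in the same number of steps. This only illustrates the trend in Figure 5; it is not the paper's training setup.

```python
def final_loss(lr, steps=100, x0=1.0):
    """Gradient descent on f(x) = x**2 (gradient 2x) from x0; returns f after `steps`."""
    x = x0
    for _ in range(steps):
        x -= lr * 2.0 * x   # each step contracts x by a factor (1 - 2*lr)
    return x * x

# Same learning rates as in Figure 5; larger rates reach a smaller loss
# after the same number of steps.
losses = {lr: final_loss(lr) for lr in (0.01, 0.001, 0.0001, 0.00001)}
```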
Figure 6. The total delay of ground users versus the computation capability of ground users.
Figure 7. The total delay of ground users versus the computation requirement of ground users.
To demonstrate that the proposed DDLCCO algorithm outperforms the other benchmarks, Figure 6 compares the total delay of ground users for the four algorithms versus the computation capability of ground users. We can observe that the total delay of all four algorithms decreases as the computation capability of the ground users increases. This is because, when the computation capability of a ground user increases, the user can process computation tasks by itself without offloading them to the LEO satellites or the cloud server, which reduces the delay of transmitting computation tasks to the LEO satellites. It is interesting to note that the total delay of the proposed DDLCCO algorithm is lower than that of the vertical cooperation algorithm and the greedy algorithm, and the gap between the proposed DDLCCO algorithm and the enumeration algorithm is relatively small. The reason is that the proposed DDLCCO algorithm provides ground users with multiple offloading schemes, considering not only the cooperation among ground users, LEO satellites, and the cloud server but also the cooperation between LEO satellites.
Figure 8. The total delay of ground users versus the number of ground users.
In Figure 7, we compare the total delay of ground users for the four algorithms versus the computation requirement of ground users. In this experiment, the total delay of all four algorithms increases as the computation requirement of the ground users increases. This can be explained by the fact that the computation capability allocated to ground users by the LEO satellites and the cloud server, as well as the local computation capability of ground users, is fixed. According to (4), (6) and (13), an increase in the computation requirement of ground users causes ground users, LEO satellites, and the cloud server to take more time to process those computation tasks. Moreover, the total delay of the proposed DDLCCO algorithm is lower than that of the vertical cooperation algorithm and the greedy algorithm and close to that of the enumeration algorithm, which shows that the proposed DDLCCO algorithm can effectively reduce the total delay of ground users.
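Although (4), (6) and (13) are not reproduced here, the trend follows from the usual offloading delay model: execution time scales as required cycles divided by the (fixed) computation capability, plus a transmission term for offloaded tasks. The sketch below uses that generic model with the capabilities from the simulation setup; the uplink rate and the simplifications (e.g. ignoring the satellite-to-cloud hop) are our assumptions, not values from the paper.

```python
# Generic delay model: delay = (transmit time if offloaded) + cycles / capability.
# Capabilities are those stated in the simulation setup; the 50 Mbit/s uplink
# rate is an illustrative assumption, not a value from the paper.

REQUIRED_CYCLES = 1000e6   # 1,000 Mcycles per task
TASK_BITS = 3000e3         # e.g. a 3,000 kilobit input task
UPLINK_RATE = 50e6         # assumed transmission rate (bits/s)

def local_delay(cycles, f_local=0.1e9):
    return cycles / f_local

def satellite_delay(bits, cycles, f_sat=3e9, rate=UPLINK_RATE):
    return bits / rate + cycles / f_sat

def cloud_delay(bits, cycles, f_cloud=10e9, rate=UPLINK_RATE):
    # Ignores the extra satellite-to-cloud hop for simplicity.
    return bits / rate + cycles / f_cloud

# With capabilities fixed, every delay grows linearly in the required cycles,
# which is the trend observed in Figure 7.
d_local = local_delay(REQUIRED_CYCLES)
d_sat = satellite_delay(TASK_BITS, REQUIRED_CYCLES)
d_cloud = cloud_delay(TASK_BITS, REQUIRED_CYCLES)
```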
Finally, Figure 8 depicts the total delay of ground users for the four algorithms versus the number of ground users. Obviously, the total delay of all four algorithms increases as the number of ground users increases, because the number of computation tasks grows with the number of ground users, leaving more tasks to be processed. However, the total delay of the proposed DDLCCO algorithm grows much more slowly than that of the vertical cooperation algorithm and the greedy algorithm, and the gap with the enumeration algorithm remains quite small. This is because the proposed DDLCCO algorithm provides ground users with more computation offloading opportunities. Thus, the total delay of the proposed DDLCCO algorithm is lower than that of the other algorithms. The above analysis shows that, compared with the other benchmark algorithms, the proposed DDLCCO algorithm can effectively reduce the total delay of ground users.
In this paper, we have introduced a cooperative computation offloading scheme for LEO satellite networks with a three-tier computation architecture by leveraging the vertical cooperation among ground users, LEO satellites, and the cloud server, and the horizontal cooperation between LEO satellites. To improve the quality of service for ground users, we have formulated an optimization problem that minimizes the total execution delay of ground users subject to the limited battery capacity of ground users and the computation capability of each LEO satellite. Since traditional optimization algorithms cannot solve the real-time computation offloading problem in LEO satellite networks with a time-varying environment, we have proposed a DDLCCO algorithm consisting of K parallel DNNs to generate offloading decisions effectively. Extensive numerical results illustrate that the proposed DDLCCO algorithm accelerates the convergence of the algorithm and effectively reduces the total execution delay of ground users.
This work is partially supported by the National Key R&D Program of China (2020YFB1806900), by Ericsson, by the Natural Science Foundation of Jiangsu Province (No. BK20200822), by the Natural Science Foundation of Jiangsu Higher Education Institutions of China (No. 20KJB510036), and by the open research fund of the Key Lab of Broadband Wireless Communication and Sensor Network Technology (Nanjing University of Posts and Telecommunications), Ministry of Education (No. JZNY202103).