Efficient Bandwidth Allocation and Computation Configuration in Industrial IoT

ZTE Communications, 2023, No. 1

HUANG Rui, LI Huilin, ZHANG Yongmin

(Central South University, Changsha 410012, China)

Abstract: With the advancement of the Industrial Internet of Things (IoT), the rapidly growing demand for data collection and processing poses a huge challenge to the design of data transmission and computation resources in industrial scenarios. Taking advantage of the improved model accuracy offered by machine learning algorithms, we investigate the inner relationship between system performance and data transmission and computation resources, and then analyze the impacts of bandwidth allocation and computation resources on the accuracy of the system model. A joint bandwidth allocation and computation resource configuration scheme is proposed, and the Karush-Kuhn-Tucker (KKT) conditions are used to obtain an optimal bandwidth allocation and computation configuration decision, which minimizes the total computation resource requirement while ensuring that the system accuracy meets the industrial requirements. Simulation results show that the proposed bandwidth allocation and computation resource configuration scheme can reduce computing resource usage by 10% compared with the average allocation strategy.

Keywords: bandwidth allocation; computation resource management; industrial IoT; system accuracy

1 Introduction

In recent years, the advances in computation, communication and application design and the rapid development of the Internet of Things (IoT) have been driving the realization of intelligence and automation in industry[1–2]. Through various IoT devices, a large amount of data, such as images, sounds and temperatures, can be collected to judge the operating status of equipment, and efficient follow-up maintenance/management strategies are then made. For the traditional Industrial IoT with the cloud, the system performance may be affected by the network performance and the computing capability of the cloud, since data should be transmitted to a remote cloud via the Internet for processing. With the development of industry, more and more data needs to be collected and processed in real time, leading to explosive growth in communication overhead and computation requirements, which brings significant challenges to the design of the Industrial IoT, especially under high-reliability requirements.

To solve data transmission issues in the Industrial IoT, the communication framework has been updated to improve the speed and reliability of data transmission[3–4], and a new wireless transmission system framework has also been proposed to help design an operable and effective end-to-end wireless solution[5]. Moreover, many works have focused on improving data transmission technologies, such as time-slot frequency-hopping technologies[6] and clustering of data transmission[7]. Besides optimizing the communication framework of the Industrial IoT, some researchers have considered and studied the energy consumption, delay, cost and other parameters associated with data transmission; for example, Ref. [8] proposed a bandwidth allocation strategy based on a deep reinforcement learning algorithm, and Ref. [9] enabled the control of transmission energy consumption under dynamically changing bandwidth. To deal with large and unstable communication latency, an edge computing system with computing resources deployed at the network edge has been introduced into the Industrial IoT and has become a potential mainstream solution[10–12]. These works alleviate the problem of insufficient wireless resources.

To solve the computation resource issues, a lot of work has focused on optimizing task offloading performance for cloud computing/edge computing or collaborative edge-cloud computing, such as computation delay, energy consumption, resource efficiency and data quality, to guarantee the quality of computation service[13–16]. With the gradual deepening of machine learning research, it has been discovered that the number of training epochs directly affects the accuracy of the system model after training[17–18]. Considering that the allocated computation resources determine the number of training epochs in a given time scale, some researchers have investigated the relationships among the accuracy of the system model, the amount of processed data, the amount of computation resources, the training speed/delay and the energy consumption, and have made use of machine learning based algorithms to further improve the performance of the computation system[19–22].

However, most current works do not consider the inner relationship between bandwidth allocation and the computation resource management incurred by data transmission, and simply assume that the IoT devices transmit all of the collected data to the edge server via access points (APs). The AP tries its best to forward the data to the edge server, and the edge server processes all the received data with its available computation resources. Unfortunately, with the explosive increase of IoT devices, it is difficult for the existing Industrial IoT system to carry such a heavy workload, which may lead to network congestion or even a network crash when wireless communication resources are exhausted. Therefore, to solve the data transmission problem, it is worthwhile to consider the inner relationship among the computation resources, the accuracy of the system model and the data transmission. Making up for the shortage of wireless resources by increasing the computing resources can guarantee system accuracy.

In this paper, we consider the scenario of resource management in the Industrial IoT, where the AP allocates the wireless communication resources and the edge server trains a high-accuracy model with its computation resources. First, we model the available channel bandwidth for each IoT device based on the allocated bandwidth and the distance between the IoT device and the AP. Second, we formulate the bandwidth allocation and computation configuration as a resource requirement minimization problem. Then, we analyze the relationship among the transmitted data, the computation resources and the system accuracy, and design a heuristic algorithm to obtain the optimal computation resource allocation and communication resource management for each IoT device. The contributions of this paper can be summarized as follows:

• The bandwidth allocation and computation resource management problem for the Industrial IoT is formulated as a cost minimization problem with the given accuracy requirement.

• The relationship among the accuracy of the system model, the transmitted data and the computation resources is investigated, and an efficient bandwidth allocation and resource management scheme is designed to satisfy the system requirement with a minimal resource requirement.

• Simulation results show the proposed algorithm can minimize the resource requirement with a performance guarantee.

The rest of the paper is organized as follows. Section 2 presents the system model and problem formulation. An algorithm that can minimize the resource requirement with a performance guarantee is proposed in Section 3. A performance analysis based on simulation results is presented in Section 4. Finally, Section 5 concludes our work.

2 System Model and Problem Formulation

Fig. 1 shows an industrial scenario of the Industrial IoT system. In this system, there are N IoT devices, whose set is denoted by N = {1, 2, …, N}, and several APs (small cell base stations or Wi-Fi APs) with edge servers. Generally, an IoT device collects monitoring data and transmits the data to the edge server via the AP using wireless communication technology, the AP allocates the available wireless bandwidth to each IoT device and forwards the monitoring data to the edge server for processing, and the edge server processes the monitoring data using machine learning models to satisfy the requirement of system performance. Here, we assume that each IoT device can adjust its monitoring data according to the available wireless bandwidth, and that the connections between the IoT devices and the APs are given due to the specified monitoring objects. Thus, for ease of description, we only focus on optimizing the communication bandwidth allocation and computation resource management strategies for a single AP with an edge server and multiple IoT devices in this paper, but the results can be extended to multiple APs with multiple edge servers based on the deployment of the inference model. Note that we mainly focus on the allocation of communication resources and the configuration of computation resources at the edge server.

Considering the time-varying feature of industrial scenarios, one optimization period can be divided into T time segments T = {1, 2, …, T}, where t represents the t-th time segment. The accuracy requirement of a system model for one IoT device during one time segment is a constant and can be changed across different time segments.

▲Figure 1. An example of industrial scenarios

2.1 Communication Model

Generally, all IoT devices need to send their real-time monitoring data to the edge server for processing via the AP. This means that several IoT devices will transmit their data to the AP simultaneously. To mitigate interference among IoT devices, some effective interference cancellation techniques, such as orthogonal frequency-division multiple access (OFDMA) and time division multiple access (TDMA), can be used by the AP. In this paper, we assume that OFDMA is used for wireless communications. Besides the interference, signal path loss is another important factor that affects data transmission. According to Refs. [23] and [24], the path loss can be formulated as a function of the transmission distance with a path loss exponent 2 ≤ α ≤ 4. Let h_{n,t} denote the small-scale channel gain from the n-th mobile device to the AP during time segment t. The achievable data transmission rate for IoT device n during time segment t, denoted by R_{n,t}, can be given by
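$$R_{n,t} = w_{n,t}\,\log_2\!\left(1 + \frac{P_{n,t}\, h_{n,t}\, d_{n,t}^{-\alpha}}{\sigma^{2}}\right),$$

(this expression assumes the standard Shannon-capacity form with distance-based path loss, reconstructed from the definitions below),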

where w_{n,t} denotes the bandwidth allocated to IoT device n during time segment t, P_{n,t} denotes the transmission power of IoT device n during time segment t, d_{n,t} denotes the distance between IoT device n and the AP, and σ² denotes the background noise power. Generally, w_{n,t} is determined by the AP, d_{n,t} and σ² are constants, and the value of P_{n,t} can be calculated by power control algorithms[24–25]. Due to the limitation of the AP's wireless communication resources, the total bandwidth that can be allocated to IoT devices has an upper bound, denoted by W̄. Thus, we have
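$$\sum_{n \in \mathcal{N}} w_{n,t} \le \bar{W}, \quad \forall t \in \mathcal{T}.$$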

It is obvious that the bandwidth allocation strategy of the AP should consider the distance d_{n,t} of IoT device n and the requirements of all the IoT devices. The data set of IoT device n that has been transmitted to the edge server during time segment t, denoted by D_{n,t}, is
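$$D_{n,t} = R_{n,t} \cdot t,$$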

where t is the total number of time units in one time segment.

2.2 Computation Model

The edge server can make use of the received data and machine learning based algorithms to train a high-accuracy system model for each IoT device. We define accuracy as the opposite of the loss function in a training model based on federated learning. In general, the performance of the system model achieved by the machine learning algorithm can be affected by multiple factors, including feature selection, user-defined parameters, data sets and the computation resources for training. In this paper, we mainly consider the impact of the data set and the computation resources on the training results and intend to find an appropriate data set and computing resources to satisfy the requirements of system accuracy.

According to Refs. [17] and [19], the accuracy of the system model obtained by training traditional machine learning algorithms generally tends to increase with the size of the data set. At the same time, due to the noise δ_{n,t} in the data set and the limitation of the model capacity, the accuracy growth rate of the system model will gradually slow down until it becomes stable[26]. Besides, the precision of a machine learning algorithm, such as a neural network, has a logarithmic relationship with the number of training epochs[17]. With the increase of computation resources, the number of training epochs within the limited time scale can be increased, which can improve the precision of the system model in a logarithmic form. Therefore, in this paper, the accuracy of the system model for IoT device n during time segment t, denoted by ξ_{n,t}, can be modeled as

where a and b are accuracy parameters determined by the machine learning algorithm with 0 ≤ a, b ≤ 1, C_{epoch} denotes the computation resources required for training the data set for one epoch, D_{unit} denotes a reference unit of the data set for training, and δ_{n,t} is the influence factor of the noise in the data set on the accuracy. Generally, δ_{n,t} (−1 ≤ δ_{n,t} < 0) is a constant affected by the noise during time segment t[27].

It can be found that both the data set and the computation resources can affect the accuracy of the system model. This provides the Industrial IoT with an opportunity to solve the wireless resource issues by managing the computation resources.
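As a purely illustrative sketch of this trade-off, the short Python snippet below uses a hypothetical accuracy function that is merely increasing and concave in both the computation resources and the data set size; it is a stand-in for the model in Eq. (4), and all parameter values are assumptions:

```python
import numpy as np

# Hypothetical accuracy model: increasing and concave in both the computation
# resources C and the data set size D (a stand-in, not the paper's Eq. (4)).
def accuracy(C, D, a=0.3, b=0.3, delta=-0.02, C_epoch=1e3, D_unit=1e2):
    return (1.0 + delta) * (1.0 - a * np.exp(-C / C_epoch) - b * np.exp(-D / D_unit))

# Doubling the computation budget yields smaller and smaller accuracy gains,
# while a larger data set raises the accuracy reached with the same budget.
for C in (500, 1000, 2000, 4000):
    print(C, round(accuracy(C, 300), 3), round(accuracy(C, 600), 3))
```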

2.3 Problem Formulation

In this paper, we intend to design an efficient bandwidth allocation and computation resource management scheme for the Industrial IoT to satisfy the accuracy requirement of each IoT device. Let ξ̄_{n,t} denote the accuracy requirement of IoT device n during time segment t. Thus, we have
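$$\xi_{n,t} \ge \bar{\xi}_{n,t}, \quad \forall n \in \mathcal{N},\ \forall t \in \mathcal{T}.$$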

To achieve the accuracy requirement of each IoT device, the AP allocates its available communication resources to IoT devices for data transmission while the edge server manages the computation resources for data processing. In other words, when wireless communication resources are scarce/costly, more computation resources can be used to improve the accuracy of the system model. Otherwise, more computation resources can be saved while keeping the accuracy of the system model at a given level.

Considering that the wireless communication resources for each AP are limited, our objective is to minimize the total computation resource requirement, which can both minimize the operating cost and identify the bottleneck of the system performance. The bandwidth allocation and computation resource management problem can be formulated as follows:
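$$\begin{aligned}
\textbf{P1:} \quad \min_{\mathbf{W},\,\mathbf{C}} \quad & \sum_{t \in \mathcal{T}} \sum_{n \in \mathcal{N}} C_{n,t} \\
\text{s.t.} \quad & \sum_{n \in \mathcal{N}} w_{n,t} \le \bar{W}, \quad \forall t \in \mathcal{T}, \\
& \xi_{n,t} \ge \bar{\xi}_{n,t}, \quad \forall n \in \mathcal{N},\ \forall t \in \mathcal{T},
\end{aligned}$$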

where W = {w_{n,t}, ∀n, t} is the set of bandwidth allocations of the AP and C = {C_{n,t}, ∀n, t} is the set of computation resources for data processing. The objective of Problem P1 is to obtain the optimal bandwidth allocation, which minimizes the total amount of computation resources. The first constraint ensures that the sum of the bandwidth resources allocated to the IoT devices does not exceed the total amount of available bandwidth resources of the AP. The second constraint guarantees that the accuracy of the system model for each IoT device meets the industrial requirements.

3 Optimal Bandwidth Allocation and Computation Configuration Scheme

To solve this problem, from the perspective of the edge server, we study the relationship among the accuracy ξ_{n,t}, the computation resources C_{n,t} and the data set D_{n,t} of a specific IoT device n. To satisfy the accuracy requirement of each IoT device, we can analyze the influence of the data set D_{n,t} on the computation resource requirement for each IoT device. Then, through the communication model, the relationship between the data set and the allocated bandwidth resources can be obtained. Thus, we can derive the impact of bandwidth allocation decisions on the computation resource requirement for each IoT device.

By analyzing the relationship among ξ_{n,t}, C_{n,t} and D_{n,t}, we have the following results.

Lemma 1: The accuracy ξ_{n,t} obtained by the edge server is an increasing and concave function with respect to the computation resources C_{n,t} when the data set D_{n,t} is given.

Proof: According to Eq. (4), we can derive the first and second derivatives of ξ_{n,t} with respect to C_{n,t} as follows:

Since each item of Eq. (9) is positive, ∂ξ_{n,t}/∂C_{n,t} > 0 holds. Since only −a in Eq. (10) is negative, ∂²ξ_{n,t}/∂C_{n,t}² < 0 holds. Thus ξ_{n,t} is an increasing and concave function of C_{n,t}.

Lemma 2: The accuracy ξ_{n,t} obtained by the edge server is an increasing and concave function with respect to the data set D_{n,t} when the computation resources C_{n,t} are given.

The proof of Lemma 2 is similar to that of Lemma 1, so we omit it. We can also derive that ∂ξ_{n,t}/∂D_{n,t} > 0 and ∂²ξ_{n,t}/∂D_{n,t}² < 0. Thus ξ_{n,t} is an increasing and concave function of D_{n,t}.

Theorem 1: The accuracy ξ_{n,t} is an increasing and concave function with respect to both the computation resources C_{n,t} and the data set D_{n,t}.

Proof: According to Lemma 1 and Lemma 2, ξ_{n,t} is an increasing and concave function with respect to C_{n,t} or D_{n,t} when the other variable is given. Furthermore, since C_{n,t} and D_{n,t} are independent, according to Ref. [28], it can be proved that ξ_{n,t} is an increasing and concave function with respect to C_{n,t} and D_{n,t}.

Based on Theorem 1, we have the following theorem for the optimal solution to P1.

Theorem 2: The optimal solution to P1 should satisfy {ξ_{n,t} = ξ̄_{n,t}, ∀n} and ∑_n w_{n,t} = W̄.

Proof: According to Theorem 1, for a specific IoT device n, ξ_{n,t} is an increasing function of C_{n,t} and D_{n,t}. First, we prove by contradiction that ∑_n w_{n,t} = W̄ is a necessary condition for the optimal solution, as follows.

Assume that there exists an optimal solution, denoted by {w*_{n,t}, ∀n}, satisfying ∑_n w*_{n,t} < W̄. Then we can increase any w*_{n,t}, which enlarges the corresponding data set D_{n,t} and, by Theorem 1, allows a smaller C_{n,t} while still meeting the accuracy requirement. This contradicts the optimality of the objective function. Thus ∑_n w_{n,t} = W̄ always holds for the optimal solution to P1. Similarly, if ξ_{n,t} > ξ̄_{n,t} for some device n, the computation resources C_{n,t} could be reduced until the accuracy requirement is met with equality, which again reduces the objective; hence ξ_{n,t} = ξ̄_{n,t} holds for all n.

Therefore, according to the location information and accuracy requirements of all IoT devices, we design an efficient bandwidth allocation and computation configuration algorithm, named EBACC, which can solve P1 and obtain the optimal decision of bandwidth allocation and computation configuration.

Algorithm 1. Efficient Bandwidth Allocation and Computation Configuration Algorithm (EBACC)
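A minimal Python sketch of how an EBACC-style decision could be computed is given below. It replaces the closed-form KKT analysis with a generic numerical solver and uses the same hypothetical accuracy model as above in place of Eq. (4); the Shannon-type rate, all parameter values and all function names are assumptions made for illustration only.

```python
import numpy as np
from scipy.optimize import minimize, brentq

# Hypothetical accuracy model (a stand-in for Eq. (4)): increasing and concave
# in both the computation resources C and the uploaded data volume D.
def accuracy(C, D, a=0.3, b=0.3, delta=-0.02, C_epoch=1e3, D_unit=1e2):
    return (1.0 + delta) * (1.0 - a * np.exp(-C / C_epoch) - b * np.exp(-D / D_unit))

def data_volume(w, d, P=0.1, sigma2=1e-13, alpha=3.0, tau=1.0):
    """Data uploaded in one segment: D = R * tau, with a Shannon-type rate."""
    rate = w * np.log2(1.0 + P * d ** (-alpha) / sigma2)
    return rate * tau

def min_computation(w, d, acc_req, C_max=1e9):
    """Smallest C with accuracy(C, D(w)) >= acc_req (met with equality, cf. Theorem 2)."""
    D = data_volume(w, d)
    shortfall = acc_req - accuracy(C_max, D)
    if shortfall > 0:                       # requirement unreachable for this bandwidth:
        return C_max * (1.0 + shortfall)    # graded penalty steers the solver away
    return brentq(lambda C: accuracy(C, D) - acc_req, 1e-9, C_max)

def ebacc(d, acc_req, W_bar):
    """EBACC-style allocation sketch: minimize the total computation requirement
    subject to sum(w) = W_bar (Theorem 2), solved numerically with SLSQP."""
    n = len(d)
    total_C = lambda w: sum(min_computation(w[i], d[i], acc_req[i]) for i in range(n))
    cons = ({'type': 'eq', 'fun': lambda w: np.sum(w) - W_bar},)
    bounds = [(1e-2, W_bar)] * n
    w0 = np.full(n, W_bar / n)              # start from the equal-allocation baseline
    res = minimize(total_C, w0, bounds=bounds, constraints=cons, method='SLSQP')
    return res.x, total_C(res.x)

# Toy usage: a few devices at different distances, requirements in [0.8, 0.95].
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = rng.uniform(10.0, 200.0, size=5)              # device-to-AP distances (m)
    acc_req = rng.uniform(0.80, 0.95, size=5)         # per-device accuracy targets
    w_opt, C_total = ebacc(d, acc_req, W_bar=200.0)   # 200 MHz bandwidth budget
    print("bandwidth allocation (MHz):", np.round(w_opt, 2))
    print("total computation requirement:", round(C_total, 1))
```

At the returned optimum, the bandwidth budget is used up and each device's accuracy requirement is met with equality, in line with Theorem 2.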

4 Simulation

In this section, numerical experiments are conducted to verify the correctness of the lemmas and the performance of the proposed algorithm EBACC. We first consider a scenario where the AP has a coverage range of 200 m and there are N = 60 randomly scattered IoT devices within the coverage region. We randomly generate the distance d_{n,t} between each IoT device and the AP within [10 m, 200 m]. In the communication model, we assume that the upper bound of the total bandwidth resources of the AP is W̄ = 200 MHz. The reference signal-to-noise ratio (SNR) is defined at the transmission distance d0 = 10 m, the propagation distance is normalized as d_{n,t}/d0, which is within [1, 20], and the path loss exponent is set to α = 3. Meanwhile, we randomly generate the accuracy requirement of each device within [0.8, 0.95].
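A sketch of this setup in Python is shown below; the random number generator, the seed and the variable names are assumptions, since only the ranges of the generated values are specified above:

```python
import numpy as np

rng = np.random.default_rng(42)   # seed chosen arbitrarily for reproducibility

N = 60                            # number of IoT devices in the coverage region
W_bar = 200.0                     # total bandwidth of the AP (MHz)
alpha = 3.0                       # path loss exponent
d0 = 10.0                         # reference distance for the SNR (m)

d = rng.uniform(10.0, 200.0, size=N)        # device-to-AP distances (m)
d_norm = d / d0                             # normalized distances, within [1, 20]
acc_req = rng.uniform(0.80, 0.95, size=N)   # per-device accuracy requirements
```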

In the following subsections, we first explore the relationships between the variables in the computation and communication models. Then we verify the correctness of the lemmas in Section 3. Last, we evaluate the performance of the proposed algorithm EBACC, which obtains the optimal bandwidth resource allocation to minimize the total computation resources while satisfying the accuracy requirements of the IoT devices.

4.1 Impact of Computation Resources and Data Set on Accuracy of Training Results

As shown in Figs. 2(a) and 2(b), the accuracy of the training results shows a growing trend with the increase of the data set size or the computing resources, and the growth rate gradually slows down, which verifies the conclusion of Lemma 1 that the accuracy ξ_{n,t} is an increasing concave function with respect to C_{n,t} and D_{n,t}. Thus, we can configure more computing resources or upload more data to improve the accuracy of the training results.

4.2 Impact of Distance from IoT Devices to AP on Actual Transmission Rate

▲Figure 2. Impact of computation resources and data set on the accuracy of training results

▲Figure 3. Impact of distance from IoT devices to AP on the actual transmission rate and that of data set on computation resources

As shown in Fig. 3(a), the actual transmission rate R_{n,t} is a decreasing and convex function of the distance d_{n,t} from the IoT device to the AP. The closer the IoT device is to the AP, the higher the actual transmission rate. Thus, we can allocate more bandwidth resources to the farther IoT devices, which reduces the impact of distance and yields a smaller computing resource requirement.

4.3 Impact of Data Set on Computation Resources Requirement

As shown in Fig. 3(b), when the accuracy requirement ξ̄_{n,t} is given, the computation resource requirement C_{n,t} of the IoT device is a decreasing and convex function of the data set D_{n,t}, which has been proved by Lemma 3. This means that when the uploaded data set is larger, the computing resources required by the model will be reduced. In addition, it can be found that, with the improvement of the accuracy requirement of the model, the required computation resources C_{n,t} become larger. Therefore, when the accuracy requirement of the model is given, we can make a trade-off between the amount of uploaded data and the computing resources.

4.4 Impact of Bandwidth Resources Allocation on Data Set and Computation Resources Requirement

As shown in Fig. 4(a), during one time segment, the data set D_{n,t} that the IoT device can upload to the edge server is a linear and increasing function of the allocated bandwidth resources w_{n,t}. It can be found that the IoT device closer to the AP has a higher positive slope. Meanwhile, as shown in Fig. 4(b), for an IoT device, the computation resource requirement C_{n,t} is a decreasing and convex function of the bandwidth resources allocated to it, which proves the correctness of Lemma 4. We also find that an IoT device farther away from the AP needs more computation resources to satisfy the accuracy requirement when the bandwidth resource is given. Thus, if we want to minimize the total computation resources, we need to allocate bandwidth resources reasonably. In this way, the IoT device farther away from the AP should be allocated more bandwidth resources.

4.5 Optimal Bandwidth Resources Allocation

▲Figure 4. Impact of bandwidth resources allocated to IoT device on data set and computation resources

▲Figure 5. CDF of computation resource requirement of each IoT device and total computation resource requirement under two situations: 1) optimal bandwidth resource allocation decided by EBACC; 2) allocating bandwidth resources equally to all IoT devices.

We compare two strategies of bandwidth allocation: 1) the optimal bandwidth resource allocation decided by EBACC; 2) allocating bandwidth resources equally to all IoT devices. As shown in Fig. 5(a), when the total available computation resource is 3 000 MHz, the first strategy uses 77.85% of the total computation resource to satisfy the accuracy requirements of all IoT devices, while the second strategy needs 86.28%. This means that the proposed algorithm can significantly improve the efficiency of computing and bandwidth resources. Meanwhile, as shown in Fig. 5(b), the optimal bandwidth resource allocation can significantly reduce the total computation resource demand of all IoT devices. Specifically, when the total bandwidth resource is W̄ = 300 Mbit/s, the optimal bandwidth resource allocation reduces the total computation resource requirement from 2 588.1 MHz to 2 335.4 MHz.

4.6 Relationship Between Optimal Bandwidth Allocation and Distance of IoT Devices

As shown in Figs. 6(a) and 6(b), we explore the relationship between the optimal bandwidth allocation decision w_t = {w_{1,t}, w_{2,t}, …, w_{N,t}} and the distances of all IoT devices d_t = {d_{1,t}, d_{2,t}, …, d_{N,t}}. Compared with the average allocation strategy, the optimal bandwidth allocation decision is obviously affected by the accuracy requirements of the IoT devices and the distance between each device and the AP. It can be found that more bandwidth resources are allocated to the IoT devices farther away from the AP or with higher accuracy requirements.

5 Conclusions

In this paper, we focus on the bandwidth allocation of the AP and the computation resource management of the edge server to ensure that the system accuracy meets the industrial requirements. We formulate the bandwidth allocation and computation resource management problem for the Industrial IoT as a cost minimization problem with a given accuracy requirement. Then, we analyze the relationship among the transmitted data, the computation resources and the system accuracy, and design an efficient algorithm to obtain the optimal computation resource allocation and communication resource management. Numerical experiment results demonstrate that the proposed algorithm EBACC can significantly reduce the total amount of computation resources while satisfying the accuracy requirements of the Industrial IoT.

For future work, we will consider more general cases where IoT devices can choose different APs and edge servers to process their data and obtain a high-accuracy system model. We will focus on the bandwidth allocation between multiple APs and multiple IoT devices, which would be more technically challenging.

▲Figure 6. Relationship between the optimal bandwidth allocation decision and distances of IoT devices