An Improved Convolutional Neural Network Based Indoor Localization by Using Jenks Natural Breaks Algorithm

China Communications, 2022, No. 4

Chengjie Hou, Yaqin Xie, Zhizhong Zhang

1 School of Communication and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing, China

2 School of Electronic and Information Engineering, Nanjing University of Information Science and Technology, Nanjing, China

Abstract: With the rapid growth of the demand for indoor location-based services (LBS), Wi-Fi received signal strength (RSS) fingerprint databases have attracted significant attention because they are easy to obtain. Fingerprinting algorithms based on convolutional neural networks (CNN) are often used to improve indoor localization accuracy. However, the number of reference points used for position estimation has a significant effect on the positioning accuracy, and it is usually selected arbitrarily, without any guiding standard. Therefore, a novel location estimation method based on the Jenks natural breaks algorithm (JNBA), which can adaptively choose more reasonable reference points, is proposed in this paper. The output of the CNN is processed by JNBA, which selects the number of reference points according to the environment. Then, the location is estimated by weighted K-nearest neighbors (WKNN). Experimental results show that the proposed method achieves higher positioning accuracy than existing CNN-based indoor localization methods without sacrificing more time cost.

Keywords: indoor localization; convolutional neural network (CNN); Wi-Fi fingerprints; Jenks natural breaks

I. INTRODUCTION

With the development and innovation of wireless communication technology, the demand for location services in various industries is also increasing [1–3]. Due to the complex characteristics of the indoor environment and the influence of shadow fading and the multipath effect, range-based localization technologies that are widely used outdoors cannot provide accurate indoor positioning services. Meanwhile, GPS cannot be used for indoor localization because GPS signals cannot be received in indoor environments. Since Wi-Fi devices are cheap and widely deployed, indoor localization technologies based on Wi-Fi have become a research hotspot [4, 5]. The received signal strength (RSS) of Wi-Fi is easy to obtain and requires little hardware. Thus, RSS-based indoor positioning methods have received sustained attention.

The earliest RSS-based fingerprint system is RADAR, which collects RSS at multiple receivers and empirically determines triangulations to complete location estimation [6]. After that, a probabilistic K-nearest-neighbor (KNN) method was used in indoor fingerprinting localization to improve the positioning accuracy [7]. In the complex indoor environment, fingerprint-based methods are easily affected by environmental fluctuations, and the system model is unstable. Research shows that CNNs achieve satisfactory results in image recognition and classification tasks [8]. CNNs are also used in indoor fingerprint localization to improve the positioning accuracy and robustness of the system [9–11]. Wang et al. proposed CiFi [12] and Chen et al. proposed ConFi [13], in which fine-grained CSI information was used as the input data set and the CNN model was further improved to reduce the localization error. Ashraf et al. explored a new data set in which the geomagnetic signal was used as the CNN input [14].

Generally, CNN-based indoor fingerprint localization is divided into an offline training phase and an online localization phase. In the offline phase, the collected data set is used to train the CNN model. In the online phase, the data collected by the target user are input into the trained CNN model, and the system returns the location of the reference point with the highest probability. Jang et al. used RSS data as the input of a CNN to locate floors and rooms [15], but they could not determine the exact location. CiFi uses CSI as input data to achieve high-precision positioning [16].

Both CNN-based and traditional fingerprint databases need KNN or WKNN to obtain the final position estimate. The difference is that the CNN selects the reference points with the highest output probabilities, while the traditional fingerprint library selects the reference points with the smallest Euclidean matching distance. Since boundary users have a significant influence on the accuracy of the system, the selection of the number of reference points used for position estimation becomes particularly important. Boundary users are those located at the junction of reference points. CiFi proposed a greedy method that computes a weighted average of reference points as the estimated location of the target. Specifically, the data collected by the target user are input into the trained CNN model, which returns the K reference points with the largest outputs; the position is then estimated by WKNN. Their results show that better localization performance is achieved when K is 2. This algorithm dramatically improves the accuracy of CNN-based fingerprint positioning, but it still has disadvantages. A fixed K value means that the weighted calculation is always carried out over the same number of reference points. If the target user happens to be at the position of a reference point, or at the junction of more than two reference points (some examples are shown in Figure 4), this algorithm does not work well.

Figure 1. The system architecture.

In this paper, the above method is improved by using the Jenks natural breaks algorithm [17], so that the value of K can be selected adaptively. Thus, a more reasonable classification result is selected adaptively based on the similarity between the CNN output values. The proposed method preprocesses the outputs of the CNN and finally returns the K largest reference points. It is worth noting that the value of K is between 1 and 4; the specific implementation is introduced in Section III.

The main contributions of this paper are summarized as follows:

·RSS is used as the input data of the CNN to achieve high-precision indoor positioning. The method proposed in this paper can also use any other reasonable data set, such as magnetic field or CSI.

·An improved position estimation method based on the Jenks natural breaks algorithm (JNBA) is proposed. The algorithm can adaptively select the value of K, and the overall positioning accuracy is improved.

·The proposed method is also effective even when the number of access points is small. The input data of the CNN are constructed by multiple sampling. Experiments prove that this data construction method is feasible and that the CNN model can learn features from it.

·A large number of simulation experiments verify the correctness of the proposed algorithm. Simulation results demonstrate that the localization accuracy is improved compared with traditional CNN-based indoor localization.

In the remainder of this paper, the system model is introduced in Section II. The details of the positioning algorithm are described in Section III. The proposed method is verified through extensive experiments and compared with existing schemes in Section IV. Section V concludes this paper.

II. SYSTEM MODEL

Some common symbols need to be explained in order to describe the model more clearly. Assume that there are N reference points and M APs, and that the experimental space is equally divided into N rectangular areas of size d × d. Each area is numbered from 1 to N as its label, and the reference point is located at the center of the area. Then, the vector formed by the RSS values of the M APs received at any point in the i-th label can be expressed as:
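A natural form of this vector, with r_{i,m} denoting the RSS received from the m-th AP, is:

\mathbf{r}_i = \left[ r_{i,1}, r_{i,2}, \ldots, r_{i,M} \right], \quad i = 1, 2, \ldots, N \qquad (1)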

If S samples are taken at a point in the i-th label, the RSS values obtained at this point can be written as:
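Stacking the S sampled vectors row by row gives an S × M matrix, where the superscript indexes the sample:

\mathbf{R}_i = \begin{bmatrix} r_{i,1}^{(1)} & r_{i,2}^{(1)} & \cdots & r_{i,M}^{(1)} \\ r_{i,1}^{(2)} & r_{i,2}^{(2)} & \cdots & r_{i,M}^{(2)} \\ \vdots & \vdots & \ddots & \vdots \\ r_{i,1}^{(S)} & r_{i,2}^{(S)} & \cdots & r_{i,M}^{(S)} \end{bmatrix} \in \mathbb{R}^{S \times M} \qquad (2)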

where S is the number of samples and M is the number of APs. This matrix can be regarded as an RSS grayscale image, as shown in Figure 2.

The improved CNN-based indoor localization method proposed in this paper has two phases: the offline training phase and the online positioning phase. The architecture of the system is shown in Figure 1.

Figure 2. RSS grayscale images for three labels (from left to right: label 1, label 10, and label 20).

In the offline training phase, multiple RSS grayscale images are collected within each label range as input to train and test the CNN model. In the online positioning phase, an RSS grayscale image of the same structure collected by the target user is input into the trained CNN model for classification. In this way, the localization problem is transformed into a classification problem through the CNN. Then, the output of the CNN is processed by JNBA to choose the value of K adaptively. Finally, the position is estimated by WKNN.

III. DESCRIPTION OF THE PROPOSED POSITIONING ALGORITHM

The details of the positioning algorithm will be presented in this section.

3.1 RSS Grayscale Image

For the RSS values in Eq. (2) above, values less than -110 dBm are represented by 0, since a received signal below -110 dBm can be considered insufficient signal strength [18]. Generally, the input of a CNN must be a square matrix. In this paper, the number of APs is ten, so the number of sampling times S is ten as well. The resulting RSS grayscale images are shown in Figure 2. In the label area radiated by each reference point, V different points are selected to obtain RSS matrices, so there are a total of N × V grayscale images represented by RSS matrices in the data set. Among them, 80% of the data are selected as the training set to train the CNN model, and the remaining data are used as the test set to verify the accuracy of the CNN model. Generally, the input of the CNN needs to be normalized to between -0.5 and 0.5.
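As an illustration of this preprocessing, the Python sketch below builds one normalized RSS grayscale image from an S × M sample matrix. The -110 dBm threshold, the 10 × 10 shape, and the normalization to [-0.5, 0.5] follow the text; the function name, the min-max normalization formula, and the demo values are assumptions of ours.

import numpy as np

def rss_to_grayscale(samples, floor_dbm=-110.0):
    """Turn an S x M matrix of RSS readings (dBm) into a normalized grayscale image."""
    img = np.asarray(samples, dtype=np.float32)       # here S = M = 10, so the image is 10 x 10
    img = np.where(img < floor_dbm, 0.0, img)         # readings below -110 dBm are represented by 0
    # scale the image into [-0.5, 0.5] before feeding it to the CNN
    img = (img - img.min()) / (img.max() - img.min() + 1e-9) - 0.5
    return img

# Example: ten samples of ten APs with readings around -70 dBm
demo = rss_to_grayscale(-70.0 + 5.0 * np.random.randn(10, 10))
print(demo.shape, float(demo.min()), float(demo.max()))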

3.2 CNN Offline Training

The CNN model comprises convolution layers, pooling layers, and fully connected layers. By modifying the quantity, parameters, and combination of these three parts, different CNN models are formed to solve practical problems. Among CNN models, the four most commonly used are LeNet5, AlexNet, VGG, and ResNet. Since the input RSS grayscale image in this paper is small, the LeNet5 [19] network model is used. The CNN model used in this paper is shown in Figure 3, and the detailed parameters of each layer are described in Section IV. The activation function provides nonlinear factors for the network; after it is added, the neural network gains hierarchical nonlinear mapping ability. The Rectified Linear Unit (ReLU) is used as the activation function, which can be expressed as:
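In its standard form, the ReLU activation is:

f(x) = \max(0, x)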

Figure 3. The architecture of the LeNet5 CNN model.

Figure 4. Several cases of K value selection.

In order to reduce over-fitting during training, 50% dropout is applied after the first fully connected layer to improve the network structure [20]. As shown in the pooling layer of Figure 3, there are a total of 64 × 3 × 3 local features after the second pooling layer. This layer, which contains 576 hidden nodes, is linked to 256 hidden nodes through the first fully connected layer. The final output layer has N nodes, so the output is an N-dimensional vector corresponding to the labels of the N reference points. According to the output vector and the preset label, the cross-entropy loss function is constructed, and the Adam optimizer [21] is used to update the training weights and convolution kernel parameters of the model.
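As an illustration, a minimal PyTorch sketch of such a LeNet5-style network is given below. The 64 feature maps of size 3 × 3 (576 values), the 576-to-256 fully connected layer, the 50% dropout, the ReLU activations, the N-way output, the cross-entropy loss, and the Adam optimizer follow the description above; the kernel sizes, paddings, the 32 channels of the first convolution, the ceil-mode pooling, and all names are assumptions chosen so that the stated feature-map sizes work out.

import torch
import torch.nn as nn

class RssLeNet5(nn.Module):
    """LeNet5-style CNN for 1 x 10 x 10 RSS grayscale images (layer sizes partly assumed)."""
    def __init__(self, num_labels):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),   # 10x10 -> 10x10 (channel count assumed)
            nn.ReLU(),
            nn.MaxPool2d(2),                               # 10x10 -> 5x5
            nn.Conv2d(32, 64, kernel_size=3, padding=1),   # 5x5 -> 5x5, 64 feature maps
            nn.ReLU(),
            nn.MaxPool2d(2, ceil_mode=True),               # 5x5 -> 3x3, i.e. 64 x 3 x 3 = 576 features
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 3 * 3, 256),                    # 576 -> 256 hidden nodes (first FC layer)
            nn.ReLU(),
            nn.Dropout(p=0.5),                             # 50% dropout after the first FC layer
            nn.Linear(256, num_labels),                    # N output nodes, one per label
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = RssLeNet5(num_labels=50)
criterion = nn.CrossEntropyLoss()                          # cross-entropy loss on the label index
optimizer = torch.optim.Adam(model.parameters(), lr=0.05)  # Adam optimizer; learning rate from Section IV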

Training the CNN model finds the nonlinear mapping relationship between a test point and its reference label. The network parameters are saved when the network is stable, i.e., when the loss function falls below the threshold or the model reaches the preset number of iterations. If training reaches the number of iterations but still fails to achieve satisfactory model accuracy, a new training set can be used to continue training the model.

3.3 Jenks Natural Breaks Algorithm

3.3.1 The Rationality of Using JNBA to Select K

Generally, a fixed value of K is used for all test users [12, 13]. This strategy is not feasible in all situations. Several examples of K value selection are shown in Figure 4.

In theory, if the classification result of the CNN model is accurate enough, an appropriate K value can greatly improve the positioning accuracy. For example, in Figure 4(a), the optimal choice is to select only the largest reference point in the CNN output. If a fixed K is chosen, the structural relationship information among the CNN outputs is wasted.

Assume that the best value of K is two, as shown in Figure 4(b). The two reference points closest to the target user will be used for position estimation. This reflects the difference, or weak similarity, between these two reference points and the other reference points. The most intuitive manifestation of this difference in the output of the CNN is the numerical result. The CNN returns a numerical result corresponding to each label, i.e., to each reference point in this paper. The larger the numerical result, the greater the probability that the user belongs to the area where the reference point is located. Therefore, the reference points whose output values are much larger than those of the other points need to be focused on. This is essentially a classification problem, in which data in the same category have strong similarities.

JNBA is a classification algorithm that can be used to classify the output of the CNN in order to obtain an appropriate K value and the related reference points.

3.3.2 The Principle of JNBA

In any sequence, there are some natural (non-man-made) turning points and breakpoints, which can be used to divide the research object into groups with similar characteristics [22]. The Jenks natural breaks algorithm uses the clustering idea to classify data according to the principle of minimum intra-class variance and maximum inter-class variance. This algorithm can divide the N-dimensional output of the CNN into c categories, and the Goodness of Variance Fit (GVF) is used to evaluate the classification effect, which is:
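In its standard form, and consistent with the symbols defined below, the GVF can be written as:

GVF = \frac{SDAM - SDCM}{SDAM}, \quad SDAM = \sum_{i=1}^{N} (n_i - u)^2, \quad SDCM = \sum_{j=1}^{c} \sum_{i=1}^{N_j} (n_{j,i} - u_j)^2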

where n_i, u, and N denote the i-th data in the vector, the mean, and the length of the original data, respectively. After dividing the data into c categories, n_{j,i}, u_j, and N_j denote the i-th data in the j-th category, the mean of the j-th category, and the data length of the j-th category, respectively. SDAM and SDCM denote the "sum of squared deviations from the array mean" and the "sum of squared deviations from the class means", respectively. The GVF is used to judge the classification effect: the higher the GVF value, the better the classification effect, which indicates that the data in the same category have more similar characteristics. Generally, GVF > 0.7 is considered an acceptable classification result.

A simple example illustrates how JNBA works. Consider an ascending sequence P = (4, 5, 9, 10) that needs to be divided into two categories. The mean is 7, and SDAM = (4-7)² + (5-7)² + (9-7)² + (10-7)² = 26. There are three possible two-class partitions of this sequence: (4), (5, 9, 10); (4, 5), (9, 10); and (4, 5, 9), (10). For (4), (5, 9, 10), SDCM = (4-4)² + (5-8)² + (9-8)² + (10-8)² = 0 + 9 + 1 + 4 = 14. The SDCM of the other cases can be calculated in the same way. All the classifications are shown in Table 1.

Table 1. JNBA classification results.

Algorithm 1. Self-adaptive choice of K based on JNBA.
Input: P from the CNN, P ∈ R^(1×N)
Output: P_out ∈ R^(1×K), K ∈ {1, 2, 3, 4}, index
1: P_out ← P, GVF = 0;
2: while length(P_out) > 5 or GVF < 0.7 do
3:   P_a, P_b ← P_out divided by JNBA; GVF ← new GVF calculated by JNBA;
4:   if max(P_out) ∈ P_a then
5:     P_max ← P_a;
6:   else
7:     P_max ← P_b;
8:   end if
9:   P_out ← P_max;
10: end while
11: index ← the indices of P_out in the original N-dimensional vector

It can be seen that dividing P into P1 = (4, 5) and P2 = (9, 10) is the best classification.
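As a quick check of this example, the short Python snippet below (helper names are ours) computes SDAM, SDCM, and GVF for each of the three candidate partitions of P = (4, 5, 9, 10):

import numpy as np

def gvf(groups):
    """Goodness of Variance Fit for a partition of one-dimensional data into classes."""
    data = np.concatenate(groups)
    sdam = np.sum((data - data.mean()) ** 2)                    # squared deviations from the array mean
    sdcm = sum(np.sum((g - g.mean()) ** 2) for g in groups)     # squared deviations from the class means
    return (sdam - sdcm) / sdam

P = np.array([4.0, 5.0, 9.0, 10.0])
for s in (1, 2, 3):                                             # the three possible single break points
    print(P[:s], P[s:], round(gvf([P[:s], P[s:]]), 3))
# The partition (4, 5) | (9, 10) has SDCM = 1 and the largest GVF (about 0.96).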

3.4 Online Localization

3.4.1 Self-Adaptive Choice of K

This paper adopts a method of multiple iterations. The data are divided into two categories each time and then filtered according to two constraints: the length of the classified data and the GVF. The category with the larger data values is reclassified until the constraints are met. The details are shown in Algorithm 1. In particular, it can be seen from Figure 4 that K should not exceed four.
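A compact Python sketch of Algorithm 1 follows, reusing the gvf helper defined above. The stopping rule, the step of keeping the class that contains the maximum, and the returned P_out, K, and index follow the algorithm; the exhaustive search over a single break point, the extra length guard, and all names are our own.

import numpy as np

def jenks_two_classes(values):
    """Split sorted 1-D data into two classes at the break point with the largest GVF."""
    order = np.sort(values)
    best = max(range(1, len(order)), key=lambda s: gvf([order[:s], order[s:]]))
    return order[:best], order[best:], gvf([order[:best], order[best:]])

def adaptive_k(p, gvf_threshold=0.7):
    """Algorithm 1: repeatedly keep the class that contains the largest CNN output."""
    p = np.asarray(p, dtype=float)
    p_out, g = p.copy(), 0.0
    # stopping rule from Algorithm 1; the len > 1 guard only protects against degenerate input
    while (len(p_out) > 5 or g < gvf_threshold) and len(p_out) > 1:
        _, high, g = jenks_two_classes(p_out)          # 'high' contains max(p_out)
        p_out = high
    k = len(p_out)                                     # in practice K ends up between 1 and 4
    index = np.argsort(p)[::-1][:k] + 1                # 1-based positions of P_out in the original vector
    return p_out[::-1], k, index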

An example in Table 2 is used to illustrate Algorithm 1. The CNN output for fifty reference points is P=(-1.7,0.1,-6.6,-3.0,-2.6,-7.2,-4.6,-2.2,-1.2,-1.3,5.6,-4.1,-3.9,-0.3,-2.0,-6.9,2.2,5.4,2.4,-0.4,6.3,-7.2,-2.3,1.7,-1.5,-10.1,2.8,7.2,9.4,2.1,3.2,-2.4,-3.3,3.4,-0.7,-6.2,-5.0,8.4,11.7,5.8,5.9,-1.1,1.2,3.3,4.2,-4.2,-4.9,0.7,1.4,1.7).

Table 2. Working example of Algorithm 1.

Table 3. Software and hardware configurations.

The screened result is P_out = (11.7, 9.4, 8.4); K is the length of P_out, so K = 3, and index gives the corresponding index positions of these values in the original vector, i.e., index = (39, 29, 38). Several other examples, in which K equals 1, 2, and 4, can be found in the Appendix.

3.4.2 Location Estimation

After filtering the CNN output, the vector P_out, composed of the K largest outputs, is obtained. It is worth noting that the value of K is not fixed for each target point. If the value of K is one, the maximum value in the CNN output is not similar to the others, and the position of the reference point corresponding to the maximum value is returned as the estimated location of the target user. If the value of K is greater than one, the K largest outputs have some similar characteristics, and the estimated position is calculated according to their respective weights, as follows:
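The weighting expression can take several forms; one plausible choice consistent with this description, in which the weight of each reference point is proportional to its filtered CNN output, is:

\hat{l} = \sum_{i=1}^{K} w_i \, l_i, \qquad w_i = \frac{P_{out,i}}{\sum_{j=1}^{K} P_{out,j}}

Other monotonic functions of P_out (for example, a softmax) would serve the same purpose.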

where l_i is the position of the i-th reference point and P_out is returned by Algorithm 1.

In terms of algorithm complexity, JNBA behaves like a bisection procedure: each iteration divides the data into two categories and keeps only one of them. Thus, the complexity of this part can be expressed as O(log N). Therefore, it does not sacrifice much time cost compared with the traditional CNN-based method.

IV. EXPERIMENTAL STUDY

4.1 Experiment Configurations and Parameters

The software and hardware configurations used in the experiment are shown in Table 3:

Table 4. CNN parameters.

The simulation scene is based on an area of Building 2 of Nanjing University of Information Science and Technology, and an experimental space of 40 m × 20 m is constructed. The experimental area is divided into fifty grids of 4 m × 4 m, and appropriate reference points are selected.

The schematic diagram of the simulation area layout is shown in Figure 5. The distance between reference points is 4 m, so the whole experimental space is divided into fifty labels. In the label area where each reference point is located, 500 data points are randomly generated by the log-normal shadowing model, which can be expressed as:
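A commonly used log-distance form of this model, consistent with the parameters defined below (the 1 m reference distance d_0 and the transmit power P_t are assumptions here), is:

P_r(d) = P_t - 20 \log_{10}\!\left( \frac{4 \pi d_0}{\lambda} \right) - 10\, n \log_{10}\!\left( \frac{d}{d_0} \right) + X_\sigma, \qquad d_0 = 1\,\text{m}, \; X_\sigma \sim \mathcal{N}(0, \sigma^2)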

Figure 5. Schematic diagram of the simulation area layout.

Figure 6. CDF of localization errors when the distance between two reference points is 4 m.

where λ, n, d, and X_σ denote the wavelength, the path loss exponent, the distance between the target user and the AP, and the shadowing term with X_σ ~ N(0, σ²), respectively. In this paper, the carrier frequency is 2.4 GHz, n is 3.5, and σ is 6 dB in the indoor environment. In particular, the signal is attenuated by 15 dB each time it passes through a wall [23]. A total of 25,000 data points are used for CNN training and testing in the offline phase. Five thousand test users are randomly generated in the whole experimental space to verify the position estimation in the online phase. In addition, two groups of additional control experiments were carried out by setting the distance d between reference points to 2 m and 5 m, respectively.
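For reference, the following Python sketch shows how such synthetic RSS data could be generated under these settings. The 2.4 GHz carrier, n = 3.5, σ = 6 dB, and the 15 dB per-wall attenuation follow the text; the transmit power, the AP coordinates, the user position, and the wall count are placeholders.

import numpy as np

C = 3e8                              # speed of light (m/s)
FREQ = 2.4e9                         # carrier frequency (Hz)
LAMBDA = C / FREQ                    # wavelength
N_EXP, SIGMA = 3.5, 6.0              # path-loss exponent and shadowing standard deviation (dB)
WALL_LOSS = 15.0                     # attenuation per wall (dB)
P_TX = 20.0                          # assumed transmit power (dBm)

def synth_rss(user_xy, ap_xy, n_walls=0):
    """One log-normal shadowing RSS reading (dBm) between a user and an AP, 1 m reference distance."""
    d = max(float(np.linalg.norm(np.asarray(user_xy) - np.asarray(ap_xy))), 1.0)
    path_loss = 20 * np.log10(4 * np.pi / LAMBDA) + 10 * N_EXP * np.log10(d)
    shadowing = np.random.normal(0.0, SIGMA)           # X_sigma ~ N(0, sigma^2)
    return P_TX - path_loss - n_walls * WALL_LOSS + shadowing

# One 10 x 10 RSS sample matrix for a user at (3, 5) m with ten arbitrarily placed APs
aps = [(4 * i + 2, 2) for i in range(5)] + [(4 * i + 2, 18) for i in range(5)]
sample = np.array([[synth_rss((3, 5), ap) for ap in aps] for _ in range(10)])
print(sample.shape)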

The position estimation algorithm proposed in this paper is compared with the traditional CNN method. Three different fixed K values are used for the traditional CNN method, namely K = 1, 2, and 3. It is worth noting that when the value of K is one, the position of the reference point is taken as the estimated position of the target user. To ensure fairness, all methods use the same data sets and CNN model. The detailed parameters of the CNN are shown in Table 4:

Table 5. Example 1.

Table 6. Example 2.

Table 7. Example 3.

4.2 Numerical Results

Figure 6 shows the CDF of distance errors when the distance d between two reference points is 4 m. As can be seen from Figure 6, for the proposed method, 30% of the test points have a positioning error of less than 1 m, while the corresponding percentages for the other schemes are 18%, 25%, and 28%, respectively. It can also be seen that the positioning errors of the proposed method are almost all within 4 m, while the percentages of test locations with an error smaller than 4 m for the other schemes are 96%, 98%, and 95%, respectively.

Figure 7. CDF of localization errors when the distance between two reference points is 5 m.

Figure 8. CDF of localization errors when the distance between two reference points is 2 m.

Figure 9. The average localization error of all algorithms with different distances between two reference points.

The results of the two control experiments are shown in Figure 7 and Figure 8, respectively. Figure 7 shows the result when the distance between two reference points is 5 m, which means there are 32 reference points and labels. For the proposed method, 70% of the test locations have an error of less than 2 m, and all errors are less than 5 m. The other schemes have errors greater than 5 m, and the percentages of their test locations with an error smaller than 2 m are 45%, 63%, and 64%, respectively.

Figure 8 shows the results when the distance between two reference points is 2 m. With the proposed method, 50% of the test locations have an error of less than 1 m, while the figures for both K = 2 and K = 3 are 45%, and that for the reference point location (K = 1) is only 38%. Thus, the method proposed in this paper achieves the best performance in this simulation, because more information in the CNN outputs, i.e., the similarity between the data, is considered. In short, a more reasonable K value is selected adaptively to obtain higher positioning accuracy.

Figure 9 presents the average localization error of all schemes for different distances between two adjacent reference points. For all schemes, the smaller the distance between reference points, the smaller the average error. This result is reasonable because a smaller distance between reference points means a more detailed division of the area; the disadvantage is that more work is needed to complete the offline training. The results show that the average error for K = 2 is lower than that for K = 3 in most cases, which also verifies the previous observation that the CNN system performs better when K is two [12]. In addition, the results show that the average error of the proposed method is the lowest in every case under the same model.

The loss function of the CNN model training is shown in Figure 10. The learning rate is set to 0.05. The results show that the CNN model tends to converge around the 20th iteration.

Figure 10.Loss Function.

V. CONCLUSION

A novel CNN-based indoor localization method improved by JNBA was proposed in this paper. The outputs of the CNN, whose number changes dynamically according to the actual situation, are first filtered using JNBA. Then, the filtered points are handled by the WKNN method to obtain the final estimate of the undetermined user's position. Extensive simulations and comparisons verify the effectiveness of the proposed algorithm. Simulation results also show that the improved positioning algorithm has higher localization accuracy than existing schemes.

ACKNOWLEDGEMENT

This work was supported by the National Natural Science Foundation of China (NSFC) under Grants 62001238 and 61901075.

APPENDIX

Example 1.P=(0.2,3.5,-2.0,-1.9,-7.3,-9.3,-2.4,3.7,-3.8,-1.5,5.6,2.0,3.7,-0.7,-5.5,-3.7,12.4,19.0,6.7,6.2,4.1,0.8,1.5,-7.1,-10.3,-7.2,9.9,10.6,8.3,2.4,-0.4,-2.8,-8.4,-8.1,-10.9,-2.7,2.6,1.5,-0.4,3.1,-1.5,-8.9,-7.5,-6.1,-2.9,-2.5,-6.0,-3.9,1.9,5.3).

Now, P_out = (19.0), K is the length of P_out (K = 1), and index = (18).

Example 2. P=(-4.3,4.1,-1.3,-3.6,-10.2,-14.4,-8.1,-4.1,-10.8,-7.4,5.1,4.1,5.2,1.1,-2.5,-4.8,14.0,14.2,3.2,4.3,5.4,-1.8,0.5,-5.9,-7.4,-2.8,18.8,14.6,8.0,1.9,-0.6,-7.2,-11.0,-6.9,-8.2,3.1,8.9,6.9,1.4,7.0,-1.8,-10.0,-8.8,-5.7,3.2,1.8,-3.0,-3.8,-1.0,5.5).

Now, P_out = (18.8, 14.6, 14.2, 14.0), K is the length of P_out (K = 4), and index = (27, 28, 18, 17).

Example 3.P=(3.2,3.9,5.3,-1.7,-5.1,-1.3,2.1,5.6,1.0,-0.7,-1.1,4.0,7.6,5.7,-0.5,-0.2,3.0,1.9,0.4,-0.9,-4.0,-1.3,8.7,3.7,-1.9,1.4,1.8,-2.0,-3.7,-1.4,-5.6,-5.0,0.6,-0.4,-0.6,0.1,-2.4,-4.2,-4.3,-0.4,-3.4,-3.3,2.6,5.8,2.9,-0.4,-1.9,-2.0,0.4,4.3).

Now, P_out = (8.7, 7.6), K is the length of P_out (K = 2), and index = (23, 13).