Haibo LIN Yuandong LU Rongcheng DING Yufeng XIU
Abstract In order to achieve accurate classification of apples, a multi-feature fusion classification method based on image processing and an improved SVM was proposed in this paper. The method consisted of four parts: image preprocessing, background segmentation, feature extraction, and multi-feature fusion classification with the improved SVM. Firstly, the homomorphic filtering algorithm was used to improve the quality of apple images. Secondly, the images were converted to HLS space, the background was segmented by the Otsu algorithm, morphological processing was employed to remove fruit stem and surface defect areas, and apple contours were extracted with the Canny algorithm. Then, apples' size, shape, color, defect and texture features were extracted. Finally, the cross-validation method was used to optimize the penalty factor in SVM, a multi-feature fusion classification model was established, and the weight of each index was calculated by the Fisher criterion. In this study, 146 apple samples were selected for training and 61 apple samples were selected for testing. The test results showed that the accuracy of the classification method proposed in this paper was 96.72%, which can provide a reference for automatic apple classification.
Key words Apple classification; Image processing; Improved SVM; Multi-feature fusion
Received: June 12, 2022 Accepted: August 13, 2022
Supported by Natural Science Foundation of Shandong Province (ZR2021MF096); Shandong Agricultural Machinery Equipment R&D Innovation Planning Project (2018YF009).
Haibo LIN (1976-), male, P. R. China, lecturer, devoted to research about image processing and pattern recognition.
*Corresponding author. E-mail: linhaibo@qut.edu.cn.
China is the largest apple producer in the world, and its apple output has been increasing year by year. However, due to uneven apple quality, its exports account for a low proportion of global apple export volume[1]. At present, the external quality grading of apples in China relies mainly on manual work, which is highly subjective. Some producers rely on mechanical graders based on size or color, which cannot meet comprehensive grading requirements and easily cause mechanical damage to apples[5]. Realizing online monitoring and classification of apple quality[2-4] is therefore of great significance for improving the output value of apples in China.
In recent years, machine vision has gradually penetrated into the agricultural field and is mainly used in the detection of agricultural products[6]. Many scholars have applied it to apple classification. Machine vision performs identification and classification by extracting features such as size[7-9], shape[11-12], color[13-17], defect[18-22], and texture[23-25] of apples, and certain results have been achieved. However, classifying apples based on a single feature remains one-sided. For this reason, some scholars have studied combining multiple features to achieve apple classification. For example, Jana et al.[26] extracted gray-level co-occurrence matrix texture features and statistical color features from segmented images and performed training and prediction with support vector machines (SVMs). Lei et al.[27] extracted the color features of apples through histogram transformation, fused them with local and global texture features, and used SVMs to identify apple varieties. Song et al.[28] combined three types of texture features (gray-level co-occurrence matrix, fractal, and spatial autocorrelation) with spectral features, respectively, and used SVMs to extract apple orchards. Fan et al.[29] combined color and fruit diameter for apple classification and detection, achieving an accuracy rate of 91.6%. Bao et al.[30] used an improved artificial neural network algorithm to classify apples according to their color, shape and quality, achieving an accuracy rate of 88.9%. Zhang et al.[31] classified apples according to their ratios of red color, defects, and fruit diameters. Ren et al.[32] used the size, color, defect degree and roundness features of apples for classification. Wang et al.[33] classified Qinguan apples according to five aspects, namely size, fruit shape, quality, color, and defect, and the accuracy rate reached 97%. Li et al.[34] used decision-level fusion of a decision tree and an improved support vector machine for classification. Li et al.[35] performed decision-level fusion of four apple characteristics (size, shape, color and defect) using D-S evidence theory, and achieved comprehensive apple classification with an accuracy rate of 92.5%. Bhargava et al.[36] used principal component analysis to select statistical, texture, geometric, discrete wavelet transform, gradient histogram and texture energy features from the feature space, and used SVM for classification, obtaining higher accuracy. Yu et al.[37] extracted the maximum cross-sectional average diameter, roundness, ratio of red area and defect area of apples, and achieved apple classification through a weighted K-means clustering algorithm, with an accuracy rate higher than 96%. In summary, apple classification using multiple features is more accurate and reliable than classification using a single feature. To this end, in this study, four images of each apple in different orientations (one top image and three side images) were acquired to extract five features (size, shape, color, defect, and texture), and an improved SVM combined with multi-feature fusion was used to achieve accurate apple classification. The process is shown in Fig. 1.
In order to fully describe the external features of apples, eight indexes (largest transverse diameter, roundness, ratio of red color, defect area percentage, contrast, energy, entropy, and correlation) covering the five features of size, shape, color, defect and texture were extracted using image processing methods. In order to avoid the over-learning or under-learning caused by an improperly set penalty factor in SVM, the cross-validation method was adopted to optimize the penalty factor, and a multi-feature fusion classification model was established. The method can provide a reference for automatic apple classification.
Image Acquisition and Preprocessing
Gala apples were randomly selected and photographed with a Manta G046C camera under laboratory lighting. A black background was used, which was convenient for image processing. The camera lens was perpendicular to the desktop on which the apple was placed, at a distance of 25 cm from the desktop. The collected images had a resolution of 4 624×3 472 and were saved in JPEG format. In order to accurately extract apple features, four images (one top image and three side images) in different orientations were collected for each apple.
Because of uneven ambient lighting and slight background reflections during shooting, the homomorphic filtering algorithm was first used to improve image quality by compressing the brightness range and enhancing the contrast.
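As an illustration, the sketch below applies homomorphic filtering to a grayscale image with NumPy and OpenCV. The filter parameters (cutoff d0, gains gamma_l and gamma_h, constant c) are assumed values for illustration and are not taken from the paper; in practice the filter would typically be applied to the luminance channel of the color image.

```python
import cv2
import numpy as np

def homomorphic_filter(gray, d0=30.0, gamma_l=0.5, gamma_h=1.5, c=1.0):
    """Compress the illumination (low-frequency) component and boost the
    reflectance (high-frequency) component to even out lighting and
    enhance contrast. Parameter values are illustrative."""
    rows, cols = gray.shape
    log_img = np.log1p(gray.astype(np.float64))        # log transform
    spectrum = np.fft.fftshift(np.fft.fft2(log_img))   # centered spectrum
    u = np.arange(rows) - rows / 2.0
    v = np.arange(cols) - cols / 2.0
    d2 = u[:, None] ** 2 + v[None, :] ** 2             # squared distance from center
    h = (gamma_h - gamma_l) * (1.0 - np.exp(-c * d2 / d0 ** 2)) + gamma_l
    filtered = np.real(np.fft.ifft2(np.fft.ifftshift(h * spectrum)))
    out = np.expm1(filtered)                           # undo the log transform
    return cv2.normalize(out, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```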
Background Segmentation
Converting images to a suitable color space is especially important for background segmentation. Since chromaticity is less affected by lighting conditions, the collected apple images were converted from the RGB color space to the HLS color space. The conversion relationship between the two color spaces is shown in equations (1) to (3).
$H = \begin{cases} \cos^{-1}\dfrac{0.5[(R-G)+(R-B)]}{\sqrt{(R-G)^2+(R-B)(G-B)}}, & G \geq B \\ 2\pi - \cos^{-1}\dfrac{0.5[(R-G)+(R-B)]}{\sqrt{(R-G)^2+(R-B)(G-B)}}, & G < B \end{cases}$ (1)
$S = 1 - \dfrac{3}{R+G+B}\min(R, G, B)$ (2)
$L = \dfrac{1}{3}(R+G+B)$ (3)
The color images were then converted to grayscale images (Fig. 2). Comparing the grayscale images obtained from the different components of the HLS color space showed that the apples were clearest, and had a higher contrast with the background, in the grayscale images obtained from the S component. Therefore, the S component was chosen for the grayscale conversion.
The background was then segmented using Otsu's method to obtain a binarized image (Fig. 3a). To eliminate the effects of fruit stems and possible defects on feature extraction, the images were morphologically processed. Specifically, an 11×11 rectangular structuring element was used to erode the images, and dilation was then performed to remove the fruit stem (Fig. 3b). Hole filling was used to remove the black holes formed by defects on the apple surface (Fig. 3c). Finally, a relatively clear apple edge contour was detected with the Canny algorithm (Fig. 3d).
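A minimal OpenCV sketch of this segmentation pipeline is given below, assuming `bgr` is a preprocessed color image; the Canny thresholds are illustrative values rather than those used in the paper.

```python
import cv2
import numpy as np

hls = cv2.cvtColor(bgr, cv2.COLOR_BGR2HLS)
s_channel = hls[:, :, 2]                      # S component as the grayscale image

# Otsu thresholding separates the apple from the black background
_, binary = cv2.threshold(s_channel, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Erosion followed by dilation (opening) with an 11x11 rectangular element removes the stem
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (11, 11))
opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)

# Hole filling: flood-fill the background from a corner, invert, and merge
flood = opened.copy()
ff_mask = np.zeros((opened.shape[0] + 2, opened.shape[1] + 2), np.uint8)
cv2.floodFill(flood, ff_mask, (0, 0), 255)
filled = opened | cv2.bitwise_not(flood)

# Apple edge contour (Canny thresholds are illustrative)
edges = cv2.Canny(filled, 50, 150)
```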
Feature Extraction
Apple size extraction
Fruit size is one of the important external characteristics of apples and is generally expressed by the largest transverse diameter. In this study, the three side views of each apple were used to extract its size feature. The difference between the maximum and minimum abscissa values of the apple outline in each image was taken as the largest transverse diameter in that image, and the average value over the three images was taken as the final largest transverse diameter of the apple. The calculation for each image is shown in equation (4).
$D = \max\{x_1, x_2, \ldots, x_{n-1}, x_n\} - \min\{x_1, x_2, \ldots, x_{n-1}, x_n\}$ (4)
In the equation, D represents the largest transverse diameter of the apple extracted from each image, and $x_i$ represents the abscissa of the $i$th contour point of the apple in that image.
The apple size extracted here was in pixels and needed to be converted to millimeters. First, the largest transverse diameter of each apple was manually measured 3 times and averaged. Then, the actual measured value ($D_{actual}$) was plotted against the image-extracted value ($D$) in a scatter plot (Fig. 4); the two showed an approximately linear relationship, and the fitted linear equation was $D_{actual} = 0.03D + 9.31$.
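The pixel-to-millimeter conversion can be sketched as follows; `contour`, `measured_px`, and `measured_mm` are placeholder names for a cv2 contour and the paired calibration measurements.

```python
import numpy as np

# Equation (4): the largest transverse diameter in pixels is the span of
# the contour's x-coordinates
xs = contour[:, 0, 0]
D = float(xs.max() - xs.min())

# Linear calibration D_actual = a*D + b fitted to paired measurements
# (the paper reports D_actual = 0.03*D + 9.31)
a, b = np.polyfit(measured_px, measured_mm, 1)
D_mm = a * D + b
```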
Apple shape extraction
Fruit shape is an important reference for apple classification. In this study, roundness was used to represent the shape of an apple. The average roundness of the apple’s top and three side views was taken as the roundness of the apple. The roundness ranged from 0 to 1. The closer it was to 1, the more circular the apple outline was and the fuller the apple looked. The calculation method of the roundness of the apple contour in each image is shown in equation (5).
$E = \dfrac{4\pi S}{P^2}$ (5)
In the equation, E, S, and P represent the roundness, area, and perimeter of the apple outline in an image, respectively.
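Equation (5) can be evaluated directly from the segmented apple contour, as in the sketch below; `filled` stands for the binary apple mask from the segmentation step above.

```python
import math
import cv2

contours, _ = cv2.findContours(filled, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
apple = max(contours, key=cv2.contourArea)     # largest external contour
S = cv2.contourArea(apple)                     # enclosed area
P = cv2.arcLength(apple, True)                 # perimeter
E = 4.0 * math.pi * S / (P ** 2)               # roundness, equation (5)
```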
Apple color extraction
Color is one of the external features of apples that people perceive directly, and it can reflect the internal quality of an apple to a certain extent. In this study, the ratio of red color R was used to describe the color characteristics of an apple, which reflects its average ripeness. In the HLS color space, the H component represents the position of the spectral color and ranges from 0° to 360°; the redder an apple's surface, the closer its H value is to 0°. After comparison, the ratio of the area where the H value of the apple surface was less than 12° to the total apple area was taken as the ratio of red color. The ratio of red color was calculated for each of the three side images of each apple, and the average of the three was taken as the final ratio of red color of the apple.
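A sketch of the red-ratio computation is shown below; `bgr` and `filled` are again the preprocessed color image and binary apple mask from the earlier sketches. Note that 8-bit OpenCV images store hue as degrees/2 (range 0-179), so the 12° threshold corresponds to a stored value of 6.

```python
import cv2
import numpy as np

hls = cv2.cvtColor(bgr, cv2.COLOR_BGR2HLS)
hue = hls[:, :, 0]                              # H channel, 0-179 = 0-358 degrees
apple_mask = filled > 0
red_mask = (hue < 6) & apple_mask               # H < 12 degrees within the apple region
R = red_mask.sum() / apple_mask.sum()           # ratio of red color
```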
Apple defect extraction
Defects such as bumps, insect wounds, and rot on the surface of the fruit will affect the quality of apples. Therefore, fruit surface defects are also an important indicator for apple classification. Defect areas are usually darker in the acquired image of an apple. Therefore, after converting a color image to a grayscale image based on the L channel component, the defect area of the apple could be extracted by threshold segmentation. In this study, the defect area ratio (the ratio of the defect area to the apple area, F) was used to measure the defect characteristics of apple, and the average of the three side images of each apple was used as the final defect area proportion of the apple.
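The defect-ratio extraction can be sketched as below, reusing `hls` and `filled` from the earlier sketches; the gray-level threshold of 60 on the L channel is an assumed value for illustration, since the paper does not report the threshold it used.

```python
import cv2

l_channel = hls[:, :, 1]                                 # L (lightness) channel
_, dark = cv2.threshold(l_channel, 60, 255, cv2.THRESH_BINARY_INV)
defect = cv2.bitwise_and(dark, dark, mask=filled)        # keep dark pixels inside the apple
F = cv2.countNonZero(defect) / cv2.countNonZero(filled)  # defect area ratio
```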
Apple texture extraction
The texture features of the fruit surface are also an important index to measure the quality of apple, which can reflect the internal quality of apple to a certain extent. Generally speaking, apples with a clear texture will be of higher quality when apples of the same variety have similar size and color features. In this study, a grayscale co-occurrence matrix of apple images was first calculated, and then, the contrast, energy, entropy, and correlation of the matrix were calculated to describe the texture features of apple.
The contrast (CON) represented the local change of the value in the matrix, which could reflect the clarity of the apple texture. The calculation method is shown in equation (6).
$CON = \sum_{n=0}^{k-1} n^2 \sum_{|i-j|=n} G(i, j)$ (6)
where k=16, and G(i, j) is the grayscale co-occurrence matrix.
The energy (ASM) represented the uniformity of the distribution of values in the matrix, which could reflect the coarseness of the apple texture. The calculation method is shown in equation (7).
$ASM = \sum_{i=1}^{k} \sum_{j=1}^{k} [G(i, j)]^2$ (7)
Entropy (ENT) represented the complexity of the grayscale distribution in the image, which could reflect the complexity of the apple texture. The calculation method is shown in equation (8).
$ENT = -\sum_{i=1}^{k} \sum_{j=1}^{k} G(i, j) \log G(i, j)$ (8)
Correlation (COR) represented the degree of correlation of local gray values in the image, which could reflect the consistency of the apple texture. The calculation method is shown in equation (9).
$COR = \dfrac{\sum_{i=1}^{k} \sum_{j=1}^{k} ij \cdot G(i, j) - u_i u_j}{s_i s_j}$ (9)
where $s_i = \sum_{i=1}^{k} \sum_{j=1}^{k} G(i, j)(i - u_i)^2$ and $u_i = \sum_{i=1}^{k} \sum_{j=1}^{k} i \cdot G(i, j)$; $s_j$ and $u_j$ are defined analogously.
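The four texture indexes can be obtained from a 16-level gray-level co-occurrence matrix, for example with scikit-image (>= 0.19) as sketched below. Averaging over four offset directions is an assumption, since the paper does not state which offsets it used, and entropy is computed manually because graycoprops does not provide it; `gray_u8` stands for an 8-bit grayscale view of the apple region.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_u8, levels=16):
    # Quantize the 8-bit image to k = 16 gray levels
    q = (gray_u8.astype(np.float32) * levels / 256.0).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    con = graycoprops(glcm, 'contrast').mean()      # equation (6)
    asm = graycoprops(glcm, 'ASM').mean()           # equation (7)
    cor = graycoprops(glcm, 'correlation').mean()   # equation (9)
    p = glcm.mean(axis=(2, 3))                      # average matrix over the four offsets
    ent = -np.sum(p * np.log(p + 1e-12))            # equation (8), epsilon avoids log(0)
    return con, asm, ent, cor
```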
Multi-feature Fusion Classification of Apple
SVM algorithm
SVM is a supervised learning method that learns the correspondence between training samples and their known categories in order to predict the categories of new samples. Assuming that a training sample is $(x_i, y_i)$, where $x_i$ is the input vector and $y_i$ is the output (category), SVM constructs an objective function and introduces a nonlinear mapping $\varphi(x)$ to find the optimal separating hyperplane. The objective function is shown in equation (10).
$f(x) = W\varphi(x) + b$ (10)
where W is the weight coefficient, and b is the deviation.
Assuming that the training samples can be fitted linearly within a given precision, the optimization problem shown in equation (11) is solved.
$\min Q = \dfrac{1}{2}\|W\|^2 + C\sum_{i=1}^{n}(\beta_1 + \beta_2)$
$\text{s.t.} \quad y_i - W\varphi(x_i) - b \leq \gamma + \beta_1$
$\quad W\varphi(x_i) + b - y_i \leq \gamma + \beta_2$
$\quad \beta_1, \beta_2 \geq 0$ (11)
where Q is the optimization objective; C is the penalty factor; β1 and β2 are relaxation coefficients; and γ is the precision parameter.
The Lagrangian function was used for the objective function, as shown in equation (12).
$f(x) = \sum_{i,j=1}^{n} (\alpha_i - \alpha_i^*) K(x_i, x_j) + b$ (12)
where $\alpha_i$ and $\alpha_i^*$ are Lagrange multipliers, and $K(x_i, x_j)$ is the kernel function. The SVM algorithm uses the kernel function to map the sample data into a high-dimensional space and finds the optimal classification surface in that space.
Improved SVM algorithm
To address the problem of over-learning or under-learning caused by the penalty factor, the cross-validation method was used to optimize it. The samples were divided into T subsets; in each iteration, one subset was used as the test set and the remaining subsets as the training set. In this way, T accuracy rates were obtained, their average was taken as the cross-validated estimate of the accuracy, and the penalty factor was then calculated as shown in equation (13).
$C = 1 - \dfrac{1}{T}\sum_{i=1}^{T} \theta_i$ (13)
where θi is the correct rate of the ith subset.
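A direct reading of equation (13) can be sketched with scikit-learn as follows. The use of a provisional RBF-kernel SVC to obtain the per-fold accuracies θ_i, the lower bound that keeps C positive, and the placeholder names X_train and y_train (the fused index vectors and manual grades) are assumptions not specified in the paper.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def penalty_from_cv(X, y, T=5):
    # theta_i: per-fold accuracy of a provisional classifier (assumption)
    theta = cross_val_score(SVC(kernel='rbf'), X, y, cv=T)
    # Equation (13): C = 1 - (1/T) * sum(theta_i)
    return 1.0 - theta.mean()

C = max(penalty_from_cv(X_train, y_train), 1e-3)   # SVC requires C > 0
clf = SVC(kernel='rbf', C=C).fit(X_train, y_train)
```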
Multi-feature fusion classification model
Because the external features of apples are diverse, classifying apples according to a single feature easily causes misjudgment. Therefore, in this study, feature fusion was adopted to classify apples. The feature fusion grading function is shown in equation (14).
$f_j = \sum_{i=1}^{U} \alpha_i \eta_i, \quad j = 1, 2, 3, \ldots, n$ (14)
where $f_j$ is the fused feature of the $j$th apple; $U$ is the number of features; $\eta_i$ is the $i$th feature component before fusion; and $\alpha_i$ is the weight of that feature component.
In this study, a total of eight indexes based on the five characteristics of apples were used for classification. Because the indexes had different value ranges, their weights also differed. The eight indexes therefore needed to be fused, with their respective weights calculated by the Fisher criterion. The specific steps were as follows (a simplified sketch follows step ⑦).
① Assume that the number of classes in the apple training sample was A, the number of samples in each class was ζ, and $X_{ij}$ was the $j$th sample of the $i$th class. Accordingly, the total number of samples was $N = \zeta \times A$, the mean of each class was $M_i = \frac{1}{\zeta}\sum_{j=1}^{\zeta} X_{ij}$, and the overall sample mean was $M = \frac{1}{N}\sum_{i=1}^{A}\sum_{j=1}^{\zeta} X_{ij}$.
② The within-class scatter matrix was computed as $S_1 = \sum_{i=1}^{A}\sum_{j=1}^{\zeta}(X_{ij} - M_i)(X_{ij} - M_i)^T$, and the between-class scatter matrix as $S_2 = \sum_{i=1}^{A}\zeta(M_i - M)(M_i - M)^T$. The eigenvectors corresponding to the first $l$ largest eigenvalues of $S_1^{-1}S_2$ were calculated to obtain the Fisher linear discriminant projection matrix $W_f$.
③ The matrix composed of the feature vectors of the eight indexes was projected into the Fisher linear space to obtain $\lambda_{m,i}$ ($m$ is the number of samples in each class, and $i$ is the class index). The Fisher linear criterion function $J(w) = \frac{w^T S_2 w}{w^T S_1 w}$ was established, and the vector $w^*$ maximizing $J(w)$ was the optimal solution vector.
④ The mean within-class distance corresponding to the eight indexes of all samples was calculated as $L_w(n) = \frac{\sum_{i,j,k}\|\lambda_{j,i}(n) - \lambda_{k,i}(n)\|}{w^T S_1 w}$, where $i = 1, \ldots, A$ and $1 \leq k \leq j \leq N$.
⑤ The mean between-class distance corresponding to the eight indexes of all samples was calculated as $L_b(n) = \frac{\sum_{u,v,j,k}\|\lambda_{u,j}(n) - \lambda_{v,k}(n)\|}{A^2 \times \zeta^2}$, where $1 \leq k \leq j \leq A$ and $u, v \in [1, N]$.
⑥ The ratio of the mean between-class distance to the mean within-class distance was calculated as $L(n) = \frac{L_b(n)}{L_w(n)}$.
⑦ The weight value was calculated as $q(n) = \frac{L(n)}{\sum_n L(n)}$.
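As a simplified illustration of this weighting idea, the sketch below computes a per-index weight as the ratio of between-class to within-class scatter, normalized so the weights sum to 1. It omits the projection into Fisher space described in steps ② and ③, so it is an approximation of the procedure rather than a literal implementation.

```python
import numpy as np

def fisher_index_weights(X, y):
    """X: (N, 8) matrix of the eight indexes; y: class labels of the N samples."""
    overall_mean = X.mean(axis=0)
    s_w = np.zeros(X.shape[1])                 # within-class scatter per index
    s_b = np.zeros(X.shape[1])                 # between-class scatter per index
    for c in np.unique(y):
        Xc = X[y == c]
        class_mean = Xc.mean(axis=0)
        s_w += ((Xc - class_mean) ** 2).sum(axis=0)
        s_b += len(Xc) * (class_mean - overall_mean) ** 2
    ratio = s_b / (s_w + 1e-12)                # analogous to L(n) = L_b(n) / L_w(n)
    return ratio / ratio.sum()                 # analogous to q(n) = L(n) / sum L(n)

# weights = fisher_index_weights(X_train, y_train)
# fused = X_train @ weights                    # weighted fusion, cf. equation (14)
```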
Empirical Analysis
Sample selection
First, 200 apples were randomly selected as initial training samples and 100 apples as initial test samples, and five skilled graders were invited to strictly grade all apple samples. According to China's Fresh Apples and Professional Standards for Exporting Fresh Apples, apples were divided into four classes: 1, 2, 3, and 4. When at least four of the graders assigned an apple to the same class, the apple was considered to belong to that class; otherwise, it was regarded as a disputed fruit and removed from the sample. In the end, 146 apples in the initial training sample and 61 in the initial test sample were unambiguously classified, and these clearly classified apple samples were used to verify the classification performance of the method established in this study.
Image background segmentation accuracy evaluation
The Otsu segmentation algorithm and a fixed-threshold segmentation algorithm were each used to segment the image background, and the segmentation accuracy of the Otsu method was evaluated by comparing the segmentation results of the two methods. The calculation methods of the image segmentation accuracy SA and its standard deviation δ are shown in equations (15) and (16).
$SA = \dfrac{A_q \cap A_f}{A_q \cup A_f}$ (15)
$\delta = \sqrt{\dfrac{1}{Y}\sum_{i=1}^{Y}(SA_i - MSA)^2}$ (16)
In the equations, $A_q$ and $A_f$ represent the apple areas segmented by the Otsu algorithm and the fixed-threshold algorithm, respectively, and their intersection and union are shown in Fig. 5; Y is the total number of apple images; $SA_i$ is the segmentation accuracy calculated for the $i$th image; MSA is the mean segmentation accuracy over all images; and δ is the standard deviation, which reflects the stability of the Otsu segmentation, with a smaller δ indicating a better segmentation effect. To ensure the accuracy of the apple area obtained by the fixed-threshold algorithm, the threshold used for segmentation was adjusted repeatedly for each image until the segmented apple area was essentially the same as the actual apple area in the image.
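Equations (15) and (16) amount to an intersection-over-union score and its standard deviation over the image set, as in this sketch; `otsu_masks` and `fixed_masks` are placeholder lists of boolean apple masks produced by the two segmentation methods.

```python
import numpy as np

def segmentation_accuracy(mask_otsu, mask_fixed):
    inter = np.logical_and(mask_otsu, mask_fixed).sum()   # |A_q ∩ A_f|
    union = np.logical_or(mask_otsu, mask_fixed).sum()    # |A_q ∪ A_f|
    return inter / union                                  # equation (15)

sa = np.array([segmentation_accuracy(q, f) for q, f in zip(otsu_masks, fixed_masks)])
delta = np.sqrt(np.mean((sa - sa.mean()) ** 2))           # equation (16)
```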
After calculation, the segmentation accuracy of the Otsu segmentation algorithm used in this study was 98.8% with a standard deviation of 0.042, so it could segment apples from the background well.
Analysis of apple feature extraction results
Fruit size feature
It could be seen from Fig. 6 that the apple contours extracted from the three side images were clear, but the largest transverse diameters obtained from the different orientations differed. The largest transverse diameters extracted from the two orientations in Fig. 6(a) and Fig. 6(b) were very close, while that extracted from the orientation in Fig. 6(c) was nearly 3 mm larger.
In order to evaluate the extraction accuracy of the apple size feature more precisely, a vernier caliper was used to measure the largest transverse diameter of each apple in three directions; the measurement positions are shown in Fig. 7. Each orientation was measured 3 times, and the average of the 9 measurements was taken as the actual largest transverse diameter of each apple. The measured actual values were compared with the values extracted by the proposed method, and the difference between the two was taken as the error of fruit size feature extraction; the smaller the error, the higher the extraction accuracy. The average extraction error over 200 samples was 0.62 mm, indicating high extraction accuracy (although only 146 of the 200 initial training samples were unambiguously classified, all 200 were used for feature extraction so that the accuracy of the extracted feature parameters could be assessed on as many samples as possible).
Fruit shape feature
Fig. 8 shows the shape features extracted from images of an apple in four different orientations. The roundness value extracted from the top image was the largest, at 0.837, while the values extracted from the side images were slightly smaller (0.790-0.816). Averaging gave a roundness of 0.811 for this apple.
Fruit color feature
Fig. 9 shows the color features extracted from the three side views of an apple, where the white areas represent regions of high ripeness. After calculation, the ratios of red color ranged from 0.114 to 0.414, and the average ratio of red color of the apple was 0.228.
Fruit defect feature
Fig. 10 shows the defect features extracted from the three side views of an apple: panel (a) shows a defect area caused by rust, panel (b) a defect area caused by scratches, and panel (c) a defect area caused by pests. After calculation, the proportions of defect area ranged from 0.007 to 0.068, showing that even small defect areas could be extracted.
Fruit texture feature
Fig. 11 shows the texture features of apple samples extracted by the proposed method. Combined with the observed texture distribution on the apple surface, it was found that the larger the contrast value, the clearer the apple texture; the larger the energy value, the coarser the apple texture; the larger the entropy value, the more complex the apple texture; and the higher the correlation value, the stronger the consistency of the apple texture.
Multi-feature fusion classification effect evaluation
Kernel function selection
Fig. 12 shows the accuracy rates obtained when the linear, polynomial, radial basis and sigmoid functions were each used as the kernel function for grading training and testing. The results obtained with the sigmoid kernel were not ideal; the polynomial and radial basis kernels both achieved 100% accuracy in training, but the radial basis kernel was 1.64 percentage points more accurate than the polynomial kernel in testing. Therefore, the radial basis function was chosen as the kernel function in this study. Some misjudgments remained in the classification process, mainly because some apples lay at the boundary between two adjacent classes and could plausibly be assigned to either.
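The kernel comparison can be reproduced with a simple loop such as the one below; X_train, y_train, X_test, and y_test again stand for the fused index vectors and manual grades, and C is the penalty factor from equation (13).

```python
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC

for kernel in ('linear', 'poly', 'rbf', 'sigmoid'):
    clf = SVC(kernel=kernel, C=C).fit(X_train, y_train)
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f'{kernel}: test accuracy = {acc:.4f}')
```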
Classification effect by improved SVM
After the penalty factor was optimized, the classification accuracy for the training samples reached 93.44% with the improved SVM, 2.28% higher than before the improvement. Although the accuracy rates before and after the improvement were close, the improved SVM was more convenient to use because no manual parameter adjustment was required.
Effect of the number of fusion features on the classification effect
In order to verify the effect of the multi-feature fusion classification proposed in this study, the classification results obtained with different numbers of fused features were compared. As shown in Fig. 13, the accuracy achieved with a single feature was generally low, indicating that using a single feature for apple classification is one-sided; as the number of fused features increased, the overall classification accuracy tended to increase, and it was highest when the number of fused features was 5.
Conclusions
① In order to achieve accurate classification of apples, one top image and three side images of each apple were obtained in this study, and image quality was improved with the homomorphic filtering algorithm. The images were converted to the HLS color space and the background was segmented with the Otsu algorithm; the stalk and defect areas were removed by morphological processing, and apple contours were extracted with the Canny algorithm. Finally, eight indexes (largest transverse diameter, roundness, ratio of red color, defect area percentage, contrast, energy, entropy, and correlation) covering the five features of apple size, shape, color, defect, and texture were extracted for multi-feature fusion classification. The extracted feature indexes were rich enough to fully describe the external features of apples.
② In order to avoid the over-learning or under-learning caused by the penalty factor in SVM, the cross-validation method was used to optimize the penalty factor, and a multi-feature fusion classification model was established. The weight of each index was calculated by the Fisher criterion, and the radial basis function was selected as the kernel function, thereby improving the classification effect.
③ 146 and 61 manually classified apples were selected as training and test samples, respectively, to verify the classification performance of the method established in this study. The results showed that the classification accuracy of the proposed method was 96.72%, which is relatively high. This study can provide a reference for automatic apple classification.
References
[1] MENG XN, ZHANG ZH, LI Y, et al. Research status and progress of apple grading[J]. Deciduous Fruits, 2019, 51(6): 24-27. (in Chinese).
[2] CAO YD, QI WY, LI X, et al. Research progress and prospect on non-destructive detection and quality grading technology of apple[J]. Smart Agriculture, 2019, 1(3): 29-45. (in Chinese).
[3] YANG Q. An approach to apple surface feature detection by machine vision[J]. Computers & Electronics in Agriculture, 1994, 11(2/3): 249-264.
[4] NAKANO K. Application of neural networks to the color grading of apples[J]. Computers and Electronics in Agriculture, 1997, 18(2/3): 105-116.
[5] LI M. Research status and development of fruit sorting technology[J]. Journal of Jiangsu University of Technology, 2018, 24(2): 121-124. (in Chinese).
[6] HUANG C, FEI JY. Online apple grading based on decision fusion of image features[J]. Transactions of the Chinese Society of Agricultural Engineering, 2017, 33(1): 285-291. (in Chinese).
[7] LI L, PENG YK, LI YY. Design and experiment on grading system for online non-destructive detection of internal and external quality of apple[J]. Transactions of the Chinese Society of Agricultural Engineering, 2018, 34(9): 267-275. (in Chinese).
[8] ZHENG JY, ZHANG C, LIU G, et al. Apple size grading method based on linear fitting model[J]. Shandong Agricultural Sciences, 2020, 52(12): 118-125. (in Chinese).
[9] WANG YJ. Design of fruit grading packaging system based on machine vision[J]. Packaging Engineering, 2021, 42(3): 235-239. (in Chinese).
[10] CHEN YJ, ZHANG JX, LI W, et al. Grading method of apple by maximum cross-sectional diameter based on computer vision[J]. Transactions of the Chinese Society of Agricultural Engineering, 2012, 28(2): 284-288. (in Chinese).
[11] LI Q, HU JK. Research on apple online classification based on machine vision[J]. Food and Machinery, 2020, 36(8): 123-128, 153. (in Chinese).
[12] MIAO YH, DU Q, SHEN HY. Design of red Fuji apple automatic grading algorithm based on machine vision[J]. Electronic Test, 2019(1): 54-55, 58. (in Chinese).
[13] XIE FY, ZHOU JM, JIANG WW, et al. Study on method of apple grading based on hidden Markov model[J]. Food and Machinery, 2016, 32(7): 29-31, 111. (in Chinese).
[14] YANG XQ, DANG HS. A study on color grading system of apples based on the pixel transformation method[J]. Journal of Agricultural Mechanization Research, 2012, 34(3): 203-205, 241. (in Chinese).
[15] WANG JJ, ZHAO DA, JI W, et al. Apple fruit recognition based on support vector machine using in harvesting robot[J]. Transactions of the Chinese Society of Agricultural Machinery, 2009, 40(1): 148-151, 147. (in Chinese).
[16] SOFU MM, ER O, KAYACAN MC, et al. Design of an automatic apple sorting system using machine vision[J]. Computers and Electronics in Agriculture, 2016(127): 395-405.
[17] HUANG ZL, ZHU QB. Detection of red region of Fuji apple based on RGB color model[J]. Laser & Optoelectronics Progress, 2016, 53(4): 64-70. (in Chinese).
[18] FAN S, LI J, ZHANG Y, et al. On line detection of defective apples using computer vision system combined with deep learning methods[J]. Journal of Food Engineering, 2020, 286(5): 110102.
[19] ISMAIL A, IDRIS MYI, AYUB MN, et al. Investigation of fusion features for apple classification in smart manufacturing[J]. Symmetry, 2019, 11(10): 1194.
[20] ZHANG D, LILLYWHITE KD, LEE DJ, et al. Automated apple stem end and calyx detection using evolution-constructed features[J]. Journal of Food Engineering, 2013, 119(3): 411-418.
[21] QIU GY, PENG GL, TAO D, et al. Detection on surface defect of apples by DT-SVM method[J]. Food and Machinery, 2017, 33(9): 131-135. (in Chinese).
[22] GAO H, MA GF, LIU WJ. Research on a rapid detection of apple defects based on mechanical vision[J]. Food and Machinery, 2020, 36(10): 125-129, 148. (in Chinese).
[23] LI W, KANG QQ, ZHANG JX, et al. Detecting technique for surface texture on apples based on machine vision[J]. Journal of Jilin University: Engineering and Technology Edition, 2008, 38(5): 1110-1113. (in Chinese).
[24] MOALLEM P, SERAJODDIN A, POURGHASSEM H. Computer vision-based apple grading for golden delicious apples based on surface features[J]. Information Processing in Agriculture, 2017, 4(1): 33-40.
[25] KAVDIR I, GUYER DE. Comparison of artificial neural networks and statistical classifiers in apple sorting using textural features[J]. Biosystems Engineering, 2004, 89(3): 331-344.
[26] JANA S, BASAK S, PAREKH R. Automatic fruit recognition from natural images using color and texture features[C]//Devices for Integrated Circuit. IEEE, 2017: 620-624.
[27] LEI H, JIAO ZY, MA JQ, et al. Fast recognition algorithm of apple varieties based on multi feature fusion and SVM[J]. Automation & Information Engineering, 2020, 41(4): 13-17. (in Chinese).
[28] SONG RJ, NING JF, LIU XY, et al. Apple orchard extraction with QuickBird imagery based on texture features and support vector machine[J]. Transactions of the Chinese Society of Agricultural Machinery, 2017, 48(3): 188-197. (in Chinese).
[29] FAN ZZ, LIU Q, CHAI JW, et al. Apple detection and grading based on color and fruit-diameter[J]. Computer Engineering and Science, 2020, 42(9): 1599-1607. (in Chinese).
[30] BAO XA, ZHANG RL, ZHONG LH. Apple grade identification method based on artificial neural network and image processing[J]. Transactions of the Chinese Society of Agricultural Engineering, 2004, 20(3): 109-112. (in Chinese).
[31] ZHANG JJ, CHENG YT, DA XM. Image processing and grading design of apple based on K-means clustering[J]. Computer and Digital Engineering, 2021, 49(8): 1656-1660. (in Chinese).
[32] REN LL, FENG T, ZHAI CL, et al. Research on feature fusion grading of apple size, color, roundness and defect based on MATLAB image processing[J]. Digital Technology and Application, 2021, 39(7): 90-95. (in Chinese).
[33] WANG YY, HUANG X, CHEN H, et al. Research on apple classification algorithm based on homomorphic filtering and improved K-means algorithm[J]. Food and Machinery, 2019, 35(12): 47-51, 112. (in Chinese).
[34] LI XJ, CHENG H. Study on key technologies for apple grading detection based on decision fusion method[J]. Food and Machinery, 2020, 36(12): 136-140. (in Chinese).
[35] LI XF, ZHU WX, HUA XP, et al. Multi-feature decision fusion method based on D-S evidence theory for apple grading[J]. Transactions of the Chinese Society of Agricultural Machinery, 2011, 42(6): 188-192. (in Chinese).
[36] BHARGAVA A, BANSAL A. Classification and grading of multiple varieties of apple fruit[J]. Food Analytical Methods, 2021(14): 1359-1368.
[37] YU Y, VELASTIN SA, YIN F. Automatic grading of apples based on multi-features and weighted K-means clustering algorithm[J/OL]. Information Processing in Agriculture, 2019. https://doi.org/10.1016/j.inpa.2019.11.003.