The apple is one of the most popular fruits for public consumption. People can distinguish many apple varieties by their colors and shapes, such as the Braeburn apple, whose skin color varies from orange to red; the Pink Lady apple, which is red with a pinkish blush; and the Crimson Snow apple, which has dark red skin.
Recently, computers have become able to recognize them automatically using digital image processing techniques such as Convolutional Neural Networks (CNNs). In this paper, a CNN-based classification model of apple types is developed using 1,856 apple images from three classes drawn from the Fruits-360 dataset on the Kaggle website, and its robustness is then examined. Two types of testing were carried out in this study: five scenarios of training/testing data splits and five scenarios of robustness to noise. An examination based on 5-fold cross-validation shows that the CNN remains robust when the training portion is reduced to 50%, achieving a high accuracy of 99.97% on the remaining 50% testing set, which is better than previous models that use VGG16, Faster R-CNN, and Tanh. Reducing the training portion to 40% and 30% lowers the accuracy to 95.97% and 95.29%, respectively. Adding low-level noise of 10% to the testing images decreases the accuracy only slightly, to 99.17%, whereas high-level noise of 50% drastically drops the accuracy to 63.93%.
Keywords—apple classification, convolutional neural network
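To make the two evaluations concrete, the following minimal Python sketch (not the authors' code) mimics them on placeholder data: shrinking the training portion, then injecting noise into the test images at increasing levels. The stand-in classifier, the random data, and the salt-and-pepper noise model are assumptions for illustration only; the paper's CNN architecture and exact noise type are not specified here.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier  # stand-in for the paper's CNN

rng = np.random.default_rng(0)

# Hypothetical stand-in for the 1,856 Fruits-360 apple images (flattened 32x32 RGB).
X = rng.random((1856, 32 * 32 * 3)).astype(np.float32)
y = rng.integers(0, 3, size=1856)  # three apple classes

def add_salt_pepper(images, level, rng):
    """Corrupt a fraction `level` of pixel values with 0 or 1 (assumed noise model)."""
    noisy = images.copy()
    mask = rng.random(images.shape) < level
    noisy[mask] = rng.integers(0, 2, size=int(mask.sum())).astype(np.float32)
    return noisy

# Scenario type 1: shrink the training portion (e.g. 50%, 40%, 30%).
for train_frac in (0.5, 0.4, 0.3):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=train_frac, stratify=y, random_state=0)
    clf = KNeighborsClassifier().fit(X_tr, y_tr)
    print(f"train={train_frac:.0%}  acc={clf.score(X_te, y_te):.4f}")

# Scenario type 2: fix a 50/50 split, then corrupt the test images at rising noise levels.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.5, stratify=y, random_state=0)
clf = KNeighborsClassifier().fit(X_tr, y_tr)
for level in (0.1, 0.3, 0.5):
    acc = clf.score(add_salt_pepper(X_te, level, rng), y_te)
    print(f"noise={level:.0%}  acc={acc:.4f}")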