Principal Component Analysis on Convolutional Neural Network Using Transfer Learning Method for Image Classification of Cifar-10 Dataset
DOI:
https://doi.org/10.26594/register.v10i2.3517
Keywords:
Cifar-10, Convolutional Neural Network, DenseNet, Principal Component Analysis, Transfer Learning
Abstract
The current era is defined by an overwhelming abundance of information, including multimedia data such as audio, images, and videos. With such an enormous amount of image data available, accurately and efficiently selecting the required images poses a significant challenge. Image classification has therefore emerged as a viable solution for organizing and managing large volumes of image data, mitigating the problem of cluttered image collections. One of the most popular algorithms for image classification is the Convolutional Neural Network (CNN), a type of artificial neural network designed to process grid-like data such as images; it uses convolutional layers to automatically detect local features and reduces the complexity of the network structure and its parameters through local receptive fields, weight sharing, and pooling operations. Nonetheless, CNNs face several challenges, such as gradient diffusion, the need for large training datasets, and slow training. To overcome these issues, Transfer Learning has been widely adopted in CNN-based image classification, and Principal Component Analysis (PCA), a technique that reduces data dimensionality by identifying the principal components that account for most of the variance in the data, has been employed to accelerate training. This study evaluated a PCA-based CNN architecture combined with the Transfer Learning method on the Cifar-10 dataset. The results show that the PCA-based CNN architecture achieved the highest accuracy, with a testing accuracy of 0.8982 (89.82%).
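For readers who want a concrete starting point, the sketch below shows one plausible way to wire together the three ingredients named in the abstract: a DenseNet backbone pretrained on ImageNet used as a frozen transfer-learning feature extractor, PCA to reduce the dimensionality of the extracted features, and a lightweight classifier trained on the reduced representation of Cifar-10. This is an illustrative assumption about the pipeline, not the authors' exact implementation; the 95% explained-variance threshold and the logistic-regression head are placeholder choices.

```python
# Illustrative sketch only: pretrained DenseNet features + PCA reduction + a simple
# classifier on CIFAR-10. The paper's exact architecture and hyperparameters may differ.
from tensorflow import keras
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# CIFAR-10: 50,000 training and 10,000 test images of shape 32x32x3
(x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data()

# Transfer learning: DenseNet121 pretrained on ImageNet, frozen, used as a feature extractor
base = keras.applications.DenseNet121(weights="imagenet", include_top=False,
                                      pooling="avg", input_shape=(32, 32, 3))
base.trainable = False
preprocess = keras.applications.densenet.preprocess_input

feat_train = base.predict(preprocess(x_train.astype("float32")), batch_size=256)
feat_test = base.predict(preprocess(x_test.astype("float32")), batch_size=256)

# PCA: keep the principal components that explain ~95% of the variance of the features
pca = PCA(n_components=0.95).fit(feat_train)
z_train, z_test = pca.transform(feat_train), pca.transform(feat_test)

# Lightweight classifier trained on the PCA-reduced features
clf = LogisticRegression(max_iter=1000).fit(z_train, y_train.ravel())
print("Test accuracy:", accuracy_score(y_test.ravel(), clf.predict(z_test)))
```

Because the backbone is frozen and the classifier only sees the PCA-reduced features, training the final stage is far cheaper than fine-tuning the full network, which is consistent with the training-acceleration effect the abstract attributes to PCA.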
License
Copyright (c) 2024 M. Al Haris, Muhammad Dzeaulfath, Rochdi Wasono
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Please find the rights and licenses in Register: Jurnal Ilmiah Teknologi Sistem Informasi. By submitting the article/manuscript, the author(s) agree to this policy. No specific document sign-off is required.
1. License
The non-commercial use of the article will be governed by the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License as currently displayed by Creative Commons.
2. Author(s)' Warranties
The author warrants that the article is original, written by the stated author(s), has not been published before, contains no unlawful statements, does not infringe the rights of others, is subject to copyright vested exclusively in the author and free of any third-party rights, and that any necessary written permissions to quote from other sources have been obtained by the author(s).
3. User/Public Rights
Register's spirit is to disseminate published articles as freely as possible. Under the Creative Commons license, Register permits users to copy, distribute, display, and perform the work for non-commercial purposes only. Users must also attribute the authors and Register when distributing works from the journal and in other publication media. Unless otherwise stated, the articles become publicly available as soon as they are published.
4. Rights of Authors
Authors retain all their rights to the published works, including (but not limited to) the following rights:
Copyright and other proprietary rights relating to the article, such as patent rights,
The right to use the substance of the article in own future works, including lectures and books,
The right to reproduce the article for own purposes,
The right to self-archive the article (please read our deposit policy),
The right to enter into separate, additional contractual arrangements for the non-exclusive distribution of the article's published version (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this journal (Register: Jurnal Ilmiah Teknologi Sistem Informasi).
5. Co-Authorship
If the article was jointly prepared by more than one author, the author submitting the manuscript warrants that he/she has been authorized by all co-authors to agree to this copyright and license notice (agreement) on their behalf, and agrees to inform his/her co-authors of the terms of this policy. Register will not be held liable for anything that may arise from the author(s)' internal disputes. Register will only communicate with the corresponding author.
6. Royalties
As Register is an open-access journal that disseminates articles free of charge under the Creative Commons license terms mentioned above, the author(s) are aware that Register entitles them to no royalties or other fees.
7. Miscellaneous
Register will publish the article (or have it published) in the journal if the article's editorial process is successfully completed. Register's editors may modify the article to a style of punctuation, spelling, capitalization, referencing, and usage that they deem appropriate. The author acknowledges that the article may be published so that it will be publicly accessible, and such access will be free of charge for readers, as mentioned in point 3.