An in-depth performance analysis of the oversampling techniques for high-class imbalanced dataset

Prasetyo Wibowo(1*), Chastine Fatichah(2),

(1) Institut Teknologi Sepuluh Nopember, Surabaya
(2) Institut Teknologi Sepuluh Nopember, Surabaya
(*) Corresponding Author

Abstract


Class imbalance occurs when examples are unevenly distributed between the majority and minority classes, and the degree of imbalance can range from mild to severe. High class imbalance degrades overall classification accuracy because the model tends to predict the majority class for most of the data. Such a model gives biased results, and its predictions for the minority class often carry little weight. Oversampling is one way to deal with high class imbalance, but only a few oversampling techniques are commonly applied to it. This study presents an in-depth performance analysis of oversampling techniques for the high-class-imbalance problem. Oversampling balances the number of examples in each class so that modeling yields unbiased evaluation results. We compared the performance of Random Oversampling (ROS), ADASYN, SMOTE, and Borderline-SMOTE, each combined with the machine learning methods Random Forest, Logistic Regression, and k-Nearest Neighbor (KNN). The test results show that Random Forest with Borderline-SMOTE performs best among all oversampling techniques, with an accuracy of 0.9997, precision of 0.9474, recall of 0.8571, F1-score of 0.9000, ROC-AUC of 0.9388, and PR-AUC of 0.8581.
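To make the core idea concrete: SMOTE generates synthetic minority examples by interpolating between a minority point and one of its k nearest minority-class neighbours. The NumPy sketch below illustrates only that interpolation step; the function and variable names are ours, not from the paper, and real experiments would use a library implementation such as imbalanced-learn.

```python
import numpy as np

def smote_sample(X_min, n_new, k=5, rng=None):
    """Generate n_new synthetic minority samples by interpolating between
    each seed point and one of its k nearest minority neighbours
    (the core idea of SMOTE; a didactic sketch, not library code)."""
    rng = np.random.default_rng(rng)
    n = len(X_min)
    k = min(k, n - 1)
    # pairwise distances within the minority class
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # a point is not its own neighbour
    nn = np.argsort(d, axis=1)[:, :k]           # k nearest neighbours per point
    base = rng.integers(0, n, size=n_new)       # random seed points
    neigh = nn[base, rng.integers(0, k, size=n_new)]
    gap = rng.random((n_new, 1))                # interpolation factor in [0, 1)
    return X_min[base] + gap * (X_min[neigh] - X_min[base])

# toy minority class: 3 points in 2-D
X_min = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
X_new = smote_sample(X_min, n_new=4, rng=0)
print(X_new.shape)  # (4, 2)
```

Borderline-SMOTE applies the same interpolation only to minority points near the class boundary, while ADASYN adaptively generates more synthetic points in regions where the minority class is harder to learn.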


Keywords


classification; imbalanced dataset; oversampling; performance analysis
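A brief illustration of why the study reports precision, recall, F1, ROC-AUC, and PR-AUC alongside accuracy: on highly imbalanced data, a classifier that always predicts the majority class can score near-perfect accuracy while being useless on the minority class. The following sketch (function and variable names are ours, not from the paper) computes the class-wise metrics from confusion-matrix counts.

```python
import numpy as np

def prf1(y_true, y_pred, pos=1):
    """Precision, recall, and F1 for the positive (minority) class,
    computed from confusion-matrix counts."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_pred == pos) & (y_true == pos))  # true positives
    fp = np.sum((y_pred == pos) & (y_true != pos))  # false positives
    fn = np.sum((y_pred != pos) & (y_true == pos))  # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# a 1%-minority toy set: the all-majority classifier reaches 99% accuracy
# yet scores zero on every minority-class metric
y_true = np.array([0] * 99 + [1])
y_pred = np.zeros(100, dtype=int)
print(prf1(y_true, y_pred))  # (0.0, 0.0, 0.0)
```

This is why balancing the classes via oversampling, and then evaluating with minority-sensitive metrics, gives a fairer picture of model quality than accuracy alone.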


References


J. L. Leevy, T. M. Khoshgoftaar, R. A. Bauder and N. Seliya, "A survey on addressing high-class imbalance in big data," J Big Data, vol. 5, no. 42, 2018.

H. He and E. A. Garcia, "Learning from Imbalanced Data," IEEE Transactions on Knowledge and Data Engineering, vol. 21, no. 9, pp. 1263-1284, 2009.

I. Triguero, S. d. Río, V. López, J. Bacardit, J. M. Benítez and F. Herrera, "ROSEFW-RF: The winner algorithm for the ECBDL’14 big data competition: An extremely imbalanced big data bioinformatics problem," Knowledge-Based Systems, vol. 87, pp. 69-79, 2015.

H. Kaur, H. S. Pannu and A. K. Malhi, "A Systematic Review on Imbalanced Data Challenges in Machine Learning: Applications and Solutions," ACM Comput. Surv., vol. 52, no. 4, 2019.

D. J. Dittman, T. M. Khoshgoftaar and A. Napolitano, "The Effect of Data Sampling When Using Random Forest on Imbalanced Bioinformatics Data," in 2015 IEEE International Conference on Information Reuse and Integration, San Francisco, CA, USA, 2015.

I. Indrajani, Y. Heryadi, L. A. Wulandhari and B. S. Abbas, "Recognizing debit card fraud transaction using CHAID and K-nearest neighbor: Indonesian Bank case," in 2016 11th International Conference on Knowledge, Information and Creativity Support Systems (KICSS), Yogyakarta, 2016.

A. G. Pertiwi, N. Bachtiar, R. Kusumaningrum, I. Waspada and A. Wibowo, "Comparison of performance of k-nearest neighbor algorithm using smote and k-nearest neighbor algorithm without smote in diagnosis of diabetes disease in balanced data," Journal of Physics: Conference Series, 2020.

S. Cui, D. Wang, Y. Wang, P.-W. Yu and Y. Jin, "An improved support vector machine-based diabetic readmission prediction," Computer Methods and Programs in Biomedicine, vol. 166, pp. 123-135, 2018.

R. Pruengkarn, K. W. Wong and C. C. Fung, "Imbalanced data classification using complementary fuzzy support vector machine techniques and SMOTE," in 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Banff, AB, 2017.

F. Last, G. Douzas and F. Bacao, "Oversampling for Imbalanced Learning Based on K-Means and SMOTE," Information Sciences, vol. 465, 2018.

J. Zhang, L. Chen and F. Abid, "Prediction of Breast Cancer from Imbalance Respect Using Cluster-Based Undersampling Method," Journal of Healthcare Engineering, vol. 2019, 2019.

N. V. Chawla, K. W. Bowyer, L. O. Hall and W. P. Kegelmeyer, "SMOTE: Synthetic Minority Over-sampling Technique," Journal of Artificial Intelligence Research, vol. 16, pp. 321-357, 2002.

H. Han, W.-Y. Wang and B.-H. Mao, "Borderline-SMOTE: A New Over-Sampling Method in Imbalanced Data Sets Learning," in Advances in Intelligent Computing (ICIC 2005), Berlin, Heidelberg, 2005.

H. He, Y. Bai, E. A. Garcia and S. Li, "ADASYN: Adaptive synthetic sampling approach for imbalanced learning," in 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence), Hong Kong, China, 2008.

P. Cortez, A. Cerdeira, F. Almeida, T. Matos and J. Reis, "Modeling wine preferences by data mining from physicochemical properties," Decision Support Systems, vol. 47, no. 4, pp. 547-553, 2009.

S. Ali, A. Majid, S. G. Javed and M. Sattar, "Can-CSC-GBE: Developing Cost-sensitive Classifier with Gentleboost Ensemble for breast cancer classification using protein amino acids and imbalanced data," Computers in Biology and Medicine, vol. 73, pp. 38-46, 2016.

A. D. Pozzolo, O. Caelen, Y.-A. L. Borgne, S. Waterschoot and G. Bontempi, "Learned lessons in credit card fraud detection from a practitioner perspective," Expert Systems with Applications, vol. 41, no. 10, pp. 4915-4928, 2014.

S. Makki, Z. Assaghir, Y. Taher, R. Haque, M. Hacid and H. Zeineddine, "An Experimental Study With Imbalanced Classification Approaches for Credit Card Fraud Detection," IEEE Access, vol. 7, pp. 93010-93022, 2019.

C. Meng, L. Zhou and B. Liu, "A Case Study in Credit Fraud Detection With SMOTE and XGBoost," Journal of Physics: Conference Series, vol. 1601, 2020.

D. Almhaithawi, A. Jafar and M. Aljnidi, "Example-dependent cost-sensitive credit cards fraud detection using SMOTE and Bayes minimum risk," SN Appl. Sci., vol. 2, no. 1574, 2020.

J. O. Awoyemi, A. O. Adetunmbi and S. A. Oluwadare, "Credit card fraud detection using machine learning techniques: A comparative analysis," in 2017 International Conference on Computing Networking and Informatics (ICCNI), Lagos, Nigeria, 2017.

W. Han, Z. Huang, S. Li and Y. Jia, "Distribution-Sensitive Unbalanced Data Oversampling Method for Medical Diagnosis," J Med Syst, vol. 43, no. 39, 2019.

B. Krawczyk, "Learning from imbalanced data: open challenges and future directions," Prog Artif Intell, vol. 5, pp. 221-232, 2016.

S. Wager and S. Athey, "Estimation and Inference of Heterogeneous Treatment Effects using Random Forests," Journal of the American Statistical Association, vol. 113, no. 523, pp. 1228-1242, 2018.




DOI: https://doi.org/10.26594/register.v7i1.2206




Copyright (c) 2021 Prasetyo Wibowo, Chastine Fatichah

Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

