Comparison of ensemble hybrid sampling with bagging and boosting machine learning approach for imbalanced data

Bibliographic Details
Published in: Indonesian Journal of Electrical Engineering and Computer Science
Main Authors: Malek N.H.A.; Yaacob W.F.W.; Wah Y.B.; Md Nasir S.A.; Shaadan N.; Indratno S.W.
Format: Article
Language: English
Published: Institute of Advanced Engineering and Science 2023
Online Access: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85142097924&doi=10.11591%2fijeecs.v29.i1.pp598-608&partnerID=40&md5=61804174165e22ee4efed7401972f189
Description
Summary: Training on an imbalanced dataset can cause classifiers to overfit the majority class and increases the possibility of information loss for the minority class. Moreover, accuracy alone may not give a clear picture of a classifier's performance. This paper applied decision tree (DT), support vector machine (SVM), artificial neural network (ANN), K-nearest neighbors (KNN) and naïve Bayes (NB) classifiers, together with the ensemble models random forest (RF) and gradient boosting (GB), which use bagging and boosting respectively, combined with three sampling approaches and seven performance metrics, to investigate the effect of class imbalance on water quality data. Based on the results, the best model was gradient boosting without resampling for almost all metrics except balanced accuracy, sensitivity and area under the curve (AUC), followed by the random forest model without resampling in terms of specificity, precision and AUC. However, in terms of balanced accuracy and sensitivity, the highest performance was achieved by random forest on a randomly under-sampled dataset. Considering each performance metric separately, the results showed that for specificity and precision it is better not to resample the data for either ensemble classifier. Nevertheless, balanced accuracy and sensitivity improved for both ensemble classifiers on all the resampled datasets. © 2023 Institute of Advanced Engineering and Science. All rights reserved.
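To make the experimental setup described in the abstract concrete, the sketch below reproduces the general idea (not the authors' actual pipeline): ensemble classifiers evaluated with and without random under-sampling on an imbalanced binary target, scored with balanced accuracy, sensitivity, specificity, precision and AUC. It assumes scikit-learn and imbalanced-learn, and uses a synthetic dataset as a hypothetical stand-in for the water quality data.

```python
# Sketch: compare bagging (random forest) and boosting (gradient boosting)
# with and without random under-sampling on imbalanced data.
# Synthetic data is a hypothetical stand-in for the paper's water quality dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.metrics import (balanced_accuracy_score, recall_score,
                             precision_score, roc_auc_score)
from sklearn.model_selection import train_test_split
from imblearn.under_sampling import RandomUnderSampler

# Imbalanced binary data: roughly 10% minority class (assumed ratio).
X, y = make_classification(n_samples=2000, n_features=10,
                           weights=[0.9, 0.1], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

models = {"RF": RandomForestClassifier(random_state=42),
          "GB": GradientBoostingClassifier(random_state=42)}
samplers = {"no resampling": None,
            "random under-sampling": RandomUnderSampler(random_state=42)}

for m_name, model in models.items():
    for s_name, sampler in samplers.items():
        # Resample only the training split, if a sampler is given.
        X_fit, y_fit = (X_tr, y_tr) if sampler is None else sampler.fit_resample(X_tr, y_tr)
        model.fit(X_fit, y_fit)
        pred = model.predict(X_te)
        proba = model.predict_proba(X_te)[:, 1]
        print(f"{m_name} + {s_name}: "
              f"bal_acc={balanced_accuracy_score(y_te, pred):.3f} "
              f"sensitivity={recall_score(y_te, pred):.3f} "
              f"specificity={recall_score(y_te, pred, pos_label=0):.3f} "
              f"precision={precision_score(y_te, pred):.3f} "
              f"AUC={roc_auc_score(y_te, proba):.3f}")
```

Resampling is applied to the training split only, so the test-set metrics still reflect the original class distribution; this mirrors the kind of comparison the paper reports, where resampling helps balanced accuracy and sensitivity but not specificity or precision.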
ISSN: 2502-4752
DOI: 10.11591/ijeecs.v29.i1.pp598-608