Summary: This research examines the efficacy of random search (RS) in hyperparameter tuning, comparing its performance with two baseline methods: manual search and grid search. Our analysis spans three deep learning (DL) architectures, namely the multilayer perceptron (MLP), convolutional neural network (CNN), and AlexNet, implemented on two prominent benchmark datasets: Modified National Institute of Standards and Technology (MNIST) and Canadian Institute for Advanced Research-10 (CIFAR-10). The evaluation adopts a multi-objective framework that navigates the trade-offs among conflicting performance metrics, including accuracy, F1-score, and model parameter size. The primary objective of this framework is to improve understanding of how these performance metrics interact and influence one another, since in real-world scenarios DL models often need to strike a balance among conflicting objectives. This research adds to the growing body of knowledge on hyperparameter tuning for DL models and serves as a reference for practitioners seeking to optimize their DL architectures. The results of our analysis provide insight into the balancing act required during hyperparameter fine-tuning and contribute to the ongoing advancement of best practices for optimizing DL models. © 2024 Universitas Ahmad Dahlan. All rights reserved.
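For readers unfamiliar with the method being evaluated, the snippet below is a minimal illustrative sketch, not the paper's code: it runs random search over MLP hyperparameters on scikit-learn's small digits dataset (a stand-in for MNIST) and records the three conflicting objectives named above. The search ranges, trial budget, and Pareto-filtering step are assumptions added for illustration of a multi-objective view.

```python
# Illustrative sketch only: random search over MLP hyperparameters,
# recording accuracy, macro F1-score, and model parameter count.
import random

from sklearn.datasets import load_digits
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

rng = random.Random(0)
trials = []
for _ in range(10):  # trial budget is an assumed illustration value
    # Random search: sample each hyperparameter independently
    # from its (assumed) search range.
    config = {
        "hidden_layer_sizes": (rng.choice([16, 32, 64, 128]),),
        "learning_rate_init": 10 ** rng.uniform(-4, -1),
        "alpha": 10 ** rng.uniform(-6, -2),
    }
    model = MLPClassifier(max_iter=300, random_state=0, **config)
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    # Parameter size = all weight matrices plus all bias vectors.
    n_params = (sum(w.size for w in model.coefs_)
                + sum(b.size for b in model.intercepts_))
    trials.append({
        "config": config,
        "accuracy": accuracy_score(y_test, pred),
        "f1_macro": f1_score(y_test, pred, average="macro"),
        "n_params": n_params,
    })

# One way to treat the objectives jointly: keep the Pareto-optimal
# trials instead of a single "best" one. Trial a dominates trial b if
# it is at least as good on every objective and strictly better on one.
def dominates(a, b):
    at_least_as_good = (a["accuracy"] >= b["accuracy"]
                        and a["f1_macro"] >= b["f1_macro"]
                        and a["n_params"] <= b["n_params"])
    strictly_better = (a["accuracy"] > b["accuracy"]
                       or a["f1_macro"] > b["f1_macro"]
                       or a["n_params"] < b["n_params"])
    return at_least_as_good and strictly_better

pareto = [t for t in trials if not any(dominates(o, t) for o in trials)]
for t in pareto:
    print(t["config"], t["accuracy"], t["f1_macro"], t["n_params"])
```

The Pareto filter makes the trade-off concrete: a smaller model may survive alongside a more accurate one, and the final choice between them is left to the practitioner's priorities.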