Hyperparameter Tuning of Deep Learning Models in Keras


Mohamad Zaim Awang Pon
Krishna Prakash K K

Abstract

Hyperparameter tuning, or optimization, is one of the fundamental ways to improve the performance of machine learning models. A hyperparameter is a parameter set outside the training loop that controls or adjusts how the model learns. To generalise across diverse data patterns, the same machine learning model may require different constraints, weights, or learning rates; such settings are called hyperparameters. They are typically tuned by trial and error to ensure that the model can solve the machine learning task optimally. This paper focuses on the science of hyperparameter tuning using selected tools, with experimental values and results for each experiment. We also document four metrics used to analyse the hyperparameter tuning results and benchmark the outcomes.
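The trial-and-error search described above can be sketched as a simple random search over a hyperparameter space. This is a minimal, self-contained illustration of the concept only, not the API of Keras Tuner or AiSara Tuner; the `train_and_score` function is a hypothetical stand-in that scores a toy objective instead of fitting a real model.

```python
import random

def train_and_score(learning_rate, units):
    # Hypothetical stand-in for training a model and returning its
    # validation score; here a toy surface peaking near lr=0.01, units=64.
    return 1.0 - abs(learning_rate - 0.01) * 10 - abs(units - 64) / 1000

def random_search(n_trials=50, seed=0):
    # Repeatedly sample hyperparameters, score each trial, keep the best.
    rng = random.Random(seed)
    best_score, best_params = None, None
    for _ in range(n_trials):
        params = {
            # Learning rates are commonly sampled on a log scale.
            "learning_rate": 10 ** rng.uniform(-4, -1),
            # Layer width drawn from a small discrete grid.
            "units": rng.choice([16, 32, 64, 128]),
        }
        score = train_and_score(**params)
        if best_score is None or score > best_score:
            best_score, best_params = score, params
    return best_score, best_params

best_score, best_params = random_search()
print(best_score, best_params)
```

Dedicated tuners such as those benchmarked in this paper automate exactly this loop, replacing blind sampling with more informed search strategies and handling the bookkeeping of trials.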


The article captures the experimental results of two tools commonly used for deep learning models, namely Keras Tuner and AiSara Tuner. All relevant experimental code is also available to readers in the authors' GitHub repository. The metrics used to benchmark the results are accuracy, search time, cost and complexity, and explainability. The results indicate that the overall performance of AiSara Tuner is superior to Keras Tuner on the search time, cost and complexity, and explainability metrics.

Article Details

How to Cite
Mohamad Zaim Awang Pon, & Krishna Prakash K K. (2021). Hyperparameter Tuning of Deep learning Models in Keras. Sparklinglight Transactions on Artificial Intelligence and Quantum Computing (STAIQC), 1(1), 36–40. https://doi.org/10.55011/staiqc.2021.1104