Enhancing energy efficiency in cloud scaling: A DRL-based approach incorporating cooling power

Bibliographic Details
Published in: SUSTAINABLE ENERGY TECHNOLOGIES AND ASSESSMENTS
Main Authors: Jawaddi, Siti Nuraishah Agos; Ismail, Azlan; Shafian, Shafidah
Format: Article
Language: English
Published: ELSEVIER 2023
Subjects: Science & Technology - Other Topics; Energy & Fuels
Online Access: https://www-webofscience-com.uitm.idm.oclc.org/wos/woscc/full-record/WOS:001124397600001
author Jawaddi, Siti Nuraishah Agos; Ismail, Azlan; Shafian, Shafidah
spellingShingle Jawaddi, Siti Nuraishah Agos; Ismail, Azlan; Shafian, Shafidah
Enhancing energy efficiency in cloud scaling: A DRL-based approach incorporating cooling power
Science & Technology - Other Topics; Energy & Fuels
author_facet Jawaddi, Siti Nuraishah Agos; Ismail, Azlan; Shafian, Shafidah
author_sort Jawaddi
spelling Jawaddi, Siti Nuraishah Agos; Ismail, Azlan; Shafian, Shafidah
Enhancing energy efficiency in cloud scaling: A DRL-based approach incorporating cooling power
SUSTAINABLE ENERGY TECHNOLOGIES AND ASSESSMENTS
English
Article
The rapid growth of cloud computing significantly boosts energy usage, driven mainly by CPU operations and cooling. While cloud scaling efficiently allocates resources for changing workloads, current energy-driven methods often prioritize energy metrics combined with throughput, execution time, or SLA compliance, neglecting cooling power's influence on energy consumption. To bridge this gap, we propose a deep reinforcement learning (DRL)-based autoscaler that considers cooling power as a critical factor for decision-making. Our approach employs DRL to dynamically adjust cloud resources, aiming to maximize energy efficiency and meet performance objectives. DRL, unlike RL, uses neural networks to handle the extensive state-action space in cloud scaling, overcoming the challenge of limited memory capacity for storing Q-values. In this study, we evaluate the performance of our proposed solution through a simulation-based experiment. We compare the performance of the proposed DRL-based autoscalers against an RL-based autoscaler. Our findings indicate that the DDQN-based autoscaler consistently outperforms other algorithms by maintaining optimal Power Usage Effectiveness (PUE) levels and improving task execution speed during high workloads. In contrast, the DQN-based autoscaler excels at sustaining optimal PUE levels during lower task loads, with a faster convergence rate at a scaling factor of 2 than at a scaling factor of 1.
ELSEVIER
2213-1388
2213-1396
2023
60

10.1016/j.seta.2023.103508
Science & Technology - Other Topics; Energy & Fuels

WOS:001124397600001
https://www-webofscience-com.uitm.idm.oclc.org/wos/woscc/full-record/WOS:001124397600001
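
Note: the abstract's key metric, Power Usage Effectiveness (PUE), is the standard ratio of total facility power to the power drawn by IT equipment alone; an ideal data center approaches PUE = 1, and cooling overhead is what typically pushes it above 1. A worked example with illustrative numbers (not taken from the article):

\[
\mathrm{PUE} \;=\; \frac{P_{\text{facility}}}{P_{\text{IT}}} \;=\; \frac{P_{\text{IT}} + P_{\text{cooling}} + P_{\text{other}}}{P_{\text{IT}}}, \qquad \text{e.g. } \frac{400 + 90 + 10\ \text{kW}}{400\ \text{kW}} = 1.25
\]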
title Enhancing energy efficiency in cloud scaling: A DRL-based approach incorporating cooling power
title_short Enhancing energy efficiency in cloud scaling: A DRL-based approach incorporating cooling power
title_full Enhancing energy efficiency in cloud scaling: A DRL-based approach incorporating cooling power
title_fullStr Enhancing energy efficiency in cloud scaling: A DRL-based approach incorporating cooling power
title_full_unstemmed Enhancing energy efficiency in cloud scaling: A DRL-based approach incorporating cooling power
title_sort Enhancing energy efficiency in cloud scaling: A DRL-based approach incorporating cooling power
container_title SUSTAINABLE ENERGY TECHNOLOGIES AND ASSESSMENTS
language English
format Article
description The rapid growth of cloud computing significantly boosts energy usage, driven mainly by CPU operations and cooling. While cloud scaling efficiently allocates resources for changing workloads, current energy-driven methods often prioritize energy metrics combined with throughput, execution time, or SLA compliance, neglecting cooling power's influence on energy consumption. To bridge this gap, we propose a deep reinforcement learning (DRL)-based autoscaler that considers cooling power as a critical factor for decision-making. Our approach employs DRL to dynamically adjust cloud resources, aiming to maximize energy efficiency and meet performance objectives. DRL, unlike RL, uses neural networks to handle the extensive state-action space in cloud scaling, overcoming the challenge of limited memory capacity for storing Q-values. In this study, we evaluate the performance of our proposed solution through a simulation-based experiment. We compare the performance of the proposed DRL-based autoscalers against an RL-based autoscaler. Our findings indicate that the DDQN-based autoscaler consistently outperforms other algorithms by maintaining optimal Power Usage Effectiveness (PUE) levels and improving task execution speed during high workloads. In contrast, the DQN-based autoscaler excels at sustaining optimal PUE levels during lower task loads, with a faster convergence rate at a scaling factor of 2 than at a scaling factor of 1.
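
Note: the description states that DRL swaps the tabular Q-value store for a neural network, which is what lets the autoscaler cope with a state-action space too large to enumerate. Below is a minimal sketch of that general DQN idea with a cooling-aware reward, assuming PyTorch; the state features, action set, reward weights, and pue_target are all invented for illustration and this is not the authors' implementation.

# Minimal DQN-style scaling decision (illustrative sketch, not the paper's code).
import random
import torch
import torch.nn as nn

N_STATE = 4    # assumed features: cpu_util, queue_len, num_vms, current PUE
N_ACTIONS = 3  # assumed actions: scale out, scale in, no-op

# A small neural network stands in for the Q-table: states are fed through
# the network instead of being enumerated, avoiding tabular memory blow-up.
q_net = nn.Sequential(nn.Linear(N_STATE, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))

def reward(pue: float, exec_time: float, pue_target: float = 1.2) -> float:
    # Penalize deviation from a target PUE and slow task execution;
    # the 0.1 weight and the 1.2 target are placeholder assumptions.
    return -abs(pue - pue_target) - 0.1 * exec_time

def choose_action(state: torch.Tensor, epsilon: float = 0.1) -> int:
    # Epsilon-greedy policy: explore occasionally, otherwise act on Q-values.
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(state).argmax())

state = torch.tensor([0.75, 12.0, 8.0, 1.35])  # toy observation
print(choose_action(state), reward(pue=1.35, exec_time=2.0))

A full DQN would also keep a replay buffer and a periodically synced target network; the "double" variant (DDQN) additionally uses the online network to select actions and the target network to evaluate them, which reduces Q-value overestimation. The decision step above is the core of how a learned Q-function drives scaling.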
publisher ELSEVIER
issn 2213-1388
2213-1396
publishDate 2023
container_volume 60
container_issue
doi_str_mv 10.1016/j.seta.2023.103508
topic Science & Technology - Other Topics; Energy & Fuels
topic_facet Science & Technology - Other Topics; Energy & Fuels
accesstype
id WOS:001124397600001
url https://www-webofscience-com.uitm.idm.oclc.org/wos/woscc/full-record/WOS:001124397600001
record_format wos
collection Web of Science (WoS)