Enhancing energy efficiency in cloud scaling: A DRL-based approach incorporating cooling power
The rapid growth of cloud computing significantly boosts energy usage, driven mainly by CPU operations and cooling. While cloud scaling efficiently allocates resources for changing workloads, current energy-driven methods often prioritize energy metrics combined with throughput, execution time, or SLA compliance, neglecting cooling power's influence on energy consumption. To bridge this gap, we propose a deep reinforcement learning (DRL)-based autoscaler that considers cooling power as a critical factor for decision-making. Our approach employs DRL to dynamically adjust cloud resources, aiming to maximize energy efficiency and meet performance objectives. DRL, unlike RL, uses neural networks to handle the extensive state–action space in cloud scaling, overcoming the challenge of limited memory capacity for storing Q-values. In this study, we evaluate the performance of our proposed solution through a simulation-based experiment. We compare the performance of the proposed DRL-based autoscalers against an RL-based autoscaler. Our findings indicate that the DDQN-based autoscaler consistently outperforms other algorithms by maintaining optimal Power Usage Effectiveness (PUE) levels and improving task execution speed during high workloads. In contrast, the DQN-based autoscaler excels at sustaining optimal PUE levels during lower task loads, with a faster convergence rate at a scaling factor of 2 compared to scaling factor 1. © 2023 Elsevier Ltd
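The abstract's key technical point is that DRL replaces a tabular Q-value store with a function approximator, so the autoscaler can cope with the large state–action space of cloud scaling. Below is a minimal, hypothetical sketch of that idea: a linear Q-approximator stands in for the paper's neural network, and the state features, reward shape, and the PUE threshold of 1.2 are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of a DQN-style scaling decision with a linear
# Q-approximator. A real DQN would use a deeper network plus a replay
# buffer and target network; DDQN additionally decouples action
# selection from evaluation to reduce Q-value overestimation.
import numpy as np

rng = np.random.default_rng(0)

ACTIONS = [-1, 0, +1]  # scale in, no-op, scale out (VM count delta)

n_features = 3  # assumed state: CPU utilization, queue length, PUE
W = rng.normal(scale=0.01, size=(n_features, len(ACTIONS)))

def q_values(state):
    # Q(s, a) for all actions: state @ W, one column per action.
    return state @ W

def choose_action(state, epsilon=0.1):
    """Epsilon-greedy selection over the scaling actions."""
    if rng.random() < epsilon:
        return int(rng.integers(len(ACTIONS)))
    return int(np.argmax(q_values(state)))

def td_update(state, action, reward, next_state, alpha=0.01, gamma=0.95):
    """One gradient step on the TD error for the linear Q-function."""
    target = reward + gamma * np.max(q_values(next_state))
    error = target - q_values(state)[action]
    W[:, action] += alpha * error * state  # dQ/dW for the chosen column

# One illustrative interaction: the reward penalizes PUE above an
# assumed optimal level of ~1.2 (reward shape is an assumption).
state = np.array([0.85, 40.0, 1.35])   # high load, elevated PUE
action = choose_action(state)
reward = -(state[2] - 1.2)
next_state = np.array([0.70, 25.0, 1.25])
td_update(state, action, reward, next_state)
```

Swapping the linear map for a small neural network (and adding experience replay) is what turns this tabular-free sketch into the DQN/DDQN agents the abstract compares.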
Published in: | Sustainable Energy Technologies and Assessments |
---|---|
Main Author: | Agos Jawaddi S.N.; Ismail A.; Shafian S. |
Format: | Article |
Language: | English |
Published: | Elsevier Ltd, 2023 |
Online Access: | https://www.scopus.com/inward/record.uri?eid=2-s2.0-85174333147&doi=10.1016%2fj.seta.2023.103508&partnerID=40&md5=df4c8fd49f9b19eb7497ba81244c2b07 |
id |
2-s2.0-85174333147 |
---|---|
author |
Agos Jawaddi S.N.; Ismail A.; Shafian S. |
title |
Enhancing energy efficiency in cloud scaling: A DRL-based approach incorporating cooling power |
publishDate |
2023 |
container_title |
Sustainable Energy Technologies and Assessments |
container_volume |
60 |
container_issue |
|
doi_str_mv |
10.1016/j.seta.2023.103508 |
url |
https://www.scopus.com/inward/record.uri?eid=2-s2.0-85174333147&doi=10.1016%2fj.seta.2023.103508&partnerID=40&md5=df4c8fd49f9b19eb7497ba81244c2b07 |
description |
The rapid growth of cloud computing significantly boosts energy usage, driven mainly by CPU operations and cooling. While cloud scaling efficiently allocates resources for changing workloads, current energy-driven methods often prioritize energy metrics combined with throughput, execution time, or SLA compliance, neglecting cooling power's influence on energy consumption. To bridge this gap, we propose a deep reinforcement learning (DRL)-based autoscaler that considers cooling power as a critical factor for decision-making. Our approach employs DRL to dynamically adjust cloud resources, aiming to maximize energy efficiency and meet performance objectives. DRL, unlike RL, uses neural networks to handle the extensive state–action space in cloud scaling, overcoming the challenge of limited memory capacity for storing Q-values. In this study, we evaluate the performance of our proposed solution through a simulation-based experiment. We compare the performance of the proposed DRL-based autoscalers against an RL-based autoscaler. Our findings indicate that the DDQN-based autoscaler consistently outperforms other algorithms by maintaining optimal Power Usage Effectiveness (PUE) levels and improving task execution speed during high workloads. In contrast, the DQN-based autoscaler excels at sustaining optimal PUE levels during lower task loads, with a faster convergence rate at a scaling factor of 2 compared to scaling factor 1. © 2023 Elsevier Ltd |
publisher |
Elsevier Ltd |
issn |
22131388 |
language |
English |
format |
Article |
accesstype |
|
record_format |
scopus |
collection |
Scopus |