Energy efficient task scheduling based on deep reinforcement learning in cloud environment: A specialized review

The expanding scale of cloud data centers and the diversification of user services have led to an increase in energy consumption and greenhouse gas emissions, resulting in long-term detrimental effects on the environment. To address this issue, scheduling techniques that reduce energy usage have become a hot topic in cloud computing and cluster management. The Deep Reinforcement Learning (DRL) approach, which combines the advantages of Deep Learning and Reinforcement Learning, has shown promise in resolving scheduling problems in cloud computing. However, reviews of the literature on task scheduling that employ DRL techniques for reducing energy consumption are limited. In this paper, we survey and analyze energy consumption models used for scheduling goals, provide an overview of the DRL algorithms used in the literature, and quantitatively compare the model differences of Markov Decision Process elements. We also summarize the experimental platforms, datasets, and neural network structures used in the DRL algorithm. Finally, we analyze the research gap in DRL-based task scheduling and discuss existing challenges as well as future directions from various aspects. This paper contributes to the correlation perspective on the task scheduling problem with the DRL approach and provides a reference for in-depth research on the direction of DRL-based task scheduling research. Our findings suggest that DRL-based scheduling techniques can significantly reduce energy consumption in cloud data centers, making them a promising area for further investigation. © 2023 Elsevier B.V.


Bibliographic Details
Published in: Future Generation Computer Systems
Main Author: Hou H.; Agos Jawaddi S.N.; Ismail A.
Format: Review
Language: English
Published: Elsevier B.V. 2024
Online Access: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85174552639&doi=10.1016%2fj.future.2023.10.002&partnerID=40&md5=aa7bde25c26ff1790a01e5aa5322ef8f
id 2-s2.0-85174552639
spelling 2-s2.0-85174552639
Hou H.; Agos Jawaddi S.N.; Ismail A.
Energy efficient task scheduling based on deep reinforcement learning in cloud environment: A specialized review
2024
Future Generation Computer Systems
151

10.1016/j.future.2023.10.002
https://www.scopus.com/inward/record.uri?eid=2-s2.0-85174552639&doi=10.1016%2fj.future.2023.10.002&partnerID=40&md5=aa7bde25c26ff1790a01e5aa5322ef8f
The expanding scale of cloud data centers and the diversification of user services have led to an increase in energy consumption and greenhouse gas emissions, resulting in long-term detrimental effects on the environment. To address this issue, scheduling techniques that reduce energy usage have become a hot topic in cloud computing and cluster management. The Deep Reinforcement Learning (DRL) approach, which combines the advantages of Deep Learning and Reinforcement Learning, has shown promise in resolving scheduling problems in cloud computing. However, reviews of the literature on task scheduling that employ DRL techniques for reducing energy consumption are limited. In this paper, we survey and analyze energy consumption models used for scheduling goals, provide an overview of the DRL algorithms used in the literature, and quantitatively compare the model differences of Markov Decision Process elements. We also summarize the experimental platforms, datasets, and neural network structures used in the DRL algorithm. Finally, we analyze the research gap in DRL-based task scheduling and discuss existing challenges as well as future directions from various aspects. This paper contributes to the correlation perspective on the task scheduling problem with the DRL approach and provides a reference for in-depth research on the direction of DRL-based task scheduling research. Our findings suggest that DRL-based scheduling techniques can significantly reduce energy consumption in cloud data centers, making them a promising area for further investigation. © 2023 Elsevier B.V.
Elsevier B.V.
0167739X
English
Review

author Hou H.; Agos Jawaddi S.N.; Ismail A.
title Energy efficient task scheduling based on deep reinforcement learning in cloud environment: A specialized review
publishDate 2024
container_title Future Generation Computer Systems
container_volume 151
container_issue
doi_str_mv 10.1016/j.future.2023.10.002
url https://www.scopus.com/inward/record.uri?eid=2-s2.0-85174552639&doi=10.1016%2fj.future.2023.10.002&partnerID=40&md5=aa7bde25c26ff1790a01e5aa5322ef8f
description The expanding scale of cloud data centers and the diversification of user services have led to an increase in energy consumption and greenhouse gas emissions, resulting in long-term detrimental effects on the environment. To address this issue, scheduling techniques that reduce energy usage have become a hot topic in cloud computing and cluster management. The Deep Reinforcement Learning (DRL) approach, which combines the advantages of Deep Learning and Reinforcement Learning, has shown promise in resolving scheduling problems in cloud computing. However, reviews of the literature on task scheduling that employ DRL techniques for reducing energy consumption are limited. In this paper, we survey and analyze energy consumption models used for scheduling goals, provide an overview of the DRL algorithms used in the literature, and quantitatively compare the model differences of Markov Decision Process elements. We also summarize the experimental platforms, datasets, and neural network structures used in the DRL algorithm. Finally, we analyze the research gap in DRL-based task scheduling and discuss existing challenges as well as future directions from various aspects. This paper contributes to the correlation perspective on the task scheduling problem with the DRL approach and provides a reference for in-depth research on the direction of DRL-based task scheduling research. Our findings suggest that DRL-based scheduling techniques can significantly reduce energy consumption in cloud data centers, making them a promising area for further investigation. © 2023 Elsevier B.V.
publisher Elsevier B.V.
issn 0167739X
language English
format Review
accesstype
record_format scopus
collection Scopus
_version_ 1809677677450231808