Energy saving strategy of cloud data computing based on convolutional neural network and policy gradient algorithm.
Journal
PloS one
ISSN: 1932-6203
Abbreviated title: PLoS One
Country: United States
NLM ID: 101285081
Publication information
Publication date: 2022
History:
received: 2022-08-25
accepted: 2022-12-12
entrez: 2022-12-30
pubmed: 2022-12-31
medline: 2023-01-04
Status: epublish
Abstract
Cloud Data Computing (CDC) enables precise energy-saving management of user data centers based on real-time energy-consumption monitoring of Information Technology equipment. This work aims to identify the most suitable energy-saving strategies for safe, intelligent, and visualized energy management. First, the theory of the Convolutional Neural Network (CNN) is discussed, and an intelligent energy-saving model based on the CNN is designed to handle the variable energy consumption, load, and power consumption of the CDC data center. Then, the core idea of the Policy Gradient (PG) algorithm is introduced, and a CDC task-scheduling model based on the PG algorithm is designed to address the uncertainty and volatility of CDC scheduling tasks. Finally, the performance of different neural network models during training is analyzed in terms of the total energy consumption and load optimization of the CDC center, and the PG-based task-scheduling model is simulated to analyze task-scheduling demand. The results demonstrate that the CNN algorithm consumes less energy in the CDC energy-saving model than the Elman algorithm and the ecoCloud algorithm, and reduces the number of virtual machine migrations by 9.30% compared with the Elman algorithm. The Deep Deterministic Policy Gradient (DDPG) algorithm performs best in cloud data center task scheduling, with an average response time of 141, whereas the Deep Q Network algorithm performs poorly. This work shows that Deep Reinforcement Learning (DRL) and neural networks can reduce CDC energy consumption and shorten CDC task completion times, offering a research reference for CDC resource scheduling.
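To make the "core idea of the policy gradient algorithm" concrete, the following is a minimal REINFORCE-style sketch: a softmax policy over actions is updated in the direction of grad log pi(a) scaled by the observed reward. The two-action environment and its reward values are purely illustrative assumptions (standing in for, say, choosing a low-energy host), not the paper's actual CDC task-scheduling model.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(2)   # one preference parameter per action
alpha = 0.1           # learning rate

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def reward(action):
    # Hypothetical rewards: action 0 (e.g. a lower-energy placement) pays more.
    return 1.0 if action == 0 else 0.2

for _ in range(500):
    probs = softmax(theta)
    a = rng.choice(2, p=probs)
    r = reward(a)
    # Softmax policy gradient: grad log pi(a) = onehot(a) - probs
    grad_log_pi = -probs
    grad_log_pi[a] += 1.0
    theta += alpha * r * grad_log_pi

print(softmax(theta))  # probability mass should concentrate on action 0
```

DDPG, which the paper reports as the best-performing scheduler, extends this idea to continuous actions with a deterministic actor and a learned critic, but the update direction is still a policy gradient as above.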
Identifiers
pubmed: 36584089
doi: 10.1371/journal.pone.0279649
pii: PONE-D-22-22823
pmc: PMC9803140
Publication types
Journal Article
Research Support, Non-U.S. Gov't
Languages
eng
Citation subsets
IM
Pagination
e0279649
Copyright information
Copyright: © 2022 Yang et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Conflict of interest statement
The authors have declared that no competing interests exist.