1. Introduction
1.1. Background and Statistics
In recent years, the exponential growth of data-driven services—enabled by cloud computing and supported by hyper-scale infrastructure—has positioned Cloud Data Centers (CDCs) as an essential component within global information and communication networks. Their central role in powering Artificial Intelligence (AI), big data analytics [1,2,3,4,5,6], e-commerce [7], streaming platforms [8,9,10,11,12,13], and edge computing [14,15,16,17,18] has made them critical to the digital infrastructure across all economic sectors. However, this massive computational footprint comes at a significant energy and environmental cost. While CDCs have become central to digital services, their growing computational load has made them one of the most energy-intensive classes of digital infrastructure [19].
Global energy consumption by CDCs is projected to rise from 200 TWh in 2016 to nearly 2967 TWh by 2030, accounting for a significant share of total electricity use [20]. In the United States, CDCs consumed approximately 4.4% of total electricity in 2023, with projections reaching between 6.7% and 12% by 2028, mainly due to rising AI workloads [21]. Worldwide, electricity demand from CDCs is expected to exceed 857 TWh by 2028, with a Compound Annual Growth Rate (CAGR) of 19.5%, outpacing several major industrial sectors [22]. Within these facilities, Information Technology (IT) equipment accounts for roughly 60% of electricity consumption, while cooling and power delivery systems contribute another 30–40% [23]. This level of consumption places substantial pressure on energy infrastructure and raises critical concerns regarding sustainability, grid stability, and long-term cost efficiency [24].
These issues are further amplified by instabilities in the global energy market. Electricity prices are becoming increasingly volatile, influenced by geopolitical conflict, supply chain constraints, and fuel price fluctuations [25]. Meanwhile, many regions face escalating grid stress as electricity demand from both hyper-scale CDCs and electrified transport grows. Reports from the International Energy Agency (IEA) and the World Bank have noted the growing mismatch between CDC growth and the modernization rate of electric grids, particularly in regions where energy security is already constrained [26,27]. As a result, CDC operators must now address not only internal energy efficiency but also the broader challenge of navigating uncertain and competitive energy markets.
In many high-growth regions, energy availability and grid reliability have become key constraints [28], and CDC expansion is no exception. In Texas, concerns over grid capacity for AI and crypto workloads were publicly raised by industry leaders [29], and Taiwan has temporarily halted the development of large CDCs due to power concerns [30]. These developments reflect a shift toward proactive energy procurement strategies. For example, Microsoft’s long-term Power Purchase Agreement (PPA) with the Three Mile Island nuclear facility exemplifies efforts by major cloud providers to secure a reliable, dedicated energy supply for their infrastructure [31]. Sustainability objectives also influence the way CDCs are planned and managed. Many operators have committed to CO2-neutral targets and 100% Renewable Energy Sources (RESs). As of 2023, over 120 Digital Realty facilities operate on RESs, supported by regional PPAs and clean energy procurement strategies [32]. In addition, thermal re-use systems, such as district heating with server exhaust and aquifer thermal storage, as well as water-efficient cooling technologies, are being deployed to reduce environmental impact [33]. These initiatives signal a broader shift from traditional energy efficiency metrics to integrated sustainability and resilience planning.
Existing metrics such as Power Usage Effectiveness (PUE), Data Center Infrastructure Efficiency (DCiE), and Carbon Usage Effectiveness (CUE) have been widely adopted to evaluate CDC energy performance [34,35]. However, these metrics often provide only a partial view, as they typically exclude software-related inefficiencies, dynamic workloads, and grid-level CO2 intensity. With the increasing adoption of virtualization, containerization, and serverless computing, these limitations are becoming more significant. As a result, new metrics—such as Server Power Usage Effectiveness (SPUE)—have been proposed to address energy use at more granular levels [36]. While promising, these approaches remain under development, and their practical implementation faces challenges in terms of standardization and integration. Therefore, a comprehensive review of energy efficiency metrics is necessary to assess their relevance, gaps, and applicability in modern CDC environments.
1.2. Systematic Review Methodology
The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) process is a structured framework designed to ensure transparency, rigor, and reproducibility in systematic reviews [37], and our PRISMA checklist can be found in the Supplementary Materials. It involves five key stages: keyword selection, identification, screening, eligibility, and inclusion, each essential to synthesizing evidence from relevant studies. The PRISMA framework of this review is illustrated in Figure 1:
The process begins with keyword selection. A group of keywords is chosen to build a comprehensive search strategy centered on carefully selected terms. These keywords are derived from the research question and cover the energy efficiency of CDCs, its metrics, and their definitions.
Then, the identification phase starts, involving the execution of the search strategy across multiple databases, such as IEEE Xplore, Google Scholar, Web of Science (WoS), Springer Nature, MDPI, ACM, Wiley Online Library, and ScienceDirect, to locate potentially relevant studies. Additional sources, such as conference proceedings, were also searched. The goal is to cast a wide net, retrieving all records that might address the research question.
As the third step, during the screening phase, the deduplicated records are evaluated against predefined inclusion and exclusion criteria, typically in two stages: title/abstract screening and full-text review. In the first stage, the reviewers assess titles and abstracts to determine whether the studies are potentially relevant, excluding those that do not meet the criteria. The eligibility phase (Step 4) involves a detailed assessment of the full texts of the studies that passed the screening phase. Finally, the inclusion phase (Step 5) results in the selection of studies that fully meet the eligibility criteria and are included in the systematic review.
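The identification, deduplication, and screening steps above can be sketched as a small filtering pipeline. The record fields and keyword criteria below are illustrative assumptions for demonstration, not the exact criteria used in this review.

```python
# Minimal sketch of the PRISMA flow: identification, deduplication,
# title/abstract screening, and inclusion counting.
from dataclasses import dataclass

@dataclass(frozen=True)
class Record:
    title: str
    abstract: str
    doi: str

INCLUDE_TERMS = ("data center", "energy efficiency")   # illustrative criteria
EXCLUDE_TERMS = ("wireless sensor",)                   # illustrative criteria

def deduplicate(records):
    """Drop duplicate records retrieved from multiple databases (by DOI)."""
    seen, unique = set(), []
    for r in records:
        if r.doi not in seen:
            seen.add(r.doi)
            unique.append(r)
    return unique

def passes_screening(r: Record) -> bool:
    """Title/abstract screening against inclusion and exclusion criteria."""
    text = (r.title + " " + r.abstract).lower()
    return all(t in text for t in INCLUDE_TERMS) and not any(
        t in text for t in EXCLUDE_TERMS
    )

records = [
    Record("Energy efficiency metrics for cloud data centers", "PUE ...", "10.1/a"),
    Record("Energy efficiency metrics for cloud data centers", "PUE ...", "10.1/a"),
    Record("Routing in wireless sensor networks", "energy efficiency ...", "10.1/b"),
]

identified = len(records)
screened = deduplicate(records)
included = [r for r in screened if passes_screening(r)]
print(identified, len(screened), len(included))  # 3 2 1
```

In a real review, the full-text eligibility assessment (Step 4) follows this automated filtering and is performed manually by the reviewers.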
1.3. Literature Review
Over the past ten years, growing interest in the energy efficiency of CDCs is evident from the publications indexed in Scopus (Figure 2) and their publication types (Figure 3). Based on these statistics, journal articles accounted for the largest share of research on the energy efficiency metrics of CDCs (especially in 2024).
A study conducted in [38] used focus groups and interviews with CDC managers to identify split incentives, imperfect information, and reliability trade-offs as barriers to energy efficiency investments. While market failures had limited impact, the high costs of context-specific information and opportunity costs from competing priorities were more significant in slowing adoption. In another study, a two-phase investigation was conducted by [39] into the non-technical barriers to energy efficiency. Phase I found that technical solutions were abundant but that cultural shifts received insufficient attention. Phase II's interviews with 16 CDC experts identified vendor-driven procurement, facility-specific energy metrics, and design convergence due to high-density infrastructure as recurring themes.
A comprehensive review analyzed over 200 CDC power models, organizing them into a hierarchical framework with hardware-centric and software-centric branches [40]. The hardware-centric models spanned digital circuit, component, server, data center, and systems-of-systems levels, while the software-centric models focused on operating systems, virtual machines, and applications. Efforts to establish energy efficiency metrics for wireless networks, such as radiated base station power normalized to area and traffic load (e.g., Watts/Erlang/km² or Watts/(bits/sec)/km²), reflect industry goals to create green networks. However, a study by [41] argued that these metrics are unsuitable for wireless systems, as they conflict with effective system operation and mislead network design. On the other hand, a study by [42] enhanced energy monitoring in telecommunication central offices by introducing novel metrics alongside the heating degree days parameter: the parameter of central utilization, index of cluster reliability, and reliability index.
The energy intensity metric—the ratio of energy consumed to data volume—is used to assess energy efficiency in communication networks and CDCs. This metric was criticized in [43] for its application, noting that weak correlations between data and energy use at short time scales lead to misleading results. Additionally, the importance of energy savings in CDCs was considered by [44], who proposed a model for measuring CDCs’ components to organize metrics and enhance corporate communication. The strengths and weaknesses of standard metrics were evaluated. The Power Usage Effectiveness (PUE) metric is one of the most used energy efficiency factors. The analysis conducted in [45] critiqued this metric’s limitations, noting that its instantaneous measurement of electrical energy use encouraged reporting of minimum values, thus reflecting only the lowest possible energy consumption.
As another metric class, cooling and thermal management of CDCs has come under focus. Several works, such as [46,47,48,49,50], have evaluated and reviewed cooling and thermal strategies, as well as metrics, in CDCs. Moreover, several reviews [51,52,53] have investigated the energy efficiency metrics of CDCs more generally. In addition, by highlighting efficiency trends and recalibrating estimates, ref. [54] presented policymakers and analysts with a refined perspective on data center energy use, its drivers, and near-term efficiency potential.
Three adaptive models—Gradient Descent-Based Regression (GDR), Maximize Correlation Percentage (MCP), and Bandwidth-Aware Selection Policy (BW)—were designed in [55] to reduce energy consumption and Service Level Agreement (SLA) violations in CDCs. These models use energy-aware techniques for detecting overloaded hosts and selecting Virtual Machines (VMs) for migration. As another alternative, the work by [56] presented an enhanced multi-objective task scheduling algorithm, combining deep reinforcement learning with enhanced electric fish optimization for energy-efficient scheduling. Following the importance of this subject, a resource prediction-based VM allocation method was introduced by [57] that reduced energy consumption and enhanced system reliability. It optimized feed-forward Neural Networks (NNs) using a self-adaptive differential evolution algorithm incorporating multi-dimensional learning and global exploration, achieving superior global solution searching compared to traditional gradient descent.
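The general pattern behind such energy-aware consolidation can be sketched as threshold-based overload detection followed by migration-cost-aware VM selection. The threshold and selection rule below are illustrative simplifications, not the specific GDR, MCP, or BW models of [55].

```python
# Sketch of energy-aware consolidation: find overloaded hosts, then pick
# a VM to migrate away. Thresholds and the selection heuristic are
# illustrative assumptions.

def overloaded_hosts(hosts, threshold=0.8):
    """Return hosts whose aggregate CPU utilization exceeds the threshold."""
    return [h for h in hosts if sum(vm["cpu"] for vm in h["vms"]) > threshold]

def select_vm(host):
    """Pick the VM with the smallest RAM footprint per unit of CPU share,
    i.e., the cheapest migration that relieves the most load."""
    return min(host["vms"], key=lambda vm: vm["ram"] / vm["cpu"])

hosts = [
    {"name": "h1", "vms": [{"id": "vm1", "cpu": 0.5, "ram": 4},
                           {"id": "vm2", "cpu": 0.4, "ram": 2}]},
    {"name": "h2", "vms": [{"id": "vm3", "cpu": 0.3, "ram": 8}]},
]

for h in overloaded_hosts(hosts):
    vm = select_vm(h)
    print(f"{h['name']}: migrate {vm['id']}")   # h1: migrate vm2
```

Real schedulers add SLA-violation prediction and a placement step for the migrated VM; this sketch shows only the detection and selection skeleton.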
The same subject was applied to vehicular edge cloud computing systems in [58]. It introduced a load-balancing algorithm that redistributes vehicles across roadside units based on load, computational capacity, and data rate. A robust security mechanism integrates an advanced encryption standard with electrocardiogram signals as encryption keys to secure data transmission. A caching strategy enables edge servers to store completed tasks, reducing latency and energy use. An optimization model minimized energy consumption while meeting latency constraints during computation offloading.
Moreover, the paper by [59] highlighted five key areas to enhance energy efficiency in CDCs: aligning architecture with emerging workloads, provisioning resources for future demands, improving the energy proportionality of machines, enhancing vertical integration across software stacks, and standardizing hardware–software interfaces for technology integration. On the energy-management side, a hybrid policy-based reinforcement learning approach was presented in [60] for adaptive energy management in island group energy systems with constrained energy transmission. The paper introduced an island energy hub model enabling cascade energy utilization to meet island-specific demands and ensure a reliable supply. An energy management model for island groups was developed, accounting for the mismatch between energy demand and resources, in addition to the limited transmission capacity. Given the complexity of modeling, due to high renewable penetration and variable loads, the problem was framed as a model-free reinforcement learning task.
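The model-free setting described above can be illustrated with a minimal tabular Q-learning loop: the agent learns, from reward alone, which supply action suits each demand state. The toy environment (two demand states, two actions, hand-picked rewards) is an illustrative assumption, far simpler than the island energy hub model of [60].

```python
# Toy Q-learning for energy dispatch: no explicit system model is used;
# the policy emerges purely from observed rewards.
import random

random.seed(0)
STATES = ["low_demand", "high_demand"]
ACTIONS = ["battery", "generator"]
# Rewards encode an assumed preference: discharge the battery under low
# demand, run the generator under high demand.
REWARD = {("low_demand", "battery"): 1.0,
          ("low_demand", "generator"): -0.5,
          ("high_demand", "battery"): -1.0,
          ("high_demand", "generator"): 1.0}

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.1     # learning rate, discount, exploration

state = random.choice(STATES)
for _ in range(5000):
    # Epsilon-greedy action selection
    action = (random.choice(ACTIONS) if random.random() < eps
              else max(ACTIONS, key=lambda a: Q[(state, a)]))
    reward = REWARD[(state, action)]
    next_state = random.choice(STATES)  # demand evolves stochastically
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    # Standard Q-learning update
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = next_state

policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
print(policy)
```

The learned greedy policy matches the reward structure, which is the essential property the hierarchical and policy-based variants in [60,61] scale up to much larger state-action spaces.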
Along similar lines, a multi-energy trading market model based on price matching was proposed by [61] to promote collaboration across energy types and improve utilization through user participation. The model supports personalized energy responses while preserving user privacy and autonomy. A joint trading mechanism was developed to handle various energy types and time scales, reducing failures from overlooked transmission processes. Conversion devices are used to boost matching efficiency, and an income mechanism prevents operator bias. An enhanced hierarchical reinforcement learning algorithm is applied to manage large state-action spaces and sparse rewards. Furthermore, the review by [62] overviewed the trends in improving energy efficiency across cloud infrastructure, including servers, networking, management systems, and user software. It highlighted solutions, their benefits, and their trade-offs.
1.4. Existing Gaps and Contribution
In this context, the development of a unified, comprehensive understanding of energy efficiency metrics is critically needed. The current landscape includes a diverse and often fragmented set of methods, each targeting different aspects of energy performance—yet no systematic framework exists to guide their selection or application in line with evolving infrastructure models. As energy consumption continues to rise and sustainability becomes a primary design constraint, the ability to assess and manage CDC efficiency through appropriate metrics is no longer optional—it is foundational to both engineering practice and decision making.
To this end, this work overviews the energy efficiency metrics of CDCs, classifies them into two groups of IT-related and non-IT-related metrics, and investigates them in detail. Next, challenges and limitations are identified, and potential future research directions are presented. Consequently, the contributions of this work are highlighted as follows:
- A complete analysis of the energy efficiency metrics of CDCs;
- Presenting the energy-consuming components of a CDC;
- Describing different centralized and decentralized setups of Uninterruptible Power Supplies (UPSs) and Power Distribution Units (PDUs) in CDCs;
- Providing the challenges, limitations, and associated potential research directions for each metric.
1.5. Paper Structure
The paper is structured as follows: Cloud Data Center energy management concepts and their energy efficiency metrics are presented in Section 2 and Section 3, respectively. Next, real-world case studies; challenges, limitations, and future work; and the conclusion are provided in Section 4, Section 5, and Section 6, respectively.
2. Energy Management in Cloud Data Centers
Cloud Data Centers consume energy primarily for server operation, cooling systems, networking equipment, and power distribution [40,63,64,65,66]. Consequently, effective energy management in CDCs holds considerable importance. The energy supply and energy-consuming components of a typical CDC are shown in Figure 4:
2.1. Servers and Racks
Racks and servers are the central computational engines of a CDC [67]. A server rack typically contains multiple servers stacked in vertical slots, each comprising Central Processing Units (CPUs) [68,69], Graphics Processing Units (GPUs) [70,71,72,73], Random Access Memory (RAM) [74,75,76], storage drives [77,78,79,80], and networking interfaces [81,82,83,84,85,86]. These servers handle user requests, run applications, manage databases, and execute processing tasks. Their power draw is continuous and scales with computing intensity. A high-density server environment leads to significant heat output, necessitating robust cooling solutions. Moreover, the performance-optimized design of modern servers often prioritizes speed over energy efficiency, further escalating their power needs. Thus, the energy consumed by racks and servers is one of the most substantial factors in CDC energy efficiency.
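The relationship between utilization and power draw noted above is commonly approximated by a first-order linear model, P(u) = P_idle + (P_max − P_idle)·u for CPU utilization u in [0, 1]. The wattages below are illustrative assumptions, not vendor figures.

```python
# Widely used linear server power model: an idle floor plus a
# utilization-proportional component.

def server_power(u, p_idle=100.0, p_max=250.0):
    """Power draw in watts at CPU utilization u in [0, 1]."""
    assert 0.0 <= u <= 1.0
    return p_idle + (p_max - p_idle) * u

# Even an idle server draws a large fraction of peak power, which is
# why consolidation (fewer, busier servers) tends to save energy:
print(server_power(0.0))   # 100.0
print(server_power(0.5))   # 175.0
print(server_power(1.0))   # 250.0
```

The idle floor is the "energy proportionality" gap that several of the metrics surveyed later (e.g., PpW, ScE) attempt to expose.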
2.2. Uninterruptible Power Supply (UPS)
Uninterruptible Power Supply is a system that ensures uninterrupted operation during a power failure [87,88]. It provides short-term backup power, using batteries (usually Lithium-Ion Batteries (LIBs) [89,90]), and it conditions the incoming power to protect equipment from surges, drops, and noise. These systems involve energy losses in the process of converting Alternating Current (AC) to Direct Current (DC) (to charge batteries) and DC back to AC (to power the load), known as double conversion [91,92,93,94]. These conversions are essential for high reliability; however, they consume additional power. Moreover, batteries require ongoing charging, and in large-scale CDCs the number of UPS units is large enough to make them one of the most significant IT-related energy consumers.
2.3. Power Distribution Unit (PDU)
Power Distribution Units are an important component within CDCs, distributing power from the UPS or main supply to individual equipment, such as servers and network devices [95,96,97]. They often include transformers, circuit breakers, and monitoring features. Power Distribution Units step down voltage and convert power to appropriate formats, during which energy losses occur. Intelligent PDUs allow for real-time power consumption monitoring and environmental condition tracking, which adds to their power usage. Generally, a CDC's power supply, including UPSs, PDUs, and Renewable Energy Sources (RESs), takes one of two forms [98]:
- Centralized (Figure 5): Electricity from a single UPS is distributed to multiple PDUs, which then channel the power to server racks. To avoid any delay in switching to UPS power, CDCs are equipped with double conversion UPS systems.
- Distributed (Figure 6 and Figure 7): Instead of a centralized UPS, a battery cabinet serves each group of racks, supporting the servers (Figure 6). This approach eliminates double conversion by modifying server power supply units to accept both AC power from the grid and DC power from the battery cabinet. The battery cabinet distributes DC power directly to the servers. In another form of distributed UPS, a battery is integrated into each server in place of a central UPS. This configuration eliminates the AC/DC/AC double conversion, enhancing energy efficiency during normal operation, and it positions AC distribution closer to the IT load before conversion (Figure 7).
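A back-of-the-envelope comparison shows why avoiding double conversion matters: the end-to-end delivery efficiency of a power path is the product of its stage efficiencies. The per-stage figures below are illustrative assumptions, not measured values.

```python
# Compare delivery efficiency of the two UPS topologies described above.

def chain_efficiency(*stages):
    """End-to-end efficiency of a power path: the product of its stages."""
    eff = 1.0
    for s in stages:
        eff *= s
    return eff

# Centralized: grid -> AC/DC rectifier -> DC/AC inverter (double
# conversion) -> PDU transformer -> server PSU.
centralized = chain_efficiency(0.97, 0.96, 0.98, 0.94)

# Distributed: grid -> PDU transformer -> server PSU, with the battery
# path bypassing the AC/DC/AC double conversion during normal operation.
distributed = chain_efficiency(0.98, 0.94)

print(f"centralized: {centralized:.3f}")
print(f"distributed: {distributed:.3f}")
```

Multiplied across megawatts of continuous IT load, even a few percentage points of conversion loss translate into substantial annual energy consumption, which is why distributed topologies are attractive despite their added battery-management complexity.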
2.4. Information Technology (IT) Rooms and Equipment
Information Technology rooms contain not only servers but also the essential components of firewalls [99], switches [100], routers [101,102], load balancers [103], storage systems, and monitoring consoles [104], which handle data routing, traffic control [105,106,107], storage functions, and system diagnostics. These systems run continuously and rely on redundant setups for high availability. Additionally, control rooms with operator workstations, large display walls, and supporting electronics are included in this category. Although each piece of equipment consumes less power than servers, their combined energy demand significantly contributes to the IT-related energy footprint.
2.5. Heating, Ventilation, and Air Conditioning (HVAC)
Heating, Ventilation, and Air Conditioning systems are the largest non-IT energy consumer. They are responsible for maintaining stable temperature and humidity to prevent overheating [108], which can degrade or damage sensitive electronics. Cloud Data Centers use advanced cooling strategies, such as those presented in Table 1. These systems must operate continuously and at scale, especially in high-density environments where thermal loads are extreme. Energy is consumed not only by compressors and fans but also by pumps and sensors. Inefficient HVAC operation leads to higher PUE, a key metric for energy performance in CDCs, which is presented in the next section.
2.6. Facility Security, Lighting, and Offices
Security infrastructure restricts access to CDCs to authorized personnel, utilizing surveillance cameras [191], biometric access systems [192,193], motion sensors [194], alarms, and, occasionally, on-site monitoring rooms. Operating continuously with redundancies, backup power, and significant storage for video footage, these systems, while not major individual energy consumers, add to the facility's baseline power usage. Lighting systems are another notable energy consumer.
Lighting illuminates server rooms, offices, corridors, emergency exits, and outdoor areas in CDCs [195]. Although modern facilities use energy-efficient Light-Emitting Diode (LED) lights and smart controls, such as motion sensors and timers to reduce usage, lighting still contributes significantly to non-IT energy consumption. In large CDCs with multiple shifts or frequent maintenance, lighting’s cumulative energy demand is substantial.
Cloud Data Center offices support staff, including administrators, engineers, facility managers, and support teams, and they feature workstations, printers, telecommunication equipment, and often individual HVAC systems. While their energy use is lower than that of IT and cooling systems, these spaces still contribute to non-IT energy consumption.
Given the energy-consuming components of a CDC described above, several energy efficiency metrics support their efficient operation and management by capturing usage, costs, overall efficiency, and sustainability. These metrics are presented in Section 3.
3. Energy Efficiency Metrics in Cloud Data Centers
Energy efficiency is the ratio of useful work output by a system to the total energy input [196,197,198]. In CDCs, this efficiency reflects the productive work carried out by various subsystems relative to the energy supplied [51]. To this end, we have divided these energy efficiency metrics into two classes:
- IT-related metrics (Table 2 and Table 3): Measurements that evaluate the energy performance of computing and networking components. These metrics assess how effectively IT resources utilize energy to perform computational tasks, focusing on the ratio of computational output to energy consumed, the degree of resource utilization, and the adaptability of power consumption to workload variations.
- Non-IT-related metrics (Table 4 and Table 5): Measurements that evaluate supporting infrastructure such as power distribution, cooling systems, and building facilities. These metrics quantify the proportion of energy used by non-IT systems relative to total energy consumption, with an emphasis on minimizing overhead and improving the efficiency of physical infrastructure and environmental controls.
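Three of the most common facility-level metrics have simple closed forms, sketched below using their standard definitions; the energy and emission figures are illustrative.

```python
# Standard definitions of PUE, DCiE, and CUE.

def pue(total_facility_kwh, it_kwh):
    """Power Usage Effectiveness: total facility energy over IT energy (>= 1, lower is better)."""
    return total_facility_kwh / it_kwh

def dcie(total_facility_kwh, it_kwh):
    """Data Center infrastructure Efficiency: the inverse of PUE, expressed in percent."""
    return 100.0 * it_kwh / total_facility_kwh

def cue(total_co2_kg, it_kwh):
    """Carbon Usage Effectiveness: total CO2 emitted per kWh of IT energy."""
    return total_co2_kg / it_kwh

it, total = 1_000.0, 1_500.0          # kWh over some period, illustrative
print(pue(total, it))                 # 1.5
print(f"{dcie(total, it):.1f}")       # 66.7 (percent)
print(cue(600.0, it))                 # 0.6 kgCO2 per kWh of IT energy
```

Note that PUE and DCiE carry the same information in different forms, while CUE additionally depends on the CO2 intensity of the supplying grid, which is why the correlation analysis below treats them as a cluster.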
According to Table 2 and Table 3, several sets of correlations are created, where improving one metric impacts others positively or negatively. Reducing APC positively correlates with ITEE, meaning lower power usage enhances efficiency of rated performance per power. However, it negatively correlates with PpW, CPE, and ScE, as cutting power may reduce performance or compute output, worsening these metrics that aim for higher values. Conversely, improving CPE, which measures compute output per IT power and targets higher values, positively correlates with PpW and ScE, indicating that better compute efficiency boosts performance and server efficiency, but it negatively correlates with APC, as higher compute output may increase power usage. DWPE shows positive correlations with ITEE and CPE, suggesting higher workload efficiency aligns with better equipment and compute efficiency, but it negatively impacts APC. EWR, where lower is better, negatively correlates with CPE, PpW, and ScE, meaning less energy per task improves these performance metrics, but it positively correlates with APC, as lower energy use may reflect higher power consumption. ITEE positively correlates with PpW and CPE but negatively with EWR, indicating efficient equipment supports performance but it may increase energy per task. OSWE positively correlates with Data Center energy Productivity (DCeP) and Data Center Performance Efficiency (DCPE) but negatively with PUE, suggesting system-wide efficiency improvements conflict with PUE optimization. PpW and ScE share strong positive correlations with each other, CPE, and ITEE, but they negatively correlate with the EWR and APC, reinforcing their alignment with performance efficiency. SPUE positively correlates with PUE but negatively with PpW and ScE, indicating trade-offs in server energy distribution. 
Finally, SWaP positively correlates with Data Center Power Density (DCPD) but negatively with the EWR, showing that optimizing performance per space and power aligns with density. However, it may increase energy per task.
Regarding non-IT-related metrics (Table 4 and Table 5), the Corporate Average Data center Efficiency (CADE) metric, which targets a value of 1, positively correlates with the IT Equipment Utilization (ITEU) and DCiE, indicating that higher IT asset utilization and IT energy efficiency improve asset deployment efficiency. Still, it negatively correlates with PUE, as better asset efficiency often increases total facility power relative to IT power. The Data Center Availability (DCA) metric, also aiming for 1, negatively correlates with PUE Adjusted for Reliability (PUEreliability), suggesting that maximizing uptime may compromise reliability adjustments for power usage. The Data Center energy Productivity (DCeP) metric, targeting 1, positively correlates with DCPE and OSWE, showing that increased useful work per facility energy aligns with output efficiency and system workload efficiency, but it negatively correlates with PUE, as higher work output may elevate total power. Similarly, DCPE, also targeting 1, mirrors these correlations with DCeP, OSWE, and PUE. The Data Center green Efficiency (DCgE) metric, aiming for 1, positively correlates with Green Energy Coefficient (GEC) and DCiE, indicating that greater renewable energy use enhances green energy contributions and IT efficiency, but negatively correlates with CUE, as renewable energy reduces CO2 emissions. The DCPD metric, where higher values vary by context, positively correlates with SWaP, suggesting that higher IT power density supports space-adjusted performance, but negatively correlates with the Rack Cooling Index (RCI), as dense power usage may strain cooling infrastructure.
The Data Center Fixed to Variable Energy Ratio (DC-FVER) metric, where lower is better, negatively correlates with the Cooling Effectiveness Ratio (CER), indicating that reducing fixed-to-variable energy ratios conflicts with cooling efficiency. The Data Hall Utilization Efficiency (DH-UE) and Data Hall Utilization Rate (DH-UR) metrics, both targeting 1, positively correlate with each other and Total Utilization Efficiency (TUE), reflecting that active floor and rack deployment efficiencies enhance total utilization but lack negative correlations in the table. The Energy Baseline Score (EBS) metric, aiming for values less than 1, positively correlates with PUE, suggesting that lower energy baselines align with higher facility-to-IT power ratios, but it negatively correlates with Power Efficiency Savings (PEsavings), as reduced actual energy hinders savings potential. The Hardware Power Overhead Multiplier (H-POM) metric, targeting 1, positively correlates with ITEU, GEC, and DCiE, indicating that holistic performance aligns with utilization and green efficiency, but it negatively correlates with PUE. The Power Delivery Efficiency (PDE) and PEsavings metrics, both aiming for 1, positively correlate with DCiE and negatively with PUE, showing that efficient power delivery and energy savings enhance IT efficiency but increase facility power ratios. The PUEreliability metric, targeting 1, positively correlates with PUE but negatively with DCA, reflecting trade-offs between reliability and uptime. The System Infrastructure Power Optimization Metric (SI-POM) metric, also aiming for 1, positively correlates with CER and PDE but negatively with PUE, indicating that site infrastructure efficiency supports cooling and power delivery but conflicts with PUE. The TUE metric positively correlates with ITEU, DH-UR, and DH-UE, reinforcing utilization efficiencies, but it negatively correlates with PUE. 
Finally, PUE, DCiE, and CUE form a tight correlation cluster: DCiE positively correlates with PDE and negatively with PUE, while PUE negatively correlates with DCiE and positively with CUE, and CUE positively correlates with PUE, highlighting that minimizing CO2 emissions and maximizing IT efficiency often increases the facility-to-IT power ratio.
4. Real-World Examples
These metrics are used by major corporations, such as Google and Microsoft, in energy-efficient CDC management. For instance, in 2023 Google reported an average annual PUE of 1.10 across its global fleet of large-scale CDCs, lower than the industry average of 1.58 [258]. Microsoft tracks PUE and Water Usage Effectiveness (WUE) to monitor energy and water efficiency in its data centers. For the period from 1 July 2023 to 30 June 2024, Microsoft reported PUE and WUE figures for its operational CDCs, emphasizing location-specific variables such as climate and humidity. A recent Microsoft study also highlighted that advanced cooling methods, such as cold plate and immersion cooling, can reduce data center emissions and water usage, particularly for AI workloads [259].
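The quoted PUE figures translate directly into overhead energy: for the same IT load, non-IT energy per kWh of IT energy is (PUE − 1). The IT load below is an illustrative assumption used only to scale the comparison.

```python
# Non-IT overhead (cooling, power delivery, etc.) implied by a PUE value.

def overhead_kwh(pue, it_kwh):
    """Overhead energy = (PUE - 1) * IT energy."""
    return (pue - 1.0) * it_kwh

it_kwh = 1_000_000.0                 # illustrative annual IT load, kWh
fleet = overhead_kwh(1.10, it_kwh)   # PUE reported by Google in [258]
industry = overhead_kwh(1.58, it_kwh)

print(f"{fleet:.0f} {industry:.0f} {industry - fleet:.0f}")  # 100000 580000 480000
```

In other words, for every GWh of IT work, a facility at the industry-average PUE spends roughly 480 MWh more on overhead than one at PUE 1.10, which is why fleet-level PUE reporting is commercially meaningful.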
PayPal improved its CDC efficiency by adopting NVIDIA's accelerated computing, reducing server energy consumption by nearly eight times while enhancing real-time fraud detection. This highlights how workload-specific efficiency improvements can be measured beyond traditional metrics such as PUE, which captures only a first step toward overall efficiency [260].
5. Challenges and Future Works
This section presents the challenges facing CDC energy efficiency metrics and the corresponding future research directions.
5.1. Challenges
5.1.1. IT-Related Metrics
Each of the presented IT-related energy efficiency metrics for CDCs faces challenges and limitations in quantifying power usage and performance in modern computing environments, as shown in Figure 8. APC struggles with the dynamic variability of cloud workloads, complicating consistent tracking, and it fails to reflect the productivity of power used, limiting its efficiency insights. CPE is challenged by the heterogeneity of cloud tasks, making computational output measurement complex, and it overlooks non-computational factors such as memory and storage, missing a holistic view. DWPE grapples with consistently defining workloads across virtualized platforms and is sensitive to measurement timing, leading to inconsistent results. EWR faces difficulties in distinguishing wasted from useful energy and struggles to objectively assess task usefulness in mixed environments. ITEE's efficiency varies with workload types, hindering cross-platform comparisons, and it ignores software or system management inefficiencies. OSWE finds it hard to define system boundaries in distributed cloud settings and does not distinguish between idle and active states. PpW is affected by workload diversity, making comparisons misleading, and it lacks standardized baselines for reliable benchmarking. ScE is complicated by workload distribution layers, missing broader data center interactions, while SPUE's limited industry adoption and architecture-specific dynamics reduce its comparability. Lastly, SWaP tackles the conflicting optimization of space, power, and performance, with challenges in defining trade-offs and achieving automatic tuning.
5.1.2. Non-IT-Related Metrics
The challenges associated with non-IT-related metrics are summarized in Figure 9.
Corporate Average Data Center Efficiency faces the challenge of aggregating data across diverse sites with varying architectures and operational models, which complicates consistent measurement. Its limitation lies in potentially masking inefficiencies at individual data centers through the averaging effect, reducing its ability to pinpoint specific issues. DCA struggles to balance energy efficiency against the need for high uptime, often leading to conservative, less efficient designs. Moreover, it is a reliability metric rather than an efficiency one, as it does not measure how efficiently availability is achieved.
Data Center Energy Productivity encounters difficulties in standardizing the measurement of productivity for abstract services such as cloud functions, lacking a universal approach. Its limitation is the challenge of correlating energy input with meaningful service output in a generalized manner, reducing its applicability. Data Center green Efficiency (DCgE) is hindered by restricted access to consistent, verifiable green energy data, and its limitation lies in not accounting for the intermittent availability of green sources or energy storage inefficiencies, which can skew results.
Data Center Power Density faces challenges with cooling bottlenecks and thermal hot spots caused by higher density designs. Its limitation is that it may prioritize compact layouts at the expense of cooling efficiency and maintainability, leading to operational trade-offs. DCPE requires complex performance benchmarking for diverse services in hybrid environments, and its lack of a universal performance definition limits comparability across data centers.
Data Center Fixed to Variable Energy Ratio demands granular instrumentation to distinguish fixed from variable energy components, a challenging task. Its static nature, not responsive to real-time load changes, is a key limitation. DH-UE is affected by layout heterogeneity and varying cooling strategies, which can distort utilization measurements, and it overlooks vertical space and cooling zones, providing an incomplete view of spatial efficiency.
Data Hall Utilization Rate lacks real-time rack-level monitoring and vendor standardization, limiting its precision. It only reflects space utilization, ignoring power or thermal efficiency, which restricts its scope. EBS struggles to establish reliable baselines in dynamic cloud environments, and its dependence on potentially outdated baseline periods undermines its relevance.
Power Efficiency Savings faces the challenge of quantifying savings through estimates rather than measured outcomes, and, without a standard reference, claimed savings may be inflated or inconsistent. SI-POM is intricate to quantify in shared facilities and lacks real-time tracking, missing transient inefficiencies.
Total Utilization Efficiency is complex to measure due to idle time, redundancy, and oversubscription, and its broad, aggregate nature reduces its actionability. PUE can be manipulated (for example, by shutting off unused equipment) and is often reported under optimal conditions, while its focus on infrastructure energy ignores IT utilization, leading to potential misinterpretation.
Data Center Infrastructure Efficiency offers little new insight unless combined with other metrics, and it fails to capture internal inefficiencies in IT or cooling subsystems. Finally, CUE requires access to CO2 emissions data from energy providers that is often unavailable or only approximate, and its exclusion of offset mechanisms and lifecycle emissions results in an incomplete environmental assessment.
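As a point of reference for the facility-level ratios discussed above, the standard PUE, DCiE, and CUE definitions can be computed directly; the annual figures below are hypothetical:

```python
# Hypothetical annual energy and emissions figures for one facility.
total_facility_energy_kwh = 12_000_000.0  # IT, cooling, power delivery, lighting
it_equipment_energy_kwh = 7_500_000.0     # servers, storage, network
total_co2_kg = 4_200_000.0                # emissions attributed to facility electricity

pue = total_facility_energy_kwh / it_equipment_energy_kwh   # best value: 1
dcie = it_equipment_energy_kwh / total_facility_energy_kwh  # reciprocal of PUE
cue = total_co2_kg / it_equipment_energy_kwh                # kgCO2 per IT kWh; best: 0

print(f"PUE  = {pue:.2f}")
print(f"DCiE = {dcie:.1%}")
print(f"CUE  = {cue:.3f} kgCO2/kWh")
```

Note that none of these ratios reflects IT utilization: a facility can report the same PUE with servers idle or fully loaded, which is precisely the misinterpretation risk raised above.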
5.2. Future Works
5.2.1. IT-Related Metrics
For APC, future work will focus on creating intelligent, workload-aware tools that dynamically adjust power metrics in real time, based on specific system parameters. These tools will incorporate system utilization rates, hardware degradation over time, and cooling system dynamics, while leveraging AI models to predict and optimize energy consumption proactively with high accuracy. For CPE, the goal is to establish standardized computational output units tailored to diverse applications, ensuring precise alignment of power consumption with specific compute tasks. Adaptive models are proposed to dynamically quantify the computational value of heterogeneous workloads, improving energy efficiency in real-time operational scenarios.
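One minimal way to sketch the workload-aware, real-time APC tracking envisioned here is an exponentially weighted moving average, which follows load shifts while smoothing measurement noise; the smoothing factor and power trace are illustrative assumptions, not values from this review:

```python
def ewma_power(samples_w, alpha=0.3):
    """Exponentially weighted moving average of power samples.

    A simple stand-in for workload-aware tracking: recent samples dominate,
    so the estimate follows load changes quickly while still smoothing noise.
    alpha is a hypothetical smoothing factor.
    """
    est = samples_w[0]
    for s in samples_w[1:]:
        est = alpha * s + (1 - alpha) * est
    return est

# A step change in load: the estimate tracks toward the new level.
trace = [300.0] * 5 + [420.0] * 5
print(f"smoothed APC estimate: {ewma_power(trace):.1f} W")
```

An AI-based predictor, as proposed above, would replace this fixed-alpha filter with a model conditioned on utilization, hardware age, and cooling state.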
Data Center Workload Power Efficiency focuses on establishing universal workload definitions and standardized benchmark suites to ensure consistent, reproducible evaluation across diverse computing platforms. Automated profiling tools are recommended to enhance workload tagging and power consumption traceability, enabling precise, real-time assessments of energy efficiency. For EWR, the objective could be to develop refined methods for categorizing energy wastage, clearly distinguishing between essential and non-essential energy consumption under varying workload conditions. Artificial Intelligence techniques could be proposed to identify inefficiency patterns and recommend targeted system-level optimizations to minimize energy waste.
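The wasted-versus-useful categorization proposed for EWR could be sketched under the strong simplifying assumption that intervals with no completed tasks represent waste (real categorization is exactly the open problem noted above):

```python
def energy_wastage_ratio(intervals):
    """Rough EWR estimate in kWh per completed task.

    intervals: list of (energy_kwh, tasks_completed) tuples.
    Simplifying assumption for illustration: energy drawn in intervals
    with zero completed tasks is counted as wasted.
    """
    wasted = sum(e for e, tasks in intervals if tasks == 0)
    total_tasks = sum(tasks for _, tasks in intervals)
    return wasted / total_tasks if total_tasks else float("inf")

# Hypothetical hourly data: (energy in kWh, tasks completed).
intervals = [(2.0, 40), (1.5, 0), (2.2, 55), (0.9, 0)]
print(f"EWR: {energy_wastage_ratio(intervals):.4f} kWh/task")
```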
IT Equipment Energy Efficiency aims to integrate software performance indicators and real-time workload classification to enable comprehensive efficiency assessments. It promotes hardware–software co-design strategies to optimize performance by aligning computational demands with resource utilization. OSWE focuses on decomposing energy consumption into active, idle, and background categories through fine-grained telemetry, while mitigating virtualization and orchestration overhead in cloud-native environments to enhance overall system efficiency.
Performance per Watt advancements focus on developing workload-specific databases to inform precise system configurations and utilizing machine learning to dynamically optimize operations for maximum energy efficiency in real time. Server Compute Efficiency emphasizes detailed intra-server monitoring, including per-core and per-thread performance analysis, and explores correlations between compute efficiency, thermal dynamics, and performance throttling to enhance server-level optimization.
Server Power Usage Effectiveness could be aligned with comprehensive PUE metrics, with emphasis on the development of automated reporting tools and dashboard integration for real-time efficiency tracking. SWaP proposes multi-objective optimization and AI-driven decision frameworks to effectively balance trade-offs between space utilization, power consumption, and computational performance. The use of Digital Twin (DT) models makes it possible to simulate and validate SWaP strategies before implementation, improving decision-making precision.
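Since SWaP is a composite of performance against space and power, the trade-off comparison it enables can be sketched as follows (the configurations, space units, and figures are hypothetical):

```python
def swap_score(performance_ops, space_units, power_w):
    """SWaP = performance / (space x power); higher is better.

    space_units is a hypothetical space measure (e.g., rack units).
    """
    return performance_ops / (space_units * power_w)

# Comparing two candidate server configurations.
dense = swap_score(performance_ops=5.0e9, space_units=1, power_w=450.0)
sparse = swap_score(performance_ops=6.0e9, space_units=2, power_w=500.0)
print(f"dense: {dense:.2e}, sparse: {sparse:.2e}")
print("prefer:", "dense" if dense > sparse else "sparse")
```

A multi-objective or DT-based framework, as proposed above, would go beyond this single scalar by exploring the full space-power-performance Pareto front before deployment.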
5.2.2. Non-IT-Related Metrics
The future development of non-IT-related data center energy efficiency metrics will focus on three primary directions: (1) enhancing standardization by defining uniform calculation methodologies for metrics such as facility energy re-use and cooling efficiency across geographic and regulatory boundaries; (2) automating data collection and analysis through integration with building management systems, Internet of Things (IoT) sensors, and AI-based anomaly detection; and (3) using advanced technologies such as digital twins and predictive analytics to provide dynamic and scenario-based insights into infrastructure performance. For Corporate Average Data center Efficiency (CADE), upcoming efforts could emphasize the creation of standardized cross-organizational key performance indicators that account for energy use in auxiliary systems (such as lighting, HVAC, security) and align with Environmental, Social, and Governance (ESG) requirements, enabling consistent corporate-level reporting of environmental performance. DCA initiatives could increasingly incorporate energy-aware reliability models by quantifying the trade-offs between power consumption and system availability (such as the impact of redundancy level versus efficiency), supported by real-time telemetry and AI-driven predictive maintenance to optimize uptime without excess energy overhead.
For DCeP, the focus will shift toward automated, workload-aware productivity measurement using real-time operational data (task completion rate, computational throughput per energy unit), while ensuring alignment with changing industry performance benchmarks, such as output-based metrics, to enable accurate and continuous evaluation of energy-to-output efficiency.
Data Center green Efficiency will advance by implementing blockchain-based or smart contract systems for green energy traceability, enabling verifiable tracking of renewable energy inputs at a granular level, while also incorporating comprehensive Lifecycle Assessments (LCAs)—including embodied carbon and end-of-life emissions—to offer a more accurate evaluation of environmental impacts. DCPD will prioritize thermal-aware high-density architecture design, using Computational Fluid Dynamics (CFD) simulations and liquid cooling integration to mitigate hot spots and improve heat dissipation efficiency in rack-scale deployments. DCPE aims to utilize AI-driven adaptive performance models—such as reinforcement learning and digital twin simulation—to dynamically predict and balance workloads against energy consumption profiles, optimizing the energy-performance trade-off in near real-time. DC-FVER will refine cost allocation frameworks by incorporating time-of-use energy pricing models and real-time consumption data and by integrating with smart grid interfaces, enhancing cost traceability and enabling demand-side response participation. DH-UE and DH-UR will automate space and rack usage tracking through RFID, computer vision, and occupancy sensors, while optimizing modular hardware design and resource allocation through constraint-aware algorithms to maximize spatial and operational efficiency.
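The fixed/variable decomposition that DC-FVER requires could, as one illustrative approach, be estimated by fitting measured facility power against utilization with ordinary least squares; the data below are synthetic and exactly linear for clarity:

```python
def fixed_variable_split(loads, powers_w):
    """Estimate fixed vs. variable power via least squares on P = fixed + slope * load.

    One plausible way to obtain the granular fixed/variable decomposition
    DC-FVER needs; the inputs are hypothetical measurements.
    """
    n = len(loads)
    mean_l = sum(loads) / n
    mean_p = sum(powers_w) / n
    slope = (sum((l - mean_l) * (p - mean_p) for l, p in zip(loads, powers_w))
             / sum((l - mean_l) ** 2 for l in loads))
    fixed = mean_p - slope * mean_l
    return fixed, slope

# Utilization (0..1) vs. measured facility power (kW), hypothetical.
loads = [0.2, 0.4, 0.6, 0.8]
powers = [520.0, 640.0, 760.0, 880.0]
fixed_kw, slope_kw = fixed_variable_split(loads, powers)
print(f"fixed: {fixed_kw:.0f} kW, variable: {slope_kw:.0f} kW per unit load")
```

Combined with time-of-use pricing, such a split lets the fixed component be costed as overhead while the variable component is attributed to workloads.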
EBS development will focus on automated baseline recalibration using historical and streaming operational data, supported by predictive analytics to detect deviation trends and ensure that energy performance benchmarks remain relevant over time. H-POM intends to implement sub-component-level power telemetry (e.g., PSU-level, network interface cards) and develop standardized power overhead metrics, allowing for detailed attribution of energy use across hardware elements. PDE can enhance power delivery architectures by incorporating high-efficiency DC–DC conversion, busbar optimization, and real-time loss-tracking algorithms to minimize distribution inefficiencies within the data center power path. PEsavings may enable automated energy savings quantification by linking real-time energy data with dynamic cost models, providing finance and sustainability teams with clearer, auditable insights into cost-to-efficiency benefits. SI-POM can consider using IoT networks for fine-grained, edge-level monitoring of power and thermal systems, enabling closed-loop control schemes to reduce overcooling and improve power delivery precision. TUE will focus on building AI-assisted unified utilization frameworks that incorporate compute, memory, storage, and networking usage data to improve global resource scheduling and holistic system efficiency. PUE will be refined to consider workload type and utilization context, with hybrid metrics that integrate CO2-equivalent emissions per compute output to better align efficiency reporting with carbon accountability. DCiE development will emphasize real-time analytics platforms that correlate infrastructure energy usage with renewable input and cooling efficiency data to deliver more actionable, infrastructure-level insights. 
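The automated baseline recalibration proposed for EBS can be sketched with a rolling-window baseline; the window length, readings, and scoring convention are hypothetical choices for illustration:

```python
from collections import deque

class RollingBaseline:
    """Automated baseline recalibration sketch for EBS.

    The baseline is the mean of the most recent window of energy readings,
    so the score stays relevant as the facility evolves. A score below 1
    means current consumption beat the recent baseline.
    """

    def __init__(self, window=3):
        self.history = deque(maxlen=window)

    def score(self, current_kwh):
        baseline = (sum(self.history) / len(self.history)
                    if self.history else current_kwh)
        self.history.append(current_kwh)
        return current_kwh / baseline

ebs = RollingBaseline(window=3)
readings = [100.0, 104.0, 98.0, 90.0]  # hypothetical monthly kWh (thousands)
scores = [ebs.score(r) for r in readings]
print([round(s, 3) for s in scores])
```

The predictive-analytics extension described above would replace the plain rolling mean with a forecast, flagging deviations from expected rather than merely historical consumption.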
Finally, CUE can evolve through standardized carbon emissions reporting methodologies, including Scope 2 emissions, and through the integration of renewable energy certificates and time-stamped carbon intensity data to improve the accuracy and consistency of data center CO2 footprint assessments.
6. Conclusions
In this study, various components contributing to energy consumption within CDCs, including UPS and PDU configurations, as well as diverse energy efficiency metrics, have been systematically examined. The analyzed metrics offer critical insights into optimizing resource utilization, reducing energy waste, and ensuring compliance with environmental and regulatory standards. By addressing the inherent challenges and limitations of these metrics and exploring prospective research avenues—particularly the integration of AI technologies—further enhancements in energy efficiency could be realized. Advancements in this domain not only promise cost reductions and lower carbon emissions but also support the scalability and sustainability of digital services, fostering a more resilient and environmentally conscious digital infrastructure.
A.S.: conceptualization, software, validation, visualization, original writing; review/editing; formal analysis, investigation. H.S.: original writing, review/editing, formal analysis. A.R.: supervision, project management, formal analysis, validation, review/editing, funding acquisition. A.O.: review/editing, formal analysis, funding acquisition. All authors have read and agreed to the published version of the manuscript.
Not applicable.
Not applicable.
No data were used in this review paper.
The authors declare no conflicts of interest.
The following abbreviations are used in this manuscript:
| AC | Alternating Current |
| AI | Artificial Intelligence |
| APC | Average Power Consumption |
| CADE | Corporate Average Data center Efficiency |
| CAGR | Compound Annual Growth Rate |
| CDC | Cloud Data Center |
| CER | Cooling Effectiveness Ratio |
| CPE | Compute Power Efficiency |
| CPU | Central Processing Unit |
| CRAC | Computer Room Air Conditioning |
| CRAH | Computer Room Air Handler |
| CUE | Carbon Usage Effectiveness |
| DC | Direct Current |
| DC-FVER | Data Center Fixed to Variable Energy Ratio |
| DCA | Data Center Availability |
| DCeP | Data Center energy Productivity |
| DCgE | Data Center green Efficiency |
| DCiE | Data Center infrastructure Efficiency |
| DCPD | Data Center Power Density |
| DCPE | Data Center Performance Efficiency |
| DH-UE | Data Hall Utilization Efficiency |
| DH-UR | Data Hall Utilization Rate |
| DT | Digital Twin |
| DWPE | Data Center Workload Power Efficiency |
| EBS | Energy Baseline Score |
| ESG | Environmental, Social, and Governance |
| EWR | Energy Wastage Ratio |
| GEC | Green Energy Coefficient |
| GPU | Graphics Processing Unit |
| H-POM | Hardware Power Overhead Multiplier |
| HVAC | Heating, Ventilation, and Air Conditioning |
| IEA | International Energy Agency |
| IoT | Internet of Things |
| IT | Information Technology |
| ITEE | IT Equipment Energy Efficiency |
| ITEU | IT Equipment Utilization |
| LED | Light-Emitting Diode |
| LIB | Lithium-Ion Battery |
| NN | Neural Network |
| NSERC | Natural Sciences and Engineering Research Council of Canada |
| OSWE | Operational System Workload Efficiency |
| PDE | Power Delivery Efficiency |
| PDU | Power Distribution Unit |
| PEsavings | Power Efficiency Savings |
| PPA | Power Purchase Agreement |
| PpW | Performance per Watt |
| PRISMA | Preferred Reporting Items for Systematic reviews and Meta-Analyses |
| PUE | Power Usage Effectiveness |
| PUEreliability | PUE Adjusted for Reliability |
| RAM | Random Access Memory |
| RCI | Rack Cooling Index |
| RES | Renewable Energy Source |
| ScE | Server compute Efficiency |
| SI-POM | System Infrastructure Power Optimization Metric |
| SPUE | Server Power Usage Effectiveness |
| SWaP | Space, Wattage, and Performance |
| TUE | Total Utilization Efficiency |
| UPS | Uninterruptible Power Supply |
| VM | Virtual Machine |
| WUE | Water Usage Effectiveness |
Footnotes
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Figure 1 The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) process of this systematic review, including keywords, identification, screening, eligibility, and inclusion.
Figure 2 Overview of related Scopus publications on energy efficiency of CDCs.
Figure 3 Overall statistics of Scopus publication types on energy efficiency of CDCs.
Figure 4 A typical CDC facility, with its energy supply (PDU and Uninterruptible Power Supply (UPS)), and energy consumption components (Heating, Ventilation, and Air Conditioning (HVAC), security, IT, servers, racks, etc.).
Figure 5 Centralized UPS configuration.
Figure 6 Decentralized UPS configuration—Type 1.
Figure 7 Decentralized UPS configuration—Type 2.
Figure 8 Challenges and limitations of the presented IT-related energy efficiency CDC metrics.
Figure 9 Challenges and limitations of the presented non-IT-related energy efficiency CDC metrics.
Comparison of CDC cooling strategies.
| Cooling Strategy | Methodology | Advantages | Disadvantages | Examples |
|---|---|---|---|---|
| Air-Based Cooling | Uses CRAC or CRAH units to circulate air around servers. | Simple and widely adopted; low initial cost. | High energy consumption; inefficient for high-density workloads. | [ |
| Cold/Hot Aisle Containment | Separates hot and cold air paths, using containment structures. | Improves energy efficiency and reduces hot spots. | Requires proper planning and layout; retrofitting is difficult. | [ |
| Direct-to-Chip Liquid Cooling | Delivers coolant directly to components, such as CPUs, via cold plates. | High cooling efficiency; ideal for high-performance systems. | Higher cost; risk of leaks. | [ |
| Immersion Cooling | Submerges components in dielectric fluid for heat transfer. | Excellent thermal performance; quiet operation. | Expensive setup; fluid compatibility and maintenance complexity. | [ |
| Rear Door Heat Exchangers | Cooled water absorbs heat via coils mounted at the back of racks. | Scalable and effective for dense racks. | Adds rack weight; complex plumbing. | [ |
| In-Row Cooling | Places cooling units between server racks for localized cooling. | Targeted cooling; reduces airflow inefficiencies. | High cost; depends on data center layout. | [ |
| Evaporative Cooling | Uses evaporating water to pre-cool intake air. | Very energy-efficient in dry climates. | Ineffective in humid climates; water treatment needed. | [ |
| Chilled Water Cooling | Chiller cools water that circulates through air handlers. | Suitable for large-scale operations; reliable. | Expensive infrastructure; needs regular maintenance. | [ |
| Economizers | Uses outside air or water for cooling when ambient conditions allow. | Major energy savings; eco-friendly. | Weather-dependent; air filtration often required. | [ |
| Hybrid Systems | Combines multiple cooling methods (such as air + liquid, or free cooling). | Flexible and efficient under varying loads. | High complexity and initial setup cost. | [ |
Cloud Data Center energy efficiency IT-related metrics information.
| Metric | Definition | Primary Use | Examples |
|---|---|---|---|
| APC | Average power usage of IT equipment over time. | Monitors IT power trends. | [ |
| CPE | Computes output per unit of IT power. | Measures IT computational efficiency. | [ |
| DWPE | Workload processed per unit of IT power. | Workload efficiency at IT level. | [ |
| EWR | Energy wasted per unit of computational work. | Energy cost of work. | [ |
| ITEE | Efficiency of IT hardware. | Evaluates hardware efficiency. | [ |
| OSWE | System workload output per total energy. | Measures system-level workload efficiency. | [ |
| PpW | Performance per unit of power. | Measures IT hardware efficiency. | [ |
| ScE | Server compute output per energy used. | Assesses server-level efficiency. | [ |
| SPUE | PUE-like metric for servers. | Server-specific efficiency. | [ |
| SWaP | Composite of performance vs. space and power. | Space–power performance balance. | [ |
Cloud Data Center energy efficiency IT-related metrics mathematical details.
| Metric | Formulation | Units | Best Value | Correlation with Other Metrics |
|---|---|---|---|---|
| APC | Total Energy Consumed / Measurement Period | W | Lower is better | ↑ (ITEE), ↓ (PpW, CPE, ScE) |
| CPE | | Ops/W | | ↑ (PpW, ScE); ↓ (APC) |
| DWPE | | Tasks/W | | ↑ (ITEE, CPE); ↓ (APC) |
| EWR | | kWh/Task | 0 | ↓ (CPE, PpW, ScE); ↑ (APC) |
| ITEE | | % | 1 | ↑ (PpW, CPE); ↓ (EWR) |
| OSWE | | % | 1 | ↑ (DCeP, DCPE); ↓ (PUE) |
| PpW | Performance / Power Consumed | Perf/W | Higher is better | ↑ (CPE, ScE, ITEE); ↓ (EWR, APC) |
| ScE | | Ops/W | | ↑ (PpW, CPE, ITEE); ↓ (EWR) |
| SPUE | | % | 1 | ↑ (PUE); ↓ (PpW, ScE) |
| SWaP | | Ops/m2W | | ↑ (DCPD); ↓ (EWR) |
Cloud Data Center energy efficiency non-IT-related metrics information.
| Metric | Definition | Primary Use | Examples |
|---|---|---|---|
| CADE | Corporate-level efficiency | Corporate energy assessment | [ |
| DCA | Data Center Availability | Measures uptime reliability | [ |
| DCeP | Energy productivity | Productivity benchmark | [ |
| DCgE | Green energy efficiency | Green energy impact | [ |
| DCPD | Power density | Space optimization | [ |
| DCPE | Performance per energy | Performance efficiency measure | [ |
| DC-FVER | Fixed to variable energy | Cost structure insight | [ |
| DH-UE | IT floor space utilization | Space optimization | [ |
| DH-UR | Rack utilization rate | Rack deployment insight | [ |
| EBS | Baseline comparison score | Tracks savings | [ |
| H-POM | Non-computational (overhead) power consumed by hardware components | Includes everything consumed by the hardware | [ |
| PDE | Power delivery efficiency | Power loss assessment | [ |
| PEsavings | Energy efficiency savings | Savings tracking | [ |
| SI-POM | Infrastructure optimization | Infrastructure focus | [ |
| TUE | Holistic utilization | Total resource usage | [ |
| PUE | Total vs IT energy | Efficiency benchmark | [ |
| DCiE | Infra efficiency | Infrastructure energy ratio | [ |
| CUE | Carbon emissions per IT energy | Carbon footprint metric | [ |
Cloud Data Center energy efficiency non-IT-related metrics mathematical details.
| Metric | Formulation | Best Value | Units | Correlation with Other Metrics |
|---|---|---|---|---|
| CADE | | 1 | Ratio | ↑ (ITEU, DCiE); ↓ (PUE) |
| DCA | Uptime / (Uptime + Downtime) | 1 | Ratio | ↓ PUEreliability |
| DCeP | | 1 | Output/W | ↑ (DCPE, OSWE); ↓ (PUE) |
| DCgE | | 1 | Ratio | ↑ (GEC, DCiE); ↓ (CUE) |
| DCPD | | Higher varies | W/m2 | ↑ (SWaP); ↓ (RCI) |
| DCPE | | 1 | Output/W | ↑ (DCeP, OSWE); ↓ (PUE) |
| DC-FVER | | Lower is better | Ratio | ↓ (CER) |
| DH-UE | | 1 | Ratio | ↑ (DH-UR, TUE) |
| DH-UR | | 1 | Ratio | ↑ (DH-UE, TUE) |
| EBS | | 1 | Ratio | ↑ (PUE); ↓ (PEsavings) |
| H-POM | Custom Formula | 1 | Ratio | ↑ (ITEU, GEC, DCiE); ↓ (PUE) |
| PDE | | 1 | Ratio | ↑ (DCiE); ↓ (PUE) |
| PEsavings | | 1 | Ratio | ↑ (DCiE); ↓ (PUE, EBS) |
| PUEreliability | | 1 | Ratio | ↑ (PUE); ↓ (DCA) |
| SI-POM | Custom Formula | 1 | Ratio | ↑ (CER, PDE); ↓ (PUE) |
| TUE | | 1 | Ratio | ↑ (ITEU, DH-UR, DH-UE); ↓ (PUE) |
| PUE | Total Facility Energy / IT Equipment Energy | 1 | Ratio | ↓ (DCiE, PDE); ↑ (CUE) |
| DCiE | IT Equipment Energy / Total Facility Energy | 1 | Ratio | ↑ (PDE); ↓ (PUE) |
| CUE | Total CO2 Emissions / IT Equipment Energy | 0 | KgCO2/kWh | ↑ (PUE) |
Supplementary Materials
The following supporting information can be downloaded at:
1. Xu, C.; Wang, K.; Sun, Y.; Guo, S.; Zomaya, A.Y. Redundancy avoidance for big data in data centers: A conventional neural network approach. IEEE Trans. Netw. Sci. Eng.; 2018; 7, pp. 104-114. [DOI: https://dx.doi.org/10.1109/TNSE.2018.2843326]
2. Kaur, K.; Garg, S.; Kaddoum, G.; Bou-Harb, E.; Choo, K.K.R. A big data-enabled consolidated framework for energy efficient software defined data centers in IoT setups. IEEE Trans. Ind. Inform.; 2019; 16, pp. 2687-2697. [DOI: https://dx.doi.org/10.1109/TII.2019.2939573]
3. Xu, C.; Wang, K.; Li, P.; Xia, R.; Guo, S.; Guo, M. Renewable energy-aware big data analytics in geo-distributed data centers with reinforcement learning. IEEE Trans. Netw. Sci. Eng.; 2018; 7, pp. 205-215. [DOI: https://dx.doi.org/10.1109/TNSE.2018.2813333]
4. Rong, H.; Zhang, H.; Xiao, S.; Li, C.; Hu, C. Optimizing energy consumption for data centers. Renew. Sustain. Energy Rev.; 2016; 58, pp. 674-691. [DOI: https://dx.doi.org/10.1016/j.rser.2015.12.283]
5. Chaudhary, R.; Aujla, G.S.; Kumar, N.; Rodrigues, J.J. Optimized big data management across multi-cloud data centers: Software-defined-network-based analysis. IEEE Commun. Mag.; 2018; 56, pp. 118-126. [DOI: https://dx.doi.org/10.1109/MCOM.2018.1700211]
6. Gu, L.; Zeng, D.; Li, P.; Guo, S. Cost minimization for big data processing in geo-distributed data centers. IEEE Trans. Emerg. Top. Comput.; 2014; 2, pp. 314-323. [DOI: https://dx.doi.org/10.1109/TETC.2014.2310456]
7. Zhou, Q.; Lou, J.; Jiang, Y. Optimization of energy consumption of green data center in e-commerce. Sustain. Comput. Inform. Syst.; 2019; 23, pp. 103-110. [DOI: https://dx.doi.org/10.1016/j.suscom.2019.07.008]
8. Dong, C.; Wen, W.; Xu, T.; Yang, X. Joint optimization of data-center selection and video-streaming distribution for crowdsourced live streaming in a geo-distributed cloud platform. IEEE Trans. Netw. Serv. Manag.; 2019; 16, pp. 729-742. [DOI: https://dx.doi.org/10.1109/TNSM.2019.2907785]
9. Ranjan, R.; Wang, L.; Zomaya, A.Y.; Tao, J.; Jayaraman, P.P.; Georgakopoulos, D. Advances in methods and techniques for processing streaming big data in datacentre clouds. IEEE Trans. Emerg. Top. Comput.; 2016; 4, pp. 262-265. [DOI: https://dx.doi.org/10.1109/TETC.2016.2524219]
10. Chen, W.; Paik, I.; Li, Z. Cost-aware streaming workflow allocation on geo-distributed data centers. IEEE Trans. Comput.; 2016; 66, pp. 256-271. [DOI: https://dx.doi.org/10.1109/TC.2016.2595579]
11. He, J.; Chaintreau, A.; Diot, C. A performance evaluation of scalable live video streaming with nano data centers. Comput. Netw.; 2009; 53, pp. 153-167. [DOI: https://dx.doi.org/10.1016/j.comnet.2008.10.014]
12. Sajjad, H.P.; Danniswara, K.; Al-Shishtawy, A.; Vlassov, V. Spanedge: Towards unifying stream processing over central and near-the-edge data centers. Proceedings of the 2016 IEEE/ACM Symposium on Edge Computing (SEC); Washington, DC, USA, 27–28 October 2016; pp. 168-178.
13. Ranjan, R. Streaming big data processing in datacenter clouds. IEEE Cloud Comput.; 2014; 1, pp. 78-83. [DOI: https://dx.doi.org/10.1109/MCC.2014.22]
14. Simić, M.; Prokić, I.; Dedeić, J.; Sladić, G.; Milosavljević, B. Towards edge computing as a service: Dynamic formation of the micro data-centers. IEEE Access; 2021; 9, pp. 114468-114484. [DOI: https://dx.doi.org/10.1109/ACCESS.2021.3104475]
15. Bilal, K.; Khalid, O.; Erbad, A.; Khan, S.U. Potentials, trends, and prospects in edge technologies: Fog, cloudlet, mobile edge, and micro data centers. Comput. Netw.; 2018; 130, pp. 94-120. [DOI: https://dx.doi.org/10.1016/j.comnet.2017.10.002]
16. Jiang, C.; Fan, T.; Gao, H.; Shi, W.; Liu, L.; Cérin, C.; Wan, J. Energy aware edge computing: A survey. Comput. Commun.; 2020; 151, pp. 556-580. [DOI: https://dx.doi.org/10.1016/j.comcom.2020.01.004]
17. Jiang, C.; Cheng, X.; Gao, H.; Zhou, X.; Wan, J. Toward computation offloading in edge computing: A survey. IEEE Access; 2019; 7, pp. 131543-131558. [DOI: https://dx.doi.org/10.1109/ACCESS.2019.2938660]
18. Premsankar, G.; Di Francesco, M.; Taleb, T. Edge computing for the Internet of Things: A case study. IEEE Internet Things J.; 2018; 5, pp. 1275-1284. [DOI: https://dx.doi.org/10.1109/JIOT.2018.2805263]
19. Safari, A.; Taghizad-Tavana, K.; Tarafdar Hagh, M. Artificial Intelligence-Driven Optimization of Internet Data Center Energy Consumption in Active Distribution Networks: A Transformer-Based Robust Control Model with Spatio-Temporal Flexibility Analytics. Available at SSRN 5056252 2024; Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5056252 (accessed on 14 May 2025).
20. Koot, M.; Wijnhoven, F. Usage impact on data center electricity needs: A system dynamic forecasting model. Appl. Energy; 2021; 291, 116798. [DOI: https://dx.doi.org/10.1016/j.apenergy.2021.116798]
21. U.S. Department of Energy. DOE Releases New Report Evaluating Increase in Electricity Demand from Data Centers. 2024; Available online: https://www.energy.gov/articles/doe-releases-new-report-evaluating-increase-electricity-demand-data-centers (accessed on 23 April 2025).
22. IDC. IDC Report Reveals AI-Driven Growth in Datacenter Energy Consumption. 2024; Available online: https://my.idc.com/getdoc.jsp?containerId=prUS52611224&utm_source=chatgpt.com (accessed on 23 April 2025).
23. McKinsey & Company. Energy Consumption in Data Centers: Air versus Liquid Cooling. Retrieved from Boyd Corporation. 2023; Available online: https://www.boydcorp.com/blog/energy-consumption-in-data-centers-air-versus-liquid-cooling.html (accessed on 23 April 2025).
24. Cai, S.; Gou, Z. Towards energy-efficient data centers: A comprehensive review of passive and active cooling strategies. Energy Built Environ.; 2024; Available online: https://www.sciencedirect.com/science/article/pii/S2666123324000916 (accessed on 23 April 2025).
25. Saâdaoui, F.; Jabeur, S.B. Analyzing the influence of geopolitical risks on European power prices using a multiresolution causal neural network. Energy Econ.; 2023; 124, 106793. [DOI: https://dx.doi.org/10.1016/j.eneco.2023.106793]
26. International Energy Agency. Electricity Security Matters More than Ever–Power Systems in Transition. Retrieved from IEA. 2020; Available online: https://www.iea.org/reports/power-systems-in-transition/electricity-security-matters-more-than-ever (accessed on 23 April 2025).
27. World Bank. Selecting and Implementing Demand Response Programs to Support Grid Flexibility: A Guidance Note for Practitioners; World Bank: Washington, DC, USA, 2023; Available online: https://documents.worldbank.org/en/publication/documents-reports/documentdetail/099647511282438850/idu10031c3e11c08e145e11b50a114842d7d19fd (accessed on 23 April 2025).
28. Safari, A.; Daneshvar, M.; Anvari-Moghaddam, A. Energy Intelligence: A Systematic Review of Artificial Intelligence for Energy Management. Appl. Sci.; 2024; 14, 11112. [DOI: https://dx.doi.org/10.3390/app142311112]
29. Houston Chronicle. BlackRock CEO: AI Data Center Growth Could Be Limited by Texas Grid. 2024; Available online: https://www.houstonchronicle.com/business/energy/article/ceraweek-power-grid-texas-blackrock-20213874.php (accessed on 23 April 2025).
30. Reccessary. Taiwan to Stop Approving Data Centers over 5MW in the North Due to Electricity Concerns. 2024; Available online: https://www.datacenterdynamics.com/en/news/taiwan-to-stop-large-data-centers-in-the-north-cites-insufficient-power/ (accessed on 23 April 2025).
31. Data Center Dynamics. Three Mile Island Nuclear Power Plant to Return as Microsoft Signs 20-Year 835MW AI Data Center PPA. 2024; Available online: https://www.datacenterdynamics.com/en/news/three-mile-island-nuclear-power-plant-to-return-as-microsoft-signs-20-year-835mw-ai-data-center-ppa/ (accessed on 23 April 2025).
32. Kurt, A.; Gumus, M. Sustainable planning of penal facilities through multi-objective location-allocation modelling and data envelopment analysis. Socio-Econ. Plan. Sci.; 2025; 98, 102147. [DOI: https://dx.doi.org/10.1016/j.seps.2024.102147]
33. Woodruff, J.Z.; Brenner, P.; Buccellato, A.P.; Go, D.B. Environmentally opportunistic computing: A distributed waste heat reutilization approach to energy-efficient buildings and data centers. Energy Build.; 2014; 69, pp. 41-50. [DOI: https://dx.doi.org/10.1016/j.enbuild.2013.09.036]
34. Shao, X.; Zhang, Z.; Song, P.; Feng, Y.; Wang, X. A review of energy efficiency evaluation metrics for data centers. Energy Build.; 2022; 271, 112308. [DOI: https://dx.doi.org/10.1016/j.enbuild.2022.112308]
35. Long, S.; Li, Y.; Huang, J.; Li, Z.; Li, Y. A review of energy efficiency evaluation technologies in cloud data centers. Energy Build.; 2022; 260, 111848. [DOI: https://dx.doi.org/10.1016/j.enbuild.2022.111848]
36. Fieni, G.; Rouvoy, R.; Seinturier, L. xPUE: Extending Power Usage Effectiveness Metrics for Cloud Infrastructures. arXiv; 2025; arXiv: 2503.07124
37. Sarkis-Onofre, R.; Catalá-López, F.; Aromataris, E.; Lockwood, C. How to properly use the PRISMA Statement. Syst. Rev.; 2021; 10, 117. [DOI: https://dx.doi.org/10.1186/s13643-021-01671-z]
38. Klemick, H.; Kopits, E.; Wolverton, A. How do data centers make energy efficiency investment decisions? Qualitative evidence from focus groups and interviews. Energy Effic.; 2019; 12, pp. 1359-1377. [DOI: https://dx.doi.org/10.1007/s12053-019-09782-2]
39. Newkirk, A.C.; Hanus, N.; Payne, C.T. Expert and operator perspectives on barriers to energy efficiency in data centers. Energy Effic.; 2024; 17, 63. [DOI: https://dx.doi.org/10.1007/s12053-024-10244-7]
40. Dayarathna, M.; Wen, Y.; Fan, R. Data center energy consumption modeling: A survey. IEEE Commun. Surv. Tutor.; 2015; 18, pp. 732-794. [DOI: https://dx.doi.org/10.1109/COMST.2015.2481183]
41. Gandhi, A.D.; Newbury, M.E. Evaluation of the energy efficiency metrics for wireless networks. Bell Labs Tech. J.; 2011; 16, pp. 207-215. [DOI: https://dx.doi.org/10.1002/bltj.20495]
42. D’Aniello, F.; Sorrentino, M.; Rizzo, G.; Trifirò, A.; Bedogni, F. Introducing innovative energy performance metrics for high-level monitoring and diagnosis of telecommunication sites. Appl. Therm. Eng.; 2018; 137, pp. 277-287. [DOI: https://dx.doi.org/10.1016/j.applthermaleng.2018.03.061]
43. Hossfeld, T.; Wunderer, S.; Loh, F.; Schien, D. Analysis of Energy Intensity and Generic Energy Efficiency Metrics in Communication Networks: Limits, Practical Applications and Case Studies. IEEE Access; 2024; 12, pp. 105527-105549. [DOI: https://dx.doi.org/10.1109/ACCESS.2024.3435716]
44. Daim, T.; Justice, J.; Krampits, M.; Letts, M.; Subramanian, G.; Thirumalai, M. Data center metrics: An energy efficiency model for information technology managers. Manag. Environ. Qual. Int. J.; 2009; 20, pp. 712-731. [DOI: https://dx.doi.org/10.1108/14777830910990870]
45. Yuventi, J.; Mehdizadeh, R. A critical analysis of power usage effectiveness and its use as data center energy sustainability metrics. Energy Build.; 2013; 64, pp. 90-94. [DOI: https://dx.doi.org/10.1016/j.enbuild.2013.04.015]
46. Herrlin, M.K. Airflow and cooling performance of data centers: Two performance metrics. ASHRAE Trans.; 2008; 114, pp. 182-187.
47. Xie, M.; Wang, J.; Liu, J. Evaluation metrics of thermal management in data centers based on exergy analysis. Appl. Therm. Eng.; 2019; 147, pp. 1083-1095. [DOI: https://dx.doi.org/10.1016/j.applthermaleng.2018.10.137]
48. Chang, Q.; Huang, Y.; Liu, K.; Xu, X.; Zhao, Y.; Pan, S. Optimization Control Strategies and Evaluation Metrics of Cooling Systems in Data Centers: A Review. Sustainability; 2024; 16, 7222. [DOI: https://dx.doi.org/10.3390/su16167222]
49. Capozzoli, A.; Serale, G.; Liuzzo, L.; Chinnici, M. Thermal metrics for data centers: A critical review. Energy Procedia; 2014; 62, pp. 391-400. [DOI: https://dx.doi.org/10.1016/j.egypro.2014.12.401]
50. Capozzoli, A.; Chinnici, M.; Perino, M.; Serale, G. Review on performance metrics for energy efficiency in data center: The role of thermal management. Proceedings of the International Workshop on Energy Efficient Data Centers; Springer: Berlin/Heidelberg, Germany, 2014; pp. 135-151.
51. Reddy, V.D.; Setz, B.; Rao, G.S.V.; Gangadharan, G.; Aiello, M. Metrics for sustainable data centers. IEEE Trans. Sustain. Comput.; 2017; 2, pp. 290-303. [DOI: https://dx.doi.org/10.1109/TSUSC.2017.2701883]
52. Gowans, D. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures. National Renewable Energy Laboratory. 2013; Available online: https://www.energy.gov/oe/uniform-methods-project-methods-determining-energy-efficiency-savings-specific-measures (accessed on 14 May 2025).
53. Whitehead, B.; Andrews, D.; Shah, A.; Maidment, G. Assessing the environmental impact of data centres part 1: Background, energy use and metrics. Build. Environ.; 2014; 82, pp. 151-159. [DOI: https://dx.doi.org/10.1016/j.buildenv.2014.08.021]
54. Masanet, E.; Shehabi, A.; Lei, N.; Smith, S.; Koomey, J. Recalibrating global data center energy-use estimates. Science; 2020; 367, pp. 984-986. [DOI: https://dx.doi.org/10.1126/science.aba3758] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/32108103]
55. Yadav, R.; Zhang, W.; Kaiwartya, O.; Singh, P.R.; Elgendy, I.A.; Tian, Y.C. Adaptive energy-aware algorithms for minimizing energy consumption and SLA violation in cloud computing. IEEE Access; 2018; 6, pp. 55923-55936. [DOI: https://dx.doi.org/10.1109/ACCESS.2018.2872750]
56. Nambi, S.; Thanapal, P. EMO–TS: An Enhanced Multi-Objective Optimization Algorithm for Energy-Efficient Task Scheduling in Cloud Data Centers. IEEE Access; 2025; 13, pp. 8187-8200. [DOI: https://dx.doi.org/10.1109/ACCESS.2025.3527031]
57. Swain, S.R.; Parashar, A.; Singh, A.K.; Lee, C.N. An intelligent virtual machine allocation optimization model for energy-efficient and reliable cloud environment. J. Supercomput.; 2025; 81, pp. 1-26. [DOI: https://dx.doi.org/10.1007/s11227-024-06734-1]
58. Elgendy, I.A.; Muthanna, A.; Alshahrani, A.; Hassan, D.S.; Alkanhel, R.; Elkawkagy, M. Optimizing Energy Efficiency in Vehicular Edge-Cloud Networks Through Deep Reinforcement Learning-Based Computation Offloading. IEEE Access; 2024; 12, pp. 191537-191550. [DOI: https://dx.doi.org/10.1109/ACCESS.2024.3514881]
59. Chong, F.T.; Heck, M.J.R.; Ranganathan, P.; Saleh, A.A.M.; Wassel, H.M.G. Data Center Energy Efficiency: Improving Energy Efficiency in Data Centers Beyond Technology Scaling. IEEE Des. Test; 2014; 31, pp. 93-104. [DOI: https://dx.doi.org/10.1109/MDAT.2013.2294466]
60. Yang, L.; Li, X.; Sun, M.; Sun, C. Hybrid Policy-Based Reinforcement Learning of Adaptive Energy Management for the Energy Transmission-Constrained Island Group. IEEE Trans. Ind. Inform.; 2023; 19, pp. 10751-10762. [DOI: https://dx.doi.org/10.1109/TII.2023.3241682]
61. Zhang, N.; Yan, J.; Hu, C.; Sun, Q.; Yang, L.; Gao, D.W.; Guerrero, J.M.; Li, Y. Price-Matching-Based Regional Energy Market with Hierarchical Reinforcement Learning Algorithm. IEEE Trans. Ind. Inform.; 2024; 20, pp. 11103-11114. [DOI: https://dx.doi.org/10.1109/TII.2024.3390595]
62. Mastelic, T.; Brandic, I. Recent Trends in Energy-Efficient Cloud Computing. IEEE Cloud Comput.; 2015; 2, pp. 40-47. [DOI: https://dx.doi.org/10.1109/MCC.2015.15]
63. Zhang, Q.; Meng, Z.; Hong, X.; Zhan, Y.; Liu, J.; Dong, J.; Bai, T.; Niu, J.; Deen, M.J. A survey on data center cooling systems: Technology, power consumption modeling and control strategy optimization. J. Syst. Archit.; 2021; 119, 102253. [DOI: https://dx.doi.org/10.1016/j.sysarc.2021.102253]
64. Ahmed, K.M.U.; Bollen, M.H.; Alvarez, M. A review of data centers energy consumption and reliability modeling. IEEE Access; 2021; 9, pp. 152536-152563. [DOI: https://dx.doi.org/10.1109/ACCESS.2021.3125092]
65. Cho, J.; Kim, Y. Improving energy efficiency of dedicated cooling system and its contribution towards meeting an energy-optimized data center. Appl. Energy; 2016; 165, pp. 967-982. [DOI: https://dx.doi.org/10.1016/j.apenergy.2015.12.099]
66. Shuja, J.; Bilal, K.; Madani, S.A.; Othman, M.; Ranjan, R.; Balaji, P.; Khan, S.U. Survey of techniques and architectures for designing energy-efficient data centers. IEEE Syst. J.; 2014; 10, pp. 507-519. [DOI: https://dx.doi.org/10.1109/JSYST.2014.2315823]
67. Li, Z.; Zhang, X.; Zuo, H.; Shang, Q.; Sun, G.; Huai, W.; Wang, T. Shaking table tests of double-layer micro-module data center: Structural responses and numerical simulation. Eng. Struct.; 2025; 335, 120272. [DOI: https://dx.doi.org/10.1016/j.engstruct.2025.120272]
68. Hewage, T.B.; Ilager, S.; Read, M.R.; Buyya, R. Aging-aware CPU Core Management for Embodied Carbon Amortization in Cloud LLM Inference. arXiv; 2025; arXiv: 2501.15829
69. Xiong, Z.; Tan, L.; Xu, J.; Cai, L. Online real-time energy consumption optimization with resistance to server switch jitter for server clusters. J. Supercomput.; 2025; 81, 460. [DOI: https://dx.doi.org/10.1007/s11227-024-06827-x]
70. Ronchetti, F.; Akishina, V.; Andreassen, E.; Bluhme, N.; Dange, G.; de Cuveland, J.; Erba, G.; Gaur, H.; Hutter, D.; Kozlov, G.
71. Luo, Z.; Liu, J.; Lee, M.; Shroff, N.B. Prediction-Assisted Online Distributed Deep Learning Workload Scheduling in GPU Clusters. arXiv; 2025; arXiv: 2501.05563
72. Cui, W.; Zhang, J.; Zhao, H.; Liu, C.; Zhang, W.; Sha, J.; Chen, Q.; He, B.; Guo, M. XPUTimer: Anomaly Diagnostics for Divergent LLM Training in GPU Clusters of Thousand-Plus Scale. arXiv; 2025; arXiv: 2502.05413
73. Jiang, Y.; Fu, F.; Yao, X.; He, G.; Miao, X.; Klimovic, A.; Cui, B.; Yuan, B.; Yoneki, E. Demystifying Cost-Efficiency in LLM Serving over Heterogeneous GPUs. arXiv; 2025; arXiv: 2502.00722
74. Salmanian, Z.; Izadkhah, H.; Isazadeh, A. Optimizing web server RAM performance using birth–death process queuing system: Scalable memory issue. J. Supercomput.; 2017; 73, pp. 5221-5238. [DOI: https://dx.doi.org/10.1007/s11227-017-2081-z]
75. Allcock, W.; Bernardoni, B.; Bertoni, C.; Getty, N.; Insley, J.; Papka, M.E.; Rizzi, S.; Toonen, B. RAM as a network managed resource. Proceedings of the 2018 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW); Vancouver, BC, Canada, 21–25 May 2018; pp. 99-106.
76. Bianchini, R.; Rajamony, R. Power and energy management for server systems. Computer; 2004; 37, pp. 68-76. [DOI: https://dx.doi.org/10.1109/MC.2004.217]
77. Shan, X.; Yu, H.; Chen, Y.; Chen, Y.; Yang, Z. S2A-P2FS: Secure Storage Auditing with Privacy-Preserving Flexible Data Sharing in Cloud-Assisted Industrial IoT. IEEE Trans. Mob. Comput.; 2025; pp. 1-17. Available online: https://ieeexplore.ieee.org/document/10886974 (accessed on 14 May 2025). [DOI: https://dx.doi.org/10.1109/TMC.2025.3538057]
78. Yu, H.; Zhang, H.; Yang, Z.; Chen, Y.; Liu, H. Efficient and Secure Storage Verification in Cloud-Assisted Industrial IoT Networks. IEEE Trans. Comput.; 2025; 74, pp. 1702-1716. [DOI: https://dx.doi.org/10.1109/TC.2025.3540661]
79. Soundharya, U.L.; Vadivu, G.; Ragunathan, T. A neural network based optimization algorithm for file fetching in distributed file system. Int. J. Inf. Technol. Decis. Mak.; 2025; Available online: https://www.worldscientific.com/doi/10.1142/S0219622025500063 (accessed on 14 May 2025). [DOI: https://dx.doi.org/10.1142/S0219622025500063]
80. Shih, W.C.; Wang, Z.Y.; Kristiani, E.; Hsieh, Y.J.; Sung, Y.H.; Li, C.H.; Yang, C.T. The Construction of a Stream Service Application with DeepStream and Simple Realtime Server Using Containerization for Edge Computing. Sensors; 2025; 25, 259. [DOI: https://dx.doi.org/10.3390/s25010259]
81. Jasny, M.; Ziegler, T.; Binnig, C. Scalable Data Management on Next-Generation Data Center Networks. Scalable Data Management for Future Hardware; Springer: Berlin/Heidelberg, Germany, 2025; pp. 199-221.
82. Patronas, G.; Terzenidis, N.; Kashinkunti, P.; Zahavi, E.; Syrivelis, D.; Capps, L.; Wertheimer, Z.A.; Argyris, N.; Fevgas, A.; Thompson, C.
83. Chen, B.; Zhang, Y.; Liang, H. Multi-Level Network Topology and Time Series Multi-Scenario Optimization Planning Method for Hybrid AC/DC Distribution Systems in Data Centers. Electronics; 2025; 14, 264. [DOI: https://dx.doi.org/10.3390/electronics14020264]
84. Hojati, E.; Sill, A.; Mengel, S.; Sayedi, S.M.B.; Bilbao, A.; Schmitt, K. A Comprehensive Monitoring, Visualization, and Management System for Green Data Centers. IEEE Syst. J.; 2025; 19, pp. 119-129. [DOI: https://dx.doi.org/10.1109/JSYST.2025.3528748]
85. Madhura, K.; Sekhar, G.C.; Sahu, A.; Karthikeyan, M.; Khurana, S.; Shukla, M.; Vashisht, N. Software defined network (SDN) based data server computing system. Int. J. Inf. Technol.; 2025; 17, pp. 607-613. [DOI: https://dx.doi.org/10.1007/s41870-024-02238-6]
86. Xiao, Q.; Li, T.; Jia, H.; Mu, Y.; Jin, Y.; Qiao, J.; Pu, T.; Blaabjerg, F.; Gu, J.M. Electrical circuit analogy-based maximum latency calculation method of internet data centers in power-communication network. IEEE Trans. Smart Grid; 2024; Available online: https://ieeexplore.ieee.org/document/10714408 (accessed on 14 May 2025).
87. Milad, M.; Darwish, M. UPS system: How can future technology and topology improve the energy efficiency in data centers? Proceedings of the 2014 49th International Universities Power Engineering Conference (UPEC); Cluj-Napoca, Romania, 2–5 September 2014; pp. 1-4.
88. Ye, G.; Gao, F.; Fang, J.; Zhang, Q. Joint Workload Scheduling in Geo-Distributed Data Centers Considering UPS Power Losses. IEEE Trans. Ind. Appl.; 2023; 59, pp. 612-626. [DOI: https://dx.doi.org/10.1109/TIA.2022.3214186]
89. Oshnoei, A.; Sorouri, H.; Safari, A.; Davari, P.; Zacho, M.; Johnsen, A.D.; Teodorescu, R. Accelerated SoH Balancing in Lithium-ion Battery Packs Using Finite Set MPC. Proceedings of the 26th European Conference on Power Electronics and Applications; Paris, France, 31 March–4 April 2025.
90. Sorouri, H.; Safari, A.; Oshnoei, A.; Teodorescu, R. Voltage-Controlled SoC Estimation in Lithium-Ion Batteries: A Comparative Analysis of Equivalent Circuit Models. Proceedings of the 26th European Conference on Power Electronics and Applications; Paris, France, 31 March–4 April 2025.
91. Tracy, J.G.; Pfitzer, H.E. Achieving high efficiency in a double conversion transformerless UPS. Proceedings of the 31st Annual Conference of IEEE Industrial Electronics Society, 2005, IECON 2005; Raleigh, NC, USA, 6–10 November 2005; 4.
92. Sato, E.K.; Kinoshita, M.; Yamamoto, Y.; Amboh, T. Redundant high-density high-efficiency double-conversion uninterruptible power system. IEEE Trans. Ind. Appl.; 2010; 46, pp. 1525-1533. [DOI: https://dx.doi.org/10.1109/TIA.2010.2049728]
93. Milad, M.; Darwish, M. Comparison between Double Conversion Online UPS and Flywheel UPS technologies in terms of efficiency and cost in a medium Data Centre. Proceedings of the 2015 50th International Universities Power Engineering Conference (UPEC); Stoke-on-Trent, UK, 1–4 September 2015; pp. 1-5.
94. Oliveira, T.J.; Caseiro, L.M.; Mendes, A.M.; Cruz, S.M.; Perdigao, M.S. Online filter parameters estimation in a double conversion UPS system for real-time model predictive control performance optimization. IEEE Access; 2022; 10, pp. 30484-30500. [DOI: https://dx.doi.org/10.1109/ACCESS.2022.3159968]
95. Parise, G.; Parise, L. Electrical distribution for a reliable data center. IEEE Trans. Ind. Appl.; 2013; 49, pp. 1697-1702. [DOI: https://dx.doi.org/10.1109/TIA.2013.2256332]
96. Ganesh, L.; Weatherspoon, H.; Marian, T.; Birman, K. Integrated approach to data center power management. IEEE Trans. Comput.; 2013; 62, pp. 1086-1096. [DOI: https://dx.doi.org/10.1109/TC.2013.32]
97. Krein, P.T. Data center challenges and their power electronics. CPSS Trans. Power Electron. Appl.; 2017; 2, pp. 39-46. [DOI: https://dx.doi.org/10.24295/CPSSTPEA.2017.00005]
98. Kontorinis, V.; Zhang, L.E.; Aksanli, B.; Sampson, J.; Homayoun, H.; Pettis, E.; Tullsen, D.M.; Rosing, T.S. Managing distributed ups energy for effective power capping in data centers. ACM SIGARCH Comput. Archit. News; 2012; 40, pp. 488-499. [DOI: https://dx.doi.org/10.1145/2366231.2337216]
99. Jia, X.; Wang, J.K. Distributed firewall for P2P network in data center. Proceedings of the 2013 IEEE International Conference on Consumer Electronics-China; Hsinchu City, Taiwan, 3–6 June 2013; pp. 15-19.
100. Farrington, N.; Porter, G.; Radhakrishnan, S.; Bazzaz, H.H.; Subramanya, V.; Fainman, Y.; Papen, G.; Vahdat, A. Helios: A hybrid electrical/optical switch architecture for modular data centers. Proceedings of the ACM SIGCOMM 2010 Conference; New Delhi, India, 30 August–3 September 2010; pp. 339-350.
101. Chen, K.; Hu, C.; Zhang, X.; Zheng, K.; Chen, Y.; Vasilakos, A.V. Survey on routing in data centers: Insights and future directions. IEEE Netw.; 2011; 25, pp. 6-10. [DOI: https://dx.doi.org/10.1109/MNET.2011.5958002]
102. Shang, Y.; Li, D.; Xu, M. Energy-aware routing in data center network. Proceedings of the First ACM SIGCOMM Workshop on Green Networking; New Delhi, India, 30 August 2010; pp. 1-8.
103. Zhang, J.; Yu, F.R.; Wang, S.; Huang, T.; Liu, Z.; Liu, Y. Load balancing in data center networks: A survey. IEEE Commun. Surv. Tutor.; 2018; 20, pp. 2324-2352. [DOI: https://dx.doi.org/10.1109/COMST.2018.2816042]
104. Santos, D.; Mataloto, B.; Ferreira, J.C. Data center environment monitoring system. Proceedings of the 2019 4th International Conference on Cloud Computing and Internet of Things; Tokyo, Japan, 20–22 September 2019; pp. 75-81.
105. Noormohammadpour, M.; Raghavendra, C.S. Datacenter traffic control: Understanding techniques and tradeoffs. IEEE Commun. Surv. Tutor.; 2017; 20, pp. 1492-1525. [DOI: https://dx.doi.org/10.1109/COMST.2017.2782753]
106. Guo, Z.; Liu, S.; Zhang, Z.L. Traffic control for RDMA-enabled data center networks: A survey. IEEE Syst. J.; 2019; 14, pp. 677-688. [DOI: https://dx.doi.org/10.1109/JSYST.2019.2936519]
107. Benson, T.; Anand, A.; Akella, A.; Zhang, M. Understanding data center traffic characteristics. ACM SIGCOMM Comput. Commun. Rev.; 2010; 40, pp. 92-99. [DOI: https://dx.doi.org/10.1145/1672308.1672325]
108. Safari, A.; Kharrati, H.; Rahimi, A. A hybrid attention-based long short-term memory fast model for thermal regulation of smart residential buildings. IET Smart Cities; 2024; 6, pp. 361-371. [DOI: https://dx.doi.org/10.1049/smc2.12088]
109. Jing, Y.; Xie, L.; Li, F.; Zhan, Z.; Wang, Z.; Yang, F.; Fan, J.; Zhu, Z.; Zhang, H.; Zhao, C.
110. Isazadeh, A.; Ziviani, D.; Claridge, D.E. Thermal management in legacy air-cooled data centers: An overview and perspectives. Renew. Sustain. Energy Rev.; 2023; 187, 113707. [DOI: https://dx.doi.org/10.1016/j.rser.2023.113707]
111. Gupta, R.; Asgari, S.; Moazamigoodarzi, H.; Pal, S.; Puri, I.K. Cooling architecture selection for air-cooled Data Centers by minimizing exergy destruction. Energy; 2020; 201, 117625. [DOI: https://dx.doi.org/10.1016/j.energy.2020.117625]
112. Kuzay, M.; Dogan, A.; Yilmaz, S.; Herkiloglu, O.; Atalay, A.S.; Cemberci, A.; Yilmaz, C.; Demirel, E. Retrofitting of an air-cooled data center for energy efficiency. Case Stud. Therm. Eng.; 2022; 36, 102228. [DOI: https://dx.doi.org/10.1016/j.csite.2022.102228]
113. Lin, J.; Lin, W.; Lin, W.; Wang, J.; Jiang, H. Thermal prediction for air-cooled data center using data driven-based model. Appl. Therm. Eng.; 2022; 217, 119207. [DOI: https://dx.doi.org/10.1016/j.applthermaleng.2022.119207]
114. Xiong, X.; Lee, P.S. Vortex-enhanced thermal environment for air-cooled data center: An experimental and numerical study. Energy Build.; 2021; 250, 111287. [DOI: https://dx.doi.org/10.1016/j.enbuild.2021.111287]
115. Wang, N.; Guo, Y.; Huang, C.; Tian, B.; Shao, S. Multi-scale collaborative modeling and deep learning-based thermal prediction for air-cooled data centers: An innovative insight for thermal management. Appl. Energy; 2025; 377, 124568. [DOI: https://dx.doi.org/10.1016/j.apenergy.2024.124568]
116. Li, N.; Li, H.; Duan, K.; Tao, W.Q. Evaluation of the cooling effectiveness of air-cooled data centers by energy diagram. Appl. Energy; 2025; 382, 125215. [DOI: https://dx.doi.org/10.1016/j.apenergy.2024.125215]
117. Chen, H.; Li, D.; Wang, S.; Chen, T.; Zhong, M.; Ding, Y.; Li, Y.; Huo, X. Numerical investigation of thermal performance with adaptive terminal devices for cold aisle containment in data centers. Buildings; 2023; 13, 268. [DOI: https://dx.doi.org/10.3390/buildings13020268]
118. Cheong, K.H.; Tang, K.J.W.; Koh, J.M.; Yu, S.C.M.; Acharya, U.R.; Xie, N.G. A novel methodology to improve cooling efficiency at data centers. IEEE Access; 2019; 7, pp. 153799-153809. [DOI: https://dx.doi.org/10.1109/ACCESS.2019.2946342]
119. Jao, Y.C.; Zhang, Z.W.; Wang, C.C. Effect of uneven heat load on the airflow uniformity and thermal performance in a small-scale data center. Appl. Therm. Eng.; 2024; 242, 122525. [DOI: https://dx.doi.org/10.1016/j.applthermaleng.2024.122525]
120. Li, Y.; Zhu, C.; Li, X.; Yang, B. A Review of Non-Uniform Load Distribution and Solutions in Data Centers: Micro-Scale Liquid Cooling and Large-Scale Air Cooling. Energies; 2025; 18, 149. [DOI: https://dx.doi.org/10.3390/en18010149]
121. Heydari, A.; Gharaibeh, A.R.; Tradat, M.; Manaserh, Y.; Radmard, V.; Eslami, B.; Rodriguez, J.; Sammakia, B. Experimental evaluation of direct-to-chip cold plate liquid cooling for high-heat-density data centers. Appl. Therm. Eng.; 2024; 239, 122122. [DOI: https://dx.doi.org/10.1016/j.applthermaleng.2023.122122]
122. Shahi, P.; Mathew, A.; Saini, S.; Bansode, P.; Kasukurthy, R.; Agonafer, D. Assessment of reliability enhancement in high-power CPUs and GPUs using dynamic direct-to-chip liquid cooling. J. Enhanc. Heat Transf.; 2022; 29, pp. 1-13. [DOI: https://dx.doi.org/10.1615/JEnhHeatTransf.2022043858]
123. Kong, R.; Zhang, H.; Tang, M.; Zou, H.; Tian, C.; Ding, T. Enhancing data center cooling efficiency and ability: A comprehensive review of direct liquid cooling technologies. Energy; 2024; 308, 132846. [DOI: https://dx.doi.org/10.1016/j.energy.2024.132846]
124. Hnayno, M.; Chehade, A.; Klaba, H.; Bauduin, H.; Polidori, G.; Maalouf, C. Performance analysis of new liquid cooling topology and its impact on data centres. Appl. Therm. Eng.; 2022; 213, 118733. [DOI: https://dx.doi.org/10.1016/j.applthermaleng.2022.118733]
125. Lucchese, R.; Varagnolo, D.; Johansson, A. Controlled direct liquid cooling of data servers. IEEE Trans. Control Syst. Technol.; 2020; 29, pp. 2325-2338. [DOI: https://dx.doi.org/10.1109/TCST.2019.2942270]
126. Wang, J.; Yuan, C.; Li, Y.; Li, C.; Wang, Y.; Wei, X. Direct-to-chip immersion liquid cooling for high-power vertical-cavity surface-emitting laser (VCSEL). Appl. Therm. Eng.; 2025; 269, 126137. [DOI: https://dx.doi.org/10.1016/j.applthermaleng.2025.126137]
127. Kim, J.; Choi, H.; Lee, S.; Lee, H. Computational study of single-phase immersion cooling for high-energy density server rack for data centers. Appl. Therm. Eng.; 2025; 264, 125476. [DOI: https://dx.doi.org/10.1016/j.applthermaleng.2025.125476]
128. Alkasmoul, F.S.; Almogbel, A.M.; Shahzad, M.W.; Al-damook, A.J. Thermal performance and optimum concentration of different nanofluids in immersion cooling in data center servers. Results Eng.; 2025; 25, 103699. [DOI: https://dx.doi.org/10.1016/j.rineng.2024.103699]
129. Liu, S.; Guo, S.; Sun, H.; Xu, Z.; Li, X.; Wang, X. Evaluation of energy, economic, and pollution emission for single-phase immersion cooling data center with different economizers. Appl. Therm. Eng.; 2025; 260, 125049. [DOI: https://dx.doi.org/10.1016/j.applthermaleng.2024.125049]
130. Wu, X.; Yang, J.; Liu, Y.; Zhuang, Y.; Luo, S.; Yan, Y.; Xiao, L.; Han, X. Investigations on heat dissipation performance and overall characteristics of two-phase liquid immersion cooling systems for data center. Int. J. Heat Mass Transf.; 2025; 239, 126575. [DOI: https://dx.doi.org/10.1016/j.ijheatmasstransfer.2024.126575]
131. Sun, X.; Liu, Z.; Ji, S.; Yuan, K. Experimental study on thermal performance of a single-phase immersion cooling unit for high-density computing power data center. Int. J. Heat Fluid Flow; 2025; 112, 109735. [DOI: https://dx.doi.org/10.1016/j.ijheatfluidflow.2024.109735]
132. Ozer, R.A. Heat sink optimization with response surface methodology for single phase immersion cooling. Int. J. Heat Fluid Flow; 2025; 112, 109745. [DOI: https://dx.doi.org/10.1016/j.ijheatfluidflow.2025.109745]
133. Oh, H.; Jin, W.; Peng, P.; Winick, J.A.; Sickinger, D.; Sartor, D.; Zhang, Y.; Beckers, K.; Kitz, K.; Acero-Allard, D.
134. Pambudi, N.A.; Sarifudin, A.; Firdaus, R.A.; Ulfa, D.K.; Gandidi, I.M.; Romadhon, R. The immersion cooling technology: Current and future development in energy saving. Alex. Eng. J.; 2022; 61, pp. 9509-9527. [DOI: https://dx.doi.org/10.1016/j.aej.2022.02.059]
135. Li, X.; Xu, Z.; Liu, S.; Zhang, X.; Sun, H. Server performance optimization for single-phase immersion cooling data center. Appl. Therm. Eng.; 2023; 224, 120080. [DOI: https://dx.doi.org/10.1016/j.applthermaleng.2023.120080]
136. Liu, C.; Yu, H. Evaluation and optimization of a two-phase liquid-immersion cooling system for data centers. Energies; 2021; 14, 1395. [DOI: https://dx.doi.org/10.3390/en14051395]
137. Cho, J. Numerical analysis of rack-based data center cooling with rear door heat exchanger (RDHx): Interrelationship between thermal performance and energy efficiency. Case Stud. Therm. Eng.; 2024; 63, 105247. [DOI: https://dx.doi.org/10.1016/j.csite.2024.105247]
138. Mynampati, V.N.P.; Karajgikar, S.; Sheerah, I.; Agonafer, D.; Novotny, S.; Schmidt, R. Rear Door Heat Exchanger Cooling Performance in Telecommunication Data Centers. Proceedings of the ASME International Mechanical Engineering Congress and Exposition; Vancouver, BC, Canada, 12–18 November 2010; Volume 44281, pp. 405-410.
139. Nemati, K.; Alissa, H.A.; Murray, B.T.; Schneebeli, K.; Sammakia, B. Experimental failure analysis of a rear door heat exchanger with localized containment. IEEE Trans. Compon. Packag. Manuf. Technol.; 2017; 7, pp. 882-892. [DOI: https://dx.doi.org/10.1109/TCPMT.2017.2682863]
140. Manaserh, Y.M.; Tradat, M.I.; Gharaibeh, A.R.; Sammakia, B.G.; Tipton, R. Shifting to energy efficient hybrid cooled data centers using novel embedded floor tiles heat exchangers. Energy Convers. Manag.; 2021; 247, 114762. [DOI: https://dx.doi.org/10.1016/j.enconman.2021.114762]
141. Li, X.; Li, M.; Zhang, Y.; Han, Z.; Wang, S. Rack-level cooling technologies for data centers—A comprehensive review. J. Build. Eng.; 2024; 5, 109535. [DOI: https://dx.doi.org/10.1016/j.jobe.2024.109535]
142. Yang, W.; Yang, L.; Ou, J.; Lin, Z.; Zhao, X. Investigation of heat management in high thermal density communication cabinet by a rear door liquid cooling system. Energies; 2019; 12, 4385. [DOI: https://dx.doi.org/10.3390/en12224385]
143. Gao, T.; Sammakia, B.G.; Geer, J.F.; Ortega, A.; Schmidt, R. Dynamic analysis of cross flow heat exchangers in data centers using transient effectiveness method. IEEE Trans. Compon. Packag. Manuf. Technol.; 2014; 4, pp. 1925-1935. [DOI: https://dx.doi.org/10.1109/TCPMT.2014.2369256]
144. Karki, K.; Novotny, S.; Radmehr, A.; Patankar, S. Use of passive, rear-door heat exchangers to cool low to moderate heat loads. ASHRAE Trans.; 2011; 117, pp. 26-34.
145. Wan, J.; Gui, X.; Kasahara, S.; Zhang, Y.; Zhang, R. Air flow measurement and management for improving cooling and energy efficiency in raised-floor data centers: A survey. IEEE Access; 2018; 6, pp. 48867-48901. [DOI: https://dx.doi.org/10.1109/ACCESS.2018.2866840]
146. Abbas, A.; Huzayyin, A.; Mouneer, T.; Nada, S. Effect of data center servers’ power density on the decision of using in-row cooling or perimeter cooling. Alex. Eng. J.; 2021; 60, pp. 3855-3867. [DOI: https://dx.doi.org/10.1016/j.aej.2021.02.051]
147. Nada, S.; Abbas, A. Solutions of thermal management problems for terminal racks of in-row cooling architectures in data centers. Build. Environ.; 2021; 201, 107991. [DOI: https://dx.doi.org/10.1016/j.buildenv.2021.107991]
148. Abbas, A.; Huzayyin, A.; Mouneer, T.; Nada, S. Thermal management and performance enhancement of data centers architectures using aligned/staggered in-row cooling arrangements. Case Stud. Therm. Eng.; 2021; 24, 100884. [DOI: https://dx.doi.org/10.1016/j.csite.2021.100884]
149. Nada, S.A.; Abbas, A.M. Effect of in-row cooling units numbers/locations on thermal and energy management of data centers servers. Int. J. Energy Res.; 2021; 45, pp. 20270-20284. [DOI: https://dx.doi.org/10.1002/er.7112]
150. Cho, J.; Woo, J. Development and experimental study of an independent row-based cooling system for improving thermal performance of a data center. Appl. Therm. Eng.; 2020; 169, 114857. [DOI: https://dx.doi.org/10.1016/j.applthermaleng.2019.114857]
151. Wang, L.; Wang, Y.; Bai, X.; Wu, T.; Ma, Y.; Jin, Y.; Jiang, H. An efficient assessment method for the thermal environment of a row-based cooling data center. Appl. Therm. Eng.; 2025; 269, 126020. [DOI: https://dx.doi.org/10.1016/j.applthermaleng.2025.126020]
152. Baghsheikhi, M.; Haftlangi, P.; Mohammadi, M. Analytical and experimental comparison of various in-row cooling systems for data centers: An exergy, exergoeconomic, and economic analysis. Therm. Sci. Eng. Prog.; 2025; 57, 103086. [DOI: https://dx.doi.org/10.1016/j.tsep.2024.103086]
153. Cho, J. Optimal supply air temperature with respect to data center operational stability and energy efficiency in a row-based cooling system under fault conditions. Energy; 2024; 288, 129797. [DOI: https://dx.doi.org/10.1016/j.energy.2023.129797]
154. Wang, Y.; Bai, X.; Fu, Y.; Tang, Y.; Jin, C.; Li, Z. Field experiment and numerical simulation for airflow evaluation in a data center with row-based cooling. Energy Build.; 2023; 294, 113231. [DOI: https://dx.doi.org/10.1016/j.enbuild.2023.113231]
155. Singh, N.; Permana, I.; Agharid, A.P.; Wang, F.J. Innovative Retrofits for enhanced thermal performance in data centers using Independent Row-Based cooling systems. Therm. Sci. Eng. Prog.; 2025; 57, 103101. [DOI: https://dx.doi.org/10.1016/j.tsep.2024.103101]
156. Moazamigoodarzi, H.; Gupta, R.; Pal, S.; Tsai, P.J.; Ghosh, S.; Puri, I.K. Modeling temperature distribution and power consumption in IT server enclosures with row-based cooling architectures. Appl. Energy; 2020; 261, 114355. [DOI: https://dx.doi.org/10.1016/j.apenergy.2019.114355]
157. Chu, J.; Huang, X. Research status and development trends of evaporative cooling air-conditioning technology in data centers. Energy Built Environ.; 2023; 4, pp. 86-110. [DOI: https://dx.doi.org/10.1016/j.enbenv.2021.08.004]
158. Kim, M.H.; Ham, S.W.; Park, J.S.; Jeong, J.W. Impact of integrated hot water cooling and desiccant-assisted evaporative cooling systems on energy savings in a data center. Energy; 2014; 78, pp. 384-396. [DOI: https://dx.doi.org/10.1016/j.energy.2014.10.023]
159. Han, Z.; Xue, D.; Wei, H.; Ji, Q.; Sun, X.; Li, X. Study on operation strategy of evaporative cooling composite air conditioning system in data center. Renew. Energy; 2021; 177, pp. 1147-1160. [DOI: https://dx.doi.org/10.1016/j.renene.2021.06.046]
160. Liu, Y.; Yang, X.; Li, J.; Zhao, X. Energy savings of hybrid dew-point evaporative cooler and micro-channel separated heat pipe cooling systems for computer data centers. Energy; 2018; 163, pp. 629-640. [DOI: https://dx.doi.org/10.1016/j.energy.2018.07.172]
161. Han, Z.; Sun, X.; Wei, H.; Ji, Q.; Xue, D. Energy saving analysis of evaporative cooling composite air conditioning system for data centers. Appl. Therm. Eng.; 2021; 186, 116506. [DOI: https://dx.doi.org/10.1016/j.applthermaleng.2020.116506]
162. Li, C.; Mao, R.; Wang, Y.; Zhang, J.; Lan, J.; Zhang, Z. Experimental study on direct evaporative cooling for free cooling of data centers. Energy; 2024; 288, 129889. [DOI: https://dx.doi.org/10.1016/j.energy.2023.129889]
163. Zhang, Y.; Han, H.; Zhang, Y.; Li, J.; Liu, C.; Liu, Y.; Gao, D. Experimental study on the performance of a novel data center air conditioner combining air cooling and evaporative cooling. Int. J. Refrig.; 2025; 170, pp. 98-112. [DOI: https://dx.doi.org/10.1016/j.ijrefrig.2024.11.010]
164. Mao, R.; Wu, H.; Li, C.; Zhang, Z.; Liang, X.; Zhou, J.; Chen, J. Experimental investigation on the application of cold-mist direct evaporative cooling in data centers. Int. J. Therm. Sci.; 2025; 208, 109500. [DOI: https://dx.doi.org/10.1016/j.ijthermalsci.2024.109500]
165. Yan, W.; Cui, X.; Zhao, M.; Meng, X.; Yang, C.; Zhang, Y.; Liu, Y.; Jin, L. Multi-objective optimization of dew point indirect evaporative coolers for data centers. Appl. Therm. Eng.; 2024; 241, 122425. [DOI: https://dx.doi.org/10.1016/j.applthermaleng.2024.122425]
166. Wang, W.; Liang, C.; Zha, F.; Wang, H.; Shi, W.; Cui, Z.; Zhang, K.; Jia, H.; Li, J.; Li, X. Air conditioning system with dual-temperature chilled water for air grading treatment in data centers. Energy Build.; 2023; 290, 113073. [DOI: https://dx.doi.org/10.1016/j.enbuild.2023.113073]
167. Park, B.R.; Choi, Y.J.; Choi, E.J.; Moon, J.W. Adaptive control algorithm with a retraining technique to predict the optimal amount of chilled water in a data center cooling system. J. Build. Eng.; 2022; 50, 104167. [DOI: https://dx.doi.org/10.1016/j.jobe.2022.104167]
168. Chen, L.; Wemhoff, A.P. The sustainability benefits of economization in data centers containing chilled water systems. Resour. Conserv. Recycl.; 2023; 196, 107053. [DOI: https://dx.doi.org/10.1016/j.resconrec.2023.107053]
169. Jiang, W.; Jia, Z.; Feng, S.; Liu, F.; Jin, H. Fine-grained warm water cooling for improving datacenter economy. Proceedings of the 46th International Symposium on Computer Architecture; Phoenix, AZ, USA, 22–26 June 2019; pp. 474-486.
170. Kim, Y.J.; Ha, J.W.; Park, K.S.; Song, Y.H. A study on the energy reduction measures of data centers through chilled water temperature control and water-side economizer. Energies; 2021; 14, 3575. [DOI: https://dx.doi.org/10.3390/en14123575]
171. Siriwardana, J.; Jayasekara, S.; Halgamuge, S.K. Potential of air-side economizers for data center cooling: A case study for key Australian cities. Appl. Energy; 2013; 104, pp. 207-219. [DOI: https://dx.doi.org/10.1016/j.apenergy.2012.10.046]
172. Jin, Y.; Bai, X.; Xu, X.; Mi, R.; Li, Z. Climate zones for the application of water-side economizer in a data center cooling system. Appl. Therm. Eng.; 2024; 250, 123450. [DOI: https://dx.doi.org/10.1016/j.applthermaleng.2024.123450]
173. Ham, S.W.; Kim, M.H.; Choi, B.N.; Jeong, J.W. Energy saving potential of various air-side economizers in a modular data center. Appl. Energy; 2015; 138, pp. 258-275. [DOI: https://dx.doi.org/10.1016/j.apenergy.2014.10.066]
174. Deymi-Dashtebayaz, M.; Namanlo, S.V.; Arabkoohsar, A. Simultaneous use of air-side and water-side economizers with the air source heat pump in a data center for cooling and heating production. Appl. Therm. Eng.; 2019; 161, 114133. [DOI: https://dx.doi.org/10.1016/j.applthermaleng.2019.114133]
175. Cho, J.; Lim, T.; Kim, B.S. Viability of datacenter cooling systems for energy efficiency in temperate or subtropical regions: Case study. Energy Build.; 2012; 55, pp. 189-197. [DOI: https://dx.doi.org/10.1016/j.enbuild.2012.08.012]
176. Cho, K.; Chang, H.; Jung, Y.; Yoon, Y. Economic analysis of data center cooling strategies. Sustain. Cities Soc.; 2017; 31, pp. 234-243. [DOI: https://dx.doi.org/10.1016/j.scs.2017.03.008]
177. Wang, J.; Zhang, Q.; Yoon, S.; Yu, Y. Reliability and availability analysis of a hybrid cooling system with water-side economizer in data center. Build. Environ.; 2019; 148, pp. 405-416. [DOI: https://dx.doi.org/10.1016/j.buildenv.2018.11.021]
178. Durand-Estebe, B.; Le Bot, C.; Mancos, J.N.; Arquis, E. Simulation of a temperature adaptive control strategy for an IWSE economizer in a data center. Appl. Energy; 2014; 134, pp. 45-56. [DOI: https://dx.doi.org/10.1016/j.apenergy.2014.07.072]
179. Kim, J.H.; Shin, D.U.; Kim, H. Data center energy evaluation tool development and analysis of power usage effectiveness with different economizer types in various climate zones. Buildings; 2024; 14, 299. [DOI: https://dx.doi.org/10.3390/buildings14010299]
180. Zou, S.; Pan, Y. Performance of a hybrid thermosyphon cooling system using airside economizers for data center free cooling under different climate conditions. J. Build. Eng.; 2024; 98, 111235. [DOI: https://dx.doi.org/10.1016/j.jobe.2024.111235]
181. Chen, H.; Peng, Y.H.; Wang, Y.L. Thermodynamic analysis of hybrid cooling system integrated with waste heat reusing and peak load shifting for data center. Energy Convers. Manag.; 2019; 183, pp. 427-439. [DOI: https://dx.doi.org/10.1016/j.enconman.2018.12.117]
182. Wang, J.; Zhang, Q.; Yoon, S.; Yu, Y. Impact of uncertainties on the supervisory control performance of a hybrid cooling system in data center. Build. Environ.; 2019; 148, pp. 361-371. [DOI: https://dx.doi.org/10.1016/j.buildenv.2018.11.026]
183. Fouladi, K.; Schaadt, J.; Wemhoff, A.P. A novel approach to the data center hybrid cooling design with containment. Numer. Heat Transf. Part A Appl.; 2017; 71, pp. 477-487. [DOI: https://dx.doi.org/10.1080/10407782.2016.1277932]
184. Zhu, Y.; Zhang, Q.; Zeng, L.; Wang, J.; Zou, S. An advanced control strategy of hybrid cooling system with cold water storage system in data center. Energy; 2024; 291, 130304. [DOI: https://dx.doi.org/10.1016/j.energy.2024.130304]
185. Jahangir, M.H.; Mokhtari, R.; Mousavi, S.A. Performance evaluation and financial analysis of applying hybrid renewable systems in cooling unit of data centers—A case study. Sustain. Energy Technol. Assess.; 2021; 46, 101220. [DOI: https://dx.doi.org/10.1016/j.seta.2021.101220]
186. Wang, J.; Huang, Z.; Yue, C.; Zhang, Q.; Wang, P. Various uncertainties self-correction method for the supervisory control of a hybrid cooling system in data centers. J. Build. Eng.; 2021; 42, 102830. [DOI: https://dx.doi.org/10.1016/j.jobe.2021.102830]
187. Zhou, F.; Shen, C.; Ma, G.; Yan, X. Power usage effectiveness analysis of a liquid-pump-driven hybrid cooling system for data centers in subclimate zones. Sustain. Energy Technol. Assess.; 2022; 52, 102277. [DOI: https://dx.doi.org/10.1016/j.seta.2022.102277]
188. Sbaity, A.A.; Louahlia, H.; Le Masson, S. Performance of a hybrid thermosyphon condenser for cooling a typical data center under various climatic constraints. Appl. Therm. Eng.; 2022; 202, 117786. [DOI: https://dx.doi.org/10.1016/j.applthermaleng.2021.117786]
189. Lamptey, N.B.; Anka, S.K.; Lee, K.H.; Cho, Y.; Choi, J.W.; Choi, J.M. Comparative energy analysis of cooling energy performance between conventional and hybrid air source internet data center cooling system. Energy Build.; 2024; 302, 113759. [DOI: https://dx.doi.org/10.1016/j.enbuild.2023.113759]
190. Zurmuhl, D.P.; Lukawski, M.Z.; Aguirre, G.A.; Law, W.R.; Schnaars, G.P.; Beckers, K.F.; Anderson, C.L.; Tester, J.W. Hybrid geothermal heat pumps for cooling telecommunications data centers. Energy Build.; 2019; 188, pp. 120-128. [DOI: https://dx.doi.org/10.1016/j.enbuild.2019.01.042]
191. Hanwha Vision America. Video Surveillance for Data Centers. 2025; Available online: https://hanwhavisionamerica.com/markets/data-centers/ (accessed on 21 April 2025).
192. Giri, S.; Su, J.; Zajko, G.; Prasad, P. Authentication method to secure cloud data centres using biometric technology. Proceedings of the 2020 5th International Conference on Innovative Technologies in Intelligent Systems and Industrial Applications (CITISIA); Sydney, Australia, 25–27 November 2020; pp. 1-9.
193. Stefani, E.; Ferrari, C. Design and implementation of a multi-modal biometric system for company access control. Algorithms; 2017; 10, 61. [DOI: https://dx.doi.org/10.3390/a10020061]
194. Wang, C.; Schwan, K.; Talwar, V.; Eisenhauer, G.; Hu, L.; Wolf, M. A flexible architecture integrating monitoring and analytics for managing large-scale data centers. Proceedings of the 8th ACM International Conference on Autonomic Computing; Karlsruhe, Germany, 14–18 June 2011; pp. 141-150.
195. Data Center Frontier. Sustainable Lighting: Key Considerations for Green Data Centers. 2023; Available online: https://www.datacenterfrontier.com/voices-of-the-industry/article/11428741/sustainable-lighting-key-considerations-for-green-data-centers (accessed on 21 April 2025).
196. Bakar, N.N.A.; Hassan, M.Y.; Abdullah, H.; Rahman, H.A.; Abdullah, M.P.; Hussin, F.; Bandi, M. Energy efficiency index as an indicator for measuring building energy performance: A review. Renew. Sustain. Energy Rev.; 2015; 44, pp. 1-11. [DOI: https://dx.doi.org/10.1016/j.rser.2014.12.018]
197. Berndt, E.R. Aggregate energy, efficiency and productivity measurement. Annu. Rev. Environ. Resour.; 1978; 3, pp. 225-273. [DOI: https://dx.doi.org/10.1146/annurev.eg.03.110178.001301]
198. Giacone, E.; Mancò, S. Energy efficiency measurement in industrial processes. Energy; 2012; 38, pp. 331-345. [DOI: https://dx.doi.org/10.1016/j.energy.2011.11.054]
199. Aravanis, A.I.; Voulkidis, A.; Salom, J.; Townley, J.; Georgiadou, V.; Oleksiak, A.; Porto, M.R.; Roudet, F.; Zahariadis, T. Metrics for assessing flexibility and sustainability of next generation data centers. Proceedings of the 2015 IEEE Globecom Workshops (GC Wkshps); San Diego, CA, USA, 6–10 December 2015; pp. 1-6.
200. Song, Z.; Zhang, X.; Eriksson, C. Data center energy and cost saving evaluation. Energy Procedia; 2015; 75, pp. 1255-1260. [DOI: https://dx.doi.org/10.1016/j.egypro.2015.07.178]
201. Mittal, S. Power management techniques for data centers: A survey. arXiv; 2014; arXiv: 1404.6681
202. Shaikh, A.; Uddin, M.; Elmagzoub, M.A.; Alghamdi, A. PEMC: Power Efficiency Measurement Calculator to Compute Power Efficiency and CO2 Emissions in Cloud Data Centers. IEEE Access; 2020; 8, pp. 195216-195228. [DOI: https://dx.doi.org/10.1109/ACCESS.2020.3033791]
203. Uddin, M.; Darabidarabkhani, Y.; Shah, A.; Memon, J. Evaluating power efficient algorithms for efficiency and carbon emissions in cloud data centers: A review. Renew. Sustain. Energy Rev.; 2015; 51, pp. 1553-1563. [DOI: https://dx.doi.org/10.1016/j.rser.2015.07.061]
204. Shang, Y.; Li, D.; Zhu, J.; Xu, M. On the network power effectiveness of data center architectures. IEEE Trans. Comput.; 2015; 64, pp. 3237-3248. [DOI: https://dx.doi.org/10.1109/TC.2015.2389808]
205. Belady, C.L.; Malone, C.G. Metrics and an infrastructure model to evaluate data center efficiency. Proceedings of the International Electronic Packaging Technical Conference and Exhibition; Singapore, 10–12 December 2007; Volume 42770, pp. 751-755.
206. Kumar, R.; Khatri, S.K.; Diván, M.J. Efficiency measurement of data centers: An elucidative review. J. Discret. Math. Sci. Cryptogr.; 2020; 23, pp. 221-236. [DOI: https://dx.doi.org/10.1080/09720529.2020.1721886]
207. Wilde, T.; Auweter, A.; Patterson, M.K.; Shoukourian, H.; Huber, H.; Bode, A.; Labrenz, D.; Cavazzoni, C. DWPE, a new data center energy-efficiency metric bridging the gap between infrastructure and workload. Proceedings of the 2014 International Conference on High Performance Computing & Simulation (HPCS); New Orleans, LA, USA, 16–21 November 2014; pp. 893-901.
208. Grishina, A.; Chinnici, M.; Kor, A.L.; Rondeau, E.; Georges, J.P.; De Chiara, D. Data center for smart cities: Energy and sustainability issue. Big Data Platforms and Applications: Case Studies, Methods, Techniques, and Performance Evaluation; Springer: Berlin/Heidelberg, Germany, 2021; pp. 1-36.
209. Grishina, A.; Chinnici, M.; De Chiara, D.; Rondeau, E.; Kor, A.L. Energy-oriented analysis of HPC cluster queues: Emerging metrics for sustainable data center. Proceedings of the Applied Physics, System Science and Computers III: Proceedings of the 3rd International Conference on Applied Physics, System Science and Computers (APSAC2018); Dubrovnik, Croatia, 26–28 September 2018; Springer: Berlin/Heidelberg, Germany, 2019; pp. 286-300.
210. Shiino, T. Standardizing Data Center Energy Efficiency Metrics in Preparation for Global Competition. NRI Papers. 2012; Available online: https://www.nri.com/content/900013140.pdf (accessed on 25 April 2025).
211. Koutitas, G.; Demestichas, P. Challenges for energy efficiency in local and regional data centers. J. Green Eng.; 2010; 1, pp. 1-32.
212. Chilukuri, M.; Dahlan, M.M.; Hwye, C.C. Benchmarking Energy Efficiency in Tropical Data Centres–Metrics and Measurements. Proceedings of the 2018 International Conference and Utility Exhibition on Green Energy for Sustainable Development (ICUE); Phuket City, Thailand, 24–26 October 2018; pp. 1-10.
213. Khargharia, B.; Luo, H.; Al-Nashif, Y.; Hariri, S. Appflow: Autonomic performance-per-watt management of large-scale data centers. Proceedings of the 2010 IEEE/ACM Int’l Conference on Green Computing and Communications & Int’l Conference on Cyber, Physical and Social Computing; Washington, DC, USA, 18–20 December 2010; pp. 103-111.
214. Gandhi, A.; Harchol-Balter, M. How data center size impacts the effectiveness of dynamic power management. Proceedings of the 2011 49th Annual Allerton Conference on Communication, Control, and Computing (Allerton); Monticello, IL, USA, 28–30 September 2011; pp. 1164-1169.
215. Ruiu, P.; Fiandrino, C.; Giaccone, P.; Bianco, A.; Kliazovich, D.; Bouvry, P. On the energy-proportionality of data center networks. IEEE Trans. Sustain. Comput.; 2017; 2, pp. 197-210. [DOI: https://dx.doi.org/10.1109/TSUSC.2017.2711967]
216. Khargharia, B.; Hariri, S.; Yousif, M.S. An adaptive interleaving technique for memory performance-per-watt management. IEEE Trans. Parallel Distrib. Syst.; 2008; 20, pp. 1011-1022. [DOI: https://dx.doi.org/10.1109/TPDS.2008.136]
217. Li, Z.; Yang, Y. RRect: A novel server-centric data center network with high power efficiency and availability. IEEE Trans. Cloud Comput.; 2018; 8, pp. 914-927. [DOI: https://dx.doi.org/10.1109/TCC.2018.2816650]
218. Dalvandi, A.; Gurusamy, M.; Chua, K.C. Application scheduling, placement, and routing for power efficiency in cloud data centers. IEEE Trans. Parallel Distrib. Syst.; 2016; 28, pp. 947-960. [DOI: https://dx.doi.org/10.1109/TPDS.2016.2607743]
219. Jamalzadeh, M.; Behravan, N. An exhaustive framework for better data centers, energy efficiency and greenness by using metrics. Indian J. Comput. Sci. Eng. (IJCSE); 2012; 2, pp. 2231-3850.
220. Beitelmal, A.; Fabris, D. Servers and data centers energy performance metrics. Energy Build.; 2014; 80, pp. 562-569. [DOI: https://dx.doi.org/10.1016/j.enbuild.2014.04.036]
221. Wang, L.; Khan, S.U. Review of performance metrics for green data centers: A taxonomy study. J. Supercomput.; 2013; 63, pp. 639-656. [DOI: https://dx.doi.org/10.1007/s11227-011-0704-3]
222. Nawathe, U.G.; Hassan, M.; Yen, K.C.; Kumar, A.; Ramachandran, A.; Greenhill, D. Implementation of an 8-core, 64-thread, power-efficient SPARC server on a chip. IEEE J. Solid-State Circuits; 2008; 43, pp. 6-20. [DOI: https://dx.doi.org/10.1109/JSSC.2007.910967]
223. Rivoire, S.; Shah, M.A.; Ranganathan, P.; Kozyrakis, C.; Meza, J. Models and metrics to enable energy-efficiency optimizations. Computer; 2007; 40, pp. 39-48. [DOI: https://dx.doi.org/10.1109/MC.2007.436]
224. Anderson, S.F. Improving data center efficiency. Energy Eng.; 2010; 107, pp. 42-63. [DOI: https://dx.doi.org/10.1080/01998595.2010.10121753]
225. Imani, M.; Garcia, R.; Huang, A.; Rosing, T. Cade: Configurable approximate divider for energy efficiency. Proceedings of the 2019 Design, Automation & Test in Europe Conference & Exhibition (DATE); Florence, Italy, 25–29 March 2019; pp. 586-589.
226. da Silva Rocha, É.; GF da Silva, L.; Santos, G.L.; Bezerra, D.; Moreira, A.; Gonçalves, G.; Marquezini, M.V.; Mehta, A.; Wildeman, M.; Kelner, J.
227. Nguyen, T.A.; Min, D.; Choi, E.; Tran, T.D. Reliability and availability evaluation for cloud data center networks using hierarchical models. IEEE Access; 2019; 7, pp. 9273-9313. [DOI: https://dx.doi.org/10.1109/ACCESS.2019.2891282]
228. Sego, L.H.; Marquez, A.; Rawson, A.; Cader, T.; Fox, K.; Gustafson, W.I., Jr.; Mundy, C.J. Implementing the data center energy productivity metric. ACM J. Emerg. Technol. Comput. Syst. (JETC); 2012; 8, pp. 1-22. [DOI: https://dx.doi.org/10.1145/2367736.2367741]
229. Uddin, M.; Rahman, A.A. Energy efficiency and low carbon enabler green IT framework for data centers considering green metrics. Renew. Sustain. Energy Rev.; 2012; 16, pp. 4078-4094. [DOI: https://dx.doi.org/10.1016/j.rser.2012.03.014]
230. Gandhi, A.; Lee, D.; Liu, Z.; Mu, S.; Zadok, E.; Ghose, K.; Gopalan, K.; Liu, Y.D.; Hussain, S.R.; McDaniel, P. Metrics for sustainability in data centers. ACM SIGENERGY Energy Inform. Rev.; 2023; 3, pp. 40-46. [DOI: https://dx.doi.org/10.1145/3630614.3630622]
231. Shally, S.S.; Kumar, S. Measuring energy efficiency of cloud datacenters. Int. J. Recent Technol. Eng.; 2019; 8, pp. 5428-5433. [DOI: https://dx.doi.org/10.35940/ijrte.B3548.098319]
232. Metrics, G.G. Describing Datacenter Power Efficiency. Technical Committee White Paper, The Green Grid. 2007; Available online: https://www.thegreengrid.org/resources/library-and-tools (accessed on 25 April 2025).
233. Schaeppi, B.; Bogner, T.; Schloesser, A.; Stobbe, L.; de Asuncao, M.D. Metrics for energy efficiency assessment in data centers and server rooms. Proceedings of the 2012 Electronics Goes Green 2012+; Berlin, Germany, 9–12 September 2012; pp. 1-6.
234. Herzog, C. Standardization Bodies, Initiatives and their relation to Green IT focused on the Data Centre Side. Proceedings of the Energy Efficiency in Large Scale Distributed Systems: COST IC0804 European Conference, EE-LSDS 2013; Vienna, Austria, 22–24 April 2013; Revised Selected Papers; Springer: Berlin/Heidelberg, Germany, 2013; pp. 289-299.
235. Chen, D.; Henis, E.; Kat, R.I.; Sotnikov, D.; Cappiello, C.; Ferreira, A.M.; Pernici, B.; Vitali, M.; Jiang, T.; Liu, J.
236. Pop, C.B.; Anghel, I.; Cioara, T.; Salomie, I.; Vartic, I. A swarm-inspired data center consolidation methodology. Proceedings of the 2nd International Conference on Web Intelligence, Mining and Semantics; Craiova, Romania, 13–15 June 2012; pp. 1-7.
237. Schödwell, B.; Erek, K.; Zarnekow, R. Data center green performance measurement: State of the art and open research challenges. Proceedings of the Nineteenth Americas Conference on Information Systems; Chicago, IL, USA, 15–17 August 2013.
238. Sisó, L.; Salom, J.; Jarus, M.; Oleksiak, A.; Zilio, T. Energy and heat-aware metrics for data centers: Metrics analysis in the framework of CoolEmAll project. Proceedings of the 2013 International Conference on Cloud and Green Computing; Karlsruhe, Germany, 30 September–2 October 2013; pp. 428-434.
239. Procaccianti, G.; Routsis, A. Energy efficiency and power measurements: An industrial survey. Proceedings of the ICT for Sustainability 2016; Atlantis Press: Dordrecht, The Netherlands, 2016; pp. 69-78.
240. Meisner, D.; Wu, J.; Wenisch, T.F. Bighouse: A simulation infrastructure for data center systems. Proceedings of the 2012 IEEE International Symposium on Performance Analysis of Systems & Software; New Brunswick, NJ, USA, 1–3 April 2012; pp. 35-45.
241. Tian, H.; Wu, D.; He, J.; Xu, Y.; Chen, M. On achieving cost-effective adaptive cloud gaming in geo-distributed data centers. IEEE Trans. Circuits Syst. Video Technol.; 2015; 25, pp. 2064-2077. [DOI: https://dx.doi.org/10.1109/TCSVT.2015.2416563]
242. Zhu, T.; Kozuch, M.A.; Harchol-Balter, M. WorkloadCompactor: Reducing datacenter cost while providing tail latency SLO guarantees. Proceedings of the 2017 Symposium on Cloud Computing; Santa Clara, CA, USA, 25–27 September 2017; pp. 598-610.
243. Metri, G.; Srinivasaraghavan, S.; Shi, W.; Brockmeyer, M. Experimental analysis of application specific energy efficiency of data centers with heterogeneous servers. Proceedings of the 2012 IEEE Fifth International Conference on Cloud Computing; Honolulu, HI, USA, 24–29 June 2012; pp. 786-793.
244. Taheri, J.; Zomaya, A.Y. Energy efficiency metrics for data centers. Energy-Efficient Distributed Computing Systems; John Wiley and Sons: Hoboken, NJ, USA, 2012; pp. 245-269.
245. Lee, H. An Analysis of the Impact of Datacenter Temperature on Energy Efficiency. Ph.D. Thesis; Massachusetts Institute of Technology: Cambridge, MA, USA, 2012.
246. Fiandrino, C.; Kliazovich, D.; Bouvry, P.; Zomaya, A.Y. Performance and energy efficiency metrics for communication systems of cloud computing data centers. IEEE Trans. Cloud Comput.; 2015; 5, pp. 738-750. [DOI: https://dx.doi.org/10.1109/TCC.2015.2424892]
247. Cheng, H.; Liu, B.; Lin, W.; Ma, Z.; Li, K.; Hsu, C.H. A survey of energy-saving technologies in cloud data centers. J. Supercomput.; 2021; 77, pp. 13385-13420. [DOI: https://dx.doi.org/10.1007/s11227-021-03805-5]
248. North, M.T.; Kulkarni, A.; Haley, D. Effects of Datacenter Cooling Subsystems Performance on TUE: Air vs. Liquid vs. Hybrid Cooling. Proceedings of the 2024 23rd IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronic Systems (ITherm); Aurora, CO, USA, 28–31 May 2024; pp. 1-5.
249. Avelar, V.; Azevedo, D.; French, A.; Power, E.N. PUE: A Comprehensive Examination of the Metric; White Paper 49; The Green Grid: Washington, DC, USA, 2012.
250. Zoie, R.C.; Mihaela, R.D.; Alexandru, S. An analysis of the power usage effectiveness metric in data centers. Proceedings of the 2017 5th International Symposium on Electrical and Electronics Engineering (ISEEE); Galaţi, Romania, 20–22 October 2017; pp. 1-6.
251. Fawaz, A.H.; Mohammed, A.F.Y.; Laku, L.I.Y.; Alanazi, R. PUE or GPUE: A carbon-aware metric for data centers. Proceedings of the 2019 21st International Conference on Advanced Communication Technology (ICACT); PyeongChang, Republic of Korea, 17–20 February 2019; pp. 38-41.
252. Li, J.; Jurasz, J.; Li, H.; Tao, W.Q.; Duan, Y.; Yan, J. A new indicator for a fair comparison on the energy performance of data centers. Appl. Energy; 2020; 276, 115497. [DOI: https://dx.doi.org/10.1016/j.apenergy.2020.115497]
253. Abdilla, A.; Borg, S.P.; Licari, J. Relating measured PUE to the cooling strategy and operating conditions through a review of a number of Maltese data centres. Energy Rep.; 2025; 13, pp. 2612-2623. [DOI: https://dx.doi.org/10.1016/j.egyr.2025.02.013]
254. Lei, N.; Ganeshalingam, M.; Masanet, E.; Smith, S.; Shehabi, A. Shedding light on US small and midsize data centers: Exploring insights from the CBECS survey. Energy Build.; 2025; 115734. Available online: https://www.sciencedirect.com/science/article/abs/pii/S0378778825004645 (accessed on 14 May 2025). [DOI: https://dx.doi.org/10.1016/j.enbuild.2025.115734]
255. Huang, H.; Lin, W.; Lin, J.; Li, K. Power Management Optimization for Data Centers: A Power Supply Perspective. IEEE Trans. Sustain. Comput.; 2025; Available online: https://ieeexplore.ieee.org/document/10891660 (accessed on 14 May 2025).
256. Hernandez, L.H.H.; Orozco, M. Measurement of Energy Efficiency Metrics of Data Centers. Case Study: Higher Education. Softw. Eng. Perspect. Intell. Syst.; 2020; 1295, pp. 23-35.
257. Azevedo, D.; Patterson, M.; Pouchet, J.; Tipley, R. Carbon Usage Effectiveness (CUE): A Green Grid Data Center Sustainability Metric; White Paper 32; The Green Grid: Washington, DC, USA, 2010.
258. Google. Efficiency–Google Data Centers. 2024; Available online: https://datacenters.google/efficiency/ (accessed on 14 May 2025).
259. Microsoft. Measuring Energy and Water Efficiency for Microsoft Datacenters. 2024; Available online: https://datacenters.microsoft.com/sustainability/efficiency/ (accessed on 14 May 2025).
260. Rodriguez, J. Dial It In: Data Centers Need New Metric for Energy Efficiency. 2024; Available online: https://blogs.nvidia.com/blog/datacenter-efficiency-metrics-isc/ (accessed on 14 May 2025).
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
Cloud Data Centers (CDCs) are an essential component of the infrastructure powering the digital life of modern society, hosting and processing vast amounts of data and enabling services such as streaming, Artificial Intelligence (AI), and global connectivity. Given this importance, their energy efficiency is a top priority: CDCs consume significant amounts of electricity, adding to operational costs and environmental impact, while efficient CDCs reduce energy waste, lower carbon footprints, and support sustainable growth in digital services. Energy efficiency metrics quantify how effectively a CDC uses its energy for computing rather than for cooling and other overheads, and they guide operators in optimizing resource use, reducing costs, and meeting regulatory and environmental goals. To this end, this paper reviews more than 25 energy efficiency metrics, drawing on more than 250 literature references covering CDCs, their energy-consuming components, and configuration setups. Real-world case studies of corporations that apply these metrics are then presented, and the challenges and limitations of each metric are examined together with associated future research directions. Prioritizing energy efficiency in CDCs, guided by these metrics, is essential for minimizing environmental impact, reducing costs, and ensuring sustainable scalability for the digital economy.
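As a concrete illustration of how such metrics are computed, the following minimal Python sketch evaluates two of the standard Green Grid metrics covered in the survey, PUE [249] and CUE [257], from annual facility measurements. The input figures are hypothetical and chosen only to make the arithmetic visible.

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT energy (ideal = 1.0)."""
    return total_facility_kwh / it_equipment_kwh

def cue(total_co2_kg: float, it_equipment_kwh: float) -> float:
    """Carbon Usage Effectiveness: CO2 emitted for all facility energy / IT energy."""
    return total_co2_kg / it_equipment_kwh

# Hypothetical annual figures for a small facility.
total_kwh = 1_500_000.0  # IT load plus cooling, power delivery, lighting
it_kwh = 1_000_000.0     # IT equipment only
co2_kg = 600_000.0       # emissions attributable to the energy consumed

print(f"PUE = {pue(total_kwh, it_kwh):.2f}")  # 1.50
print(f"CUE = {cue(co2_kg, it_kwh):.2f} kgCO2/kWh")  # 0.60
```

A PUE of 1.50 means that for every kWh delivered to IT equipment, another 0.5 kWh goes to cooling and other overheads; CUE folds the facility's carbon intensity into the same denominator.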
Details
; Sorouri Hoda 2
; Rahimi Afshin 1
; Oshnoei Arman 2
1 Mechanical, Automotive and Materials Engineering Department, University of Windsor, Windsor, ON N9B 3P4, Canada; [email protected]
2 Department of Energy, Aalborg University, 9220 Aalborg, Denmark; [email protected]