Data centers are a favorite target of environmental activists because of their high energy requirements. The U.S. Department of Energy estimates that data centers consume 10 to 50 times as much energy per square foot as a typical commercial office building and account for about 2% of total U.S. electricity use.

While there’s no question that data centers are heavy users of electricity, operators have made tremendous efficiency strides in recent years. A 2016 U.S. Department of Energy report found that data center electricity consumption increased just 4% from 2010 through 2014, compared with 90% from 2000 to 2005. Total data center electricity consumption grew just 6% between 2010 and 2018 even as the number of physical servers grew 30%, network traffic rose 11-fold, and storage capacity increased 26-fold, according to Data Center Frontier.

Sustaining that momentum will be more difficult, however, with demand for data center infrastructure and services projected to grow more than 18% annually through 2025, according to Technavio.

Global data center market. Source: Technavio via prnewswire.com

Hyperscale operators have led the way in demonstrating innovative approaches to energy efficiency, in particular through the use of renewable energy sources, the most effective way of curbing the consumption of expensive fossil fuels. For example, Amazon Web Services has set a goal of powering its operations with 100% renewable energy by 2025, and Google says it intends to operate entirely on carbon-free energy by 2030.

However, operators of older data centers typically lack the resources and flexibility of hyperscale cloud providers to achieve the economies of scale that make hyperscale data centers 98% more efficient than their on-premises counterparts, according to Microsoft. Data center owners also must contend with the need to balance availability with efficiency while wringing as much life as possible from their existing investments.

Nevertheless, there are measures a data center operator can take to improve efficiency, most of which don’t require significant investment. Here are nine improvements nearly everyone can make.

1. Know how much power you use and where

Only 70% of data center operators monitor power usage effectiveness (PUE), an important gauge of efficiency, according to Uptime Institute’s 2021 Global Data Center Survey. PUE is the ratio of power used by the entire data center to that used by IT equipment. It’s a useful way to discover how much power is being consumed by overhead services such as cooling and lighting, which are the greatest areas of potential savings. Calculating PUE also provides a baseline for measuring the impact of your efficiency improvements.
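As a quick illustration of the ratio, the sketch below computes PUE from two meter readings; the kilowatt figures are hypothetical.

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness = total facility power / IT equipment power."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical meter readings: 1,800 kW at the utility feed, 1,000 kW at the IT load.
print(round(pue(1800.0, 1000.0), 2))  # 1.8 -- the other 800 kW goes to cooling, lighting and other overhead
```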

Power consumption and PUE. Source: uptimeinstitute.com

2. Consolidate and clean up workloads

Sixty percent of data center power is used by servers, so optimizing workloads to maximize utilization can have an immediate and substantial impact. Idle servers or servers running at low utilization rates can consume about half the power of their fully loaded counterparts. Eliminate unnecessary workloads or shift them to off-peak hours when power costs are lower. Consolidate and virtualize workloads on as few servers as possible, with a target of between 60% and 75% server utilization. Then shut off servers that are no longer needed or replace them with newer, more efficient models.
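As a rough sketch of the consolidation math, the snippet below estimates how many hosts a set of lightly loaded servers could collapse onto at a 60% to 75% utilization target; the utilization figures are hypothetical.

```python
import math

# Hypothetical fleet: average CPU utilization of each physical server, in percent.
current_utilization = [12, 8, 22, 15, 30, 9, 18, 25, 11, 14]

TARGET_UTILIZATION = 0.70  # aim for the middle of the 60-75% range

# Total work expressed in "fully loaded server" equivalents.
total_load = sum(u / 100 for u in current_utilization)

hosts_needed = math.ceil(total_load / TARGET_UTILIZATION)
print(f"{len(current_utilization)} servers -> {hosts_needed} consolidated hosts")
# 10 servers -> 3 consolidated hosts; the rest can be powered off or retired.
```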

3. Optimize temperature and humidity

Both are important not only to energy cost reduction but also to equipment reliability. Poor air circulation can cause overheating and component failures. Humidity that’s too low can result in electrostatic discharges that are perilous to electronic equipment. Conversely, humidity levels that are too high can cause moisture to build up and degrade performance while increasing the risk of failure.
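As a simple sketch of the monitoring side, the check below flags sensor readings that drift outside an assumed operating window, roughly in line with commonly cited guidance of about 18–27°C and 40–60% relative humidity; the thresholds and readings are illustrative, not a vendor specification.

```python
# Assumed operating window -- adjust to your equipment vendors' specifications.
TEMP_RANGE_C = (18.0, 27.0)
HUMIDITY_RANGE_PCT = (40.0, 60.0)

def check_reading(sensor: str, temp_c: float, rh_pct: float) -> list[str]:
    """Return a list of alerts for a single sensor reading."""
    alerts = []
    if not TEMP_RANGE_C[0] <= temp_c <= TEMP_RANGE_C[1]:
        alerts.append(f"{sensor}: temperature {temp_c}°C outside {TEMP_RANGE_C}")
    if not HUMIDITY_RANGE_PCT[0] <= rh_pct <= HUMIDITY_RANGE_PCT[1]:
        alerts.append(f"{sensor}: humidity {rh_pct}% outside {HUMIDITY_RANGE_PCT}")
    return alerts

# Hypothetical reading from a rack-inlet sensor.
for alert in check_reading("rack-07-inlet", 29.5, 35.0):
    print(alert)
```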

4. Organize servers into hot and cool zones

Most data centers have numerous hot and cool zones depending on the power needs of the equipment. By grouping servers according to the heat they generate, data center operators can target cooling more efficiently. Hot and cold aisle containment is a technique that arranges server racks in alternating rows so that air intakes face shared cold aisles and exhausts face shared hot aisles. Chilled air, supplied by precision air-conditioning units and/or drawn from outside, enters the servers at the front, and heated exhaust is released from the rear into the designated hot aisles, where it is either channeled outdoors or returned to air-conditioning units at the end of the aisle. The cooled air is then circulated back into the cold aisle.

Alternatively, water chillers may be used for high-temperature scenarios such as servers using multiple graphics processing units to run machine learning models. In either case, the objective is to target energy-intensive cooling resources to the servers that need it most.
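A minimal sketch of the grouping idea: sort servers by measured power draw (a proxy for heat output) and split them into a hot zone that gets intensive cooling and a cool zone that does not. The server names, wattages and threshold below are hypothetical.

```python
# Hypothetical per-server power draw in watts, used as a proxy for heat output.
servers = {
    "gpu-train-01": 2100, "gpu-train-02": 1950, "db-01": 650,
    "web-01": 280, "web-02": 310, "backup-01": 220,
}

HOT_ZONE_THRESHOLD_W = 1000  # assumed cutoff for placement near high-capacity cooling

hot_zone = {name: w for name, w in servers.items() if w >= HOT_ZONE_THRESHOLD_W}
cool_zone = {name: w for name, w in servers.items() if w < HOT_ZONE_THRESHOLD_W}

print("Hot zone (targeted cooling):", sorted(hot_zone))
print("Cool zone:", sorted(cool_zone))
```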

5. Use air-side economization

Many data centers in temperate climate zones can make greater use of outside air for cooling by using air economizers. These integrate outside air into the central air-handling systems to reduce cooling needs and vent exhaust air out of the building. Gartner says economizers can provide between 40% and 90% of cooling throughout most of North America, while the U.S. Environmental Protection Agency cites one Colorado facility that used mountain air for cooling 80% of the time. Filter and humidity controls are critical given the unpredictable nature of the air source.
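The control logic behind an economizer boils down to deciding whether outside air is usable at any given moment. Below is a minimal sketch of that check with purely illustrative thresholds; real air-handler controllers also weigh enthalpy, dew point and filtration.

```python
def use_outside_air(outdoor_temp_c: float, outdoor_rh_pct: float,
                    supply_setpoint_c: float = 24.0) -> bool:
    """Return True when outdoor air is cool and dry enough for free cooling.

    Thresholds are illustrative assumptions, not an engineering standard.
    """
    cool_enough = outdoor_temp_c <= supply_setpoint_c - 2.0  # assumed margin below setpoint
    dry_enough = 20.0 <= outdoor_rh_pct <= 80.0              # assumed acceptable humidity band
    return cool_enough and dry_enough

print(use_outside_air(15.0, 55.0))  # True  -- free cooling available
print(use_outside_air(30.0, 45.0))  # False -- fall back to mechanical cooling
```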

Air-side free cooling map. Source: energystar.gov

6. Build to suit wherever possible

When building new data centers or retrofitting existing ones, design with energy needs in mind. Modular designs enable the data center to be broken down into zones that can each be cooled selectively. This lets operators customize lease terms and power cost models to gain a predictable view of energy usage. Where build-to-suit isn’t an option, look for corners of the air-cooled facility that can be blocked off, or ceilings that can be lowered, to avoid cooling dead space.

7. Maximize core density

Microprocessors consume half or more of the energy used by a typical server. Cores are essentially discrete microprocessors that can be dedicated to different tasks or clustered for virtualized workloads. Adding incremental cores requires much less additional power than bringing new CPUs online. High-end x86-compatible CPUs now have up to 96 cores, and 128-core models are around the corner. This makes high-density CPUs not only cost-efficient but a good investment in energy reduction.
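To put the point in rough numbers, the comparison below uses purely hypothetical TDP figures to show why packing more cores into a socket tends to cost less power per core than adding sockets; it is an illustration, not vendor data.

```python
# Illustrative comparison: power per core when scaling core count versus adding sockets.
# The TDP figures below are hypothetical placeholders, not published specifications.
scenarios = {
    "2 x 32-core CPUs": {"cores": 64, "tdp_watts": 2 * 280},
    "1 x 96-core CPU":  {"cores": 96, "tdp_watts": 360},
}

for name, s in scenarios.items():
    watts_per_core = s["tdp_watts"] / s["cores"]
    print(f"{name}: {watts_per_core:.1f} W per core")
# Higher core density tends to deliver more compute per watt of CPU power.
```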

8. Use the cloud

For infrequent workloads such as monthly reports, it may make more sense to shift the task to a dedicated cloud instance that can be spun up only when needed and shut down when the task is finished. It may also make sense to site new workloads in the cloud rather than taking on the cost of provisioning and cooling servers locally. All cloud providers offer calculators that customers can use to forecast their costs. There are also numerous calculators online that compare fully loaded costs of computing in the cloud versus the data center.
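As a back-of-the-envelope sketch, the comparison below weighs the cost of running an infrequent job on an on-demand cloud instance against keeping a server powered for it on premises; every rate in it is a placeholder to be replaced with your own quotes and tariffs.

```python
# All rates below are placeholder assumptions -- substitute your own figures.
CLOUD_RATE_PER_HOUR = 1.50       # on-demand instance rate, USD
JOB_HOURS_PER_MONTH = 6          # e.g. a monthly reporting run

ON_PREM_SERVER_WATTS = 400       # draw of a dedicated on-prem server
PUE = 1.8                        # facility overhead multiplier
ELECTRICITY_RATE_KWH = 0.12      # USD per kWh
HOURS_PER_MONTH = 730

cloud_cost = CLOUD_RATE_PER_HOUR * JOB_HOURS_PER_MONTH
on_prem_energy_cost = (ON_PREM_SERVER_WATTS / 1000) * PUE * ELECTRICITY_RATE_KWH * HOURS_PER_MONTH

print(f"Cloud (pay per use):              ${cloud_cost:.2f}/month")
print(f"On-prem (always on, energy only): ${on_prem_energy_cost:.2f}/month")
# Hardware, licensing and support costs on premises would widen the gap further.
```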

9. Use DCIM

Data center infrastructure management (DCIM) is a relatively new class of software that collects data from equipment and environmental sensors to aid in capacity planning while reducing the risk of failures. The software can calculate PUE automatically and identify such wasteful factors as underutilized servers and ghost servers so operators can make more informed and confident decisions about how to achieve savings.
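DCIM suites perform this kind of analysis out of the box; as a hand-rolled sketch of the idea, the snippet below flags ghost-server candidates whose average CPU utilization over a monitoring window stays near zero. The server names, data and threshold are hypothetical.

```python
# Hypothetical 30-day average CPU utilization per server, in percent.
avg_utilization = {
    "app-01": 42.0, "app-02": 37.5, "legacy-crm": 0.4,
    "test-bench-03": 1.1, "db-01": 55.0,
}

GHOST_THRESHOLD_PCT = 2.0  # assumed cutoff for "doing no useful work"

ghost_candidates = [name for name, util in avg_utilization.items()
                    if util < GHOST_THRESHOLD_PCT]

print("Candidates for decommissioning:", ghost_candidates)
# ['legacy-crm', 'test-bench-03'] -- verify ownership before powering anything off.
```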

Outside of air-side economization and build-to-suit, none of these tactics requires new equipment or expensive retrofits. Data center operators who have the time, skills and determination can find significant cost savings. Given that conventional data centers typically have PUE ratings of about 2.0 compared to as little as 1.1 for some hyperscale facilities, there’s plenty of room for improvement.

Interested in learning more about how Hypertec can help reduce your data center TCO? Check out our immersion cooling solutions.

