
Energy Efficiency Is a Hot Problem for Big Tech’s Data Centers

Bloomberg logo Bloomberg 12/13/2019 Nathaniel Bullard

(Bloomberg Opinion) -- Electrons aren’t much of a growth industry in the U.S., the second-largest electricity market in the world after China. Electricity sales rose last year, after nearly a decade of being flat or falling slightly, but are still only up 3% since 2007. There is one market, though, where demand for electrons is booming: data centers. That power-hungry growth market is also where some of the world’s biggest, best-capitalized and most innovative companies are bringing their might to bear.

Before getting into that innovation, though, there’s a crucial ratio to consider: power usage effectiveness, or PUE. PUE is a measure of a data center’s energy efficiency — the ratio of a facility’s total energy use to the energy consumed specifically by its information technology equipment. The theoretical ideal PUE is 1, where 100% of electricity consumption goes toward useful computation. All the other stuff — power transformers, uninterruptible power supplies, lighting and especially cooling — uses power but doesn’t compute, and as a result raises a data center’s PUE.

A 2016 Lawrence Berkeley National Laboratory study listed the PUE, at the time, for facilities at various scales: a server sitting in a room, a server in a closet, a “hyperscale” extremely large data center. The smaller the facility, the higher its ratio and the lower its efficiency. For the smallest server spaces, the PUE is above 2, meaning that less than half of their energy use goes to computing. For hyperscale facilities, the PUE is 1.2 — meaning that most of the energy is going to computation.

[Chart: Where Bigger Is Better © Bloomberg]

Here is that same data, expressed a bit differently, to show a server or data center’s power consumption by use. You can see that the smallest applications used more power for cooling than for computation. But at hyperscale data centers, more than 80% of power consumption went to IT (servers, networking and storage), and only 13% went to cooling.

[Chart: Chill Out, Little Guy © Bloomberg]

But now, with so much computation happening in the cloud (and, in reality, in hyperscale data centers), it’s worth finding out what today’s PUEs are and just how close they can get to that theoretical ideal of 1.0.

A recent Uptime Institute survey of 1,600 data center owners and operators found that 2019’s average PUE is 1.67, and that “improvements in data center facility energy efficiency have flattened out and even deteriorated slightly in the past two years.” That PUE means that 60% of data center electricity consumption is going to IT, and the rest to cooling, lighting and so on.
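The arithmetic behind these figures is straightforward: PUE is total facility energy over IT energy, so the share of power reaching IT equipment is simply 1/PUE. A minimal sketch, using illustrative numbers rather than data from any real facility:

```python
def pue(total_kwh: float, it_kwh: float) -> float:
    """Power usage effectiveness: total facility energy over IT energy."""
    return total_kwh / it_kwh

def it_share(pue_value: float) -> float:
    """Fraction of total consumption that actually reaches IT equipment."""
    return 1.0 / pue_value

# A facility drawing 1,670 kWh in total for 1,000 kWh of IT load
# matches the 2019 survey average of 1.67.
print(round(pue(1670.0, 1000.0), 2))   # 1.67
print(round(it_share(1.67) * 100))     # 60 — i.e., ~60% of power goes to IT
```

At the hyperscale figure of 1.2, the same arithmetic gives roughly 83% of power going to IT, consistent with the “more than 80%” cited above.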

However, some operators are doing much better than that. Google says that its data centers have a PUE of 1.1, with some centers going as low as 1.06. There’s some seasonality in play, particularly because most of Google’s data centers are in the Northern Hemisphere; its Singapore data center has the highest PUE and is the least efficient of its sites. That’s not surprising given Singapore is hot and humid year-round.

[Chart: Trendy and Seasonal © Bloomberg]

One key way to lower the cooling demand for a data center is to cool only to the temperature at which the machines are comfortable, not to where humans are most comfortable. For Google, that’s a temperature of 80 degrees Fahrenheit.

There’s another approach, and one that draws on computation itself: machine learning. Google unleashed its DeepMind machine learning platform on the problem of data center energy efficiency three years ago; last year, it effectively turned over control to its own artificial intelligence:

In 2016, we jointly developed an AI-powered recommendation system to improve the energy efficiency of Google’s already highly-optimised data centres. Our thinking was simple: even minor improvements would provide significant energy savings and reduce CO2 emissions to help combat climate change.

Now we’re taking this system to the next level: instead of human-implemented recommendations, our AI system is directly controlling data centre cooling, while remaining under the expert supervision of our data centre operators. This first-of-its-kind cloud-based control system is now safely delivering energy savings in multiple Google data centres.

It seems likely that more of that sort of approach will be adopted by Amazon Web Services, Microsoft, IBM and other major cloud computing firms.

Even with efficiency gains, data center electricity demand is voracious and growing; that growth has a number of implications for the power grid and for power utilities. The first is that many of these major consumers of electricity are also contracting for wind and solar power to meet their demand.

The second is that, with many data centers clustering in locations such as Northern Virginia, data center loads are becoming a meaningful share of utility peak demand in a given service territory. Recent BloombergNEF research finds that data centers could make up 15% of Dominion Energy Inc.’s summer peak demand by 2024.

Given that data center operators have every incentive to economize on electricity, utilities need to compete to provide service. Preferential — and confidential — contracts for power supply are one way to do that, with the result that other ratepayers bear the cost, as Bloomberg News reported last year. Gains in efficiency don’t mean that data center demand for electricity is going down. Their scale and growth are a testament to their power usage effectiveness. Their preferential contracts for electricity, on the other hand, feel like a testament to their effective usage of a different kind of power: buying power.

Weekend reading

Chevron Corp.’s $10 billion to $11 billion impairment charge, related mostly to its Appalachian natural gas assets, “ushers in oil’s era of the sober-major.” Chevron has also called time on the Kitimat liquefied natural gas export plant in British Columbia, writing off years of development while also planning to sell its 50% stake.

Kawasaki Heavy Industries Ltd. has launched the world’s first liquefied hydrogen carrier.

Tesla Inc. has lost its third general counsel in the course of a year.

Vancouver-based Harbour Air Ltd.’s electric seaplane has taken flight. I looked at the environmental implications of electrifying aviation last month.

Stanford University has released its 2019 Artificial Intelligence Index Report.

Venture capital fund Piva, funded by $250 million from Malaysia’s Petronas, has launched with a focus on energy and industry.

Bloomberg Media will acquire CityLab, a news site covering “urban innovation and the future of cities.”

Nomura Holdings Inc. will acquire sustainable technology and infrastructure boutique investment bank Greentech Capital Advisors.

Hiro Mizuno, the chief investment officer of Japan’s $1.6 trillion Government Pension Investment Fund, has “embraced ESG principles so enthusiastically” that the fund will not award new mandates to managers without environmental, social and governance credentials.

Considering the legacy of Xie Zhenhua, a key architect of the Paris Agreement and China’s climate negotiator for more than a decade.

Greta Thunberg is Time Magazine’s Person of the Year.

To contact the author of this story: Nathaniel Bullard at nbullard@bloomberg.net

To contact the editor responsible for this story: Brooke Sample at bsample1@bloomberg.net

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Nathaniel Bullard is a BloombergNEF energy analyst, covering technology and business model innovation and system-wide resource transitions.

For more articles like this, please visit us at bloomberg.com/opinion

©2019 Bloomberg L.P.
