DeepMind co-founder outlines how the firm slashed cooling costs at Google’s datacentres and plans for Artificial Intelligence as a service en route to tackling climate change and global poverty.
Google’s datacentre engineers were deeply sceptical that an AI that learned to ‘think’ by playing Atari games could cut datacentre cooling costs. But they were wrong: it cut the energy used for cooling by 40% and improved overall PUE (power usage effectiveness) by 15%.
Datacentres consume around 3% of the world’s power, and the firm aims to sell its AI learning as a service to more of them – and to other power-intensive business sectors. But DeepMind co-founder Mustafa Suleyman is ultimately training the mind he helped to build on bigger problems.
“What is the state of the global environment? 800 million people have no access to clean water – and that is set to double over the next decade. 800 million people are malnourished – and yet a third of the food we produce is wasted,” he told The Curve’s XEnergy conference.
“We would need 3.1 planet Earths to sustain the global population at UK consumption levels,” he said. “So there is a lot at stake.”
All you can eat data
Since DeepMind was acquired by Google three years ago, it has all the data it can eat. But the feast was preceded by a famine, said Suleyman.
“Pre-Google, getting access to data was difficult, so we trained [the mind] with Atari games,” he said. “We created a small world. All we passed to the algorithm was raw pixels that describe what is happening in any moment in any frame in the games.”
They built everything from scratch, said Suleyman, with the goal simply to maximise the score. “We trained the algorithm to play games, but to learn new knowledge, not our knowledge, limited as it is.”
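The "learn from the score alone" setup can be sketched with a toy stand-in. The environment below (a one-dimensional line instead of pixel frames) and all its numbers are invented for illustration, and tabular Q-learning stands in for DeepMind's deep Q-network; only the core idea – an agent given nothing but raw state and a score, learning which actions raise that score – is taken from the talk.

```python
import random

# Toy stand-in for the Atari setup: the agent sees only a raw state
# (a position on a line, instead of raw pixels) and a score, and must
# discover which actions raise the score. Environment and rewards are
# invented; DeepMind's DQN used a deep network over frames, not a table.

N_STATES, ACTIONS = 10, (-1, +1)   # move left / move right
GOAL = N_STATES - 1                # reaching the right end scores a point

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.3

random.seed(0)
for episode in range(500):
    s = random.randrange(N_STATES - 1)         # random start state
    for _ in range(50):
        # Epsilon-greedy: mostly exploit the current estimate, sometimes explore.
        a = random.choice(ACTIONS) if random.random() < eps \
            else max(ACTIONS, key=lambda a: Q[(s, a)])
        nxt, r = step(s, a)
        # Q-learning update: nudge the estimate toward the reward plus
        # the discounted value of the best next action.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(nxt, b)] for b in ACTIONS) - Q[(s, a)])
        s = nxt
        if r:                                  # episode ends on scoring
            break

# After training, the greedy policy heads toward the goal from every state,
# even though the agent was never told where the goal was.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```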
The algorithm, said Suleyman, got pretty good. In the Activision 1980 classic Boxing, “it worked out a clever trick where it could pin the opponent in a corner – and from that position there is no way out.”
Post-Google, feeding on data, the algorithms became more powerful and took on increasingly complex games. Earlier this year, the DeepMind AlphaGo programme beat Lee Sedol, 18-time world champion of the ancient Chinese game of Go, four games to one in a televised event watched by 250 million people.
The game, in which the aim is to surround more territory than one’s opponent, has more possible configurations than the estimated number of atoms in the known universe, according to Suleyman.
Now DeepMind wants to apply its artificial intelligence to bigger challenges. Climate change, with carbon emissions driven largely by energy consumption, is pretty big.
Datacentres: the heat is on
Power demand from datacentres is expected to triple over the next decade and within the Google fleet, that consumption is “non-trivial”, Suleyman noted. “So we were able to create a model that reduces the energy – and cost – required to cool Google’s datacentres by 40%.”
The engineers, he said, “were very cynical about whether we could do that.” But DeepMind did, improving overall PUE by around 15%. Now the firm aims to launch its optimisation engine as a service platform for other datacentre operators and for power-intensive sectors more broadly.
The overall objective was to minimise PUE – the ratio of total facility power to the power reaching the IT equipment, where lower is better – by removing the heat from the incoming compute load as efficiently as possible while respecting known temperature and safety constraints.
Key to enabling new insight was “data, data, data”, said Suleyman, in two key areas: state data, such as sensor and meter data that describes the physical behaviour of the utility; and action data, such as “how many cooling towers are turned on, how many chillers are active at any given moment, what are the set points of various pressure and temperature valves, flowrates and so forth”.
That threw up around 1,200 different state variables, and for each of those variables there were about 20 possible actions, said Suleyman. These were aggregated into around 120 state representations, combined with a series of actions – both continuous and discrete – from which the system would suggest actions to optimise PUE within safe operating constraints.
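The state/action framing can be sketched in code. The dimensions below follow the talk (roughly 1,200 raw sensor variables reduced to about 120 state features); the aggregation method, variable names, and example action values are all assumptions, since the talk does not specify them.

```python
import numpy as np

# Hypothetical sketch of the state/action framing. Dimensions follow
# the talk (~1,200 raw sensor variables, ~120 state features); the
# aggregation (here, averaging groups of related sensors) is assumed.

rng = np.random.default_rng(0)

# State data: sensor and meter readings describing the physical plant.
raw_sensors = rng.normal(size=1200)

# Reduce 1,200 raw readings to 120 state features (10 sensors per group).
state = raw_sensors.reshape(120, 10).mean(axis=1)

# Action data: discrete choices mixed with continuous set points.
action = {
    "chillers_on": 3,                      # discrete
    "cooling_towers_on": 5,                # discrete
    "chilled_water_setpoint_c": 18.5,      # continuous
    "pump_flow_rate_lps": 42.0,            # continuous
}

print(state.shape)   # (120,)
```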
“Essentially [it is] a very general framework to solve datacentre prediction,” said Suleyman. “There’s a bunch of state inputs, a bunch of actions, and just like we did with Atari and AlphaGo, we are learning to correlate state with rewarding behaviour.”
The aim was also to maximise long-term reward over short-term gains, and for the system to estimate its confidence that the suggested actions would deliver those rewards.
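One common way to attach confidence to a suggested action – assumed here for illustration, not confirmed as DeepMind's method – is an ensemble: several models score the candidate action, the spread of their predictions is treated as uncertainty, and the action is only applied when the ensemble agrees and the action sits within safety constraints.

```python
import statistics

# Hedged sketch of confidence-gated suggestions. The ensemble idea,
# the safety bounds, and the threshold are all assumptions.

SAFE_SETPOINT_RANGE = (15.0, 27.0)   # assumed safe operating bounds, deg C

def suggest(candidate_setpoint_c, ensemble_predictions, max_std=0.5):
    """Apply an action only if it is in bounds and the ensemble of
    predicted PUE savings agrees (low standard deviation)."""
    mean = statistics.mean(ensemble_predictions)
    std = statistics.stdev(ensemble_predictions)
    in_bounds = SAFE_SETPOINT_RANGE[0] <= candidate_setpoint_c <= SAFE_SETPOINT_RANGE[1]
    if in_bounds and std <= max_std:
        return {"apply": True, "expected_pue_saving": mean}
    return {"apply": False,
            "reason": "out of bounds" if not in_bounds else "low confidence"}

print(suggest(18.5, [0.04, 0.05, 0.045]))        # models agree, in bounds
print(suggest(18.5, [0.0, 0.5, -0.5, 1.0]))      # models disagree: hold
```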
Insights gleaned from that approach defied conventional wisdom in three key areas, according to Suleyman:
“The first is that more cooling equipment, not less, brought to bear to run the system turns out to be much more valuable. So if you spread out load really thinly over lots of bits of kit and run them all at a lower level of capacity … you can learn really interesting linear versus exponential power efficiency curves that were very surprising and unintuitive to the human operators that had been running the system for some time.
“Second: Higher flow isn’t always better. A lot of the engineers believed that they should be concentrating flow through the chiller a great deal, but if we put less flow through the chiller it turns out that … the global energy consumed across the system was actually much more efficient.
“Finally, the ability to shift loads to different parts of the system, given the type of incoming compute demand and the temperature, actually allowed a much more flexible and fast adaptive response to the kind of conditions that the data centre team were seeing at any given moment,” said Suleyman. “And that turns out to be very valuable.”
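The first insight – spreading load thinly across more kit – is consistent with the standard fan and pump affinity laws, under which power scales roughly with the cube of speed. This is offered as a plausible mechanism for the nonlinear efficiency curves Suleyman describes, not as Google's confirmed model; the rated power below is invented.

```python
# Affinity-law approximation: fan power scales roughly with speed^3.
# A plausible mechanism for "more kit at lower capacity wins", not
# a confirmed model of Google's plant; rated_kw is invented.

def fan_power(fraction_of_full_speed, rated_kw=10.0):
    return rated_kw * fraction_of_full_speed ** 3

# Delivering the same total airflow two ways:
one_fan_full = fan_power(1.0)              # 10.0 kW
four_fans_quarter = 4 * fan_power(0.25)    # 4 * 0.15625 = 0.625 kW
print(one_fan_full, four_fans_quarter)
```

Real equipment has fixed overheads (motors, controls) that blunt the cube-law gain, which is one reason the true curves had to be learned from data rather than assumed.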
The methods DeepMind used for Google’s datacentres are “inherently general, large-scale optimisation systems” that will work “reasonably effectively in a wide range of environments” given enough appropriate data, said Suleyman.
“So we are really starting to look at what this might look like as a service we can bring to the market outside of Google.”