Peter Westwood, data centre director at SPIE UK, warns that data centres will need to tackle energy efficiency and looks at some of the strategies available…
As society embraces digital transformation, both in our personal and working lives, the knock-on effect has led to staggering growth in the data centre market. While this is all very positive, there is one unfortunate issue that cannot be avoided: the huge increase in the amount of energy needed to power these facilities to ensure they remain operational.
By 2020, global data centre energy use is forecast to reach about 5% of all energy consumption, and could be well over 10% by 2050. This could be a ticking time bomb for climate change.
Fortunately, data centre organisations have recognised this issue and new solutions are being developed to reduce energy use. Operators have a number of options to combat the challenges of energy consumption, all of which require a full understanding of their facilities' operational characteristics.
For these initiatives to be a success, engineering expertise is vital in order to assimilate the right information to inform design improvements and drive efficiencies through new technology.
So, what are these new energy reduction solutions?
Firstly, edge computing has a major role to play. Progressing this technology, alongside a more decentralised approach to data centres spanning cellular networks, extended campus networks, data centre networks and the cloud, will be necessary to meet the latest digital business infrastructure needs. As the sheer volume of data increases, and the speed at which we want to access it accelerates, streaming all of this information to a central data centre or cloud for processing will become increasingly inadequate.
Decentralisation is key
Right now, data centre businesses are endeavouring to decentralise compute power and position it as close as possible to where the data is actually generated. This means that micro-data centres, branch locations and smaller hubs will need to be set up in order to process the data. Well-designed edge ecosystem architectures will deliver dramatically improved resiliency and energy efficiency.
The convergence of systems is another tactic data centre companies can use, merging the four essential features of a data centre – compute, storage, networking and server virtualisation – into a unified package. The benefit of a hyper-converged infrastructure (HCI) is that it allows closer integration of a greater number of elements through software.
The trend of converged technologies has been around for a while now, and with the adoption of these types of architectures, organisations have the ability to eradicate the separation of resources, challenges around administration and problems associated with scaling a facility.
In a similar way to their all-flash solution counterparts, converged and hyper-converged infrastructure (CI and HCI) are constructed to radically simplify the design of data centres and also help to improve the agility of an organisation. It is important to mention the integrations with next-generation services such as scalable prefabricated solutions and the facilitation of cloud expansion. Both of these infrastructures can be rapidly deployed and have been designed to make the operation of data centres much easier and quicker to deliver.
Optimisation is another important trend for data centre companies. Organisations can optimise their existing systems through engineering studies, performance testing and airflow management, all of which help data centre operators achieve higher efficiency and provide a platform for continuous improvement. One strategy is to build modular containment around the main systems to improve flexibility and resilience. Well-planned airflow management can also greatly assist operational resilience.
Data centre management tools such as DCIM software have already played a crucial role, and a new trend is emerging: integrating these management platforms with machine learning, virtual systems and the cloud so that data centre functionality can be dramatically improved.
New cooling technologies
The most important means of energy reduction are the latest cooling solutions, and momentum has been growing behind new cooling and power strategies. In the past, free or natural air cooling has proved a useful way to improve efficiency, rather than operating air-conditioning systems – provided the data centre is located in an area where a cooler climate serves this need, such as the Nordic regions. Furthermore, the technology behind racks and servers has improved to the point that these machines can operate at inlet temperatures as high as 27°C; above that, however, supplementary mechanical cooling is still required.
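The switch-over between free-air and mechanical cooling described above can be sketched as a simple threshold check. This is an illustrative simplification, not an operator's actual control logic; the 27°C inlet limit comes from the text, while the 5°C approach margin (the temperature rise between outside air and server inlet) is an assumed figure.

```python
def cooling_mode(outside_temp_c: float,
                 inlet_limit_c: float = 27.0,
                 approach_c: float = 5.0) -> str:
    """Choose free-air cooling when outside air, plus an assumed approach
    margin, can keep the server inlet below its limit; otherwise fall back
    to mechanical (compressor-based) cooling."""
    if outside_temp_c + approach_c <= inlet_limit_c:
        return "free-air"
    return "mechanical"

# A Nordic-style 15°C day allows free cooling; a 25°C day does not:
cooling_mode(15.0)  # "free-air"
cooling_mode(25.0)  # "mechanical"
```

In practice the decision also depends on humidity and partial free-cooling (economiser) modes, but the basic economics follow this threshold: the more hours of the year the outside air passes the check, the less compressor energy the facility burns.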
As a result of rising costs per kilowatt-hour (kWh) and increases in underlying demand, Gartner estimates that ongoing power costs are growing by more than 10% a year, particularly for high-power-density servers. Consequently, liquid-based cooling is being adopted more and more, primarily because it is much more efficient than its air-based counterparts. Growth in the global market for liquid-based data centre cooling over the next few years is anticipated to be exponential, and liquid cooling solutions are already being pre-built into many server and data centre systems, so this trend is very much one to watch.
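To see why a 10%-a-year growth rate matters, it helps to compound it forward. A minimal sketch (the £1m starting bill is an illustrative figure, not from the article):

```python
def projected_annual_cost(base_cost: float, growth_rate: float, years: int) -> float:
    """Compound an annual power bill forward at a fixed yearly growth rate."""
    return base_cost * (1 + growth_rate) ** years

# A £1m annual power bill growing at 10% a year nearly doubles in 7 years:
projected_annual_cost(1_000_000, 0.10, 7)  # ≈ £1,948,717
```

At that rate the bill doubles roughly every seven years, which is why efficiency measures such as liquid cooling pay back quickly for high-density estates.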
In addition, data centre microgrids – energy systems consisting of distributed energy resources (including demand management, storage and generation) and loads capable of operating in parallel with, or independently from, the main power grid – are becoming far more common. Implementing these solutions will contribute to cost savings, emissions reduction and reliability enhancement across the data centre community.
Renewable energy solutions such as onsite wind generation have also proved an excellent option that many of the bigger data centre operators, including Apple, Facebook and Google, have embraced. All of these organisations have taken the initiative to power their data centres with wind energy in order to be more sustainable and cost-effective.
On top of this, data centre infrastructure management (DCIM) tools and platforms should act as a backbone for making data centre infrastructures energy-efficient and sustainable. These systems merge separate functions, such as data centre design, systems management functions, asset discovery, capacity planning and energy management to deliver a complete overview of the data centre. This can range from the rack or cabinet level, right across to energy utilisation and the cooling infrastructure.
Open Compute Project
Lastly, Open Compute Project technologies have been heralded as another solution. They force a change to both infrastructure and IT architectures by eliminating centralised capital plant and moving improved hardware into the IT racks. This simplifies architectures, enhances efficiency, speeds deployment, eases maintenance and lowers cost.
With all these options available, the industry needs to give serious consideration to the merits of new cooling technologies and to evaluate the energy losses encountered in the power train, taking account of its complex equipment and resilience requirements.
The PUE (power usage effectiveness) of older facilities can easily reach 2.5, whereas new data centres should be achieving values of around 1.2 or below.
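PUE is simply total facility power divided by the power delivered to the IT equipment, so the figures above can be checked directly. A minimal sketch (function and load figures are illustrative):

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power.
    A PUE of 1.0 would mean every watt drawn goes to the IT load."""
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# An older facility drawing 2,500 kW overall for a 1,000 kW IT load:
pue(2500, 1000)  # 2.5

# A modern facility serving the same IT load on 1,200 kW overall:
pue(1200, 1000)  # 1.2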
The broader aspects of energy efficiency, in particular the IT equipment and its arrangement, whether this is containment systems or Open Compute Project solutions, along with the construction and location of the facility, must be carefully considered because they can make a significant difference to the energy consumption of data centres up and down the country.