Data growth continues to outstrip Moore’s Law. A paradigm shift in network photonics and devices will be required to accommodate the ever-increasing hunger for bandwidth, says Professor Ian Bitterlin. Even then, a change in usage behaviour may well be required as the connected world expands.
Most governments in Western Europe, coordinated with EU policy initiatives, are trying to meet carbon reduction targets and wean us off fossil fuel-based generation at the same time. The overriding target is to become ‘sustainable’ and, in classical environmental terms, this is achieved in three distinct stages:
The first step is a reduction in demand, followed closely by increased process efficiency and, finally, powering the resulting slimmed-down demand from low- or zero-carbon, predominantly renewable, energy sources. Whether or not this includes nuclear fission is a debate to be had elsewhere but, from an engineering perspective, nuclear generation offers the stable base-load generation that high-density population centres will need when intermittent renewable sources such as wind, tidal and solar are not available.
Reduction in demand is inextricably linked to increased efficiency – for example, domestic insulation allied with heat recovery and high efficiency appliances – but the one area where consumption seems to be inexorably rising is in ICT, and specifically data centres.
Data centres are at the heart of the internet and enable the digital economy, as well as concepts such as smart cities and smart grids. As our demands, in both social and business contexts, for digital content and services grow, data centres will need to expand.
Only highly utilised and heavily loaded facilities will enable low-cost digital services as the cost of power escalates and, despite the industry’s best efforts, power consumption will rise, not fall. The best forecasts show a compound annual growth rate of about 15-20% for the foreseeable future: higher in the emerging markets and lower where internet penetration is already high – often in ‘connected’ locations where energy is cheap and taxes are low.
Growth, what growth?
First of all, let’s look at some real data about data growth, not industry projections but figures from AMS-IX, the Amsterdam Internet Exchange and one of the five major internet exchanges in Europe.
In July 2001, monthly traffic through the exchange stood at 690TB (terabytes). Fourteen years later, that figure was 690,000TB a month. That’s 1000x growth, or roughly 64% compound annual growth rate, and AMS-IX breaks its own peak traffic record every month or two, currently nearly 4Tb/s (terabits per second).
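These figures are easy to sanity-check: a 1000x increase over 14 years implies a compound annual growth rate of just under 64%. A quick sketch in Python, using the traffic numbers quoted above:

```python
# Sanity check of the AMS-IX growth figures quoted above.
start_tb = 690          # TB per month, July 2001
end_tb = 690_000        # TB per month, 14 years later
years = 14

growth = end_tb / start_tb            # total growth factor
cagr = growth ** (1 / years) - 1      # compound annual growth rate

print(f"total growth: {growth:.0f}x")   # 1000x
print(f"CAGR: {cagr:.1%}")              # ~63.8% per year
```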
In 2005, futurologist Ray Kurzweil wrote a book entitled The Singularity Is Near, which accurately predicted the exponential data growth now visible in the AMS-IX growth curve.
Back to the present
When we look at internet usage and the average amount of data produced per individual worldwide, we should remember that only around 30% of the global population has an internet connection. Just imagine the data growth when the other 70% get connected – although in 2011 Vint Cerf, co-designer of the TCP/IP protocols and often regarded as one of the ‘fathers of the internet’, said that: “Internet access will become a privilege, not a right.”
Giving people faster internet access will not, by itself, relieve the strain placed on networks. That would only work if their usage stayed the same, just faster. What do people actually do when you give them faster broadband? They use up all the bandwidth and demand yet more content.
It is the same pattern seen during the industrial revolution: when engines that used less coal became available, people simply installed more engines and burned more coal.
Let them eat bandwidth
As English economist William Stanley Jevons said in 1865: “It is a confusion of ideas to suppose that the economical use of fuel is equivalent to diminished consumption. The very contrary is the truth.”
We could draw a similar conclusion about data and the internet: if you give people better access, they will just use the internet more and drive power consumption up.
The EU’s Digital Agenda aims to provide 20Mb/s to every European citizen at affordable cost. The idea is that this will encourage developments in education, medicine and business, but in practice the availability of faster broadband will also encourage more social networking, photo sharing, gaming and gambling.
There are plenty of examples of how data generation is growing. At Photonics West 2009 in San Jose, California, Cisco correctly predicted for 2012 that “20 US homes with fibre to the home [FTTH] will generate more traffic than the entire internet backbone carried in 1995”.
In Japanese homes that have FTTH, the average download rate is greater than 500MB per day, dominated by HD video. One Sky movie download already equals 1.3GB and 4K ultra-high definition TV will further increase traffic and power consumption. More video content is uploaded to YouTube every month than a TV station broadcasting 24 hours a day could show in more than 350 years. With 3G, Vodafone reported 79% annual data growth in 2011 and 4G phones are even bigger data-generators.
Video is clearly a huge bandwidth usage driver: TIME magazine reported that it takes 0.0002kWh to stream one minute of video from the YouTube data centre.
Jay Walker’s recent TED talk explained that, on average, 0.01kWh of energy is consumed in carrying 1MB over the internet, while a typical home device consumes around 0.001kWh per minute of video streaming.
The viral video is best exemplified by Korean rapper Psy’s ‘Gangnam Style’ music video, which lasts just over four minutes and was streamed 1.7 billion times in its first year. Multiplying by the 17MB file size gives an overall energy consumption of around 289GWh in one year. That’s roughly 33MW of constant power for an entire year, or 80,000 UK ‘car years’. It is more than the annual electricity consumption of the nine million inhabitants of the Republic of Burundi.
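The arithmetic follows directly from the per-megabyte energy estimate quoted earlier. A quick sanity check in Python, using the figures above:

```python
streams = 1.7e9        # views in the first year
mb_per_stream = 17     # MB per play of the video
kwh_per_mb = 0.01      # network energy per MB, per Jay Walker's figure above

energy_kwh = streams * mb_per_stream * kwh_per_mb
energy_gwh = energy_kwh / 1e6                    # convert kWh -> GWh
avg_power_mw = energy_kwh / (365 * 24) / 1000    # constant draw over one year, in MW

print(f"{energy_gwh:.0f} GWh")    # ~289 GWh for the year
print(f"{avg_power_mw:.0f} MW")   # ~33 MW of constant power
```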
However, there is only so much video a person can watch in 24 hours, and there is a limit to the number of people in the world – so is there a natural limit to growth? Probably not: largely unlimited growth will come from the Internet of Things, with its uncountable sensors, each constantly transmitting a stream of short data packets.
ICT capacity growth curve
ICT as a whole is generally reckoned to consume 6-9% of all utility power in countries such as the UK, and data centres use around a third of that. We can assume, therefore, that data centres consume 2-3% of our grid capacity – a figure currently growing at 15-20% CAGR. That growth is clearly NOT sustainable. But what about the capacity of the ICT hardware over time?
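To see why it is unsustainable, project the current share forward at the quoted growth rate: at the mid-range figures above, data centres would absurdly exceed the entire grid within about 23 years. A sketch, assuming a 2.5% share today growing at 17.5% a year:

```python
share = 0.025      # data centres' share of grid capacity today (mid-range of 2-3%)
cagr = 0.175       # mid-range of the quoted 15-20% annual growth

years = 0
while share < 1.0:             # project until the share exceeds the whole grid
    share *= 1 + cagr
    years += 1

print(years)   # ~23 years: clearly an impossible trajectory
```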
Fifty years ago, Gordon Moore, co-founder of Intel, came up with ‘Moore’s Law’, predicting a doubling of the number of transistors on a microprocessor every two years. It has broadly held true ever since, and it applies directly to computing: each doubling of computational capacity roughly halves the watts per operation and the kWh needed for a given workload.
His law has since been ‘updated’ by Intel to 18 months for clock rate, and more recently Kurzweil has suggested that the doubling now occurs every 14 months. That is why a company like Facebook replaces its servers every year: to improve energy efficiency and service levels. Holding on to the same IT hardware for three years or more is really a waste of energy and money.
So, if we apply Moore’s Law and other improvements to data centres over the same time span as the AMS-IX data above, we find that capacity has consistently fallen behind the demand curve. That is despite the huge growth in hardware processing power: best-in-class performance today is around 1,250 times greater than in 2001, average best practice is some 700x greater, and best practice on a four-year-old base installation is just 100x greater than the 2001 benchmark.
But, as we’ve seen, data growth is 1000x, so our demand for data has outstripped the vast capacity increase of data centre equipment. Looking at it another way, a data centre built in 2001 housing 2,000 ICT cabinets and consuming 3.5MW of grid power can be replaced by a single-cabinet data centre consuming 6kW.
In 2001, you would have needed 1,000 cabinets to do the work one cabinet can do today.
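The consolidation claim can be put in terms of power per unit of work. Using the 2,000-cabinet and 3.5MW figures quoted above, the single modern cabinet delivers the same work for roughly 1/580th of the grid power:

```python
old_cabinets = 2_000
old_power_kw = 3_500       # 3.5 MW for the 2001 facility
new_power_kw = 6           # one modern cabinet doing the same work

per_cabinet_2001 = old_power_kw / old_cabinets    # ~1.75 kW per 2001 cabinet
power_reduction = old_power_kw / new_power_kw     # ~583x less grid power

print(f"{per_cabinet_2001:.2f} kW per 2001 cabinet")
print(f"{power_reduction:.0f}x reduction in power for the same work")
```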
Optimise, optimise and optimise again
Data growth continues to outstrip Moore’s Law. A paradigm shift in network photonics and devices will be required to accommodate the ever-increasing hunger for bandwidth. Yet even then, a change in usage behaviour may well be required as the connected world expands. The key to energy effectiveness, and the single largest factor in moderating power growth, is hardware utilisation.
Stranded capacity and under-utilisation undo the gains that ‘best-in-class’ ICT performance makes against demand growth. To realise those gains, we need to optimise computing, storage and I/O, virtualise heavily and de-duplicate stored data.
Ian Bitterlin is a consulting engineer and visiting professor at Leeds University.