Centre of the universe

From STORAGE Magazine Vol 7, Issue 2 - March/April 2007

Data centres lie at the heart of many enterprises’ smooth operation – and they are being asked to handle an increasingly heavy burden as the reliance on information and communications technology accelerates.

Over the last decade, the steady growth of trends such as the internet, mobile telephony and online gaming - as well as an increasing need for businesses to improve efficiency, competitiveness and compliance - has driven ever greater reliance on Information and Communications Technology (ICT). As a result, practically all mid to large-size enterprises, along with such household names as eBay, Google, Yahoo and YouTube, have built data centres to house their computer server equipment.

In essence, a data centre is a physical room or building that houses an enterprise’s mission-critical computing equipment, together with associated services such as power, heating, ventilation, cooling and fire prevention/control systems. The availability of sufficient electrical power, and the ability to maintain each individual server in the data centre at its specified operating temperature, are critical to the data centre’s reliable operation and, in many cases, to the continued operation of the enterprise itself.

In a typical data centre, approximately 50% of the overall energy input is required to power the air conditioning systems that maintain the servers at their correct operating temperature. The remaining 50% powers the servers themselves. The exponential growth in demand for the internet, and the accelerating adoption of information technology by businesses, have driven the demand for more data centres and for higher-performance servers within those data centres.
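
To put that 50/50 split in concrete terms, the short sketch below shows how a given server load translates into total facility power when cooling draws roughly as much energy as the IT equipment itself. The load figure is an illustrative assumption, not a number from the article.

    # Back-of-envelope sketch: if cooling accounts for ~50% of total energy,
    # total facility power is roughly double the IT (server) load.
    it_load_kw = 1000.0                 # assumed server load in kW (illustrative)
    cooling_share = 0.5                 # cooling ~50% of total energy input
    total_kw = it_load_kw / (1.0 - cooling_share)
    cooling_kw = total_kw - it_load_kw
    print(f"IT load: {it_load_kw:.0f} kW, cooling: {cooling_kw:.0f} kW, "
          f"total facility power: {total_kw:.0f} kW")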

Additionally, rising land costs have meant that businesses are under pressure to maximise the number of these high-performance servers that they can fit into a given physical space. A typical example of this trend can be observed in the rapid growth in the size and number of data centres that Intel uses to enable its chip designers to develop its latest products. The number of Intel data centres has grown more than ninefold, from eight in 1996 to 75 today. Similarly, the total number of servers housed within these data centres has increased by 6,000% over the same period, clearly demonstrating the trend towards more densely packed data centres.

The result of these trends is a net increase in the demand for energy. It is estimated that the requirements of all data centres located in California, for example, total around 2.5 terawatt-hours (2,500,000,000,000 watt-hours). Whilst at the macro level this represents a mere 0.1% of California’s total energy consumption, at the local level it poses a significant consideration when placing a data centre near a town or city. In the UK, it is estimated that data centres draw a similar proportion of the country’s overall energy supply.
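
To give that figure a sense of scale, the short calculation below - the author's own back-of-envelope arithmetic, which assumes the figure is an annual one - converts it into an equivalent continuous power draw.

    # Back-of-envelope conversion (assumes the 2.5 TWh figure is annual):
    # express the total energy as an equivalent continuous power draw.
    annual_energy_wh = 2.5e12        # 2,500,000,000,000 watt-hours
    hours_per_year = 365 * 24        # 8,760 hours
    average_power_mw = annual_energy_wh / hours_per_year / 1e6
    print(f"Equivalent continuous draw: {average_power_mw:.0f} MW")   # ~285 MW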

The challenges that this energy demand presents are primarily four-fold:

First, there is the challenge of providing the engineering solutions required to supply large amounts of energy into a relatively small area, along with the ability to cool that area adequately to ensure reliable server operation - and the corresponding cost of implementing such solutions. Some of the largest data centres in the world may cover an area suitable for parking up to 2,700 cars, and a typical data centre may consume 50 MW of power - roughly the equivalent of 140,000 television sets.

Second, there is the cost to the business of supplying this amount of energy to its data centre, with almost all data centres running 24 hours a day, seven days a week. Gartner recently predicted that the proportion of the IT budget spent on electricity will increase from today’s 10% level to a staggering 50% over the next few years.

Third, there is the cost to the environment of generating increasing amounts of electrical energy. Climate change featured prominently in the 2007 World Economic Forum report identifying current global risks. This was underlined by the summary report of the Intergovernmental Panel on Climate Change (IPCC), published in February 2007, which found that human activity was ‘very likely’ responsible for climate change. A 50,000 square foot data centre running at 4 MW of power consumes the equivalent of 57 barrels of oil per day (the arithmetic behind this figure, and the television comparison above, is sketched after these four challenges).

And, finally, there is the need for businesses to comply with increasingly stringent energy efficiency legislation. The EU directive on the energy performance of buildings, published in 2003, defines an objective to promote the improvement of the energy performance of buildings within the Community, taking into account outdoor climatic and local conditions, as well as indoor climate requirements and cost-effectiveness. Each EU member state was required to transpose this directive into national law by the beginning of 2006, with a further three years allowed for implementation. This is just one of a growing number of worldwide programmes focused on the promotion of energy-efficient products.
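
The rough arithmetic behind two of the comparisons above - 50 MW as 140,000 television sets, and 4 MW as 57 barrels of oil a day - is sketched below. The per-set wattage is simply inferred from the article's own figures, and the energy content of a barrel of oil is a standard approximation (roughly 1,700 kWh), not a number given in the article.

    # Sanity-check the two comparisons with illustrative arithmetic.

    # 1) A 50 MW data centre expressed as television sets.
    data_centre_mw = 50.0
    tv_sets = 140_000
    watts_per_set = data_centre_mw * 1e6 / tv_sets
    print(f"Implied power per television set: {watts_per_set:.0f} W")    # ~357 W

    # 2) A 4 MW data centre expressed as barrels of oil per day.
    power_mw = 4.0
    hours_per_day = 24
    kwh_per_day = power_mw * 1000 * hours_per_day    # 96,000 kWh
    kwh_per_barrel = 1700.0                          # approx. energy in one barrel
    barrels_per_day = kwh_per_day / kwh_per_barrel
    print(f"Barrels of oil equivalent per day: {barrels_per_day:.0f}")   # ~56, close to the article's 57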

Clearly, there is a need to implement technologies within the data centre that deliver the high performance required to meet growing demands, while making the most efficient possible use of energy.

What Intel is doing

Intel has long been prominent in energy efficiency innovation, a legacy left by the company’s co-founder, Gordon E. Moore. Intel first built power management into its products with the Intel386 SL processor in 1990 and, since then, energy-efficient performance - and, increasingly, the design of energy-optimised products - has been key to Intel’s strategy. While there are many areas of the data centre where Intel is contributing to the drive for greater energy efficiency, three particular technology developments promise significant benefits: Intel’s virtualisation technology, its multi-core processor strategy, and recent developments in its 45nm transistor technology.

Consolidation equation

Virtualisation is a much talked-about term in current IT circles. The trend toward virtualisation is driven by the fact that many of the servers currently filling the data centre are severely under-utilised. This is caused partly by the need to maintain redundant capacity for peak traffic periods, and partly by the technical requirement for dedicated servers to support particular applications and operating systems.

Virtualisation enables applications and operating systems to be consolidated onto shared servers, maximising utilisation (while still leaving some spare capacity for peak traffic) and generally reducing the number of servers required - and hence the energy consumed. This is of interest not only because of the pure reduction in the number of servers within the data centre (or the increase in capability from the same number of servers). It also matters because, even when servers are under-utilised, their processors are still drawing power; power efficiency is therefore increased by keeping servers as consistently busy as possible.
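
As a rough illustration of the consolidation arithmetic, the sketch below estimates how many hosts are needed after consolidation and what that does to power draw. The utilisation, headroom and power figures are assumptions chosen for the example, not figures published by Intel or the magazine.

    import math

    # Illustrative consolidation sketch; all figures are assumptions.
    servers_before = 100        # physical servers before consolidation
    avg_utilisation = 0.15      # each host ~15% busy on average (assumed)
    target_utilisation = 0.60   # consolidated hosts, leaving peak headroom
    idle_power_w = 200.0        # draw of an idle host (assumed)
    full_power_w = 400.0        # draw of a fully busy host (assumed)

    def host_power(utilisation):
        # Very rough linear power model between idle and full load.
        return idle_power_w + utilisation * (full_power_w - idle_power_w)

    total_work = servers_before * avg_utilisation          # in 'busy host' units
    servers_after = math.ceil(total_work / target_utilisation)

    power_before_kw = servers_before * host_power(avg_utilisation) / 1000
    power_after_kw = servers_after * host_power(target_utilisation) / 1000

    print(f"Hosts: {servers_before} -> {servers_after}")
    print(f"Power: {power_before_kw:.1f} kW -> {power_after_kw:.1f} kW")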

To help CIOs make the most of their data centres - and keep their hardware one step ahead of performance demands - Intel has virtualisation built into its entire product range. It has also been able to take advantage of its ecosystem of collaborative organisations and partners, it points out, “to ensure the broadest software and hardware support for this virtualisation technology to help CIOs apply [it] to as much of the data centre as possible”. With eight out of every 10 servers shipped being Intel-based, this means that the possibilities of virtualisation are already permeating data centres around the world.

Multi-core era

One of Intel’s most direct responses to the challenge of energy-efficient performance was the launch of multi-core technology, in the shape of the Intel Core 2 Duo and the Quad-Core Intel Xeon processor 5300 series. These microprocessors deliver greater performance while cutting power requirements by 40% in comparison with previous single-core processor technologies.

“Multi-core delivers a new era in computing possibilities and signals a major development in the drive toward sustainable data centre strategies,” states Intel. “It represents the ability to increase the density of the servers in the data center, without increasing the power consumption. This enables the data centre to meet increasing performance pressures, without the concern of increased energy consumption and energy costs.”

The final major innovation was in the transistors themselves, the very DNA of computing and the data centre. Transistor technology has moved forward by leaps and bounds: the power management system of a modern Intel Itanium processor alone contains as many transistors as a complete Intel 486 microprocessor. Intel Itanium processors contain 1.72 billion transistors in total, made possible by the ever-decreasing size of the transistor itself, which enables increased density on a single silicon chip. And while increased transistor density translates to increased performance, it also typically results in an increase in power consumption, and hence threatens the sustainability of performance increases over time.

In 2007, Intel launched its 45 nanometre (nm) fabrication process. This not only doubles the number of transistors that can be included on a single chip in comparison with Intel’s earlier 65nm process; it also addresses the long-standing problem of transistor power leakage. Intel’s radical innovations in the design of the new 45nm high-k process are estimated to reduce the typical leakage of each transistor by up to 3%. And while the leakage levels under discussion at the individual transistor level are obviously minute, with high-performance processors such as Itanium containing 1.7 billion transistors, the resulting energy efficiency improvements are significant and measurable.
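
Because a saving at the level of a single transistor is vanishingly small, it is the multiplication across a chip’s full transistor count that makes it worthwhile. The sketch below uses a deliberately illustrative per-transistor leakage figure (an assumption, not Intel’s published data) to show how a small per-device improvement scales across 1.7 billion transistors.

    # Illustrative scaling only: the per-transistor leakage value is an
    # assumption chosen for the example, not an Intel-published figure.
    transistor_count = 1.7e9            # e.g. an Itanium-class processor
    leakage_per_transistor_nw = 10.0    # assumed leakage per device (nanowatts)
    reduction = 0.03                    # a 3% per-transistor improvement

    total_leakage_w = transistor_count * leakage_per_transistor_nw * 1e-9
    saving_w = total_leakage_w * reduction
    print(f"Chip-wide leakage: {total_leakage_w:.1f} W")
    print(f"Saved by a 3% per-device reduction: {saving_w:.2f} W per chip")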

Energy efficiency, it can be seen, is being driven by many considerations within the data centre, but what is most apparent is that the model of ever-increasing power consumption is unsustainable. Computing developments have opened new horizons, and organisations such as Google and MySpace are introducing new concepts every day that will require ever more powerful data centres. For data centres to continue to support these performance demands, the energy consumption issue needs to be managed, both in terms of energy costs and environmental impact.

Intel says it is working with governments, industry and customers to ensure that high-density data centres do not mean increased energy consumption. “By building the capability for better asset utilisation into every product, and increasing the energy-efficiency performance of products at the chip and transistor level, Intel is looking to help deliver a future in which data centres can continue to support new horizons - without costing the Earth.”
