Schneider Electric’s Data Centre Science Centre recently calculated that typical data centre physical infrastructure energy losses have been cut by 80% during the past decade. In many cases this has been enabled by improvements in UPS efficiencies, cooling technologies and deployment practices – for example, the use of in-row cooling. Data centres have also become inherently cheaper on a £-per-watt basis.
However, the question remains: how influential will artificial intelligence (AI) and machine learning (ML) be in continuing this trend of increased performance at lower cost? AI and ML are two terms often used interchangeably, or treated as synonyms. In simple terms, AI refers to the concept that a machine or system can carry out tasks and operations ‘smartly’, based on its programming and on data about itself or its environment.
ML, by contrast, is the ability of a machine or system to learn and improve its operation or functions automatically, without human input. ML can therefore be thought of as the current state of the art in software for machines with AI capabilities.
Many of today’s data centre physical infrastructure systems incorporate some form of AI. UPSs and cooling units will often have programmed firmware and advanced algorithms that dictate how the equipment both operates and behaves as conditions change. For example, cooling control systems actuate valves, fans and pumps in a coordinated, logical way to achieve user-defined set points, as environmental conditions change over time.
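That kind of set-point control can be illustrated with a minimal sketch – a hypothetical proportional controller that raises fan speed as the supply-air temperature drifts above a user-defined set point. The function name, gain and limits below are illustrative assumptions, not any vendor’s control logic:

```python
def fan_speed(temp_reading, set_point, gain=0.1, min_speed=0.2, max_speed=1.0):
    """Proportional control sketch: increase fan speed in proportion to how far
    the measured temperature sits above the set point (all values illustrative).
    Speed is expressed as a fraction of maximum, clamped to [min_speed, max_speed]."""
    error = temp_reading - set_point
    speed = min_speed + gain * max(error, 0.0)
    return min(max(speed, min_speed), max_speed)
```

Real cooling controllers coordinate many such loops (valves, fans, pumps) and add integral and derivative terms to avoid overshoot, but the principle – actuate outputs to drive a measured value towards a set point – is the same.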
In addition, IoT-enabled power and cooling equipment is fitted with sensors. These devices collect a large amount of useful data about the machines and their environment, which can be used to determine how the equipment operates and how it responds to emerging conditions and events.
It can also be used by smart systems such as building management systems (BMS), power monitoring systems (PMS) and Schneider Electric’s StruxureWare for Data Centers data centre infrastructure management (DCIM) software to extract useful insights about the data centre’s status and provide real-time information on its capacity, reliability, and efficiency.
ML in data centres is an exciting new concept that is currently being researched by manufacturers, including Schneider Electric. By increasing the intelligence and automation of physical infrastructure equipment and management systems, and integrating it with the IT load, it is possible to make data centres more reliable and efficient, both in terms of energy use and operations.
Laying the foundations for this advance are Schneider Electric’s EcoStruxure IT and EcoStruxure for Data Centers system architectures. EcoStruxure leverages IoT, cloud and Big Data analytics to gain insight into data centre operations, with the aim of delivering improved data centre security, reliability, efficiency, and sustainability.
The vendor-neutral solution connects customers’ assets to the Schneider Electric Cloud to deliver faster issue resolution and digital services, while harnessing the power of IoT to predict and prevent incidents, or downtime in data centres. Real-time recommendations are provided to optimise infrastructure performance and mitigate risk.
An important component in Schneider Electric’s overall EcoStruxure offering has been the introduction of a more advanced data centre management as a service (DMaaS) solution.
This is an integrated portfolio of both hardware and software solutions that enables optimisation of the IT layer by simplifying, monitoring, and servicing data centre physical infrastructure from the edge to the enterprise. It uses cloud-based software for DCIM-like monitoring and information analysis, to offer real-time operational visibility, alerts, reporting and shortened resolution times.
Although DCIM tools have previously been made available on a software-as-a-service (SaaS) basis, DMaaS differs from this model in a number of ways. DMaaS has simplified the process of implementing monitoring software throughout a data centre facility.
Once the IoT-enabled infrastructure components are connected, monitoring can begin and the service both aggregates and analyses large sets of anonymised data directly from data centre hardware via a secure and encrypted connection. This information, once harvested, can then be further enhanced using big data analytics with the primary goal of predicting and preventing data centre failures, foreseeing service requirements and detecting capacity shortfalls.
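The anonymise-then-aggregate step can be sketched in a few lines. This is a toy illustration only – the field names and hashing choice are assumptions, not taken from any Schneider Electric API:

```python
import hashlib

def anonymise(record):
    """Replace the customer-identifying site name with a one-way hash,
    keeping only the operational fields (field names are hypothetical)."""
    out = dict(record)
    out["site"] = hashlib.sha256(record["site"].encode()).hexdigest()[:12]
    return out

def aggregate(records):
    """Compute the mean of each numeric metric across all anonymised records,
    as a stand-in for analysis over a large fleet-wide dataset."""
    totals, counts = {}, {}
    for record in records:
        for key, value in record.items():
            if isinstance(value, (int, float)):
                totals[key] = totals.get(key, 0.0) + value
                counts[key] = counts.get(key, 0) + 1
    return {key: totals[key] / counts[key] for key in totals}
```

At production scale this aggregation runs over encrypted feeds from thousands of devices, but the shape of the pipeline – strip identity, pool the metrics, analyse the pool – is the same.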
This is useful to data centre operators with resource constraints for a number of reasons. According to a report carried out by 451 Research, DMaaS “ties remote cloud-based monitoring into maintenance and fix services, enabling a full-service business model for suppliers”.
It therefore opens a doorway allowing new and additional smart eyes on the infrastructure (from a service provider’s network operations centre) to support a customer’s internal team.
It also opens the door to the development of new offerings from service partners, from energy management to proactive maintenance. For those with resource constraints, it provides complete insight into both the data centre infrastructure and the IT load, enabling intelligent, proactive support to be delivered when required, on a data-driven basis.
The breadth of data is key
The volume and depth of data that can now be captured from IoT-enabled equipment increase the capability of DMaaS compared with earlier software or service models. This is because the value of data is multiplied when it is aggregated and analysed at scale.
By applying algorithms to large datasets drawn from diverse types of data centres operating in different environmental conditions, the goal of DMaaS will be to both identify and predict when equipment will fail, and when cooling thresholds will be breached. The larger the dataset, the smarter DMaaS becomes with every iteration.
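DMaaS providers do not publish their models, but the prediction idea can be sketched with a simple trend extrapolation: fit a line to recent temperature samples and estimate when it would cross a cooling threshold. The function name, hourly sampling and thresholds below are illustrative assumptions only:

```python
def hours_until_breach(samples, threshold):
    """Fit a least-squares straight line to recent hourly temperature samples
    and extrapolate when it crosses the threshold. Returns the number of hours
    beyond the last sample, or None if the trend is flat or cooling."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    if slope <= 0:
        return None  # not warming: no predicted breach
    intercept = mean_y - slope * mean_x
    return (threshold - intercept) / slope - (n - 1)
```

A fleet-scale service would replace this single-site extrapolation with models trained across many facilities and environmental conditions, which is precisely where the size of the dataset pays off.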
The report from 451 Research goes on to say that having more data about the performance of specific equipment in specific environments (temperature, humidity, air pressure) will enable predictions to become more accurate over time. It predicts that in the not-too-distant future, DMaaS-driven services will also enable increased data centre automation and full remote control; for example, switching a UPS to advanced eco-mode when utilisation is low, or directing IT load away from areas of potential failure.
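The eco-mode example reduces to a rule of roughly this shape. The utilisation threshold and mode names are assumptions for illustration, not a documented control policy:

```python
def choose_ups_mode(utilisation_pct, eco_threshold_pct=30.0):
    """Run the UPS in high-efficiency eco-mode when the IT load is light,
    otherwise stay in double-conversion mode (threshold is illustrative)."""
    return "eco" if utilisation_pct < eco_threshold_pct else "double-conversion"
```

In practice such a switch would weigh power quality and risk as well as load, which is why the decision is a candidate for remote, data-driven automation rather than a fixed local rule.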
In other markets, the emergence of IoT technology and use of Big Data has also been the stimulus for the introduction of innovative business models. A potential capability of DMaaS is to enable service suppliers and manufacturers to include monitoring and management services into lease agreements for data centre infrastructure equipment to deliver an asset-as-a-service offering.
With this type of DMaaS-enabled service, the supplier maintains ownership and charges for operation and service. 451 Research believes that this could be especially interesting for highly distributed IT deployments and companies reliant on edge data centre portfolios.
Data centre design remains a critical factor
Right now, it is important to say that AI is not going to solve all of the industry’s current data centre challenges. It will not magically transform a traditional, stick-built data centre into a cutting-edge site with a perfect PUE and availability record. The fundamentals and best practices of data centre design and operation will remain crucial to success.
However, the advances brought via DMaaS are an excellent starting point, and we can expect that as future developments in AI and ML are applied in the data centre, they will build on, or provide incremental value to these major performance improvements that were gained over the past 10 years.
Patrick Donovan is senior research analyst with Schneider Electric’s Data Centre Science Center IT Division