The true cost of efficiency

In this month's CIBSE Journal, our Head of Innovation, Richard Clifford, argues that early collaboration is essential to ensure customers understand and avoid the hidden costs of energy saving measures within data centres.

Data centres are inherently energy intensive, in some cases accounting for as much as half of a company’s energy consumption. Naturally, as financial, regulatory and CSR pressures have increased, energy efficiency has been pushed to the top of the agenda for data centre design. The myriad of options available on the market, compounded by different consultants each championing their own approach or solution, has led to confusion and, in some cases, a genuine lack of understanding on the part of the end user. In the race to specify efficient data centre estates, options and outcomes are not being fully explored, which can lead to increased costs and compromise overall CO2 reductions in the long term.

Part of the problem is that the industry’s go-to metrics can be misleading or easily manipulated to appear more attractive. Specifiers often focus on metrics because, at a base level, they provide an easy reference point for comparing the efficiency of different options. Yet measures that bring the best results on paper may not offer the most efficient or cost-effective option and can, in extreme cases, lead to long-term operational faults.

Most of these measures also assume that the facility is operating at full IT load, which rarely happens in practice. Energy efficiency naturally decreases at lower IT loads, so the quoted savings can take far longer to materialise, if they ever do. Investing solely in a design that offers a good efficiency metric can leave a business exposed to higher operational costs than expected further down the line.

For example, PUE (power usage effectiveness) – the ratio of a data centre’s total energy consumption to the energy consumed by its IT equipment, with the remainder going to cooling and other overheads – can be improved simply by raising rack temperatures. This has very little effect on actual savings, as any energy saved by turning down the cooling systems is shifted to the server fans.
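To make that concrete, here is a minimal sketch of the PUE arithmetic (the figures are purely illustrative, not measured data): a largely fixed overhead pushes PUE up as the IT load falls, and shifting cooling energy into server fans improves PUE on paper while total facility energy barely changes.

```python
# Illustrative PUE arithmetic - all figures are hypothetical, not measured data.

def pue(it_kw: float, overhead_kw: float) -> float:
    """PUE = total facility power / IT equipment power."""
    return (it_kw + overhead_kw) / it_kw

# 1) Partial load: a largely fixed overhead (UPS losses, lighting, base cooling)
#    inflates PUE as the IT load drops below the design figure.
fixed_overhead_kw = 150.0
for it_kw in (1000.0, 500.0, 250.0):            # full, half and quarter IT load
    print(f"IT load {it_kw:>6.0f} kW -> PUE {pue(it_kw, fixed_overhead_kw):.2f}")

# 2) Raising rack temperatures: cooling energy falls, but server fans work harder,
#    so the "saving" moves inside the IT load and PUE improves only on paper.
it_kw, cooling_kw = 1000.0, 300.0               # before the change
hot_it_kw, hot_cooling_kw = 1040.0, 260.0       # 40 kW shifts from cooling to fans
print(f"Before: PUE {pue(it_kw, cooling_kw):.2f}, total {it_kw + cooling_kw:.0f} kW")
print(f"After:  PUE {pue(hot_it_kw, hot_cooling_kw):.2f}, total {hot_it_kw + hot_cooling_kw:.0f} kW")
```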

Metrics like PUE don’t paint a full picture of efficiency and, as consultants, we should be working with all our stakeholders to ensure they understand the true implications of different solutions and routes for their facilities in the long term.

Hidden costs

Many of the energy efficiency measures on the market are cases in point, and their long-term implications need to be thoroughly understood before they are committed to. Without proper consideration, some can create hidden costs elsewhere that cancel out any potential saving or, in the worst-case scenario, have a negative impact on the facility’s resilience, increasing the risk of downtime.

Fresh air cooling, often considered one of the most eco-friendly cooling methods, is a textbook example. On paper it has almost no carbon footprint and can be particularly cost-effective for facilities located in colder climates. These alluring benefits lead many customers to specify such systems without considering their suitability, or without being aware that operating them long term can be more complicated than first anticipated. Fresh air cooling is highly dependent on location – the air supply is easily contaminated by pollution or seawater, which can damage hardware and bring high replacement costs. Preventing this damage means investing in additional cleaning and maintenance, which brings more overheads and, ultimately, a larger CO2 footprint if replacement parts are needed. Likewise, fresh air systems need more complex controls, such as fire detection and suppression, all of which add further capital and operational costs.

Another cooling-related example affecting efficiency is the recent change to F-Gas legislation, which has brought forward price rises for the management of stalwart refrigerants such as R404A and R410A. This could have a large impact on pumped refrigerant DX systems and, despite their lower capital costs, lead to significant maintenance issues.

Collaboration is key

Choosing methods that appear to represent a lower upfront investment may be attractive, but without full consideration of the operational lifecycle any financial or carbon savings may be eliminated in the long term. Many end clients are stung because they don’t take a wide enough view of their data centre’s management, agreeing to efficiency measures without consulting the teams responsible for the day-to-day running of the facility.

Early consultancy and collaboration at the design stage are vital to prevent this from occurring. We should be ensuring that all stakeholders, including FM and operational teams, are involved in the design process so that the operation and maintenance of the facility is considered from the outset. This allows any potential pitfalls to be flagged before decisions are set in stone. It also means all teams can work together to argue for models that may cost more initially but offer greater long-term efficiency and business flexibility.

Efficiency will always be a concern for the industry, as it should be. But this needs to go beyond simply meeting metrics. By considering a facility’s whole lifecycle rather than just its initial costs, we can create environments that save money and energy throughout their use. As consultants we need to be driving this shift in focus to ensure data centres are built with long-term operation in mind.

This article was first published in the April edition of CIBSE Journal.

Keysource breaks new ground in China

Just over a month into the new year, Keysource has already celebrated a major project milestone, with construction beginning at the 80-acre data centre campus in Tianjin for Chayora, one of China’s leading data centre operators.

Our team is designing the site’s nine data centres, comprising six general-purpose facilities and three high-performance computing centres – a total of 21,000 racks and over 300MW. The site will serve the data needs of a range of international businesses and the greater Beijing region, which is home to more than 150 million people.

The Tianjin site is the first in a series of key data campuses that we will help Chayora deliver across China, alongside sites in Shanghai, Nanjing, Hangzhou and Guangzhou, as part of the business’ $2bn investment in infrastructure.

We were appointed in 2016 as Chayora’s lead design partner to provide initial strategic site planning, and as part of our partnership we have also been working with end customers to help meet their specific requirements.

Stephen Whatling, Managing Director of Keysource, said:

“China is rapidly becoming one of the world’s largest data centre markets and our partnership with Chayora is a key way we can help meet demand.”

“Working within such a strict regulatory environment and on such a large scale has put our expertise and consultancy skills to the test, but it is a challenge we have more than risen to. We’re looking forward to getting to the next phases.”

The first phase of the Tianjin development will offer 25MW of power capacity and is expected to come online by the end of 2018.

Are you asking the right questions?

As the data centre landscape changes, it is becoming increasingly important to ask the right questions and include all stakeholders when considering your data centre options. As we continue to see a disconnect between design and operation, our Associate Director, Steve Lorimer, highlights why we, as consultants, need to challenge our customers to define what they are trying to achieve, rather than taking briefs at face value. You can read the article below or see the full magazine here.

Traditionally, the design of new data centres has been at the forefront of clients’ minds when procuring new IT infrastructure. Meanwhile, the less glamorous maintenance and operation element is put on the back burner – in some cases until after the build is complete.

In recent years this has led to data centre systems that are excessively expensive and unable to perform in the long-term. The industry has been relatively slow in resolving this but now, more than ever, clients need greater insight to help them navigate the wealth of solutions on the market while minimising costs.

Last year we aimed to do this through the launch of our specialist consultancy division. We recognised the need to bring our FM and design-and-build offerings together as a single service that can guide clients in considering both elements right from the outset. Since then, it has proved to be the solution clients didn’t know they needed.

IT is increasingly integral to companies’ wider business strategies as well as their day-to-day operation. High-profile examples of server downtime are already causing huge reputational damage, affecting stock prices and customer perceptions. This is only set to continue as businesses grow their reliance on big data, automation and services underpinned by highly available systems.

End users often have a preconceived idea of what they want from their data centre. From a design perspective this can be any combination of in-house, co-located, cloud or hybrid solutions, and they are often blinded by new technology on the market and the latest cutting-edge systems. Stripping the process back to the fundamental question – ‘what do you want to achieve?’ – is more vital than ever.

Clients have never been faced with a range of options as broad as it is today. Navigating this with them can show that their initial plans, and the combination of new technology they want to include, may be too expensive or, in the worst cases, may not meet their objectives once maintenance and operation are factored in.

Too often the industry simply takes the brief from clients without challenging it. We now work with clients before they put design and build tenders out to the market – working alongside internal teams to develop a system that meets their needs, is future-proof and is cost efficient in the long term.

For example, one of the biggest operational costs clients face in running their own in-house data centre is cooling, and a failure here can result in significant downtime. Design teams will often aim to ensure that cooling systems are optimised across the rack space, but when it comes to operation, FM teams need to be in the loop to work out whether different permutations of cooling systems will be easy to access and maintain.

As the industry attempts to meet the best practice guidance set out in the BS EN 50600 standard, collaboration will become even more important. With both design and maintenance considerations in the guidelines, having only one party at the table can no longer produce a cutting-edge data centre – particularly if clients are tempted to overinvest in new technology without considering their current and expected capacity needs and the long-term maintenance costs of these systems.

Keysource is chosen to deliver major data centre project for leading cruise ship operator

We are proud to announce we have been appointed by a leading cruise ship operator to design and install a new data centre at its new headquarters in Uxbridge.

The contract will see Keysource deliver a turnkey data centre system to support the client’s business critical IT services associated with the operation of 14 cruise ships – including performance monitoring and proactive identification of potential issues.

The project follows a decision by the client to relocate its staff from Italy and central London to consolidate its operations and support future growth. The system will also house the recordings generated by the client’s UK contact centre, which receives calls from new and existing customers around the world.

Keysource’s appointment will see it lead on key design objectives including the support of flexible IT requirements and the ability for the data centre to work efficiently at low loads in line with business requirements.

Jon Healy, Associate Director at Keysource, said:

“This new solution will be developed in line with the latest regulations and industry standards. It will guarantee long-term reliability and availability of critical services to the business while ensuring they are delivered in a sustainable and efficient way, maximising the return on investment.”

See the bigger picture

Published in the winter edition of Data Centre Management Magazine, our Head of Design, Stephen Lorimer, looks at resilience and why the industry mindset needs to change.

A common definition of ‘resilience’ is ‘the capacity to recover quickly from difficulties’. When applied to the data centre sector it is more commonly understood as the ability of an IT infrastructure to continue to operate following an issue such as a power outage, equipment failure or human error.

There is a general misconception that all data centres should be highly resilient. In fact, I lose count of the times that customers have started initial meetings by requesting a “Tier III or Tier IV facility and, above all, absolute protection against any data loss.” Quite often by the end of the initial engagement they realise that they are already achieving the redundancy they need within their IT layer and can normally operate safely with a lower resilience classification.
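As a rough illustration of what those classifications imply (using the availability figures commonly quoted for each Uptime Institute tier, which may not match any specific facility's design target), the short sketch below converts an availability percentage into tolerated downtime per year – often the clearest way to test whether the tier being requested matches the real business requirement.

```python
# Convert availability targets into tolerated downtime per year.
# The tier percentages below are the commonly quoted industry figures and are
# used purely for illustration, not as a guarantee for any particular facility.

HOURS_PER_YEAR = 24 * 365

commonly_quoted_availability = {
    "Tier I":   99.671,
    "Tier II":  99.741,
    "Tier III": 99.982,
    "Tier IV":  99.995,
}

for tier, pct in commonly_quoted_availability.items():
    downtime_hours = HOURS_PER_YEAR * (1 - pct / 100)
    print(f"{tier:<9} {pct:>7.3f}% -> ~{downtime_hours:5.1f} hours of downtime per year")
```

Seen this way, customers who already replicate workloads across their IT layer often conclude that the service availability they actually need does not require the highest facility classification.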

Historically, organisations have often made the mistake of designing highly resilient data centres without properly considering why, or whether they actually need that level of resilience. As a result they have ended up with, at best, facilities that were expensive to construct and remain unnecessarily complex and expensive to operate.

At the heart of this problem is the fact that decisions about the resilience of the supporting M&E infrastructure are often made without proper consideration of the level of availability the IT service is actually required to deliver. Getting this right involves taking a step back and looking at the wider IT strategy – an approach endorsed by the EU Code of Conduct, which clearly states that organisations should ‘deploy resilience in line with requirements’.

This failure in approach can happen for a number of reasons, but the most common one we see is that organisations do not engage with a specialist. While in-house teams are often extremely competent, data centre design is rarely their core skill, so the end result may not meet, and can sometimes contradict, the company’s IT objectives.

For those occasions where design and build is the best option, we as an industry need to put more focus on ensuring that data centres are ‘designed for operation’ and that the team responsible for maintaining and running the facility is engaged from the outset. As an organisation we encourage all stakeholders to be part of the process from day one, as we feel this delivers the best results. This early engagement is key, because without all the stakeholders involved, not every impact will be properly considered and addressed as part of the design…

Continue reading in the Winter edition of Data Centre Management Magazine

