The true cost of efficiency

In this month's CIBSE Journal, our head of innovation, Richard Clifford, argues that early collaboration is essential to ensure customers understand, and can avoid, the hidden costs of energy-saving measures within data centres.

Data centres are inherently energy intensive, making up as much as half of a company’s energy consumption in some cases. Naturally, as financial, regulatory and CSR pressures have increased, energy efficiency has been pushed to the top of the agenda for data centre design. The myriad of options available in the market, exacerbated by different consultants all championing their own approach or solutions, has led to confusion and, in some cases, a fundamental lack of understanding among end users. In the race to specify efficient data centre estates, options and outcomes are not being fully explored, which can lead to increased costs and compromise overall CO2 reductions in the long term.

Part of the problem is that the industry’s go-to metrics can be misleading or easily manipulated to appear more attractive. Specifiers often focus on metrics because, at a base level, they provide an easy reference point for evaluating the efficiency of different options. Yet measures that bring the best results on paper may not offer the most efficient or cost-effective option and can, in extreme cases, lead to long-term operational faults.

Most of these measures also work on the assumption that the facility is operating at full IT load, which is rarely the case in practice. Energy efficiency naturally decreases at lower IT loads, and the quoted efficiency levels may take years to materialise, if they ever do. Solely investing in a design that offers a good efficiency metric can leave a business open to higher operational costs than expected down the line.
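This partial-load effect can be sketched with a few lines of arithmetic. The figures below are hypothetical and purely illustrative, not taken from any real facility: fixed overheads (lighting, UPS losses, baseline cooling) do not scale down with IT load, so the overhead carried per unit of useful IT power rises as utilisation falls.

```python
# Illustrative sketch -- all figures are hypothetical, not from the article.
# Fixed overheads don't scale with IT load, so efficiency worsens as
# utilisation drops below the full design load.

def overhead_ratio(it_load_kw, fixed_overhead_kw=150.0, variable_overhead_frac=0.25):
    """Total facility power divided by IT power at a given IT load.

    fixed_overhead_kw: load-independent draw (lighting, UPS losses, baseline cooling)
    variable_overhead_frac: overhead that scales with IT load (e.g. cooling plant)
    """
    total_kw = it_load_kw + fixed_overhead_kw + variable_overhead_frac * it_load_kw
    return total_kw / it_load_kw

for load in (1000, 500, 250):  # full, half and quarter IT load in kW
    print(f"{load:>4} kW IT load -> total/IT ratio {overhead_ratio(load):.2f}")
```

At the assumed full load of 1000 kW the ratio is 1.40; at quarter load it climbs to 1.85, even though nothing about the design has changed.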

For example, PUE (power usage effectiveness) – the ratio of a data centre’s total energy consumption to the energy used by its IT equipment – can be improved by raising rack temperatures, yet this delivers very little real saving, as much of the energy saved by turning down the cooling systems is simply shifted to the servers’ own fans.
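The blind spot is that server fan power counts as “IT” load in the PUE denominator. A short sketch, again with hypothetical figures, shows how shifting work from the cooling plant to server fans improves the metric while the facility’s total draw stays exactly the same:

```python
# Hypothetical illustration of the PUE blind spot: server fan power
# counts as "IT" load, so moving work from the cooling plant to server
# fans improves PUE without saving a single kilowatt overall.

def pue(it_kw, overhead_kw):
    # PUE = total facility power / IT equipment power
    return (it_kw + overhead_kw) / it_kw

# Baseline: 1000 kW of IT load (including server fans), 400 kW of
# cooling and other overheads.
before = pue(it_kw=1000, overhead_kw=400)

# Raise rack temperatures: the cooling plant saves 30 kW, but server
# fans ramp up by 30 kW to compensate -- total draw is still 1400 kW.
after = pue(it_kw=1030, overhead_kw=370)

print(f"PUE before: {before:.3f}")  # 1.400
print(f"PUE after:  {after:.3f}")   # 1.359 -- looks better, saves nothing
```

The “after” facility reports a better PUE despite consuming identical energy, which is exactly why the metric alone cannot be the basis for a design decision.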

Metrics like PUE don’t paint a full picture of efficiency and, as consultants, we should be working with all our stakeholders to ensure they understand how different solutions and routes will truly affect their facilities in the long term.

Hidden costs

Many of the energy efficiency measures available are key examples of this and their long-term implications need to be thoroughly understood before they are committed to. Without proper consideration, some can create hidden costs elsewhere and cancel out any potential saving, or, in the worst-case scenario, have a negative impact on the facility’s resilience, increasing the risk of downtime.

Fresh air cooling, often considered one of the most eco-friendly cooling methods, is a textbook example. On paper it has almost no carbon footprint and can be particularly cost-effective for facilities located in colder climates. Such alluring benefits lead many customers to specify these systems without considering suitability, or without being aware that operating them long-term can be more complicated than first anticipated. Fresh air cooling is highly dependent on location – intake air is easily contaminated by pollution or salt from sea air, which can damage hardware and bring high replacement costs. Preventing this damage means investing in additional cleaning and maintenance, which brings more overheads and, ultimately, a larger CO2 footprint if replacement parts are needed. Likewise, fresh air cooling requires more complex control systems, such as fire detection and suppression, all of which add further capital and operational costs.

Cooling efficiency is also affected by recent changes to F-Gas legislation, which have brought forward price rises for stalwart refrigerant gases such as R404A and R410A. This could have a large impact on pumped-refrigerant DX systems and lead to significant maintenance issues, despite their lower capital costs.

Collaboration is key

Methods that appear to represent a lower upfront investment can be attractive, but without full consideration of the operational lifecycle, any financial or carbon savings may be wiped out in the long term. Many end clients are stung because they don’t take a wide enough view of their data centre’s management, agreeing to efficiency measures without consulting the teams responsible for the day-to-day running of the facility.

Early consultancy and collaboration at the design stages is vital to prevent this from occurring. All stakeholders, including FM and operational teams, should be involved in the design process so that the operation and maintenance of the facility is considered from the outset. This allows any potential pitfalls to be flagged before decisions are set in stone. It also means all teams can work together to argue for models that may cost more initially, but offer greater long-term efficiency and business flexibility.

Efficiency will always be a concern for the industry, as it should be. But this needs to go beyond just meeting metrics. By considering a facility’s lifecycle rather than just initial costs, environments can be created that save money and energy throughout the course of their use. As consultants we need to be driving this shift in focus to ensure data centres are built with long-term operation in mind.

This article was first published in the April edition of the CIBSE Magazine 

Keysource completes IT transformation project for Willis Towers Watson

Critical environment specialist, Keysource, has completed a multi-million pound IT transformation project for global advisory, broking and solutions company Willis Towers Watson.

Keysource was appointed by Willis Towers Watson in 2016 to support its global IT transformation, following the merger of Willis Group and Towers Watson. Designed to underpin Willis Towers Watson’s global IT strategy and prepare it for future business activity, Keysource delivered significant upgrades to two live data centre facilities in Ipswich and Reigate, which host critical data for the firm’s business, including the delivery of two new data halls.

Keysource consultancy teams worked in close partnership with Willis Towers Watson on the design and delivery approach, evaluating a number of options and strategies. Once a recommendation had been made, Keysource were then appointed under a design and build contract for a full turnkey project to upgrade two data centres which service much of Willis Towers Watson’s global business.

The project has provided Willis Towers Watson with its own on-premises data estate with cloud-like flexibility, enhanced security and resiliency assurance – a business-critical consideration for the organisation and its customers. The new IT infrastructure will also reduce overheads and provide a platform to support the future growth of the business.

Paul West, Global Data Centres Director at Willis Towers Watson said:

“The delivery of the new data centre expansion was a crucial part of our IT strategy and has taken us to another level of resiliency. From designing to our requirements and delivering the project without downtime, the team at Keysource were instrumental in the project’s success.”

Jon Healy, Managing Executive at Keysource, said:

“Many of our customers are facing the challenge of transforming their data centre estate to meet the requirements of today and the future. As a key technology partner we are delighted to continue supporting Willis Towers Watson’s global strategy through the planning and execution of key tactical projects.

“Our appreciation of the IT services being delivered, and of Willis Towers Watson’s operational business environment, allowed us to tailor the delivery of this critical project to minimise the impacts that are inherent in projects of this nature.”

Keysource breaks new ground in China

Just over a month into the new year, Keysource has already celebrated a major project milestone, with construction beginning at the 80-acre data campus in Tianjin for Chayora, one of China’s leading data centre operators.

Our team is designing the site’s nine data centres, comprising six general-purpose facilities and three high-performance computing centres – a total of 21,000 racks and over 300MW. The site will serve the data needs of a range of international businesses and the greater Beijing region, which is home to more than 150 million people.

The Tianjin site is the first in a series of key data campuses that we will help Chayora deliver across China, alongside sites in Shanghai, Nanjing, Hangzhou and Guangzhou, as part of the business’ $2bn investment in infrastructure.

We were appointed in 2016 as Chayora’s lead design partner to provide initial strategic site planning, and as part of our partnership we have also been working with end customers to help meet their specific requirements.

Stephen Whatling, Managing Director of Keysource, said:

“China is rapidly becoming one of the world’s largest data centre markets and our partnership with Chayora is a key way we can help meet demand.”

“Working within such a strict regulatory environment and at such a large scale has put our expertise and consultancy skills to the test, but it is a challenge we have more than risen to. We’re looking forward to getting on to the next phases.”

The first phase of the Tianjin development will offer 25MW of power capacity and is expected to come online by the end of 2018.

Keysource partners to provide new data centre risk management service

Critical environments and data centre specialist, Keysource, has partnered with Corporate Risk Associates (CRA) to meet growing demand for risk management services in the data centre sector.

This partnership will offer in-depth performance and risk management services, allowing clients to build up a full risk profile of their data estates, taking in a range of factors including location, operational performance and resilience, risk and critical-process monitoring.

Keysource says that the in-depth analysis will also advise clients on selecting the most appropriate data centre model for their business, through an in-depth understanding of the risk involved in each option.

Mike West, chairman at Keysource, said:

“Despite cyber security and data resilience becoming board-level concerns, there is a dearth of specialist consultancy available in the market to help businesses grasp and manage risks in their data centre estates. This joint venture will fill that gap by combining Keysource’s vast experience in the design and operation of data centres with CRA’s risk management expertise.

“We will work with clients to ensure that they understand where risk lies in their IT infrastructure, select the best options for new investment and mitigate any potential threats. With cyber risk set to take on increasing significance for corporate due diligence, this partnership means we are well placed to capitalise on a growing market.”

Jasbir Sidhu, CEO of CRA, said:

“We have 16 years’ experience of providing risk assessments to critical industries – including maintaining national infrastructures in the power, defence and transport sectors. Our approach, which sees us look at the facilities, hardware and human elements of day-to-day operation, will enable us to ensure that the joint venture’s clients have an incredibly robust understanding of the risk related to their data centres.”

Interested in finding out more or seeing how we can ensure visibility across your data centre estate? Call us on 0345 204 3333 and speak to Oliver Goodman.

Are you asking the right questions?

As the data centre landscape changes, it is becoming increasingly important to ask the right questions and include all stakeholders when considering your data centre options. As we continue to see a disconnect between design and operation, our Associate Director, Steve Lorimer, highlights why we, as consultants, need to challenge our customers to understand what they are trying to achieve, rather than taking briefs at face value. You can read the article below or see the full magazine here.

Traditionally, the design of new data centres has been at the forefront of clients’ minds when procuring new IT infrastructures. Meanwhile the less glamorous maintenance and operation element is put on the backburner until, in some cases, after the build is complete.

In recent years this has led to data centre systems that are excessively expensive and unable to perform in the long-term. The industry has been relatively slow in resolving this but now, more than ever, clients need greater insight to help them navigate the wealth of solutions on the market while minimising costs.

Last year we aimed to do this through the launch of our specialist consultancy division. We recognised the need to join up our FM and design-and-build offerings as a single service that can guide clients in considering both elements right from the outset. Since then, it’s proved to be the panacea clients didn’t know they needed.

IT is increasingly integral to companies’ wider business strategies as well as their day-to-day operation. Even now, high-profile examples of server downtime are causing huge reputational damage, affecting stock prices and customer perceptions. This is only set to continue as businesses increase their reliance on big data, automation and services underpinned by highly available systems.

Often end users have a preconceived idea of what they want for their data centre system. From a design perspective this can be any number of in-house, co-located, cloud or hybrid solutions. Just as often, they are blind to new technology on the market and the latest cutting-edge systems. Stripping the process back to the fundamental question – ‘what do you want to achieve?’ – is more vital than ever.

Clients have never faced a broader range of options than they do today. Navigating this with them can show that initial plans, and the combination of new technology they want to include, may be too expensive or, in the worst cases, may not meet their objectives once maintenance and design are factored in.

Too often the industry simply takes the brief from clients without challenging it. We now work with clients before they put design and build tenders out to the market – working alongside internal teams to develop a system that meets their needs, is future-proof and is cost efficient in the long term.

As one example, one of the biggest operational costs clients face in running their own in-house data centre is cooling, and failure here can result in significant downtime. Design teams will often aim to optimise cooling systems across the rack space but, when it comes to operation, FM teams need to be in the loop to work out whether different permutations of cooling systems will be easy to access and maintain.

As the industry attempts to meet the best-practice guidance set out in the BS EN 50600 standard, collaboration will become even more important. With both design and maintenance considerations in the guidelines, no single party at the table can produce a cutting-edge data centre any longer, particularly when clients are tempted to overinvest in new technology without considering their current and expected capacity needs and the long-term maintenance costs of these systems.

