As the data centre landscape changes, it is becoming increasingly important to ensure you are asking the right questions and including all stakeholders when considering your data centre options. As we continue to see a disconnect between design and operation, our Associate Director, Steve Lorimer, highlights why we, as consultants, need to challenge our customers to understand what they are trying to achieve, rather than taking briefs at face value. You can read the article below or see the full magazine here.
Traditionally, the design of new data centres has been at the forefront of clients’ minds when procuring new IT infrastructure, while the less glamorous maintenance and operation element is put on the back burner – in some cases until after the build is complete.
In recent years this has led to data centre systems that are excessively expensive and unable to perform in the long-term. The industry has been relatively slow in resolving this but now, more than ever, clients need greater insight to help them navigate the wealth of solutions on the market while minimising costs.
Last year we aimed to do this through the launch of our specialist consultancy division. We recognised the need to join our FM and design-and-build offerings into a single service that can guide clients in considering both elements right from the outset. Since then, it’s proved to be the panacea clients didn’t know they needed.
IT is increasingly integral to companies’ wider business strategies as well as their day-to-day operations. High-profile examples of server downtime are already causing significant reputational damage, affecting stock prices and customer perceptions. This is only set to continue as businesses grow their reliance on big data, automation and services underpinned by highly available systems.
End-users often have a preconceived idea of what they want from their data centre system. From a design perspective this can be any number of in-house, co-located, cloud or hybrid solutions, and they can be dazzled by new technology on the market and the latest cutting-edge systems. Stripping the process back to the fundamental question, ‘what do you want to achieve?’, is more vital than ever.
Clients have never been faced with a range of options as broad as it is today. Navigating this with them can show that their initial plans, and the combination of new technology they want to include, may be too expensive or, in the worst cases, may not meet their objectives once maintenance and design are factored in.
Too often the industry simply takes the brief from clients without challenging it. We now work with clients before they put design and build tenders out to the market – working alongside internal teams to develop a system that meets their needs, is future-proof and is cost efficient in the long term.
One of the biggest operational costs clients face in running an in-house data centre is cooling, and failure here can result in significant downtime. Design teams will often aim to optimise cooling systems across rack space but, when it comes to operation, FM teams need to be in the loop to work out whether different permutations of cooling system will be easy to access and maintain.
As the industry attempts to meet the best practice guidance set out in the BS EN 50600 standard, collaboration will become even more important. With both design and maintenance considerations in the guidelines, having only one party at the table can no longer produce a cutting-edge data centre. This is particularly true if clients are tempted to overinvest in new technology without considering their current and expected capacity needs and the long-term maintenance costs of these systems.
We are proud to announce we have been appointed by a leading cruise ship operator to design and install a new data centre at its new headquarters in Uxbridge.
The contract will see Keysource deliver a turnkey data centre system to support the client’s business critical IT services associated with the operation of 14 cruise ships – including performance monitoring and proactive identification of potential issues.
The project follows a decision by the client to relocate its staff from Italy and central London to consolidate its operations and support future growth. The system will also house the call recordings generated by the client’s UK contact centre, which receives calls from new and existing customers around the world.
Keysource’s appointment will see it lead on key design objectives including the support of flexible IT requirements and the ability for the data centre to work efficiently at low loads in line with business requirements.
Jon Healy, Associate Director at Keysource, said:
This new solution will be developed in line with the latest regulations and industry standards. It will not only guarantee long-term reliability and availability of critical services to the business but ensure they are delivered in a sustainable and efficient way, maximising the return on investment.
Published in the Winter Data Centre Management Magazine, our Head of Design, Stephen Lorimer, looks at resilience and how the industry mindset needs to change.
A common definition of ‘resilience’ is ‘the capacity to recover quickly from difficulties’. When applied to the data centre sector it is more commonly understood as the ability of an IT infrastructure to continue to operate following an issue such as a power outage, equipment failure or human error.
There is a general misconception that all data centres should be highly resilient. In fact, I lose count of the times that customers have started initial meetings by requesting a “Tier III or Tier IV facility and, above all, absolute protection against any data loss.” Quite often by the end of the initial engagement they realise that they are already achieving the redundancy they need within their IT layer and can normally operate safely with a lower resilience classification.
Historically, organisations have often made the mistake of designing highly resilient data centres without properly considering whether they actually need that level of resilience. As a result they have ended up with, at best, facilities that were expensive to construct and remain unnecessarily complex and expensive to operate.
At the heart of this problem is the fact that decisions about the resilience of the supporting M&E infrastructure are often made without proper consideration of what level of availability the IT service is actually required to deliver. Addressing this involves taking a step back and looking at the wider IT strategy – an approach endorsed in the EU Code of Conduct, which clearly states that organisations should “deploy resilience in line with requirements”.
This failure in approach can happen for a number of reasons, but the most common one we see is that organisations do not engage with a specialist. Whilst in-house teams are often extremely competent, data centre design is rarely their core skill, so the end result may not always meet, and can sometimes contradict, the company’s IT objectives.
For those occasions where design and build is the best option we, as an industry, need to put more focus on ensuring that data centres are ‘designed for operation’ and that the team responsible for maintaining and running the facility is engaged from the outset. We encourage all stakeholders to be part of the process from the start, as we feel this delivers the best results. This early engagement is key: without all stakeholders involved, not every impact may be properly considered and addressed as part of the design…
Organisations today have many more options when it comes to storing and managing their data and supporting their IT infrastructure. Laurence Baker looks at how organisations can ensure they have a future-ready solution and discusses the rise of the modular solution and the benefits it can bring.
Ten years ago there was no real outsourcing model in our sector, and organisations had to build and run their own data centres. So they invested heavily and built huge facilities in anticipation of strong predicted growth. In many cases these were large facilities with highly resilient Tier IV infrastructure, as organisations believed they needed 100% availability and, above all, absolute protection against any data loss. Then the economic downturn happened.
In some cases these decisions were made without properly considering their requirements. As a result many have ended up with, at best, facilities that have been both expensive to construct and continue to be operationally complex and expensive to run.
At Keysource we find that our customers rarely deploy a full IT load from day one, if ever, which raises the question of whether the infrastructure to support it needs to be in place from day one. To determine this, we believe early engagement is key to making the right decisions. Not having all the relevant stakeholders involved from the outset may mean that the team fails to understand the real business and IT requirements, or that the wrong solution is specified and deployed.
This is increasingly important as businesses today are becoming ever more dependent on IT systems and associated data, driven by changes such as the upsurge in cloud services, digitalisation and the Internet of Things. As a result, a key priority is ensuring the availability of these systems, with companies looking for the best solution to meet their requirements as efficiently as possible. For many, the biggest challenge is keeping the IT infrastructure aligned to a fast-moving and ever-changing business environment, and ensuring that any solution is future-ready whilst keeping costs to a minimum.
As a result, many organisations are opting for modular data centre solutions which are constantly evolving to address a wider range of business and operational requirements. Traditionally modular solutions were developed to overcome construction and deployment challenges, but now there is an overwhelming demand for these scalable facilities that also deliver high levels of performance, resilience and efficiency…
Read the full article on page 20 of Data Centre News Magazine
Find out more about our modular data centre solutions
Keysource, the expert in business critical environments, has been appointed to design and build a new data centre for a leading pharmaceutical company at its production site in the North of England. It will replace an existing, ageing facility and will underpin the critical services being delivered to the business for the next 10 to 15 years, saving over £250k in energy costs.
The location for the new data centre will be an existing IT services office. The Keysource team will strip this room and carry out any health and safety and aesthetic refurbishments before installation commences. The project will be completed under live conditions so that the existing data centre and staff working at the campus are not disturbed while construction takes place.
Designed to be concurrently maintainable, with N+1 critical cooling and power, the new data centre will also be highly efficient, fully utilising the ASHRAE recommended temperature range. In addition, an environmental monitoring system will be deployed, providing real-time insight across the data centre environment and allowing cooling to be further optimised.
Mike West, Managing Director at Keysource, concluded:
This new data centre will be developed in line with the latest regulations and industry standards. It will not only guarantee long term reliability and availability of critical services to the business but ensure they are delivered in a sustainable and efficient way, thereby maximising the return on investment.