Homegrown internet firm and smart cities operator Connexin has partnered with critical environment specialist, Keysource, to deliver its £5 million data centre in Hull, as it advances its smart cities growth strategy.
The scheme, known as CXNDC, follows Connexin securing a ten-year Wi-Fi contract with Hull City Council. Connexin is scaling up its work with local authorities to provide Wi-Fi as a public service for connected devices such as road sensors and energy and security systems, as demand for smart city technology grows.
Keysource has designed the ‘state-of-the-art’ 200-rack CXNDC data centre to Tier III standard, covering nearly 10,000 sq ft and delivering more than 40,000 Mbps of internet connectivity. The project will support demand from local and national clients and will also become the business’s new headquarters. Work is due to begin on the site in July.
Richard Clifford, Head of Innovation at Keysource, said:
“The growing demand for smart cities infrastructure in the UK represents a significant opportunity for Keysource. We have a long track record of delivering for colocation and internet service providers like Connexin and understand the particular needs of this growing area of the data centre market. CXNDC will provide Connexin with a state-of-the-art data asset and we’re proud to be supporting both it and Hull’s smart city ambitions.”
Furqan Alamgir, CEO at Connexin, said:
“With CXNDC we are making a major investment in Hull as it progresses its journey to become a leading UK smart city. Keysource’s consultative approach has been key to ensuring this new asset is designed and optimised with the long-term operation of the site in mind.”
The Friday before a bank holiday is always a good day in the Keysource offices, but the team have another reason to celebrate this morning.
Last night at the 2018 Data Centre Solutions Awards, Keysource picked up the award for New Design / Build Project of the Year for their project with the University of Exeter.
Keysource has been working with the University of Exeter as their key technology partner since 2016, supporting them as they looked at the different options available for deploying high performance compute.
Instead of adopting an outsourced compute model, the university decided to build its own data centre to act as the critical infrastructure for its proprietary HPC system, called Isca. Isca provides a next-generation research computing environment, combining traditional HPC with a private cloud infrastructure – which was a first for the UK education sector.
This would allow it to ensure its specific research requirements and capacity challenges were met and give it new opportunities to build partnerships with other universities, including the GW4 consortium, and industrial partners.
After collecting the award, Jon Healy, Managing Executive of Keysource said:
“We are absolutely delighted to have secured this award in such a prestigious category. We were up against some major competition, so the success of the University of Exeter project is testimony to the hard work and ongoing dedication of the entire Keysource team, who delivered this fantastic and truly innovative facility on time and with no disruption.”
The DCS Awards are designed to recognise the product designers, manufacturers, suppliers and providers in the data centre sector.
Critical environment specialist, Keysource, has won a five-year contract with colocation provider, Indectron, to provide facilities management and technical services at its Gloucester data centre.
Keysource will provide a full range of FM services at Indectron’s Shield House site, including engineering, service management, remote monitoring and coordinating all planned maintenance.
This contract builds on Keysource’s prior consultancy work with Indectron, which has included providing feasibility, design, and technical due diligence services for the development and commissioning of the 20,000 sq ft facility.
The 3MW site opened in 2018 and provides secure and flexible colocation services to its clients.
Stephen Lorimer, Associate Director at Keysource, said:
“We have a long history of working with colocation providers like Indectron. We’re seeing our clients increasingly value a consultancy-led approach which goes above and beyond the nuts and bolts of facilities management to include technical and client support services.”
“We’ve built a strong working relationship with Indectron over the past three years and the team knew our business could provide the standard of FM support it needed, as well as the technical and consultancy expertise to help enhance the site’s service and drive the business forward. We’ve helped develop this facility from beginning to end and look forward to continuing our partnership by supporting its management.”
Andrew Bence, Managing Director at Indectron, said:
“Having an FM provider that was a trusted partner was important to us. We were confident in Keysource’s ability to deliver what we needed, while providing the additional expertise to ensure our site offers the best service. This peace of mind allows us to focus on growing our business.”
Keysource is part of integrated property services group Styles&Wood, which provides a full range of professional and contracting services to some of the UK’s premier brands and leading blue chip organisations.
This was first published in the March issue of Data Centre News Magazine.
Richard Clifford, Head of Innovation at Keysource, explains that data centre owners and managers could be using their power infrastructure to generate revenue without sacrificing disaster recovery processes.
The UK energy market has seen significant price increases over recent years – as much as 62.6% between 2006 and 2016, according to price comparison provider Selectra. The market’s volatility was highlighted in December 2017 when wholesale gas prices hit their highest levels in six years, due to supply disruption in Europe. As such, it’s becoming increasingly important for volume energy users to consider innovative ways to reduce costs and ensure they remain competitive.
Battery storage has been billed as the missing piece of the puzzle in addressing the world’s energy challenges. Companies like Tesla are investing in research to create cutting-edge batteries for homes and businesses that store energy from renewables. But energy storage is nothing new. Similar technology lies within Uninterruptible Power Supply (UPS) systems, which are already used by the vast majority of data centre owners. Using these systems in new ways could allow the sector to guard against the rapid price changes in the energy market.
UPS systems have been used in data centres for decades, but it’s only recently that operators have started to consider this infrastructure as a potential source of revenue generation, by taking advantage of National Grid’s Firm Frequency Response (FFR) incentive.
On a basic level, a UPS draws in energy while the infrastructure is running and automatically switches to power the data centre, keeping systems live in the event of a failure. The stored energy is effectively a reserve that rarely gets used. However, some operators are now using this reserve to power their IT systems when energy prices are at their highest, switching back to mains supply when prices are lower. In doing so they’re able to take advantage of the best tariffs.
The obvious question is whether doing this will affect disaster recovery. Using UPS in this way means more frequent recharging and should a power failure happen at this time, there could be a gap in uptime. For some data centre operators, this risk has been enough to completely remove any interest in using UPS for battery storage.
There’s an easy fix, however. By using two systems – one for disaster recovery and a second to reduce costs – operators can avoid risk and assure customers and stakeholders that there will always be back-up power available.
The application of this is relatively straightforward. One UPS system sits below the infrastructure and draws in energy which can be routed back to power IT in the event of a failure – purely used for disaster recovery. Meanwhile, a second is connected further upstream, at the transformer. This second UPS simply stores energy directly from the grid as the IT infrastructure is powered from the mains. It can then act as the battery for use when tariffs are high, helping generate savings without the risk.
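The switching logic behind this dual-UPS model can be sketched in a few lines. The function name, tariff threshold and reserve percentages below are illustrative assumptions, not details of any real control system; the key point is that only the second, upstream storage UPS ever discharges, while the disaster-recovery UPS is left untouched.

```python
def power_source(tariff_p_per_kwh: float,
                 peak_threshold: float,
                 storage_charge_pct: float,
                 min_reserve_pct: float = 20.0) -> str:
    """Choose the supply for the IT load.

    The disaster-recovery UPS is never involved in this decision;
    only the second, upstream storage UPS discharges at peak prices.
    All thresholds are hypothetical, for illustration only.
    """
    if tariff_p_per_kwh >= peak_threshold and storage_charge_pct > min_reserve_pct:
        # Tariff is at its peak and the storage UPS holds enough charge:
        # discharge stored energy instead of buying at the peak rate.
        return "storage-ups"
    # Otherwise run (and recharge) from the mains while prices are lower.
    return "mains"

# Example: at a 25p/kWh peak tariff with an 80%-charged storage UPS,
# the load runs from storage; at 10p/kWh it runs from the mains.
print(power_source(25.0, 20.0, 80.0))  # storage-ups
print(power_source(10.0, 20.0, 80.0))  # mains
```

The minimum-reserve check matters: it is what keeps a margin of stored energy in hand so that chasing tariffs never drains the storage UPS completely.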
Operators that have made the change to using UPS in this way have achieved savings in the region of 5 to 10 per cent on their energy bills. And the model doesn’t just allow for savings. Data centre operators can generate revenue by selling stored energy back to the grid too, thanks to incentives like FFR – a framework that allows third parties to feed energy back to the grid.
Ultimately the viability of this strategy depends on the data centre and the operator. Among the things to consider is an increase in operational costs: if UPS systems are being used more frequently, maintenance of those systems may have to ramp up. And there is obviously the capital expenditure that comes with investing in a second UPS system and connecting it to the existing data centre infrastructure.
Yet the case for doing so is compelling. Margins in the sector are tightening – some are a ninth of what they were a decade ago. Meanwhile, energy is still among the largest overheads businesses face – anywhere from 25 per cent to 60 per cent of running costs, according to trade association Intellect.
The good news is that data centre demand has never been higher, due to the increasingly business-critical nature of IT systems and growing demands for the rapid accessibility of data. But this means that energy costs will likely continue to be a pressure point which restricts some operators’ ability to grow. Using UPS to generate savings is not a panacea for the sector’s challenges, but it is an example of the sort of small change data centre operators can make to ease some of the pressure.
In this month’s CIBSE Journal, our Head of Innovation, Richard Clifford, argues that early collaboration is essential to ensure customers understand and avoid the hidden costs of energy saving measures within data centres.
Data centres are inherently energy intensive, making up as much as half of a company’s energy consumption in some cases. Naturally, as financial, regulatory and CSR pressures have increased, energy efficiency has been pushed to the top of the agenda for data centre design. The myriad of options available in the market, compounded by different consultants all championing their own approach or solutions, has led to confusion and in some cases a lack of understanding on the part of the end user. In the race to specify efficient data centre estates, options and outcomes are not being fully explored, which can lead to increased costs and compromise overall CO2 reductions in the long term.
Part of the problem is that the industry’s go-to metrics can be misleading or easily manipulated to appear more attractive. Specifiers often focus on metrics because at a base level they provide an easy reference point that can be used to evaluate the efficiency of different options. Yet measures that bring the best results on paper may not offer the most efficient or cost-effective option and can, in extreme cases, lead to long-term operational faults.
Most of these measures also work on an assumption that the facility is operating with a full IT load, which rarely happens in practice. Energy efficiency naturally decreases at lower IT loads, and the suggested efficiency levels can take longer to come into play, if they ever do. Investing solely in a design that offers a good efficiency metric can leave a business open to higher-than-expected operational costs later down the line.
For example, PUE (power usage effectiveness) – the ratio of a facility’s total energy use to the energy used by its IT equipment – can be improved by raising rack temperatures, even though this has very little effect on real savings, as any energy saved by turning down cooling systems is shifted to server fans.
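The rack-temperature effect on PUE can be illustrated with a small sketch. The load figures here (1,000 kW of IT load, a 100 kW shift from cooling to server fans) are hypothetical, chosen only to show how the headline metric can improve while the total energy bill stays the same.

```python
def pue(it_kw: float, overhead_kw: float) -> float:
    """Power Usage Effectiveness: total facility power over IT power."""
    return (it_kw + overhead_kw) / it_kw

# Baseline: 1,000 kW of IT load, 500 kW of cooling and other overheads.
baseline = pue(1000, 500)            # = 1.5

# Raising rack temperatures cuts cooling by 100 kW, but server fans
# spin faster and absorb roughly the same 100 kW - inside the IT load,
# where PUE counts it as "useful" power.
warmer = pue(1000 + 100, 500 - 100)  # ≈ 1.36

# Total facility draw is unchanged (1,500 kW in both cases), yet the
# headline PUE has "improved" - the saving exists only on paper.
total_baseline = 1000 + 500
total_warmer = (1000 + 100) + (500 - 100)
print(baseline, warmer, total_baseline == total_warmer)
```

This is exactly why a better PUE on paper is not the same as a smaller energy bill: the metric rewards moving energy into the IT load, not reducing it.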
Metrics like PUE don’t paint a full picture on efficiency and, as consultants, we should be working with all our stakeholders to ensure they understand the true implications of how different solutions and routes will affect their facilities in the long term.
Many of the energy efficiency measures available are key examples of this and their long-term implications need to be thoroughly understood before they are committed to. Without proper consideration, some can create hidden costs elsewhere and cancel out any potential saving, or, in the worst-case scenario, have a negative impact on the facility’s resilience, increasing the risk of downtime.
Fresh air cooling, often considered one of the most eco-friendly cooling methods, is a textbook example. On paper it has almost no carbon footprint and can be particularly cost-effective for facilities located in colder climates. Such alluring benefits are leading many customers to specify these systems without considering suitability, or without being aware that operating them long-term can be more complicated than first anticipated. Fresh air cooling is highly dependent on location – the incoming air is easily contaminated by pollution or salt-laden sea air, which can damage hardware and bring high replacement costs. Preventing this damage means investing in additional cleaning and maintenance, which brings more overheads and ultimately a larger CO2 footprint if replacement parts are needed. Likewise, fresh air systems need more complex controls, such as fire detection and suppression, which add further capital and operational costs.
Another cooling factor impacting efficiency is the recent change to F-Gas legislation, which has brought forward price rises related to the management of widely used refrigerants such as R404A and R410A. This could have a large impact on pumped-refrigerant DX systems and lead to significant maintenance issues despite their lower capital costs.
Collaboration is key
It may be an attractive option to choose methods that appear to represent a lower upfront investment but without full consideration of the operational lifecycle, any financial or carbon savings may be eliminated in the long term. Many end-clients are stung because they don’t take a wide enough view of their data centre’s management, agreeing to efficiency measures without consulting the teams responsible for the day-to-day running of the facility.
Early consultancy and collaboration at the design stage are vital to prevent this from occurring. All stakeholders, including FM and operational teams, should be involved in the design process so that the operation and maintenance of the facility is considered from the outset. This allows any potential pitfalls to be flagged before any decisions are set in stone. It also means all teams can work together to argue for models that may cost more initially, but offer greater long-term efficiency and business flexibility.
Efficiency will always be a concern for the industry, as it should be. But this needs to go beyond just meeting metrics. By considering a facility’s lifecycle rather than just initial costs, environments can be created that save money and energy throughout the course of their use. As consultants we need to be driving this shift in focus to ensure data centres are built with long-term operation in mind.
This article was first published in the April edition of the CIBSE Magazine