Over the last few years, Data Centers have raised their profile significantly. As a result, the industry has come under a significant amount of scrutiny over power inefficiency, water wastage and overall carbon emissions.
Accordingly, Data Centers today are under extreme pressure to become more sustainable in every area where they negatively impact the environment.
This is not an easy fix, and certainly not one that can be completed overnight. There are, however, significant changes that can be adopted relatively quickly to allow for a smoother transition; for example, switching from fossil-fuel-derived power to cleaner electricity supplies, which can be done with relative ease by securing renewable energy agreements with a supplier. It should be said, however, that whilst renewable generation is being built at a fantastic rate, the supply is still finite, and until it is more ubiquitous, Data Centers, like other large consumers, still need a reliable supply.
What is better than using renewable energy?
Using less energy overall, improving its availability elsewhere and reducing the wider grid's reliance on fossil fuels.
In this document, we aim to provide information on Data Center cooling methodologies and the effects they have on the facilities in which they are embedded, in particular the benefits and challenges of liquid cooling.
Multiple Choices
When talking about next generation Data Center cooling, it is important to realise that there are now multiple cooling methodologies available to design engineers, each with its own intricacies to be navigated.
kW Duty Deployment
In 2006, the kW per cabinet capacity was around 1 – 1.5kW per 42RU cabinet, increasing to around 2.4kW per 47RU cabinet in 2011. The Data Center industry responded by building facilities designed for an average of 5kW per cabinet, which was seen as providing significant flexibility for the foreseeable future; however, this requirement was surpassed in 2017.
As hardware has advanced and its power draw has increased, the Data Center has had to adapt to overcome the challenge of increased kW demand per cabinet in several ways.
One option was to share capacity: a facility designed for 5kW per cabinet could accommodate a 10kW cabinet, but only if the cabinet next to it remained empty to keep the balance.
When real estate is at a premium, this is a costly waste that must either be charged to the customer or absorbed by the Data Center. With cabinet kW continuing to rise, it raises questions about the scalability of existing designs; a Data Center could end up with as much as a third of its space empty or unusable if the industry continues to increase power demand.
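To make the scale of this waste concrete, the hypothetical sketch below estimates the fraction of cabinet positions stranded when per-cabinet demand exceeds the design density. The 5kW design figure comes from the text above; the demand figures are illustrative assumptions.

```python
# Illustrative sketch: stranded space when per-cabinet demand exceeds design density.
# All figures are assumptions for the example, not measured facility data.

def stranded_space_fraction(design_kw_per_cabinet: float, actual_kw_per_cabinet: float) -> float:
    """Fraction of cabinet positions left empty to stay within the power/cooling design."""
    if actual_kw_per_cabinet <= design_kw_per_cabinet:
        return 0.0
    # Each populated cabinet "borrows" budget from neighbouring positions,
    # so only design/actual of the positions can be filled.
    usable_fraction = design_kw_per_cabinet / actual_kw_per_cabinet
    return 1.0 - usable_fraction

# A 5kW-per-cabinet design hosting 10kW cabinets strands half the positions.
print(stranded_space_fraction(5, 10))                 # 0.5
# Hosting 7.5kW cabinets strands roughly a third, matching the figure above.
print(round(stranded_space_fraction(5, 7.5), 2))      # 0.33
```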

Because of these advancements in hardware and its increase in duty, some Data Centers have added air-management adaptations and additional equipment to their existing designs to help meet the increased cooling demand.
These adaptations typically include aisle containment, in-row cooling units and, in some designs, higher duty air vent tiles.
Adaptations such as these have served Data Center facilities well, meaning that where high kW duty cabinets were required, the facility could accommodate them and deliver the necessary cooling.
However, this came at increased cost, and these adaptations typically still have a cooling limit of around 30kW per cabinet, so even these technologies are not capable of efficiently cooling the latest cabinet densities.
kW Duty Per Technology and its Efficiency
The graph above demonstrates the significant increase in kW demand for facilities, where global average kW requirements per cabinet closely track the kW-per-cabinet design of the Data Center, leaving little room for facilities to evolve to meet higher demands where required.
However, with the industry now under a much stronger drive to be sustainable, we simply cannot continue designing facilities with so many built-in conflicts if we are to achieve a better future for both operators and those utilising colocation space.
We have to introduce new, more efficient technologies which can deliver within both existing and new-build infrastructure.
kW cooling capacities per technology also need to be reviewed. The graph below shows the average kW duty each technology supports; some indirect cooling technologies could exceed the duty shown, depending on the climate in which they are deployed.

White Space Data Center Cooling
The Data Center has always been the key foundation supporting IT hardware, providing the space, power and cooling for its exact requirements. However, as hardware has evolved and power demand has increased, the design architecture, and specifically the cooling capability of the Data Center, has started to fall behind.
This is where the opportunity arises for Data Center designers to re-evaluate their existing design criteria and utilise alternative technologies that can enhance the future of their facilities and, in turn, the efficiency and sustainability of the Data Center.
Here we examine and give insight into the benefits of some existing and new technologies that are helping to address the increased demand for Data Center white space cooling.
Immersion
Immersion cooling is a relatively new technology, and comes in two versions: single-phase and two-phase. Both allow servers and other components that traditionally sat in a free-standing cabinet to be submerged in a tank of thermally conductive dielectric fluid. With this method, the need for air cooling around the hardware is eliminated, including the fans within the servers. However, supplementary cooling is still required, because such a large system of high-temperature fluid rejects a significant amount of heat into the room; this supplementary cooling will typically be 10-20% of the overall load.
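As a rough illustration of that supplementary cooling requirement, the sketch below applies the 10-20% figure above to an assumed 400kW immersion-cooled IT load; the load figure is an assumption for the example only.

```python
# Illustrative sketch: supplementary room cooling for an immersion deployment.
# The 10-20% figure comes from the text above; the 400kW IT load is an assumption.

it_load_kw = 400.0                      # assumed total IT load handled by immersion tanks
supplementary_fraction = (0.10, 0.20)   # share of load still rejected to the room air

low, high = (it_load_kw * f for f in supplementary_fraction)
print(f"Supplementary air cooling required: {low:.0f}-{high:.0f} kW")
# -> roughly 40-80 kW of conventional room cooling still has to be provisioned
```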
In a single-phase system, servers are installed vertically in a thermally conductive dielectric fluid. Heat is transferred to the liquid through direct contact with the server components and removed by heat exchangers in a CDU (Coolant Distribution Unit).
Another challenge that is often overlooked concerns 'dual-phase' or 'two-phase' systems: the fluids used are in essence refrigerants, with a global warming impact that can potentially be thousands of times their own weight in CO2 equivalent.
Direct Chip Level Cooling
DCLC, or Direct Chip Level Cooling, is a highly efficient heat rejection approach that uses water or single- or two-phase dielectrics to remove heat directly from chip sets. It is fast becoming one of the 'go to' technologies as an attractive way to boost or supplement the base load of air cooling for higher density applications, above what aisle containment can support.
This system works well in new or retrofit Data Centers and complements existing CHW (chilled water) designs in a very efficient way, extending or augmenting many Data Center facilities.
It supports any server, including a number that come pre-integrated, and allows higher fluid EWT/LWT (entering/leaving water temperatures) with improved free cooling potential. The technology is considered safe and reliable, and offers potential for heat recovery into process water; however, this is likely to need a CDU for heat transfer, and these CDU systems need to be relatively local to the load and can consume significant areas of the white space.
DCLC is an excellent technology that has taken hardware cooling to the next level; however, because it treats only the GPU/CPU, it typically removes between 50-70% of the total IT heat load. You will therefore still need another form of cooling to remove the remaining heat.
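The sketch below illustrates the residual heat that still has to be handled by air when DCLC captures 50-70% of the IT load. The capture range comes from the text above; the 80kW cabinet load is an assumption for the example.

```python
# Illustrative sketch: residual heat to be air-cooled alongside a DCLC deployment.
# The 50-70% capture range comes from the text above; the 80kW cabinet is an assumption.

cabinet_load_kw = 80.0            # assumed per-cabinet IT load
dclc_capture = (0.50, 0.70)       # fraction of IT heat removed by the chip-level loop

for capture in dclc_capture:
    residual_kw = cabinet_load_kw * (1.0 - capture)
    print(f"With {capture:.0%} capture, {residual_kw:.0f} kW per cabinet still needs air cooling")
# -> 24-40 kW per cabinet, i.e. around or above the ~30kW limit of air-based adaptations
```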
The heat sinks and other materials typically have to be removed and replaced at every IT refresh due to variations in hardware design, which is extremely costly; in some cases up to 60% of the initial capital expenditure.
DCLC is a good solution for HPC, and its deployment in existing colocation Data Centers reflects the reality that this form of equipment will normally require supplementary cooling for the foreseeable future, as it is concentrated at the main CPU/GPU heat sources. It serves a purpose in HPC hardware applications and provides some support for deployments in traditional Data Center / MTDC environments.
It is a more mature approach than immersion cooling, and reports of leaks, which would normally be a cause for concern, are largely mitigated by working with key established providers. However, given the significant number of parts used in this type of deployment, the TCO model can be challenging due to the lack of forward IT compatibility and significant CAPEX replacement and deployment costs.
Rear Door Coolers (RDC)
RDCs have been around the longest of all the next generation white space technologies, albeit initially in passive deployments. Since the emergence of active RDCs, or RDHx (Rear Door Heat Exchangers), some models offer the highest heat rejection capabilities in the industry today.
Our product, the ColdLogik RDC, is designed to run on liquids such as water or harmless synthetic fluids, potentially improving efficiency and increasing flexibility; unlike the other new technologies, it can also negate the need for mechanical cooling.
Most RDCs utilise a closed loop system, meaning they do not consume water within the loop. They also waste significantly less water than traditional Data Center deployments, and in some scenarios would consume no water at all over a full year without mechanical cooling. As some Data Centers face intense pressure to reduce the amount of potable water they use, this is of significant benefit both to them and to the local communities they affect.
The ColdLogik system cools 100% of the room load, thereby removing the need for any additional cooling, improving efficiency significantly and saving real estate.
This type of technology is non-intrusive, and as it is AALC (Air Assisted Liquid Cooling), it poses no risk to equipment warranty whilst allowing for easy integration and deployment into any existing environment.
The cost of the RDCs and the components needed to deploy them means the overall TCO is attractive and not as CAPEX heavy as other technologies. The system can also deliver full facility PUEs of 1.035 – 1.09, meaning the OPEX position is also highly favourable due to the savings the doors enable: up to 98% cooling power savings compared with traditional CRAC/H air cooling, up to 83% compared with traditional aisle containment cooling, and up to 45% compared with in-row cooling.
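To put those PUE figures in context, the sketch below compares annual facility energy at an assumed 1MW IT load. The 1.035 – 1.09 range comes from the text above; the 1.5 comparison PUE, the IT load and the arithmetic are assumptions for illustration.

```python
# Illustrative sketch: annual facility energy at different PUEs.
# The 1.035-1.09 PUE range comes from the text above; the 1MW IT load and
# the 1.5 comparison PUE are assumptions for the example.

HOURS_PER_YEAR = 8760

def annual_facility_mwh(it_load_mw: float, pue: float) -> float:
    """Total facility energy (IT plus cooling and overheads) over a year, in MWh."""
    return it_load_mw * pue * HOURS_PER_YEAR

it_load_mw = 1.0
for pue in (1.5, 1.09, 1.035):
    print(f"PUE {pue}: {annual_facility_mwh(it_load_mw, pue):,.0f} MWh/year")
# Moving from PUE 1.5 to 1.035 on a 1MW IT load saves roughly 4,000 MWh per year.
```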
ColdLogik systems will support in excess of 200kW per cabinet whilst allowing N+1 redundancy, which is a unique attribute in the marketplace, and no specialist hardware is needed for integration.
Free cooling is the true goal for ColdLogik RDC systems, and this can be achieved even in the harshest environments. RDCs are compatible with almost all external plant options, including chillers, dry coolers, cooling towers, boreholes, rivers and lakes. The RDC also allows for higher water temperature differentials, enabling heat recovery where a use exists for the waste heat.
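The benefit of a wider water temperature differential follows from the basic heat transfer relationship Q = m·cp·ΔT. The sketch below shows how doubling the differential roughly halves the required flow for an assumed 100kW cabinet load; the load and ΔT values are assumptions for illustration.

```python
# Illustrative sketch: how a wider water temperature differential reduces flow.
# Q = m_dot * cp * dT; the 100kW load and the two dT values are assumptions.

CP_WATER = 4.186  # kJ/(kg*K), specific heat capacity of water

def flow_rate_l_per_s(heat_kw: float, delta_t_k: float) -> float:
    """Water flow (litres/second, ~kg/s) needed to carry heat_kw at a given delta T."""
    return heat_kw / (CP_WATER * delta_t_k)

heat_kw = 100.0
for delta_t in (6.0, 12.0):
    print(f"dT {delta_t:.0f}K -> {flow_rate_l_per_s(heat_kw, delta_t):.2f} l/s")
# Doubling the differential roughly halves the required flow, and the warmer
# return water is a more useful source for heat recovery.
```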
Some RDC manufacturers also require CDUs, which may need to take up space close to the deployment, whereas ColdLogik systems can be fitted to any existing cabinet, allowing simple retrofit options without the need for CDU systems.
The RDC is a proven technology and the most mature and versatile of all next generation white space Data Center cooling options, efficiently removing 100% of the heat load, more than any of the other next generation technologies. It saves water, a key element the DC industry needs to focus on, and also delivers power and space savings alongside huge carbon savings.