Liquid Cooling: Benefits and Challenges

Over the last few years, Data Centers have raised their profile significantly. As a result, the industry has come under considerable scrutiny over power inefficiency, water wastage and overall carbon emissions.

Accordingly, Data Centers today are under extreme pressure to become more sustainable in every area where they negatively impact the environment.

This is not an easy fix, and certainly not one that can be achieved overnight. There are, however, significant changes that can be adopted relatively quickly to allow for a smoother transition. For example, switching from a fossil fuel supply to a cleaner electricity supply can be completed with relative ease by securing renewable energy bonds with a supplier. It should be said, however, that whilst renewable energy is being built at a fantastic rate, the supply is still finite, and until it is more ubiquitous, the demand from Data Centers, like that of other consumers, needs to be met reliably.

What is better than using renewable energy?

Using less energy overall, to improve its availability elsewhere and reduce the wider grid's reliance on fossil fuels.

In this document, we aim to provide information on Data Center cooling methodologies and the effects they have on the facilities in which they are embedded, in particular the benefits and challenges of liquid cooling.

Multiple Choices

When talking about next-generation Data Center cooling, it's important to realise that there are now multiple alternative choices: many next-generation cooling methodologies are available to design engineers, each with its own intricacies to be navigated.

kW Duty Deployment

In 2006, the capacity per cabinet was around 1 – 1.5kW per 42RU cabinet, increasing to around 2.4kW per 47RU cabinet in 2011. The Data Center industry responded by building facilities that could cope with an average of 5kW per cabinet, which was seen as giving the industry significant flexibility for the foreseeable future; however, this requirement was surpassed in 2017.

As the hardware has advanced, the Data Center has had to adapt in several ways to overcome the challenge of increased kW demand per cabinet.

One of the options was to share the capacity: a facility designed for 5kW per cabinet could accommodate a 10kW cabinet, but the cabinet next to it would have to remain empty to keep the balance.

When real estate is at a premium, this is a costly waste which either has to be charged to the customer or absorbed by the Data Center. With cabinet kW continuing to trend upwards, it raises questions about the scalability of existing designs: a Data Center could end up with as much as a third of its space empty or unusable if the industry continues to increase power demand.

Global Average Cabinet kW Requirements and Data Center Design and Build Deliverable per cabinet

Because of these hardware advancements and the increase in duty, some Data Centers have added air-management adaptations and additional hardware to their existing design to help meet the increased cooling demand.

These adaptations are typically Aisle Containment, In-Row Cooling units and, in some designs, higher-duty air vent tiles.

Adaptations such as these have supported Data Center facilities well, meaning that where extreme kW duty cabinets were required, they were able to deliver the necessary cooling.

However, this came at an increased cost, and these adaptations typically still have a cooling limit of around 30kW per cabinet, so even these technologies are not capable of efficiently cooling the latest cabinet densities.

kW Duty Per Technology and its Efficiency

The graph above demonstrates the significant increase in kW demand: global average kW requirements per cabinet are now closely aligned with the kW per cabinet that Data Centers were designed and built to deliver, leaving facilities little room to evolve to meet higher demands where required.

However, with the industry now under a stronger drive to be more sustainable, we simply cannot continue designing facilities with so many built-in conflicts if we are to achieve a better future for both operators and those utilising colocation space.

We have to introduce new, more efficient technologies which can deliver within both existing and new-build infrastructure.

kW cooling capacities per technology need to be reviewed. The graph below covers the average kW duty each technology supports. Some indirect cooling technologies could exceed the kW duty shown, depending on the climate in which they are deployed.

ColdLogik Rear Door Coolers vs Other Technologies

White Space Data Center Cooling

The Data Center has always been the key foundation supporting IT hardware, with the capability to provide the space, power and cooling for exact requirements. However, as the hardware has evolved and increased in power demand, the design architecture, and specifically the cooling capability of the Data Center, has started to fall behind.

This is where the opportunity arises for Data Center designers to re-evaluate their existing design criteria and utilise the alternative technology available, which can enhance the future of their facilities and, in turn, the efficiency and sustainability of the Data Center.

Here we examine and give an insight into the benefits of some existing and new technologies helping to address the increased demand for Data Center white space cooling.

Immersion

Immersion cooling is a relatively new technology. There are two versions, single-phase and two-phase. Both allow servers and other components that traditionally sat in a free-standing cabinet to be submerged in a tub of thermally conductive dielectric fluid. With this method, the need for air cooling around the hardware is eliminated, including the fans within servers. However, supplementary cooling is still required, because such a large system of significantly high-temperature fluid releases excess heat into the overall room; this supplementary cooling will typically account for 10-20% of the overall load.

In a single-phase system, servers can be installed vertically in a dielectric fluid that is thermally conductive. Heat is transferred to the liquid through direct contact with server components and removed by heat exchangers in a CDU (coolant distribution unit).

Another challenge that is often overlooked concerns 'dual-phase' or 'two-phase' systems: it is important to remember that these fluids are in essence refrigerants, which can have a global warming impact potentially thousands of times their own weight in CO2 equivalent.

Direct chip level cooling

DCLC, or Direct Chip Level Cooling, is a highly efficient heat rejection system that uses water or single- or two-phase dielectrics to reject heat from chipsets. It is fast becoming one of the 'go-to' technologies as an attractive way to boost or supplement the base load of air cooling for higher-density applications beyond what Aisle Containment can support.

This system works well in new or retrofit DCs and complements existing CHW (chilled water) designs in a very efficient way which can extend or augment many Data Center facilities.

It supports any server, including a number that are pre-integrated, and offers the potential for higher fluid EWT/LWT (entering/leaving water temperatures) with improved free cooling. The technology is deemed safe and reliable and has potential for heat recovery with process water; however, this is likely to need a CDU for heat transfer, and these CDU systems need to be local to the load and can consume large areas of the white space.

DCLC is an excellent technology that has taken hardware cooling to the next level; however, by treating just the GPU/CPU, it typically removes only 50-70% of the total IT heat load. You will therefore still need another form of cooling to remove the remaining heat.

The heat sinks and other materials typically have to be removed and replaced at every IT refresh due to variations in hardware design, which is extremely costly, up to 60% of the initial capital expenditure in some cases.

DCLC is a good solution for HPC, and its deployment in existing colocation Data Centers reflects the reality that this form of equipment will normally require supplementary cooling for the foreseeable future, as it is concentrated on the main CPU/GPU heat sources. It serves a purpose in HPC hardware applications and provides some support for deployments in traditional Data Center/MTDC environments.

It is a more mature approach than immersion cooling, and the reports of leaks that would normally be a cause for concern are mitigated by working with key established providers. However, given the significant number of parts used in this type of deployment, the TCO model can be challenging due to the lack of forward IT compatibility and significant CAPEX replacement and deployment costs.

Rear Door Coolers (RDC)

RDCs have been around the longest of all the next-generation white space technologies, albeit initially in passive deployments. Since the emergence of the active RDC, or RDHx (Rear Door Heat Exchanger), some models offer the highest heat rejection capabilities in the industry today.

Our product, the ColdLogik RDC, is designed to run on liquids such as water or harmless synthetic fluids, potentially improving efficiency and increasing flexibility; unlike the other new technologies, it can also negate the need for mechanical cooling.

Most RDCs utilise a closed loop system, meaning they don't consume water. They also waste significantly less water than traditional Data Center deployments, and in some scenarios would consume no water for the full year without mechanical cooling. As some Data Centers face massive pressure to reduce the amount of potable water they use and waste, this is of significant benefit to them and the local communities they impact.

The ColdLogik system cools 100% of the room load, mitigating the need for any additional cooling, improving efficiency significantly and saving real estate.

This type of technology is non-intrusive, and as it is AALC (Air Assisted Liquid Cooling), it poses no risk to equipment warranty whilst allowing for easy integration and deployment into any existing environment.

The cost of the RDCs and the components needed to deploy them means the overall TCO is cost effective and not as CAPEX-heavy as other technologies. It can also deliver full PUEs of 1.035 – 1.09, meaning the OPEX is also significantly improved due to the savings the doors enable: up to 98% power usage savings compared to traditional CRAC/H air cooling, up to 83% compared to traditional aisle containment cooling and up to 45% compared to in-row cooling.
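
To put the quoted PUE range in context, the following is a minimal sketch in Python of how PUE translates into facility overhead for a given IT load; the 1000kW IT load and the 1.6 comparison figure for a legacy facility are illustrative assumptions, not values taken from this document.

```python
# Minimal sketch: what a given PUE implies for non-IT (cooling, distribution) power.
# PUE = total facility energy / IT equipment energy, so overhead = IT load * (PUE - 1).
# The IT load and the 1.6 "legacy" comparison value are illustrative assumptions.

def overhead_kw(it_load_kw: float, pue: float) -> float:
    """Return the non-IT power (kW) implied by a given PUE at a given IT load."""
    return it_load_kw * (pue - 1.0)

it_load_kw = 1000.0  # hypothetical 1 MW of IT load
for pue in (1.035, 1.09, 1.6):
    print(f"PUE {pue}: ~{overhead_kw(it_load_kw, pue):.0f} kW of overhead")
```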

ColdLogik systems will support in excess of 200kW per cabinet whilst allowing N+1 redundancy, a unique attribute in the marketplace. There is no need for specialist hardware to allow for integration.

Free cooling is the true goal for ColdLogik RDC systems, and this can be achieved even in the harshest environments. RDCs are compatible with almost all external plant options including chillers, dry coolers, water towers, bore holes, rivers and lakes. The RDC also allows for higher water differentials, enabling the use of heat recovery where a use exists for the waste heat.

Some RDC manufacturers also require CDUs, which when deployed may need to take up space close to the installation, whereas ColdLogik systems can be deployed onto any existing cabinet, allowing for simple retrofit options without the need for CDU systems.

The RDC is a proven technology and the most mature and versatile of all next-generation white space Data Center cooling, efficiently negating 100% of the heat, more than any of the other next-generation technologies. It saves water, a key element the DC industry needs to focus on, and also delivers power and space savings alongside huge carbon savings.

Water Scarcity: An overview and how it will affect Data Centers

Water Scarcity, an overview

Water was first introduced around IT equipment back in the 1970s, at a time when mainframes were traditionally located within offices. The mainframes were primarily situated in and around the desks where people were working, and as a result the room environment became a lot warmer and more uncomfortable for personnel. It soon became apparent that some form of cooling was required to address this increase in temperature: a solution that would meet the requirements of both the people in the organisation and the environment the equipment needed to survive in. Thus, IT air conditioning was born.

In order for the personnel and IT equipment to co-exist, it was deemed that a room temperature of 21°C (+/- 1°C) and an air humidity level of approximately 50% had to be achieved. This was only going to be possible using specialist air conditioning systems.

In the late 1980s and 1990s office spaces and IT rooms were becoming separated from each other and the IT rooms became more commonly recognised as a comms room/computer room (or data centers as we now know them). 

Water was the cooling medium used in these air conditioning systems, which were referred to as Computer Room Air Handling (CRAH) systems. Although the plant was situated some distance from the IT itself, water was present in the surrounding areas and allowed the room to reach the set-point temperatures and humidity levels.

Since data centers became the dedicated supporting platform for IT, they have utilised water in multiple ways, from fire suppression to perimeter cooling for CRAH systems. However, we now appreciate that water is our most valuable natural resource and it needs to be used wisely.  Many areas of the world are already suffering from water scarcity and this is becoming more widespread across the globe which is a major concern and obviously harmful to the human race, wildlife and the planet.

Water Scarcity, why?

To address this, we need to relay some facts about water. Firstly, of all the water on the planet, only 2.5% is actually available to drink, known as 'fresh' or 'potable' water; the remaining 97.5% is salt water. These statistics become rather alarming when you consider there are over 7.8 billion people (at the time of writing) on the planet, all of whom need water to survive. Of that 2.5% of fresh water, much is not actually accessible: approximately 68.9% is locked in glaciers, 30.8% is groundwater and only 0.3% is in lakes and rivers.

How does it occur?

Water scarcity occurs when demand (from agriculture, cities and the environment) is higher than the available resource. Even today, over a third of the population lives in an area where the water supply is not enough to fulfil demand or has been compromised. This is a massive issue and one which is happening all over the world. Based on current projections, it is predicted that this will be an issue for two thirds of the world's population by 2025, less than five years away at the time of writing.

There are two types of water scarcity: physical and economic. Physical water scarcity is not having enough water to meet our daily needs, while economic water scarcity is when human, governmental, institutional or financial capital limits or throttles access; even though water in nature is free to access, scarcity requires it to be metered out to enable fairer distribution.

So, how do we measure water usage?

Water Usage Effectiveness (WUE) is a sustainability metric created by The Green Grid in 2011 to measure the amount of water used specifically by data centers to cool their IT assets. To calculate simple WUE, a data center manager divides the annual site water usage in litres by the IT equipment energy usage in kilowatt-hours (kWh). Water usage includes water used for cooling, regulating humidity and producing electricity on-site.
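
As a worked illustration, the simple WUE calculation can be expressed in a few lines of Python; the site figures used below are hypothetical examples, not measured values from any facility.

```python
# Simple WUE (Water Usage Effectiveness), as defined by The Green Grid:
#   WUE = annual site water usage (litres) / annual IT equipment energy (kWh)
# The inputs below are hypothetical example figures.

annual_water_litres = 25_000_000    # litres of water used on site over a year
annual_it_energy_kwh = 8_760_000    # e.g. a steady 1 MW IT load over 8,760 hours

wue = annual_water_litres / annual_it_energy_kwh
print(f"WUE = {wue:.2f} litres/kWh")  # about 2.85 litres/kWh in this example
```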

How can you improve this usage?

There are multiple technologies available which limit water usage without resorting to potentially harmful chemical solutions. One way to cool the equipment efficiently and effectively, retrospectively or in a new build, without the need to redesign, is to use a Rear Door Cooler (RDC). Unlike traditional CRAC (refrigeration/mechanical cooling) systems, the RDC removes the heat directly at the source, and the air expelled into the space is at the desired room temperature. CRAC systems typically mix colder and warmer air to provide the room environment, which isn't the most efficient option.

Conventional air cooling consumes significant energy when using mechanical chillers. One way to reduce and potentially eliminate this additional energy use is adiabatic cooling; however, whilst it significantly improves efficiency, it greatly increases water usage to facilitate evaporative cooling, and the major downside is the growing scarcity of potable water in certain geographical locations. RDCs can instead utilise water from sustainable natural sources such as lakes, riverbeds, aquifers, bore holes, rainwater and even sea water, so as not to disrupt the availability of drinkable water in the vicinity. Water is not taken, polluted or removed; it simply flows around the heat exchanger and returns to its original location: recycling at its best.

How will this affect the data center market?

Reports suggest that some data centers use millions of gallons of water per day to keep them cool. This, along with power usage, is one of the hottest issues for data center managers, operators and owners as they try whatever they can to reduce their usage of both without affecting current and future performance.

With insight and the desire to change to more sustainable solutions, data center operators can utilise technology available today, such as ColdLogik by USystems, which can reduce water usage to nearly zero gallons whilst also saving up to 93% of cooling energy.

It is immoral, and soon to be illegal in a number of states, to utilize such huge amounts of water for a data center, especially when restrictions are close to being implemented in certain regions to limit the general public's use of water due to its scarcity.

Conclusion

In conclusion, the ColdLogik RDC would likely save between 58% and 100% of the water that would otherwise be consumed by traditional cooling methods. For those looking to improve water usage, it is a product that is tried, tested, highly regarded, multi-award-winning and successfully deployed worldwide for over a decade.

By utilising Air Assisted Liquid Cooling (AALC) you can effectively increase the water temperature to the point where adiabatic cooling is no longer needed, giving the best of both worlds: no excess or drinkable water wasted, and better energy efficiency with a simpler site set-up and requirement.

Crypto Mining Data Centers: The ColdLogik Deployment

Introduction

Since its inception, crypto mining has been a hot (if not controversial) topic, as it continues to divide opinions and even countries. It is well documented that some countries are approving crypto mining and recognising cryptocurrency as legal tender, whereas others have banned its use completely.

Clearly it is disruptive to the financial sector, which is trying to adjust and accommodate it, but one thing can be said: rightly or wrongly, it is not going away and it is gaining in popularity.

The one thing it is not immune to is the need to address its power usage and the structure of its deployments and configurations, moving rapidly away from its wild-west origins.

At USystems we have used the ColdLogik philosophy to design a highly energy-efficient, rapid roll-out solution capable of cooling more crypto units per rack for new-build and existing colocation data centers, whilst ensuring each unit retains the ideal temperature during operation in the most cost-effective and sustainable way available.

Rapid Deployment Crypto Mining from USystems

The key to crypto mining sites is the ease and speed of deployment. USystems have designed a rack dedicated to housing crypto miners, ensuring maximum density per square foot: in fact, the ColdLogik Crypto-Rack can house up to 35x S19 ASIC Antminers.

At 112kW per rack as standard, high-performance cooling becomes a prerequisite, which is why we couple the Crypto-Rack with our latest ColdLogik CL23 HPC Rear Door Cooler. With one eye on energy savings, the 112kW per Crypto-Rack can be cooled year-round using free cooling, and with in excess of 200kW per rack capability the ColdLogik solution can cope with 170kW of overclocking, meaning your deployment is future-proofed. This unique configuration allows the USystems Crypto-Rack to be rapidly deployed into a dedicated facility or an existing colocation data center: simply deploy, connect to the supply water, and power up.

ColdLogik Crypto Cooling: The Technology

ColdLogik Rear Door Coolers are established as highly efficient cooling systems for use on data center/server racks. Designed to operate on a closed loop water circuit, they ensure optimum thermal and energy performance by removing heat generated by the active equipment directly at source.

Designed to meet the challenging demands of High-Performance Computing (HPC) cooling, USystems with its unique RDC has positioned itself alongside water-to-the-chip and immersion cooling technologies, and the CL23 HPC is capable of an unrivalled 200kW of sensible cooling per industry standard rack. Unlike other high-performing cooling technologies, the RDC requires no specialist infrastructure in the data center, no specialist servers, is fitted to standard IT racks, has retrofit capability, occupies only a small footprint, is easy to install and simple to roll out. The CL23 HPC is unquestionably cost effective on all levels.

The CL23 HPC is, by design, capable of controlling the whole room environment without any additional cooling apparatus, unlike equivalent technologies. In addition, this ColdLogik solution offers significant capital expenditure savings, and with an EER in excess of 100 at maximum duty, the CL23 HPC provides better operational expenditure too.

How it works

Air Assisted Liquid Cooling ‘AALC’ allows for the best of both worlds, enabling higher densities in standard data center designs and bringing levels of efficiency that are truly capable of enabling change in your next retrofit or new build project.

Ambient air is drawn into the rack via the IT equipment fans. The hot exhaust air is expelled from the equipment and pulled over the heat exchanger, assisted by EC fans mounted in the RDC chassis. The exhaust heat transfers into the cooling fluid within the heat exchanger, and the newly chilled air is expelled into the room at, or just below, the predetermined room ambient temperature, designed around sensible cooling. Both processes are managed by the ColdLogik adaptive intelligence present in every ProActive RDC; in this way the Rear Door Cooler uses air-assisted liquid cooling to control the whole room temperature automatically at its most efficient point.
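
To illustrate the sensible-cooling relationship this relies on, here is a minimal sketch in Python of how airflow and the air-side temperature drop relate to the heat removed; the air properties are standard approximations and the flow and temperature figures are illustrative, not taken from a ColdLogik datasheet.

```python
# Minimal sketch: sensible heat removed from an air stream by an air-to-liquid coil.
#   Q [kW] = air density * specific heat * volumetric flow * temperature drop
# Air properties are approximate sea-level values; the example figures are illustrative.

RHO_AIR = 1.2           # kg/m^3, approximate density of air
CP_AIR = 1.005          # kJ/(kg*K), approximate specific heat of air
CFM_TO_M3S = 0.000472   # 1 cubic foot per minute in m^3/s

def sensible_kw(airflow_cfm: float, delta_t_k: float) -> float:
    """Approximate sensible heat (kW) removed when an air stream is cooled by delta_t_k."""
    return RHO_AIR * CP_AIR * (airflow_cfm * CFM_TO_M3S) * delta_t_k

# Example: 1800 CFM of exhaust air cooled by 20 K across the door
print(f"~{sensible_kw(1800, 20):.1f} kW of sensible cooling")  # about 20 kW
```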

ColdLogik RDCs enhance the efficiency of most data centers without the need for any changes to their current design. However, greater energy efficiencies are achievable when the complete ColdLogik solution is deployed.

By negating heat at the source and removing the need for air mixing or containment, you gain the ability to significantly increase the supply water temperature, which means more efficient external heat rejection options become available. In some scenarios this means that the ColdLogik RDC solution removes all compressor-based cooling, promoting the option to free-cool all year round.

The ColdLogik Crypto Set Up

Purpose-built reinforced slide-in shelves and a corresponding CL23 RDC capable of up to 200kW of cooling per cabinet.

No longer limited to cold-climate locations, the solution can be deployed anywhere in the world, or wherever power is cheap, as the ColdLogik RDCs can provide up to 100% free cooling.

Supports 33x S19 Asic Antminers in a 1000mm (w) x 600mm (d) and 52RU in Height
Supports 35 x S19 Asic Antminers in an 800mm (w) x 1200mm (d) and 52RU in Height

Conclusion on ColdLogik Crypto Mining Data Center

The USystems crypto mining data center solution is designed to deliver:

  • Speed of deployment
  • The largest mining deployment per footprint
  • The most efficient mining cooling, deployable anywhere in the world
  • Cost-effective deployment

With this system, crypto mining can scale more quickly and easily, in climates where power is cheap, with no real worry about how to cool the miners. Delivering more miners per footprint than any other technology available today, it really is the complete mining delivery system.

Crypto Mining Data Centers: An Understanding

What is Cryptocurrency?

Cryptocurrency, crypto-currency, or crypto is a form of currency that is constructed and created with a collection of binary data. It is designed to work as a medium of exchange where individual coin ownership records are stored in a ‘ledger’, which is a computerized database using strong cryptography to secure transaction records. This controls the creation of additional coins and verifies the transfer of coin ownership.

Some crypto schemes use validators to maintain the cryptocurrency. In a proof-of-stake model, owners put up their tokens as collateral and in return get authority over the token in proportion to the amount they stake. Generally, these token stakeholders gain additional ownership in the token over time via network fees, newly minted tokens or other such reward mechanisms. Cryptocurrency does not exist in physical form (like paper money or coins) and is typically not issued by a central authority. Cryptocurrencies typically use decentralized control, as opposed to a central bank digital currency (CBDC). When a cryptocurrency is mined or created prior to issuance, or issued by a single issuer, it is generally considered to be centralized. When implemented with decentralized control, each cryptocurrency works through distributed ledger technology, typically a blockchain, that serves as a public financial transaction database.

What is a Ledger?

A ledger, also called a shared ledger or distributed ledger technology (DLT), is a consensus of replicated, shared and synchronized digital data, geographically spread across multiple sites, countries or institutions. Unlike with a centralized database, there is no central administrator.

This can also be known as Replicated Journal Technology (RJT), since the information is replicated across nodes, each containing a full copy, and the information in the blocks is recorded in time order, more in the form of an accounting journal than an accounting ledger.

A peer-to-peer network is required as well as consensus algorithms to ensure replication across nodes is undertaken. One form of distributed ledger design is the ‘blockchain’ system, which can be either public or private.

A blockchain is a growing list of records, called ‘blocks’, that are linked together using cryptography. It’s also been described as a “trust-less and fully decentralized peer-to-peer immutable data storage” that is spread over a network of participants often referred to as ‘nodes’.  Each block contains a cryptographic ‘hash’ of the previous block, a timestamp, and transaction data (generally represented as a ‘Merkle Tree’).  The timestamp proves that the transaction data existed when the block was published in order to get into its hash.  As blocks each contain information about the block previous to it, they form a chain with each additional block reinforcing the ones before it. Therefore, blockchains are resistant to modification of their data because once recorded, the data in any given block cannot be altered retroactively without altering all subsequent blocks.
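
As a purely illustrative sketch of the block-linking idea described above (greatly simplified, with no consensus mechanism or Merkle tree, and not the implementation of any particular cryptocurrency), each block can carry the hash of its predecessor:

```python
# Illustrative sketch of hash-linked blocks (greatly simplified).
import hashlib
import json
import time

def make_block(prev_hash: str, transactions: list) -> dict:
    """Create a block whose own hash depends on the previous block's hash."""
    block = {
        "prev_hash": prev_hash,
        "timestamp": time.time(),
        "transactions": transactions,
    }
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

genesis = make_block("0" * 64, ["genesis"])
block1 = make_block(genesis["hash"], ["alice -> bob: 1"])

# Altering the genesis data would change its hash, breaking block1's prev_hash link.
print(block1["prev_hash"] == genesis["hash"])  # True
```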

Blockchains are typically managed by a peer-to-peer network for use as a publicly distributed ledger, where nodes collectively adhere to a protocol to communicate and validate new blocks. Although blockchain records are not unalterable as forks are possible, blockchains may be considered secure by design and exemplify a distributed computing system with high Byzantine fault tolerance.

The blockchain was invented by a person (or group of people) using the name Satoshi Nakamoto in 2008 to serve as the public transaction ledger of the cryptocurrency Bitcoin. The identity of Satoshi Nakamoto remains unknown to date. The invention of the blockchain for Bitcoin made it the first digital currency to solve the double-spending problem without the need for a trusted authority or central server. The Bitcoin design has inspired other applications and blockchains that are readable by the public and are widely used by cryptocurrencies. The blockchain is considered a type of payment rail. Private blockchains have been proposed for business use, but Computerworld called the marketing of such privatized blockchains without a proper security model “snake oil”. However, others have argued that permissioned blockchains, if carefully designed, may be more decentralized and therefore more secure in practice than permissionless ones.

Crypto Mining Data Centers

Traditionally, these were labelled as ‘mining farms’ and were large hangars in remote areas with cold climates, but with significant power.  However, as we have progressed further on the crypto journey, more purpose-built facilities have been established and these are commonly known as data centers – ‘mining’ data centers.

Mining farms were initially constructed, in most cases, with no real structure to the deployment of miners: distribution racking held the miners, and the walls of the units simply had slots cut in to allow for airflow. Now, however, true mining data centers are starting to appear that are significantly more structured and designed more in line with existing data centers, allowing for all the design aspects you would normally see in a data center facility, including power, fire suppression and more modern cooling.

Historically, the mines were cooled by the climate of the deployment location. For example, when deployed in areas such as Russia, which borders the Arctic Circle, it is easier to just allow the air to pass through, but in some cases this is not really practical. Mining data centers are now appearing in more "standard/hot" areas such as Texas, where power is cheap but which lack the cooler climate of areas such as Russia, Canada or Iceland.

Crypto Mining Center

Crypto Mining Deployment

USystems, with their crypto-focused ColdLogik brand of Rear Door Heat Exchangers and purpose-designed enclosures, are aligned and constructed to provide a complete deployment for a Crypto Mining Data Center. An example of this is 35x miners in an 800mm x 1200mm, 52RU cabinet with purpose-built reinforced slide-in shelves and a corresponding CL23 RDC, capable of up to 200kW of cooling per cabinet.

When standard S19 miners are working at circa 3.2kW per unit (without Overclocking), a deployment of 112kW per cabinet is “Standard”.
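
The per-cabinet figure follows directly from the miner count and the per-unit draw; the short sketch below works through that arithmetic, with the cooling headroom against the 200kW door capability derived here for illustration rather than quoted from the article.

```python
# Sketch of the per-cabinet power arithmetic for the Crypto-Rack.
MINERS_PER_RACK = 35      # S19 units in the 800mm x 1200mm, 52RU cabinet
KW_PER_MINER = 3.2        # circa 3.2 kW per unit without overclocking
DOOR_CAPACITY_KW = 200    # quoted CL23 HPC cooling capability per rack

standard_load_kw = MINERS_PER_RACK * KW_PER_MINER
headroom_kw = DOOR_CAPACITY_KW - standard_load_kw

print(f"Standard rack load: {standard_load_kw:.0f} kW")   # 112 kW
print(f"Cooling headroom:   {headroom_kw:.0f} kW")         # 88 kW available for overclocking
```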

This also means that mining facilities no longer have to be in cold-climate locations and can be deployed anywhere in the world, or wherever power is cheap, as the ColdLogik RDCs can provide up to 100% free cooling, meaning a significant operational saving.

Crypto Mining Footprint

In traditional mining farms, the miners were laid out on a distribution-style racking system. This equated to around 21 units in the footprint area of a standard data center cabinet: a deployment that would be classed as unorthodox in data center terms, where space is lost due to the need to cool.

With ColdLogik, we can hold 35 miners in a cabinet footprint, a gain of around 42%, and these can be deployed within a standard data center configuration, increasing the number of miners housed per footprint and cooling them efficiently in any region.

Crypto Mining Center

Closing Summary

Cryptocurrency has fast become popular, and adoption is still on the increase, but it is still looked upon by some as potentially underhand because it remains unregulated; and though vast fortunes can be made, they can also be lost due to its volatility.

China used to be the crypto mining capital of the world until mining was made illegal there. This has meant that other areas, especially in the USA, such as Miami, are vying to become crypto mining hotspots. The one notable issue is how crypto mining is delivered there: only a handful of colocation data center providers can truly support the extremely high-density power and cooling requirements necessary for mining, with other data centers having to build specifically to meet the demand.

The fact is that mining is an anomaly compared to traditional hardware deployments: it does not require redundancy, it exceeds 100kW on every rack, and it needs sustainable infrastructure to keep pricing down. Meeting that combination of requirements could be classed as a difficult request, unless you opt for ColdLogik from USystems.

Get in touch with your regional ColdLogik salesperson for support on understanding and delivery of crypto mining data center solutions into your facility.

Liquid Cooling in Data Centers, why is it needed?

Modern chips and processors are smaller than ever, to the point that the manufacturing process has all but exhausted Moore's Law.

These modern chips and processors are packed so tightly that they create a significant heat load, which means we need more efficient and sustainable cooling solutions, i.e., ones that absorb heat more efficiently than standard air cooling. This has led the Data Center industry to investigate Liquid Cooling in more depth.

What is Liquid Cooling? 

In theory it is a simple cooling methodology, using liquid as the primary means of capturing and expelling the heat created by the IT load.

Unfortunately, there are many misconceptions about the different technologies available, with a large portion of our industry believing there are only two types of Liquid Cooling:

  1. Immersion Cooling   

IT equipment is submerged inside an electrically non-conductive liquid inside a tub that absorbs heat. 

  2. Direct Chip Liquid Cooling (DCLC)

Small hoses bring cool water to heat sinks or cold plates and circulate the warmed water to a heat exchanger.

Even though both technologies have merit, there are considerable drawbacks. Immersion cooling carries significant setup costs and does not capture 100% of the heat generated, meaning a secondary, supplementary cooling source is required; the same is true of Direct Chip Level Cooling, which only captures between 60-80% of the heat.

So, is there an alternative technology?  

Yes, there is: the Rear Door Heat Exchanger (RDHx), also known as the Rear Door Cooler (RDC), is a proven liquid cooling technology, sometimes known as Air Assisted Liquid Cooling, which is designed to operate as a closed loop system.

In principle there are two main types of Rear Door Cooling: passive and active.

A passive Rear Door Cooler works by allowing ambient air to be pulled into the rack via the IT equipment fans; the hot exhaust air from the hardware is expelled over a heat exchanger, transferring the heat into the liquid inside the coil, and the resulting chilled air is expelled back into the room.

Similarly, with an active (ProActive) Rear Door Cooler, ambient air is pulled into the rack by the IT equipment fans; however, the hot exhaust air is assisted by EC fans mounted within the door as it passes over the heat exchanger, transferring the heat into the liquid inside the coil, with the resulting chilled air expelled back into the room at, or just below, the predetermined room ambient temperature.

In this way the ProActive Rear Door Cooler can control the whole room temperature environment, without supplementary cooling technology. 

In summary, 

When the whole data center is taken into account, all liquid cooling solutions require air to assist in the cooling process; the ColdLogik solution integrates this as standard, without the need for supplementary cooling equipment.

There are many misleading industry articles; hopefully this document sets the record straight.

ColdLogik Rear Door Cooling ‘Shared Deployment Philosophy’

Overview

The market perception of a Rear Door Cooler (RDC) is that it can only be used on a one-to-one basis with the cabinet it is cooling. However, for more than a decade USystems have consistently disproved this theory. This document details how this can be done.

The basic principle of the ColdLogik RDC is that it controls the whole room environment with built-in adaptive intelligence.

During standard operation of USystems RDC systems, air is discharged evenly in both horizontal planes via EC fans. Because the fans are equally positioned in the RDC, the air provided across the adjacent cabinets is uniform, enabling USystems RDC solutions to cool multiple cabinets using a single unit: Cooling by Air-Mixing.

Whilst it is true that an RDC per cabinet is the most effective method of achieving optimum operational efficiency, it can still be an effective proposition to deploy an RDC on every second or every third cabinet. The choice comes down to a multitude of factors, but it can be summarised simply as 'optimal energy savings and OpEx versus CapEx'.

Capital Savings Potential

To illustrate the CapEx saving potential, this philosophy can be applied to a real-world application. Customer A required a cooling solution where the room temperature was maintained within ASHRAE A1 limits, with a cooling requirement of 10kW per cabinet. A straightforward 1:1 deployment provides the largest opportunity to reduce OpEx, as operational savings are made through the use of warmer water (reducing the mechanical cooling investment required). However, this also has the largest CapEx: a greater initial outlay than originally budgeted for by Customer A.

By redesigning the deployment to a 3:1 method, each RDC instead provides 30kW of cooling duty. This significantly reduces the CapEx, a saving of 61%. While the 3:1 solution has increased mechanical cooling costs due to the use of colder water, the reduced CapEx aligns with Customer A's budgetary considerations.

Formulae

Operating to ASHRAE standards for the active equipment inside the cabinet means that both the volume of air required by the equipment and the temperature of the airflow leaving the door can be used to estimate the anticipated room temperature. The formula is as follows:

Anticipated room temperature = sum of (airflow x air-off temperature) for each air stream entering the room, divided by the total airflow; i.e. the airflow-weighted average of the air leaving the RDCs and the air leaving the unassisted cabinets.

Operation Densities

At low duties, an RDC may be deployed on every second or third cabinet. While the low-duty threshold for Cooling by Air-Mixing varies (based on site-specific parameters such as fluid temperatures, relative humidity, etc.), USystems are comfortable applying a general threshold of 6kW or lower for RDCs deployed on every third cabinet, and 15kW or lower for RDCs deployed on every second cabinet. These are not definitive ceilings, as the adaptive intelligence of the RDC product and the many variables in designing a data center allow for viable solutions that perform beyond these general models, such as the deployment illustrated by Customer A. The 6kW and 15kW thresholds are illustrated below:

Example A: 3 x Racks to 1 x RDC


In a 3:1 configuration, where the cooling requirement in each cabinet is 6kW, each rear door has a total cooling requirement of 18kW. It directly cools the cabinet it is fitted to, and indirectly cools the air from the cabinets to its left and right. The pattern repeats across the full bank.

Where a room temperature of 27°C/80.6°F or below is desired (in keeping with ASHRAE A1 class guidelines), USystems would assume the air off the cabinets without an RDC to be approximately 40°C/104°F at circa 600 CFM. If this information can be provided at the application stage, USystems are able to produce more accurate room temperature projections.

The RDC in this application would be providing 18kW of cooling, generating 1800 CFM, with an air-off-coil temperature of 20°C/68°F. The data in this example would populate the formula as follows:

[Formula populated with the Example A airflows and air-off temperatures]

The anticipated room temperature in this case would give a mixed air temperature of 25°C/77°F, which is under the 27°C/80.6°F ASHRAE recommendations.

Example B: 2 x Racks to 1 x RDC


As the required cooling density increases, the ratio of RDCs to racks generally increases proportionally. In a 2:1 configuration where the cabinets being cooled produce 15kW each, the following would apply if a room temperature of 27°C/80.6°F is desired: USystems would expect the air off the cabinet without an RDC to be approximately 40°C/104°F at circa 1500 CFM.

The RDC in this application would be providing 30kW of cooling, generating 2250 CFM, with an air-off-coil temperature of 15°C/59°F. The data in this example would populate the formula as follows:

[Formula populated with the Example B airflows and air-off temperatures]

The anticipated room temperature in this case would give a mixed air temperature of 25°C/77°F, which is under the 27°C/80.6°F ASHRAE recommendations.
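
As a worked illustration of the air-mixing estimate, the sketch below implements the airflow-weighted average described in the Formulae section, using the Example B figures; the exact form of the USystems formula may differ, so treat this as an approximation.

```python
# Minimal sketch: estimate the mixed room air temperature as an
# airflow-weighted average of the air streams entering the room.
# Each stream is (airflow in CFM, air-off temperature in °C).

def mixed_air_temp(streams):
    total_flow = sum(cfm for cfm, _ in streams)
    return sum(cfm * temp for cfm, temp in streams) / total_flow

# Example B (2:1): RDC air-off 15°C at 2250 CFM, unassisted cabinet 40°C at 1500 CFM
print(f"{mixed_air_temp([(2250, 15.0), (1500, 40.0)]):.0f} °C")  # 25 °C
```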

Closing Statement

These examples illustrate the densities that can be supported using 3:1 and 2:1 deployment strategies. With increased airflow requirements at higher duties, it is generally advisable to deploy low-duty cabinets between high-duty cabinets fitted with RDCs to make the most of the air-mixing strategy. There is also opportunity for growth in the data center, either to increase density or to reduce OpEx by adding more RDCs at a pace to suit CapEx. The reduced OpEx is achieved because increasing the number of RDCs allows an elevation in water temperatures (in most cases supplied by mechanical cooling), therefore reducing overall energy consumption.

Increasing the water temperatures typically means that, first, the external plant can be physically smaller; second, the efficiency per kW cooled will increase; and third, the amount of free cooling that can be utilised on site will increase significantly, depending on geographic location.

To conclude, the ColdLogik RDC solution is truly unique, allowing data center stakeholders to utilise a system that is both sustainable in its capacity for future growth and flexible in its ability to improve efficiency as density is added. To find out more about how we can make a Cooling by Air-Mixing solution work for you, please contact sales@systems.com or visit https://www.coldlogik.com/data-centre-products/cl20-proactive#prodeployment for more information.

WUE: How elevated water temperature can dramatically reduce water usage

China

With the current situation the world finds itself in, one thing has become abundantly clear: data centers have provided people with a safe haven in their own homes whilst lockdowns have been enforced across the globe; at one point, half of the human race was in lockdown in one form or another.

Both positives and negatives have arisen from this, though: from the 'key worker' status held by data center employees and their primary suppliers, highlighting how governments across the world perceive the industry and its importance, through to sterner examination from the wider world of energy consumption and water usage.

Uptime and reliability have always driven major data center design philosophy. Trade-offs have been made, understandably, so that operators and owners can be comfortable in the knowledge that consistent design across sites reduces the risk of misalignment or miscalculation.

Whilst data centers are, on the whole, more efficient than they have ever been, there is still vast room for improvement, in particular in both the energy consumed for cooling and the water consumed for adiabatic cooling.

One of the major active equipment manufacturers has openly said that a realistic figure for water use per MW can be 68,000 litres of water a day. Unfortunately, information is scarce, so a conservative figure of 1000MW of cooling capacity can be used across the country; this would give a usage of around 68 million litres of water per day.

What is adiabatic cooling?

Water is traditionally used in these data center cooling solutions to obtain the lowest possible air temperature entering the external plant, thereby extracting as much heat as possible from the data center using the natural environment before the mechanical chiller needs to step in.

Any body of air has two temperature points, Dry Bulb (DB) and Wet Bulb (WB). The dry bulb temperature is what you would feel if you were dry; the best way to describe wet bulb is the chill you feel when you walk out of the shower before you manage to reach the towel. The water on your body allows the air to cool you towards the wet bulb temperature as you move through it, and this is always equal to or lower than the DB temperature.

For example, if the DB temperature in a room is 20°C/68°F and the WB temperature is 14°C/57°F, then air (or a wet object) passed through a wet area or membrane could potentially reach the WB temperature, at least until the object is heated or dried.

Why is this usage so high?

Water usage is inversely related to the temperature of the water flow supplied to the data centre's internal cooling equipment: the lower the temperature of the water flowing into the data centre, the higher the water usage by the external plant. Traditional plant has a normal water flow temperature of 7°C/45°F, which means the highest ambient temperature at which you could naturally reach the desired flow temperature is around 5°C/41°F.

How can you improve this usage?

The best possible way to reduce usage is to elevate the water temperature that the data centre requires in order to cool the equipment efficiently and effectively. The rear door cooler is a great example of this because, unlike traditional CRAC systems, instead of mixing colder air with warm air to provide an ambient temperature, you are neutralising the heat in the air itself, and can therefore use a higher water temperature to obtain the same result. The graphs below show the average high temperatures for DB and WB over a thirty-year period.

China 1 WUE Graph
China 2 WUE Graph

As you can see above, China provides a challenging environment for any cooling requirement, particularly in summer, with high DB temperatures and relatively high WB temperatures to suit.

The important factor here is that anything above the blue line can utilise the DB alone and therefore does not require any additional water usage. Anything between the blue line and the orange line can be cooled using an adiabatic system, and this is where the water usage comes in. Anything beneath the orange line would require additional mechanical cooling, such as a traditional chiller system, which would then use maximum water plus additional power for the mechanical equipment.

What happens when you implement a ColdLogik rear door?

China 3 WUE Graph
China 4 WUE Graph

In the graphs above you can see the marked difference between a traditional cooling system, marked in yellow, and the ColdLogik cooling requirement, marked in grey.

In China, it is clear that using the traditional approach you would, on average, need the adiabatic system for almost the whole year and would also require mechanical cooling, at varying load, for half the year. Moreover, as most chillers have a minimum run of 25%, less of the free cooling may actually be available.

By utilising the ColdLogik door, on average you would not need to use any additional water for adiabatic cooling for 6 months of the year, and you would only require mechanical cooling assistance for around 1-2 months. Chillers would normally remain on site to provide redundancy for the rare occasions that a heat wave outside the average occurs; however, they may not need to run for 10 months of the year, providing an additional operational saving.

Conclusion

In conclusion, even without considering the lower water usage across the remaining 4 months, which could be substantial, the ColdLogik door would likely save a minimum of 25% of the water that would otherwise be consumed by traditional cooling methods.

Translating this into physical water usage over a year, and based on the conservative 1000MW figure, this could drop the current projected usage of 24.82 billion litres of water down to 18.6 billion litres, a reduction of 6.2 billion litres. This is the equivalent of filling the Bird's Nest stadium in Beijing, the centrepiece of the 2008 Olympic Games, with water twice over.
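
The arithmetic behind these figures can be checked with a short sketch, using the 68,000 litres per MW per day figure and the assumed 1000MW of capacity quoted above; the 25% saving is the minimum figure stated in the conclusion.

```python
# Sketch of the annual water usage arithmetic quoted above.
LITRES_PER_MW_PER_DAY = 68_000   # manufacturer's stated realistic figure
CAPACITY_MW = 1_000              # conservative assumed cooling capacity across China
SAVING_FRACTION = 0.25           # minimum saving attributed to the ColdLogik door

annual_litres = LITRES_PER_MW_PER_DAY * CAPACITY_MW * 365
saved_litres = annual_litres * SAVING_FRACTION

print(f"Projected usage:  {annual_litres / 1e9:.2f} billion litres/year")                   # about 24.82
print(f"Potential saving: {saved_litres / 1e9:.2f} billion litres/year")                    # about 6.2
print(f"Remaining usage:  {(annual_litres - saved_litres) / 1e9:.2f} billion litres/year")  # about 18.6
```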

If you are looking to improve your water usage with a product that is tried and tested and deployed into the market worldwide then get in touch with USystems today.

Conventional air cooling consumes significant energy when using mechanical chillers. One way to reduce and potentially eliminate this additional energy wastage is adiabatic cooling; however, whilst it significantly improves efficiency, it greatly increases water usage for evaporative cooling. The major downside is the growing scarcity of water in certain geographical locations: a typical large-scale Data Center consumes the equivalent of 2,500 people's water, which is putting pressure on local governments to drive down water usage.

By utilising liquid cooling you can effectively increase the water temperature to the point where adiabatic cooling is no longer needed, giving the best of both worlds: no excess water wasted, and better energy efficiency with a simpler site set-up and requirement. It really is a win-win-win.

India

With the current situation the world finds itself in, one thing has become abundantly clear: data centers have provided people with a safe haven in their own homes whilst lockdowns have been enforced across the globe; at one point, half of the human race was in lockdown in one form or another.

Both positives and negatives have arisen from this, though: from the 'key worker' status held by data center employees and their primary suppliers, highlighting how governments across the world perceive the industry and its importance, through to sterner examination from the wider world of energy consumption and water usage.

Uptime and reliability have always driven major data center design philosophy. Trade-offs have been made, understandably, so that operators and owners can be comfortable in the knowledge that consistent design across sites reduces the risk of misalignment or miscalculation.

Whilst data centers are, on the whole, more efficient than they have ever been, there is still vast room for improvement, in particular in both the energy consumed for cooling and the water consumed for adiabatic cooling.

One of the major active equipment manufacturers has openly said that a realistic figure for water use per MW can be 68,000 litres of water a day. Whilst public information is scarce, a very conservative figure for water usage in the Indian market is around 34 million litres a day used for cooling, based on 500MW of cooling capacity across the country.

What is adiabatic cooling?

Water is traditionally used in these data center cooling solutions to obtain the lowest possible air temperature entering the external plant, thereby extracting as much heat as possible from the data center using the natural environment before the mechanical chiller needs to step in.

Any body of air has two temperature points, Dry Bulb (DB) and Wet Bulb (WB). The dry bulb temperature is what you would feel if you were dry; the best way to describe wet bulb is the chill you feel when you walk out of the shower before you manage to reach the towel. The water on your body allows the air to cool you towards the wet bulb temperature as you move through it, and this is always equal to or lower than the DB temperature.

For example, if the DB temperature in a room is 20°C/68°F and the WB temperature is 14°C/57°F, then air (or a wet object) passed through a wet area or membrane could potentially reach the WB temperature, at least until the object is heated or dried.

Why is this usage so high?

Water usage is inversely related to the temperature of the water flow supplied to the data centre's internal cooling equipment: the lower the temperature of the water flowing into the data centre, the higher the water usage by the external plant. Traditional plant has a normal water flow temperature of 7°C/45°F, which means the highest ambient temperature at which you could naturally reach the desired flow temperature is around 5°C/41°F.

How can you improve this usage?

The best possible way to reduce usage is to elevate the water temperature that the data centre requires in order to cool the equipment efficiently and effectively. The rear door cooler is a great example of this because, unlike traditional CRAC systems, instead of mixing colder air with warm air to provide an ambient temperature, you are neutralising the heat in the air itself, and can therefore use a higher water temperature to obtain the same result. The graphs below show the average high temperatures for DB and WB over a thirty-year period.

India 1 WUE Graph
India 2 WUE Graph

As you can see above, India provides a challenging environment for any cooling requirement, with high DB temperatures and relatively high WB temperatures to suit.

The important factor here is that anything above the blue line can utilise the DB alone and therefore does not require any additional water usage. Anything between the blue line and the orange line can be cooled using an adiabatic system, and this is where the water usage comes in. Anything beneath the orange line would require additional mechanical cooling, such as a traditional chiller system, which would then use maximum water plus additional power for the mechanical equipment.

What happens when you implement a ColdLogik rear door?

India 3 WUE Graph
India 4 WUE Graph

In the graphs above you can see the marked difference between a traditional cooling system, marked in yellow, and the ColdLogik cooling requirement, marked in grey.

In India, it is clear that using the traditional approach you would, on average, need the adiabatic system for the whole year and would also require mechanical cooling, at varying load, for the whole year. Moreover, as most chillers have a minimum run of 25%, less free cooling may actually be used.

By utilising the ColdLogik door, on average you would not require any additional mechanical cooling on site (in the form of chillers with refrigeration circuits) for standard operation. Whilst these systems would normally remain on site to maintain redundancy in case of exceptional need, they would not be required on a regular basis. Water usage would also be lower for 6 months of the year with the ColdLogik system, most likely accounting for a drop in water usage of around 20% across this period.

Conclusion

In conclusion, considering the lower water usage across those 6 months, the ColdLogik door would likely save a minimum of 10% of the water that would otherwise be consumed by traditional cooling methods.

Translating this into physical water usage over a year, and based on the publicly available information for India, this could drop the current projected usage of 12.37 billion litres of water down to 11.13 billion litres, a 10% drop. In future, as the ASHRAE guidelines are pushed further into the allowable limits, the amount of water that could be saved will only grow.

If you are looking to improve your water usage with a product that is tried and tested and deployed into the market worldwide then get in touch with USystems today.

Conventional air cooling consumes significant energy when using mechanical chillers; one way to reduce and potentially eliminate that additional energy wastage is to utilise adiabatic cooling. Whilst this significantly improves efficiency on one hand, it dramatically increases water usage to supply the evaporative cooling. The major downside, however, is the growing scarcity of water in certain geographical locations. A typical large-scale Data Center consumes the equivalent of 2,500 people’s water, which is putting pressure on local governments to drive water usage down.

By utilising liquid cooling you can effectively increase the water temperature to the point where adiabatic cooling is no longer needed, giving the best of both worlds: no excess water wasted and better energy efficiency, with a simpler site set-up and requirement. It really is a WIN-WIN-WIN.

The Nordics

With the current situation the world finds itself in, one thing has become abundantly clear: data centers have provided people with a safe haven in their own homes whilst lockdowns have been enforced across the globe; at one point half of the human race was in lockdown in one form or another.

Both positives and negatives have arisen from this, from the ‘key worker’ status held by data center employees and their primary suppliers, which highlights how governments across the world perceive the industry and its necessity, through to a sterner examination from the wider world of energy consumption and water usage.

Uptime and reliability have always driven mainstream data center design philosophy. Trade-offs have understandably been made so that operators and owners can be comfortable, safe in the knowledge that consistent design across sites reduces the risk of misalignment or miscalculation.

Whilst data centers are, on the whole, more efficient than they have ever been, there is still vast room for improvement, in particular in the energy consumed for cooling and the water consumed to support adiabatic cooling.

One of the major active equipment manufacturers has openly stated that a realistic figure for water use per MW can be 68,000 litres of water a day. Whilst public information is scarce, a very conservative figure for cooling water usage in the Nordics is around 20 million litres of water a day. Importantly, however, a large proportion of data centre owners have utilised the region’s climate to reduce the mechanical power requirement, which, whilst increasing water usage, provides greater overall efficiency for traditional systems.

What is adiabatic cooling?

Water is traditionally used in these data center cooling solutions to obtain the lowest possible air temperature entering the external plant, thereby extracting as much of the heat from the data center as possible using the natural environment before the mechanical chiller needs to step in.

In any body of air there are two temperature points, Dry Bulb (DB) and Wet Bulb (WB). The dry bulb is what you would feel if you were dry; the best way to describe wet bulb is the chill you feel when you walk out of the shower before you manage to get to the towel! The water on your body evaporates and cools you towards the WB temperature, which, depending on the relative humidity, is always equal to or lower than the DB temperature.

For example, if the DB temperature in a room is 20°C/68°F and the WB temperature is 14°C/57°F, then air (or a wet object) pushed through a wet area or membrane could potentially be cooled to the WB temperature, at least until the object is heated or dries out.

Why is this usage so high?

Water usage is inversely proportional to the temperature of the water flowing to the data centre’s internal cooling equipment: the lower the flow temperature into the data centre, the higher the water usage by the external plant. Traditional plant has a normal water flow temperature of 7°C/45°F, which means the highest ambient temperature you could potentially utilise naturally to reach that flow temperature is around 5°C/41°F.

How can you improve this usage?

The best possible way to reduce this usage is to elevate the water temperature that the data centre requires in order to cool the equipment efficiently and effectively. The rear door cooler is a great example of this: unlike traditional CRAC systems, which mix colder air with warm air to provide an ambient temperature, it neutralises the heated air itself, so a higher water temperature can be used to obtain the same result. The graphs below show the average high DB and WB temperatures over a thirty-year period.

Nordics 1 WUE Graph
Nordics 2 WUE Graph

As you can see above, the Nordic region has very low dry and wet bulb temperatures for a large proportion of the year, which helps efficiency as a whole.

The important factor here is that anything above the blue line can be handled using the DB alone and therefore requires no additional water usage. Anything between the blue line and the orange line can be cooled using an adiabatic system, and this is where the water usage comes in. Anything beneath the orange line requires additional mechanical cooling, such as a traditional chiller system, which uses maximum water plus additional power for the mechanical equipment.

What happens when you implement a ColdLogik rear door?

Nordics 3 WUE Graph
Nordics 4 WUE Graph

In the graphs above you can see the marked difference between a traditional cooling system, marked in yellow, and the ColdLogik cooling requirement, marked in grey.

In the case of the Nordic region it’s clear that, by utilising the traditional approach, you would on average need the adiabatic system for two thirds of the year and would also require mechanical cooling for just under half of the year at varying load. However, as most chillers have a minimum run of 25%, less free cooling could actually be available.

By utilising the ColdLogik door, on average, you would not need to use any additional water for adiabatic cooling for nine months of the year, nor would you require any mechanical assistance through the remaining three months. Chillers would normally remain on site to provide redundancy on the rare occasions that a heat wave outside the average occurs, but they may never actually need to be run, giving an additional operational saving.

Conclusion

In conclusion, even without considering the lower water usage across the remaining three months, which could be substantial, the ColdLogik door would likely save a minimum of 50% of the water that would otherwise be consumed by the traditional cooling methods.

Translated into physical water usage over the year, and based on the publicly available information for the Nordic region, this could drop the current projected usage figure of 4.86 billion litres of water down to 2.43 billion litres, a massive 50% drop. The water saved is the equivalent of filling the famous Blue Lagoon in Iceland around 270 times, which really does put it into perspective.
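As a quick sketch of how the Blue Lagoon comparison works out, assuming a lagoon volume of roughly 9 million litres (our own assumption, chosen to be consistent with the 270 fills quoted above):

    # Back-of-the-envelope check of the Nordic figures quoted above.
    baseline_l = 4.86e9           # projected annual usage, litres
    with_coldlogik_l = 2.43e9     # projected usage with the ColdLogik door, litres
    blue_lagoon_l = 9.0e6         # assumed Blue Lagoon volume, litres (~9,000 m3)

    saving_l = baseline_l - with_coldlogik_l
    print(f"{100 * saving_l / baseline_l:.0f}% saved")           # -> 50% saved
    print(f"~{saving_l / blue_lagoon_l:.0f} Blue Lagoon fills")  # -> ~270 fills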

If you are looking to improve your water usage with a product that is tried and tested and deployed into the market worldwide then get in touch with USystems today.

Conventional air cooling consumes significant energy when using mechanical chillers; one way to reduce and potentially eliminate that additional energy wastage is to utilise adiabatic cooling. Whilst this significantly improves efficiency on one hand, it dramatically increases water usage to supply the evaporative cooling. The major downside, however, is the growing scarcity of water in certain geographical locations. A typical large-scale Data Center consumes the equivalent of 2,500 people’s water, which is putting pressure on local governments to drive water usage down.

By utilising liquid cooling you can effectively increase the water temperature to the point where adiabatic cooling is no longer needed, giving the best of both worlds: no excess water wasted and better energy efficiency, with a simpler site set-up and requirement. It really is a WIN-WIN-WIN.

London

With the current situation the world finds itself in, one thing has become abundantly clear: data centers have provided people with a safe haven in their own homes whilst lockdowns have been enforced across the globe; at one point half of the human race was in lockdown in one form or another.

Both positives and negatives have arisen from this, from the ‘key worker’ status held by data center employees and their primary suppliers, which highlights how governments across the world perceive the industry and its necessity, through to a sterner examination from the wider world of energy consumption and water usage.

Uptime and reliability have always driven mainstream data center design philosophy. Trade-offs have understandably been made so that operators and owners can be comfortable, safe in the knowledge that consistent design across sites reduces the risk of misalignment or miscalculation.

Whilst data centers are, on the whole, more efficient than they have ever been, there is still vast room for improvement, in particular in the energy consumed for cooling and the water consumed to support adiabatic cooling.

One of the major active equipment manufacturers has openly stated that a realistic figure for water use per MW can be 68,000 litres of water a day. Even if you take only the 13 largest data centre operations in the UK, this would equate to 58,412,000 litres of water used each day.
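For context, that daily figure follows directly from the 68,000 litres per MW per day estimate, implying roughly 859 MW of capacity across those 13 operations; a minimal sketch of the arithmetic:

    # How the daily UK figure quoted above is derived.
    litres_per_mw_per_day = 68_000
    daily_usage_l = 58_412_000

    implied_capacity_mw = daily_usage_l / litres_per_mw_per_day
    annual_usage_l = daily_usage_l * 365

    print(f"Implied capacity: {implied_capacity_mw:.0f} MW")         # -> 859 MW
    print(f"Annual usage: {annual_usage_l/1e9:.2f} billion litres")  # -> 21.32 billion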

What is adiabatic cooling?

Water is traditionally used in these data center cooling solutions to obtain the lowest possible air temperature entering the external plant, thereby extracting as much of the heat from the data center as possible using the natural environment before the mechanical chiller needs to step in.

In any body of air there are two temperature points, Dry Bulb (DB) and Wet Bulb (WB). The dry bulb is what you would feel if you were dry; the best way to describe wet bulb is the chill you feel when you walk out of the shower before you manage to get to the towel! The water on your body evaporates and cools you towards the WB temperature, which, depending on the relative humidity, is always equal to or lower than the DB temperature.

For example, if the DB temperature in a room is 20°C/68°F and the WB temperature is 14°C/57°F, then air (or a wet object) pushed through a wet area or membrane could potentially be cooled to the WB temperature, at least until the object is heated or dries out.

Why is this usage so high?

Water usage is inversely proportional to the temperature of the water flowing to the data centre’s internal cooling equipment: the lower the flow temperature into the data centre, the higher the water usage by the external plant. Traditional plant has a normal water flow temperature of 7°C/45°F, which means the highest ambient temperature you could potentially utilise naturally to reach that flow temperature is around 5°C/41°F.

How can you improve this usage?

The best possible way to reduce this usage is to elevate the water temperature that the data centre requires in order to cool the equipment efficiently and effectively. The rear door cooler is a great example of this: unlike traditional CRAC systems, which mix colder air with warm air to provide an ambient temperature, it neutralises the heated air itself, so a higher water temperature can be used to obtain the same result. The graphs below show the average high DB and WB temperatures over a thirty-year period.

London 1 WUE Graph
London 2 WUE Graph

As someone who lives in the UK, I can safely say that our weather isn’t always the best; however, this provides a wonderful opportunity to eliminate excess water use.

The important factor here is that anything above the blue line can be handled using the DB alone and therefore requires no additional water usage. Anything between the blue line and the orange line can be cooled using an adiabatic system, and this is where the water usage comes in. Anything beneath the orange line requires additional mechanical cooling, such as a traditional chiller system, which uses maximum water plus additional power for the mechanical equipment.

What happens when you implement a ColdLogik rear door?

London 3 WUE Graph
London 4 WUE Graph

In the graphs above you can see the marked difference between a traditional cooling system, marked in yellow, and the ColdLogik cooling requirement, marked in grey.

In the case of the United Kingdom, and in particular the London area, it’s clear that, by utilising the traditional approach, you would on average need the adiabatic system all year round and would also require mechanical cooling for over half of the year at varying load. However, as most chillers have a minimum run of 25%, less of the free cooling is actually available.

By utilising the ColdLogik door, on average, you would not need to use any additional water for adiabatic cooling for eight months of the year, nor would you require any mechanical assistance through the remaining four months. Chillers would normally remain on site to provide redundancy on the rare occasions that a heat wave outside the average occurs, but they may never actually need to be run, giving an additional operational saving.

Conclusion

In conclusion, even without considering the lower water usage across the remaining four months, which could be substantial, the ColdLogik door would likely save a minimum of 66% of the water that would otherwise be consumed by the traditional cooling methods.

Translated into physical water usage over the year, and based on the 13 largest publicly reported data centre operations in the UK, this could drop the current projected usage figure of 21.32 billion litres of water down to 7.11 billion litres, a 14.21 billion litre reduction. The saving is the equivalent of filling around 5,550 Olympic swimming pools, which would cover an area more than 130 times that occupied by Windsor Castle and its grounds.
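A quick check of the London numbers; the 2.5 million litre Olympic pool volume is a nominal assumption on our part, which is why the computed fill count lands slightly above the 5,550 quoted:

    # Back-of-the-envelope check of the London figures quoted above.
    baseline_l = 21.32e9          # projected annual usage, litres
    with_coldlogik_l = 7.11e9     # projected usage with the ColdLogik door, litres
    olympic_pool_l = 2.5e6        # nominal Olympic pool volume, litres (assumption)

    saving_l = baseline_l - with_coldlogik_l
    print(f"Saving: {saving_l/1e9:.2f} billion litres "
          f"({100 * saving_l / baseline_l:.0f}%)")         # -> 14.21 billion litres (67%)
    print(f"~{saving_l / olympic_pool_l:.0f} pool fills")   # -> ~5684 fills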

If you are looking to improve your water usage with a product that is tried and tested and deployed into the market worldwide then get in touch with USystems today.

Conventional air cooling consumes significant energy when using mechanical chillers; one way to reduce and potentially eliminate that additional energy wastage is to utilise adiabatic cooling. Whilst this significantly improves efficiency on one hand, it dramatically increases water usage to supply the evaporative cooling. The major downside, however, is the growing scarcity of water in certain geographical locations. A typical large-scale Data Center consumes the equivalent of 2,500 people’s water, which is putting pressure on local governments to drive water usage down.

By utilising liquid cooling you can effectively increase the water temperature to the point where adiabatic cooling is no longer needed, giving the best of both worlds: no excess water wasted and better energy efficiency, with a simpler site set-up and requirement. It really is a WIN-WIN-WIN.

San Francisco Bay Area

With the current situation the world finds itself in, one thing has become abundantly clear: data centers have provided people with a safe haven in their own homes whilst lockdowns have been enforced across the globe; at one point half of the human race was in lockdown in one form or another.

Both positives and negatives have arisen from this, from the ‘key worker’ status held by data center employees and their primary suppliers, which highlights how governments across the world perceive the industry and its necessity, through to a sterner examination from the wider world of energy consumption and water usage.

Uptime and reliability have always driven mainstream data center design philosophy. Trade-offs have understandably been made so that operators and owners can be comfortable, safe in the knowledge that consistent design across sites reduces the risk of misalignment or miscalculation.

Whilst data centers are, on the whole, more efficient than they have ever been, there is still vast room for improvement, in particular in the energy consumed for cooling and the water consumed to support adiabatic cooling.

In 2014, Lawrence Berkeley National Laboratory in California issued a report stating that 639 billion liters of water were used on data center cooling in the USA alone; for 2020, the forecast usage figure was a startling 674 billion liters of water.

What is adiabatic cooling?

Water is traditionally used in these data center cooling solutions to obtain the lowest possible air temperature entering the external plant, thereby extracting as much of the heat from the data center as possible using the natural environment before the mechanical chiller needs to step in.

In any body of air there are two temperature points, Dry Bulb (DB) and Wet Bulb (WB). The dry bulb is what you would feel if you were dry; the best way to describe wet bulb is the chill you feel when you walk out of the shower before you manage to get to the towel! The water on your body evaporates and cools you towards the WB temperature, which, depending on the relative humidity, is always equal to or lower than the DB temperature.

For example, if the DB temperature in a room is 20°C/68°F and the WB temperature is 14°C/57°F, then air (or a wet object) pushed through a wet area or membrane could potentially be cooled to the WB temperature, at least until the object is heated or dries out.

Why is this usage so high?

Water usage is inversely proportional to the temperature of the water flowing to the data centre’s internal cooling equipment: the lower the flow temperature into the data centre, the higher the water usage by the external plant. Traditional plant has a normal water flow temperature of 7°C/45°F, which means the highest ambient temperature you could potentially utilise naturally to reach that flow temperature is around 5°C/41°F.

How can you improve this usage?

The best possible way to reduce this usage is to elevate the water temperature that the data centre requires in order to cool the equipment efficiently and effectively. The rear door cooler is a great example of this: unlike traditional CRAC systems, which mix colder air with warm air to provide an ambient temperature, it neutralises the heated air itself, so a higher water temperature can be used to obtain the same result. The graphs below show the average high DB and WB temperatures over a thirty-year period.

What happens when you implement a ColdLogik rear door?

SanFrancisco 3 WUE Graph
SanFrancisco 4 WUE Graph

In the graphs above you can see the marked difference between a traditional cooling system, marked in yellow, and the ColdLogik cooling requirement, marked in grey.

In the case of San Francisco and the Bay Area, it’s clear that, by utilising the traditional approach, you would on average need the adiabatic system all year round and would also require mechanical assistance all year round at varying load. However, as most chillers have a minimum run of 25%, less free cooling could actually be available.

By utilising the ColdLogik door, on average, you would not need to use any additional water for adiabatic cooling for seven months of the year, nor would you require any mechanical assistance through the remaining five months. Chillers would normally remain on site to provide redundancy on the rare occasions that a heat wave outside the average occurs, but they may never actually need to be run, giving an energy saving too.

Conclusion

In conclusion, even without considering the lower water usage across the remaining five months, which could be substantial, the ColdLogik door would likely save a minimum of 58% of the water that would otherwise be consumed by the traditional cooling methods.

Translated into physical water usage over the year, this could drop the current projected US figure of 674 billion liters of water down to 283 billion liters, a 391 billion liter reduction. The saving is the equivalent of filling 156,400 Olympic swimming pools, which would cover an area around 1.5 times that of the city of San Francisco.
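The same quick check for the US figures, again assuming a nominal 2.5 million liters per Olympic pool (here the assumption lines up with the 156,400 quoted):

    # Back-of-the-envelope check of the US figures quoted above.
    baseline_l = 674e9            # projected annual US usage, liters
    with_coldlogik_l = 283e9      # projected usage with the ColdLogik door, liters
    olympic_pool_l = 2.5e6        # nominal Olympic pool volume, liters (assumption)

    saving_l = baseline_l - with_coldlogik_l
    print(f"Saving: {saving_l/1e9:.0f} billion liters "
          f"({100 * saving_l / baseline_l:.0f}%)")          # -> 391 billion liters (58%)
    print(f"{saving_l / olympic_pool_l:,.0f} pool fills")   # -> 156,400 pool fills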

If you are looking to improve your water usage with a product that is tried and tested and deployed into the market worldwide then get in touch with USystems today.

Conventional air cooling consumes significant energy when using mechanical chillers; one way to reduce and potentially eliminate that additional energy wastage is to utilise adiabatic cooling. Whilst this significantly improves efficiency on one hand, it dramatically increases water usage to supply the evaporative cooling. The major downside, however, is the growing scarcity of water in certain geographical locations. A typical large-scale Data Center consumes the equivalent of 2,500 people’s water, which is putting pressure on local governments to drive water usage down.

By utilising liquid cooling you can effectively increase the water temperature to the point where adiabatic cooling is no longer needed, giving the best of both worlds: no excess water wasted and better energy efficiency, with a simpler site set-up and requirement. It really is a WIN-WIN-WIN.

Contact us
