Liquid Cooling: Benefits and Challenges

Over the last few years, Data Centers have raised their profile significantly. As a result, the industry has come under significant scrutiny over power inefficiency, water wastage and overall carbon emissions.

Accordingly, Data Centers today are under extreme pressure to become more sustainable in every area where they negatively impact the environment.

This is not an easy fix, and it certainly cannot be concluded overnight. There are, however, significant changes that can be adopted relatively quickly to allow for a smoother transition. For example, switching from fossil fuel supply to cleaner electricity can be completed with relative ease by securing renewable energy bonds with a supplier. It should be said, however, that whilst renewable energy is being built at a fantastic rate, the supply is still finite, and until it is more ubiquitous, the supply to Data Centers, like other consumers, needs to be reliable.

What is better than using renewable energy?

Using less energy overall, improving its availability elsewhere and reducing the wider grid’s reliance on fossil fuels.

In this document, we aim to provide information on Data Center cooling methodologies and the effects they have on the facilities in which they are embedded, in particular the benefits and challenges of liquid cooling.

Multiple Choices

When talking about next generation Data Center cooling, it’s important to realise that design engineers now have multiple alternative methodologies to choose from, each with its own intricacies to be navigated.

kW Duty Deployment

In 2006, the capacity per cabinet was around 1–1.5kW per 42RU cabinet, increasing to around 2.4kW per 47RU cabinet in 2011. The Data Center industry responded by building facilities that could cope with a 5kW per cabinet average, which was seen to give the industry significant flexibility for the foreseeable future; however, this requirement was surpassed in 2017.

As hardware has advanced, the Data Center has had to adapt to overcome the challenge of increased kW demand per cabinet in several ways.

One option was to share the capacity: a facility designed for 5kW per cabinet could accommodate a 10kW cabinet, but only if the cabinet next to it remained empty to keep the balance.

When real estate is at a premium, this is a costly waste which either has to be charged to the customer or absorbed by the Data Center. With cabinet kW trending upwards, it raises questions about the scalability of existing designs: a Data Center could end up a third empty, or unusable, if the industry continues to increase power demand.
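To illustrate the capacity-sharing arithmetic, here is a minimal sketch in Python (the 5kW design figure is from the text; the 100-cabinet hall and the 7.5kW/10kW densities are hypothetical):

    # Capacity sharing: a hall designed around a fixed kW-per-cabinet average
    # can only populate a fraction of its cabinets at higher densities.
    def usable_cabinets(total_cabinets: int, design_kw: float, cabinet_kw: float) -> int:
        """Cabinets that can be populated before the room budget
        (total_cabinets * design_kw) is exhausted."""
        room_budget_kw = total_cabinets * design_kw
        return int(room_budget_kw // cabinet_kw)

    # A hypothetical 100-cabinet hall designed for 5kW per cabinet:
    print(usable_cabinets(100, 5, 10))   # 50 -> half the hall stands empty
    print(usable_cabinets(100, 5, 7.5))  # 66 -> roughly a third unusable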

Global Average Cabinet kW Requirements and Data Center Design and Build Deliverable per cabinet

Because of these hardware advancements and the increase in duty, some Data Centers have added air-management adaptations and additional hardware to their existing design to meet the increased cooling demand.

These adaptations typically include aisle containment, in-row cooling units and, in some designs, higher-duty air vent tiles.

Adaptations such as these have served Data Center facilities well, meaning that where extreme kW duty cabinets were required, the facility was able to deliver the necessary cooling.

However, this came at an increased cost, and typically still had a cooling limit of around 30kW per cabinet. So even these technologies are not capable of efficiently cooling the latest cabinet densities.

kW Duty Per Technology and its Efficiency

The graph above demonstrates the significant increase in kW demand, with global average kW requirements per cabinet now closely aligned to the kW per cabinet that Data Centers were designed for, leaving no headroom for facilities to evolve to meet higher demands where required.

However, with the industry now driven to be more sustainable, we simply cannot continue designing facilities with so many built-in conflicts; a better future is needed for both operators and those utilising colocation space.

We have to introduce new, more efficient technologies which can deliver within both existing and new build infrastructure.

kW cooling capacities per technology need to be reviewed. The graph below covers the average kW duties each technology supports. Some indirect cooling technologies could even exceed the kW duty shown, depending on the climate in which they are deployed.

ColdLogik Rear Door Coolers vs Other Technologies

White Space Data Center Cooling

The Data Center has always been the key foundation supporting IT hardware, with the capability to provide space, power and cooling to exact requirements. However, as hardware has evolved and increased in power demand, the design architecture, and specifically the cooling capability, of the Data Center has started to fall behind.

This is where Data Center designers have the opportunity to re-evaluate their existing design criteria and utilise alternative technologies which can enhance the future of their facilities and, in turn, the efficiency and sustainability of the Data Center.

Here we examine and give an insight into the benefits of some existing and new technologies helping to address the increased demand for Data Center white space cooling.

Immersion

Immersion cooling is a relatively new technology, and it comes in two versions: single-phase and two-phase. Both allow servers and other components that traditionally sat in a free-standing cabinet to be submerged, in a tub configuration, in a thermally conductive dielectric fluid. With this method, the need for air cooling around the hardware is eliminated, including the fans within servers. However, supplementary cooling is still required, because such a large system of significantly high-temperature fluid heats the overall room; this supplementary cooling will typically be 10-20% of the overall load.

In a single-phase system, servers are installed vertically in a thermally conductive dielectric fluid. Heat is transferred to the liquid through direct contact with server components and removed by heat exchangers in a CDU.

Another challenge that is often overlooked concerns ‘dual phase’ or ‘two phase’ systems: the fluids used are in essence refrigerants, with a global warming impact potentially thousands of times their own weight in CO2 equivalent.

Direct chip level cooling

DCLC, or Direct Chip Level Cooling, is a highly efficient heat rejection system that uses water or one- or two-phase dielectrics to reject heat from chipsets. It is fast becoming one of the ‘go to’ technologies, as an attractive way to boost or supplement the base load of air cooling for higher density applications beyond what aisle containment can support.

This system works well in new or retrofit DCs and complements existing CHW (chilled water) designs in a very efficient way, which can extend or augment many Data Center facilities.

It supports any server, including a number that come pre-integrated, and offers the potential for higher fluid EWT/LWT with improved free cooling. The technology is deemed safe and reliable and has potential for heat recovery with process water; however, this is likely to need a CDU for heat transfer, and these CDU systems need to be local to the deployment, consuming considerable white space.

DCLC is an excellent technology that has taken hardware cooling to the next level by treating just the GPU/CPU; however, it typically removes only 50-70% of the total IT heat load. You will therefore still need another form of cooling to remove the remaining heat.

The heat sinks and other materials typically get removed and replaced at every IT refresh due to variations in hardware design, which is extremely costly, up to 60% of the initial capital expenditure in some cases.

DCLC is a good solution for HPC, and its deployment in existing colocation Data Centers reflects the reality that this form of equipment will normally require supplementary cooling for the foreseeable future, as it is concentrated on the main CPU/GPU heat sources. It serves a purpose in HPC hardware applications and supports some deployments in traditional Data Center / MTDC environments.

It is a more mature approach than immersion cooling, and the reports of leaks that would normally be a cause for concern are mitigated by working with key established providers. However, given the significant number of parts used in this type of deployment, the TCO model can be challenging due to the lack of forward IT compatibility and significant CAPEX replacement and deployment costs.

Rear Door Coolers (RDC)

RDCs have been around the longest of all the next generational white space technologies, albeit initially in passive deployments. Since the emergence of the active RDC or RDHx (Rear Door Heat Exchanger), some models offer the highest heat rejection capabilities in the industry today.

Our product, the ColdLogik RDC, is designed to run on liquids such as water or harmless synthetic fluids, potentially improving efficiencies and increasing flexibility; unlike the other new technologies, it can also negate the need for mechanical cooling.

Most RDCs utilise a closed loop system, meaning they don’t consume water. They also waste significantly less water than traditional Data Center deployments, and in some scenarios would consume no water for the full year without mechanical cooling. As some Data Centers face massive pressure to reduce the amount of potable water they use, this is of significant benefit to them and to the local communities they impact.

The ColdLogik system cools 100% of the room load, thereby mitigating the need for any additional cooling, improving efficiencies significantly and saving real estate.

This type of technology is non-intrusive, and as it is AALC (Air Assisted Liquid Cooling), it poses no risk to equipment warranty whilst allowing for easy integration and deployment into any existing environment.

The cost of the RDCs and deployment components means it is cost effective in terms of overall TCO and not as CAPEX heavy as other technologies. It can also deliver full PUEs of 1.035-1.09, meaning the OPEX savings are also significant. This is achieved by delivering up to 98% power usage savings compared to traditional CRAC/H air cooling, up to 83% compared to traditional aisle containment cooling and up to 45% compared to in-row cooling.
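As a guide to reading those PUE figures, here is a minimal sketch (hypothetical loads; the 1.035 value is the one quoted above, while the 1.6 comparison point is a commonly cited legacy air-cooled average rather than a figure from this document):

    # PUE (Power Usage Effectiveness) = total facility power / IT power.
    def pue(total_facility_kw: float, it_load_kw: float) -> float:
        return total_facility_kw / it_load_kw

    # For a hypothetical 1 MW IT load:
    print(pue(1035, 1000))  # 1.035 -> only 35 kW of overhead (cooling, distribution)
    print(pue(1600, 1000))  # 1.60  -> 600 kW of overhead in a legacy air-cooled hall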

ColdLogik systems will support in excess of 200kW per cabinet whilst allowing N+1 redundancy, which is a unique attribute in the marketplace. There is no need for specialist hardware to allow for integration.

Free cooling is the true goal for ColdLogik RDC systems, and this can be achieved even in the harshest environments. RDCs are compatible with almost all external plant options, including chillers, dry coolers, water towers, boreholes, rivers and lakes. The RDC also allows for higher water differentials, enabling heat recovery where a use exists for the waste heat.

Some RDC manufacturers also require CDUs, which when deployed may need to take up space close to the installation, whereas ColdLogik systems can be deployed onto any existing cabinet, allowing for simple retrofit options without the need for CDU systems.

The RDC is a proven technology and the most mature and versatile of all next generational white space Data Center cooling, efficiently negating 100% of the heat, more than any of the other next generation technologies. It saves water, a key element the DC industry needs to focus on, and delivers power and space savings alongside huge carbon savings.


Water Scarcity: An overview and how it will affect Data Centers

Water Scarcity, an overview

The presence of water around IT equipment dates mainly back to the 1970s, when mainframes were traditionally located within offices. The mainframes were primarily situated around the desks where people were working, and as a result the room environment became a lot warmer and more uncomfortable for personnel. It soon became apparent that some form of cooling was required to address this increase in temperature: a solution that would meet the requirements of both the people in the organisation and the environment the equipment needed to survive in. Thus, IT air conditioning was born.

In order for the personnel and IT equipment to co-exist it was deemed that a room temperature of 21°C (+/- 1°C) and an air humidity level of approximately 50% had to be achieved. This was only going to be possible using specialist air conditioning systems.

In the late 1980s and 1990s, office spaces and IT rooms became separated from each other, and the IT room became more commonly recognised as a comms room/computer room (or data center, as we now know it).

Water was the cooling medium used in these air conditioning systems, which were referred to as Computer Room Air Handling ‘CRAH’ systems. Although situated some distance from the IT itself, water was present in the surrounding areas and allowed the room to reach the set point temperatures and humidity levels.

Since data centers became the dedicated supporting platform for IT, they have utilised water in multiple ways, from fire suppression to perimeter cooling for CRAH systems. However, we now appreciate that water is our most valuable natural resource and it needs to be used wisely. Many areas of the world are already suffering from water scarcity, and this is becoming more widespread across the globe, which is a major concern and obviously harmful to the human race, wildlife and the planet.

Water Scarcity, why?

To address this we need to relay some facts about water. Firstly, of all the water on the planet, only 2.5% is available to drink, known as ‘fresh or potable water’, with the remaining 97.5% being salt water. These statistics become rather alarming when you consider there are over 7.8 billion people (at the time of writing) on the planet, all of whom need water to survive. Even of that 2.5% of fresh water, much is not actually accessible: approximately 68.9% is locked in glaciers, 30.8% is groundwater and only 0.3% is in lakes and rivers.
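Multiplying those shares through shows just how little water is readily accessible; a quick check of the percentages above:

    # Shares quoted above: 2.5% of all water is fresh, and of that fresh
    # water only 0.3% sits in lakes and rivers.
    fresh_share = 0.025
    lakes_rivers_share = 0.003

    readily_accessible = fresh_share * lakes_rivers_share
    print(f"{readily_accessible:.4%} of all water on Earth")  # 0.0075%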

How does it occur?

Water scarcity occurs when demand (from agriculture, cities and the environment) is higher than the available resource. Even today, over ⅓ of the population lives in an area where the water supply is not enough to fulfil demand or has been compromised. This is a massive issue, and one which is happening all over the world. Based on current projections, it is predicted that this will be an issue for ⅔ of the world’s population by 2025 – less than 5 years away!

There are two types of water scarcity: physical and economic. Physical water scarcity is not having enough water to meet our daily needs; economic water scarcity is when human, governmental, institutional or financial capital limits or throttles access. Even though water in nature is free to access, scarcity requires it to be metered out to enable a fairer distribution.

 

So, how do we measure water usage?

Water Usage Effectiveness ‘WUE’ is a sustainability metric created by The Green Grid in 2011 to attempt to measure the amount of water used specifically by data centers cooling their IT assets. To calculate simple WUE, a data center manager divides the annual site water usage in litres by the IT equipment energy usage in kilowatt hours (kWh). Water usage includes water used for cooling, regulating humidity and producing electricity on-site.
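A minimal sketch of that calculation, using hypothetical annual figures:

    # Simple WUE (litres per kWh), as defined by The Green Grid:
    # annual site water usage divided by annual IT energy usage.
    def simple_wue(annual_water_litres: float, annual_it_energy_kwh: float) -> float:
        return annual_water_litres / annual_it_energy_kwh

    # Hypothetical facility: 60 million litres/year and 50 GWh of IT energy/year.
    print(simple_wue(60_000_000, 50_000_000), "L/kWh")  # 1.2 L/kWh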

How can you improve this usage?

There are multiple technologies available which limit water usage without resorting to potentially harmful chemical solutions. For example, one way to cool equipment efficiently and effectively, retrospectively or in a new build, and without the need to redesign, is the Rear Door Cooler (RDC). Unlike traditional CRAC (refrigeration/mechanical cooling) systems, the RDC removes the heat directly at the source, and the air dispelled into the space is at the desired room temperature. CRAC systems typically mix colder and warmer air to provide the room environment, which isn’t the most efficient option.

Conventional air cooling traditionally consumes significant energy when using mechanical chillers. One way to reduce, and potentially eliminate, that additional energy use is adiabatic cooling; but whilst it significantly improves efficiency, it exponentially increases water usage to facilitate evaporative cooling, a major downside given the growing scarcity of potable water in certain geographical locations. RDCs can instead utilise water from sustainable natural sources, such as lakes, riverbeds, aquifers, boreholes, rainwater and even sea water, so as not to disrupt the availability of drinkable water in the vicinity. Water is not taken, polluted or removed; it simply flows around the heat exchanger and returns to its original location, recycling at its best!

How will this affect the data center market?

Reports suggest that some data centers use millions of gallons of water per day to keep cool. This, along with power usage, is one of the hottest issues for data center managers, operators and owners as they try whatever they can to reduce their usage of both without affecting current and future performance.

With insight and the desire to change to more sustainable solutions, data center operators can utilise technology available today, such as ColdLogik by USystems, which can reduce water usage to nearly net zero whilst also saving up to 93% of cooling energy.

It is immoral, and soon to be illegal in many US states, to use such huge amounts of water for a data center, especially when restrictions are close to being implemented in certain regions to stop the general public from using this natural resource due to its scarcity.

Conclusion

In conclusion, the ColdLogik RDC would likely save between 58% and 100% of the water that would otherwise be consumed by traditional cooling methods. For anyone looking to improve water usage, it is a product that is tried, tested, highly regarded, multi award winning and successfully deployed worldwide for over a decade.

By utilising Air Assisted Liquid Cooling ‘AALC’ you can effectively increase the water temperature to the point where adiabatic cooling is no longer needed, giving the best of both worlds: no excess or drinkable water wasted, and better energy efficiency with a simpler site set-up.


Crypto Mining Data Centers: The ColdLogik Deployment

Introduction

Since its inception, crypto mining has been a hot (if not controversial) topic, continuing to divide opinions and even countries. It is well documented that some countries are approving crypto mining and adopting cryptocurrency as legal tender, whereas others have banned its use completely.

Clearly it is disruptive to the financial sector, which is trying to adjust and accommodate it, but one thing can be said, rightly or wrongly: it is not going away, and it is gaining in popularity.

The one thing it is not immune to is the need to address its power usage and its deployment structure and configuration, moving rapidly away from its wild west origins.

At USystems we have used the ColdLogik philosophy to design a highly energy efficient, rapid roll-out solution capable of cooling more crypto units per rack in new build and existing colocation data centers, whilst ensuring each retains the ideal temperature during operation in the most cost effective and sustainable way available.

Rapid Deployment Crypto Mining from USystems

The key to crypto mining sites is the ease and speed of deployment. USystems have designed a rack dedicated to housing crypto miners, ensuring maximum density per square foot: in fact, the ColdLogik Crypto-Rack can house up to 35x S19 ASIC Antminers.

At 112kW per rack as standard, high-performance cooling becomes a prerequisite, which is why we couple the Crypto-Rack with our latest ColdLogik CL23 HPC Rear Door Cooler. With one eye on energy savings, the 112kW per Crypto-Rack can be cooled year-round using free cooling, and with in excess of 200kW per rack capability, the ColdLogik solution can cope with 170kW of overclocking, meaning your deployment is future proofed. This unique configuration allows the USystems Crypto-Rack to be rapidly deployed into a dedicated facility or an existing colocation data center: simply deploy, connect the supply water, and power up.

ColdLogik Crypto Cooling: The Technology

ColdLogik Rear Door Coolers are established as highly efficient cooling systems for data center/server racks. Designed to operate on a closed loop water circuit, they ensure optimum thermal and energy performance by removing heat generated by the active equipment directly at source.

Designed to meet the challenging demands of High-Performance Computing (HPC) cooling, USystems with its unique RDC has positioned itself alongside water-to-the-chip and immersion cooling technologies, and the CL23 HPC is capable of an unrivalled 200kW of sensible cooling per industry standard rack. Unlike other high performing cooling technologies, the RDC requires no specialist infrastructure in the data center and no specialist servers; it is fitted to standard IT racks, has retrofit capability, occupies only a small footprint, is easy to install and simple to roll out. The CL23 HPC is unquestionably cost effective on all levels.

The CL23 HPC is, by design, capable of controlling the whole room environment without any additional cooling apparatus, unlike equivalent technologies. In addition, this ColdLogik solution offers significant capital expenditure savings, and with an EER in excess of 100 at maximum duty, the CL23 HPC provides better operational expenditure too.

How it works

Air Assisted Liquid Cooling ‘AALC’ allows for the best of both worlds, enabling higher densities in standard data center designs and bringing levels of efficiency that are truly capable of enabling change in your next retrofit or new build project.

Ambient air is drawn into the rack via the IT equipment fans. The hot exhaust air is expelled from the equipment and pulled over the heat exchanger, assisted by EC fans mounted in the RDC chassis. The exhaust heat transfers into the cooling fluid within the heat exchanger, and the newly chilled air is expelled into the room at, or just below, the predetermined room ambient temperature, designed around sensible cooling. Both processes are managed by the ColdLogik adaptive intelligence present in every ProActive RDC; in this way, the Rear Door Cooler uses air assisted liquid cooling to control the whole room temperature automatically at its most efficient point.
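On the water side, that heat transfer follows the standard sensible-heat relation Q = m x cp x dT. A minimal sketch with a hypothetical load and differential (not USystems’ published sizing method):

    # Water mass flow (kg/s) needed to carry away a sensible heat load.
    SPECIFIC_HEAT_WATER = 4.186  # kJ/(kg*K)

    def required_flow(heat_load_kw: float, delta_t_k: float) -> float:
        """From Q = m_dot * cp * dT, rearranged for m_dot."""
        return heat_load_kw / (SPECIFIC_HEAT_WATER * delta_t_k)

    # A hypothetical 93 kW rack with a 12 K supply/return differential:
    print(round(required_flow(93, 12), 2), "kg/s")  # ~1.85 kg/s, roughly 1.85 L/s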

ColdLogik RDCs enhance the efficiency of most data centers without the need for any changes to their current design. However, greater energy efficiencies are achievable when the complete ColdLogik solution is deployed.

By negating heat at the source and removing the need for air mixing or containment, you gain the ability to significantly increase the supply water temperature, which means more efficient external heat rejection options become available. In some scenarios this means the ColdLogik RDC solution removes all compressor-based cooling, promoting the option to free-cool all year round.


The ColdLogik Crypto Set Up

Purpose built reinforced slide-in shelves and corresponding CL23 RDC capable of up to 200kW of cooling per cabinet.

Suitable for cold climate locations and can be deployed anywhere in the world or where power is cheap, as the ColdLogik RDCs can provide up to 100% free cooling.

Supports 33x S19 ASIC Antminers in a 1000mm (w) x 600mm (d) cabinet, 52RU in height
Supports 35x S19 ASIC Antminers in an 800mm (w) x 1200mm (d) cabinet, 52RU in height

Conclusion on ColdLogik Crypto Mining Data Center

The USystems crypto mining data center solution is designed to deliver:

  • Speed of deployment
  • Largest mining deployment footprint
  • Most efficient mining cooling, deployable anywhere in the world
  • Cost effective deployment

With this system, crypto mining can scale faster and more easily, in climates where power is cheap, with no real worry about how to cool the miners. Delivering more miners per footprint than any other technology available today, it really is the true mining delivery system.


Crypto Mining Data Centers: An Understanding

What is Cryptocurrency?

Cryptocurrency, crypto-currency, or crypto is a form of currency constructed from a collection of binary data. It is designed to work as a medium of exchange, where individual coin ownership records are stored in a ‘ledger’: a computerized database using strong cryptography to secure transaction records. This cryptography controls the creation of additional coins and verifies the transfer of coin ownership.

Some crypto schemes use validators to maintain the cryptocurrency. In a proof-of-stake model, owners put up their tokens as collateral and in return get authority over the token in proportion to the amount they stake. Generally, these token stakeholders gain additional ownership in the token over time via network fees, newly minted tokens or other such reward mechanisms. Cryptocurrency does not exist in physical form (like paper money or coins) and is typically not issued by a central authority. Cryptocurrencies typically use decentralized control, as opposed to a central bank digital currency (CBDC). When a cryptocurrency is mined, created prior to issuance or issued by a single issuer, it is generally considered to be centralized. When implemented with decentralized control, each cryptocurrency works through distributed ledger technology, typically a blockchain, that serves as a public financial transaction database.

What is a Ledger?

A ledger, also called a shared ledger or distributed ledger technology (DLT), is a consensus of replicated, shared, and synchronized digital data, geographically spread across multiple sites, countries, or institutions. Unlike a centralized database, it has no central administrator.

This can also be known as Replicated Journal Technology (RJT), since the information is replicated across nodes each containing a full copy, and the information in the blocks is included in time order, more in the form of an accounting journal than an accounting ledger.

A peer-to-peer network is required as well as consensus algorithms to ensure replication across nodes is undertaken. One form of distributed ledger design is the ‘blockchain’ system, which can be either public or private.

A blockchain is a growing list of records, called ‘blocks’, that are linked together using cryptography. It has also been described as a “trust-less and fully decentralized peer-to-peer immutable data storage” spread over a network of participants often referred to as ‘nodes’. Each block contains a cryptographic ‘hash’ of the previous block, a timestamp, and transaction data (generally represented as a ‘Merkle tree’). The timestamp proves that the transaction data existed when the block was published in order to be included in its hash. As each block contains information about the block before it, they form a chain, with each additional block reinforcing the ones before it. Blockchains are therefore resistant to modification of their data: once recorded, the data in any given block cannot be altered retroactively without altering all subsequent blocks.
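The hash-chaining property described above is easy to demonstrate; here is a toy Python sketch (illustrative only, with made-up transactions, not a real cryptocurrency implementation):

    import hashlib
    import json
    import time

    def block_hash(block: dict) -> str:
        """SHA-256 over the block's canonical JSON encoding."""
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def new_block(prev_hash: str, transactions: list) -> dict:
        return {"prev_hash": prev_hash, "timestamp": time.time(),
                "transactions": transactions}

    genesis = new_block("0" * 64, ["genesis"])
    second = new_block(block_hash(genesis), ["alice pays bob 1 coin"])

    # Tampering with an earlier block changes its hash, breaking the link
    # recorded in every later block:
    genesis["transactions"] = ["forged entry"]
    print(second["prev_hash"] == block_hash(genesis))  # False -> tamper detected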

Blockchains are typically managed by a peer-to-peer network for use as a publicly distributed ledger, where nodes collectively adhere to a protocol to communicate and validate new blocks. Although blockchain records are not unalterable as forks are possible, blockchains may be considered secure by design and exemplify a distributed computing system with high Byzantine fault tolerance.

The blockchain was invented by a person (or group of people) using the name Satoshi Nakamoto in 2008 to serve as the public transaction ledger of the cryptocurrency ‘Bitcoin’. The identity of Satoshi Nakamoto remains unknown to date. The invention of the blockchain for Bitcoin made it the first digital currency to solve the double-spending problem without the need for a trusted authority or central server. The Bitcoin design has inspired other applications and blockchains that are readable by the public and are widely used by cryptocurrencies. The blockchain is considered a type of payment rail. Private blockchains have been proposed for business use, but Computerworld called the marketing of such privatized blockchains without a proper security model “snake oil”. However, others have argued that permissioned blockchains, if carefully designed, may be more decentralized and therefore more secure in practice than permissionless ones.

Crypto Mining Data Centers

Traditionally, these were labelled as ‘mining farms’ and were large hangars in remote areas with cold climates, but with significant power.  However, as we have progressed further on the crypto journey, more purpose-built facilities have been established and these are commonly known as data centers – ‘mining’ data centers.

Mining farms were initially constructed, in most cases, with no real structure to the deployment of miners: distribution racking held the miners, the walls of the units had slots cut in to allow for airflow, and that was it. Now, true mining data centers are starting to appear that are significantly more structured and designed more in line with existing data centers, allowing for all the design aspects you would normally see in a data center facility, including power and fire suppression plus more modern cooling.

Historically, the mines were cooled by the climate of the deployment location. For example, in areas such as Russia, which borders the Arctic Circle, it is easier to just allow the air to pass through, but in some cases this is not practical. Mining data centers are appearing in more “standard/hot” areas such as Texas, where power is cheap but without access to the cooler climate of Russia, Canada or Iceland, for example.


Crypto Mining Deployment

USystems, with their crypto-focused ColdLogik brand of Rear Door Heat Exchangers and purpose-designed enclosures, provide everything for a complete Crypto Mining Data Center deployment. An example is 35x miners in an 800mm x 1200mm, 52RU cabinet with purpose built reinforced slide-in shelves and a corresponding CL23 RDC, capable of up to 200kW of cooling per cabinet.

With standard S19 miners working at circa 3.2kW per unit (without overclocking), a deployment of 112kW per cabinet is “standard”.
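That headline figure is straightforward to reproduce; the per-miner overclock number below is derived from the article’s 170kW figure rather than quoted directly:

    MINER_KW = 3.2        # standard S19 draw, without overclocking
    MINERS_PER_RACK = 35  # ColdLogik Crypto-Rack capacity

    print(MINER_KW * MINERS_PER_RACK)  # 112.0 kW per rack, the quoted standard load

    # The 170 kW overclocked rack figure implies roughly:
    print(round(170 / MINERS_PER_RACK, 2), "kW per overclocked miner")  # ~4.86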

This also means that mining areas no longer have to be in cold climate locations: they can be deployed anywhere in the world where power is cheap, as the ColdLogik RDCs can provide up to 100% free cooling, meaning a significant operational saving.

Crypto Mining Footprint

In traditional mining farms, the layout utilized a distribution-style racking system to support the miners, equating to around 21 units in the footprint area of a standard data center cabinet. Such a deployment would be classed as unorthodox, with space lost due to the need to cool.

With ColdLogik, we can house 35 miners in a single cabinet footprint, a gain of around 42%, and these can be deployed within a standard data center configuration, housing more miners per footprint while cooling efficiently in any region.


Closing Summary

Cryptocurrency has fast become a popular form of currency. Adoption is still on the increase, but it is still looked upon as potentially underhand because it remains unregulated, and though vast fortunes can be made, they can also be lost due to its volatility.

China used to be the crypto mining capital of the world until mining was made illegal there. This has meant that other areas, especially in the USA, such as Miami, are vying to become crypto mining hotspots. The one notable issue is the way crypto mining is delivered there: only a handful of colocation data center providers can truly support the extreme high-density power and cooling requirements necessary for mining, with all other data centers having to build specifically to meet the demand.

The fact is that mining is an anomaly compared to traditional hardware deployments: it does not require redundancy, it exceeds 100kW on every rack, and it needs sustainable infrastructure to keep pricing down. That combination could be classed as a difficult request, unless you opt for ColdLogik from USystems.

Get in touch with your regional ColdLogik salesperson for support in understanding and delivering crypto mining data center solutions in your facility.


USystems Launches New ColdLogik Website


Known for their multiplicity of data center solutions, USystems has now separated its ColdLogik Rear Door Cooler solution from the main USystems website, as the product has gained more traction in the marketplace and now warrants its own website.

They are very pleased to announce the launch of their new website, www.coldlogik.com, as ColdLogik is one of their most searched for phrases on the internet.

USystems is a leading provider of Data Center cooling solutions, striving to provide unparalleled performance and true value to its customers by designing and manufacturing world class, multi award winning cooling solutions which improve operational and energy efficiency, helping to reduce carbon emissions and save money.

With increased rack densities coming into the mainstream, data centers now have to focus on their environmental impact, and given the energy and emission savings ColdLogik offers for everything from low to high density cooling, this is the perfect time to give ColdLogik its own standalone platform.

The new website focuses on the savings and benefits the ColdLogik RDC offers, with clear navigation to the most relevant information relating to technical capabilities or environmental advances.

“We are very excited to launch the ColdLogik by USystems website, it is a key step in our goal to differentiate our Rear Door Coolers in the marketplace. The ColdLogik RDC range can fit any Data Center. Whether it’s a retrofit or new build design, USystems technology can be deployed to meet any criteria, from increased efficiency, be more sustainable for the planet, and even support with a bespoke duty up to 200kW per rack”.

Gary Tinkler, VP Sales Marketing ColdLogik


Founded in 2004, USystems is headquartered in the UK, with additional offices in Europe, the Middle East, North America, and India. The company offers Air Cooling, DX, and LX Cooling solutions to Data Centres of all sizes, including standard, hyperscale and edge, and those with High Performance and Super Compute capabilities.

For more information, visit www.usystems.com or follow USystems Ltd on LinkedIn and Instagram.

Media Contact:

Kate Rooney

Kate.rooney@usystems.com

+44 (0) 1234 761 720


USystems exhibits latest ColdLogik Cooling Solutions at SC21

USystems exhibits its latest ColdLogik cooling solutions at SC21. Data is driving development across every industry, and by harnessing data faster and more accurately than ever before, we can achieve things that were previously a dream. As innovators, we are inspired by the people behind the breakthroughs, and the SC21 exhibition is the place to be.

ColdLogik by USystems is primed to support any organization with its range of efficient and sustainable IT cooling and Micro Data Center products. No matter the requirement, ColdLogik allows for flexible, adaptive, scalable, and futureproofed deployments, delivering optimized Data Center cooling technology that is proven to save money as well as water, electricity, and carbon emissions. Our solutions are purposefully designed to deliver efficiencies for HPC.

Communities such as SC, built on a diverse community of researchers, scientists, and other participants, celebrate progress in the industry and inspire breakthroughs. USystems is thrilled to be a part of this community, and to launch the new additions to our ColdLogik RDC range at SC21.

To ensure the health and safety of our team members and other attendees, the USystems onsite team will follow all COVID-19 protocols required by the SC21 conference, including providing proof of vaccination and wearing a mask or facial covering while indoors.

Join us in St. Louis to learn why USystems is exhibiting its latest ColdLogik solutions at SC21, and how we are cooling the innovations that could improve every life across the globe.

Find us at SC21 at Booth #2827

To find out more about why USystems is the ideal supplier for HPC cooling visit: About – USystems

To find out more about the show visit: Home • SC21 (supercomputing.org)


New model of ColdLogik Smart Passive Rear Door Cooler: Now available

London, August 2021 – USystems has released a new version of the ColdLogik Smart Passive Rear Door Cooler (RDC): the CL21 Smart Passive RDC, a self-cooling passive RDC with an upgrade path to a proactively cooled RDC.


The Features:

  • Universal rack compatibility
  • Retrofit capable
  • Simple installation
  • Air pressure drops fully disclosed to ensure compatibility with in-rack equipment prior to deployment
  • Pay as you grow for definable kW ratings if upgrade path is needed
  • Less wastage through upgrade path – no need to replace the existing system
  • Feature packed ‘ColdLogik management system’ available on active upgrade
  • New axial fan design on upgrade path
  • Best in class energy usage at industry typical densities, when compared to other active systems
  • Integrated interface frame for the shallowest RDC USystems has ever produced

Carefully designed and tested, the new Smart Passive RDC is expected to enable a better transition from traditional cooling methods to a much more energy efficient overall solution.

The CL21 was designed from the ground up to be the best possible solution in the industry for racks with a cooling requirement between 0-29kW. The CL21 facilitates significantly better energy efficiency and waste free upgrade paths, all whilst reducing both the footprint required within the whitespace and the carbon footprint for the environment.


‘The CL21 is the next step in our journey towards helping our customers improve their efficiency of energy and space alongside their on-site sustainability. As with any of our products, it was born of multiple customers’ requirements, and feedback has been taken by the team throughout the process to optimise the design.

As a result, I am sure you will see subtle differences between the CL21 and the rest of the product range. However, in our push to improve our offering, it will certainly not be our last product to see some change.

Working with our customers to facilitate their requirements is right at the very core of USystems DNA and the CL21 shows our continuing commitment to this.’

-Sam Allen, VP Tech Sales


USystems and Rahi Partner to Deliver to the Global Data Center Market

London, United Kingdom, August 2021 – USystems Ltd, the specialist manufacturer of data center cooling solutions, and Rahi Systems, the design, supply and data center solution installation specialists, have announced their partnership to deliver global supply and installation of USystems ColdLogik Rear Door Heat Exchangers (RDHX) and In Row Coolers.

USystems and Rahi will work together to assist some of the world’s largest organisations to increase efficiency and reduce carbon emissions to become more sustainable, by designing and delivering robust data centers fit for the future. This partnership will deliver a full turnkey solution which not only offers exceptional value and service but, due to the efficient design of the ColdLogik RDHX and InRow Coolers, will also offer significant operational savings, reduced time-to-market and the elimination of inefficiencies for Data Centers whilst increasing sustainability.

The USystems ColdLogik RDHX solution is an innovative liquid cooling solution with adaptive technology. It provides an unrivalled, versatile system which can not only deliver 200kW plus of cooling per rack (ideal for HPC or AI applications), but can also work at the lower end of kW per rack requirements, supporting MTDCs at a scale that is not possible in their current configurations. No supplementary cooling is required, and it has the ability to control the entire data hall temperature, allowing more space for racks and complete flexibility on kW per rack cooling, higher than any other cooling platform. A room neutral system means no more hot or cold aisles with additional “bolt on” components driving up operational expenditure.

Rahi will provide extensive data center design, supply all components for the desired layout and install everything as a full turnkey system, offering hyperscale, colocation, and enterprise data center operators a standardized and flexible data center solution with huge benefits, including some of the best operational savings in the industry. The new partnership enables Rahi to incorporate the USystems RDHX into its global data center portfolio, and allows USystems a turnkey delivery with a partner that is expert in data center deployments for the world’s leading data center providers.

“Our aim is to optimise existing data centers and design new build DCs that offer exceptional scale as standard while achieving significant free cooling with no major design changes. Rahi reflects our own drive to support data centers globally, embracing this innovation to deliver efficient and sustainable data centers of the future by sharing USystems technology with the industry we work in. Partnering with Rahi enables us to deliver globally, at scale, for our own customers as well as Rahi’s, who now gain access to the world-leading Rear Door Coolers as well as receiving the exceptional design, supply, install and maintenance Rahi delivers”.

Scott Bailey, CEO of USystems

“We value our partner eco-system at Rahi as they enable us to provide the best possible solutions and services to our customers. We are delighted to bring USystems on board to support the data centers of tomorrow.”

Tarun Raisoni, CEO and Co-Founder of Rahi


Liquid Cooling in Data Centers, why is it needed?

Modern chips and processors are smaller than ever, to the point that the manufacturing process has all but exhausted Moore’s Law.

These modern chips and processors are packed so tightly that they create a significant heat load, which means we need cooling solutions that are more efficient and sustainable, i.e., able to absorb heat more effectively than standard air cooling. This has led the Data Center industry to investigate liquid cooling in more depth.

What is Liquid Cooling? 

In theory it is a simple cooling methodology: using liquid as the primary means of capturing and expelling the heat created by the IT load.

Unfortunately, there are many misconceptions about the different technologies available, with a large portion of our industry believing there are only two types of Liquid Cooling:  

  1. Immersion Cooling   

IT equipment is submerged inside an electrically non-conductive liquid inside a tub that absorbs heat. 

  2. Direct Chip Liquid Cooling (DCLC)

Small hoses bring cool water to heat sinks or cold plates and circulate warmed water to a heat exchanger.

Even though both technologies have merit, there are considerable downsides. Immersion cooling carries significant setup costs and does not capture 100% of the heat generated, meaning a secondary, supplementary cooling source is required; the same is true of Direct Chip Level Cooling, which only captures between 60-80% of the heat.

So, is there an alternative technology?  

Yes, there is: Rear Door Heat Exchangers (RDHx), also known as Rear Door Coolers (RDC), are a proven liquid cooling technology, sometimes known as Air Assisted Liquid Cooling, designed to operate as a closed loop system.

In principle there are two main types of Rear Door Cooling: passive and active.

First, a passive Rear Door Cooler (RDC) works by allowing ambient air to be pulled into the rack via the IT equipment fans; the resulting hot exhaust air from the hardware is expelled over a heat exchanger, transferring the heat into the liquid inside the coil, and the resulting chilled air is expelled back into the room.

Similarly, with a ProActive Rear Door Cooler (RDC), ambient air is pulled into the rack by the IT equipment fans; however, the resulting hot exhaust air is assisted by EC fans mounted within the proactive door and expelled over a heat exchanger, transferring the heat into the liquid inside the coil, with the resulting chilled air expelled back into the room at, or just below, the predetermined room ambient temperature.

In this way the ProActive Rear Door Cooler can control the whole room temperature environment, without supplementary cooling technology. 

In summary, 

When the whole data center is taken into account, all liquid cooling solutions require air to assist in the cooling process; the ColdLogik solution integrates this as standard, without the need for supplementary cooling equipment.

There are many misleading industry articles; hopefully this document sets the record straight.


USystems opens a new office in Rochester, NY

London, 1st July 2021 – USystems, the leading provider of critical infrastructure solutions, is proud to announce the official opening of its brand-new office in downtown Rochester, NY. The New York office will serve as the USA headquarters, supporting growing demand for the ColdLogik product.

The new site is located in downtown Rochester, NY, in the historic Sibley building, which in 1929 was the largest department store between Chicago and New York City. The city rose to prominence as the birthplace of technology companies such as Eastman Kodak Company, Xerox, and Bausch & Lomb.

In addition, USystems has secured the talents and knowledge of Dr. David Brown, who will serve as Vice President of Engineering and brings years of experience in the data center infrastructure field. Please join us in welcoming him to our growing U.S. based team.

‘At a critical point in USystems history, we are extremely proud to bring our knowledge directly to our customers in the United States of America with our brand-new USA HQ’

Scott Bailey, Director


About USystems

USystems is a leading provider of Data Center cooling solutions that strives to provide unparalleled results and true value to our customers, with cooling solutions that improve operational and energy efficiency. Founded in 2004, USystems is headquartered in the UK, with additional offices in Europe, the Middle East, North America, and India. The company offers Air Cooling, DX, and LX Cooling solutions to Data Centres of all sizes, including standard, hyperscale and edge, and those with High Performance and Super Compute capabilities.

These cooling solutions are achieved with the innovative range of ColdLogik Rear Door Coolers, which since their inception in 2007 have garnered much praise and industry attention, winning multiple awards globally for their outstanding performance, sustainability, energy savings and unrivalled efficiency.

For more information, visit www.usystems.com or follow USystems Ltd on LinkedIn and Instagram.

Contact Information:


David Brown

Vice President – Engineering

david.brown@usystems.com

Contact us
