Mastering the Heat: Cooling & Power Solutions for a 50kW Rack Density AI Data Center

by Sean Murphy on 2/27/24 11:42 AM

As artificial intelligence (AI) continues to reshape industries and drive innovation, the demand for high-performance computing in data centers has reached unprecedented levels. Managing the cooling and power requirements of a 50kW rack density AI data center presents a unique set of challenges. In this blog post, we will explore effective strategies and cutting-edge solutions to ensure optimal performance and efficiency in such a demanding environment. 

Precision Cooling Systems

The heart of any high-density data center is its cooling system. For a 50kW rack density AI data center, precision cooling is non-negotiable. Invest in advanced cooling solutions such as in-row or overhead cooling units that can precisely target and remove heat generated by high-density servers. These systems offer greater control and efficiency compared to traditional perimeter cooling methods.
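
To put the challenge in perspective, here is a quick back-of-envelope airflow estimate (a minimal Python sketch; the 25°F air temperature rise across the servers is an illustrative assumption, not a spec):

```python
# Rough airflow required to carry away a rack's heat, using the standard
# sensible-heat relation Q(BTU/hr) = 1.08 * CFM * delta_T(F).

RACK_KW = 50.0      # assumed IT load per rack
DELTA_T_F = 25.0    # assumed air temperature rise across the servers

btu_per_hr = RACK_KW * 1000 * 3.412       # 1 W is roughly 3.412 BTU/hr
cfm = btu_per_hr / (1.08 * DELTA_T_F)

print(f"{RACK_KW:.0f} kW rack needs ~{cfm:,.0f} CFM at a {DELTA_T_F:.0f}F rise")
# Roughly 6,300 CFM for a single rack, which is why heat removal has to be
# targeted at the rack rather than handled from the room perimeter.
```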

Liquid Cooling Technologies

Liquid cooling has emerged as a game-changer for high-density computing environments. Immersion cooling systems or direct-to-chip solutions can effectively dissipate the heat generated by AI processors, allowing for higher power densities without compromising reliability. Explore liquid cooling options to optimize temperature control in your data center.
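
For a sense of scale, here is a minimal sketch of the coolant flow a direct-to-chip loop would need (the 10°C loop temperature rise and full 50kW heat capture are assumptions for illustration):

```python
# Back-of-envelope coolant flow from Q = m_dot * c_p * delta_T.

RACK_KW = 50.0      # assumed heat captured by the liquid loop
DELTA_T_C = 10.0    # assumed coolant temperature rise
CP_WATER = 4186.0   # specific heat of water, J/(kg*K)
RHO_WATER = 0.998   # density of water near room temperature, kg/L

m_dot = RACK_KW * 1000 / (CP_WATER * DELTA_T_C)   # mass flow, kg/s
lpm = m_dot / RHO_WATER * 60                      # volume flow, L/min

print(f"~{lpm:.0f} L/min of water removes {RACK_KW:.0f} kW at a {DELTA_T_C:.0f}C rise")
# About 72 L/min: a far smaller "pipe" than the thousands of CFM of air
# the same load would require, which is the core appeal of liquid cooling.
```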

High-Efficiency Power Distribution

To meet the power demands of a 50kW rack density, efficient power distribution is paramount. Implementing high-voltage power distribution systems and exploring alternative power architectures, such as busway systems, can enhance energy efficiency and reduce power losses. This not only ensures reliability but also contributes to sustainability efforts.
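
The case for higher-voltage distribution is straightforward Ohm's-law arithmetic; the sketch below compares line current at two common voltages (the 50kW load and unity power factor are illustrative assumptions):

```python
# For the same power, current scales as 1/V, so conductor (I^2 * R)
# losses scale as 1/V^2. Three-phase line current: I = P / (sqrt(3) * V * PF).
import math

P_W = 50_000.0   # assumed rack load
PF = 1.0         # assumed power factor

def line_current(v_line_to_line):
    return P_W / (math.sqrt(3) * v_line_to_line * PF)

i_208 = line_current(208.0)
i_415 = line_current(415.0)
print(f"208 V: {i_208:.0f} A    415 V: {i_415:.0f} A")
print(f"relative conductor loss, 208 V vs 415 V: {(i_208 / i_415) ** 2:.1f}x")
# About 139 A vs 70 A: roughly four times the conductor loss at 208 V.
```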

Redundancy and Resilience

A high-density AI data center demands a robust power and cooling infrastructure with built-in redundancy. Incorporate N+1 or 2N redundancy models for both cooling and power systems to mitigate the impact of potential failures. Redundancy not only enhances reliability but also allows for maintenance without disrupting critical operations.
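
The sizing arithmetic behind N+1 and 2N is simple; here is a small sketch (the row load and per-unit cooling capacity are assumed values):

```python
# N+1: enough units (N) to carry the load, plus one spare.
# 2N: the entire system duplicated.
import math

row_load_kw = 8 * 50.0     # assumed: a row of eight 50 kW racks
unit_capacity_kw = 100.0   # assumed capacity per cooling unit

n = math.ceil(row_load_kw / unit_capacity_kw)   # units needed for the load
print(f"N = {n}, N+1 = {n + 1}, 2N = {2 * n} units for {row_load_kw:.0f} kW")
```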

Dynamic Thermal Management

Utilize intelligent thermal management systems that adapt to the dynamic workload of AI applications. These systems can adjust cooling resources in real-time, ensuring that the infrastructure is optimized for varying loads. Dynamic thermal management contributes to energy efficiency by only using the necessary resources when and where they are needed.
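
Conceptually, these systems behave like a control loop. The sketch below is a deliberately simplified proportional controller, not any vendor's actual algorithm; the setpoint and gain are invented for illustration:

```python
# Nudge fan output up when the supply air runs hot and down when it runs
# cool, so cooling tracks the workload instead of running flat out.

SETPOINT_C = 24.0   # assumed target supply-air temperature
KP = 8.0            # assumed gain: % fan speed per degree C of error

def next_fan_speed(measured_c, current_pct):
    error = measured_c - SETPOINT_C
    return max(30.0, min(100.0, current_pct + KP * error))

speed = 60.0
for temp in [24.0, 26.5, 25.0, 23.5]:   # simulated temperature readings
    speed = next_fan_speed(temp, speed)
    print(f"supply air {temp:.1f}C -> fan {speed:.0f}%")
```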

Energy-Efficient Hardware

Opt for energy-efficient server hardware designed for high-density environments. AI-optimized processors often come with advanced power management features that can significantly reduce energy consumption. Choosing hardware that aligns with your data center's efficiency goals is a key factor in managing power and cooling requirements effectively.

Monitoring and Analytics

Implement comprehensive monitoring and analytics tools to gain insights into the performance of your AI data center. Real-time data on temperature, power consumption, and system health can help identify potential issues before they escalate. Proactive monitoring allows for predictive maintenance and ensures optimal conditions for your high-density racks.
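
As a toy illustration of what "proactive" means here, the sketch below flags an inlet temperature that drifts above its recent trend before a hard alarm would ever fire (the window size and drift threshold are arbitrary assumptions):

```python
# Compare each new sample against a rolling baseline instead of a fixed
# alarm threshold, so slow drift is caught early.
from collections import deque

WINDOW = 12           # assumed: keep the last 12 samples
DRIFT_ALERT_C = 2.0   # assumed drift over the rolling mean worth a look

readings = deque(maxlen=WINDOW)

def check(sample_c):
    readings.append(sample_c)
    baseline = sum(readings) / len(readings)
    if sample_c - baseline > DRIFT_ALERT_C:
        return f"WARN: {sample_c:.1f}C is {sample_c - baseline:.1f}C over trend"
    return "ok"

for t in [24.1, 24.3, 24.2, 24.4, 27.1]:   # simulated inlet readings
    print(check(t))
```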

Successfully cooling and powering a 50kW rack density AI data center requires a holistic and forward-thinking approach. By investing in precision cooling, liquid cooling technologies, high-efficiency power distribution, redundancy, dynamic thermal management, energy-efficient hardware, and robust monitoring tools, you can create a resilient and high-performing infrastructure. Embrace the technological advancements available in the market to not only meet the challenges posed by high-density AI computing, but to excel in this dynamic and transformative era of data center management.

Author's Note:

Not a bad blog post, right? I was tasked with writing a blog post on how to power and cool high-density racks for AI applications. So, I had ChatGPT write my blog post in 15 seconds, saving me a ton of time and allowing me to enjoy watching my kid’s athletic events this weekend. As end users embrace AI technology, it is imperative that we understand how to support the hardware and software that enables us to achieve these time-saving technologies. Over the past six months, about 20% of my time has been spent discussing how to support customer rack densities of 35kW to 75kW.

Another key is understanding the balance between AI and the end user’s ability to recognize its limitations and areas for improvement. AI taps into the database of information that is the Internet. Powerful, but it does so (at least currently) in a fashion that makes it appear to be two years behind. For example, this blog post was written to reflect a 35kW rack density, and accordingly, ChatGPT noted 35kW. Today, however, I’m regularly working with racks supporting AI that average 50kW, have seen them go up to 75kW… and know that applications can hit upwards of 300kW per rack. So, please note: anywhere in the blog where it says 50kW, human intervention made the necessary edits to AI's outdated "35kW".

Also, just for reference, a 75kW application requires 21 tons of cooling for one IT rack! So, these new high-density technologies require the equivalent of one traditional perimeter CRAC to cool one AI IT Rack. DVL is here to help provide engineering and manufacturing support to design your Cooling, PLC Switchgear, Busway Distribution, Rack Power Distribution, Cloud Monitoring, and other critical infrastructure to support your efficient AI Technology.
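
For readers who want to check that tonnage math, it follows directly from the definition of a refrigeration ton (1 ton = 3.517kW of heat removal); a quick sketch:

```python
# Convert rack IT load to the cooling tonnage needed to reject it.

KW_PER_TON = 3.517   # one refrigeration ton removes 3.517 kW of heat

for rack_kw in (35, 50, 75, 300):
    print(f"{rack_kw:>3} kW rack -> {rack_kw / KW_PER_TON:5.1f} tons of cooling")
# 75 kW works out to ~21.3 tons: about one traditional perimeter CRAC
# dedicated to a single rack.
```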

Topics: Data Center, Thermal Management, Data Center efficiency, beyond the product, artificial intelligence

Available On-Demand: DVL Power Hour Webinars

by Jodi Holland on 7/6/23 2:15 PM

Since we began our DVL Power Hour webinar series a few years ago, we've been able to bring you more than 40 live episodes. We’ve hosted many discussions about a variety of topics related to critical infrastructure and data centers. Thermal Management. Batteries. E-Rates. Green Data. Pandemics. Service. We’ve talked about all of this and more, as we welcomed guests from some of our partners, such as representatives from Vertiv, Generac, Critical Labs, Packet Power, and more, as well as some of our customers, and even scientists who have helped explain some of the latest technologies and trends.

If you haven’t had the chance to tune in for any of these webinars, or haven't in a while, we hope you’ll make your way over to our list of past webinars, as all our previously broadcast episodes can be accessed on-demand via our website. We invite you to browse topics and titles to find any that may interest you.

Some of our most popular episodes include:

  • "How to Choose the Right Cooling System"
  • "The Importance of Indoor Air Quality"
  • "Research & Development: Advanced Methods of Cooling Electronics"
  • "Power Distribution in Critical Facilities"
  • "Expanding the Monitoring Equation: Alert Management to Risk Mitigation"
  • "NFPA Standards & Generator UL Listings with Generac"

As far as new webinar episodes go, we are currently on a break for the summer, but check back soon for more information. We will continue to bring you new episodes on a monthly basis. In the meantime, if you'd prefer, all our webinars are also available in podcast format. Episodes have been edited down; you won't be able to see video or slides, but you will still get to enjoy some interesting conversations and insights into the critical infrastructure world while on the go. We hope you'll tune in. And if you have any questions or comments, please reach us at Marketing@DVLnet.com.

Topics: Data Center, Data Center efficiency, mission-critical, webinar

Six Steps to Improve Data Center Efficiency

by Emerson Network Power on 2/3/16 9:34 AM

Imagine the CEO of a public company saying, “On average, our employees are productive 10 percent of the day.” Sounds ridiculous, doesn’t it? Yet we regularly accept productivity levels of 10 percent or less from our IT assets. Similarly, no CEO would accept their employees showing up for work 10, 20, or even 50 percent of the time, yet most organizations accept this standard when it comes to server utilization.

These examples alone make it clear our industry is not making the significant gains needed to get the most out of our IT assets. So, what can we do? Here are six steps you’re likely not yet taking to improve efficiency.

1. Increase Server Utilization: Raising server utilization is a major part of enabling server power supplies to operate at maximum efficiency, but the biggest benefit comes in the build-out it could delay. If you can tap into four times more server capacity, that could delay your need for additional servers – and possibly more space – by a factor of four.

2. Sleep Deep: Placing servers into a sleep state during known extended periods of non-use, such as nights and weekends, will go a long way toward improving overall data center efficiency. Powering down your servers has the potential to cut your total data center energy use by 9 percent, so it may be worth the extra effort.

3. Migrate to Newer Servers: In a typical data center, more than half of servers are ‘old,’ consuming approximately 65 percent of the energy while producing only 4 percent of the output. In most enterprise data centers, you can probably shut off all servers four or more years old after migrating their workloads, via virtual machines (VMs), to your newer hardware. In addition to the straight energy savings, this consolidation will free up space, power, and cooling for your new applications.

4. Identify and Decommission Comatose Servers: Identifying servers that aren’t being utilized is not as simple as measuring CPU and memory usage. An energy efficiency audit from a trusted partner can help you put a program in place to take care of comatose servers and make improvements overall. An objective third party can also bring a fresh perspective that goes beyond comatose servers, including an asset management plan and DCIM tools to prevent more comatose servers in the future.

5. Draw from Existing Resources: If you haven’t already implemented the ten vendor-neutral steps of our Energy Logic framework, start there. The returns on several of the steps outlined in this article are quantified in Energy Logic and, with local incentives, may achieve paybacks in less than two years.

6. Measure: The old adage applies: what isn’t measured isn’t managed. Whether you use PUE, CUPS, SPECpower, or a combination of them all, knowing where you stand and where you want to be is essential.
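
Since PUE comes up in nearly every efficiency conversation, here is a minimal sketch of the calculation (the kWh figures are placeholders, not benchmarks):

```python
# PUE (Power Usage Effectiveness) = total facility energy / IT equipment
# energy. An ideal facility scores 1.0; everything above that is power,
# cooling, and lighting overhead.

def pue(total_facility_kwh, it_kwh):
    return total_facility_kwh / it_kwh

# Placeholder figures for illustration only.
print(f"PUE = {pue(1_500_000, 1_000_000):.2f}")   # 1.50: half a kWh of
                                                  # overhead per IT kWh
```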

Are there any other steps to improving data center efficiency you’ve seen? 

Topics: data center energy, PUE, UPS, Thermal Management, DCIM, monitoring, the green grid, energy efficiency, availability, Data Center efficiency, preventative maintenance, energy cost

Emerson Network Power Announces Water and Energy-Saving Liebert® DSE with Liebert EconoPhase Economizer Approved for California Data Centers

by Marissa Donatone on 10/6/15 9:27 AM

Designed to save millions of gallons of water and increase energy efficiency by up to 50 percent

Columbus, Ohio [September 16, 2015] – Emerson Network Power, a business of Emerson (NYSE: EMR) and the world’s leading provider of critical infrastructure for information and communications technology systems, today announced that the California Energy Commission (CEC) has approved the use in California data centers of the Liebert® DSE thermal management system with the Liebert EconoPhase Pumped Refrigerant Economizer. The Liebert DSE system represents a breakthrough technology that uses no water and saves up to 50 percent of thermal energy through its patented design and advanced Liebert iCOM™ controls.

“The Liebert DSE system is a great environmental steward. When used in a typical mid-sized data center with a one-megawatt load, the Liebert DSE is significantly more efficient than current cooling systems, and eliminates the use of around four million gallons of water each year. If deployed broadly in California data centers, the Liebert DSE with EconoPhase could save hundreds of millions of gallons of water every year,” said John Peter Valiulis, vice president of North America marketing, thermal management, Emerson Network Power.
 
The CEC has approved the Liebert DSE system with Liebert EconoPhase as a prescriptive economization option, as part of Title 24 of the CEC’s 2013 Building Energy Efficiency Standards for Residential and Nonresidential Buildings, meeting the code’s requirements for energy efficiency and its prescriptive requirements for economizers.
 
The Liebert DSE system eliminates the need for any water in the heat rejection process, along with the associated chemical water treatment, and it eliminates the risk of exposure to harmful waterborne bacteria. In addition, the Emerson modeling for the CEC compliance program demonstrated an 8 to 10 percent reduction in the data center Time Dependent Valuation measure, compared to the water economizer prescriptive option. The Liebert DSE system design also reduces or eliminates several of the power components associated with water economizers. In actual usage, the entire Liebert DSE system has demonstrated thermal system energy savings of up to 50 percent over legacy systems.
 
For more information on Emerson Network Power’s Liebert DSE with EconoPhase or other products and solutions, visit www.EmersonNetworkPower.com.
 
 
About Emerson Network Power
Emerson Network Power, a business of Emerson, is the world’s leading provider of critical infrastructure technologies and life cycle services for information and communications technology systems. With an expansive portfolio of intelligent, rapidly deployable hardware and software solutions for power, thermal and infrastructure management, Emerson Network Power enables efficient, highly-available networks. Learn more at www.EmersonNetworkPower.com.
 
About Emerson 
Emerson, based in St. Louis, Missouri (USA), is a global leader in bringing technology and engineering together to provide innovative solutions for customers in industrial, commercial, and consumer markets around the world. The company is comprised of five business segments: Process Management, Industrial Automation, Network Power, Climate Technologies, and Commercial & Residential Solutions. Sales in fiscal 2014 were $24.5 billion. For more information, visit www.Emerson.com.
 
Media Contact:
Vince McMorrow
614-383-1622
vince.mcmorrow@fahlgren.com

Topics: data center infrastructure, data center design, DVL, energy, DC Power, critical air conditioning, HVAC, Thermal Management, capacity, cooling, Data Center efficiency, ASHRAE, power, water cool

Choosing Between VSDs and EC Fans. Making the right investment when upgrading fan technology.

by Emerson Network Power on 7/15/15 3:23 PM

Fans that move air and pressurize the data center’s raised floor are significant components of cooling system energy use. After mechanical cooling, fans are the next largest energy consumer on computer room air conditioning (CRAC) units. One way many data center managers reduce energy usage and control their costs is by investing in variable speed fan technology. Such improvements can reduce fan energy consumption by as much as 76 percent.

With the different options on the market, it may not be clear which technology is best. Today, variable speed drives (VSDs), also referred to as variable frequency drives or VFDs, and electronically commutated (EC) fans are two of the most effective fan improvement technologies available. The advantages of both options are outlined below to help data center managers determine which fan technology is best for achieving energy efficiency goals.

How do different fan technologies work? 
In general, variable speed fan technologies save energy by enabling cooling systems to adjust fan speed to meet the changing demand, which allows them to operate more efficiently. While cooling units are typically sized for peak demand, peak demand conditions are rare in most applications. VSDs and EC fans more effectively match airflow output with load requirements, adjusting speeds based on changing needs. This prevents overcooling and generates significant energy savings.

With VSDs, drives are added to the fixed speed motors that propel the centrifugal fans traditionally used in precision cooling units. The drives enable fan speed to be adjusted based on operating conditions, reducing fan speed and power draw as load decreases. Energy consumption changes dramatically as fan speed is decreased or increased due to the fan laws. For this reason, a 20 percent reduction in fan speed provides nearly 50 percent savings in fan power consumption.
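
The fan affinity laws make that claim easy to verify; here is a short sketch:

```python
# Affinity laws: airflow scales linearly with fan speed, but fan power
# scales with the cube of fan speed.

def relative_power(speed_fraction):
    return speed_fraction ** 3

for pct in (100, 90, 80, 70, 60):
    p = relative_power(pct / 100)
    print(f"{pct:>3}% speed -> {p:.0%} power ({1 - p:.0%} savings)")
# 80% speed draws about 51% power: the "nearly 50 percent savings" above.
```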

EC fans are direct drive fans that are integrated into the cooling unit by replacing the centrifugal fans and motor assemblies. They are inherently more efficient than traditional centrifugal fans because of their unique design, which uses a brushless EC motor in a backward curved motorized impeller. EC fans achieve speed control by varying the DC voltage delivered to the fan. Independent testing of EC fan energy consumption versus VSDs found that EC fans mounted inside the cooling unit created an 18 percent savings. With new units, EC fans can be located under the floor, further increasing the savings.

How do VSDs and EC fans compare?

Energy Savings
One of the main differences between VSDs and EC fans is that VSDs save energy when the fan speed can be operated below full speed. VSDs do not reduce energy consumption when the airflow demands require the fans to operate at or near peak load. Conversely, EC fans typically require less energy even when the same quantity of air is flowing. This allows them to still save energy when the cooling unit is at full load. EC fans also distribute air more evenly under the floor, resulting in more balanced air distribution. Another benefit of direct-drive EC fans is the elimination of belt losses seen with centrifugal blowers. Ultimately, EC fans are the more efficient fan technology.

Cooling Unit Type
VSDs are particularly well-suited for larger systems with ducted upflow cooling units that require higher static pressures, while EC fans are better suited for downflow units.

Maintenance 
In terms of maintenance, EC fans offer a clear advantage: they have no fan belts to wear out, and their integrated motors virtually eliminate fan dust.

Installation 
Both VSDs and EC fans can be installed on existing cooling units or specified in new units. When installing on existing units, factory-grade installation is a must.

Payback
In many cases, the choice between VSDs and EC fans comes down to payback. If rapid payback is a priority, then VSDs are likely the better choice. These devices can offer payback in fewer than 10 months when operated at 75 percent of full fan speed.

However, EC fans will deliver greater, long-term energy savings and a better return on investment (ROI). While EC fans can cost up to 50 percent more than VSDs, they generate greater energy savings and reduce overall maintenance costs, ultimately resulting in the lowest total cost of ownership.
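
A simple-payback comparison makes the trade-off concrete. In the sketch below, every cost and savings figure is a placeholder; substitute real quotes and measured energy data for an actual evaluation:

```python
# Simple payback = installed cost / annual savings, in months.

def payback_months(installed_cost, annual_savings):
    return installed_cost / annual_savings * 12

vsd_cost, vsd_savings = 10_000.0, 13_000.0   # assumed figures
ec_cost, ec_savings = 15_000.0, 16_000.0     # assumed: ~50% higher cost,
                                             # larger ongoing savings
print(f"VSD payback: ~{payback_months(vsd_cost, vsd_savings):.0f} months")
print(f"EC fan payback: ~{payback_months(ec_cost, ec_savings):.0f} months")
# The VSD pays back faster, but over a 10-year life the EC fans' larger
# annual savings typically win on total cost of ownership.
```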

Have the experts weigh in. 
Service professionals can be an asset in helping choose the best fan technology for a data center. They can calculate the ROI of both options and recommend the best fan technologies for specific equipment.

Service professionals trained in optimizing precision cooling system performance can also ensure factory-grade installations, complete set point adjustment to meet room requirements, and properly maintain equipment, helping businesses achieve maximum cooling unit efficiency today and in the future.

Whether you ultimately decide to go with VSDs or EC fans, either way, you’ll be rewarded with a greener data center, more efficient cooling, and significant energy savings that translate into a better bottom line.


Original Emerson Network Power Blog Post

Topics: data center energy, PUE, Battery, Efficiency, Thermal Management, DCIM, Uptime, the green grid, AHRI, availability, education, KVM, Data Center efficiency, preventative maintenance
