Microgrids Part 1: Make It a Grid, But Micro

by Alexander "D'Angelo" D'Angelo on 3/11/24 3:29 PM

Are you ready to revolutionize the way we power our communities and data centers? Picture a future where electricity isn't just distributed from centralized grids but generated and managed locally. Welcome to the world of microgrids, battery energy storage systems, and electronic isolation and controls. 

While it is fun to use these buzzwords and speak about the possibilities the future holds, why does this matter? Simply put: resources. Whether it is capital, space, power, water, or talent, we live in a resource-constrained world. As our technology becomes more advanced, its demands for power and cooling will increase. This puts a large strain on our already fully loaded power grids, with the states most at risk¹ being Texas, Michigan, Ohio, New York, and California. Texas is not interconnected to the national grid, which puts it at risk of downtime due to a lack of redundancy. New York and California, on the other hand, are strained by their large populations and the decommissioning of traditional power plants. Additionally, with an increase in legislation supporting EVs, the strain on the grid can become too large, especially in inclement weather (i.e., extreme heat and cold), increasing the risk of downtime. 

Like it or not, we will soon have to supplement the grid with power and storage solutions that are smart and reliable enough to be treated as decentralized grid assets. Let us dive deeper into the realm of microgrids. 

What is a microgrid? 

Microgrids represent a paradigm shift in how we think about energy distribution. These localized grids can operate independently or in conjunction with the main grid, offering resilience and flexibility in the face of outages and disruptions.  

So, what are some of the basic components we'd expect to see in a microgrid? Renewable energy generation, most commonly solar (PV), wind, or, in some cases, hydropower. Next, an inverter to convert the energy from the renewables into a usable form for the connected loads. After that, a BESS (Battery Energy Storage System), isolation with controls, a fuel cell, and/or a hydrogen electrolyzer.  

While no one of these components alone could ride through an outage, deployed together, the sky is the limit for "islanding" yourself from the utility. These assets could sit at a commercial site, outside a housing community, at a data center, and beyond. They are the building blocks of locally deployed, decentralized grids. 

Imagine a community powered by its own microgrid, seamlessly integrating renewable energy sources, like solar panels and battery storage systems, into its infrastructure. These technologies not only reduce reliance on fossil fuels but also pave the way for a more sustainable future.  

Beyond communities integrating renewables into their energy portfolios, there are mission-critical operators who look to add redundancy to their utility connection and further control their uptime parameters. Mission-critical operations are businesses that cannot suffer an outage even for a second. These customers are mostly data centers, healthcare providers, departments of transportation, utilities, etc.  

Furthering the point about living in a resource-constrained environment, these operators are seeing that the addition of high-compute applications drives their energy consumption higher every year. To combat the risk of relying on the utility alone, they deploy uninterruptible power supplies, generators, and, now, renewables and BESS assets to give themselves even more flexibility during a utility loss. 

Market Overview 

As AI and other high-performance compute practices become the norm in the market, the utilities won't be able to adapt quickly enough. Standard per-rack power density in hyperscale and colocation data centers ranges from 10 to 20 kW. And, in the next 3-5 years, market analysis predicts this will climb to 50-300 kW per rack. While this can increase revenue per square foot tremendously in colocation data halls, it also introduces challenges in cooling and power requirements. Liquid cooling, active rear door heat exchangers, and cold plates are poised to address these challenges on the heat-rejection side. However, the power requirements are an entirely different beast to deal with.  
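To put that jump in perspective, here's a rough back-of-the-envelope sketch. The rack counts and densities below are illustrative, not figures from any specific facility:

```python
# Hypothetical sketch: how per-rack density changes row-level power and
# cooling load. Numbers are illustrative, not from a specific data hall.

KW_PER_TON = 3.517  # 1 ton of refrigeration rejects 3.517 kW of heat

def row_requirements(racks: int, kw_per_rack: float) -> dict:
    """Total IT load and cooling tonnage for one row of racks."""
    it_load_kw = racks * kw_per_rack
    return {
        "it_load_kw": it_load_kw,
        "cooling_tons": round(it_load_kw / KW_PER_TON, 1),
    }

# A 10-rack row at today's ~15 kW/rack vs. a projected 100 kW/rack
today = row_requirements(10, 15)       # 150 kW total IT load
projected = row_requirements(10, 100)  # 1,000 kW for the same footprint
```

The same ten-rack footprint goes from a modest load to a megawatt-class one, which is exactly why both the power feed and the heat-rejection path have to be redesigned together.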

Enter the need to BYOP (Bring Your Own Power): a facility-level strategy of creating and managing your own distribution, generation, and energy asset deployment. This can be accomplished through a variety of solutions. Utilizing DERs (Distributed Energy Resources), a fancy term for the energy-generating and storage assets that comprise a microgrid, facilities can manage peak demand, add layers of redundancy to their systems, and, ultimately, completely island themselves from the grid.  

While a completely renewable and stand-alone data center is not happening in the next 1-2 years, it is just over the horizon, and it is critical to start having important conversations as these systems require large intellectual investment, planning, and capital to get them off the drawing boards and into the real world.  

While the matters above mainly concern data center providers, an energy-intensive activity that more and more consumers participate in every day is… Electric Vehicle (EV) charging. Never before have we seen parking garages and multifamily home developments requiring new transformers to support 1,000-amp and larger services. DC fast chargers and standard 240V Level 2 chargers require a large amount of power to charge vehicles quickly. Understandably, this strains the utility provider, especially because much of this charging occurs simultaneously: one large group of EV users commutes to work and charges during the day, while another group charges exclusively at home during the night. As adoption increases, these routinely popular charging windows become more and more problematic for utility providers.  
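For a rough sense of the math, here's a sketch of why even a modest garage can suddenly need a 1,000-amp-class service. The charger count, amperage, and diversity factor are hypothetical, not a specific site's design:

```python
# Illustrative sketch of why EV charging drives large electrical services.
# Charger ratings and diversity assumptions are hypothetical examples.

def service_amps(chargers: int, amps_per_charger: float,
                 diversity: float = 1.0) -> float:
    """Service current needed if `diversity` fraction of chargers
    draw full power at the same time."""
    return chargers * amps_per_charger * diversity

# Twenty 240 V Level 2 chargers at 40 A each, all drawing at once
peak = service_amps(20, 40)        # 800 A of charging load alone
# With 60% diversity the same garage needs noticeably less capacity
typical = service_amps(20, 40, 0.6)  # 480 A
```

The problem the post describes is that real-world diversity is poor: commuter and overnight charging cluster at the same hours, pushing sites toward the worst-case number rather than the diversified one.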

So, as the US continues to push automakers to electrify their fleets, the grid and surrounding infrastructure cannot keep up with demand. Critical equipment necessary to install these new services has lead times measured in years, while the cost to retrofit existing parking structures to support charging can add up quickly, pricing many providers out of the market.  


The need for more readily available power is here, and we are barely knocking on the door of what is possible as we further expand upon already existing technologies. And, as mentioned, a BESS and a PV farm separately will not achieve much; the value lies in linking them together into a smart, controllable system. As we get creative with implementing these existing solutions together, we can iterate and create more efficient systems, paving the way for mainstream adoption across the industry. 

Looking Ahead 

Plain and simple: for most operations, these solutions are currently cost prohibitive. However, let's keep in mind a key learning from the ramp-up of the solar industry: utilities and governments are willing to subsidize and incentivize companies that implement these solutions ahead of the curve. Currently, in Utah, Rocky Mountain Power (RMP) is rolling out an incentive program, structured either per kWh or as a one-time upfront payment, for the installation of a BESS. These are not small sums either, with some programs covering up to 75% of the cost of the BESS.  

One may ask: what is the angle for RMP? In short, the more DERs connected to the grid, the more redundancy is built into the utility framework. In a contingency, these assets can all be controlled as one spinning reserve for RMP. During normal operation, owners enjoy peak-shaving benefits as well as outage protection. A truly rare "win-win" scenario. As peak demand charges continue to increase, ROI numbers start to make sense on 12- and 24-month timelines.  
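As a hedged illustration of how those timelines can pencil out, here's a back-of-the-envelope payback calculation. All dollar figures, tariff values, and incentive levels below are hypothetical; actual demand charges and incentive programs vary by utility:

```python
# Back-of-the-envelope BESS payback from peak shaving. All inputs are
# hypothetical examples, not quotes from any utility program.

def payback_months(bess_cost: float, incentive_pct: float,
                   peak_shaved_kw: float, demand_charge: float) -> float:
    """Months to recover net BESS cost from monthly demand-charge savings.

    demand_charge is the utility's peak demand tariff in $/kW per month.
    """
    net_cost = bess_cost * (1 - incentive_pct)      # cost after incentive
    monthly_savings = peak_shaved_kw * demand_charge
    return net_cost / monthly_savings

# Example: $200k BESS, 75% incentive, shaving 200 kW of peak demand
# at a $15/kW-month demand charge
months = round(payback_months(200_000, 0.75, 200, 15))  # ~17 months
```

Under those assumed inputs the payback lands inside the 12-to-24-month window mentioned above, which is why the incentive percentage and the local demand tariff are the two numbers to pin down first.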

Additionally, RMP is utilizing “Make-Ready” incentives to support the adoption and installation of EV charging. These incentives could cover up to 100% of the cost associated with powering EV chargers in commercial and residential applications. 

To further this discussion of the future, we can start to think of abstract solutions such as on-site hydrogen generation using natural gas. We can replace diesel gensets with hydrogen fuel cells, as hydrogen holds roughly three times more energy per kilogram than diesel (by volume it is less dense, which is why storage is the engineering challenge). We are even close to the deployment of small, self-contained nuclear reactors in the 300 – 500 MW range that can be deployed in remote environments and are designed to run for decades with minimal servicing.  

So, when it comes to reliability and cost savings, all signs point to BYOP. 

While the adoption of microgrid solutions may currently pose financial challenges, the tide is turning as incentives and awareness grow. Just as the solar industry witnessed exponential growth fueled by supportive policies, the trajectory of microgrids and BESS suggests a similar transformation in the energy landscape. As we stand on the cusp of this paradigm shift, it is necessary to initiate conversations and investments today for a more sustainable and resilient tomorrow. The journey towards decentralized, renewable energy is not merely an option; it's a strategic imperative for businesses and communities alike. 

If you enjoyed this high-level overview of the current market of microgrids, please join us for part two of this blog series, which will be released the last week of March. We'll do a deep dive on use cases and applications, and we'll expand upon DVL's current product offerings that support this infrastructure and qualify for utility incentives. Additionally, we will provide real-life applications of this equipment.

Have a question or comment about this blog?

Reach out to blog author Alexander "D'Angelo" D'Angelo, Power Systems Sales Engineer (based out of our Salt Lake City office), at ADangelo@DVLnet.com.

¹ https://www.generac.com/be-prepared/power-outages/top-5-states-where-power-outage-occur


Topics: data center design, data center outages, sustainability, microgrids

Mastering the Heat: Cooling & Power Solutions for a 50kW Rack Density AI Data Center

by Sean Murphy on 2/27/24 11:42 AM

As artificial intelligence (AI) continues to reshape industries and drive innovation, the demand for high-performance computing in data centers has reached unprecedented levels. Managing the cooling and power requirements of a 50kW rack density AI data center presents a unique set of challenges. In this blog post, we will explore effective strategies and cutting-edge solutions to ensure optimal performance and efficiency in such a demanding environment. 


Precision Cooling Systems

The heart of any high-density data center is its cooling system. For a 50kW rack density AI data center, precision cooling is non-negotiable. Invest in advanced cooling solutions such as in-row or overhead cooling units that can precisely target and remove heat generated by high-density servers. These systems offer greater control and efficiency compared to traditional perimeter cooling methods.

Liquid Cooling Technologies

Liquid cooling has emerged as a game-changer for high-density computing environments. Immersion cooling systems or direct-to-chip solutions can effectively dissipate heat generated by AI processors, allowing for higher power densities without compromising reliability. Explore liquid cooling options to optimize temperature control in your data center.

High-Efficiency Power Distribution

To meet the power demands of a 50kW rack density, efficient power distribution is paramount. Implementing high-voltage power distribution systems and exploring alternative power architectures, such as busway systems, can enhance energy efficiency and reduce power losses. This not only ensures reliability but also contributes to sustainability efforts.

Redundancy and Resilience

A high-density AI data center demands a robust power and cooling infrastructure with built-in redundancy. Incorporate N+1 or 2N redundancy models for both cooling and power systems to mitigate the impact of potential failures. Redundancy not only enhances reliability but also allows for maintenance without disrupting critical operations.
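As a quick sketch of what those redundancy models mean in unit counts (the load and unit capacities below are illustrative, not tied to any particular CRAC or UPS product):

```python
# Sketch of N+1 vs. 2N unit counts for a given load. Capacities are
# hypothetical; real sizing also accounts for derating and growth.
import math

def units_needed(load_kw: float, unit_kw: float, model: str) -> int:
    """Number of cooling or power units under a given redundancy model."""
    n = math.ceil(load_kw / unit_kw)  # N units carry the full load
    if model == "N+1":
        return n + 1                  # one spare covers any single failure
    if model == "2N":
        return 2 * n                  # a complete duplicate system
    raise ValueError(f"unknown redundancy model: {model}")

# A 600 kW hall served by 150 kW units
n_plus_1 = units_needed(600, 150, "N+1")  # 5 units
two_n = units_needed(600, 150, "2N")      # 8 units
```

The spread between those two counts is the cost of resilience: 2N buys concurrent maintainability on an entire system, while N+1 covers a single failure at a much lower capital outlay.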

Dynamic Thermal Management

Utilize intelligent thermal management systems that adapt to the dynamic workload of AI applications. These systems can adjust cooling resources in real-time, ensuring that the infrastructure is optimized for varying loads. Dynamic thermal management contributes to energy efficiency by only using the necessary resources when and where they are needed.

Energy-Efficient Hardware

Opt for energy-efficient server hardware designed for high-density environments. AI-optimized processors often come with advanced power management features that can significantly reduce energy consumption. Choosing hardware that aligns with your data center's efficiency goals is a key factor in managing power and cooling requirements effectively.

Monitoring and Analytics

Implement comprehensive monitoring and analytics tools to gain insights into the performance of your AI data center. Real-time data on temperature, power consumption, and system health can help identify potential issues before they escalate. Proactive monitoring allows for predictive maintenance and ensures optimal conditions for your high-density racks.

Successfully cooling and powering a 50kW rack density AI data center requires a holistic and forward-thinking approach. By investing in precision cooling, liquid cooling technologies, high-efficiency power distribution, redundancy, dynamic thermal management, energy-efficient hardware, and robust monitoring tools, you can create a resilient and high-performing infrastructure. Embrace the technological advancements available in the market to not only meet the challenges posed by high-density AI computing, but to excel in this dynamic and transformative era of data center management.

Author's Note:

Not a bad blog post, right? I was tasked with writing a blog post on how to power and cool high-density racks for AI applications. So, I had ChatGPT write my blog post in 15 seconds, saving me a ton of time and allowing me to enjoy watching my kid's athletic events this weekend. As end users embrace AI technology, it is imperative that we understand how to support the hardware and software that enables these time-saving technologies. Over the past six months, about 20% of my time has been spent discussing how to support customers' 35 kW to 75 kW rack densities.

Additionally, another key is understanding the balance between AI and the end user's ability to recognize its limitations and areas for improvement. AI taps into the database of information that is the Internet. Powerful, but it does so (at least currently) in a fashion that makes it appear to be two years behind. For example, ChatGPT wrote this blog post to reflect a 35 kW rack density. Today, however, I'm regularly working with AI racks that average 50 kW, have seen them go up to 75 kW… and know that applications can hit upwards of 300 kW per rack. So, please note: anywhere in the blog where it says 50kW, human intervention made the necessary edits to AI's outdated "35kW".

Also, just for reference, a 75kW application requires 21 tons of cooling for one IT rack! So, these new high-density technologies require the equivalent of one traditional perimeter CRAC to cool a single AI IT rack. DVL is here to help provide engineering and manufacturing support to design your Cooling, PLC Switchgear, Busway Distribution, Rack Power Distribution, Cloud Monitoring, and other critical infrastructure to support your efficient AI Technology.
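That 21-ton figure is a straightforward unit conversion (1 ton of refrigeration = 3.517 kW of heat rejection), which you can sanity-check in a couple of lines:

```python
# Quick kW-to-cooling-tons conversion used throughout data center sizing.

KW_PER_TON = 3.517  # 1 ton of refrigeration = 3.517 kW

def kw_to_tons(kw: float) -> float:
    """Cooling tonnage required to reject `kw` kilowatts of heat."""
    return kw / KW_PER_TON

# One 75 kW AI rack
tons = round(kw_to_tons(75), 1)  # 21.3 tons for a single rack
```

The same conversion scales linearly, so a row of these racks quickly reaches tonnage that once cooled an entire data hall.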


Topics: Data Center, Thermal Management, Data Center efficiency, beyond the product, artificial intelligence

Liquid Cooling: The Need is Now

by Robert Leake on 2/13/24 3:14 PM

Recently, liquid cooling has garnered significant attention in the data center landscape, yet its roots trace back to the 1980s in various forms. The recent surge of interest in this cooling method is largely due to the growth of artificial intelligence (AI) technologies, which has introduced it to even MORE people as they wonder what this "cool new thing" is. And, since the quest to increase your systems' efficiency is never complete, it's no wonder liquid cooling has found its way into nearly all discussions about the future of data center operations. 

As mentioned, the emergence of AI-powered programs, like ChatGPT, has intensified the world's focus on AI across various sectors, from data science to university programs. According to a recent study by Accenture, 98% of company leaders are contemplating the potential impact of AI on their businesses. One inevitable consequence AI will bring to businesses is the escalation of their data centers' temperatures. Without adequate cooling mechanisms, these temperature increases can lead to equipment failure as heat tolerance thresholds are constantly surpassed. This is where liquid cooling emerges as a vital solution. 


Liquid cooling offers superior thermophysical properties, making it more efficient for managing the extreme heat densities that are generated by IT racks supporting high-performance computing (HPC) applications. As data centers integrate liquid cooling solutions into their networks, several considerations come into play. 

One critical decision point is selecting the appropriate liquid cooling technology. Fortunately, cooling manufacturers have developed various methodologies, with immersion and direct-to-chip being the most popular. However, there is no universal guideline for determining which technology suits a specific setup.  

Each methodology presents unique advantages: direct-to-chip cooling offers lower thermal resistance and easier maintenance, while immersion cooling tends to be simpler to manage and more space-efficient, with greater flexibility across different hardware types. Both methodologies offer some opportunities for heat reuse and can be integrated with other infrastructure components to create hybrid solutions, for example by pairing them with rear door heat exchangers for enhanced heat dissipation efficiency. 

If you decide to go the liquid cooling route, you’ll need to determine the most applicable liquid cooling methodology for your critical infrastructure’s needs. An important part of this process will be assessing the different methods’ impact on power consumption, ensuring that when implemented, you will have structural support for the additional weight (think: raised floors), and establishing a maintenance program to ensure smooth operations and a significantly prolonged lifespan. 

So, if and when you decide liquid cooling is the best path for your business's needs, be sure to give careful consideration to these various factors. Most importantly, you'll want to ensure optimal performance and longevity of your infrastructure.  

To learn more on this topic, we encourage you to watch our recent webinar “The 2024 State of Liquid Cooling” in which guests Dr. Richard Bonner and Liz Cruz of Accelsius explore a more in-depth look at the world of liquid cooling. 



Gems of Wisdom from Beyond the Product

by Jodi Holland on 1/30/24 2:26 PM

“Experience is simply the name we give our mistakes.” — Oscar Wilde

When you've worked in the deep trenches of critical infrastructure long enough, like quite a few of our longest-tenured employee owners, you know that the most valuable lessons don't come from a textbook or a policy manual. Rather, we learn the most right in the field, from those real-world mistakes and mishaps, or "experiences," as Oscar Wilde would say.

So, we asked our Associates about their most valuable gems of wisdom that they could pass on to colleagues in the industry. Here are some of the top answers we received. We're sharing them here, in hopes that we'll be able to save others from learning some unfortunate lessons the hard way.

SERVICES

  • Spare parts are an on-site technician’s best friend.
  • A single-point-of-failure is NOT an end user’s best friend.
  • A hungry mouse can be disastrous to critical wires.
  • The smallest of unchecked details can be the source of a project’s biggest (and most expensive) problem.
  • Arc-flash is dangerous and not anything you ever want to see.
  • Weekly exercises are the surest way to know your GenSet will work when needed.
  • The UPS battery system is like a loaf of bread…you only get so many slices (or discharges)…and the bigger, and more often, the discharge, the quicker you’ll need a new loaf.

Let us know what you think of these nuggets. Have you had the unfortunate opportunity to learn any of these lessons on your own? Or do you have an important one to add to the list? Email us at Marketing@DVLnet.com. If we get enough responses, we’ll post a Part 2.


Topics: Data Center, service, optimized performance, top trends

Achieving Excellence in Data Center Operations

by Robert Leake on 1/9/24 11:41 AM

Data centers are the beating hearts of modern businesses. They house critical infrastructure and sensitive data that is vital to all departments across an organization. In this fast-paced digital landscape, making sure your data center is always in top operational shape shouldn't be just a goal but an absolute necessity, because on any given day someone will need to access pivotal data at the click of a mouse.

And, as you know quite well, running a data center pulls you in multiple directions at once. That’s why, to ensure you’re never offline, it’s important to always have a real-time pulse on the areas outlined below. 

data center operations infographic

Security: Building Fortresses for Data

Imagine a data center as a fortress with a hard outer shell and multiple layers within, each with their own security measures. Strict management of access ensures only those who require entry to each of these levels can actually get in. This goes beyond the front door and is a physical concern throughout the entire data center. To minimize security risks, it’s a must to manage the who, why, and where of every person entering your facility, as non-company staff must access the grounds for daily demands or periodic maintenance.

Preparation is Key

The COVID-19 pandemic brought many unexpected challenges for those leading data center operations at the time. Companies have long developed various types of disaster recovery plans accounting for a variety of scenarios. However, the pandemic tested those plans. And, when we found ourselves in a situation that hadn’t been experienced in 100 years, many failed the test. Fortunately, lessons learned strengthened disaster recovery going forward. Such lessons include the delicate nature of supply chain management, the importance of procuring inventory when available, and being able to execute “on a dime” during even the most chaotic of times. For these reasons, establishing thorough disaster recovery plans and being able to quickly adapt to unknowns have become indispensable.

Safety: A Cultural Requirement

Prioritizing the well-being of employees working under extreme conditions is crucial and should never be a question. That is why, for very good reasons, safety has become a cultural requirement for all businesses. Main concerns within data center environments include managing worksites where employees from multiple companies are working in tandem, ensuring the safety of workers who are working alone, taking precautions when working with high-voltage power infrastructure, and having efficient response processes in place in case of emergencies. It's not enough just to have these processes in place; you must also ensure that no one is cutting corners, especially organizational leaders, as values are ingrained from the very top. If you get everyone home safely at the end of the day, you've got yourself a strong culture and a safe data center.

Continuous Improvement

Even the top tier of organizations have room for improvement, whether driven by the need to optimize efficiency or to find new ways to stay on budget. Repetitive tasks can be improved by identifying process enhancements and design strategies. Challenging the status quo can have significant results when driven by the employees who are closest to the challenges. Buy-in at all levels is needed for improvements and long-term success, and support from leadership helps ensure this evolution occurs.

Nurturing Future Leaders

As the most experienced data center professionals continue to retire, there is a greater need for fresh faces. But to accomplish this, the industry needs to make sure students at all levels are being properly introduced to the concept of data centers, how they work, and why they must work for society to function. For example, younger generations are the largest consumers and creators of data, and their broadband requirements are ever increasing, yet the workhorse behind this data isn't even a thought; they may not recognize the connection between data centers and their iCloud folders unless it is demonstrated to them. Furthermore, tomorrow's professionals stand to benefit from learning more about our industry, as it opens for them a new door of career potential and even lucrative compensation.

Exposing younger generations to the industry, whether through professional forums and societies or internships, providing guidance on required skills, and mentoring them as they mature, are essential to properly pass the torch. These future leaders will shape the industry's evolution and will more immediately allow you to sleep soundly at night knowing the lights are being properly kept on, and equipment is up and running.

Finding the Right Fit

Attitude and aptitude are definite requirements for an employee to succeed in data center operations. When recruiting for the best possible fit, you’re going to ultimately need someone who can handle the stress of working in such an unpredictable environment. Being resilient during challenging times makes for outstanding professionals in any field. Additionally, communication skills are vital. Being able to identify and resolve problems is great, but being able to turn those problems into learning opportunities for an entire team, is invaluable, especially in the high-stress moments.

By making these items a priority, and by constantly reevaluating your organization’s needs, you are positioning your organization for great success. One data center operations team that has figured this out quite well, is the EdgeCore Data Centers’ team of operations leaders, led by Therese Kerfoot, SVP Operations. In December, Kerfoot and her team, Harrison Stoll (VP Operations), Matt Silvers (VP Operations Programs), and Sarah Kasper (Sr. Director, Environmental Health & Safety) joined us on the DVL Power Hour, “Data Center Excellence: Operations & Safety,” where the four shared their experiences in these areas and more. To learn about the extremely valuable insights they brought to the table, please check out the On-Demand webinar, or listen to the adapted podcast version available below and on iTunes and Spotify.


Topics: Data Center, Safety, beyond the product, operations
