The Sustainability and Efficiency of Our Data Centers

by Jodi Holland on 8/30/21 11:32 AM

As Dave Rubcich (Vertiv’s VP of Key Accounts, Multi-Tenant) puts it, “you can’t be sustainable without being efficient,” and “if you’re going to have a sustainable data center you’re certainly going to be efficient—but you can be efficient without being sustainable.” He cautions that the two are distinct terms that should not be confused with one another.


Data center energy efficiency yields a range of benefits, driven by different factors. When infrastructure and equipment work more efficiently, one of the most welcome results is lower operating costs. Fewer repairs are needed, and less equipment too, which frees up space in your data center. Last, but perhaps most important, the less energy you use, the smaller your impact on the environment and its natural resources. That’s where sustainability comes into the picture.

Sustainability is becoming a priority for more and more companies, but it can mean different things depending on how you look at the issue. Broadly, the aim is to sustain the planet’s natural resources so as not to contribute to global warming, or perhaps even to make a dent in efforts to reduce it. To work towards this, data centers are striving to have no impact on the planet at all.

That is an ideal scenario, and today it remains a dream. The way Vertiv sees it, sustainability means zero losses, zero carbon, zero water, and zero waste. “We’re nowhere near there today,” Rubcich admits, “But if we don’t start thinking about it, we can never get there.” So, is it plausible to truly use no natural resources? Not today; it is a long-term goal, reachable only once real efforts have been made to chip away at the issue. Rubcich adds, “If you’re going to be carbon neutral or carbon negative, you’re not going to be using generators that are running on diesel fuels.” Alternative energy sources will be a must going into the future.
Elsewhere, for cooling equipment that relies on water (tracked through the equipment’s water usage effectiveness, or WUE), there has been considerable movement away from technologies that consume a large supply of water. Total water usage is becoming a leading criterion in companies’ decisions about new equipment.
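For reference, WUE as defined by The Green Grid is annual site water usage divided by the energy delivered to the IT equipment, in liters per kilowatt-hour. Here is a minimal sketch of the calculation; the figures are purely illustrative assumptions of ours, not numbers from this post:

```python
def wue(annual_water_liters: float, annual_it_energy_kwh: float) -> float:
    """Water usage effectiveness: liters of water per kWh of IT energy.
    Lower is better; a WUE of 0 means no water is consumed on site."""
    return annual_water_liters / annual_it_energy_kwh

# Illustrative example: a facility using 60 million liters of water a year
# to support 50 GWh of IT load would have a WUE of 1.2 L/kWh.
print(wue(60_000_000, 50_000_000))  # -> 1.2
```

A water-free economizer system, like the one discussed below, would score a WUE of zero on this metric.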

So, how else can end users start to include sustainability strategies in the day-to-day operations of their data center? Rubcich notes that a number of products readily available on the market today can improve the overall efficiency of a data center and help drive some of its sustainability goals. One example is pumped refrigerant used as an economizer, as in the Vertiv DX system, which doesn’t use any water.

Vertiv, along with many other companies, is ramping up its efforts to innovate across all types of technologies. Companies like Microsoft, Google, and Amazon are able to commit to sustainability milestones years in advance. Microsoft, for example, has committed to using 100% renewable energy by 2025 and to being carbon negative by 2030. While some companies’ sustainability goals may sound like far-off pipe dreams, these companies are on the right path: they have brought on C-suite-level sustainability officers to create and implement the strategies needed to attain these results. As Rubcich points out, “when you’re hiring [someone to focus on sustainability] at that level, you’re committed to it.” And it is that commitment that will make it a reality.

To explore more of this subject with Dave Rubcich, we invite you to listen to our recent Podcast, The Cooler Side of Data Center Sustainability.

Listen to the Podcast


Topics: efficient data center, Thermal Management, sustainability

Finding the right architecture for power protection in hospitals

by Emerson Network Power on 3/30/16 8:42 AM


If you’ve read the post about distributed and centralized bypass architectures, you’re probably evaluating the right architecture for a new data center, or perhaps re-designing the one you’re currently using. The decision is not easy, and it will often affect the operation and performance of the power protection system in your data center and the connected loads. Unfortunately, in technology there is rarely a simple “yes or no” or “black or white” answer, and this holds true for power distribution as well. Instead, there is a “grey area” in which the right decision is strongly influenced by the specific context and depends on many parameters. Luckily, there are ways to find the best solution as a trade-off among the multiple parameters involved.

If you’re considering the use of an Uninterruptible Power Supply (UPS), it means you are worried about utility power failures and the downtime that follows. Given this, selecting the appropriate configuration or architecture for power distribution is one of the first topics of discussion, and the choice among centralized, parallel, distributed, redundant, hot-standby, and other available configurations becomes an important part of it. While there are numerous architectures to choose from, there are also several internal variables that will require your attention. Fortunately, a few elementary decisions will make the selection easier. Even if not every parameter can be satisfied, it’s important to at least begin the conversation and explore the trade-offs. Without trying to be exhaustive (which would require a dedicated white paper), you should consider at least the following:

a) Cost: more complex architectures will increase both your initial investment and your running costs, not only at the design stage but over the entire life of your power system, especially with regard to efficiency. In other words, complex architectures will increase your total cost of ownership (TCO); see the sketch after this list.

b) Availability and reliability: how reliable should your power system be? And what about single or multiple points of failure? Would you need any type of redundancy?

c) Plans for growth: Do you expect your power demand or capacity to increase in the future? Will you re-configure your load distribution?

d) Modularity: related to the previous point, but highlighted separately because of its importance for UPS systems. Do you need a modular solution for future expansion or redundancy?

e) Bypass architecture: an important point, as explained in a separate post.

f) Monitoring: consider the need to monitor the complete UPS power system, including any shutdown of loads, in combination with other systems such as thermal management.

g) Service and maintenance: once the initial investment in power protection has been made, do not forget to keep the system in optimal condition. Regular maintenance should be secured through service contracts; also check spares availability if multiple types of UPS are used, the ability to isolate a subset of the system, and the option of remote diagnostic and preventive monitoring services such as Emerson Network Power’s LIFE for maximum availability.

h) Load profile: in particular, whether you have a few large loads or many “small” loads (perhaps distributed across several buildings or a wide area such as a wind farm), the autonomy required for each load, peak power demands, etc.
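To make the cost point from (a) concrete, here is a minimal TCO sketch. The model and all figures are illustrative assumptions of ours, not numbers from this post:

```python
def total_cost_of_ownership(capex, annual_maintenance, load_kwh_per_year,
                            ups_efficiency, price_per_kwh, years):
    """CAPEX plus recurring costs over the life of the power system.

    A UPS delivering load_kwh_per_year to the load at ups_efficiency draws
    load_kwh_per_year / ups_efficiency from the utility, so lower efficiency
    shows up directly as a higher energy bill.
    """
    annual_energy_cost = load_kwh_per_year / ups_efficiency * price_per_kwh
    return capex + years * (annual_maintenance + annual_energy_cost)

# Illustrative only: a simple architecture (96% efficient, cheaper) versus a
# more complex redundant one (94% efficient, higher CAPEX and maintenance).
simple = total_cost_of_ownership(100_000, 5_000, 500_000, 0.96, 0.12, 10)
redundant = total_cost_of_ownership(160_000, 8_000, 500_000, 0.94, 0.12, 10)
print(f"simple: ${simple:,.0f}   redundant: ${redundant:,.0f}")
```

Even this toy model shows how quickly efficiency and maintenance dominate over the initial investment once the horizon stretches to ten years.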

In addition, the decision is not related only to the internal requirements of the power system; it is also linked to the type of load or application being protected, as requirements may vary depending on whether the application is industrial, education, government, banking, healthcare, or a data center. For example, servers that manage printers in a bank and power protection systems that support several surgery rooms in a hospital are by no means the same. In the worst case, the bank’s printers can simply be shut down, while shutting down the surgery rooms is not an option except for scheduled maintenance: a non-scheduled shutdown of the medical equipment in a surgery room would have a serious impact on the people in that room undergoing a surgical operation.

Let’s take the hospital example further and consider a particular case. To keep the exercise quick and simple, we can use as a reference a scenario with several surgery rooms (for example, 5 to 20 rooms, each with a 5-10 kVA UPS for individual protection), plus a small data center (for example, 30 kVA of power consumption) and the other critical installations in the facility (let’s assume 300 kVA for offices, laboratories, elevators, etc.).

In this scenario, the architectures that could be envisaged as a first step are:

1. Fully distributed: for simplicity’s sake, assume a hospital with 10 surgery rooms, each protected by its own 10 kVA UPS, plus a centralized UPS (>330 kVA) for the remaining loads.

2. A fully redundant solution based on a centralized UPS protecting all the loads, in a parallel redundant configuration. Each of these UPS units would be rated for 300 kVA + 30 kVA + (10 x 10 kVA) = 430 kVA.

3. An intermediate solution, referred to as “redundant hot standby”, in which the redundant UPS is sized only for the surgery rooms (10 surgery rooms x 10 kVA = 100 kVA) and connected through a bypass line to the large centralized UPS (>430 kVA). The advantage of this solution is the smaller capacity required for the redundant hot-standby UPS; the sketch below puts rough numbers on the comparison.
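As a quick back-of-the-envelope comparison, the sketch below tallies the UPS capacity each architecture installs, using the scenario’s figures (10 rooms at 10 kVA each, a 30 kVA data center, 300 kVA of other loads). This is a sizing sketch only, not the simulation model discussed below:

```python
ROOMS = 10          # surgery rooms
ROOM_KVA = 10       # UPS rating per room
DC_KVA = 30         # small data center
OTHER_KVA = 300     # offices, laboratories, elevators, ...

surgery_kva = ROOMS * ROOM_KVA  # 100 kVA total for the rooms

# 1. Fully distributed: one UPS per room, plus a centralized UPS
#    for the data center and the other loads.
distributed = surgery_kva + (DC_KVA + OTHER_KVA)                # 100 + 330

# 2. Fully centralized, parallel redundant: two UPS units, each
#    sized for the entire load.
centralized_redundant = 2 * (surgery_kva + DC_KVA + OTHER_KVA)  # 2 x 430

# 3. Redundant hot standby: one centralized UPS for everything, plus
#    a standby UPS sized only for the surgery rooms.
hot_standby = (surgery_kva + DC_KVA + OTHER_KVA) + surgery_kva  # 430 + 100

print(f"1. fully distributed:     {distributed} kVA installed")
print(f"2. centralized redundant: {centralized_redundant} kVA installed")
print(f"3. redundant hot standby: {hot_standby} kVA installed")
```

With these assumptions, the three options install roughly 430, 860, and 530 kVA respectively; how cost, efficiency, and maintenance then weigh on each is what the simulations below evaluate.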

Emerson Network Power has run several simulations based on typical scenarios such as the one described above for a hospital, considering optimization factors a), b), e) and h). Weighing the parameters for optimization, the energy savings (power consumption and heat dissipation), the initial investment (CAPEX), and the maintenance costs (OPEX), the solution based on the “redundant hot standby” appears to be the most convenient.

Moreover, the difference between architectures 1 and 3 grows with the number of surgery rooms and with the period covered by the cost simulation (from 1 year up to 10 years).

This points us in the right direction for selecting the best distribution architecture for this application in hospitals using these parameters for optimization. The analysis can clearly be enriched with the other parameters discussed above, or adapted to the particular case (number of surgery rooms, autonomy for each load, power demanded by the data center room, reliability, etc.), which could lead to a different choice; but overall, the redundant hot standby has proven a good trade-off.

As we said at the beginning, there is no magic formula for the optimum selection, but we have explored several guidelines and checkpoints that will help drive you towards the best solution for your case. Of course, additional variables and the reader’s own experience are welcome and can only enrich the discussion.


Topics: Data Center, PUE, energy, UPS, Efficiency, Thermal Management, DCIM, Uptime, sustainability, energy efficiency, preventative maintenance, power system, healthcare, hospitals

Speed, Flexibility and the Data Center

by Emerson Network Power on 11/25/14 11:45 AM

Kollengode Anand | November 20, 2014 | Emerson Network Power


It’s no longer enough to be dependable if you’re running a data center.  With greater demands being placed by customers, both external and internal, data center administrators are required to be both dependable and fast. Consider these facts, from our “State of the Data Center” report last year:

  • The equivalent of one of every nine persons on the planet uses Facebook.
  • We generated 1.6 trillion gigabytes of data last year. That’s enough data to give every single person on Earth eight 32-gigabyte iPhones, and it’s an increase of 60 percent in just two years.
  • Every hour, enough information is created to fill more than 46 million DVDs.
  • Global e-Commerce spending topped $1.25 trillion in 2013.
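As a quick sanity check on the iPhone comparison (the seven-billion world population is our assumption, not the report’s):

```python
TOTAL_GB = 1.6e12        # 1.6 trillion gigabytes generated in a year
WORLD_POPULATION = 7e9   # rough early-2010s figure (our assumption)
IPHONE_GB = 32           # capacity of one 32-gigabyte iPhone

gb_per_person = TOTAL_GB / WORLD_POPULATION
print(f"{gb_per_person:.0f} GB per person, "
      f"about {gb_per_person / IPHONE_GB:.1f} iPhones of {IPHONE_GB} GB each")
# -> 229 GB per person, about 7.1 iPhones: in the ballpark of the
#    report's "eight"
```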

It’s always been important to respond to your customers, of course, but now there are more of them, demanding more information, and more quickly:  the report says that if the online video they are watching buffers for more than five seconds, 25 percent of viewers drop off.  And if the video buffers for more than 10 seconds, half of them are gone.

Oh, and did we mention that the average cost of a data center outage now runs more than $900,000…an increase of one-third in just two years?

That’s why it’s critical for administrators to be able to configure their data centers flexibly, and to react rapidly when requirements change or when there’s a problem. We’ve found that a unified approach to the entire infrastructure is the best way to handle these situations. Whether it’s heating and cooling, power, servers, software, or more, the ability to administer data center operations in real time has become more imperative than ever.

It’s one of the key elements in the development of the dynamic data center, and in being able to easily manage changes and maintain an optimal environment.

We’ll be at the Gartner Data Center, Infrastructure & Operations Management Conference in Las Vegas in a couple of weeks, at booth #211, showing off the equipment and software that we’ve developed to help you make your business as dynamic as your data center.  We’ll also be speaking about where our clients believe the data center is headed more than ten years from now.   Their input has proven critical in the past, and their thinking is helping us develop the solutions that will solve their challenges both today and tomorrow.



Topics: Emerson Network Power, Data Center, cloud computing, 7x24 exchange, Thermal Management, DCIM, Uptime, sustainability, clean energy, monitoring, Trellis
