Save Thousands by Right-sizing Your Inefficient UPS

by Mike Beck on 11/1/18 11:49 AM

In this age of Hybrid Computing, we have found that many of our customers’ large Uninterruptible Power Systems (UPSs) are running at very low loads compared to their original design requirements. This is due, in large part, to many applications migrating to the Cloud and to Colocation Facilities. However, UPS power is still required for the remaining “on-premises” and “edge” computing needs.

DVL recently performed an analysis for a customer experiencing this scenario. They had a pair of 750 kVA UPSs in a redundant (2N) configuration, each running at only 30 kVA. This is a very inefficient point to run a UPS! In addition, they were facing a wet cell battery replacement that would cost them $355K. Without replacing the batteries, they were running the risk of a failure that could jeopardize their entire business.

DVL’s Data Center Engineer provided a complete analysis of the systems. The results are shown below:

| Option | Description | Cost | UPS Efficiency at 30 kVA | Annual Energy Cost |
| ------ | ----------- | ---- | ------------------------ | ------------------ |
| 1 | Replace the wet cell batteries | $355,000 | 60% | $53,600 |
| 2 | Replace the UPSs with two 250 kVA UPSs with VRLA batteries | $340,000 | 96.5% | $2,915 |
| 3 | Replace the UPSs with two 150 kVA UPSs with VRLA batteries | $299,400 | 97.5% | $2,060 |

As you can see, it would be cheaper to right-size the UPS than to replace the batteries. In addition, because the old UPS is so inefficient at this load, resizing would also save the customer more than 90% on their energy costs!
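The annual energy cost figures are consistent with a simple loss model: a UPS wastes (load ÷ efficiency − load) kW as heat, continuously, all year. Here is a minimal sketch of that arithmetic; the roughly $0.31/kWh blended rate is an assumption back-calculated to match the table, not a figure from the analysis, and load is treated as kW (unity power factor) for simplicity:

```python
HOURS_PER_YEAR = 8760

def annual_ups_loss_cost(load_kw, efficiency, rate_per_kwh=0.306):
    """Yearly cost of UPS conversion losses.

    rate_per_kwh is an assumed blended electricity rate chosen to
    reproduce the figures quoted in this post.
    """
    input_kw = load_kw / efficiency   # power drawn from the utility
    loss_kw = input_kw - load_kw      # power wasted as heat in the UPS
    return loss_kw * HOURS_PER_YEAR * rate_per_kwh

for eff in (0.60, 0.965, 0.975):
    print(f"{eff:.1%} efficient: ${annual_ups_loss_cost(30, eff):,.0f}/year")
```

At 60% efficiency a 30 kW load forces 50 kW of input, so 20 kW is lost around the clock; at 97.5% the loss drops below 1 kW, which is where the roughly 25x energy savings comes from.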

As a PECO Energy Partner, we can also help you obtain rebate dollars that are offered by PECO (and other utilities) to Data Center Customers that make such energy improvements. 

An additional option worth considering would be to use lithium-ion batteries in lieu of the VRLA type. They would add about $50K to the project cost, but you wouldn’t need another battery replacement for about 15 years.

Are you in this situation with your UPS?  Contact your DVL Data Center Engineer for a complete analysis.


Topics: VRLA, facilities, casestudy, UPS, Resizing, Wet Cell, KVA UPS

Are You Taking Good Care of Your Data Center?

by Emerson Network Power on 7/13/16 8:40 AM

One of the things we often take for granted is our health. We tend to neglect things like taking our vitamins, eating right, or visiting our doctor regularly. For enterprises, taking care of a data center’s health is critical. An ailing data center can have disastrous effects on your business. In fact, the 2016 Ponemon Institute study on data center downtime found that a single minute of downtime can cost up to $8,851.


So what can you do to keep your data center healthy? Simple: Preventive Maintenance (PM). Just like regular visits to the doctor, original equipment manufacturer (OEM)-trained technicians can help you diagnose possible issues within your critical infrastructure and resolve them promptly through systematic inspection, detection, and correction of failures. What’s more, routine maintenance increases the life of your equipment by identifying needed repairs and upgrades.

PM covers all aspects of your critical infrastructure, from power and cooling to electrical systems. Note, though, that the same Ponemon Institute study found that UPS battery failure remains the leading cause of data center downtime, so particular attention should be given to servicing your UPS.

The idea of having a PM strategy for your organization’s critical infrastructure sounds straightforward, but it is often overlooked by decision-makers due to a lack of resources or competing priorities. True enough, we’ve all been guilty of neglecting “maintenance” in our daily lives. Can you think of a time when you put off having your car serviced by professionals, either because you lacked the time or were simply too lazy to do so? Or when you delayed going to the dentist until your tooth started to ache and it was too late?

A good PM strategy involves understanding the performance of each aspect of your critical infrastructure. For example, the ideal PM frequency for your UPS is four times a year, while thermal management units require more attention, at six times a year. This is something to discuss with your OEM technician. Regardless, regular PM increases the Mean Time Between Failures (MTBF) of your equipment, and a higher MTBF means a more reliable unit.
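To see why MTBF matters, steady-state availability can be estimated as MTBF ÷ (MTBF + MTTR). A quick illustration; the hour figures below are hypothetical examples, not numbers from the Ponemon study:

```python
def availability(mtbf_hours, mttr_hours):
    """Fraction of time a unit is expected to be up:
    Mean Time Between Failures over total cycle time."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Hypothetical: suppose regular PM doubles a unit's MTBF from
# 50,000 h to 100,000 h, with an 8-hour mean time to repair.
before = availability(50_000, 8)
after = availability(100_000, 8)
print(f"before PM: {before:.5%}, after PM: {after:.5%}")
```

Doubling MTBF halves the expected downtime fraction, which is the quantitative payoff behind a PM program.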

I talk more about having a good PM strategy in this article.


Topics: Data Center, UPS

Finding the right architecture for power protection in hospitals

by Emerson Network Power on 3/30/16 8:42 AM


If you’ve read the post about distributed and centralized bypass architectures, you’re probably evaluating the right architecture for a new data center, or perhaps redesigning the one you currently use. The decision is not easy, and it will often impact the operation and performance of the power protection system in your data center and the connected loads. Unfortunately, in technology there is rarely a simple “yes or no” or “black or white” answer, and this holds true for power distribution as well. Instead, there is a grey area in which the right decision is strongly influenced by the specific context and depends on many parameters. Luckily, there are ways to find the best solution as a trade-off among the parameters involved.

If you’re considering the use of an Uninterruptible Power Supply (UPS), it means you are worried about the possibility of utility power failures and the associated downtime problems that follow. Given this, the selection of the appropriate configuration or architecture for power distribution is one of the first topics of discussion, and the use of a centralized, parallel, distributed, redundant, hot-standby or other configurations available, becomes an important part of it.  While there are numerous architectures to choose from, there are also several internal variables that will require your attention. Fortunately, a few elementary decisions will make the selection easier. Even if not all parameters can be matched, it’s important to at least begin the conversation and explore trade-offs and other considerations. Without trying to be exhaustive (which would require a dedicated white paper), you should consider at least the following:

a) Cost: more complex architectures will increase both your initial investment and your operating costs, not only at the design stage but across the entire life of your power system, especially with regard to efficiency. In other words, complex architectures increase your TCO.

b) Availability and reliability: how reliable should your power system be? And what about single or multiple points of failure? Would you need any type of redundancy?

c) Plans for growth: Do you expect your power demand or capacity to increase in the future? Will you re-configure your load distribution?

d) Modularity: related to the previous point, but highlighted separately because of its importance for UPSs. Do you need a modular solution for future expansion or redundancy?

e) Bypass architecture: an important point, as explained in a separate post.

f) Monitoring: the need to monitor the complete UPS power system, including any load shutdowns, and in combination with other systems such as thermal management.

g) Service and maintenance: once the initial investment in power protection has been made, do not forget to keep the system in optimum condition. Regular maintenance should be secured through service contracts. Also consider spares availability if multiple types of UPS are used, the ability to isolate a subset of the system, and remote diagnostic and preventive monitoring services such as Emerson Network Power’s LIFE for maximum availability.

h) Load profile: especially whether you have a few large loads or many small loads (perhaps distributed across several buildings or a wide area such as a wind farm), the autonomy required for each load, peak power demands, etc.

In addition, the decision is linked not only to the internal requirements of the power system but also to the type of load or application being protected, since requirements vary depending on whether the application is industrial, education, government, banking, healthcare, or data center. For example, servers that manage printers in a bank and the power protection for several surgery rooms in a hospital are by no means the same. In the worst case, the bank printers can simply be shut down, while shutting down a surgery room is not an option outside of scheduled maintenance: an unscheduled shutdown of the medical equipment would have a serious impact on the people in that room during an operation.

Let’s take the hospital example further and consider a particular case. In order to do a quick exercise and simplify, we can use a scenario with several surgery rooms as a reference (for example 5 to 20 rooms, each one with a 5-10 kVA UPS for individual protection), plus a small data center (for example with 30 kVA power consumption) and finally, other critical installations in the facility (let’s assume 300 kVA for offices, laboratories, elevators, etc.).

In this scenario, initially, the architectures that could be envisaged as a first step are:

1. Fully distributed, and for simplicity’s sake, a hospital with 10 surgery rooms is assumed here with 10 kVA for each surgery room plus a centralized UPS (>330 kVA) for the remaining loads.

2. A fully redundant solution based on a centralized UPS protecting all the loads, with this UPS in a parallel redundant configuration. The power rating for each of these UPSs would be 300 kVA + 30 kVA + (10 x 10 kVA) = 430 kVA.

3. An intermediate solution, referred to as “redundant hot standby”, in which the redundant UPS is sized only for the surgery rooms (10 surgery rooms x 10 kVA = 100 kVA), with a bypass line connected to the large centralized UPS (>430 kVA). The advantage of this solution is the smaller capacity required for the redundant hot standby UPS.
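Using the load assumptions above (10 surgery rooms at 10 kVA each, a 30 kVA data center, and 300 kVA of other loads), the installed UPS capacity of the three architectures can be compared with a quick sketch:

```python
SURGERY = 10 * 10   # kVA: 10 surgery rooms at 10 kVA each
DATA_CENTER = 30    # kVA: small data center
OTHER = 300         # kVA: offices, laboratories, elevators, etc.

# 1. Fully distributed: one small UPS per surgery room plus a
#    centralized UPS for the remaining loads.
distributed = SURGERY + (DATA_CENTER + OTHER)

# 2. Fully redundant centralized: two parallel UPSs, each sized
#    for every load in the facility.
centralized_redundant = 2 * (SURGERY + DATA_CENTER + OTHER)

# 3. Redundant hot standby: the centralized UPS carries all loads,
#    and a second UPS sized only for the surgery rooms backs them up.
hot_standby = (SURGERY + DATA_CENTER + OTHER) + SURGERY

print(distributed, centralized_redundant, hot_standby)  # 430 860 530
```

The hot-standby option buys redundancy for the most critical loads at 530 kVA of installed capacity, versus 860 kVA for full 2N redundancy, which is the CAPEX intuition behind the simulation result discussed below.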

Emerson Network Power has run several simulations based on typical scenarios like the hospital described above, considering optimization factors a), b), e) and h). Weighing the parameters for optimization, the energy savings (power consumption and heat dissipation), the initial investment (CAPEX), and the maintenance costs (OPEX), the “redundant hot standby” solution appears to be the most convenient. Moreover, the gap between architectures 1 and 3 grows with the number of surgery rooms and with the cost-simulation period (from 1 year up to 10 years).

This points us in the right direction in selecting the best distribution architecture for this application in hospitals using these parameters for optimization. Clearly, the analysis can be enriched with the other parameters listed above, or adapted to a particular case (number of surgery rooms, autonomy for each load, power demanded by the data center room, reliability, and so on), which could lead to a different choice. Globally, however, the redundant hot standby has proven to be a good trade-off.

As said at the beginning, there is no magic solution for the optimum selection, but we have sought to explore several guidelines and check points that will help drive you towards the best solution for your case. Of course, any additional variables and the reader’s experience are welcome and can only serve to enrich the discussion.


Topics: Data Center, PUE, energy, UPS, Efficiency, Thermal Management, DCIM, Uptime, sustainability, energy efficiency, preventative maintenance, power system, healthcare, hospitals

Six Steps to Improve Data Center Efficiency

by Emerson Network Power on 2/3/16 9:34 AM


Imagine the CEO of a public company saying, “On average, our employees are productive 10 percent of the day.” Sounds ridiculous, doesn’t it? Yet we regularly accept productivity levels of 10 percent or less from our IT assets. Similarly, no CEO would accept employees showing up for work 10, 20, or even 50 percent of the time, yet most organizations accept this standard when it comes to server utilization.

These examples alone make it clear our industry is not making the significant gains needed to get the most out of our IT assets. So, what can we do? Here are six steps you’re likely not yet taking to improve efficiency.

1. Increase Server Utilization: Raising server utilization is a major part of enabling server power supplies to operate at maximum efficiency, but the biggest benefit comes in the build-out it could delay. If you can tap into four times more server capacity, that could delay your need for additional servers – and possibly more space – by a factor of four.

2. Sleep Deep: Placing servers into a sleep state during known extended periods of non-use, such as nights and weekends, will go a long way toward improving overall data center efficiency. Powering down your servers has the potential to cut your total data center energy use by 9 percent, so it may be worth the extra effort.

3. Migrate to Newer Servers: In a typical data center, more than half of servers are ‘old,’ consuming approximately 65 percent of the energy while producing only 4 percent of the output. In most enterprise data centers, you can probably shut off all servers four or more years old after migrating their workloads, via virtual machines, to your newer hardware. In addition to the straight energy savings, this consolidation frees up space, power and cooling for your new applications.

4. Identify and Decommission Comatose Servers: Identifying servers that aren’t being utilized is not as simple as measuring CPU and memory usage. An energy efficiency audit from a trusted partner can help you put a program in place to take care of comatose servers and make improvements overall. An objective third-party can bring a fresh perspective beyond comatose servers including an asset management plan and DCIM to prevent more comatose servers in the future.

5. Draw from Existing Resources: If you haven’t already implemented the ten vendor-neutral steps of our Energy Logic framework, now is a good time to start. The returns on several of the steps outlined in this article are quantified in Energy Logic and, with local incentives, may achieve payback in less than two years.

6. Measure: The old adage applies: what isn’t measured isn’t managed. Whether you use PUE, CUPS, SPECPower or a combination of them all, knowing where you stand and where you want to be is essential.
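As a concrete example of the measurement step, PUE is simply total facility energy divided by the energy delivered to IT equipment. The numbers below are illustrative only, not measurements from any particular facility:

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power Usage Effectiveness: 1.0 is the theoretical ideal
    (every kWh entering the facility reaches IT equipment);
    real sites run higher because of cooling, UPS losses, etc."""
    return total_facility_kwh / it_equipment_kwh

# Illustrative: 1,800 MWh into the facility, 1,000 MWh to IT.
print(pue(1_800_000, 1_000_000))  # 1.8
```

Tracking this ratio over time, rather than as a one-off number, is what turns measurement into management.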

Are there any other steps to improving data center efficiency you’ve seen?



Topics: data center energy, PUE, UPS, Thermal Management, DCIM, monitoring, the green grid, energy efficiency, availability, Data Center efficiency, preventative maintenance, energy cost

Selecting The Right Economizer

by Emerson Network Power on 12/9/15 2:19 PM

Written By: David Klusas, Emerson Network Power

forest-fire-Thermal-economizer-blog.jpg

You can say what you want about mink farms, but one thing is certain: They stink!

That can be a problem if you’re operating a data center near one and trying to use airside economizers to bring in fresh outside air for free cooling.

There are many efficiency benefits to utilizing outside air for economization, but not every situation is right for bringing outside air into a data center. Each type of economizer has its own advantages and challenges, depending on data center goals, site requirements, geography and climate.

I recently visited four data centers, from Canada to Utah, including the one next to the mink farm, and found multiple occasions where airside economization was not the ideal solution, despite its energy savings.

One data center in Canada was near a heavily forested area, and the company was concerned about smoke from forest fires entering the facility. A data center in Washington was next to an apple orchard, which creates a lot of dust during harvest. Another is using 100% outside air for economization, but has an 8MW chiller plant for backup, in case they ever need to close the outside air dampers and recirculate the indoor air. That’s a HUGE initial investment for only a backup system.

Data centers have made cutting energy consumption a priority to save money and meet government regulations. Cooling accounts for almost 40 percent of data center energy usage, so it’s a main focal point for driving energy savings. More recently, water conservation has become a priority in the selection of cooling systems and economization strategies. At the same time, relative cost and the payback periods remain key factors in selecting these large, expensive systems.

All economizer systems use either outside air and/or water to reduce or eliminate mechanical cooling in data center cooling units. These economizer systems generate significant energy savings of up to 50 percent, compared to legacy systems. The first decision most data center managers make in selecting an economization strategy is the type of data center environment they want to operate, which naturally then leads to a decision on whether or not to bring outside air into the data center. As a result, there are two primary economizer designs typically deployed in data centers: direct and indirect.

While direct and indirect economizers operate in different ways, the ultimate goal of both systems is to provide free cooling to a room or facility, thus reducing the overall energy consumption of the facility. However, fundamental differences between the methods in which direct and indirect systems economize greatly impact the temperature and humidity environment that can be efficiently maintained within the data center.

Direct economization brings outside air into the data center using a system of ductwork, dampers, and sensors. These systems usually have lower capital costs than other forms of economization and work well in moderate climates. In the right climate, direct outside-air economizers can be a very efficient and effective strategy, but they do introduce the risk of contaminants and wide humidity swings in the data center. For maximum annual savings, a wide acceptable supply air temperature and humidity window must be implemented. For highly critical data centers, the risk of outdoor contaminants and wide temperature and humidity swings is sometimes too significant for comfort.

In contrast, indirect economizers can offer significant energy savings while limiting the prior concerns. Indirect economizers do not bring outside air into the data center, but instead use an indirect method to transfer heat from the data center to outside the building. There are primarily three types of indirect economizer technologies:
• Air-to-air heat exchangers, or heat wheels, in a wet or dry state
• Pumped refrigerant economizers, such as the Liebert® DSE™ system economizer
• Cooling towers for chilled water systems
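The selection logic above can be sketched as a simple first-pass decision helper. The criteria and branch order here are assumptions distilled from this post, not a formal selection methodology:

```python
def suggest_economizer(contaminant_risk: bool,
                       moderate_climate: bool,
                       water_available: bool) -> str:
    """Rough first-pass suggestion based on the trade-offs
    discussed in this post (illustrative only)."""
    if not contaminant_risk and moderate_climate:
        # Direct outside-air systems have low capital cost and are
        # efficient when the air is clean and the climate cooperates.
        return "direct (airside) economizer"
    if water_available:
        # Wetted air-to-air exchangers or cooling towers extend
        # economization hours at the cost of water consumption.
        return "indirect economizer (wetted air-to-air or cooling tower)"
    # Pumped-refrigerant systems use no water and keep outside air out.
    return "indirect economizer (pumped refrigerant, e.g. Liebert DSE)"

# A site next to the mink farm, in a dry region:
print(suggest_economizer(contaminant_risk=True,
                         moderate_climate=True,
                         water_available=False))
```

A real selection would also weigh the factors discussed below, such as available indoor, outdoor, or rooftop space and first cost.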

Sensible air-to-air plate frame heat exchangers transfer heat between two air streams while maintaining complete separation, thus eliminating the opportunity for contamination and transfer of humidity into the data center space. These units can be operated in a dry state, or can be sprayed with water to increase their effectiveness and hours of economization. Heat wheels offer similar qualities to air-to-air plate frame heat exchangers, but can have higher air leakage rates and require additional maintenance to maintain their performance.

The Liebert DSE system is a direct-expansion (DX) system that utilizes an integrated pumped refrigerant economizer to maximize annual energy savings and provide superior availability without the need for separate economization coils. When outdoor ambient temperatures are low enough, the integrated refrigerant pump circulates the refrigerant in lieu of the compressor to maintain the desired supply air temperature. The refrigerant pump uses a fraction of the energy used by the compressor. As outdoor ambient temperatures rise, the Liebert DSE system automatically transitions back to compressors to maintain the desired supply air temperature. Its integrated Liebert iCOM™ thermal controls automatically optimize the entire system to provide more free cooling throughout the year.

Because of its efficiency advantages, the Liebert DSE system was recently approved for use in California data centers under Title 24. Its economizer was shown to reduce time dependent valuation (TDV) by 8-10 percent and, since it uses no water, save around 4 million gallons of water annually in a 1MW data center, compared to water economizers.

Initial installation costs for any of these economizer options can be affected by how well the technology under consideration fits into the overall design of the existing facility. The amount of indoor, outdoor or rooftop space required for situating the units will affect the selection decision. Chilled water systems with cooling towers tend to be the most costly, because of the high system first cost, use of water and a higher maintenance burden relating to their complexity.

Emerson Network Power offers options for all of these economizer technologies. There is no single economizer technology that fits every situation. Each has its own strengths based on location and application, and each has its challenges. Fortunately, there’s an economization option for virtually every location – even next to a mink farm.


Topics: CUE, Emerson Network Power, Data Center, data center energy, efficient data center, DVL, UPS, Thermal Management, DCIM, energy efficiency, preventative maintenance, 7x24, Economizer
