Finding the right architecture for power protection in hospitals

by Emerson Network Power on 3/30/16 8:42 AM


If you’ve read the post about distributed and centralized bypass architectures, you’re probably evaluating the right architecture for a new data center, or perhaps redesigning the one you’re currently using. The decision is not easy, and it will often impact the operation and performance of the power protection system in your data center and of the connected loads. Unfortunately, in technology there is rarely a simple “yes or no,” black-or-white answer, and this holds true for power distribution as well. Instead there is a grey area in which the right decision is strongly influenced by the specific context and case, and depends on many parameters. Luckily, there are ways to find the best solution as a trade-off among the multiple parameters involved.

If you’re considering the use of an Uninterruptible Power Supply (UPS), it means you are worried about utility power failures and the downtime problems that follow. Given this, selecting the appropriate configuration or architecture for power distribution is one of the first topics of discussion, and choosing among centralized, parallel, distributed, redundant, hot-standby and other available configurations becomes an important part of it. While there are numerous architectures to choose from, there are also several internal variables that will require your attention. Fortunately, a few elementary decisions will make the selection easier. Even if not all parameters can be matched, it’s important to at least begin the conversation and explore trade-offs and other considerations. Without trying to be exhaustive (which would require a dedicated white paper), you should consider at least the following:

a) Cost: more complex architectures will increase not only your initial investment but also your operating costs across the entire life of your power system, especially with regard to efficiency. In other words, complex architectures will increase your total cost of ownership (TCO).

b) Availability and reliability: how reliable does your power system need to be? Can you tolerate single (or multiple) points of failure? Do you need any type of redundancy?

c) Plans for growth: Do you expect your power demand or capacity to increase in the future? Will you re-configure your load distribution?

d) Modularity: related to the previous point, but highlighted separately because of its importance for UPS systems. Do you need a modular solution for future expansion or redundancy?

e) Bypass architecture: an important point, as explained in a separate post.

f) Monitoring: the need to monitor the complete UPS power system, including any shutdown of loads, and in combination with other systems such as thermal management.

g) Service and maintenance: once the initial investment in power protection has been made, do not forget to keep the system in optimal condition. Regular maintenance should be secured through service contracts; also check spares availability if multiple UPS types are used, the capability to isolate a subset of the system, and the availability of remote diagnostic and preventive monitoring services (such as Emerson Network Power’s LIFE) for maximum availability.

h) Load profile: especially whether you have a few large loads or many small loads (perhaps distributed across several buildings or over a wide area such as a wind farm), the autonomy required for each load, peak power demands, and so on.

In addition, the decision is not only related to the internal requirements of the power system; it is also linked to the type of load or application to be protected, as requirements and decisions vary depending on whether the application is industrial, education, government, banking, healthcare or data center. For example, servers that manage printers in a bank and the power protection systems that serve several surgery rooms in a hospital are by no means the same. In the worst case, the bank printers can simply be shut down, while shutting down the surgery rooms is not an option except for scheduled maintenance: an unscheduled shutdown of the medical equipment in a surgery room would have a serious impact on the patient undergoing an operation in that room.

Let’s take the hospital example further and consider a particular case. To keep the exercise quick and simple, we can use as a reference a scenario with several surgery rooms (for example, 5 to 20 rooms, each with a 5-10 kVA UPS for individual protection), plus a small data center (for example, 30 kVA of power consumption) and other critical installations in the facility (let’s assume 300 kVA for offices, laboratories, elevators, etc.).

In this scenario, the architectures that could initially be envisaged are:

1. Fully distributed: for simplicity’s sake, assume a hospital with 10 surgery rooms, each protected by its own 10 kVA UPS, plus a centralized UPS (>330 kVA) for the remaining loads.

2. A fully redundant solution based on centralized UPS units protecting all the loads, in a parallel redundant configuration. Each of these UPS units would be sized at 300 kVA + 30 kVA + (10 x 10 kVA) = 430 kVA.

3. An intermediate solution, referred to as “redundant hot standby,” in which the redundant UPS is sized only for the surgery rooms (10 surgery rooms x 10 kVA) and has a bypass line connected to the large centralized UPS (>430 kVA). The advantage of this solution is the much smaller capacity required for the redundant hot-standby UPS.
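Putting numbers on the three options above, a quick back-of-the-envelope comparison (a minimal sketch using only the scenario’s assumed figures) looks like this:

```python
# Capacity comparison for the three candidate architectures, using the
# scenario's assumptions: 10 surgery rooms at 10 kVA each, a 30 kVA data
# center and 300 kVA of other critical loads.
ROOMS, ROOM_KVA = 10, 10
DATA_CENTER_KVA = 30
OTHER_KVA = 300

surgery_total = ROOMS * ROOM_KVA  # 100 kVA of surgery-room load

# 1. Fully distributed: one small UPS per room, plus a central UPS
#    sized for the data center and the other critical loads.
distributed_central_kva = DATA_CENTER_KVA + OTHER_KVA  # > 330 kVA

# 2. Fully redundant centralized: each parallel UPS carries everything.
centralized_each_kva = OTHER_KVA + DATA_CENTER_KVA + surgery_total  # 430 kVA

# 3. Redundant hot standby: the standby UPS covers only the surgery
#    rooms, with a bypass line to the large central UPS (> 430 kVA).
hot_standby_kva = surgery_total  # 100 kVA

print(distributed_central_kva, centralized_each_kva, hot_standby_kva)
# → 330 430 100
```

The gap between the 430 kVA redundant units in option 2 and the 100 kVA hot-standby unit in option 3 is what drives the CAPEX and efficiency differences discussed below.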

Emerson Network Power has run several simulations based on typical scenarios like the hospital described above, considering optimization factors a), b), e) and h). Taking into account energy savings (power consumption and heat dissipation), initial investment (CAPEX) and maintenance costs (OPEX), the “redundant hot standby” solution appears to be the most convenient.
Moreover, the difference between architectures 1 and 3 grows as the number of surgery rooms or the cost-simulation period (from 1 year up to 10 years) increases.
This points us in the right direction for selecting the best distribution architecture for this hospital application using these optimization parameters. The analysis can clearly be enriched with the other parameters described above, or adapted to the particular case (number of surgery rooms, autonomy for each load, power demanded by the data center room, reliability, etc.), which could lead to a different choice; globally, however, the redundant hot standby has proven to be a good trade-off.

As said at the beginning, there is no magic formula for the optimum selection, but we have explored several guidelines and checkpoints that will help drive you toward the best solution for your case. Any additional variables and the reader’s own experience are of course welcome and can only enrich the discussion.


Topics: Data Center, PUE, energy, UPS, Efficiency, Thermal Management, DCIM, Uptime, sustainability, energy efficiency, preventative maintenance, power system, healthcare, hospitals

Six Steps to Improve Data Center Efficiency

by Emerson Network Power on 2/3/16 9:34 AM

Imagine the CEO of a public company saying, “On average, our employees are productive 10 percent of the day.” Sounds ridiculous, doesn’t it? Yet we regularly accept productivity levels of 10 percent or less from our IT assets. Similarly, no CEO would accept employees showing up for work 10, 20, or even 50 percent of the time, yet most organizations accept this standard when it comes to server utilization.

These examples alone make it clear our industry is not making the significant gains needed to get the most out of our IT assets. So, what can we do? Here are six steps you’re likely not yet taking to improve efficiency.

1. Increase Server Utilization: Raising server utilization is a major part of enabling server power supplies to operate at maximum efficiency, but the biggest benefit comes in the build-out it could delay. If you can tap into four times more server capacity, that could delay your need for additional servers – and possibly more space – by a factor of four.

2. Sleep Deep: Placing servers into a sleep state during known extended periods of non-use, such as nights and weekends, will go a long way toward improving overall data center efficiency. Powering down your servers has the potential to cut your total data center energy use by 9 percent, so it may be worth the extra effort.

3. Migrate to Newer Servers: In a typical data center, more than half of servers are ‘old,’ consuming approximately 65 percent of the energy while producing only 4 percent of the output. In most enterprise data centers, you can probably shut off all servers four or more years old once you migrate their workloads, via virtual machines, to your newer hardware. In addition to the straight energy savings, this consolidation will free up space, power and cooling for your new applications.

4. Identify and Decommission Comatose Servers: Identifying servers that aren’t being utilized is not as simple as measuring CPU and memory usage. An energy efficiency audit from a trusted partner can help you put a program in place to take care of comatose servers and make improvements overall. An objective third party can bring a fresh perspective that goes beyond comatose servers, including an asset management plan and DCIM to prevent more comatose servers in the future.

5. Draw from Existing Resources: If you haven’t already implemented the ten vendor-neutral steps of our Energy Logic framework, start there. The returns on several of the steps outlined in this article are quantified in Energy Logic and, with local incentives, may achieve paybacks of less than two years.

6. Measure: The old adage applies: what isn’t measured isn’t managed. Whether you use PUE, CUPS, SPECPower or a combination of them all, knowing where you stand and where you want to be is essential.
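For the measurement step, PUE is simply total facility power divided by the power delivered to the IT equipment, with 1.0 as the theoretical ideal. A minimal sketch (the example figures are hypothetical):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power over IT power."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical example: a 500 kW facility draw delivering 290 kW to IT gear.
print(round(pue(500, 290), 2))  # → 1.72
```

Tracking this ratio over time (rather than a single snapshot) is what turns the number into a management tool.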

Are there any other steps to improving data center efficiency you’ve seen? 


Topics: data center energy, PUE, UPS, Thermal Management, DCIM, monitoring, the green grid, energy efficiency, availability, Data Center efficiency, preventative maintenance, energy cost

Selecting The Right Economizer

by Emerson Network Power on 12/9/15 2:19 PM

Written By: David Klusas, Emerson Network Power


You can say what you want about mink farms, but one thing is certain: They stink!

That can be a problem if you’re operating a data center near one and trying to use airside economizers to bring in fresh outside air for free cooling.

There are many efficiency benefits to utilizing outside air for economization, but not every situation is right for bringing outside air into a data center. Each type of economizer has its own advantages and challenges, depending on data center goals, site requirements, geography and climate.

I recently visited four data centers, from Canada to Utah, including the one next to the mink farm, and found multiple occasions where airside economization was not the ideal solution, despite its energy savings.

One data center in Canada was near a heavily forested area, and the company was concerned about smoke from forest fires entering the facility. A data center in Washington was next to an apple orchard, which creates a lot of dust during harvest. Another is using 100% outside air for economization, but has an 8MW chiller plant for backup, in case they ever need to close the outside air dampers and recirculate the indoor air. That’s a HUGE initial investment for only a backup system.

Data centers have made cutting energy consumption a priority to save money and meet government regulations. Cooling accounts for almost 40 percent of data center energy usage, so it’s a main focal point for driving energy savings. More recently, water conservation has become a priority in the selection of cooling systems and economization strategies. At the same time, relative cost and the payback periods remain key factors in selecting these large, expensive systems.

All economizer systems use outside air, water, or both to reduce or eliminate mechanical cooling in data center cooling units. These economizer systems generate significant energy savings of up to 50 percent compared to legacy systems. The first decision most data center managers make in selecting an economization strategy is the type of data center environment they want to operate, which naturally leads to a decision on whether or not to bring outside air into the data center. As a result, there are two primary economizer designs typically deployed in data centers: direct and indirect.

While direct and indirect economizers operate in different ways, the ultimate goal of both systems is to provide free cooling to a room or facility, thus reducing the overall energy consumption of the facility. However, fundamental differences between the methods in which direct and indirect systems economize greatly impact the temperature and humidity environment that can be efficiently maintained within the data center.

Direct economization brings outside air into the data center using a system of ductwork, dampers, and sensors. These systems usually have lower capital costs than other forms of economization and work well in moderate climates. In the right climate, direct outside-air economizers can be a very efficient and effective economization strategy, but they do introduce the risk of contaminants and wide humidity swings in the data center. For maximum annual savings, a wide acceptable supply air temperature and humidity window needs to be implemented in the data center. For highly critical data centers, the risk of outdoor contaminants and wide temperature and humidity swings is sometimes too significant for comfort.

In contrast, indirect economizers can offer significant energy savings while limiting the prior concerns. Indirect economizers do not bring outside air into the data center, but instead use an indirect method to transfer heat from the data center to outside the building. There are primarily three types of indirect economizer technologies:
• Air-to-air heat exchangers, or heat wheels, in a wet or dry state
• Pumped refrigerant economizers, such as the Liebert® DSE™ system economizer
• Cooling towers for chilled water systems

Sensible air-to-air plate frame heat exchangers transfer heat between two air streams, but maintain a complete separation, thus eliminating the opportunity for contamination and transfer of humidity into the data center space. These units can be operated in a dry state, or can be sprayed with water to increase their effectiveness and hours of economization. Heat wheels offer similar qualities to air-to-air plate frame heat exchangers, but can have higher air leakage rates and require additional maintenance to maintain their performance.
The Liebert DSE system is a direct-expansion (DX) system that utilizes an integrated pumped-refrigerant economizer to maximize annual energy savings and provide superior availability without the need for separate economization coils. When outdoor ambient temperatures are low enough, the integrated refrigerant pump circulates the refrigerant in lieu of the compressor to maintain the desired supply air temperature. The refrigerant pump uses a fraction of the energy used by the compressor. As outdoor ambient temperatures rise, the Liebert DSE system automatically transitions to compressors to maintain the desired supply air temperature. Its integrated Liebert iCOM™ thermal controls automatically optimize the entire system to provide more free cooling throughout the year.
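The pump-versus-compressor decision described above can be sketched as a simple control rule. The threshold and hysteresis values below are illustrative assumptions, not actual Liebert iCOM parameters:

```python
# Simplified mode-selection sketch: run the refrigerant pump (free cooling)
# when the outdoor ambient is cold enough, otherwise run the compressor.
# Hysteresis between engage/release thresholds avoids rapid mode "hunting".
PUMP_ENGAGE_C = 10.0   # assumed: below this, switch to the pump
PUMP_RELEASE_C = 13.0  # assumed: above this, fall back to the compressor

def select_mode(outdoor_c: float, current_mode: str) -> str:
    """Return 'pump' or 'compressor' based on outdoor temperature."""
    if current_mode == "pump":
        # Stay on the pump until the ambient clearly rises past the release point.
        return "pump" if outdoor_c < PUMP_RELEASE_C else "compressor"
    return "pump" if outdoor_c < PUMP_ENGAGE_C else "compressor"

print(select_mode(5.0, "compressor"))  # → pump
print(select_mode(12.0, "pump"))       # → pump (hysteresis keeps it engaged)
print(select_mode(20.0, "pump"))       # → compressor
```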

Because of its efficiency advantages, the Liebert DSE system was recently approved for use in California data centers under Title 24. Its economizer was shown to reduce time dependent valuation (TDV) by 8-10 percent and, since it uses no water, save around 4 million gallons of water annually in a 1MW data center, compared to water economizers.

Initial installation costs for any of these economizer options can be affected by how well the technology under consideration fits into the overall design of the existing facility. The amount of indoor, outdoor or rooftop space required for situating the units will also affect the selection decision. Chilled water systems with cooling towers tend to be the most costly, because of their high first cost, water usage and the higher maintenance burden associated with their complexity.

Emerson Network Power offers options for all of these economizer technologies. No single economizer technology fits every situation; each has its own strengths based on location and application, and each has its challenges. Fortunately, there’s an economization option for virtually every location – even next to a mink farm.


Topics: CUE, Emerson Network Power, Data Center, data center energy, efficient data center, DVL, UPS, Thermal Management, DCIM, energy efficiency, preventative maintenance, 7x24, Economizer

Highly reliable data centers using managed PDUs

by Emerson Network Power on 10/8/15 9:09 AM

Ronny Mees | Emerson Network Power


Today’s most innovative data centers are generally equipped with managed PDUs since their switching capabilities improve reliability. However, simply installing managed PDUs is not enough – an “unmanaged” managed PDU will actually reduce reliability.

So how do managed PDUs work? These advanced units offer a series of configurations which – if properly implemented – improve the availability of important services. The main features are Software Over Temperature Protection (SWOTP) and Software Over Current Protection (SWOCP), which are well described in the blog post “Considerations for a Highly Available Intelligent Rack PDU”.

It is also well known that managed PDUs can support commissioning and repair workflows in data centers. The combination of well-designed workflows and managed PDUs pushes operational reliability to a higher level.

In high-performance data centers using clusters, another important point comes into play: clusters are complex hierarchical structures of server farms, capable of running high-performance virtual machines and fully automated workflows.

As described here or here, such clusters are managed by centralized software together with server hardware.

Over the last couple of years, cluster solutions have been developed with strong and challenging availability goals, in order to avoid any situation that makes physical servers struggle within the cluster. However, there would still be the risk of applications and processes generating faults and errors and bringing down the complete cluster, unless there were an automated control process – and the good news is: there is.


The process that controls these worst-case scenarios is called fencing. Fencing automatically kicks any non-working nodes or services out of the cluster in order to maintain the availability of the others.

Fencing has different levels, which should be managed wisely. In a smooth scenario, fencing will stop misbehaving services or re-organize storage access (Fibre Channel switch fencing) to let the cluster proceed with its tasks.

A more drastic power fencing option, known as “STONITH” (Shoot The Other Node In The Head), allows the software to initiate an immediate shutdown of a node (internal power fencing) and/or a hard switch-off (external power fencing).

The internal power fencing method uses IPMI and other service processor protocols, while external power fencing uses any supported network protocol to switch off a PDU outlet. It is recommended to use secured protocols only, such as SNMPv3. So managed PDUs such as the MPH2 or MPX do not only provide power balancing, monitor power consumption and support data center operations workflows – they also allow the fencing software to react quickly for higher cluster reliability. It is therefore no secret that cluster solution manufacturers – e.g. Red Hat with RHEL 6.7 and newer – openly support such managed rack PDUs.
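As a rough illustration of external power fencing, the sketch below marks nodes that miss heartbeats and “shoots” them via a placeholder PDU call. The node names and timeout are hypothetical, and a real fence agent would speak a secured protocol such as SNMPv3 to the managed PDU rather than printing a message:

```python
HEARTBEAT_TIMEOUT = 10.0  # assumed: seconds of silence before a node is fenced

def switch_off_outlet(node: str) -> None:
    # Placeholder: a real fence agent would send a secured (e.g. SNMPv3)
    # command to the managed PDU outlet that powers this node.
    print(f"fencing {node}: PDU outlet switched off")

def fence_stale_nodes(last_heartbeat: dict, now: float) -> list:
    """Fence every node whose last heartbeat is too old; return the fenced nodes."""
    stale = [node for node, seen in last_heartbeat.items()
             if now - seen > HEARTBEAT_TIMEOUT]
    for node in stale:
        switch_off_outlet(node)   # external power fencing (STONITH)
        del last_heartbeat[node]  # the node is now out of the cluster
    return stale

heartbeats = {"node-a": 100.0, "node-b": 95.0, "node-c": 88.0}
print(fence_stale_nodes(heartbeats, now=100.0))  # → ['node-c']
```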



Topics: Data Center, PUE, robust data center, Containment, efficient data center, DVL, electrical distribution, energy, Battery, Thermal Management, energy efficiency, 7x24, PDU

Cut Thermal System Energy Use by up to 50%

by Marissa Donatone on 5/22/15 1:39 PM

Make sure to catch Emerson Network Power's Critical Advantage Webcast Series on Tuesday, June 9, at 1 p.m. ET

The New Era of Thermal Controls: See Where They Can Take Your Data Center

Thermal systems account for 38% of data center energy usage. A new generation of thermal system controls can help you reduce it.

Find out how by attending our Emerson Critical Advantage Webcast on May 18 – where we introduce the industry’s latest innovation in thermal system controls: the all-new Liebert® iCOM™ controls.

During our webcast, you’ll see how this new technology can:

  • Improve thermal system energy efficiency by up to 50%
  • Maximize thermal performance by harmonizing multiple cooling systems
  • Better protect your data center by improving air flow and air temperature control
  • Identify and resolve adverse conditions before it’s too late
  • Extend the life of cooling equipment by reducing wear and tear
  • Simply and easily gain actionable insight into real-time thermal system operation and metrics
  • Manage every thermal system device with a single system


Register today.


Topics: Emerson Network Power, Data Center, energy, Energy Star, Thermal Management, energy efficiency, webcast, performance, iCom
