Finding the right architecture for power protection in hospitals

by Emerson Network Power on 3/30/16 8:42 AM


If you’ve read the post about distributed and centralized bypass architectures, you’re probably evaluating the right architecture for a new data center, or perhaps redesigning the one you currently use. The decision is not easy, and it will affect the operation and performance of the power protection system in your data center and of the connected loads. Unfortunately, in technology there is rarely a simple “yes or no” or “black or white” answer, and this holds true for power distribution as well. Instead, there is a grey area in which the “right” decision depends strongly on the specific context and on many parameters. Luckily, there are ways to find the best solution as a trade-off among those parameters.

If you’re considering an Uninterruptible Power Supply (UPS), you are worried about utility power failures and the downtime that follows them. Given this, selecting the appropriate configuration or architecture for power distribution is one of the first topics of discussion, and the choice among centralized, parallel, distributed, redundant, hot-standby and other configurations becomes an important part of it. While there are numerous architectures to choose from, there are also several internal variables that require your attention. Fortunately, a few elementary decisions make the selection easier. Even if not every parameter can be satisfied, it’s important to at least begin the conversation and explore the trade-offs. Without trying to be exhaustive (which would require a dedicated white paper), you should consider at least the following (a simple scoring sketch follows the list):

a) Cost: more complex architectures will increase both your initial investment and your operating costs, not only at the design stage but over the entire life of your power system, especially with regard to efficiency. In other words, complex architectures will increase your total cost of ownership (TCO).

b) Availability and reliability: how reliable should your power system be? And what about single or multiple points of failure? Would you need any type of redundancy?

c) Plans for growth: Do you expect your power demand or capacity to increase in the future? Will you re-configure your load distribution?

d) Modularity: related to the previous point, but highlighted separately because of its importance for UPS systems. Do you need a modular solution for future expansion or redundancy?

e) Bypass architecture: an important consideration, as explained in a separate post.

f) Monitoring: do you need to monitor the complete UPS power system, including any controlled shutdown of loads, and integrate it with other systems such as thermal management?

g) Service and maintenance: once the initial investment in power protection has been made, keep the system in optimal condition. That means regular maintenance through service contracts, checking spare-parts availability if multiple UPS types are used, the ability to isolate a subset of the system, and, for maximum availability, remote diagnostic and preventive monitoring services such as Emerson Network Power’s LIFE.

h) Profile of the loads: whether you have a few large loads or many small ones (perhaps distributed across several buildings or over a wide area such as a wind farm), the autonomy required for each load, peak power demands, and so on.
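
One pragmatic way to weigh these criteria against each other, before any detailed engineering, is a simple weighted scoring matrix. The sketch below is purely illustrative: the criteria weights and the 1-5 scores are hypothetical placeholders that you would replace with figures agreed for your own site.

```python
# Hypothetical weighted scoring of candidate UPS architectures against the
# criteria listed above. All weights and scores are illustrative placeholders.
criteria_weights = {              # relative importance, summing to 1.0
    "cost_tco": 0.25,
    "availability_reliability": 0.30,
    "growth_modularity": 0.15,
    "monitoring": 0.10,
    "service_maintenance": 0.10,
    "load_profile_fit": 0.10,
}

# Scores from 1 (poor) to 5 (excellent) -- example values, not measurements.
architecture_scores = {
    "fully_distributed": {
        "cost_tco": 3, "availability_reliability": 4, "growth_modularity": 4,
        "monitoring": 2, "service_maintenance": 2, "load_profile_fit": 4,
    },
    "centralized_redundant": {
        "cost_tco": 2, "availability_reliability": 5, "growth_modularity": 3,
        "monitoring": 4, "service_maintenance": 4, "load_profile_fit": 3,
    },
    "redundant_hot_standby": {
        "cost_tco": 4, "availability_reliability": 4, "growth_modularity": 3,
        "monitoring": 4, "service_maintenance": 4, "load_profile_fit": 4,
    },
}

def weighted_score(scores):
    """Weighted sum of the criterion scores for one architecture."""
    return sum(criteria_weights[c] * s for c, s in scores.items())

for name, scores in architecture_scores.items():
    print(f"{name}: {weighted_score(scores):.2f}")
```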

In addition, the decision is not only driven by the internal requirements of the power system; it is also linked to the type of load or application to be protected, since requirements vary depending on whether the application is industrial, education, government, banking, healthcare or a data center. For example, servers that manage printers in a bank and a power protection system that supports several surgery rooms in a hospital are by no means the same. In the worst case, the bank’s printers can simply be shut down, while shutting down the surgery rooms is not an option except for scheduled maintenance: an unscheduled shutdown of the medical equipment in a surgery room would have serious consequences for the patient undergoing surgery and the staff in that room.

Let’s take the hospital example further and consider a particular case. To keep the exercise quick and simple, take a scenario with several surgery rooms as a reference (for example 5 to 20 rooms, each one with a 5-10 kVA UPS for individual protection), plus a small data center (with, say, 30 kVA of power consumption) and the other critical installations in the facility (assume 300 kVA for offices, laboratories, elevators, etc.).

In this scenario, the architectures that could be envisaged as a first step are:

1. Fully distributed: for simplicity’s sake, assume a hospital with 10 surgery rooms, each protected by its own 10 kVA UPS, plus a centralized UPS (>330 kVA) for the remaining loads.

2. A fully redundant solution based on a centralized UPS protecting all the loads, with the UPS units in a parallel redundant configuration. Each of these UPS units would be rated for 300 kVA + 30 kVA + (10 x 10 kVA) = 430 kVA.

3. An intermediate solution, referred to as “redundant hot standby”, in which the redundant UPS is sized only for the surgery rooms (10 surgery rooms x 10 kVA = 100 kVA) and is connected through a bypass line to the large centralized UPS (>430 kVA). Its advantage is the smaller capacity required for the redundant hot-standby UPS, as the sizing sketch below illustrates.
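
To make the sizing difference concrete, here is a minimal sketch that tallies the total installed UPS capacity each architecture would need, using only the example figures above. It counts nameplate kVA; it says nothing about redundancy level, efficiency or cost, which the simulations discussed next take into account.

```python
# Installed UPS capacity implied by each architecture, using the example
# figures above: 10 surgery rooms at 10 kVA each, a 30 kVA data center and
# roughly 300 kVA of other critical loads.
ROOMS, ROOM_KVA = 10, 10
DATA_CENTER_KVA, OTHER_LOADS_KVA = 30, 300

surgery_kva = ROOMS * ROOM_KVA                    # 100 kVA
other_kva = DATA_CENTER_KVA + OTHER_LOADS_KVA     # 330 kVA

# 1. Fully distributed: one UPS per room plus a central unit for the rest.
distributed = surgery_kva + other_kva             # 430 kVA installed

# 2. Centralized parallel redundant (1+1): two units, each rated for all loads.
centralized_redundant = 2 * (surgery_kva + other_kva)   # 860 kVA installed

# 3. Redundant hot standby: central UPS for everything, plus a standby UPS
#    sized only for the surgery rooms, connected via the bypass line.
hot_standby = (surgery_kva + other_kva) + surgery_kva   # 530 kVA installed

print(f"1. Fully distributed:     {distributed} kVA")
print(f"2. Centralized redundant: {centralized_redundant} kVA")
print(f"3. Redundant hot standby: {hot_standby} kVA")
```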

Emerson Network Power has run several simulations based on typical scenarios such as the hospital described above, considering optimization factors a), b), e) and h). Looking at the parameters for optimization, the energy savings (power consumption and heat dissipation), the initial investment (CAPEX) and the maintenance costs (OPEX), the “redundant hot standby” solution appears to be the most convenient.

Moreover, the gap between architectures 1 and 3 widens as the number of surgery rooms grows or as the cost-simulation period lengthens (from 1 year up to 10 years).

This points us in the right direction when selecting the best distribution architecture for this hospital application using these optimization parameters. The analysis can clearly be enriched with the other parameters listed above, or adapted to the particular case (number of surgery rooms, autonomy for each load, power demanded by the data center, reliability, …), which could lead to a different choice; but overall, the redundant hot standby has proven to be a good trade-off.

As said at the beginning, there is no magic solution for the optimum selection, but we have tried to lay out several guidelines and checkpoints that will help drive you towards the best solution for your case. Of course, additional variables and the reader’s experience are welcome and can only enrich the discussion.


Topics: Data Center, PUE, energy, UPS, Efficiency, Thermal Management, DCIM, Uptime, sustainability, energy efficiency, preventative maintenance, power system, healthcare, hospitals

Six Steps to Improve Data Center Efficiency

by Emerson Network Power on 2/3/16 9:34 AM

Imagine the CEO of a public company saying, “on average, our employees are productive 10 percent of the day.” Sounds ridiculous, doesn’t it? Yet we regularly accept productivity levels of 10 percent or less from our IT assets. Similarly, no CEO would accept their employees showing up for work 10, 20, or even 50 percent of the time, yet most organizations accept this standard when it comes to server utilization.

These examples alone make it clear our industry is not making the significant gains needed to get the most out of our IT assets. So, what can we do? Here are six steps you’re likely not yet taking to improve efficiency.

1. Increase Server Utilization: Raising server utilization is a major part of enabling server power supplies to operate at maximum efficiency, but the biggest benefit comes in the build-out it could delay. If you can tap into four times more server capacity, that could delay your need for additional servers – and possibly more space – by a factor of four.

2. Sleep Deep: Placing servers into a sleep state during known extended periods of non-use, such as nights and weekends, will go a long way toward improving overall data center efficiency. Powering down your servers has the potential to cut your total data center energy use by 9 percent, so it may be worth the extra effort.

3. Migrate to Newer Servers: In a typical data center, more than half of servers are ‘old,’ consuming approximately 65 percent of the energy and producing 4 percent of the output. In most enterprise data centers, you can probably shut off all servers four or more years old after you migrate their workloads, via virtual machines, to your newer hardware. In addition to the straight energy savings, this consolidation will free up space, power and cooling for your new applications.

4. Identify and Decommission Comatose Servers: Identifying servers that aren’t being utilized is not as simple as measuring CPU and memory usage. An energy efficiency audit from a trusted partner can help you put a program in place to take care of comatose servers and make improvements overall. An objective third party can also bring a fresh perspective that goes beyond comatose servers, including an asset management plan and DCIM to prevent new comatose servers in the future.

5. Draw from Existing Resources: If you haven’t already implemented the ten vendor-neutral steps of our Energy Logic framework, they’re a good place to start. The returns on several of the steps outlined in this article are quantified in Energy Logic and, with local incentives, may achieve paybacks in less than two years.

6. Measure: The old adage applies: what isn’t measured isn’t managed. Whether you use PUE, CUPS, SPECpower or a combination of them all, knowing where you stand and where you want to be is essential (a quick PUE calculation is sketched below).
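
For reference, PUE (Power Usage Effectiveness) is total facility energy divided by the energy delivered to the IT equipment. The sketch below shows the calculation with made-up example readings; only the formula itself is standard.

```python
# PUE = total facility energy / energy delivered to IT equipment.
# The readings below are made-up example values.
def pue(total_facility_kwh, it_equipment_kwh):
    """Power Usage Effectiveness: 1.0 is the theoretical ideal."""
    return total_facility_kwh / it_equipment_kwh

total_kwh = 180_000  # assumed monthly facility energy (IT, cooling, power losses, lighting)
it_kwh = 100_000     # assumed monthly energy delivered to IT equipment

print(f"PUE = {pue(total_kwh, it_kwh):.2f}")  # -> PUE = 1.80
```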

What other steps to improve data center efficiency have you seen?


Topics: data center energy, PUE, UPS, Thermal Management, DCIM, monitoring, the green grid, energy efficiency, availability, Data Center efficiency, preventative maintenance, energy cost

Beyond the Finish Line: What to expect from the Federal Civilian Cybersecurity Strategy

by Rick Holloway, Emerson Network Power on 9/23/15 9:14 AM



The Federal Government’s 30-day Cybersecurity Sprint ended earlier this summer, but the real work continues. Government agencies and equipment manufacturers are awaiting the results of the ongoing cybersecurity review and the release of the Federal Civilian Cybersecurity Strategy – expected soon – but the preliminary principles of the strategy are intriguing on their own.

One thing that’s clear – and not at all surprising – is the government believes the approach to the increasing cybersecurity challenge is both behavioral and equipment-focused. There is no magic bullet piece of hardware or software that will provide adequate protection against all of today’s security threats, but a combination of threat awareness, adherence to best practices and deploying and properly using today’s hardened technologies can reduce risks.

There are eight key principles that will form the foundation of the Federal Civilian Cybersecurity Strategy. They are:

1. Protecting Data: Better protect data at rest and in transit.

2. Improving Situational Awareness: Improve indication and warning.

3. Increasing Cybersecurity Proficiency: Ensure a robust capacity to recruit and retain cybersecurity personnel.

4. Increasing Awareness: Improve overall risk awareness among all users.

5. Standardizing and Automating Processes: Decrease time needed to manage configurations and patch vulnerabilities.

6. Controlling, Containing, and Recovering from Incidents: Contain malware proliferation, privilege escalation, and lateral movement. Quickly identify and resolve events and incidents.

7. Strengthening Systems Lifecycle Security: Increase inherent security of platforms by buying more secure systems and retiring legacy systems in a timely manner.

8. Reducing Attack Surfaces: Decrease complexity and number of things defenders need to protect.

I doubt anyone would disagree with those points. But what can we infer if we take a closer look?

It’s not called out specifically, but a consistent theme is access awareness and control. We live in a time when everything is connected—and needs to be, to ensure our data, our networks, our lives move at the speed the world demands. But every connection is an access point, and every access point is a potential vulnerability. Understanding where those access points are and securing them through both technology and best practices is a significant first step in securing a network. This can be as simple as proper credential and password controls.

The point about replacing less secure legacy systems with more secure, modern technologies is important. While there are limits to the effectiveness of software updates and patches, equipment replacement can be costly. Organizations that value security will put plans in place to upgrade equipment over time—and the sooner they start, the better.

One of the more interesting and encouraging points in the preliminary list is the bullet about recruiting and training cybersecurity personnel. This reflects a necessary awareness of the nature of these threats. They aren’t static; hackers are evolving and devising new attacks and tactics every day. It’s critical that our IT personnel maintain the same vigilance and dedication to security and threat education.

Of course, these are simply preliminary indications of the government’s thinking. We’ll know more when the Federal CIO releases the final Federal Civilian Cybersecurity Strategy, and we’ll take a closer look at that strategy and what it means at that time.



Topics: Data Center, Thermal Management, DCIM, monitoring, cybersecurity, security

Three Best Practices to Avoid Cyberattacks

by Emerson Network Power on 8/19/15 8:49 AM


From major retail cyberattacks to Hollywood studio hackers, cybersecurity is now, more than ever, on the mind of every CIO in the world, and rightfully so. According to our recent article in Data Center Journal, the most common cause of a data breach is a malicious or criminal attack, which can end up costing CIOs not only nights of sleep but also millions of dollars, in some cases upwards of $5.4 million.

While these attacks can be devastating, there are some best practices to help avoid cyber-disaster:

1. Don’t give hackers a back door: In order to prevent data breaches, consider isolating your network to avoid allowing easy access to your information. Since access can be logged through network isolation, unwanted activity can be monitored and flagged. To isolate your network and limit threats without compromising necessary access or performance, consider utilizing isolated out-of-band management networks. These networks provide full, real-time access without giving hackers back door entry.

2. Enforce the three A’s: Authentication, authorization and auditing are all critical to securing your network. Ensure your cybersecurity by using fine-grained user authentication through a centralized and controlled process, while still allowing easy access for administrators.

3. Ensure trust and best practices with outside vendors: Servicing data center equipment typically requires giving people outside your organization atypical access to sensitive information about your data center. Even new technologies now require software updates, sharing IP addresses and network ports to accommodate them. While you may feel confident in your own organization’s security practices, it’s just as important to trust the security measures practiced by those outside parties and contractors.

Security is a complex, never-ending process, but the right partners can help cut through that complexity and ensure your network—and your business—do not become the next victim.

What other best practices do you use to ensure your network is secure?



Topics: Data Center, PUE, UPS, DCIM, monitoring, Trellis, the green grid, cybersecurity

Choosing Between VSDs and EC Fans. Making the right investment when upgrading fan technology.

by Emerson Network Power on 7/15/15 3:23 PM


Fans that move air and pressurize the data center’s raised floor are significant contributors to cooling system energy use. After mechanical cooling, fans are the next largest energy consumer in computer room air conditioning (CRAC) units. One way many data center managers reduce energy usage and control their costs is by investing in variable speed fan technology. Such improvements can reduce fan energy consumption by as much as 76 percent.

With the different options on the market, it may not be clear which technology is best. Today, variable speed drives (VSDs), also referred to as variable frequency drives (VFDs), and electronically commutated (EC) fans are two of the most effective fan improvement technologies available. The advantages of both options are outlined below to help data center managers determine which fan technology is best for achieving energy efficiency goals.

How do different fan technologies work? 
In general, variable speed fan technologies save energy by enabling cooling systems to adjust fan speed to meet the changing demand, which allows them to operate more efficiently. While cooling units are typically sized for peak demand, peak demand conditions are rare in most applications. VSDs and EC fans more effectively match airflow output with load requirements, adjusting speeds based on changing needs. This prevents overcooling and generates significant energy savings.

With VSDs, drives are added to the fixed-speed motors that drive the centrifugal fans traditionally used in precision cooling units. The drives allow fan speed to be adjusted to operating conditions, reducing fan speed and power draw as the load decreases. Because of the fan laws, energy consumption changes dramatically as fan speed is decreased or increased; a 20 percent reduction in fan speed provides nearly 50 percent savings in fan power consumption.
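
The fan laws referred to here are the fan affinity laws: airflow scales roughly linearly with fan speed, while fan power scales roughly with the cube of speed. A quick sketch of that cube relationship (illustrative, ideal-case figures):

```python
# Fan affinity laws (ideal case): airflow ~ speed, power ~ speed**3.
def relative_fan_power(speed_fraction):
    """Fan power as a fraction of full-speed power, per the cube law."""
    return speed_fraction ** 3

for pct in (100, 90, 80, 75, 50):
    power = relative_fan_power(pct / 100)
    print(f"{pct:>3}% speed -> {power * 100:5.1f}% power "
          f"({(1 - power) * 100:4.1f}% saving)")

# 80% speed -> 51.2% power, i.e. roughly the 'nearly 50 percent savings'
# quoted above for a 20 percent reduction in fan speed.
```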

EC fans are direct drive fans that are integrated into the cooling unit by replacing the centrifugal fans and motor assemblies. They are inherently more efficient than traditional centrifugal fans because of their unique design, which uses a brushless EC motor in a backward curved motorized impeller. EC fans achieve speed control by varying the DC voltage delivered to the fan. Independent testing of EC fan energy consumption versus VSDs found that EC fans mounted inside the cooling unit created an 18 percent savings. With new units, EC fans can be located under the floor, further increasing the savings.

How do VSDs and EC fans compare?

Energy Savings
One of the main differences between VSDs and EC fans is that VSDs save energy when the fan speed can be operated below full speed. VSDs do not reduce energy consumption when the airflow demands require the fans to operate at or near peak load. Conversely, EC fans typically require less energy even when the same quantity of air is flowing. This allows them to still save energy when the cooling unit is at full load. EC fans also distribute air more evenly under the floor, resulting in more balanced air distribution. Another benefit of direct-drive EC fans is the elimination of belt losses seen with centrifugal blowers. Ultimately, EC fans are the more efficient fan technology.

Cooling Unit Type
VSDs are particularly well-suited for larger systems with ducted upflow cooling units that require higher static pressures, while EC fans are better suited for downflow units.

Maintenance 
In terms of maintenance, EC fans offer an advantage: they have no fan belts to wear, and their integrated motors virtually eliminate fan dust.

Installation 
Both VSDs and EC fans can be installed on existing cooling units or specified in new units. When installing on existing units, factory-grade installation is a must.

Payback
In many cases, the choice between VSDs and EC fans comes down to payback. If rapid payback is a priority, then VSDs are likely the better choice: they can offer payback in fewer than 10 months when fans are operated at 75 percent speed.

However, EC fans will deliver greater, long-term energy savings and a better return on investment (ROI). While EC fans can cost up to 50 percent more than VSDs, they generate greater energy savings and reduce overall maintenance costs, ultimately resulting in the lowest total cost of ownership.
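
One way to structure that decision is a simple payback and lifetime-cost comparison like the sketch below. Every cost and savings figure in it is an assumed placeholder chosen only to mirror the relationships described above (EC fans costing more up front but saving more over time); replace them with a real quote and a measured load profile.

```python
# Hypothetical payback and lifetime comparison between a VSD retrofit and
# EC fans. All costs and savings are assumed placeholders, not vendor figures.
def payback_months(upfront_cost, monthly_saving):
    """Simple payback period in months (ignores maintenance and financing)."""
    return upfront_cost / monthly_saving

vsd_cost, vsd_saving = 8_000.0, 900.0     # assumed retrofit cost / monthly energy saving
ec_cost, ec_saving = 12_000.0, 1_200.0    # assumed: higher cost, larger saving

print(f"VSD payback:    {payback_months(vsd_cost, vsd_saving):.1f} months")
print(f"EC fan payback: {payback_months(ec_cost, ec_saving):.1f} months")

# Over a longer horizon, the larger monthly saving is what can give EC fans
# the lower total cost of ownership described above.
YEARS = 10
for name, cost, saving in (("VSD", vsd_cost, vsd_saving),
                           ("EC fan", ec_cost, ec_saving)):
    net = saving * 12 * YEARS - cost
    print(f"{name}: net saving over {YEARS} years = ${net:,.0f}")
```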

Have the experts weigh in. 
Service professionals can be an asset in choosing the best fan technology for a data center: they can calculate the ROI of both options and recommend the best fan technologies for specific equipment.

Service professionals trained in optimizing precision cooling system performance can also ensure factory-grade installations, complete set point adjustment to meet room requirements, and properly maintain equipment, helping businesses achieve maximum cooling unit efficiency today and in the future.

Whether you ultimately decide to go with VSDs or EC fans, you’ll be rewarded with a greener data center, more efficient cooling, and significant energy savings that translate into a better bottom line.




Topics: data center energy, PUE, Battery, Efficiency, Thermal Management, DCIM, Uptime, the green grid, AHRI, availability, education, KVM, Data Center efficiency, preventative maintenance
