Finding the right architecture for power protection in hospitals

by Emerson Network Power on 3/30/16 8:42 AM


If you’ve read the post about distributed and centralized bypass architectures, you’re probably evaluating the right architecture for a new data center, or perhaps re-designing the one you currently use. The decision is not easy, and it will often impact the operation and performance of the power protection system in your data center and of the connected loads. Unfortunately, in technology there is rarely a simple “yes or no” or “black or white” answer, and this holds true for power distribution as well. There is a grey area in which the right decision is strongly influenced by the specific context and depends on many parameters. Luckily, there are ways to find the best solution as a trade-off among the parameters involved.

If you’re considering the use of an Uninterruptible Power Supply (UPS), it means you are worried about the possibility of utility power failures and the downtime problems that follow. Given this, the selection of the appropriate configuration or architecture for power distribution is one of the first topics of discussion, and the choice among centralized, parallel, distributed, redundant, hot-standby and other available configurations becomes an important part of it. While there are numerous architectures to choose from, there are also several internal variables that will require your attention. Fortunately, a few elementary decisions will make the selection easier. Even if not all parameters can be satisfied, it’s important to at least begin the conversation and explore trade-offs and other considerations. Without trying to be exhaustive (which would require a dedicated white paper), you should consider at least the following:

a) Cost: more complex architectures will increase both your initial investment and your running costs, not only at the initial design stage but throughout the entire life of your power system, especially with regard to efficiency. In other words, complex architectures will increase your TCO (total cost of ownership).

b) Availability and reliability: how reliable should your power system be? And what about single or multiple points of failure? Would you need any type of redundancy?

c) Plans for growth: Do you expect your power demand or capacity to increase in the future? Will you re-configure your load distribution?

d) Modularity: related to the previous point, but highlighted separately because of its importance for UPS systems. Do you need a modular solution for future expansion or redundancy?

e) Bypass architecture: an important point, as explained in a separate post.

f) Monitoring: do you need to monitor the complete UPS power system, including any controlled shutdown of loads, and in combination with other systems such as thermal management?

g) Service and maintenance: once the initial investment in power protection has been made, do not forget to keep the system in optimum condition. Maintenance at regular intervals should be secured through service contracts; also check the availability of spares if multiple types of UPS are used, the capability to isolate a subset of the system, and the option of remote diagnostic and preventive monitoring services, such as Emerson Network Power’s LIFE, for maximum availability.

h) Profile of the loads: especially whether you have a few large loads or many small ones (perhaps distributed across several buildings or over a wide area such as a wind farm), the autonomy required for each load, peak power demands, and so on.

In addition, the decision is not only driven by the internal requirements of the power system; it is also linked to the type of load or application to be protected, as requirements vary depending on whether the application is industrial, education, government, banking, healthcare or data center. For example, an application where the loads are servers managing printers in a bank is by no means the same as a hospital where the power protection system may feed several surgery rooms. In the worst case, the bank printers can simply be shut down, whereas shutting down a surgery room is not an option except for scheduled maintenance: an unscheduled shutdown of the medical equipment in a surgery room would put the patient on the operating table at serious risk.

Let’s take the hospital example further and consider a particular case. As a quick, simplified exercise, we can use a scenario with several surgery rooms as a reference (for example 5 to 20 rooms, each with a 5-10 kVA UPS for individual protection), plus a small data center (say 30 kVA of power consumption) and, finally, the other critical installations in the facility (let’s assume 300 kVA for offices, laboratories, elevators, etc.).

In this scenario, the architectures that could be envisaged as a first step are the following (a quick sizing sketch follows the list):

1. Fully distributed: for simplicity’s sake, assume a hospital with 10 surgery rooms, each protected by its own 10 kVA UPS, plus a centralized UPS (>330 kVA) for the remaining loads.

2. A fully redundant solution based on centralized UPS units protecting all the loads, in a parallel redundant configuration. Each of these UPS units would be sized for 300 kVA + 30 kVA + (10 x 10 kVA) = 430 kVA.

3. An intermediate solution, referred to as “redundant hot standby”, in which the redundant UPS is sized only for the surgery rooms (10 surgery rooms x 10 kVA = 100 kVA), with a bypass line connected to the large centralized UPS (>430 kVA) that protects all the loads. The advantage of this solution is the smaller capacity required for the redundant hot-standby unit.
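To make the sizing arithmetic concrete, here is a minimal Python sketch that totals the installed UPS capacity each architecture requires, using the figures from the scenario above (the script is purely illustrative):

```python
# Sizing sketch for the three candidate architectures, using the
# scenario above: 10 surgery rooms at 10 kVA each, a 30 kVA data
# center and 300 kVA of other critical loads.
ROOMS, ROOM_KVA = 10, 10
DATA_CENTER_KVA, OTHER_LOADS_KVA = 30, 300

surgery_kva = ROOMS * ROOM_KVA                                # 100 kVA
total_load = OTHER_LOADS_KVA + DATA_CENTER_KVA + surgery_kva  # 430 kVA

# 1. Fully distributed: one small UPS per room plus a centralized
#    UPS (>330 kVA) for the remaining loads.
distributed = surgery_kva + OTHER_LOADS_KVA + DATA_CENTER_KVA

# 2. Fully redundant centralized: two parallel units, each sized
#    for the entire load (300 + 30 + 10 x 10 = 430 kVA).
centralized_redundant = 2 * total_load

# 3. Redundant hot standby: one centralized UPS for all loads plus
#    a standby unit sized only for the surgery rooms.
hot_standby = total_load + surgery_kva

for name, kva in (("fully distributed", distributed),
                  ("centralized redundant", centralized_redundant),
                  ("redundant hot standby", hot_standby)):
    print(f"{name}: {kva} kVA installed")
```

On these figures, the hot-standby option buys redundancy for the surgery rooms with 330 kVA less installed capacity than the fully redundant centralized option.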

Emerson Network Power has run several simulations based on typical scenarios such as the hospital described above, considering optimization factors a), b), e) and h). Taking the optimization parameters into account – the energy savings (power consumption and heat dissipation), the initial investment (CAPEX) and the maintenance costs (OPEX) – the solution based on the redundant hot standby turns out to be the most convenient.

Moreover, the difference between architectures 1 and 3 grows with the number of surgery rooms and with the period covered by the cost simulation (from 1 year up to 10 years).
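If you want to rerun this kind of comparison for your own case, a minimal parametric model is sketched below in Python. All cost coefficients are hypothetical placeholders rather than figures from the simulations above; the structure simply reflects the common observation that many small UPS units tend to carry a higher cost per kVA, and run less efficiently, than one large unit.

```python
# Hypothetical cost coefficients; replace with real quotes and
# measured efficiencies before drawing any conclusion.
CAPEX_PER_KVA = {"small": 400.0, "large": 200.0}     # EUR/kVA (assumed)
OPEX_PER_KVA_YEAR = {"small": 60.0, "large": 35.0}   # EUR/kVA/year (assumed)

def tco(units, years):
    """units: iterable of (size_class, kVA); returns total cost of ownership."""
    return sum(kva * (CAPEX_PER_KVA[c] + OPEX_PER_KVA_YEAR[c] * years)
               for c, kva in units)

def architecture(rooms, arch):
    base = 330                     # 300 kVA other loads + 30 kVA data center
    surgery = rooms * 10
    if arch == "distributed":      # one small UPS per room + large central unit
        return [("small", 10)] * rooms + [("large", base)]
    if arch == "hot_standby":      # large central unit for all loads + standby
        return [("large", base + surgery), ("large", surgery)]
    raise ValueError(arch)

for rooms in (5, 10, 20):
    for years in (1, 10):
        d = tco(architecture(rooms, "distributed"), years)
        h = tco(architecture(rooms, "hot_standby"), years)
        print(f"{rooms} rooms, {years} years: distributed {d:,.0f} EUR, "
              f"hot standby {h:,.0f} EUR")
```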
This points us in the right direction for selecting the best distribution architecture for this hospital application using these optimization parameters. The analysis can clearly be enriched with the other parameters discussed above, or adapted to the particular case (number of surgery rooms, autonomy for each load, power demanded by the data center room, reliability, …), which could lead to a different choice; but overall, the redundant hot standby has proven to be a good trade-off.

As noted at the beginning, there is no magic formula for the optimum selection, but we have explored several guidelines and checkpoints that should help drive you towards the best solution for your case. Of course, any additional variables and the reader’s experience are welcome and can only enrich the discussion.


Topics: Data Center, PUE, energy, UPS, Efficiency, Thermal Management, DCIM, Uptime, sustainability, energy efficiency, preventative maintenance, power system, healthcare, hospitals

Six Steps to Improve Data Center Efficiency

by Emerson Network Power on 2/3/16 9:34 AM


Imagine the CEO of a public company saying, “on average, our employees are productive 10 percent of the day.” Sounds ridiculous, doesn’t it? Yet we regularly accept a productivity level of 10 percent or less from our IT assets. Similarly, no CEO would accept employees showing up for work 10, 20, or even 50 percent of the time, yet most organizations accept this standard when it comes to server utilization.

These examples alone make it clear our industry is not making the significant gains needed to get the most out of our IT assets. So, what can we do? Here are six steps you’re likely not yet taking to improve efficiency.

1. Increase Server Utilization: Raising server utilization is a major part of enabling server power supplies to operate at maximum efficiency, but the biggest benefit comes in the build-out it could delay. If you can tap into four times more server capacity, that could delay your need for additional servers – and possibly more space – by a factor of four.

2. Sleep Deep: Placing servers into a sleep state during known extended periods of non-use, such as nights and weekends, will go a long way toward improving overall data center efficiency. Powering down your servers has the potential to cut your total data center energy use by 9 percent, so it may be worth the extra effort.

3. Migrate to Newer Servers: In a typical data center, more than half of the servers are ‘old,’ consuming approximately 65 percent of the energy while producing only 4 percent of the output. In most enterprise data centers, you can probably shut off all servers four or more years old after migrating their workloads, via virtual machines, to your newer hardware. In addition to the straight energy savings, this consolidation will free up space, power and cooling for your new applications.

4. Identify and Decommission Comatose Servers: Identifying servers that aren’t being utilized is not as simple as measuring CPU and memory usage. An energy efficiency audit from a trusted partner can help you put a program in place to take care of comatose servers and make improvements overall. An objective third party can bring a fresh perspective that goes beyond comatose servers, including an asset management plan and DCIM to prevent more comatose servers in the future.

5. Draw from Existing Resources: If you haven’t already implemented the ten vendor-neutral steps of our Energy Logic framework, that is a good place to start. The returns on several of the steps outlined in this article are quantified in Energy Logic and, with local incentives, may achieve paybacks in less than two years.

6. Measure: The old adage applies: what isn’t measured isn’t managed. Whether you use PUE, CUPS, SPECpower or a combination of them all, knowing where you stand and where you want to be is essential (a quick worked example follows this list).
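As a rough, back-of-the-envelope illustration of steps 2 and 6, the sketch below estimates nights-and-weekends sleep savings and computes PUE. Every input figure is a hypothetical placeholder; substitute your own measurements.

```python
# Hypothetical inputs for a small facility; replace with measured data.
TOTAL_FACILITY_KW = 500.0   # utility feed: IT load + cooling + power losses
IT_LOAD_KW = 280.0          # measured at the UPS output / rack PDUs

# Step 6: PUE = total facility power / IT equipment power (lower is better).
pue = TOTAL_FACILITY_KW / IT_LOAD_KW
print(f"PUE = {pue:.2f}")

# Step 2: estimated savings from sleeping idle servers on nights/weekends.
IDLE_FRACTION = 0.30       # share of servers safe to sleep (assumed)
OFF_HOURS_FRACTION = 0.50  # nights + weekends are roughly half of all hours
SLEEP_POWER_RATIO = 0.10   # sleep state draws ~10% of active power (assumed)

saved_kw = (IT_LOAD_KW * IDLE_FRACTION * OFF_HOURS_FRACTION
            * (1 - SLEEP_POWER_RATIO))
HOURS_PER_YEAR = 8760
print(f"Estimated savings: {saved_kw * HOURS_PER_YEAR:,.0f} kWh/year")
```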

Are there any other steps to improving data center efficiency you’ve seen? 


Topics: data center energy, PUE, UPS, Thermal Management, DCIM, monitoring, the green grid, energy efficiency, availability, Data Center efficiency, preventative maintenance, energy cost

Internet of Things, but what counts as a “thing”?

by Emerson Network Power on 10/26/15 11:50 AM


By: Simon Brady

The latest trend being discussed at almost every IT and technology innovation meeting around the world is the Internet of Things (or IoT, since like everything else in the tech world it has been shortened). We are rapidly moving beyond the original Internet as we know it, built by people for people: we are now connecting devices and machines, allowing them to intercommunicate in a vast network.

So what counts as a “thing”? Basically, almost any object you can think of fits this category. You can assign an IP address to nearly anything that exists, whether it’s a kitchen sink, a closet or a UPS. As long as it has a digital sensor integrated, anything can be connected to the Internet and begin to send signals and data to a server.

Today, roughly 1% of “things” are connected to the Internet, but according to Gartner, Inc. (a technology research and advisory corporation), there will be nearly 26 billion devices on the Internet of Things by 2020.

At first, it might sound a bit scary, right? Hollywood movies, and more recently even Professor Stephen Hawking, tell us that it’s dangerous if machines talk to other machines and become self-aware. So should we be frightened or excited? There is no clear answer, because this technological revolution is not yet fully understood.

Throughout history, great inventions have often been doubted and rejected at first. Remember how, in the early days of the Internet, some people predicted it would be a great failure? Not many envisioned how it would change the world and eventually become an essential part of our lives.

Billion-dollar companies like Google, Microsoft, Samsung and Cisco are investing heavily in developing IoT, and this could be proof that the Internet of Things is here to stay and that successful businesses will start building products and services around IoT functionality.

So, how does it work? For most people, interconnecting their own devices can lead to a better quality of life and fewer concerns. For example, a smart watch or health-monitoring bracelet could be connected to a coffee maker, so that when you get out of bed, hot coffee is waiting for you in the kitchen. Temperature sensors in your house can manage the heating in each room and even learn when you are home, so your boiler runs more efficiently and you save energy. In making everyday life easier, IoT will include household items such as parking sensors, washing machines or oven sensors: basically anything that has been connected and networked through a control device. Your fridge can know everything about your diet and your daily calorie intake and react accordingly, sending you updated grocery lists and recommended recipes. Samsung is already building smart fridges that keep track of items, tell you when they are out of date and, in the future, will automatically order milk when you are running low.

But this is just the micro level. Let’s think about autonomous cars, smart cities and smart manufacturing tools. Bridges that track every vehicle, monitor traffic flow and automatically open and close lanes to improve safety; cars that talk to each other on highways to keep rush-hour traffic moving and enhance the driving experience. This is more than simply connecting machines or sensors: it’s using the data from all these connected devices in a way that can significantly improve life as we know it.

The key to the IoT is that all of the connected devices can send data within a very short timeframe, which is critical in many circumstances, but that’s not all. Instead of simply storing the data, the system can analyse it immediately and trigger an action, without requiring any human intervention.
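As a toy sketch of that analyse-and-act loop (the sensor feed, setpoint and actuator below are invented for illustration):

```python
import random
import time

def read_temperature_c() -> float:
    """Stand-in for a real sensor read; returns a simulated value."""
    return 20.0 + random.uniform(-1.0, 8.0)

def trigger_cooling_boost() -> None:
    """Stand-in for a real actuator call (e.g. to a cooling controller)."""
    print("cooling boost requested")

ALERT_THRESHOLD_C = 26.0   # hypothetical setpoint for a server room

# The IoT pattern: data is analysed as it arrives and the action is
# triggered immediately, with no human in the loop.
for _ in range(10):
    if read_temperature_c() > ALERT_THRESHOLD_C:
        trigger_cooling_boost()
    time.sleep(0.1)
```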

Companies worldwide can greatly benefit from Internet of Things software applications, increasing their products’ efficiency and availability while decreasing costs and negative environmental effects.

In a data center, for example, by interconnecting all active components, including UPS systems, chillers, cooling units, PDUs, etc., a data center administrator can easily monitor and supervise their activity as a group. Control solutions like Liebert® iCOM are actually more than simple monitoring interfaces; they can coordinate all of the cooling systems and deliver the required airflow at the temperature needed, on demand. When problems arise, the alerts and notifications sent to the data center administrator are essential to restoring normal operation. But wait: shouldn’t this Internet of Things be something new? Liebert iCOM has been on the market for several years now. Let’s clear this up.

The term Internet of Things was first used in 1999 by British visionary Kevin Ashton, but the underlying process has been in development for much longer. The name can be a bit confusing, and it only recently reached the mainstream media, so people assume it is something very new. In fact, major companies have been using and developing IoT for years, changing perspectives on how things should really be done.

However, taking full advantage of this innovation in all aspects of life is still in its early phase. The greatest challenges IoT currently faces are high costs and security threats. For the time being, IoT solutions can be really expensive, so we are dealing with an ongoing process of lowering costs to allow more and more people and businesses to adopt them.

Security breaches are also a reason for concern, since IoT is still very vulnerable; hackers have shown overwhelming interest in this area, so developers need to be extremely cautious when it comes to security protocols.

All things considered, the Internet of Things is a huge opportunity to create a better life for everybody, to build a strong foundation in the technology field and to develop products and solutions that could actually change the world.


Topics: CUE, PUE, DVL, Thermal Management, monitoring, iCom, KVM, IoT

Highly reliable data centers using managed PDUs

by Emerson Network Power on 10/8/15 9:09 AM

Ronny Mees | Emerson Network Power


Today’s most innovative data centers are generally equipped with managed PDUs since their switching capabilities improve reliability. However, simply installing managed PDUs is not enough – an “unmanaged” managed PDU will actually reduce reliability.

So how do managed PDUs work? These advanced units offer a series of configurations which – if properly implemented – improve the availability of important services. The main features are Software Over Temperature Protection (SWOTP) and Software Over Current Protection (SWOCP), which are well described in the blog post “Considerations for a Highly Available Intelligent Rack PDU”.
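As a rough sketch of the idea behind those two features (the thresholds and the outlet-switching call below are hypothetical stand-ins, not actual PDU firmware logic):

```python
from dataclasses import dataclass

@dataclass
class OutletReading:
    outlet: int
    temperature_c: float
    current_a: float

# Hypothetical thresholds; real managed PDUs let you configure these
# per outlet or per branch.
MAX_TEMP_C = 45.0
MAX_CURRENT_A = 10.0

def switch_off(outlet: int) -> None:
    """Stand-in for the PDU's outlet-switching interface."""
    print(f"outlet {outlet}: switched off")

def protect(reading: OutletReading) -> None:
    # SWOTP: shed the load before heat damages it or its neighbours.
    if reading.temperature_c > MAX_TEMP_C:
        switch_off(reading.outlet)
    # SWOCP: shed the load before an overcurrent trips the whole branch.
    elif reading.current_a > MAX_CURRENT_A:
        switch_off(reading.outlet)

protect(OutletReading(outlet=3, temperature_c=48.2, current_a=4.1))
```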

It is also well known that managed PDUs can support commissioning and repair workflows in data centers. The combination of well-designed workflows and managed PDUs pushes operational reliability to a higher level.

In high-performance data centers using clusters, another important point comes into play: clusters are complex hierarchical structures of server farms, able to run high-performance virtual machines and fully automated workflows.

Such clusters are managed by centralized software working together with the server hardware.

Over the last few years, cluster solutions have been developed against strong and challenging availability goals, in order to avoid any situation that makes physical servers struggle within the cluster. However, there would still be a risk of applications and processes generating faults and errors and bringing down the complete cluster, unless there were an automated control process. The good news is: there is.


The process that handles these worst-case scenarios is called fencing. Fencing automatically removes any non-working nodes or services from the cluster in order to maintain the availability of the others.

Fencing has different levels, which should be managed wisely. In a mild scenario, fencing will stop misbehaving services or reorganize storage access (Fibre Channel switch fencing) to let the cluster proceed with its tasks.

A harder option is power fencing, also called STONITH (Shoot The Other Node In The Head), which allows the software to initiate an immediate shutdown of a node (internal power fencing) and/or a hard switch-off (external power fencing).

The internal power fencing method uses IPMI and other service processor protocols, while external power fencing uses any supported network protocol to switch off a PDU outlet. It is recommended to use secured protocols only, such as SNMPv3. So managed PDUs such as the MPH2 or MPX do not only support power balancing, power consumption monitoring and data center operations workflows: they also allow the fencing software to react quickly for higher cluster reliability. It is no secret, then, that cluster solution manufacturers – e.g. Red Hat with RHEL 6.7 and newer – openly support such managed rack PDUs.
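To make the external power fencing flow concrete, here is a minimal sketch. The `switch_pdu_outlet` function is a hypothetical stand-in for whatever secured call (for example an SNMPv3 set) your PDU exposes; it is not a real MPH2/MPX API, and a production cluster would use a proper fence agent rather than a ping check.

```python
import subprocess

def node_is_healthy(node: str) -> bool:
    """Crude liveness probe; real cluster stacks use heartbeats and quorum."""
    return subprocess.call(["ping", "-c", "1", "-W", "2", node],
                           stdout=subprocess.DEVNULL) == 0

def switch_pdu_outlet(pdu: str, outlet: int, state: str) -> None:
    """Hypothetical stand-in for a secured (e.g. SNMPv3) set on the PDU."""
    print(f"{pdu}: outlet {outlet} -> {state}")

# Map each cluster node to the PDU outlet feeding it.
OUTLET_MAP = {"node1": ("pdu-a", 4), "node2": ("pdu-a", 5)}

def fence(node: str) -> None:
    """External power fencing (STONITH): hard power-off, then restore."""
    pdu, outlet = OUTLET_MAP[node]
    switch_pdu_outlet(pdu, outlet, "off")   # guarantee the node is dead
    switch_pdu_outlet(pdu, outlet, "on")    # let it rejoin cleanly

for node in OUTLET_MAP:
    if not node_is_healthy(node):
        fence(node)
```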


Topics: Data Center, PUE, robust data center, Containment, efficient data center, DVL, electrical distribution, energy, Battery, Thermal Management, energy efficiency, 7x24, PDU

Three Best Practices to Avoid Cyberattacks

by Emerson Network Power on 8/19/15 8:49 AM


From major retail cyberattacks to Hollywood studio hackers, cybersecurity is now, more than ever, on the mind of every CIO in the world — and rightfully so. According to our recent article in Data Center Journal, the most common cause of a data breach is malicious or criminal attacks, which could end up costing not only nights of sleep for CIOs, but also millions of dollars; in some cases upwards of $5.4 million.

While these attacks can be devastating, there are some best practices to help avoid cyber-disaster:

1. Don’t give hackers a back door: In order to prevent data breaches, consider isolating your network to avoid allowing easy access to your information. Since access can be logged through network isolation, unwanted activity can be monitored and flagged. To isolate your network and limit threats without compromising necessary access or performance, consider utilizing isolated out-of-band management networks. These networks provide full, real-time access without giving hackers back-door entry.

2. Enforce the three A’s: Authentication, authorization and auditing are all critical to securing your network. Ensure your cybersecurity by using fine-grained user authentication through a centralized and controlled process, while still allowing easy access for administrators (a toy sketch follows this list).

3. Ensure trust and best practices with outside vendors: Servicing data center equipment typically requires giving people outside your organization atypical access to sensitive information about your data center. Even new technologies now require software updates, sharing IP addresses and network ports to accommodate those updates. While you may feel confident in your organization’s security practices, it’s important that you trust the security measures practiced by those outside parties and contractors as well.
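A toy sketch of the three A’s working together (the user store, roles and hashing below are invented for illustration; production systems centralize this in a directory service and use salted key-derivation functions rather than a bare hash):

```python
import hashlib
import logging

logging.basicConfig(level=logging.INFO)   # auditing: every decision is logged

# Hypothetical user store; production systems keep this in a centralized
# directory and store salted, key-derived hashes, not bare SHA-256.
USERS = {"alice": hashlib.sha256(b"s3cret").hexdigest()}
ROLES = {"alice": {"read_metrics", "reboot_pdu"}}

def authenticate(user: str, password: str) -> bool:
    ok = USERS.get(user) == hashlib.sha256(password.encode()).hexdigest()
    logging.info("auth %s: %s", user, "ok" if ok else "FAILED")
    return ok

def authorize(user: str, action: str) -> bool:
    ok = action in ROLES.get(user, set())
    logging.info("authz %s/%s: %s", user, action, "ok" if ok else "DENIED")
    return ok

if authenticate("alice", "s3cret") and authorize("alice", "reboot_pdu"):
    print("action permitted")
```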

Security is a complex, never-ending process, but the right partners can help cut through that complexity and ensure your network—and your business—do not become the next victim.

What other best practices do you use to ensure your network is secure?


Topics: Data Center, PUE, UPS, DCIM, monitoring, Trellis, the green grid, cybersecurity
