Emerson Network Power

Recent Posts

Six Steps to Improve Data Center Efficiency

by Emerson Network Power on 2/3/16 9:34 AM


Imagine the CEO of a public company saying, “on average, our employees are productive 10 percent of the day.” Sounds ridiculous, doesn’t it? Yet we regularly accept productivity levels of 10 percent or less from our IT assets. Similarly, no CEO would accept employees showing up for work 10, 20, or even 50 percent of the time, yet most organizations accept this standard when it comes to server utilization.

These examples alone make it clear our industry is not making the significant gains needed to get the most out of our IT assets. So, what can we do? Here are six steps you’re likely not yet taking to improve efficiency.

1. Increase Server Utilization: Raising server utilization is a major part of enabling server power supplies to operate at maximum efficiency, but the biggest benefit comes in the build-out it could delay. If you can tap into four times more server capacity, that could delay your need for additional servers – and possibly more space – by a factor of four.

2. Sleep Deep: Placing servers into a sleep state during known extended periods of non-use, such as nights and weekends, will go a long way toward improving overall data center efficiency. Powering down your servers has the potential to cut your total data center energy use by 9 percent, so it may be worth the extra effort.
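
To make this concrete, here is a minimal sketch of a scheduler that suspends known-idle hosts during nights and weekends. The host names, the quiet-hours window and the SSH-based systemctl suspend call are illustrative assumptions only; production environments would typically rely on vendor power-management or Wake-on-LAN tooling instead.

```python
# Minimal sketch: suspend a list of known-idle hosts outside business hours.
# The host list, quiet-hours window and SSH-based suspend are illustrative
# assumptions; production setups usually use vendor power-management tooling.
import datetime
import subprocess

IDLE_HOSTS = ["app-test-01", "app-test-02"]    # hypothetical non-production servers

def in_quiet_hours(now=None):
    now = now or datetime.datetime.now()
    weekend = now.weekday() >= 5               # Saturday or Sunday
    night = now.hour >= 22 or now.hour < 6     # 10 PM to 6 AM
    return weekend or night

def suspend(host):
    # Assumes key-based SSH access and passwordless sudo on the target host.
    subprocess.run(["ssh", host, "sudo", "systemctl", "suspend"], check=True)

if __name__ == "__main__":
    if in_quiet_hours():
        for host in IDLE_HOSTS:
            suspend(host)
```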

3. Migrate to Newer Servers: In a typical data center, more than half of the servers are ‘old,’ consuming approximately 65 percent of the energy and producing 4 percent of the output. In most enterprise data centers, you can probably shut off every server four or more years old once you have migrated its workloads, via virtual machines, to your newer hardware. In addition to the straight energy savings, this consolidation will free up space, power and cooling for your new applications.

4. Identify and Decommission Comatose Servers: Identifying servers that aren’t being utilized is not as simple as measuring CPU and memory usage. An energy efficiency audit from a trusted partner can help you put a program in place to take care of comatose servers and make broader improvements. An objective third party can bring a fresh perspective that goes beyond comatose servers, including an asset management plan and DCIM to prevent new comatose servers in the future.
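
As a simple illustration, a first screen might combine several low-activity signals rather than CPU alone. The metric names and thresholds below are assumptions made for the example, not an audit methodology:

```python
# Sketch: flag "comatose" candidates by combining several low-activity signals,
# not CPU alone. Metric names and thresholds are illustrative assumptions.

def is_comatose_candidate(metrics):
    """metrics: per-server averages over a 30-day observation window (assumed)."""
    return (
        metrics["cpu_util"] < 0.05              # near-idle CPU
        and metrics["network_mbps"] < 0.1       # almost no traffic in or out
        and metrics["disk_iops"] < 1            # essentially no storage activity
        and metrics["interactive_logins"] == 0  # nobody has touched it
    )

fleet = {
    "web-07": {"cpu_util": 0.02, "network_mbps": 0.01, "disk_iops": 0, "interactive_logins": 0},
    "db-01":  {"cpu_util": 0.40, "network_mbps": 120,  "disk_iops": 900, "interactive_logins": 3},
}

candidates = [name for name, m in fleet.items() if is_comatose_candidate(m)]
print("Review before decommissioning:", candidates)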

5. Draw from Existing Resources: If you haven’t already implemented the ten vendor-neutral steps of our Energy Logic framework, that is a good place to start. The returns on several of the steps outlined in this article are quantified in Energy Logic and, with local incentives, may achieve paybacks in less than two years.

6. Measure: The old adage applies: what isn’t measured isn’t managed. Whether you use PUE, CUPS, SPECPower or a combination of them all, knowing where you stand and where you want to be is essential.
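
As a reminder of the simplest of these metrics, PUE is total facility energy divided by the energy delivered to the IT equipment. The meter readings in the sketch below are made-up numbers used only to show the arithmetic:

```python
# PUE is total facility energy divided by the energy delivered to IT equipment.
# The meter readings below are made-up numbers used only to show the arithmetic.

def pue(total_facility_kwh, it_equipment_kwh):
    if it_equipment_kwh <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kwh / it_equipment_kwh

print(pue(total_facility_kwh=1800, it_equipment_kwh=1000))  # -> 1.8
```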

Are there any other steps to improving data center efficiency you’ve seen? 


Topics: data center energy, PUE, UPS, Thermal Management, DCIM, monitoring, the green grid, energy efficiency, availability, Data Center efficiency, preventative maintenance, energy cost

Selecting The Right Economizer

by Emerson Network Power on 12/9/15 2:19 PM

Written By: David Klusas, Emerson Network Power


You can say what you want about mink farms, but one thing is certain: They stink!

That can be a problem if you’re operating a data center near one and trying to use airside economizers to bring in fresh outside air for free cooling.

There are many efficiency benefits to utilizing outside air for economization, but not every situation is right for bringing outside air into a data center. Each type of economizer has its own advantages and challenges, depending on data center goals, site requirements, geography and climate.

I recently visited four data centers, from Canada to Utah, including the one next to the mink farm, and found multiple occasions where airside economization was not the ideal solution, despite its energy savings.

One data center in Canada was near a heavily forested area, and the company was concerned about smoke from forest fires entering the facility. A data center in Washington was next to an apple orchard, which creates a lot of dust during harvest. Another is using 100% outside air for economization, but has an 8MW chiller plant for backup, in case they ever need to close the outside air dampers and recirculate the indoor air. That’s a HUGE initial investment for only a backup system.

Data centers have made cutting energy consumption a priority to save money and meet government regulations. Cooling accounts for almost 40 percent of data center energy usage, so it’s a main focal point for driving energy savings. More recently, water conservation has become a priority in the selection of cooling systems and economization strategies. At the same time, relative cost and the payback periods remain key factors in selecting these large, expensive systems.

All economizer systems use outside air, water or both to reduce or eliminate mechanical cooling in data center cooling units. These economizer systems generate significant energy savings of up to 50 percent compared to legacy systems. The first decision most data center managers make in selecting an economization strategy is the type of data center environment they want to operate, which naturally leads to a decision on whether or not to bring outside air into the data center. As a result, there are two primary economizer designs typically deployed in data centers: direct and indirect.

While direct and indirect economizers operate in different ways, the ultimate goal of both systems is to provide free cooling to a room or facility, thus reducing the overall energy consumption of the facility. However, fundamental differences between the methods in which direct and indirect systems economize greatly impact the temperature and humidity environment that can be efficiently maintained within the data center.

Direct economization brings outside air into the data center using a system of ductwork, dampers and sensors. These systems usually have lower capital costs than other forms of economization and work well in moderate climates. In the right climate, direct outside air economizers can be a very efficient and effective economization strategy, but they do introduce the risk of contaminants and wide humidity swings in the data center. For maximum annual savings, a wide acceptable supply air temperature and humidity window needs to be implemented in the data center. For highly critical data centers, the risk of outdoor contaminants and wide temperature and humidity swings is sometimes too significant for comfort.

In contrast, indirect economizers can offer significant energy savings while limiting the prior concerns. Indirect economizers do not bring outside air into the data center, but instead use an indirect method to transfer heat from the data center to outside the building. There are primarily three types of indirect economizer technologies:
• Air-to-air heat exchangers, or heat wheels, in a wet or dry state
• Pumped refrigerant economizers, such as the Liebert® DSE™ system economizer
• Cooling towers for chilled water systems

Sensible air-to-air plate frame heat exchangers transfer heat between two air streams while maintaining complete separation, thus eliminating the opportunity for contamination and the transfer of humidity into the data center space. These units can be operated in a dry state, or can be sprayed with water to increase their effectiveness and hours of economization. Heat wheels offer similar qualities to air-to-air plate frame heat exchangers, but can have higher air leakage rates and require additional maintenance to maintain their performance.

The Liebert DSE system is a direct-expansion (DX) system that utilizes an integrated pumped refrigerant economizer to maximize annual energy savings and provide superior availability without the need for separate economization coils. When outdoor ambient temperatures are low enough, the integrated refrigerant pump circulates the refrigerant in lieu of the compressor to maintain the desired supply air temperature. The refrigerant pump uses a fraction of the energy used by the compressor. As outdoor ambient temperatures rise, the Liebert DSE system automatically brings compressors back online to maintain the desired supply air temperature. Its integrated Liebert iCOM™ thermal controls automatically optimize the entire system to provide more free cooling throughout the year.
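
As a rough illustration of how such a system chooses its operating mode from outdoor conditions, here is a simplified sketch. The temperature thresholds are invented for the example; they are not actual Liebert iCOM setpoints or control logic.

```python
# Simplified sketch of selecting a cooling mode from outdoor temperature.
# The thresholds are invented for illustration and are not actual
# Liebert iCOM setpoints or control logic.

def cooling_mode(outdoor_temp_c, supply_setpoint_c=18.0):
    full_economizer_margin = 10.0   # pump alone can hold the supply setpoint
    partial_margin = 2.0            # pump assists while compressors trim the rest
    if outdoor_temp_c <= supply_setpoint_c - full_economizer_margin:
        return "pumped refrigerant only"
    if outdoor_temp_c <= supply_setpoint_c - partial_margin:
        return "mixed: refrigerant pump plus partial compressor"
    return "full mechanical (compressor) cooling"

for temp in (-5, 12, 30):
    print(f"{temp:>3} C outdoors -> {cooling_mode(temp)}")
```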

Because of its efficiency advantages, the Liebert DSE system was recently approved for use in California data centers under Title 24. Its economizer was shown to reduce time dependent valuation (TDV) by 8-10 percent and, since it uses no water, save around 4 million gallons of water annually in a 1MW data center, compared to water economizers.

Initial installation costs for any of these economizer options can be affected by how well the technology under consideration fits into the overall design of the existing facility. The amount of indoor, outdoor or rooftop space required for situating the units will also affect the selection decision. Chilled water systems with cooling towers tend to be the most costly, because of their high first cost, use of water and the higher maintenance burden associated with their complexity.

Emerson Network Power offers options for all of these economizer technologies. There is no single economizer technology that fits every situation. Each has its own strengths based on location and application, and each has its challenges. Fortunately, there’s an economization option for virtually every location – even next to a mink farm.


Topics: CUE, Emerson Network Power, Data Center, data center energy, efficient data center, DVL, UPS, Thermal Management, DCIM, energy efficiency, preventative maintenance, 7x24, Economizer

Internet of Things, but what counts as a “thing”?

by Emerson Network Power on 10/26/15 11:50 AM


By: Simon Brady

The latest buzz at almost every IT and technology innovation meeting around the world is the Internet of Things (or IoT, since like everything else in the tech world it has been shortened). We are rapidly moving beyond the original Internet as we know it, built by people for people, and connecting devices and machines so that they can intercommunicate in a vast network.

The Internet of Things, but what counts as a “thing”? Basically, almost any object you can think of fits in this category. You can assign an IP address to nearly anything, whether it’s a kitchen sink, a closet or a UPS. As long as it has a digital sensor integrated, it can be connected to the Internet and begin sending signals and data to a server.

Today, roughly 1% of “things” are connected to the Internet, but according to Gartner, Inc. (a technology research and advisory corporation), there will be nearly 26 billion devices on the Internet of Things by 2020.

At first, it might sound a bit scary, right? Hollywood movies, and more recently even Professor Stephen Hawking, tell us that it’s dangerous if machines talk to other machines and become self-aware. So should we be frightened or overly excited? There is no correct answer, because this new revolution in technology is not yet fully understood.

Throughout history, great inventions have first been doubted and rejected. Remember how, in the early days of the Internet, some people considered it destined to be a great failure? Not many envisioned how it would change the world and eventually become an essential part of our lives.

Huge billion-dollar companies like Google, Microsoft, Samsung and Cisco are investing heavily in developing IoT, and this is a strong sign that the Internet of Things is here to stay and that successful businesses will start building products and services around the functionality IoT requires.

So, how does it work? For ordinary people, interconnecting their own devices can lead to a better quality of life and fewer concerns. For example, a smart watch or health-monitor bracelet could be connected to a coffee maker, so that when you get out of bed, hot coffee is waiting for you in the kitchen. Temperature sensors in your house will manage heating in each room and even learn when you are home, so your boiler runs more efficiently and you save energy. In making everyday life easier, IoT will include household items like parking sensors, washing machines or oven sensors – basically anything that has been connected and networked through a control device. Your fridge can know everything about your diet and your daily calorie intake and react accordingly, sending you updated grocery lists and recommended meal recipes. Samsung is already building smart fridges that help you keep track of items, tell you when they are past their use-by date and, in the future, will automatically order milk when you are running low.

But this is the micro level we’re talking about. Let’s think about autonomous cars, smart cities and smart manufacturing tools. Bridges that can track every vehicle, monitor traffic flow and automatically open and close lanes to improve traffic safety; cars that can talk to each other on highways to keep rush-hour traffic moving and enhance the driving experience. This is more than simply connecting machines or sensors; it’s using the data from all these connected devices in a way that can significantly improve life as we know it.

The key to the IoT is that all of the connected devices can send data within a very short timeframe, which is critical in many circumstances, but that’s not all. Instead of simply storing the data, an IoT system can immediately analyse it and trigger an action, without requiring any human intervention.
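
A tiny sketch of that “analyse and act” loop might look like the following; the device names, metrics, thresholds and actions are all hypothetical examples:

```python
# Tiny sketch of the "analyse and act" loop: a reading arrives, a rule is
# evaluated, and an action fires with no human in the loop.
# Device names, metrics, thresholds and actions are hypothetical.

RULES = [
    # (metric, threshold, action to trigger)
    ("rack_inlet_temp_c", 27.0, "raise cooling airflow"),
    ("ups_load_pct",      90.0, "shed non-critical load"),
]

def on_reading(device, metric, value):
    for rule_metric, threshold, action in RULES:
        if metric == rule_metric and value > threshold:
            print(f"[{device}] {metric}={value} exceeds {threshold}: {action}")

on_reading("crac-03", "rack_inlet_temp_c", 29.5)   # triggers an action
on_reading("ups-01", "ups_load_pct", 75.0)         # below threshold, no action
```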

Companies worldwide can greatly benefit from Internet of Things software applications, increasing their products’ efficiency and availability whilst decreasing costs and negative environmental effects.

In a data center, for example, by interconnecting all active components, including UPS systems, chillers, cooling units, PDUs and so on, a data center administrator can easily monitor and supervise their activity as a group. Control solutions like Liebert® iCOM are more than simple monitoring interfaces; they coordinate all of the cooling systems and deliver the required airflow at the needed temperature, on demand. When problems arise, the alerts and notifications sent to the data center administrator are essential to restoring normal operation. But wait; shouldn’t this Internet of Things be something new? Liebert iCOM has been on the market for several years now. Let’s clear this up.

The term Internet of Things was coined in 1999 by British visionary Kevin Ashton, but the underlying technology has been in development for a long time. The name can be a bit confusing, and because it only recently entered the mainstream media, people assume it’s something very new. In fact, major companies have already been using and developing IoT for years, changing perspectives on how things should really be done.

However, taking full advantage of this great innovation in all aspects of life is still in its early phase. The greatest challenges IoT faces at the moment are high costs and security threats. For the time being, IoT solutions can be really expensive, so we’re dealing with an ongoing process of lowering costs to allow more and more people and businesses to adopt them.

Security breaches are also a reason for concern, since IoT is still quite vulnerable at this point; hackers have shown overwhelming interest in this area, so developers need to be extremely cautious when it comes to security protocols.

All things considered, we can conclude that the Internet of Things is our huge opportunity to create a better life for everybody, to build a strong foundation in the technology field and develop products and solutions that could actually change the world.


Topics: CUE, PUE, DVL, Thermal Management, monitoring, iCom, KVM, IoT

Highly reliable data centers using managed PDUs

by Emerson Network Power on 10/8/15 9:09 AM

Ronny Mees | Emerson Network Power


Today’s most innovative data centers are generally equipped with managed PDUs since their switching capabilities improve reliability. However, simply installing managed PDUs is not enough – an “unmanaged” managed PDU will actually reduce reliability.

So how do managed PDUs work? These advanced units offer a series of configurations which – if properly implemented – improve the availability of important services. The main features are Software Over Temperature Protection (SWOTP) and Software Over Current Protection (SWOCP), which are well described in the blog post “Considerations for a Highly Available Intelligent Rack PDU”.
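
To illustrate the idea behind SWOTP and SWOCP, the sketch below sheds the lowest-priority outlet when a temperature or current threshold is exceeded. The thresholds, outlet priorities and switch_off() helper are assumptions made for the example; they are not the actual PDU firmware behavior.

```python
# Sketch of the idea behind SWOTP/SWOCP: when temperature or current crosses a
# threshold, shed the lowest-priority outlet first. The thresholds, outlet
# priorities and switch_off() helper are assumptions, not vendor firmware logic.

OUTLETS = [
    {"id": 1, "priority": "critical", "on": True},
    {"id": 2, "priority": "normal",   "on": True},
    {"id": 3, "priority": "low",      "on": True},
]

PRIORITY_ORDER = ["low", "normal", "critical"]   # shed lowest priority first

def switch_off(outlet):
    outlet["on"] = False
    print(f"Outlet {outlet['id']} ({outlet['priority']}) switched off")

def protect(temperature_c, current_a, temp_limit_c=45.0, current_limit_a=16.0):
    if temperature_c > temp_limit_c or current_a > current_limit_a:
        for outlet in sorted(OUTLETS, key=lambda o: PRIORITY_ORDER.index(o["priority"])):
            if outlet["on"] and outlet["priority"] != "critical":
                switch_off(outlet)   # shed one non-critical outlet per pass
                return

protect(temperature_c=47.2, current_a=12.0)   # over-temperature: outlet 3 is shed
```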

It is also well known that managed PDUs can support commissioning and repair workflows in data centers. The combination of well-designed workflows and managed PDUs pushes operational reliability to a higher level.

In high-performance data centers that use clusters, another important point comes into play: clusters are complex hierarchical structures of server farms, able to run high-performance virtual machines and fully automated workflows.

Such clusters are managed by centralized software working together with the server hardware.

Over the last couple of years, cluster solutions have been developed with strong and challenging availability goals, in order to avoid any situation that makes physical servers struggle within the cluster. However, there would still be a risk of applications and processes generating faults and errors and bringing down the complete cluster, unless there were an automated control process – and the good news is: there is.


The process that controls those worst-case scenarios is called fencing. Fencing automatically removes failed nodes or services from the cluster in order to maintain the availability of the others.

Fencing has different levels, which should be managed wisely. In a mild scenario, fencing will stop the misbehaving services or reorganize storage access (Fibre Channel switch fencing) to let the cluster proceed with its tasks.

Another option, power fencing, is also known as “STONITH” (Shoot The Other Node In The Head); it allows the software to initiate an immediate shutdown of a node (internal power fencing) and/or a hard switch-off (external power fencing).

The internal power fencing method uses IPMI and other service processor protocols, while external power fencing uses any supported network protocol to switch off a PDU outlet. It is recommended to use secured protocols only, such as SNMPv3. So managed PDUs such as the MPH2 or MPX do more than balance power, monitor power consumption and support data center operations workflows – they also allow the fencing software to react quickly for higher cluster reliability. It is no secret that cluster solution vendors – e.g. Red Hat with RHEL 6.7 and newer – openly support such managed rack PDUs.
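
Conceptually, an external power fencing agent does little more than switch off the failed node’s outlet and verify the result before the cluster recovers its resources. In the sketch below, pdu_set_outlet() and pdu_get_outlet() are hypothetical placeholders for SNMPv3 set/get operations against the PDU’s outlet-control objects; real clusters use the fence agents shipped with the cluster software.

```python
# Conceptual sketch of external power fencing (STONITH): switch off the failed
# node's PDU outlet and confirm the result, so the cluster can safely recover
# its resources. pdu_set_outlet()/pdu_get_outlet() are hypothetical placeholders
# for SNMPv3 set/get operations against the PDU's outlet-control objects; real
# clusters use the fence agents shipped with the cluster software.
import time

def pdu_set_outlet(pdu_host, outlet, state):
    """Placeholder for an SNMPv3 SET on the PDU's outlet-control object."""
    raise NotImplementedError

def pdu_get_outlet(pdu_host, outlet):
    """Placeholder for an SNMPv3 GET of the current outlet state."""
    raise NotImplementedError

def fence_node(pdu_host, outlet, timeout_s=10):
    pdu_set_outlet(pdu_host, outlet, "off")
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        if pdu_get_outlet(pdu_host, outlet) == "off":
            return True    # node is verifiably off; safe to fail resources over
        time.sleep(1)
    return False           # fencing failed; cluster must not assume the node is down
```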


Topics: Data Center, PUE, robust data center, Containment, efficient data center, DVL, electrical distribution, energy, Battery, Thermal Management, energy efficiency, 7x24, PDU

Beyond the Finish Line: What to expect from the Federal Civilian Cybersecurity Strategy

by Emerson Network Power on 9/23/15 9:14 AM

Rick Holloway September 23, 2015


The Federal Government’s 30-day Cybersecurity Sprint ended earlier this summer, but the real work continues. Government agencies and equipment manufacturers are awaiting the results of the ongoing cybersecurity review and the release of the Federal Civilian Cybersecurity Strategy – expected soon – but the preliminary principles of the strategy are intriguing on their own.

One thing that’s clear – and not at all surprising – is that the government believes the approach to the increasing cybersecurity challenge must be both behavioral and equipment-focused. There is no magic-bullet piece of hardware or software that will provide adequate protection against all of today’s security threats, but a combination of threat awareness, adherence to best practices, and deploying and properly using today’s hardened technologies can reduce risks.

There are eight key principles that will form the foundation of the Federal Civilian Cybersecurity Strategy. They are:

1. Protecting Data: Better protect data at rest and in transit.

2. Improving Situational Awareness: Improve indication and warning.

3. Increasing Cybersecurity Proficiency: Ensure a robust capacity to recruit and retain cybersecurity personnel.

4. Increasing Awareness: Improve overall risk awareness by all users.

5. Standardizing and Automating Processes: Decrease time needed to manage configurations and patch vulnerabilities.

6. Controlling, Containing, and Recovering from Incidents: Contain malware proliferation, privilege escalation, and lateral movement. Quickly identify and resolve events and incidents.

7. Strengthening Systems Lifecycle Security: Increase inherent security of platforms by buying more secure systems and retiring legacy systems in a timely manner.

8. Reducing Attack Surfaces: Decrease complexity and number of things defenders need to protect.

I doubt anyone would disagree with those points. But what can we infer if we take a closer look?

It’s not called out specifically, but a consistent theme is access awareness and control. We live in a time when everything is connected—and needs to be, to ensure our data, our networks, our lives move at the speed the world demands. But every connection is an access point, and every access point is a potential vulnerability. Understanding where those access points are and securing them through both technology and best practices is a significant first step in securing a network. This can be as simple as proper credential and password controls.

The point about replacing less secure legacy systems with more secure, modern technologies is important. While there are limits to the effectiveness of software updates and patches, equipment replacement can be costly. Organizations that value security will put plans in place to upgrade equipment over time—and the sooner they start, the better.

One of the more interesting and encouraging points in the preliminary list is the bullet about recruiting and training cybersecurity personnel. This reflects a necessary awareness of the nature of these threats. They aren’t static; hackers are evolving and devising new attacks and tactics every day. It’s critical that our IT personnel maintain the same vigilance and dedication to security and threat education.

Of course, these are simply preliminary indications of the government’s thinking. We’ll know more when the Federal CIO releases the final Federal Civilian Cybersecurity Strategy, and we’ll take a closer look at that strategy and what it means at that time.


Topics: Data Center, Thermal Management, DCIM, monitoring, cybersecurity, security
