Data Center Performance Benchmarks: Cost and Causes of Data Center Outages - Webcast

by Marissa Donatone on 1/21/16 1:20 PM

Join Emerson Network Power for a webcast featuring Dr. Larry Ponemon and Peter Panfil, who will review the data collected across the three Ponemon Cost of Data Center Outages Reports published since 2010, including the 2016 Report. They will also examine the factors contributing to the steady increase in downtime costs, explore trends in the causes of downtime, and discuss prevention strategies.

By tuning in to this webcast, you’ll be better able to:

  • Make sound financial decisions about your data center infrastructure
  • Identify and eliminate vulnerabilities that lead directly to outages

Speakers:

Larry Ponemon
Chairman and Founder,
Ponemon Institute

Peter A. Panfil
Vice President of Global Power, 
Emerson Network Power

Register Today

Time: 11:00 AM EST
Date: Wednesday, February 3, 2016

Topics: Data Center, data center energy, DVL, Uptime

New Infographic! SmartRow DCR Intelligent Integrated Infrastructure

by Marissa Donatone on 12/29/15 8:31 AM

[Infographic: SmartRow DCR Intelligent Integrated Infrastructure]

Learn More Here

Topics: Emerson Network Power, Data Center, data center design, DVL, monitoring, smart solutions, IT, smartrow

Selecting The Right Economizer

by Emerson Network Power on 12/9/15 2:19 PM

Written By: David Klusas, Emerson Network Power

You can say what you want about mink farms, but one thing is certain: They stink!

That can be a problem if you’re operating a data center near one and trying to use airside economizers to bring in fresh outside air for free cooling.

There are many efficiency benefits to utilizing outside air for economization, but not every situation is right for bringing outside air into a data center. Each type of economizer has its own advantages and challenges, depending on data center goals, site requirements, geography and climate.

I recently visited four data centers, from Canada to Utah, including the one next to the mink farm, and found multiple occasions where airside economization was not the ideal solution, despite its energy savings.

One data center in Canada was near a heavily forested area, and the company was concerned about smoke from forest fires entering the facility. A data center in Washington was next to an apple orchard, which creates a lot of dust during harvest. Another is using 100% outside air for economization, but has an 8MW chiller plant for backup, in case they ever need to close the outside air dampers and recirculate the indoor air. That’s a HUGE initial investment for only a backup system.

Data centers have made cutting energy consumption a priority to save money and meet government regulations. Cooling accounts for almost 40 percent of data center energy usage, so it’s a main focal point for driving energy savings. More recently, water conservation has become a priority in the selection of cooling systems and economization strategies. At the same time, relative cost and the payback periods remain key factors in selecting these large, expensive systems.

All economizer systems use outside air, water, or both to reduce or eliminate mechanical cooling in data center cooling units. These economizer systems can generate significant energy savings of up to 50 percent compared to legacy systems. The first decision most data center managers make in selecting an economization strategy is the type of data center environment they want to operate, which naturally leads to a decision on whether or not to bring outside air into the data center. As a result, there are two primary economizer designs typically deployed in data centers: direct and indirect.
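
Putting those two figures together gives a rough sense of the stakes. The back-of-the-envelope sketch below assumes a 1MW facility load and $0.10/kWh electricity purely for illustration; the 40 percent cooling share and the up-to-50-percent reduction come from the paragraphs above.

```python
# Back-of-the-envelope estimate of facility-level savings from economization.
# The facility size and electricity price are illustrative assumptions.
facility_load_kw = 1000.0   # assumed 1 MW total facility load
cooling_share = 0.40        # cooling ~40% of data center energy (per the article)
economizer_cut = 0.50       # up to 50% cooling-energy reduction vs. legacy systems
hours_per_year = 8760
price_per_kwh = 0.10        # assumed electricity price, USD

cooling_kw = facility_load_kw * cooling_share
saved_kw = cooling_kw * economizer_cut
annual_kwh_saved = saved_kw * hours_per_year
annual_savings_usd = annual_kwh_saved * price_per_kwh

print(f"Cooling load: {cooling_kw:.0f} kW")
print(f"Best-case economizer savings: {saved_kw:.0f} kW "
      f"({saved_kw / facility_load_kw:.0%} of total facility energy)")
print(f"~{annual_kwh_saved:,.0f} kWh and ~${annual_savings_usd:,.0f} per year")
```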

While direct and indirect economizers operate in different ways, the ultimate goal of both systems is to provide free cooling to a room or facility, thus reducing the overall energy consumption of the facility. However, fundamental differences between the methods in which direct and indirect systems economize greatly impact the temperature and humidity environment that can be efficiently maintained within the data center.

Direct economization brings outside air into the data center using a system of ductwork, dampers, and sensors. These systems usually have lower capital costs than other forms of economization and work well in moderate climates. In the right climate, direct outside-air economizers can be a very efficient and effective economization strategy, but they do introduce the risk of contaminants and wide humidity swings into the data center. For maximum annual savings, a wide acceptable supply air temperature and humidity window needs to be implemented in the data center. For highly critical data centers, the risk of outdoor contaminants and wide temperature and humidity swings is sometimes too significant for comfort.
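
As a minimal sketch of the decision a direct economizer controller has to make, the check below opens the outside-air dampers only when outdoor conditions sit inside an acceptable supply air window; the temperature and humidity limits are hypothetical placeholders, not recommended setpoints.

```python
# Minimal sketch of a direct (airside) economizer decision: use outside air
# only when it falls inside the acceptable supply air window, otherwise fall
# back to mechanical cooling. All limits below are hypothetical.
def use_outside_air(outdoor_temp_c: float, outdoor_rh_pct: float) -> bool:
    SUPPLY_TEMP_MIN_C = 18.0   # assumed lower edge of the supply air window
    SUPPLY_TEMP_MAX_C = 27.0   # assumed upper edge
    RH_MIN_PCT = 20.0          # assumed acceptable humidity band
    RH_MAX_PCT = 80.0
    temp_ok = SUPPLY_TEMP_MIN_C <= outdoor_temp_c <= SUPPLY_TEMP_MAX_C
    humidity_ok = RH_MIN_PCT <= outdoor_rh_pct <= RH_MAX_PCT
    return temp_ok and humidity_ok

print(use_outside_air(21.0, 45.0))   # True  -> open dampers, free cooling
print(use_outside_air(33.0, 70.0))   # False -> dampers closed, mechanical cooling
```

The wider that window is allowed to be, the more hours of the year the free-cooling branch wins – which is exactly the trade-off between savings and environmental risk described above.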

In contrast, indirect economizers can offer significant energy savings while limiting the prior concerns. Indirect economizers do not bring outside air into the data center, but instead use an indirect method to transfer heat from the data center to outside the building. There are primarily three types of indirect economizer technologies:
• Air-to-air heat exchangers, or heat wheels, in a wet or dry state
• Pumped refrigerant economizers, such as the Liebert® DSE™ system economizer
• Cooling towers for chilled water systems

Sensible air-to-air plate frame heat exchangers transfer heat between two air streams, but maintain a complete separation, thus eliminating the opportunity for contamination and transfer of humidity into the data center space. These units can be operated in a dry state, or can be sprayed with water to increase their effectiveness and hours of economization. Heat wheels offer similar qualities to air-to-air plate frame heat exchangers, but can have higher air leakage rates and require additional maintenance to maintain their performance.
The Liebert DSE system is a direct-expansion (DX) system that utilizes an integrated pumped refrigerant economizer to maximize annual energy savings and provide superior availability without the need for separate economization coils. When outdoor ambient temperatures are low enough, the integrated refrigerant pump circulates the refrigerant in lieu of the compressor to maintain the desired supply air temperature. The refrigerant pump uses a fraction of the energy used by the compressor. As outdoor ambient temperatures rise, the Liebert DSE system automatically brings compressors back on to maintain the desired supply air temperature. Its integrated Liebert iCOM™ thermal controls automatically optimize the entire system to provide more free cooling throughout the year.
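
A simplified sketch of that mode selection is shown below. It is not the actual Liebert iCOM control logic; the setpoint and changeover threshold are purely illustrative.

```python
# Illustrative mode selection for a pumped refrigerant economizer: pump only
# when outdoor ambient is low enough, compressors as it rises. Thresholds are
# made up; real systems use the manufacturer's controls, not this logic.
def economizer_mode(outdoor_temp_c: float, supply_setpoint_c: float = 24.0) -> str:
    full_pump_delta_c = 15.0   # assumed approach at which the pump alone holds setpoint
    if outdoor_temp_c <= supply_setpoint_c - full_pump_delta_c:
        return "pumped refrigerant only (free cooling)"
    if outdoor_temp_c < supply_setpoint_c:
        return "mixed: pump plus partial compressor operation"
    return "compressor only (full mechanical cooling)"

for t in (2.0, 15.0, 30.0):
    print(f"{t:>5.1f} C outdoor -> {economizer_mode(t)}")
```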

Because of its efficiency advantages, the Liebert DSE system was recently approved for use in California data centers under Title 24. Its economizer was shown to reduce time dependent valuation (TDV) by 8-10 percent and, since it uses no water, save around 4 million gallons of water annually in a 1MW data center, compared to water economizers.

Initial installation costs for any of these economizer options can be affected by how well the technology under consideration fits into the overall design of the existing facility. The amount of indoor, outdoor or rooftop space required for situating the units will affect the selection decision. Chilled water systems with cooling towers tend to be the most costly, because of their high first cost, their use of water and the higher maintenance burden that comes with their complexity.

Emerson Network Power offers options for all of these economizer technologies. There is no single economizer technology that fits every situation. Each has its own strengths based on location and application, and each has its challenges. Fortunately, there's an economization option for virtually every location – even next to a mink farm.

Topics: CUE, Emerson Network Power, Data Center, data center energy, efficient data center, DVL, UPS, Thermal Management, DCIM, energy efficiency, preventative maintenance, 7x24, Economizer

Internet of Things, but what counts as a “thing”?

by Emerson Network Power on 10/26/15 11:50 AM

By: Simon Brady

The latest buzz at almost every IT and technology innovation meeting around the world is the Internet of Things (or IoT – like everything else in the tech world, it has been shortened to an acronym). We are rapidly moving beyond the original Internet as we know it, built by people for people, and connecting devices and machines so that they can intercommunicate in a vast network.

The Internet of Things – but what counts as a “thing”? Basically, almost any object you can think of fits in this category. You can assign an IP address to nearly anything that exists, whether it’s a kitchen sink, a closet or a UPS. Everything can be connected to the Internet and can begin to send signals and data to a server, as long as it has a digital sensor integrated.

Today, roughly 1% of “things” are connected to the Internet, but according to Gartner, Inc. (a technology research and advisory corporation), there will be nearly 26 billion devices on the Internet of Things by 2020.

At first, it might sound a bit scary, right? Hollywood movies, and more recently even Professor Stephen Hawking, tell us that it’s dangerous if machines talk to other machines and become self-aware. So should we be frightened or excited? There is no correct answer, because this revolution in the field of technology is not yet fully understood.

Throughout history, great inventions were often doubted and rejected at first. Remember how, in the early days of the Internet, some people considered it would be a great failure? Not many envisioned how it would change the world and eventually become an essential part of our lives.

Huge billion-dollar companies like Google, Microsoft, Samsung and Cisco are investing a lot of money in developing IoT, and this could be the proof that the Internet of Things is here to stay, and that successful businesses will start building products and services around it.

So, how does it work? For ordinary people, interconnecting their own devices can lead to a better quality of life and fewer worries. For example, a smart watch or health-monitor bracelet could be connected to a coffee maker, so that when you get out of bed, hot coffee is waiting for you in the kitchen. Temperature sensors in your house will manage the heating in each room and even learn when you are home, so your boiler runs more efficiently and you save energy. In making everyday life easier, IoT will include household items like parking sensors, washing machines or oven sensors – basically anything that has been connected and networked through a control device. Your fridge can know everything about your diet and your daily calorie intake and react accordingly, sending you updated grocery lists and recommended meal recipes. Samsung is already building smart fridges that help you keep track of items, tell you when they are out of date and, in the future, will automatically order milk when you are running low.

But this is the micro level we’re talking about. Let’s think about autonomous cars, smart cities and smart manufacturing tools. Bridges that can track every vehicle, monitor traffic flow and automatically open and close lanes to improve traffic safety; cars that can talk to each other on highways to help keep rush-hour traffic moving and enhance the driving experience: this is more than simply connecting machines or sensors, it’s using the data from all these connected devices in a way that can significantly improve life as we know it.

The key to the IoT is that all of the connected devices can send data within a very short timeframe, which is critical in many circumstances, but that’s not all. Instead of simply storing the data, an IoT system can also analyse it immediately and trigger an action, without requiring any human intervention.
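
As a toy sketch of that sense-analyse-act loop – the device names and the alarm threshold below are made up for illustration:

```python
# Toy sense-analyse-act loop: a reading arrives, is analysed immediately,
# and an action is triggered with no human in the loop. Names and the
# threshold are purely illustrative.
from dataclasses import dataclass

@dataclass
class Reading:
    device: str
    temperature_c: float

def act_on(reading: Reading) -> str:
    if reading.temperature_c > 27.0:   # hypothetical alarm threshold
        return f"ALERT: {reading.device} at {reading.temperature_c} C - increase cooling"
    return f"{reading.device} OK at {reading.temperature_c} C"

for r in (Reading("rack-07-inlet", 22.5), Reading("rack-12-inlet", 29.1)):
    print(act_on(r))
```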

Companies worldwide can greatly benefit from Internet of Things software applications, increasing their products’ efficiency and availability whilst decreasing costs and negative environmental effects.

In a data center, for example, by interconnecting all active components, including UPS systems, chillers, cooling units, PDUs, etc., a data center administrator can easily monitor and supervise their activity as a group. Control solutions like Liebert® iCOM are actually more than simple monitoring interfaces; they can coordinate all of the cooling systems and deliver the required airflow at the temperature needed, on demand. When problems arise, the alerts and notifications sent to the data center administrator are essential to restoring normal operation. But wait: shouldn’t this Internet of Things be something new? Liebert iCOM has been on the market for several years now. Let’s clear this up.

The term Internet of Things was first coined in 1999 by British visionary Kevin Ashton, but the underlying technology has been in development for a long time. You see, the name can be a bit confusing, and it only recently crawled into the mainstream media, so people think it’s something very new. In fact, major companies have been using and developing IoT for years now, changing perspectives on how things should really be done.

However, taking full advantage of this innovation in all aspects of life is still in its early phase. The greatest challenges that IoT faces at this moment are high costs and security threats. For the time being, IoT solutions can be really expensive, so we’re dealing with an ongoing process of lowering costs to allow more and more people and businesses to adopt them.

Security breaches are also a reason for concern, since IoT is still very vulnerable at this point; many hackers have shown overwhelming interest in this direction, so developers need to be extremely cautious when it comes to security protocols.

All things considered, we can conclude that the Internet of Things is our huge opportunity to create a better life for everybody, to build a strong foundation in the technology field and develop products and solutions that could actually change the world.

For More Emerson Network Power Blogs, CLICK HERE

Topics: CUE, PUE, DVL, Thermal Management, monitoring, iCom, KVM, IoT

Highly reliable data centers using managed PDUs

by Emerson Network Power on 10/8/15 9:09 AM

Ronny Mees | Emerson Network Power

Today’s most innovative data centers are generally equipped with managed PDUs since their switching capabilities improve reliability. However, simply installing managed PDUs is not enough – an “unmanaged” managed PDU will actually reduce reliability.

So how do managed PDUs work? These advanced units offer a series of configurations which – if properly implemented – improve the availability of important services. The main features are Software Over Temperature Protection (SWOTP) and Software Over Current Protection (SWOCP), which are well described in the blog post “Considerations for a Highly Available Intelligent Rack PDU”.
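
As a rough sketch of the idea behind SWOCP only (not how the feature is actually implemented), the snippet below watches a branch current and picks pre-designated, non-critical outlets to shed before the physical breaker would trip; the limit, threshold and shed order are hypothetical.

```python
# Rough sketch of the Software Over Current Protection idea: shed designated
# non-critical outlets before the branch breaker trips. All values are
# hypothetical and for illustration only.
BRANCH_LIMIT_A = 16.0
SWOCP_THRESHOLD_A = 0.9 * BRANCH_LIMIT_A                     # act before the breaker does
SHED_ORDER = ["outlet-12 (lab gear)", "outlet-11 (spare)"]   # least critical first

def outlets_to_shed(branch_current_a: float, outlet_loads_a: dict) -> list:
    """Return the outlets to switch off to get back under the SWOCP threshold."""
    shed = []
    for outlet in SHED_ORDER:
        if branch_current_a <= SWOCP_THRESHOLD_A:
            break
        branch_current_a -= outlet_loads_a.get(outlet, 0.0)
        shed.append(outlet)
    return shed

loads = {"outlet-12 (lab gear)": 2.0, "outlet-11 (spare)": 1.0}
print(outlets_to_shed(15.5, loads))   # over the 14.4 A threshold -> shed lab gear first
```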

It is also well known that managed PDUs can support commissioning or repair workflows in data centers. The combination of well-designed workflows and managed PDUs pushes operational reliability to a higher level.

In high-performance data centers using clusters, another important point comes into play: clusters are complex hierarchical structures of server farms that are able to run high-performance virtual machines and fully automated workflows.

As described here or here, such clusters are managed by centralized software together with server hardware.

Over the last couple of years, cluster solutions have been developed against strong and challenging availability goals, in order to avoid any situation that makes physical servers struggle within the cluster. However, there would still be a risk of applications and processes generating faults and errors and bringing down the complete cluster, unless there were an automated control process – the good news is: there is.

The process that controls those worst-case scenarios is called fencing. Fencing automatically removes failed nodes or services from the cluster in order to maintain the availability of the others.

Fencing has different levels, which should be managed wisely. In a mild scenario, fencing will stop the misbehaving services or reorganize storage access (Fibre Channel switch fencing) to let the cluster proceed with its tasks.

Another option, power fencing – also called “STONITH” (Shoot The Other Node In The Head) – allows the software to initiate an immediate shutdown of a node (internal power fencing) and/or a hard switch-off (external power fencing).

The internal power fencing method uses IPMI and other service processor protocols, while external power fencing uses any supported network protocol to switch off a PDU outlet. It is recommended to use secured protocols only, such as SNMPv3. So managed PDUs such as the MPH2 or MPX do not only support balanced power distribution, power consumption monitoring and data center operations workflows – they also allow the fencing software to react quickly for higher cluster reliability. So it’s no secret that cluster solution manufacturers – e.g. Red Hat with RHEL 6.7 and newer – openly support such managed rack PDUs.
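
As a very rough illustration of that external power fencing step (not an actual fence agent), an SNMPv3 SET that switches a single outlet off could look like the sketch below, written against the pysnmp 4.x synchronous API. The host, credentials and OID are placeholders; the real outlet-control OID has to come from the PDU vendor’s MIB.

```python
# Illustration only: switch one PDU outlet off via an SNMPv3 SET using
# pysnmp's 4.x-style synchronous API. Host, credentials and OID are
# placeholders - look up the real outlet-control OID in the vendor MIB.
from pysnmp.hlapi import (
    setCmd, SnmpEngine, UsmUserData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, Integer,
    usmHMACSHAAuthProtocol, usmAesCfb128Protocol,
)

OUTLET_CONTROL_OID = "1.3.6.1.4.1.0.0.0.1"   # placeholder OID, not a real MIB entry
OUTLET_OFF_VALUE = 0                         # placeholder "off" value

error_indication, error_status, error_index, var_binds = next(
    setCmd(
        SnmpEngine(),
        UsmUserData("fence-user", "auth-pass", "priv-pass",   # SNMPv3, as recommended
                    authProtocol=usmHMACSHAAuthProtocol,
                    privProtocol=usmAesCfb128Protocol),
        UdpTransportTarget(("pdu-a.example.net", 161)),
        ContextData(),
        ObjectType(ObjectIdentity(OUTLET_CONTROL_OID), Integer(OUTLET_OFF_VALUE)),
    )
)
print(error_indication or error_status or "outlet switched off")
```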

For More Emerson Network Power Blogs, Click Here

Topics: Data Center, PUE, robust data center, Containment, efficient data center, DVL, electrical distribution, energy, Battery, Thermal Management, energy efficiency, 7x24, PDU
