Choosing Between VSDs and EC Fans
Making the right investment when upgrading fan technology.

by Emerson Network Power on 7/15/15 3:23 PM


Fans that move air and pressurize the data center’s raised floor are significant contributors to cooling system energy use. After mechanical cooling, fans are the next largest energy consumer on computer room air conditioning (CRAC) units. One way many data center managers reduce energy usage and control costs is by investing in variable speed fan technology, which can cut fan energy consumption by as much as 76 percent.

With the different options on the market, it may not be clear which technology is best. Today, variable speed drives (VSDs), also referred to as variable frequency drives or VFDs, and electronically commutated (EC) fans are two of the most effective fan improvement technologies available. The advantages of both options are outlined below to help data center managers determine which fan technology best supports their energy efficiency goals.

How do different fan technologies work? 
In general, variable speed fan technologies save energy by enabling cooling systems to adjust fan speed to meet the changing demand, which allows them to operate more efficiently. While cooling units are typically sized for peak demand, peak demand conditions are rare in most applications. VSDs and EC fans more effectively match airflow output with load requirements, adjusting speeds based on changing needs. This prevents overcooling and generates significant energy savings.

With VSDs, drives are added to the fixed-speed motors that propel the centrifugal fans traditionally used in precision cooling units. The drives allow fan speed to be adjusted to operating conditions, reducing fan speed and power draw as the load decreases. Because of the fan affinity laws, under which fan power varies with roughly the cube of fan speed, energy consumption changes dramatically as fan speed is raised or lowered: a 20 percent reduction in fan speed yields nearly 50 percent savings in fan power consumption.
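As a quick sanity check of that cube-law relationship, here is a minimal Python sketch; the 10 kW full-speed fan power is a hypothetical figure used only for illustration.

```python
# Fan affinity law: fan power varies with roughly the cube of fan speed.

def fan_power_at_speed(full_speed_power_kw: float, speed_fraction: float) -> float:
    """Estimate fan power at a reduced speed using the cube law."""
    return full_speed_power_kw * speed_fraction ** 3

full_speed_power_kw = 10.0  # hypothetical full-speed fan power, for illustration only
reduced_kw = fan_power_at_speed(full_speed_power_kw, 0.80)

print(f"Power at 80% speed: {reduced_kw:.1f} kW")                             # ~5.1 kW
print(f"Savings vs. full speed: {1 - reduced_kw / full_speed_power_kw:.0%}")  # ~49%
```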

EC fans are direct-drive fans that are integrated into the cooling unit by replacing the centrifugal fan and motor assemblies. They are inherently more efficient than traditional centrifugal fans because of their design, which uses a brushless EC motor in a backward-curved motorized impeller. EC fans achieve speed control by varying the DC voltage delivered to the fan. Independent testing of EC fan energy consumption versus VSDs found that EC fans mounted inside the cooling unit delivered an 18 percent savings. With new units, EC fans can be located under the raised floor, further increasing the savings.

How do VSDs and EC fans compare?

Energy Savings
One of the main differences between VSDs and EC fans is that VSDs save energy only when the fans can operate below full speed; they do not reduce energy consumption when airflow demands require the fans to run at or near peak load. EC fans, by contrast, typically require less energy even when moving the same quantity of air, so they continue to save energy when the cooling unit is at full load. EC fans also distribute air more evenly under the raised floor, improving airflow balance. Another benefit of direct-drive EC fans is the elimination of the belt losses seen with centrifugal blowers. Ultimately, EC fans are the more efficient fan technology.
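To put rough numbers on that comparison, the sketch below combines the cube-law behavior of a VSD-controlled centrifugal fan with the approximately 18 percent advantage quoted above for EC fans at equal airflow; the 10 kW baseline fan power is hypothetical, and the assumption that an EC fan tracks the same cube law at reduced airflow is an approximation.

```python
# Rough comparison of fan power for a VSD-equipped centrifugal fan and an
# EC fan as airflow demand changes, using the relationships described above.
# The 10 kW baseline (full-speed centrifugal fan power) is hypothetical.

def vsd_fan_power(baseline_kw: float, airflow_fraction: float) -> float:
    """VSD savings come only from slowing the fan: cube-law power at reduced speed."""
    return baseline_kw * airflow_fraction ** 3

def ec_fan_power(baseline_kw: float, airflow_fraction: float) -> float:
    """Assume an EC fan draws ~18% less than the centrifugal baseline at equal airflow."""
    return 0.82 * baseline_kw * airflow_fraction ** 3

baseline_kw = 10.0  # hypothetical
for airflow in (1.0, 0.8, 0.6):
    print(f"{airflow:.0%} airflow: VSD ~ {vsd_fan_power(baseline_kw, airflow):.1f} kW, "
          f"EC ~ {ec_fan_power(baseline_kw, airflow):.1f} kW")
```

At full airflow the VSD option shows no savings over a fixed-speed fan, while the EC fan still draws less power, which is the distinction the paragraph above describes.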

Cooling Unit Type
VSDs are particularly well-suited for larger systems with ducted upflow cooling units that require higher static pressures, while EC fans are better suited for downflow units.

Maintenance 
In terms of maintenance, EC fans offer the advantage: they have no fan belts to wear and replace, and their integrated direct-drive motors virtually eliminate belt dust.

Installation 
Both VSDs and EC fans can be retrofitted onto existing cooling units or specified in new units. When retrofitting existing units, factory-grade installation is a must.

Payback
In many cases, the choice between VSDs and EC fans comes down to payback. If rapid payback is the priority, VSDs are likely the better choice; they can pay for themselves in fewer than 10 months when fans are operated at 75 percent speed.

However, EC fans deliver greater long-term energy savings and a better return on investment (ROI). While EC fans can cost up to 50 percent more than VSDs, they generate greater energy savings and reduce overall maintenance costs, ultimately resulting in the lowest total cost of ownership.
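A back-of-envelope way to frame that payback decision is sketched below. All of the cost and savings figures are hypothetical placeholders; only the relationships come from the text (EC fans cost roughly 50 percent more up front but save more energy year over year).

```python
# Hypothetical payback comparison. None of these dollar figures come from the
# article; substitute real quotes and measured kWh savings for a real evaluation.

def payback_months(upfront_cost: float, annual_savings: float) -> float:
    """Simple payback period in months, ignoring maintenance and discounting."""
    return 12 * upfront_cost / annual_savings

options = {
    "VSD":    {"upfront": 4000.0, "annual_savings": 5200.0},  # hypothetical
    "EC fan": {"upfront": 6000.0, "annual_savings": 6500.0},  # ~50% higher upfront cost
}

for name, o in options.items():
    months = payback_months(o["upfront"], o["annual_savings"])
    five_year_net = 5 * o["annual_savings"] - o["upfront"]
    print(f"{name}: payback ~ {months:.0f} months, 5-year net savings ~ ${five_year_net:,.0f}")
```

Substituting real quotes and measured energy data will move the crossover point, but the pattern described here, faster payback for VSDs and larger long-term savings for EC fans, holds whenever the EC option's higher upfront cost is offset by its lower running cost.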

Have the experts weigh in. 
Service professionals can be an asset in choosing the best fan technology for a data center: they can calculate the ROI of both options and recommend the best fan technologies for specific equipment.

Service professionals trained in optimizing precision cooling system performance can also ensure factory-grade installations, adjust set points to meet room requirements, and properly maintain equipment, helping businesses achieve maximum cooling unit efficiency now and in the future.

Whether you ultimately decide to go with VSDs or EC fans, you’ll be rewarded with a greener data center, more efficient cooling, and significant energy savings that translate into a better bottom line.





Powering the Critical IT Edge: Webcast

by Marissa Donatone on 4/6/15 4:38 PM


Powerful Innovations:
Powering the Critical IT Edge
Live Webcast: Thursday, April 16, 11 a.m. - 12 p.m. ET


The dynamics surrounding distributed computing systems that reside at the edge of the network are creating challenges. Housed in smaller spaces outside a central data center, these application hubs continue to grow in both quantity and criticality. As a result, IT professionals face a new set of demands for providing proper power and support.

Join Kyle Keeper, Director of AC Power Product Management, as he explores the four primary elements that must be addressed to effectively power critical, distributed network IT. You will save time and money and reduce headaches, all while improving reliability. Join us to discuss:
  • Which edge-of-network systems should be evaluated, and where.
  • How to recognize network IT edge problems before they escalate.
  • Best practices for managing and deploying power.
  • New technologies and services to save time, money and headaches.

Speaker:

Kyle Keeper
Director, AC Power Product Management

Who should attend?
Managers responsible for:

  • Network/server closets
  • Telecom rooms
  • Back-office systems
  • Small data rooms
  • Branch offices with IT




ECO-Friendly Choice in Single Phase UPS

by Miguel Rascon on 3/26/15 2:46 PM


“Environmental protection,” “high efficiency,” and “energy savings” are topics of great concern for large corporations and individuals alike, since these aspects are part of everyday life and can generate significant cost savings in both cases.

This is also true for UPS systems, from large units protecting vast data centers to small units of just a few kilowatts that safeguard a network or a single cabinet in more modest settings.

The ECO mode, also known as energy saving mode or high efficiency mode depending on the UPS manufacturer, is currently a hot topic within the industry. The debate focuses primarily on online UPS systems and on large UPS systems. Nevertheless, energy savings and efficiency are extremely important in small and micro power UPS systems (typically 500 VA to 10 kVA) as well, and for the same reasons as in large power systems: lower energy costs and a smaller environmental footprint.

Choosing a UPS properly means considering the criticality of the application that needs to be protected, as well as evaluating the energy the UPS uses to protect the load against disturbances and interruptions.

Here I would like to highlight the “inherent ECO mode” found in line interactive UPS products (VI, or Voltage Independent, according to EN 62040-3).

In this type of line interactive UPS (VI), power flows from the input through several protection devices (overcurrent, overvoltage, etc.) and mainly through an automatic voltage regulation (AVR) transformer.

The AVR provides output voltage regulation, minimizing voltage variations in the AC supply and ensuring a regulated voltage within the load’s tolerances.

Because of the high efficiency of the AVR (typically around 98 or 99 percent) and of the protection devices through which energy flows, and because this topology uses fewer electronic components, a high-performance line interactive UPS can deliver efficiency above 96 percent at full load. A good example is the Liebert PSI UPS, which uses line interactive technology and an AVR and can reach the efficiency levels mentioned above. As noted, this operating mode is inherent to the line interactive topology, and its high efficiency is maintained across a wide range of load conditions and AC mains variation. Whereas ECO mode in an online UPS operates over a narrower input voltage range, the line interactive topology can remain in high-efficiency mode through most input voltage changes while still providing some output regulation.

When comparing a line interactive UPS with a double conversion online UPS, there are many aspects to consider, such as stepped versus pure sine wave inverter output, transfer time, size, and so on. One of the main differences, however, is precisely that line interactive UPS systems feature “inherent high efficiency” because of the VI technology and the use of the AVR mentioned earlier.

The associated energy savings are significant even for single phase UPS systems, meaning those ranging from 0 to 10 kVA, because:

1. Saving just a few watts in continuous UPS operation, 365 days a year, adds up to a significant total yearly saving.

2. In applications such as campuses or large corporations where many of these small UPS devices operate at the same time, the few watts saved daily by each device further increase the total daily and yearly savings and reduce overall expenditure for the campus or corporation.

To provide an example, assume a 2.5 kW load protected by a UPS. Such a load might correspond to a cabinet with several servers for enterprise applications, or to a wiring closet distribution panel. The UPS can work in line interactive mode (assuming 97 percent efficiency) or in double conversion mode at roughly 90 percent efficiency. The difference in power losses, and thus the energy saved, will be around 200 W. Assuming an electricity cost of 0.138 €/kWh, a quick calculation puts the yearly savings at around 272 €. Multiplied over five years, the total saving reaches nearly 1,500 €.
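For readers who want to reproduce the arithmetic, here is a minimal sketch using the load, efficiencies, and tariff quoted above; the exact annual figure will shift with the assumed loss difference, rounding, and operating profile.

```python
# Annual savings from running a 2.5 kW load on a ~97%-efficient line interactive
# UPS instead of a ~90%-efficient double conversion UPS, at 0.138 EUR/kWh.

load_kw = 2.5
eff_line_interactive = 0.97
eff_double_conversion = 0.90
tariff_eur_per_kwh = 0.138
hours_per_year = 24 * 365

# Input power = output power / efficiency; the difference between the two
# topologies is the extra loss that double conversion dissipates as heat.
loss_delta_kw = load_kw / eff_double_conversion - load_kw / eff_line_interactive
annual_saving_eur = loss_delta_kw * hours_per_year * tariff_eur_per_kwh

print(f"Loss difference:  {loss_delta_kw * 1000:.0f} W")   # ~200 W
print(f"Annual saving:    ~{annual_saving_eur:.0f} EUR")
print(f"Five-year saving: ~{5 * annual_saving_eur:.0f} EUR")
```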

So it should be clear by now that, in addition to the traditional ECO mode (typically used in double conversion online UPS systems in general, and in large UPS systems in particular), there is an inherent ECO mode in single phase UPS systems, specifically line interactive ones.

This delivers significant savings to customers, because line interactive technology is inherently highly efficient and the UPS systems that use it typically run every day, all year long, so over time even a small daily saving adds up to a considerable total.

And what about ECO modes in online UPS systems in this small power range? Is there any difference or advantage? That is an interesting story, too.




How infrastructure monitoring can help increase data center efficiency and availability

by Emerson Network Power on 2/20/15 8:00 AM

Written By: Diego Chisena | Emerson Network Power
During the first decade of the 21st century, the data center emerged as a significant corporate asset, playing a vital role in business management and customer service. Throughout this period, the data center underwent an evolution as computing and data storage capacities increased significantly.

Data centers have traditionally been designed with extra headroom to accommodate growth, but during the last decade demand escalated so quickly that added IT capacity consumed the available headroom and outpaced supply in terms of floor space, power, and cooling capacity. This created conflicts as facility personnel struggled to meet IT’s demand for server capacity.

 

These problems were compounded by two trends that emerged in the second half of the decade.

1. The first trend was the increased focus on data center energy consumption. With both the density and quantity of servers rising, data center energy consumption became a significant factor in IT cost management and, in some companies, in responding to concerns about global warming. Early efforts to reduce data center energy consumption focused on lowering cooling costs, since cooling accounts for approximately 35 percent of data center energy consumption.

2. The second trend was the adoption of virtualization technologies. In a recent survey of data center managers, virtualization adoption stood at 81 percent. This has created a dynamically changing application environment layered on an essentially static physical environment, increasing data center complexity and introducing new challenges for physical infrastructure management.

In most organizations, data center managers lacked the tools to effectively address these challenges. The network management systems essential to IT personnel in monitoring and managing IT equipment did not address the critical issues of energy consumption, available rack capacity, or ambient air temperatures that are essential to proactive data center management. Further, the building management systems used by facility personnel to monitor power and cooling in the data center failed to provide the alarm management capabilities required for critical systems and to account for the interdependencies between systems. Evolving from a reactive to a proactive approach to infrastructure monitoring requires a new type of management system that provides visibility into the data center’s physical infrastructure within both the IT and facility domains and across these two domains.

If you want to learn more, read the white paper.




Speed, Flexibility and the Data Center

by Emerson Network Power on 11/25/14 11:45 AM

Kollengode Anand | November 20, 2014 | Emerson Network Power


It’s no longer enough to be dependable if you’re running a data center. With greater demands from customers, both external and internal, data center administrators must be both dependable and fast. Consider these facts from our “State of the Data Center” report last year:

  • The equivalent of one of every nine persons on the planet uses Facebook.
  • We generated 1.6 trillion gigabytes of data last year. That’s enough data to give every single person on Earth eight 32-gigabyte iPhones, and it’s an increase of 60 percent in just two years.
  • Every hour, enough information is created to fill more than 46 million DVDs.
  • Global e-Commerce spending topped $1.25 trillion in 2013.

It’s always been important to respond to your customers, of course, but now there are more of them, demanding more information, more quickly: the report says that if an online video buffers for more than five seconds, 25 percent of viewers drop off, and if it buffers for more than 10 seconds, half of them are gone.

Oh, and did we mention that the average cost of a data center outage now runs more than $900,000…an increase of one-third in just two years?

Which is why it’s critical for administrators to be able to configure their data centers flexibly and to react rapidly when requirements change or when there’s a problem. We’ve found that a unified approach to the entire infrastructure is the best way to handle these situations. Whether it’s heating and cooling, power, servers, software, or more, the ability to administer data center operations in real time has become more imperative than ever.

It’s one of the key elements in the development of the dynamic data center, and in being able to easily manage changes and maintain an optimal environment.

We’ll be at the Gartner Data Center, Infrastructure & Operations Management Conference in Las Vegas in a couple of weeks, at booth #211, showing off the equipment and software that we’ve developed to help you make your business as dynamic as your data center.  We’ll also be speaking about where our clients believe the data center is headed more than ten years from now.   Their input has proven critical in the past, and their thinking is helping us develop the solutions that will solve their challenges both today and tomorrow.



