Exploring the business case for implementing green-tech corporate strategies

Green Technology Journal






Green IT: How Infrastructure as a Service Can Lead to 80% Energy Savings

Using virtualization technology in a public or private cloud scenario

The data center industry, especially cloud providers, has recently been in the spotlight for massive power consumption. If the industry were a country, it would rank among the top 15 users of energy, somewhere between Spain and Italy. The 30+ gigawatts pushed through data centers across the globe has a far-reaching impact, including on the enterprise bottom line. Electricity isn't cheap and, to top it off, analyst firm IDC has estimated the value of unutilized server capacity at $140 billion, equivalent to more than 20 million servers and 80 million tons of CO2 per year.

Yet using virtualization technology in a public or private cloud scenario to run many instances of operating systems simultaneously on the same piece of hardware can push server utilization up to 80%. Pair this with Infrastructure as a Service or cloud computing models, where virtualization is the norm, and the potential for cost savings is tremendous.

In a typical data center, servers draw 80% of the IT power load and 40% of total power; cooling and other infrastructure account for most of the remainder. Most modern equipment uses x86-architecture processors, and conventional deployments dedicate each server to a single application at any given time. That leaves entire servers sitting idle about 90% of the time. Virtualization software installs a hypervisor layer on top of the bare-metal machine, allowing it to run multiple operating systems and therefore many apps at once, using more of the available computing power. The physical server count can then be reduced through consolidation.

Computing requirements change rapidly as business needs adjust, and one of the strongest arguments for both cloud computing and virtualization is the true flexibility to provision across Infrastructure as a Service. With a virtual machine, resources can easily be scaled up on existing hardware, increasing the utilization of server resources from about 10% to 80%. As the number of physical servers decreases, so do the cooling and power needs, reducing cost significantly.

Considering the ongoing pressure on IT budgets, the ROI of virtual machines is significant.

For the sake of comparison, we'll assume an "average" virtual machine size of 1 virtual CPU, 3 GB of RAM and 60 GB of disk space. We will place these VMs on an example server with an 1,100-watt power supply and 256 GB of RAM, paired with a storage array that pulls 1,400 watts of power. That means we could fit about 85 of our average-sized VMs in our imaginary data center, with plenty of storage room left over. Assuming this server is pushed to its maximum power load of 1,100 W, each average VM would pull about 29.4 watts including storage use (barely 13 W from the server alone, before storage).
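The packing and per-VM wattage above can be checked with a few lines of arithmetic. This is a minimal sketch using the article's example figures; the variable names are illustrative, not from any real capacity-planning tool:

```python
# Sizing sketch: how many "average" VMs fit on the example host,
# and what each one pulls at the host's maximum power load.

RAM_GB = 256        # example server RAM
VM_RAM_GB = 3       # RAM per "average" VM
SERVER_W = 1100     # server power supply, watts
STORAGE_W = 1400    # shared storage array, watts

vms = RAM_GB // VM_RAM_GB                       # 85 VMs, limited by RAM
per_vm_server_w = SERVER_W / vms                # ~12.9 W per VM before storage
per_vm_total_w = (SERVER_W + STORAGE_W) / vms   # ~29.4 W per VM with storage

print(vms, round(per_vm_server_w, 1), round(per_vm_total_w, 1))
```

RAM is the binding constraint here; with 60 GB of disk per VM, the storage array is assumed to have capacity to spare.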

If each of those virtual machines were on its own server, and each server was drawing 350 W at an average load (since each would be using less of its available resources than a virtualized machine), they would use 29,750 watts in total, plus the 1,400 W for storage.

We need to add cooling on top of this, too. A decent rule of thumb is to add about 50% of the total wattage for cooling power, so our virtualized setup would need an additional 1,250 W, and the non-virtualized servers would need an additional 15,575 W. Our grand totals, then, are about 46.7 kilowatts for the standalone servers and 3.75 kW for our virtualized server.

What's the final ROI? If we assume a (cheap) electricity price of 8 cents per kilowatt-hour and look at a three-year life cycle, the standalone servers would cost $98,234.64 while the virtualized server would cost $7,884.00, a cost savings of over $90,000.
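The whole back-of-the-envelope comparison can be reproduced in a short script, which makes it easy to swap in your own hardware and electricity figures. This is a sketch built on the article's example numbers (85 VMs, 350 W per standalone server, 50% cooling overhead, $0.08/kWh, three years), not a general-purpose calculator:

```python
# Three-year energy cost: 85 standalone servers vs. one virtualized host.

VMS = 85                  # average-sized VMs (1 vCPU, 3 GB RAM, 60 GB disk)
SERVER_W = 1100           # virtualized host at maximum power load, watts
STORAGE_W = 1400          # shared storage array, watts
STANDALONE_W = 350        # one lightly loaded physical server, watts
COOLING = 0.5             # cooling adds ~50% on top of the IT load
PRICE_KWH = 0.08          # dollars per kilowatt-hour
HOURS = 3 * 365 * 24      # three-year life cycle

def total_watts(it_load_w: float) -> float:
    """IT load plus the rule-of-thumb cooling overhead."""
    return it_load_w * (1 + COOLING)

virtualized_w = total_watts(SERVER_W + STORAGE_W)           # ~3,750 W
standalone_w = total_watts(VMS * STANDALONE_W + STORAGE_W)  # ~46,725 W

def cost(watts: float) -> float:
    """Electricity cost over the life cycle, in dollars."""
    return watts / 1000 * HOURS * PRICE_KWH

savings = cost(standalone_w) - cost(virtualized_w)
print(f"standalone:  ${cost(standalone_w):,.2f}")
print(f"virtualized: ${cost(virtualized_w):,.2f}")
print(f"savings:     ${savings:,.2f}")
```

Changing `STANDALONE_W` or `PRICE_KWH` shows how sensitive the savings are to workload and regional electricity rates.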

These savings will vary with different hardware types and workloads, yet even using your own numbers, it should be clear that virtualization and Infrastructure as a Service dramatically decrease energy use and CO2 emissions while saving companies significantly on space and equipment.

In early 2013, 451 Research reported that average x86 server virtualization levels had reached 51 percent, up 13 percentage points from the prior year. And while this certainly signals a tipping point in the industry, there are still major cost and efficiency savings yet to be realized.

More Stories By Shawn Mills

Shawn Mills has a deep background in the technology industry, with expertise in product and business development and go-to-market strategy. He is president and a founding member of Green House Data, the nation's greenest cloud hosting and colocation data center service provider, and of several other IT and telecommunication companies. Under his leadership, Green House Data has expanded beyond its Cheyenne, Wyoming headquarters, launching cloud hosting and IaaS services in the Portland, Oregon and New York metro areas. Mills has been a featured speaker at industry events including 7x24 Exchange, Data Center World and the National Center for Supercomputing Applications.
