To say that green initiatives involving power, cooling and the environmental effects of IT are a popular topic is an understatement, as the issue seems to be everywhere you look these days. If you aren't up to speed on power, cooling, carbon offset and associated IT infrastructure storage issues, check out the article Storage Power and Cooling Issues Heat Up along with the free educational Webcast Storage Power and Cooling: Why You Should Care and What You Can Do About IT.
Many IT stories are being given a green spin these days, giving rise to the term "greenwashing": instead of whitewashing a story, put a green coat on it to get more attention.
Talking with IT professionals, while some are motivated by green and environmental consciousness, what I hear most frequently is that they need to do more with the electrical power and cooling they have available. Many IT organizations I talk with have already maxed out their available power, cooling, backup or UPS capacity, and an even larger number anticipate running into a power availability issue in the next 12 to 18 months. Common power and cooling related issues I hear include:
- Power restrictions in your geographic area, limiting growth, power availability or stability
- Reaching a ceiling on the available power in or to your facility (there is power in the region)
- Your existing cooling capability is constrained either by its capacity or lack of available power
- Your UPS or standby backup power capabilities are saturated or approaching saturation
- Constraints on your internal power conditioning (surge protection) and distribution (circuit breakers)
- Limited floor space to support growth, or lack of power accessibility where you have floor space
Since we are talking about infrastructure items, let's spend a moment on the electrical power infrastructure in general. Electrical power availability will vary by how much power your local electrical utility or service provider can generate or acquire from other power sources. Limits on local and long distance transmission facilities, the capabilities of the local substation and transformers that service you, as well as your facility's own limitations, can also affect power availability. For example, your electrical service provider may be able to acquire and provide enough power, but the long distance transmission lines may be saturated and unable to transport the power where it is needed.
Variables that affect power and cooling include:
- Power and cooling availability or cost
- Floor space and backup power capacity
- Disk drive type, including make, model, vintage, interface and capacity
- Storage system power management and performance effectiveness
- Storage system architecture and disk drive packaging
- RAID configuration balancing performance, availability, capacity and energy (PACE)
Finding a Solution
Depending on your current or anticipated future power and cooling challenges, there are several approaches that can be used to maximize what you currently have for short term or possibly long term relief. Additional approaches can be applied or combined with short term solutions to enable longer term relief from power, cooling and energy environmental issues. Some examples of short term and longer term approaches include:
- Establishing new facilities or obtaining additional power and cooling capacity
- Optimizing existing IT resources and facilities to be more energy efficient
- Upgrading or replacing existing technologies to be more energy effective
- Reducing your data footprint, or moving the problem elsewhere
Building a new facility in an area with more available power and transmission capacity can be easier said than done. While you can readily find co-located space or shell facilities in many different regions, will these meet your specific demands? New data centers are not the exclusive domain of ultra large organizations like Google. Many IT organizations are establishing new, secondary or tertiary data centers, either in the same general area or further away in a different region, to address power and cooling as well as business continuity (BC) or disaster recovery (DR) requirements. Beyond cost and complexity, the other issue involved with adding on to or accessing more power, or establishing a new data center, is the time delay between when you make the decision and get approvals and when the facility can be occupied.
Optimizing Existing Resources
Have a power and cooling assessment performed on your data center facilities to identify hot spots, maximize cooling capabilities to enable growth, or free up enough power and cooling capacity to support migration to more energy efficient and performance-effective technologies. Many vendors are jumping into the power and cooling facilities assessment services game. In addition to a general power and cooling assessment, a more exhaustive analysis of your facility's energy consumption could result in a reconfiguration of cooling, air flow and equipment location, or a rebalancing of power distribution and circuits, which may be easier said than done.
Consolidation can be an approach to deal with distributed power and facilities concerns or, on a local basis, to reduce the footprint of IT equipment and drive up utilization. However, there are a couple of caveats. One is to avoid negatively affecting the performance or availability of applications by focusing only on resource space capacity utilization while neglecting performance and availability. Another is that in the race to consolidate remote office and branch office technologies back to a main, central or peer data center site, you may inadvertently aggregate or throw out of balance your existing performance, capacity, availability and energy capabilities.
Reducing Your Data Footprint
Data footprint reduction can be accomplished in a number of ways: archiving (e-mail, database and unstructured data); data compression or compaction using host-based software, devices or appliances; as well as emerging single-instancing techniques, also known as commonality factoring, data differencing or de-duplication.
If your need is for more storage capacity, you could employ data compaction or compression techniques for online as well as off-line backup and archive data with appliances like those from StoreWiz.
Instead of adding more capacity, you could use a compression appliance to reduce your data footprint for online as well as secondary near-line or off-line storage without affecting performance. For backup, you could use VTLs that support compression, compaction, single-instancing and de-duplication. To increase I/O performance, you could use a caching I/O acceleration appliance to make your existing NAS storage or NAS cluster system run faster to support consolidation without incurring a performance bottleneck.
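The footprint arithmetic behind these techniques is straightforward. Here is a minimal sketch; the 2:1 and 10:1 reduction ratios are illustrative assumptions only, since real ratios depend heavily on the data type and workload.

```python
# Rough footprint-reduction arithmetic: how compression and de-duplication
# ratios shrink the physical disk (and thus power) footprint.
# The ratios below are illustrative assumptions, not measured values.

def reduced_tb(logical_tb, ratio):
    """Physical TB needed to hold logical_tb at a given reduction ratio."""
    return logical_tb / ratio

primary = reduced_tb(100, 2)   # assume ~2:1 compression on primary data
backup = reduced_tb(100, 10)   # assume ~10:1 de-dup on repetitive backups

print(f"Primary: 100 TB logical -> {primary:.0f} TB physical")
print(f"Backup:  100 TB logical -> {backup:.0f} TB physical")
```

Fewer physical terabytes means fewer spindles, which is where the power and cooling savings actually come from.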
Thin provisioning is a technique that is becoming more widely available in different variations, incarnations and capabilities from many vendors. Thin provisioning may require upgrading or replacing existing technologies or cause disruptions to re-allocate non-thin provisioned storage, so look for solutions that are as transparent as possible. Also, keep in mind that with thin provisioning, if you have a stable, predictable environment, you can leverage its overbooking capabilities, similar to how airlines overbook seats on airplanes assuming that some reservations will not show up. However, when everyone does show up, there is a lack of capacity.
With thin provisioning of storage, it is important to have good storage management tools and information to help plan and predict growth to avoid overbooking. The effect of unplanned overbooking with thin provisioning, if not enabled with good predictive management tools, can be as disruptive as denied boarding at the airport when traveling to your summer vacation on an over-booked flight.
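The overbooking math worth tracking can be sketched in a few lines. This is a hypothetical planning check, not any vendor's tool, and the pool size, commitments and growth rate are made-up illustrative numbers.

```python
# Hypothetical thin-provisioning overbooking check: how heavily the
# physical pool is committed, and roughly how long before it fills.
# All figures below are illustrative, not from any particular system.

def overbooking_ratio(physical_tb, committed_tb):
    """Committed (promised) capacity relative to installed capacity."""
    return committed_tb / physical_tb

def months_until_full(physical_tb, used_tb, growth_tb_per_month):
    """Rough runway before the physical pool is exhausted."""
    if growth_tb_per_month <= 0:
        return float("inf")
    return (physical_tb - used_tb) / growth_tb_per_month

physical = 100   # TB actually installed in the pool
committed = 250  # TB promised to applications via thin volumes
used = 60        # TB actually written so far
growth = 5       # TB of new data written per month

print(f"Overbooking ratio: {overbooking_ratio(physical, committed):.1f}x")
print(f"Runway: {months_until_full(physical, used, growth):.0f} months")
```

Tracking the runway figure against your procurement lead time is what keeps overbooking from turning into a denied-boarding event.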
You could also play the spin the disk down game with one of the increasing number of storage systems with power management; however, you also need to avoid performance bottlenecks when trying to restore large amounts of data while waiting for disks to spin back up. Likewise, you want to be aware of any potential performance bottlenecks if you need to do large scale restores from de-duped data while it is being re-hydrated or expanded back to normal size for restoration.
The trick with the need to eliminate many individual smaller sites is to avoid over-consolidation that results in facilities or an infrastructure that becomes strained by power and cooling demand. Server and storage vendors, including HP and IBM, have some interesting stories that they are using to back up and support their power and cooling assessment services based on their own consolidation efforts.
Technology upgrade or replacement activities include proactively replacing, on a scheduled basis, older disk drives with newer generation, faster disk drives that draw less power, if you need to retain your storage system for lease or other purposes. Another approach is to replace your existing storage system (controller and disk drives) with a newer model offering better performance, less power consumption and increased capacity to meet your available power, cooling and application service requirements.
For those who have not heard it a million times already, exercise caution when aggregating storage capacity onto larger capacity disk drives so as not to cause an application performance problem.
Another variation of the storage system upgrade or replacement scenario is when you can replace two or more storage systems on both a performance and capacity basis with a storage system that can do the same amount of work (IOPS and bandwidth) using the same or fewer disk drives and thus less power. If two storage systems are required for availability, BC or DR purposes, then look for storage systems that can be scaled to meet your needs and use less power, including leveraging clustered block iSCSI or Fibre Channel and clustered NAS storage systems.
Some storage power and cooling stories focus on consolidation and using the aggregated capacity to reduce the total number of disk drives to reduce energy consumption and emissions. For dormant data, the approach of reducing components can be similar to migrating data off-line to tape or other mediums. However, for active workloads, another approach is to keep the disk drive count constant, increasing performance and capacity while reducing power consumption as in Figure 1 below.
For example, newer generation 4Gbps FC 146GB 15.5K disk drives draw less power and have twice the capacity with a slight performance increase over older generation 2Gbps 73GB disk drives. Assuming that your application requirements from a performance standpoint are fairly stable and you could leverage the extra capacity without incurring or causing a performance problem, then a simple disk drive swap might be a benefit for you. For example, if your applications only use about 30 percent of the performance of an existing storage system and you need to consolidate two like systems from different locations, yet you need to fit into a reduced available power footprint, switching from 73GB or even 146GB 15K disk drives to 300GB 15.5K drives could be an option.
Let's assume that the new generation of 500GB, 750GB and 1TB disk drives are more energy efficient than previous versions, with a configured, operational average including packaging power overhead equal to or less than high-performance drives. For example, current generation Seagate high capacity SATA disk drives deliver on average about 125GB per watt, compared to about 80.65GB per watt in previous generations. Expect to see different power numbers for disk drives, since there are the manufacturers' (Seagate, HGST and Fujitsu, among others) specifications that include idle, seek, operational average or maximum power draw. You can also expect to see different power consumption numbers from various vendors that include any packaging overhead, including disk interposers for dual-porting SATA disk drives as well as any overhead for enclosure power and cooling.
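Those capacity-per-watt ratios translate into per-drive wattage as follows. The wattages here are back-calculated from the ratios above, not taken from any manufacturer's specification sheet.

```python
# Back-of-envelope capacity-per-watt comparison using the ratios cited
# above: ~125GB/watt current generation vs. ~80.65GB/watt previous
# generation. Wattages are derived from those ratios, not vendor specs.

def watts_for_capacity(capacity_gb, gb_per_watt):
    """Approximate power draw to serve a given capacity."""
    return capacity_gb / gb_per_watt

current_gen = watts_for_capacity(1000, 125.0)    # 1TB high-capacity drive
previous_gen = watts_for_capacity(1000, 80.65)

print(f"Current generation:  {current_gen:.1f} W per TB")
print(f"Previous generation: {previous_gen:.1f} W per TB")
print(f"Savings: {(1 - current_gen / previous_gen) * 100:.0f}%")
```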
As an example, 1TB of usable solid state disk (SSD), that is, RAM- and not FLASH-based SSD, capable of delivering hundreds of MB/sec performance would occupy the footprint of a standard 19" rack or equipment cabinet while consuming about 3 kWh. Compare that with the two other extremes: storage capacity-centric and disk I/O-centric configurations. Solid state disk can be a good fit for I/O intensive applications or workloads where you can reduce the number of disk drives and subsequent power consumption as part of a tiered data storage infrastructure.
For less active or dormant data, larger capacity disk drives can be used, with new generation 750GB and 1TB disk drives being more energy efficient than previous 250GB or even 500GB disk drives while offering more capacity. Many storage vendors are supporting multiple power settings to vary the power consumption of the disk drive without having to spin disk drives completely down. An industry trend is for more storage systems to intelligently use the built-in capabilities of modern disk drives to reduce power, from retracting disk heads when not in use, to stepping down the RPM of the drives, to entering other low-power consumption modes.
In Figure 2, I'm using an energy cost of 18 cents per kWh, which might be higher than what you normally hear or read about, because it accounts for energy surcharges, usage charges above a given kWh base, and other fees. If you have not done so lately, take a look at your electric bill sometime and make note of the base kWh rate plus all of the additional fees and pricing tiers depending on the number of kWh used per month. CO2 emissions, in tons, are based on the kWh used to power the device plus an additional 50 percent to cover the power required for cooling the equipment.
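That costing method can be sketched as follows. The 18 cents/kWh rate and the 50 percent cooling uplift follow the approach described above; the CO2 emissions factor is an assumed illustrative value (roughly a US grid average), so substitute your own utility's figure.

```python
# Annual energy cost and CO2 estimate following the method above:
# 18 cents/kWh and a 50% uplift on IT load to account for cooling.
# LBS_CO2_PER_KWH is an assumed illustrative value, not from the article.

HOURS_PER_YEAR = 24 * 365
RATE_PER_KWH = 0.18      # dollars, including surcharges and fees
COOLING_UPLIFT = 1.5     # IT load plus 50% for cooling
LBS_CO2_PER_KWH = 1.35   # assumed grid average; varies widely by region

def annual_cost_and_co2(it_load_kw):
    """Return (annual dollars, annual short tons of CO2) for an IT load."""
    kwh = it_load_kw * COOLING_UPLIFT * HOURS_PER_YEAR
    cost = kwh * RATE_PER_KWH
    co2_tons = kwh * LBS_CO2_PER_KWH / 2000  # 2000 lbs per short ton
    return cost, co2_tons

cost, tons = annual_cost_and_co2(14)  # e.g. a 14 kW storage footprint
print(f"Annual energy cost: ${cost:,.0f}")
print(f"Estimated CO2: {tons:.0f} tons/year")
```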
Putting It All Together
SSD is not the catch-all, cure-all solution by itself. If you need a balance of I/O or performance along with storage capacity, high-performance disk drives in the 146GB and 300GB 15,000RPM class are a good fit, with 500GB, 750GB and 1TB class disk drives for storage capacity-centric workloads. For example, in Figure 2, within the footprint of three standard IT equipment cabinets you could configure a 221TB three-tier storage solution comprising 1TB of solid state disk, 28TB of high-performance disk and 192TB of high-capacity storage, consuming about 14 kWh of power with about 384 disk drives and several SSD devices.
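Tallying that three-tier example makes the capacity-per-power tradeoff concrete. The per-tier capacities and the roughly 14 kW draw come from the Figure 2 configuration described above; the per-tier drive counts are implied rather than stated, so this just checks the totals.

```python
# Tallying the three-tier example above: per-tier capacities sum to
# the quoted 221TB total at roughly 14 kW in about three cabinets.

tiers = {
    "SSD (tier 0)": 1,                   # TB of RAM-based solid state disk
    "15K high-performance disk (tier 1)": 28,   # TB of fast FC disk
    "High-capacity disk (tier 2)": 192,  # TB of capacity-centric disk
}

total_tb = sum(tiers.values())
power_kw = 14  # approximate draw quoted for the configuration

print(f"Total capacity: {total_tb} TB")
print(f"Capacity per kW: {total_tb / power_kw:.1f} TB/kW")
```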
The missing piece to Figure 2 is a performance indicator in terms of throughput for bandwidth or sequential applications as well as IOPS for smaller, random workloads. One approach I have seen by some vendors is to simply quote the disk drive manufacturer's IOPS and throughput numbers and then aggregate those for the number of disks being used. The thing to watch out for is the assumption that the storage controller can actually utilize and leverage the full performance capability of the disks, which in many systems is surprisingly not the case.
If you know the performance rating of your storage systems (in other words, what the controllers can actually deliver for useful work), you can determine your IOPS per watt. Otherwise, you can go to the Storage Performance Council Web site and look up the relative performance of various systems, and based on a given configuration, determine the IOPS per watt or energy footprint.
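The IOPS-per-watt metric itself is just a ratio, sketched below. The system names, IOPS figures and power draws are made-up placeholders; substitute numbers from your own measurements or from published Storage Performance Council results.

```python
# IOPS-per-watt comparison sketch. All figures below are hypothetical
# placeholders, not benchmark results for any real product.

systems = {
    "System A": {"iops": 50_000, "watts": 6_000},
    "System B": {"iops": 80_000, "watts": 12_000},
}

for name, s in systems.items():
    # Higher IOPS per watt means more useful work per unit of energy.
    print(f"{name}: {s['iops'] / s['watts']:.1f} IOPS per watt")
```

Note that the hypothetical System B delivers more absolute IOPS yet less work per watt, which is exactly the distinction aggregate drive specs can hide.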
In a different scenario, instead of using three separate storage systems, in roughly the same footprint you could configure a single monolithic three-bay storage array with 480 disk drives. With 500GB drives, that yields about 224TB of raw capacity consuming about 24 kWh, suitable for storage-centric applications; for an I/O-intensive environment, 480 high-performance 15K 4Gbps FC 146GB disk drives yield about 70TB raw, also at about 24 kWh. Granted, these are extreme examples to help illustrate the importance of balancing performance, availability, capacity and energy (PACE) consumption to meet your different application service requirements.
Another variable in all of the previous examples is how you configure the storage system in terms of RAID level for performance, availability and capacity, since the various RAID levels affect energy consumption based on the number of disk drives being used. Ultimately, the right PACE balance will vary, as will other decision and design criteria, among them vendor and technology preferences.
When looking at power, factor in the power requirements for cooling the equipment as well as your UPS and other power conditioning requirements. Also, keep in perspective that software has a power profile regardless of how good a vendor's pitch is; software still requires hardware to run on and that hardware still requires power and cooling. Software can also affect your power and cooling profile by how effectively resources are used, or misused, resulting in extra overhead and hardware to support a given level of service.
Infrastructure resource management (IRM) tools can help identify issues as well as opportunities to maximize your existing IT and data infrastructure resources, as can storage management software and tools, whether storage system, appliance or operating system-based, including thin provisioning, compression, compaction and de-duplication; more on these and other techniques at a later time.
There is a growing awareness of environmental and green issues, including reducing CO2 emissions, proper disposal of IT equipment and media, energy conservation and reducing or stretching your energy budget for IT equipment and cooling costs. To close for now: keep performance, availability, capacity and energy (PACE) in balance to meet your various application service requirements, so as not to introduce performance bottlenecks or instability (downtime) in your quest to reduce energy use and maximize your existing IT resources, including power and cooling.
Greg Schulz is founder and senior analyst of the StorageIO group and author of "Resilient Storage Networks" (Elsevier).