Once upon a time, storage was regarded as an expensive necessity. Not anymore. As data growth has exploded and technology has evolved, the days of paying a premium for numerous expensive disk arrays are over. Today's priority is keeping costs down. Here are ten ways to make storage more cost-efficient.
Despite the propaganda surrounding the cloud and disk, the economics of tape are still hard to beat if you use it smartly. One example is a blended disk/tape storage solution that optimizes efficiency, with tape harnessed primarily as an archive.
"Any time you can substitute a low power consumption device (tape) for a high power consumption device (disk), you can save money," said Jon Hiles, senior product manager at Spectra Logic. "This is particularly true when comparing the power consumption rates for the two devices over extended periods of time."
When architecting storage to house data indefinitely (for regulatory, compliance, research, or data mining purposes), directing long-term data to tape makes financial sense. Hiles recommends making data readily available using an extensible file system like that found in an active archive to maximize value. Such file systems allow direct writes and reads of data to tape in non-proprietary formats without staging to disk.
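The power-cost argument is easy to sanity-check with back-of-envelope arithmetic. The wattage and electricity-rate figures below are illustrative assumptions for the sketch, not Spectra Logic's numbers:

```python
# Illustrative long-term power-cost comparison: disk vs. tape archive.
# All figures are assumptions for the example, not vendor data.

def annual_power_cost(watts, cents_per_kwh=12.0):
    """Dollars to run a device continuously for one year."""
    kwh_per_year = watts * 24 * 365 / 1000
    return kwh_per_year * cents_per_kwh / 100

disk_watts = 500   # hypothetical always-spinning disk shelf
tape_watts = 50    # hypothetical tape library, idle most of the time

disk_cost = annual_power_cost(disk_watts)
tape_cost = annual_power_cost(tape_watts)
print(f"Disk: ${disk_cost:,.0f}/yr  Tape: ${tape_cost:,.0f}/yr  "
      f"Savings: ${disk_cost - tape_cost:,.0f}/yr")
```

Even with these modest assumed figures, the gap compounds over the multi-year retention periods typical of compliance archives.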
Offsite Data Protection
Data protection has long been an expensive business: build a remote data center; fill it with servers and storage; and add in the backup and replication software to protect data. More and more companies are foregoing that expense in favor of outsourcing.
"Companies have the option to trim or even forgo the capital and operational costs, as well as the complexities, of traditional server backup (agent licenses, media servers, tapes, libraries, media pickup services) with an outsourced solution," said Jeff Bell, director of corporate marketing at Zetta.
According to Bell's numbers, traditional methods for protecting 10TB of data with both disk and offsite tape for three years would cost more than $350,000. He quotes about a third of that cost for outsourced data protection by providers such as Zetta. An outsourced approach eliminates the upfront capital expenses, backup software licenses, and the cost of removable media. It also reduces complexity and risk.
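Normalizing Bell's quoted figures to a per-TB, per-year rate makes the comparison easier to apply to other data volumes. The totals below come from the paragraph above; the "one third" outsourced figure is his rough quote, not an exact price:

```python
# Per-TB, per-year normalization of the quoted 3-year backup costs.
data_tb = 10
years = 3
traditional_total = 350_000                 # quoted traditional cost
outsourced_total = traditional_total / 3    # "about a third"

def per_tb_per_year(total):
    """Spread a total cost across the data volume and time period."""
    return total / data_tb / years

print(f"Traditional: ${per_tb_per_year(traditional_total):,.0f}/TB/yr")
print(f"Outsourced:  ${per_tb_per_year(outsourced_total):,.0f}/TB/yr")
```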
There are many ways to virtualize storage. In some cases, the vendor suggests throwing away old hardware to buy a brand new virtualization array. A cheaper way is to leave legacy disk in place and incorporate it into a shared storage pool using software-based virtualization.
"With software-based storage virtualization, you can take full advantage of disk resources already in place before you spend any money on additional capacity," said Augie Gonzalez, director of product marketing at DataCore Software.
Spend to Save
Occasionally there is a little wiggle room in the budget. In such circumstances, Greg Schulz of the Server and Storage IO Group recommends spending a little to save a lot by investing in the right tools.
"If you have some budget to work with, invest in tools to determine what you have, including what inactive data can be archived or moved to other media," he said. "Look into data footprint reduction (DFR) tools including archive, compression, deduplication, data management (including deletion), thin provisioning and space-saving snapshots, along with tiered storage."
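As a minimal sketch of the kind of inventory such tools automate, the following walks a directory tree and flags files untouched for a year as archive candidates. The one-year threshold is an arbitrary example, not a recommendation from Schulz:

```python
import os
import time

def archive_candidates(root, days_inactive=365):
    """Yield (path, size_bytes) for files whose last modification is
    older than `days_inactive` days -- rough archive/tiering candidates."""
    cutoff = time.time() - days_inactive * 86400
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # skip files that vanish or are unreadable
            if st.st_mtime < cutoff:
                yield path, st.st_size
```

A real DFR tool would also consider access times, ownership, and duplicates, but even this crude pass often reveals surprising volumes of cold data.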
Most organizations today are not using their storage effectively. The allocation of storage on arrays typically goes way beyond what is actually being consumed by the host systems, real or virtual, and by applications. By analyzing the real storage requirements at the application and host system level, organizations can reclaim this storage.
Take the case of a large U.S. health maintenance organization (HMO) where $2 million per year could be realized through improved storage efficiency. The average storage utilization across the HMO's servers was 54.5 percent (four servers had below 30 percent). With Tier 1 storage estimated at around $8,000 per TB per year, this organization could save as much as $2 million when the reclaimed capacity is spread across its 4PB.
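A quick back-of-envelope check, using only the figures from the example above, shows what the $2 million target implies in capacity terms:

```python
# Back-of-envelope check on the HMO figures quoted above.
cost_per_tb_year = 8_000     # estimated Tier 1 cost per TB per year
total_tb = 4_000             # 4 PB across the organization
utilization = 0.545          # average reported utilization

unused_tb = total_tb * (1 - utilization)
tb_for_2m = 2_000_000 / cost_per_tb_year   # capacity worth $2M/yr

print(f"Allocated but unused: {unused_tb:,.0f} TB")
print(f"$2M/yr corresponds to reclaiming {tb_for_2m:,.0f} TB, "
      f"{tb_for_2m / unused_tb:.0%} of the unused allocation")
```

In other words, the savings target requires recovering only a modest fraction of the over-allocated capacity, which is why end-to-end usage analysis pays off.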
"One way to achieve more with less is to understand real storage usage end-to-end," said Rick Clark, president and CEO of Aptare, a company that offers capacity management and storage reporting software.
Obviously deduplication is a smart way to save money on backup and storage. But let's take a different look at it. Lortu offers a family of backup/deduplication appliances that it claims can store around 100 daily full backups and replicate that data over the WAN.
"All the backups are online all the time, so the administrator can restore any backup from his computer without leaving his desk to find and load the appropriate tape," said Lortu's CEO, Carlos Ardanza. He claims 50x to 80x deduplication for unstructured data.
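Deduplication appliances like these typically work by splitting data into chunks and storing each unique chunk only once. The toy sketch below (fixed-size chunks over synthetic data, not Lortu's actual algorithm, which is variable-block and far more sophisticated) shows the principle:

```python
import hashlib

CHUNK = 64  # toy chunk size in bytes; real appliances use larger blocks

def dedupe(backups):
    """Store each unique chunk once, keyed by SHA-256 digest; keep a
    per-backup 'recipe' of digests so any backup can be reassembled."""
    store, recipes = {}, []
    for data in backups:
        recipe = []
        for i in range(0, len(data), CHUNK):
            chunk = data[i:i + CHUNK]
            digest = hashlib.sha256(chunk).hexdigest()
            store.setdefault(digest, chunk)  # stored once, ever
            recipe.append(digest)
        recipes.append(recipe)
    return store, recipes

# Ten daily "full backups" that differ only in their final bytes:
backups = [b"x" * 6400 + f"day{i}".encode() for i in range(10)]
store, recipes = dedupe(backups)
raw = sum(len(b) for b in backups)
stored = sum(len(c) for c in store.values())
print(f"{raw} bytes raw, {stored} bytes stored, {raw / stored:.0f}x dedup")
```

Because successive full backups overlap heavily, every backup stays restorable from its recipe while only the changed chunks consume new space.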
In tough times, most storage managers focus on the low-hanging fruit, but another approach is to increase performance as a way of saving in the long term.
"While there is a common tendency to think about saving money via abstinence or avoidance, money can also be saved by using high-performance storage that provides more IOPS per watt per cost, or a lower cost per transaction, as opposed to capacity-centric storage where the focus is cost per GB," said Schulz.
There are many vendors offering storage hardware that speeds transactions, eliminates bottlenecks and prevents unnecessary delays. One of the best options these days is Flash. Violin Memory, for instance, offers Flash memory arrays that provide an attractive value proposition for certain use cases.
According to Matt Barletta, vice president of product marketing at Violin Memory, each online business transaction typically requires 20 or more I/O operations, and one standard hard drive can support 2 to 10 transactions per second. A Violin Memory Flash array, he said, supports 10,000 transactions per second and can replace up to 1,000 disk drives.
"Typical database systems provide a transactions-per-minute cost of $2.40, with much of this cost absorbed by storage or CPUs that are waiting for storage," said Barletta. "Flash arrays reduce this cost to less than $1.00."
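The "up to 1,000 drives" claim follows directly from the per-drive figures Barletta quotes: matching the array's transaction rate at the high end of a hard drive's range takes 1,000 drives, and far more at the low end.

```python
# Drive-count arithmetic using the figures Barletta quotes.
io_per_txn = 20                     # I/O operations per transaction
hdd_tps_low, hdd_tps_high = 2, 10   # transactions/sec per hard drive
flash_tps = 10_000                  # transactions/sec per flash array

required_iops = flash_tps * io_per_txn
drives_high = flash_tps / hdd_tps_high   # best case for the HDDs
drives_low = flash_tps / hdd_tps_low     # worst case for the HDDs

print(f"{flash_tps:,} TPS implies {required_iops:,} IOPS")
print(f"Matching it takes {drives_high:,.0f} to {drives_low:,.0f} hard drives")
```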
If money is really tight yet storage demands are high, Schulz suggests a review of RAID levels. "You can look at changing RAID levels which, while disruptive, could [result in] opportunities to reclaim some storage capacity," he said. "However, keep performance and availability needs in perspective."
Perhaps you have RAID 6 coverage over a lot of storage. Keeping only mission critical (or highly used) data on RAID 6 while moving the rest onto RAID 5 could free up a considerable amount of disk space.
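The space difference comes from parity overhead: RAID 6 reserves two drives' worth of parity per group, RAID 5 only one. With illustrative group sizes (12 drives of 4TB each, an assumption for the example):

```python
def usable_tb(drives, drive_tb, parity_drives):
    """Usable capacity of one RAID group (ignores spares and
    filesystem overhead)."""
    return (drives - parity_drives) * drive_tb

drives, drive_tb = 12, 4
raid6 = usable_tb(drives, drive_tb, parity_drives=2)  # RAID 6: 2 parity
raid5 = usable_tb(drives, drive_tb, parity_drives=1)  # RAID 5: 1 parity

print(f"12x4TB group: RAID 6 = {raid6} TB, RAID 5 = {raid5} TB "
      f"(+{raid5 - raid6} TB per group)")
```

The trade-off Schulz warns about is real: RAID 5 survives only one drive failure per group, so the reclaimed capacity comes at the cost of resilience.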
Schulz recommends a freeware tool known as TreeSize from JAM Software. It shows you where your space has gone, displaying the size of each folder and subfolder. Because scanning runs in a background thread, results appear while the scan is still in progress. This is a quick way to hunt for orphaned or wasted storage.
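For a scriptable, cross-platform alternative, a few lines of Python can produce a similar per-folder breakdown (a minimal analog of the idea, not a replacement for TreeSize's interface):

```python
import os

def folder_sizes(root):
    """Total bytes under each immediate subfolder of `root` --
    a rough command-line analog of what TreeSize visualizes."""
    totals = {}
    for dirpath, _dirs, files in os.walk(root):
        size = 0
        for name in files:
            try:
                size += os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                pass  # skip unreadable or vanished files
        # attribute the size to the top-level subfolder it falls under
        rel = os.path.relpath(dirpath, root)
        top = rel.split(os.sep)[0] if rel != "." else "."
        totals[top] = totals.get(top, 0) + size
    return totals

if __name__ == "__main__":
    for folder, size in sorted(folder_sizes(".").items(),
                               key=lambda kv: -kv[1]):
        print(f"{size / 1e6:10.1f} MB  {folder}")
```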
Drew Robb is a freelance writer specializing in technology and engineering. Currently living in California, he is originally from Scotland, where he received a degree in geology and geography from the University of Strathclyde. He is the author of Server Disk Management in a Windows Environment (CRC Press).