In last month’s story, Top Ten Ways to Trim Storage Costs, we offered up some alternatives to expensive disk arrays and a few tips and technologies that can make storage more cost-efficient. Since there are more than 10 ways to skin your storage budget, here are 10 more ways to keep costs down and streamline your storage environment.
Dedupe the Data
There are many ways to deduplicate data these days, and interesting new approaches seem to emerge every week. One example is Nexsan DeDupe SG, which is said to cut power usage by 60 percent and storage capacity requirements by as much as 30X, according to Victoria Grey, senior vice president of marketing at Nexsan.
“The first generation of deduplication systems focused on saving storage capacity,” she said. “The DeDupe SG delivers 10-30X deduplication rates while being the only power-managed deduplication solution that saves on energy and data center cooling costs between backups.”
Nexsan’s disk-based deduplication solution backs up at 750MBps via CIFS and NFS, and at up to 1500MBps with Symantec OST. A high-speed RAID subsystem gives performance a further boost. Backups are first written to disk cache and deduplicated later or concurrently, so backup jobs finish more quickly.
File Servers Begone!
Moosa Matariyeh, an enterprise storage specialist at CDW, advises users to ditch their many file servers and switch to NAS appliances, which, he said, cost less, are easier to administer, and consume less power.
If file servers remain in use, though, there is a way to get more out of them and reduce storage costs. NXPowerLite for File Servers by Neuxpower is designed to shrink bloated Microsoft Office and JPEG files to a fraction of their original size to improve network efficiency, shorten backup windows, and reclaim primary storage resources. Neuxpower optimizes JPEG, PowerPoint, Word, and Excel files by removing excess “baggage” and converting graphics to the most appropriate file format and resolution.
“Most files are reduced by up to 95 percent of their original size, remain in their original format and retain all their original attributes and functionality, so they can be opened and edited by anyone,” said Mike Power, CEO of Neuxpower. “They do not need to be unzipped, decompressed or rehydrated before use, and no special viewing software is required.”
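Neuxpower’s exact optimizations are proprietary, but one of the underlying ideas is easy to illustrate: modern Office files (.docx, .pptx, .xlsx) are ordinary ZIP containers, so simply repacking their entries at maximum DEFLATE compression can reclaim space when the originals were lightly compressed. The sketch below is illustrative only, not Neuxpower’s implementation, and it does not touch image resolution, which is where much of the product’s savings reportedly come from.

```python
import os
import zipfile

def repack_office_file(src_path: str, dst_path: str) -> int:
    """Rewrite a ZIP-based Office file with maximum DEFLATE compression.

    Returns the number of bytes saved (can be negative if the source
    archive was already tightly compressed).
    """
    with zipfile.ZipFile(src_path) as src, \
         zipfile.ZipFile(dst_path, "w",
                         compression=zipfile.ZIP_DEFLATED,
                         compresslevel=9) as dst:
        for item in src.infolist():
            # Copy every entry; the new archive applies level-9 DEFLATE.
            dst.writestr(item.filename, src.read(item.filename))
    return os.path.getsize(src_path) - os.path.getsize(dst_path)
```

Because the file stays a valid ZIP archive with the same entries, it remains an ordinary Office document that anyone can open, which mirrors the “no rehydration needed” point above.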
Using the Cloud for Backup
Another dedupe-based product adds the cloud to save further on costs. Asigra Cloud Backup uses multiple techniques to optimize backup environments and reduce hardware requirements. First, it identifies duplicate data by looking for the same data queued for backup more than once. All data is compared based on its content, so it does not matter whether the files sit on different servers or have different names. Common data is stored once in the appropriate repository, and a pointer/stub points from the data’s original location to the library location. This is a continuous process, as common data can appear at any time. Data reduction happens at the LAN level and at a global level across all LANs and laptops. Only new or changed data (incremental forever) is transmitted to the backup vault, and data compression is also performed to further reduce the data under management.
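The dedup process described above — compare by content rather than by name or location, store one copy, and leave a pointer — can be sketched in a few lines. This is a hypothetical minimal store for illustration, not Asigra’s implementation:

```python
import hashlib

class DedupStore:
    """Minimal content-addressed backup store: identical content is kept
    once, and each file path maps to a pointer (the content hash)."""

    def __init__(self):
        self.blocks = {}    # content hash -> data, stored exactly once
        self.pointers = {}  # file path -> content hash (the "stub")

    def backup(self, path: str, data: bytes) -> bool:
        """Back up one file; returns True only if new content was stored."""
        digest = hashlib.sha256(data).hexdigest()
        is_new = digest not in self.blocks
        if is_new:
            self.blocks[digest] = data   # first copy: store the content
        self.pointers[path] = digest     # every copy: just a pointer
        return is_new

    def restore(self, path: str) -> bytes:
        """Follow the pointer back to the single stored copy."""
        return self.blocks[self.pointers[path]]
```

Two servers backing up the same file under different names consume the space of one copy plus two pointers, which is why the comparison being content-based rather than name-based matters.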
“Understanding that organizations do not value older data the same as younger data, the solution also allows for the backup to be tiered with older data stored on more cost-efficient hardware,” said Ashar Baig, senior director of product marketing at Asigra.
Speed Up Data Migration
SANpulse’s SANlogics is a multi-vendor technology platform for data migration and consolidation that cuts the time the process takes from months to days. For example, it recently enabled an enterprise to migrate 257 hosts with 104TB of storage spanning six data centers in two weekend migration windows, said Marie-Pierre Belanger, vice president of product management at SANpulse.
“SANlogics accomplishes this by consolidating and automating internal knowledge and best practices into a versatile framework that includes data collection and correlation, intelligence generation, and actionable output,” she said. “As part of the solution, a business-aware intelligence engine delivers analytic reporting and automation technology that adjusts for constantly evolving SAN infrastructures to speed the process without the risk of errors.”
Take Another Look at Tape
Dr. Geoff Barrall, CTO and vice president of engineering at Overland Storage, said that many businesses are revisiting tape storage as a cost-effective alternative to deduplication for backup. Newer technologies such as LTO-5, he said, make tape as fast as disk for backup, and safer for long-term storage. As a result, businesses can expect tape to match disk performance at a lower cost than de-duplicating disk, delivering considerable savings.
“According to the Clipper Group, tape uses around 300X less power than disk and is more reliable for long-term data storage,” said Barrall. “With tape solutions, a business is able to back up around 1TB onto one cartridge costing less than $10, which is dramatically less expensive than disk and easier to take offsite.”
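Barrall’s figures make the media cost of a tape backup set easy to estimate. A rough sketch, taking the roughly 1TB-per-cartridge capacity and sub-$10 cartridge price from the quote above and rounding up to whole cartridges:

```python
import math

def tape_media_cost(data_tb: float, cartridge_tb: float = 1.0,
                    cartridge_price: float = 10.0) -> float:
    """Media cost of backing up `data_tb` terabytes to tape, using the
    article's figures: roughly 1TB per cartridge at about $10 each.
    Partial cartridges still have to be bought, hence the ceil()."""
    return math.ceil(data_tb / cartridge_tb) * cartridge_price

# A 100TB backup set needs 100 cartridges: roughly $1,000 in media.
```

The calculation covers media only; drives, libraries, and offsite handling add cost, but the per-terabyte media price is the figure the disk comparison hinges on.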
Implement Latency-based Storage Tiering
There are a lot of companies out there implementing storage tiering. Typically, they base data placement on factors such as frequency of data access or array performance metrics. Virtual Instruments suggests an alternative approach: use application response time, the latency measured from the server to the LUN.
“When you measure actual application response time by tier, you can confidently move application data to lower cost tiers with full performance visibility,” said Len Rosenthal, vice president of marketing at Virtual Instruments.
He cites savings of up to 75 percent on storage costs.
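Virtual Instruments’ product instruments the SAN itself, but the placement logic this approach enables can be sketched simply: given an application’s measured response-time requirement, pick the cheapest tier that still meets it. The tier names, latencies, and relative costs below are hypothetical:

```python
# (name, typical response time in ms, relative cost per GB) -- illustrative
TIERS = [
    ("tier1-ssd",   1.0, 10.0),
    ("tier2-fc",    5.0,  4.0),
    ("tier3-sata", 15.0,  1.0),
]

def cheapest_adequate_tier(required_latency_ms: float) -> str:
    """Return the cheapest tier whose typical latency still satisfies the
    application's measured response-time requirement."""
    adequate = [t for t in TIERS if t[1] <= required_latency_ms]
    if not adequate:
        raise ValueError("no tier is fast enough for this application")
    return min(adequate, key=lambda t: t[2])[0]

# An app that tolerates 20ms can live on SATA; a 2ms app stays on SSD.
```

The point of measuring real response time per tier is that the requirement becomes a hard constraint you can verify, rather than a guess based on access frequency.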
Cut Down on Inter-Data Center WAN Traffic
Raj Kanaya, CEO of Infineta Systems, called out an overlooked aspect of the data explosion: the growth in data center-to-data center WAN traffic, the great majority of which is replication and backup traffic. Take a major bank whose replication traffic was growing at 115 percent per year. It is currently paying for a 1.5Gbps WAN link but will need to upgrade to 5Gbps in about 18 months.
Assuming the link costs $10 per Mbps per month, Kanaya said, the bank would go from $15,000 per month for the 1.5Gbps link to $50,000 per month for 5Gbps, or from $180,000 to $600,000 per year. And that is while replicating only 40 to 50 percent of the total data it wants to protect. Products that reduce inter-data center WAN traffic cut it by 5X on average, which amounts to about an 80 percent reduction and frees up WAN capacity.
“For the customer this means that you can protect more data with replication while paying less on bandwidth,” said Kanaya. “Customers could save $630,000 over that 18 month period.”
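Kanaya’s savings figure checks out with straightforward arithmetic: at $10 per Mbps per month, deferring the 5Gbps upgrade for the 18-month period saves the difference between the two link costs.

```python
RATE_PER_MBPS_MONTH = 10.0  # Kanaya's assumed price: $10 per Mbps per month

def monthly_link_cost(gbps: float) -> float:
    """Monthly price of a WAN link at the assumed per-Mbps rate."""
    return gbps * 1000 * RATE_PER_MBPS_MONTH  # 1 Gbps = 1000 Mbps

current = monthly_link_cost(1.5)              # $15,000 per month
upgraded = monthly_link_cost(5.0)             # $50,000 per month
deferred_savings = (upgraded - current) * 18  # $630,000 over 18 months
```

The 5X traffic reduction is what makes deferring the upgrade plausible: cutting replication traffic by about 80 percent keeps the existing 1.5Gbps link adequate for the period.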
Don’t Take Cost Cutting Too Far
There does come a point when cost cutting goes too far. You can trim a server here and a NAS appliance there, and just when you are feeling warm all over about the fantastic savings, a disaster strikes and exposes the deficiencies of the slimmed-down infrastructure.
“Don’t eliminate protective elements for the sole purpose of cutting costs – it can end up costing you more over time,” said Matariyeh.
Just Say No
Finally, the time-honored way of cutting costs is to hold a hard line on purchase order approval. If you approve POs, tighten up the process and don’t let the storage team sweet-talk you into more storage yet again. Make them prove that the money will increase efficiency, reduce time spent on administration, or otherwise deliver a fast return on investment. If you are the one writing the POs, you will be far more successful if you adopt the approver’s viewpoint and start rejecting your own faulty submissions. Take an executive view and ask only for resources that you can turn into business value.
Drew Robb is a freelance writer specializing in technology and engineering. Currently living in California, he is originally from Scotland, where he received a degree in geology and geography from the University of Strathclyde. He is the author of Server Disk Management in a Windows Environment (CRC Press).