Making the Most of Storage Budgets


With the economy in a downward spiral, it comes as no surprise that already stagnant IT budgets will likely be reduced next year. That leaves storage directors in the unenviable position of having to deliver revenue-generating initiatives while new application development projects are put on hold.

According to Forrester Research, storage management is out of control at many companies. They don’t know what they have in assets or what their storage needs will be in the future because they don’t do a good job of forecasting.

“Most companies don’t have a handle on the storage they’re deploying and they’re in the dark on what the real business need is, so they overprovision or underprovision rather than putting the right performance and availability in place,” said Andrew Reichman, a senior analyst at Forrester. “You can’t manage what you don’t know.”

Data storage represents at least 11 percent of IT hardware budgets, according to Forrester. When a company doesn’t know if it has the right performance and availability characteristics, it can build extremely high availability storage capabilities — but that ends up being very costly, Reichman added.

The good news is, while the immediate future looks bleak, there are tangible steps storage managers can take to make the most of the budgets they’ve got to work with.

Start with Reporting Tools

The first step is to focus on visibility and put processes and reporting tools in place. The motivation is that since there’s a lot of waste, millions of dollars can be saved in the process, Reichman said.

Capacity utilization is generally in the 20 percent to 40 percent range, according to Forrester. “So if you have a database with 400 gigs of data, you’ve probably allocated a full terabyte to it in the expectation it will grow over time,” he said.
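The arithmetic behind that waste is straightforward. A minimal sketch using the article's own numbers, 400 GB of data sitting on a full terabyte allocation:

```python
# Illustrative figures from the example above: a 400 GB database
# provisioned on a 1 TB (1000 GB) allocation.
allocated_gb = 1000
used_gb = 400

utilization = used_gb / allocated_gb
stranded_gb = allocated_gb - used_gb

print(f"Utilization: {utilization:.0%}")      # 40% -- the top of Forrester's range
print(f"Stranded capacity: {stranded_gb} GB") # 600 GB paid for but unused
```

Even at the top of Forrester's 20 to 40 percent range, more than half of every allocated terabyte is paid for but never used.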

The reason for this, said Reichman, is that “application teams don’t trust development teams, so they ask for more storage than they need and the systems themselves are clunky and difficult to move or expand once [the space] has been allocated. So those factors contribute to building some allocation that’s bigger than what they’re anticipating.” As a result, they’re wasting a lot of storage space.

Reporting tools give companies the ability to continuously monitor change over time in utilization, performance, capacity incidents and provisioning time. Emerging storage vendors that provide reporting tools with their products include Compellent (NYSE: CML) and Dell EqualLogic (NASDAQ: DELL), he said.

“If you can keep your environment consistent and focus on those vendors, you keep it simple,” Reichman said, but added that these tools don’t work for heterogeneous environments. In that case, companies will want to look at tools from vendors such as IBM (NYSE: IBM) Tivoli, HP (NYSE: HPQ) Storage Essentials and EMC (NYSE: EMC) Control Center. But he warned that such tools are “extremely complicated and heterogeneity can be hit or miss because they’re always better with their own product.”

When trying to determine what to monitor, Reichman advised a keep-it-simple model. Figure out six areas to report on and you’re likely to have more success than having the tool do everything under the sun. He suggested looking at capacity utilization, general capacity, availability, performance, incident count/response times and provisioning times.
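As a rough illustration of that keep-it-simple model, the sketch below defines one report record per array covering Reichman's six areas and flags arrays running below the utilization band Forrester cites. The field names, sample values and thresholds are assumptions for illustration, not taken from any real reporting tool:

```python
from dataclasses import dataclass

@dataclass
class StorageReport:
    """One reporting period for one array, covering the six suggested areas."""
    array_name: str
    capacity_utilization_pct: float  # used / allocated capacity
    total_capacity_tb: float         # general capacity
    availability_pct: float          # uptime over the period
    avg_latency_ms: float            # performance
    incident_count: int              # capacity incidents
    provisioning_time_hrs: float     # request-to-allocation turnaround

def flag_waste(report: StorageReport, threshold_pct: float = 40.0) -> bool:
    """Flag arrays below the top of Forrester's typical 20-40% utilization range."""
    return report.capacity_utilization_pct < threshold_pct

r = StorageReport("array-01", 32.0, 120.0, 99.95, 4.2, 3, 48.0)
print(flag_waste(r))  # True: 32% utilization is under the 40% threshold
```

The point is not the specific fields but that tracking the same half-dozen numbers consistently over time makes trends, and waste, visible.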

Virtualization’s Role

Server virtualization in many ways complicates the issue of storage utilization, Reichman said, because it “adds a link in the chain,” and when communication is poor, the tendency is to create a template of what all servers will look like from the beginning and then add application data.

“So every virtualization server starts out at the same size, but if you don’t have the tools to manage the capacity you have the potential of allocating more storage and having it sit idle,” he said.

In many cases, IT builds the highest common denominator image because some servers may need 500GB; for the sake of consistency, they make 95 percent of the images bigger than they need to be, which chews up a lot of disk space, he said.

While server virtualization solves some problems, such as a reduction in hardware acquisitions, it may have negative implications on the storage side.

Reichman said companies should keep their storage environment consistent and not have too many vendors and too many different technologies. Building standard configurations for the application groups can also help make the server environment more manageable.

Avoid Backup Overkill

Backup and replication are critical technologies for protecting data and being able to access it so companies can conduct business in the event of a disaster or outage. However, typical storage environments may have as many as 10 copies of the same data — several days of full backups, some snapshots and a fully replicated copy offsite. Because most backup systems have inadequate reporting capabilities, it becomes a challenge for storage administrators to associate applications with their backup jobs and retention schedules.

Auditing backup policies and storage configurations can eliminate unnecessary backup jobs, snapshots, cloning and replication, and can return unused disk or tape media. Reichman said IT can ensure storage is not being wasted by reviewing replication levels to make sure applications are the right size.
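One way to approach such an audit is to tally retained copies per application and flag anything past a copy budget. The sketch below is hypothetical; the job records and the budget of six copies are invented for illustration:

```python
from collections import defaultdict

# Hypothetical inventory of backup jobs, grouped the way an audit would
# need them: by application, with the number of retained copies each job keeps.
jobs = [
    {"app": "erp",  "type": "full",     "copies": 5},
    {"app": "erp",  "type": "snapshot", "copies": 4},
    {"app": "erp",  "type": "replica",  "copies": 1},
    {"app": "mail", "type": "full",     "copies": 2},
]

copies_per_app = defaultdict(int)
for job in jobs:
    copies_per_app[job["app"]] += job["copies"]

COPY_BUDGET = 6  # assumed policy: at most 6 retained copies per application
for app, copies in sorted(copies_per_app.items()):
    if copies > COPY_BUDGET:
        print(f"{app}: {copies} copies retained, exceeds budget of {COPY_BUDGET}")
```

In this invented inventory, the ERP application is carrying ten copies of the same data, the kind of overkill the audit is meant to surface.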

He said companies should also explore alternatives to Fibre Channel, which has traditionally been seen as the only appropriate solution for robust applications. These include iSCSI, another block storage protocol that uses standard Ethernet as a transport and can be an easier path to shared storage; the file protocol NFS, which can also support applications; and direct attached storage (DAS), which can be the lowest-cost option for applications that don’t need shared storage at all.

Saving with Tiering

Lastly, tiering, or information lifecycle management (ILM), is another area Reichman suggests revisiting in a down economy.

“There’s been a fiction in the storage industry that information lifecycle management is a dynamic way to move data around the environment as the value of data changes,” he said.

Companies start out provisioning, and then after the data sits around for two months it gets demoted. The vast majority of storage systems make it difficult to move data; once it gets created, it sits there, Reichman said. As a result, automated tiering generally doesn’t work.

Companies need to think about using performance analytics to identify what data would be better served by a lower-powered solution. They should also give users incentives to select lower-cost solutions so certain data can be placed on middle tiers.

“When people think and talk about tiering, what they can do is identify applications that are over-provisioned,” said Reichman. “You can tier your network, and if you identify applications that don’t need high-performance disks … it’s much cheaper.”

And have a mix of drive types, he said. Any application that can be identified as a good fit for SATA drives, which are slower, denser and consume less power, will be much cheaper on a per-gigabyte basis.
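The performance-analytics approach Reichman describes can be sketched as a simple classification: use observed I/O rates to pick SATA candidates and estimate the per-gigabyte savings. The volume names, IOPS cutoff and per-GB prices below are illustrative assumptions, not vendor figures:

```python
# Hypothetical volumes with observed average IOPS from performance analytics.
volumes = [
    {"name": "oltp-db",   "size_gb": 800,  "avg_iops": 9000},
    {"name": "archive",   "size_gb": 4000, "avg_iops": 40},
    {"name": "fileshare", "size_gb": 2000, "avg_iops": 150},
]

SATA_IOPS_CEILING = 200        # assumed cutoff: below this, SATA is a good fit
FC_COST, SATA_COST = 8.0, 2.0  # assumed $/GB for high-performance vs. SATA tiers

savings = 0.0
for v in volumes:
    if v["avg_iops"] <= SATA_IOPS_CEILING:
        savings += v["size_gb"] * (FC_COST - SATA_COST)
        print(f"{v['name']}: candidate for SATA tier")

print(f"Estimated savings: ${savings:,.0f}")
```

Only the busy OLTP database stays on the fast tier; the low-activity archive and file share move down, and the per-gigabyte price gap does the rest.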

Reichman said taking a hard look at these areas can help to make the most of a tight budget, generate tangible savings and meet business goals even when times are tough.
