Top 5 Tips for Effective Storage Management


Managing the data deluge requires a combination of talent and technology. Enterprises face data volumes that grow roughly 60 percent year over year, which makes the data problem considerably more complex to manage.

To make the situation even more challenging, virtualization technologies and regulatory compliance requirements add further complexity, stretching storage professionals thin and making it harder for them to manage data effectively.

A good storage administrator has probably set storage policies and implemented storage resource management (SRM) tools. A phenomenal storage administrator goes further: connecting the dots, understanding what is driving data growth and integrating capacity planning.

With that in mind, here are five tips every storage administrator can benefit from.

1. Get real

Face it: not all data centers are the same. Every storage environment differs in architecture, needs and size.

Take a moment and ask yourself: When was the last time I really studied my storage environment?

It’s critical to be able to answer questions like: What is driving your data growth? What equipment do you have, and how is it connected? What applications and compute infrastructure depend on your storage environment? Do you have storage policies, and when were they last reviewed?
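A simple way to start is with a point-in-time inventory of how full your volumes actually are. The minimal Python sketch below uses the standard library's shutil.disk_usage to report usage per mount point; the mount points listed are placeholders, so substitute the NAS, SAN or local volumes that matter in your environment.

```python
import shutil

# Hypothetical mount points -- replace with the volumes you care about.
MOUNT_POINTS = ["/", "/var", "/mnt/nas01", "/mnt/vmfs_datastore1"]

def audit(mounts):
    # Header row for a simple fixed-width report.
    print(f"{'Mount':<28}{'Total GB':>10}{'Used GB':>10}{'Used %':>8}")
    for mp in mounts:
        try:
            usage = shutil.disk_usage(mp)   # (total, used, free) in bytes
        except OSError:
            print(f"{mp:<28}{'not mounted here':>28}")
            continue
        total_gb = usage.total / 1e9
        used_gb = usage.used / 1e9
        pct = 100 * usage.used / usage.total
        print(f"{mp:<28}{total_gb:>10.1f}{used_gb:>10.1f}{pct:>7.1f}%")

if __name__ == "__main__":
    audit(MOUNT_POINTS)
```

Run regularly and kept alongside your policies, even a report this simple gives you a baseline for the questions above.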

2. Visibility, visibility, visibility

Regardless of the size of the environment, virtualization in the data center is driving the need for visibility, particularly in the areas of root cause analysis and capacity planning. Accurate visibility lets storage administrators pinpoint specific problems, see where growth is occurring and make informed decisions about what technology should be implemented to make the most of a storage environment.

With the rapid adoption of server virtualization, storage has become a critical linchpin in application performance. In a virtual environment in particular, administrators need to see which virtual machines (VMs) are using which storage so they can head off I/O bottlenecks. End-to-end mapping lets administrators view the data path within a virtual or physical environment, visualizing the VMs connected to each datastore, the shared LUNs and the arrays behind them.
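As a rough illustration of what that mapping looks like, the sketch below traces the VM-to-datastore-to-LUN-to-array path and flags LUNs shared by multiple VMs. It assumes the relationships have already been exported from an SRM or hypervisor inventory tool; every name in it is hypothetical.

```python
from collections import defaultdict

# Hypothetical inventory exports (VM -> datastore, datastore -> LUN, LUN -> array).
vm_to_datastore = {
    "vm-web01": "datastore1",
    "vm-db01": "datastore1",
    "vm-mail01": "datastore2",
}
datastore_to_lun = {"datastore1": "LUN-0007", "datastore2": "LUN-0012"}
lun_to_array = {"LUN-0007": "array-A", "LUN-0012": "array-B"}

def data_path(vm):
    """Return the full path for one VM: (vm, datastore, lun, array)."""
    ds = vm_to_datastore[vm]
    lun = datastore_to_lun[ds]
    return vm, ds, lun, lun_to_array[lun]

# Group VMs by the LUN they ultimately land on; several VMs sharing one
# LUN is a candidate I/O bottleneck worth watching.
vms_per_lun = defaultdict(list)
for vm in vm_to_datastore:
    _, _, lun, _ = data_path(vm)
    vms_per_lun[lun].append(vm)

for lun, vms in sorted(vms_per_lun.items()):
    flag = "  <- shared, watch I/O" if len(vms) > 1 else ""
    print(f"{lun}: {', '.join(vms)}{flag}")
```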

3. Capacity planning’s a saving grace

Bottom line, the only way professionals will be able to make the most of their environments is by integrating capacity planning. Nothing is worse than suddenly realizing you’re out of space and having to get a quick signoff on more storage.

With capacity planning tools, you’ll be able to see how much storage you need, what type of storage you have and how fast it is growing based on previous growth trends. By taking stock of what you have, you’ll be able to see whether you are using storage resources efficiently and where you could be making wiser storage decisions.
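Even without a dedicated tool, the underlying math is straightforward: fit a trend to recent usage and project when the pool runs out. The sketch below does this with a simple linear fit over monthly samples; the capacity and usage figures are made up for illustration.

```python
capacity_tb = 100.0
# Hypothetical end-of-month usage (TB) for the last six months, oldest first.
used_tb = [52.0, 55.5, 58.0, 62.5, 66.0, 70.5]

# Least-squares slope of usage vs. month index = average TB of growth per month.
n = len(used_tb)
xs = list(range(n))
mean_x = sum(xs) / n
mean_y = sum(used_tb) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, used_tb)) / sum(
    (x - mean_x) ** 2 for x in xs
)

remaining_tb = capacity_tb - used_tb[-1]
months_left = remaining_tb / slope if slope > 0 else float("inf")
print(f"Average growth: {slope:.1f} TB/month")
print(f"Estimated months until the pool is full: {months_left:.1f}")
```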

A cheap and fairly easy optimization approach many administrators rely on is thin provisioning, which lets storage administrators allocate capacity to applications only as it is actually needed rather than up front. But users beware: although thin provisioning can save money, tie it back to capacity planning and stay mindful of spikes in storage demand to avoid unexpected shortages.
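One way to stay mindful of those spikes is to watch how far each thin pool is oversubscribed and how much physical capacity it has actually consumed. The sketch below flags pools that cross either threshold; the pool figures, the 3:1 ratio and the 80 percent alert level are illustrative assumptions to tune against your own policy.

```python
# (pool name, physical TB, logically provisioned TB, physically used TB) -- sample figures.
pools = [
    ("pool-prod", 50.0, 140.0, 38.0),
    ("pool-dev", 20.0, 35.0, 9.0),
]

MAX_OVERSUBSCRIPTION = 3.0   # provisioned-to-physical ratio you are willing to tolerate
USED_ALERT_PCT = 80.0        # alert when physical utilization crosses this line

for name, physical_tb, provisioned_tb, used_tb in pools:
    ratio = provisioned_tb / physical_tb
    used_pct = 100 * used_tb / physical_tb
    warnings = []
    if ratio > MAX_OVERSUBSCRIPTION:
        warnings.append(f"oversubscribed {ratio:.1f}:1")
    if used_pct > USED_ALERT_PCT:
        warnings.append(f"{used_pct:.0f}% of physical capacity consumed")
    print(f"{name}: {'; '.join(warnings) if warnings else 'ok'}")
```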

4. Reclaim it

Reclaiming assets goes hand-in-hand with capacity planning.

In both virtual and physical storage environments, it’s important to be mindful of the dark storage that may be lurking and taking up valuable capacity. For most environments, SRM tools should help you gain back at least 15 percent of storage (even more in a virtual environment), which translates directly into dollar savings.
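In practice, reclamation starts as a reporting exercise: find allocated capacity that nothing is using. The sketch below flags LUNs with no host mapping or no recent I/O as candidates for review before anything is actually reclaimed; the LUN list, dates and 90-day idle threshold are all hypothetical.

```python
from datetime import date, timedelta

IDLE_DAYS = 90
today = date(2011, 6, 1)   # fixed 'today' so the sample output stays deterministic

# (LUN, size in GB, hosts it is mapped to, date of last observed I/O or None) -- sample data.
luns = [
    ("LUN-0007", 500, ["esx01", "esx02"], date(2011, 5, 30)),
    ("LUN-0019", 750, [], None),
    ("LUN-0023", 250, ["esx03"], date(2010, 12, 1)),
]

reclaimable_gb = 0
for lun, size_gb, hosts, last_io in luns:
    idle = last_io is None or (today - last_io) > timedelta(days=IDLE_DAYS)
    if not hosts or idle:
        reason = "unmapped" if not hosts else f"no I/O since {last_io}"
        reclaimable_gb += size_gb
        print(f"{lun}: {size_gb} GB candidate for reclamation ({reason})")

print(f"Potential reclaim: {reclaimable_gb} GB")
```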

5. Disaster strikes, I have a plan for that

Disasters are always looming and should be in the back of a storage administrator’s mind – whether it’s a power failure, a bottleneck or any other unexpected situation.

With technologies like data replication and snapshots, companies can capture point-in-time copies of business-critical data, which provides greater insight and gives them a known-good baseline of the environment from before the disaster.

One of the top tips I offer colleagues is to monitor backups and disaster recovery. As with visibility, it’s always smart to monitor backups so that when you’re left in the unfortunate situation of a failure, you can be confident your data has actually been backed up and can be restored.
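Monitoring can be as simple as checking when each backup job last succeeded against the recovery point you have promised the business. The sketch below flags stale jobs; the job names, timestamps and 24-hour RPO are assumptions for illustration.

```python
from datetime import datetime, timedelta

RPO = timedelta(hours=24)          # how old a "last good backup" is allowed to be
now = datetime(2011, 6, 1, 8, 0)   # fixed check time so the sample output stays deterministic

# Last successful completion time per backup job -- sample data.
last_success = {
    "exchange-full": datetime(2011, 5, 31, 23, 30),
    "sql-prod-diff": datetime(2011, 5, 30, 2, 15),
    "fileserver-inc": datetime(2011, 6, 1, 1, 0),
}

for job, finished in sorted(last_success.items()):
    age = now - finished
    status = "OK" if age <= RPO else f"STALE ({age} since last success)"
    print(f"{job:<16} {status}")
```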

The bottom line – What this means for me

Companies need to keep a close eye on their IT environments and leverage the tools that provide the insight to keep storage running smoothly. Have a storage plan that catalogs all of your assets and lets you optimize your storage while generating cost savings.

Sanjay Castelino is the vice president of product marketing and product management at SolarWinds, an IT management software provider based in Austin, Texas. Sanjay can be reached at sanjay.castelino@solarwinds.com.
