Is It Time to Add More Storage?


On any given day, you’ll find system administrators searching every corner of their desks, their data centers, and their secret hardware stashes for more storage, much like junkies trying to score their next drug fix. You’ll hear sweet nothings pour from the mouths of managers in praise of the awesome job the storage guys are doing, and you’ll smell sacrifices of pizza, hot wings, and various baked goods to the SAN gods, only to hear the meek words just above a whisper: “We’re out of space. We need to purchase more.”

We’re all storage junkies, and we all need rehab.

From this point forward, things go awry. "How can we be out of space?" the wary project manager asks. "We purchased 50 TB less than six months ago." Yes, you did purchase 50 TB less than six months ago as part of your physical-to-virtual (P2V) initiative. But, less than halfway into the transition, you're out of space. In fact, the space you have is overprovisioned.

“But, we’re using thin provisioning, we should have plenty of space,” he states emphatically.

Thin provisioning. Yes, another good “in theory” practice that works everywhere except the production data center.

Overprovisioned space is a rampant problem in data centers. Check out any of your virtual cluster datastores and report what you see. You’ll see that almost every LUN is full or overprovisioned. Why? Thin provisioning. Thin provisioning is a great idea if your data never grows or grows so slowly that you’ll never fill up the allotted resources in a system’s expected lifetime. However, to believe that space remains constant is a fallacy that causes more outages than failed hardware. How often do you experience an outage related to filled space due to overprovisioning? Was this overprovisioning related to thin provisioning for your systems? The answer to both questions is likely, “Yes.”
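The check described above comes down to simple arithmetic: add up what the guests believe they own and compare it to what the LUN actually holds. A minimal sketch, with made-up numbers standing in for your datastore inventory:

```python
# Hypothetical sizes (GB) of thin-provisioned virtual disks that all
# live on one datastore, versus that datastore's real capacity.
# The numbers are illustrative, not from any real cluster.
provisioned_gb = [200, 500, 300, 400]  # what the guests were promised
capacity_gb = 1000                     # what the LUN actually holds

ratio = sum(provisioned_gb) / capacity_gb
print(f"Overprovisioning ratio: {ratio:.1f}x")
```

A ratio above 1.0 means the guests have collectively been promised more space than exists; the moment their real usage grows toward their allocations, the datastore fills and systems go down.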

SAN vendors and virtualization software companies claim that overprovisioning is an acceptable practice, and it’s actually a feature of their systems. While some workloads (file services, network services, software repositories) operate well on thin-provisioned surfaces, most do not.

Does this mean thin provisioning is bad, and you should never use it? Certainly not. Thin provisioning from a virtual machine perspective is a bad practice. It leads to overprovisioning and outages. Thin provisioning from the SAN configuration viewpoint is a good practice. It leads to less storage waste and faster expansion as systems request space from the host’s available storage pool.

But, the solution isn’t thin provisioning. The solution is to take a conservative approach to storage provisioning and storage use. System administrators and SAN administrators can expand volumes as needed so overprovisioning is not necessary. The exception to this, however, is the system volume, which in some cases cannot be extended.
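That conservative approach can be reduced to a simple policy: allocate modestly, watch usage, and extend only when a volume genuinely needs it. A sketch of such a policy, with illustrative names and thresholds of my own choosing, not a recommendation from any vendor:

```python
def next_allocation(used_gb: float, allocated_gb: float,
                    grow_threshold: float = 0.8,
                    grow_step: float = 1.25) -> float:
    """Conservative provisioning sketch: extend a volume only when
    usage crosses a threshold, and only by a modest step, rather than
    thin-provisioning a huge allocation up front.
    Thresholds and names here are hypothetical."""
    if used_gb / allocated_gb >= grow_threshold:
        return round(allocated_gb * grow_step)
    return allocated_gb

print(next_allocation(85, 100))  # 85% used: grow to 125
print(next_allocation(40, 100))  # plenty of headroom: stay at 100
```

The point of the sketch is the shape of the decision, not the numbers: growth is a reaction to measured demand, so a volume can never be promised more space than the administrator deliberately granted it.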

The typical organization takes the attitude that there is unlimited storage available, and it requests far more than it will ever realistically use for a system. This storage addiction led vendors to develop thin provisioning and overprovisioning. And rather than curbing the addiction with a conservative approach, organizations simply add more storage and waste it.

But, we’re not totally to blame for our storage addiction. Organizations are conditioned to need bigger, better, faster and more storage. We’re rewarded with larger disks that promise extreme speeds coupled with lower cost and lower power consumption. Operating systems are bloated. File sizes are bloated. Databases have increased in size exponentially. We now employ data warehouses, data marts, and data malls in every aspect of business. We’re information junkies. We’re data junkies. And, we need more space to accommodate our addiction. How long ago were you impressed with a 500MB database? Five years? Now, we hardly flinch at 2TB databases. We’re no longer surprised by bloatware; nor are we impressed with it. Storage is cheap. It’s fast. It’s available. And, you’re entitled to more of it.

We’re storage junkies, and there seems to be no rehabilitation for us in the foreseeable future. We’re happy with our data hoarding. We’re happy with our ever-increasing storage waistlines. We’re happy with our addiction. Now, stop reading this and go allocate another 500GB LUN for me; I’m hurting bad.

Ken Hess is a freelance writer who writes on a variety of open source topics including Linux, databases, and virtualization. He is also the coauthor of Practical Virtualization Solutions, which was published in October 2009.
