Do you want to save a little money on storage costs? Do you want to save a lot? You can, by following some simple rules and by ignoring Moore's Law, which left one important variable out of the equation: the Frugality Quotient. These 7 simple rules will reduce your overall storage costs and let you take the law into your own hands.
1. Reclaim Capacity
Often called good housekeeping by system administrators, reclaiming capacity is a constant effort, and not only in user environments but in virtual infrastructures as well. System administrators run periodic checks to find duplicate virtual machine disk files, orphaned virtual machine disk files, and mothballed systems that are out of use and awaiting decommission.
The decommissioning process also helps reclaim storage on a large scale. Systems marked for decommission should have their storage reclaimed early in the process. This constant storage reclamation lowers your storage costs by freeing space that you already own rather than buying new storage.
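Those periodic checks can be automated. Here is a minimal sketch that walks a datastore directory and reports virtual machine disk files that haven't been modified in a long time; the 180-day threshold and the `.vmdk` suffix are assumptions, and anything it finds is a candidate for review with the system's owner, not for automatic deletion.

```python
#!/usr/bin/env python3
"""Sketch: list VM disk files untouched for a long time -- candidates
for review before reclaiming capacity. Threshold and path are assumptions."""
import os
import sys
import time

STALE_DAYS = 180  # review threshold, not an automatic-delete rule


def find_stale_disks(root, suffix=".vmdk", stale_days=STALE_DAYS):
    """Yield (path, age_in_days) for matching files older than the threshold."""
    now = time.time()
    cutoff = now - stale_days * 86400
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(suffix):
                path = os.path.join(dirpath, name)
                mtime = os.path.getmtime(path)
                if mtime < cutoff:
                    yield path, int((now - mtime) // 86400)


if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "."
    for path, age in find_stale_disks(root):
        print(f"{age:>5}d  {path}")
```

Point it at a datastore mount and feed the output into your decommission review rather than into `rm`.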
2. Set User Quotas
Every company runs into the topic of quotas during discussions of runaway storage use. User data grows at a very fast rate, often rivaling that of production systems. User quotas help quell unchecked storage use: a quota is a limit placed on the storage allocated to a particular user.
The best example of user space quotas is email. When you near your quota, you receive a message telling you that you're close to the limit and should move or remove some email. This warning is called a "soft" limit. When you reach your "hard," or absolute, limit, you stop receiving email and must move or remove messages to restore service. Quotas prevent any individual from consuming all available space and halting everyone's productivity by causing an outage.
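The soft/hard semantics described above can be sketched as a small piece of logic. This is illustrative only; real quotas are enforced by the filesystem or mail server (for example, with the Linux `setquota` command), and the limits shown are made-up numbers.

```python
"""Sketch of soft/hard quota semantics. Real enforcement belongs to the
filesystem or mail server; the megabyte figures here are assumptions."""


def quota_status(used_mb, soft_mb, hard_mb):
    """Classify a user's storage use against soft and hard limits."""
    if used_mb >= hard_mb:
        return "blocked"   # hard limit: service stops until space is freed
    if used_mb >= soft_mb:
        return "warning"   # soft limit: nag the user, keep working
    return "ok"


# A 1000MB soft limit with a 1200MB hard limit, for example:
print(quota_status(900, 1000, 1200))   # -> ok
print(quota_status(1050, 1000, 1200))  # -> warning
print(quota_status(1300, 1000, 1200))  # -> blocked
```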
3. Control Storage Growth
Controlling storage growth isn't an easy task, since Moore's Law all but gives license to burn storage at a very high rate (doubling every two years). Moore's Law may be accurate, but it doesn't have to be accepted practice. The best way to control storage growth is through policy: You create standards for every practice that concerns storage. The policy doesn't have to be an elaborate tome, but it needs to exist and be enforced.
Since virtualization gobbles storage, an example policy is that standard virtual machines come with a 40GB C: drive and a 40GB D: drive, and any exception to that standard must go through some form of governance. This type of policy enables storage administrators to predict storage consumption rates accurately.
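A policy like that is easy to express as code. The sketch below checks a VM request against the hypothetical 40GB/40GB standard from the example above and projects consumption for a batch of standard builds; the disk names and sizes are assumptions you would replace with your own standard.

```python
"""Sketch of policy-as-code for the hypothetical VM provisioning standard:
a 40GB C: drive and a 40GB D: drive, with anything else needing an exception."""

STANDARD_DISKS_GB = {"C:": 40, "D:": 40}  # assumed standard build


def needs_exception(requested_disks_gb):
    """Return True when a VM request deviates from the standard build."""
    return requested_disks_gb != STANDARD_DISKS_GB


def projected_storage_gb(vm_count):
    """Predictable consumption: every standard VM costs the same."""
    return vm_count * sum(STANDARD_DISKS_GB.values())


print(needs_exception({"C:": 40, "D:": 40}))  # -> False, standard build
print(needs_exception({"C:": 40, "D:": 80}))  # -> True, route to governance
print(projected_storage_gb(25))               # -> 2000 (25 VMs x 80GB each)
```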
4. End Server Sprawl
Server sprawl is an expensive practice and one that consumes storage at an extremely high rate. Virtual machine server sprawl also consumes licenses, CPU, memory and network resources. Perhaps because it's easy and quick to deploy virtual machines, server sprawl in virtual infrastructures can be rampant. The problem isn't as significant with physical systems because of the expense, provisioning, planning and resources it takes to deploy them. An administrator can create and start a virtual machine in minutes, whereas its physical counterpart can take weeks to trudge through the same process.
The best way to end server sprawl is to use an accounting method similar to that used with physical machines. Yes, part of the joy of virtualization is the speed at which you can deploy new systems, but it's also part of the pain. Agility comes with a price. That price is server sprawl. Require regular system counts and "true-ups" comparing system numbers with purchased licenses.
5. Data Deduplication
Deduplication reduces the amount of data stored in backups and in archival volumes. Removing duplicates extends the life of your current storage capacity, and because less data is written, backups also finish faster. Data deduplication saves money, capacity and time.
If you've never heard of data deduplication or don't know where to start, look at this overview of the subject: Data Deduplication: A Tongue Twister Worth the Effort.
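To make the idea concrete, here is a minimal sketch of block-level deduplication: data is split into fixed-size chunks, each unique chunk is stored once, and each file keeps only a "recipe" of chunk hashes. Real products use variable-size chunking and far more sophisticated indexing; the 4KB chunk size here is an arbitrary assumption.

```python
"""Minimal block-level deduplication sketch: fixed-size chunks, each
unique chunk stored once, files kept as lists of chunk digests."""
import hashlib

CHUNK_SIZE = 4096  # arbitrary chunk size for illustration


class DedupStore:
    def __init__(self):
        self.chunks = {}  # sha256 digest -> chunk bytes (stored once)

    def write(self, data):
        """Store data; return the list of chunk digests (the "recipe")."""
        recipe = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # duplicates stored once
            recipe.append(digest)
        return recipe

    def read(self, recipe):
        """Reassemble the original data from its recipe."""
        return b"".join(self.chunks[d] for d in recipe)


store = DedupStore()
data = b"A" * 8192 + b"B" * 4096  # two identical "A" chunks plus one "B" chunk
recipe = store.write(data)
print(len(recipe), "chunks referenced,", len(store.chunks), "chunks stored")  # 3 referenced, 2 stored
assert store.read(recipe) == data
```

The savings show up whenever the same chunks recur, which is exactly what happens across nightly backups of mostly unchanged systems.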
6. Match Data to Storage Tier
If you must save every bit of data, at least move some of it to less-expensive storage. Data that's rarely accessed, but that you're required by law to keep, can reside on inexpensive SATA drives. You don't want to fill your expensive solid state drives (SSDs) or 15,000rpm disks with 10-year-old financial data, log files or archived email.
Keep your production data and mission-critical systems on the expensive and highly redundant storage (fast and deep). For everything else, store it slow and cheap.
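A simple age-based rule is often enough to start tiering. The sketch below assigns a tier by how long ago a file was last modified; the tier names and the 30-day and 365-day thresholds are assumptions, policy choices rather than product features.

```python
"""Sketch of matching data to a storage tier by age. Tier names and
age thresholds are assumptions -- adjust to your own policy."""
import os
import time


def pick_tier(path, now=None):
    """Return a tier name based on how long ago the file was modified."""
    now = now or time.time()
    age_days = (now - os.path.getmtime(path)) / 86400
    if age_days < 30:
        return "ssd"       # hot: production and mission-critical data
    if age_days < 365:
        return "nearline"  # warm: occasionally accessed
    return "archive"       # cold: compliance data on the cheapest tier
```

A nightly job could walk each volume, call `pick_tier`, and queue moves for anything sitting on a more expensive tier than the policy says it deserves.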
7. Use Cloud Storage
What if you didn't store your data at all? Cloud vendors provide safe and inexpensive methods for storing data. Amazon's Simple Storage Service (S3) is the most notable solution. Several personal and small business cloud storage vendors use Amazon's S3. You can use S3 directly without the involvement of a third party. Amazon offers highly durable storage and pay-as-you-go pricing that's excellent for backup, archiving and disaster recovery solutions. Lower your ever-expanding storage budget and hassle by letting Amazon handle it for you.
An even more compelling storage option from Amazon is Reduced Redundancy Storage (RRS), a storage class for non-critical data. For example, if you need secondary storage for your archived data, RRS is a good choice because of its lower cost. With that lower cost, however, comes lower assurance: 99.99 percent durability. Although this "four nines" of durability is low compared to Amazon's standard eleven nines (99.999999999 percent), you aren't likely to experience any great losses from it.
Ken Hess is a freelance writer who writes on a variety of open source topics including Linux, databases, and virtualization. He is also the coauthor of Practical Virtualization Solutions, which was published in October 2009. You may reach him through his web site at http://www.kenhess.com.