Should you keep running your existing HDDs or should you commit to all-flash throughout your data center?
Solid state storage has risen to dominance, outselling hard drives in the enterprise market. While organizations are opting to buy all-flash or hybrid arrays in preference to disk arrays, that still leaves the thorny question of what to do with all the old stuff: existing hard disk drive (HDD) arrays, NAS filers or even older hybrid arrays.
Assuming the absence of an unlimited budget, how can you maximize existing storage investments, while adding all-flash arrays strategically?
The good news is that there are plenty of ways to eke out more value from older storage hardware. This article offers tips on how to achieve that, advice on what to run on the newest all-flash arrays, how best to make the transition to an all-flash (or mostly flash) future, how to migrate from one medium to another and more.
Jason Nadeau, senior director of products, Pure Storage, offered some straightforward advice for those looking at block storage investments – don’t consider anything but all-flash.
“Any block storage array coming up for refresh should go all-flash,” he said. “Better data reduction combined with current flash economics and a fundamentally upgradeable buying model means it costs less to buy flash than buy disk again, regardless of level of performance need.”
Those holding onto old disk arrays may be concerned about soaring maintenance costs. After all, maintenance and support are rarely cheap. And many vendors try to force customers onto their latest platforms by either bumping up the price of legacy support or discontinuing it completely. Nadeau’s advice is to put existing disk and/or hybrid storage on third-party maintenance.
“This can deliver the same support SLAs as OEM contracts, but reduce costs up to 60 percent,” he said. “That frees up more money for managed data migrations to all-flash, as well as bigger all-flash purchases.”
Many fail to realize that all-flash can be used to consolidate workloads sitting on multiple disk arrays. In addition, virtualization will not only create much more agility for IT and the business, but also facilitate data migrations.
“The best all-flash arrays excel at consolidating workloads, freeing up people and monetary resources and removing complexity,” said Nadeau.
“There may be some workloads that aren't ready to move to flash; mainframe workloads are one example. These typically sit on disk-based arrays alongside other workloads. In such cases, it may be possible to dramatically improve application performance for a given workload by migrating the rest of the workloads on that array onto flash, giving the remaining workload more resources on the original array. Turn existing disk-based consolidator arrays into point application arrays,” said Nadeau.
Jeff Baxter, chief evangelist, ONTAP at NetApp, doesn’t believe that there’s one single “storage media to rule them all.” He is also not in favor of demands to carry out a forklift upgrade to suddenly adopt the “next best thing.”
“As customers adopt all-flash arrays, they should be able to seamlessly add them into hybrid flash and HDD-based storage architectures and not have to re-architect their entire environment or take extensive downtime to do so,” said Baxter. “That leaves the older storage hardware intact to hold applications with lesser performance requirements, or even be repurposed for backup or disaster recovery purposes.”
Kaminario CTO Eyal David sees all-flash as the obvious platform to adopt for new storage and tier 1 applications from a total cost of ownership (TCO) perspective. HDDs, he added, may find some value in tier 2 workloads and archiving. But that ultimately comes with an expiry date. Maybe it's two years, maybe it's five, but the time will come when it makes no economic or performance sense to keep data on legacy platforms.
“The constant push for leveraging the data assets of an organization to create value is accelerating the adoption of flash for all storage tiers,” said David.
Data protection is another smart way to extract value from aging HDD assets. Older storage, for example, can be leveraged for local backup and recovery.
“Once workloads have been migrated off, old arrays can still be useful for data protection,” said Nadeau.
Many IT departments are looking at how to move towards a software-defined future. This cannot be done without also considering solid state arrays (SSAs). After all, SSAs simplify the task of sharing data between many applications and servers without the requirement to move data or storage.
“Software costs and complexity are reduced as vendors have integrated software-defined storage administration, orchestration and provisioning within the features of their storage arrays' controller software, thereby virtualizing, disaggregating and decoupling storage from the server layer,” said Gartner analyst Valdis Filks. “With SSAs being kept for seven years, with all software features included, customers need to spend less time and money on SDS server migration projects and moving data between integrated systems.”
Currently, there is a mass exodus from disk to solid state. This mirrors the shift from tape to disk two decades or so ago. But tape survived — in a minor way as a backup platform, but mostly as a secure and compliant archive. Could it be that tape will continue while disk fades? If so, organizations may be well served to look closely at the vendor roadmaps for their tape resources, even more so than for their HDDs.
“We expect data centers to bifurcate into solid-state (of some kind) for warm/hot data and tape for cold archive data,” said Nadeau. “In the long run, everything will be solid state, but that's probably a ways down the road.”
Despite the promise of all-flash and the fact that prices are falling rapidly, due diligence still applies. For all new storage investments, organizations should make a TCO calculation. The price-per-capacity of all-flash storage is still higher than for hard disk. But that is only one aspect. All-flash provides other savings: lower price per performance, cheaper maintenance (SSDs need to be replaced less often than disks), lower power consumption, less space and lower operational costs (as all-flash arrays require less tuning effort).
But this will vary from environment to environment, and from application to application. In many cases, there may be no long-term TCO justification for buying more HDDs. But that may not be the case every time. So check your own numbers.
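To make "check your own numbers" concrete, here is a minimal sketch of such a TCO comparison. All figures and the data-reduction ratios are hypothetical placeholders, not vendor pricing; the point is only that purchase price per raw terabyte is one input among several.

```python
# Illustrative 5-year TCO sketch. Every number below is a hypothetical
# assumption for demonstration; substitute your own quotes and measurements.

def five_year_tco(price_per_tb, usable_tb, data_reduction,
                  annual_power_cost, annual_maintenance_cost, years=5):
    """Rough TCO: purchase cost of the raw capacity needed to hold
    usable_tb (after data reduction) plus recurring annual costs."""
    raw_tb = usable_tb / data_reduction   # raw capacity actually purchased
    purchase = price_per_tb * raw_tb
    recurring = (annual_power_cost + annual_maintenance_cost) * years
    return purchase + recurring

# Hypothetical inputs: flash costs more per raw TB but benefits from
# stronger data reduction and lower power and maintenance costs.
flash = five_year_tco(price_per_tb=400, usable_tb=100, data_reduction=4.0,
                      annual_power_cost=1_000, annual_maintenance_cost=2_000)
hdd = five_year_tco(price_per_tb=100, usable_tb=100, data_reduction=1.5,
                    annual_power_cost=3_000, annual_maintenance_cost=5_000)

print(f"5-year flash TCO: ${flash:,.0f}")
print(f"5-year HDD TCO:   ${hdd:,.0f}")
```

Under these particular assumptions flash comes out cheaper over five years despite the higher per-terabyte sticker price, but flipping the data-reduction ratio or the recurring costs can flip the result, which is exactly why the calculation has to be run per environment.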
“Just a pure purchase price comparison will not deliver a complete picture,” said Frank Reichart, head of storage marketing at Fujitsu. “During the lifetime of a new system, the prices between SSD capacity and SAS-HDDs will match, so it is generally recommended to go for all-flash in the next investment cycle.”
Many will try to wrap their heads around how all-flash fits in with current storage architectures. But focusing only on today's architectures could be a major mistake. Some attention has to be paid to where storage design is heading.
David is convinced that the storage landscape is radically changing. He believes the market is headed in a direction that requires the infrastructure layer to adapt and respond to rapid and frequent changes in business needs and workloads. While NAND will still be the dominant media for primary storage workloads, the architecture and delivery model is shifting. Storage is gradually becoming composed of data center resource pools driven by analytics and automation-based orchestration and optimization tools.
“Next generation NVMe technologies are finding their place within these large resource pools as specialized tiers,” said David.