The data storage industry is in a transition period. In the past, sizing a company’s storage array presented no headaches: data growth was relatively predictable and largely linear. Traditional “scale-up” architectures handled most problems. Need more storage? Just add more racks of disk drives. Sure, the need for capacity was huge, but the planning was pretty straightforward: some basic math on the back of an envelope told storage admins what they’d need to buy.
But those simple days are receding into the past. Factors like cloud computing and virtualization are reshaping data storage infrastructures about as fast as the daily weather map – and just as unpredictably.
Companies facing the bedazzling new world of data storage – heck, it’s called Big Data now – are tempted to throw up their hands and ask, “Can we just start over?” Or the attitude is: okay, we’ll keep all those racks of traditional drives, but we need to tuck them into a larger plan. But what shape should that larger plan take?
What’s trendy these days is the converged storage architecture – combining storage and compute capacity in one entity, and handling sequential and random access in the same system. The old approach of running these functions side by side as separate silos doesn’t offer the same flexibility or scalability.
To talk about these changes, I spoke with Sean Kinney, Director of Product Marketing at HP Storage. Kinney discussed the rise of flash storage, its value and its limitations, and converged storage infrastructures. In his view, when companies rethink their storage needs, they should be aware that “it’s more about architecture than format.” Planning the ideal storage setup is all about “future-proofing,” he says.
My interview with him: