Growing complexity and soaring data demands make it inevitable that storage users will eventually have to try new approaches to manage it all. The trick is knowing when — and to which technology — to switch.
Brian Biles, vice president of marketing at Data Domain, suggests looking at history for clues. At what point would a decision to change underlying architectures have led to a more productive, responsive and cost-effective enterprise? At what point did it no longer make sense to stick with mainframes and ignore PCs and client/server architectures? At what point did ignoring Internet and networking technologies become foolish?
Telling the future is largely a guessing game, says Biles, but looking at the past offers illuminating examples.
Whenever a new IT architecture is introduced, a new set of server and storage building blocks emerges to optimize the deployment of that architecture, says David Scott, president and CEO of 3PAR. In mainframe computing, the storage building block that emerged was the monolithic, shared-cache array. In distributed computing, it was the dual-controller modular array.
According to Scott, the third wave of IT architecture, utility computing, is now gaining momentum.
“It allows customers to achieve more with less, on demand, by leveraging server, network and storage virtualization,” he says.
Scott believes that a new building block for storage has emerged in the form of utility storage. Utility storage, he says, is built on a unique n-way, clustered-controller architecture with fine-grained virtualization and centralized volume management. The result, he says, is a system designed from the ground up to deliver a simple, efficient, and massively scalable tiered-storage array for utility computing.
The Future of Data Classification
Scott says the future isn’t about dynamic or static data, but the ability to handle ever-growing amounts of both as simply and efficiently as possible. “This inevitably leads to the need for solutions that use automation, policy, and dynamic data movement to optimize the cost and quality of service associated with the data over its lifecycle,” he says.
The challenge, according to Scott, is to achieve this without the need for broad data classification; he fears universal classification will prove pragmatically impossible to achieve. “Individuals just won’t want to do it, and even if they do, they are likely to classify incorrectly,” he predicts.
“The more challenging problem is that static data can become active and vice versa, so the infrastructure needs to adapt to changing access and usage while optimizing for cost,” says Biles. According to Biles, moving data on demand between performance-optimized and capacity-optimized storage will be a dynamic and challenging area.
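To make the idea concrete, here is a minimal sketch of the kind of policy-driven tiering loop Biles and Scott describe. The tier names, the 30-day access counter, and the hot_threshold policy knob are all illustrative assumptions, not features of any particular product.

```python
from dataclasses import dataclass
from typing import Dict

# Hypothetical tier names, used purely for illustration.
PERFORMANCE_TIER = "performance"   # e.g., a fast, cache-heavy array
CAPACITY_TIER = "capacity"         # e.g., dense, lower-cost disk

@dataclass
class DataObject:
    name: str
    tier: str
    accesses_last_30_days: int = 0

def apply_tiering_policy(objects: Dict[str, DataObject], hot_threshold: int = 50) -> None:
    """Promote busy objects to the performance tier and demote idle ones
    to the capacity tier. The window and threshold are assumed policy knobs."""
    for obj in objects.values():
        if obj.accesses_last_30_days >= hot_threshold:
            obj.tier = PERFORMANCE_TIER   # static data that has become active again
        else:
            obj.tier = CAPACITY_TIER      # active data that has gone cold

if __name__ == "__main__":
    volumes = {
        "q3_reports": DataObject("q3_reports", CAPACITY_TIER, 120),
        "old_archive": DataObject("old_archive", PERFORMANCE_TIER, 2),
    }
    apply_tiering_policy(volumes)
    for v in volumes.values():
        print(v.name, "->", v.tier)
```

In practice the access statistics would come from the array or a storage resource management tool rather than a counter, but the point is the same: the policy, not an administrator, decides where each piece of data lives at any moment.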
One challenge that won’t go away is the ever-growing demand for more cost-effective and efficient storage systems to manage all that data.
According to Biles, continued improvements in disk technology are what keep costs from escalating. Historically, he says, storage budgets grow about 10 percent annually, while the price per TB of external storage arrays declines at a 40 percent annual rate.
“Without that price decline, users would need to grow their budgets dramatically to support their current growth in applications. That would meet significant corporate resistance,” he says.
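Taking those two figures at face value, a quick back-of-the-envelope calculation shows why the price decline carries so much weight. The 10 percent budget growth and 40 percent per-TB price decline are the only inputs drawn from Biles’s comments; the normalized starting values are assumptions.

```python
# Back-of-the-envelope check of the budget and price-decline figures quoted above.
budget_growth = 1.10        # storage budgets grow ~10% per year
price_per_tb_factor = 0.60  # price per TB declines ~40% per year

budget = 1.0        # normalized budget
price_per_tb = 1.0  # normalized price per TB

for year in range(1, 4):
    budget *= budget_growth
    price_per_tb *= price_per_tb_factor
    capacity = budget / price_per_tb
    print(f"Year {year}: affordable capacity = {capacity:.2f}x the starting amount")

# Each year the affordable capacity grows by 1.10 / 0.60, roughly 1.83x,
# i.e., about 83% more TB per year without a dramatic budget increase.
```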
Scott, meanwhile, sees utility storage as an important means of achieving more with less.
New Computing Paradigms Pose Challenge
Some experts believe that, with all the challenges facing the storage marketplace, it may be necessary for end-user and vendor organizations to adopt new computing paradigms and revisit storage architectures to meet future enterprise workload requirements.
“It is definitely going to be necessary for enterprises to account for three of the biggest tremors on the IT landscape and how they affect architectures within a data center and the wide area networks (WANs) for connecting data centers,” says Paul Schoenau, senior product marketing manager at Ciena Corp. Schoenau says those “three tremors” are:
- Service-oriented architectures (SOAs) and Web services, a fundamental shift in the way enterprise software is implemented;
- Networked remote storage for real-time access to geographically separated storage assets, whether to integrate an acquisition, gain a competitive advantage, or simply comply with new government mandates through a robust business continuity and disaster recovery solution; and
- Grid computing.
“We’re going to see how these three tremors, both individually and collectively, are triggering an exponential growth in inter-site traffic,” says Schoenau.
According to Schoenau, applications are increasingly competing for congested WAN resources, and enterprises may not see their investments in IT hardware and software deliver the expected benefits.
Schoenau says enterprises will need to react to these changes. “I expect they’ll react as they have in the past: prioritize high-priority traffic and add bandwidth incrementally,” he says.
While this approach may have worked when bandwidth needs were growing linearly, this time around, traffic growth is exponential, so a more proactive approach will be required, he says.
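The sketch below illustrates why incremental bandwidth additions fall behind exponential traffic growth. Every number in it (starting capacity, yearly increment, and growth rate) is an assumption chosen only to show the shape of the curve; none come from Schoenau.

```python
# Illustration only: assumed figures, not data from Ciena or Schoenau.
provisioned = 10.0       # Gbit/s of WAN capacity today (hypothetical)
traffic = 8.0            # Gbit/s of inter-site traffic today (hypothetical)

linear_increment = 5.0   # capacity added each year under the incremental approach
growth_rate = 1.6        # traffic grows 60% per year (assumed exponential rate)

for year in range(1, 8):
    provisioned += linear_increment
    traffic *= growth_rate
    status = "OK" if provisioned >= traffic else "congested"
    print(f"Year {year}: capacity {provisioned:5.1f} Gbit/s, "
          f"traffic {traffic:6.1f} Gbit/s -> {status}")
```

Under these assumptions the incremental approach keeps up for only a year or two before the exponential curve overtakes it, which is the gap a more proactive network architecture is meant to close.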
According to Schoenau, necessary steps may include moving from dedicated single-application networks to a more flexible, adaptable network architecture. In areas where fiber is an economical choice, such as metropolitan areas, enterprises will move from leased connectivity to private fiber-based networks, he predicts.
“Over longer distances, fiber is not usually an economical choice, so enterprises will increasingly rely on networking techniques that have until recently been used mainly for storage system connectivity, where high bandwidth and low latency have been key considerations,” he says.
Regardless of the solutions currently deployed (SAN, NAS, WAFS or DAS), the overall direction of storage management needs to move toward a shared data model, in which information can be accessed by any person, system or function at any time, says Robert Skeffington, solutions specialist at Dimension Data.
According to Skeffington, a system that treats data as the core of the infrastructure, rather than as a byproduct of function, will be critical to eliminating duplication and waste and lowering overall cost.
And those are benefits that end users will increasingly demand over time.