FalconStor’s Lallier agrees that scalability is crucial to any storage strategy, although he says he’s not sure that the integration always needs to be seamless. Nor are transactions per second always the crucial factor, he continues. “It depends on what the storage system is used for — as ideally a storage strategy would take into account the data requirements of the various users/systems and have different categories with different characteristics.”
Nexsan’s Lauffin says it’s a simple question of logic. “Seamless integration has always been and will only become more of a decision point for purchase by all end users,” he says. Manufacturers that continue to produce solutions that cannot operate transparently will lose business to manufacturers whose solutions can, he believes, and “the winner by default is the end user.”
Sante agrees 100 percent, because 90 percent of today’s storage is still internal disk within a single server or is direct-attached (DAS). But he adds that in the future the majority of storage systems will be SAN-connected, and instead of taking I/O from a single application server, a system will be responsible for servicing I/O for 4, 10, or even 20 application servers.
“If a storage system cannot maintain a minimum of 15K I/O operations per second (IOPS) and 100 MB/sec of throughput, then it will become the bottleneck — no one buys bottlenecks,” says Sante.
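Sante's point can be illustrated with a rough back-of-the-envelope check (the per-server demand figures below are hypothetical, not from the article; only the 15K IOPS and 100 MB/sec floors are his):

```python
# Sketch of Sante's argument: a shared SAN array must absorb the combined
# I/O of every attached application server, or it becomes the bottleneck.

ARRAY_MAX_IOPS = 15_000   # Sante's suggested minimum for a shared array
ARRAY_MAX_MBPS = 100      # his suggested minimum sustained throughput (MB/sec)

def is_bottleneck(servers, iops_per_server, mbps_per_server):
    """Return True if aggregate demand exceeds what the array can service."""
    total_iops = servers * iops_per_server
    total_mbps = servers * mbps_per_server
    return total_iops > ARRAY_MAX_IOPS or total_mbps > ARRAY_MAX_MBPS

# One DAS-style server rarely stresses the array ...
print(is_bottleneck(1, 1_200, 8))    # False
# ... but 20 servers sharing the same array easily can (24,000 IOPS, 160 MB/sec).
print(is_bottleneck(20, 1_200, 8))   # True
```

The same array that is comfortably oversized for a single directly attached server is undersized once it services the aggregate load of a 20-server SAN, which is the shift Sante is describing.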
Archiving’s biggest impact will be on application performance
Other industry analysts believe the most significant result of archiving will be improved application performance. Lallier believes this is very likely, as it can be clearly defined and implemented: archiving older, rarely used data can free up valuable, fast storage devices for database and messaging applications, he says.
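The policy Lallier describes is typically age-based: files not touched within some window are candidates to move off fast primary storage. A minimal sketch, assuming a last-access-time threshold (the 180-day figure and function names are illustrative, not from the article):

```python
import os
import time

# Assumed policy threshold: archive anything untouched for 180 days.
ARCHIVE_AFTER_DAYS = 180

def files_to_archive(directory, now=None):
    """List files whose last access time is older than the policy threshold.

    These are candidates to migrate from fast primary storage to a
    cheaper archive tier, freeing the primary tier for active data.
    """
    now = now if now is not None else time.time()
    cutoff = now - ARCHIVE_AFTER_DAYS * 24 * 3600
    candidates = []
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if os.path.isfile(path) and os.path.getatime(path) < cutoff:
            candidates.append(path)
    return candidates
```

A real hierarchical storage management product would add version control, stub files, and recall on access, but the selection step reduces to a policy check like this one.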
Lauffin disagrees entirely. He says there is very little room for improvement in this area beyond the direction and cost reductions of technologies already in place, and what room remains hardly matters, because this is not the right approach to delivering a better solution for the end user.
“Archiving in its most basic definition means that these are files that are seldom if ever accessed, so speed is of little concern. And just putting a faster engine in a car is not the answer to significantly impacting an end user's life,” says Lauffin.
“If the cost is low enough,” he continues, “then once I write my data, it is already archived. So if we want to talk about the opportunity for a software application to more effectively move data to a secondary location and then use that data with version control, etc. to be used as both DR and archive, I would agree.”
Sante also disagrees, saying he’s not sure why this would be the most significant result, since most applications are slow because of client load, LAN traffic, security policies, and storage I/O performance. “I've never heard a user tell me their application would run faster if they could only archive more,” says Sante.
This is Part I of a two-part article. Part II will address the following predictions:
Users will not be able to automate storage management until they build dedicated storage management teams
Through 2006, e-mail archiving products will dominate the overall archiving market
Through 2005, storage virtualization will not improve storage utilization
By 2006, storage area network (SAN) management functions will be embedded as part of storage element managers and storage resource management tools
See All Articles by Columnist Leslie Wood