Storage Virtualization Has Room to Grow

SAN JOSE, Calif. — Server virtualization, the consolidation of multiple physical servers onto a single powerful machine, has been growing like gangbusters in major enterprises. So it stands to reason that storage virtualization could follow the same pattern, given the exploding universe of user-generated digital content.

Storage virtualization was the topic of several session tracks here at the IDC Directions conference, the bi-coastal industry briefing hosted by the research firm. The technology is typically used in a storage area network (SAN) to pool a number of storage devices so they can be managed as if they were a single device. Storage virtualization abstracts the physical location of the data, allowing it to reside anywhere on the network.

Data appears in a logical storage space, and the virtualization system handles mapping it to its actual physical location. This is very different from server virtualization, which is done primarily to consolidate a number of servers onto one machine to improve utilization.
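To make that indirection concrete, here is a minimal sketch of a virtualization layer that maps logical block addresses to physical (device, block) locations. The class and device names are hypothetical and purely illustrative; real SAN virtualization adds striping, replication, and online migration on top of this basic mapping.

```python
# Minimal sketch of logical-to-physical block mapping, the core idea behind
# storage virtualization. All names are illustrative, not any vendor's API.

class VirtualVolume:
    """Presents one contiguous logical address space backed by many devices."""

    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.mapping = {}        # logical block number -> (device_id, physical block)
        self.free_extents = []   # (device_id, physical block) pairs available for allocation

    def add_device(self, device_id, num_blocks):
        """Register a physical device's blocks as available capacity."""
        self.free_extents.extend((device_id, pb) for pb in range(num_blocks))

    def write(self, logical_block, data):
        """Place data wherever there is free space; the host never sees where."""
        if logical_block not in self.mapping:
            if not self.free_extents:
                raise RuntimeError("volume is full")
            self.mapping[logical_block] = self.free_extents.pop()
        device_id, physical_block = self.mapping[logical_block]
        print(f"write {len(data)} bytes -> device {device_id}, block {physical_block}")

    def read(self, logical_block):
        """Resolve a logical address to its current physical location."""
        device_id, physical_block = self.mapping[logical_block]
        print(f"read logical block {logical_block} <- device {device_id}, block {physical_block}")


# Usage: two physical arrays appear to the host as one volume.
vol = VirtualVolume()
vol.add_device("array-A", num_blocks=1024)
vol.add_device("array-B", num_blocks=1024)
vol.write(0, b"hello")
vol.read(0)
```

Because the host only ever addresses logical blocks, the mapping table can be rewritten behind the scenes, which is what enables the migration, replication, and availability features discussed below.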

Storage and server virtualization are often deployed together at a company. IDC’s research found that 85 percent of firms enabling storage virtualization also adopt server virtualization, according to Rick Villars, vice president of storage system research for the firm.

“But they are done in silos, so no one can use the two in combination due to limitations of the hardware and the APIs,” he added.

Storage virtualization is often geared toward dynamic storage, disaster recovery and planned downtime, all designed to improve uptime and data availability. But, Villars added, storage virtualization isn’t just for large enterprises anymore. Increasingly, it is becoming a tool behind services aimed at individuals and a wide range of consumer-facing applications.

“If you’re Flickr, people expect you to protect their photos forever. If they log in two years from now and they aren’t there, they’re going to be upset. Who’s protecting that?” said Villars.

This is bringing about the rise of what he called “content depots,” like Flickr, YouTube and Google. Villars predicted that sites like those will consume 25 times as much storage space by 2010 as they do now — great news for the likes of EMC.

“We have some companies installing 100 terabytes a day, while others are tearing out 100 terabytes a week. A petabyte is now the entry point,” he said.

Article courtesy of Internet News
