I've never been a big fan of storage hardware virtualization, which often means virtualizing blocks. This is completely different from server virtualization with the likes of Citrix's (NASDAQ: CTXS) XenServer and VMware's (NYSE: VMW) products, which has its own set of challenges.
First, let me define what I mean by storage virtualization, as there are many different definitions; some veterans I know still consider the file system itself to be virtualization, since they would just as soon write to raw blocks. Virtualization, to me, means that the number of blocks and their location are controlled and managed by something other than the file system. Basically, the file system sees a LUN and a set of block addresses, but those addresses might be managed by something else: a separate device (an out-of-band or in-band storage appliance) or software (there are all types of software that virtualize storage, from HSM to LUN virtualization).
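To make that indirection concrete, here is a minimal sketch, assuming a hypothetical virtualization layer (not any vendor's actual implementation): the file system addresses blocks 0..N-1 on a virtual LUN, while the layer decides which physical device and block each address really lands on. All class and device names are illustrative.

```python
# Hypothetical sketch of block-level virtualization: the file system sees
# only virtual block addresses; a mapping layer owns the physical locations.

class VirtualLUN:
    def __init__(self, size_blocks):
        self.size_blocks = size_blocks
        # virtual block -> (physical device, physical block); filled lazily,
        # so unwritten blocks consume no physical space (thin provisioning)
        self.map = {}

    def resolve(self, vblock):
        """Return the physical location of a virtual block, or None."""
        if not 0 <= vblock < self.size_blocks:
            raise IndexError("virtual block out of range")
        return self.map.get(vblock)

    def write(self, vblock, allocator):
        """On first write, ask the pool for a physical block anywhere."""
        if vblock not in self.map:
            self.map[vblock] = allocator()
        return self.map[vblock]


def make_allocator(devices):
    """Toy allocator: round-robin physical blocks across back-end devices."""
    state = {"i": 0, "next_block": {d: 0 for d in devices}}
    def alloc():
        dev = devices[state["i"] % len(devices)]
        state["i"] += 1
        blk = state["next_block"][dev]
        state["next_block"][dev] += 1
        return (dev, blk)
    return alloc

lun = VirtualLUN(size_blocks=1024)
alloc = make_allocator(["array-A", "array-B"])
print(lun.write(0, alloc))   # ('array-A', 0)
print(lun.write(7, alloc))   # ('array-B', 0): adjacent virtual blocks can land on different arrays
print(lun.resolve(1))        # None: never written, so no physical space used
```

The point of the sketch is the last three lines: the file system's view (a flat range of addresses) tells you nothing about where the data physically lives, which is exactly the property that causes the troubleshooting problems discussed below.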
Before I start with the problems I see with storage virtualization, let me state up front that there are many environments where storage virtualization can help while reducing costs. These environments generally have low performance requirements and can often support applications with high latencies. For example, if you are virtualizing a Web application and you increase latency to storage by 20 milliseconds and reduce bandwidth by 20 percent, it will often make little difference, given the latency and bandwidth already present between the storage and the user over the internet.
On the other hand, if you are running storage virtualization over a local high-speed, low-latency network, the increase in latency and reduction in performance could be catastrophic to the user community. If these two examples sound a bit like the choice between SAN (low latency and high performance) and NAS (higher latency and lower performance), you are correct: these are some of the tradeoffs.
I have two areas of concern about virtualizing a large storage environment:
- Monitoring and troubleshooting performance in a virtualized environment
- Migration of your virtualized environment to a different vendor
If you do not have demanding performance requirements, virtualization might make your life easier for a time; how long depends on which virtualization vendor you choose and how long you plan to stay with that vendor. Virtualization can simplify your life, but only up to a point.
The whole point of virtualization is to give the administrator the ability to provision storage as needed. That often means allowing file systems to grow and shrink based on the availability of storage space, and allowing the administrator to provision storage from a pool of LUNs without knowing where they are or sometimes even how big they are.
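A short sketch of that pool-based provisioning model, under stated assumptions (the pool, LUN names, and sizes are all hypothetical): the administrator asks the pool for capacity, and the pool carves it from whichever back-end LUNs have free space, without the administrator ever choosing or even seeing them.

```python
# Illustrative capacity pool: grow a file system from whatever back-end
# LUNs have space. The admin requests gigabytes, not specific LUNs.

class CapacityPool:
    def __init__(self, backend_luns):
        # backend_luns: {lun_name: free_gb}; sizes are opaque to the admin
        self.free = dict(backend_luns)

    def grow(self, gb_needed):
        """Carve gb_needed from LUNs with free space; return the extents used."""
        extents, remaining = [], gb_needed
        for lun, free_gb in self.free.items():
            if remaining == 0:
                break
            take = min(free_gb, remaining)
            if take:
                self.free[lun] -= take
                extents.append((lun, take))
                remaining -= take
        if remaining:
            raise RuntimeError("pool exhausted")
        return extents

pool = CapacityPool({"lun0": 50, "lun1": 30, "lun2": 100})
# Grow a file system by 70 GB; the pool decides which LUNs supply it.
print(pool.grow(70))   # [('lun0', 50), ('lun1', 20)]
```

Note that the result spans two LUNs the administrator never named, which is the convenience being sold, and also the loss of locality knowledge discussed next.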
As I've said before, there is a lack of communication from the application to the storage device. Applications cannot tell storage what performance they require, and administrators cannot declare that a given user's performance requirements must be met and have the storage controller queue that application's requests at a higher priority, much less exert that control for a user on another system. A number of vendors claim they can prioritize performance, and within a homogeneous environment a vendor might be able to address this type of situation, but there is no standard to allow this prioritization, and the whole idea of provisioning performance is not part of the SCSI standard.

In the fixed-LUN world we live in today, you assign a LUN from your storage controller to a server, and it is used by a file system. The location of that LUN is known, the locations of the other LUNs on the storage system are known, and if you have a performance issue you can track it down in a reasonable amount of time, because you know where your LUN is and what else is using the storage controller, and potentially even the same disk drives, since the RAID group could have been divided into multiple LUNs. Whatever the case, if you have a problem, knowing where the LUN is and what else could be affecting it is a good thing.
Using the same example with a virtualized LUN, you would first need to find where the LUN is and what is contending with it. With some vendor implementations, LUNs are dynamically reconfigured for space based on usage, so the LUN might move around, and the contention might become effectively random, driven by the usage and needs of other systems. Without really good tools that provide both real-time and historical information, you will never know what the problem was.
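The troubleshooting problem can be sketched in a few lines. This is an illustrative model, not any vendor's actual behavior: a background rebalancer periodically moves a virtual LUN's blocks between back-end RAID groups, so the answer to "what shares my spindles?" changes over time, and only a recorded history of the mapping can answer it for yesterday.

```python
import random

# Illustrative model of dynamic remapping: without historical snapshots of
# the virtual-to-physical mapping, yesterday's contention source is unknowable.

random.seed(42)  # deterministic for the example
DEVICES = ["raid-group-1", "raid-group-2", "raid-group-3"]

def snapshot(mapping):
    """Which back-end devices hold any of this LUN's blocks right now."""
    return sorted(set(mapping.values()))

# Initial placement of 8 virtual blocks across the RAID groups.
mapping = {blk: random.choice(DEVICES) for blk in range(8)}
history = [snapshot(mapping)]

for _ in range(3):  # background rebalancing moves one block each pass
    blk = random.randrange(8)
    mapping[blk] = random.choice(DEVICES)
    history.append(snapshot(mapping))

for t, devices in enumerate(history):
    print("time", t, "->", devices)
```

Each printed snapshot is a different answer to the same question at a different time, which is why the article argues that real-time monitoring alone is not enough: you need the history.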
Any type of storage migration today is complex work. There are vendors that claim ease of migration from one storage environment to another, and for many simpler environments migration is fairly easy, but even when it is easy it is not cheap. If you have a virtualized environment, there is another level of indirection between the file system and where the blocks of storage really are. The migration hardware and software must work together with the new virtualization environment to migrate your old environment to the new one.
I have seen problems with this migration with a few customers; it does not always go as smoothly as some vendors claim. Besides the migration process itself, you also need to consider how much longer migration will take compared with simply replacing current hardware that is not virtualized. There is really no standard for virtualization; vendors can implement things any way they want as long as they support the standard T10 (SCSI) and T13 (SATA) commands. This presents a potentially big problem when switching from one vendor to another.
Virtualization Has Its Place
With all the concerns I have about virtualization, I still think it has a place in some environments, as long as everyone understands the short-term impact (monitoring and analyzing performance is going to be difficult) and the long-term impact (migration to new hardware is going to require careful planning and might not work seamlessly).
Looking ahead, I wonder if the whole idea of virtualization of blocks might be overcome by new technologies such as ANSI T10 Object Storage Device (OSD) or NFSv4.1 (pNFS), which allow file systems to manage the virtualization of the storage. What will happen when block-based virtualization systems meet T10 object storage? I suspect virtualization systems won't work, because the SCSI commands for object storage are a superset of the standard SCSI commands used today.
I think the jury is still out on whether block-based storage virtualization can meet the requirements of high-performance environments and save you money in the process. As a consultant, I like jobs that are interesting, that force me to become something of a storage detective to determine, for example, why a database search is running slowly, and figuring out where index files really reside in a virtualized storage environment is not high on my list of fun activities.
It can be a tedious, time-consuming process to figure out where the data has moved and where to move it to improve performance. On the other hand, low-performance environments might benefit from the ability to expand and contract as needed, and since these environments are often not too complex, the ability to upgrade hardware in a virtualized environment has some advantages. My question is whether there is a better way to solve the problem that block-based virtualization is trying to solve. I think there are often solutions that might be easier and cheaper than buying block-based virtualization.
Henry Newman, a regular Enterprise Storage Forum contributor, is an industry consultant with 28 years' experience in high-performance computing and storage.
See more articles by Henry Newman.