Virtualization was supposed to be the salvation of server inefficiency.
The long-standing practice of dedicating one physical server to each application had left data centers strewn with hundreds of x86 boxes filling row after row of racks. Power and cooling bills soared. Worst of all, those servers typically ran at shockingly low utilization rates – as low as 10 percent.
Server virtualization, courtesy of VMware, saved the day. Suddenly you could put 10 or more virtual machines (VMs) on one physical server, with each VM running its own app. Massive server consolidation followed, and the IT world welcomed VMware with open arms.
But then the company decided to move into storage and provide the ultimate platform for storage virtualization. And that’s where it all started to go wrong, according to Jon Toigo, a consultant with Toigo Partners. He labeled storage the real stumbling block of virtualization and called the concept of virtualized storage into question.
“Virtualized applications run like crap,” said Toigo.
He was equally dismissive of software-defined storage (SDS), believing it to be little more than a marketing term that didn’t really mean anything. In fact, he questioned the logic of moving the intelligence away from the array or the hardware. To his mind, SDS and VMware are placing enormous demands on storage, leading to inefficiency, poor performance and higher costs.
The trouble started, he said, when VMware introduced its vStorage APIs for Array Integration (VAAI), which arrived in 2010 with vSphere 4.1. As the virtualization vendor gained influence, it began to demand that the storage industry comply with its standards rather than the other way around. This introduced non-standard elements into the storage infrastructure, said Toigo. Later releases, such as vSphere Storage Appliance and vSAN, merely continued that trend, he said. The result is cumbersome clustering arrangements to support storage.
“VMware is pushing you to have a three-node storage cluster for each physical server,” said Toigo.
This adds cost: $16,000 to $26,000 per node for licenses and hardware, or roughly $48,000 to $78,000 for a three-node cluster. He considers this devolutionary, as the storage is not accessible to non-VMware hypervisors. The result is isolated islands of storage – the very problem the industry was supposed to be moving away from, he said.
He cited an IDC study to support his claims. While 75 percent of x86 servers are virtualized, they don’t appear to be running much: IDC’s numbers show that the remaining 25 percent of non-virtualized x86 servers actually support 79 percent of overall workloads.
Greg Schulz, an analyst with StorageIO Group, advises storage managers to persevere with storage virtualization and learn to make it work.
“Like any technology, virtualization can be a good thing if used in the right ways for the proper situations leveraging applicable skills, experience and advice,” he said. “Used in wrong ways, it can lead to bad experiences.”
Storage Virtualization Approaches
Toigo laid out a chart comparing vendor approaches to virtualization and storage integration. On one side are systems that demand a fixed hardware model: VMware EVO:RAIL, which supports only a single hypervisor workload, and Nutanix and EMC, which support multiple hypervisors. On the flexible hardware side, VMware vSAN is limited to a single hypervisor workload, StarWind supports multiple hypervisors, and DataCore goes a step further by being hardware agnostic, supporting multiple hypervisors and also supporting non-virtualized workloads, according to Toigo.
“DataCore is in the sweet spot of virtualization that runs on any hardware and any hypervisor,” said Toigo. “EMC is any hypervisor, but you have to use their hardware.”
He explained that DataCore’s approach to parallelization of IO is proving successful in unburdening apps and therefore getting VMs to work faster.
Toigo went into some history. The industry had been working diligently on ways to orchestrate multiple CPUs into a parallel architecture for increased processing power. But Intel’s single-core processors, riding Moore’s Law, changed the trajectory and set a new plan of attack. Fortunately, he said, a few people at DataCore remembered how to do parallel I/O.
“While apps may not yet be ready to exploit parallelization, I/O is tailor-made for it,” said Toigo. “Instead of doing I/O operations serially, process them in parallel using an allocation of available logical cores.”
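Toigo’s point can be sketched with a toy example. The snippet below is an illustration of the general serial-versus-parallel idea only, not DataCore’s implementation: independent I/O requests are dispatched across a pool of worker threads rather than one at a time, so total wall-clock time shrinks even though each individual operation is no faster.

```python
# Toy sketch: serial vs. parallel dispatch of independent I/O requests.
# Illustrative only -- this is NOT DataCore's Parallel I/O implementation.
import concurrent.futures
import time

def fake_io(req_id, delay=0.05):
    """Stand-in for a blocking disk read: waits briefly, returns a result."""
    time.sleep(delay)
    return req_id

requests = list(range(20))

# Serial: each operation waits for the previous one to finish.
start = time.perf_counter()
serial_results = [fake_io(r) for r in requests]
serial_time = time.perf_counter() - start

# Parallel: spread the same operations across a pool of worker threads,
# loosely analogous to allocating I/O work to available logical cores.
start = time.perf_counter()
with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
    parallel_results = list(pool.map(fake_io, requests))
parallel_time = time.perf_counter() - start

print(f"serial:   {serial_time:.2f}s")
print(f"parallel: {parallel_time:.2f}s")  # lower wall-clock time
assert serial_results == parallel_results  # same results, same order
```

Because the operations here simply block (as real storage I/O does), a thread pool is enough to overlap them; the per-request latency is unchanged, but throughput scales roughly with the number of workers.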
The result, he said, is among the fastest IOPS measurements ever submitted to the Storage Performance Council (SPC), achieved on low-cost commodity gear and producing the lowest dollars-per-IOPS figure on the market today (the official SPC-1 benchmark result is due out soon).
“With the application of parallel I/O technology to storage, the throughput of virtually all disk and flash will be accelerated at low cost,” said Toigo. “As the number of cores on the die increases, so does the throughput.”
Meanwhile, VMware continues to roll out storage virtualization technology of its own. The latest generation, VSAN 6.0, operates in two modes: all-flash or hybrid. In a hybrid configuration, one SSD fronts up to seven HDDs in a disk group, with the flash used as a read cache; this is said to deliver 40K IOPS per server. An all-flash VSAN is said to deliver 90K IOPS per server. That configuration uses two tiers of SSDs: high-endurance SSDs serving as a write cache, and cheaper read-intensive SSDs serving as a persistent data tier with high read performance.
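The cited figures allow a rough back-of-envelope sizing exercise. The numbers below come from the article; the linear-scaling assumption is a simplification, since real VSAN throughput depends on workload mix, cache hit rates and hardware.

```python
# Back-of-envelope cluster sizing from the per-server figures cited above.
# Assumes linear scaling across hosts -- a simplification, not a VSAN guarantee.
HYBRID_IOPS_PER_SERVER = 40_000     # cited hybrid figure (1 SSD + up to 7 HDDs per disk group)
ALL_FLASH_IOPS_PER_SERVER = 90_000  # cited all-flash figure (two SSD tiers)

def cluster_iops(servers: int, iops_per_server: int) -> int:
    """Naive aggregate IOPS for a cluster, assuming linear scaling."""
    return servers * iops_per_server

# A minimal three-node cluster in each mode:
print(cluster_iops(3, HYBRID_IOPS_PER_SERVER))     # 120000
print(cluster_iops(3, ALL_FLASH_IOPS_PER_SERVER))  # 270000
```

By this naive estimate, a three-node all-flash cluster offers more than twice the aggregate IOPS of a hybrid one, which is the trade-off buyers weigh against the higher cost of the SSD tiers.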
Web hosting company Grass Roots replaced its aging SAN with VSAN. Simon Kearney, the company’s Head of Group Hosting, reports a 5x increase in performance for Web-driven applications.
“We do not have to try to predict future growth and over-buy for a new SAN,” he said. “This saves us money today and into the future as we can easily scale.”