Helping Storage Keep Up With Server Virtualization


Some see data center consolidation as little more than taking a bunch of servers and adding some virtualization software such as VMware (NYSE: VMW), and poof, you have data center consolidation and application virtualization.

I have heard tales of people going from 1,000 servers down to 100 while saying they have no need to change the underlying storage infrastructure. One story I heard was of a site that was using LTO-4 for backup of its consolidated data center while still planning to reuse the 1 Gbit Fibre Channel HBAs and 1 Gbit infrastructure from the old servers. Given that 1 Gbit HBAs are end-of-life, that running LTO-4 over them is untested and unsupported, and that a 1 Gbit link cannot run the tape drive at rate, these folks were in for some trouble.
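As a quick back-of-the-envelope check on that tape example, using my own round numbers rather than figures from the site in question: a 1 Gbit Fibre Channel link carries roughly 100 MB/sec of payload, while an LTO-4 drive wants about 120 MB/sec native to keep streaming.

```python
# Rough rate check (my round numbers, not figures from the article):
# 1 Gbit Fibre Channel carries roughly 100 MB/sec of payload, while an
# LTO-4 drive wants about 120 MB/sec native (more with compressible data).
FC_1GBIT_MB_S = 100       # approximate usable payload of a 1 Gbit FC link
LTO4_NATIVE_MB_S = 120    # LTO-4 native (uncompressed) streaming rate

shortfall = LTO4_NATIVE_MB_S - FC_1GBIT_MB_S
print(f"The link falls ~{shortfall} MB/sec short, so the drive cannot stream.")
```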

Let’s drop down a few levels and look at some of the issues surrounding data center consolidation and the impact it will have on storage architecture. Whenever I hear someone say they plan on consolidating servers and reducing costs, I ask myself if they really know what they are getting into.

Let’s say you have 1,000 servers with 2,000 connections to the SAN for reliability (HBA failover). Take the 1 Gbit FC example from before and assume that each server uses, say, 20 percent of its available storage bandwidth and IOPS. Since many of the 1,000 servers are likely from the early part of the decade, they could be running on 72GB 10K RPM drives, which can do about 100 IOPS and sustain about 67 MB/sec. During this era, many RAID controllers could support 128 outstanding I/O requests and stream at close to the Fibre Channel rate. It should be noted that streaming is far less important than IOPS for most of these types of Windows applications, given how NTFS allocates data.

Let’s look at an example of some of the issues:

Item                                   Old      New      Comments
Number of servers                      1,000    100      10-to-1 reduction
Number of HBA ports                    2,000    200      Still using redundant 2-port HBAs
Bandwidth per port (MB/sec)            100      400      1 Gbit FC vs. 4 Gbit FC
Total bandwidth from servers (GB/sec)  195      78       Significant bandwidth drop
Total storage (TB)                     10       100      10X storage increase
Drive count estimate (RAID-5 4+1)      178      427      72GB 10K RPM vs. 300GB 15K RPM drives
Total drive IOPS                       14,222   51,200   3.6-to-1 difference
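As a rough sanity check, here is a sketch of how figures like those in the table can be derived. The per-drive IOPS values (about 100 IOPS for the 10K RPM drives and about 150 IOPS for the 3.5-inch 15K RPM drives) and the choice to count only the four data drives in each 4+1 group are my assumptions; they happen to reproduce the table's numbers, but the article does not spell them out.

```python
# Back-of-the-envelope reconstruction of the table above.
# Assumptions (mine, not spelled out in the article):
#   - capacity is converted with binary units (1 TB = 1,024 GB)
#   - only the 4 data drives in each RAID-5 4+1 group contribute IOPS
#   - ~100 IOPS per 10K RPM drive, ~150 IOPS per 3.5" 15K RPM drive

def raid5_drives(capacity_tb, drive_gb, data_per_group=4, group_size=5):
    """Drives needed to hold capacity_tb on RAID-5 data_per_group+1 groups."""
    groups = capacity_tb * 1024 / (data_per_group * drive_gb)
    return groups * group_size

old_drives = raid5_drives(10, 72)     # ~178 drives of 72GB
new_drives = raid5_drives(100, 300)   # ~427 drives of 300GB

old_iops = old_drives * 4 / 5 * 100   # ~14,200 IOPS
new_iops = new_drives * 4 / 5 * 150   # ~51,200 IOPS

old_bw_gb_s = 2000 * 100 / 1024       # ~195 GB/sec from 2,000 x 1 Gbit ports
new_bw_gb_s = 200 * 400 / 1024        # ~78 GB/sec from 200 x 4 Gbit ports

print(f"drives: {old_drives:.0f} -> {new_drives:.0f}")
print(f"IOPS:   {old_iops:,.0f} -> {new_iops:,.0f} "
      f"({new_iops / old_iops:.1f}x)")
print(f"bandwidth: {old_bw_gb_s:.0f} GB/sec -> {new_bw_gb_s:.0f} GB/sec")
```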

What's wrong with this picture? I think a lot of things. The total bandwidth from the servers to the storage drops significantly. Since the bandwidth from the servers is limited, the RAID bandwidth will likely not be improved. Even if utilization was low before, say 20 percent of the old configuration's bandwidth (20% of 195 GB/sec is about 39 GB/sec), that same load is now 50 percent of the new configuration's 78 GB/sec of theoretical bandwidth. That is not a sustainable margin.
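A minimal sketch of that headroom calculation, using the bandwidth totals from the table:

```python
# Headroom check using the table's bandwidth totals.
old_total_gb_s = 195          # 2,000 ports at ~100 MB/sec
new_total_gb_s = 78           # 200 ports at ~400 MB/sec
utilization = 0.20            # assumed 20 percent of the old bandwidth

sustained_load = utilization * old_total_gb_s         # ~39 GB/sec
new_utilization = sustained_load / new_total_gb_s     # ~0.50

print(f"The same workload now uses {new_utilization:.0%} of the new fabric.")
```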

The big issue I see is the IOPS performance. The improvement is only 3.6 times, which is just plain scary. Going from 1,000 servers down to 100 means each remaining server will likely generate 10 times the IOPS. Go back to the 20 percent utilization assumption, and remember that CPU power per server has also grown by roughly 10 times, so today's CPUs can issue far more I/O requests than the 2000-vintage CPUs could. Clearly, a 10-times improvement in CPU performance and a 10-times increase in storage capacity are not well served by a mere 3.6-times improvement in IOPS.
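To put numbers on the mismatch, here is a small sketch using the totals from the table; the 10-to-1 consolidation ratio comes from the example above.

```python
# Supply vs. demand for IOPS after consolidation, using the table's totals.
old_iops, new_iops = 14_222, 51_200
old_servers, new_servers = 1_000, 100

supply_growth = new_iops / old_iops            # ~3.6x more IOPS available
demand_per_server = old_servers / new_servers  # each server now hosts ~10x
                                               # the workloads it did before

print(f"IOPS supply grew {supply_growth:.1f}x, while each server "
      f"concentrates {demand_per_server:.0f}x the workload.")
```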

The additional problem I see is that the more requests different applications on a server issue to storage at the same time, the more random the I/O pattern the storage system will see. NTFS does a pretty poor job of allocating data sequentially when multiple applications make requests at the same time. The same is true for every free Linux file system I have looked at, and given that many server virtualization products run under Linux, you need to keep that in mind when developing an architectural plan. Sequential allocation with multiple streams of I/O is a hard problem that file system developers have tried to address for years without much success. The bottom line, based on what I have seen and heard, is that the greater the level of consolidation, the more attention needs to be paid to storage performance, and that means provisioning more IOPS.
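To see why concurrent writers defeat sequential allocation, here is a toy sketch of a naive next-free-block allocator. This is my own illustration, not the algorithm of NTFS or any particular Linux file system: three applications writing at the same time end up with their blocks interleaved, so reading any one file back means a seek for nearly every block.

```python
# Toy illustration (not any real file system's allocator): a naive
# "next free block" policy interleaves blocks from concurrent writers,
# so each file's blocks end up scattered rather than contiguous.

def allocate_interleaved(writers, blocks_per_writer):
    """Round-robin the writers, handing each the next free block number."""
    layout = {w: [] for w in writers}
    next_free = 0
    for _ in range(blocks_per_writer):
        for w in writers:                 # requests arrive interleaved
            layout[w].append(next_free)
            next_free += 1
    return layout

layout = allocate_interleaved(["app_a", "app_b", "app_c"], blocks_per_writer=4)
for name, blocks in layout.items():
    gaps = sum(1 for a, b in zip(blocks, blocks[1:]) if b != a + 1)
    print(f"{name}: blocks {blocks} -> {gaps} seeks just to read it back")
```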

What an Architect Should Do

In the last eight years, we have likely improved CPU performance 10 times. Even if you were using 2.5-inch SAS drives running at 15K RPM, per-drive IOPS performance has increased only a measly 2.5 times, from 100 IOPS to 250 IOPS.

With smaller-capacity 2.5-inch drives, you can make the situation better if you configure them correctly. Using Seagate 73GB 15K RPM 2.5-inch SAS drives, you would need about 1,753 disk drives, which puts the IOPS improvement ratio at roughly 9.86 to 1, compared with 10 to 1 for CPU counts and CPU performance. That is pretty close to the ratio you have for CPU counts, and significantly better than the 3.6 to 1 you get with the 300GB 15K RPM drives, since that configuration has far fewer spindles.
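Here is a sketch of where those figures come from, reusing the RAID-5 4+1 sizing from the earlier example; reading the 9.86-to-1 improvement as the growth in spindle count is my interpretation of the arithmetic.

```python
# Drive count and spindle-count ratio for the 73GB 2.5" option,
# reusing the RAID-5 4+1 sizing from the earlier sketch.
def raid5_drives(capacity_tb, drive_gb, data_per_group=4, group_size=5):
    groups = capacity_tb * 1024 / (data_per_group * drive_gb)
    return groups * group_size

small_drive_count = raid5_drives(100, 73)   # ~1,753 drives of 73GB
old_drive_count = raid5_drives(10, 72)      # ~178 drives in the old config

print(f"{small_drive_count:.0f} drives, "
      f"a {small_drive_count / old_drive_count:.2f}-to-1 spindle increase")
```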

If I were developing a system’s architecture for a virtualized environment, I would first look at five important factors on the current systems:

 

  • What was the utilization on those servers of bandwidth to storage?
  • What was the utilization on those servers of IOPS to storage?
  • What was the bandwidth available from the servers to storage?
  • What were the total IOPS available from storage to the servers?
  • What is the underlying file system and how does the file system deal with multiple streams of data being written at the same time?

The fifth point is the crux of the problem. If you do not understand how badly a file system is going to allocate data under these multiple I/O streams, it is going to be really hard to determine how many IOPS you will need.

IOPS need to scale with the CPU performance increase. If your CPU performance increases 10 times, then your IOPS to disk need to increase at least 10 times and likely more. The problem is that I/O performance, both IOPS and bandwidth, is not scaling with density improvements. Remember, on the old system you had fewer streams of data going to fewer disk drives, so the file system could potentially allocate data sequentially and reduce the number of seeks and the overhead of rotational latency.

Even though you have effectively kept the same total amount of CPU power, the new system can likely issue far more I/O requests than the old one. The current PCIe 2.0 bus design is much faster than the older PCI buses; what I found during that earlier period is that most PCI buses could not run anywhere near their rated performance, and they did not have to, since 1 Gbit FC was the fastest connection available. A PCIe 2.0 bus might have as much as 30 times the bandwidth of the older PCI bus in the old system. Having the ability to issue far more I/O requests might result in a larger number of seeks and more rotational latency at the drive level compared with the older system.
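As a rough check on that bus comparison, here is the arithmetic with my own round numbers: classic 32-bit/33 MHz PCI peaks around 133 MB/sec shared across the bus, while PCIe 2.0 moves roughly 500 MB/sec per lane per direction.

```python
# Rough bus-bandwidth comparison (my figures, not from the article):
# classic 32-bit/33 MHz PCI peaks around 133 MB/sec shared across the bus,
# while PCIe 2.0 moves ~500 MB/sec per lane per direction.
pci_mb_s = 133
pcie2_x8_mb_s = 8 * 500          # an 8-lane PCIe 2.0 slot

print(f"PCIe 2.0 x8 is roughly {pcie2_x8_mb_s / pci_mb_s:.0f}x "
      f"the bandwidth of legacy PCI")   # ~30x
```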

As an architect consolidating and virtualizing the data center, you cannot just take a spreadsheet, divide by ten here and multiply by ten there, and get to where you need to be for performance. You need to think about all the parts in the configuration and recognize that some of them have not scaled at the same ratios as CPUs and density. When architecting and virtualizing an environment, you need to consider the whole data path, including how the file system allocated data on the old system and how it might behave on the new one, given the potential for a much larger number of random I/O requests.

 

Henry Newman, a regular Enterprise Storage Forum contributor, is an industry consultant with 28 years of experience in high-performance computing and storage.