During the past three to five years (depending on the industry in question), IT budgets have been modest at best. Far more Band-Aids have been applied to existing systems than shiny new hardware and software purchased. At the same time, not a lot of new stuff has come on the market. The past few years have mainly brought incremental improvements to old hardware and software, and what little new technology was released did not see significant adoption.
A prime example of this is 10 Gbit Ethernet. Its sales trends and pricing, which is generally a function of sales volume, tell us a great deal. 10 Gbit Ethernet pricing has finally dropped, and those (including me) who predicted high adoption and low pricing back in 2007 and 2008 were wrong. It is only now that we may be seeing the light at the end of this very long and deep recession.
So back to forklift upgrades -- I am working with a few organizations that have very old Fibre Channel (FC) disk drives. In one case, I heard of a site that had 146 GB FC drives in use that were released in 2002. The IT staff was also using 300 GB FC drives released in 2004. The site in question was running both of these disks in production and recently had a double failure on a RAID-5 LUN. We all know what that means: the data is gone. You might ask why they were running RAID-5 rather than RAID-6 with drives that old. Well, the RAID controller was so old that it did not support RAID-6.
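To see why old, large drives on RAID-5 are such a gamble, it helps to run the numbers. The sketch below estimates the chance that a RAID-5 rebuild trips over an unrecoverable read error (URE) while reading the surviving drives; the 1-in-10^14-bits URE rate and the 7+1 set size are illustrative assumptions, not figures from the site described above.

```python
import math

# Illustrative sketch: a RAID-5 rebuild must read every bit of every
# surviving drive, and a single unrecoverable read error (URE) during
# that read loses data. RAID-6 would survive that same error.
# The URE rate (1 per 1e14 bits) and 7+1 set size are assumptions.

def p_ure_during_rebuild(drive_bytes, n_surviving, ure_per_bit=1e-14):
    """Probability of at least one URE while reading all surviving drives."""
    bits_read = drive_bytes * 8 * n_surviving
    # 1 - (1 - p)^n, computed in log space for numerical stability
    return -math.expm1(bits_read * math.log1p(-ure_per_bit))

# 300 GB drives in a hypothetical 7+1 RAID-5 set: the rebuild reads
# 7 full drives' worth of bits.
print(f"{p_ure_during_rebuild(300e9, 7):.1%}")  # -> 15.5%
```

Even with these modest drive sizes, roughly one rebuild in six or seven fails outright; with today's multi-terabyte drives the math gets far worse, which is exactly why RAID-6 became the default recommendation.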
This is not an unusual situation. I am seeing more and more sites getting rid of this type of old hardware and moving to new systems. There are now many choices that were not possible six, eight or 10 years ago. So how do you decide what storage system to buy to replace your old SAN environment?
As recently as six years ago, if you needed high performance storage you had one choice: Buy a SAN from the usual vendors selling SAN-based technology. Your network to the storage was FC, the network in the storage was FC, and the disk drives were either FC or SATA -- SATA drives were becoming popular given their density and cost. Many people noted that the combination of reliability, performance and density required the use of RAID-6. So after buying the hardware, you had to worry about what file system to use to put your system together.
What a difference six years makes! Today, NAS boxes are capable of addressing many high performance requirements. These boxes are connected via 10 Gb Ethernet and scale out. They use NFS or CIFS for communication. If you need more performance, a number of vendors are making file system appliance boxes. These vendors are using scale-out file systems like GPFS, Lustre, Gluster and Panasas, and they are wrapping hardware around the software to make things far easier to use. These appliances can also communicate via NFS or CIFS, or they can go much faster by communicating via file system client software running on the server.
The hardware in these file system appliances is built completely differently than the RAID devices of just six years ago. For starters, disk drives today are connected via SAS, not FC. The drives replacing the old SATA drives of six years ago are called nearline SAS, and they support either a SAS or SATA interface. When making a purchase, be sure the vendors you're buying from are using the SAS interface, as it provides significantly more functionality.
The mechanics of these drives (the motors and so on) are SATA, and that translates to the same hard error rates and only slight improvements in annualized failure rate (AFR) that we had before. There is also a move afoot from 3.5-inch drives to 2.5-inch drives in this area. The main driver, from what I can tell, is watts per IOP and watts per GiB/sec. The enterprise FC drives of six years ago have become 2.5-inch 10K and 15K SAS drives with 6.0 Gb/sec interfaces. Their hard error rate is still one order of magnitude better than nearline drives, and the AFR is also better. Thus, you have different drive choices and sizes, but we are quickly moving to a single drive interconnect, and that is SAS.
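The watts-per-IOP argument for 2.5-inch drives can be sketched with the classic back-of-the-envelope IOPS estimate: one random I/O costs an average seek plus half a rotation. The seek times and wattages below are ballpark assumptions for illustration, not vendor specifications.

```python
# Rough per-drive random IOPS from rotational speed and seek time,
# then watts per IOP -- the metric driving the move to 2.5-inch drives.
# Seek times and power draws are illustrative assumptions only.

def random_iops(rpm, avg_seek_ms):
    """Classic estimate: one I/O = average seek + half a rotation."""
    half_rotation_ms = 0.5 * 60_000 / rpm
    return 1000 / (avg_seek_ms + half_rotation_ms)

drives = {
    "3.5in 15K": {"rpm": 15000, "seek_ms": 3.5, "watts": 17.0},
    "2.5in 15K": {"rpm": 15000, "seek_ms": 2.9, "watts": 8.0},
    "2.5in 10K": {"rpm": 10000, "seek_ms": 3.8, "watts": 6.0},
}

for name, d in drives.items():
    iops = random_iops(d["rpm"], d["seek_ms"])
    print(f"{name}: ~{iops:.0f} IOPS, {d['watts'] / iops * 1000:.0f} mW/IOP")
```

Under these assumptions the 2.5-inch drives deliver comparable IOPS at less than half the power, which is why watts per IOP favors the smaller form factor even when raw performance is similar.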
Last, but not least, are the choices of SSDs. There are SAS, SATA and PCIe SSDs. In reality, your choices are limited by the vendors selling you appliances, unless you are rolling your own architecture. These new file system appliance boxes require less management intervention because they are pre-integrated for your environment. They are not the 2006 NAS boxes in terms of performance. The NAS vendors have not stood still during this time; they have developed scale-out architectures. pNFS is finally in a released Linux kernel. The NAS vendors just need to modify their file systems to support pNFS.
Both of these technologies require a forklift upgrade, as it does not make sense to upgrade a 2006 box with these new technologies.
Network Decisions to Be Made
Back in 2006, server-to-server communications were, for the most part, a 1 Gb Ethernet world. Yes, 10 Gb was available, but it was cost-prohibitive for most organizations. For the storage communications networks, NAS excepted, 4 Gb FC was released in 2005. Today, FC is used by and large only in legacy SAN environments. Yes, FC is very efficient, but the costs per port, for both HBAs and switch ports, are far higher than for 10 Gb Ethernet NICs and switch ports. I think the cost differences are going to increase, given the volume growth of 10 Gb Ethernet. This was not the case six years ago, and with the current pricing trends, anyone who must forklift in a new storage infrastructure should not overlook 10 Gb Ethernet. FC is still required for some peripheral devices, like tape drives, but this does not mean that your whole infrastructure must be FC, as almost all vendors have switches that support both FC and Ethernet.
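The per-port comparison is easy to run for your own environment. The sketch below tallies the network cost of an FC fabric vs. 10 Gb Ethernet for a fleet of dual-ported servers; every price in it is a hypothetical placeholder, so substitute current quotes before drawing conclusions.

```python
# Back-of-the-envelope cost comparison: FC fabric vs. 10 Gb Ethernet
# for the storage network. All per-port prices below are hypothetical
# placeholders, not quotes from any vendor.

def network_cost(servers, adapter_per_port, switch_per_port, ports_per_server=2):
    """Total network cost, assuming each server is dual-ported for redundancy."""
    ports = servers * ports_per_server
    return ports * (adapter_per_port + switch_per_port)

servers = 100
fc = network_cost(servers, adapter_per_port=800, switch_per_port=500)
enet = network_cost(servers, adapter_per_port=300, switch_per_port=200)
print(f"FC fabric:     ${fc:,}")
print(f"10 GbE fabric: ${enet:,}")
```

The point is not the specific totals but the structure: port counts multiply any per-port premium across the whole data center, so even a modest FC premium compounds quickly at scale.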
Forklifts Are a Good Thing
If you incrementally upgrade often, the goal is to do things with the least amount of disruption; hence, you buy the next generation of a given technology. In contrast, if you wait long intervals before replacing systems, the options increase. Given the technology changes that have happened in recent years, in some cases the disruption of a forklift upgrade is likely about the same as that of a cumulative series of incremental upgrades. Consider, for example, moving from a SAN to a file system appliance vs. upgrading the SAN. In both cases, upgrading the file system storage network is equally difficult.
Before upgrading systems to the same thing as before, consider all the options for storage systems and networks. Ultimately, however, requirements should drive the decision. You must understand the strengths and weaknesses of the new technologies as they compare to the old way of doing things. This requires significant research to understand the new appliances, new NAS products, network choices and disk drive types.
As the economy slowly improves, I believe we will be seeing significant changes in the infrastructures that run large computer centers. Given the low cost of Ethernet and the high cost of FC, you must decide if it makes sense to continue down this path. It may have made total sense to buy another FC card for the enterprise switch and stick with FC three years ago, but when that switch needs replacing, is it still the right technology? Is SAN the right infrastructure for the environment? All of these questions should be on the table. The correct answer is likely not to keep doing what you have been doing, but to look at something new and different.
The forklift is coming. Choose wisely.
Henry Newman is CEO and CTO of Instrumental Inc. and has worked in HPC and large storage environments for 29 years. The outspoken Mr. Newman initially went to school to become a diplomat, but was firmly told during his first year that he might be better suited for a career that didn't require diplomatic skills. Diplomacy's loss was HPC's gain.