A number of trends are converging that are likely to change how we use and access data storage by the end of the decade.
At the Intel Developer Forum last year, Alan Gara, Intel’s Chief Exascale Architect, gave an interesting talk. There is also a more detailed article on Intel's plans covering faster optical interconnects and the challenges around power, CPU, memory density and performance. Both are well worth the time to watch and read.
Combine those predicted shifts with the changes to interconnects that have been happening in the storage industry, then add in what has been happening to the base storage technology, that is, the storage devices themselves (disk, flash and tape). Clearly, change is coming. We have been in the same paradigm for a few decades: a CPU connected to memory, with external channels for display and communications.
Disk drives have been attached over channels for even longer, moving from proprietary channels to IPI-2 and IPI-3, then to SCSI, fibre channel and now SAS. Things are going to have to change for us to move forward, and I think one of the first changes will come from Intel, based on the developments noted above.
So what happens to ethernet if Intel decides to put communications on the socket? What happens to storage communication? What happens to disk drive interfaces? Or, you might ask, what happens to SAS, now that Seagate makes an ethernet-attached object disk?
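As an aside, the significance of an ethernet-attached object disk is that it trades block addressing for a key-value interface: the host talks to the drive over the network and stores objects by key. Here is a minimal in-memory sketch of that idea in Python; the class and method names are my own illustration, not Seagate's actual API.

```python
class ObjectDisk:
    """Hypothetical model of an ethernet-attached object disk: instead of
    reading and writing numbered blocks over SAS, a client puts and gets
    variable-length values by key over the network."""

    def __init__(self):
        self._store = {}  # stand-in for the drive's internal media layout

    def put(self, key: bytes, value: bytes) -> None:
        self._store[key] = value

    def get(self, key: bytes):
        return self._store.get(key)  # None if the key does not exist

    def delete(self, key: bytes) -> None:
        self._store.pop(key, None)

disk = ObjectDisk()
disk.put(b"object/0001", b"payload bytes")
assert disk.get(b"object/0001") == b"payload bytes"
```

The point of the model is that block layout, which used to be the host's problem, becomes the drive's problem, and SAS is no longer in the picture at all.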
Each of these areas has an impact on the others. Seagate, Toshiba and WD do not make CPUs, and ARM, AMD and Intel do not make disk drives, but there are lots of overlapping issues between the two technologies. And as was discussed in the Intel talk, memory is going to be a big challenge.
For example, most if not all storage controllers today use standard commodity CPUs for much of the controller's RAID work. Most of them use Intel CPUs because of the performance and the desire for PCIe 3.
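To see why commodity CPU performance matters here, consider the core of RAID 5: the parity chunk is the XOR of the data chunks in a stripe, and a lost chunk is rebuilt by XORing the survivors with the parity. A minimal sketch in Python (real controllers do this through vectorized hardware paths, of course):

```python
from functools import reduce

def raid5_parity(stripe_chunks):
    """XOR all equal-sized data chunks in a stripe to produce the parity chunk."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*stripe_chunks))

def rebuild_chunk(surviving_chunks, parity):
    """Recover a lost chunk by XORing the surviving chunks with the parity."""
    return raid5_parity(surviving_chunks + [parity])

# Three equal-sized data chunks form one stripe.
chunks = [b"AAAA", b"BBBB", b"CCCC"]
parity = raid5_parity(chunks)

# Simulate losing the second chunk and rebuilding it from the rest.
restored = rebuild_chunk([chunks[0], chunks[2]], parity)
assert restored == b"BBBB"
```

Every write to the array pays this kind of computation, which is exactly the work the controller's commodity CPU is doing.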
Let me predict what I think is going to happen by the end of the decade. But please do not hold me to this, as a great deal depends on the companies involved and which vendor moves the fastest – and all of that is nearly impossible to predict.
I think all the CPU vendors have plans to move more communication directly onto the socket. There has been a significant evolution over the last decade or so, with more and more technology moving onto the socket. Today we have USB, PCIe 3, ethernet and other features right on the CPU for servers, and for cellphones we have other communications connections plus requirements such as analog-to-digital conversion.
The other big requirements for CPUs are memory and the connections to memory. Today, servers have L1 and L2 caches, in some cases L3, and then standard DDR memory. I see much of this changing in the future, with deeper memory hierarchies, especially beyond DDR, where there will likely be a non-volatile tier of memory.
If you watched the whole Intel video, you will have noted that memory is one of the biggest issues as CPUs get faster, and that power will continue to have a profound impact on system design. The SoC (system on a chip) concept is going to become even more system oriented in the future. Intel has been pretty clear about its interconnect plans, having purchased Fulcrum, QLogic's InfiniBand division and Cray's networking division. The ARM consortium is working on new connections, and of course everyone has heard about the OpenPower group that IBM, Mellanox and others have joined.
Clearly there is a move afoot to change the interconnects that processors use. Does this mean that PCIe connectivity is dead? Honestly, I think we are all in for a big surprise in this area. I think we are going back to a proprietary world with a few small changes. Back in the 1980s and 1990s, DEC, IBM, Sun and others had their own buses and connectivity. But those vendors found that the market for each proprietary bus was too small to justify the cost for the NIC, HBA and other peripheral vendors that had to support each server vendor's products.
Along comes PCI and – shazam – the NIC, HBA and other peripheral vendors were able to dictate, in many cases, that they were not going to build products for those proprietary buses without money up front or guaranteed revenue. They could do this because the market had moved to PCI, which was large enough, and easy enough to serve, since vendors now had a standard interface for peripheral devices.
The world has changed quite a bit, and now the CPU vendors control much, if not all, of the board – and there are not many CPU vendors left. If the CPU vendors decide to build new connectivity options, they can tell the peripheral vendors to play ball with them rather than the other way around.
With ethernet technology being built right into the CPU socket by a number of the CPU vendors, and others adding SATA or SAS, you might realize, as I have, that the CPU vendors are trying to cover the majority of the common connectivity options. They are cutting the peripheral vendors out of at least some of their market share. This is a pretty big market change in itself, and as far as I can tell there is more to come.
The Storage Side
On the storage side, I think we are also seeing signs of big changes ahead. Back in the 1970s and 1980s we had many different types of drive interfaces and a large number of drive vendors. Today we have basically three hard drive vendors – Seagate, Toshiba and Western Digital – and a few SSD vendors, such as Intel and SanDisk. And of course we have vendors making PCIe SSDs, and with the new NVM Express standard some are saying that we do not need SAS or SATA anymore. This might work for the low end, where a system has one or at most a few storage devices, but it is not going to work for systems with many hundreds of terabytes of storage.
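Some rough, back-of-envelope arithmetic illustrates the scaling problem. The lane counts and fan-out figures below are my own illustrative assumptions, not vendor specifications, but the shape of the argument holds: PCIe lanes are a fixed, scarce budget on the server, while SAS expanders multiply ports.

```python
# Assumed: a two-socket server of this era exposes on the order of
# 80 PCIe 3.0 lanes in total, and a typical NVMe drive uses an x4 link.
pcie_lanes_available = 80
lanes_per_nvme_drive = 4

# Direct NVMe attach: every drive permanently consumes lanes.
max_direct_nvme = pcie_lanes_available // lanes_per_nvme_drive
print(max_direct_nvme)  # 20 drives, before reserving any lanes for NICs

# SAS: one x8 HBA can sit behind expanders that fan out to hundreds
# of drives, sharing the link bandwidth rather than consuming lanes.
drives_per_sas_domain = 256  # assumed expander fan-out
print(drives_per_sas_domain)
```

Twenty-odd directly attached devices is generous for a workstation but nowhere near the drive counts behind a large array, which is why the "NVMe replaces SAS everywhere" claim breaks down at scale.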