Software Defined Storage and Networking: Will it Work? - Page 2

The costs for these types of appliances are in the software development. For storage, one of the computations required is parity generation, so you need good programmers who can work in C or perhaps even assembly language. As the industry moves to other RAID methods such as declustered RAID, that code will need to be ported as well, but for the most part storage vendors have been using commodity hardware for management and monitoring for over a decade.
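To make the parity point concrete, here is a minimal sketch of the kind of hot loop a RAID-5 style controller runs: XOR-ing data blocks together to produce a parity block. The function name, block sizes and layout are illustrative assumptions, not any particular vendor's code.

```c
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Hypothetical RAID-5 style parity kernel: parity = d0 ^ d1 ^ ... ^ d(n-1).
 * Real controllers vectorize (SSE/AVX) or offload this loop; this is the
 * scalar reference version that sits on the write path. */
static void xor_parity(uint8_t *parity, const uint8_t *const *data,
                       size_t ndisks, size_t blocklen)
{
    for (size_t i = 0; i < blocklen; i++) {
        uint8_t p = 0;
        for (size_t d = 0; d < ndisks; d++)
            p ^= data[d][i];        /* XOR the i-th byte of every block */
        parity[i] = p;
    }
}

int main(void)
{
    uint8_t d0[4] = {1, 2, 3, 4}, d1[4] = {5, 6, 7, 8}, par[4];
    const uint8_t *stripe[] = {d0, d1};

    xor_parity(par, stripe, 2, sizeof par);
    /* Losing d1 is recoverable: d1[i] == par[i] ^ d0[i]. */
    printf("parity[0] = %u\n", par[0]);   /* 1 ^ 5 = 4 */
    return 0;
}
```

With a single parity block, any one lost data block can be rebuilt by XOR-ing the parity with the surviving blocks, which is why this loop sits on the controller's critical path and rewards careful low-level programming.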

On the networking side, we have had multithreaded TCP stacks in BSD and Linux for a long time, with BSD an early choice for many vendors. Yet large-scale networking seems to be lagging storage, and for good reason.

Why Networking Lags Storage

Development of storage controllers using commodity hardware is well ahead of large-scale networking (many smaller network devices, such as home routers, do use commodity hardware). In my opinion, this is true for a number of reasons:

1. Most storage controllers have used commodity hardware for their control and management functions for a number of years.

2. Storage latency requirements, for the most part, even with SSDs, are far more forgiving than network latencies (see the sketch after this list).

3. Storage bandwidths, even on large controllers, are far lower than the bandwidths large network devices must sustain.
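To put rough numbers on the latency point, here is a back-of-the-envelope sketch; the figures are my own illustrative assumptions, not measurements. A 10 GbE port must serialize a full-size Ethernet frame in about a microsecond, while even a fast 15K drive needs milliseconds per seek:

```c
#include <stdio.h>

/* Back-of-the-envelope latency comparison (assumed figures). */
int main(void)
{
    double frame_bits  = 1500.0 * 8.0;  /* standard Ethernet frame   */
    double link_bps    = 10e9;          /* 10 GbE line rate          */
    double disk_seek_s = 4e-3;          /* fast 2.5" 15K SAS drive   */

    double wire_time_s = frame_bits / link_bps;  /* ~1.2 microseconds */

    printf("frame serialization: %.1f us\n", wire_time_s * 1e6);
    printf("disk seek          : %.1f us\n", disk_seek_s * 1e6);
    printf("ratio              : ~%.0fx\n",  disk_seek_s / wire_time_s);
    return 0;
}
```

A switch gets three to four orders of magnitude less time per operation than a storage controller, which is a large part of why the network data path ends up in silicon while the storage data path can afford a general-purpose OS.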

Historical Use

Some storage vendors used a real-time operating system, but others have run commodity UNIX-like operating systems (Linux, BSD and others) for more than a decade. Networking vendors do much the same for management, but networking bandwidths have required the development of specialized ASICs for the data path.

As you can see from the table above, storage performance has failed to keep pace with CPU performance by at least a factor of 10. On the networking side, you might get away with commodity hardware in a home router, but it does not work, and has never worked, for high-speed networking.

I remember back in 1997/8 that you could get about 94 MiB/sec out of the fastest RAID controller available at the time. Today that number is, say, 32 GiB/sec, an increase of roughly 350 times. That seems like a significant increase, but it took only about 10 disk drives to reach 94 MiB/sec back in 1997/8, while today it likely takes well over 500 drives to reach 32 GiB/sec. More on this later.
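Working out the per-drive numbers makes the point sharper; the drive counts below are the estimates from the paragraph above:

```c
#include <stdio.h>

/* Per-drive bandwidth then and now, using the estimated drive counts. */
int main(void)
{
    double agg_1997 = 94.0;           /* MiB/s aggregate, ~10 drives   */
    double agg_now  = 32.0 * 1024.0;  /* 32 GiB/s in MiB/s, ~500 drives */

    double per_drive_1997 = agg_1997 / 10.0;   /* ~9.4 MiB/s  */
    double per_drive_now  = agg_now  / 500.0;  /* ~65.5 MiB/s */

    printf("1997/8: %.1f MiB/s per drive\n", per_drive_1997);
    printf("today : %.1f MiB/s per drive\n", per_drive_now);
    printf("gain  : ~%.0fx per drive vs ~%.0fx aggregate\n",
           per_drive_now / per_drive_1997, agg_now / agg_1997);
    return 0;
}
```

In other words, per-drive bandwidth has improved only about 7x while the aggregate improved about 350x: nearly all of the gain comes from adding spindles, not from faster drives.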

Storage vs. Network Latency

Back in the 1990s, the combined seek and rotational latency on disk drives was a bit over 12 milliseconds. Today that number is just under 8 milliseconds for 4 TB drives, and under 4 milliseconds for 2.5-inch 15K RPM drives. Best case: a factor of 3x improvement.
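For reference, the 15K figure follows from the drive mechanics alone: one revolution takes 60 s / 15,000 = 4 ms, so the average rotational delay is half a revolution, with a short seek making up the rest of the sub-4 ms total:

```latex
% average rotational latency of a 15,000 RPM drive
t_{\mathrm{rot}} = \frac{1}{2} \cdot \frac{60\,\mathrm{s}}{15\,000} = 2\,\mathrm{ms}
```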

Yes, SSDs have far lower latency, but some SSD performance is limited by the operating system: every request has to pass through the kernel, the VFS layer and the drivers, down the channel to the device and back. Network latencies, on the other hand, may also be quoted in milliseconds, but they are in general far lower than storage latencies.
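One way to see, and partially trim, that software path is direct I/O. The hedged sketch below opens a device with Linux's O_DIRECT flag, which skips the page cache; the device path and 4 KiB alignment are placeholder assumptions, and every request still crosses the kernel, VFS and driver layers:

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    /* O_DIRECT bypasses the page cache but not the kernel, VFS or driver;
     * it requires buffers aligned to the device's logical block size. */
    int fd = open("/dev/sdX", O_RDONLY | O_DIRECT);  /* placeholder device */
    if (fd < 0) { perror("open"); return 1; }

    void *buf;
    if (posix_memalign(&buf, 4096, 4096) != 0) {     /* 4 KiB alignment assumed */
        close(fd);
        return 1;
    }

    ssize_t n = pread(fd, buf, 4096, 0);             /* one aligned 4 KiB read */
    if (n < 0)
        perror("pread");
    else
        printf("read %zd bytes directly from the device\n", n);

    free(buf);
    close(fd);
    return 0;
}
```

Even with the cache out of the way, each read still pays for the system call and driver round trip. That overhead simply mattered less when the device itself took milliseconds; with SSDs, it shows.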
