Software Defined Storage and Networking: Will it Work?

You might have 1 million people expecting 10 millisecond latency on a Google or Yahoo search, but serving 1 million concurrent requests at 8 millisecond disk latency requires roughly one drive per in-flight request, or about 1,000,000 disk drives. That is not going to happen: at 4 TB per drive, it works out to 4,000 PB of storage, or 4 exabytes. By design and expectation, networks are required to have far lower latency than storage.
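
A quick back-of-the-envelope check of those figures in Python (a sketch only; the 4 TB per-drive capacity is the one used in the figures below):

    concurrent_requests = 1_000_000
    drives = concurrent_requests          # roughly one drive per in-flight request
    capacity_tb = drives * 4              # 4 TB per drive (assumed, per the figures below)
    print(capacity_tb / 1_000, "PB")      # 4000.0 PB
    print(capacity_tb / 1_000_000, "EB")  # 4.0 EB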

Network and Storage Bandwidth

Today we have networking backends with many terabits of bandwidth. In a perfect world, a single Intel processor gives you only about 320 Gbit/sec of bandwidth (roughly 40 GB/sec per socket times 8 bits per byte). For example, the Cisco Catalyst 6807-XL chassis is capable of delivering up to 11.4 terabits per second of networking bandwidth. That equals the bandwidth of more than 35 Intel processors before accounting for any communications overhead, so in practice likely double that number of CPUs. On the storage side, the fastest controllers run at around 32 GiB/sec. Assume about 50 percent disk performance utilization and 121 MiB/sec maximum performance for a 4 TB drive, and you can run about 537 disk drives for about 2 PB of storage.
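
Here is a rough sketch of that arithmetic in Python, using only the figures quoted above (the small difference against the roughly 537-drive figure is just rounding):

    switch_gbit_s = 11.4 * 1000              # Cisco Catalyst 6807-XL: 11.4 Tbit/sec
    cpu_gbit_s = 320                         # per-socket figure used above
    print(round(switch_gbit_s / cpu_gbit_s, 1))   # ~35.6 sockets, before any
                                                  # communications overhead

    controller_mib_s = 32 * 1024             # ~32 GiB/sec storage controller
    drive_mib_s = 121 * 0.5                  # 121 MiB/sec drive at 50 percent utilization
    drives = int(controller_mib_s / drive_mib_s)
    print(drives, "drives =", drives * 4 / 1000, "PB at 4 TB per drive")   # ~541 drives, ~2 PB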

What I Expect Will Happen

I have had a home NAS product for a few years that uses an Intel Atom for the RAID engine and management. It works great and has about the same performance as the high-performance RAID I used back in 1997/98, all for about $600 including the storage.


High-end storage performance can easily and cost-effectively fit into today's commodity hardware footprint, even with SSDs. Storage bandwidth clearly has not grown beyond the commodity bandwidth available with PCIe 3.0 and the CPU performance of Intel Sandy Bridge. CPU performance has grown to the point where RAID parity, declustering and, likely in the future, erasure codes (known to older readers as forward error correction) can be done in the CPU without incurring significant latency and overhead.
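
As a minimal illustration of how parity fits on a host CPU, here is a sketch of RAID-style XOR parity in Python; the stripe width, block size and helper name are assumptions chosen for the example, not a description of any particular product:

    import os

    STRIPE_WIDTH = 4      # assumed number of data blocks per stripe (example only)
    BLOCK_SIZE = 4096     # assumed block size in bytes (example only)

    def xor_parity(blocks):
        # XOR equal-sized blocks together, as a software RAID engine would
        # when computing (or reconstructing from) a parity block.
        parity = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                parity[i] ^= byte
        return bytes(parity)

    data_blocks = [os.urandom(BLOCK_SIZE) for _ in range(STRIPE_WIDTH)]
    parity = xor_parity(data_blocks)

    # Rebuilding a "failed" block from the survivors plus parity is the same XOR.
    lost = data_blocks[2]
    survivors = data_blocks[:2] + data_blocks[3:] + [parity]
    assert xor_parity(survivors) == lost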

This is not the case for high-end networking, and will not be for at least the foreseeable future, given that commodity bandwidth is limited by PCIe 3.0. Count 35 Intel processors at $1,000 apiece, plus the motherboards and the rest of the server hardware, plus the overhead; double it to cover communications overhead and you are likely looking at around $700,000 just for the hardware, or over $61K per terabit. And that assumes you can actually get 50 percent of the bandwidth out of the CPUs. This is far more costly than the equivalent terabits from Cisco's hardware ASICs.
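
Taking those estimates at face value, the per-terabit figure works out roughly as follows:

    hardware_cost_usd = 700_000          # the hardware-only estimate above
    switch_tbit_s = 11.4                 # Cisco Catalyst 6807-XL capacity
    print(round(hardware_cost_usd / switch_tbit_s), "USD per terabit")   # ~61,404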

In a nutshell, here is what I see happening: storage will use commodity hardware, for the most part even for SSDs, simply because of the sales volume required to justify developing your own ASIC. High-end networking, on the other hand, will continue to develop its own ASICs, given that commodity hardware will not be able to meet the performance and latency requirements of large core switches. Low- and mid-range networks, which are not as latency sensitive, will continue to move toward software on commodity hardware.


Tags: data storage, networking, SDN, Software Defined Storage

