
Software Defined Storage and Networking: Will it Work?

We have all heard that software defined networking is going to solve all of our networking problems. Many vendors are already offering software defined storage, and I think more vendors are coming to the table.

The questions I always ask myself are: is this all a good thing, what is going to happen, and why? What challenges and issues need to be considered when evaluating software solutions for functions that were once implemented in ASICs, for both storage and networking?

As we see more storage systems running over standard networks with iSCSI and FCoE, I think these issues belong together.

Challenges

I think there are a number of challenges on both sides of the equation: ASIC-specific designs and designs using commodity hardware, for both storage and networking.


Before I get started, it's worth remembering where we have come from, so we can understand why and how things have progressed over the years. The following performance figures for storage and networking reflect some of the design decisions that companies have made over the years:

[Table: historical performance comparison of CPUs, Ethernet and storage]

The point is that once you get to storage, performance increases are pretty pathetic. Though Ethernet performance and CPU performance have increased somewhat, storage performance, except for SSD performance, has increased by far less than two orders of magnitude. And we are just starting to use 40 Gbit Ethernet, so 10 GbE might be a better comparison for networking.

This raises the question: can commodity CPUs address the storage problem, and could they address the networking problem? Let's look at the two approaches to designing storage and networking hardware.

ASICs (Application-Specific Integrated Circuits)

On the ASIC side, both the hardware design and the verification process take a long time. Once that is completed, the software teams need to confirm that the software written for the ASIC works as expected with the hardware.

Sometimes minor ASIC hardware design issues can be fixed in software, and sometimes they cannot. Once completed, ASICs are very fast at the tasks they were designed for. In the RAID storage world, starting in the 1990s, ASICs were often designed for parity generation, since computational performance and latency were issues.
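To make that concrete, here is a minimal sketch of the computation those controllers offloaded: RAID-5 style parity is simply the XOR of the corresponding bytes of every data block in a stripe. This is illustrative C, not any vendor's implementation, and the block layout and sizes are assumptions.

#include <stddef.h>
#include <stdint.h>

/* Sketch of RAID-5 style parity generation: the parity block is the XOR
 * of the corresponding bytes of every data block in the stripe. This is
 * the kind of work 1990s RAID controllers pushed into ASICs; the layout
 * here is purely illustrative. */
void generate_parity(const uint8_t *const *data_blocks, size_t num_blocks,
                     size_t block_size, uint8_t *parity)
{
    for (size_t i = 0; i < block_size; i++) {
        uint8_t p = 0;
        for (size_t b = 0; b < num_blocks; b++)
            p ^= data_blocks[b][i];
        parity[i] = p;
    }
}

A dedicated ASIC could stream this XOR at line rate, which was exactly the edge it had over the commodity CPUs of that era.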

ASICs are also used in networking switches and routers, and in everything from SSDs to disk drives. So ASICs are used in many places, but the number of places is dwindling, given the performance increases in commodity CPUs and, of course, the availability of commodity software. There used to be far more fabrication houses building ASICs, in the US and worldwide, than there are today.

The technology for, say, a 45 nanometer process is very expensive, requiring billions of dollars of investment, and you soon need to start preparing to build a new plant at 32 nanometers. If you are a vendor needing an ASIC for, say, 100,000 storage controllers a year (which means you are a pretty good-sized vendor), you have to amortize the cost of that multi-billion dollar fab. Of course it is not just your company using the fab, but trust me, the cost is still pretty high.
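As a rough illustration of that amortization, the sketch below just runs the arithmetic. The fab cost, the vendor's share of the fab, and the amortization period are all made-up assumptions, not figures from any vendor; the point is only how quickly fab economics show up in per-unit cost at 100,000 controllers a year.

#include <stdio.h>

/* Back-of-the-envelope amortization with hypothetical numbers. */
int main(void)
{
    double fab_cost        = 3.0e9;    /* assumed multi-billion dollar fab      */
    double vendor_share    = 0.02;     /* assumed fraction of fab capacity used */
    double units_per_year  = 100000.0; /* storage controllers shipped per year  */
    double years_amortized = 5.0;      /* assumed amortization period           */

    double per_unit = (fab_cost * vendor_share) /
                      (units_per_year * years_amortized);
    printf("Amortized fab cost per controller: $%.0f\n", per_unit);
    return 0;
}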

On the networking side of the equation, if your company is a large networking vendor, then maybe the number is more like 1,000,000 parts. That is still a high cost, even though the cost per unit will be lower.

Commodity Hardware and Software

Today, lots of vendors are using commodity CPUs for storage controllers, and some are starting to look at doing the same on the networking side. On the storage side, from low-end RAID cards to high-end storage controllers, this has been going on for a number of years.

On the networking side, with the availability of PCIe 3.0, we are just starting to see products in this area. In some cases vendors are using standard motherboards, while in others they are using purpose-designed motherboards. The reason, of course, should be no surprise to anyone: the cost of commodity hardware is far lower than developing an ASIC, unless you are building a huge number of them.

The cost includes having electrical engineers develop the ASIC, simulate it, send it out for fabrication and likely fix the problems, and getting the software working is not cheap either. On the other hand, you can buy a Sandy Bridge motherboard (soon Ivy Bridge) with about 40 GB/s of PCIe bandwidth per socket, and you have free operating systems (or can purchase support) with Linux or, in some cases, BSD, all for, say, about $1,000 per unit. And if you include memory and the motherboard, likely around $5,000 on the high end.
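As a sanity check on that bandwidth figure, the short calculation below works out per-socket PCIe throughput from the link parameters: PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding, and a Sandy Bridge-EP socket exposes 40 lanes (the lane count per socket is the assumption here).

#include <stdio.h>

/* Rough check of the per-socket PCIe 3.0 bandwidth figure. */
int main(void)
{
    double gts_per_lane = 8.0;           /* PCIe 3.0 transfer rate per lane */
    double encoding     = 128.0 / 130.0; /* 128b/130b encoding efficiency   */
    double lanes        = 40.0;          /* lanes per socket (assumed)      */

    double gbit_per_lane  = gts_per_lane * encoding;     /* Gbit/s per lane */
    double gbytes_per_sec = gbit_per_lane / 8.0 * lanes; /* GB/s per socket */
    printf("Approx. PCIe bandwidth per socket: %.1f GB/s\n", gbytes_per_sec);
    return 0;
}

That works out to roughly 39 GB/s per direction per socket, which is where the "about 40 GB/s" figure comes from.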

