The chip strategies of data storage vendors can't all be lumped together, but they are looking more alike all the time. So what's going on?
From EMC (NYSE: EMC) to Sun (NASDAQ: JAVA), storage vendors seem to be turning more and more to x86 and commodity architectures for their arrays.
In general, storage vendors appear to be using Intel (NASDAQ: INTC) and AMD (NYSE: AMD) products for higher-cost, higher-performance applications, and using custom chips for lower-cost, lower-performance ones.
“Many vendors are using Intel chips and chipsets that have built-in support for XOR and P+Q computations needed for RAID-6,” said Brent Welch, director of software architecture at high-performance storage vendor Panasas. “Intel’s i7 platform is quite powerful, and coupled with the XOR and P+Q features, it can eliminate the need for ASICs.”
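For context, the P and Q computations Welch refers to are easy to express in software, even though arrays typically do them in silicon or with SIMD instructions: P is a straight XOR across the data blocks, and Q is a Reed-Solomon syndrome over GF(2^8). The short Python sketch below is purely illustrative; the function names and toy data are ours, not any vendor's implementation.

```python
# Illustrative sketch of RAID-6 P and Q parity (the "XOR and P+Q
# computations" in the quote). P is plain XOR across data blocks;
# Q is a Reed-Solomon syndrome over GF(2^8). Real arrays do this
# in dedicated hardware or with SSE/AVX instructions.

def gf_mul2(x: int) -> int:
    """Multiply a GF(2^8) element by 2, reducing by polynomial 0x11D."""
    x <<= 1
    if x & 0x100:
        x ^= 0x11D
    return x & 0xFF

def raid6_parity(data_blocks: list[bytes]) -> tuple[bytes, bytes]:
    """Compute the P (XOR) and Q (Reed-Solomon) parity blocks."""
    length = len(data_blocks[0])
    p = bytearray(length)
    q = bytearray(length)
    # Walk the blocks from highest index to lowest so that
    # Q = sum(2^i * D_i) builds up via repeated multiply-by-2 (Horner's rule).
    for block in reversed(data_blocks):
        for i in range(length):
            p[i] ^= block[i]
            q[i] = gf_mul2(q[i]) ^ block[i]
    return bytes(p), bytes(q)

if __name__ == "__main__":
    # Three toy 8-byte data blocks, just to show the call.
    d = [b"\x11" * 8, b"\x22" * 8, b"\x44" * 8]
    p, q = raid6_parity(d)
    print(p.hex(), q.hex())
```

With P and Q computed this way, an array can survive the loss of any two drives, which is why offloading these loops to the chipset matters at scale.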
An array design is much more than the central processor, said Clod Barrera, an IBM (NYSE: IBM) storage strategist.
“An off-the-shelf processor with sufficient MIP power is fine — there is no special value in proprietary chips,” said Barrera. “The full attributes of a product are determined by MIP rate plus the cache design, internal bus design, adapter interfaces, nodal clustering scheme, and so on.”
Economics, SSDs Drive Commodity Chip Trend
The reasons vendors are embracing commodity chips are straightforward, said Brian Garrett, technical director of the ESG Lab at Enterprise Strategy Group: economies of scale, performance and energy efficiency are all driving the switch.
“These chips also offer ease of integration, as they come with a wide knowledge base, are generally easier to program and debug than custom ones, and offer a variety of tools,” Garrett said.
Another factor in vendors’ chip selection strategy is the need to keep array performance in step with the rapid adoption of solid-state drives (SSDs).
The growing use of SSDs places a greater demand on the CPU, requiring more cores to wring maximum performance from high-end storage arrays, said Graham Lovell, Sun’s senior director of open storage. Sun reportedly has abandoned its own high-end chip development efforts.
Lovell said Sun’s high-end storage systems use 24 cores (four AMD Istanbul processors) to deliver the extra processing required for deduplication, encryption, compression and faster networks.
x86 processors are used in some of the most powerful and scalable applications today, said Lee Johns, marketing director for unified storage at HP (NYSE: HPQ). “Clustering these architectures together is a proven methodology for many workloads and is becoming increasingly important for storage,” said Johns.
What’s important is the architecture of the entire storage system, not just the chips used, Johns added.
“With clustered implementations such as the HP LeftHand P4000 SANs, we leverage the power and price/performance of HP ProLiant servers to enable a scalable SAN that can grow as a customer’s storage needs grow,” said Johns.
He said HP and its customers see value in having a common architecture, which is why HP is driving more solutions toward delivering a unified infrastructure for server and storage.
EMC Goes Commodity
EMC, long known for its proprietary solutions, is another vendor moving toward commodity architectures.
Earlier this year, EMC unveiled a high-end Symmetrix storage array based on Intel’s x86 quad-core processors and EMC’s Virtual Matrix Architecture, a move that followed EMC’s x86-based Atmos system introduced last fall.
The new Symmetrix provides more than three times the performance, twice the connectivity and three times the usable capacity of Symmetrix DMX-4 systems. EMC also claims the product uses significantly less power per terabyte and per IOPS.
EMC said its use of industry-standard processors and VMware’s (NYSE: VMW) hypervisor enables the system to scale up to hundreds of thousands of terabytes of storage and tens of millions of I/O operations per second, supporting hundreds of thousands of VMware and other virtual machines in a single, pooled storage infrastructure.
Custom Chips Still Matter
Despite the storage industry’s marked shift to industry-standard processors, there will always be room for custom design, said ESG’s Garrett.
“For example, custom silicon is often needed to handle communication and locking between processors,” said Garrett. “Adding a pinch of custom high-speed locking or caching silicon is often needed to remain competitive from a performance and/or fault tolerance standpoint. I don’t see this need for a bit of custom silicon going away anytime soon — especially within high-end enterprise-class storage systems.”
Today, a lot of custom designs center around the PowerPC architecture, said Panasas’ Welch. “You can put several cores onto a chip, as well as peripherals such as 10Gb Ethernet, and more,” he said. “The PPC system-on-a-chip approach can have advantages in power consumption and cost.”
Welch added that PPC-based systems are common in dedicated network processing appliances, and are gaining some ground in storage products.
“While a PPC system is technically an ASIC, it is put together using a functional building block approach that can be reliable and cost-effective,” said Welch.
Specialty designs still exist for cache interfaces and internal fan-out because array controllers require more I/O throughput and larger memory (cache) sizes than a standard server, said Barrera. “IBM’s Power processors are particularly good at I/O, which makes the Power platform a good one for our high-end disk arrays,” said Barrera.
BlueArc’s Chip Blend
A good marriage of custom and standard chip technology comes from BlueArc, whose Titan products offer a glimpse of possible future storage development in the network-attached storage (NAS) space.
BlueArc’s Titan uses field-programmable gate arrays (FPGAs) to accelerate its NAS processing and deliver faster I/O performance, pairing the arrays with standard multi-core Intel processors.
The vendor claims it has addressed the tradeoffs between hard-coded chips and software running on CPUs by bringing both concepts together in a unique Hybrid-Core Architecture.
“This architecture takes advantage of both FPGAs and traditional multi-core CPUs to efficiently separate processes that normally compete for system resources,” said Jeff Hill, BlueArc’s director of marketing.
“FPGAs are similar to ASIC chips, but are less costly to produce and are easily upgradeable in the field,” said Hill. “Also, ASIC development is expensive, time-consuming and inflexible.”
Hill said the FPGAs in BlueArc’s Hybrid-Core Architecture enable high-performance data movement, directory tree management, metadata processing and protocol handling.
“This allows data to be transferred between logical blocks in a parallel fashion, ensuring no conflicts or bottlenecks in the data path,” said Hill. “In complementary fashion, the system’s multi-core CPUs, unburdened by core file system functionality, handle system management, data management, virtualization and error handling.”
Hill said all functions performed by the CPUs are done out of band from the data, further reducing contention. He claims this separation of data movement and management prevents competition for system resources, maintaining performance levels while supporting advanced file system functions.