Not Just a Flash in the Pan - EnterpriseStorageForum.com

Not Just a Flash in the Pan

As the saying goes, the best I/O is the I/O you don't have to do, but the reality is that even with I/O virtualization and virtualized I/O technologies (see I/O, I/O, It's Off to Virtual Work We Go), there's still a need to perform I/O operations to store or access data in a more effective manner.

The need for more effective I/O performance is linked to the decades-old and still-growing server-to-storage I/O performance gap, where the performance of hard disk drive (HDD) storage has not kept pace with the decrease in cost and increase in reliability and capacity of server processing power. You can read more about data center bottlenecks and the server-storage performance I/O gap here.

With the growing awareness of power, cooling, floor space and associated green and eco (ecological and economical) issues affecting IT data centers and storage, solid state devices (SSD) have reemerged as a solution to address multiple woes. SSD is not new technology, having been around for decades, but over the last couple of years there has been a renewed interest, with new forms of SSD-class technologies, along with packaging options and market price bands (consumer, SMB, mid-market, enterprise). Even EMC has gotten in on the act, with a big announcement just last week (see EMC Goes Solid State).

Almost 20 years ago, as an early adopter and launch customer of open systems SSD from the company formerly known as DEC, I recall the price being in the low six figures for a couple hundred megabytes (yes, that's MB) of SSD (two devices mirrored for resiliency) that were very big and bulky. These prices are embarrassing by today's standards; however, they were a bargain compared to the mainframe SSD solutions that preceded them.

Traditionally speaking, SSD has been based on dynamic random access memory, known as DRAM, or what's installed in your computer and commonly known as RAM or memory. DRAM is also known as cache or volatile memory and is found in many storage systems to boost the performance of HDDs. The benefit of using RAM is that it is significantly faster at I/O operations (reads or writes, measured in IOPS) than an HDD, in that there are no moving parts to add seek and rotational delays, as is the case with even the fastest HDDs.

Some SSD vendors will claim there is no latency. However, do your homework and you will find it is more along the lines of nominal to not noticeable rather than non-existent. When you look at a classic storage I/O access pattern, there is the I/O command initiation, seek or positioning, and then data transfer time. Assuming that you can improve data transfer time with faster media and interfaces, by eliminating seek time, you can boost performance. In the case of SSD, seek time is essentially eliminated and media transfer times are reduced if not eliminated, leaving the bulk of I/O time to transfer time over a particular interface, command or protocol overhead, and I/O pre- and post-processing on the application server.
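The access-pattern breakdown above can be sketched as a back-of-the-envelope service-time model. The numbers below are illustrative assumptions, not measurements from any particular device: a fast HDD is given a few milliseconds of seek and rotational delay, while the SSD's mechanical terms drop to zero and only command overhead and transfer time remain.

```python
# Illustrative service-time model for a single I/O:
# total = command overhead + seek + rotational latency + transfer time.
# All device numbers below are assumptions for the sake of the example.

def io_time_ms(cmd_overhead_ms, seek_ms, rotational_ms, xfer_bytes, media_mb_s):
    """Rough per-I/O service time in milliseconds."""
    transfer_ms = (xfer_bytes / (media_mb_s * 1_000_000)) * 1000
    return cmd_overhead_ms + seek_ms + rotational_ms + transfer_ms

# 4 KB read on a fast HDD: ~3.5 ms seek, ~2 ms average rotational delay
hdd = io_time_ms(0.1, 3.5, 2.0, 4096, 100)

# Same 4 KB read on a DRAM SSD: no mechanical positioning, faster media
ssd = io_time_ms(0.1, 0.0, 0.0, 4096, 1000)

print(f"HDD: {hdd:.3f} ms  SSD: {ssd:.3f} ms")
```

Even with these rough figures, the mechanical seek and rotation terms dominate the HDD's service time, which is why eliminating them pays off far more than faster media alone.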

Cost Issues

Sounds great, so why don't we have more SSDs installed? Simply put, the answer is cost. A myth has been that SSD in general costs too much when compared to HDDs, and when you compare strictly on a cost per GB or TB basis, HDDs are cheaper. However, if you compare on the ability to process I/Os and the number of HDDs, interfaces, controllers and enclosures needed to achieve the same level of IOPS, bandwidth, transaction or useful work, then SSD should be more cost-effective. The downside to DRAM, in addition to cost, compared to HDD on a capacity basis is that electrical power is needed to preserve data.
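The cost argument above is easy to see with a small calculation. The prices and performance figures below are hypothetical round numbers chosen for illustration, not vendor pricing: on $/GB the HDD wins handily, but counting the spindles needed to match the SSD's IOPS (ignoring controllers, enclosures and power) flips the comparison.

```python
import math

# Hypothetical round numbers for illustration only -- not vendor pricing.
hdd = {"cost": 300.0, "capacity_gb": 1000, "iops": 150}     # one enterprise HDD
ssd = {"cost": 5000.0, "capacity_gb": 100, "iops": 50000}   # one DRAM SSD module

for name, d in (("HDD", hdd), ("SSD", ssd)):
    print(f"{name}: ${d['cost'] / d['capacity_gb']:.2f}/GB, "
          f"${d['cost'] / d['iops']:.4f}/IOPS")

# Spindles needed to match the SSD's IOPS with HDDs alone,
# ignoring the extra interfaces, controllers, enclosures and power:
hdds_needed = math.ceil(ssd["iops"] / hdd["iops"])
total_hdd_cost = hdds_needed * hdd["cost"]
print(f"HDDs to match SSD IOPS: {hdds_needed}, cost: ${total_hdd_cost:,.0f}")
```

With these assumed figures the IOPS-equivalent farm of HDDs costs well over the single SSD, which is the "cost per useful work" point the paragraph makes.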

DRAM has great performance capabilities for reads or writes, but there is a myth that SSD is only for small random IOPS, which was the case in early generations and for some current generations. There are DRAM SSD solutions that support Fibre Channel and InfiniBand that handle both small random IOPS and large sequential throughput workloads.

Over the years, DRAM SSDs have addressed data persistence issues with battery-backed cache or in-cabinet UPS devices that maintain power to memory when primary power is turned off. SSDs have also combined battery backup with internal HDDs, where the HDDs are either standalone, mirrored or parity protected and powered by a battery, enabling DRAM to be flushed (de-staged) to the HDDs in the event of a power failure or shutdown. While DRAM-based SSDs can exhibit significant performance advantages over HDD-based systems, SSDs still require electrical power for internal HDDs, DRAM, battery (charger) and controllers. Note that if you are concerned about green and environmental issues, you should be concerned about batteries and their safe disposal (e.g., WEEE and RoHS).
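The battery-backed de-stage design just described can be sketched as a toy model. This is an assumed design for illustration, not any vendor's firmware: writes land in volatile DRAM, and on power loss the battery holds the device up long enough to flush every block to the internal HDDs.

```python
# Minimal sketch (assumed design, not any vendor's implementation) of
# battery-backed de-staging: DRAM absorbs writes at memory speed, and a
# battery/UPS keeps the device alive long enough to flush to HDD on power loss.

class DramSsd:
    def __init__(self):
        self.dram = {}   # block -> data held in volatile DRAM
        self.hdd = {}    # persistent backing store (mirrored/parity in practice)

    def write(self, block, data):
        self.dram[block] = data          # writes complete at DRAM speed

    def on_power_failure(self):
        # Battery power holds up DRAM while every block is de-staged to HDD.
        for block, data in self.dram.items():
            self.hdd[block] = data
        self.dram.clear()                # volatile contents now safe on disk

dev = DramSsd()
dev.write(0, b"payroll")
dev.on_power_failure()
print(dev.hdd[0])
```

The design trade-off is that the battery only needs to last long enough to complete the flush, rather than powering the DRAM indefinitely as a pure UPS approach would require.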


