Mainstreaming High IO Performance with Flash Cache (Page 2) - EnterpriseStorageForum.com

Advantages of Caching

Flash caching drives dramatic performance gains by keeping copies of active data in very fast flash. This saves money because customers do not have to add spindles to increase performance. Some customers can even replace expensive SAS drives with SATA because they gain so much performance from the SSD cache.

Customers also save money because flash caching can be effective with a relatively small amount of flash and does not require heavy management overhead: administrators can leave optimization to the caching algorithms.
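The idea of leaving optimization to the caching algorithms can be illustrated with a minimal sketch. The class below is a hypothetical LRU (least-recently-used) read cache, not any vendor's actual algorithm: hot blocks stay in a small flash-sized cache and cold blocks are evicted automatically, with no administrator tuning.

```python
from collections import OrderedDict

class FlashReadCache:
    """Minimal LRU read-cache sketch (hypothetical, for illustration only).

    Frequently read blocks stay in a small cache standing in for flash;
    cold blocks fall out on their own, so no manual placement is needed.
    """

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.cache = OrderedDict()   # block_id -> data, oldest first
        self.hits = 0
        self.misses = 0

    def read(self, block_id, backend):
        if block_id in self.cache:
            self.hits += 1
            self.cache.move_to_end(block_id)   # mark as recently used
            return self.cache[block_id]
        self.misses += 1
        data = backend[block_id]               # slow spinning-disk read
        self.cache[block_id] = data            # copy into the flash cache
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict least-recently-used
        return data
```

With a skewed workload, even a cache far smaller than the backend serves almost every read from flash after the first pass, which is why a relatively small amount of flash goes a long way.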


Caching’s predictive learning algorithms take that problem out of administrators’ hands by learning and applying the optimal data placement for best application performance. Larger SSD capacity, granularity, simplicity and speed also work into the caching equation:

  • Larger flash capacity. Caching has benefited from important advances in flash SSD sizes, which have reached TB capacity. Entire large working sets can fit into cache for very fast IO processing.
  • Granularity. Cached data copies are highly granular. Flash tiering moves large chunks of data even if just a small block or byte has changed, but caching can copy data at large or fine granularity as needed. Copying minimal data dramatically increases performance over large data movements between a flash tier and backend disk.
  • Simplicity. Management simplicity saves on ongoing expenses, which is key to cost savings over the life of the array. Letting the storage system provide good caching algorithms is preferable to requiring IT to figure out data placement and match performance to each application’s needs.
  • Fast reads and writes. Read and write flash caching greatly accelerates performance over spinning disk drives. Reads are considerably faster than writes, but caching vendors usually employ write acceleration technologies such as writing to DRAM, aggregating writes and employing logs.
  • Accelerates overall storage performance. All applications sharing the SSD storage see increased performance and decreased latency, and the benefits spread to all end-users.
  • Real-time vs. periodic. Most automated tiering solutions do not operate in real-time but offer manual commands, factory-set data movement schedules or policy-controlled data movement between tiers. Flash cache operates in real-time to automate and optimize copy movement as the workload changes.
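The write-acceleration point above (aggregating writes and employing logs) can be sketched in a few lines. This is an assumed design for illustration, not any vendor's implementation: writes are acknowledged from a fast buffer and flushed to backend disk in batches, so repeated writes to the same block are coalesced into a single backend write.

```python
class WriteAggregator:
    """Sketch of write acceleration via aggregation (assumed design).

    Writes land in a fast log buffer, standing in for DRAM/flash, and
    are flushed to slow backend disk in one batch; overwrites of the
    same block before a flush are coalesced (last value wins).
    """

    def __init__(self, backend, flush_threshold=4):
        self.backend = backend            # dict standing in for slow disk
        self.log = {}                     # pending writes
        self.flush_threshold = flush_threshold
        self.backend_writes = 0

    def write(self, block_id, data):
        self.log[block_id] = data         # fast acknowledgment from cache
        if len(self.log) >= self.flush_threshold:
            self.flush()

    def flush(self):
        for block_id, data in self.log.items():
            self.backend[block_id] = data  # one batched backend write each
            self.backend_writes += 1
        self.log.clear()
```

Three successive writes to the same block cost one backend write instead of three, which is the essence of why write caching narrows the gap between read and write performance.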


Let’s look at an example of a hybrid storage array with SSD-based caching, the Nexsan NST5000. The array combines gigabytes of DRAM, terabytes of high-capacity flash drives, and SAS or SATA drives for long-term spinning-disk capacity.

Nexsan calls its cache technology FastTier. It performs reads and writes in its high-speed DRAM and flash layer for extremely high performance and low latency. Frequently used data can granularly stage to the cache for fast processing. The NST5000 does not need to RAID its SSD read cache because the data always lives on the underlying RAIDed spinning disks. Performance for sequential files is stellar; performance for random IO – by far the harder task – hits 100,000 CIFS ops/sec at 1.23ms latency.

SSD cache helps to solve the big business problems of poor performance and high latency for mainline applications, without a large per-GB price tag. When utilized within hybrid storage arrays, SSD-based cache consolidates high-performance storage onto a shared storage system. These solutions cost a fraction of all-flash arrays and, in many cases, only slightly more than all-disk systems, yet they greatly multiply IO performance over traditional storage arrays.

 

Christine Taylor is an analyst specializing in data storage and information governance at Taneja Group.

