What is SSD cache?
SSD caching is a computing and storage technology that stores frequently and recently used data on a fast SSD cache. It addresses HDD-related I/O bottlenecks by increasing IOPS and reducing latency, significantly shortening load and execution times. Caching works on both reads and writes, and particularly benefits read-intensive applications.
Caching is not new to hard drives. Operating systems such as Windows and Linux ship with native caching software, and HDD array caching software exists to increase overall HDD performance, but such configurations are expensive and complex.
How Does Caching Work?
SSD caching is also called flash memory caching. Although flash and SSD are not the same thing, most SSDs use NAND flash. In this architecture, the caching software directs data that does not meet caching criteria to HDDs, and temporarily stores high-I/O data on the NAND flash memory chips.
This temporary storage, or cache, accelerates read and write requests by keeping a copy of the data closer to the processor. Caches may consist of an entire SSD or a fraction of the memory cells within an SSD. Many SSDs already come with a caching storage area, which may be NAND and/or DRAM.
SSD caching improves performance by storing readily needed data so that it’s more quickly available.
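The read path described above can be sketched in a few lines: check the fast cache first, and on a miss fetch from the slow backing store and keep a copy for next time. This is an illustrative sketch, not a real caching API; the names `ReadCache` and `backing_read` are hypothetical.

```python
class ReadCache:
    """Minimal sketch of a read cache sitting in front of a slow backing store."""

    def __init__(self, capacity, backing_read):
        self.capacity = capacity          # max number of cached blocks
        self.backing_read = backing_read  # function: block_id -> data (slow HDD)
        self.cache = {}                   # block_id -> data (fast SSD)

    def read(self, block_id):
        # Cache hit: serve the copy from fast memory.
        if block_id in self.cache:
            return self.cache[block_id]
        # Cache miss: fetch from the backing store, then cache a copy.
        data = self.backing_read(block_id)
        if len(self.cache) >= self.capacity:
            self.cache.pop(next(iter(self.cache)))  # evict an arbitrary block
        self.cache[block_id] = data
        return data
```

A real cache would use a smarter eviction policy than dropping an arbitrary block, but the hit/miss flow is the same.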
Types of SSD Caching
To fully understand how SSD caching works, let’s look at the various types: read caching, and three write-caching methods, namely write-through, write-back, and write-around.
- Read SSD caching: stores copies of data in fast SSD memory cells, usually NAND and/or DRAM. The caching software populates the cache with copies of frequently read data and serves subsequent reads from it. Read caches from different manufacturers may use algorithm variants, such as coupling DRAM and NAND memory cells on SSDs to produce even faster caching performance.
- Write SSD caching types:
- Write-through SSD caching writes simultaneously to the cache and to primary storage. The cache enables faster data retrieval, while the primary storage write safely retains the data even if a system interruption affects the cache. Write-through SSD caching does not require additional data protection for the cached data, but does increase write latency.
- Write-back SSD caching confirms that a block is written to the SSD cache, and the data is available for usage before writing the block to main storage. The method has lower latency than write-through, but if the cache loses data before the data writes to primary storage, that data is lost. Typical data protection solutions for write-back SSD caching are redundant SSDs or mirroring.
- Write-around SSD caching writes data to the primary storage first instead of to the cache. This gives the SSD cache time to analyze data requests and identify most frequently and recently used data. The SSD cache efficiently caches high priority data requests without flooding the cache with infrequently accessed data.
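The three write policies above differ only in where data lands first and when it reaches primary storage. The sketch below contrasts them using plain dicts as stand-ins for the SSD cache and HDD primary storage; all names are illustrative, not a real API.

```python
def write_through(cache, primary, key, value):
    # Write to both tiers at once: safe, but pays full write latency every time.
    cache[key] = value
    primary[key] = value

def write_back(cache, primary, key, value, dirty):
    # Acknowledge as soon as the SSD cache holds the block; flush to HDD later.
    cache[key] = value
    dirty.add(key)          # track blocks not yet persisted to primary storage

def flush(cache, primary, dirty):
    # Later, persist the dirty blocks to primary storage.
    for key in dirty:
        primary[key] = cache[key]
    dirty.clear()

def write_around(cache, primary, key, value):
    # Write straight to primary storage; the cache fills only on later reads.
    primary[key] = value
    cache.pop(key, None)    # drop any stale cached copy
```

The `dirty` set makes the write-back risk concrete: any key in it exists only on the cache, which is exactly the data lost if the cache fails before `flush` runs.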
Optimizing Your Drive’s Performance
SSD caching improves storage performance by keeping frequently accessed data immediately available. When the host issues a data request, the caching software will analyze SSD caches first to see if the data already resides there.
If not, the caching software uses algorithms to predict data access patterns. The algorithms identify the least and most frequently used data, and the least and most recently accessed data, enabling the software to place copies of high-priority active data into fast cache memory.
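The recency-based placement described above can be sketched as a least-recently-used (LRU) cache: the most recently accessed blocks stay cached, and the block untouched the longest is evicted first. A production cache would also weigh access frequency; this sketch shows recency only.

```python
from collections import OrderedDict

class LRUCache:
    """Sketch of recency-based cache placement: keep recent blocks, evict stale ones."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()   # ordered oldest-access first

    def access(self, block_id, data):
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)    # mark as most recently used
        else:
            if len(self.blocks) >= self.capacity:
                self.blocks.popitem(last=False)  # evict least recently used
            self.blocks[block_id] = data

c = LRUCache(2)
c.access("a", 1)
c.access("b", 2)
c.access("a", 1)   # "a" is now the most recently used block
c.access("c", 3)   # cache is full, so "b" (least recently used) is evicted
```

After this sequence the cache holds "a" and "c"; "b" was the coldest block and lost its slot.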
Not every application improves with SSD caching. Any application that issues primarily sequential reads and writes, such as video streaming, gains little from random-I/O caching. And data with no predictable access pattern, such as purely random reads, does not benefit from SSD caching either, because there are no patterns for the algorithms to exploit.
SSD Caching Locations
SSD caching may occur in any type of device that uses SSDs:
- Personal computers (Windows and Linux operating systems both provide basic caching)
- External storage arrays
- SSD storage controllers
- Servers with direct-attached hybrid storage – note that server caches are not limited to SSDs; eMMC, an embedded flash standard, also supports caching.
SSD Caching Use Case: Virtualized Infrastructure
SSD caching can significantly improve performance and lower latency for enterprise applications and large virtualized networks.
For example, SSD caching accelerates I/O performance, and virtualized environments generate large volumes of random I/O because they consolidate many different server functions and applications. This includes VDI deployments with hundreds to thousands of virtual desktops, or virtualized computing networks with dozens of different application servers and hundreds of dynamic virtual machines.
All these virtualized entities share the same underlying storage media – mostly HDDs, since it is rarely cost-effective to replace HDD arrays with all-flash arrays just to support virtualized environments. AFAs support extremely high I/O rates, but even large virtualized environments rarely generate as much I/O as an AFA can deliver, now or in the future.
This architecture does not justify the high cost of an all-flash array. But within an HDD or hybrid array that underlies a virtualized network, SSD caching enables the hard drives to support high I/O requirements even for intensive virtualized workloads.
Server-based SSDs, as opposed to networked array-based storage, also work in virtualized networks. In these cases, the host server uses SSD caches in its direct-attached storage to serve multiple VMs. Because the SSD cache is physically close to the I/O location, latency drops even further. The drawback is that if the server fails, the cached data may be inaccessible and perhaps even unrecoverable, depending on the type of write cache. However, if IT backs up, snapshots, or replicates the cached data and can rapidly restore it to another server, this is not a huge drawback.
Best SSD Cache Software
“Best” is a complex concept in SSD caching, because many technologies deliver caching software commands. These include hypervisors such as VMware and Hyper-V, specific applications, third-party software, Windows and Linux, SSD storage controllers, and storage arrays. For example:
- Intel: Smart Response Technology for hybrid caching. Smart Response Technology is a feature of Intel Rapid Storage Technology that improves performance and durability in hybrid arrays. Smart Response Technology caches I/O blocks of the most frequently used data and applications into the SSD, and uses the HDD for large storage capacity.
- Intel: RAID Cache Controller. Intel also manufactures a RAID SSD cache controller that uses intelligent caching algorithms to identify frequently accessed data, and directs it to fast flash memory.
- QNAP: Native SSD caching on NAS. QNAP claims that its caching feature accelerates IOPS performance on QNAP network attached storage by up to 10 times, and reduces latency up to threefold. QNAP markets the NAS with its SSD caching feature for databases and virtualized environments.
- NetApp: Array-based SSD caching. The SSD Cache feature improves read performance on NetApp arrays, so it mostly benefits arrays that store read-intensive applications. NetApp uses primary and secondary cache locations on its SSDs: the primary cache is SSD controller-based DRAM, while NAND flash memory cells serve as the secondary cache. Once data is stored in the SSD cache, subsequent reads are served from the cache rather than from primary storage. This high-performance SSD caching improves application I/O and response times, and sustains the performance improvement across different workloads.