The Storage Networking Industry Association (SNIA) defines computational storage as an architecture that provides computational functions within storage as a way of offloading host processing or reducing data movement. It improves application performance and infrastructure efficiency by integrating compute resources that would otherwise be supplied by traditional host CPUs and memory.
The computational element is either directly situated within storage or can be placed between the host and the storage. The ultimate goal is to enable parallel computation and alleviate constraints on existing compute, memory, storage, and I/O.
This approach avoids moving data between the CPU and the storage layer, a common cause of slow response times. By hosting high-performance computing applications within the storage itself, it is possible to lower resource consumption, reduce costs, and achieve higher throughput. Moving computational elements into or beside the storage device also removes the bottleneck between data paths and host CPUs.
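To make the data-movement argument concrete, here is a minimal Python sketch. The `ComputationalDrive` class and its `offload_filter` method are hypothetical illustrations, not a real vendor API; the sketch contrasts host-side filtering, which ships every record across the bus, with in-situ filtering, which returns only the matches.

```python
# Illustrative sketch only: a simulated compute-capable drive.
# ComputationalDrive and offload_filter() are hypothetical, not a vendor API.

class ComputationalDrive:
    """Simulates storage that can run a filter near the data."""

    def __init__(self, records):
        self.records = records          # data "on the device"
        self.bytes_moved = 0            # bytes sent back to the host

    def read_all(self):
        """Traditional path: ship every record to the host."""
        for r in self.records:
            self.bytes_moved += len(r)
        return list(self.records)

    def offload_filter(self, predicate):
        """Computational path: filter in place, return only matches."""
        matches = [r for r in self.records if predicate(r)]
        for r in matches:
            self.bytes_moved += len(r)
        return matches

drive = ComputationalDrive([b"error: disk", b"ok", b"error: net", b"ok"])

# Host-side filtering moves all four records across the bus...
host_side = [r for r in drive.read_all() if r.startswith(b"error")]

# ...while in-situ filtering moves only the two matching records.
in_situ = drive.offload_filter(lambda r: r.startswith(b"error"))

assert host_side == in_situ == [b"error: disk", b"error: net"]
```

Either path yields the same result; the difference is how many bytes cross the bus, which is the cost computational storage is designed to cut.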
Computational Storage Use Cases
There are multiple use cases for computational storage:
- Solid-state drives (SSDs): SSD idiosyncrasies include writes being slower than reads, random writes being particularly slow, and writes wearing out devices over time. Computational storage can materially improve applications that use SSD storage by helping to avoid random writes and economizing on writes in general. This includes relational and NoSQL databases, analytics, software-defined storage, and metadata associated with HDD-based storage.
- Hyperscale data centers: These facilities execute a great variety of parallel workloads. Adding more intelligent and higher-powered storage can help reduce Capex, Opex, power and cooling demands, and physical footprint.
- Edge: Intelligent storage solutions provide low-power, more efficient compute where it is needed. Telecom providers, for example, can use computational storage to support massive amounts of data processing at the edge.
- IoT: Automotive and aerospace providers, in particular, are among the leaders in deploying computational storage to make sense of vast amounts of sensor data.
- Content delivery networks (CDNs): Computational storage improves key management, encryption, and access-controlled content by moving computation closer to consumers.
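Several of the SSD-related gains above come from converting random writes into sequential appends, the same idea used by log-structured storage engines. A minimal sketch of that technique (the `AppendOnlyStore` class is illustrative, not vendor code):

```python
# A minimal log-structured store sketch (illustrative, not vendor code):
# random key updates become sequential appends, which suits SSDs, whose
# random writes are slow and wear out flash faster than sequential ones.

class AppendOnlyStore:
    def __init__(self):
        self.log = []        # sequentially appended (key, value) entries
        self.index = {}      # key -> position of its latest entry

    def put(self, key, value):
        # An "overwrite" is just another append plus an index update;
        # nothing is rewritten in place.
        self.index[key] = len(self.log)
        self.log.append((key, value))

    def get(self, key):
        return self.log[self.index[key]][1]

store = AppendOnlyStore()
store.put("a", 1)
store.put("a", 2)           # update: appended, not overwritten
assert store.get("a") == 2
assert len(store.log) == 2  # both versions remain in the sequential log
```

In a real engine, stale entries are later reclaimed by compaction; the point here is simply that the device sees only sequential writes.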
How to Select a Computational Storage Vendor
What are the most important criteria a user should look at when evaluating computational storage options?
Align with Precise Needs
The value of a specific computational storage solution depends on how closely it aligns with challenges related to data size, the movement of large data sets, and the location of that data. Implemented correctly, it can turn raw data sets into more meaningful results or offload work from the CPU, GPU, or other processing engines.
Determine the Relative Importance of Performance, Efficiency, and Resilience
Different approaches to computational storage may address one or more of these areas better than others. For example, the storage-related functions of SSD-based applications consume resources from the CPU, SSD, and network; computational storage can offload those functions, directly improving performance and throughput and reducing latency.
As always, there is a tradeoff between cost and functionality. Some computational storage platforms offer more processing power than certain applications need. Balance cost against value to the enterprise.
Top Computational Storage Vendors
Enterprise Storage Forum rates the following among the top computational storage platforms, in no particular order.
NGD Systems uses an NVMe SSD with application execution capability on the drive. The company partnered with others to standardize the technology's terminology and helped drive the creation of the SNIA Technical Working Group (TWG) for computational storage. NGD's Newport Platform is said to be the first 16-channel flash computational storage solution to provide in-situ processing with high performance.
- Solutions focus primarily on data that is stored and then analyzed.
- NGD Systems provides a Computational Storage Drive (CSD) that is an ASIC-based NVMe SSD that supports a flexible Linux-based environment.
- Several form factors (M.2, E1.S, U.2, AIC).
- Capacities of up to 32TB available.
- It operates as a standard storage device or can have the compute resources turned on/off as needed.
- The In-Situ Processing Development System (ISDP) enables developers and integrators to build applications for in-situ processing solutions. The ISDP can be configured with a 1U, 2U, or 4U server.
- Does not suffer from performance issues due to power throttling.
The Pliops Storage Processor (PSP) is a hardware-enabled storage engine available as a cloud service or in a PCIe card form factor. With Pliops storage acceleration, companies can consolidate existing infrastructure, eliminate bottlenecks, and improve resource utilization without changing applications. The computational resources in the Pliops Storage Processor can be used to deploy data structures and algorithms that optimize the disparate costs of compute and storage. Pliops claims that customers have measured performance throughput gains as high as 25x and latency improvements greater than 1,000x.
- Pliops manages writes to SSDs optimally, increasing endurance and performance by up to 10x and enabling drives to be filled without concern for performance or endurance.
- The Pliops Storage Processor provides protection from drive failures without major tradeoffs in performance or capacity, allowing users to achieve twice the performance of RAID 0 while using all of the available SSD capacity for user data.
- The Pliops Storage Processor is a single all-in-one device that accelerates TLC, QLC, and even Optane SSDs for any application that stores data on SSDs.
- Works with all standard servers and with all existing SSD suppliers.
- The block interface performs the data acceleration, protection, and management functions in its domain without changes to the application.
- For even higher performance and efficiency, a direct key-value API is available for native RocksDB and other storage engine integrations.
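To illustrate why a native key-value interface can save host work relative to a block interface, the following sketch contrasts storing one record each way. Both device classes are hypothetical stand-ins, not the Pliops API.

```python
# Contrast sketch (hypothetical APIs, not Pliops'): storing one record
# through a block interface vs. a native key-value interface.

BLOCK_SIZE = 16

class BlockDevice:
    """Block interface: the host must pack records into fixed-size blocks."""
    def __init__(self):
        self.blocks = {}

    def write_block(self, lba, data):
        assert len(data) == BLOCK_SIZE
        self.blocks[lba] = data

    def read_block(self, lba):
        return self.blocks[lba]

class KeyValueDevice:
    """KV interface: the device handles placement and indexing."""
    def __init__(self):
        self.kv = {}

    def put(self, key, value):
        self.kv[key] = value

    def get(self, key):
        return self.kv[key]

record = b"user42=alice"

# Block path: the host pads the record, picks a logical block address,
# and maintains its own key -> LBA mapping table.
blockdev = BlockDevice()
key_to_lba = {"user42": 7}
blockdev.write_block(7, record.ljust(BLOCK_SIZE, b"\0"))
stored = blockdev.read_block(key_to_lba["user42"]).rstrip(b"\0")

# KV path: one call, no padding, no host-side mapping table.
kvdev = KeyValueDevice()
kvdev.put("user42", b"alice")

assert stored == record
assert kvdev.get("user42") == b"alice"
```

The padding, addressing, and indexing work on the block path is exactly what a KV-native device absorbs on behalf of the host.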
ScaleFlux is an early pioneer of computational storage drives. It brings compute closer to the data, reducing data movement, and boosting application performance. ScaleFlux aims to help users overcome the challenges they face in scaling to handle the exponential growth in the data storage, data processing, and data movement needs of the enterprise.
- ScaleFlux’s focus is on turnkey computational storage drive production deployment and NVMe-like storage performance.
- Claims to quadruple capacity, double performance, and halve costs compared to its competitors.
- Its CSD is an FPGA PCIe-attached (non-NVMe) SSD that supports functions pre-installed on the FPGA, which can be configured for specific workloads.
- U.2 and AIC form factors up to 8 TB.
Samsung provides a CSD, called the SmartSSD, that pairs an NVMe SSD with an FPGA accelerator on the same U.2 form factor card; it is promoted by Samsung's FPGA partner, Xilinx. The device offers up to 4 TB of storage and provides a peer-to-peer path between the FPGA and the SSD, integrating heavy-duty compute engines with high-capacity NAND flash storage to raise processing efficiency.
- One U.2 card combining an NVMe SSD with an FPGA accelerator.
- Runtimes, libraries, APIs, and drivers can be built into the system using common application frameworks.
- Its SmartSSD CSD Platform accelerates a variety of applications, including database management, video processing, artificial intelligence layers, and virtualization.
Eideticom provides the NoLoad Computational Storage Processor to a variety of other providers. These products are not tied to any specific storage media; rather, they help disaggregate compute and storage into independently scalable resources.
- Form factors from hardware partners include U.2, Add-in-Card (AIC), and EDSFF.
- Can be deployed on customer-specific hardware platforms.
- NoLoad IP is also available for licensing.
- NoLoad Accelerators can be shared across the data center using NVMe-oF.
Nyriad began as a developer of technology used in radio telescopes for deep space exploration. More recently, the company developed a CSD driven by Nvidia GPUs that can handle data processing at 160 TBps. It promises to deliver the highest possible performance with bulletproof reliability, resilience, consistency, and integrity. The company appears to be in semi-stealth mode as the website lists little in terms of product details.
- A single copy of the data performs consistently, with more resilience and greater space and power efficiency than multiple copies.
- A software architecture that enables its high-performance storage controller to take full advantage of the processing characteristics of the CPU and GPU.
Arm offers a fast, cost-effective computational storage solution using low-power, high-performance processors and a software stack and developer tools from the Linux ecosystem.
- Tight integration with Arm specialist processors.
- Arm Cortex-A processors are optimized for low power and high performance in complex computing tasks on storage devices, as well as high-performance real-time and high-level operating system applications.
- A memory management unit (MMU) for running Linux, and Neon support for ML workloads.
- Partnership with NGD Systems.
NetInt offers an ASIC combining an SSD controller with compute power. Its focus is largely on multimedia production, as well as video surveillance, CDN, edge, and data center operations.
- The NetInt Codensity G4 SSD Controller SoC offers video compression using H.265 encode/decode engines.
- Capacity of up to 16 TB.
- Supports NVM-Express over PCI-Express 4.0.
- Available in add-in card or U.2 formats.