Maximize Asset Velocity by Sharing Flash 


This is an opinion piece exclusive to Enterprise Storage Forum written by John Collins, VP of Product Marketing at Western Digital. 

We live in a data-driven world, and the sheer volume of data created by consumers, businesses and machines is exploding. It’s hard to keep up.

Global research shows that the abundance of data generated, copied, and stored isn’t going to decrease anytime soon. In fact, IDC projects the volume to reach more than 166 zettabytes by 2025—that’s less than two years away. Obviously, this affects data storage tremendously.

Storing a vast and growing amount of data presents two major challenges.

The first involves where and how to store it: on-premises or in the cloud, in which storage tier (hot, warm, or cold), and on what type of infrastructure topology. These are complex questions with many considerations, and there’s no simple answer. One thing is clear, however: in cloud or large enterprise data centers, storage architects must find solutions that can scale to accommodate petabytes of data while still providing the performance and service-level agreements the business demands.

The other big challenge is budget. Storage architects are under constant pressure to find cost-effective solutions that fit shrinking or flat IT budgets. Obviously, the more data an organization has, the more storage it needs, and with greater storage capacity comes increased storage cost. Today’s storage architects must perform a balancing act between the need for high-performance storage and dwindling budgets to get the best return on their investment.

Introducing Asset Velocity and Why It Matters

To help deliver high performance and low latency storage, data center architects are increasingly deploying flash to accelerate their workloads. Many data centers are adopting Non-Volatile Memory Express (NVMe) technology for parts of their architecture to even further expand the performance and latency benefits.

Laser-focused on optimizing and controlling storage spend, they must efficiently manage, scale, and utilize these flash assets to get the biggest bang for their buck. This is driving a growing trend to disaggregate and share NVMe flash over an Ethernet fabric for improved asset velocity.

In data storage management, achieving asset velocity means obtaining the highest performance and maximum availability (measured in uptime) while extracting the most value from storage through the best possible utilization and efficiency. High utilization, in turn, enables lower costs and a better overall return on investment (ROI). Basically, asset velocity is the ability to use a storage device to its fullest potential to generate value and revenue while keeping costs under control. Most organizations are not fully using their flash assets and therefore are not realizing the highest possible utilization. In other words, inefficiencies in the architecture are impeding asset velocity.
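As a rough illustration of why utilization drives ROI, consider the effective cost per terabyte actually put to work. The figures below are hypothetical assumptions for the sake of the sketch, not vendor data:

```python
# Hypothetical sketch: how flash utilization drives effective cost per used TB.
# Purchase cost, capacity, and utilization figures are illustrative assumptions.

def effective_cost_per_tb(purchase_cost: float, capacity_tb: float,
                          utilization: float) -> float:
    """Cost per terabyte actually put to work, given fractional utilization."""
    used_tb = capacity_tb * utilization
    return purchase_cost / used_tb

# The same 100 TB flash pool, bought for $20,000, at two utilization levels.
stranded = effective_cost_per_tb(20_000, 100, 0.40)  # flash trapped in nodes
shared = effective_cost_per_tb(20_000, 100, 0.85)    # flash pooled and shared

print(f"40% utilized: ${stranded:.0f} per used TB")  # $500 per used TB
print(f"85% utilized: ${shared:.0f} per used TB")    # ~$235 per used TB
```

The hardware cost is identical in both cases; only the fraction of flash doing useful work changes, which is the sense in which higher utilization directly improves ROI.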

As an example, let’s review hyper-converged infrastructure (HCI), a scale-out environment. One of its main selling points is that HCI is relatively easy to scale: customers add full nodes, each packaging compute, storage, and networking along with the server hardware and software. These nodes consume power, cooling, and networking resources to deliver services to applications. When application duty cycles and workloads are predictable and stable, HCI can be an effective way to scale. Design engineers analyze these workloads and provision sufficient resources (compute, storage, and network) in each node to meet the application’s demands for the node’s service life, normally five to seven years. When a workload or application needs additional resources, you simply add more nodes, regardless of which specific resource is actually required.

One of the pain points of HCI architecture, however, is that unpredictable applications and workloads with demand bursts can stress the resources contained in these nodes. A greater concern arises when nodes contain resources that are not required or become underutilized. Unused assets still consume power and cooling, creating inefficiency; these resources become stranded or trapped and are unavailable to other applications or workloads.
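The stranded-resource problem can be sketched numerically. The node specifications below are hypothetical assumptions chosen to make the arithmetic clear:

```python
import math

# Hypothetical sketch of stranded resources when scaling whole HCI nodes.
# Node specs (32 cores, 20 TB of flash per node) are illustrative assumptions.
CORES_PER_NODE = 32
FLASH_TB_PER_NODE = 20

def hci_nodes_needed(cores_needed: int, flash_tb_needed: int) -> int:
    """HCI scales by whole nodes, so the scarcer resource sets the node count."""
    return max(math.ceil(cores_needed / CORES_PER_NODE),
               math.ceil(flash_tb_needed / FLASH_TB_PER_NODE))

# A storage-heavy workload: modest compute demand, lots of flash demand.
nodes = hci_nodes_needed(cores_needed=64, flash_tb_needed=200)
stranded_cores = nodes * CORES_PER_NODE - 64

print(f"HCI nodes required: {nodes}")           # 10 nodes, just to reach 200 TB
print(f"Stranded CPU cores: {stranded_cores}")  # 256 cores bought but idle
```

Because storage demand forces the node count, the workload ends up paying for (and powering) five times the compute it needs, which is exactly the inefficiency CDI is meant to remove.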

Another way to architect IT infrastructure is to deploy composable disaggregated infrastructure (CDI) rather than HCI, especially for high value assets such as flash-based SSD storage.

Benefits of Disaggregating and Sharing Flash

As previously stated, scaling out with HCI has its place and can be essential for keeping up with growing data demands where application requirements are fixed and workloads are predictable and stable. However, in HCI or other scale-out environments, resource management can become inefficient from a utilization and asset velocity perspective when applications have demand bursts, or when the resources provisioned exceed what the applications actually need.

Alternatively, designers are scaling up by deploying CDI: taking the flash assets out of the server nodes and sharing those assets across multiple applications and workloads using NVMe over Fabrics (NVMe-oF).

NVMe is a protocol for accessing flash storage over the Peripheral Component Interconnect Express (PCIe®) bus. NVMe uses flash media more efficiently than older interfaces, and the standard is displacing SAS/SATA connections in data center applications. In HCI architecture, SSDs are attached directly to the PCIe bus, and the storage is available only to the applications and workloads contained within that server. With NVMe-oF technology, designers can extend the NVMe model beyond the physical hardware node and deploy flash assets to applications and workloads on demand over high-speed Ethernet connections, running either RoCE (RDMA over Converged Ethernet) or TCP (Transmission Control Protocol).
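On Linux, this model is visible in the standard `nvme-cli` tooling: remote namespaces attach as ordinary block devices. A minimal NVMe/TCP sketch follows; the target address, port, and NQN are placeholder values, not from the article:

```shell
# Minimal NVMe-oF (TCP transport) sketch using nvme-cli on Linux.
# 192.0.2.10, port 4420, and the NQN below are placeholder values.

modprobe nvme-tcp                        # load the NVMe/TCP initiator module

# Ask the remote target which subsystems it exports.
nvme discover -t tcp -a 192.0.2.10 -s 4420

# Connect to one exported subsystem; its namespaces then appear as
# local block devices (e.g. /dev/nvme1n1), just like direct-attached flash.
nvme connect -t tcp -a 192.0.2.10 -s 4420 \
     -n nqn.2024-01.com.example:shared-flash

nvme list                                # show attached NVMe devices

# Release the shared flash when the workload no longer needs it.
nvme disconnect -n nqn.2024-01.com.example:shared-flash
```

The connect/disconnect pair is what makes flash an on-demand resource: capacity can be attached to a busy server and released again without touching the server’s local hardware.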

Deploying storage with NVMe-oF delivers several immediate benefits.

  • Minimized capex: By disaggregating flash from the server, storage can be added independently of other resources when applications require additional capacity, minimizing capex spend.
  • Best-of-breed deployment: As new flash technologies are developed and higher-performance SSDs come to market, NVMe-oF allows these newer devices to be deployed on a cadence independent of the HCI node’s CPU, software, and other resources—unneeded purchases when additional storage is all that’s required.
  • Increased resource utilization: Separating flash into independent pools allows storage architects to allocate and share flash resources more efficiently, leading to significant cost savings, especially for organizations with highly variable workloads.
  • Greater flexibility: The ability to add flash externally when needed gives customers great flexibility. Because flash is disaggregated, there’s no waiting for a refresh cycle; organizations can scale and future-proof their assets as needed and respond to changing business needs. Flash can be added to applications without disrupting the services running on the server.
  • Improved performance: Sharing flash storage through NVMe-oF allows the flash to be accessed by multiple applications and servers as if it were locally deployed in each server. This architecture lets storage architects scale their infrastructure more effectively to meet the demands of high-performance workloads such as machine learning and artificial intelligence.

With its many benefits, NVMe-oF is being adopted today and will continue to be a growing trend in future storage architectures. It allows designers to create a high-performance storage environment with latencies that rival direct-attached storage (DAS), and it enables flash devices to be shared, which creates very high utilization. We aptly call this Asset Velocity.

John Collins
John Collins is the Vice President of Marketing and Business Development at Western Digital Corporation. He is a seasoned leader with a strong background in storage, memory technology, cloud, computing, video/graphics, networks, and mobility.
