The abstraction of computing resources has been underway for a couple of decades. VMware set the stage with server virtualization, and since then one hardware element after another has been abstracted; composable infrastructure is the latest iteration. It abstracts compute, storage, and networking resources, which are managed by software through a web-based interface, making them available much like cloud services. However, instead of the public cloud, these resources are delivered in private and hybrid clouds.
This is all about agility and flexibility. Rather than building out infrastructure piece by piece (provisioning servers, adding cabling, establishing network connections), composable infrastructure eliminates the hardware-based heavy lifting. Dynamic pools of resources are combined to support applications, improving performance, eliminating underutilization, avoiding overprovisioning, and responding to the needs of the business as rapidly as public cloud resources can.
Composable infrastructure goes beyond earlier approaches such as converged and hyperconverged infrastructure in that IT can now run physical workloads inside the same environment that supports its virtual or container workloads.
Key Features and Benefits of Composable Infrastructure Solutions
Composable infrastructure solutions vary from vendor to vendor, but most offer many of the following features and benefits:
- Shortened server provisioning processes.
- Disaggregated CPU, GPU, NVMe, networking, DPU, FPGA, and other accelerator resources can be pooled, shared, and redeployed with zero touch via composable software, across PCIe, InfiniBand, Ethernet, or CXL.
- Software-defined scalability improves the ability to adapt data center architecture to address business demands in real time.
- Software-defined data center resource management, orchestration, and scalability for cloud-like flexibility and agility, regardless of where physical infrastructure resides.
- Users can compose bare-metal servers in seconds with the resources a workload requires. If the workload is no longer needed, or its resources are idle, resources can be returned to their respective pools for future use.
- No need to physically reconfigure servers, manually add additional devices, or overprovision hardware to support heavy workloads such as AI/ML.
- Process automation to address data demand associated with next-generation applications in AI, IoT deployment, DevOps, cloud and edge computing, NVMe- and GPU-over-Fabric (NVMe-oF, GPU-oF) support, and other data-center scale operations.
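The compose-and-release lifecycle described above can be sketched as a small resource-pool model. This is a hypothetical illustration of the concept, not any vendor's actual API; all names here are invented:

```python
from dataclasses import dataclass, field

@dataclass
class ResourcePools:
    """Hypothetical pools of disaggregated devices (GPU, NVMe, FPGA)."""
    free: dict = field(default_factory=lambda: {"gpu": 8, "nvme": 16, "fpga": 4})

    def compose(self, name, **request):
        """Carve a bare-metal server out of the free pools."""
        if any(self.free.get(kind, 0) < n for kind, n in request.items()):
            raise RuntimeError("insufficient free resources")
        for kind, n in request.items():
            self.free[kind] -= n
        return {"server": name, "devices": dict(request)}

    def release(self, server):
        """Return a composed server's devices to their pools for reuse."""
        for kind, n in server["devices"].items():
            self.free[kind] += n

pools = ResourcePools()
node = pools.compose("ml-train-01", gpu=4, nvme=2)
print(pools.free["gpu"])  # 4 GPUs remain in the pool
pools.release(node)
print(pools.free["gpu"])  # all 8 GPUs are back in the pool
```

Real composable software does this at the fabric level for physical devices; the point of the sketch is only the lifecycle: resources leave a shared pool when a server is composed and return to it when the workload is done.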
Key Use Cases for Composable Infrastructure
Use cases for composable infrastructure include:
- Pool and deploy GPU and other accelerators to meet the demands of AI and ML-driven high-performance computing. Leverage resource pools to compose balanced systems for each phase of the AI process, from ingest to inference.
- Regardless of where resources are physically located, enable cloud-like flexibility to direct resources toward applications that require them.
- Use remote resource management to orchestrate edge facility resources with zero touch. Pool and proportion resources to meet the needs of individual edge applications.
- Disaggregate converged infrastructures and add discrete accelerators as required.
- Enable dynamic VM provisioning with vCenter, Kubernetes, and Slurm integration.
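As one concrete point of reference for the Kubernetes integration mentioned above, pooled GPUs are typically consumed through an extended resource request in a pod spec. The pod and image names below are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: train-job                       # hypothetical name
spec:
  containers:
  - name: trainer
    image: example.com/trainer:latest   # placeholder image
    resources:
      limits:
        nvidia.com/gpu: 2               # scheduler places the pod on a node exposing 2 GPUs
```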
Top Composable Infrastructure Vendors
Enterprise Storage Forum reviewed a variety of vendors in the composable infrastructure space. Here are our top picks in no particular order:
Liqid Matrix
Liqid delivers a composable disaggregated infrastructure (CDI) platform known as Liqid Matrix. It is designed to bring the flexibility and agility of the public cloud to data center environments, regardless of where the physical hardware resides. It enables organizations to deploy bare-metal servers and all resources on demand via software, with hardware arranged into abstracted pools.
- Add the exact amount of GPU, accelerator, storage, and networking resources needed to existing servers, eliminating the need to physically install or remove components from a server chassis.
- Resources are disaggregated into PCIe expansion chassis and interconnected via a PCIe, Ethernet, or InfiniBand fabric.
- Host servers are seen as compute resources and are connected via PCIe HBA or smart NIC.
- Liqid Matrix software resides on the fabric and connects and manages resources without the need for drivers or agents.
- With multi-fabric support for PCIe, InfiniBand, Ethernet, and CXL, Liqid can share resources for performance up and down the stack or for distance across the network, with minimal latency.
- Disaggregated resources can be added to running systems without a reboot.
- Extend the life of existing data center devices by repurposing them as shared resources, regardless of physical location, and add disaggregated devices on demand.
Fungible Data Center
The Fungible Data Center enables end users to create bare-metal servers on demand to address changing application needs. Infrastructure can run at peak utilization and efficiency by pooling and deploying compute, storage, and network resources on demand, which eliminates overprovisioning and offers flexibility for unknown workloads.
- Fungible Data Center allows a minimal set of SKUs to support a range of server configurations to reduce data center complexity, while matching workload needs.
- Secure, multi-tenant data centers are managed from a single pane of glass with automation, including APIs that enable an infrastructure-as-code approach to data center management.
- Supports everything from traditional applications running in hypervisor environments to containerized cloud-native applications and big data, AI, and analytics workloads.
- Offloads I/O processing from system CPUs.
- Per-tenant quotas on capacity and IOPS ensure that service levels are consistent across workloads and users.
- By disaggregating storage, it delivers near-local performance, eliminating the need for local NVMe storage in containerized applications.
- Scales from half a rack to hundreds of racks.
- Hardware-accelerated domains, segmentation, Quality of Service (QoS), and end-to-end encryption to enable threat detection and prevention.
- Standard compute and GPU servers are equipped with the Fungible Data Services Platform — a standard full-height, half-length PCIe card powered by a Fungible S1 DPU.
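The per-tenant quota model above can be illustrated with a minimal sketch. The class and field names are hypothetical, chosen only to show the admission-control idea, not Fungible's actual API:

```python
class TenantQuota:
    """Enforce a tenant's capacity (GB) and IOPS ceilings."""

    def __init__(self, capacity_gb, iops):
        self.capacity_gb = capacity_gb
        self.iops = iops
        self.used_gb = 0

    def allocate_volume(self, size_gb):
        """Admit a new volume only if it fits under the capacity quota."""
        if self.used_gb + size_gb > self.capacity_gb:
            raise PermissionError("capacity quota exceeded")
        self.used_gb += size_gb
        # Every volume inherits the tenant-wide IOPS ceiling.
        return {"size_gb": size_gb, "iops_limit": self.iops}

tenant = TenantQuota(capacity_gb=500, iops=100_000)
vol = tenant.allocate_volume(200)  # admitted: 200 of 500 GB now in use
```

Enforcing both a capacity and an IOPS ceiling per tenant is what keeps one noisy workload from degrading service levels for its neighbors.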
Lenovo ThinkAgile CP
The Lenovo ThinkAgile CP Series is a composable infrastructure solution for private clouds. It has an integrated application marketplace (Lenovo Cloud Marketplace) and provides end-to-end automation. It uses modular compute, storage, and networking components paired with cloud virtualization software to create pools of IT resources, independently scaling and allocating capacity, and automatically configuring resources to fulfill application requirements.
- Can deal with workloads such as web services, virtual desktop infrastructure (VDI), enterprise applications, OLTP and OLAP databases, data analytics, application development, virtualization, containers, and back-office applications.
- Features second-generation Intel Xeon Scalable processors.
- Factory-preloaded platform delivered with all infrastructure needed.
- Lenovo deployment services are included to get users up and running quickly.
- Scalable software-defined infrastructure (SDI) that simplifies cloud deployments and orchestrates workload provisioning.
- Security features include data-at-rest encryption, virtualized network and VM-level firewalls, and two-factor authentication.
- Storage Blocks are 2U storage enclosures with up to 24 PCIe NVMe SSDs and two controllers.
- Compute Blocks are modular 2U enclosures that contain up to four nodes and deliver processor and memory resources.
- The Interconnect centralizes connectivity of on-premises infrastructure to the Cloud Controller and acts as the entry point into the existing network.
Western Digital OpenFlex
Western Digital offers the OpenFlex Composable Infrastructure solution, which uses NVMe-over-Fabrics to improve compute and storage utilization, performance, and agility in the data center. Storage can be disaggregated from compute, enabling applications to share a common pool of storage capacity. Data can be shared between applications, or capacity can be allocated where needed, regardless of location.
- Composable, shareable high-performance storage.
- Access data from anywhere in the data center.
- Manageable through existing data center orchestration frameworks.
- Dynamic provisioning to scale down resources as easily as they are scaled up.
- Western Digital’s Open Composability API builds upon existing industry standards, such as Redfish and Swordfish, to orchestrate data center elements, including compute, flash, disk, network, accelerators, and disaggregated memory.
- Leverages Western Digital’s Silicon to Systems Design approach across disk and flash to deliver a scalable, modular set of storage fabric devices — both flash and disk — with a common interface.
- NVMe-oF enables multiple storage tiers over the same wire.
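Redfish, which the Open Composability API builds on, models data center hardware as JSON resources. The sketch below parses a trimmed, hypothetical payload shaped like a Redfish ComputerSystem resource; the real schema is defined by DMTF, and the specific field values here are illustrative:

```python
import json

# Trimmed, hypothetical payload in the shape of a Redfish ComputerSystem.
payload = json.loads("""
{
  "@odata.id": "/redfish/v1/Systems/1",
  "Name": "composed-node-1",
  "ProcessorSummary": {"Count": 2},
  "MemorySummary": {"TotalSystemMemoryGiB": 512}
}
""")

def summarize(system):
    """Pull the fields an orchestrator might use to place a workload."""
    return (system["Name"],
            system["ProcessorSummary"]["Count"],
            system["MemorySummary"]["TotalSystemMemoryGiB"])

name, cpus, mem_gib = summarize(payload)
print(name, cpus, mem_gib)  # composed-node-1 2 512
```

Because Redfish and Swordfish expose a uniform JSON model, the same kind of traversal works across vendors, which is what lets existing orchestration frameworks manage composable gear without device-specific drivers.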
NVIDIA BlueField
The NVIDIA BlueField data processing unit (DPU) offloads, accelerates, and isolates networking, storage, and security services from the cloud to the data center to the edge. BlueField DPUs combine computing, infrastructure-on-a-chip programmability, and high-performance networking to address the most demanding workloads.
- BlueField DPUs enable a zero-trust architecture.
- Provides NVMe-oF, GPUDirect Storage, encryption, elastic storage, data integrity, decompression, and deduplication.
- Up to 400 Gb/s of Ethernet and InfiniBand connectivity for both traditional applications and modern GPU-accelerated workloads.
- The NVIDIA DOCA software development kit enables developers to create software-defined DPU-accelerated services.
- Optimized for multi-tenant, cloud-native environments.
- Delivers the equivalent data center services of up to 300 CPU cores.
- 10x the compute power of the previous generation DPU.
- 16x Arm A78 cores and 4x the acceleration for cryptography.
- Dell, Inspur, Lenovo, and Supermicro are integrating BlueField DPUs into their systems.
HPE Synergy
HPE Synergy offers composable, software-defined infrastructure for hybrid cloud environments. It lets data centers compose fluid pools of physical and virtual compute, storage, and fabric resources into any configuration for any workload under a unified API. Resources are treated as services that can be deployed to applications in near real time, eliminating the need to configure hardware manually.
- Instantly compose and recompose fluid pools of physical and virtual compute, storage, and fabric resources into any configuration to run any application or workload.
- One integrated management platform.
- Programmable from a single interface and repeatable templates.
- API to integrate dozens of management, open-source automation, and DevOps tools such as Chef, Docker, and OpenStack.
- HPE Synergy 12000 Frame acts as the base for HPE Synergy intelligent infrastructure with embedded management.
- HPE Synergy Composer combines multiple tools for operational changes.
- HPE Synergy Image Streamer implements rapid image/application changes to multiple compute nodes.
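The template-driven style of provisioning that Synergy's unified API and repeatable templates enable can be sketched generically. The dictionary shapes and field names below are hypothetical, not HPE's actual REST schema:

```python
import copy

# Hypothetical server-profile template: defined once, stamped onto many nodes.
TEMPLATE = {
    "firmware": "baseline-2024.1",
    "connections": [{"network": "prod-net", "bandwidth_gb": 25}],
    "boot": {"order": ["PXE", "HardDisk"]},
}

def profile_from_template(node_name, overrides=None):
    """Create a per-node profile from the shared template.

    Deep-copying keeps the template itself immutable, so every node
    starts from the same known-good baseline.
    """
    profile = copy.deepcopy(TEMPLATE)
    profile["name"] = node_name
    profile.update(overrides or {})
    return profile

p = profile_from_template("bay-3", {"boot": {"order": ["HardDisk"]}})
```

The design point is that a change to the template propagates to every profile stamped out afterward, which is what makes fleet-wide configuration repeatable rather than hand-built per server.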
Cisco Unified Computing System (UCS)
Cisco currently has two composable infrastructure offerings. The UCS M-Series is designed for compute-intensive workloads such as scale-out applications, grid, online gaming, genomic applications, web serving, and MaaS (metal as a service). The UCS C3260 is targeted at data-centric workloads such as big data analytics (MapR, Cloudera, etc.), content delivery, Microsoft Storage Spaces, and software-defined storage environments.
- Each M-Series chassis includes up to 16 independent servers, with Intel Xeon E3 v3 and Intel Xeon E5 v3 processor configurations in a variety of core-count and memory-footprint options.
- Compute cartridges are connected via an in-chassis PCIe midplane to the Cisco VIC 1300 Series with Cisco System Link Technology to provide access to the local shared I/O resources.
- Each server cartridge has access to a pool of 2x 40 Gb network resources and a pool of 4x SSD storage resources, which can be distributed as needed and scaled independently.
- The Cisco 1300 Series VIC with System Link Technology provides flexible resource sharing and configuration for the Cisco UCS M-Series.
- System Link Technology presents the vNIC (virtual network interfaces) and the sNIC (virtual storage controller) to the operating system as a dedicated PCIe device for that server.
- The Cisco UCS C3260 is a capacity-intensive architecture with up to 56x drives (supports both SSD and HDD options) and 2x server nodes per chassis.
- It uses up to two system I/O controllers to create pools of storage that can be allocated dynamically via the management controller to each server node.
- Policy-driven local storage allocation.
Intel Rack Scale Design
Intel Rack Scale Design (RSD) is an architecture for disaggregated, composable infrastructure to change the way a data center is built, managed, and expanded over time. It enables IT to buy only what they need when they need it. RSD divides the traditional server into several resource sections within the rack.
- Compute, storage, and accelerator resources are attached to a high-speed interconnect.
- A mix of different modules can be purchased to tailor the architecture to application requirements.
- Uses the Intel Xeon Scalable processor family.
- Harnesses Intel Optane storage and storage-class memory.
- High-speed networks based on Intel Silicon Photonics.
- Incorporates standards such as Redfish.
- Partnerships with Dell, Ericsson, HPE, Huawei, Inspur, Quanta, Radisys, Supermicro, Wiwynn, AMI, Canonical, and 99Cloud.