Mark Pastor is director of product management at San Jose, California-based Western Digital, where he is actively involved in platforms for edge and cold storage. Previously, Pastor defined and launched various products at Quantum Corporation and Seagate. Pastor holds a bachelor’s degree in engineering from the University of California, Los Angeles. He is based in Colorado Springs, Colorado.
Significant challenges confront data center managers in the era of cloud computing and big data.
Managers must rein in the growing financial burden of today's colossal and unprecedented surge in data generation. To that end, they must squeeze as much efficiency from their data centers as possible. Meanwhile, they must also future-proof their systems and improve their flexibility, adaptability, and scale during an extraordinary period of technological innovation and disruption.
The good news is that a solution exists that helps with all the above.
Managing hyper-connected environments: Flexibility is key
At a time when cloud solutions have proliferated and countless enterprises run both on-premises and cloud infrastructure, more nimble systems are required. By disaggregating resources such as storage, compute, and networking, data center operators extract more value and efficiency from each while improving their ability to respond quickly and cost-effectively to evolving technology and user demands in the IT environment.
The need to build nimbler systems has led to growing acceptance of composable disaggregated infrastructure (CDI). Disaggregating compute, networking, and storage resources enables data centers to pool those resources and then utilize them when and where they’re needed.
Much as cloud providers manage their own data centers, disaggregation gives IT managers the means to structure their on-premises architectures in ways that improve cost savings and resource availability. Nowhere is this more evident than with data storage.
By disaggregating storage, data center managers increase the efficient use of resources, reduce total cost of ownership (TCO), and design architectures that adapt to technological change.
Adding servers to scale storage isn’t for everyone
CDI contrasts sharply with the storage methods organizations have relied on for years. Not long ago, it was common for organizations to expand their storage capacity by buying new servers. After maxing out the typical three-year warranty on a server, IT managers often chose to simply replace the entire server, along with its solid-state drives (SSDs). But many of those drives came with five-year warranties and still had plenty of life left.
At the time, the thinking made sense. Why install old drives into a new server? But it was also a wasteful and costly approach.
Decoupling storage and compute and placing them on separate racks eliminates the waste associated with scaling through the purchase of new servers. A CDI strategy frees managers to treat servers and the different components within them, including storage, individually: IT managers can swap out servers based on CPU and memory demands alone.
Thanks to CDI, additional storage requirements don’t have to factor into the server upgrade decision.
Reclaim idle, underutilized resources
Disaggregating resources also helps solve other significant challenges. Today, for instance, too much IT equipment sits idle or underutilized.
As in large cloud data centers, decoupling resources in on-premises data centers ratchets up efficiency for multi-application demands. Disaggregation enables a data center to share any number of application servers across a common pool of storage.
Instead of investing in servers loaded with maximum storage, the nimbler approach is to disaggregate and extract storage from a pool and assign it to applications as needed. As projects ebb and flow, the demand for storage resources transfers from one part of the workflow to another.
In this scenario, few resources are wasted.
One of the keys to making all this work is software-defined storage (SDS) and its ability to act as a traffic cop within networks. Once storage resources are decoupled, a programmable SDS app quickly and automatically decides how to reassign and allocate those resources based on changing demand.
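To illustrate the idea, here is a minimal sketch, in Python, of the kind of pool-based bookkeeping an SDS control plane automates. The StoragePool class, application names, and capacities are purely hypothetical; they stand in for whatever allocation logic a real SDS product provides.

```python
# Minimal sketch of pool-based capacity allocation, the kind of bookkeeping an
# SDS control plane automates. All names and numbers are illustrative only.

class StoragePool:
    def __init__(self, capacity_tb: float):
        self.capacity_tb = capacity_tb
        self.allocations = {}  # application name -> TB currently assigned

    @property
    def free_tb(self) -> float:
        return self.capacity_tb - sum(self.allocations.values())

    def allocate(self, app: str, tb: float) -> None:
        """Carve capacity out of the shared pool for an application."""
        if tb > self.free_tb:
            raise RuntimeError(f"pool has only {self.free_tb:.1f} TB free")
        self.allocations[app] = self.allocations.get(app, 0.0) + tb

    def release(self, app: str) -> None:
        """Return an application's capacity to the pool when a project winds down."""
        self.allocations.pop(app, None)


pool = StoragePool(capacity_tb=500.0)
pool.allocate("analytics", 120.0)   # demand ramps up on one workload...
pool.allocate("backups", 200.0)
pool.release("analytics")           # ...then its capacity flows back for reuse
pool.allocate("ml-training", 250.0)
print(f"{pool.free_tb:.1f} TB still unassigned")
```

The point of the sketch is that capacity is assigned to workloads and returned to the pool as demand shifts, rather than being stranded inside individual servers.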
As networks become more important to data management, software’s role also becomes more critical.
Disaggregation helps prepare data centers for rapid change
Not only do disaggregation and software make data resource management more efficient, but the added flexibility gives data center managers a greater ability to adapt to the rapid changes brought on by new applications, datasets, and use cases. Disaggregation also creates opportunities to scale resources as the business changes over time.
Years ago, organizations did their best to predict the needs of their data centers and what their IT infrastructure would look like as much as five years into the future. Anticipating storage, CPU, GPU, and networking needs wasn’t easy — and still isn’t. Making investments this way came with a lot of risk.
Disaggregation can eliminate the need for IT departments to make big, long-term bets.
Disaggregating resources provides businesses with the flexibility they need to independently scale compute and GPU resources and build out capacity. It also supplies data center operators with more freedom to change the configuration of their architecture on the fly.
Sophisticated storage tools are available now
It should be noted that this isn't pie-in-the-sky talk or wishful thinking. Disaggregating GPUs, servers, storage, and other resources, and distributing them when and where they're needed, is happening now to great effect.
The major cloud service providers operate their data centers this way. With regard to storage, the software tools and equipment that exist right now can ensure a data center, regardless of size, efficiently manages its decoupled storage resources.
Recent innovations in storage technology have also made important contributions.
NVMe over Fabrics (NVMe-oF) is a state-of-the-art networked storage protocol that fully utilizes SSDs and delivers faster, more efficient connectivity between storage and servers. The technology connects hosts to storage across a network fabric using the NVMe protocol. NVMe-oF represents only one of many breakthrough storage technologies introduced in recent years; more are on the way.
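As a concrete sketch, the snippet below shows how a Linux host might discover and attach a remote NVMe/TCP namespace using the nvme-cli utility, assuming the kernel's NVMe-oF support is available. The target address, port, and subsystem NQN are placeholders, not values from any specific product or deployment.

```python
# Illustrative only: attaching a remote NVMe-oF (TCP) namespace from a Linux host
# by invoking the nvme-cli utility. The address and NQN below are placeholders;
# real values come from your fabric and storage target configuration.
import subprocess

TARGET_ADDR = "192.0.2.10"                         # example target IP (placeholder)
TARGET_PORT = "4420"                               # common NVMe/TCP service port
SUBSYSTEM_NQN = "nqn.2014-08.org.example:subsys1"  # placeholder NVMe Qualified Name

# Discover the subsystems the target exposes over the fabric.
subprocess.run(
    ["nvme", "discover", "-t", "tcp", "-a", TARGET_ADDR, "-s", TARGET_PORT],
    check=True,
)

# Connect to one subsystem; its namespaces then appear locally as /dev/nvmeXnY
# block devices, usable much like directly attached SSDs.
subprocess.run(
    ["nvme", "connect", "-t", "tcp", "-a", TARGET_ADDR, "-s", TARGET_PORT,
     "-n", SUBSYSTEM_NQN],
    check=True,
)
```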
As everyone knows, the public’s appetite for data is only increasing. The data generated worldwide every year continues to rise at mind-numbing levels. Managing all that information will only grow in complexity and continue to demand that data center managers pay close attention to rapidly evolving storage technology if they’re to remain competitive.