For a few decades now, we’ve been told that the data center is dead. Yet data centers are still everywhere, and they continue to play a crucial part in IT infrastructure, even as they face a range of physical security threats.
That’s not to say they haven’t changed over that period, however. In fact, an engineer who had accidentally been transported into a contemporary data center from the 1960s (or even the 1980s) would hardly recognize the way that we do things now. Gone are the huge, monolithic mainframes, the reliance on hard disk drives, and even the debates about SAN vs. NAS that seemed destined to run forever.
Here are five key trends that are transforming the way we store data — not just in data centers, but across the economy.
Data Fabrics
For much of the last ten years, data center managers and engineers have been engaged in a lively debate over whether customers prefer to store data on-premises or in the cloud. That debate is already out of date. Almost every company now needs to store huge amounts of data, but very few care how that’s achieved at a “technical” level.
For data centers, this means that the lines between on-premises, public cloud, private cloud, and edge storage are beginning to blur, and will continue to do so until these forms of storage are all but indistinguishable. This type of architecture already has a name, coined a few years ago by NetApp: the Data Fabric. Now, as more vendors turn to software-defined storage, they are decoupling data from their own hardware.
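To make the idea concrete, here is a minimal sketch, in Python, of what location-transparent access across a fabric can look like. The class names and the in-memory backend are hypothetical stand-ins; a real data fabric layers replication, policy, and metadata management on top of this.

```python
# Illustrative only: callers ask for data by name and never learn whether it lives
# on-premises, in a public cloud bucket, or on an edge node.
class InMemoryBackend:
    """Stand-in for a real on-prem array, cloud bucket, or edge cache."""
    def __init__(self):
        self._objects = {}

    def write(self, key, data):
        self._objects[key] = data

    def read(self, key):
        return self._objects[key]


class DataFabric:
    def __init__(self):
        self.backends = {}  # location name -> backend exposing read()/write()
        self.catalog = {}   # key -> location name, maintained by the fabric

    def register(self, location, backend):
        self.backends[location] = backend

    def write(self, key, data, location):
        self.backends[location].write(key, data)
        self.catalog[key] = location

    def read(self, key):
        # Location transparency: the caller never specifies where the data lives.
        return self.backends[self.catalog[key]].read(key)


fabric = DataFabric()
fabric.register("on_prem", InMemoryBackend())
fabric.register("cloud", InMemoryBackend())
fabric.write("invoices/2021.parquet", b"...", location="cloud")
print(fabric.read("invoices/2021.parquet"))  # the caller never mentions "cloud"
```

The point is not the code itself but the shape of the interface: storage location becomes an internal detail rather than something applications have to know about.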
Also read: What is a Data Fabric?
Software-Defined Storage
Data Fabrics, as they are currently understood, rely on a suite of software tools. These allow data centers to abstract away how and where data is physically stored, and therefore to manage it in a hardware-agnostic way. This approach has gone by many names over the years; today it is often described as composable infrastructure.
This kind of software abstraction has some major advantages — primarily, it enhances the flexibility and agility of the average data center. However, we should also remember that the “average” data center was not built with software-defined storage in mind. Some of the biggest data centers in the UK have been repurposed several times, for example. And unfortunately, software-defined storage can suffer from performance issues when deployed on legacy hardware.
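As a rough illustration of what “hardware agnostic” means in practice, the sketch below (in Python, with invented pool names and attributes) provisions a volume from whichever pool satisfies a declared policy; real software-defined storage layers expose this kind of behavior through their own APIs.

```python
from dataclasses import dataclass


@dataclass
class StoragePool:
    """One slice of physical capacity; the caller never addresses it directly."""
    name: str
    media: str     # e.g. "nvme" or "hdd"
    free_gb: int


@dataclass
class VolumeRequest:
    """What the application declares: desired characteristics, not hardware."""
    size_gb: int
    performance: str  # "high" requires flash media; anything else accepts any media


def provision(request: VolumeRequest, pools: list) -> str:
    """Pick any pool that satisfies the policy; the choice of hardware stays hidden."""
    for pool in pools:
        media_ok = pool.media == "nvme" if request.performance == "high" else True
        if media_ok and pool.free_gb >= request.size_gb:
            pool.free_gb -= request.size_gb
            return f"volume of {request.size_gb} GiB placed on pool '{pool.name}'"
    raise RuntimeError("no pool satisfies the requested policy")


pools = [StoragePool("rack1-hdd", "hdd", 50_000), StoragePool("rack2-nvme", "nvme", 8_000)]
print(provision(VolumeRequest(size_gb=500, performance="high"), pools))
```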
Also read: Software Defined Storage: A Guide to Understanding & Utilizing SDS
Dynamic Contracts
Alongside technologies like software-defined storage and data fabrics have come a number of other shifts in the data center industry. Though they are not “technical” changes to the way data is stored, they will have a major impact on how data centers are built and run.
One of these changes is the idea of dynamic contracts. Businesses today are in a luxurious position in comparison to those of even a few years ago. For most, the idea of a hard limit on the amount of data they can store is unthinkable, because most data storage vendors now offer essentially unlimited storage against a sliding scale of payment levels.
For data center managers, this can create challenges. It’s becoming increasingly apparent that no single vendor can meet all the needs of contemporary businesses, which range from long-term archival storage to the high-performance servers that power mobile apps. That’s why more and more data centers are using dynamic contracts that automatically reflect the level and scale of the service being provided, and implementing storage automation to manage that data at a technical level.
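As a simple illustration of how a sliding-scale, usage-based contract can be billed, the sketch below applies hypothetical tiered rates to the amount of data actually stored; the tier boundaries and prices are invented for the example.

```python
# Hypothetical sliding-scale rates (price per GB-month); real contracts define their own tiers.
PRICING_TIERS = [
    (10_000, 0.023),        # first 10 TB
    (40_000, 0.021),        # next 40 TB
    (float("inf"), 0.019),  # everything beyond 50 TB
]


def monthly_storage_cost(gb_stored: float) -> float:
    """Compute a usage-based bill: each tier's rate applies only to the GB that fall in it."""
    cost, remaining = 0.0, gb_stored
    for tier_size, rate in PRICING_TIERS:
        billable = min(remaining, tier_size)
        cost += billable * rate
        remaining -= billable
        if remaining <= 0:
            break
    return round(cost, 2)


print(monthly_storage_cost(55_000))  # 55 TB billed across all three tiers -> 1165.0
```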
New Bottlenecks
Alongside the emergence of new technologies, and in some cases because of them, new bottlenecks are appearing for data centers. For many, the primary limitation on the service they can offer clients is no longer the raw speed at which data can be delivered, but the complexity of managing systems in which different types of data must be made accessible to entirely different specifications.
Because of this, the standard way in which data center services are contracted is changing. Most are still governed by service level agreements (SLAs), but the standard SLA today is much more complicated than that of even a few years ago. It might specify, for instance, different uptime and data speed requirements for different types of data.
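To show what such a multi-tier SLA might look like when captured in code, here is a small, hypothetical Python example; the data classes, percentages, and retrieval windows are illustrative, not drawn from any real agreement.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class SlaTier:
    """Illustrative terms for one class of data; real SLAs carry many more clauses."""
    data_class: str
    uptime_pct: float                   # guaranteed availability
    max_read_latency_ms: Optional[int]  # None where access is governed by the retrieval window
    retrieval_window: str               # how quickly the data must be restorable


# Hypothetical tiers: hot application data and long-term archives get very different terms.
SLA_TIERS = [
    SlaTier("hot-transactional", uptime_pct=99.99, max_read_latency_ms=10,   retrieval_window="immediate"),
    SlaTier("warm-analytics",    uptime_pct=99.9,  max_read_latency_ms=100,  retrieval_window="immediate"),
    SlaTier("cold-archive",      uptime_pct=99.0,  max_read_latency_ms=None, retrieval_window="within 12 hours"),
]
```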
Also read: Cloud Storage SLAs are More Important Than Ever
Container-Native Storage
Finally, we come to perhaps the biggest change of all in the way that data centers are managed — the move to container-native architectures.
The movement toward container-native infrastructure has been apparent for some time, but 2020 was the year when the industry started making sizable investments in these technologies. In September of last year, Kubernetes data services platform Portworx was acquired by Pure Storage. Additionally, data recovery company Kasten was acquired by data management company Veeam.
These changes mean that organizations running data centers will need to ensure that they have expertise in container-native storage. This will be important not just to compete with other data storage providers, but also to meet the demands of customers.
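For readers unfamiliar with the mechanics, containerized workloads on Kubernetes typically request storage through a PersistentVolumeClaim. The sketch below expresses one as a Python dict, for example to hand to a client library or serialize to YAML; the claim name, storage class, and size are placeholders.

```python
# A Kubernetes PersistentVolumeClaim expressed as a Python dict. The storage class name
# is a placeholder; container-native storage platforms supply their own classes.
persistent_volume_claim = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "app-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],        # volume mounted read-write by a single node
        "storageClassName": "fast-replicated",   # placeholder class provided by the storage layer
        "resources": {"requests": {"storage": "20Gi"}},
    },
}
```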
For data center managers and engineers, these trends will contribute to an ongoing revolution in the industry. They also mean that delivering great service to clients now entails a lot more than just looking for the best storage and disk arrays, though that will always be important. Instead, data center managers should be aware that the way in which data storage is defined and implemented now — through software, and not hardware — means that their industry is changing fast.
Read next: Managing Unstructured Data Across Hybrid Architectures