What is Cloud Repatriation, and When Does it Make Sense?


The simple definition of cloud repatriation is the process of moving workloads—any application, service, or capability that consumes resources or memory—from the cloud back to in-house or on-premises systems. For some organizations, this means moving a single workload or a portion of their overall workload out of the public cloud. Other organizations are taking the drastic measure of moving all of them back on-premises or to a private cloud due to concerns about cost, security, or performance.

Here are five points to consider when determining whether to repatriate enterprise workloads or keep them in the public cloud.   

Cloud is Not Only About Cost

Not long ago, many companies moved applications and data to the cloud on the promise of greatly reduced costs. Fast forward a few years and this promise has not always materialized. In fact, cloud costs are more expensive than on-premises costs in some use cases due to the expense of hosting huge amounts of storage in the cloud, as well as ingress, egress, and other fees.

The cloud is less expensive for certain workloads and applications, and occasionally it is much cheaper, but some have discovered that it can be more expensive in the long run. Some businesses base their decisions about the cloud purely on cost, but a broader view may be more useful.

Even if costs are higher, there are other benefits—having someone else do the heavy lifting and infrastructure plumbing frees up internal resources to work on more strategic IT functions and projects. Retaining applications on-premises can sometimes mean clinging to aging systems that are difficult to integrate with cloud-based applications. Moving those applications to a cloud or as-a-Service model often aligns with ongoing digital transformation initiatives. Strategic considerations affect IT decisions and may be equally important or more so than cost alone.

There are multiple facets to the cost argument. When you take into account the number of full-time employees required for on-premises infrastructure, equipment refreshes every three years, application licensing and maintenance contracts, and additional infrastructure costs, cloud costs may seem more reasonable.
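The comparison above can be sketched as a back-of-the-envelope calculation. All figures below—staffing, refresh costs, run rates, and fees—are illustrative assumptions, not vendor quotes; the point is the shape of the arithmetic, not the totals.

```python
# Hypothetical multi-year cost comparison for a single workload.
# Every dollar figure here is an assumption for illustration only.

YEARS = 5

def on_prem_cost(years: int) -> int:
    """Rough on-premises total: staff, refresh cycles, licensing, facilities."""
    fte_cost = 2 * 120_000       # two full-time admins per year (assumed)
    hardware_refresh = 300_000   # cost of each three-year refresh (assumed)
    licensing = 50_000           # annual licensing and maintenance (assumed)
    facilities = 30_000          # power, cooling, rack space per year (assumed)
    refreshes = (years + 2) // 3 # number of refresh cycles in the window
    return years * (fte_cost + licensing + facilities) + refreshes * hardware_refresh

def cloud_cost(years: int) -> int:
    """Rough cloud total: monthly run rate plus egress and other fees."""
    monthly_run_rate = 25_000    # compute + storage (assumed)
    egress_fees = 3_000          # monthly data-transfer fees (assumed)
    return years * 12 * (monthly_run_rate + egress_fees)

print(f"On-prem over {YEARS} years: ${on_prem_cost(YEARS):,}")
print(f"Cloud over {YEARS} years:   ${cloud_cost(YEARS):,}")
```

With these particular assumptions the totals land within roughly 30% of each other, which is exactly why a single month's cloud bill is a poor basis for a repatriation decision.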

The moral of the story is to consider all costs while also looking beyond costs to determine whether a workload should stay in the cloud or be brought back on-premises. Deciding to repatriate due to one abnormally high monthly cloud bill could be a short-term gain for a long-term loss.

Learn more about Cloud Storage Pricing.

A Measured Approach: Right Place, Right Cloud

Until recently, some businesses were increasingly adopting a cloud-only or cloud-first philosophy. That may be fine for some companies—especially startups—but disastrous for others due to financial, security, compliance, and performance reasons, among others. In lieu of rigid all-cloud or all on-premises decisions, consider adopting a “right-place, right-cloud” approach.

In practice, this means putting workloads that belong in the public cloud in the public cloud, and opting for in-house or private cloud as the home for workloads better-suited to that location.

Every workload is different, with varied requirements. Some will do fine in the cloud, while others may suffer. Determine the needs of each application and its associated user base and make individual determinations rather than trying to enforce blanket cloud-only strategies.

Learn more about cloud storage and how it works.

FinOps: A Unified View of Cloud

Traditional financial models for IT that relied on quarterly and annual budgets can’t keep pace with the velocity of modern cloud architectures. Financial Operations, or FinOps—a management practice in which IT and development operations (DevOps) teams share responsibility for cloud computing infrastructure and costs—might be a better way to gain a unified view into cloud costs, boost budget allocation efficiency, and suggest cost optimization recommendations.

FinOps streamlines procurement of cloud services to prevent different lines of business and different geographies within a single organization from ordering cloud services independently of one another. This kind of procurement discipline typically negotiates better rates from cloud providers. It also brings control and monitoring of cloud spend, which makes it far more difficult to run up large bills unsupervised.

Centralizing cloud management and billing also opens the door to a stronger negotiating position and larger discounts. And by introducing FinOps for financial accountability and governance, it becomes easier to view the trade-offs between speed, cost, and quality of services in the cloud compared to on-premises.
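A minimal FinOps-style sketch of the monitoring idea: aggregate monthly spend by team tag from a billing export and flag teams whose spend jumped more than a set threshold month over month. The billing records, team names, and 25% threshold are all illustrative assumptions; real inputs would come from a cloud provider's cost and usage export.

```python
# Sketch: aggregate cloud spend by team tag and flag month-over-month spikes.
# Records and threshold are illustrative assumptions, not real billing data.
from collections import defaultdict

billing_records = [
    # (month, team_tag, cost_usd) — hypothetical billing-export rows
    ("2024-05", "analytics", 18_000),
    ("2024-05", "web",        9_500),
    ("2024-06", "analytics", 24_500),
    ("2024-06", "web",        9_800),
]

def spend_by_team(records):
    """Sum cost per team per month."""
    totals = defaultdict(lambda: defaultdict(float))
    for month, team, cost in records:
        totals[team][month] += cost
    return totals

def flag_spikes(totals, prev_month, curr_month, threshold=0.25):
    """Return teams whose spend grew more than `threshold` between months."""
    flagged = []
    for team, months in totals.items():
        prev = months.get(prev_month, 0)
        curr = months.get(curr_month, 0)
        if prev and (curr - prev) / prev > threshold:
            flagged.append(team)
    return flagged

totals = spend_by_team(billing_records)
print(flag_spikes(totals, "2024-05", "2024-06"))  # analytics jumped ~36%
```

The same aggregation, run centrally across all lines of business, is what gives a FinOps team the unified view—and the leverage—described above.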

Learn more about Top Private Cloud Providers.

Necessities of Compliance and Risk

Cloud data breaches are not uncommon. While there are arguments that the cloud is more secure, and counterarguments that it brings more risk and higher exposure, the truth is that there is validity to both sides. Once again, it takes a workload-by-workload approach to see how much risk there is in the cloud vs. on-premises and to make an appropriate security decision.

However, some security, compliance, and data sovereignty factors necessitate that certain apps and workloads exist only on-premises, and this necessity has been a driver for some enterprise cloud repatriation. Apps sent to the cloud can run afoul of one regulation or another, or be red-flagged in a security or compliance assessment.

As more data sovereignty laws requiring that data not leave specific geographies are enacted in Europe, California, New Zealand, and elsewhere, cloud providers are rolling out new services to meet them. But for some companies, the need for complete control of data can mean it only belongs in-house.

Learn more about enterprise data storage compliance.

Specific Performance Requirements

Latency factors come into play with all data sent to the cloud. Some applications won’t be affected, but others will. The need for very low latency has caused some enterprises to bring data and applications back in-house.

Financial service firms, for example, use transactional systems that crunch millions of numbers every second. Latency can cost them millions of dollars in the blink of an eye due to sudden market upturns and downturns. Further, they typically serve demanding customers unwilling to wait even a few moments for a refresh, and if they’re unhappy with the service they’ll take their funds to higher-performing competitors.
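One way to make the latency question concrete is to check measured request latencies against a service-level objective before deciding where a workload belongs. The sample latencies and the 10 ms p95 target below are illustrative assumptions; a real assessment would use traces from the production workload.

```python
# Sketch: test recorded request latencies against a latency SLO.
# Sample values and the 10 ms target are illustrative assumptions.
def percentile(samples, pct):
    """Nearest-rank percentile of a list of latency samples (ms)."""
    ordered = sorted(samples)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

cloud_latencies_ms = [4.1, 5.0, 6.2, 7.9, 9.5, 12.3, 14.8, 6.7, 5.5, 8.8]
SLO_P95_MS = 10.0  # assumed requirement for a latency-sensitive workload

p95 = percentile(cloud_latencies_ms, 95)
print(f"p95 = {p95} ms; within SLO: {p95 <= SLO_P95_MS}")
```

If the tail of the distribution repeatedly blows through the SLO, that is the kind of evidence that justifies moving the workload closer to its users or its data.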

If low latency is vital to operations and your cloud workloads aren’t performing at the levels you need, repatriating them to a private cloud or on-premises systems is worth serious consideration.

Learn more about cloud storage vs. local storage.

Bottom Line: Cloud Repatriation

The cloud repatriation trend is real, and across industries, many workloads are being brought back in-house. But far more workloads are still heading to the cloud—there is no stopping the cloud juggernaut, as proven by the sheer number of new data centers being built and the rapid expansion of hyperscalers like Azure, AWS, and Google. It’s a good bet that most enterprise workloads belong in the cloud, but not all. Cloud repatriation is worth considering for enterprises seeking to find the right balance in terms of cost, performance, and security, but there are many factors to take into account before making a decision.

Read next: Cloud Storage Security Best Practices

Drew Robb
Drew Robb is a contributing writer for Datamation, Enterprise Storage Forum, eSecurity Planet, Channel Insider, and eWeek. He has been reporting on all areas of IT for more than 25 years. He has a degree from the University of Strathclyde in the UK, and lives in the Tampa Bay area of Florida.
