Strategies to avoid downtime when migrating data to the cloud

During a migration, the location of your data has a significant impact on your application's performance. If you don't migrate your data at the same time as the services that use it, you risk accessing that data across the distance between your on-premise and cloud data centers, which can cause latency issues.
Whether or not you have your own cloud architect, here are three strategies to avoid downtime when migrating data to the cloud:
- Offline copy migration
- Master/read replica switch migration
- Master/master migration
It doesn’t matter whether you’re migrating a SQL database or simply raw data. Each migration method requires a different level of effort, has a different impact on your application’s availability, and presents different risks. The strategies below look similar, but the differences are in the details.
Strategy 1: offline copy migration
An offline copy migration is the most straightforward method. Bring down your on-premise application, copy the data from your on-premise database to the new cloud database, then bring your application back online in the cloud.
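To make the sequence concrete, here is a minimal sketch of the copy step, assuming a PostgreSQL database and the standard pg_dump/pg_restore tools. The host names, database name, and user are placeholders, and credentials are assumed to come from a .pgpass file or environment variables rather than the script itself.

```python
# Minimal sketch of an offline copy migration, assuming PostgreSQL and the
# pg_dump/pg_restore command-line tools. Hosts, database, and user are
# placeholders; the application is assumed to be offline at this point.
import subprocess

ON_PREM = {"host": "onprem-db.internal", "db": "appdb", "user": "migrator"}
CLOUD = {"host": "cloud-db.example.com", "db": "appdb", "user": "migrator"}
DUMP_FILE = "appdb.dump"

def run(cmd):
    print("running:", " ".join(cmd))
    subprocess.run(cmd, check=True)  # stop immediately if any step fails

# 1. Dump the on-premise database in PostgreSQL's custom archive format.
run([
    "pg_dump",
    "--host", ON_PREM["host"],
    "--username", ON_PREM["user"],
    "--dbname", ON_PREM["db"],
    "--format", "custom",
    "--file", DUMP_FILE,
])

# 2. Restore the dump into the new cloud database.
run([
    "pg_restore",
    "--host", CLOUD["host"],
    "--username", CLOUD["user"],
    "--dbname", CLOUD["db"],
    "--no-owner",
    DUMP_FILE,
])

# 3. Bring the application back online, pointed at the cloud database.
```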
An offline copy migration is simple, easy, and safe, but you’ll have to take your application offline to execute it. If your dataset is extremely large, your application may be offline for a significant period of time, which will undoubtedly impact your customers and business.
For most applications, the amount of downtime required for an offline copy migration is generally unacceptable. But if your business can tolerate some downtime, and your dataset is small enough, you should consider this method. It’s the easiest, least expensive, and least risky method of migrating your data to the cloud.
Strategy 2: master/read replica switch migration
The goal of a master/read replica switch migration is to reduce application downtime without significantly complicating the data migration itself.
For this type of migration, you start with the master version of your database running in your on-premise data center. You then set up a read replica copy of your database in the cloud, with one-way synchronization of data from the on-premise master to the read replica. At this point, you still make all data updates and changes on the on-premise master, and the master synchronizes those changes to the cloud-based read replica. The master/replica model is common in most database systems.
You’ll continue to perform data writes to the on-premise master, even after your application has been migrated and is operational in the cloud. At some predetermined point in time, you “switch over” and swap the master and read replica roles: the cloud database becomes the master and the on-premise database becomes the read replica, and you simultaneously move all write access from your on-premise database to your cloud database.
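The switchover itself is a short, scripted sequence: stop writes, wait for the replica to catch up, promote it, and repoint the application. The sketch below only illustrates that order of operations; every helper in it is a hypothetical stand-in for whatever your database and deployment tooling actually provide.

```python
# Hypothetical switchover sequence for a master/read-replica migration.
# Each helper below is a placeholder for your own database and deployment
# tooling (maintenance mode, replication status, promotion, config update).
import time

def pause_writes():
    """Put the application into read-only or maintenance mode."""
    raise NotImplementedError("wire this to your application")

def replica_lag_seconds() -> float:
    """Return how far the cloud replica is behind the on-premise master."""
    raise NotImplementedError("wire this to your database's replication status")

def promote_replica():
    """Promote the cloud replica so it accepts writes (becomes the master)."""
    raise NotImplementedError("wire this to your cloud database")

def point_app_at(host: str):
    """Repoint application write traffic to the given database host."""
    raise NotImplementedError("wire this to your configuration system")

def switchover(cloud_host: str, max_lag: float = 0.0, timeout: float = 300.0):
    pause_writes()                             # 1. stop new writes on-premise
    deadline = time.time() + timeout
    while replica_lag_seconds() > max_lag:     # 2. wait for the replica to catch up
        if time.time() > deadline:
            raise TimeoutError("replica never caught up; abort and resume on-premise")
        time.sleep(1)
    promote_replica()                          # 3. the cloud replica becomes the master
    point_app_at(cloud_host)                   # 4. all writes now go to the cloud
```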
You’ll need a short period of downtime during the switchover, but the downtime is significantly less than what’s required using the offline copy method.
However, downtime is downtime, so you need to assess exactly what your business can handle.
Strategy 3: master/master migration
This is the most complicated of the three data migration strategies and has the greatest potential for risk. However, if you implement it correctly, you can accomplish a data migration without any application downtime whatsoever.
In this method, you create a duplicate of your on-premise master database in the cloud and set up bi-directional synchronization between the two masters, so that all data is synchronized from on-premise to the cloud and from the cloud back to on-premise. Essentially, you’re left with a typical multi-master database configuration.
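Multi-master synchronization is normally handled by the database engine or a replication tool rather than by application code, but the underlying idea is that each side applies the other's changes and conflicting writes are resolved by a rule such as last-write-wins. The tiny sketch below only illustrates that idea; the record format and the merge rule are simplifying assumptions, not a description of any particular product.

```python
# Simplified illustration of bi-directional sync with last-write-wins
# conflict resolution. Real multi-master replication is performed by the
# database or a replication tool; this only shows the concept.
def merge(on_prem: dict, cloud: dict) -> dict:
    """Merge two replicas of {key: (value, updated_at)} into one view."""
    merged = dict(on_prem)
    for key, (value, updated_at) in cloud.items():
        if key not in merged or updated_at > merged[key][1]:
            merged[key] = (value, updated_at)   # the newer write wins
    return merged

# Example: the same row was updated on both sides; the later write is kept.
on_prem = {"user:42": ("alice@old.example", 1700000000)}
cloud = {"user:42": ("alice@new.example", 1700000100)}
print(merge(on_prem, cloud))
```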
After you set up both databases, you can read and write data from either the on-premise database or the cloud database, and both will remain in sync. This will allow you to move your applications and services independently, on your own schedule, without needing to worry about your data.
To better control your migration, you can run instances of your application both on-premise and in the cloud, and move your application’s traffic to the cloud without any downtime. If a problem arises, you can roll back your migration and redirect traffic to the on-premise version of your database while you troubleshoot the issue.
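One way to exercise that control is to send a gradually increasing share of traffic to the cloud instances and roll back to zero if anything looks wrong. The sketch below shows the shape of such a weighted rollout; the backend names and the health check are hypothetical placeholders for whatever your load balancer, service mesh, or monitoring provides.

```python
# Hypothetical weighted rollout between on-premise and cloud application
# instances. The health check stands in for your own monitoring signals
# (error rate, latency, saturation) during the migration.
import random

cloud_weight = 0.0   # fraction of traffic routed to the cloud instances

def choose_backend() -> str:
    """Pick a backend for one request according to the current weight."""
    return "cloud" if random.random() < cloud_weight else "on-premise"

def shift_traffic(healthy, step: float = 0.1):
    """Gradually move traffic to the cloud; roll back fully on any failure."""
    global cloud_weight
    while cloud_weight < 1.0:
        cloud_weight = min(1.0, cloud_weight + step)
        if not healthy():            # e.g. an error-rate or latency alarm fired
            cloud_weight = 0.0       # roll back: all traffic stays on-premise
            raise RuntimeError("cloud rollout failed; traffic rolled back")
```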
At the completion of your migration, simply turn off your on-premise master and use your cloud master as your database.
It’s important to note, however, that this method is not without complexity. But if your application, data, and business can handle this migration method, use it: it’s the cleanest of the three strategies, even if it’s also the most involved to set up.
Limit migration risks
Any data migration comes with some risk, especially the risk of data corruption. Your data is most at risk while the migration is in progress, so don’t stop a migration until you have either completed the process or rolled it back completely. Half-migrated data isn’t useful to anyone.
The risk of data corruption is especially high when migrating extremely large datasets.
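One practical safeguard is to validate the data as the migration proceeds, for example by comparing per-table row counts (or, for stronger assurance, checksums) between the source and the target. The sketch below assumes PostgreSQL databases reachable through the psycopg2 driver; the table names and connection details are placeholders.

```python
# Minimal validation sketch: compare per-table row counts between the
# on-premise source and the cloud target. Assumes PostgreSQL and psycopg2;
# connection details and table names are placeholders.
import psycopg2

TABLES = ["users", "orders", "payments"]   # hypothetical table names

def row_counts(conn_params):
    counts = {}
    with psycopg2.connect(**conn_params) as conn, conn.cursor() as cur:
        for table in TABLES:
            cur.execute(f"SELECT count(*) FROM {table}")
            counts[table] = cur.fetchone()[0]
    return counts

source = row_counts({"host": "onprem-db.internal", "dbname": "appdb", "user": "migrator"})
target = row_counts({"host": "cloud-db.example.com", "dbname": "appdb", "user": "migrator"})

for table in TABLES:
    status = "OK" if source[table] == target[table] else "MISMATCH"
    print(f"{table}: source={source[table]} target={target[table]} {status}")
```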
This is true of any migration, including the three strategies described above: you won’t know whether you’ve hit a problem if you can’t see how your application is performing during the migration. Maintaining application availability and keeping your data secure are only possible if you understand how your application is responding to each step in the migration process.
Source: New Relic Partner (Vietnamese Version Available Soon)