
Migrating MS SQL Server from On-prem to Cloud SQL

Since DMS doesn't support MS SQL Server, how do you migrate a database with minimum downtime? Is importing a BAK or SQL file the only way for SQL Server when the source is on-prem?

And since BAK files are not accessible from other cloud providers (if the source is another cloud provider), is importing SQL files the only option?

How do organizations migrate large databases (say 1 TB plus) when the target is Cloud SQL for SQL Server?

1 ACCEPTED SOLUTION

When migrating large SQL Server databases to Google Cloud SQL, several factors need to be considered:

  • Downtime Tolerance: Some applications have well-defined change request windows that can be used for the migration, while others need to run 24/7 with high uptime. Knowing the acceptable downtime lets you weigh the complexity of the continuous migration options against the simplicity of the one-time approaches.
  • Database Size: Migrating large databases can pose additional challenges, such as prolonged increased resource utilization on the on-premises servers supporting the migration, or how to deliver database snapshots in Transactional Replication.
  • Daily Size of Updates: The size of daily updates, and the net change they produce, can both have a significant impact on the decision between one-time and continuous migration approaches.

For large SQL Server databases, you can choose between one-time migrations and continuous migrations. One-time migrations involve taking a copy of your source database, transferring it to your destination instance, and then switching over your application to point to the new instance. Continuous migrations involve copying data from your source instance to your destination instance on an ongoing basis - starting with an initial data load - and the applications may gradually switch over days, weeks, or months later.

For one-time migrations:

  • Import from a Backup: Importing from a backup is one of the simplest ways to migrate your database to Cloud SQL. This approach is suitable for any database size and becomes increasingly appealing as the instance size grows. For databases larger than 5 TB, striped backups should be used due to file size limitations.
  • BCP Tool: The BCP (Bulk Copy Program) tool can be used in a standalone process. You would generate and apply the database schema, extract the table data with BCP, and then import that data from a folder into Cloud SQL.
  • Snapshot Replication: This is a step up in complexity compared to a backup import. Snapshot replication introduces additional resource load on your on-prem server, and some types of workloads may not be supported and may block or reset the snapshot generation process. There are also limitations on the objects supported by this type of replication. Note that snapshot generation keeps a lock on the source tables until the process completes, which can extend to hours for larger databases.
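As a concrete illustration of the backup-import and BCP options above, the commands might look like the following. This is a sketch only: the bucket, instance, database, table, and login names are all placeholders, and you should verify the current gcloud flags (e.g. --striped) against the Cloud SQL documentation before relying on them.

```shell
# 1) Upload the .bak file (or striped backup set) to Cloud Storage.
gsutil cp /backups/mydb.bak gs://my-migration-bucket/mydb.bak

# 2) Import it into the Cloud SQL for SQL Server instance.
gcloud sql import bak my-cloudsql-instance \
    gs://my-migration-bucket/mydb.bak --database=mydb

# For very large databases, import a striped backup set by pointing
# at the folder of stripe files and passing --striped.
gcloud sql import bak my-cloudsql-instance \
    "gs://my-migration-bucket/striped/*" --database=mydb --striped

# Alternative: BCP. Export table data in native format from the source...
bcp dbo.Orders out /tmp/orders.dat -S onprem-sql -d mydb -U sa -P '***' -n

# ...then load it into Cloud SQL (the schema must already exist there).
bcp dbo.Orders in /tmp/orders.dat -S CLOUD_SQL_IP -d mydb -U sqlserver -P '***' -n
```

The Cloud SQL service account needs read access to the bucket for the import to succeed, and BCP imports are per-table, so you would typically script the export/import loop over all tables.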

For continuous migrations:

  • Push Transactional Replication: Cloud SQL supports Push Transactional Replication as both a publisher and a subscriber, which allows continuous replication to a Cloud SQL instance from an external source. This approach involves three steps: initial seeding (copying over the initial data snapshot using a backup, a Snapshot Agent job, BCP, etc.), sending incremental updates to the replica instance, and the final cut-over to Cloud SQL, which may include changes such as adding/enabling triggers, updating identity field ranges, synchronizing logins, and converting views back from tables.
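The publisher-side setup for push transactional replication can be sketched with the standard SQL Server replication stored procedures, run against the on-prem source via sqlcmd. All names here (mydb, MyPub, Orders, the subscriber address, logins) are placeholders, and a real setup also needs the distributor configured and the Snapshot Agent and Log Reader Agent jobs created, which are omitted for brevity.

```shell
# 1) Enable the source database for publication.
sqlcmd -S onprem-sql -U sa -P '***' -Q "EXEC sp_replicationdboption \
    @dbname = N'mydb', @optname = N'publish', @value = N'true';"

# 2) Create a continuous transactional publication and add the
#    tables (articles) to replicate.
sqlcmd -S onprem-sql -U sa -P '***' -d mydb -Q "EXEC sp_addpublication \
    @publication = N'MyPub', @repl_freq = N'continuous';"
sqlcmd -S onprem-sql -U sa -P '***' -d mydb -Q "EXEC sp_addarticle \
    @publication = N'MyPub', @article = N'Orders', @source_object = N'Orders';"

# 3) Add the Cloud SQL instance as a push subscriber.
sqlcmd -S onprem-sql -U sa -P '***' -d mydb -Q "EXEC sp_addsubscription \
    @publication = N'MyPub', @subscriber = N'CLOUD_SQL_IP', \
    @destination_db = N'mydb', @subscription_type = N'Push';"
```

With a push subscription, the on-prem Distribution Agent pushes changes to Cloud SQL, so the source must be able to reach the instance's IP over the SQL Server port.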

In summary, there are several ways to migrate large SQL Server databases to Google Cloud SQL, each with its own benefits and considerations. The chosen method depends on factors such as the size of the database, the frequency and size of updates, and the acceptable downtime. For large databases, the one-time migration approach using backups or the continuous migration approach using transactional replication are typically the most suitable options.

For more details, please refer to this great blog post: https://cloud.google.com/blog/products/databases/migrating-sql-server-databases-to-cloud-sql
