I use Cloud SQL for PostgreSQL Enterprise Plus as my production database.
I would like to copy the production database to the dev/test environment every day, and the dev/test database should run on Cloud SQL Enterprise edition.
What's the fastest way to copy a database from Enterprise Plus to Enterprise edition?
As far as I know, cloning is only possible between instances of the same edition, e.g., Plus to Plus.
Export and import are possible, but they're very slow.
The fastest way to copy a database from Cloud SQL for PostgreSQL Enterprise Plus to Cloud SQL for PostgreSQL Enterprise is to use the Database Migration Service (DMS). DMS allows you to migrate data between different database engines, versions, and editions, including Cloud SQL instances.
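If it helps, here is a minimal sketch of checking on DMS migration jobs programmatically. It assumes the google-cloud-dms Python client library and placeholder project/region values, so verify the names against the DMS documentation before relying on it:

```python
# Sketch: list DMS migration jobs and their states.
# Assumes the google-cloud-dms package (pip install google-cloud-dms)
# and that Application Default Credentials are configured.
from google.cloud import clouddms_v1


def list_migration_jobs(project_id: str, region: str) -> None:
    client = clouddms_v1.DataMigrationServiceClient()
    parent = f"projects/{project_id}/locations/{region}"
    # Each migration job reports a state such as RUNNING or COMPLETED.
    for job in client.list_migration_jobs(request={"parent": parent}):
        print(job.name, job.state)


if __name__ == "__main__":
    list_migration_jobs("my-project", "europe-west1")  # placeholder values
```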
Can DMS negatively affect the operation of the production database?
Yes, it is possible that DMS could impact the performance of the production database. DMS uses logical replication to migrate data, which involves connecting to the source database and decoding changes from its write-ahead log (WAL). This introduces some overhead on the source database.
The extent of this overhead can vary based on several factors, including the size of the database, the volume of data being modified, and the network latency between the source database and the DMS migration job.
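One way to keep an eye on that overhead is to watch the replication slot that DMS creates on the source and how much WAL it retains. A rough sketch, assuming psycopg2 and a placeholder connection string (the slot name will differ in your setup):

```python
# Sketch: check how much WAL the source retains for its replication slots.
# Assumes psycopg2 (pip install psycopg2-binary) and a placeholder DSN.
import psycopg2

DSN = "host=PRODUCTION_HOST dbname=postgres user=postgres password=***"  # placeholder

QUERY = """
    SELECT slot_name,
           active,
           pg_size_pretty(
               pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)
           ) AS retained_wal
    FROM pg_replication_slots;
"""

with psycopg2.connect(DSN) as conn:
    with conn.cursor() as cur:
        cur.execute(QUERY)
        for slot_name, active, retained_wal in cur.fetchall():
            # Large retained_wal values mean the slot is falling behind and the
            # source is holding on to more WAL (extra disk and I/O overhead).
            print(slot_name, active, retained_wal)
```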
Generally, the impact of DMS on the source database is minimal. However, if there are concerns about its impact, consider the following steps to mitigate potential issues:
Use an Isolated Environment: Consider using a dedicated environment or network for the migration process within Google Cloud. This can help isolate the impact of DMS from the main production workloads.
Schedule During Off-Peak Hours: Plan DMS migrations for times when the database is less busy. This can help reduce potential contention during peak usage periods (see the sketch after this list).
Adjust Batch Sizes: If Google Cloud's DMS allows for it, consider using smaller batch sizes for migrations. This can reduce the volume of data that DMS reads from the source database at any given time, thereby minimizing potential overhead.
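For the off-peak point, a simple pre-flight check before kicking off a migration window can be enough. A sketch, assuming psycopg2, a placeholder DSN, and an arbitrary session threshold:

```python
# Sketch: only start a migration window when the source looks quiet.
# Assumes psycopg2 and placeholder connection details / threshold.
import psycopg2

DSN = "host=PRODUCTION_HOST dbname=postgres user=postgres password=***"  # placeholder
MAX_ACTIVE_SESSIONS = 20  # example threshold, tune for your workload


def source_is_quiet() -> bool:
    with psycopg2.connect(DSN) as conn:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT count(*) FROM pg_stat_activity "
                "WHERE state = 'active' AND pid <> pg_backend_pid();"
            )
            (active_sessions,) = cur.fetchone()
    return active_sessions < MAX_ACTIVE_SESSIONS


if __name__ == "__main__":
    print("OK to start migration" if source_is_quiet() else "Source is busy, wait")
```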
DMS requires wal_level=logical.
My production Cloud SQL PostgreSQL database runs in a primary/replica (master-slave) configuration, so it's not possible to change wal_level from replica to logical.
You are correct that DMS requires wal_level=logical. This is because DMS uses logical replication to migrate data, and logical replication requires the source database to be configured with wal_level=logical.
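You can confirm the current setting directly on the source; on Cloud SQL for PostgreSQL, wal_level becomes logical when the cloudsql.logical_decoding flag is enabled. A minimal check, assuming psycopg2 and a placeholder DSN:

```python
# Sketch: verify whether the source instance already runs with wal_level=logical.
# Assumes psycopg2 and a placeholder DSN.
import psycopg2

DSN = "host=PRODUCTION_HOST dbname=postgres user=postgres password=***"  # placeholder

with psycopg2.connect(DSN) as conn:
    with conn.cursor() as cur:
        cur.execute("SHOW wal_level;")
        (wal_level,) = cur.fetchone()
        print(f"wal_level = {wal_level}")  # DMS needs 'logical' here
```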
If changing the wal_level of your production database isn't feasible, you can explore alternative methods to copy your data to the Cloud SQL for PostgreSQL Enterprise instance. One such method is using the pg_dump and pg_restore tools to create a logical backup of your database, which can then be restored into the Enterprise instance.
While the pg_dump and pg_restore tools don't offer the same real-time replication capabilities as DMS and can be slower for large databases, they provide a reliable way to migrate data between databases with different wal_level settings.
For more detailed guidance on using these tools, you can refer to the official documentation.
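As a starting point, a daily copy with these tools can be scripted along these lines. This is only a sketch with placeholder hosts and database names; it assumes the pg_dump/pg_restore binaries are on PATH, network access to both instances, and passwords supplied via ~/.pgpass or PGPASSWORD:

```python
# Sketch: dump the production database and restore it into the dev/test instance.
# Placeholder hosts, users, and database names.
import subprocess

SOURCE = {"host": "PROD_HOST", "user": "postgres", "db": "appdb"}  # placeholders
TARGET = {"host": "DEV_HOST", "user": "postgres", "db": "appdb"}   # placeholders
DUMP_FILE = "/tmp/appdb.dump"

# Custom-format dump so pg_restore can parallelize the restore (-j).
subprocess.run(
    ["pg_dump", "-h", SOURCE["host"], "-U", SOURCE["user"],
     "-d", SOURCE["db"], "-Fc", "-f", DUMP_FILE],
    check=True,
)

subprocess.run(
    ["pg_restore", "-h", TARGET["host"], "-U", TARGET["user"],
     "-d", TARGET["db"], "--clean", "--if-exists", "--no-owner",
     "-j", "4", DUMP_FILE],
    check=True,
)
```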
Thank you, but pg_dump/pg_restore and export/import are very slow. In my opinion it could be faster if it were possible to make a backup to a local disk, but I guess we can't add a local disk to a Cloud SQL instance.
Yes, it’s true. Exporting and importing a database can be a slow process. This is because the entire database must be copied from the source instance to the destination instance. The duration of this process largely depends on the size of the database and the network bandwidth between the instances.
Cloud SQL instances are fully managed database services, meaning Google Cloud is responsible for managing the underlying infrastructure, including storage. This management ensures high availability, automated backups, and other benefits, but it also means users don't have the ability to attach local disks directly.
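Instead of a local disk, the export target is a Cloud Storage bucket. A rough sketch of triggering such an export through the Cloud SQL Admin API, assuming the google-api-python-client library and placeholder project, instance, database, and bucket names:

```python
# Sketch: trigger a Cloud SQL export of the production database to Cloud Storage.
# Assumes google-api-python-client (pip install google-api-python-client) and
# that the caller has the Cloud SQL Admin role; all names are placeholders.
from googleapiclient import discovery

PROJECT = "my-project"                               # placeholder
INSTANCE = "prod-instance"                           # placeholder
DATABASE = "appdb"                                   # placeholder
BUCKET_URI = "gs://my-bucket/exports/appdb.sql.gz"   # placeholder

service = discovery.build("sqladmin", "v1")

body = {
    "exportContext": {
        "fileType": "SQL",
        "uri": BUCKET_URI,
        "databases": [DATABASE],
    }
}

# Returns a long-running operation; poll operations().get() to track completion.
operation = service.instances().export(
    project=PROJECT, instance=INSTANCE, body=body
).execute()
print(operation["name"], operation.get("status"))
```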
Please note: in-place upgrades from Enterprise to Enterprise Plus edition are planned for future development.