Database Migration Service uses migration jobs to migrate data from your source database instance to the destination database instance.
Creating a migration job for an existing destination instance includes:
There are certain limitations that you should consider when you want to migrate to a destination instance created outside of Database Migration Service. For more information, see Known limitations.
The Database Migration Service wizard helps you create a migration job. This wizard consists of the following panes: Get started, Define a source, Create a destination, Define connectivity method, Configure migration databases, and Test and create migration job. The following sections of this page describe how to populate each pane.
Go to the Migration jobs page in the Google Cloud console.
Click Create migration job at the top of the page.
Provide a name for the migration job. Choose a friendly name that helps you identify the migration job. Don't include sensitive or personally identifiable information in the job name.
Keep the auto-generated Migration job ID.
Select the source database engine.
Select AlloyDB for PostgreSQL as the destination engine.
Select the destination region for your migration. This region must be the same as the one where your destination database is located. After you choose the destination region, this selection can't be changed.
Choose Continuous (snapshot + ongoing changes) as the migration job type.
In the Before you continue, review the prerequisites section, click Open to view automatically generated instructions that can help you prepare your source database for the migration. It's best to complete these prerequisites at this step, but you can complete them at any time before you test or start the migration job. For more information, see Configure your source.
Click Save and continue.
If you have created a connection profile, then select it from the list of existing connection profiles.
If you haven't created a connection profile, then create one by clicking Create a connection profile at the bottom of the drop-down list, and then perform the same steps as in Create a source connection profile.
We recommend that you create a dedicated connection profile for your AlloyDB migration.
The speed of data dump parallelism is related to the amount of load on your source database. You can use the following settings:
If you want to use adjusted data dump parallelism settings, make sure to increase the max_replication_slots, max_wal_senders, and max_worker_processes parameters on your source database. You can verify your configuration by running the migration job test at the end of migration job creation.
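For example, you might check and raise these parameters with psql before you create the migration job. The following is a minimal sketch, assuming you can connect to the source as a superuser; SOURCE_HOST and the values shown are illustrative, and a restart of the source database is required for the new values to take effect:

# Check the current values on the source database.
psql "host=SOURCE_HOST user=postgres dbname=postgres" \
  -c "SHOW max_replication_slots;" \
  -c "SHOW max_wal_senders;" \
  -c "SHOW max_worker_processes;"

# Raise the limits (illustrative values), then restart the source database.
psql "host=SOURCE_HOST user=postgres dbname=postgres" <<'EOF'
ALTER SYSTEM SET max_replication_slots = 10;
ALTER SYSTEM SET max_wal_senders = 10;
ALTER SYSTEM SET max_worker_processes = 10;
EOF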
From the Connectivity method drop-down menu, select a network connectivity method. This method defines how the destination AlloyDB cluster connects to the source database. Current network connectivity methods include IP allowlist, VPC peering, reverse SSH tunnel, and TCP proxy through a cloud-hosted VM.
If you select the IP allowlist network connectivity method, add the IP addresses that Database Migration Service displays to your source database's pg_hba.conf file so that the source can accept connections from these addresses (an example entry appears after this connectivity section). If you select the reverse SSH tunnel network connectivity method, then select the Compute Engine VM instance that will host the tunnel.
After you specify the instance, the Google Cloud console provides a script that performs the steps to set up the tunnel between the source and destination databases. Run the script by using the Google Cloud CLI.
Run the commands from a machine that has connectivity to both the source database and to Google Cloud.
If you select the TCP Proxy via cloud-hosted VM connectivity method, then enter the required details for the new Compute Engine VM instance that will host the TCP proxy.
After specifying the details, the Google Cloud console will provide a script that performs the steps to set up the proxy between the source and destination databases. You'll need to run the script on a machine with an updated Google Cloud CLI.
After you run the script, it outputs the private IP address of the newly created VM. Enter the IP address and click Configure & continue.
After selecting the network connectivity method and providing any additional information for the method, click CONFIGURE & CONTINUE.
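As referenced above for the IP allowlist method, an allowlist entry in the source's pg_hba.conf file typically has the following form. This is only a sketch: 203.0.113.5 is a documentation placeholder and md5 is an example authentication method, so substitute the addresses that Database Migration Service displays and your own settings, then reload the source configuration (for example, with SELECT pg_reload_conf();):

# Allow connections from a Database Migration Service address (placeholder shown).
host    all    all    203.0.113.5/32    md5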
You can select the databases that you want to migrate.
If you want to migrate specific databases, you can filter the list that appears and select the databases that you want Database Migration Service to migrate into your destination.
If the list doesn't appear and a database discovery error is displayed, click Reload. If database discovery fails, the job migrates all databases. You can continue creating the migration job and fix connectivity errors later.
In this final step, review the summary of the migration job settings, source, destination, and connectivity method, and then test the validity of the migration job setup. If any issues are encountered, then you can modify the migration job's settings. Not all settings are editable.
Click TEST JOB to verify that:
If the test fails, then you can address the problem in the appropriate part of the flow, and return to re-test.
The migration job can be created even if the test fails, but after the job is started, it may fail at some point during the run.
Click CREATE & START JOB to create the migration job and start it immediately, or click CREATE JOB to create the migration job without immediately starting it.
If the job isn't started at the time that it's created, then it can be started from the Migration jobs page by clicking START.
Regardless of when the migration job starts, your organization is charged for the existence of the destination instance.
When you start the migration job, Database Migration Service begins the full dump, briefly locking the source database. If your source is in Amazon RDS or Amazon Aurora, Database Migration Service additionally requires a short write downtime (typically less than a minute) at the start of the migration. For more information, see Data dump parallelism considerations.
The migration job is added to the migration jobs list and can be viewed directly.
Proceed to Review the migration job.
When you migrate to an existing instance by using Google Cloud CLI, you must manually create the connection profile for the destination instance. This isn't required when you use the Google Cloud console, as Database Migration Service takes care of creating and removing the destination connection profile for you.
Before you use the gcloud CLI to create a migration job to an existing destination database instance, make sure you:
Create the destination connection profile for your existing destination instance by running the gcloud database-migration connection-profiles create command:
This sample uses the optional --no-async flag so that all operations are performed synchronously. This means that some commands might take a while to complete. You can skip the --no-async flag to run commands asynchronously. If you do, you need to use the gcloud database-migration operations describe command to verify whether your operation succeeded (see the example after the sample output below).
Before using any of the command data below, make the following replacements:
Execute the following command:
gcloud database-migration connection-profiles create postgresql CONNECTION_PROFILE_ID \
  --no-async \
  --alloydb-cluster=DESTINATION_INSTANCE_ID \
  --region=REGION \
  --display-name=CONNECTION_PROFILE_NAME
You should receive a response similar to the following:
Waiting for connection profile [CONNECTION_PROFILE_ID] to be created with [OPERATION_ID]
Waiting for operation [OPERATION_ID] to complete...done.
Created connection profile CONNECTION_PROFILE_ID [OPERATION_ID]
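If you skip the --no-async flag as described earlier, you can check the resulting long-running operation yourself. A minimal sketch, where OPERATION_ID and REGION are the values from your own output:

gcloud database-migration operations describe OPERATION_ID \
  --region=REGION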
This sample uses the optional --no-async flag so that all operations are performed synchronously. This means that some commands might take a while to complete. You can skip the --no-async flag to run commands asynchronously. If you do, you need to use the gcloud database-migration operations describe command to verify whether your operation succeeded.
Before using any of the command data below, make the following replacements:
Optional: Database Migration Service migrates all databases in your source by default. If you want to migrate only specific databases, use the --databases-filter flag and specify their identifiers as a comma-separated list. For example: --databases-filter=my-business-database,my-other-database
You can later edit migration jobs that you created with the --databases-filter flag by using the gcloud database-migration migration-jobs update command.
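For example, a later edit of the database selection might look like the following sketch. It assumes that the update command accepts the --databases-filter flag as implied above; confirm with gcloud database-migration migration-jobs update --help:

gcloud database-migration migration-jobs update MIGRATION_JOB_ID \
  --region=REGION \
  --databases-filter=my-business-database,my-other-database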
Execute the following command:
gcloud database-migration migration-jobs create MIGRATION_JOB_ID \
  --no-async \
  --region=REGION \
  --display-name=MIGRATION_JOB_NAME \
  --source=SOURCE_CONNECTION_PROFILE_ID \
  --destination=DESTINATION_CONNECTION_PROFILE_ID \
  --type=CONTINUOUS
You should receive a response similar to the following:
Waiting for migration job [MIGRATION_JOB_ID] to be created with [OPERATION_ID]
Waiting for operation [OPERATION_ID] to complete...done.
Created migration job MIGRATION_JOB_ID [OPERATION_ID]
Database Migration Service requires that the destination database instance operates as a read replica for the duration of the migration. Before you start the migration job, run the gcloud database-migration migration-jobs demote-destination command to demote the destination database instance.
Before using any of the command data below, make the following replacements:
If you don't know the identifier, you can use the gcloud database-migration migration-jobs list command to list all migration jobs in a given region and view their identifiers.
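For example, a minimal sketch, where REGION is the region that contains your migration jobs:

gcloud database-migration migration-jobs list \
  --region=REGION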
Execute the following command:
gcloud database-migration migration-jobs demote-destination MIGRATION_JOB_ID \
  --region=REGION
This action is performed asynchronously, so the command returns an Operation entity that represents a long-running operation:
done: false
metadata:
  '@type': type.googleapis.com/google.cloud.clouddms.v1.OperationMetadata
  apiVersion: v1
  createTime: '2024-02-20T12:20:24.493106418Z'
  requestedCancellation: false
  target: MIGRATION_JOB_ID
  verb: demote-destination
name: OPERATION_ID
To see whether your operation is successful, you can query the returned operation object or check the status of the migration job:

Use the gcloud database-migration migration-jobs describe command to view the status of the migration job.
Use the gcloud database-migration operations describe command with the OPERATION_ID to see the status of the operation itself.

At this point, your migration job is configured and connected to your destination database instance. You can manage it by using the following operations:
Optional: Verify the migration job.
We recommend that you first verify your migration job by running the gcloud database-migration migration-jobs verify command.
For more information, expand the following section:
gcloud database-migration migration-jobs verify
Before using any of the command data below, make the following replacements:
If you don't know the identifier, you can use the gcloud database-migration migration-jobs list command to list all migration jobs in a given region and view their identifiers.
Execute the following command:
gcloud database-migration migration-jobs verify MIGRATION_JOB_ID \
  --region=REGION
This action is performed asynchronously, so the command returns an Operation entity that represents a long-running operation:
done: false
metadata:
  '@type': type.googleapis.com/google.cloud.clouddms.v1.OperationMetadata
  apiVersion: v1
  createTime: '2024-02-20T12:20:24.493106418Z'
  requestedCancellation: false
  target: MIGRATION_JOB_ID
  verb: verify
name: OPERATION_ID
To see whether your operation is successful, you can query the returned operation object or check the status of the migration job:

Use the gcloud database-migration migration-jobs describe command with the MIGRATION_JOB_ID to view the status of the migration job.
Use the gcloud database-migration operations describe command with the OPERATION_ID to see the status of the operation itself.

Optional: Retrieve information about databases selected for migration.
When you migrate specific databases, Database Migration Service needs to retrieve the details about the databases that you selected for the migration job by using the --databases-filter flag. Before you start the migration job, run the gcloud database-migration migration-jobs fetch-source-objects command.
For more information, expand the following section:
gcloud database-migration migration-jobs fetch-source-objects
Before using any of the command data below, make the following replacements:
If you don't know the identifier, you can use the gcloud database-migration migration-jobs list command to list all migration jobs in a given region and view their identifiers.
Execute the following command:
gcloud database-migration migration-jobs fetch-source-objects MIGRATION_JOB_ID \
  --region=REGION
The output is similar to the following:
Waiting for migration job MIGRATION_JOB_ID to fetch source objects with OPERATION_ID
Waiting for operation OPERATION_ID to complete...done.
SOURCE_OBJECT                                      STATE         PHASE              ERROR
{'database': 'DATABASE_NAME', 'type': 'DATABASE'}  NOT_SELECTED  PHASE_UNSPECIFIED
{'database': 'DATABASE_NAME', 'type': 'DATABASE'}  STOPPED       CDC                {'code': 1, 'message': 'Internal error'}
To see whether your operation is successful, you can query the returned operation object or check the status of the migration job:

Use the gcloud database-migration migration-jobs describe command with the MIGRATION_JOB_ID to view the status of the migration job.
Use the gcloud database-migration operations describe command with the OPERATION_ID to see the status of the operation itself.

Start the migration job.
Start the migration job by running the gcloud database-migration migration-jobs start command.
For more information, expand the following section:
gcloud database-migration migration-jobs start
Before using any of the command data below, make the following replacements:
If you don't know the identifier, you can use the gcloud database-migration migration-jobs list command to list all migration jobs in a given region and view their identifiers.
Execute the following command:
gcloud database-migration migration-jobs start MIGRATION_JOB_ID \
  --region=REGION
This action is performed asynchronously, so the command returns an Operation entity that represents a long-running operation:
done: false
metadata:
  '@type': type.googleapis.com/google.cloud.clouddms.v1.OperationMetadata
  apiVersion: v1
  createTime: '2024-02-20T12:20:24.493106418Z'
  requestedCancellation: false
  target: MIGRATION_JOB_ID
  verb: start
name: OPERATION_ID
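If you want to wait for the job to finish the full dump and reach the CDC phase before you promote it (described below), you can poll the job from the command line. The following is a minimal sketch; the phase field and its CDC value are assumptions based on the migration job resource as shown in the fetch-source-objects output earlier, so confirm them against your own describe output:

# Poll the migration job once per minute until it reports the CDC phase.
while true; do
  PHASE="$(gcloud database-migration migration-jobs describe MIGRATION_JOB_ID \
    --region=REGION --format='value(phase)')"
  echo "Current phase: ${PHASE:-unknown}"
  [ "${PHASE}" = "CDC" ] && break
  sleep 60
done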
To see whether your operation is successful, you can query the returned operation object or check the status of the migration job:

Use the gcloud database-migration migration-jobs describe command with the MIGRATION_JOB_ID to view the status of the migration job.
Use the gcloud database-migration operations describe command with the OPERATION_ID to see the status of the operation itself.

Once the migration reaches the Change Data Capture (CDC) phase, you can promote the destination database instance from a read replica to a standalone instance. Run the gcloud database-migration migration-jobs promote command:
Before using any of the command data below, make the following replacements:
If you don't know the identifier, you can use the gcloud database-migration migration-jobs list command to list all migration jobs in a given region and view their identifiers.
Execute the following command:
gcloud database-migration migration-jobs promote MIGRATION_JOB_ID \
  --region=REGION
This action is performed asynchronously, so the command returns an Operation entity that represents a long-running operation:
done: false
metadata:
  '@type': type.googleapis.com/google.cloud.clouddms.v1.OperationMetadata
  apiVersion: v1
  createTime: '2024-02-20T12:20:24.493106418Z'
  requestedCancellation: false
  target: MIGRATION_JOB_ID
  verb: promote
name: OPERATION_ID
To see whether your operation is successful, you can query the returned operation object or check the status of the migration job:

Use the gcloud database-migration migration-jobs describe command with the MIGRATION_JOB_ID to view the status of the migration job.
Use the gcloud database-migration operations describe command with the OPERATION_ID to see the status of the operation itself.