You can't upgrade an existing conversion workspace. If you want to use the upgraded conversion workspace features, create a new conversion workspace to use with your migration job.
Remove objects from the source schema to exclude them from conversion. You can later add removed objects if needed.
You can inspect the converted schema in the Cloud SQL for PostgreSQL draft tab.
You can add previously removed objects from the source schema back into the conversion.
You can use the Filter objects button to reduce the number of displayed objects. See Filter objects in the source schema view.
You can inspect the converted schema in the Cloud SQL for PostgreSQL draft tab.
You can pull schema and code from your source database into the conversion workspace at any time. Pulling the source gives you an opportunity to add new objects to the conversion or update existing ones.
Pulling updated database schema and code doesn't reset any custom mappings that exist in the conversion workspace unless you explicitly choose to remove custom mappings.
Additionally, pulling the source doesn't override SQL changes on your code objects. You can reset these changes directly at the object level.
Database Migration Service pulls the new snapshot from your source database.
You can inspect the converted schema in the Cloud SQL for PostgreSQL draft tab.
You can customize the conversion logic with a conversion mapping file. The conversion mapping file is a text file that contains precise instructions (referred to as conversion directives) for how your Oracle objects should be converted into PostgreSQL objects.
To write your custom mapping files:
Use the example configuration file as a point of reference.
Write your custom conversion mappings in a text editor and upload it to the conversion workspace.
To add a custom conversion mapping file to your workspace, do the following:
To remove a custom conversion mapping file from your workspace, do the following:
After you perform the source conversion, you can review the conversion results and possible issues for every individual converted object in the workspace editor area. You can also use Google Cloud CLI to save all results and issues in bulk to a text file.
In the Google Cloud console, go to Conversion workspaces.
Click the display name of the conversion workspace that you want to work with.
Conversion workspace editor opens.
Select the Oracle tab, and locate the object for which you want to review conversion results in the tree view table.
Select the object. Use the SQL and Conversion issues tabs to review the conversion.
With the Google Cloud CLI, you can print all conversion results or issues to the terminal. Redirect the output to a file for more convenient bulk object reviews.
The gcloud CLI displays conversion results in the terminal in the form of Data Definition Language (DDL) statements. To save conversion results to a file, execute the following command:
gcloud database-migration conversion-workspaces describe-ddls \
CONVERSION_WORKSPACE_ID \
--region=REGION_ID \
> OUTPUT_FILE_PATH
Replace:
CONVERSION_WORKSPACE_ID with the conversion workspace identifier. For information on how to retrieve conversion workspace identifiers, see View conversion workspace details.
REGION_ID with the name of the region where the conversion workspace is located.
OUTPUT_FILE_PATH with the path to the text file where you want to save the output.
Example:
gcloud database-migration conversion-workspaces describe-ddls \
my-conversion-workspace \
--region=us-central1 \
> ./my-conversion-ddls.txt
Result:
Your schema conversion results are saved in a text format where the first line says DDLs and subsequent lines contain SQL statements:
DDLs
CREATE SCHEMA IF NOT EXISTS "SCHEMA1";
ALTER TABLE "SCHEMA1"."EMPLOYEES" ADD CONSTRAINT PK_ID PRIMARY KEY ("ID");
CREATE OR REPLACE FUNCTION mockschema.func_test_datatype(str1 VARCHAR(65000)) RETURNS DECIMAL
LANGUAGE plpgsql
AS $$
DECLARE
  str2 VARCHAR(100);
BEGIN
  SELECT employees.first_name INTO STRICT STR2
  FROM mockschema.employees
  WHERE employees.employee_id = CAST(FUNC_TEST_DATATYPE.str1 as DECIMAL);
  RAISE NOTICE '%', concat('Input : ', FUNC_TEST_DATATYPE.str1, ' Output : ', str2);
  RETURN 0;
END;
$$;
CREATE OR REPLACE PROCEDURE greetings AS BEGIN dbms_output.put_line('Hello World!'); END;
CREATE SYNONYM TABLE "SCHEMA1"."SYNONYM1" ON "SCHEMA1"."EMPLOYEES";
CREATE OR REPLACE VIEW "SCHEMA1"."VIEW1" AS SELECT * FROM JOBS;
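Because the saved results are plain text, you can post-process them with standard shell tools. As a rough sketch, using hypothetical sample data and a hypothetical file name, the following counts how many statements of each kind a saved DDL file contains:

```shell
# Write a small sample DDL file in the format shown above
# (hypothetical data and path, for illustration only).
printf '%s\n' \
  'DDLs' \
  'CREATE SCHEMA IF NOT EXISTS "SCHEMA1";' \
  'ALTER TABLE "SCHEMA1"."EMPLOYEES" ADD CONSTRAINT PK_ID PRIMARY KEY ("ID");' \
  'CREATE OR REPLACE VIEW "SCHEMA1"."VIEW1" AS SELECT * FROM JOBS;' \
  > ./my-conversion-ddls.txt

# Summarize statements per leading DDL keyword pair. The sed step
# drops "OR REPLACE" so that CREATE OR REPLACE VIEW counts as CREATE VIEW.
sed 's/OR REPLACE //' ./my-conversion-ddls.txt \
  | grep -oE '^(CREATE|ALTER) [A-Z]+' \
  | sort | uniq -c | sort -rn
```

This kind of summary can help you gauge the size of a conversion before reviewing individual statements.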
To save conversion issues to a file, execute the following command:
gcloud database-migration conversion-workspaces describe-issues \
CONVERSION_WORKSPACE_ID \
--region=REGION_ID \
> OUTPUT_FILE_PATH
Replace:
CONVERSION_WORKSPACE_ID with the conversion workspace identifier. For information on how to retrieve conversion workspace identifiers, see View conversion workspace details.
REGION_ID with the name of the region where the conversion workspace is located.
OUTPUT_FILE_PATH with the path to the text file where you want to save the output.
Example:
gcloud database-migration conversion-workspaces describe-issues \
my-conversion-workspace \
--region=us-central1 \
> ./my-conversion-issues.txt
Result:
All the conversion issues contained in your workspace are saved in a text format where the first line contains column headers and each subsequent line contains a separate conversion issue:
PARENT   NAME               ENTITY_TYPE       ISSUE_TYPE  ISSUE_SEVERITY  ISSUE_CODE  ISSUE_MESSAGE
SCHEMA1  EMPLOYEES          TABLE             DDL         ERROR           500         unable to parse DDL.
SCHEMA1  EMPLOYEES          TABLE             CONVERT     WARNING         206         no conversion done.
SCHEMA1  STORED_PROCEDURE1  STORED_PROCEDURE  DDL         ERROR           500         invalid DDL.
SCHEMA1  SYNONYM1           SYNONYM           CONVERT     WARNING         206         synonym warning message.
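Because each issue occupies one line, you can narrow the saved report with standard shell tools. As a sketch, using hypothetical sample data and a hypothetical file name, the following keeps only ERROR-severity rows:

```shell
# Write a small sample issues report in the format shown above
# (hypothetical data and path, for illustration only).
printf '%s\n' \
  'PARENT NAME ENTITY_TYPE ISSUE_TYPE ISSUE_SEVERITY ISSUE_CODE ISSUE_MESSAGE' \
  'SCHEMA1 EMPLOYEES TABLE DDL ERROR 500 unable to parse DDL.' \
  'SCHEMA1 EMPLOYEES TABLE CONVERT WARNING 206 no conversion done.' \
  > ./my-conversion-issues.txt

# Keep only ERROR-severity rows: NR > 1 skips the header line,
# and $5 is the ISSUE_SEVERITY column.
awk 'NR > 1 && $5 == "ERROR"' ./my-conversion-issues.txt
```

Filtering by severity first lets you fix blocking errors before triaging warnings.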
Database Migration Service might not be able to automatically convert your entire source. For most Oracle objects, you can use the conversion editor directly in Database Migration Service to adjust the generated SQL. For others, you might need to change the object directly in your source database and then pull the source snapshot again.
For a complete list of objects that Database Migration Service supports for editing directly in the conversion workspace, see Editable Oracle objects.
To fix the conversion issues found in your schema, do the following:
You can use the Google Cloud console to review individual objects, or the gcloud CLI to review all objects in bulk.
Depending on the type of your issue, you can fix it directly in the workspace editor, or you might need to provide a customized conversion mapping file. Expand the following sections for more information.
Regardless of what type of issue you're working with, you can try the Gemini-powered conversion assistant to find a solution. For more information, see Use Gemini conversion assistant.
To fix issues encountered with objects that are supported in the workspace editor, do the following:
To fix issues encountered with objects that aren't supported in the workspace editor, perform one of the following:
You can use a conversion mapping file to provide precise definitions for how Database Migration Service should convert certain PostgreSQL objects. To use a conversion mapping file, do the following:
Before you apply the schema to the destination database, you can first perform a test run to proactively check for possible issues. To perform the test, Database Migration Service creates a temporary database. The test run doesn't impact your destination Cloud SQL instance.
Make sure your dedicated migration user has the CREATEDB permission. For more information, see Create and configure your destination Cloud SQL instance.
In the Google Cloud console, go to Conversion workspaces.
Click the display name of the conversion workspace that you want to work with.
Conversion workspace editor opens.
Click Apply to destination > Test (recommended).
The wizard for applying schema to destination database appears.
In the Define destination section, select the connection profile that points to your destination database.
Click Define and continue.
In the Select objects and test application section, select the schemas of database entities you want to test for your destination database.
You can use the Filter objects button to reduce the number of displayed objects. See Filter objects in the source schema view.
Click Test application.
You can review the application status in the Cloud SQL for PostgreSQL tab.
When the schema you would like to use in the destination database is converted according to your requirements and mappings, you can apply the results to the destination database. Applying the schema to the destination doesn't alter any data in the source database.
In the Google Cloud console, go to Conversion workspaces.
Click the display name of the conversion workspace that you want to work with.
Conversion workspace editor opens.
Click Apply to destination > Apply.
The wizard for applying schema to destination database appears.
In the Define destination section, select the connection profile that points to your destination database.
Click Define and continue.
In the Review objects and apply conversion to destination section, select the schemas of database entities you want to create in your destination database.
You can use the Filter objects button to reduce the number of displayed objects. See Filter objects in the source schema view.
Click Apply to destination.
You can review the application status in the Cloud SQL for PostgreSQL tab.
You can create a migration job that uses your conversion workspace directly from the conversion editor interface.
In the Google Cloud console, go to Conversion workspaces.
Click the display name of the conversion workspace that you want to work with.
Conversion workspace editor opens.
Click Create migration job.
Proceed with the standard migration job steps, as outlined in Create a migration job.
Database schemas often contain thousands of objects, making it challenging to partition conversion work. When you add objects from the schema snapshot to the source schema view, you can use filters to limit the number of displayed objects. Filters let you add objects in a more granular fashion and focus on converting a select subset of your schema.
Use the filtered view when you add objects to the source schema view:
You can filter by object name, for example ADMIN. You can combine filter properties with logical operators, for example type=table. For more information on the filtering syntax, see Supported filtering syntax.
You can filter objects by name with basic free text search, or use a dedicated type property. Both approaches support the Google API formal specification for filtering, meaning you can use literals with wildcards, as well as logical and comparison operators.
Use free text to filter the objects by name. This approach is case-sensitive and supports wildcards.
Example:
The *JOB* query uses wildcards to search for entities that contain the JOB substring. The filtered view returns some tables and one stored procedure:
Filter objects by the type property

You can filter objects by all standard types supported in Database Migration Service.
The type property supports the following literals with the equality (=) and inequality (!=) operators: database, schema, table, column, index, sequence, stored_procedure, function, view, synonym, materialized_view, udt, constraint, database_package, trigger, and event_trigger.
Example:
The type=table filter returns only tables present in your schema:
You can specify multiple conditions by combining them with logical operators.
For example, to search exclusively for tables whose names contain the JOB or EMPLOYEE substrings, use this query:
type=table AND (*JOB* OR *EMPLOYEE*)
As a result, the filter displays all matching tables:
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-04-17 UTC.