Every migration tells a story. Here are the chapters most teams miss.
Years of guiding customers and partners through Oracle and SQL Server migrations to PostgreSQL taught me one thing above all else: the tool was never the whole answer.
Legacy Oracle and SQL Server code wasn’t written yesterday. It was built over three decades, under different constraints, by people solving real problems with what they had. That logic has history. It deserves empathy, not just a conversion pass.
Ora2pg, AWS SCT, Google Cloud DMS, Ispirer: all are solid tools, and all do a decent job translating DDL and procedural logic. But the tool’s default output becomes your new PostgreSQL foundation. If that foundation has cracks, your customer’s first experience of PostgreSQL will be frustration, not its true capability.
Every migration I’ve worked on told a story. I learned something new in each one and carried it into every PostgreSQL implementation I built after.
Here are four gotchas that will bite you if you’re not watching.
1. Oracle NUMBER Without Precision – A Silent Performance Trap
Oracle lets you declare NUMBER without precision or scale. That’s a 30-year-old design choice. Most tools map it to NUMERIC or DOUBLE PRECISION depending on context, but without deliberate fine-tuning you get a blanket mapping that introduces implicit casting inside procedural blocks and hurts performance long term.

It compounds further when NUMBER columns carry primary key or foreign key constraints: a wrong type mapping directly impacts index access patterns, storage, and query performance over time.
If your conversion tool is producing this mapping without any precision inference, stop right there. Fix it before you go further.
| Oracle Number Declaration | Constraint | PostgreSQL Mapping |
| --- | --- | --- |
| NUMBER | Primary key | NUMERIC or DOUBLE PRECISION |
| NUMBER | Foreign key | NUMERIC or DOUBLE PRECISION |
| NUMBER | None | DOUBLE PRECISION (default) |
It also quietly introduces implicit casting inside procedural blocks, a performance issue that compounds over time and is hard to trace back to the root cause. I spoke about this in detail at PGConf India 2026.
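Here is what a deliberate mapping can look like once you have inferred actual precision from the source data. This is a sketch; table and column names are illustrative, not from any real migration:

```sql
-- Blanket tool output (sketch): every unconstrained NUMBER becomes unbounded NUMERIC.
-- CREATE TABLE orders (order_id NUMERIC PRIMARY KEY, customer_id NUMERIC, amount NUMERIC);

-- Deliberate mapping after precision inference on the source data:
CREATE TABLE orders (
    order_id    BIGINT PRIMARY KEY,   -- NUMBER that only ever held integral ids
    customer_id BIGINT,               -- FK column: type matches the referenced PK exactly
    amount      NUMERIC(12,2)         -- NUMBER carrying money: keep exact decimal
);
```

Integer key columns get integer index and join semantics; NUMERIC is reserved for values that genuinely need arbitrary-precision decimals.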
2. Implicit Type Casting – Errors Hidden for a Decade
Oracle silently casts mismatched types. Comparing a NUMBER column to a VARCHAR? No error: Oracle just handles it and hides the problem for years. PostgreSQL is strict by design; it throws an operator error the moment argument types don’t match. That’s not a bug, that’s PostgreSQL doing the right thing.
Most conversion tools can’t detect this during translation. It doesn’t surface until the first functional run, or worse, when the customer is already in testing. Code that ran in Oracle for a decade starts failing. To the end customer it feels like a PostgreSQL problem. It isn’t; it’s a decade-old data quality issue that Oracle never surfaced.
Two mitigation paths: explicit casting wherever feasible, or custom operators in a dedicated schema with casting functions, prioritized via search_path. Neither is a silver bullet, but understanding the behavioral gap is half the battle before you even touch the fix.
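A minimal sketch of the second path, assuming a dedicated compatibility schema (all names here are illustrative): a custom `=` operator that accepts the mismatched pair and casts internally.

```sql
-- Sketch: let numeric = text comparisons resolve instead of erroring out.
-- Everything lives in one dedicated schema so it can be audited and removed later.
CREATE SCHEMA IF NOT EXISTS compat;

CREATE FUNCTION compat.num_eq_text(l numeric, r text)
RETURNS boolean LANGUAGE sql IMMUTABLE
AS $$ SELECT l = r::numeric $$;   -- still fails loudly on genuinely non-numeric strings

CREATE OPERATOR compat.= (
    LEFTARG  = numeric,
    RIGHTARG = text,
    FUNCTION = compat.num_eq_text
);

-- Make the operator visible to legacy code without qualifying it:
SET search_path = compat, "$user", public;
```

Explicit casting in the converted code remains the cleaner long-term fix; the operator route buys time while the underlying data quality issue is triaged.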
→ Deep dive: Implicit Conversion in Oracle to PostgreSQL Migration
→ Related: Same SQL, Different Results — A Subtle Migration Bug
3. Incomplete Code Marked as Completed
This one is specific to AWS Schema Conversion Tool, and it’s dangerous.
For certain patterns AWS SCT can’t convert, it doesn’t fail loudly. It creates a version that compiles but doesn’t work. If you’re using metadata-driven schema validation, the object shows up as present. Functionally, it’s garbage.
Two real examples:
Oracle code: COUNT(DISTINCT col) inside analytical functions. SCT outputs a view that returns the error message as a column value. It compiles. It does nothing useful.
```sql
CREATE OR REPLACE FORCE VIEW MOCKSCHEMA.VW_DISTINCT_CLAUSE_ANALYTICAL(COL1, COL2, DISTINCT_COUNT_COL1) AS
WITH
......................
COUNT(DISTINCT COL1) OVER (PARTITION BY COL2) AS DISTINCT_COUNT_COL1 FROM ALIAS1
/
```
AWS Schema Conversion Tool output:

```sql
CREATE OR REPLACE VIEW mockschema.vw_distinct_clause_analytical (text, error_msg) AS
SELECT '.........'5340 - Severity CRITICAL - PostgreSQL doesn''t support the COUNT(<DYNAMIC_TYPE>) function. Revise your code to use another function or create a user-defined function.
5578 - Severity CRITICAL........
::varchar AS error_msg;
```
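A hand-written native equivalent, sketched with the mock names from the example above: PostgreSQL rejects DISTINCT inside window functions, but the same result comes from aggregating in a derived table and joining back.

```sql
-- Sketch: per-COL2 distinct counts computed once, then joined back to each row.
-- This reproduces COUNT(DISTINCT COL1) OVER (PARTITION BY COL2) without a window.
CREATE OR REPLACE VIEW mockschema.vw_distinct_clause_analytical AS
SELECT a.col1,
       a.col2,
       d.distinct_count_col1
FROM alias1 a
JOIN (
    SELECT col2, COUNT(DISTINCT col1) AS distinct_count_col1
    FROM alias1
    GROUP BY col2
) d USING (col2);
```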

One more example from AWS Schema Conversion Tool:
BULK COLLECT + FORALL patterns
```sql
CREATE OR REPLACE PROCEDURE "MOCKSCHEMA"."PROC_EMP_BULK_LOAD" AS
  TYPE emp_info_type IS TABLE OF employees%ROWTYPE;
  emp_info_type_tbl emp_info_type;
BEGIN
  SELECT e.*
    BULK COLLECT INTO emp_info_type_tbl
    FROM employees e;

  FORALL i IN 1 .. emp_info_type_tbl.COUNT
    INSERT INTO employees_bkp (EMPLOYEE_ID)
    VALUES (emp_info_type_tbl(i).EMPLOYEE_ID);
EXCEPTION
  WHEN OTHERS THEN
    ROLLBACK;
    RAISE;
END proc_emp_bulk_load;
/
```
SCT wraps the untranslated logic in comments inside a valid PL/pgSQL shell. The procedure compiles. The body does nothing.
```sql
CREATE OR REPLACE PROCEDURE mockschema.proc_emp_bulk_load()
AS
$BODY$
/* create table employees_bkp(EMPLOYEE_ID NUMBER(6)); */
DECLARE
emp_info_type_tbl mockschema.proc_emp_bulk_load$emp_info_type;
BEGIN
BEGIN
/*
[9996 - Severity CRITICAL - Transformer error occurred in plSqlStatement. Please submit report to developers.]
select e.*
bulk collect into emp_info_type_tbl
from employees e
*/
/*
[9996 - Severity CRITICAL - Transformer error occurred in plSqlStatement. Please submit report to developers.]
forall I in 1 ..emp_info_type_tbl.count
insert into employees_bkp
(
EMPLOYEE_ID
)
VALUES
(
emp_info_type_tbl(i).EMPLOYEE_ID
)
*/
END;
EXCEPTION
WHEN others THEN
/*
[5035 - Severity CRITICAL - Your code ends a transaction inside a block with exception handlers. Revise your code to move transaction control to the application side and try again.]
rollback
*/
RAISE;
END;
$BODY$
LANGUAGE plpgsql;
```
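What a hand-written conversion could look like, sketched with the example’s mock objects: PL/pgSQL needs neither BULK COLLECT nor FORALL, because a set-based INSERT ... SELECT does the whole job in one statement.

```sql
CREATE OR REPLACE PROCEDURE mockschema.proc_emp_bulk_load()
LANGUAGE plpgsql
AS $BODY$
BEGIN
    -- One set-based statement replaces the collection, the loop, and the binds.
    INSERT INTO employees_bkp (employee_id)
    SELECT e.employee_id
    FROM employees e;
    -- No explicit ROLLBACK: on an unhandled error PostgreSQL aborts the
    -- statement's work anyway, which is what the Oracle handler approximated.
END;
$BODY$;
```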

The lesson: never validate migration completeness from metadata alone. Run functional or sanity tests on every object!
4. Hidden Lock-in via AWS SCT Extension Packs
AWS SCT converts Oracle/MSSQL functions by wrapping them in proprietary extension schemas: aws_oracle_ext and aws_sqlserver_ext. It accelerates initial conversion but embeds undocumented, unmaintained, AWS-specific code into what should be an open PostgreSQL database.
Move to another cloud or self-host later? That extension dependency either goes with you or blocks you.
The fix is replacing extension-pack references with native PostgreSQL functionality or orafce equivalents.
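As a first pass, a catalog query can surface how much converted code still depends on the packs. A sketch: it checks routine source text only, so dependencies in views or defaults need a separate scan.

```sql
-- Sketch: list routines whose source still references the SCT extension packs.
SELECT n.nspname AS schema_name,
       p.proname AS routine_name
FROM pg_proc p
JOIN pg_namespace n ON n.oid = p.pronamespace
WHERE n.nspname NOT IN ('aws_oracle_ext', 'aws_sqlserver_ext',
                        'pg_catalog', 'information_schema')
  AND (p.prosrc ILIKE '%aws_oracle_ext%'
       OR p.prosrc ILIKE '%aws_sqlserver_ext%');
```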
We built two tools at DataCloudGaze specifically for this:
Extension Assessment Tool — scans your migrated code, maps extension dependencies by function, categorizes effort as Simple/Medium/Complex.
SCTMigrator — automates the replacement, giving you PostgreSQL that’s genuinely portable across any cloud or postgres platform.
Final Thoughts
Across dozens of migrations, Ora2pg, AWS SCT, Google Cloud DMS, and Ispirer have converted enormous chunks of Oracle PL/SQL and SQL Server T-SQL into working PL/pgSQL. Core translation (DDL, procedural logic, data type mapping) they handle well. That’s months of manual effort saved, and that matters.
But enablers are not finishers. Compatible and production-ready are two very different things. The NUMBER precision decisions, the implicit cast failures, the silent compilation errors, the extension lock-in: none of that gets resolved by the tool alone. That requires judgment, empathy for legacy code, and a systematic approach to validation.
A good reference on this: PostgreSQL Migration from Oracle (Converting 1 Million Lines of Code).
Every migration I’ve worked on taught me something new. These four gotchas are the ones that show up most consistently across tools, across customers, across source databases. They’re not edge cases. They’re the job.
If you’re in the middle of an Oracle or SQL Server migration to PostgreSQL and hitting walls, let’s talk.
📩 contact@datacloudgaze.com
📅 Book a 30-min call
🔗 DCGMigrator — end-to-end migration intelligence platform