How to Migrate from MySQL to PostgreSQL
February 4, 2026 · Ghazi
This guide walks through the key differences you'll encounter, how to map data types, convert your schema and queries, and which tools can automate most of the heavy lifting.
Why Migrate to PostgreSQL?
Before diving into the how, it helps to understand why so many teams are making this move:
- Standards compliance: PostgreSQL follows the SQL standard more closely than MySQL, which means fewer surprises and more portable SQL.
- Advanced data types: Native support for JSON, arrays, hstore, ranges, and custom types gives you more modeling flexibility.
- Extensibility: Extensions like PostGIS (geospatial), pg_trgm (fuzzy search), and TimescaleDB (time series) let you extend Postgres for specialized workloads.
- Concurrency: PostgreSQL uses MVCC (Multi-Version Concurrency Control) throughout, whereas in MySQL it depends on the storage engine (InnoDB has it; MyISAM does not), giving you consistent read performance under write-heavy workloads.
- Ecosystem growth: Tools like Neon, Supabase, and a wide range of managed cloud providers have made PostgreSQL the default choice for new projects.
Step 1: Audit Your MySQL Database
Start by understanding what you're working with. Run a few queries to inventory your schema:
```sql
-- List all tables and their engines
SELECT table_name, engine, table_rows
FROM information_schema.tables
WHERE table_schema = 'your_database';

-- Check column types you'll need to map
SELECT table_name, column_name, data_type, column_type
FROM information_schema.columns
WHERE table_schema = 'your_database'
ORDER BY table_name, ordinal_position;
```

Pay attention to MySQL-specific features you rely on: storage engines (InnoDB vs MyISAM), AUTO_INCREMENT columns, ENUM types, and any use of MySQL-specific SQL syntax.
Step 2: Map Data Types
Most MySQL types have direct PostgreSQL equivalents, but some require attention. Here are the key mappings:
| MySQL Type | PostgreSQL Type | Notes |
|---|---|---|
| INT AUTO_INCREMENT | SERIAL or INT GENERATED ALWAYS AS IDENTITY | IDENTITY is the modern approach |
| BIGINT AUTO_INCREMENT | BIGSERIAL or BIGINT GENERATED ALWAYS AS IDENTITY | |
| TINYINT(1) | BOOLEAN | MySQL uses TINYINT(1) as a boolean |
| TINYINT | SMALLINT | No TINYINT in Postgres |
| DOUBLE | DOUBLE PRECISION | |
| FLOAT | REAL | |
| DATETIME | TIMESTAMP | |
| BLOB / LONGBLOB | BYTEA | |
| TEXT / LONGTEXT | TEXT | Postgres TEXT has no length limit |
| ENUM('a', 'b') | Custom ENUM type or CHECK constraint | Postgres enums are standalone types |
| SET | TEXT[] or junction table | No SET type in Postgres |
| JSON | JSONB | JSONB is faster for queries |
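The table above can double as the seed of a small conversion helper. Here is a minimal sketch in Python that matches on MySQL's `column_type` string from `information_schema.columns` — the rule list and function name are my own for illustration, not from any migration library:

```python
import re

# Ordered rules: first match wins. Patterns are checked against the
# lowercased MySQL column_type (e.g. "tinyint(1)", "int(11) auto_increment").
MYSQL_TO_PG = [
    (r"^tinyint\(1\)", "boolean"),                  # MySQL's de facto boolean
    (r"^tinyint", "smallint"),                      # no TINYINT in Postgres
    (r"^bigint.*auto_increment", "bigint generated always as identity"),
    (r"^int.*auto_increment", "int generated always as identity"),
    (r"^int.*unsigned", "bigint"),                  # unsigned needs the wider type
    (r"^double", "double precision"),
    (r"^float", "real"),
    (r"^datetime", "timestamp"),
    (r"\b(long|medium|tiny)?blob\b", "bytea"),
    (r"\b(long|medium|tiny)?text\b", "text"),
    (r"^json", "jsonb"),
]

def map_type(mysql_column_type: str) -> str:
    """Return a PostgreSQL type for a MySQL column_type string."""
    ct = mysql_column_type.lower()
    for pattern, pg_type in MYSQL_TO_PG:
        if re.search(pattern, ct):
            return pg_type
    return ct  # types that are already valid (varchar, etc.) pass through
```

Order matters here: `tinyint(1)` must be tested before the general `tinyint` rule, for instance. ENUM and SET are deliberately left out, since they need per-column decisions rather than a string substitution.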
Step 3: Convert Your Schema
Beyond data types, several DDL patterns differ between MySQL and PostgreSQL. Here are the most common changes:
Auto-increment Columns
MySQL:

```sql
CREATE TABLE users (
  id INT NOT NULL AUTO_INCREMENT,
  name VARCHAR(100),
  PRIMARY KEY (id)
);
```

PostgreSQL:

```sql
CREATE TABLE users (
  id INT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  name VARCHAR(100)
);
```

Enum Types
MySQL defines enums inline in the column definition. PostgreSQL requires you to create the enum type first:
MySQL:

```sql
CREATE TABLE orders (
  status ENUM('pending', 'shipped', 'delivered') DEFAULT 'pending'
);
```

PostgreSQL:

```sql
CREATE TYPE order_status AS ENUM ('pending', 'shipped', 'delivered');
CREATE TABLE orders (
  status order_status DEFAULT 'pending'
);
```

Identifier Quoting
MySQL uses backticks for identifier quoting. PostgreSQL uses double quotes. In practice, the best approach is to avoid quoting altogether by using lowercase, snake_case identifiers.
```sql
-- MySQL
SELECT `user`.`first-name` FROM `user`;

-- PostgreSQL
SELECT "user"."first-name" FROM "user";

-- Better: avoid reserved words and special characters
SELECT first_name FROM users;
```

Step 4: Update Your SQL Queries
Most standard SQL works in both databases, but there are MySQL-specific patterns you'll need to update:
LIMIT with OFFSET
Both databases support LIMIT ... OFFSET ..., so this usually works as-is. However, MySQL's LIMIT offset, count shorthand is not valid in PostgreSQL:
```sql
-- MySQL shorthand (not valid in PostgreSQL)
SELECT * FROM users LIMIT 10, 20;

-- Standard SQL (works in both)
SELECT * FROM users LIMIT 20 OFFSET 10;
```

String Concatenation
```sql
-- MySQL
SELECT CONCAT(first_name, ' ', last_name) FROM users;

-- PostgreSQL (both work)
SELECT CONCAT(first_name, ' ', last_name) FROM users;
SELECT first_name || ' ' || last_name FROM users;
```

Date Functions
```sql
-- MySQL
SELECT NOW(), CURDATE(), DATE_FORMAT(created_at, '%Y-%m-%d');

-- PostgreSQL
SELECT NOW(), CURRENT_DATE, TO_CHAR(created_at, 'YYYY-MM-DD');
```

IFNULL vs COALESCE
```sql
-- MySQL
SELECT IFNULL(nickname, 'Anonymous') FROM users;

-- PostgreSQL (COALESCE is standard SQL and works in both)
SELECT COALESCE(nickname, 'Anonymous') FROM users;
```

GROUP BY Behavior
MySQL (depending on the sql_mode) allows selecting columns not in the GROUP BY clause. PostgreSQL strictly enforces that every selected column must be in the GROUP BY or wrapped in an aggregate function:
```sql
-- This may work in MySQL but will fail in PostgreSQL
SELECT user_id, name, COUNT(*)
FROM orders
GROUP BY user_id;

-- Correct for PostgreSQL
SELECT user_id, MAX(name) AS name, COUNT(*)
FROM orders
GROUP BY user_id;
```

UPSERT Syntax
```sql
-- MySQL
INSERT INTO users (id, name, email)
VALUES (1, 'Alice', 'alice@example.com')
ON DUPLICATE KEY UPDATE name = VALUES(name), email = VALUES(email);

-- PostgreSQL
INSERT INTO users (id, name, email)
VALUES (1, 'Alice', 'alice@example.com')
ON CONFLICT (id) DO UPDATE
SET name = EXCLUDED.name, email = EXCLUDED.email;
```

Step 5: Choose a Migration Tool
You don't have to do everything by hand. Several tools can automate schema conversion and data transfer:
pgloader
pgloader is the most popular open-source tool for migrating to PostgreSQL. It reads directly from a live MySQL database and loads data into PostgreSQL, handling type conversions automatically.
```shell
pgloader mysql://user:pass@mysql-host/mydb \
         postgresql://user:pass@pg-host/mydb
```

For more control, use a pgloader command file:

```
LOAD DATABASE
     FROM mysql://user:pass@mysql-host/mydb
     INTO postgresql://user:pass@pg-host/mydb
WITH include drop, create tables, create indexes, reset sequences
 SET maintenance_work_mem to '512MB',
     work_mem to '48MB'
CAST type tinyint to boolean using tinyint-to-boolean;
```

mysqldump and mysql2pgsql
If you prefer a two-step process, you can dump your MySQL database and convert the dump file. Tools like mysql2pgsql (Ruby gem) or pgloader can process MySQL dump files. However, loading directly from a live MySQL connection with pgloader is usually simpler and more reliable.
AWS DMS (Database Migration Service)
If you're on AWS, DMS can replicate data from MySQL to PostgreSQL with minimal downtime. It supports both full-load migration and ongoing change data capture (CDC) for near-zero downtime migrations.
Step 6: Validate the Migration
After moving your data, verify that everything transferred correctly:
- Row counts: Compare row counts for every table between MySQL and PostgreSQL.
- Spot checks: Query specific records and compare values, paying attention to dates, nulls, and special characters.
- Sequences: Make sure auto-increment sequences are set to the correct next value. pgloader handles this, but verify with `SELECT last_value FROM tablename_id_seq;`.
- Indexes and constraints: Verify that primary keys, unique constraints, foreign keys, and indexes were all created correctly.
- Application testing: Run your application's test suite against the new PostgreSQL database. This is the best way to catch query compatibility issues.
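The row-count check is easy to script once you've pulled `SELECT COUNT(*)` per table from each side with your drivers of choice. A minimal sketch, assuming the counts have already been fetched into plain dicts (the function name and shapes are illustrative):

```python
def compare_row_counts(mysql_counts: dict[str, int],
                       pg_counts: dict[str, int]) -> list[str]:
    """Return human-readable mismatches between two {table: row_count} maps."""
    problems = []
    for table in sorted(set(mysql_counts) | set(pg_counts)):
        src = mysql_counts.get(table)
        dst = pg_counts.get(table)
        if src is None:
            problems.append(f"{table}: only in PostgreSQL")
        elif dst is None:
            problems.append(f"{table}: missing from PostgreSQL")
        elif src != dst:
            problems.append(f"{table}: MySQL has {src} rows, PostgreSQL has {dst}")
    return problems
```

An empty result means every table made it over with matching counts; anything else tells you exactly where to start digging.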
Common Pitfalls
Watch out for these issues that frequently trip people up during migration:
- Case sensitivity: MySQL is case-insensitive for string comparisons by default. PostgreSQL is case-sensitive. If your app relies on case-insensitive matching, use `ILIKE` instead of `LIKE`, or use the `citext` extension.
- Zero dates: MySQL allows `'0000-00-00'` as a date value. PostgreSQL does not. You'll need to convert these to `NULL` or a valid date.
- Unsigned integers: PostgreSQL does not support unsigned integer types. A MySQL `INT UNSIGNED` should become a `BIGINT` in PostgreSQL if you need the full range.
- Character encoding: Make sure your PostgreSQL database is created with `UTF8` encoding. MySQL databases sometimes use `latin1` or mixed encodings that need to be converted during migration.
- Stored procedures: MySQL stored procedures and functions use a different syntax than PostgreSQL's PL/pgSQL. These need to be manually rewritten.
After the Migration
Once your data is in PostgreSQL, take advantage of features that weren't available in MySQL:
- Use `JSONB` columns with indexing for flexible semi-structured data.
- Add `CHECK` constraints for data validation at the database level.
- Use `EXPLAIN ANALYZE` to understand and optimize query performance.
- Explore extensions like `pg_trgm` for full-text and fuzzy search, or `PostGIS` for geospatial queries.
- Use CTEs (Common Table Expressions) and window functions for complex queries that would have required subqueries or application-level logic in MySQL.
With your database now running on PostgreSQL, you'll want a good client to explore your data and run queries. PostgresGUI is a lightweight, native PostgreSQL client for Mac that connects to any Postgres instance. It's a fast, simple way to browse your tables and verify everything looks right after migration.