
Chunked-resume

Use this pattern when:

  • large numeric-PK tables dominate runtime
  • you want checkpoints and restart safety
  • row-count validation matters
```toml
schema = "app"
on_schema_exists = "error"
unlogged_tables = false
workers = 8
chunk_size = 100000
resume = true
validation = "row_count"

[source]
type = "mysql"
dsn = "root:root@tcp(127.0.0.1:3306)/source_db"

[target]
dsn = "postgres://postgres:postgres@127.0.0.1:5432/target_db?sslmode=disable"
```
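The config above implies a chunked copy loop: rows are read in `chunk_size` batches ordered by the numeric primary key, and the last completed key is checkpointed so that with `resume = true` a restart skips work already done. A minimal sketch of that chunk/checkpoint arithmetic (the helper name and signature are illustrative, not pgferry's actual internals):

```python
def chunk_ranges(min_id, max_id, chunk_size, checkpoint=None):
    """Yield inclusive (lo, hi) primary-key ranges of at most chunk_size rows.

    checkpoint is the last PK already copied; on resume, copying restarts
    just past it. Hypothetical sketch -- pgferry's real code may differ.
    """
    lo = min_id if checkpoint is None else checkpoint + 1
    while lo <= max_id:
        hi = min(lo + chunk_size - 1, max_id)
        yield (lo, hi)
        lo = hi + 1

# Fresh run over PKs 1..250000 with chunk_size = 100000:
print(list(chunk_ranges(1, 250000, 100000)))
# → [(1, 100000), (100001, 200000), (200001, 250000)]

# Resume after a crash whose last checkpoint was PK 200000:
print(list(chunk_ranges(1, 250000, 100000, checkpoint=200000)))
# → [(200001, 250000)]
```

Because each range is bounded by the primary key rather than by OFFSET, a restarted run re-reads nothing and skips nothing.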
Plan the migration, then run it:

```sh
pgferry plan migration.toml
pgferry migrate migration.toml
```

Resume only makes sense when already-copied target data survives a crash, which is why this pattern sets `unlogged_tables = false` and keeps target tables logged on purpose.
