PostgreSQL: updating millions of rows

Someone will give a better answer for Postgres, but here are a few observations from an Oracle perspective which may apply (this is too long for a comment).

My first concern would be trying to update 2 million rows in one transaction.
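One common workaround is to split the work into several smaller transactions, for example by primary-key range, so that no single transaction touches millions of rows. A minimal sketch (the table `big_table`, its integer key `id`, and the column being set are all hypothetical):

```sql
-- Update in batches of 100,000 rows by key range, committing after each
-- batch so locks and undo/WAL pressure stay bounded.
BEGIN;
UPDATE big_table SET flag = true WHERE id >= 0 AND id < 100000;
COMMIT;

BEGIN;
UPDATE big_table SET flag = true WHERE id >= 100000 AND id < 200000;
COMMIT;
-- ...repeat until the full key range is covered.
```

In practice you would loop this from a script or a procedural block rather than writing each batch by hand.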

SELECT ... INTO TEMPORARY TABLE master_tmp;
TRUNCATE TABLE consistent.master;
-- Now DROP all constraints on consistent.master, then:
INSERT INTO consistent.master SELECT * FROM master_tmp;

(If you use an UNLOGGED table instead of a temporary one, be aware that unlogged tables get truncated if the DB shuts down uncleanly while they're dirty.)
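The unlogged variant mentioned above might look like the following sketch (table and column names are illustrative, not from the original question):

```sql
-- Build the new contents in an UNLOGGED table, which skips WAL and is
-- therefore much faster to populate, then swap it in.
CREATE UNLOGGED TABLE master_new AS
SELECT *            -- plus whatever transformation is needed
FROM consistent.master;

-- Before relying on the data for durability, convert it back to a
-- logged table (available in PostgreSQL 9.5 and later):
ALTER TABLE master_new SET LOGGED;
```

After that, the old table can be dropped and the new one renamed into place, with constraints and indexes recreated last.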

I don't need transactional integrity (but of course, if the system crashes, there should be no data corruption).

Oracle allows "no logging" hints on statements to stop the journalling.

It speeds up the statements a lot, but leaves your DB in an "unrecoverable" situation.

The solution was to drop all indexes on the table being updated before running the update statement.

Once I did that, the update finished in a few minutes.
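That approach might look like the following sketch (the index, table, and column names here are hypothetical):

```sql
-- Drop the indexes so the UPDATE doesn't have to maintain them row by row.
DROP INDEX IF EXISTS idx_master_foo;
DROP INDEX IF EXISTS idx_master_bar;

UPDATE consistent.master SET some_column = some_expression;

-- Recreate the indexes afterwards: one bulk index build is far cheaper
-- than millions of incremental index updates during the UPDATE.
CREATE INDEX idx_master_foo ON consistent.master (foo);
CREATE INDEX idx_master_bar ON consistent.master (bar);
```

Just remember to recreate every index (and any unique constraints) you dropped before letting normal traffic back onto the table.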
