What happens when a SQL query runs out of memory?


I want to set up a Postgres server on AWS, the biggest table will be 10GB – do I have to select 10GB of memory for this instance?

What happens when my query result is larger than 10GB?


Nothing bad will happen: PostgreSQL does not load the entire result set into memory at once. Available memory is used and re-used as needed while the result is prepared, and operations spill to temporary disk files when they exceed it.

See the PostgreSQL resource consumption documentation for more info.

Specifically, look at work_mem:

work_mem (integer)
Specifies the amount of memory to be used by internal sort operations and hash tables before writing to temporary disk files.

As long as you don’t run out of working memory on a single operation or set of parallel operations, you are fine.
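You can observe this spill behavior directly. The sketch below (table and column names are placeholders, not from the original question) lowers `work_mem` for the session so a large sort exceeds it; `EXPLAIN ANALYZE` will then report an external merge sort on disk rather than an error:

```sql
-- Lower work_mem for this session so the sort below exceeds it
-- (value is illustrative; the default is typically 4MB).
SET work_mem = '1MB';

-- The plan output will show something like:
--   Sort Method: external merge  Disk: ...kB
-- meaning the sort spilled to temporary disk files instead of failing.
EXPLAIN ANALYZE
SELECT * FROM big_table ORDER BY some_column;
```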

Edit: The above was an answer to the question What happens when you query a 10GB table without 10GB of memory on the server/instance?

Here is an updated answer to the updated question:

  • Only server-side resources are used to produce the result set
  • If you use a JDBC driver, then by default the entire result set is sent to your client machine, which can cause out-of-memory errors there

You can change this behavior by setting a fetch size, which makes the driver use a cursor and retrieve rows in batches.
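A minimal sketch of cursor-based fetching with the PostgreSQL JDBC driver (the connection URL, credentials, and table name are placeholders). Note that the driver only uses a server-side cursor when autocommit is off:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CursorFetchExample {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details -- substitute your own.
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "user", "secret");

        // Fetch size is honored only outside of autocommit mode.
        conn.setAutoCommit(false);

        try (Statement st = conn.createStatement()) {
            // Stream rows 1000 at a time instead of buffering the
            // whole result set in client memory.
            st.setFetchSize(1000);
            try (ResultSet rs = st.executeQuery("SELECT * FROM big_table")) {
                while (rs.next()) {
                    // process each row; only ~1000 rows are held at once
                }
            }
        }
        conn.commit();
        conn.close();
    }
}
```

With this pattern the client's memory use is bounded by the fetch size rather than the total result size.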

This behavior is documented in the PostgreSQL JDBC driver documentation, under “Getting results based on a cursor.”
