It's a view, it's a table... no, it's a materialized view!

PostgreSQL is a rich repository of evolving commands and functionality. Materialized views are one result of that evolution and in this Write Stuff article Robert M. Wysocki takes an in-depth look at their past, present and future.

Databases come in different shapes and sizes and so do policies created by their administrators. One of the strictest and most original is the rule I came across recently: "No VIEWs allowed in production database". When tracking down slow queries and investigating their causes, VIEWs can make the task much harder. And even more complicated is the issue of ownership and responsibility, especially when things take a wrong turn because of too many or too complex VIEWs. But usually the solution is simple and there's no need to go to such extremes as the rule mentioned. In this case it can be as easy as implementing MATERIALIZED VIEWs.

When dealing with slow queries, one of the first things that comes to mind is query result caching. This can be especially important in complex architectures that have many VIEWs, over other VIEWs, over yet another set of VIEWs. With lengthy SELECT queries operating in stacks like this, it's often hard to say whether the query planner will push the predicates down. If it doesn't - you're in deep trouble.

I can understand the policy of "no VIEWs allowed" - in the past I came across heavy queries making use of VIEWs more than once. The main problem is that if - for some reason - your predicates don't get pushed down into the VIEW's definition, you end up fetching all the rows that the VIEW in question would fetch without the predicate, and only then filtering through them. That's quite a penalty, and you probably wouldn't have used the VIEW in the first place if you had known about it.

In my experience, when faced with cases like these, application developers tend to try and "solve" the problem themselves by implementing some sort of application-level result caching. But what if you had an easy way of caching the results of your views, one that didn't require involving external tools such as memcached or Redis?

The terminology

But maybe it's best to first get our terminology straight. What is a VIEW? And how is a MATERIALIZED VIEW related to a VIEW?

Let's start with TABLE – it's basically organized storage for your data - columns and rows. You can easily query a TABLE using predicates on the columns. To simplify your queries, or perhaps to apply different security mechanisms on the data being accessed, you can use VIEWs – named queries – think of them as glasses through which you can look at your data.

So if TABLE is storage, a VIEW is just a way of looking at it, a projection of the storage you might say. When you query a TABLE, you fetch its data directly. On the other hand, when you query a VIEW, you are basically querying another query that is stored in the VIEW's definition. But the query planner is aware of that and can (and usually does) apply some "magic" to merge the two together.

Between the two there is MATERIALIZED VIEW - it's a VIEW that has a query in its definition and uses this query to fetch the data directly from the storage, but it also has its own storage that basically acts as a cache between the underlying TABLE(s) and the queries operating on the MATERIALIZED VIEW. It can be refreshed, just like an invalidated cache - a process that causes its definition's query to be executed again against the actual data. It can also be truncated, but then it would behave like neither a TABLE nor a VIEW. It's worth noting that this dual nature has some interesting consequences; unlike simple "nominal" VIEWs, their MATERIALIZED cousins are "real", meaning you can - for example - create indices on them. On the other hand, you should also take care of removing bloat from them. For more detail on materialized views, see this article.
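The three objects can be put side by side in a quick sketch (a toy table and hypothetical names, just to illustrate the distinctions above):

```sql
CREATE TABLE t (id int PRIMARY KEY, v text);

CREATE VIEW t_v AS                 -- stores only the query
    SELECT * FROM t WHERE v LIKE 'a%';

CREATE MATERIALIZED VIEW t_mv AS   -- stores the query AND its result
    SELECT * FROM t WHERE v LIKE 'a%';

REFRESH MATERIALIZED VIEW t_mv;    -- re-runs the query, replaces the stored rows
CREATE INDEX ON t_mv (id);         -- legal: a MATVIEW has real storage
```

Creating an index on `t_v` would fail - a plain VIEW has no storage to index.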

A simple use case

Now that we have our notions sorted out and we know what we're talking about, let's take a quick look at a trivial example. So, I have this really simple database that I use to store investment fund quotes:

fundusze=# \d names
            Table "public.names"
 Column |          Type          | Modifiers 
--------+------------------------+-----------
 symbol | character(6)           | not null
 name   | character varying(128) | 
Indexes:
    "names_pkey" PRIMARY KEY, btree (symbol)
Referenced by:
    TABLE "costs" CONSTRAINT "costs_symbol_fkey" FOREIGN KEY (symbol) REFERENCES names(symbol)
    TABLE "funds" CONSTRAINT "funds_symbol_fkey" FOREIGN KEY (symbol) REFERENCES names(symbol)
    TABLE "quotes" CONSTRAINT "quotes_symbol_fkey" FOREIGN KEY (symbol) REFERENCES names(symbol)

fundusze=# \d quotes
       Table "public.quotes"
 Column |     Type     | Modifiers 
--------+--------------+-----------
 symbol | character(6) | not null
 date   | date         | not null
 quote  | numeric(6,2) | 
Indexes:
    "quotes_pkey" PRIMARY KEY, btree (symbol, date)
Foreign-key constraints:
    "quotes_symbol_fkey" FOREIGN KEY (symbol) REFERENCES names(symbol)

If I'd like to fetch quotes for all equity (akcji in Polish) funds, I would have to execute a query like this:

fundusze=# explain (buffers, analyze) select date, quote from quotes inner join names on quotes.symbol = names.symbol where name ilike '%akcji%';
                                                      QUERY PLAN                                                      
----------------------------------------------------------------------------------------------------------------------
 Hash Join  (cost=8.06..3828.04 rows=37921 width=10) (actual time=0.524..574.040 rows=25944 loops=1)
   Hash Cond: (quotes.symbol = names.symbol)
   Buffers: shared hit=9 read=1088 dirtied=281
   ->  Seq Scan on quotes  (cost=0.00..2799.65 rows=170965 width=17) (actual time=0.008..539.779 rows=135480 loops=1)
         Buffers: shared hit=2 read=1088 dirtied=281
   ->  Hash  (cost=7.33..7.33 rows=59 width=7) (actual time=0.469..0.469 rows=47 loops=1)
         Buckets: 1024  Batches: 1  Memory Usage: 2kB
         Buffers: shared hit=4
         ->  Seq Scan on names  (cost=0.00..7.33 rows=59 width=7) (actual time=0.030..0.454 rows=47 loops=1)
               Filter: ((name)::text ~~* '%akcji%'::text)
               Rows Removed by Filter: 219
               Buffers: shared hit=4
 Planning time: 147.585 ms
 Execution time: 575.966 ms
(14 rows)

It isn't that fast for such a small table, but it's the first run, so most of the rows were read from disk. I can use a VIEW to simplify the query:

fundusze=# create view quotes_akcji_v as select date, quote from quotes inner join names on quotes.symbol = names.symbol where name ilike '%akcji%';
CREATE VIEW
fundusze=# explain (buffers, analyze) select * from quotes_akcji_v ;
                                                     QUERY PLAN                                                      
---------------------------------------------------------------------------------------------------------------------
 Hash Join  (cost=8.06..3828.04 rows=37921 width=10) (actual time=1.004..54.585 rows=25944 loops=1)
   Hash Cond: (quotes.symbol = names.symbol)
   Buffers: shared hit=1094
   ->  Seq Scan on quotes  (cost=0.00..2799.65 rows=170965 width=17) (actual time=0.013..19.185 rows=135480 loops=1)
         Buffers: shared hit=1090
   ->  Hash  (cost=7.33..7.33 rows=59 width=7) (actual time=0.972..0.972 rows=47 loops=1)
         Buckets: 1024  Batches: 1  Memory Usage: 2kB
         Buffers: shared hit=4
         ->  Seq Scan on names  (cost=0.00..7.33 rows=59 width=7) (actual time=0.043..0.940 rows=47 loops=1)
               Filter: ((name)::text ~~* '%akcji%'::text)
               Rows Removed by Filter: 219
               Buffers: shared hit=4
 Planning time: 0.938 ms
 Execution time: 56.539 ms
(14 rows)

As you can see, it's much faster now, but it has nothing to do with the fact that I used a VIEW – it's simply because this time all the rows were fetched from the shared buffers cache. The timings would look more or less the same if I had used the same direct query again.

Based on the VIEW just created, I can now show you its MATERIALIZED version:

fundusze=# create materialized view quotes_akcji_mv as select * from quotes_akcji_v ;
SELECT 25944
fundusze=# explain (buffers, analyze) select * from quotes_akcji_mv ;
                                                     QUERY PLAN                                                     
 Seq Scan on quotes_akcji_mv  (cost=0.00..380.70 rows=23970 width=18) (actual time=0.038..9.311 rows=25944 loops=1)
   Buffers: shared hit=141 dirtied=141
 Planning time: 0.129 ms
 Execution time: 13.137 ms
(4 rows)

So here you can see that upon creation a SELECT query was executed, fetching the data from quotes_akcji_v; the fetched rows were then stored in the quotes_akcji_mv MATERIALIZED VIEW's storage. As a result, querying the MATERIALIZED VIEW is four times faster than querying the VIEW. If you take a look at both query plans, you'll notice that the VIEW query has to apply a filter to all the rows, while the MATERIALIZED version just fetches them all, since the filtering was already done at creation time.

A more complex use case

The previous example was a very simple one, with an extremely small data set, and yet you can see that the benefit of creating and using a MATERIALIZED VIEW in place of a plain VIEW was significant. Just imagine how much can be gained in more complex cases. Actually, you don't have to imagine anything - let me tell you a story.

We had quite a big table with keyword hits. Throughout the whole day, a process had to go through another database, do some calculations and then update our table in question. This was quite a lengthy procedure. Another process had to periodically go through the table to... well... process it. The problem was that a quite complex query (involving not only this one table but also a few smaller ones) was taking too long to run - think hours. And once it finished - it had to start over with the new data. One important fact - the application used a CURSOR to fetch data in batches. What happened to us more than once was that those two processes were blocking each other - we couldn't put fresh data in the table because processing was still running (holding an open CURSOR on the table). And before the processing finished, we already had one or more refresh processes queued up waiting to access the table.

In order to speed things up, we decided to create a MATERIALIZED VIEW over the query. The update process was able to go through the table without being locked out by front-end queries and the web application was allowed to have a CURSOR open to MATVIEW for as long as it needed. The result was more than satisfying – not only did the two stop blocking each other, but the execution time also went down from hours to minutes.

But soon we were faced with another interesting problem. Obviously, we had to REFRESH the MATVIEW from time to time. The thing is that during such a refresh the MATERIALIZED VIEW is unavailable for querying - an AccessExclusiveLock is acquired by the REFRESH query. Refreshing was heavy and needed some time to complete, so the front-end queries were piling up waiting for the MATVIEW to become available again. Unfortunately, we still had a few months until the release of PostgreSQL 9.4 and the totally awesome feature called REFRESH MATERIALIZED VIEW CONCURRENTLY. I'd read about it many times and was really looking forward to it, but we had a real problem that demanded a solution.

Let's stop for a moment. Let me explain why I was so excited about this new feature. The magical keyword CONCURRENTLY means that the MATVIEW remains available for querying during the REFRESH procedure. What it does under the hood is create a new MATVIEW with a fresh data set, then compare the two and apply changes to the original one on a row-by-row basis - it doesn't require a heavy, exclusive lock on the original MATVIEW. An interesting piece of information that I just learned while doing research for this article is that comparing and applying changes is done thanks to a FULL OUTER JOIN. One important thing to remember - in order to be able to compare the two, PostgreSQL needs the MATVIEW to have a UNIQUE INDEX.
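To make the requirement concrete, here's a sketch using the earlier example. Note that the MATVIEW definition has to be extended to include the symbol column - date alone isn't unique across funds - and the name quotes_akcji_cmv is hypothetical:

```sql
CREATE MATERIALIZED VIEW quotes_akcji_cmv AS
    SELECT quotes.symbol, date, quote
      FROM quotes INNER JOIN names ON quotes.symbol = names.symbol
     WHERE name ILIKE '%akcji%';

-- REFRESH ... CONCURRENTLY needs a UNIQUE index to match rows by:
CREATE UNIQUE INDEX ON quotes_akcji_cmv (symbol, date);

-- 9.4+: readers are not blocked while this runs
REFRESH MATERIALIZED VIEW CONCURRENTLY quotes_akcji_cmv;
```

Without the unique index, the CONCURRENTLY form fails with an error, since PostgreSQL has no key to join the old and new row sets on.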

So, we had a MATERIALIZED VIEW speeding up our queries; they were faster than when running without the MATVIEW, but they still took some time. The queries used CURSORs to process data in batches, and a CURSOR stayed open for each query's entire duration. The window between one CURSOR being closed and another being opened kept moving, so there was no real way of synchronizing the two. We needed a way to wait for one query to finish, REFRESH the MATVIEW and make it accessible again for the next query. But we couldn't lock the MATVIEW for a period of time long enough to REFRESH it, since our queries would start piling up.

We came up with the idea of using two MATVIEWs - primary and standby - and having a stored procedure do a REFRESH of one and then swap the two - all in one transaction. Thanks to this architecture we could hold a lock on the "primary" view only for the very short period needed to rename the "standby" to "primary". The other one being locked for a long time wouldn't harm us. But this wasn't perfect. We still didn't know exactly when to schedule the run, since we had no way of anticipating the need for one, nor when one query would finish and the next would start (courtesy of an asynchronous job queue). So our procedure could end up with a refreshed "standby" MATVIEW that couldn't be renamed to "primary". Actually, it was a little more complicated: after refreshing "standby" we had to rename "primary" to "tmp", and only then could we rename "standby" to "primary". We could just run ALTER MATERIALIZED VIEW primary RENAME TO tmp and wait for it to finish, but...
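The core of the swap, stripped of the locking safeguards we ended up needing, can be sketched like this (the MATVIEW names are hypothetical):

```sql
-- Refresh the standby copy, then rotate the names; everything runs in
-- one transaction, so readers only ever see a consistent "primary".
BEGIN;
REFRESH MATERIALIZED VIEW quotes_standby_mv;
ALTER MATERIALIZED VIEW quotes_primary_mv RENAME TO quotes_tmp_mv;
ALTER MATERIALIZED VIEW quotes_standby_mv RENAME TO quotes_primary_mv;
ALTER MATERIALIZED VIEW quotes_tmp_mv RENAME TO quotes_standby_mv;
COMMIT;
```

The lengthy REFRESH only locks the standby copy, which nobody queries; the three renames are nearly instantaneous once their locks are granted - and acquiring those locks is exactly where the trouble starts.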

There's a lot of "buts" in this story, I know, but that's why it's so full of lessons learned.

... So, letting the ALTER wait and run wouldn't really work for us, for a very simple reason: the ALTER statement would place an AccessExclusiveLock request on the MATVIEW in the lock queue, causing application queries to pile up waiting for their turn. We didn't want that to happen, since this was the problem we were trying to solve in the first place.

One of the things we thought about was to have a flag in the application, telling it which MATVIEW is available for querying at any time. But this would be hard to implement mainly due to the use of an ORM.

If we only had a way of waiting for a lock without placing a long-lasting lock ourselves... Come on, we're using "The Most Advanced Open Source Database in The World", surely there's a way of doing what we need?

What we needed was to:

  1. REFRESH a "standby" MATVIEW trying multiple times if necessary.
  2. Rename "primary" to "tmp" in one quick go, without long-lasting locks, trying multiple times if necessary.
  3. Rename "standby" to "primary".

So, I made some modifications to the first draft of the stored procedure.

First of all, I had to find a way to time out a query in the event of it not being able to acquire a lock. Some quick research was enough to find the lock_timeout GUC - a great setting that lets us specify exactly this: the maximum time a query may wait for a lock before timing out. But this wasn't enough.
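For example, using the quotes table from earlier (the timeout value is just an illustration):

```sql
-- lock_timeout caps how long a statement waits for a lock; past the
-- limit it fails with SQLSTATE 55P03 (lock_not_available) instead of
-- sitting in the lock queue indefinitely.
BEGIN;
SET LOCAL lock_timeout = '100ms';  -- SET LOCAL scopes it to this transaction
LOCK TABLE quotes IN ACCESS EXCLUSIVE MODE;  -- fails fast if quotes is busy
COMMIT;
```

The error, rather than an indefinite wait, is what makes retrying possible.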

The second thing was retrying failed operations. Obviously, I didn't want the whole procedure to fail if it couldn't acquire a lock - I wanted it to try for a couple of times before exiting with an error. This was quite easy and I decided to use a simple loop with a fixed number of iterations.

The last element was not to retry immediately, but instead after some small, random delay. A combination of pg_sleep() and random() was exactly what I needed.
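Put together, the three pieces - timeout, bounded loop, random backoff - look roughly like this. This is only a sketch of the retry primitive around a single rename, with a hypothetical MATVIEW name; it is not the full procedure:

```sql
DO $$
DECLARE
    max_tries constant int := 5;
BEGIN
    -- make lock acquisition fail fast instead of queueing
    PERFORM set_config('lock_timeout', '100ms', true);
    FOR i IN 1 .. max_tries LOOP
        BEGIN
            EXECUTE 'ALTER MATERIALIZED VIEW quotes_primary_mv RENAME TO quotes_tmp_mv';
            RETURN;  -- lock acquired, rename done
        EXCEPTION WHEN lock_not_available THEN
            PERFORM pg_sleep(random());  -- back off up to 1s, then retry
        END;
    END LOOP;
    RAISE EXCEPTION 'could not acquire lock after % tries', max_tries;
END
$$;
```

The BEGIN/EXCEPTION sub-block catches the lock_not_available error raised when lock_timeout expires, so one failed attempt doesn't abort the whole procedure.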

So I had all the pieces ready, I just had to put them together. In pseudo-code, I came up with something along the lines of:

  1. Set lock_timeout and max iterations variable.
  2. Start the loop.
  3. Try and refresh the "standby" MATVIEW.
  4. If timed out, start another iteration, fail after reaching max iterations count.
  5. If successful, start fresh loop.
  6. Try and rename "primary" to "tmp".
  7. If timed out, start another iteration, fail after reaching max iterations count.
  8. If successful, rename "standby" to "primary".
  9. Rename "tmp" to "standby".

There are two loops in order to try and acquire two locks on two MATVIEWs. You can see that the last two RENAME operations aren't contained in such loops. This is simply because everything is done in one transaction, and - after the successful completion of step 6 - this transaction already holds an AccessExclusiveLock on both MATVIEWs, so we can be sure that the following steps will execute without any delays.

This solution worked perfectly for us, and with a big enough max iterations count we didn't have to worry about synchronization anymore. Writing this was fun and exciting for me, so I will leave coming up with an actual PL/pgSQL code example as homework for the reader. I hope it'll prove to be a good exercise for you!

Other scenarios

Aggregates, joins, very large tables with huge amounts of data and high numbers of columns and queries with, possibly, many predicates - with all of these, MATERIALIZED VIEWs can help. Their uses are not limited to OLTP databases. In fact, OLAP is another paradigm in which MATVIEWs can be extremely useful. Crunching numbers while also fetching data can take some time, and users usually expect their reports and cubes to be fast. Good schema design is important, but if data freshness isn't the major concern, using MATERIALIZED VIEWs can speed things up greatly.

Another use case is simply keeping data in sync. If you have, for example, many dimension tables that are often updated, you may end up with inconsistent reports when running an "old" fact table against a "newer" dimension table. MATERIALIZED VIEWs can keep all of those in sync by using a single, consistent snapshot while refreshing.

Many strategies

But the notion of a MATERIALIZED VIEW has been around much longer than this particular implementation. A few years back I had the opportunity to learn about different strategies that aren't yet covered by the PostgreSQL core: "lazy" and "eager". Both of these use triggers, and so they can be implemented in older PostgreSQL versions.

"Lazy" MATERIALIZED VIEWs work by storing identifiers of modified rows in a separate table and then, when it's time to refresh, reading that table and applying changes to the target one. They're called "lazy" since you have to explicitly "tell" the database when to refresh the target table's contents.

Another strategy is the "eager" one, which modifies the MATERIALIZED VIEW right after the source data is modified. It also uses triggers, but instead of using an intermediate table to store row identifiers it executes an UPDATE, INSERT or DELETE statement directly on the MATERIALIZED VIEW's table.
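A corresponding "eager" sketch, again with hypothetical names (quotes_cache standing in for the materialized table), showing the trigger applying the change directly:

```sql
CREATE FUNCTION quotes_eager_sync() RETURNS trigger AS $$
BEGIN
    -- mirror each change straight into the materialized table
    IF TG_OP = 'INSERT' THEN
        INSERT INTO quotes_cache VALUES (NEW.*);
    ELSIF TG_OP = 'UPDATE' THEN
        UPDATE quotes_cache SET quote = NEW.quote
         WHERE symbol = NEW.symbol AND date = NEW.date;
    ELSE
        DELETE FROM quotes_cache
         WHERE symbol = OLD.symbol AND date = OLD.date;
    END IF;
    RETURN NULL;
END
$$ LANGUAGE plpgsql;

CREATE TRIGGER quotes_eager_trg
    AFTER INSERT OR UPDATE OR DELETE ON quotes
    FOR EACH ROW EXECUTE PROCEDURE quotes_eager_sync();
```

The trade-off is clear: "eager" keeps the cache always fresh but adds work to every write, while "lazy" keeps writes cheap and defers the cost to refresh time.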

There's a great article by Jack Christensen about those strategies. Another good source of information is a wiki page on pre 9.3 strategies.

The future

What awaits in the future? Well, one interesting thing would be actually applying the strategies mentioned above to native PostgreSQL MATERIALIZED VIEWs. Among them is a proposal for a "very eager" refresh strategy causing the MATVIEW to be updated alongside the original modifying query, even before the transaction commits. Other things include the ability to create UNLOGGED MATERIALIZED VIEWs and making the query optimizer aware of MATVIEWs. The former would be very useful in high-traffic replicated databases, where the number and size of generated WAL segments matter. The latter would make PostgreSQL treat MATVIEWs as indices and give it the ability to pull data from them when it is deemed fresh enough. These ideas were discussed in a talk at PGConf.EU in 2013.


PostgreSQL's native MATERIALIZED VIEWs are a powerful tool, enabling many performance improvements while providing another way of ensuring data consistency. They're even more powerful with the ability to REFRESH CONCURRENTLY, and (provided a certain level of procedural language knowledge) one can achieve many interesting results going beyond the native implementation.

With many proposed changes and so many ideas floating around the Web, it's certainly a good idea to keep a close eye on the developments being made and solutions being created.

I sincerely hope this article was helpful and can serve as an informative source on understanding and using PostgreSQL's MATERIALIZED VIEWs.

Thank you for reading!

Robert M. Wysocki is a freelance PostgreSQL consultant, experienced GNU/Linux administrator and author of a technical blog.

This article is licensed with CC-BY-NC-SA 4.0 by Compose.