In the first "Addon" article of this cycle of Compose's Write Stuff, Lucero Del Alba takes a look at how to get better performance with PostgreSQL, as long as you aren't too worried about replication and persistence. Do you want to shed light on a favorite feature in your preferred database? Why not write a short "Addon" for Write Stuff?
If data consistency after a server crash is not an issue, or you're dealing with a disposable table that needs an extra boost for writing, then unlogged tables are for you. Unlogged tables were introduced in PostgreSQL 9.1 in 2011. From the CREATE TABLE manual page:
(...) Data written to unlogged tables is not written to the write-ahead log, which makes them considerably faster than ordinary tables. However, they are not crash-safe: an unlogged table is automatically truncated after a crash or unclean shutdown. The contents of an unlogged table are also not replicated to standby servers. Any indexes created on an unlogged table are automatically unlogged as well.
What makes writing so much faster is that tables flagged as UNLOGGED bypass the write-ahead log entirely. However, the Write-Ahead Logging (WAL) system is a key component of what makes Postgres reliable, and it is also what Compose uses to replicate data to the standby server for failover. If you make a table unlogged when you create it, writes to it will never be copied to the secondary server, so if a failover happens, the secondary server will always come up with that table completely empty.
So, why would you want unreliable tables? The truth is that, under some specific circumstances, you don't care that much.
When to Go Unlogged
Keep in mind that by "unreliable" we don't mean that information will be corrupted or that data types will somehow be less precise, simply that data will be lost after a crash. As bad as it sounds, this is something you may be able to afford.
Think of the following scenarios:
- Large data sets that take a lot of time to import and are only used a couple of times (finance, scientific computing, and even big data).
- Dynamic data that won't be of much use after a server crash anyway, such as user sessions.
- Static data that you can afford to lose and re-import in the unlikely event of a server crash.
Finally, unlike temporary tables, unlogged ones are not dropped at the end of the session or the current transaction. Under normal operation (that is, no crashes) they are, in fact, persistent and behave just like regular tables, only faster.
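To make that contrast concrete, here is a minimal sketch; the table names are made up for illustration:

```sql
-- Hypothetical example: both tables hold disposable session data.
-- The temporary table is dropped automatically at the end of the session;
-- the unlogged one persists across sessions and clean restarts,
-- and is only truncated after a crash or unclean shutdown.
CREATE TEMPORARY TABLE session_cache (sid text PRIMARY KEY, payload text);
CREATE UNLOGGED TABLE user_sessions (sid text PRIMARY KEY, payload text);
```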
How Much Faster?
This is hard to tell, as it heavily depends on both your hardware and your application. Writing WAL to old-fashioned spinning disks (HDDs) is not the same as writing to newer solid-state drives (SSDs), and the kind of writing your application does (many small transactions versus large bulk loads) will also be very important.
That being said, reported speed gains range from 10% to more than 90%; you'd better run your own benchmark to be sure.
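A rough way to get your own ballpark figure from psql, assuming nothing about your schema (the table names and row count here are arbitrary):

```sql
-- Create one regular and one unlogged table with the same shape.
CREATE TABLE bench_logged (id int, payload text);
CREATE UNLOGGED TABLE bench_unlogged (id int, payload text);

-- \timing is a psql meta-command that reports each statement's duration.
\timing on
INSERT INTO bench_logged   SELECT i, md5(i::text) FROM generate_series(1, 1000000) AS i;
INSERT INTO bench_unlogged SELECT i, md5(i::text) FROM generate_series(1, 1000000) AS i;
\timing off
```

Compare the two reported times, and run the inserts a few times each: caches and checkpoints can skew a single run.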
Creating an Unlogged Table
When creating a new table, you just need to add the UNLOGGED keyword to the CREATE TABLE statement; everything else remains the same:
CREATE UNLOGGED TABLE "EUR/USD_ticks" (
    dt      timestamp without time zone NOT NULL,
    bid     numeric NOT NULL,
    ask     numeric NOT NULL,
    bid_vol numeric,
    ask_vol numeric,
    CONSTRAINT "EUR/USD_ticks_pkey" PRIMARY KEY (dt)
);
Making an Existing Table Logged/Unlogged
Since PostgreSQL 9.5, which is available on Compose, you can also alter an existing table to make it unlogged and vice versa.
If the table already exists, you can turn the WAL off for it with an ALTER TABLE statement:

ALTER TABLE "EUR/USD_ticks" SET UNLOGGED;
Turning the WAL back on, making the table behave just like a regular one again, is equally easy. Be aware, though, that this rewrites the whole table into the WAL, so on a large table it can take a while:

ALTER TABLE "EUR/USD_ticks" SET LOGGED;
Listing Unlogged Tables
Postgres keeps track of persistence in the relpersistence column of the pg_class catalog, where the value u signifies "unlogged". It can be queried like this:
postgres=# SELECT relname FROM pg_class WHERE relpersistence = 'u';
┌───────────────┐
│    relname    │
╞═══════════════╡
│ EUR/USD_ticks │
└───────────────┘
(1 row)
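Note that relpersistence = 'u' also matches the indexes (and TOAST tables) belonging to unlogged tables, since those are unlogged as well. To list only the tables themselves, you can additionally filter on relkind:

```sql
-- relkind = 'r' restricts the result to ordinary tables, leaving out
-- the unlogged indexes and TOAST relations that share the 'u' flag.
SELECT relname FROM pg_class
WHERE relpersistence = 'u' AND relkind = 'r';
```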
Unlogged tables are a fast alternative to permanent and temporary tables. The performance increase comes at the expense of losing data in the event of a server crash, which you may be able to afford under certain circumstances. Speedups can be very significant, but you'd better run your own benchmark to know what to expect on your setup.
Image credit: Tony Webster