Easy PostgreSQL performance stats with pgHero and Docker


The first thing you'll find about PostgreSQL is that practically every scrap of information about the database's performance lives in its system tables and views. They're packed full of stats, but those stats aren't easy to interpret. That's where utilities such as the web application pgHero come in.
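For a taste of what those system views contain, here's a raw peek at pg_stat_activity, one of the views pgHero reads for you. This is a sketch: the connection details are placeholders, so substitute your own.

```shell
# Query one of PostgreSQL's built-in statistics views directly with psql.
# The connection string below is a placeholder -- substitute your own.
psql "postgres://user:password@hostname:5432/dbname" -c \
  "SELECT pid, state, query_start, left(query, 60) AS query
   FROM pg_stat_activity
   WHERE state <> 'idle';"
```

Useful, but you'd need to know which of the many views to query and how to read the results; pgHero does that interpretation for you.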

What pgHero gives you

pgHero gives you a browsable dashboard of current performance statistics. There's an easy-to-interpret overview - green is good; anything not green deserves your attention - covering the state of connections, long-running queries, vacuuming health, invalid, duplicate or suggested indexes, and slow queries. There are also views for digging down into various aspects of performance.

Getting pgHero running quickly

Now, you can install pgHero in various ways, but for a quick drop-in to do some diagnostics, I recommend the Docker route. There's already a Docker image ready to roll out there, so, assuming you have Docker installed (if not, see the Docker site for instructions), let's get going...

What you'll need is the connection string for your PostgreSQL deployment. Find it in the Overview on the Compose Console, and don't forget to reveal your credentials so you can copy and paste the whole string. For example, it could look like this:

postgres://admin:SECRETPASSWORD@sl-eu-lon-2-portal.4.dblayer.com:17980/compose

Once you have that, you can assemble your Docker run command. The pgHero Docker page shows the command to run as:

docker run -ti -e DATABASE_URL=postgres://user:password@hostname:5432/dbname -p 8080:8080 ankane/pghero  

So just switch out the DATABASE_URL value for your own, which in our example's case gives:

docker run -ti -e DATABASE_URL=postgres://admin:SECRETPASSWORD@sl-eu-lon-2-portal.4.dblayer.com:17980/compose -p 8080:8080 ankane/pghero  
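If you're scripting this, it can help to assemble the connection URL from its parts first. The values below are the example credentials from this article; substitute your own.

```shell
# Build the connection URL from its component parts.
# These are the article's example values -- substitute your own.
PGUSER=admin
PGPASSWORD=SECRETPASSWORD
PGHOST=sl-eu-lon-2-portal.4.dblayer.com
PGPORT=17980
PGDATABASE=compose
DATABASE_URL="postgres://${PGUSER}:${PGPASSWORD}@${PGHOST}:${PGPORT}/${PGDATABASE}"
echo "$DATABASE_URL"
```

You can then pass `-e DATABASE_URL="$DATABASE_URL"` to the docker run command rather than pasting the string inline.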

The parameters, after run, in this command are:

- -ti: allocate a pseudo-terminal and keep STDIN open, so you can see pgHero's output and stop it with Ctrl-C
- -e DATABASE_URL=...: set the DATABASE_URL environment variable inside the container to your connection string
- -p 8080:8080: map port 8080 in the container to port 8080 on your host
- ankane/pghero: the name of the image to run

Hit return and, if it's your first time running the command, the various parts of the Docker image for pgHero will download. Once that's done, you'll see something like this...

[7] Puma starting in cluster mode...
[7] * Version 3.11.0 (ruby 2.4.2-p198), codename: Love Song
[7] * Min threads: 1, max threads: 16
[7] * Environment: development
[7] * Process workers: 3
[7] * Preloading application
[7] * Listening on tcp://
[7] Use Ctrl-C to stop
[7] - Worker 0 (pid: 9) booted, phase: 0
[7] - Worker 1 (pid: 14) booted, phase: 0
[7] - Worker 2 (pid: 23) booted, phase: 0

That indicates the app is up and running. Bring up your browser and navigate to http://localhost:8080/ to see pgHero's overview.

pgHero Overview

All's well, and from here we can begin exploring pgHero's reports. Take note of the warning at the top of the page: don't run pgHero against a development environment and extrapolate from those results to your production environment. If anything, run pgHero against your production environment to gather statistics, if you can.

A side note: if pgHero fails to connect, you'll likely see only a lot of traceback from the Ruby stack in the terminal. It is, unfortunately, not good at reporting failed connections beyond that.
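When that happens, it can save some log-reading to rule out a plain connectivity problem first. One way, assuming you have the standard PostgreSQL client tools installed, is pg_isready; the host and port here are this article's example values, so substitute your own.

```shell
# pg_isready ships with the PostgreSQL client tools. It reports whether
# the server is accepting connections, without needing valid credentials.
# Host and port are the article's example values -- substitute your own.
pg_isready -h sl-eu-lon-2-portal.4.dblayer.com -p 17980
```

If pg_isready reports the server is accepting connections, the problem is more likely your credentials or database name than the network.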

You can configure historical query and space stats too, allowing you to browse through collected history. Collection is handled by tasks that run every five minutes and once a day; details of those tasks, and the tables needed to support them, are on the pgHero Docker page.
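As a sketch, those collection tasks can be scheduled with cron against the running container. The rake task names come from the pgHero documentation; the container name pghero is an assumption, so substitute whatever name your container actually has.

```shell
# Example crontab entries: capture query stats every five minutes and
# space stats once a day. "pghero" is an assumed container name.
*/5 * * * * docker exec pghero bin/rake pghero:capture_query_stats
0 0 * * *   docker exec pghero bin/rake pghero:capture_space_stats
```

Remember you'll also need the supporting tables in place first, as described on the pgHero Docker page.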

In this form, pgHero makes a great quick-check tool for PostgreSQL, offering results in easily consumable form. Try it today.

Read more articles about Compose databases - use our Curated Collections Guide for articles on each database type. If you have any feedback about this or any other Compose article, drop the Compose Articles team a line at articles@compose.com. We're happy to hear from you.

Attribution: Eric Masur

Dj Walker-Morgan
Dj Walker-Morgan was Compose's resident Content Curator, and has been both a developer and writer since Apples came in II flavors and Commodores had Pets.
