Compose, Now with More Scale

We're proud of our autoscaling database capabilities at Compose, but we also know some of you out there want to take control of our scaling mechanisms and open up, or close, the resource throttle. If you're using Compose Elasticsearch, Redis, PostgreSQL or RethinkDB, you now have the opportunity to do just that.

Eagle-eyed users who regularly visit their Compose Dashboards will have already noticed that there's been some rearrangement of the cluster overview for database deployments. Below the Current Usage panel there is now a Scaling panel, which lays out scaling options and costs based on your current configuration. For example, in this view we can see an Elasticsearch deployment:

Currently, this deployment has 2GB of disk space and 205MB of RAM. Below that are the available scaling options: 1.5x, 2.0x and 3.0x resources. Depending on what you feel you need, you can go to 3GB, 4GB or 6GB of disk space and 307MB, 410MB or 614MB of RAM. Each option also displays its monthly cost, making it easy to work out the impact of scaling up. Clicking the appropriate button will trigger the scale-up, and progress will be visible in the Jobs view.
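If you want to see where those numbers come from, here's a minimal sketch of the arithmetic. It's not Compose's actual code; it just applies the scale factors to the example deployment, assuming RAM is allocated at roughly a tenth of the disk allocation (a ratio inferred from the 2GB/205MB figures above, not a documented constant).

```python
# Sketch only: reproduces the scaling options shown for the example deployment.
# The 1:10 disk-to-RAM ratio is an assumption inferred from the displayed
# figures (2048MB of disk alongside ~205MB of RAM).

BASE_DISK_MB = 2048        # the example deployment's 2GB of disk
RAM_TO_DISK_RATIO = 0.1    # assumed ratio: ~205MB RAM per 2048MB of disk

for factor in (1.5, 2.0, 3.0):
    disk_mb = BASE_DISK_MB * factor
    ram_mb = disk_mb * RAM_TO_DISK_RATIO
    print(f"{factor:g}x: {disk_mb / 1024:g}GB disk, {ram_mb:.0f}MB RAM")

# 1.5x: 3GB disk, 307MB RAM
# 2x: 4GB disk, 410MB RAM
# 3x: 6GB disk, 614MB RAM
```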

What this process does is increase your available resources ahead of our autoscaling algorithms, which work out your allocated resources based on your disk usage. So there's no need to use manual scaling if your databases are just expanding their disk use; we'll handle that automatically. And if you do use manual scaling to bump up your resources and then fill that storage space, you'll move back onto autoscaling automatically.
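To illustrate that handoff, here's a small, purely hypothetical sketch (not Compose's actual algorithm): autoscaling tracks real disk usage, so a manual bump only matters while it stays ahead of whatever autoscaling would allocate anyway.

```python
import math

def effective_disk_mb(disk_used_mb, manual_allocation_mb, step_mb=1024):
    """Illustrative only: the autoscaler here simply rounds actual usage up
    to the next step (a made-up rule), and a manual scale-up counts only
    while it is still larger than that. Once usage catches up, autoscaling
    takes over again, as described above."""
    autoscaled_mb = math.ceil(disk_used_mb / step_mb) * step_mb
    return max(autoscaled_mb, manual_allocation_mb)

print(effective_disk_mb(1800, 4096))  # manual 4GB bump still ahead -> 4096
print(effective_disk_mb(4500, 4096))  # storage has filled up past it -> 5120
```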

Where this really comes into play is when you want more RAM for your databases. By pushing up the notional disk space available, we're also pushing up the RAM available. If you don't use all of that notional disk space, your databases are running with more RAM for caching and indexing, and you'll likely see a performance improvement as a result.

Elasticsearch users need to be aware that manual scaling like this does restart the nodes on the cluster so that the Elasticsearch processes can be reconfigured to make full use of the additional memory. Other databases are not affected by manual scaling in this way.

And if you have already scaled up and aren't using your allocation of disk, then another option will also be displayed...

This will be the reduce scaling option, and it'll let you move down to a scaling level closer to (or at) the autoscaling levels, allowing you to ratchet down any scaling increases you've made. If you've been relying on the extra allocation of RAM for higher performance, then this will have an impact, as you'll be working with less RAM. But if you scaled up for a particularly intensive task, this is how you would return things to normal parameters.

In some cases, you can also use this to dial back autoscaling. For example, when indexing, PostgreSQL creates large temporary files which the autoscaler will scale up for. At the end of the process, the autoscaler won't shrink things back down for a while: it operates on the basis that whatever pushed the scale-up is likely to recur, and scaling down immediately may cause performance issues. If you have the space available to scale down into, and you aren't going to be repeating the indexing, you can select the reduce scaling option and pull things back on course.

You can now take the controls of the Compose autoscaling engine and tune your resources to your needs. If you have any questions, drop an email to support@compose.io so you can open up the throttle with confidence.