People often ask us how we come up with the prices on our various database deployments. There is an underlying logic to it all but it may not be obvious so in this article we're going to talk about how we build out database deployments and how that relates to pricing. We'll also explain what makes up one of our new MongoDB+ deployments.
What goes in a Compose deployment
For many, one of our database deployments is just "a database" with secured access points available from the internet. But behind the scenes, there's a lot going on. That's because we're committed to making sure you can relax knowing your data is held in a production-quality system. That means high availability and well-provisioned clusters, with non-stop backups and snapshots, all tuned for whichever database you choose to run.
The first part of the Compose recipe is a solid hardware foundation. We may be a cloud-based service, but we have substantial amounts of our own hardware built specifically to serve the demands of the databases we run. That means using, for example, very high-quality SSDs which are locally attached to the systems they serve, rather than attached over a secondary storage network.
All that hardware is built to interconnect optimally with the datacentres where you run your applications. We also don't oversubscribe; you can be assured that we have more than enough capacity to handle all our users simultaneously.
The second part of our recipe is using our software platform to deliver the power of that hardware and infrastructure to you as a database deployment.
The anatomy of a Compose deployment
We've talked in the past about how deployments are made up of capsules, containerized applications each performing particular roles. There are in fact four different major types of capsules.
- Data capsules - These are the heart of most deployments: they run the database servers. A single unit of Data capsule comes with 102MB of RAM and 1GB of storage. MongoDB, Elasticsearch, RethinkDB and PostgreSQL all have two or more of these at their core.
- Memory capsules - In-memory databases like Redis don't need disk, but they do need RAM, so the Memory capsule is a variation on Data capsule which has 256MB of RAM and no extra disk in a single unit.
- CPU capsules - There's often work to be done which just needs CPU to process: things like SSH tunnels, SSL proxies and mongos servers that are expected to be actively doing a significant amount of work. For these there's the CPU capsule. No dedicated RAM or storage is associated with it.
- Utility capsules - Finally, for everything else which just needs to be there and awake: the long-running process, the arbiter, the configuration store or the monitoring code.
So that's Data, Memory, CPU and Utility. Data and Memory capsules are the ones that scale up, and down, to give more or less RAM and storage. They scale in units: one unit of Data is $6.00 a month and one unit of Memory is $6.50, because memory is precious. CPU capsules are a flat $4.50 each a month and Utility capsules are a bargain at $1 each a month.
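Those per-unit prices make deployment costs easy to work out mechanically. Here's a minimal sketch of that arithmetic in Python; the `CAPSULE_PRICES` table and `monthly_cost` function are illustrative names for this article, not anything from Compose's actual platform code:

```python
# Per-unit monthly prices as quoted above (US dollars).
CAPSULE_PRICES = {
    "data": 6.00,     # per unit: 102MB RAM + 1GB storage
    "memory": 6.50,   # per unit: 256MB RAM, no extra disk
    "cpu": 4.50,      # flat rate per capsule
    "utility": 1.00,  # flat rate per capsule
}

def monthly_cost(capsules):
    """Total monthly cost for a list of (capsule_type, count, units) entries.

    For CPU and Utility capsules, which don't scale in units, pass units=1.
    """
    return sum(CAPSULE_PRICES[ctype] * count * units
               for ctype, count, units in capsules)
```

For example, three Data-capsule members at two units each plus one haproxy CPU capsule would be `monthly_cost([("data", 3, 2), ("cpu", 1, 1)])`.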
With that in mind, let's look at a typical Elasticsearch deployment and pull up the overview page – available from the Compose Web UI – for it...
The deployment topology tells you there are three elastic_search capsules configured. They have a role of "member", so they are Data capsules. By default, Elasticsearch data members use two units of Data. So that's $6 x 2 units x 3 members, which comes to $36. Add the haproxy capsule, a CPU capsule, at $4.50 and that comes to $40.50 a month. If we were to scale up to 4GB of disk, only the Data capsules would change: $6 x 4 units x 3 members is $72, plus $4.50 for the haproxy, giving $76.50. Of course, you don't need to work that out yourself, because if you look on the overview page you'll see the scaling panel...
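The Elasticsearch sums above can be spelled out as a few lines of arithmetic; this is just a worked check of the figures in the text, with the variable names chosen for this article:

```python
# Elasticsearch deployment at 1x scale: 3 members, 2 Data units each.
data_1x = 6.00 * 2 * 3   # $6/unit x 2 units x 3 members = $36
haproxy = 4.50           # one CPU capsule
total_1x = data_1x + haproxy

# Scaled to 4GB of disk: only the Data capsules change (4 units each).
total_2x = 6.00 * 4 * 3 + haproxy
```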
And there's "Increase resources 2.0x" with 4GB of disk and 410MB of RAM. If you're wondering about the apparently odd RAM allocation, the default allocation of RAM on Compose is 10% of the disk allocated. Allocate 1024MB of disk and you get 102.4MB of RAM, which is rounded to the nearest MB when displayed. Likewise, 10% of 4096MB is 409.6MB, which rounds to 410MB.
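That 10% rule is simple enough to express directly. This is a sketch of the rule as described above, with a hypothetical function name:

```python
def ram_for_disk(disk_mb):
    """Default RAM allocation: 10% of the disk allocated,
    rounded to the nearest MB for display."""
    return round(disk_mb * 0.10)

# 1024MB of disk gives 102.4MB of RAM, displayed as 102MB;
# 4096MB of disk gives 409.6MB of RAM, displayed as 410MB.
```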
We can do the same exercise with a Redis deployment. If we pull up the overview on a typical one:
This deployment of Redis is at 1x scale and, if you look, there are two redis-type entries marked as members, so that's two Memory capsules. That's $6.50 x 2, $13. Then there's the other redis entry, but that's not a Memory capsule. Its role is marked lightweight and, although it may be running Redis, it's running it in sentinel mode, which makes it a Utility capsule. That's another $1. Finally, there's the haproxy CPU capsule, adding $4.50 and bringing the total up to $18.50.
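As a quick check on the Redis total, here's the same breakdown as arithmetic; again, the variable names are just for this article:

```python
# Redis deployment at 1x scale.
memory_members = 6.50 * 2  # two Memory capsules running Redis = $13
sentinel = 1.00            # Redis in sentinel mode: a Utility capsule
haproxy = 4.50             # one CPU capsule for the proxy
total = memory_members + sentinel + haproxy
```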
MongoDB+ specced out
The previous two deployments are relatively simple. Now we move on to our next-generation MongoDB deployments, currently in beta as MongoDB+. The current release of MongoDB from Compose is uncomplicated; it's the equivalent of three Data capsules, with two of them accessible from the web – hence the $6 x 3, $18, pricing. It's a very effective arrangement, but that simplicity means developers have to ensure their applications handle failover correctly, and it makes it impossible to offer connection options like SSL or SSH tunneling.
MongoDB+ addresses this with a lot more infrastructure. If you looked at the overview for a MongoDB+ deployment you'd see it listed like this:
As you can see, there are a lot of capsules making up a MongoDB+ deployment. Let's work through them, starting with the Data capsules. There are two mongodb/member capsules and one mongodb/backup capsule; these three are the ones which can be scaled up and down, and at the moment they are scaled at 1 unit by default. The backup role is a specialized one which enables backups to take place without stopping the database; it is a member of the same replica set as the member capsules. That gives us $6 x 3, $18, in scalable capsules.
Moving down the list, next are the mongodb/mongos capsules. There are two of them and, as well as routing incoming connections to the replica set, they also provide the SSL-enabled proxy. They are created as CPU capsules, so add $4.50 x 2, $9, to the running total.
Then there's the mongodb/configsvr which provides configuration data to allow the mongos and replica set to efficiently connect. The MongoDB architecture needs at least three of them by design. Each one is run as a Utility capsule, so all three come to another $3. Finally the replica set has an arbiter capsule to look after things there. It's another Utility capsule, so it's another $1.
If you pull those together, $18+$9+$3+$1, you get $31 which is how much a basic MongoDB+ deployment costs. The $13 difference between MongoDB and MongoDB+ is made up entirely of the new supporting infrastructure. As evidence of this, scaling up MongoDB+ x2 just involves an $18 increment for the member and backup capsules. This is the same cost as the current MongoDB to scale up. The mongos, configsvr and arbiter don't need to change and are a fixed cost in a deployment.
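Pulling the MongoDB+ figures together in code makes the fixed-versus-scalable split explicit. This is an illustrative sketch of the breakdown described above, with a hypothetical function name:

```python
def mongodb_plus_cost(scale):
    """Monthly cost of a MongoDB+ deployment at a given scale factor,
    per the capsule breakdown in the article."""
    scalable = 6.00 * 3 * scale  # 2 member + 1 backup Data capsules
    mongos = 4.50 * 2            # two CPU capsules (routing + SSL proxy)
    configsvr = 1.00 * 3         # three Utility capsules
    arbiter = 1.00               # one Utility capsule
    return scalable + mongos + configsvr + arbiter
```

At 1x this gives the basic $31 price, and going from 1x to 2x adds only the $18 increment for the scalable capsules, since the mongos, configsvr and arbiter costs are fixed.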
As you can see, we price our databases with a predictable model. The model also extends to how we price our add-ons, depending on the workload they are under: connectivity add-ons, for example, tend to need just CPU to maintain connections. Bringing it all together means you get production-quality database hosting at a predictable, competitive price. And we get the ability to bring you new database technologies using the same, production-tested platform.
Photo Source: Franck BLAIS CC-BY-SA-2.0