RethinkDB Rescaled


With our recent release of RethinkDB, we noted some concern over its cost, while others asked why the memory allocation appeared relatively low. Well, we've looked at how we offer RethinkDB and reformulated our deployment recipes. RethinkDB now starts as a two-node deployment with 1GB of usable storage and 102MB of RAM per node, bringing the cost down to $21 a month, with each extra gigabyte of storage costing just $12.


Is 102MB enough RAM to run RethinkDB? We say yes, because that's the amount of RAM per node when you have 1GB of storage. Now, you may ask "But how do you come up with 102MB per node?" and we say we know from experience. It's taught us that, as a general rule, we should scale with a 10:1 formula: for every ten units of usable storage, we allocate one unit of RAM. So for 1024MB of storage, we allocate 102MB of RAM per node. Add a gigabyte of storage to make 2048MB and the allocation goes up to 204MB of RAM per node, and so on as you scale up. Add another for 3072MB and you get 307MB of RAM. If you wanted a gigabyte of RAM for your databases, you'd want 10GB of storage.
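If you'd rather see the 10:1 rule as code, here's a minimal sketch. The function name is ours, purely for illustration, and the integer division simply mirrors the rounding in the figures above:

```python
def ram_per_node_mb(storage_mb_per_node):
    """10:1 storage:RAM rule: one MB of RAM per ten MB of usable storage."""
    return storage_mb_per_node // 10

# 1024MB of storage -> 102MB of RAM per node
print(ram_per_node_mb(1024))   # 102
print(ram_per_node_mb(2048))   # 204
print(ram_per_node_mb(3072))   # 307
print(ram_per_node_mb(10240))  # 1024, a full gigabyte of RAM
```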

Those numbers, though, are all per node. With a two-node cluster and 1GB of storage, you actually have 204MB of RAM and 2GB of storage spread equally between the nodes. That means there's 1GB of storage per node, so the maximum size of data set, including indexes and other associated persistent artifacts, that the cluster can handle is 1GB. Any bigger than that and the database would have to be stored as incomplete, non-redundant data sets on each of the nodes, and nobody wants that in production. That's where our scaling comes in: when you exceed that gigabyte, you get your next gigabyte allocated automatically.

We like to keep things simple at Compose, so we price according to the maximum storage of a node. With RethinkDB you start at 1GB and you get 102MB of RAM per node. When you add a gigabyte of storage, we add a gigabyte to all the nodes and scale the RAM allocated to the nodes at the same time, according to that 10:1 rule. This isn't a hard and fast rule; if you think you need more RAM, talk to us. We'll work with you on your memory issues and, if you do need more RAM, we can make the arrangements to adjust your storage:RAM ratio.
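The pricing works out just as simply. A quick sketch, assuming the figures quoted above ($21 a month to start, $12 for each extra gigabyte); the function name is ours, not part of any Compose API:

```python
def monthly_cost_usd(storage_gb):
    """$21/month for the first gigabyte, plus $12 for each extra gigabyte."""
    return 21 + 12 * (storage_gb - 1)

print(monthly_cost_usd(1))  # 21
print(monthly_cost_usd(2))  # 33
print(monthly_cost_usd(3))  # 45
```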

And if you are still wondering whether 102MB of RAM per node, in a 1GB storage configuration, is enough, do remember that's the memory available to your chosen database and its directly supporting applications, running in its own container. We've stripped our application capsules down to make them lean and clean, so you get all the benefit of that memory. If you want to check how your memory is being consumed, refer to the memory metrics built into the Compose interface.

Dj Walker-Morgan
Dj Walker-Morgan was Compose's resident Content Curator, and has been both a developer and writer since Apples came in II flavors and Commodores had Pets.
