TL;DR: Redis is now available in beta on the Compose platform.
The worst part of most databases is the waiting. That interminable wait between querying the database and the data being pulled off the hard disk. Then there's the wait when you are doing an update, no matter how small, while that update is written to the disk. Even with super-fast SSDs, there's always a non-negligible wait. Those waiting times build up, and when you are handling thousands or millions of updates, they bite into your performance.
Redis to the rescue
We know all about that at Compose, as we've helped our customers tune the best performance out of their databases. One option we've used with customers and in-house is an in-memory database such as Redis, which keeps the entire dataset in RAM for super-fast reads and updates while keeping persistent disk copies of that data for recovery. Redis is designed as a high-performance key/value store, rather than one that stores large, complex data structures with indexes on id and other fields.
In Redis, the key is any binary sequence, from a number or a string to even an image, while the value can be a single number or string, or a list, set, sorted set or hash of number or string values. You don't query the values; you use the keys to access and manipulate the values through a powerful set of operators. The simplicity of the architecture belies its usefulness in building data aggregators, task queues and intermediate stores.
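To give a flavour of that key/value model, here's a toy, in-process sketch in plain Python. It is not Redis itself and doesn't talk to a server; the method names simply mirror the real Redis commands (SET/GET for strings, LPUSH/LRANGE for lists, HSET/HGET for hashes) so you can see how everything hangs off a single flat key.

```python
# Toy stand-in for a Redis server: a flat dict of keys, where each value
# is a string, a list, or a hash (field -> value map). Method names
# mirror the real Redis commands.
class ToyRedis:
    def __init__(self):
        self.store = {}  # every value lives under one flat key

    # String values
    def set(self, key, value):
        self.store[key] = value

    def get(self, key):
        return self.store.get(key)

    # List values: push onto the head, read back a range
    def lpush(self, key, value):
        self.store.setdefault(key, []).insert(0, value)

    def lrange(self, key, start, stop):
        return self.store.get(key, [])[start:stop + 1]

    # Hash values: a field -> value map under one key
    def hset(self, key, field, value):
        self.store.setdefault(key, {})[field] = value

    def hget(self, key, field):
        return self.store.get(key, {}).get(field)


r = ToyRedis()
r.set("user:42:name", "Ada")          # plain string value
r.lpush("tasks", "send-email")        # build a task queue as a list
r.lpush("tasks", "resize-image")
r.hset("user:42", "plan", "beta")     # hash of fields under one key
print(r.get("user:42:name"))          # -> Ada
print(r.lrange("tasks", 0, 1))        # -> ['resize-image', 'send-email']
print(r.hget("user:42", "plan"))      # -> beta
```

The `user:42:name` style of colon-separated keys is a common Redis convention for namespacing, not something the server enforces.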
Redis at Compose
This memory-centric approach really comes into play when, as we discuss in "Redis, MongoDB and the Power of Incremency", you need to do large numbers of increment updates to numeric fields. On disk-centric databases, these tiny operations turn into much bigger write operations, with the associated waiting; with Redis, the update is as fast as the system's main memory. It's a great solution to those scenarios, and they are more common than you might think.
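The increment pattern itself is simple. This toy Python sketch, again not Redis itself, shows the shape of it: each event becomes a single in-memory increment on a named counter, which is what Redis's real INCR and INCRBY commands do server-side, rather than a read-modify-write round trip to a disk-backed row per event.

```python
# Toy illustration of the increment pattern: event counts kept as
# integers in RAM, bumped in place. Mirrors the semantics of Redis's
# INCR/INCRBY commands (which are atomic on the server).
from collections import defaultdict

class ToyCounter:
    def __init__(self):
        self.counts = defaultdict(int)  # key -> integer, all in memory

    def incr(self, key, amount=1):
        # One cheap in-memory update per event; no disk write per call
        self.counts[key] += amount
        return self.counts[key]


counters = ToyCounter()
for _ in range(1000):                     # a thousand tiny events...
    counters.incr("pageviews:/home")      # ...are a thousand RAM updates
counters.incr("pageviews:/pricing", 5)
print(counters.counts["pageviews:/home"])     # -> 1000
print(counters.counts["pageviews:/pricing"])  # -> 5
```

With a real Redis deployment the persistence story is handled separately, with the server periodically writing snapshots or an append-only log to disk rather than paying a disk write on every increment.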
We use Redis in-house at Compose as part of our infrastructure, and we know a lot of you would like to use it as part of your database stack. You could use Redis as your main storage database, but the chances are you'd end up making many compromises to adapt your data model to it. It's best to use Redis as a precise weapon against the woes of waiting, a laser for latency if you will. We couldn't leave Compose users without a weapon that powerful.
Redis for you
That's why we've just launched a beta of Redis on Compose. Now, you can deploy Redis as easily as you've been able to deploy MongoDB, Elasticsearch and RethinkDB. And, like those other databases, Redis on Compose is auto-scaling and automatically backed up, with two nodes and a sentinel for high availability. Where we normally base pricing for our disk-centric databases on disk use, for Redis, being memory-centric, we base it on RAM use. Redis deployments make use of one of the most precious and constrained assets in the cloud: RAM.
- $25 per month gives you a Redis cluster with 256MB of RAM per node and one access portal
- $13 per month gives an extra 256MB of RAM per node
- $52 per month gives an extra 1GB of RAM per node
- $9 per month gives an extra access portal
You can add a Redis deployment to your Compose portfolio by simply clicking on
Add Deployment in your Compose dashboard. If you don't have a Compose account, sign up here to get going with the latest database on Compose - and remember, you can add MongoDB, Elasticsearch and RethinkDB to your solutions too, all in the same simple, effective user interface.