How to get logs with Node.js and the Compose API


At Compose, we've just added API access to download the daily log files of your Compose database deployments. In this how-to, we're going to introduce the logfile API, show you how to locate your deployment, find its log files and download them from Node.js.

The new API endpoints are modeled on our existing backup endpoints but, because there are actually multiple logs per day for each capsule, there are more steps to the process. Regular readers of Compose Articles may find the code here familiar; it's based on the backupomat, a tool we developed for a previous article: How to get backups with the Compose API and Node.js.

Our new log tool is called logloady. It's very similar, so much so that we're going to dive straight into how we list logfiles. If you're new to the Compose API, it's worth reading the backups article first, as you'll need to get an API token for access to your account.

You can follow along with the code in the logloady Github repository. Apart from one minor change, the first thirty or so lines are the same as backupomat. That includes the command to list the available deployments. Then we get to the first new command, designed to answer one question.

Getting the list of logfiles

The first question to answer is what logfiles are available for a deployment. The API endpoint that answers this is /2016-07/deployments/:id/logfiles. Give the endpoint a deployment id and, by default, it'll list the last seven days of logs available. You can also pass parameters specifying when the list of logs starts and ends, as dates in DD-MM-YYYY format. We'll stick with the seven-day view for now.

let listLogs = (deploymentid) => {  
    fetch(`${apibase}/deployments/${deploymentid}/logfiles`, { headers: apiheaders })
        .then((res) => { return res.json(); })
        .then((json) => {

This uses node-fetch to query that endpoint, passing in a deployment id. We're using Node 8.x here with all the syntactic sugar for defining a function which takes deploymentid as a parameter. Let's see the raw JSON that comes back from this call...

    "_embedded": {

The _embedded denotes that these are embedded resources; it's part of the JSON+HAL specification.

        "logfiles": [

All the logfile information is returned in an array of logfile objects. Let's look at the first one.

                "id": "5a541c31c607ae0015b8e648",

Each logfile has its own unique id. We'll be using this later.

                "capsule_id": "5a0c2ea867245f00106fda6f",
                "deployment_id": "5a0c2ea467245f00106fda6b",

The capsule_id identifies which particular part of a database deployment this log belongs to. Compose deployments are made up of capsules, each of which has a specific purpose. These are what you see when you go to the Compose UI overview for a deployment and look at the Topology, though you won't see the capsule ids there. This field and the deployment_id are generally for internal use. We now move on to information about the actual logfile:

                "file_size": 448437,
                "status": "success",
                "date": "2018-01-08",
                "region": "us-east-1",
                "name": "mysql1026.log-2018-01-08.gz",

It's all here: the file_size is the size in bytes of the compressed log file. The status should be success unless there was an issue creating and storing the logfile. The date is the day the log file covers. The region records which cloud region the log was created in, for compliance checking.

And then we get to the name of the log. Here it's made up of the name of the capsule - mysql1026, the extension of the original log file - .log, then a dash and the date it covers - -2018-01-08 - and finally the compression extension - .gz.

The last part of the object is a link to the URL where we could find the logfile and its download link. This is another section in JSON+HAL style:

                "_links": {
                    "self": {
                        "href": "deployments/5a0c2ea467245f00106fda6b/logfiles/5a541c31c607ae0015b8e648{?embed}",
                        "templated": true

And that's an entire logfile entry. There are multiple entries like this for each day, across the seven days or however many days you set the date range to.

Now we've seen that, let's go back to our code. The first thing we do is extract the logfiles array.

            let logfiles = json["_embedded"].logfiles;

The logfiles array comes in an arbitrary order, but it's most useful to see the logs grouped by date. So, let's do that with the lodash groupBy function.

            let logfilesByDate = _.groupBy(logfiles, (v) => { return; });

Now we have the logs grouped, we can step through the groups and print a summary:

            for (let dateset in logfilesByDate) {
                console.log(`Logs for ${dateset}`)
                logfiles = logfilesByDate[dateset]
                for (let logfile of logfiles) {

To make things a bit more readable, we can extract the capsule's name from the log file's file name. Then we can print the essential information. The capsule name, the id of the logfile and the size in KB of the log:

                    let capsulename =".")[0]
                    console.log(`${capsulename} ${} (${Math.round(logfile.file_size / 1024)}KB)`)
                }
            }
        })
        .catch((err) => { console.log(err) });
}

If we run that function - we'll show how later - we get output like this:

$ node logloady.js logs 5a0c2ea467245f00106fda6b
Logs for 2018-01-08  
mysql1026 5a541c31c607ae0015b8e648 (438KB)  
mysql1064 5a54243ed60a890018674a6c (451KB)  
mysql687 5a5418686f45c00010256012 (447KB)

Logs for 2018-01-07  
mysql1026 5a52ca7f553c5300101bcf2e (439KB)  
mysql1064 5a52d39a1ce5cb001074b769 (451KB)  
mysql687 5a52c6a51ce5cb001874ab8d (417KB)

Logs for 2018-01-06  
mysql1026 5a5179698fa270002188aa4a (450KB)  

Getting a logfile

What listing the available logfiles gets us is the logfile id for a specific date and capsule. Say we want the logs for mysql1026 on 2018-01-07: we know we need to refer to id 5a52ca7f553c5300101bcf2e. Let's build the function to retrieve an arbitrary logfile with just the deployment id and logfile id. The API endpoint for this is /2016-07/deployments/:id/logfiles/:logid.

let getLog = (deploymentid, logid) => {  
    console.log("Requesting logfile")
    fetch(`${apibase}/deployments/${deploymentid}/logfiles/${logid}`, { headers: apiheaders })
        .then((res) => { return res.json(); })
        .then((json) => {

We start by getting the specific logfile entry. This code is practically the same as the listLogs code above, except that the endpoint now has a log id appended to it. The returned data is almost exactly the same too, except it isn't wrapped in _embedded and there's one extra field - download_link. This field is generated on demand by the API, and the download links are only live for 24 hours.

Using the fetch package makes downloading that file simple too:

            fetch(json.download_link)
                .then((res) => {
                    console.log(`Downloading ${}`);
                    const dest = fs.createWriteStream(`./${}`);
                    res.body.pipe(dest);
                })
                .then(() => {
                    console.log(`Downloaded ${}`);
                });
        })
        .catch((err) => { console.log(err) });
}

We simply fetch the download link and, in this case, stream it to a file in the local directory, named as per the name field.

Commanding the logs

That's pretty much all there is to the logloady.js program save for wiring these functions up to the command line. That's where yargs comes in.

    .command("deployments", "List deployments", {}, (argv) => listDeployments())
    .command("logs <deploymentid>", "List deployment logfiles", {}, (argv) => listLogs(argv.deploymentid))
    .command("get <deploymentid> <logid>", "Get specific log file", {}, (argv) => getLog(argv.deploymentid, argv.logid))

We just define a command, a description for it, a builder object and a function to handle the command. Yargs manages all the parsing and invoking for us. It's great for assembling small functional commands like this.

Logging out

This has been a quick run through of what is possible with the logfiles endpoints in the Compose API. It's a simple set of commands to find and download log files. You can of course go further, say by automatically checking which logfiles you have and haven't downloaded and downloading the missing ones to keep a complete archive. With these endpoints, you can create the automated logging solution you need.

Read more articles about Compose databases - use our Curated Collections Guide for articles on each database type. If you have any feedback about this or any other Compose article, drop the Compose Articles team a line. We're happy to hear from you.


Dj Walker-Morgan
Dj Walker-Morgan was Compose's resident Content Curator, and has been both a developer and writer since Apples came in II flavors and Commodores had Pets.
