Previously on MongoHQ API: Go command-line utilities and Node.js dashboards were assembled to show how the new MongoHQ API could give visibility to everything from database versions to backups. In this episode, we tour the full range of calls available in the API.
Everything starts with an account
In both previous articles, the applications began by establishing which account to use. A user's login can have a number of accounts associated with it, so the List accounts endpoint makes it easy to browse through them. Accounts are referred to by their account slug string, and if you already have that you can use it to get one account's details. There's also the ability to update an account, though that's currently restricted to changing the account's name.
At the moment, we're still working on multi-account support, but it's already embedded in the API so you can future-proof your API-accessing code. Currently, everyone is also their own account administrator...
If you have administrator rights, there's a lot more you can do with accounts. That includes listing the account's invoices and any coupons that apply. When you want to be sure you know who has been doing what with the account, you can also list an account's activity, which includes information about who created databases and access tokens, what they created, and where they were when they did it.
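As a concrete sketch, the account calls above can be driven with little more than URL building and a token header. The base URL, paths and bearer-token scheme here are assumptions based on REST conventions and the earlier articles; check the API documentation for the real values.

```python
# Hypothetical sketch of the account endpoints; the base URL, paths and the
# bearer-token header are assumptions, not confirmed API details.
BASE = "https://api.mongohq.com"

def auth_headers(token):
    # The earlier articles authorized requests with an OAuth access token
    return {"Authorization": f"Bearer {token}"}

def list_accounts_url():
    # GET: browse all accounts associated with your login
    return f"{BASE}/accounts"

def account_url(slug):
    # GET: one account's details, addressed by its slug
    return f"{BASE}/accounts/{slug}"

def account_activity_url(slug):
    # GET: the account's activity log (admin-only)
    return f"{BASE}/accounts/{slug}/activity"
```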
As we've shown in previous articles, accounts have deployments, not databases. The deployment is a useful abstraction because a modern database, with its replicas and so on, is actually made up of a number of hosts and servers. This all gets wrapped in the deployment, and it's deployments that you generally create and manage. In those previous articles we only looked at how to get all deployments and how to get the version information for the databases within a deployment. As with accounts, you can get the details of one deployment by getting `/deployments/` followed by the account slug and the deployment identifier.
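Putting that together, fetching one deployment's details is just a matter of assembling the path from the two identifiers. A sketch, with the base URL assumed:

```python
BASE = "https://api.mongohq.com"  # assumed base URL

def deployment_url(account_slug, deployment):
    # GET: details of one deployment; `deployment` can be its id or its name
    return f"{BASE}/deployments/{account_slug}/{deployment}"
```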
Deployments themselves can be identified by either an id or a name, but the name is initialized as null. To set that name, you PATCH the same deployment endpoint and pass it a new name. You can then refer to the deployment by name rather than id; the name is usually shorter, which is handy if you are hand-rolling your API calls.
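Renaming, then, is a PATCH to that same URL with the new name in the body. A sketch; the `name` field and JSON body shape are assumptions:

```python
import json

def rename_deployment_request(account_slug, deployment_id, new_name):
    # PATCH the deployment endpoint; returns (method, url, body) for whatever
    # HTTP client you use. The "name" body field is an assumed shape.
    url = f"https://api.mongohq.com/deployments/{account_slug}/{deployment_id}"
    return ("PATCH", url, json.dumps({"name": new_name}))
```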
You can, of course, create your deployment with a name. This is done through one of two endpoints: create elastic deployment or create dedicated deployment. The first creates an elastic deployment when given a name for the deployment, a name for a database (and optionally a username and password for that database's first user), and a location to create the deployment in. The second creates a dedicated deployment, where you can also select the number and capabilities of the nodes in the deployment.
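As a sketch of what the create elastic deployment request might carry (the field names here are assumptions; the real payload shape is in the API docs):

```python
def elastic_deployment_payload(deployment_name, database_name, location,
                               username=None, password=None):
    # Body for POSTing to the create-elastic-deployment endpoint.
    # All field names are assumed, not confirmed.
    payload = {
        "name": deployment_name,    # name for the new deployment
        "database": database_name,  # first database to create in it
        "location": location,       # e.g. a region identifier
    }
    if username is not None and password is not None:
        # optional first user for the database
        payload["username"] = username
        payload["password"] = password
    return payload
```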
Once you have created a deployment, you can add more databases to it using the create a database endpoint, again giving it a name along with an optional username and password for the first user of that database.
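Adding a database to an existing deployment follows the same pattern; again, the path and field names are assumptions:

```python
def create_database_request(account_slug, deployment, db_name,
                            username=None, password=None):
    # POST to the deployment's databases collection; the path and the body's
    # field names are assumed for illustration.
    url = f"https://api.mongohq.com/deployments/{account_slug}/{deployment}/databases"
    body = {"name": db_name}
    if username is not None and password is not None:
        body["username"] = username  # optional first user
        body["password"] = password
    return ("POST", url, body)
```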
We'll come to backups in detail later, but suffice it to say at this point that there's a get deployment's backups endpoint which returns a document describing what backups are available, where to retrieve them and how to restore them.
The last deployment-related endpoint relates to another endpoint we mentioned earlier: it's all well and good being able to find version information about your deployment, but you need to be able to do something about it. That's where upgrade deployment database version comes in; when invoked, it upgrades the database to the next eligible version. It's actually the same URI as the endpoint that gets the version information, but accessed with a PUT rather than a GET. You can still use the GET to check on the progress of the upgrade.
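In code, that's two verbs against one URI. The path segment below is invented for illustration; only the PUT-versus-GET distinction comes from the API:

```python
def version_url(account_slug, deployment):
    # The exact path is a placeholder; the point is that one URI serves both:
    #   GET -> current version info and upgrade progress
    #   PUT -> start an upgrade to the next eligible version
    return f"https://api.mongohq.com/deployments/{account_slug}/{deployment}/version"

def upgrade_request(account_slug, deployment):
    return ("PUT", version_url(account_slug, deployment))

def version_check_request(account_slug, deployment):
    return ("GET", version_url(account_slug, deployment))
```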
Down in the Database
With an account slug and a deployment id or name in hand, we can manage the databases of a deployment. The first thing we'd probably want to do is find out what databases are within the deployment; the List databases endpoint supplies that information, including the id, name and status of each database. If you already have the name or id of the database you are interested in, then you can get a database to retrieve the same information for just that database.
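Sketched as URL builders, under the same path assumptions as before:

```python
BASE = "https://api.mongohq.com"  # assumed base URL

def databases_url(account_slug, deployment):
    # GET: list every database in a deployment, with id, name and status
    return f"{BASE}/deployments/{account_slug}/{deployment}/databases"

def database_url(account_slug, deployment, database):
    # GET: the same fields for one database, by name or id
    return f"{BASE}/deployments/{account_slug}/{deployment}/databases/{database}"
```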
We've previously mentioned create database, but it's worth knowing that it has a counterpart in delete database, which removes a database from a deployment and, if it's the last database, removes the deployment too.
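Deletion reuses the single-database URL with the DELETE verb. A sketch under the same path assumptions:

```python
def delete_database_request(account_slug, deployment, database):
    # DELETE removes the database; if it was the deployment's last database,
    # the deployment goes away with it, so treat this as destructive.
    url = (f"https://api.mongohq.com/deployments/"
           f"{account_slug}/{deployment}/databases/{database}")
    return ("DELETE", url)
```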
Backup to the Future
Being able to manage your backups is good; being able to manage them programmatically through the API is better, and that's what the collection of Backups endpoints enables you to do. At the highest level, you can get information on all the backups related to an account with the List all backups endpoint; all you need to present is the account slug. If you already know the deployment id or name, you can use the list all backups for a deployment endpoint, which we mentioned earlier.
Both calls return the same format of information, with backup ids, times, database names, deployment ids, filenames, sizes and a links array. The links give a relationship and URL for performing various actions, like downloading the backup file for an extra off-site copy or local processing, or restoring that backup into the database.
If you just have the account slug and backup id, you can get the same information through the get one backup endpoint. Note that this just gets the details, not the backup itself.
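Given a backup document, pulling out the action you want is a matter of scanning its links array for the right relationship. A sketch; the `rel`/`url` key names are assumptions based on common link-array conventions:

```python
def link_for(backup, rel):
    # Find the URL for a given relationship ("download", "restore", ...) in a
    # backup document's links array; returns None if that action isn't offered.
    for link in backup.get("links", []):
        if link.get("rel") == rel:
            return link.get("url")
    return None
```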
These backups are mostly generated on a schedule, but if you need a deployment backup right now, what you want is the trigger on-demand backup endpoint. You POST to that endpoint and it lets you know it has started the backup process. If you want to know when it's done, poll the endpoint for listing all backups (waiting at least 15 seconds between polls) to see when the new backup appears. When it's done, you'll have a backup id which you can use to download it.
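The polling side of that can be as simple as comparing backup ids before and after the POST. This helper is a sketch and assumes each backup document carries an `id` field:

```python
def new_backup_ids(ids_before, current_backups):
    # After POSTing to the on-demand backup endpoint, poll the backup list
    # (waiting 15+ seconds between polls) and look for ids we haven't seen;
    # a non-empty result means the new backup has appeared.
    seen = set(ids_before)
    return [b["id"] for b in current_backups if b["id"] not in seen]
```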
The final part of our tour brings us to the historical log endpoints, which allow you to query the deployment logs. The logs can be retrieved for a particular day, grepped for a string, selected from a particular entry number, or taken from between a date range. The selection of parameters allows the one endpoint to be used for accumulating logs on a schedule or exploring them on demand. These historical logs are only available for elastic deployments – dedicated deployments have different infrastructure for this.
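A sketch of building those queries; the parameter names (`date`, `grep` and so on) are assumptions standing in for whatever the API actually accepts:

```python
from urllib.parse import urlencode

def logs_url(account_slug, deployment, **params):
    # Historical logs for a deployment; pass query parameters such as a day,
    # a grep string, a starting entry number or a date range (names assumed).
    base = f"https://api.mongohq.com/deployments/{account_slug}/{deployment}/logs"
    return base + ("?" + urlencode(sorted(params.items())) if params else "")
```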
As this grand tour concludes, we hope that you now have a better feeling for the scope of the new API. We haven't touched on the MongoDB REST API but will be arranging a tour of that soon. Remember to consult our previous two articles for examples of how to authorize your application and call the REST API in Go and Node.js. Remember that email@example.com is your one-stop shop for assistance and any queries about how to use the new API.
If you aren't a MongoHQ user already, why not sign up for one of our Elastic Deployments and put the power of MongoHQ's APIs and auto-scaling MongoDB at your fingertips?