One of the reasons we created Compose Enterprise was to give you more control over how your databases are deployed. That does mean that there's more for you to think about when configuring how you actually connect to your databases.
Where are my databases?
In the previous article on provisioning your cluster, we set up a simple VPC with a public and private subnet and then let Compose Enterprise create a cluster on that VPC. That process placed the three machine instances within the private subnet, able to talk to each other but essentially cut off from the world. This is a good thing as it means you, as controller of the VPC, can take control of how things are configured in terms of access.
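If you want to check that layout from the command line, the AWS CLI can list the subnets in a VPC. This is just a sketch - the VPC ID below is a placeholder for your own:

```shell
# List the subnets in the VPC that hosts the cluster.
# vpc-0abc123 is a placeholder - substitute your own VPC ID.
aws ec2 describe-subnets \
  --filters "Name=vpc-id,Values=vpc-0abc123" \
  --query "Subnets[].{ID:SubnetId,CIDR:CidrBlock,Public:MapPublicIpOnLaunch}" \
  --output table
```

The MapPublicIpOnLaunch column tells you which subnet is the public one.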
What we'll cover here are purely examples of how you could configure access to your databases by applications and by users.
Your own public server
The first step in getting an application connected is to create a server on the public subnet. This will be accessible from the internet and able to connect to the databases in the private subnet. Go to your AWS dashboard and select the EC2 option in the top left. This will take you to a summary of your EC2 machine instances; below the summary, there's a Launch Instance button. Click that and you'll be offered a range of quick start images, access to your own AMI library, the AWS Marketplace and Community AMIs.
For our purposes we want a small Ubuntu machine, so we'll pick the Quick Start - Ubuntu Server 14.04 LTS and click the Select button next to that.
Next we can choose what kind of machine we want hosting our new Ubuntu image. We're going for a t2.micro because this is really just a demonstration.
We then click Next: Configure Instance Details and it's here that we start making this instance part of the VPC that our Compose Enterprise cluster is in. Let's have a look at that:
The important part for us is the Network, Subnet and Auto-assign Public IP.
- Network – The Network field needs to be set to the VPC we put our cluster in. Select it and the UI will update to show relevant details for that VPC.
- Subnet - The selected VPC will have one or more subnets. In our example, we created one private and one public subnet. For this process to work, you will need to select the public subnet.
- Auto-assign Public IP - This is likely to use the default from the subnet, which is to disable public IPs. Select Enable so that the instance gets an IP address you can connect to.
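For completeness, the launch configured above can be sketched with the AWS CLI. All the IDs here are placeholders for your own AMI, public subnet and key pair:

```shell
# Launch a t2.micro Ubuntu instance into the public subnet,
# requesting a public IP. All IDs below are placeholders.
aws ec2 run-instances \
  --image-id ami-0123456 \
  --instance-type t2.micro \
  --subnet-id subnet-0pub1234 \
  --associate-public-ip-address \
  --key-name launchlogin
```

The --associate-public-ip-address flag is the CLI equivalent of setting Auto-assign Public IP to Enable.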
That's all we need to worry about here, so we can click Review and Launch. Details of the instance we are about to launch will be displayed. You'll also see some advice about tightening up your SSH rules - it's worth tightening up the IP range or addresses that can access your new server over SSH. That said, we'll skip that for now and click Launch.
Now you get to select a key pair to use with this new server. We recommend creating a new key pair here rather than reusing an existing one, as those include the keys to the cluster, and good practice suggests giving different functions different keys. Remember to store the created key safely as you will need it. For our examples, we'll save it as a file called launchlogin.pem. Once you've done that, click the checkbox to acknowledge you need your keys and click Launch Instances.
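If you prefer the command line, a key pair can be created and saved in one step. This is a sketch; the key name simply matches the file name we use in this article:

```shell
# Create a key pair named launchlogin and save the private key locally.
aws ec2 create-key-pair \
  --key-name launchlogin \
  --query 'KeyMaterial' \
  --output text > launchlogin.pem
chmod 400 launchlogin.pem   # ssh requires restrictive permissions on key files
```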
Now AWS will go and provision our new machine instance. This will take a little while, so keep an eye on the EC2 dashboard until it shows a state of "Running". You may want to click on the name field and give the instance a name you can recall. We'll do that with this server and call it "sshloginbox". You could, at this point, connect into the system with SSH, but attempting to connect to the databases from it would fail. We need to change security groups.
Security Groups make it easy to manage access between systems within your VPC. With Compose Enterprise, we create three different groups, named in the form [cluster name]-[role]-[random number]. The one we are concerned with is the ComposeEnterpriseAccess role, as it grants anything in that group the ability to talk to the ports on the hosts used for database and web/REST access.
To add our machine to that group, select it in the EC2 Dashboard, select Actions in the menu button at the top, then select Networking and within that, click Change Security Groups.
A new panel will pop up listing the security groups available to this instance's interfaces. Look for the one with the ComposeEnterpriseAccess role for your cluster and select its checkbox to add it. Then click Assign Security Groups, and we're done.
Making the connections
Now we're ready to connect to our server. Before we do, though, we'll pop over to the Compose UI and deploy ourselves a PostgreSQL database in our cluster. When it's done, we'll be looking at a screen something like this.
We'll now open up a new tab and head back to the AWS EC2 dashboard. This is where we can get details on how to connect. Select our Ubuntu instance, "sshloginbox" and then click Connect at the top of the page. Instructions, like this, will pop up on the page:
Open a terminal and use the SSH command as shown (or use the Java connection if you prefer; we just prefer a terminal). Remember you'll need the key pair you created earlier to make the connection, and remember to set the key file's permissions to 400 (chmod 400 launchlogin.pem) so that ssh will accept it - ssh refuses private keys whose permissions are too open. We'll be asked if we want to remember this host, we'll say yes, and then we are logged in to the host we created on the public subnet.
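Put together, the connection from your own machine looks something like this - the IP address is a placeholder for your instance's public IP as shown in the EC2 dashboard:

```shell
chmod 400 launchlogin.pem                    # ssh refuses keys that others can read
ssh -i launchlogin.pem ubuntu@203.0.113.10   # placeholder public IP - use your own
```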
From here, we can connect directly to our databases as we've got onto a machine which shares a security group with the Compose Enterprise hosts.
Plugging into PostgreSQL
Now that we are connected to 'sshloginbox', we want to connect to that PostgreSQL database. First we'll have to do a little preparation on the system. Run sudo apt-get update and sudo apt-get upgrade to get everything current - it's just good practice - and then we'll want to install PostgreSQL locally. Why? Because it's the easiest way to get the psql command; we don't want any of the rest of the database installation, just that one command.
The process to do that is, as you can see, just running sudo apt-get install postgresql. If it says it is starting up a local PostgreSQL instance, run sudo service postgresql stop to shut that down as it's completely unnecessary - our database is running inside the cluster.
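Here are the preparation steps in one place. As an aside, on Ubuntu the postgresql-client package alone provides psql without installing a server, which would avoid the need to stop anything - but we'll follow the route described above:

```shell
sudo apt-get update
sudo apt-get upgrade -y
sudo apt-get install -y postgresql
sudo service postgresql stop   # the local server isn't needed; our database is in the cluster
```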
We are now ready to connect to PostgreSQL. Remember the deployment screen from when we created the PostgreSQL deployment? Head back to that. You'll need to reveal the admin user's password by clicking the Show/Change link in the credentials section. When it's displayed, you'll also notice the Command Line section has replaced the "[username]" placeholder with admin. You can copy and paste that entire command line into your terminal and hit return. It'll ask you for a password, which you can now see in the credentials section. And now we are logged in to the Compose PostgreSQL deployment.
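The pasted command line will look roughly like this - the host, port and database name here are placeholders; use whatever your deployment's Command Line section actually shows:

```shell
# CLUSTER_HOST, PORT and DBNAME are placeholders from the Compose UI.
psql "host=CLUSTER_HOST port=PORT dbname=DBNAME user=admin"
```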
The observant will note there's a warning there; the PostgreSQL 9.3 installed by Ubuntu 14.04 is a version behind the 9.4 installed on Compose deployments. It's not a problem for proving connectivity though.
Anyway, now we have a machine on the public subnet, visible on the internet, which can only be connected to via SSH with a key, and which can talk to the Compose databases - any of the Compose databases. This machine could play host to applications or manage access to the various databases, and it could be one of many. Its virtual proximity to the databases means fast responses. We'll finish up with a diagram to help you visualise what's been configured here:
Now go and enjoy your newly enabled database access.