Compose is now available on Google's Cloud Platform. Being a different platform, the configuration for launching your first Compose Enterprise cluster is different too. In this article, we'll walk you through what you need to do to create your own database powerhouse in your private cloud.
Before creating a Compose Enterprise cluster, you will need to do some preparatory work in the Google Cloud. We'll assume at this point that you've created a project under your Google Cloud account and enabled billing on it - without that Google will not let you enable the APIs used by Compose to create the hosts needed for your cluster.
Google Cloud Seeding
You should be here at your Dashboard on Google Cloud. There's a "Service Account" we need to create so select the menu in the top left of the dashboard:
This will slide in the main Google Cloud Platform menu. If you are wondering where something is in Google Cloud Platform, head to this menu and you should be able to filter it down. What we want is at the top though.
And select IAM & Admin.
Then select Service accounts...
Followed by clicking on the Create service account label. This will get you here:
Enter a name for your service account. Now, you will need to select roles for this new account. Specifically you will need to grant "Editor", "Service Account Actor", "Storage Admin" and "Storage Object Admin" for the new user.
You'll find the "Editor" and "Service Account Actor" roles under the Project option in the Role drop down:
Select both roles in this menu.
For the next two roles, click on the Role drop down and then scroll the list till you see Storage. Click on that and a pop-up menu opens up.
Click on "Storage Admin" and "Storage Object Admin" to add their roles.
For the rest of the form, make sure you check the Furnish a new key check box. The form will expand out like so:
Leave the form set to JSON; this will ensure that the keys you need to access your project and resources will be transferred to you as a JSON file. You are now ready to click Create.
Now watch out for that key file; it's a blink-and-you'll-miss-it download. Make sure that file is kept safe and we can move on.
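One simple precaution worth taking is to restrict the key file's permissions so only your user can read it. A sketch, using a hypothetical filename (your downloaded key will have a different, generated name):

```shell
# Stand-in for the downloaded key file (the real one arrives from Google)
touch my-project-key.json

# Restrict access: owner read/write only
chmod 600 my-project-key.json

# Verify the permissions
stat -c '%a %n' my-project-key.json
```

On Mac OS X, use `stat -f '%p %N'` instead of the GNU `stat -c` form shown above.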
Creating the cluster on Compose
Head over to your Compose console and select the Enterprise button from your left hand side bar. Now click the Create Cluster button on the right. Enter a name for your new Enterprise cluster at the top and select "Google Cloud Platform" from the options.
The page will then expand with this form:
Most of the values for this form are in the JSON file we downloaded when the service account was created. Open that up in your preferred editor, then copy each value, less the double-quotes, into the matching form field: project_id to project id, private_key_id to private key id, private_key to private key, client_email to client email and client_id to client id. Watch out for the private_key value, it's a long one.
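If you'd rather not hand-pick values out of the file, the fields can be printed from the shell with a little Python. This is a sketch with placeholder values — the filename and every value below are hypothetical stand-ins for your real downloaded key file:

```shell
# Write a stand-in key file with placeholder values (yours comes from Google)
cat > service-account.json <<'EOF'
{
  "type": "service_account",
  "project_id": "example-project",
  "private_key_id": "abc123",
  "private_key": "-----BEGIN PRIVATE KEY-----\nPLACEHOLDER\n-----END PRIVATE KEY-----\n",
  "client_email": "compose@example-project.iam.gserviceaccount.com",
  "client_id": "999999999999"
}
EOF

# Print each short field the Compose form asks for
python3 - <<'EOF'
import json

with open("service-account.json") as f:
    key = json.load(f)

for field in ("project_id", "private_key_id", "client_email", "client_id"):
    print(field, "=", key[field])
EOF
```

The long private_key value is best copied directly from the file in your editor, as noted above.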
The last two fields are not in the JSON document. The region is the Google Cloud Region you want your hosts deployed to, such as
us-east1 - find out more about Regions and Zones in the Google Cloud Platform help. The other field, bucketname, is a name for the storage bucket that will hold your backups - enter whichever name you prefer here.
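Note that Google Cloud Storage bucket names have to follow Google's naming rules — broadly: globally unique, lowercase letters, digits, dashes, underscores and dots, 3 to 63 characters, beginning and ending with a letter or number. A rough pre-flight check might look like this (the bucket names below are hypothetical, and this is a simplified approximation of the full rules):

```shell
# Rough check against the common GCS bucket-name rules:
# lowercase letters, digits, dots, dashes, underscores;
# 3-63 characters; must start and end with a letter or digit.
check_bucket_name() {
  echo "$1" | grep -Eq '^[a-z0-9][a-z0-9._-]{1,61}[a-z0-9]$'
}

check_bucket_name "exemplum-backups" && echo "looks valid"
check_bucket_name "Bad_Name!" || echo "rejected"
```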
Finally, there is a slider which shows how much Compose will charge per month for provisioning a cluster that supports that much total RAM. Each cluster is made of 3 hosts so 24GB, for example, represents 8GB of RAM on each host. You will, of course, be provisioning and charged directly by Google for those hosts. In our walkthrough, we'll go with 24GB and then click Create Cluster.
Getting your own deployment configuration
The cluster will be created on Compose's side at this point, but the hosts on the Google side have yet to be created and connected to Compose. That's what this next page is about:
The first step is to create a Google Deployment Manager configuration which can do the creation work for you. This is a YAML file which Compose will create according to your needs. First, we select where we want to deploy these hosts. For example, if you entered
us-east1 in the preceding Create Cluster form, you should select "Eastern US".
You will need to select the size of host for your deployment. Google offers a number of predefined standard hosts and high-memory hosts. Select one that matches up with the memory that you selected when you created the cluster. For example, if we selected 24GB when creating the cluster, that equates to 8GB of RAM per host, and looking at the predefined machine types, the nearest match is the "n1-standard-2" with two virtual cores and 7.5GB of RAM. We'll select that on the page.
The last item is the amount of storage to initially allocate to your deployments. The slider ranges from 512GB to 3TB. Select how much you'd like and we're ready to make the configuration file.
Click on Download Configuration and a file called compose-enterprise.yaml will be downloaded to your system.
Before you move on to the next step, you need to activate an API, the Google Cloud Deployment Manager V2 API. You can navigate to this through the API option in the Cloud dashboard, or simply visit the API page and click on Enable.
Starting the cluster deployment on Google
This file needs to be used on a machine with the Google Cloud SDK installed. This can be a workstation, or you can make use of the Google Cloud Shell, a remote shell running on the Google cloud.
In the case of having the SDK installed locally, follow the appropriate install instructions for your operating system. That process will include setting up your account and connection to Google Cloud so you will need your account details to hand. If you only have one project on Google Cloud, the SDK will automatically select that. When the process is complete, you should be able to run
gcloud auth list to see which accounts are configured and active.
If you're using the Google Cloud Shell, select the "prompt" icon in the top menu of the Google Cloud Platform dashboard and it will connect, through the web browser, to a shell on a preconfigured system. What isn't in that shell is the configuration file we need. There are a number of routes to getting it over, but the quickest is simply to cut and paste it into an editor. On Mac OS X, you can do
cat compose-enterprise.yaml | pbcopy or on Linux, you can install
xclip and then run
cat compose-enterprise.yaml | xclip -selection clipboard. Then you can go to the Google Cloud Shell and run the nano editor with
nano compose-enterprise.yaml, paste the clipboard into the editor and then exit with control-X then y then return. With the file in place we can continue.
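Cut-and-paste through a browser shell is easy to fumble, so it can be worth comparing checksums of the file on both ends afterwards. A sketch — using a stand-in file with hypothetical contents, since the real compose-enterprise.yaml comes from Compose:

```shell
# Stand-in for the downloaded configuration (contents are hypothetical)
printf 'resources:\n- name: example-host\n' > compose-enterprise.yaml

# Run this on your workstation, and again in the Cloud Shell after pasting;
# the two digests should match exactly.
sha256sum compose-enterprise.yaml
```

On Mac OS X, `shasum -a 256 compose-enterprise.yaml` produces the same digest.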
The command displayed in step two on the Compose Hosts page now needs to be executed. It assumes that you will be in the same directory as the file you downloaded (or copied over). If it isn't, change the file name that comes after the
--config to point at your downloaded file. If you get an error like:
ERROR: (gcloud.deployment-manager.deployments.create) ResponseError: code=403, message=Access Not Configured. Google Cloud Deployment Manager API has not been used in project 99999999999 before or it is disabled.
Then go back to the Enable APIs step above, do that and retry the command. For illustration, what you should see is something like this, only with your own names in it:
[~] gcloud deployment-manager deployments create exemplumcluster --config Downloads/compose-enterprise.yaml
Waiting for create operation-1470301061838-5393b2480f4b1-e795a8d6-47a1fe4b...done.
Create operation operation-1470301061838-5393b2480f4b1-e795a8d6-47a1fe4b completed successfully.
NAME                              TYPE                 STATE      ERRORS
exemplumcluster-disk-0-data       compute.v1.disk      COMPLETED
exemplumcluster-disk-0-swap       compute.v1.disk      COMPLETED
exemplumcluster-disk-1-data       compute.v1.disk      COMPLETED
exemplumcluster-disk-1-swap       compute.v1.disk      COMPLETED
exemplumcluster-disk-2-data       compute.v1.disk      COMPLETED
exemplumcluster-disk-2-swap       compute.v1.disk      COMPLETED
exemplumcluster-image             compute.v1.image     COMPLETED
exemplumcluster-instance-0        compute.v1.instance  COMPLETED
exemplumcluster-instance-1        compute.v1.instance  COMPLETED
exemplumcluster-instance-2        compute.v1.instance  COMPLETED
exemplumcluster-network           compute.v1.network   COMPLETED
exemplumcluster-network-capsules  compute.v1.firewall  COMPLETED
exemplumcluster-network-udp-4789  compute.v1.firewall  COMPLETED
[~]
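A quick way to confirm the deployment came up cleanly is to check that every resource reports COMPLETED and nothing appears in the ERRORS column. As a sketch — assuming you saved a copy of the gcloud output to a file (the abbreviated stand-in below has only three of the thirteen resources a real run produces):

```shell
# Stand-in for the saved gcloud output (abbreviated; a real run lists 13 resources)
cat > deploy-output.txt <<'EOF'
exemplumcluster-disk-0-data compute.v1.disk COMPLETED
exemplumcluster-instance-0 compute.v1.instance COMPLETED
exemplumcluster-network compute.v1.network COMPLETED
EOF

# Count resources that reached the COMPLETED state
grep -c 'COMPLETED' deploy-output.txt
```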
The cluster is now being deployed and after a few minutes, reloading the Hosts page will show you that the initialisation is taking place:
It should take around 20 minutes for this process to complete as each element of the Google cluster meshes with Compose's cluster management. After 20 minutes, refreshing the page should show the cluster as ready to run:
The first database deployment
Your first database deployment can now be done. Click on Create Deployment and you'll see the Compose database selection page. Select any database and you'll see the form for deploying your database, with one difference:
There's one difference from the default deployment page. Because we have an Enterprise cluster, the Create Deployment On option appears and defaults to the Enterprise cluster. It's still possible to select Compose Hosted databases, but they are charged separately from the Enterprise cluster; the interface defaults to the Enterprise cluster to avoid that. Enter the name for your deployment, select your options and configure your initial deployment resources. On Enterprise, the current default minimum is a configuration with 1GB of RAM. Once done, click Create Deployment and Compose will provision your database.
It's at this point in the configuration process that you have a choice. At Compose, we understand that Compose Enterprise customers will have different security requirements and, rather than open up ports to your cloud infrastructure automatically, we give you the opportunity to apply your own security procedures and processes.
Briefly, the Compose hosts will need to be accessible from wherever you are administering them. You can configure a VPN or SSH tunnelling to achieve this. Within the network, enable your access host to pass TLS traffic to and from the hosts; this should cover most databases' requirements. Applications configured within your project will require firewall rules that allow them to connect to the Compose database hosts. The internal IP addresses of the hosts are mapped to *.compose.direct DNS addresses.
That said, we also know that users may just want to quickly configure a VPN to access their databases. In that case we offer the following guide to creating an IPSEC VPN with the least steps possible.
Creating the VPN instance
The first step in this process is to create a machine instance that will run your VPN software. Go to the Google Cloud Platform console and select Compute Engine from the products menu. Select VM Instances from the sidebar and then select Create Instance from the top list of options.
Give the new instance a name – it's mostly decorative – we'll call ours
vpngateway – then select a zone for this instance to live in; you can accept the default offered if you wish or you can set it to a zone in the region where you placed your Compose Enterprise cluster. Generally for administration you won't need a whole dedicated CPU to handle the VPN load, so in Machine Type select Micro to reduce the cost of this new node. For the boot disk, click Change and select Ubuntu 14.04 LTS.
Then carry on down the page till you hit the Management, disk, networking, SSH keys link. Click that to reveal the options underneath. The first screen that is revealed will be Management. Click in the Tags field and enter
vpn. We'll need that tag when we set up the firewall rules. Now select Networking.
This is where we set up this instance to be our gateway between the outside world and our Compose cluster. The Network field should be set to the network that was created when we created the cluster – in our example, we named the cluster
exemplumcluster so the network is
exemplumcluster-network so we select that. Set the External IP to "New static IP address". A dialog will pop up asking you to reserve an IP address with a name – we'll use
vpnip for a name – enter a name and click Reserve. Finally set IP Forwarding to On and click Create.
The display will now return to the VM Instances dashboard with an extra entry and after a little while, our new node will be deployed and it'll show an SSH button next to it. It's time to log into our gateway to configure it.
Installing the VPN software
Click that SSH button and Google will start a session to the VPN gateway. There are a lot of ways you could enable this as a gateway; we're going to use one of the quickest and simplest ones we've found, hwdsl2's setup-ipsec-vpn. This is a script which automatically configures the system to run an IPsec VPN and it can be run with no user intervention whatsoever - see the installation instructions for alternative ways of setting it up. For our configuration needs, all we need to do is run this:
wget https://git.io/vpnsetup -O vpnsetup.sh && sudo sh vpnsetup.sh
Hit return and watch as the script downloads and builds the required code into a VPN. When it finishes, it'll display something like this:
================================================
IPsec VPN server is now ready for use!
Connect to your new VPN with these details:
Server IP: 220.127.116.11
IPsec PSK: BqSfZg8qcNFDjLAc
Username: vpnuser
Password: M7J6Bt3EyCmwPZbM
Write these down. You'll need them to connect!
Important notes: https://git.io/vpnnotes
Setup VPN clients: https://git.io/vpnclients
================================================
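If you'd like those credentials in shell variables for later scripting, they can be parsed straight out of the saved output. A sketch, reusing the example values above — your real ones will differ, and the saved filename here is hypothetical:

```shell
# Stand-in for the saved installer output (example values from above)
cat > vpn-creds.txt <<'EOF'
Server IP: 220.127.116.11
IPsec PSK: BqSfZg8qcNFDjLAc
Username: vpnuser
Password: M7J6Bt3EyCmwPZbM
EOF

# Pull each value into a shell variable
VPN_SERVER=$(awk -F': ' '/^Server IP:/ {print $2}' vpn-creds.txt)
VPN_PSK=$(awk -F': ' '/^IPsec PSK:/ {print $2}' vpn-creds.txt)
VPN_USER=$(awk -F': ' '/^Username:/ {print $2}' vpn-creds.txt)

echo "server=$VPN_SERVER user=$VPN_USER"
```

Treat any file holding these values as a secret, just like the service account key earlier.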
That bit about writing them down - do it, then exit from the SSH session. These are your IPsec VPN credentials. The VPN is running, but there's still a step to go.
Opening the Firewall
We need to allow the traffic to flow from the outside to the VPN and to allow TLS traffic to go between the VPN and the hosts. This can be done from the GCP console. Go to the Networking product page and you'll see the general networking overview. There will be a default network, at least, and the network for the cluster – in our example
exemplumcluster-network. Select the cluster's network and you'll now see this:
We can add firewall rules here simply by clicking on Add firewall rule which brings up this form:
First, the incoming rule for the VPN. Give it the name
vpn-rule and select "Allow from any source (0.0.0.0/0)" in the source filter. Then, in the Allowed protocols and ports field put
tcp:1701; udp:4500; udp:500
This allows TCP/IP traffic on port 1701 and UDP traffic on ports 4500 and 500.
In the Target tags field, enter
vpn, the tag allocated to the instance when it was created earlier. This will lock the rule down to traffic between the outside world and the VPN host. Click Create and the rule will be applied.
That lets the traffic from the VPN in. Now we need to enable TLS connections within the cluster. Click Add firewall rule again. Name this rule "databrowser". The Source Filter will need to be set to
Subnetworks and when you do that, the form changes to allow you to enter those subnetworks:
We need all the subnetworks in this case, so click Select all and Ok. In the Allowed protocols and ports enter:
We won't be setting any target tags as this rule will apply to all systems in the cluster. Click Create and that should make your cluster's VPN connection ready to use.
Configuring the incoming connection
How you set up your incoming connection will depend entirely upon your operating system. Recall that when the connection credentials were generated, a few URLs were included. Specifically, https://git.io/vpnclients gives directions for creating a client VPN connection on Windows, Linux, Mac OS X, iOS and Android. We'll use Mac OS X as an example here. As per the instructions at that link, go to System Preferences and then to the Network section, click on the + at the bottom of the interface list to add an interface and select VPN in the drop-down that appears. Diverging slightly from the instructions, select Cisco IPsec as the VPN type. Click Create to make the network interface and you'll return to the Network screen with the new interface selected.
Now we can fill the details for our VPN server connection. From the information we recorded earlier...
- enter the Server IP into the Server Address field
- enter the Username into the Account Name field
- enter the Password into the Password field
- to use the IPsec PSK
- click Authentication Settings
- select Shared Secret
- enter the IPsec PSK into the Shared Secret field
- click Ok
- click Apply
- click Connect
Testing the connection
The VPN should now be configured and connected. To check that it's working, try selecting the data browser in any database that has a browser option. The data browser is integrated into your cluster and seamlessly blends with the Compose console; if it appears, the VPN is working. For any database with an HTTPS web UI (e.g. RethinkDB or RabbitMQ), you can also try connecting to its admin UI (details are in the Compose console for deployed databases).
You now have a Compose cluster running on the Google Cloud Platform, complete with VPN access. You can deploy new compute instances into the Google Cloud project to run your application and connect directly to those databases, or create secure tunnels or SSL connections to remote applications. The choice is yours with Compose Enterprise.