Creating an AWS VPC and Secured Compose MongoDB with Terraform


Connecting to Compose MongoDB from Amazon VPC? Using Terraform for orchestration? In this Write Stuff article, Yamil Asusta shows us how to create secure connections to Compose MongoDB using Terraform and Amazon VPC.

Security is often overlooked when teams are busy shipping products. As a result, thousands of databases have been held captive from their operators. Those attacks were possible because none of the available security measures had been applied to the deployments. Luckily for us developers, Compose provides deployments with security defaults that can be further hardened to reduce risk. In this post, I hope to explain some basic security practices for locking down access to a MongoDB deployment from an Amazon VPC.

AWS VPC

Assuming we are starting from scratch, we need to spin up some infrastructure in which we can launch our servers. To do so, we will use one of my favorite tools, Terraform.

Create a main.tf file and add the following:

provider "aws" {  
  region = "us-east-1" # feel free to adjust
}

This tells Terraform which region to target for the operations that follow.

Creating a VPC

Let's proceed with creating a VPC. For the purposes of this post, we will launch just one public subnet and one private subnet using Segment.io's Stack. Add the following to the file:

module "vpc" {  
  source             = "github.com/segmentio/stack//vpc"
  name               = "my-test-vpc"
  environment        = "staging"
  cidr               = "10.30.0.0/16"
  internal_subnets   = ["10.30.0.0/24"]
  external_subnets   = ["10.30.100.0/24"]
  availability_zones = ["us-east-1a"] # ensure it matches the one for your provider
}

Note: Do not go to production with this setup, since a single Availability Zone leaves you prone to downtime if that zone goes down.
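If you do want to tolerate an Availability Zone failure, the same module can spread subnets across several zones. Here is a minimal sketch of a multi-AZ variant; the extra subnet CIDRs are illustrative choices, not values from the original setup:

module "vpc" {
  source             = "github.com/segmentio/stack//vpc"
  name               = "my-test-vpc"
  environment        = "staging"
  cidr               = "10.30.0.0/16"
  internal_subnets   = ["10.30.0.0/24", "10.30.1.0/24"]     # one private subnet per AZ
  external_subnets   = ["10.30.100.0/24", "10.30.101.0/24"] # one public subnet per AZ
  availability_zones = ["us-east-1a", "us-east-1b"]
}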

This "vpc" module will launch an Internet Gateway and attach it to the VPC, thus allowing instances launched in the public subnet to reach the internet (assuming the were assigned a public IP). Additionally, it launches the most important piece, a NAT server. The NAT is launched in a public subnet and is linked to a private subnet which in result, gives instances in the subnet access to the internet. The NAT is provisioned with an Elastic IP and all requests coming from the private subnet will have this IP (see where I'm going with this?).

Making the private subnet available

Now we have one reachable subnet and one that isn't. How do we fix that? Let's create a bastion, which will let us jump from our public subnet into our private one. Add this to the file:

module "bastion" {  
  source          = "github.com/segmentio/stack//bastion"
  region          = "us-east-1" # make sure it matches the one for the provider
  environment     = "staging"
  key_name        = "my awesome key" # upload this in the AWS console
  vpc_id          = "${module.vpc.id}"
  subnet_id       = "${module.vpc.external_subnets[0]}"
  security_groups = "${aws_security_group.bastion.id}"
}

resource "aws_security_group" "bastion" {  
  name        = "bastion"
  description = "Allow SSH traffic to bastion"
  vpc_id      = "${module.vpc.id}"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  lifecycle {
    create_before_destroy = true
  }
}
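As an aside, the key pair referenced by key_name doesn't have to be uploaded through the console; it can be managed by Terraform as well. A minimal sketch, where the public_key value is a placeholder for your own key material:

resource "aws_key_pair" "deployer" {
  key_name   = "my awesome key"
  public_key = "ssh-rsa AAAAB3Nza..." # placeholder; paste your real public key here
}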

The security group of the bastion only allows SSH for inbound. We could tighten it up further, for example by restricting the source addresses as sketched below, but we are going to keep it simple for the sake of example.
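For instance, a variant that only accepts SSH from a single known address might look like this; 203.0.113.10 is a hypothetical office IP, so substitute your own:

resource "aws_security_group" "bastion" {
  name        = "bastion"
  description = "Allow SSH traffic to bastion from a known IP"
  vpc_id      = "${module.vpc.id}"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["203.0.113.10/32"] # hypothetical office IP; replace with yours
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  lifecycle {
    create_before_destroy = true
  }
}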

Let's launch an instance in the private subnet using the following:

resource "aws_instance" "instance" {  
  ami                         = "ami-0b33d91d" # Amazon Linux AMI
  key_name                    = "my awesome key"
  instance_type               = "t2.nano"
  subnet_id                   = "${module.vpc.internal_subnets[0]}"
  vpc_security_group_ids      = ["${aws_security_group.instance.id}"]
  associate_public_ip_address = false

  tags {
    Name = "ComposeIPWhitelisted"
  }
}

resource "aws_security_group" "instance" {  
  name        = "instance"
  description = "Allow SSH traffic from bastion"
  vpc_id      = "${module.vpc.id}"

  ingress {
    from_port       = 22
    to_port         = 22
    protocol        = "tcp"
    security_groups = ["${aws_security_group.bastion.id}"] # only the bastion SG can access me :)
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  lifecycle {
    create_before_destroy = true
  }
}

Notice that the security group for the instance only allows traffic from the bastion's security group.

Once we have this ready, let's add some outputs so we can get going.

output "bastion-ip" {  
  value = "${module.bastion.external_ip}"
}

output "nat-ips" {  
  value = "${module.vpc.internal_nat_ips}"
}

output "instance-ip" {  
  value = "${aws_instance.instance.private_ip}"
}

At this point, your main.tf should look similar to this one.

Terraform time:

$ terraform get # pulls dependencies
$ terraform plan # this will show you the things to be created/destroyed in the next step
$ terraform apply # applies the plan, effectively creating our infrastructure

Once the apply is complete, we can SSH into our bastion using the resulting IP by running:

$ ssh -A ubuntu@bastionIP # assuming we selected the same key pair, -A will forward our keys allowing us to jump with them

From the bastion, SSH into our private instance by running:

$ ssh ec2-user@instanceIP # ec2-user is the default user of Amazon Linux AMI

Configuring MongoDB

Go ahead and provision a MongoDB deployment from the Compose dashboard. Be sure to select Enable SSL access. By enabling this, Compose will provide us with SSL certificates, which allow us to encrypt our data in transit and prevent man-in-the-middle attacks. When the deployment is ready, we will be able to access the deployment dashboard. From here we need to do two things:

  1. Create a user that we can later use to authenticate against the database. To do so, click on the Browser tab, select the admin database and click Add User. Make sure to remember the password as it will not be available from this point forward.
  2. Obtain the SSL certificate we will use to connect to our database. In the Overview tab, there will be a section called "SSL Certificate (Self-Signed)". Its contents are hidden and you will be prompted for your password in order to make them visible. This will be available at all times for your convenience.

Let's tie everything together now!

On our target host, install the MongoDB shell. If you kept the same AMI (Amazon Linux AMI), you can follow this guide. Additionally, create a file called cert.pem whose contents are the SSL certificate found in the dashboard.

You should be able to connect to your MongoDB using this command now:

$ mongo --ssl --sslCAFile cert.pem <your deployment url>/admin -u <username> -p <password>

The data we transmit will be encrypted when we use our certificate. Only one problem is left: our MongoDB is still open for anyone to try to authenticate against. Let's fix that by using the IP Whitelist feature. Back in the dashboard, visit the Security tab. Under the section Whitelist TCP/HTTP IPs, select Add IP. When prompted, add the IP address from the nat-ips output of Terraform (terraform output nat-ips will print it again). Once the feature is active, all connections that do not come from Compose or our designated list will be dropped.

Let's do a quick test! Try connecting to MongoDB one more time from our instance. It should work as intended. Now try accessing it from your local network and tell me how it goes ;)

Yamil Asusta works as an SRE at [Auth0](http://auth0.com). He ships code and infrastructure while occasionally pretending he can write stuff.

Image attribution: Pexels

This article is licensed with CC-BY-NC-SA 4.0 by Compose.
