Go and Compose - MongoDB, Elasticsearch and PostgreSQL

How to connect to every Compose database using Go: This is the first stop on the Compose Grand Tour, which shows you the drivers, code, and everything you need to get going with Go and Compose MongoDB, Elasticsearch, and PostgreSQL.

Compose's Grand Tour project sets out to provide an example of connecting to all of the databases of Compose. This includes using TLS/SSL, self-signed certificates, and any other appropriate good practice to get a more secure connection. In this first part, we'll look at MongoDB, Elasticsearch and PostgreSQL. In future parts, we'll be covering Redis, RethinkDB, RabbitMQ, etcd, Scylla and MySQL.

But first, let's look at what we like to call "The Example".

The Example

From experience, we know a good example which leans on familiar code is often the best way to get up and running quickly. With that in mind, for each Grand Tour language, there's "The Example". This is the starting point for every other example: a small web application which serves up a word and definition list and lets you add new definitions to it. The web front end is managed entirely in JavaScript, client side.

This allows the server-side code to implement the GET and PUT REST calls that work with the data. The GET method returns all the words and definitions currently stored. The PUT method receives a form, extracts a word and definition from it, and stores them. In "The Example", that storage is just an array; there's no database. Each subsequent example puts in the code needed to back it with a particular database.

The Go version of The Example is quite compact and uses only the standard Go HTTP library. Browse the code; you'll see it in all the following examples. For all the databases we'll cover, you'll find the connection strings and certificates on the Compose console's overview for that particular database.
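As a rough idea of its shape, here's a minimal sketch along those lines; the route, port, and handler names here are illustrative, and the real code in the repository is more complete:

    package main

    import (
        "encoding/json"
        "net/http"
    )

    type item struct {
        Word       string `json:"word"`
        Definition string `json:"definition"`
    }

    // With no database yet, a slice stands in for storage
    // (ignoring concurrency for brevity).
    var items = []item{}

    func wordHandler(w http.ResponseWriter, r *http.Request) {
        switch r.Method {
        case "GET":
            // Return every stored word and definition as JSON.
            json.NewEncoder(w).Encode(items)
        case "PUT":
            // Extract a word and definition from the form and store them.
            r.ParseForm()
            items = append(items, item{r.Form.Get("word"), r.Form.Get("definition")})
        }
    }

    func main() {
        http.HandleFunc("/words", wordHandler)
        http.ListenAndServe(":8080", nil)
    }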

Now, let's dive in with MongoDB.

MongoDB

The Code: You'll find the MongoDB code for the Grand Tour in the example-mongodb directory.

The Driver: The mgo driver from Labix has established itself as the de facto Go driver. It is very much written in a Go idiom.

The Connection: The first step in getting a connection is to acquire the self-signed certificate. Compose MongoDB's TLS uses only self-signed certs to verify that the app is talking to the right server.

Using a self-signed certificate involves creating a tls.Config structure populated with a root CA pool for verification. That pool just contains the certificate read from the file pointed to by the PATH_TO_MONGODB_CERT environment variable.

    roots := x509.NewCertPool()
    if ca, err := ioutil.ReadFile(os.Getenv("PATH_TO_MONGODB_CERT")); err == nil {
        roots.AppendCertsFromPEM(ca)
    }
    tlsConfig := &tls.Config{}
    tlsConfig.RootCAs = roots

Now that we have a certificate, we get the connection string, as supplied by Compose MongoDB, from the environment. We assume it's been set using the full string provided. Here we get to work around an mgo misfeature:

connectionString := os.Getenv("COMPOSE_MONGODB_URL")  
trimmedConnectionString := strings.TrimSuffix(connectionString, "?ssl=true")  

MongoDB expects ?ssl=true on the end of URLs to denote that the driver needs to set up TLS/SSL. With mgo, the TLS/SSL dance is a little more complex and the mgo driver isn't keen on seeing that parameter in the URL, so we strip it off. There's still more to do though. We need to parse the connection string, swap in a custom dial function which uses our TLS configuration, and get the driver to connect with that.

dialInfo, err := mgo.ParseURL(trimmedConnectionString)  
dialInfo.DialServer = func(addr *mgo.ServerAddr) (net.Conn, error) {  
    return tls.Dial("tcp", addr.String(), tlsConfig)
}
session, err = mgo.DialWithInfo(dialInfo)  
defer session.Close()  

(Note that we've skipped Go error checks for clarity in this article; you'll find them in the Grand Tour source code though).

The Read: This is deceptively simple. In "The Example" there's a struct type called item which has tags to help it get marshaled into JSON. For MongoDB and mgo, we add some bson: tags to map the struct to a MongoDB record:

type item struct {  
    ID         bson.ObjectId `bson:"_id,omitempty" json:"_id,omitempty"`
    Word       string        `bson:"word" json:"word"`
    Definition string        `bson:"definition" json:"definition"`
}

This will take care of most of the work for us. Thanks to omitempty, if we leave the ID field unset, MongoDB will autogenerate the _id. Now, back to the actual read.

First, make sure you get the collection you want:

        c := session.DB("grand_tour").C("words")

Then all we need to do is tell mgo to find all the items in the collection:

        var items []item
        err := c.Find(nil).All(&items)

And we're done. The Find can take search terms, but for now we've got all the items, unmarshaled into our slice.
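That same Find is where query documents would go. As a quick sketch, using the bson package's M map type and a hypothetical search word:

    var matches []item
    // Only return documents whose word field is exactly "example".
    err := c.Find(bson.M{"word": "example"}).All(&matches)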

The Write: As you might guess, this is equally simple. Make an item and write it to the collection:

c := session.DB("grand_tour").C("words")  
newItem := item{Word: r.Form.Get("word"),  
                Definition: r.Form.Get("definition")}
err := c.Insert(newItem)  

Now that you're connected to Compose, you can start exploring the mgo driver's capabilities.

As with all these examples, this is the simplest possible connection. Especially if you're handling many concurrent requests, you'll want to take steps to protect shared resources such as the DB session by copying it; that's largely a discussion for another time.
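As a quick taste, though, the usual mgo pattern is to Copy() the session for each request and Close() the copy when done; a sketch, not the example's actual code:

    func wordsHandler(w http.ResponseWriter, r *http.Request) {
        // Copy() gives this request its own session from mgo's pool;
        // Close() hands it back when the handler finishes.
        s := session.Copy()
        defer s.Close()
        c := s.DB("grand_tour").C("words")
        // ... use c exactly as before ...
    }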

Elasticsearch

The Code: You'll find this code in the example-elasticsearch directory of the repository. Before we go any further, note that this example is for the current Elasticsearch 5.x on Compose, which comes with Let's Encrypt certificates for TLS/SSL verification.

The Driver: Elasticsearch's REST interface has allowed many drivers to be written, but in our experience, Olivere's Elastic library stands out as one of the best thought through. It comes in various Elasticsearch version flavors. We'll be using gopkg.in/olivere/elastic.v5.

The Connection: The connection code is very simple. We just have to create an elastic client. First, we'll create a global variable for the client.

var client *elastic.Client  

In the main function, we retrieve the Elasticsearch URL from the environment, then create the client:

    esuri := os.Getenv("COMPOSE_ELASTICSEARCH_URL")

    var err error
    client, err = elastic.NewClient(elastic.SetURL(esuri), elastic.SetSniff(false))

The SetSniff(false) option stops the client from trying to auto-discover the cluster's nodes. What about all the TLS/SSL work we did with MongoDB? That's the advantage of using Let's Encrypt certificates; verification is handled at the system level through its existing store of trusted root certificates.

We're ready to go now, but just before starting up, we want to make sure there's an index to write to.

    exists, err := client.IndexExists("grand_tour").Do(context.TODO())

    if !exists {
        _, err := client.CreateIndex("grand_tour").Do(context.TODO())
    }

We'll just let the defaults do their thing here, though you don't have to.
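If you do want a say in how fields are handled, CreateIndex will accept a mapping body when the index is created; a hypothetical sketch for our words type (the example itself sticks with the defaults):

    // An explicit, and entirely optional, mapping for the words type.
    mapping := `{
        "mappings": {
            "words": {
                "properties": {
                    "word":       {"type": "text"},
                    "definition": {"type": "text"}
                }
            }
        }
    }`
    _, err = client.CreateIndex("grand_tour").BodyString(mapping).Do(context.TODO())

Either way, with an index in place we're ready for the write and the read.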

The Write: Documents in Elasticsearch are JSON documents. For our Elasticsearch example, we shorten the item type so it only carries the fields we are interested in storing and searching.

type item struct {  
    Word       string `json:"word"`
    Definition string `json:"definition"`
}

When we need to write that, the first thing we create is a new item with our data in it.

    newitem := item{r.Form.Get("word"), r.Form.Get("definition")}

To insert that into our index, here's our code...

_, err := client.Index().  
            Index("grand_tour").
            Type("words").
            BodyJson(newitem).
            Refresh("true").
            Do(context.TODO())

Let's step through it. The client.Index() call gets an IndexService from the client, which is solely concerned with adding and updating documents. We follow that with Index("grand_tour"), which sets the index; yes, there are multiple functions named Index in different contexts. Now we need to say what Elasticsearch type we are creating: Type("words"). With that defined, we can hand over the data to be handled as JSON: BodyJson(newitem).

We want the index to be updated once we've inserted this new item so we'll add Refresh("true"). Now that is all set up, we need to tell the service to do its work with a Do() call. This takes a Go Context for setting timeouts and deadlines; we aren't setting any for this example, so we use context.TODO().
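If you did want a deadline, swapping context.TODO() for a timeout context is all it takes; a small sketch (which also needs the time package):

    // Give the index operation five seconds before it's abandoned.
    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()
    _, err := client.Index().
        Index("grand_tour").
        Type("words").
        BodyJson(newitem).
        Refresh("true").
        Do(ctx)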

That will see the new record written into the index. Now to get it back.

The Read: Elasticsearch is all about the search, so to retrieve data, we have to get ourselves a search service and ask that to retrieve data.

searchResult, err := client.Search().  
            Index("grand_tour").
            Type("words").
            Do(context.TODO())

This search is only qualified by asking for the "words" type in the index; indices can contain multiple types, so it's better to keep any non-words types out of the results. Next, we have to decant the results. We'll start by creating a slice of item and a single item:

        words := []item{}
        var i item

Now, we use the convenience function Each to iterate through all the returned data, assert each result to the correct type, and append it to our slice.

        for _, peritem := range searchResult.Each(reflect.TypeOf(i)) {
            i := peritem.(item)
            words = append(words, i)
        }

At which point we are ready to return the results to the user. Check out this example for more on deserializing Search service results and creating queries.
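As a small taste of the latter, qualifying the search with a match query only takes one more call in the chain; a sketch with a hypothetical search word:

    // Only return items whose word field matches "example".
    searchResult, err := client.Search().
        Index("grand_tour").
        Type("words").
        Query(elastic.NewMatchQuery("word", "example")).
        Do(context.TODO())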

PostgreSQL

The Code: PostgreSQL's code in the Go Grand Tour can be found in the example-postgresql directory of the repository.

The Driver: This is the first SQL database in the Grand Tour, and for it we turn to Go's native database/sql package and lib/pq, a pure Go PostgreSQL driver designed to work with it.

The Connection: lib/pq happily handles the full range of options that the PostgreSQL command line accepts, so to create our connection we expand the URL from the Compose connection string like so:

    myurl := os.Getenv("COMPOSE_POSTGRESQL_URL") +
        "?sslmode=require&sslrootcert=" + os.Getenv("PATH_TO_POSTGRESQL_CERT")

We append sslmode=require to the URL to ensure that no connection is made without TLS/SSL enabled, and then we add sslrootcert= and the path to our certificate; lib/pq will take care of loading it when needed.

Now we can move on to opening the connection and creating a table if needed.

    var err error
    db, err = sql.Open("postgres", myurl)
    defer db.Close()

This is standard database/sql, asking to open a connection using the "postgres" driver which would have been imported earlier with:

import (  
    _ "github.com/lib/pq"
)

And it uses the URL we just created to locate said database. Once the connection is open, we check to see if a words table exists and, if it doesn't, create one. This statement covers that, and includes a serial auto-incrementing primary key.

    _, err = db.Exec(`CREATE TABLE IF NOT EXISTS words (
        id serial primary key,
        word varchar(256) NOT NULL,
        definition varchar(256) NOT NULL)`)

If that all works, we go on to serve up the web pages and await calls. First up is the read.

The Read: The database/sql package makes things like issuing a query simple. Just formulate your SQL and use db.Query to send it off to the server.

    rows, err := db.Query("SELECT id,word,definition FROM words")
    defer rows.Close()

Things get a little more convoluted when we want to read the results back. We have to make an empty slice of items. Then we iterate through the results with for rows.Next(). For each row, we create an empty item and use the rows.Scan() function to populate it. Once done, we append that new item to the slice. This is quite a common pattern; so common that a package called sqlx exists to simplify it (there's a sketch of it below the code). We've done it like this so you can see the lowest level of SQL querying code.

        items := make([]*item, 0)
        for rows.Next() {
            myitem := new(item)
            err = rows.Scan(&myitem.ID, &myitem.Word, &myitem.Definition)
            items = append(items, myitem)
        }

At the end of this, we have a slice which we can return to the user and, thanks to the defer, the rows result set is closed automatically.
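As promised, here's roughly what that read collapses to with sqlx; a sketch which assumes db has been opened with sqlx.Open("postgres", myurl) and that item carries matching db tags:

    // item would need tags such as `db:"id"`, `db:"word"` and `db:"definition"`.
    items := []item{}
    err := db.Select(&items, "SELECT id,word,definition FROM words")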

The Write: The write, at least for this example, is very simple:

_, err := db.Exec("INSERT INTO words (word,definition) VALUES ($1, $2)", r.Form.Get("word"), r.Form.Get("definition"))  

It literally is a single line, running an SQL INSERT command through the db.Exec function. Parameters for the two fields are pulled from the incoming form.
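If you wanted the generated id back from that insert, PostgreSQL's RETURNING clause pairs with QueryRow to fetch it; a sketch:

    var id int
    // RETURNING hands back the serial primary key generated for the new row.
    err := db.QueryRow("INSERT INTO words (word,definition) VALUES ($1, $2) RETURNING id",
        r.Form.Get("word"), r.Form.Get("definition")).Scan(&id)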

The database/sql package is remarkably powerful for a standard library and the lib/pq package makes it effective to use with PostgreSQL.

A brief stop...

So that's MongoDB, Elasticsearch and PostgreSQL with Go and Compose. In the next part we'll take on three more Compose databases in the Grand Tour.

Update: Read on to find out about Go with Redis, RethinkDB and RabbitMQ on Compose.

