Go’s SQL Support

Go has great built-in support for interacting with SQL databases via its database/sql package. The main object used is sql.DB.

What may not be obvious on the surface is that a sql.DB object is actually a pool of connections to the database, and by default the pool size is unlimited. When an operation is executed against a sql.DB object, Go picks a connection from the pool to use: if an idle connection is available it is reused, otherwise a new connection is created. This can cause issues with some databases, which enforce connection limits. The pool’s default settings can also have performance implications, as described in this article by Alex Edwards. As mentioned in that article, there are a few knobs that can be adjusted on the pool via the sql.DB.SetMaxOpenConns(), sql.DB.SetMaxIdleConns() and sql.DB.SetConnMaxLifetime() methods.
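
For example, the pool can be tuned right after the sql.DB is opened. The sketch below assumes the database/sql and time packages (and a Postgres driver) are imported; the function name newDB and the limits chosen are purely illustrative, not recommendations:

// newDB opens a DB handle and tunes its connection pool.
// The limits below are illustrative, not recommendations.
func newDB(dsn string) (*sql.DB, error) {
	db, err := sql.Open("postgres", dsn)
	if err != nil {
		return nil, err
	}

	db.SetMaxOpenConns(25)                 // cap the total of in-use + idle connections
	db.SetMaxIdleConns(25)                 // keep up to 25 idle connections for reuse
	db.SetConnMaxLifetime(5 * time.Minute) // recycle connections periodically

	return db, nil
}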

Given that the pool’s workings are hidden from the user’s perspective, how can we know what the state of the pool is at any one time? Enter sql.DBStats.

sql.DBStats

These DB stats can be accessed by calling the Stats() method on a sql.DB object, and they are defined as follows:

type DBStats struct {
    MaxOpenConnections int // Maximum number of open connections to the database; added in Go 1.11

    // Pool Status
    OpenConnections int // The number of established connections both in use and idle.
    InUse           int // The number of connections currently in use; added in Go 1.11
    Idle            int // The number of idle connections; added in Go 1.11

    // Counters
    WaitCount         int64         // The total number of connections waited for; added in Go 1.11
    WaitDuration      time.Duration // The total time blocked waiting for a new connection; added in Go 1.11
    MaxIdleClosed     int64         // The total number of connections closed due to SetMaxIdleConns; added in Go 1.11
    MaxLifetimeClosed int64         // The total number of connections closed due to SetConnMaxLifetime; added in Go 1.11
}
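
As a quick illustration, a snapshot of these stats can be read directly inside an application. Here is a minimal sketch (logPoolStats is a hypothetical helper, and it assumes the database/sql and log packages are imported):

// logPoolStats logs a point-in-time snapshot of the pool's state.
// It is a hypothetical helper, shown for illustration only.
func logPoolStats(db *sql.DB) {
	stats := db.Stats()
	log.Printf(
		"db pool: open=%d in_use=%d idle=%d waited_for=%d blocked_for=%s",
		stats.OpenConnections, stats.InUse, stats.Idle,
		stats.WaitCount, stats.WaitDuration,
	)
}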

Having access to these inside an application is all well and good, but how could we see how the pool is performing over time? This is a good use case for an external monitoring system like Prometheus.

Prometheus Custom Collector

To be able to do this, we can write a custom collector for these DBStats (a brief guide can be found here).

This entails creating a Go struct that implements the prometheus.Collector interface, i.e.:

type Collector interface {
	Describe(chan<- *Desc)
	Collect(chan<- Metric)
}

Given that DBStats exposes 8 different metrics, we will need to push 8 different *prometheus.Desc values into the channel passed to the Describe method, and the corresponding 8 metric values into the channel passed to the Collect method. This collector should then be registered with the Prometheus client library using the MustRegister function. Once registered, the 8 metrics will be exposed on the application’s Prometheus metrics endpoint.

Given that the Collect method is invoked every time the metrics endpoint is called, i.e. every time the metrics are scraped by Prometheus, the method should call sql.DB.Stats() on each invocation, so that up-to-date values are returned.
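
A minimal sketch of such a collector is shown below. It assumes the database/sql and Prometheus client packages are imported; the type and metric names are illustrative, and only three of the eight metrics are included for brevity:

// dbStatsCollector is a sketch of a prometheus.Collector for sql.DBStats.
// Only three of the eight available metrics are included here.
type dbStatsCollector struct {
	db *sql.DB

	openDesc  *prometheus.Desc
	inUseDesc *prometheus.Desc
	waitDesc  *prometheus.Desc
}

func newDBStatsCollector(dbName string, db *sql.DB) *dbStatsCollector {
	labels := prometheus.Labels{"db_name": dbName}
	return &dbStatsCollector{
		db:        db,
		openDesc:  prometheus.NewDesc("db_connections_open", "The number of established connections both in use and idle.", nil, labels),
		inUseDesc: prometheus.NewDesc("db_connections_in_use", "The number of connections currently in use.", nil, labels),
		waitDesc:  prometheus.NewDesc("db_connections_waited_for_total", "The total number of connections waited for.", nil, labels),
	}
}

// Describe pushes a description of each metric into the channel.
func (c *dbStatsCollector) Describe(ch chan<- *prometheus.Desc) {
	ch <- c.openDesc
	ch <- c.inUseDesc
	ch <- c.waitDesc
}

// Collect takes a fresh snapshot of the stats and pushes the current values.
func (c *dbStatsCollector) Collect(ch chan<- prometheus.Metric) {
	stats := c.db.Stats()
	ch <- prometheus.MustNewConstMetric(c.openDesc, prometheus.GaugeValue, float64(stats.OpenConnections))
	ch <- prometheus.MustNewConstMetric(c.inUseDesc, prometheus.GaugeValue, float64(stats.InUse))
	ch <- prometheus.MustNewConstMetric(c.waitDesc, prometheus.CounterValue, float64(stats.WaitCount))
}

The collector would then be registered with prometheus.MustRegister(newDBStatsCollector("db_name", db)).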

dlmiddlecote/sqlstats library

An implementation of the above can be seen at dlmiddlecote/sqlstats. The library can be used in a project like so:

package main

import (
	"database/sql"
	"net/http"

	_ "github.com/lib/pq"
	"github.com/dlmiddlecote/sqlstats"
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	if err := run(); err != nil {
		panic(err)
	}
}

func run() error {
	// Open a handle to the DB (could also use the https://github.com/jmoiron/sqlx library).
	// Note that sql.Open doesn't actually establish any connections yet.
	db, err := sql.Open("postgres", "postgres://postgres:postgres@localhost:5432/postgres")
	if err != nil {
		return err
	}
	defer db.Close()

	// Create a new collector; the name will be used as a label on the metrics
	collector := sqlstats.NewStatsCollector("db_name", db)

	// Register it with Prometheus
	prometheus.MustRegister(collector)

	// Register the metrics handler
	http.Handle("/metrics", promhttp.Handler())

	// Run the web server
	return http.ListenAndServe(":8080", nil)
}

Then, upon calling localhost:8080/metrics, the current stats for the DB object will be returned and can be scraped by Prometheus. These metrics can then be used to observe the DB pool’s inner workings over time, hopefully helping users decide which settings to tweak to improve the pool’s performance in their application’s context.