A practical guide to configuring web services for deployment with TOML & OpenBao/Vault


After you build an application and you’re ready to deploy it, you’ll most likely need to run it with different arguments. On the computer in front of you, you might be running the app on localhost, but on your production server you may want it to listen on 0.0.0.0 - i.e. expose it to external requests. If your app is slightly more mature, you might have a live testing environment that, like your dev environment, uses a Stripe test key, and a production environment that uses a live key. As your app or system of applications becomes more complex, your config system can benefit from changes that scale with it. I will take you through several elements you can consider as your environment & organisation become more complex.

n.b. I will be using Node.js as an example runtime & show some packages specific to Node.js, but the general advice is runtime/language agnostic.

Contents

  1. Hardcoded config
  2. Args, flags & environment variables
  3. .env files
  4. TOML config
  5. Consul Template
  6. OpenBao (Vault)
  7. Good practices

1. Hardcoded config

The simplest config is hardcoded into the application e.g.

server.js
const server = require('http').createServer((req, res) => res.end('Hi'));
server.listen(80, 'localhost'); // listen(port, host)

Running node server.js at this point will start a server listening on localhost, i.e. only accessible to clients on that same host. When the stakes are low and the lifecycle of your build from coding -> deletion is short, you could just copy the file to a server with a public IP, edit localhost to 0.0.0.0 in vim to listen on all interfaces, i.e. expose it to the internet, and call it a day.

2. Args, flags & environment variables

When the stakes are raised e.g. you’re running multiple environments, the complexity of the app is non-trivial, you’re building with version-control (Git), you have sensitive information or you want to make your application more portable e.g. for distribution, you should start looking at dynamic config from outside the application code. Environment variables (env var) are the easiest way to start:

server.js
const isProd = process.env.TARGET_ENV === "PROD";
server.listen(isProd ? 80 : 8000, isProd ? '0.0.0.0' : 'localhost');

In this case you’d run TARGET_ENV=PROD node server.js. This still mixes in hardcoded config, but uses a single env var $TARGET_ENV to select between 0.0.0.0:80 (PROD) & the default localhost:8000.

If you’re using more variants, or you’re pulling information directly from the environment, you might want to use values directly, e.g. running HOST=0.0.0.0 PORT=80 node server.js with the following code:

server.js
server.listen(Number(process.env.PORT || 80), process.env.HOST || 'localhost');

In this example you’ll see we had to cast PORT via Number(process.env.PORT). This is because environment variables always arrive as strings, so you will have to cast values to the types you need.
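If you’re reading more than a couple of values, it can help to centralise that casting in one place. A minimal sketch - the helper names here are my own, not from any library:

```javascript
// Hypothetical helpers that centralise casting of env vars (names are my own)
function envNumber(name, fallback) {
  const raw = process.env[name];
  if (raw === undefined) return fallback;
  const n = Number(raw);
  if (Number.isNaN(n)) throw new Error(`${name} must be a number, got "${raw}"`);
  return n;
}

function envBool(name, fallback) {
  const raw = process.env[name];
  if (raw === undefined) return fallback;
  // Accept the common truthy spellings; everything else is false
  return ['true', '1', 'yes'].includes(raw.toLowerCase());
}

// Usage: const port = envNumber('PORT', 8000);
//        const debug = envBool('DEBUG_MODE', false);
```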

Command line args (incl. flags), e.g. node server.js -e prod (e.g. via meow) or go run main.go -e prod (e.g. via Go’s flag pkg), are another way to introduce config. They have the same limitation of being interpreted as strings, thus occasionally require casting.

Something that you should NOT do is hardcode passwords, keys etc. into applications. If they end up in version control, anyone with access to that repo - and anywhere that repo goes - has those passwords. Most likely, you’ll need to change those passwords as a priority. Storing sensitive information in a repo introduces a significant vector for leaks and makes values difficult to control within an organisation. Hardcoded config also doesn’t scale well for bigger systems & organisations - if you change a key used in 10 apps, that’s 10 apps you have to redeploy.

This website uses a mix of hardcoded config, multiple entrypoints (astro dev, astro build & docker run nginx) with flags in vendor software (docker, nginx, traefik) to control how it’s run.

3. .env Files

.env files are a very simple Key-Value pair format readable by your shell. They provide a nice flatfile way to group related config in one file. source .env (man source) will take an .env file and make all key value pairs available as environment variables e.g. source prod.env && node server.js:

prod.env
HOST="0.0.0.0"
PORT="80"
PAYMENT_API_KEY="secret-key-123-xyz"
server.js
server.listen(
  Number(process.env.PORT) || 8000,
  process.env.HOST || 'localhost'
);
paymentProvider.init(process.env.PAYMENT_API_KEY);
.gitignore
# Prevents .env files from entering repo
*.env

Something that I have seen, but wouldn’t personally like to do, is to store .env files with sensitive values, encrypted in a repo e.g. with transcrypt.

4. TOML Config

Using a configuration language gives us more types. I find numbers & booleans very useful. TOML (Tom’s Obvious Minimal Language) is my go-to. It’s easy to read, supports comments, has native support for different types, white-space doesn’t bite you like it does with YAML & nesting is very explicit.

config.toml
HOST="0.0.0.0"
PORT=80 # Number support
DEBUG_MODE=true # Boolean support
PAYMENT_API_KEY="secret-key-123-xyz"
DATABASE_NAME="prod"
DATABASE_USERNAME="prod"
DATABASE_PASSWORD="abcd-1234"
server.js
const config = toml.parse(fs.readFileSync('/path/to/config.toml', 'utf8'));
server.listen(
  config.PORT || 8000,
  config.HOST || 'localhost'
);
paymentProvider.init(config.PAYMENT_API_KEY);

Pretty good, but there’s no validation of any kind - you could put a number as your database name and omit your payment API key. I’ve written a package toml-config for Node.js that helps with finding the file, lets you specify a config schema, statically infers types (autocompleting config objects & catching config errors before running code), validates the config against the schema & provides defaults when a config item is not specified. Here’s how to use it:

npm install toml-config or pnpm install toml-config

server.ts
import { loadToml, validateConfig } from 'toml-config';
// Write a schema that the config will be validated against
const schema = {
  EMAIL: { type: 'string', format: 'email' },
  URL: { type: 'string', format: 'https' },
  HOST: { type: 'string', default: 'localhost' },
  PORT: { type: 'number' },
  USERNAME: { type: 'string', default: 'admin' },
  PASSWORD: { type: 'string', required: false, secret: true },
};
// Load config.toml from relative path to current file
const rawConfig = loadToml(import.meta.url, './config.toml');
// If the config doesn't match the schema, an error will be thrown. I usually let it crash here.
export const config = validateConfig(schema, rawConfig);

Note the secret attribute in the PASSWORD. This means you will have to use config.reveal() to use it. This guards against certain kinds of leaks like accidentally spilling out the entire config object to a public endpoint.
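To illustrate the idea, a secret can be a thin wrapper that masks itself everywhere except an explicit reveal() call - this is my own sketch of the concept, not toml-config’s actual implementation:

```javascript
// A value that must be explicitly revealed before use
function makeSecret(value) {
  return {
    reveal: () => value,
    // Masked when the object is serialised or interpolated
    toJSON: () => '[secret]',
    toString: () => '[secret]',
  };
}

const password = makeSecret('abcd-1234');
JSON.stringify({ password }); // masked: {"password":"[secret]"}
password.reveal();            // the actual value
```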

5. Consul Template

Preprocessing your config file has some advantages: you can take things from the server or dev environment. In its simplest form you could use envsubst or a shell script to substitute values in a config file prior to loading the config, for example:

config.dev.sh
#!/bin/sh
cat <<EOF
DEV_MODE=true
SECRET_SIGNING_KEY="$(openssl rand -base64 12)"
EOF

Running ./config.dev.sh > config.toml will produce a valid TOML file with a random secret key. This is great for automating the setup of dev config on other developers’ computers, for example. You might be tempted to have the application code create a secret like that if one doesn’t exist. Personally, I don’t recommend doing that: you probably don’t want to accidentally create a false one in prod; instead you want your CI/CD to fail, or at the very least, the dependent routes to fail.
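Letting it crash can be as simple as a guard at startup - a sketch:

```javascript
// Fail fast: refuse to start rather than minting a fallback secret in prod
function requireSecret(name, value) {
  if (!value) throw new Error(`Missing required secret ${name} - refusing to start`);
  return value;
}

// At startup:
// const signingKey = requireSecret('SECRET_SIGNING_KEY', process.env.SECRET_SIGNING_KEY);
```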

In these situations I may also consider using Consul Template. Here’s an example you can run:

Install Consul Template with brew install consul-template and create a template:

dev.toml.ctmpl
SECRET_SIGNING_KEY = "{{ env "SIGNING_KEY" }}"
HOST = "{{ envOrDefault "LOCALHOST_OVERRIDE" "localhost" }}"

We can populate the env value with some bash: export SIGNING_KEY="$(openssl rand -base64 12)" && consul-template -template "dev.toml.ctmpl:config.toml" -once - you can imagine doing something similar in CI/CD. The LOCALHOST_OVERRIDE example would let me listen on all addresses, or a VPN address, as opposed to on loopback.

The best part of Consul Template, however, is its integration with Consul (a KV store) and OpenBao (a secrets store I will discuss in the next section). Running Consul Template will listen for changes to Consul/OpenBao values by default, but you can use -once to exit after evaluation. You can also use -exec /path/to/bin to manage a child process that depends on that config - useful in small-to-medium sized deployments to restart your app on config change.

Here are links to Consul Template’s Template Language and Configuration Docs. You can do some very interesting things with consul-template including using dynamic paths to have one template file for all environments - or at least for Green/Blue deployments.

6. OpenBao (Vault)

OpenBao is an open-source fork of HashiCorp’s Vault. As your organisation grows, or in some cases even as your application grows, you might want to centralise secret management & establish roles around it to minimise the risk of leaks. This will also help keep your keys etc. safe at rest & in transit, while keeping them accessible.

Let’s set up an OpenBao dev server (not secured for production):

Terminal window
brew install openbao
bao server -dev -dev-root-token-id=dev
# In a separate terminal:
export BAO_ADDR='http://127.0.0.1:8200'
export BAO_TOKEN='dev'
# The dev server mounts a kv v2 secrets engine at secret/ - put a secret in it
bao kv put secret/staging/database name=staging username=staging password=staging
# Confirm the secret exists
bao kv get -field=name secret/staging/database
dev.ctmpl.toml
DB_HOST="staging.example.com:5432"
{{ with secret "secret/data/staging/database" }}
DB_NAME="{{ .Data.data.name }}"
DB_USERNAME="{{ .Data.data.username }}"
DB_PASSWORD="{{ .Data.data.password }}"
{{ end }}

And we’ll evaluate our template (consul-template reads the Vault-compatible VAULT_ADDR & VAULT_TOKEN env vars): VAULT_ADDR=http://127.0.0.1:8200 VAULT_TOKEN=dev consul-template -template="dev.ctmpl.toml:config.toml" -once to produce:

config.toml
DB_HOST="staging.example.com:5432"
DB_NAME="staging"
DB_USERNAME="staging"
DB_PASSWORD="staging"

You may not want to give a single developer or app full access to the entire Vault store. You could give the app in its staging environment, for example, read access to the entire secret/data/staging/* prefix. But if you want to give it exactly what it needs and nothing more (the Principle of Least Privilege), I have written a tool called baobud. It evaluates your template file & generates read policies based on it.

After installing baobud & executing baobud dev.ctmpl.toml -o policy.hcl, you’ll get the following policy:

policy.hcl
path "secret/data/staging/database" {
  capabilities = ["read"]
}

You can then write the policy to Vault and create a token based on it. For an individual app this might look like:

Terminal window
bao policy write app-policy policy.hcl
bao token create -policy=app-policy

You can, and a lot of the time should, write your own policies, and depending on the nature of your organisation and software, it may be acceptable to reuse policies between different apps.

7. Good practices

Here are some practices that have helped me in small & medium-sized environments:

  • Don’t ever write sensitive tokens etc. into your app; it’s easy to forget they’re there, commit them, and accidentally keep them in history forever
  • While TOML, YAML etc. allow it, avoid nested config. A flat key forces you to be very clear about what you are referencing and not mix up names.
  • Always label keys with the target environment, particularly for databases and their usernames, but also for other sensitive resources, e.g. DEV_DB vs PROD_DB. You’re less likely to confuse dev with prod and see bad things happen.
  • Same goes for the files themselves; for example a repo could look like:
app/
├── src/
│   ├── main.ts
│   └── config.ts
└── config/
    ├── dev.toml.tmpl
    ├── staging.toml.tmpl
    └── prod.toml.tmpl
  • Sharing some resources & thus config between dev & other testing environments is often ok, but in general, keep prod totally separate.
  • Think about things that could go wrong due to misconfig & consider additional validation at runtime
  • I’m not talking about it in this post, but if you’re in a Kubernetes environment (I’m sorry), using Consul Template upstream of ConfigMaps & letting K8s itself deal with the lifecycle is probably a better approach in most circumstances.
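On the runtime-validation point above, a guard for obviously-wrong combinations can be very cheap - the checks & names below are illustrative:

```javascript
// Reject combinations of config values that should never occur together
function checkConfig(config) {
  const errors = [];
  if (config.TARGET_ENV === 'PROD' && config.DEBUG_MODE) {
    errors.push('DEBUG_MODE must be off in PROD');
  }
  if (config.TARGET_ENV === 'PROD' && /dev|test/.test(config.DATABASE_NAME)) {
    errors.push('PROD app pointed at a dev/test database');
  }
  if (errors.length > 0) throw new Error('Bad config: ' + errors.join('; '));
  return config;
}
```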

Please email me at [email protected] for any corrections or feedback.