Bash strict mode: the three-line preamble (set -euo pipefail; IFS=$'\n\t') and what each flag actually buys you

Most production bash scripts I’ve inherited start with a comment block, then dive straight into echos and mkdirs and cps. They look fine. They run fine, most of the time. They fail mysteriously the one time it matters — when a directory doesn’t exist, when a variable is unset, when one stage of a pipeline silently errors and the next stage cheerfully continues with an empty input.

The fix is three lines at the top of every script:

#!/bin/bash
set -euo pipefail
IFS=$'\n\t'

This is “bash strict mode,” and it changes how bash treats errors so dramatically that scripts written without it feel reckless once you’ve gotten used to scripts written with it. Here’s what each piece actually does, and the failure modes it catches.

set -e — exit on error

Without -e, bash treats every command’s failure as advisory. You can cp file backup && rm file, but if you wrote cp file backup; rm file (semicolon, not &&), the rm runs even if the cp failed. Your file is gone, your backup never existed.

set -e says: if any command exits non-zero, the entire script exits immediately. The cp failure is now fatal; rm never runs.

Caveat: -e is intentionally limited. It does not exit when a non-final stage of a pipeline fails (only the last command's status counts — that's what -o pipefail fixes, below), when a command fails inside an if or while condition, or after !. The condition case is deliberate: you want if grep -q foo file; then ... to work whether grep matches or not.
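A minimal sketch of the cp/rm failure mode, using a throwaway temp directory (all paths here are made up for the demo). -e is enabled only inside a subshell so the demo script itself can keep running and report what happened:

```shell
#!/bin/bash
set -uo pipefail
IFS=$'\n\t'

tmp=$(mktemp -d)
echo "important data" > "$tmp/file"

# Run the risky sequence under -e in a subshell. The cp fails
# (destination directory doesn't exist), so the subshell exits
# there and the rm is never reached.
( set -e
  cp "$tmp/file" "$tmp/missing-dir/backup" 2>/dev/null
  rm "$tmp/file"
)

[ -f "$tmp/file" ] && survived=yes || survived=no
echo "original file survived: $survived"   # prints: original file survived: yes
rm -rf "$tmp"
```

Without the inner set -e, the rm would run after the failed cp and the file would be gone.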

set -u — unset variables are errors

Without -u, $UNDEFINED_VAR expands to an empty string and the script trundles on. Classic disaster: rm -rf "$BASE_DIR/$SUBDIR" when $BASE_DIR is unset. That becomes rm -rf "/$SUBDIR" and you’ve just deleted /home or wherever $SUBDIR happened to point.

set -u says: any reference to an unset variable is fatal. The rm -rf bomb above can’t even start; the script exits with “BASE_DIR: unbound variable” before bash builds the command line.

This is the single most important flag for catching bugs you’d otherwise notice during the post-mortem. Use ${VAR:-default} to opt specific variables out when you genuinely want a default.
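A short sketch of both halves: the fatal unbound reference (probed in a subshell so the demo itself survives) and the ${VAR:-default} opt-out. Variable names here are illustrative, not from any real app:

```shell
#!/bin/bash
set -euo pipefail
IFS=$'\n\t'
unset BASE_DIR CACHE_DIR   # ensure neither leaks in from the environment

# Referencing "$BASE_DIR" directly would abort the whole script with
# "BASE_DIR: unbound variable"; probing it in a subshell lets us observe
# the failure without dying.
if ! (: "$BASE_DIR") 2>/dev/null; then
  echo "caught: BASE_DIR is unbound"
fi

# The opt-out: ${VAR:-default} supplies a fallback without tripping -u.
cache_dir="${CACHE_DIR:-/tmp/myapp-cache}"
echo "cache_dir=$cache_dir"
```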

set -o pipefail — pipelines fail on any stage

Without pipefail, the exit code of foo | bar is the exit code of bar alone. foo can die horribly, but if bar succeeds (because it got an empty input it could process), the whole pipeline reports success.

Concrete example: curl -sf https://api.example.com/users | jq '.[] | .name' | sort -u > users.txt. (Note the -f: without it, curl exits 0 on an HTTP 404 and no pipeline option can save you.) If the request fails and curl outputs nothing, jq runs against empty stdin (success), sort runs against empty stdin (success), and the whole pipeline reports success. pipefail makes this fail loud — the curl exit code becomes the pipeline's status, and under -e the script aborts. One remaining caveat: the > users.txt redirect truncates the file before the pipeline runs, so to protect the old list, write to a temp file and mv it into place only on success.
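The status difference is easy to see with false standing in for the failing curl — a minimal sketch, with -e left off so we can inspect $? inline:

```shell
#!/bin/bash
set -u
IFS=$'\n\t'

# Without pipefail: sort reads empty stdin and exits 0, so the
# pipeline's status hides the upstream failure entirely.
set +o pipefail
false | sort > /dev/null
status_without=$?

# With pipefail: the first failing stage's non-zero status wins.
set -o pipefail
false | sort > /dev/null
status_with=$?

echo "without pipefail: $status_without   with pipefail: $status_with"
# prints: without pipefail: 0   with pipefail: 1
```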

IFS=$'\n\t' — predictable word-splitting

The default IFS (Internal Field Separator) is space + tab + newline. This bites you constantly when filenames have spaces: for f in $(ls) against a file named my report.pdf iterates twice, once with my and once with report.pdf. Most "weird bash bug" stories trace back here.

Setting IFS=$'\n\t' drops space from the splitter — words split only on newlines and tabs. Filenames-with-spaces now iterate as one item, which is what you almost always want.

It’s not a complete fix — filenames with newlines (yes, that’s legal) still break — but it eliminates 95% of the IFS-related bugs in real scripts.
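A sketch of the difference, counting loop iterations for a space-containing filename in a throwaway temp directory (and keeping the for f in $(ls) antipattern on purpose, to show what IFS changes):

```shell
#!/bin/bash
set -euo pipefail

tmp=$(mktemp -d)
touch "$tmp/my report.pdf"
names=$(ls "$tmp")

IFS=$' \t\n'               # default IFS: the space splits the name in two
count_default=0
for f in $names; do count_default=$((count_default + 1)); done

IFS=$'\n\t'                # strict-mode IFS: the name stays whole
count_strict=0
for f in $names; do count_strict=$((count_strict + 1)); done

echo "default IFS: $count_default items, strict IFS: $count_strict item"
# prints: default IFS: 2 items, strict IFS: 1 item
rm -rf "$tmp"
```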

What strict mode doesn’t fix

  • Quoting variables. You still have to write "$file", not $file, anywhere a value might contain spaces. Strict mode catches the wrong-IFS bug, not the missing-quotes bug.
  • Race conditions. Two scripts running simultaneously and stepping on the same temp file is a logic bug strict mode can’t see.
  • Logic errors. If you wrote if [ $x = $y ] when you meant !=, strict mode happily runs your wrong logic to completion.
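The quoting point is worth one concrete demo. Even with IFS=$'\n\t', a tab inside a value still splits an unquoted expansion — a sketch using set -- to count the resulting words:

```shell
#!/bin/bash
set -euo pipefail
IFS=$'\n\t'

file=$'my\treport.pdf'     # a filename with an embedded tab (legal!)

set -- $file               # unquoted: strict-mode IFS still splits on the tab
unquoted=$#

set -- "$file"             # quoted: always exactly one word
quoted=$#

echo "unquoted: $unquoted words, quoted: $quoted word"
# prints: unquoted: 2 words, quoted: 1 word
```

Strict mode narrows the blast radius of missing quotes; it doesn't remove the need for them.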

The opt-out pattern

Sometimes you genuinely want a command to fail without aborting the script. set +e turns off -e temporarily; command || true swallows a single command’s exit code:

# I want both of these to run, even if one fails
mkdir -p /var/cache/myapp || true
chown myapp:myapp /var/cache/myapp || true

# Or for a block:
set +e
some_command_that_might_fail
set -e

Using || true for one-offs is cleaner; set +e/set -e is for when you have a chunk of legacy logic you don’t want to retrofit.

Just put it at the top

The four-line preamble takes ten seconds to add and pays for itself the first time a script blows up loudly instead of silently. I keep a snippet for it in my editor; every new script gets it before any logic.

If you have existing scripts running in production without it, don't retrofit them all today — adding -u to a script that depends on undefined-variable laziness will break it spectacularly. But every new script should have it. After a few months of writing strict-mode bash, going back to non-strict bash feels like JavaScript without TypeScript: technically possible, but actively unpleasant.
