You wrote a deploy script. It SSHes into 12 servers, runs an update, comes home. The first time you run it on a fresh laptop, every server prompts: "The authenticity of host '203.0.113.x' can't be established. Continue connecting (yes/no/[fingerprint])?" The temptation is to set StrictHostKeyChecking=no in ~/.ssh/config and move on. Don't. That setting is the difference between "my deploy works" and "an attacker on the path got the credentials I shipped to the wrong server."
The right way to handle this in a personal-fleet shell script is: pre-populate known_hosts with the keys you trust, turn StrictHostKeyChecking up to yes (or at least leave it at its default, "ask"), and let the script fail loudly if a host's key changes. Here's the small handful of ssh-keyscan idioms that get this right.
What StrictHostKeyChecking actually does
The ssh client has three modes for verifying server identity:
StrictHostKeyChecking yes. The host's key MUST already be in known_hosts. If it isn't, the connection fails. Maximally strict; great for production.

StrictHostKeyChecking ask (the default). If the key isn't known, prompt interactively. If it's known and matches, connect silently. If it's known but DIFFERENT, fail loudly: this is the famous "REMOTE HOST IDENTIFICATION HAS CHANGED" warning, which is a real possible-MITM signal.

StrictHostKeyChecking no. Auto-accept anything, ever. Don't use this. It defeats the entire host-key-verification system.
The third mode is what people set when their script breaks on first run. The right fix isn’t to disable verification — it’s to populate known_hosts ahead of time so verification passes silently.
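If you'd rather not repeat -o flags in every script, the same policy can live in ~/.ssh/config as a per-host block. A minimal sketch, assuming a made-up fleet domain and the fleet-known-hosts file described below:

# ~/.ssh/config: strict checking for fleet hosts, normal behavior elsewhere
Host *.fleet.example.com
    StrictHostKeyChecking yes
    UserKnownHostsFile ~/dotfiles/fleet-known-hosts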
ssh-keyscan: harvesting host keys properly
ssh-keyscan connects to a host’s SSH port, fetches its public host key, and prints it in known_hosts format. It does NOT verify the key — whoever’s at the IP gets to claim ownership. So you only run keyscan in trusted contexts (first-time inventory of a server YOU just provisioned), then the result lives in your repo and is treated as ground truth from then on.
# Get the ed25519 host key for one server:
ssh-keyscan -t ed25519 server.example.com
# Multiple servers at once:
ssh-keyscan -t ed25519 -H server1.example.com server2.example.com 192.0.2.10 \
> my-fleet-known-hosts
# The -H flag hashes the hostname so a leaked file doesn't disclose
# which servers you have. Optional but tidy.

The output looks like:

|1|aBcDe...|fGhIj... ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAI...

Hashed hostname, key type, key bytes. Save the file to your dotfiles repo (it's a public key, safe to commit).
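One consequence of hashing: you can't grep the file to see whether a host already has an entry. ssh-keygen -F does the lookup, using the same hostname and file name as the example above:

# Is server1 already in the fleet file? Prints the matching line(s);
# exit status is 0 if found, 1 if not.
ssh-keygen -F server1.example.com -f my-fleet-known-hosts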
The personal-fleet pattern
#!/usr/bin/env bash
# ~/dotfiles/bin/fleet-ssh
# Usage: fleet-ssh <cmd>
# Runs cmd on every host in ~/dotfiles/fleet.txt, with strict host-key verification.
set -euo pipefail
HOSTS_FILE="$HOME/dotfiles/fleet.txt"
KH_FILE="$HOME/dotfiles/fleet-known-hosts"
CMD="${1:?usage: fleet-ssh <cmd>}"
while IFS= read -r host; do
  [[ -z "$host" || "$host" =~ ^# ]] && continue
  echo "=== $host ==="
  ssh -n -o "UserKnownHostsFile=$KH_FILE" \
      -o "StrictHostKeyChecking=yes" \
      -o "ConnectTimeout=5" \
      -o "BatchMode=yes" \
      "$host" "$CMD"
done < "$HOSTS_FILE"

UserKnownHostsFile=$KH_FILE: use the fleet's known_hosts, not the user's, so a stray entry in ~/.ssh/known_hosts can't accidentally trust a server.

StrictHostKeyChecking=yes: if the host's key doesn't match what's in $KH_FILE, fail. No prompt.

BatchMode=yes: never prompt for a passphrase or password; if auth fails, fail. Required for non-interactive scripts.

ConnectTimeout=5: bound the wait. A dead host stops your script after 5 seconds, not 30.

The extra -n redirects ssh's stdin from /dev/null; without it, ssh inside a while read loop reads the rest of the hosts file through stdin and the loop quietly ends after the first host.
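To make the moving parts concrete, here's what the hosts file and an invocation might look like; the user and hostnames are placeholders:

# ~/dotfiles/fleet.txt: one host per line, blank lines and #-comments are skipped
deploy@web1.example.com
deploy@web2.example.com
deploy@db1.example.com

And a run:

fleet-ssh 'uname -r && uptime'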
Refreshing the known_hosts when a server gets reinstalled
OS reinstall = new host key. Your known_hosts is out of date. The script will start failing with “Host key verification failed” — correctly. The fix is two commands:
# Remove the old key for that host:
ssh-keygen -R server-that-was-reinstalled.example.com -f ~/dotfiles/fleet-known-hosts
# Re-fetch the new key:
ssh-keyscan -t ed25519 -H server-that-was-reinstalled.example.com \
>> ~/dotfiles/fleet-known-hosts
# Commit the diff to git so your collaborators / future-you can see what changed.
cd ~/dotfiles && git add fleet-known-hosts && git commit -m "rotate host key for server-that-was-reinstalled"

The git commit is the audit trail. If a collaborator later sees a sketchy commit "rotate host key" without context, they can ask "why did this server's identity change?", which is exactly the right question.
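If reinstalls happen often enough, the remove/rescan/commit dance is worth a tiny wrapper. A sketch under the same file layout as above; the script name fleet-rotate-key is made up:

#!/usr/bin/env bash
# ~/dotfiles/bin/fleet-rotate-key: refresh one host's entry after a reinstall
set -euo pipefail
host="${1:?usage: fleet-rotate-key <hostname>}"
kh="$HOME/dotfiles/fleet-known-hosts"
ssh-keygen -R "$host" -f "$kh"              # drop the stale entry (backup kept in .old)
ssh-keyscan -t ed25519 -H "$host" >> "$kh"  # fetch and hash the new key
git -C "$HOME/dotfiles" add fleet-known-hosts
git -C "$HOME/dotfiles" commit -m "rotate host key for $host"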
First-run bootstrap on a brand-new laptop
The first time you check out your dotfiles repo on a fresh laptop, the fleet-known-hosts file is already there. The fleet-ssh script reads it. No keyscan needed. Strict verification works on the first connection.
The bootstrap that DOES need a keyscan is the one where you’ve just provisioned a brand-new server. Then the workflow is:
# 1. Provision the server (whatever cloud / VPS).
# 2. Verify its host key OUT-OF-BAND if you can — most cloud providers
# print the SSH host fingerprint in the boot log / instance metadata.
# Compare to what ssh-keyscan returns.
# 3. Append to fleet-known-hosts:
ssh-keyscan -t ed25519 -H new-server.example.com >> ~/dotfiles/fleet-known-hosts
# 4. Verify against the boot-log fingerprint:
ssh-keygen -lF new-server.example.com -f ~/dotfiles/fleet-known-hosts
# 5. Commit.

Step 2 is the bit nobody does, and it's what closes the trust gap. Hetzner, DigitalOcean, Vultr, AWS, GCP: they all show the new instance's host key fingerprints in some console or metadata service. Compare them once, then trust the entry forever. That's the meaningful difference between "TOFU" (trust on first use, the default) and actual host-key verification.
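A slightly more paranoid variant of steps 3 and 4 fingerprints the scanned key before it ever touches the fleet file, so nothing is appended until the boot-log comparison passes. A sketch; the temp path and hostname are placeholders:

# Fetch the key to a scratch file and fingerprint it first:
ssh-keyscan -t ed25519 new-server.example.com > /tmp/new-server.key
ssh-keygen -lf /tmp/new-server.key     # compare this SHA256 with the provider console
# Only if it matches: hash the hostname and append to the fleet file.
ssh-keygen -H -f /tmp/new-server.key   # rewrites in place, keeps a .old backup
cat /tmp/new-server.key >> ~/dotfiles/fleet-known-hosts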
When to use a CA instead
Above ~50 servers, managing a known_hosts file by hand becomes a chore. The right move is then SSH certificates — sign each server’s host key with your own CA, distribute the CA’s public key once, and clients verify against the CA instead of per-host. known_hosts shrinks to a single line. Look up @cert-authority when you get there.
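As a rough preview of what that setup looks like (key file names, identity, validity period, and domain below are all placeholders, not a prescription):

# One-time: create a host CA keypair (guard the private key carefully).
ssh-keygen -t ed25519 -f host_ca -C "fleet host CA"

# Per server: sign its existing host key; -h marks it as a host certificate.
ssh-keygen -s host_ca -h -I web1.example.com -n web1.example.com \
  -V +52w /etc/ssh/ssh_host_ed25519_key.pub
# Point sshd at the resulting ssh_host_ed25519_key-cert.pub with the
# HostCertificate directive in sshd_config, then restart sshd.

# Clients: one @cert-authority line replaces every per-host entry.
# @cert-authority *.example.com ssh-ed25519 AAAA... (the contents of host_ca.pub)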
For a personal fleet of <50 hosts, the keyscan + git'd known_hosts pattern is the right balance: real verification, no infrastructure to maintain, no StrictHostKeyChecking=no shortcuts that would let a man-in-the-middle steal your sudo password.
