Open VS Code on a large monorepo, three or four tabs in. Save a file. Save another. Watch the save spinner stick around half a second longer than it should. Open the integrated terminal and type git status; it lags. Eventually a notification pops up: “Unable to watch for file changes in this large workspace.” You’ve hit the kernel’s per-user inotify watch limit, and the first workaround the internet offers is “increase max_user_watches and reboot.” You don’t need to reboot.
This is the no-restart path I now run by default on dev machines and CI servers.
What inotify watches actually are
Every time a process calls inotify_add_watch() on a file or directory, the kernel allocates a watch. Watches consume kernel memory (up to ~1 KB each on 64-bit). To prevent runaway processes from exhausting kernel memory, there’s a per-user limit. On many distros and server images it’s still 8,192 watches per user (kernels since 5.11 scale the default with available RAM, but plenty of machines pin the old value), which is comically low for modern dev workflows.
Concrete numbers: a fresh VS Code instance on a node_modules-heavy project will consume 4-8k watches by itself. Add vim/nvim with treesitter, a file watcher in your dev server, Dropbox or Syncthing watching your home dir, and you blow through 8,192 in a single session.
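Recursive watchers place one watch per watched directory, so the directory count of a tree is a decent estimate of what it will cost. A quick sketch — the node_modules path is illustrative; point it at whatever your editor watches:

```shell
# One watch per watched directory: the directory count of a tree
# approximates what a recursive watcher will consume on it.
tree=node_modules        # illustrative; any project tree works
find "$tree" -type d 2>/dev/null | wc -l
```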
Check current usage:
# Current limit
cat /proc/sys/fs/inotify/max_user_watches
# Current usage by all processes
find /proc/[0-9]*/fd -lname 'anon_inode:inotify' 2>/dev/null \
| cut -d/ -f3 | sort -u \
| xargs -I{} sh -c 'head -1 /proc/{}/status 2>/dev/null; \
cat /proc/{}/fdinfo/* 2>/dev/null | grep -c "^inotify"' \
| paste - -
# Top holders of inotify fds (counts instances per PID, not watches)
sudo lsof -w | awk '/inotify/ {print $2}' | sort | uniq -c | sort -rn | head
If you’re running a chatty dev environment and the count is anywhere near the limit, you’ll feel it.
The no-restart bump
The limit is a sysctl, and sysctls take effect immediately when you write to /proc/sys/.... No reboot. No restart of any process. Existing processes can immediately allocate up to the new limit:
# Live bump (gone on next reboot)
sudo sysctl fs.inotify.max_user_watches=524288
# Persist across reboots
echo 'fs.inotify.max_user_watches=524288' \
| sudo tee /etc/sysctl.d/40-inotify.conf
sudo sysctl --system
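If you prefer, writing the /proc file directly is exactly equivalent to the sysctl invocation — same code path in the kernel, same immediate effect (tee is only there because the redirect needs root):

```shell
# Equivalent to `sudo sysctl fs.inotify.max_user_watches=524288`
echo 524288 | sudo tee /proc/sys/fs/inotify/max_user_watches
# Read it back
cat /proc/sys/fs/inotify/max_user_watches
```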
524,288 (512k) is the value VS Code’s docs recommend, and it’s far above what any sane dev workflow needs. The actual memory cost: 512k watches × ~1 KB = ~500 MB worst case, and you’ll never hit that in practice; my heaviest dev session sits around 30k watches.
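The worst case is easy to sanity-check in the shell, using the commonly cited figure of up to 1080 bytes per watch on 64-bit (hence the ~1 KB above):

```shell
# Kernel memory if every one of the 524288 watches were allocated:
echo "$((524288 * 1080 / 1024 / 1024)) MB"   # prints: 540 MB
```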
The moment you run that sysctl, your editor’s “unable to watch” warning goes away. No reload, no restart. Saving the next file is fast again.
The other two limits to bump while you’re there
fs.inotify.max_user_instances: how many inotify file descriptors a single user can open. The default is 128 on most distros, fine for ordinary use, but containers and IDEs that spawn lots of file watchers can hit it. Bump to 512.
fs.inotify.max_queued_events: the per-instance event queue size. If you touch many files quickly (build systems, mass renames), the queue can overflow; the kernel then emits an IN_Q_OVERFLOW event and silently drops whatever didn’t fit. Bump to 32768.
The full sysctl drop-in:
cat <<EOF | sudo tee /etc/sysctl.d/40-inotify.conf
fs.inotify.max_user_watches=524288
fs.inotify.max_user_instances=512
fs.inotify.max_queued_events=32768
EOF
sudo sysctl --system
One file. Three limits. All apply immediately.
Why you can’t only bump max_user_watches
If your editor’s complaint is “unable to watch” but you’re well under max_user_watches, check max_user_instances. VS Code on a multi-root workspace will open multiple inotify fds; if Docker is also running (each container can open its own), you’ll hit the instance limit before the watch limit. The error message is the same; the fix is the second sysctl line.
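To see which limit you’re actually near, count open instances the same way as watches; each anon_inode:inotify symlink under /proc/*/fd is one instance:

```shell
# Open inotify instances across all visible processes
# (run as root to see every user's).
find /proc/[0-9]*/fd -lname 'anon_inode:inotify' 2>/dev/null | wc -l
# The per-user ceiling to compare against
cat /proc/sys/fs/inotify/max_user_instances
```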
Why this isn’t enabled by default everywhere
Two reasons: kernel-memory accounting (watches do consume real RAM, and the default is conservative for low-memory boxes), and historical inertia. Modern distros are starting to ship higher defaults — Fedora 39 went to 524288 — but Ubuntu LTS, Debian, and most server images still ship 8192. If you’re on those, the bump is on you.
One more thing: if you’re inside a container, the host’s sysctl is what applies. Containers share the host kernel, so bumping the limit from inside has no effect (and /proc/sys is usually mounted read-only there anyway). Fix it on the bare metal or VM that’s actually running the container; on Kubernetes that means the node, not the pod, so SSH in or use kubectl debug node/<name> rather than kubectl exec.
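You can see the sharing directly, assuming Docker is installed: a fresh container reports the host’s value, and bumping the host’s sysctl changes what the container sees.

```shell
# Inside a fresh container (shares the host kernel, no user namespace):
docker run --rm busybox cat /proc/sys/fs/inotify/max_user_watches
# On the host: the same number.
cat /proc/sys/fs/inotify/max_user_watches
```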
How to know it worked
cat /proc/sys/fs/inotify/max_user_watches # 524288
cat /proc/sys/fs/inotify/max_user_instances # 512
# Trigger a save in your editor; the warning should be gone.
That’s it. Five minutes of work eliminates an entire class of “my editor is mysteriously slow on big projects” bugs. Ship the sysctl drop-in via your config-management tool of choice to every dev box and CI runner, and you’ll never see this warning again.
