I have a 4 GB Oracle ARM box that, by all rights, should not be running everything I run on it: LSWS with eight WordPress sites, a Postfix relay, fail2ban, CrowdSec, Tailscale, three small Node services, and a couple of cron-driven backup scripts. It works because I turned on compressed swap. The kernel transparently compresses memory pages it would otherwise have to push to disk, getting maybe 3:1 effective expansion before any actual swap I/O happens. It’s the closest thing to free RAM that exists in 2026.
Linux ships two implementations: zswap and zram. They sound identical and they’re not. Picking the right one matters; running both on the same box is worse than running either alone. Here’s the actual difference, when each wins, and the kernel parameter that flips between them.
What each one does
- zram creates a block device that lives in RAM with on-the-fly compression. You point swap at it. When the kernel needs to evict a page, it gets compressed and stays in RAM (in the zram device). No disk I/O ever happens — the zram device is the swap. When RAM pressure increases beyond what zram can absorb, you’ve got real swap pressure (OOM killer, latency spikes), but the runway is much longer than uncompressed RAM alone.
- zswap sits between the page cache and your disk-backed swap. Pages chosen for eviction get compressed first; a configurable cache of compressed pages stays in RAM, and the rest spills to disk-backed swap. The disk-backed swap (your swap file or swap partition) still has to exist; zswap is a write-back cache for it.
The architectural difference: zram replaces disk swap. zswap accelerates disk swap. The choice is determined by whether you have a disk you’re willing to spill to.
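To see which mechanism a box is already using, two quick reads (standard paths on recent kernels; adjust if your distro differs):

```shell
# "Y" means zswap is active; a missing file means the module isn't loaded.
zswap_state=$(cat /sys/module/zswap/parameters/enabled 2>/dev/null)
echo "zswap enabled: ${zswap_state:-module not present}"

# zram swap shows up here as /dev/zramN.
swapon --show 2>/dev/null || cat /proc/swaps
```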
When to use which
- Use zram if you have no swap partition, no swap file, and don’t want to create one. Common for small VPSes, embedded boxes, anything where the disk is small or write-amplification matters. zram is what’s running on my Oracle box.
- Use zswap if you already have a disk-backed swap and want to reduce how often the kernel actually writes to it. Useful on desktops, laptops with NVMe SSDs, or beefier servers where you want graceful spillover beyond what would fit in compressed RAM.
- Use neither if your workload genuinely and routinely needs more RAM than you have. Compressed swap buys you 2-3× effective expansion, not 10×. If you’re constantly swapping, the answer is more RAM, not better swap.
Setting up zram
On Debian/Ubuntu, install the helper package:
sudo apt install zram-tools
# Configure: /etc/default/zramswap
ALGO=zstd
PERCENT=50
PRIORITY=100
sudo systemctl restart zramswap
swapon --show
PERCENT=50 sizes the zram device at 50% of RAM. That figure is uncompressed capacity: with the ~3:1 compression typical workloads get, a full device occupies only about a sixth of physical RAM while holding half a RAM’s worth of evicted pages, which puts effective capacity around 1.3× RAM. ALGO=zstd is the right choice in 2026: better compression than lzo/lz4 at modest CPU cost.
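To put numbers on that, a sketch with assumed figures (a 4 GB box, PERCENT=50, 3:1 compression; integer shell arithmetic only):

```shell
ram_mb=4096
percent=50
ratio=3

disksize_mb=$(( ram_mb * percent / 100 ))           # zram capacity (uncompressed): 2048 MB
pool_mb=$(( disksize_mb / ratio ))                  # physical RAM consumed when full: ~682 MB
effective_mb=$(( ram_mb - pool_mb + disksize_mb ))  # usable memory at full swap: ~5462 MB
echo "${disksize_mb} ${pool_mb} ${effective_mb}"
```

So a maxed-out device trades ~17% of physical RAM for ~33% more usable memory.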
Verify it’s working:
cat /sys/block/zram0/comp_algorithm # active algorithm
cat /sys/block/zram0/disksize # uncompressed capacity, bytes
cat /sys/block/zram0/mm_stat # orig_data_size compr_data_size mem_used_total ...
On older kernels orig_data_size and compr_data_size were separate sysfs files; modern kernels expose them as the first two columns of mm_stat.
Compression ratio = orig_data_size / compr_data_size. On WP/PHP-heavy workloads I see 3.2:1; on Node/V8 heavier workloads, closer to 2.5:1.
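A sketch of pulling that ratio out of an mm_stat line with awk (the sample values are made up):

```shell
# First two mm_stat columns: orig_data_size, compr_data_size (bytes).
sample="1073741824 335544320 347078656 0 347078656 0 0 0"
echo "$sample" | awk '{ printf "compression ratio %.1f:1\n", $1 / $2 }'
# prints "compression ratio 3.2:1"
```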
Setting up zswap
zswap is a kernel feature, not a userspace daemon. Enable via boot parameter or runtime sysfs:
# Boot-time (Ubuntu/Debian)
sudo nano /etc/default/grub
# Add to GRUB_CMDLINE_LINUX_DEFAULT:
# zswap.enabled=1 zswap.compressor=zstd zswap.max_pool_percent=20
sudo update-grub
sudo reboot
# Runtime (no reboot, lost on next boot)
echo 1 | sudo tee /sys/module/zswap/parameters/enabled
echo zstd | sudo tee /sys/module/zswap/parameters/compressor
echo 20 | sudo tee /sys/module/zswap/parameters/max_pool_percent
max_pool_percent=20 caps zswap’s compressed-page cache at 20% of RAM. Beyond that, evicted pages go to disk-backed swap as usual. Higher percentages give you more compressed-RAM cache; lower percentages mean more disk I/O when swap pressure builds.
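Concretely, on a hypothetical 4 GB box with the 20% setting above:

```shell
ram_mb=4096
max_pool_percent=20
pool_cap_mb=$(( ram_mb * max_pool_percent / 100 ))
echo "zswap pool capped at ${pool_cap_mb} MB"  # past this, evicted pages hit disk
```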
Watch zswap effectiveness:
# Pages held in zswap's compressed cache (one page each, typically 4 KB)
cat /sys/kernel/debug/zswap/stored_pages
# Pages evicted to disk-backed swap (cache misses)
cat /sys/kernel/debug/zswap/written_back_pages
# RAM consumed by the compressed pool, in bytes
cat /sys/kernel/debug/zswap/pool_total_size
If written_back_pages is climbing fast, your max_pool_percent is too low for the workload. Bump it to 30-40.
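One way to judge “climbing fast” is the miss rate: written_back_pages against everything zswap has handled. A sketch with hypothetical counter readings:

```shell
stored=120000       # hypothetical stored_pages reading
written_back=3000   # hypothetical written_back_pages reading
awk -v s="$stored" -v w="$written_back" \
  'BEGIN { printf "writeback miss rate %.1f%%\n", 100 * w / (s + w) }'
# prints "writeback miss rate 2.4%"
```

Single-digit percentages mean the pool is absorbing most of the pressure; double digits suggest raising max_pool_percent.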
Don’t run both
Running zram swap and zswap together is a bad idea. zswap will compress pages on their way to the zram swap device — so they get compressed twice. The second compression buys nothing (zstd output isn’t compressible) but costs CPU. Pick one based on whether you’re trying to avoid disk swap entirely (zram) or accelerate existing disk swap (zswap).
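A quick check for the bad combination, shown here with sample stand-ins for the real files (on a live box you’d read /proc/swaps and the zswap enabled parameter directly):

```shell
# Sample stand-ins; replace with the real reads on a live system.
swaps_sample="Filename   Type       Size     Used  Priority
/dev/zram0   partition  2097148    0     100"
zswap_enabled="Y"   # from: cat /sys/module/zswap/parameters/enabled

if printf '%s\n' "$swaps_sample" | grep -q zram && [ "$zswap_enabled" = "Y" ]; then
  echo "double compression: zswap is caching pages bound for a zram swap device"
fi
```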
The vm.swappiness rule
Once you have compressed swap, bump vm.swappiness from the default (60) to something more aggressive (100-180). Default swappiness was tuned for slow disk-based swap; with zram or zswap, swap is fast (compression speed, not disk speed), so the kernel can use it more aggressively without latency penalty:
# /etc/sysctl.d/99-swappiness.conf
vm.swappiness=180
vm.vfs_cache_pressure=50
Apply with sudo sysctl --system. Values above 100 have been allowed up to 200 since kernel 5.8; 180 tells the kernel to prefer compressed swap over evicting page cache. The vfs_cache_pressure=50 keeps useful filesystem cache around longer, since disk reads are now expensive relative to compressed-swap reads.
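For intuition on what 180 does: reclaim has historically weighted anonymous (swappable) pages against file-cache pages roughly as swappiness : (200 - swappiness). Modern kernels also factor measured I/O cost into the split, so treat this arithmetic as a mental model, not a spec:

```shell
swappiness=180
anon_weight=$swappiness              # pressure toward swapping anonymous pages
file_weight=$(( 200 - swappiness ))  # pressure toward dropping file cache
echo "anon:file reclaim weight ~ ${anon_weight}:${file_weight}"  # 9:1 toward swap
```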
The bottom line
Compressed swap turned a 4 GB box into a server that comfortably runs 8+ WordPress sites. Setup takes five minutes. The trade-off is a few percent of CPU spent on zstd compression, invisible on the modern Arm server cores I’m running.
If you’ve ever stared at free -m and seen 100 MB free and wondered if you needed to upgrade: try zram first. Most likely you’ll see your “available” jump 2-3× and your problem disappears for the cost of an apt install.
Cover photo: wwarby on Pexels.
