nginx vs Caddy vs OpenLiteSpeed for a single-VPS WordPress setup: when each makes sense

You’re spinning up a $5/month VPS to host one or two WordPress sites. The web server choice is one of the early decisions that’s hard to undo later because the rest of your config — vhost files, cache layer, TLS automation, fail2ban patterns — is shaped by it. Three reasonable options in 2026: nginx, Caddy, and OpenLiteSpeed. Each makes a different set of tradeoffs.

I’ve run all three on production WordPress for at least a year. Here’s the actual comparison — what’s good, what’s annoying, and which one fits which situation.

nginx — the boring industry default

  • What’s good. Stable, ubiquitous, every deploy guide on the internet assumes nginx. fail2ban filters, log formats, monitoring dashboards — all default to nginx out of the box. PHP-FPM via socket is well-trodden territory.
  • What’s annoying. Every TLS cert involves wrestling with certbot (a typical invocation is shown after the config below). The config language is its own thing — location blocks, try_files, fastcgi_pass — and small mistakes silently produce 502s. Page caching for WordPress requires either fastcgi_cache (sharp learning curve) or a separate Varnish layer.
  • When it’s right. You’re at a company where ops standards are nginx. You’re hosting many sites with shared SSL configs. You like editing config in vim, you don’t like surprises.
# /etc/nginx/sites-available/wp.example.com — minimal but real
server {
    server_name  wp.example.com;
    root         /var/www/wp.example.com;
    index        index.php index.html;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include       fastcgi_params;
        fastcgi_pass  unix:/run/php/php8.2-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }

    listen 443 ssl;
    ssl_certificate     /etc/letsencrypt/live/wp.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/wp.example.com/privkey.pem;
}
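
For comparison with what comes later, the certbot wrestling usually boils down to something like the following — a sketch assuming the python3-certbot-nginx plugin is installed; renewals then run from the packaged timer/cron job.

# Issue a cert and let certbot wire it into the server block for you
sudo certbot --nginx -d wp.example.com

# Or keep certbot out of your nginx config and just drop certs under /etc/letsencrypt
sudo certbot certonly --webroot -w /var/www/wp.example.com -d wp.example.com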

Caddy — the "TLS just works" one

  • What’s good. Auto-HTTPS — you write the hostname, Caddy obtains the cert from Let’s Encrypt automatically and renews it forever. Single config file in plain syntax, much shorter than the nginx equivalent. HTTP/3 enabled by default. Built-in reverse proxy with sane defaults.
  • What’s annoying. WordPress’s caching plugins haven’t all caught up — W3 Total Cache and LiteSpeed Cache rely on nginx-specific path-based rewrite rules. WP Super Cache works fine. Operator tooling (logrotate, fail2ban filters) is more DIY than for nginx (a log snippet that helps with that follows the Caddyfile below). Caddy’s PHP-FastCGI integration is rock-solid but less documented.
  • When it’s right. Single-VPS, one or a few WordPress sites, you want certs to never be a problem, you don’t have an existing nginx config to migrate.
# /etc/caddy/Caddyfile — the equivalent of the nginx config above
wp.example.com {
    root * /var/www/wp.example.com
    php_fastcgi unix//run/php/php8.2-fpm.sock
    file_server
}

# That's it. TLS auto-issues. WordPress works.
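
On the DIY tooling point: one thing worth doing up front is writing the access log to a plain-text file so you can read it by eye and point fail2ban or CrowdSec at it. A minimal sketch, with the log path as a placeholder:

# /etc/caddy/Caddyfile — same site block, with a human-readable access log
wp.example.com {
    root * /var/www/wp.example.com
    php_fastcgi unix//run/php/php8.2-fpm.sock
    file_server

    log {
        output file /var/log/caddy/wp.example.com.access.log
        format console    # readable text instead of the default JSON
        # Caddy's file writer rotates logs on its own by default (see roll_size / roll_keep)
    }
}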

OpenLiteSpeed — the WordPress-tuned one

  • What’s good. Built-in LSCache, the best WordPress page cache in the open-source space. ESI for fragment caching. mod_pagespeed-style optimizations baked in. Web admin GUI — if you’re not a config-file person, you can do most of your tweaks through the browser. Apache .htaccess compatibility, so existing WP rewrite rules just work.
  • What’s annoying. The community is smaller: fewer Stack Overflow answers, fewer monitoring dashboards. The default config layout (vhRoot, listeners, the bin/ scripts) is its own world. Web admin runs on port 7080 by default, which you’d better firewall off (see the snippet after the vhost config below).
  • When it’s right. WordPress is the primary workload. You want the fastest possible WP page-speed without setting up Varnish/Redis/etc. You’re OK with a non-mainstream stack to get that performance.
# /usr/local/lsws/conf/vhosts/wp.example.com/vhconf.conf — partial
docRoot                   /var/www/wp.example.com/
vhDomain                  wp.example.com

context / {
    location                /var/www/wp.example.com/
    allowBrowse             1
    rewrite  {
        enable                 1
        autoLoadHtaccess       1     # WP's .htaccess handles permalinks
    }
}

# LSCache is enabled via the LSCache plugin in WordPress + this vhconf
cache  {
    enableCache             1
    qsCache                 1
    expireInSeconds         3600
}
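
On the port 7080 point, locking the admin GUI down to your own IP takes two firewall rules. A sketch assuming ufw; 203.0.113.10 stands in for your workstation’s address:

# Allow the OLS admin console only from one trusted IP, block it for everyone else
sudo ufw allow from 203.0.113.10 to any port 7080 proto tcp
sudo ufw deny 7080/tcp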

Side-by-side, where the rubber meets the road

  • Memory footprint (idle, default config): Caddy ~30 MB, nginx ~25 MB, OpenLiteSpeed ~80 MB. All of these are fine on a $5 VPS.
  • Cold-start request latency on a 100-post WordPress: nginx + fastcgi_cache ≈ OpenLiteSpeed + LSCache (both ~5 ms once cache is warm). Caddy without a cache plugin: ~80 ms. Caddy with WP Super Cache: ~10 ms.
  • Setting up TLS for a new site: Caddy, 0 manual steps; OpenLiteSpeed, ~5 steps (request via certbot in webroot mode); nginx, ~5 (same).
  • Time-to-debug a 502: nginx is best documented. OpenLiteSpeed has good logs. Caddy’s logs are JSON by default, which is great for tooling but painful for eyeballing.
  • fail2ban / CrowdSec integration: nginx is first-class (a minimal wp-login jail is sketched after this list). OpenLiteSpeed works fine with the iptables bouncer. Caddy needs a plugin (caddy-security or matching log patterns yourself).
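
To make the fail2ban point concrete, here’s a minimal jail for wp-login brute force against nginx’s default combined access log. The filter name and thresholds are my own placeholders; tune maxretry and findtime to your traffic.

# /etc/fail2ban/filter.d/wp-auth.conf  (hypothetical filter name)
[Definition]
failregex = ^<HOST> .* "POST /(wp-login\.php|xmlrpc\.php)
ignoreregex =

# /etc/fail2ban/jail.d/wp-auth.local
[wp-auth]
enabled  = true
port     = http,https
filter   = wp-auth
logpath  = /var/log/nginx/access.log
maxretry = 10
findtime = 60
bantime  = 3600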

My picks for three concrete cases

  • One personal WordPress on a $5 VPS, you’re not a sysadmin. → Caddy. The TLS-auto magic alone saves hours per year. The default cache via WP Super Cache is good enough.
  • Multiple high-traffic WordPress sites, you care about speed. → OpenLiteSpeed. LSCache + ESI + the LiteSpeed Cache WordPress plugin is genuinely faster than nginx + Varnish for most WP workloads, with a fraction of the operational complexity.
  • You have an existing nginx fleet, this is one more site, ops cares about consistency. → nginx. Don’t introduce a third tool just for one site.
  • Mixed: one WordPress + several Node/Go/Python apps reverse-proxied. → Caddy, hands down. Reverse-proxying is its native language and the syntax is much cleaner than nginx’s (a two-line example follows this list).
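
For the mixed case, the whole Caddyfile stays readable. A sketch, with app.example.com and the backend port as placeholders:

# /etc/caddy/Caddyfile — one WordPress site plus a Node app behind a reverse proxy
wp.example.com {
    root * /var/www/wp.example.com
    php_fastcgi unix//run/php/php8.2-fpm.sock
    file_server
}

app.example.com {
    reverse_proxy localhost:3000    # certs for this hostname auto-issue too
}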

One thing that’s not in the comparison: don’t dual-stack. Pick one and commit. Running nginx as a TLS-terminating front and OpenLiteSpeed behind it “for the LSCache” sounds clever for about six months — until the day a request times out and you’re four logs deep trying to figure out which layer is at fault. Pick the one whose strengths fit your actual situation, and live with its weaknesses.
