Varnish Cache is renowned for its incredible speed as a reverse caching proxy. While most tutorials focus on its caching capabilities, Varnish’s powerful Varnish Configuration Language (VCL) also allows it to function as a highly effective, high-performance load balancer.
However, there’s a critical technical detail: Varnish itself does not terminate SSL/TLS. It’s designed to handle unencrypted HTTP traffic at maximum speed.
To achieve a full SSL-enabled, load-balanced setup, we must deploy a “stack” where a dedicated SSL termination proxy sits in front of Varnish. This proxy will handle the computationally expensive SSL/TLS handshake, decrypt the HTTPS traffic, and pass plain HTTP to Varnish. Varnish will then use its load-balancing logic to distribute this traffic to a pool of backend servers.
For this “highly technical” guide, we won’t use NGINX or Apache for SSL termination. We will use Hitch, a lightweight, high-performance SSL/TLS proxy specifically recommended by the Varnish Cache project for this exact purpose.
Architecture Overview
Our target architecture will be:
Client (HTTPS) -> [Port 443] Hitch (SSL Termination) -> [Port 8443, PROXY Protocol] Varnish (Cache & Load Balancer) -> [Ports 8080/8081] Backend Servers
We will use the PROXY protocol between Hitch and Varnish. This is essential because, without it, Varnish would see all traffic as originating from 127.0.0.1 (Hitch). The PROXY protocol forwards the real client IP address, allowing for correct logging and IP-based VCL logic.
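Once the full stack is running (Step 6), you can confirm that the real client address is what Varnish sees. A quick check, assuming the varnishlog tool that ships with the Varnish package:
Bash
# Show the client address Varnish associates with each incoming request;
# with the PROXY protocol in place this should be the real client, not 127.0.0.1
sudo varnishlog -g request -i ReqStart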
Prerequisites
- An Ubuntu 22.04 server.
- Root or sudo privileges.
- A registered domain name (e.g., yourdomain.com) pointing to your server’s public IP.
- Valid SSL certificates. We’ll use Let’s Encrypt (Certbot).
- At least two backend application servers. For this guide, we’ll simulate them with two simple NGINX instances running on ports 8080 and 8081 on the same machine. In a real-world scenario, these would be separate servers.
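If you want to confirm the DNS record before starting, a quick lookup (dig is in the dnsutils package) should return this server’s public IP:
Bash
# Should print the server's public IPv4 address
dig +short A yourdomain.com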
Step 1: Simulate Backend Servers (Optional)
If you don’t already have backends, let’s quickly set up two NGINX servers to act as our “application.”
- Install NGINX:
Bash
sudo apt update
sudo apt install nginx
- Create two separate configs. First, disable the default:
Bash
sudo rm /etc/nginx/sites-enabled/default
- Create the config for backend1:
Bash
sudo nano /etc/nginx/sites-available/backend1
Paste this in:
Nginx
server {
    listen 8080;
    server_name _;
    root /var/www/backend1;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}
- Create the config for backend2:
Bash
sudo nano /etc/nginx/sites-available/backend2
Paste this in:
Nginx
server {
    listen 8081;
    server_name _;
    root /var/www/backend2;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}
- Create the content and symlink the configs:
Bash
# Create content directories
sudo mkdir -p /var/www/backend1
sudo mkdir -p /var/www/backend2

# Create unique pages to see the load balancing
echo "<h1>Welcome to Backend Server 1</h1>" | sudo tee /var/www/backend1/index.html
echo "<h1>Welcome to Backend Server 2</h1>" | sudo tee /var/www/backend2/index.html

# Enable the sites
sudo ln -s /etc/nginx/sites-available/backend1 /etc/nginx/sites-enabled/
sudo ln -s /etc/nginx/sites-available/backend2 /etc/nginx/sites-enabled/

# Test and restart
sudo nginx -t
sudo systemctl restart nginx
- Verify they are working:
Bash
curl http://127.0.0.1:8080   # Should show "Backend Server 1"
curl http://127.0.0.1:8081   # Should show "Backend Server 2"
Step 2: Install Varnish Cache
Ubuntu 22.04’s default repositories may have an outdated Varnish. We will use the official Varnish Cache repository for the latest stable version.
- Install prerequisites and the GPG key:
Bash
sudo apt install curl gnupg apt-transport-https
curl -fsSL https://packagecloud.io/varnishcache/varnish74/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/varnish-archive-keyring.gpg
(Note: Check the Varnish documentation for the latest recommended version, e.g., varnish74.)
- Add the repository:
Bash
echo "deb [signed-by=/usr/share/keyrings/varnish-archive-keyring.gpg] https://packagecloud.io/varnishcache/varnish74/ubuntu/ jammy main" | sudo tee /etc/apt/sources.list.d/varnish.list echo "deb-src [signed-by=/usr/share/keyrings/varnish-archive-keyring.gpg] https://packagecloud.io/varnishcache/varnish74/ubuntu/ jammy main" | sudo tee -a /etc/apt/sources.list.d/varnish.list - Install Varnish:
Bash
sudo apt update
sudo apt install varnish
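Optionally, confirm that the package came from the Varnish Cache repository rather than Ubuntu’s default one (the exact version string will vary):
Bash
# Print the installed Varnish version
varnishd -V
# Show which repository the varnish package was installed from
apt-cache policy varnish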
Step 3: Configure Varnish Service (systemd)
We need to change the varnishd daemon’s listening ports. The modern, correct way to do this is with a systemd override file, not by editing the old /etc/default/varnish file.
- Create the override directory and file:
Bash
sudo mkdir -p /etc/systemd/system/varnish.service.d sudo nano /etc/systemd/system/varnish.service.d/override.conf - Paste the following configuration. This is the technical core of the Varnish setup:
Ini, TOML
[Service]
# Clear the existing ExecStart
ExecStart=
# Define the new ExecStart
ExecStart=/usr/sbin/varnishd \
    -a :80 \
    -a 127.0.0.1:8443,proxy \
    -p feature=+http2 \
    -f /etc/varnish/default.vcl \
    -s malloc,256m
Breaking this down:
- ExecStart=: This first blank entry is crucial. It clears the default ExecStart arguments from the main service file.
- /usr/sbin/varnishd: The path to the daemon.
- -a :80: The standard HTTP port. We leave this open for HTTP-to-HTTPS redirects and so Certbot can complete HTTP validation (e.g., with the webroot plugin).
- -a 127.0.0.1:8443,proxy: The PROXY protocol port. Hitch will send its traffic here. It listens only on localhost (127.0.0.1) for security, and the ,proxy flag tells Varnish to expect and interpret the PROXY protocol header.
- -p feature=+http2: Enables HTTP/2 for client connections. It only takes effect once the TLS proxy negotiates h2 via ALPN (we configure alpn-protos in Hitch later); connections to the backends remain plain HTTP/1.1.
- -f /etc/varnish/default.vcl: The path to our VCL configuration.
- -s malloc,256m: The cache storage definition: malloc (in-memory) with a 256 MB size. For production, you might use file storage.
- Reload systemd to apply the change:
Bash
sudo systemctl daemon-reload
Varnish is not started yet; we need to write our VCL first.
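Before moving on, you can sanity-check that systemd picked up the override by printing the effective unit:
Bash
# Shows the original unit plus the override.conf drop-in, merged
systemctl cat varnish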
Step 4: Configure VCL for Load Balancing
This is where we tell Varnish how to load balance. Edit the default VCL file:
Bash
sudo nano /etc/varnish/default.vcl
Replace the entire file with the following:
Code snippet
vcl 4.1;
# Import necessary modules
import directors; # For load balancing
import proxy; # For the PROXY protocol
# --- Backend Definitions ---
# Define our first backend server
backend server1 {
.host = "127.0.0.1";
.port = "8080";
# Health Probe
.probe = {
.url = "/"; # What URL to check
.interval = 5s; # Check every 5 seconds
.timeout = 1s; # Fail if no response in 1s
.window = 5; # Keep history of last 5 probes
.threshold = 3; # Mark as healthy after 3 successes
}
}
# Define our second backend server
backend server2 {
.host = "127.0.0.1";
.port = "8081";
# Health Probe
.probe = {
.url = "/";
.interval = 5s;
.timeout = 1s;
.window = 5;
.threshold = 3;
}
}
# --- Initialization (vcl_init) ---
# This VCL subroutine is called when Varnish starts.
# We initialize our load balancing director here.
sub vcl_init {
# Create a new director of type "round_robin"
new my_director = directors.round_robin();
# Add our backends to the director
my_director.add_backend(server1);
my_director.add_backend(server2);
}
# --- Request Handling (vcl_recv) ---
# This is called for every incoming request.
sub vcl_recv {
    # 1. Handle the PROXY protocol
    # Hitch sends a PROXY protocol header to the 8443 listener, and Varnish
    # parses it automatically: client.ip already holds the real client
    # address here and is appended to X-Forwarded-For for us.
    # Expose it in a dedicated header for the backends as well:
    set req.http.X-Real-IP = client.ip;
# 2. Assign the backend
# Tell Varnish to use our load balancing director
set req.backend_hint = my_director.backend();
# 3. Handle Health Checks from external monitors (optional)
if (req.url == "/health") {
return (synth(200, "OK"));
}
# 4. Do not cache admin sections
if (req.url ~ "^/admin") {
return (pass); # "pass" bypasses the cache
}
# 5. Only cache GET or HEAD requests
if (req.method != "GET" && req.method != "HEAD") {
return (pass);
}
# Default Varnish behavior
return (hash);
}
# --- Backend Response Handling (vcl_backend_response) ---
# Called after a response is fetched from the backend.
sub vcl_backend_response {
# Set a Grace period of 1 hour.
# If the backend is down, Varnish will serve stale content
# for up to 1 hour instead of showing an error.
set beresp.grace = 1h;
# Set a Time-To-Live (TTL) for the cache
set beresp.ttl = 10m;
return (deliver);
}
# --- Delivery Handling (vcl_deliver) ---
# Called just before the response is sent to the client.
sub vcl_deliver {
# Add a header to see if the request was a cache HIT or MISS
if (obj.hits > 0) {
set resp.http.X-Cache = "HIT";
} else {
set resp.http.X-Cache = "MISS";
}
# Add Varnish instance name (optional)
set resp.http.X-Varnish-Host = server.hostname;
# Clean up internal headers
unset resp.http.X-Varnish;
unset resp.http.Age;
return (deliver);
}
Technical Notes on this VCL:
- import directors;: This is essential for any load balancing.
- directors.round_robin(): We use a simple round-robin director. Other options include .hash() (for sticky sessions based on a value), .fallback() (for active/passive), and .random().
- .probe: This is critical. Varnish actively polls these backend URLs. If a backend fails the probe (e.g., returns a 503), the director automatically and temporarily removes it from the rotation, preventing users from seeing errors.
- vcl_recv: We set req.backend_hint to the director. This tells Varnish to ask the director which backend to use for any cache miss.
- vcl_backend_response: We set a grace period. This is a key feature for high availability: if all backends are down (failing health probes), Varnish will serve stale content from its cache for up to 1 hour, which is far better than returning a 503 error.
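Before wiring Hitch in, it is worth confirming that the VCL compiles. varnishd can compile a VCL file without starting the daemon; a non-zero exit status means there is an error:
Bash
# Compile the VCL and discard the generated C code; errors are printed to stderr
sudo varnishd -C -f /etc/varnish/default.vcl > /dev/null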
Step 5: Install and Configure Hitch (SSL Termination)
Now, we set up the “front door” for our SSL traffic.
- Install Certbot and get certificates:
Bash
sudo apt install certbot

# Use --standalone (stopping Varnish temporarily) or --webroot
sudo systemctl stop varnish
sudo certbot certonly --standalone -d yourdomain.com
sudo systemctl start varnish   # Don't forget to restart!
This will place your certificates in /etc/letsencrypt/live/yourdomain.com/.
- Install Hitch:
Bash
sudo apt install hitch
- Prepare the “PEM” file for Hitch: Hitch requires the private key and the full certificate chain (cert + intermediate) to be in a single file.
Bash
sudo mkdir -p /etc/hitch/certs

sudo cat /etc/letsencrypt/live/yourdomain.com/fullchain.pem \
         /etc/letsencrypt/live/yourdomain.com/privkey.pem \
         | sudo tee /etc/hitch/certs/yourdomain.pem

# Set strict permissions
sudo chmod 600 /etc/hitch/certs/yourdomain.pem
sudo chown hitch:hitch /etc/hitch/certs/yourdomain.pem
(Note: You will need to automate this step so it runs after every certbot renew; a renewal-hook sketch follows at the end of this step. If the chown fails, check whether your package created the service user as hitch or _hitch and adjust the commands and hitch.conf accordingly.)
- Configure Hitch: Edit the main configuration file:
Bash
sudo nano /etc/hitch/hitch.conf
Replace the entire file with this:
Code snippet
# /etc/hitch/hitch.conf

# --- Frontend (Client-facing) ---
# Listen on port 443 on all IPs
frontend = {
    host = "*"
    port = "443"
}

# --- Backend (Varnish-facing) ---
# Send traffic to Varnish's PROXY protocol port
backend = "[127.0.0.1]:8443"

# --- SSL/TLS Settings ---
# Point to our combined PEM file
pem-file = "/etc/hitch/certs/yourdomain.pem"

# --- Protocol Settings ---
# ENABLE THE PROXY PROTOCOL
# This is the most important line for this guide
write-proxy-v2 = on

# Offer HTTP/2 via ALPN so clients can use the HTTP/2
# support we enabled in Varnish (feature=+http2)
alpn-protos = "h2, http/1.1"

# --- Performance ---
# Number of worker processes (usually = CPU cores); adjust to your core count
workers = 4

# --- Security ---
# Use modern, secure ciphers and protocols
ciphers = "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH"
prefer-server-ciphers = on
tls-protos = TLSv1.2 TLSv1.3

# Run as the 'hitch' user
user = "hitch"
group = "hitch"
daemon = on
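As noted above, the combined PEM file must be rebuilt whenever the certificate is renewed. One way to automate this is a Certbot deploy hook: Certbot runs any executable placed in /etc/letsencrypt/renewal-hooks/deploy/ after each successful renewal. A minimal sketch (the script name is illustrative; adjust the domain and the hitch user if yours differ):
Bash
#!/bin/sh
# Save as /etc/letsencrypt/renewal-hooks/deploy/rebuild-hitch-pem.sh and make it executable.
# Rebuild the combined PEM for Hitch after every successful renewal.
cat /etc/letsencrypt/live/yourdomain.com/fullchain.pem \
    /etc/letsencrypt/live/yourdomain.com/privkey.pem \
    > /etc/hitch/certs/yourdomain.pem
chmod 600 /etc/hitch/certs/yourdomain.pem
chown hitch:hitch /etc/hitch/certs/yourdomain.pem
# Restart Hitch so it picks up the new certificate
systemctl restart hitch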
Step 6: Finalize and Test the Stack
- Enable and start the services: We need Varnish and Hitch to run, in that order.
Bash
sudo systemctl restart varnish
sudo systemctl enable varnish
sudo systemctl restart hitch
sudo systemctl enable hitch
- Verify services are running:
Bash
sudo systemctl status varnish hitch
You should see them both as active (running).
- Test the stack! Open your terminal and use curl.
Bash
# Run this a few times, with a unique query string each time; identical
# URLs are served from the cache after the first request, so this is how
# you force cache misses and watch the round robin at work
curl "https://yourdomain.com/?test1"
curl "https://yourdomain.com/?test2"
Expected Output (alternating):
HTML
<h1>Welcome to Backend Server 1</h1>
…run again…
HTML
<h1>Welcome to Backend Server 2</h1>
This proves the load balancing is working (for cache misses).
- Test the cache: Now, let’s look at the headers.
Bash
curl -I https://yourdomain.com
First Request:
HTTP/2 200
...
X-Cache: MISS
X-Varnish-Host: your-server-hostname
...
Second Request (immediately after):
HTTP/2 200
...
X-Cache: HIT
X-Varnish-Host: your-server-hostname
...
The X-Cache: HIT proves Varnish is now serving the content from its cache and is no longer contacting the backend. The load balancing is “paused” until the cache expires (after the 10-minute TTL we set) or a new, un-cached page is requested.
- Test Varnish’s health probes: Let’s kill one of our backends.
Bash
# Stop the NGINX backend on port 8080.
# (This is tricky because both backends run under one NGINX service. As a
#  simulation, temporarily comment out the 'listen 8080;' line in
#  /etc/nginx/sites-available/backend1 and run 'sudo systemctl reload nginx'.)
After stopping one backend, wait about 15 seconds, enough for three probes at the 5-second interval to fail so Varnish marks the backend sick. Now, if you run curl https://yourdomain.com repeatedly (using unique query strings so you bypass the cache), you will only see “Welcome to Backend Server 2”. Varnish has automatically detected the failure and pulled server1 from the load-balancing pool.
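You can also ask Varnish directly what the health probes are seeing. Two quick checks with the tools installed alongside Varnish:
Bash
# List all backends with their current probe status (healthy/sick)
sudo varnishadm backend.list
# Print cache hits, misses and backend fetches; re-run as you test
sudo varnishstat -1 -f MAIN.cache_hit -f MAIN.cache_miss -f MAIN.backend_req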
Conclusion
You have successfully deployed a sophisticated, high-availability stack on Ubuntu 22.04. This configuration gives you the best of all worlds:
- Security: SSL/TLS termination via the lightweight and secure Hitch.
- Performance: Blazing-fast responses for cacheable content via Varnish.
- Resilience: High-availability load balancing via Varnish Directors and Health Probes, ensuring traffic is only sent to healthy application servers.
- Insight: Correct client IP logging and VCL logic using the PROXY protocol.
This stack is robust, production-ready, and a testament to the power of Varnish beyond simple caching.