nginx proxy with LetsEncrypt

Why bother?

As a home lab nerd, I frequently find myself putting up various web-based services and wanting to access them remotely. Naturally, I can’t be sending passwords cleartext over the interwebs and while I do have a sweet VPN setup, I’m not always using a machine with my VPN keys handy. The obvious solution is HTTPS, but what to do about certs? Self-signed certs are great, except that you’re either always accepting the invalid cert warning or having to import your custom CA everywhere. And that’s where LetsEncrypt comes in. With the noble goal of encrypting all the things for free, LetsEncrypt–in theory–makes getting valid SSL certs for all of your sites extremely easy. In practice, I’ve only rarely had it work reliably for me, so I’m not anxious to use it as a solution.

Introduction and sick ASCII art diagram

That’s all changing today though. I finally figured out how I want to host things: an nginx ingress controller that manages LetsEncrypt certs and proxies requests through to my various backends. That way, I only have to get LetsEncrypt working consistently once and all of the certs are managed in a central location instead of strewn about my network like everything else.

       [ Internet ]
            |
       [ Router 10.0.0.1 ]
            |
       [ nginx / LetsEncrypt 10.0.0.10 ]
           /                        \
[ pastebin.geek.cm 10.0.0.14 ]    [ geek.cm 10.0.0.15 ]

nginx installation and configuration

So let’s start with the nginx setup. Personally, I like CentOS for servers. The transactional yum database (and ability to roll back) has saved me numerous times. The downside is that the packages get stale quickly and sometimes I like to feel a little bit more bleeding-edge. 3.10 kernel? Seems…old-fashioned. Anyway, nitpicking aside, CentOS will do fine for a quick and reproducible nginx/LetsEncrypt server.

First off, let’s get nginx installed. Drop the following repo definition into /etc/yum.repos.d/nginx.repo:

[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
enabled=1

No GPG check because I like to live dangerously. J/K, I actually like to live with an abundance of caution and prudence and that means no unsigned packages on my network.

# wget http://nginx.org/keys/nginx_signing.key
# rpm --import nginx_signing.key
# sed -i 's/gpgcheck=0/gpgcheck=1/' /etc/yum.repos.d/nginx.repo
# yum -y install nginx
# systemctl enable nginx && systemctl start nginx

Now we can verify that nginx is up and running by visiting http://10.0.0.10 (obviously substitute your own local address) from another system on the network. You should get nginx’s default page. If you didn’t, there’s a good chance that you haven’t opened up the firewall ports. You are using a firewall, right?

# firewall-cmd --permanent --add-service=http
# firewall-cmd --permanent --add-service=https
# firewall-cmd --reload

Generate a self-signed certificate

To ensure that we’ve got a working SSL configuration without dealing with LetsEncrypt yet, we’ll start with a self-signed certificate. (openssl will prompt for the subject fields; the answers don’t matter much for a throwaway cert.)

# openssl req -x509 -newkey rsa:4096 -nodes -sha256 -keyout geek.cm.key -days 365 -out geek.cm.crt

I like the /etc/pki directories for certs; they make sense to me, so we’re going to put the key and cert there:

# mv geek.cm.key /etc/pki/tls/private/
# mv geek.cm.crt /etc/pki/tls/certs/

And then modify the nginx configuration (e.g., a new file under /etc/nginx/conf.d/) to add an SSL server block:

server {
    listen 80;
    server_name geek.cm www.geek.cm;

    # Redirect all HTTP traffic to HTTPS.
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name geek.cm www.geek.cm;

    ssl_certificate /etc/pki/tls/certs/geek.cm.crt;
    ssl_certificate_key /etc/pki/tls/private/geek.cm.key;

    location / {
        root /usr/share/nginx/html/;
        allow all;
    }
}

Increasing the SSL security

This is based on Mozilla’s recommendations[1] for modern compatibility. If you’re going to have a wide variety of clients accessing your sites, then you may need to drop down to intermediate compatibility or legacy compatibility settings.

server {
    ...

    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_session_tickets off;

    ssl_protocols TLSv1.2;
    ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
    ssl_prefer_server_ciphers on;

    add_header Strict-Transport-Security max-age=15768000;

    ...
}

Test the nginx configuration and reload it.

# nginx -t
# systemctl reload nginx

Generate certificates with certbot

The next step is to install certbot, which is a pretty nice wrapper around LetsEncrypt functionality. To install certbot on CentOS, we need the EPEL (Extra Packages for Enterprise Linux) repo.

# yum -y install epel-release
# yum -y install certbot

Then we’ll go ahead and generate the cert using the http-01 challenge. Generally speaking, you can use the built-in web server plugins for certbot if you like (e.g., certbot --nginx or certbot --apache), but I prefer the webroot plugin: just get the certs and do the web server configuration myself. The other benefit of the webroot plugin is that it functions well behind Cloudflare without requiring additional hooks. You can use a hook to do the DNS challenge with Cloudflare[2], but I’ve found it less reliable than webroot. Cloudflare configuration will be discussed in another post.

# certbot certonly --webroot --webroot-path /usr/share/nginx/html/ -d geek.cm -d www.geek.cm

The webroot plugin adds a file under the /.well-known/acme-challenge/ directory in the webroot, which LetsEncrypt then requests to validate that you actually own the domain for which you’re requesting certs. This requires that you’ve already got DNS entries for those domains pointing to your server. You’ll answer some questions, accept the TOS, and like magic, you’ll have a free SSL cert to use for your server.

If everything went well, you’ve got a private key in /etc/letsencrypt/live/geek.cm/privkey.pem and a cert in /etc/letsencrypt/live/geek.cm/fullchain.pem. (Although obviously not geek.cm since that’s my domain, not yours.) You can optionally symlink the certs to /etc/pki or just modify the nginx configuration to point to the letsencrypt directory.

# rm /etc/pki/tls/certs/geek.cm.crt
# rm /etc/pki/tls/private/geek.cm.key
# ln -s /etc/letsencrypt/live/geek.cm/privkey.pem /etc/pki/tls/private/geek.cm.key
# ln -s /etc/letsencrypt/live/geek.cm/fullchain.pem /etc/pki/tls/certs/geek.cm.crt
server {
    ...
    ssl_certificate /etc/letsencrypt/live/geek.cm/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/geek.cm/privkey.pem;
    ...
}

If you’re doing this for a home lab like me, you will likely need to add a hosts entry since your domain will resolve to a public address and the routing won’t work as expected. (You’ll request https://geek.cm, it’ll resolve to your public IP, the request will hit your router from the inside, and your router will be like, “Oh, you want to access the management interface? Here you go.”) One of these days, I’ll figure out some appropriate iptables rules so that things work on the LAN without split DNS or hosts entries.
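For reference, such a hosts entry on a LAN client could look like this (using the example addresses and names from the diagram above):

```
# /etc/hosts on a machine inside the LAN: point the public hostnames
# at the internal nginx proxy directly, bypassing the router
10.0.0.10    geek.cm www.geek.cm pastebin.geek.cm
```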

You’ll want to test renewal of the certificate with certbot renew --dry-run. If everything is successful, then it’s time to set up the automated renewal of certificates with a cron job.

#!/bin/bash
/usr/bin/certbot renew --quiet --post-hook "systemctl reload nginx"
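One way to wire this up is a drop-in cron file. This is just a sketch, assuming certbot’s standard path from the EPEL package; the twice-daily schedule is a common convention rather than a requirement, since certbot only renews certs that are nearing expiry (within 30 days by default):

```
# /etc/cron.d/certbot-renew
# Run twice daily; certbot skips certs that aren't close to expiry,
# so this is cheap.
17 3,15 * * * root /usr/bin/certbot renew --quiet --post-hook "systemctl reload nginx"
```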

Configuring nginx to proxy to internal hosts

Presumably you’ll want to serve more than the nginx default page. In my case, I’ve got WordPress running on Apache over on 10.0.0.15. Since we aren’t going to co-mingle services by running Apache/PHP/MySQL on the same server as nginx, we’re going to need to proxy those requests.

upstream wordpress {
    server 10.0.0.15;
}

server {
    listen 80;
    ...
}

server {
    listen 443 ssl;
    ...

    location / {
        proxy_pass http://wordpress;
        proxy_buffering on;
        proxy_buffers 12 12k;
        proxy_redirect off;

        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
    }
}

It’s worth noting that Apache will be serving the site over HTTP. That is, nginx is receiving HTTPS requests on port 443 and making HTTP requests on port 80 to the upstream server. If you have concerns about your local network traffic being sniffed, then you’ll need to configure Apache for HTTPS and modify the proxy_pass line in the nginx configuration.
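In case you do want an encrypted upstream, here’s a sketch of the variant configuration, assuming Apache has been set up to listen on 443 with its own (possibly self-signed) certificate:

```nginx
upstream wordpress {
    server 10.0.0.15:443;
}

server {
    listen 443 ssl;
    ...

    location / {
        proxy_pass https://wordpress;
        # Note that nginx does not verify upstream certificates by
        # default; to verify, enable proxy_ssl_verify and point
        # proxy_ssl_trusted_certificate at the CA that signed the
        # upstream cert.
    }
}
```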

Side note: There was some additional configuration that I did on the Apache 2.4 side, using mod_remoteip, in order to log correct client IP addresses from nginx. If you don’t configure these options, all requests to your upstream servers will appear to originate from the nginx proxy.

...

RemoteIPInternalProxy 10.0.0.10
RemoteIPHeader X-Real-IP

...

And that’s it. Now we have a pretty decent setup that serves HTTPS on nginx and can proxy requests to our various internal hosts.

Ensuring that nginx doesn’t eat certbot requests

One issue that I’ve run into repeatedly when running certbot on the same host as web applications is that renewal doesn’t always function correctly. Things aren’t supposed to change: if your certbot renew --dry-run succeeded during setup, then it’s supposed to work basically forever. In practice, that’s not always the case. In one instance, I had set up certbot, acquired the cert, tested the renewal and then later moved DNS to Cloudflare. Three months later, I was receiving certificate expiration notices from LetsEncrypt. A look at the logs showed that domain validation was failing during the renewal. That was weird because I had definitely tested renewal during the setup process. Turned out that the validation method worked fine–as long as the site wasn’t behind Cloudflare. As someone who generally prefers the privacy afforded by services like Cloudflare, it’s important to have LetsEncrypt working even when my sites aren’t directly accessible.

Tangentially-relevant story time: I also had one host running mod_python with Apache which had a very specific set of Python dependencies that were incompatible with the versions required by certbot. There were two quick options: run the application and get rid of certbot or find the application a different home. The longer option would’ve been figuring out a way to resolve the package dependency issues, e.g., with a virtual environment for one or the other. Anyway, I had already used certbot to acquire the cert for the domain and everything was up and running. Rather than deal with the finicky application, I opted to just remove certbot–completely forgetting that it’d need to be there to renew the cert.

Stories aside, we need a way to ensure that nginx doesn’t pass along the validation requests from certbot to whatever upstream we’ve configured. Since the webroot validation uses HTTP, it’s just a matter of adding a short stanza to our configs for each domain:

server {
    listen 80;
    ...

    location /.well-known {
        root /usr/share/nginx/html;
        allow all;
    }

    location / {
        # The blanket HTTP-to-HTTPS redirect moves from the server level
        # into a location block; a server-level return fires before
        # location matching, so it would swallow the challenge requests.
        return 301 https://$host$request_uri;
    }
}

Since we’ve already specified the webroot plugin and /usr/share/nginx/html as the webroot path during the initial certificate request, we just need to make sure that nginx knows that requests to the .well-known directory should stay local instead of being redirected to HTTPS and proxied upstream. This ensures that we pass the domain validation challenge during certbot’s renewal.


Footnotes

[1] Based on Mozilla’s recommendations using the modern compatibility settings since I’m usually the only person accessing things and my systems are generally up-to-date.

[2] https://github.com/kappataumu/letsencrypt-cloudflare-hook
