Virt
An Opinionated Tech Blog

Home Lab: Setting up the Blog Backend


The previous blog post described the basic backbone of my home lab setup. I’ll break down the backend for this blog for you here, but mostly for me. Call it my personal documentation.

I’ll throw together what I remember from getting the container running and this blog up, more or less, as part of this ongoing series. Thankfully .bash_history doesn’t forget, or at least it remembers the last N commands, based on default settings I am not going to look up.

So let’s dive in.

Creating the Container #

Just to preface: I hate it when I read or hear the phrase LXC Container. LXC literally stands for Linux Container. I also shudder when I hear HIV Virus; I’m sure you already figured that one out. Right, where was I…

Let’s fire up a Debian 12 container on the server and name it proxy, why not:

$ lxc-create proxy -B zfs --template download -- \
    --dist debian --release bookworm --arch amd64
Using image from local cache
Unpacking the rootfs

---
You just created a Debian bookworm amd64 (20250222_05:24) container.

To enable SSH, run: apt install openssh-server
No default root or user password are set by LXC.

Now, to configure the IP, we need to start the container first and let it get a DHCP lease so we can change the assignment in the router’s web interface:

$ lxc-start proxy && sleep 5 && lxc-ls -f proxy
NAME  STATE   AUTOSTART GROUPS IPV4           IPV6                                UNPRIVILEGED
proxy RUNNING 1         -      192.168.44.154 fde0:d31d:2035:0:216:3eff:fea7:46c4 false

After pinning the new IP on the DHCP server/router, usually keyed to the container’s MAC address, restart the container to grab a fresh DHCP lease and that sweet new fake static IP:

$ lxc-stop proxy && lxc-start proxy && sleep 5 && lxc-ls -f proxy
NAME  STATE   AUTOSTART GROUPS IPV4          IPV6                                UNPRIVILEGED
proxy RUNNING 1         -      192.168.44.11 fde0:d31d:2035:0:216:3eff:fea7:46c4 false
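Alternatively, if your router can’t hand out DHCP reservations, you could pin the address inside the container instead. A sketch for Debian’s /etc/network/interfaces; the eth0 name and the 192.168.44.x addresses are assumptions, match them to your LAN (and note dns-nameservers needs the resolvconf package):

```text
# /etc/network/interfaces inside the container (hypothetical values)
auto eth0
iface eth0 inet static
    address 192.168.44.11/24
    gateway 192.168.44.1
    dns-nameservers 192.168.44.1
```

Restart networking or the container afterwards. The DHCP-reservation route is less fiddly, which is why I went with it.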

Now we need a way to log in. The easiest is to just use lxc-attach(1) on the container host, which is fine, but I prefer setting up SSH on the container and logging in directly from my laptop. You know, the fewer hops, the better.

To do so, set up a new user, SSH and sudo(8). This will become essential for one of the next blog posts on deployment¹.

On the container, install sudo and openssh-server, add a new user and add it to the sudo group:

apt install sudo openssh-server
adduser versable
usermod -aG sudo versable

The adduser(8) command should have prompted you for some basic information, including a password. On your laptop, or if you are fancy, your workstation, use ssh-copy-id(1) to transfer your public key. If you don’t have one, I don’t know why you are reading this; maybe check out nixCraft’s article first before continuing.

ssh-copy-id versable@192.168.44.11
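If you need to generate a keypair first, something like this does it. It writes into a scratch directory here so nothing in ~/.ssh gets clobbered; drop the -f to use the default path, and do set a passphrase instead of the empty -N '':

```shell
# Scratch dir for the demo; ssh-keygen defaults to ~/.ssh/id_ed25519
dir=$(mktemp -d)
# -N '' skips the passphrase for the demo; use a real one
ssh-keygen -q -t ed25519 -N '' -C 'laptop' -f "$dir/id_ed25519"
# Both halves should now exist; ssh-copy-id wants the .pub one
ls "$dir"
```

Then point ssh-copy-id at the .pub file with -i.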

Going forward, we will exclusively use sudo from your chosen user on the container instead of the root account; it may become clear why later. Also a great time to push Michael W. Lucas’s book Sudo Mastery (IT Mastery). Read it, I highly recommend it.

You should now be able to log in from your workstation to the container via SSH: ssh versable@192.168.44.11.

Once inside, it is time to prohibit password logins in /etc/ssh/sshd_config, so change the line:

#PasswordAuthentication yes

to

PasswordAuthentication no

and restart sshd(8):

sudo systemctl restart sshd
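If you’d rather not open an editor for a one-line change, sed(1) can flip the setting non-interactively. Demonstrated here on a scratch copy so nothing breaks; on the container you’d point it at /etc/ssh/sshd_config with sudo and restart sshd afterwards:

```shell
# Demo on a scratch file; swap "$cfg" for /etc/ssh/sshd_config (with sudo)
# to do it for real
cfg=$(mktemp)
printf '#PasswordAuthentication yes\n' > "$cfg"
# Uncomment the directive if needed and force it to "no"
sed -E -i 's/^#?PasswordAuthentication.*/PasswordAuthentication no/' "$cfg"
cat "$cfg"
```

Running sshd -t before the restart is a cheap sanity check that the config still parses.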

Time to get a HTTP server rolling.

Setting up NginX #

I installed NginX as I have severe PTSD from Apache and its Frankensteinian config syntax. Yeah, it’s been out there forever and is probably the most battle-tested HTTP server, but I am SO over it.

Choose what is right for you. I have been using NginX for a decade or so; the config syntax is C-inspired alright, but it also has its flaws. Though it still outperforms most other popular HTTP servers, including the shiny new Caddy Server.

Quickly jump into the container using your preferred method (SSH goddamn it) and install NginX:

sudo apt install nginx-light curl

Notice nginx-light: pretty much what it says on the tin, a bare-minimum NginX installation with the essentials.

The HTTP server should now be set up and running. We can quickly check with curl(1), which we obviously installed for exactly that reason inside the container:

# curl -I localhost
HTTP/1.1 200 OK
Server: nginx/1.22.1
Date: Wed, 26 Feb 2025 12:37:25 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Wed, 26 Feb 2025 12:36:58 GMT
Connection: keep-alive
ETag: "67bf0aea-267"
Accept-Ranges: bytes

Open the gates #

Before we open the floodgates to the internet, let’s set up some basic firewall rules using the iptables(8) wrapper ufw(8). Yes, iptables is horrible and should never have been birthed into existence, and yes, ufw is for lazy people who can’t be bothered to learn iptables properly:

sudo apt install -y ufw
sudo ufw allow ssh
sudo ufw allow http
sudo ufw allow https
sudo ufw enable

If you get kicked out of your SSH session, you messed up. You can always get back into the container on the host server with lxc-attach and disable ufw with sudo ufw disable.

Consider installing and using fail2ban to tighten up SSH. In theory you don’t need to, since we are not port forwarding SSH, and neither should you; use a VPN for keeping that shit locked inside.

Now open said floodgates on your router, or as I did on my Fritz!Box, by port forwarding HTTP/HTTPS to the container.

To test, figure out your public IP by just googling “What is my IP” and noting it down. Ha, got you. If you can’t do it over the CLI, it’s not worth doing at all². Get your public-facing IP with the following command:

curl ifconfig.co

Take out your phone, turn off WiFi, open your browser, and type in your IP address. Or just use curl like a normal human being.

If you see the default NginX page, you are all set.

Now a non-awkward way to reach it is needed, so let’s set up DNS next.

DNS Setup #

Did I mention that I am ~~a cheap bastard~~ economically resourceful? I didn’t go for a paid domain; instead I evaluated a couple of free subdomain providers with full DNS control and stumbled drunkenly upon deSec, a Germany-based DNS provider with a lot of awesome features. And being based in Germany, you know they have to comply with strict privacy laws, so I am a happy, privacy-aware trooper.

Definitely check them out; several of their features caught my eye.

If you have a static IP, register a managed DNS account; if not, sucks to be you, but a dynDNS account is fine as well.

I registered a free subdomain, virt.dedyn.io, and pointed the A record to my IP. Give it some time, drink a tea; for me, it worked, and I am reachable:

# ping -c 1 virt.dedyn.io
PING virt.dedyn.io (212.110.204.253) 56(84) bytes of data.
64 bytes from port-212-110-204-253.static.as20676.net (212.110.204.253): icmp_seq=1 ttl=63 time=3.02 ms

--- virt.dedyn.io ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 3.019/3.019/3.019/0.000 ms

If you lack basic knowledge about DNS records, do it the right way and consult the RFC. In this case, the most important one is the A record.

Get started here:

  • RFC 1035: Domain Names - Implementation and Specification
  • Wikipedia: List of DNS record types (including RFC references)

Time to get funky with an HTTPS certificate.

HTTPS Certificate #

We need to be nothing short of grateful for the folks at Let’s Encrypt.

Before this glorious non-profit organization of chads started to liberate the internet about 10 years ago, getting a useful TLS certificate from an authority cost more than hosting and a domain combined, forcing this ~~cheap bastard~~ economically adjusted gentleman to host his own sites unencrypted and open to snooping³.

I used Certbot for issuing the certificate, it’ll make your life easier with everything, including taxes.

Don’t follow the installation recommendation presented on their website. I trust the Electronic Frontier Foundation and endorse them, but here they are just wrong. No need for pip, npm or any other additional package manager that brings extra technical debt and lower sphincter issues. Just trust the Debian maintainers to do what they do best: maintaining Debian packages and avoiding snap packages like they have COVID⁴.

Certbot will do the following for you:

  1. ACME Challenge

    Certbot will, by default, use the HTTP-01 challenge to verify you are in control of your domain. It does so by placing a token in your web root under .well-known/acme-challenge/ and verifying it on their end. Check their documentation on challenge types.

  2. Certificate Issuance

    Once validated, Certbot fetches your TLS certificate and installs it for you. No need to handle CSRs, PEM files, or interact with shady certificate authorities.

  3. Configure Your Web Server

    Certbot modifies your server configuration to properly apply the new certificate with the following settings:

    • Redirecting HTTP to HTTPS: Automatically redirect all HTTP traffic to HTTPS.

    • TLS Configuration: Configure relatively strong security settings using TLSv1.2 and TLSv1.3, loosely based on Mozilla’s intermediate TLS recommendations.

    • Session security: ssl_session_cache, ssl_session_timeout, and ssl_session_tickets are all configured to avoid known attacks.

  4. Automatic Certificate Renewal

    Let’s Encrypt certificates expire every 90 days, but Certbot automates renewal via cron or systemd. When renewal time rolls around, Certbot checks if your cert is expiring soon, fetches a new one if needed, and reloads your web server to apply it.
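The HTTP-01 dance above is less magic than it sounds; it boils down to serving a token file from your web root. A toy illustration with made-up paths and token contents (Certbot handles all of this for you):

```shell
# Fake web root standing in for /var/www/html; the token value is made up
webroot=$(mktemp -d)
mkdir -p "$webroot/.well-known/acme-challenge"
# Certbot drops a token here; Let's Encrypt then fetches
# http://yourdomain/.well-known/acme-challenge/<token> to verify control
echo 'fake-token-contents' > "$webroot/.well-known/acme-challenge/fake-token"
cat "$webroot/.well-known/acme-challenge/fake-token"
```

The only real requirement is that your server answers on port 80 for that path, which the default NginX setup already does.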

So let’s get that done and install Certbot and its NginX extension inside the container:

sudo apt install -y certbot python3-certbot-nginx

Now issue a certificate for your domain with Certbot:

sudo certbot --nginx -d virt.dedyn.io

Fill out the form as truthfully as your moral compass allows, and check whether your domain now properly redirects to HTTPS:

DOMAIN=virt.dedyn.io && \
    echo http://$DOMAIN/; curl -I -L --silent http://$DOMAIN | \
    egrep -i '^Location' | sed 's/[Ll]ocation:/ ->/g'
http://virt.dedyn.io/
 -> https://virt.dedyn.io/

And finally, at least for TLS, run an SSL Server Test and make sure you get an A+, like a good boy.

We are not done yet by a long stretch, oh no, we are just getting started with the ridiculousness of HTTP security headers, which will be covered in the next blog post.

Footnotes #

  1. This blog post series focuses on deploying a self-hosted blog within a home lab, but the approach has been battle-tested across deployments of all sizes. You can easily adapt it for a basic VPS to host a static site. I may also write an addendum covering even more limited hosting setups that rely solely on (S)FTP access and .htaccess for configuration. I’ve been there, done that. Of course, this guide is geared toward static site deployment, so it doesn’t cover the complexities of large-scale dynamic deployments. ↩︎

  2. Klickibunti ↩︎

  3. Privacy back then was a luxury, not a given. ISPs abused this all the time by injecting all types of crap into intercepted unencrypted websites. Long story short, over 500 million certificates have been issued by Let’s Encrypt, completely free of charge. You should donate, I did, a long time ago. ↩︎

  4. All jokes aside: The KISS principle applies here. If you manage a fleet of servers for updates, you don’t want to handle a plethora of package managers such as pip or npm and you certainly don’t want to vet them for security. Debian packages a lot of these in their repositories and meticulously checks them. The downside is that most of these are outdated and may lack newer features, but that’s negligible considering the stability, security and maintainability you get in return. ↩︎