I felt the urge to share my HomeLab with the world – so here’s a little write-up covering the hardware, services, and how it all came to be. Enjoy reading :)
I’m honestly not sure when I got started with all of this – probably 2021 (when I registered the HomeLab domain) or maybe 2022 (based on a timestamp of a photo I found in my gallery).
It all began with two Raspberry Pi 3B units. One was running Discord music bots, and the other had an external HDD and hosted a Nextcloud instance using NextcloudPi. Looking back, I’m kind of surprised I didn’t lose any data with that setup 🥸

My old HomeLab, hidden behind a cabinet
More stuff
2022 (?)
After learning about Home Assistant and Pi-hole, I wanted to self-host those too. I didn’t have any spare Pis left, so I repurposed an old Trekstor notebook and installed Debian on it.
I installed the services "bare metal" – directly on the OS – which, fun fact, isn’t even officially supported by Home Assistant anymore. The notebook got a USB Ethernet adapter and a permanently attached power cable. The battery stayed inside 🔥 At some point I realized that I couldn’t expose both Nextcloud and Home Assistant on the standard HTTP/HTTPS ports through the same public IP. I ignored this issue for way too long until I finally decided to set up a reverse proxy.
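The idea behind the reverse proxy: one machine listens on ports 80/443 and routes requests to the right backend based on the hostname. A minimal nginx sketch of that pattern – the hostnames, internal IPs, and ports here are made-up placeholders, not my actual setup, and TLS certificate directives are omitted for brevity:

```nginx
# Hypothetical reverse-proxy sketch: one public IP, two services,
# routed by server_name. All names and addresses are placeholders.
server {
    listen 80;
    server_name cloud.example.com;
    location / {
        proxy_pass http://192.168.1.10:8080;   # Nextcloud backend
        proxy_set_header Host $host;
    }
}
server {
    listen 80;
    server_name home.example.com;
    location / {
        proxy_pass http://192.168.1.11:8123;   # Home Assistant backend
        proxy_set_header Host $host;
        proxy_set_header Upgrade $http_upgrade;    # HA's UI uses WebSockets
        proxy_set_header Connection "upgrade";
    }
}
```

With this in place, a single port-forwarding rule for 80/443 is enough, no matter how many services sit behind the proxy.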
As more and more services joined my HomeLab, it became increasingly impractical to run them on weak standalone machines. So I decided to convert my old PC into a server.
The setup: Intel Core i7-3770, 20 GB DDR3 RAM, 2× 2 TB HDDs and 2× 240 GB SSDs, all grouped into pools. I chose Unraid as my OS – the demo seemed user-friendly and worked well for me. Migrating Home Assistant and Pi-hole was a breeze thanks to backups. NextcloudPi, however, gave me a hard time – I had to reformat my drives to BTRFS and eventually gave up trying to transfer it. I just reinstalled it from scratch and uploaded the data manually. Everything except Home Assistant is now running in Docker :)
Backups!
2024
Eventually I realized that the RAID on my server didn’t count as a real backup, so I started building a proper backup system. At first, I used Duplicati to push weekly backups to an old NAS I had lying around at home.
As the amount of critical data on the server grew, I figured it was time to add a second, off-site backup location. I managed to get another old NAS and set it up at my grandma’s place, including a permanent VPN connection to her network.
To save power, I configured a schedule to turn the NAS on only on Sundays for the backups. I also installed a remote power switch in case I ever need to boot it manually.
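One way to sketch such a schedule – assuming the wake-up is triggered from an always-on machine via Wake-on-LAN rather than the NAS's own power schedule, and with a placeholder MAC address and script path:

```
# Hypothetical crontab on an always-on box (e.g. the main server):
# wake the backup NAS every Sunday at 01:00, start the backup at 01:30.
0  1 * * 0   /usr/bin/wakeonlan aa:bb:cc:dd:ee:ff
30 1 * * 0   /usr/local/bin/run-backup.sh
```

The remote power switch covers the manual case; the cron entries cover the weekly one.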
Over time, that NAS became unbearably slow – booting up took half an hour. So in June 2025, I decided to build a new one using old parts from the attic. More on that in this Mastodon post :)
Another move
2025
With Windows 10 reaching end-of-life and Windows 11 having higher hardware requirements, I got lucky and snagged a PC with an i7-7700 :) While I was still waiting for that machine to arrive, my old network rack decided to just fall off the wall 🥸
Given the circumstances, I bought a larger rack that could hold both the server and a UPS.
First, I got the EFB WGB-1912GR60 and mounted the network hardware inside.

Someone forgot to order cage nuts?

Soon after, I migrated the new server into an Inter-Tech 3U-K-340L case (do NOT buy it – it’s awful!).
It was then mounted alongside this UPS.
Since I reused all drives from the previous system, all I had to do was change the IP address of the new server to match the old one.
Most of my services run in Docker containers within a shared virtual Docker network. External access is handled by Nginx Proxy Manager, which is reachable via a port forwarding rule on my FritzBox.
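The pattern looks roughly like this in Compose form – a hypothetical sketch, not my actual stack definition: every service joins one shared network, and only Nginx Proxy Manager publishes ports on the host.

```yaml
# Hypothetical docker-compose sketch of the layout described above.
# Image tags and the network name are assumptions.
services:
  npm:
    image: jc21/nginx-proxy-manager:latest
    ports:
      - "80:80"      # forwarded from the FritzBox
      - "443:443"
      - "81:81"      # admin UI, LAN only
    networks: [proxy]
  nextcloud:
    image: nextcloud:latest
    networks: [proxy]   # no published ports – reachable only via the proxy
networks:
  proxy:
    name: proxy
```

Because the containers share a network, the proxy can reach each service by its container name, and nothing except the proxy is exposed to the outside.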
Since I have a dynamic public IP, I use the FritzBox’s built-in DynDNS support in combination with my DNS provider. Whenever the IP changes, a specific DNS record is automatically updated. All other domains use CNAMEs pointing to that single record.
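As a zone-file sketch (placeholder names and IP, purely illustrative): one A record tracks the dynamic IP, and everything else is a CNAME pointing at it.

```
; Hypothetical zone fragment illustrating the pattern.
; dyndns.example.com is kept current by the FritzBox's DynDNS client;
; each service name is just an alias for it.
dyndns.example.com.   300  IN  A      203.0.113.42
cloud.example.com.    300  IN  CNAME  dyndns.example.com.
home.example.com.     300  IN  CNAME  dyndns.example.com.
```

When the IP changes, only the single A record needs updating – all the CNAMEs follow automatically.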