If you’ve followed my blog for a while, you know that “good enough” is not a phrase I apply to home infrastructure. What started as wanting reliable home networking has evolved into a full homelab — a 5-node Kubernetes cluster running Talos Linux, a properly segmented Gigabit network throughout the entire house, redundant NAS storage, and a desktop build that edges uncomfortably close to “why though.” This post is a complete overview of the hardware stack — what it is, why I chose it, and how it all fits together.
The Network: UniFi All the Way Down
The network is the foundation of everything, and I went all-in on the Ubiquiti UniFi ecosystem. It’s opinionated, occasionally quirky, but it gives you enterprise-grade visibility and control at a price point that doesn’t require a procurement department.
Core: UDM Pro
The UniFi Dream Machine Pro is the brain of the operation. It handles routing, firewall, IDS/IPS, and serves as the controller for the entire UniFi estate. Running everything through a single pane of glass — including traffic analytics, client history, and network segmentation — is genuinely useful when you’re running a homelab with multiple VLANs, IoT devices, and a Kubernetes cluster that occasionally does something unexpected on the network.
The UDM Pro also terminates my WAN connection from Freedom.nl, which provides a symmetric 1 Gbps fiber connection. That alone would be overkill for most households, but when you’re self-hosting services and moving container images around, the upstream bandwidth actually gets used. On top of that, Freedom.nl provides a /29 IPv4 prefix, giving me a block of public IP addresses. This means I can route real public IPs directly into the network rather than relying solely on NAT — a significant advantage for self-hosted workloads and clean external access.
Switching
The switching layer is split by location and purpose:
- UniFi Switch 24 (US-24) lives in the meter cupboard (meterkast) and acts as the aggregation switch. Every room in the house runs a dedicated Gigabit UTP cable back to this switch. No WiFi backhaul, no powerline adapters — just copper, the way it should be.
- UniFi Switch Lite 16 PoE (USW-Lite-16-PoE) is dedicated to the home office, connecting the Kubernetes cluster nodes, the Synology units, and my workstation. Having a dedicated PoE switch here keeps the cluster traffic isolated and gives me clean power delivery for access points.
- 6× UniFi Switch Flex Mini — one in every room. These small 5-port switches terminate the structured cabling from the meter cupboard and provide local switching for TVs, set-top boxes, consoles, and anything else that lives in that room. They’re powered over the PoE uplink from the US-24, so no separate power bricks needed.
Wireless
Three UniFi U6 Pro access points cover the entire house. The U6 Pro supports WiFi 6 (802.11ax) with 4×4 MU-MIMO and handles the wireless clients — phones, laptops, tablets, and the assorted IoT devices that accumulate over time. With the wired backhaul to the US-24, there’s no mesh overhead — each AP is a full uplink to the switching core.
Network Segmentation: VLAN Everything
A flat network where every device can talk to every other device is a security nightmare. With a mix of Kubernetes cluster nodes, NAS units, IoT devices, IP cameras, kids’ devices, and work machines all under one roof, proper VLAN segmentation is non-negotiable. Every network segment has a purpose, and inter-VLAN traffic is controlled at the firewall on the UDM Pro.
Here’s the full breakdown:
| VLAN | ID | Subnet | Purpose |
|---|---|---|---|
| VLAN-1-unifi | 1 | 10.0.0.0/24 | Management — UniFi devices only |
| VLAN-12-Freedom | 12 | 45.138.55.88/29 | WAN — public /29 prefix from Freedom.nl |
| VLAN-4-WiFi-Edge | 4 | 10.4.0.0/24 | WiFi edge — general wireless access |
| VLAN-5-WiFi-IOT | 5 | 10.5.0.0/24 | IoT devices — smart home, sensors |
| VLAN-6-WiFi-Private | 6 | 10.6.0.0/24 | Private WiFi — trusted personal devices |
| VLAN-7-DMZ | 7 | 10.7.0.0/24 | DMZ — externally exposed services |
| VLAN-8-NAS | 8 | 172.16.0.0/24 | NAS storage — Synology units only |
| VLAN-30-Kids-Thimo | 30 | 192.168.30.0/24 | Per-kid VLAN — Thimo |
| VLAN-40-Kids-Desree | 40 | 192.168.40.0/24 | Per-kid VLAN — Desree |
| VLAN-50-Kids-AJ | 50 | 192.168.50.0/24 | Per-kid VLAN — AJ |
| VLAN-89-Isolated | 89 | 192.168.89.0/27 | Fully isolated — no inter-VLAN routing |
| VLAN-90-WiFi-Kids | 90 | 192.168.90.0/24 | Kids WiFi — shared wireless for kids |
| VLAN-91-WiFi-Guest | 91 | 192.168.91.0/24 | Guest WiFi — internet only, isolated |
| VLAN-101-Devices | 101 | 10.101.0.0/24 | Trusted devices — wired client machines |
| VLAN-102-Work | 102 | 10.102.0.0/24 | Work — work laptop, VPN traffic |
| VLAN-103-Private | 103 | 10.103.0.0/24 | Private wired — personal trusted segment |
| VLAN-104-Stream | 104 | 10.104.0.0/24 | Streaming — TVs, media players |
| VLAN-105-Ipcams | 105 | 10.105.0.0/24 | IP cameras — no internet access |
| VLAN-106-Private-Isolated | 106 | 10.106.0.0/24 | Private but isolated from other VLANs |
| VLAN-110-Games | 110 | 10.110.0.0/24 | Gaming — consoles, controlled internet |
| VLAN-120-k3s-native | 120 | 10.120.0.0/24 | Kubernetes node network |
| VLAN-121-k3s-lab | 121 | 10.121.0.0/24 | Kubernetes lab/workload network |
| VLAN-122-k3s-lab-dmz | 122 | 10.122.0.0/24 | Kubernetes DMZ — externally reachable services |
| VLAN-200-Untrusted | 200 | 10.200.0.0/24 | Untrusted — quarantine, unknown devices |
That’s 24 VLANs. A few things worth calling out:
Per-kid VLANs (30, 40, 50) — each kid gets their own network segment. This makes it trivial to apply per-person firewall rules, time-based internet access policies, and bandwidth limits without affecting anyone else. The UDM Pro’s Traffic Management handles scheduling. When homework time is over, internet access goes away. No arguments.
Kubernetes gets three VLANs — the node network (120), the workload/pod network (121), and a DMZ for services that need to be reachable from outside (122). Cilium handles BGP advertisement of service IPs back to the UDM Pro, so LoadBalancer services get real routable IPs from the appropriate subnet rather than relying on NodePort hacks.
IP cameras are air-gapped from the internet (105) — the G3 Bullet and G4 Doorbell sit on VLAN-105 with a firewall rule that blocks all outbound internet access. They can reach the UDM Pro for Protect, and nothing else. No phoning home to Ubiquiti’s cloud, no surprise firmware callbacks.
NAS on its own subnet (8) — the Synology units live on a dedicated 172.16.0.0/24 segment. Only the Kubernetes node VLAN and the private wired VLAN have firewall rules permitting access. Nothing else can reach the NAS directly.
DMZ (7) and k3s-lab-dmz (122) — two separate DMZ segments. VLAN-7 is for general externally exposed infrastructure. VLAN-122 is specifically for Kubernetes workloads that need to be reachable from the internet — ingress traffic hits the UDM Pro, gets routed to the DMZ VLAN, and Cilium handles the rest from there.
The firewall ruleset on the UDM Pro enforces all of this. Inter-VLAN routing is denied by default; explicit allow rules are added only where there’s a legitimate reason for two segments to communicate. It’s more upfront work, but the result is that a compromised IoT device or guest client is genuinely contained.
Cameras
A G3 Bullet handles outdoor coverage at one of the entry points, and a G4 Doorbell covers the front door — the latter doubling as an intercom. Both feed into Protect on the UDM Pro. Nothing exotic here; they do the job.
The Cluster: 5-Node Talos Linux on Intel NUCs
This is where it gets interesting. The Kubernetes cluster runs on Talos Linux — an immutable, API-driven OS designed specifically for Kubernetes. There’s no SSH, no package manager, no shell. Configuration is entirely declarative via talosctl and YAML machine configs. It’s a different mindset from traditional Linux administration, but once you’re in it, going back to mutable OS management feels messy.
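To give a flavour of the declarative model, here's a heavily trimmed sketch of a worker machine config. The structure follows the Talos v1alpha1 schema, but the hostname, addresses, and disk are illustrative rather than my actual values:

version: v1alpha1
machine:
  type: worker
  network:
    hostname: talos-worker-01            # illustrative
    interfaces:
      - interface: enp86s0
        dhcp: true
  install:
    disk: /dev/nvme0n1                   # the NUC's NVMe drive
cluster:
  controlPlane:
    endpoint: https://10.120.0.10:6443   # control plane endpoint (illustrative address)

You push it with talosctl apply-config --nodes <node-ip> --file worker.yaml, and that's the entire interaction model: there's no SSH session to drift away from the declared state.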
The cluster is 5 nodes: one control plane and four workers.
Control Plane: Intel NUC NUC11TNHI7
The control plane node is an Intel NUC Kit NUC11TNHI7, running an Intel Core i7 (11th gen) with a 2.5 Gbps NIC. The faster NIC is deliberate — the control plane handles etcd and cluster API traffic, and having the extra headroom on the network interface matters when you’re doing large state operations or rolling updates across the cluster.
Spec:
- CPU: Intel Core i7-1165G7 (11th gen)
- RAM: 16 GB DDR4
- Storage: 500 GB NVMe M.2
- NIC: 2.5 Gbps (onboard)
Worker Nodes: 4× Intel NUC NUC12WSKi3
The four worker nodes are Intel NUC Kit NUC12WSKi3 units, each running an Intel Core i3-1220P — a 12th gen Alder Lake chip with a P-core/E-core hybrid architecture. For Kubernetes workloads these are surprisingly capable: 10 cores (2P + 8E), reasonable power consumption, and they fit neatly in a rack-mounted NUC cradle.
Each worker spec:
- CPU: Intel Core i3-1220P (12th gen, 10-core)
- RAM: 16 GB DDR4
- Storage: 500 GB NVMe M.2
All five nodes are connected to the USW-Lite-16-PoE in the office, on a dedicated VLAN for cluster traffic.
What Runs on It
The cluster runs the full homelab stack: Cilium for CNI (with BGP for proper IP advertisement to the UDM Pro), ArgoCD for GitOps-based deployment, Traefik as the ingress controller, and cert-manager for TLS. Workloads include Jellyfin, various self-hosted utilities, monitoring via the Prometheus/Grafana stack, and a growing collection of personal projects — including the tooling I write about on this blog.
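As a flavour of the GitOps side, each workload is described by an ArgoCD Application pointing at a Git repo. The sketch below uses a placeholder repo URL, path, and namespace rather than my actual layout:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: jellyfin
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/homelab-gitops   # placeholder
    targetRevision: main
    path: apps/jellyfin                                   # placeholder path
  destination:
    server: https://kubernetes.default.svc
    namespace: media                                      # placeholder namespace
  syncPolicy:
    automated:
      prune: true      # delete resources that were removed from Git
      selfHeal: true   # revert manual changes back to the Git state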
Cilium BGP: Making Kubernetes Services Routable
One of the more satisfying parts of this setup is how Kubernetes LoadBalancer services get real, routable IP addresses — no MetalLB, no NodePort hacks, no externalIPs hardcoded in manifests. Cilium’s BGP Control Plane handles it, peering directly with the UDM Pro’s FRR router.
The Design
The cluster has two network-facing VLANs, each with its own purpose:
- VLAN-121 (k3s-lab) — 10.121.0.0/24 — internal cluster traffic and private services
- VLAN-122 (k3s-lab-dmz) — 10.122.0.0/24 — DMZ, externally reachable services
These map to two separate BGP ASNs:
| ASN | Role | Peers |
|---|---|---|
| 65001 | Private / internal services | 10.121.0.11, .12, .13 |
| 65002 | Public / DMZ services | 10.122.0.11, .12, .13 |
The UDM Pro runs FRR (Free Range Routing) at ASN 65000 and peers with both groups. The FRR config is straightforward — two peer groups, one per ASN, both with default-originate so cluster nodes always have a default route, and a permissive ALLOW-ALL route-map to keep things simple in a homelab context.
router bgp 65000
neighbor CI-65001 peer-group
neighbor CI-65001 remote-as 65001
neighbor CI-65001 default-originate
neighbor CI-65002 peer-group
neighbor CI-65002 remote-as 65002
neighbor CI-65002 default-originate
neighbor 10.121.0.11 peer-group CI-65001
neighbor 10.121.0.12 peer-group CI-65001
neighbor 10.121.0.13 peer-group CI-65001
neighbor 10.122.0.11 peer-group CI-65002
neighbor 10.122.0.12 peer-group CI-65002
neighbor 10.122.0.13 peer-group CI-65002
Cilium Side: BGP Cluster Config
On the Kubernetes side, a CiliumBGPClusterConfig defines the two BGP instances. Node selection is label-based — only nodes with public: "true" participate in BGP peering. The two instances run side by side on the same nodes, each with its own local ASN and peer config:
# CiliumBGPClusterConfig (apiVersion is cilium.io/v2 on newer Cilium releases;
# metadata and peer names are filled in here for completeness)
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPClusterConfig
metadata:
  name: cilium-bgp-cluster-config
spec:
  nodeSelector:
    matchLabels:
      public: "true"
  bgpInstances:
    - name: "instance-65001"
      localASN: 65001
      peers:
        - name: "udm-pro-vlan121"
          peerASN: 65000
          peerAddress: 10.121.0.1            # UDM Pro gateway on VLAN-121
          peerConfigRef:
            name: "router-peer-config-65001"
    - name: "instance-65002"
      localASN: 65002
      peers:
        - name: "udm-pro-vlan122"
          peerASN: 65000
          peerAddress: 10.122.0.1            # UDM Pro gateway on VLAN-122
          peerConfigRef:
            name: "router-peer-config-65002"
Both peer configs use identical timers (keepalive 10s, hold 30s) with graceful restart enabled — important for rolling Talos upgrades where nodes briefly drop out of the BGP session.
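For reference, one of the two peer configs looks roughly like this (the advertisement-selector label is illustrative, and on newer Cilium releases the apiVersion is cilium.io/v2):

apiVersion: cilium.io/v2alpha1
kind: CiliumBGPPeerConfig
metadata:
  name: router-peer-config-65001
spec:
  timers:
    keepAliveTimeSeconds: 10
    holdTimeSeconds: 30
  gracefulRestart:
    enabled: true
    restartTimeSeconds: 120              # how long routes are held while a node restarts
  families:
    - afi: ipv4
      safi: unicast
      advertisements:
        matchLabels:
          bgp: "65001"                   # selects the CiliumBGPAdvertisement resources below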
IP Pools and Advertisement
Service IP assignment is handled by CiliumLoadBalancerIPPool resources. There are four pools in total:
| Pool | VLAN | Purpose | Selection |
|---|---|---|---|
| dmz-public-ingress | 121 | Public ingress (Traefik public) | label bgp: 65001 |
| dmz-public-service-vlan12 | 122 | Public individual services | label bgp: 65002 |
| dmz-private-ingress-vlan11 | 122 | Private ingress (Traefik private) | namespace traefik-private |
| dmz-private-service-vlan11 | 122 | Private individual services | label cidr: private |
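A pool definition itself is short. Here's a sketch of what one looks like; the CIDR is a placeholder (the real public pool is carved from the Freedom.nl /29), and depending on the Cilium version the apiVersion may be cilium.io/v2:

apiVersion: cilium.io/v2alpha1
kind: CiliumLoadBalancerIPPool
metadata:
  name: dmz-public-ingress
spec:
  blocks:
    - cidr: 203.0.113.0/29               # placeholder: the real pool uses addresses from the Freedom.nl /29
  serviceSelector:
    matchLabels:
      bgp: "65001"                       # only services carrying this label draw from this pool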
Services opt into a specific pool by setting a label. For example, a service whose IP should be advertised via ASN 65001 gets the label bgp: "65001". A private service behind the internal Traefik instance gets cidr: private. The IP pools and advertisement CRDs handle the rest — Cilium assigns an IP from the matching pool and BGP-advertises it to the UDM Pro.
# CiliumBGPAdvertisement for ASN 65001 (metadata filled in for completeness;
# the label is what the peer config's advertisement selector matches)
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPAdvertisement
metadata:
  name: bgp-advertisement-65001
  labels:
    bgp: "65001"
spec:
  advertisements:
    - advertisementType: "Service"
      service:
        addresses:
          - LoadBalancerIP
      selector:
        matchExpressions:
          - { key: bgp, operator: In, values: [ "65001" ] }
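The consumer side is just a label on an ordinary Service. A minimal sketch, with illustrative names and ports:

apiVersion: v1
kind: Service
metadata:
  name: whoami
  labels:
    bgp: "65001"          # opts into the matching IP pool and BGP advertisement
spec:
  type: LoadBalancer
  selector:
    app: whoami
  ports:
    - port: 80
      targetPort: 8080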
L2 Announcement for Private Services
For the private VLAN segment, BGP is supplemented with CiliumL2AnnouncementPolicy. This handles ARP-level announcement on the VLAN-122 interface (enp86s0.122) for services on the private ingress and private service pools. Node selection again uses a label — nodes with the private label participate:
# CiliumL2AnnouncementPolicy (apiVersion/kind/metadata filled in for completeness)
apiVersion: cilium.io/v2alpha1
kind: CiliumL2AnnouncementPolicy
metadata:
  name: l2-announce-private
spec:
  nodeSelector:
    matchExpressions:
      - key: private
        operator: Exists
  interfaces:
    - enp86s0.122
  externalIPs: true
  loadBalancerIPs: true
The End Result
When you deploy a service to the cluster and label it appropriately, within seconds:
- Cilium’s IPAM assigns an IP from the matching pool
- The BGP advertisement is sent to the UDM Pro
- The UDM Pro installs the route in its routing table
- Traffic to that IP from anywhere on the network (or the internet, for public IPs) routes correctly to the cluster
No static routes. No manual IP management. No MetalLB. The cluster and the router stay in sync automatically, and rolling a node out for maintenance gracefully withdraws its routes before the node goes down.
The public IP range comes directly from the Freedom.nl /29 prefix — meaning externally reachable services on VLAN-122 get real public IPs, routed straight into the cluster without NAT.
Storage: Two Synology NAS Units
Persistent storage for the cluster comes from two Synology units, each playing a specific role.
DS214+
The DiskStation DS214+ provides 2 TB of storage over NFS. This is used for bulk workloads where you need shared access to files across multiple pods — think media libraries, archives, and anything where ReadWriteMany access is required.
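On the Kubernetes side this is a plain statically provisioned NFS PersistentVolume, roughly like the sketch below, where the server address, export path, and size are placeholders:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: media-nfs
spec:
  capacity:
    storage: 500Gi                      # placeholder size
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 172.16.0.10                 # DS214+ on the NAS VLAN (placeholder address)
    path: /volume1/media                # placeholder export path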
DS218+
The DiskStation DS218+ also runs 2 TB, but this one is integrated via the Synology CSI driver for Kubernetes. This gives proper dynamic provisioning of PersistentVolumes backed by Synology — each PVC gets its own volume, lifecycle management is handled through Kubernetes, and snapshots are available through the CSI interface. For stateful applications that need reliable, properly isolated storage, this is the right tool.
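For completeness, here's a sketch of the StorageClass side, assuming the upstream SynologyOpenSource/synology-csi driver (provisioner name csi.san.synology.com); the parameters vary by driver version, so treat them as illustrative:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: synology-iscsi
provisioner: csi.san.synology.com       # driver name from the upstream Synology CSI project
parameters:
  protocol: iscsi                       # illustrative; exact parameter names depend on driver version
  fsType: ext4
reclaimPolicy: Delete
allowVolumeExpansion: true

Any PVC that names this StorageClass gets its own volume on the DS218+, created and cleaned up through Kubernetes.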
Both units are connected directly to the office switch for low-latency storage access from the cluster nodes.
The Desktop: Absolutely No Chill
And then there’s the desktop. The cluster is humble and efficient. The desktop is neither of those things.
| Component | Spec |
|---|---|
| CPU | Intel Core i9-14900KF (24-core, 3.2 GHz base) |
| Cooler | be quiet! Dark Rock Pro 5 |
| Motherboard | Asus ProArt Z790-CREATOR WIFI (ATX, LGA1700) |
| RAM | 128 GB Corsair Vengeance DDR5-5200 CL40 (4× 32 GB) |
| Storage | Samsung 990 Pro 2 TB NVMe PCIe 4.0 |
| GPU | MSI GeForce RTX 5080 16 GB GAMING TRIO OC |
| Case | Corsair 5000D AIRFLOW |
| PSU | Corsair RM1000e 1000W 80+ Gold Fully Modular |
| Fans | 4× be quiet! Pure Wings 3 140mm |
The i9-14900KF is a 24-core (8P + 16E) beast with no iGPU — paired with an RTX 5080, there’s no reason to waste die space on integrated graphics. The 128 GB of DDR5 is excessive by any reasonable measure, but when you’re running local LLMs via Ollama, doing container builds, and keeping 30 browser tabs open simultaneously, it turns out memory pressure is a real thing.
The Asus ProArt Z790-CREATOR motherboard is an interesting choice — it’s aimed at content creators rather than pure gamers, which means it prioritizes connectivity: Thunderbolt 4, dual 2.5G LAN, and a clean BIOS that doesn’t try to auto-overclock things you didn’t ask it to overclock.
The RTX 5080 is the headline component. CUDA acceleration is useful for local AI workloads, and it handles anything graphics-related with room to spare. The Corsair 5000D AIRFLOW gives it enough breathing room to stay sane under load, supported by four be quiet! Pure Wings 3 140mm fans in a push-pull configuration.
The Corsair RM1000e 1000W PSU is rated for the task — an RTX 5080 plus a 14900KF under full load can draw a serious amount of power, and running a PSU near its limit continuously is not a great idea.
Putting It All Together
The whole setup is less exotic than it might sound when you lay it out like this. The network is structured cabling with a sensible UniFi stack — the kind of thing that’s boring to manage because it just works. The cluster is five small-form-factor NUCs running an OS that gets out of your way and lets Kubernetes be the focus. The NAS units cover two distinct storage use cases cleanly. And the desktop is a deliberate overspec that earns its keep when running local models, building container images, or doing anything that benefits from raw CPU and memory headroom.
Future posts will go deeper on specific parts of this setup — the Talos configuration, the Cilium BGP setup, the ArgoCD workflow, and the LiteLLM/OpenWebUI stack that runs on the cluster. Stay tuned.