On November 11, 2025, the Kubernetes community announced the end of Ingress NGINX. The project that had been the de facto standard for routing traffic to Kubernetes workloads for years is retiring in March 2026. No new releases, no bugfixes, no security patches. For me, this meant one thing: time to migrate.
## Why Ingress NGINX is Shutting Down
The story behind the retirement is actually quite sad. Despite its enormous popularity, the project was maintained for years by just one or two volunteers working in their spare time, on evenings and weekends. The flexibility that made the project so popular—like the ability to inject arbitrary NGINX configuration via snippet annotations—eventually came to be seen as a serious security risk. Technical debt piled up and became insurmountable.
The planned replacement, InGate, never progressed far enough to become a viable alternative and will also be retired. The Kubernetes SIG Network and Security Response Committee made the difficult but responsible decision to sunset the project rather than let it become a security liability.
## My Setup: Not Your Average Ingress
My home lab runs on Talos Linux with Cilium as the CNI. I had a dual-ingress setup that served me well for years:
- Private ingress: For internal services, accessible only on my home network via VLAN 11
- Public ingress: For services exposed to the internet via VLAN 12
Each ingress controller had its own IngressClass (nginx-private and nginx-public) and received its IP address through Cilium’s LB-IPAM. The public ingress was announced via BGP to my router, while the private ingress used L2 announcements on the internal VLAN.
This setup meant I couldn’t simply switch to Cilium’s built-in ingress controller, which assumes a single shared LoadBalancer service.
## Why Not Cilium’s Built-in Ingress?
Since I’m already running Cilium as my CNI, the obvious first choice was to use Cilium’s built-in ingress controller. I actually tested this about six months ago, and it didn’t work well for my setup.
The problem is architectural. Cilium’s ingress controller is designed for simpler deployments—one LoadBalancer service, one IP, done. My dual-ingress setup with separate private and public IPs on different VLANs, each with their own BGP advertisements or L2 announcements, just doesn’t fit that model.
Cilium’s Gateway API implementation has matured significantly since then (it’s now GA in Cilium 1.16+), and theoretically I could create multiple Gateway resources for each network path. But that would require a complete refactor of all my Ingress resources to HTTPRoute CRDs, and I’d still be fighting against the controller’s assumption of a single shared service.
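To give a sense of the refactor that would involve: every Ingress would become an HTTPRoute bound to a Gateway. A minimal sketch of what one route might look like — hypothetical names, and nothing I actually deployed:

```yaml
# Hypothetical Gateway API equivalent of a single Ingress rule.
# The Gateway "public-gateway" and the MinIO service names are assumptions.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: minio
  namespace: minio
spec:
  parentRefs:
    - name: public-gateway        # a Gateway owned by the controller
      namespace: traefik-public
  hostnames:
    - "minio.example.com"
  rules:
    - backendRefs:
        - name: minio             # backing Service
          port: 9000
```

Multiply that by every service in the cluster and the scale of the refactor becomes clear.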
Sometimes the tool you’re already using isn’t the right tool for the job. Cilium is an excellent CNI and handles my LB-IPAM, BGP peering, and L2 announcements flawlessly. But for ingress, I needed something designed for multi-instance deployments from the start.
## Why Traefik?
After evaluating the options, I landed on Traefik for several reasons:
- Active community: 50k+ GitHub stars, actively maintained, widely adopted
- True open source: No vendor lock-in concerns like with F5’s NGINX Ingress
- Ingress API compatibility: My existing Ingress resources would mostly work, just with different annotations
- Gateway API ready: When I eventually want to migrate to Gateway API, Traefik already supports it
- Fits my setup: Two Traefik deployments with separate IngressClasses work exactly like my NGINX setup
## The Migration

### Deploying Traefik
I deployed two separate Traefik instances via Helm, one for each ingress class:
```shell
helm repo add traefik https://traefik.github.io/charts

# Private ingress
helm install traefik-private traefik/traefik \
  -n traefik-private --create-namespace \
  -f traefik-values-private.yaml

# Public ingress
helm install traefik-public traefik/traefik \
  -n traefik-public --create-namespace \
  -f traefik-values-public.yaml
```
The key parts of the values files mirror my NGINX setup. For the private ingress:
```yaml
deployment:
  kind: DaemonSet
ingressClass:
  enabled: true
  isDefaultClass: false
  name: traefik-private
service:
  type: LoadBalancer
  externalTrafficPolicy: Local
  annotations:
    "io.cilium/lb-ipam-ips": "10.122.0.129"
```
For the public ingress, I just needed the BGP label instead of the IP annotation:
```yaml
service:
  type: LoadBalancer
  externalTrafficPolicy: Local
  labels:
    bgp: "65001"
```
My Cilium LB-IPAM pools and BGP configuration remained completely unchanged. I just had to update the namespace selector in my CiliumLoadBalancerIPPool from ingress-nginx-private to traefik-private.
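For reference, that change is a single selector line in the pool. A minimal sketch of what such a pool can look like — the pool name and CIDR here are placeholders, not my actual values:

```yaml
# Cilium LB-IPAM pool scoped to services in one namespace.
# Pool name and CIDR are hypothetical; only the selector label matters here.
apiVersion: cilium.io/v2alpha1
kind: CiliumLoadBalancerIPPool
metadata:
  name: private-pool
spec:
  blocks:
    - cidr: "10.122.0.128/28"
  serviceSelector:
    matchLabels:
      # Was "ingress-nginx-private" before the migration
      io.kubernetes.service.namespace: traefik-private
```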
### The Middleware Paradigm Shift
The biggest conceptual difference between NGINX Ingress and Traefik is how they handle advanced configuration. With NGINX, everything goes into annotations on your Ingress resource. With Traefik, complex configurations like headers, CORS, rate limiting, and redirects are defined in separate Middleware CRDs.
At first, this felt like extra overhead. But it actually makes sense:
- Reusability: Define a CORS policy once, use it on multiple Ingresses
- Separation of concerns: Routing logic stays in the Ingress, request processing in Middlewares
- Composability: Chain multiple middlewares together
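Chaining is just a comma-separated list on the router annotation, applied left to right. A sketch with hypothetical middleware names:

```yaml
metadata:
  annotations:
    # Middlewares run in order: security headers first, then rate limiting.
    # Names follow the <namespace>-<name>@kubernetescrd convention.
    traefik.ingress.kubernetes.io/router.middlewares: >-
      traefik-public-secure-headers@kubernetescrd,traefik-public-ratelimit@kubernetescrd
```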
For example, my MinIO ingress needed CORS headers. With NGINX:
```yaml
annotations:
  nginx.ingress.kubernetes.io/enable-cors: "true"
  nginx.ingress.kubernetes.io/cors-allow-origin: "https://example.com"
  nginx.ingress.kubernetes.io/cors-allow-headers: "Content-Type, authorization"
  nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
```
With Traefik, I created a Middleware:
```yaml
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: minio-cors
  namespace: traefik-public
spec:
  headers:
    accessControlAllowCredentials: true
    accessControlAllowHeaders:
      - Content-Type
      - authorization
    accessControlAllowOriginList:
      - "https://example.com"
```
And referenced it in the Ingress (the value follows Traefik’s `<namespace>-<name>@kubernetescrd` naming convention for CRD-defined middlewares):

```yaml
annotations:
  traefik.ingress.kubernetes.io/router.middlewares: traefik-public-minio-cors@kubernetescrd
```
I bundled all my middlewares in the Helm chart’s extraObjects section, so they’re deployed alongside Traefik itself.
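The chart’s `extraObjects` list accepts raw manifests, so the middlewares live right next to the values that deploy the controller. A sketch, with a hypothetical middleware:

```yaml
# traefik-values-public.yaml (excerpt) — middlewares deployed with the chart.
# The "secure-headers" middleware is an example, not one of my actual objects.
extraObjects:
  - apiVersion: traefik.io/v1alpha1
    kind: Middleware
    metadata:
      name: secure-headers
      namespace: traefik-public
    spec:
      headers:
        contentTypeNosniff: true
        customFrameOptionsValue: "SAMEORIGIN"
```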
### Handling SSL Passthrough
Some of my services, like OpenBao (the open source Vault fork), need SSL passthrough—the TLS connection goes straight through to the backend without Traefik terminating it.
With NGINX this was just two annotations:
```yaml
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
```
With Traefik, true passthrough means an IngressRouteTCP CRD with `tls.passthrough: true` instead of a standard Ingress. But I had a twist: I wanted Traefik to terminate TLS with a cert-manager certificate, then talk HTTPS to the backend (which has its own self-signed cert) — so what I actually needed was a plain IngressRoute.
The solution was an IngressRoute combined with a ServersTransport that skips certificate verification:
```yaml
apiVersion: traefik.io/v1alpha1
kind: ServersTransport
metadata:
  name: insecure-backend
  namespace: openbao
spec:
  insecureSkipVerify: true
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: openbao-tls
  namespace: openbao
spec:
  secretName: openbao-tls
  issuerRef:
    name: lets-encrypt-dns
    kind: ClusterIssuer
  dnsNames:
    - openbao.example.com
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: openbao
  namespace: openbao
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`openbao.example.com`)
      kind: Rule
      services:
        - name: openbao-active
          port: 8200
          scheme: https
          serversTransport: insecure-backend
  tls:
    secretName: openbao-tls
```
Note that cert-manager annotations don’t work on IngressRoute resources—you need to create the Certificate explicitly.
## What’s Actually Better
After completing the migration, I realized Traefik actually improved several things:
WebSocket support: Just works. No annotations needed. NGINX required explicit configuration.
No body size limit by default: Traefik has no limit on request body size unless you set one. NGINX defaults to 1MB, which I constantly had to override.
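If you do want a cap on upload size, Traefik’s buffering middleware can enforce one. A sketch, with a hypothetical name and limit:

```yaml
# Optional body-size limit via the buffering middleware.
# Name and threshold are examples, not part of my deployment.
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: limit-body
  namespace: traefik-public
spec:
  buffering:
    # Reject request bodies larger than 10 MiB
    maxRequestBodyBytes: 10485760
```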
Global HSTS: I added HSTS headers globally via a middleware on the websecure entrypoint. Every service gets it automatically.
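The global HSTS setup amounts to one headers middleware attached to the entrypoint. Roughly — the middleware name is mine, adjust to taste:

```yaml
# HSTS headers applied once, cluster-wide, via the websecure entrypoint.
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: hsts
  namespace: traefik-public
spec:
  headers:
    stsSeconds: 31536000        # one year
    stsIncludeSubdomains: true
    stsPreload: true
```

In recent versions of the Helm chart, setting `ports.websecure.middlewares: ["traefik-public-hsts@kubernetescrd"]` in the values file attaches it to every router on that entrypoint.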
Cleaner Ingress manifests: Without all those annotations, my Ingress resources are much more readable.
Better observability: Traefik’s dashboard gives a clear view of all routes, middlewares, and services.
## Lessons Learned
Don’t wait until the last minute. I had three months before the NGINX Ingress EOL, but migrations always take longer than expected. Start early.
Test in parallel. I ran both ingress controllers simultaneously, migrating services one by one. If something broke, I could quickly switch back to NGINX.
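With both controllers running, cutting a service over is a one-line change: flip `ingressClassName` and the other controller picks it up. A sketch with a hypothetical service:

```yaml
# Per-service cutover — only ingressClassName changes.
# "jellyfin" and its host/port are illustrative, not my actual manifest.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jellyfin
  namespace: media
spec:
  ingressClassName: traefik-private   # was: nginx-private
  rules:
    - host: jellyfin.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: jellyfin
                port:
                  number: 8096
```

Reverting is the same edit in the other direction, which is what made the parallel setup low-risk.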
Document your annotations. Before migrating, I inventoried every NGINX annotation I was using. This made it clear which middlewares I needed to create.
The Middleware pattern is actually good. My initial reaction was “great, more YAML to manage.” But having reusable, composable configuration objects is cleaner than copy-pasting annotations everywhere.
## Conclusion
What started as a forced migration turned out to be an improvement. Traefik is a solid, actively maintained ingress controller that fits my complex networking setup perfectly. The Middleware pattern took some getting used to, but it results in cleaner, more maintainable configuration.
If you’re still running community NGINX Ingress, don’t wait. March 2026 will be here before you know it, and running an ingress controller without security updates is not a risk worth taking.
The Kubernetes ecosystem keeps evolving. Sometimes that means saying goodbye to tools that served us well. But in this case, the alternative is better.