A couple of days ago I wrote about migrating from MinIO to Garage. The short version: MinIO archived their community repo, Garage was the stable open-source alternative, and I moved on. Job done.
Except Garage always had one problem I couldn’t shake: the UI is functional but joyless, and performance never quite matched the snappiness I was used to from MinIO. I tolerated it because the alternative was running deprecated software.
Then I came across RustFS.
What is RustFS?
RustFS is a Rust-based, S3-compatible object storage server. On paper it looks a lot like MinIO — same API surface, similar Web UI, same deployment model. In practice it’s a clean-room rewrite with no MinIO code, under the Apache 2.0 license. No commercial upsell, no feature gating, no AGPL headaches.
It’s still in alpha (1.0.0-alpha.x at time of writing). That matters and I’ll get to it. But the moment I spun it up on my cluster I understood why people are excited about it.
It feels fast. The UI is responsive. The S3 API is snappy. It actually resembles the MinIO I was used to before everything went sideways.
Should You Use It? The Security Reality
Before diving into the setup, I want to be honest about the state of RustFS security, because there have been real issues.
CVE-2026-22043 (GHSA-xgr5-qc6w-vcg9) — A deny_only short-circuit in the IAM engine allowed a restricted service account to self-issue an unrestricted service account, inheriting root-level privileges. Serious. Patched in alpha.79.
CVE-2026-22042 (GHSA-vcwh-pff9-64cc) — The ImportIam API checked for ExportIAMAction instead of ImportIAMAction, letting export-only principals write IAM state. Patched in alpha.79.
GHSA-r54g-49rx-98cr — Credentials (access key, secret key, session token) were logged in plaintext at INFO level. Patched, but critically: if you run log_level: info, older builds will still do this. Set it to warn.
As of alpha.79 and above, all three are patched. I’m running alpha.90. If you’re on anything older, update first.
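A quick way to confirm which build you’re actually running (this assumes the release lives in a `rustfs` namespace; adjust to yours):

```bash
# Print the image tags of the running RustFS pods
kubectl get pods -n rustfs \
  -o jsonpath='{range .items[*]}{.spec.containers[*].image}{"\n"}{end}'
```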
The broader point: RustFS is moving fast and the security track record is improving. Keep your Renovate/auto-update pipeline tight and watch the security advisories.
My Setup
I run a single-node Talos Linux cluster with Cilium, ArgoCD, and Traefik as the ingress controller. Two ingress classes:
- `traefik-private` — internal DNS only, not exposed externally
- `traefik-public` — publicly accessible, fronts my CDN domains
For RustFS this maps perfectly: the console stays private, the S3 endpoint is public for my CDN use cases (cdn.sneekes.solutions, cdn.blackycamperbus.nl).
Storage is handled outside the chart — I pre-created PVCs bound to my NFS-backed storage class. Credentials live in an External Secret synced from my vault.
The Helm Chart
The upstream RustFS Helm chart is a decent starting point but needed work. I forked it and made it production-ready for my setup. Here’s what changed and why.
Split Ingress: Console vs Endpoint
This is the most important change. The upstream chart only has one ingress resource. I added ingressEndpoint as a separate resource pointing at port 9000, while the main ingress points at the console on port 9001.
```yaml
# Console — private only
ingress:
  enabled: true
  className: "traefik-private"
  traefikAnnotations:
    cert-manager.io/cluster-issuer: lets-encrypt-dns
    traefik.ingress.kubernetes.io/service.sticky.cookie: "true"
    traefik.ingress.kubernetes.io/service.sticky.cookie.name: rustfs
  hosts:
    - host: console.rustfs.ictq.xyz
      paths:
        - path: /
          pathType: Prefix
  tls:
    enabled: true
    existingSecret:
      enabled: true
      name: rustfs-tls

# S3 Endpoint — public CDN
ingressEndpoint:
  enabled: true
  className: "traefik-public"
  traefikAnnotations:
    cert-manager.io/cluster-issuer: lets-encrypt-dns
    traefik.ingress.kubernetes.io/service.sticky.cookie: "true"
    traefik.ingress.kubernetes.io/service.sticky.cookie.name: rustfs
  hosts:
    - host: cdn.sneekes.solutions
      paths:
        - path: /
          pathType: Prefix
    - host: cdn.blackycamperbus.nl
      paths:
        - path: /
          pathType: Prefix
  tls:
    enabled: true
    existingSecret:
      enabled: true
      name: rustfs-public-tls
```
This is defense in depth. The IAM privilege escalation bugs that existed in older RustFS versions required authenticated access. With the console not exposed publicly, the attack surface for that class of vulnerability shrinks significantly.
Sticky cookies matter for distributed mode too — RustFS needs session affinity for the console to function correctly across replicas.
Credentials via External Secret
Never inline credentials. In values-platform-prod.yaml:
```yaml
secret:
  existingSecret: "rustfs-secret"
```
The secret itself is managed by External Secrets Operator and contains RUSTFS_ACCESS_KEY and RUSTFS_SECRET_KEY. The chart picks these up via envFrom.secretRef.
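For reference, the ExternalSecret producing that secret looks roughly like this; the store name and remote paths below are placeholders from my setup, not something the chart dictates:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: rustfs
  namespace: rustfs
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: ClusterSecretStore
    name: vault                  # placeholder: your secret store
  target:
    name: rustfs-secret          # must match secret.existingSecret
  data:
    - secretKey: RUSTFS_ACCESS_KEY
      remoteRef:
        key: homelab/rustfs      # placeholder: path in the vault
        property: access_key
    - secretKey: RUSTFS_SECRET_KEY
      remoteRef:
        key: homelab/rustfs
        property: secret_key
```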
Log Level and Environment
Two settings I changed from the upstream defaults:
```yaml
config:
  rustfs:
    log_level: "warn"              # NOT info — avoids credential leakage
    obs_environment: "production"  # NOT development
```
`log_level: info` on older builds (pre-patch) would dump credentials to logs in plaintext. Even on patched versions, `warn` is the right default for production — you don’t want INFO noise in your log aggregator.
`obs_environment: development` enables extra debug output. Set it to `production`.
Existing PVCs
I manage storage outside the chart. The data PVC is backed by a larger NFS volume; the logs PVC is a small local-path volume.
```yaml
mode:
  standalone:
    enabled: true
    existingClaim:
      dataClaim: rustfs-data
      logsClaim: rustfs-logs
  distributed:
    enabled: false
```
The `storageclass` section in values is irrelevant when you use existing claims — the chart ignores it for standalone mode.
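For completeness, the pre-created claims are nothing exotic. A sketch, with storage class names and sizes as assumptions you’d swap for your own:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rustfs-data
  namespace: rustfs
spec:
  storageClassName: nfs-client   # assumption: your NFS-backed class
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Gi             # size is illustrative
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rustfs-logs
  namespace: rustfs
spec:
  storageClassName: local-path   # assumption: local-path provisioner
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5Gi
```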
Security Context
The upstream defaults are actually solid here, but worth calling out explicitly:
```yaml
podSecurityContext:
  fsGroup: 10001
  runAsUser: 10001
  runAsGroup: 10001

containerSecurityContext:
  capabilities:
    drop:
      - ALL
  readOnlyRootFilesystem: true
  allowPrivilegeEscalation: false
  runAsNonRoot: true
```
Non-root, read-only root filesystem, all capabilities dropped. This is the minimum for anything running in my cluster.
One additional change: `serviceAccount.automount: false`. RustFS has no reason to talk to the Kubernetes API.
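In values terms that is a two-line change (assuming the chart’s standard serviceAccount block; some charts spell this `automountServiceAccountToken` instead):

```yaml
serviceAccount:
  create: true      # assumed chart default
  automount: false  # no Kubernetes API token mounted into the pod
```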
Renovate
The chart includes a renovate.json which keeps the appVersion (and therefore the image tag) up to date automatically. Given how frequently RustFS releases new alpha versions — and how many of those releases include security fixes — automatic updates are not optional here.
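I won’t reproduce the chart’s exact renovate.json here, but a minimal custom manager that bumps `appVersion` from the image registry could look like this (the `rustfs/rustfs` Docker Hub path is an assumption):

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "customManagers": [
    {
      "customType": "regex",
      "fileMatch": ["Chart\\.yaml$"],
      "matchStrings": ["appVersion:\\s*\"?(?<currentValue>[^\"\\s]+)\"?"],
      "depNameTemplate": "rustfs/rustfs",
      "datasourceTemplate": "docker"
    }
  ]
}
```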
Migration from Garage
I ran both Garage and RustFS in parallel during migration. The process:
- Deploy RustFS with the new chart
- Create matching buckets in RustFS
- Use `mc mirror` to copy objects from Garage to RustFS:
```bash
# Add aliases
mc alias set garage http://garage.garage.svc.cluster.local:3900 <key> <secret>
mc alias set rustfs https://cdn.sneekes.solutions <key> <secret>

# Mirror each bucket
mc mirror garage/sneekes-photos rustfs/sneekes-photos
mc mirror garage/blackycamperbus rustfs/blackycamperbus
mc mirror garage/cdn-assets rustfs/cdn-assets
```
- Update application configs to point at the new endpoint
- Run a final incremental sync after switching (command below)
- Decommission Garage after a week of burn-in
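For the final sync in step 5, `mc mirror` is already incremental: re-running it copies only what’s missing, and `--overwrite` also refreshes objects that differ between the two sides:

```bash
# Re-run per bucket after the endpoint switch; only deltas are copied
mc mirror --overwrite garage/sneekes-photos rustfs/sneekes-photos
```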
CORS policies need to be reapplied manually. Garage and RustFS both support the standard S3 CORS API via awscli:
```bash
aws s3api put-bucket-cors \
  --endpoint-url https://cdn.sneekes.solutions \
  --bucket sneekes-photos \
  --cors-configuration file://cors.json
```
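The referenced cors.json is just a standard S3 CORS document. A minimal example for read-only CDN access (the origin here is hypothetical; list your actual sites):

```json
{
  "CORSRules": [
    {
      "AllowedOrigins": ["https://blackycamperbus.nl"],
      "AllowedMethods": ["GET", "HEAD"],
      "AllowedHeaders": ["*"],
      "MaxAgeSeconds": 3600
    }
  ]
}
```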
Comparing Garage and RustFS
After running both for a while, here’s my honest take:
| | Garage | RustFS |
|---|---|---|
| Maturity | Stable, production-ready | Alpha, moving fast |
| UI | Minimal, functional | MinIO-like, polished |
| Performance | Good | Noticeably faster feel |
| Security track record | Clean | Several patched CVEs |
| License | AGPL-3.0 | Apache 2.0 |
| Kubernetes support | Workable | Good, improving |
| Distributed mode | First-class | Supported |
| Auto-updates safe? | Yes | Yes, if on alpha.79+ |
Garage is the safer choice if you need something that just works without thinking about it. RustFS is the choice if you want something that feels alive, has a real UI, and you’re comfortable tracking security advisories.
For my homelab — CDN for photo blogs and personal projects — RustFS is the right call. For anything storing sensitive data at scale, I’d wait for a stable release.
What’s Next
RustFS is shipping new alpha versions frequently. The project has real momentum — 23k+ GitHub stars, active issue triage, and security issues being reported and patched rather than ignored.
I’m keeping an eye on:
- When they reach a stable `1.0.0` release
- The IAM engine maturing (it’s where most of the bugs have been)
- OTEL integration becoming production-ready (currently experimental)
For now: running well, Renovate keeps it current, and cdn.blackycamperbus.nl is serving photos without complaint.
Running Talos Linux with Kubernetes, Cilium, ArgoCD, and Traefik. All infrastructure managed via GitOps. Previous post: Migrating from MinIO to Garage.