MinIO was good while it lasted. A single binary, full S3 compatibility, runs anywhere — it was the obvious choice for self-hosted object storage for years. Then, over the course of 18 months, the company systematically dismantled everything that made it great for the community.

The timeline is not subtle: May 2025, the admin console was stripped from the community edition. October 2025, they stopped publishing Docker images and pre-built binaries. December 2025, the project entered “maintenance mode.” February 12, 2026, the GitHub repository was archived with six words in all caps: THIS REPOSITORY IS NO LONGER MAINTAINED.

60,000 GitHub stars. Over a billion Docker pulls. Quietly archived and pointed at a $96,000/year enterprise product. The company raised $126M in funding, built its valuation on community goodwill, then pulled the rug. It is a story as old as open source itself, but it still stings when it happens to infrastructure you depend on.

There is a community fork that restores the admin console and ships binaries via CI/CD. Technically it works. But betting your storage layer on a one-person fork of abandoned software is not a strategy — it only delays the inevitable. Time to move on properly.

Evaluating the Alternatives

The field of S3-compatible self-hosted storage is not enormous. The realistic options right now are:

SeaweedFS — written in Go, Apache 2.0 licensed, genuinely good performance. Designed for larger deployments with dedicated master and volume server roles. Solid choice if you need raw throughput, but more moving parts than I want for a homelab.

RustFS — newer, written in Rust, Apache 2.0. Positions itself as a high-performance replacement targeting data lakes and AI workloads. Promising, but relatively young and clearly aimed at enterprise use cases.

Garage — written in Rust, AGPLv3, built by a French non-profit called Deuxfleurs. Designed explicitly for small-to-medium self-hosted deployments, including geo-distributed setups running over consumer-grade internet connections. Funded through NLnet/NGI grants from the European Commission, not VC money.

For a homelab on Talos Linux, the choice was obvious. Garage is built for exactly this use case. It is not trying to compete with AWS at petabyte scale. It assumes you have a few nodes, possibly in different physical locations, possibly behind flaky internet connections, and it works with that reality instead of against it.

The AGPLv3 license is worth mentioning. For a homelab there are no legal concerns, and the license means Deuxfleurs cannot pull a MinIO-style bait and switch without forking their own project. The incentives are aligned differently when you are a non-profit funded by grants rather than a VC-backed company chasing ARR growth.

Architecture Overview

Garage runs as a distributed cluster where every node is equal — there is no dedicated master or coordinator. Data is split into partitions and replicated across nodes according to the layout you define. The layout assigns each node a zone and a capacity, and Garage uses that information to ensure replicas land on nodes in different zones when possible.

The key ports are:

Port   Purpose
3900   S3 API (what your apps talk to)
3901   RPC (internal cluster communication + CLI)
3902   Web endpoint (static website hosting from buckets)
3903   Admin HTTP API
One thing that trips people up: the garage CLI does not use the S3 or admin API. It speaks directly over the RPC port and requires both the full node ID (a 64-character hex string) and the RPC secret. This is by design — the RPC layer is the internal cluster bus, and the CLI is essentially another cluster peer.
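For context, everything the Helm chart configures maps onto Garage's plain garage.toml. A minimal single-node sketch of that file, with placeholder paths, domains, and secrets rather than values from my deployment:

```toml
# Minimal garage.toml sketch — illustrative placeholders, adjust for your setup
metadata_dir = "/var/lib/garage/meta"
data_dir = "/var/lib/garage/data"
db_engine = "sqlite"

replication_factor = 3

rpc_bind_addr = "[::]:3901"
rpc_secret = "<64-hex-char-secret>"

[s3_api]
s3_region = "garage"            # any string, but clients must match it
api_bind_addr = "[::]:3900"
root_domain = ".s3.example.com"

[s3_web]
bind_addr = "[::]:3902"
root_domain = ".web.example.com"
index = "index.html"

[admin]
api_bind_addr = "[::]:3903"
admin_token = "<admin-token>"
```

The same four ports show up here as bind addresses, which is a useful sanity check when firewalling or exposing services.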

Deploying on Kubernetes

I deployed using the official Helm chart as a 3-node StatefulSet. The relevant parts of my values file:

garage:
  garage:
    dbEngine: "sqlite"
    replicationFactor: "3"
    consistencyMode: "consistent"
    compressionLevel: "1"
    rpc:
      bindAddr: "[::]:3901"
      secret:
        create: false
        name: "garage-rpc-secret"
    admin:
      token:
        create: true
        secret:
          create: false
          name: "garage-admin-token-secret"
    s3:
      api:
        region: "ictq"
        rootDomain: ".s3.ictq.xyz"
      web:
        rootDomain: ".web.ictq.xyz"
        index: "index.html"

  persistence:
    meta:
      storageClass: nfs-01
      size: 100Mi
    data:
      storageClass: nfs-01
      size: 200Gi

  deployment:
    kind: StatefulSet
    replicaCount: 3

The RPC secret and admin token are pulled from Azure Key Vault via Secrets Store CSI — the Helm chart supports referencing existing Kubernetes secrets so nothing sensitive lives in the values file.
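For reference, the Secrets Store CSI side looks roughly like this — a SecretProviderClass that syncs a Key Vault secret into a regular Kubernetes Secret the chart can reference. Vault name, tenant ID, and the key name inside the Secret are assumptions here; the key has to match whatever your chart version expects:

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: garage-rpc-secret
spec:
  provider: azure
  parameters:
    keyvaultName: "my-key-vault"   # hypothetical vault name
    tenantId: "<tenant-id>"
    objects: |
      array:
        - |
          objectName: garage-rpc-secret
          objectType: secret
  secretObjects:                   # sync into a K8s Secret the chart references
    - secretName: garage-rpc-secret
      type: Opaque
      data:
        - objectName: garage-rpc-secret
          key: rpc-secret          # key name depends on the chart's expectations
```

Note that the sync only happens once a pod actually mounts the CSI volume, which is an easy gotcha on first deploy.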

For ingress, Traefik handles the S3 API endpoint with Let’s Encrypt certificates:

ingress:
  s3:
    api:
      enabled: true
      className: "traefik-public"
      annotations:
        cert-manager.io/cluster-issuer: lets-encrypt-dns
      hosts:
        - host: "cdn.sneekes.solutions"
          paths:
            - path: /
              pathType: Prefix

Connecting the CLI

Getting the CLI connected took more trial and error than it should have. Here is the clean version.

First, install the binary. Garage ships a single static binary — the same binary is both the server and the CLI:

wget https://garagehq.deuxfleurs.fr/_releases/v2.2.0/x86_64-unknown-linux-musl/garage
chmod +x garage
sudo mv garage /usr/local/bin/garage

To connect, you need the full node ID. If the CLI cannot reach the cluster yet, get it directly from the pod:

kubectl exec -it garage-0 -- /garage node id

Note the /garage path — the binary is not on the default PATH in the container.

Then set up your environment:

export GARAGE_RPC_HOST=<full-64-char-node-id>@<ip>:3901
export GARAGE_RPC_SECRET=<rpc_secret>
garage status

The RPC secret needs to be exactly 64 hex characters. If you get “odd number of digits”, you are missing one character — copy it again carefully.

If you deployed with a LoadBalancer service exposing the RPC port, you will likely hit intermittent “unexpected end of file” errors. This happens because the load balancer routes to different nodes across connections, and the CLI state does not survive node switches cleanly. The fix is to pin the admin service to a single pod:

# In your admin LoadBalancer service
selector:
  app.kubernetes.io/name: garage
  app.kubernetes.io/instance: garage
  statefulset.kubernetes.io/pod-name: garage-0  # Add this line

One line. Immediately more stable. The S3 API service can still load balance across all nodes — that is fine because S3 requests are stateless. The admin/RPC service is the one that needs to be pinned.
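Put together, a pinned admin/RPC Service looks roughly like this — a sketch assuming the chart's default labels and a hypothetical service name:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: garage-admin-rpc    # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: garage
    app.kubernetes.io/instance: garage
    statefulset.kubernetes.io/pod-name: garage-0  # pin to one pod
  ports:
    - name: rpc
      port: 3901
      targetPort: 3901
    - name: admin
      port: 3903
      targetPort: 3903
```

The trade-off is obvious: if garage-0 goes down, the admin endpoint goes with it until the pod is rescheduled. For a homelab that is acceptable; the data path is unaffected.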

Configuring the Cluster Layout

After deployment, nodes show up as healthy but with no role assigned. Garage will not store anything until you define the layout. This is intentional — Garage wants you to be explicit about capacity and placement before it starts making decisions about where to put data.

garage layout assign -z homelab -c 200G <node-id-0>
garage layout assign -z homelab -c 200G <node-id-1>
garage layout assign -z homelab -c 200G <node-id-2>

garage layout show
garage layout apply --version 1

The output tells you exactly what you are getting:

Optimal partition size:                     781.2 MB
Usable capacity / total cluster capacity:   600.0 GB / 600.0 GB (100.0%)
Effective capacity (replication factor 3):  200.0 GB

homelab             Partitions  Capacity   Usable capacity
  <node-0>          256         200.0 GB   200.0 GB (100%)
  <node-1>          256         200.0 GB   200.0 GB (100%)
  <node-2>          256         200.0 GB   200.0 GB (100%)
  TOTAL             768         600.0 GB   600.0 GB (100%)

With replication factor 3 across 3 nodes, every write goes to all 3 nodes. You get 200 GB of usable capacity from 600 GB raw, and any single node can go down without data loss.
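The arithmetic behind that output is worth making explicit. A quick sketch, assuming the 256-partition layout shown above (each partition replicated 3×, hence 768 partition replicas in total):

```python
# Sketch of the capacity math Garage reports for this layout.
# Assumes: 3 nodes at 200 GB each, replication factor 3, 256 partitions.

GB = 1000**3  # Garage reports decimal gigabytes

nodes = 3
capacity_per_node = 200 * GB
replication_factor = 3
partitions = 256

raw = nodes * capacity_per_node        # total cluster capacity: 600 GB
usable = raw // replication_factor     # every write lands on all 3 nodes
partition_size = usable / partitions   # "optimal partition size"

print(f"raw:       {raw / GB:.1f} GB")                    # 600.0 GB
print(f"usable:    {usable / GB:.1f} GB")                 # 200.0 GB
print(f"partition: {partition_size / 1000**2:.1f} MB")    # 781.2 MB
```

The numbers line up with what `garage layout show` printed, which is a nice confirmation that nothing in the layout is misconfigured.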

One honest caveat about my setup: all 3 nodes use NFS-backed PVCs pointing at the same NAS. The replication protects against individual pod or Kubernetes node failure, but if the NAS goes down, all 3 Garage nodes lose their data simultaneously. Garage’s geo-distribution features are designed to solve exactly this problem — you would run nodes on genuinely separate storage — but that requires physical separation that is a future improvement for my homelab. For now, regular off-site backups cover the gap.

Day-to-Day Management

For bucket and key management, the Garage UI is worth deploying alongside the cluster. It connects to both the S3 API and the admin API and gives you a reasonable web interface without needing to remember CLI syntax for everything. It is deployed as a separate container in the same Helm release and exposed via a private Traefik ingress.

Creating a bucket and access key from the CLI if you prefer:

# Create a bucket
garage bucket create my-bucket

# Create an access key
garage key create my-app-key

# Grant the key access to the bucket
garage bucket allow my-bucket --read --write --key my-app-key

# Get the credentials
garage key info my-app-key

Any S3-compatible tool works from there — aws s3, rclone, mc (the MinIO client still works fine against Garage), or any SDK. The region name needs to match what you configured (ictq in my case, but it can be any string).

What I Moved Over

The main workloads previously running against MinIO:

Photo storage for Blacky de Camperbus — the Astro-based photo blog uses S3 for image storage with imgproxy for on-the-fly resizing. Drop-in replacement, zero code changes needed beyond updating the endpoint and credentials.

Artifact and backup storage — various backup archives and build artifacts that needed an S3-compatible endpoint. Same story, just update the config.

Development and testing — local S3-compatible endpoint for applications under development. Garage is perfectly fine for this; the S3 API compatibility is solid.

The data migration itself was a single rclone command per bucket:

rclone sync minio:my-bucket garage:my-bucket --progress

Configure both remotes in ~/.config/rclone/rclone.conf with their respective endpoints and credentials, and rclone handles the rest. For large buckets, add --transfers 8 to parallelise the uploads.
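The two remotes in rclone.conf look roughly like this — endpoints here match my setup, but the keys are placeholders and the `provider` values are what I would expect rclone to want for each backend:

```ini
# ~/.config/rclone/rclone.conf — illustrative placeholders
[minio]
type = s3
provider = Minio
endpoint = https://minio.example.com
access_key_id = <minio-access-key>
secret_access_key = <minio-secret-key>

[garage]
type = s3
provider = Other
endpoint = https://cdn.sneekes.solutions
region = ictq
access_key_id = <garage-access-key>
secret_access_key = <garage-secret-key>
```

Run `rclone lsd garage:` first to confirm the Garage remote authenticates before kicking off the sync.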

Final Thoughts

Garage is not trying to be MinIO. It does not have MinIO’s raw performance ceiling, and it is not designed for petabyte-scale enterprise deployments. For a homelab — or any small self-hosted setup — it is a better fit precisely because it was never trying to become that.

It is lightweight, honest about what it is, maintained by people who are not trying to sell you a $244,032/year enterprise license, and funded by organisations that have a genuine interest in keeping self-hosted infrastructure viable. After watching MinIO spend 18 months systematically betraying the community that built its valuation, those things matter more than benchmark numbers.

The setup took an afternoon, most of which was spent on the RPC connection quirks documented above. It has been running in production ever since with no complaints.

If you are still on MinIO: the community fork buys you time, but Garage is where you should end up.


Running Garage v2.2.0 on Talos Linux with Cilium networking, deployed via Helm, managed with ArgoCD, fronted by Traefik.