Garage – An S3 object store so reliable you can run it outside datacenters

Adoption and Alternatives

  • Many commenters are considering Garage as a MinIO replacement after the MinIO licensing “debacle.”
  • Other contenders repeatedly mentioned: SeaweedFS, RustFS, Ceph/Rook, Versity S3 Gateway, JuiceFS, Storj, and DigitalOcean’s custom S3 gateway.
  • SeaweedFS gets strong praise for performance and robustness, but is criticized for documentation quality and a 32 GiB object size limit.
  • RustFS is seen as early-stage and “underbaked,” with concerns about durability architecture and a licensing “rug-pull” mechanism.

Performance and Design Goals

  • Some testing shows Garage easier to deploy than MinIO but significantly slower at high throughput (e.g., ~5 Gbit/s vs. 20–25 Gbit/s on the same hardware).
  • Garage’s own docs state that top performance is not a goal; design favors simplicity and minimalism over maximum speed.
  • Users report good performance for local dev, data engineering workflows, and small/medium deployments.

Reliability Model, Replication, and Erasure Coding

  • Garage relies on replication (e.g., 3-way) rather than erasure coding; some see this as a major efficiency drawback, especially for large archival setups (like tape libraries). See the storage-overhead sketch after this list.
  • One commenter argues replication is reasonable given likely future storage price drops; others question the math and note storage prices haven’t fallen dramatically.
  • The authors reference Jepsen testing and a precise failure model: with 3 replicas, the cluster tolerates one “crashed” node (including metadata corruption); with two nodes down, data remains safe but unavailable.
  • Criticism: if all nodes lose power simultaneously (i.e., they share a fault domain), the guarantees become unclear; the documentation is seen as underspecified here.
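
To put the efficiency argument in concrete terms, here is a minimal sketch of raw bytes stored per logical byte under n-way replication versus a Reed-Solomon erasure-coding layout. The parameters are purely illustrative, not defaults taken from Garage or any other system.

    # Raw storage overhead: replication vs. erasure coding (illustrative numbers only).

    def replication_overhead(replicas: int) -> float:
        """n-way replication keeps n full copies of every byte."""
        return float(replicas)

    def erasure_overhead(data_shards: int, parity_shards: int) -> float:
        """A (k + m) Reed-Solomon layout stores (k + m) / k bytes per logical byte."""
        return (data_shards + parity_shards) / data_shards

    print(replication_overhead(3))   # 3.0   -> ~33% storage efficiency, survives 2 lost copies
    print(erasure_overhead(8, 3))    # 1.375 -> ~73% storage efficiency, survives 3 lost shards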

Metadata Storage, Power Loss, and KV Engines

  • By default, Garage uses LMDB for metadata, and its docs admit potential corruption after unclean shutdowns; they recommend robust filesystems (ZFS/Btrfs) and snapshots.
  • This alarms some, who expect WAL-style crash recovery akin to PostgreSQL. Others counter that many systems trust underlying storage similarly.
  • SQLite is supported and safer but slower; LMDB was chosen for its performance in multi-node setups.
  • The team is experimenting with alternatives (e.g., Fjall/LSM) and open to RocksDB, SlateDB, etc., but hasn’t found a perfect KV engine yet.
  • Broader discussion touches on consumer SSDs lying about fsync, PLP (power-loss protection) capacitors, and hardware vs. software guarantees; see the fsync sketch after this list.
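
For background on the fsync point above, this is a minimal sketch of the write-fsync-rename pattern that WAL-style recovery (PostgreSQL included) and any crash-safe metadata store ultimately rely on. If the drive acknowledges fsync before the data is truly persistent, no software layer above it can restore the guarantee. Paths and names here are illustrative.

    import os

    def durable_write(path: str, data: bytes) -> None:
        """Replace a file so a crash leaves either the old or the new contents,
        never a torn mix -- assuming the disk honors fsync (POSIX only)."""
        tmp = path + ".tmp"
        with open(tmp, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())           # push file data to stable storage
        os.replace(tmp, path)              # atomic rename within one filesystem
        dir_fd = os.open(os.path.dirname(path) or ".", os.O_DIRECTORY)
        try:
            os.fsync(dir_fd)               # persist the directory entry too
        finally:
            os.close(dir_fd)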

S3 Compatibility and Missing Features

  • Garage offers read-after-write consistency but not conditional writes (If-Match / If-None-Match), a consequence of its CRDT-based design; this breaks compatibility with tools like ZeroFS (see the example after this list).
  • Object tags are not implemented; some say tags are “table stakes” for cloud-style APIs.
  • A migrating MinIO user lists missing or weak features:
    • No lifecycle policies (e.g., “retain versions for 3 months”; see the lifecycle example after this list),
    • No automatic mirroring to other backends,
    • Limited ACLs (no sub-path keys, no global admin key),
    • Primitive static web hosting/CORS controls,
    • Inability to set/import arbitrary access keys directly.
  • These gaps make some workloads harder to migrate despite overall positive impressions.
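
For reference, this is the kind of conditional write the ZeroFS point above refers to. It is a sketch assuming a boto3/botocore version recent enough to expose the IfNoneMatch parameter on put_object; the endpoint and bucket names are made up. Against AWS S3 (and compatible stores that implement the condition) the second PUT fails with a 412 precondition error, whereas Garage does not implement it.

    import boto3
    from botocore.exceptions import ClientError

    # Hypothetical local endpoint and bucket; adjust to your deployment.
    s3 = boto3.client("s3", endpoint_url="http://localhost:3900")

    s3.put_object(Bucket="demo", Key="lock", Body=b"v1", IfNoneMatch="*")
    try:
        # Create-if-absent: stores with conditional-write support reject this
        # second attempt with 412 PreconditionFailed.
        s3.put_object(Bucket="demo", Key="lock", Body=b"v2", IfNoneMatch="*")
    except ClientError as e:
        print(e.response["Error"]["Code"])   # e.g. "PreconditionFailed"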
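
And this is roughly what the lifecycle policy listed as missing above (“retain versions for 3 months”) looks like through the standard S3 API. It is a sketch against a hypothetical bucket: the call works on AWS S3 and MinIO, but Garage has no equivalent endpoint.

    import boto3

    s3 = boto3.client("s3")  # or point endpoint_url at a MinIO deployment

    # Expire noncurrent object versions roughly 3 months (90 days) after they
    # are superseded; bucket and rule names are illustrative.
    s3.put_bucket_lifecycle_configuration(
        Bucket="backups",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "retain-versions-90-days",
                    "Status": "Enabled",
                    "Filter": {},
                    "NoncurrentVersionExpiration": {"NoncurrentDays": 90},
                }
            ]
        },
    )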

Use Cases and Practical Experiences

  • Positive reports for:
    • Local development S3 endpoints,
    • Hyper‑converged setups where local nodes can serve local data first,
    • Data engineering pipelines using S3 integrations and later scaling to cloud,
    • Quickly seeding large local mock datasets.
  • One user reports a crash when deleting ~300 uploaded documents; they restarted the container and question the “so reliable” claim.
  • There’s interest in bandwidth-limiting replication for multi-home/family distributed backup setups.

Comparisons to Other Systems

  • Garage vs. Syncthing: framed as different tools—Syncthing for file/folder sync, Garage as an S3 service for backups, web/media storage, etc.
  • Ceph/Rook: powerful, self-healing, but widely described as complex and RAM-hungry; some small deployments succeed, others end up in “death spirals” if mismanaged.
  • Some advise against Rook/Ceph if you only need S3; complexity and operational risk are viewed as high.

Ecosystem, UX, and Miscellany

  • Several users praise Garage’s single-binary deployment, Forgejo hosting, and documentation (though the “real-world” guide wording around corruption is seen as scary).
  • Deuxfleurs’ website is admired aesthetically but criticized for accessibility/readability in some environments.
  • A tangent debate covers Rust’s safety claims and ecosystem trust (Cargo dependencies vs. Debian-style vetting), with some skepticism toward over-marketing of Rust in competing projects’ docs.