Why I stopped using JSON for my APIs
Reception of the article
- Some readers found the post confusing or “LLM‑ish”; others found it clear but unconvincing.
- Several argue you can’t reliably detect LLM use; what people are really reacting to is writing quality, not tooling.
JSON’s strengths and why it persists
- Human readability and “view with curl and a text editor” are seen as major advantages for debugging, onboarding, and working with poorly documented or quirky third‑party APIs.
- JSON is ubiquitous, built into browsers, trivial to parse in most languages, and easy to prototype with. For most teams, this low human cost outweighs the machine‑efficiency gains of binary formats.
- Many comment that compressed JSON (gzip/brotli/zstd) is “good enough” in size and speed for the vast majority of web APIs (a sizing sketch follows this list).
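The “good enough” claim is easy to sanity‑check against your own payloads with nothing but the standard library. A minimal sketch, with a made‑up record shape standing in for a real response body:

```python
import gzip
import json

# Hypothetical payload; substitute a representative response body.
payload = [{"id": i, "name": f"user-{i}", "active": i % 2 == 0} for i in range(1000)]

raw = json.dumps(payload, separators=(",", ":")).encode("utf-8")
compressed = gzip.compress(raw)

# Compare the sizes clients would actually see over the wire.
print(f"plain JSON:   {len(raw):>7} bytes")
print(f"gzipped JSON: {len(compressed):>7} bytes")
```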
Protobuf benefits and pain points
- Pros: static schema, strong typing, good codegen, smaller binary encoding, and easier backward‑compatible evolution when done carefully.
- Cons: schema management across repos, toolchains and versioning, awkward optional/required semantics (proto3 especially), and loss of human readability without extra tooling.
- Several note that Protobuf clients still must be defensive: proto3 removed required, so missing fields silently get default values (see the sketch after this list).
- Debugging and ad‑hoc inspection are harder; people often end up needing viewers, text protos, or JSON transcoding.
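To make the proto3 default‑value pitfall concrete, here is a sketch of defensive parsing. The schema and the generated user_pb2 module are hypothetical; running this requires protoc codegen and the protobuf package:

```python
# Hypothetical schema the generated module would come from:
#
#   syntax = "proto3";
#   message User {
#     string name = 1;            // no presence: "" when omitted
#     int32 age = 2;              // no presence: 0 when omitted
#     optional string email = 3;  // 'optional' restores presence tracking
#   }
from user_pb2 import User  # hypothetical generated module

def parse_user(raw: bytes) -> User:
    user = User.FromString(raw)
    # Plain proto3 scalars cannot distinguish "absent" from the zero value,
    # so the client has to treat defaults defensively.
    if not user.name:
        raise ValueError("name missing or empty (indistinguishable in proto3)")
    # Fields declared 'optional' get an explicit presence check back.
    if user.HasField("email"):
        print("email:", user.email)
    return user
```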
Validation, schemas, and contracts
- Many point out that “JSON vs Protobuf” is orthogonal to “untyped vs typed”: JSON plus libraries (serde, Pydantic, ajv, Zod, etc.) can enforce strict schemas and nullability just as Protobuf can.
- The “parse, don’t validate” pattern is raised: parse directly into tight types and fail early, regardless of format (sketched after this list).
- Version skew is a problem in any distributed system; robust CI and explicit versioning matter more than the wire format.
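“Parse, don’t validate” on the JSON side, as a minimal Pydantic v2 sketch; the Order model and payload are invented for illustration:

```python
from pydantic import BaseModel, ValidationError

class Order(BaseModel):
    # Tight types with explicit nullability, enforced at parse time.
    id: int
    sku: str
    quantity: int
    note: str | None = None

raw = '{"id": 42, "sku": "ABC-1", "quantity": "3"}'  # quantity arrives as a string

try:
    order = Order.model_validate_json(raw)  # parse straight into a typed object
except ValidationError as exc:
    # Fail early at the boundary instead of passing loose dicts around.
    raise SystemExit(f"rejected payload: {exc}")

print(order.quantity + 1)  # already an int; coerced and checked during parsing
```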
Performance, size, and compression
- Some report Protobuf or other binary formats losing to gzipped JSON in realistic benchmarks.
- Others care more about the CPU cost of (de)serialization; here Protobuf can help, but zero‑copy formats (FlatBuffers, Cap’n Proto) can be even faster.
- A number of commenters see Protobuf as premature optimization for typical CRUD APIs (tens of requests/sec, DB‑bound); a measurement sketch follows.
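Whether (de)serialization CPU is actually the bottleneck is cheap to measure before switching formats. A stdlib sketch with a made‑up payload:

```python
import json
import timeit

# Hypothetical payload; use a real response body from your service.
payload = [{"id": i, "name": f"user-{i}", "tags": ["a", "b"]} for i in range(1000)]
encoded = json.dumps(payload)

n = 200
encode_s = timeit.timeit(lambda: json.dumps(payload), number=n) / n
decode_s = timeit.timeit(lambda: json.loads(encoded), number=n) / n

# For a DB-bound CRUD endpoint at tens of requests/sec, these costs are
# usually dwarfed by query latency.
print(f"encode: {encode_s * 1e3:.3f} ms/op, decode: {decode_s * 1e3:.3f} ms/op")
```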
Alternatives and ecosystem gaps
- Alternatives mentioned: CBOR, MessagePack, BSON, Avro, ASN.1 (and its complexity), FlatBuffers, Cap’n Proto, Lite³, Erlang ETF, GraphQL, CUE, JSON Schema.
- CBOR and MessagePack are viewed as good “binary JSON” options; CBOR already underpins WebAuthn (round‑trip sketch after this list).
- Some argue ASN.1/DER or OER are more principled but tooling is poor; Protobuf is seen as the “worse but widely tooled” reinvention.
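Because CBOR is schema‑free and JSON‑shaped, switching is close to a drop‑in change. A minimal round‑trip sketch using the third‑party cbor2 package, with an invented record:

```python
import json

import cbor2  # third-party: pip install cbor2

record = {"id": 7, "name": "ada", "scores": [9.5, 8.0], "active": True}

as_json = json.dumps(record).encode("utf-8")
as_cbor = cbor2.dumps(record)

# Same data model as JSON, just a binary encoding on the wire.
print(f"JSON: {len(as_json)} bytes, CBOR: {len(as_cbor)} bytes")
assert cbor2.loads(as_cbor) == record  # lossless round trip
```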
Browser & tooling considerations
- Lack of first‑class Protobuf support in browsers is a common complaint; JSON is effectively native.
- Hybrid approaches are popular: gRPC‑Gateway, Envoy transcoding, Twirp, ConnectRPC, and Protobuf’s own JSON mapping, which lets one schema serve both binary and JSON APIs (see the sketch below).
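The JSON mapping ships with the protobuf package itself as google.protobuf.json_format. A sketch reusing the hypothetical user_pb2 module from earlier:

```python
from google.protobuf import json_format
from user_pb2 import User  # hypothetical generated module, as above

user = User(name="ada", age=36)

wire_binary = user.SerializeToString()       # compact binary for internal callers
wire_json = json_format.MessageToJson(user)  # canonical proto3 JSON for browsers

# Either representation parses back to the same message.
assert User.FromString(wire_binary) == user
assert json_format.Parse(wire_json, User()) == user
```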
When to use what
- Emerging consensus:
  - JSON + good schema/validation is ideal for public, heterogeneous, or early‑stage APIs.
  - Protobuf (or similar) makes more sense for high‑throughput, tightly controlled, internal systems where bandwidth/CPU and strict contracts matter.
  - For many teams, the operational and cognitive overhead of Protobuf outweighs its benefits.