Systems Thinking
Requirements, Discovery, and Evolution
- Several comments argue requirements inevitably change or are only truly discovered through development; even fixed requirements are better understood over time.
- Others counter that in many domains requirements stabilize and users prefer minimal change, though “searching for the real requirements” is still the core of software work.
- Many see iterative delivery as the only realistic way to learn what’s actually needed; upfront omniscient specification is viewed as impossible.
Gall’s Law, Complexity, and Iteration
- Gall’s Law (“working complex systems evolve from simpler working systems”) is widely endorsed, tied to the second-system effect and the idea that multiple attempts (often more than two) are needed before a design “sticks.”
- Distinctions are drawn between “complicated” (mechanical, decomposable) and “complex” (nonlinear, emergent, hard to analyze in parts) systems. Supply chains and socio-technical systems are cited as complex.
- Some propose defining “complex” as systems with chaotic behavior that require active stabilization; high-performance designs often sit in this regime.
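The “chaotic behavior requiring active stabilization” idea can be made concrete with a toy example. This is a minimal sketch, not anything from the discussion: a system whose state drifts away exponentially on its own (x' = a·x with a > 0) is held near zero by a simple proportional feedback controller. All names and constants are illustrative.

```python
def simulate(a, k, x0, steps, dt=0.01):
    """Euler-integrate x' = a*x + u with proportional feedback u = -k*x."""
    x = x0
    for _ in range(steps):
        u = -k * x              # active stabilization: push back against the drift
        x += (a * x + u) * dt
    return x

# Without control (k = 0) the state blows up; with k > a it settles toward zero.
unstable = simulate(a=1.0, k=0.0, x0=0.1, steps=1000)
stabilized = simulate(a=1.0, k=5.0, x0=0.1, steps=1000)
```

The point of the analogy: such a system has no passive equilibrium worth speaking of; it only “works” while the feedback loop keeps running, which is one proposed dividing line between merely complicated and genuinely complex designs.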
Engineering Analogies and Their Limits
- Skyscraper/bridge analogies are heavily debated:
  - The pro‑engineering camp stresses design-first, the high cost of change, and the success of model-based systems engineering and V‑models in aerospace, etc.
  - Critics note software’s low construction cost and unknown/unstable requirements, and argue large systems are more like evolving cities than buildings.
- “You can’t upgrade a shed into a skyscraper” is used to illustrate that early architectural constraints can’t always be stretched; software often tries anyway and suffers.
Specifications vs. Implementation
- One thread predicts a shift toward spec-centric development, with AI and humans iterating on dense, high-level specifications and generating implementations on demand.
- Spec-first development already exists in some areas: network/hardware protocols, W3C standards, Apache Iceberg, and programming languages. Even so, prototypes and reference implementations are seen as essential for validating specs.
- Others warn that big specs often become “fiction” if written without tight feedback from implementers; a spec that never meets code is like a PR that never compiles.
- “Russian doll” specs (successive refinements, TLA+ style) are suggested as a promising pattern.
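The “Russian doll” idea can be illustrated without TLA+ itself. The sketch below is a hypothetical Python analogue of refinement checking: a level-1 spec states *what* must hold, a level-2 refinement adds *how*, and a refinement mapping projects the refined state back onto the abstract one so the two can be compared step by step.

```python
class AbstractCounter:
    """Level 1 spec: a value that starts at 0 and only ever grows."""
    def __init__(self):
        self.value = 0

    def inc(self):
        self.value += 1


class ShardedCounter:
    """Level 2 refinement: the same observable behavior, implemented as shards."""
    def __init__(self, shards=4):
        self.shards = [0] * shards
        self._next = 0

    def inc(self):
        self.shards[self._next] += 1
        self._next = (self._next + 1) % len(self.shards)

    def abstraction(self):
        # Refinement mapping: project the refined state onto the abstract state.
        return sum(self.shards)


# Check the refinement against the abstract spec after every step.
spec, impl = AbstractCounter(), ShardedCounter()
for _ in range(10):
    spec.inc()
    impl.inc()
    assert impl.abstraction() == spec.value
```

In TLA+ the same relationship is stated declaratively and model-checked over all behaviors rather than a sampled trace; the Python version only conveys the layering pattern the commenters are pointing at.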
AI, Malleability, and Code Volume
- Some argue LLMs increase software malleability and favor engineering-style upfront reasoning (e.g., generating tests, then implementations).
- Others worry chatbots only know how to add code, not minimize or aggressively delete it, which conflicts with the need for small, long‑lived codebases.
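The tests-then-implementation loop mentioned above can be sketched in a few lines. This is a hypothetical illustration (the `slugify` function is invented for the example): the tests are written first as an executable spec, and any candidate implementation, human- or model-generated, must pass them before it is accepted.

```python
def check_slugify(slugify):
    """The tests *are* the spec; any candidate implementation must pass them."""
    assert slugify("Hello World") == "hello-world"
    assert slugify("  spaced  out  ") == "spaced-out"
    assert slugify("already-slugged") == "already-slugged"


# A candidate implementation (hand-written here; in the workflow being
# described it might be generated, then regenerated until the spec passes).
def slugify(text):
    return "-".join(text.lower().split())


check_slugify(slugify)
```

The engineering-style claim is that fixing behavior up front like this gives a generation loop something objective to converge on, rather than letting the implementation define the spec after the fact.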
Process, Culture, and the Middle Ground
- Many reject the article’s binary of “evolution vs engineering”; real projects lie along multiple dimensions (risk, speed, novelty).
- Big‑upfront specs are widely reported to fail in practice (shifting requirements, integration surprises), yet some report success with modest, living design documents that front‑load hard questions.
- Several emphasize culture over process: continuous refactoring, technical-debt work, and local autonomy are seen as crucial but often blocked by compliance-heavy, ticket‑driven organizations.
Misuse of “Systems Thinking” and Overall Reception
- Multiple commenters say the article’s “systems thinking” is really about upfront design, not the broader discipline (feedbacks, whole‑system behavior, Conway’s Law, etc.).
- Reactions split: some praise the piece as capturing the pain of sprawling enterprise landscapes; others dismiss it as a thinly veiled defense of waterfall and an oversimplified dichotomy that ignores well-known hybrid approaches.