Dijkstra: On the foolishness of "natural language programming"
Overall reaction to Dijkstra’s essay
- Many readers find the piece strikingly clear, beautiful, and still relevant decades later.
- Some note his complaint about declining mastery of natural language feels timeless.
- One clarification: the text dates from 1978; the transcription is from 2010.
Natural vs formal languages
- Strong agreement that forcing ideas into a formal language (math or code) improves thinking, exposes ambiguities, and reveals flawed requirements.
- Several note that natural language is especially bad at specifying rules and procedures; as examples, mathematics, programming, aviation weather codes, and legal drafting have all evolved specialized, semi-formal notations.
- Others counter that real-world features begin in someone’s head and must first be expressed in natural language; translation to formal systems is unavoidable, so tools that help this step are attractive.
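A toy sketch of the "formalization exposes ambiguity" point above (the scenario and names are illustrative, not taken from the thread): the English requirement "discount orders over $100 by 10%" leaves open whether exactly $100 qualifies, whether the discount applies to the whole total or only the excess, and how to round. Writing it down forces each choice explicitly:

```rust
// Hypothetical example: encoding an ambiguous English spec forces decisions
// the prose never made. Each comment marks one such forced decision.
fn discounted_total(total_cents: u64) -> u64 {
    const THRESHOLD_CENTS: u64 = 100_00;
    // Decision 1: "over $100" is strict -- exactly $100.00 gets no discount.
    if total_cents > THRESHOLD_CENTS {
        // Decision 2: the 10% applies to the entire total, not just the excess.
        // Decision 3: integer arithmetic in cents, truncating toward zero,
        // sidesteps the floating-point rounding question entirely.
        total_cents - total_cents / 10
    } else {
        total_cents
    }
}

fn main() {
    assert_eq!(discounted_total(100_00), 100_00); // exactly $100: no discount
    assert_eq!(discounted_total(150_00), 135_00); // $150 -> $135
    println!("ok");
}
```

Each of the three decisions is invisible in the English sentence but unavoidable in the code, which is exactly the ambiguity-exposing effect the commenters describe.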
LLMs and “natural language programming”
- Critics fear overreliance on LLMs will erode competence and produce large, messy, hard-to-maintain codebases.
- Multiple comments liken “vibe coding” with LLMs to earlier “no-code” / flowchart / UML / BPML / AppleScript waves: appealing demos that fail to scale.
- Some argue LLMs are best seen as translators or assistants: turning specs into code, explaining code, navigating big codebases, generating docs and tests, or helping refactor—provided an experienced human keeps architectural control.
- There is a long, unresolved debate about whether a "sufficiently descriptive" natural-language spec could, given a "sufficiently powerful" model, fully determine a program; skeptics emphasize irreducible ambiguity and non-reproducibility, while proponents say it is theoretically possible but not here yet.
Thought, representation, and programming
- Extended subthread on whether ideas exist pre-language: some claim the act of verbalizing creates the idea; others point to non-verbal thinking, inner “shapes,” and graph-like mental models of code.
- Several describe programming as manipulating abstract structures rather than composing sentences, and imagine better structural GUIs or AR interfaces—but still grounded in formal semantics.
Types, errors, and formal methods
- Dijkstra’s praise of languages that turn “silly mistakes” into compiler errors is linked to static and strong typing.
- Discussion of Rust, Scala, etc. highlights a trade-off: rich type systems catch more errors up front but can hinder rapid, exploratory work.
- Dijkstra’s advocacy of proving correctness (predicate transformer semantics, A Discipline of Programming) is mentioned as another largely ignored but relevant strand.
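A minimal sketch of the typing point above, in Rust since the thread mentions it (the `Meters`/`Feet` names and the altitude scenario are illustrative, not from the discussion): wrapping raw numbers in distinct newtypes turns a units mix-up, the kind of "silly mistake" Dijkstra wanted machines to reject, into a compile-time error rather than a silent runtime bug.

```rust
// Sketch: distinct newtypes make a units confusion a compiler error.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Meters(f64);

#[derive(Debug, Clone, Copy, PartialEq)]
struct Feet(f64);

// The signature states the contract: both arguments must be in meters.
fn altitude_margin(ceiling: Meters, obstacle: Meters) -> Meters {
    Meters(ceiling.0 - obstacle.0)
}

fn main() {
    let ceiling = Meters(1200.0);
    let obstacle = Feet(900.0);

    // altitude_margin(ceiling, obstacle);
    // ^ rejected at compile time: expected `Meters`, found `Feet`.
    //   The "silly mistake" never reaches a running program.

    // The conversion must be written out explicitly (1 ft = 0.3048 m):
    let margin = altitude_margin(ceiling, Meters(obstacle.0 * 0.3048));
    println!("margin: {:?}", margin);
}
```

The same program with bare `f64` values would compile and quietly compute a wrong margin; the newtype wrapper is what moves the error from runtime to the compiler, which is the link commenters draw between Dijkstra's remark and static typing.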
Domain and legal languages
- Several note that each business domain already has a quasi-formal jargon; Domain-Driven Design’s “ubiquitous language” is cited as formalizing that.
- Legal language and mathematical notation are given as examples where natural language has been sharpened into a more constrained, semi-formal code; some suggest more symbolic legal notation to reduce ambiguity.
Historical patterns and expectations
- Commenters list recurring “simple” ideas that don’t scale (flowcharts, weak typing, constraint-based layouts, naive dependency management, many no‑code platforms), suggesting we should document their failure modes rather than repeatedly rediscover them.
- Others caution that some once-dismissed ideas (e.g., neural networks) eventually worked when constraints changed, so a “Hall of Definitely Bad Ideas” should instead catalogue hard problems and past pitfalls.
High-level synthesis
- A recurring theme: requirements vs implementation. Natural language is suited to stating goals and motivations; formal languages are needed for precise, reproducible behavior.
- Many see LLMs as potentially powerful tools to move between these layers, but not as replacements for the discipline of formal reasoning that Dijkstra defends.