This week in 1988, Robert Morris unleashed his eponymous worm

Date and article accuracy

  • Commenters note confusion between 1988 and 1998, and between Nov 2 and Nov 4; the consensus is that the worm was released on Nov 2, 1988, and that the HN title/article timing is just editorial sloppiness.
  • Some suggest updating Wikipedia from primary/secondary reports linked in the thread.

Morris, background, privilege, and career

  • Many are struck that, after a felony conviction, he still finished a PhD at an elite university and later became faculty at the institution whose network he had used to mask the worm's origin.
  • Several point to his father’s senior NSA role and long security pedigree, suggesting this likely smoothed outcomes; others argue the sentence was in line with how early computer crimes were handled.
  • His later academic work (e.g., distributed systems, routing, DHTs) is portrayed as genuinely top-tier, and some say that alone explains his academic trajectory.

Intent, ethics, and legal consequences

  • Debate over whether the worm was “harmless research gone wrong” vs a knowingly reckless attempt to gain unauthorized access to every Internet host.
  • Some emphasize that even at the time, unleashing self-replicating code on others’ systems without consent was clearly unethical among technically literate people.
  • Outcome: felony conviction, probation, and fine; some think this was lenient given the scale, others say it matched norms for non-financial computer crime then.

Impact on security culture and technical lessons

  • Thread highlights how the worm pushed a shift from “trust users” to “trust mechanisms,” and helped people internalize that buffer overflows are exploitable, not just crash bugs.
  • Later work on stack overflows and widely publicized exploits is described as a second wave that finally made industry take memory safety seriously.
  • Discussion of the specific exploit vectors: the sendmail DEBUG command and a gets()-based stack buffer overflow in fingerd.
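
The fingerd vector discussed above is the classic unbounded-copy bug: gets() (since removed from the C standard in C11) read a request line into a fixed 512-byte stack buffer with no length check, so an oversized request overwrote the return address. A minimal sketch of the flaw class and its bounded replacement (names are illustrative, not the original fingerd source):

```cpp
#include <cassert>
#include <cstdio>
#include <string>

// Bounded replacement for the gets()-style read: snprintf never writes
// past sizeof line and always NUL-terminates, so an oversized request
// is truncated instead of overrunning the stack buffer.
std::string read_request(const char* wire_input) {
    char line[512];
    snprintf(line, sizeof line, "%s", wire_input);
    return std::string(line);
}
```

A 600-byte input simply comes back truncated to 511 characters; with gets(), the same input would have kept writing past the end of `line`.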

Why we see fewer similar worms

  • Reasons given: more secure defaults (firewalls, fewer exposed services), fewer trivial RCEs, OS hardening initiatives, and a shift toward scams/social engineering rather than blind worms.
  • Others note that large-scale self-spreading systems still exist (botnets, IoT malware) but are quieter, more financially driven, and often target very weak devices.

Firsthand accounts and historical context

  • Multiple posters recall the day: university networks crawling, machines repeatedly reinfected, admins yanking sendmail, or even entire countries temporarily disconnecting from the Internet.
  • Several reminisce about the much smaller, slower, research-focused Internet and the relative informality around “computer crime” compared to later decades.

Myths, numbers, and narratives

  • The famous “10% of the Internet” statistic is called out as essentially invented at the time based on a rough host-count guess.
  • Some dispute claims that the worm was the turning point for security culture, pointing to earlier hacker culture, phreaking, and publications; they see it as one major milestone among others.

Language safety and ongoing vulnerabilities

  • Commenters connect the worm’s exploits to C’s unsafe string APIs; they note that many newer languages (and older systems languages outside the C lineage) avoid these issues by design.
  • Despite decades of lessons, commenters give examples of modern C/C++ projects still replicating gets()-style patterns, reinforcing why memory-safe languages (and constructs like slices and spans) matter.
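
The slices/spans point is that the pointer and its length travel together, so a callee cannot "forget" the bound the way a bare char* invites. A minimal slice type illustrating the std::span idea (the struct and function names are hypothetical):

```cpp
#include <cassert>
#include <cstddef>

// Minimal span-like type: the pointer carries its length with it, so
// every loop over the data has an explicit, unforgeable bound.
struct IntSlice {
    const int* data;
    std::size_t len;
};

std::size_t count_nonzero(IntSlice s) {
    std::size_t n = 0;
    for (std::size_t i = 0; i < s.len; ++i)  // loop bounded by the carried length
        if (s.data[i] != 0) ++n;
    return n;
}
```

With a bare `const int*` parameter, the length would have to be passed (and trusted) separately at every call site; the slice makes the bound part of the type.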