Advent of Code 2025

Site status & format changes

  • Some users initially saw the site as down or flaky; others reported it working but with puzzles locked until release time.
  • This year has only 12 days of puzzles (still two parts per day). Many are relieved given December time pressures; a minority are disappointed but accept it as a necessary time saver for the author.
  • A few question calling it “Advent” when it ends mid‑month; others note it could have matched the “Twelve Days of Christmas” instead.

Global leaderboard removal & competition culture

  • The global leaderboard is gone; only private ones remain.
  • Cited reasons: infrastructure stress, users “taking it too seriously,” even DDoS attempts, and harmful comparisons that made many participants feel inadequate.
  • Many are glad: time zones made it unfair, it drove anxiety, and it drifted from the “cozy advent calendar” spirit.
  • Some miss it as a way to discover exceptionally skilled participants and interesting solution writeups.
  • Some criticize public “private” leaderboards with cash prizes for recreating a de facto global board, against the site’s stated guidance.

AI use and cheating

  • Official FAQ strongly discourages using AI to solve puzzles, likening it to sending a friend to the gym for you.
  • Many expect modern coding LLMs to trivially solve most problems and view their use in leaderboards as cheating.
  • Others see AoC as a good benchmark for comparing LLMs or for learning workflows (tests, iteration), but agree that claiming personal achievement would be dishonest.
  • Several report other contests (university competitions, online judges) being swamped by LLM‑assisted submissions, to the point that remote leaderboards are no longer meaningful.

Motivations: fun, learning, and dislike of “coding for fun”

  • A large contingent treats AoC as a festive tradition: a way to practice algorithms, learn or deepen a language, or enjoy problem‑solving with friends and communities on Reddit, Slack, Discord, etc.
  • Some use it explicitly as structured practice in new paradigms or languages, or for teaching students.
  • A vocal minority see no appeal in recreational coding and compare it to plumbers unblocking toilets for fun; others respond that many trades and arts have analogous hobby competitions and that deriving joy from work skills is normal.

Languages and tooling

  • Strong theme: AoC as an excuse to try “non‑mainstream” languages or a new one each year (e.g., Haskell, OCaml, Elixir, Clojure, Nim, Crystal, Julia, Prolog, Scheme, array languages like APL/BQN/Uiua, self‑designed languages, even Game Boy ASM or spreadsheets/Excel).
  • Many argue the best choice is whatever you know well or want to learn; others note that AoC’s heavy string‑and‑grid parsing favors dynamic, batteries‑included languages (Python, Ruby, JS); a short parsing sketch follows this list.
  • Some warn that minimalist or “batteries‑depleted” functional languages can be painful for beginners due to parsing and IO; others say building a personal utility library over years makes them great fits.
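
  As a rough illustration of the parsing point above, here is a minimal Python sketch of the kind of code a typical AoC day starts with: turn a block of text into a grid, then walk it. The '#'/'.' map and the use of complex numbers as 2‑D coordinates are assumptions for this example (a common community trick), not anything from a real puzzle.

      def parse_grid(text: str) -> dict[complex, str]:
          """Turn lines of characters into a sparse {coordinate: char} map.

          Complex numbers as coordinates keep neighbour math trivial:
          with x + y*1j, "one step right" is +1 and "one step down" is +1j.
          """
          return {
              complex(x, y): ch
              for y, line in enumerate(text.strip().splitlines())
              for x, ch in enumerate(line)
          }

      def neighbours(pos: complex):
          """Yield the four orthogonal neighbours of a cell."""
          for delta in (1, -1, 1j, -1j):
              yield pos + delta

      sample = "#..#\n.#..\n..#.\n"   # hypothetical input, not a real puzzle
      grid = parse_grid(sample)
      print(sum(ch == "#" for ch in grid.values()))  # -> 4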

Access, inputs, and technical quirks

  • Login requires an OAuth provider (GitHub, Google, Reddit, etc.); some object to relying on “BigCorp” accounts, while others defend it as a pragmatic anti‑abuse measure and suggest throwaway Reddit accounts.
  • The FAQ asks participants not to publish puzzle text or personal inputs. Inputs are partly randomized; enough leaked inputs could allow cloning the problem set. Workarounds include private submodules, git‑crypt, or runtime input downloaders (a minimal downloader sketch follows this list).
  • A few report a Day 1 issue where the site alternated between two input datasets, causing “that answer is correct for someone else” errors; one suggestion is embedding an input ID to detect mismatches.
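
  The “runtime input downloader” workaround mentioned above usually looks something like this sketch: fetch your personal input with your session cookie at run time and cache it outside version control, so nothing puzzle‑specific is committed. The endpoint path and cookie name reflect common community tooling; the AOC_SESSION variable, cache directory, and contact address are assumptions for this example.

      import os
      from pathlib import Path
      from urllib.request import Request, urlopen

      def fetch_input(year: int, day: int, cache_dir: str = ".aoc_cache") -> str:
          """Download one day's personal input, caching it in a git-ignored folder."""
          cache = Path(cache_dir) / f"{year}-day{day:02d}.txt"
          if cache.exists():
              return cache.read_text()

          req = Request(
              f"https://adventofcode.com/{year}/day/{day}/input",
              headers={
                  # Session cookie copied from the browser, kept in an
                  # environment variable rather than in the repository.
                  "Cookie": f"session={os.environ['AOC_SESSION']}",
                  "User-Agent": "personal aoc runner (contact: you@example.com)",
              },
          )
          with urlopen(req) as resp:
              text = resp.read().decode()

          cache.parent.mkdir(exist_ok=True)
          cache.write_text(text)
          return text

      # puzzle = fetch_input(2025, 1)

  Adding the cache directory to .gitignore (or protecting it with git‑crypt, as some suggest) keeps a solutions repository shareable without republishing inputs.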

Difficulty, accessibility, and “who AoC is for”

  • Debate over the FAQ claim that “a little programming knowledge” gets you “pretty far.”
  • Some insist many problems require knowledge of graphs, pathfinding, memoization, or discrete math beyond what casual coders have, and fear newcomers will be discouraged (an illustrative memoization snippet follows this list).
  • Others counter that while later days are hard, the early days plus partial completion already offer substantial learning, and that problems depend less on obscure prior theory than on general problem‑solving.
  • One perspective: AoC is a great “cozy festival,” but a poor formal competition (timezone dependence, relatively easy constraints, underspecification, and parsing quirks).
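
  For readers unsure what the cited techniques involve, a toy memoization example in Python (an illustrative counting problem, not taken from any puzzle): caching the results of a recursive function turns an exponential computation into a linear one, which is the kind of step several mid‑to‑late AoC days quietly expect.

      from functools import lru_cache

      @lru_cache(maxsize=None)
      def ways(n: int) -> int:
          """Count the ways to climb n stairs taking 1 or 2 steps at a time."""
          if n <= 1:
              return 1
          return ways(n - 1) + ways(n - 2)

      print(ways(40))  # 165580141, instant because each subproblem is solved once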

Private leaderboards and community

  • Numerous people run private boards with friends, coworkers, or chat communities; these are seen as fun, low‑stakes ways to compare times.
  • Some stress they “ignore the leaderboard” entirely and finish puzzles weeks or months later; stars and evolving ASCII art are enough motivation.
  • There’s disagreement over whether simply “ignoring” competitive features is psychologically realistic, and whether leaderboards subtly shape puzzle design.

Broader reflections

  • Some lament that AI + remote formats are undermining many competitions (coding and even school contests), leading to unverifiable leaderboards or withdrawn official rankings.
  • Others draw analogies to chess: engines vastly outplay humans, yet human‑only competition thrives with proper anti‑cheat; they see AoC’s shift away from a global race as a sensible adaptation.