Advent of Code 2024
AI, cheating, and the global leaderboard
- A 9‑second double‑star solve on Day 1 was traced to an AI‑generated solution (the author posted, then deleted, an apology), triggering debate about LLM “cheating.”
- Many argue LLMs make the public leaderboard meaningless: models read faster than humans and can be automated to fetch puzzles, generate code, run, and submit answers.
- Others say AI is now a normal tool (like Stack Overflow or autocomplete) and should either be allowed explicitly or moved to a separate AI leaderboard.
- Several liken AI use on the public board to aimbots in games or Stockfish in chess tournaments; others counter that programming isn’t inherently a sport and tools shouldn’t be forbidden.
- Some are impressed by the automation challenge itself (submission pipelines, benchmarking repeated runs of o1‑style models), but still see it as incompatible with the event’s spirit.
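The fetch step such pipelines start from is just an authenticated HTTP GET against adventofcode.com, using the `session` cookie from a logged‑in browser. A minimal sketch in Python's standard library (the function names and cookie value are illustrative, not from any tool discussed above):

```python
from urllib.request import Request, urlopen

BASE = "https://adventofcode.com"

def input_request(year, day, session):
    """Build the authenticated request for a day's puzzle input.

    AoC serves inputs at /<year>/day/<day>/input and identifies the
    user via the `session` cookie copied from a logged-in browser.
    """
    return Request(
        f"{BASE}/{year}/day/{day}/input",
        headers={"Cookie": f"session={session}"},
    )

def fetch_input(year, day, session):
    """Download and decode one day's input text."""
    with urlopen(input_request(year, day, session)) as resp:
        return resp.read().decode()
```

Splitting request construction from the network call keeps the URL/cookie logic testable without hitting the site.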
Competition vs. personal enjoyment
- Many participants say they ignore the global leaderboard due to time zones, cheaters, and extreme competition; they prefer private boards with friends or colleagues.
- A recurring pattern: people enjoy the first ~7–12 days, then puzzles become time‑consuming and stressful, leading to burnout or abandonment.
- Strategies include: setting per‑puzzle time limits, skipping hard days, finishing after December, or doing only first stars.
- Some view AoC as a fun tradition and a way to practice problem solving, not a career or productivity exercise; others advocate doing side projects instead for longer‑term benefit.
Learning, languages, and tooling
- A large contingent uses AoC to learn or practice languages: F#, Gleam, Rust, Go, Swift, Ada, SQL/SQLite, K/APL, Elixir, Lisp variants, Prolog, bash, Excel, Whitespace, custom languages, even NES/STM32 targets.
- Many build personal frameworks/CLIs, input parsers, grid/graph utilities, or benchmarking rigs; some note they over‑invest in frameworks instead of solving puzzles.
- AoC is contrasted with LeetCode: AoC is seen as more playful, story‑driven, and community‑oriented, with less emphasis on textbook algorithms and more on parsing and ad‑hoc problem solving.
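A typical example of the grid utilities mentioned above: a parser that maps coordinates to characters, so edge checks become plain dictionary lookups. The function names are hypothetical — one common shape for such a helper, not any specific commenter's framework:

```python
def parse_grid(text):
    """Parse puzzle text into a dict mapping (row, col) -> char.

    A dict rather than a list of lists makes out-of-bounds lookups
    safe: grid.get(pos) simply returns None past the edges.
    """
    return {
        (r, c): ch
        for r, line in enumerate(text.strip().splitlines())
        for c, ch in enumerate(line)
    }

def neighbors4(pos):
    """The four orthogonal neighbors of a cell."""
    r, c = pos
    return [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
```

Usage: `grid = parse_grid(open("input.txt").read())`, then `grid.get(n)` for each `n in neighbors4(pos)` needs no bounds checks.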
Difficulty, algorithms, and accessibility
- Disagreement over how “beginner‑friendly” AoC really is: some say you can get far with loops and brute force; others note recurring need for more advanced ideas (graphs, DP, CRT, linear algebra).
- Several stress that optimal algorithms are often not required for personal success; brute force plus patience works for many inputs.
- Site UX is widely criticized: a tiny, thin font, dark theme, and poor mobile support; people recommend browser reader modes, user CSS (Stylus), userscripts, or CLI tools to fetch and re‑render puzzles.
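Of the advanced ideas listed under difficulty above, the Chinese Remainder Theorem (CRT) is probably the least familiar to newcomers. A minimal incremental sketch for pairwise‑coprime moduli — a simple search‑based form chosen for clarity, not the fastest known method:

```python
def crt(congruences):
    """Combine congruences x = r_i (mod m_i), with pairwise-coprime
    moduli, into the smallest non-negative x satisfying all of them."""
    x, step = 0, 1
    for r, mod in congruences:
        # Advance x in steps that preserve all earlier congruences
        # until the new one is satisfied too.
        while x % mod != r % mod:
            x += step
        step *= mod
    return x
```

Each pass multiplies the step by the current modulus, so previously satisfied congruences stay satisfied while the search fixes the next one.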