A visual introduction to big O notation
Overall reception & visuals
- Many commenters found the article clear and engaging, and especially praised the interactive visualizations and animations as a great way to internalize how running time grows with input size.
- Several experienced programmers said it worked well as a refresher and would have been helpful in their university days.
- A few readers with attention issues found the layout harder to follow and preferred more structured, textbook-style formatting.
Big O definition: math vs “industry shorthand”
- A major thread challenges the article’s claim that Big O “always” describes worst-case performance; commenters stress Big O is just asymptotic notation for any function (best, average, worst case, or non-algorithmic functions).
- Multiple comments note Big O is an upper bound, distinct from Ω (a lower bound) and Θ (a tight bound, same order above and below), and that these bounds are independent of “best vs. worst case.”
- The article’s (now-removed) explanation of Θ via “best and worst case are the same order” is called flatly incorrect; bubble sort’s worst case is Θ(n²) even though its best case (an already-sorted input) is O(n).
- There’s debate over how strictly to teach this: some argue precise math (asymptotes, bounds) is essential; others say a slightly “wrong but useful” simplification is acceptable for bootcamp-level audiences.
- Broader meta-discussion arises about “toxic experts,” tone of correction, and the tension between accessibility and rigor.
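The bubble-sort point raised in the thread can be checked directly by counting comparisons. A minimal sketch (function names are hypothetical) of an early-exit bubble sort: a reversed input triggers the quadratic worst case, while an already-sorted input is detected after one linear pass.

```python
def bubble_sort_comparisons(xs):
    """Early-exit bubble sort; returns (sorted list, comparison count)."""
    xs = list(xs)
    comparisons = 0
    n = len(xs)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            comparisons += 1
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
                swapped = True
        if not swapped:
            # Best case: one pass over sorted input, no swaps, stop early.
            break
    return xs, comparisons

# Worst case (reversed input of size 100): 99 + 98 + ... + 1 = 4950
# comparisons, i.e. Θ(n²) growth.
_, worst = bubble_sort_comparisons(range(100, 0, -1))
# Best case (already sorted): a single pass of n - 1 = 99 comparisons, O(n).
_, best = bubble_sort_comparisons(range(1, 101))
```

This is exactly the asymmetry the commenters describe: the same algorithm has different best- and worst-case orders, so Θ cannot be defined as “best and worst case coincide.”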
Practical usage, constants, and hardware realities
- Several comments emphasize that Big O doesn’t capture constants, caching, or memory patterns; O(1) hash lookups can be slower than an O(n) scan for small n or cache-friendly data.
- Examples include replacing hash maps with sorted arrays + binary search for real speedups, and quadratic algorithms that are fine when n is small but explode at larger scales.
- There’s pushback on the claim that Big O is “less relevant” now: others argue its abstraction from hardware details is exactly why it remains valuable.
Education, calculus, and who needs Big O
- Some argue Big O is fundamentally about limits/asymptotics, so a bit of calculus (limits, growth rates) would prevent common misconceptions.
- Others counter that many working developers lack time or interest for calculus but still benefit from an intuitive grasp of “how cost grows with input size.”
- Experiences differ: some learned Big O in early CS courses (discrete math, algorithms), others report it being handwaved or never properly taught.
- A veteran engineer claims never to have needed formal Big O notation; replies argue every engineer should at least recognize the impact of nested loops and core data-structure complexities.
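The “every engineer should recognize the impact of nested loops” point reduces to a simple observation, sketched here with hypothetical operation counters: a single loop does work proportional to n, while a nested pair does n² work, so doubling the input quadruples the cost.

```python
def single_loop_ops(n):
    """O(n): one unit of work per element."""
    ops = 0
    for _ in range(n):
        ops += 1
    return ops

def nested_loop_ops(n):
    """O(n²): the inner loop runs in full for every outer iteration,
    so doubling n quadruples the total work."""
    ops = 0
    for _ in range(n):
        for _ in range(n):
            ops += 1
    return ops
```

This is the intuitive core of Big O that even calculus-averse developers can use: nested iteration over the same input multiplies growth rates.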