The Lost Art of Logarithms
Practical coding uses of logarithms
- Example from LMAX Disruptor: computing the next power-of-two buffer size via log/pow; others suggest bit-twiddling (highestOneBit, numberOfLeadingZeros) as clearer, faster, and free of floating-point quirks.
- Discussion of edge cases (e.g., input 1 or 0) and how Java's bit-ops API is slightly awkward for a true floor(log₂).
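A minimal Python sketch of the two approaches under discussion (the function names are mine; Python's int.bit_length plays roughly the role of Java's highestOneBit/numberOfLeadingZeros here):

```python
import math

def next_pow2_via_log(x: int) -> int:
    # floating-point route: round log2 up, then exponentiate;
    # subject to the rounding quirks the thread warns about
    return 2 ** math.ceil(math.log2(x))

def next_pow2_via_bits(x: int) -> int:
    # integer route: (x - 1).bit_length() is a floor(log2)-plus-one in
    # disguise, so this never touches floating point
    return 1 if x <= 1 else 1 << (x - 1).bit_length()

def floor_log2(x: int) -> int:
    # a "true floor(log2)": defined only for x >= 1, which is exactly
    # the edge case (input 0) the thread calls awkward
    if x < 1:
        raise ValueError("floor(log2) undefined for x < 1")
    return x.bit_length() - 1
```

For example, both routes map 17 to 32 and leave 16 at 16, while the bit route handles the input-1 edge case explicitly.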
Notation and alternative representations
- Some are dissatisfied with traditional log_b(x) notation; the "triangle of power" is praised by some as what finally made logs click and derided by others as visually confusing and unhelpful for proofs.
- Proposal of "magnitude notation" (e.g., writing only the exponent, "mag 6") as a friendlier way to think about orders of magnitude; critics note clashes with existing uses of "magnitude" and loss of precision/significant figures.
- General sentiment: logs are conceptually simple (“just the exponent”) but burdened by intimidating terminology.
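For readers who haven't seen the "triangle of power": the underlying observation is just that one relation, written here in standard notation, has three inverse-like readings depending on which quantity is unknown:

```latex
b^{e} = x
\quad\Longleftrightarrow\quad
e = \log_b x
\quad\Longleftrightarrow\quad
b = \sqrt[e]{x}
```

The triangle notation puts b, e, and x at three corners of one symbol so that all three readings look alike.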
Teaching, history, and pedagogy
- Many argue logs are better introduced via their original purpose: turning multiplication into addition (functions with f(ab)=f(a)+f(b)) and tools like Napier’s tables and slide rules, rather than as an abstract inverse of exponentials.
- Several reminisce about learning log tables in school because calculators were banned; logs were taught as a practical computational tool.
- Strong interest in “genetic” / historical approaches: following the sequence of real problems (astronomy, navigation, engineering) that drove the invention of logs and other math, instead of decontextualized symbol-pushing.
- Frustration from people who’ve forgotten school math and find re-entry hard; others point to modern resources (Khan Academy, etc.) and argue adults can relearn in weeks with practice.
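The multiplication-into-addition idea in concrete terms, as a table-and-slide-rule user would have applied it (a worked example with four-figure common logs, not taken from the thread):

```latex
\log_{10}(37 \times 29) = \log_{10} 37 + \log_{10} 29
\approx 1.5682 + 1.4624 = 3.0306,
\qquad
10^{3.0306} \approx 1073.
```

Here 37 × 29 = 1073 exactly: two table lookups and one addition replace a long multiplication.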
Probability, statistics, and simulations
- Highlighted fact: if X ~ Uniform(0,1), then -ln(X)/λ ~ Exponential(λ); used for weighted sampling and event-time generation.
- This leads into inverse transform sampling (sample uniform, apply inverse CDF) as a general technique and connections to Poisson processes and even SQL implementations for weighted random sampling.
- Explanations range from calculus/PDF derivations to intuitive arguments via memorylessness.
- Another theme: multiplicative physical laws imply that products of many random factors yield log-normal distributions, explaining why log-transforms often “Gaussianize” data—but also warnings that log-log plots can be overused and misleading.
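A Python sketch of the uniform-to-exponential trick and the weighted-sampling use the thread mentions (function names and the race framing are mine; the SQL variant orders rows by the same -ln(U)/w key):

```python
import math
import random

def sample_exponential(lam: float, rng: random.Random) -> float:
    # inverse transform sampling: if U ~ Uniform(0,1),
    # then -ln(U)/lam ~ Exponential(lam)
    u = rng.random()
    return -math.log(1.0 - u) / lam  # 1-u sidesteps log(0) when u == 0.0

def weighted_pick(weights, rng: random.Random) -> int:
    # exponential race: item i "finishes" at an Exponential(w_i) time,
    # and the minimum lands on i with probability w_i / sum(weights)
    keys = [sample_exponential(w, rng) for w in weights]
    return keys.index(min(keys))
```

With a seeded generator, the empirical mean of Exponential(2) samples settles near 1/2, and weighted_pick([1, 3]) returns index 1 about 75% of the time.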
Mental math, data intuition, and large numbers
- Several people advocate memorizing a tiny table of base-10 logs (especially for 2, 3, and 7) plus simple interpolation; this enables quick order-of-magnitude estimates, base conversions, and decibel calculations in one's head.
- Simple “party tricks” (estimating log₁₀ of arbitrary integers via digit counts and rough mantissas) illustrate how far this can go.
- Discussion on “conceiving” huge numbers (10⁸⁰, 10⁴⁰⁰⁰): consensus that we can’t visualize them, but logs and scientific notation give workable, intuitive handles on scale.
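The memorize-three-logs trick, made concrete (a sketch; 0.301, 0.477, and 0.845 are the standard rounded values of log₁₀2, log₁₀3, log₁₀7):

```python
import math

LOG2, LOG3, LOG7 = 0.301, 0.477, 0.845  # the three memorized seeds

# every other digit's log follows by addition and subtraction:
DERIVED = {
    2: LOG2,
    3: LOG3,
    4: 2 * LOG2,        # 4 = 2^2
    5: 1 - LOG2,        # 5 = 10/2
    6: LOG2 + LOG3,     # 6 = 2*3
    7: LOG7,
    8: 3 * LOG2,        # 8 = 2^3
    9: 2 * LOG3,        # 9 = 3^2
}

def approx_log10(n: int) -> float:
    # the "party trick": digit count gives the characteristic,
    # the leading digit's table entry approximates the mantissa
    s = str(n)
    return (len(s) - 1) + DERIVED.get(int(s[0]), 0.0)
```

Each derived entry is within about 0.002 of the true log, and approx_log10 is good to roughly one leading digit of slack (at worst about 0.3); the same seeds give decibel facts like "doubling power ≈ 10 × 0.301 ≈ 3 dB".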
Analog tools and the “lost art”
- Strong nostalgia for slide rules, Napier’s bones, and log tables; some still use slide rules (e.g., in kitchens for scaling recipes) and abaci/Soroban for mental math training.
- Observations that old math books routinely included log tables because they were universally useful.
Miscellaneous mathematical insights
- Mention of logarithmic derivatives ((ln f)' = f'/f) as a surprisingly central tool, with links to Gompertz-type growth curves appearing often in nature.
- References to Benford’s law via worn log-table pages, and to logs in music, navigation, and engineering scales (dB, Richter).
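The logarithmic-derivative remark in symbols: the log derivative is the relative growth rate, and for a Gompertz curve that rate decays exponentially (standard forms, not taken from the thread):

```latex
(\ln f)' = \frac{f'}{f},
\qquad
f(t) = a\,e^{-b e^{-ct}}
\;\Longrightarrow\;
(\ln f)'(t) = b c\, e^{-ct}.
```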
Reception of the book and author
- Strong enthusiasm for the project, especially from readers who loved the author’s earlier computing books.
- Multiple people plan to use it as a gentle, historically grounded introduction for themselves or their kids and ask for ways to follow its completion.