Hacker News, Distilled

AI-powered summaries of selected HN discussions.

Ask HN: Anyone making a living from a paid API?

Why API businesses are often secretive

  • Several commenters note a strong incentive to “stay under the radar”: sharing details risks inviting copycats into niches that are easy to replicate.
  • Unlike open source communities, API providers often guard implementation and go-to-market “tricks of the trade” as their competitive edge.

What makes a paid API valuable

  • Convenience and reliability repeatedly trump “anyone can build this.”
  • Examples: image processing, HTML→PDF, screenshots, SMS/telephony, STT, OCR, geo-IP, podcast search, certificate transparency, blockchain node access, market data, recipe parsing, Bitcoin analytics.
  • A common origin story: “I was the user; I built an API to solve my own recurring problem.”
  • Advice: start with a painful, well-understood problem in a domain you know, not with a neat piece of tech.

Idea generation and demand quality

  • Thread includes wishlists (Lego-set-from-inventory, multi-store grocery optimization, better meeting transcription/webhooks, native TTS/STT tooling).
  • Many such ideas already exist, showing the danger of building “cool” APIs without checking for real, paying demand.
  • One commenter frames it as: sell “painkillers, not vitamins.”

Real-world API businesses (orders of magnitude)

  • Solo/very small teams report:
    • ~$200/mo (recipe ingredient parser).
    • ~$5k/mo (speech-to-text; model fine-tuning; image finetuning API).
    • ~$12k MRR (HTML→PDF).
    • ~$20k MRR (screenshot API).
    • ~$35k–55k MRR (CIAM and OCR APIs).
    • ~€500k MRR (SMS/telephony API with pay-per-use).
  • Some are now largely “maintenance mode”; others are declining due to commoditization by big cloud/LLMs.

Go-to-market and distribution

  • First customers often come from: personal networks, meetups/hackathons, Reddit/HN/Quora/StackOverflow, Product Hunt, cloud-provider marketplaces, or dedicated developer platforms.
  • Marketing and sales (not engineering) are repeatedly cited as the hardest part.

Pricing and value capture

  • Models include per-call, per-minute, per-page, subscriptions with usage tiers, and negotiated enterprise contracts (a toy tiered-pricing sketch follows this list).
  • A recurring regret: backend API providers often capture far less revenue than the customer-facing apps built on top, because “who owns the end user” owns pricing power.
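
A toy illustration of the tiered, usage-based pricing mentioned above (a minimal sketch in Python; all tier boundaries and prices are invented):

    def monthly_bill(calls, tiers=((100_000, 0.0), (1_000_000, 0.0005), (None, 0.0002))):
        # Each tier is (cap, price per call); cap=None means "no upper bound".
        bill, prev_cap = 0.0, 0
        for cap, price in tiers:
            in_tier = (calls if cap is None else min(calls, cap)) - prev_cap
            bill += max(in_tier, 0) * price
            if cap is None or calls <= cap:
                break
            prev_cap = cap
        return bill

    monthly_bill(50_000)     # 0.0   -> inside the free tier
    monthly_bill(500_000)    # 200.0 -> 400k metered calls at $0.0005
    monthly_bill(2_000_000)  # 650.0 -> high volume at a cheaper marginal rate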

Ethics, legality, and odd cases

  • One story describes an employee spinning out an internal system into a paid external API for the same company, prompting debate about legality and whether this is extortion vs normal consulting.
  • Some APIs are subsidized or public (e.g., government job-posting feeds), used to stimulate ecosystem growth rather than direct profit.

My five-year experiment with UTC

Role and value of local time zones

  • Time zones encode “time of day” and shared culture: “up at 5am for a flight” or “home late” instantly conveys early/late without extra explanation.
  • People generally care about where an event sits in the light–dark cycle and workday/week pattern, not its absolute offset from England.
  • Time zones roughly align calendar days with “natural” days so that a new date usually starts while most people sleep, avoiding confusion like appointments crossing a date boundary mid‑day.
  • Critics of abolition say that local schedules are still governed by sun and culture; without time zones you’d need lookup tables for what 08:00 UTC “means” in each place, effectively recreating zones.
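
The lookup-table objection is easy to make concrete: mapping "08:00 UTC" to a time of day still requires a per-place table of offsets and DST rules, which is exactly what the tz database encodes. A minimal sketch using Python's standard library:

    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo  # stdlib bindings to the tz database (Python 3.9+)

    t = datetime(2025, 6, 1, 8, 0, tzinfo=timezone.utc)  # "08:00 UTC"
    for place in ("Europe/London", "America/Los_Angeles", "Asia/Tokyo"):
        print(place, t.astimezone(ZoneInfo(place)).strftime("%H:%M"))
    # Europe/London 09:00, America/Los_Angeles 01:00, Asia/Tokyo 17:00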

Critiques of living by UTC / abolishing time zones

  • Using UTC personally adds friction to everyday local life: store hours, trains, flights, and social plans all require constant conversion.
  • Jet lag and local sleep cycles don’t disappear: going to bed at “21:00 UTC” after travel may mean sleeping at noon locally.
  • People working in facilities that used a “master time” report persistent confusion and ad‑hoc fixes (e.g., taped “+1” for DST).
  • Several view “time zones should be abolished” as a form of programmer utopianism that underestimates social and biological constraints. Others argue the article is more of a voluntary experiment than an authoritarian prescription.

Arguments in favor of UTC / global time

  • Strong consensus that UTC is best for logs, servers, telemetry, and cross‑region debugging; it avoids DST chaos and ambiguous local timestamps.
  • Some remote workers and travelers find a single personal baseline (UTC) simplifies reasoning about multiple colleagues’ zones, especially when DST rules differ.
  • Advocates argue that inherent complexity is “people keep different hours”, not time zones; a global standard could reduce repeated conversions and DST mistakes.

Alternatives and incremental improvements

  • Many propose 24‑hour clocks everywhere to eliminate AM/PM confusion; “06:00pm” formats are widely disliked.
  • Various alternative schemes are mentioned (Swatch .beat time, letter codes, metric time, solar‑offset time, continuous longitude‑based time), but are generally seen as clever curiosities rather than realistic replacements.
  • Broad agreement that abolishing DST is a more practical near‑term goal than abolishing time zones themselves.

The wake effect: As wind farms expand, some can ‘steal’ each other’s wind

Wind rights and ownership

  • Several comments connect the article’s “wind theft” framing to emerging concepts of “wind rights,” comparing them to water or air rights.
  • Upwind farms reducing the available resource for downwind farms is seen as a classic shared‑resource/ownership problem, similar to upstream dams reducing downstream hydropower or trees/shadows affecting neighbors’ solar panels.
  • Some foresee the need for clearer legal frameworks as density of wind development increases.

Physics of wake effects

  • Commenters note that slower wind downstream is basic energy conservation but emphasize the magnitude and distance (wakes up to ~100 km, ~10% reductions) as the non‑obvious and policy‑relevant part.
  • There’s debate over whether turbines “stop the wind” entirely (rejected by others) versus partially extracting energy.
  • Some propose national- or basin‑scale optimization of wind farm siting to account for wakes.

Scale, climate, and ecological impacts

  • One side argues the extractable fraction of global wind energy at turbine height is large compared to current human use, so environmental impact from extraction is small.
  • Others worry that assuming “tiny effects” risks repeating past mistakes (e.g., fossil fuels) and ask about impacts on birds, insects, local climate, and global circulation.
  • Responses claim measurable but minor meteorological changes (e.g., soil drying), and note bird mortality from turbines is small relative to buildings, cats, and pollution; mitigation ideas like painting a blade are mentioned.

Economics and investment risk

  • A key dispute: is a 2–3% wake-related production loss trivial or potentially fatal to project economics?
  • Some argue that for capital-intensive offshore projects with thin margins, a few percent can erase profit (see the toy calculation after this list), especially if a new upwind farm wasn’t in the original risk model.
  • Others counter that this level of uncertainty is normal, often already modeled, and mainly affects marginal projects rather than the overall build-out.
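
A back-of-envelope sketch of that dispute (all figures invented): the same few percent of lost output is a much larger share of net margin.

    annual_revenue = 280_000_000  # hypothetical offshore farm: gross revenue/yr
    annual_costs = 180_000_000    # O&M plus debt service/yr

    for wake_loss in (0.00, 0.02, 0.03):
        margin = annual_revenue * (1 - wake_loss) - annual_costs
        print(f"{wake_loss:.0%} wake loss -> margin {margin / 1e6:.1f}M/yr")
    # 0% -> 100.0M, 2% -> 94.4M, 3% -> 91.6M: a ~3% output loss is a ~8%
    # margin loss here, trivial for fat margins, painful for thin ones.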

Politics, perception, and aesthetics

  • Several comments see “wind theft” narratives and complaints about waste oil, non‑recyclable blades, etc., as part of a broader, sometimes ideological anti‑wind backlash, likened to how nuclear became a “dirty word.”
  • NIMBY resistance and culture‑war opposition are highlighted.
  • Aesthetic objections are voiced: wind farms are said to “industrialize” previously open landscapes, especially in rural plains.

Investment Risk Is Highest for Nuclear Power Plants, Lowest for Solar

Limits of LCOE and Cost Metrics

  • Several comments note that simple LCOE charts (especially older ones) are misleading: they often ignore grid integration, variability, and timing of generation (“watts at night”); a formula sketch follows this list.
  • Grid costs for renewables (batteries, transmission, inertia provision) are often excluded from LCOE but are real and sometimes large.
  • Others argue that even with storage and grid costs added, new nuclear is still more expensive than solar + storage, especially given rapid cost declines in renewables and batteries.
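
For reference, LCOE is just discounted lifetime cost divided by discounted lifetime output; a minimal sketch (toy per-kW numbers, not from the paper) also shows the critique directly: nothing in the formula cares when, or how dependably, the energy arrives.

    def lcoe(capex, annual_cost, annual_mwh, years, r):
        # Levelized cost of energy ($/MWh): PV of all costs over PV of all output.
        pv_costs = capex + sum(annual_cost / (1 + r) ** t for t in range(1, years + 1))
        pv_output = sum(annual_mwh / (1 + r) ** t for t in range(1, years + 1))
        return pv_costs / pv_output

    lcoe(1_000, 20, 2.0, 25, 0.07)   # ~53 $/MWh, solar-ish toy inputs per kW
    lcoe(6_000, 120, 7.9, 60, 0.07)  # ~69 $/MWh, nuclear-ish toy inputs per kW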

Grid Stability, Inertia, and Blackouts

  • One side emphasizes the value of physical inertia from large turbines (nuclear, hydro, gas) and claims renewables need additional investments to replace that.
  • Another side counters that modern inverters and batteries can provide synthetic inertia and grid-forming capabilities, making heavy rotating mass unnecessary.
  • The cause of the Iberian Peninsula blackout is disputed: some blame lack of inertia; others cite inverter misconfiguration or reactive power/overvoltage and say inertia was not the key issue. Some describe the exact root cause as still unclear.

Nuclear Economics, Regulation, and Risk

  • Many agree nuclear has very high construction risk: huge upfront capex, long build times, and frequent cost overruns.
  • Disagreement over why: some blame “wildly over‑regulated” and opaque processes; others say the zero‑tolerance safety requirement and political risk justify strict regulation.
  • There is a long sub‑thread arguing whether long plant lifetimes (60–80 years) materially improve nuclear economics once discount rates, early closures, and competition from solar are considered.
  • Some promote federal financing or guarantees since private capital struggles with long, risky payback; critics see this as massive subsidy and poor use of public funds.

Waste, Liability, and Externalities

  • Pro‑nuclear commenters say waste volumes are small, storage methods are known (pools, dry casks, deep geological repositories), and surcharges already fund decommissioning.
  • Skeptics point out that no country is yet operating a permanent repository, and that accident liability is largely socialized (e.g., liability caps), unlike for many other sectors.
  • Comparisons are drawn to fossil fuels: their externalities and cleanup are also heavily socialized, but nuclear is held to a higher standard.

Renewables, Storage, and System Design

  • Strong view that solar and wind, coupled with large grids, batteries, pumped storage, and possibly hydrogen/syngas, can economically reach very high penetrations.
  • Others stress that in high‑latitude, long‑winter or cloudy regions (e.g., Sweden), non‑dispatchable renewables need firm backup (hydro, gas, nuclear, or coal), and that backup costs are often not attributed to renewables.
  • Debate over how much long‑duration storage (tens vs 100+ hours or even seasonal) is needed; some argue occasional fossil backup is acceptable during rare extremes.

Scale, Modularity, and Construction Risk

  • Commenters highlight “diseconomies of scale”: very large projects (especially >1.5 GW nuclear) show systematically higher cost escalation.
  • Solar and wind benefit from modular, repeatable components with factory learning curves; they can start generating revenue as they are incrementally built, reducing financial risk.
  • By contrast, nuclear projects only earn once fully complete after many years, compounding financing and political risk.
  • Small modular reactors are discussed as a way to capture similar modular benefits, but commenters note that regulatory frameworks and financing models have so far prevented their promised cost reductions.

Policy, Markets, and Carbon Pricing

  • Several comments distinguish “investment risk” (cost overrun) from ROI or social value; nuclear might be risky to build, yet valuable for energy security and firm capacity.
  • Others argue current market structures favor cheap gas peakers and ignore climate externalities; a sufficiently high and well‑designed carbon price would shift economics away from gas toward renewables (and possibly nuclear).
  • There’s contention over subsidies: critics say nuclear survives only via large direct and indirect subsidies; others respond that renewables have also received substantial support but are now increasingly subsidy‑independent.

Social License, Aesthetics, and Public Perception

  • Some see the paper and similar messaging as “green propaganda” and complain about land use impacts of solar and wind (e.g., removal of olive groves).
  • Others respond that every energy source has externalities and that nuclear’s perceived risk is driven more by fear, historic accidents, and politicized narratives than by current safety statistics.
  • Early plant closures (e.g., in Germany) are cited as “social licensing risk” that undermines long-term nuclear ROI in a way solar and wind rarely face once built.

Consider Knitting

Learning Curve, Flow, and Frustration

  • Several commenters tried crochet/knitting during or after Covid and found the early phase mentally demanding rather than relaxing.
  • Crochet is described as fast to “get going” but slow to become smooth and automatic, which feels mechanically repetitive compared to learning a language or instrument.
  • Mixed views on difficulty: one pithy summary offered is “crochet is harder 0→1, knitting is harder 1→10”; others insist both are initially quite hard.
  • Left-handed learners report extra friction because most material assumes right-handed technique.

Relaxation, Meaning, and Guilt About “Productivity”

  • Some struggle with feeling that slow handwork is a “waste of time” compared to more visibly impactful or prestigious pursuits.
  • Others push back that “meaningful” is subjective; simply making something physical or enjoyable is meaningful enough.
  • One person consciously uses knitting as “exposure therapy” to unlearn the idea that all time must produce external value.

Attention, Multitasking, and Mental Health

  • People differ sharply on whether they can knit/crochet while watching TV or listening to audiobooks.
  • One commenter suspects a link to anxiety/depression; others argue it’s just individual wiring or task interference (e.g., language-heavy tasks can’t be combined).
  • There’s debate over whether good multitaskers are actually doing more, or just rapid context-switching and tolerating quality loss.

Fiber Arts as Tactile, Reversible, and Deep

  • Knitting/crochet are praised as screen-free, tactile counterpoints to knowledge work. Undoing mistakes is easy compared to sewing or woodworking.
  • Some emphasize the very high skill ceiling (e.g., intricate shawls) and compare design/execution to painting or furniture-making, questioning the art vs. “women’s craft” divide.
  • Historical links to Jacquard looms and the analogy of a pattern as a tiny programming language resonate with programmers.

Comparisons to Other Hobbies

  • Many suggest or practice alternatives with similar benefits: music (especially guitar and percussion), woodworking, whittling, weaving, cross-stitch, woodcut, pop-up cards, plushies, cosplay, Lego, mini painting, climbing, cooking, gardening, cycling, and bike wheelbuilding.
  • Woodworking is lauded for usefulness and satisfaction but noted as noisier, costlier, less portable, and harder to undo mistakes than knitting.

Practical Tips and Caveats

  • Suggested beginner projects: dishcloths, hats, socks; note that cotton yarn is less forgiving for novices.
  • Knitting is likened to an “OG fidget toy” for some, especially neurodiverse people.
  • One warning: poor knitting technique can cause long-lasting RSI, particularly for heavy computer users.

GUIs are built at least 2.5 times

Software and GUIs Are Built Multiple Times

  • Many agree that good software, especially GUIs, effectively gets built ~3 times:
    1. quick prototype to explore the problem,
    2. first “real” but naïve implementation,
    3. a rewrite once the team truly understands requirements and domain.
  • Some add a “4th rewrite” joke (e.g., “now in Rust”) and reference ideas like “plan to throw one away” and second-system effect.
  • Several note that with experience, steps 1 and 2 can partially compress, but never fully disappear.

Rewriting vs Incremental Change

  • Product and project managers are reluctant to fund rewrites because past attempts often blew up schedules and budgets or failed outright.
  • Developers counter that staying indefinitely in iteration #2 leads to huge maintenance costs, blocked features, and accumulating technical debt that also causes overruns.
  • There’s tension between short-term “good enough” and long-term competitiveness and maintainability.

Agile, Lean, and Feedback Loops

  • Some think the article misunderstands lean/Agile: these methods already assume you can’t know UX in advance and optimize for fast feedback and iteration.
  • Others argue many organizations say “Agile” but behave like waterfall with sprints, still expecting fixed feature lists and “finished” software.
  • Several emphasize extremely tight iteration loops with real users as the only reliable path to good UI/UX; early mockups and paper/Figma prototypes help but never replace testing the real thing.

UX, Domain Expertise, and Roles

  • Good GUIs often come from domain experts building tools for themselves (e.g., finance, CAD, DAWs), not generic designers or programmers.
  • GUIs frequently fail because:
    • devs, designers, spec-writers, and managers all misunderstand actual user workflows,
    • responsibility for UX is split across people who are each “bad at UX.”
  • Proposed mitigations: semi-technical internal “champions” embedded in the user department, or strong product managers who truly understand both domain and tech.

Recurring GUI Problems and Opinions

  • Many relate to the described cycle: pixel-perfect design → build to spec → everyone hates it → redesign → more churn → grudging “nobody loves it but nobody hates it.”
  • Complaints include: oversimplified UIs for “average users,” flashy redesigns that worsen usability, and GUI toolkits that couple layout tightly to code, making iteration expensive.
  • Others highlight the value of rough, ugly prototypes, heavy user feedback, and deferring visual polish until information architecture and flows are stable.

Tooling and Article Critique

  • Figma and similar tools are praised for accelerating UI experiments before coding; some mention AI-generated throwaway UIs as a new kind of incidental prototype.
  • Multiple commenters find the article itself hard to read, meandering, and sometimes confused about patterns and lean literature, even if they agree with the core “GUIs are built ≥2.5 times” insight.

Using lots of little tools to aggressively reject the bots

Bot‑blocking techniques and tools

  • Many liked the article’s Nginx+fail2ban approach; others suggested more automated tools like Anubis or go-away, or platforms like tirreno with rule engines and dashboards.
  • People describe mixed strategies: IP/ASN blocking, honeypot endpoints, “bait” robots.txt entries that trigger zip bombs or bans, simple arithmetic captchas with cookies, and log-scan-and-ban systems (sketched after this list).
  • Some argue whack‑a‑mole IP blocking is fragile and recommend fixing app hot spots instead (e.g., disabling Gitea “download archive” for commits, or putting heavy files behind separate rules or auth).
  • There’s debate over whether to focus on banning bots versus restructuring sites (caching, CDNs, removing costly features) so bots can be tolerated.
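
A minimal sketch of the log-scan-and-ban pattern referenced above (log path, regex, and threshold are all assumptions; fail2ban automates the same loop with time windows and automatic unbanning):

    import re
    import subprocess
    from collections import Counter

    LOG_PATH = "/var/log/nginx/access.log"  # assumed default nginx access log
    THRESHOLD = 600                         # assumed per-IP limit per scan window

    hits = Counter()
    with open(LOG_PATH) as log:
        for line in log:
            if match := re.match(r"(\S+) ", line):  # client address is field one
                hits[match.group(1)] += 1

    for ip, count in hits.items():
        if count > THRESHOLD:
            # Drop further traffic from the offender at the firewall (needs root).
            subprocess.run(["iptables", "-A", "INPUT", "-s", ip, "-j", "DROP"], check=False)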

Robots.txt, user agents, and evasion

  • One camp reports that big AI crawlers identify themselves, obey robots.txt, and stop when disallowed.
  • Others provide detailed counterexamples: bots using random or spoofed UAs, ignoring robots.txt, hitting expensive endpoints (git blame, per‑commit archives) from thousands of residential IPs while mimicking human traffic.
  • Several note that many abusive bots simply impersonate reputable crawlers, so logs may not reflect who is actually behind the traffic.

Load, cost, and infrastructure constraints

  • Some say 20 r/s is negligible and that better caching or a CDN is the “real” fix.
  • Others reply that bandwidth, CPU-heavy endpoints, autoscaling bills, and low-end “basement servers” make this traffic genuinely harmful, especially with binary downloads or dynamic VCS views.
  • There is disagreement over whether small sites should be forced into CDNs and complex caching purely because of third‑party scrapers.

Ethics and purpose of scraping

  • One view: public data is by definition for anyone to access, including AI; people are inconsistent if they accept search engines but reject AI crawlers.
  • Opposing view: classic search engines share value and behave relatively considerately; many AI scrapers externalize costs, overwhelm infra, ignore consent, and provide little or no attribution or traffic.
  • Motivations for blocking include resource protection, copyright/licensing concerns, and hostility to unasked‑for commercial reuse.

Collateral damage to legitimate users

  • Multiple comments describe being locked out or harassed by CAPTCHAs, Cloudflare-style challenges, VPN/Tor/datacenter/IP-block rules, and JS-heavy verification walls.
  • Some criticize IP-range and /24‑style blocking as punishing privacy-conscious users, those behind CGNAT, or users of Apple/Google privacy relays.
  • There’s tension between “adapt to bot reality” and “we’re sliding into walled gardens, attested browsers and constant human‑proof burdens.”

Residential proxies and botnets

  • Several note that AI and other scrapers increasingly route through residential proxy networks (Infatica, BrightData, etc.), often via SDKs in consumer apps and smart-TV software, making IP‑based blocking and attribution very hard.
  • Suggestions include ISPs or network operators being stricter about infected endpoints, but others argue that would mean blocking almost everyone; security and attribution are fundamentally hard.

Alternative models and ideas

  • Ideas floated: push/submit indexing instead of scraping; “page knocking” or behavior‑based unlocking; separating static “landing/docs” from heavy dynamic views; restricting expensive operations (git blame, archives) to logged‑in users.
  • Some see aggressive bot defenses as necessary adaptation; others call them maladaptive, creating a worse web for humans.

Beware of Fast-Math

Alternative number representations (fixed point, rationals, posits)

  • Several comments advocate fixed-point and rational arithmetic (Forth, Scheme, Lisp) as safer for many real-world quantities (money, many engineering problems).
  • Rationals work well until you need trig/sqrt/irrationals; then you need polynomial/series methods or CORDIC.
  • Disagreement over “floats are just fixed-point in log space”: some argue scaled integers can be faster and adequate across many domains.
  • Interest in IEEE work on alternatives like posits; current draft standard mentioned but noted as not yet including full posit support, with only early hardware prototypes.

Rust’s “algebraic”/relaxed floating operations

  • Rust is adding localized “algebraic” float operations that set LLVM flags for reassociation, FMA, reciprocal-multiply, no signed zero, etc.
  • These are meant to allow optimizations “as if” real arithmetic holds, but are explicitly allowed to be less precise per operation.
  • Naming is contentious: “algebraic” vs “real_*”, “approximate_*”, or “relaxed_*”.
  • They do not guarantee determinism across platforms or builds; behavior may vary with compiler optimizations and hardware.

Fast-math, optimization levels, and IEEE 754

  • Fast-math bundles many assumptions (no NaNs/inf/subnormals, associativity, distributivity, etc.). Violating them is UB.
  • Contrast with -O2/-O3: those are supposed to preserve correctness; -Ofast (includes -ffast-math) is the “dangerous” one.
  • Some see IEEE 754 as overly restrictive and hindering auto-vectorization; others argue the standard is essential for determinism and safety, and languages should expose intent (order matters vs not).

Precision, reproducibility, and domain-specific needs

  • Some scientific/physics workloads tolerate float noise far larger than rounding effects; they report big speedups from fast-math.
  • Others (CAD, robotics, semiconductor optics) say last-bit precision and strict IEEE behavior critically matter.
  • Reproducibility is a major concern (e.g., audio pipelines, ranking/scoring algorithms, cross-version consistency). Fast-math can change results between builds or platforms.
  • FTZ/DAZ: criticized because they’re controlled via thread-global FP state; a shared library built with unsafe math can silently change behavior in unrelated code.
  • Tools/practices: Kahan summation, Goldberg’s paper, Herbie for accuracy-oriented rewrites, feenableexcept/trapping NaNs, and proposals for languages that track precision (dependent types, Ada-style numeric specs).
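
Kahan summation also illustrates why fast-math is risky: a compiler allowed to reassociate floating-point math can legally simplify the compensation term to zero, silently removing the algorithm. A minimal Python version:

    def kahan_sum(xs):
        total, comp = 0.0, 0.0
        for x in xs:
            y = x - comp            # re-inject the low-order bits lost last round
            t = total + y           # big + small: y's low bits may be lost here
            comp = (t - total) - y  # recover exactly what was lost; with
            total = t               # reassociation this would "simplify" to 0
        return total

    xs = [1.0] + [1e-16] * 1_000_000
    sum(xs)        # 1.0 -> every tiny addend is rounded away
    kahan_sum(xs)  # ~1.0000000001 -> the correct result, 1 + 1e-10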

Money and floating point

  • Strong camp: never use binary floats for currency; prefer integers as cents or fixed-point/decimal for easier reasoning and exact sums (illustrated after this list).
  • Counter-camp: many trading systems use double successfully; with 53 bits of mantissa you can represent typical money ranges to sub-cent precision, and rounding can be managed.
  • Distinction drawn between accounting (needs predictability and “obvious” cents-level correctness) vs modeling/forecasting/trading (can tolerate tiny FP error).
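
The core of the "never binary floats for currency" position fits in a few lines:

    from decimal import Decimal

    0.1 + 0.2 == 0.3    # False: 0.1 and 0.2 aren't exactly representable in binary
    sum([0.1] * 10)     # 0.9999999999999999

    # Integer cents keep accounting sums exact and easy to audit:
    line_items = [1999, 550, 1]  # $19.99 + $5.50 + $0.01, in cents
    sum(line_items)              # 2550 -> $25.50, exactly

    # Decimal handles fractional rates (tax, FX) with explicit rounding rules:
    Decimal("0.1") + Decimal("0.2")  # Decimal('0.3')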

I made a chair

Chair Design & Practical Experience

  • The design is recognized as a very old, pre‑industrial “tribal / 2‑piece / viking / bog chair” that appears across cultures and reenactment scenes.
  • People report it’s surprisingly comfortable and stable in real life, though you can’t lean too far forward or you’ll tip.
  • Some see it as structurally weak (high force at the notch, sharp points on the ground) and wouldn’t trust a single wide board; others say these chairs “basically last forever.”
  • Suggested improvements include multiple slots for adjustable recline, shortening the tail with a second interlocking slot to lift it off the ground, or gluing/screwing a stiffener on the underside to handle heavier users.

Wood, Finishes & Safety

  • The use of pressure‑treated lumber sparked debate:
    • Concerns: skin contact with treatment chemicals, playground bans, unpleasant dust when cutting, and long‑term disposal.
    • Counterpoints: modern treatments in many places use mostly copper plus fungicides and are considered “safe enough” for outdoor furniture.
  • Alternatives mentioned: cedar, redwood, untreated pine with outdoor finishes, oil, polyurethane/varnish, latex paint, and traditional burning (Yakisugi), with caveats about species and upkeep.
  • People learned about end‑cut sealer and shared practices like extra finish on end grain.

DIY Furniture Culture & Resources

  • Many links to DIY designs: the original one‑board chair instructions, Enzo Mari’s Autoprogettazione, “Make a Chair from a Tree,” Lost Art Press books (including free PDFs), stick chairs, anarchist design, Segal‑method houses, wine‑barrel chairs, Leopold benches, and Japanese/nomadic furniture.
  • Strong theme of “reclaiming” furniture from mass producers, learning to think with your hands rather than following strict plans, and appreciating the structural logic of objects.
  • Some criticize certain Mari designs as less robust (loads borne by screws or corners), while others defend them as carefully thought‑through and pedagogical.

Ultralight & Carbon‑Fiber Backpacking Variant

  • An ultralight carbon‑fiber version (~2 lb, ~US$350) drew interest and skepticism.
  • Critics note the product page oddly omits weight, the fire‑ember claim is vague, and it’s heavier than benchmark ultralight chairs.
  • Philosophical split: some argue no serious hiker brings a chair; others say once you try a very light camp chair it becomes indispensable.

Video Length & Media Preferences

  • The carbon‑fiber chair video triggers discussion of 10‑minute YouTube padding driven by ad incentives vs ultra‑short TikTok‑style clips.
  • Some find bloated videos unbearable; others enjoy slower, process‑oriented content as long as filler isn’t purely for revenue.
  • Product reviews are seen as poorly suited to short‑form; shorts are framed as better for quick entertainment than nuanced evaluation.

Making Things & Aesthetics

  • Several comments express envy or joy about making physical objects versus software, describing woodworking as meditative in moderation but grueling as full‑time work.
  • Aesthetic debates touch on “brutalist” furniture and whether exposing raw wood grain counts as brutalism or something warmer.

How Georgists valued land in the 1900s

YIMBY, Community Input, and Veto Power

  • Debate over whether “community input is bad” or whether the real problem is giving local groups de facto veto power.
  • Some argue neighborhood processes amplify NIMBYs and block democratically approved projects (high‑speed rail, apartments), privileging nearby owners over the wider public.
  • Others note historic cases where community opposition stopped destructive megaprojects (e.g., expressways), so input can be beneficial.
  • Disagreement over what “community” means: immediate neighbors vs town/metro residents who share infrastructure and benefits.

Externalities, Zoning, and Local Control

  • Long subthread on externalities: one side says businesses impose net negative spillovers on non‑customers; others counter that many firms create net positive spillovers and that trade surplus is shared.
  • Example: slaughterhouses and affordable apartments show why relying on small-area vetoes shifts “undesirable” uses into powerless neighborhoods.
  • Some argue such allocation should be handled at higher levels of government to avoid every neighborhood defecting on its share of negative externalities.
  • Concern that banning home businesses or low-end housing types (trailers, rooming houses) removes vital “bottom rungs” of the housing ladder.

Land Value Tax and Valuation Methods

  • Several comments stress that separating land and improvement value is routine for assessors and insurers; land valuation isn’t novel.
  • Somers-style “ask the community” valuation is seen as intuitively workable at neighborhood level but questioned on scalability and incentives (no direct cost to misreporting).
  • Some suggest Harberger‑style “self‑assessment equals sale offer” as a more incentive-compatible approach to valuation.
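
A sketch of the Harberger mechanic just mentioned (the tax rate is an arbitrary placeholder): the self-assessed price is simultaneously the tax base and a standing offer to sell, which is what pushes declarations toward honest values.

    TAX_RATE = 0.07  # placeholder; proposals vary and often tie this to turnover

    def annual_tax(declared_value):
        # Declaring a low value cuts your tax bill...
        return declared_value * TAX_RATE

    def must_sell(declared_value, offer):
        # ...but anyone may buy you out at your own declared price.
        return offer >= declared_value

    annual_tax(500_000)          # 35000.0 -> the cost of declaring high
    must_sell(500_000, 520_000)  # True    -> the risk of declaring low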

Impacts on Homeowners and the Elderly

  • Strong concern that high LVT could force elderly, low‑income owners out as neighborhood land values rise, undermining homeownership as retirement security.
  • Georgist responses: distinguish LVT from traditional property tax; propose deferrals, phased increases, or exemptions, with tax recovered at sale/estate.
  • Normative split: some see relocating retirees from prime job centers as efficient and fair; others see forced moves as destabilizing and morally objectionable.

Harberger / Self-Assessed Schemes: Risks and Edge Cases

  • Worries about wealthy actors gaming self-assessed systems by placing strategic bids to raise others’ tax burdens or force sales.
  • Proposed safeguards include court review, minimum bid increments, and limits on abusive offers, but critics argue this still risks harming “grandma,” not just corporations.

Single Tax Scale and Practicality

  • Back-of-envelope claim that a near-100% LVT could theoretically fund all US government plus a citizen dividend; others call this unrealistic and economically disruptive (land values collapsing, farming burdened).
  • Broader skepticism that any country has actually solved speculation and affordability; some think zoning/land release and large-scale public landholding are bigger levers than tax design alone.

Web dev is still fun if you want it to be

Simplicity, Server Rendering, and “Old-School” Joy

  • Many describe renewed enjoyment by avoiding frontend build systems and heavy frameworks: just HTML, CSS, and vanilla JS (often in <script> tags).
  • Server-rendered pages with simple forms, a bit of JS “for spice,” and a single VPS are seen as fast, cheap, robust, and easier to maintain.
  • People report that modern browser APIs (querySelectorAll, fetch) and modern CSS (grid, :has(), custom properties) remove much of the old need for jQuery and complex stacks.
  • This style is viewed as ideal for personal projects, small products, and non-profits, where scaling needs are modest.

Frontend Complexity and Frameworks

  • Strong frustration with React/Next/Vite/Webpack/GraphQL and similar stacks: many feel they add layers and maintenance cost for little user-visible gain.
  • Others argue frameworks are valuable for large teams: they enforce structure, clear contracts, and decoupled front/back ends.
  • Debate over React’s nature: some insist it’s just a small library that doesn’t require JSX or a build system; others say adopting React effectively turns your app into a framework-shaped project.
  • Some see modern-framework backlash as nostalgia/elitism, insisting React et al. exist because large, complex apps are hard to maintain otherwise.

State Management, Scaling, and When SPAs Make Sense

  • One view: FE complexity grew because state moved from server to client for scalability and richer UX; “back to cookies + HTML” is unrealistic at big scale.
  • Counterpoint: with today’s powerful servers and global datastores, server-side state and HTML rendering may again be viable for many apps.
  • Several note that simple patterns (variables + “rerender” function, htmx/Stimulus/Turbo, or partial SPAs) often cover 90% of SPA-style needs.
  • Others share that they started with “classic” Django/Rails-style apps and only moved to API + React/Next when user interactions became truly complex.

Forms, Standards, and Browser Capabilities

  • Strong agreement that form submission should require zero JS; browsers already handle basic validation.
  • Frustration that HTML forms have “rotted”: lack of JSON enctype, awkward validation attributes instead of richer child tags, and stalled standardization efforts.
  • Some show patterns for sharing controller logic between SSR and SPA modes so apps still work with JS disabled.

Tooling Opinions: Tailwind, jQuery, Astro, Web Components

  • Tailwind: praised as a semantic layer that LLMs can reason about; criticized as verbose “inline styles with extra steps” that makes every site look the same.
  • jQuery: still used for readability and convenience; others see it as an obsolete polyfill that’s not worth introducing in new code.
  • Astro and similar tools: mixed experiences—some find them fun and “Rails-like,” others dislike the mental model of server/client fences.
  • Web components + native modules/CSS are highlighted as a modern, standard-based alternative to big frameworks, with small libs like Lit or Mithril layered on when needed.
  • Lightweight options like Alpine.js and minimalist hypermedia helpers are mentioned as pleasant middle grounds.

AI, “Vibecoding,” and Fun

  • The article’s mild use of AI for CSS triggered strong anti-AI reactions from some, who see any use as harmful; others say “somewhat OK” assistance is fine.
  • A few enjoy “vibecoding” with AI for rapid prototypes and playful experiments, but would not trust it for production.
  • Concern is raised that AI industrializes what used to be fun; others argue competent developers will still stand out regardless.

Careers, Hiring, and Social Dynamics

  • Several note that hiring markets enforce tool choices: resumes without React can be discarded, driving “resume-driven development.”
  • Using simpler stacks at work can be seen as “dinosaur” behavior, even when it’s more efficient; some accept this and focus on side projects for joy.
  • Tool choices are portrayed as social as much as technical: rejecting mainstream stacks can confuse peers or be perceived as criticism.

Ask HN: How do I learn practical electronic repair?

State of modern electronics repair

  • Modern devices are harder to repair: tiny components, multilayer PCBs, BGAs, microcontrollers, proprietary firmware, scarce schematics and parts.
  • Despite that, many faults (especially in consumer gear) are still fixable: power-supply issues, bad capacitors, connectors, and switches are common wins.
  • Some argue deep faults in highly integrated gear are rarely economical; others counter that hobbyists and small shops still get impressive results.

Learning path & mindset

  • You need both electronics theory and repair “intuition”; they’re related but distinct.
  • Suggested loop: learn basics → build simple circuits → tear down and fix broken stuff → repeat.
  • Many recommend starting by building simple kits (not just Arduino abstractions) before serious repair.
  • Expect to fail and “break things more” early; the low cost of junk electronics makes this acceptable.

Tools & equipment

  • Core starter kit: temperature‑controlled soldering iron, flux, leaded solder, solder wick, basic multimeter.
  • Strong emphasis on learning both soldering and desoldering; right tools (desoldering pump/iron, hot air) are described as near‑essential beyond trivial jobs.
  • Next tier: isolation transformer, oscilloscope (even a cheap/used one), bench PSU, magnification, fume extraction, “third hands,” heat gun, heat‑shrink, good hand tools.
  • Several advise starting with inexpensive tools and upgrading only once limitations are painful.

Safety considerations

  • Treat mains and high voltage with great respect; one‑hand rule, isolation transformer, GFCI, fuses, and emergency power cutoff recommended.
  • Risks highlighted: electrocution, burns, fire, and fumes; also the danger of rendering repaired devices unsafe (e.g., batteries, bypassed protections).
  • Some devices (microwaves, EV battery packs) are widely described as “don’t touch” for beginners.

Where to practice & what to repair

  • Get cheap or free broken items from Craigslist/Marketplace, thrift stores, “for parts” listings, or e‑waste.
  • Good early domains: appliances (washers/dryers), older transistor gear, vintage hi‑fi, game consoles, basic consumer electronics; avoid smartphones and very dense SMD at first.
  • Strategy ideas: buy multiples of the same broken model to combine into one working unit; focus on classes of devices you care about.

Resources (videos, books, communities)

  • YouTube is heavily endorsed for both theory and live repairs, but many note that diagnosis steps are often glossed over.
  • Recommended written resources include “Getting Started in Electronics,” “How to Diagnose and Fix Everything Electronic,” “Practical Electronics for Inventors,” and (for deeper theory) “The Art of Electronics” and ARRL materials.
  • Community options: repair cafés, Discord/online groups, local classes (e.g., community colleges), and repair‑focused wikis.

Diagnosis vs part-swapping & limits

  • Multiple commenters stress learning systematic diagnosis: tracing power rails, recognizing common failure modes (dried electrolytics, shorted MLCCs, cracked joints), reading datasheets, and inferring schematics.
  • Debate exists on how much formal EE is “needed”: some say quite a lot for serious troubleshooting, others say you can get far with pattern recognition plus basic concepts.
  • Economic and future value is mixed: some foresee growing importance of repair skills; others think increasing integration and software dependence will limit what’s realistically fixable.

AI Responses May Include Mistakes

Google Search, Gemini & Declining Result Quality

  • Many comments report Gemini-in-search routinely fabricating facts: wrong car years/models, non‑existent computer models (e.g., PS/2 “Model 280”), bogus events, or made‑up sayings treated as real.
  • Users note Google often buries the one correct traditional result under AI “slop” and SEO junk, in areas (like car troubleshooting) where Google used to excel.
  • Some link this to long‑term “enshittification” of search and ad‑driven incentives: better to show something plausible (and more ads) than admit “no answer.”

Trust, User Behavior & Real‑World Harm

  • Several anecdotes show people treating AI overviews as gospel, then being confused or misled (car system resets, population figures, employment or legal info, game hints).
  • Concern that AI overviews make bad or underspecified queries “work” by giving smooth, confident nonsense where earlier results would have been messy enough to signal “you’re asking the wrong thing.”
  • Worry that this will create more downstream work: support staff and experts having to debug problems caused by AI misinformation.

LLMs vs Search & Alternative Uses

  • Some are baffled that anyone uses LLMs as primary search; others say they’re great for:
    • Framing vague ideas into better search terms.
    • Summarizing multi‑page slop, “X vs Y” comparisons, or avoiding listicle spam.
    • Coding help and boilerplate, provided you already know enough to verify.
  • Alternative tools (Perplexity, DDG AI Assist, Brave, Kagi) are cited as better examples of “LLMs plus search,” mainly because they surface and link sources more transparently.

Disclaimers, Liability & Ethics

  • Broad agreement that tiny footers like “may include mistakes” are inadequate; suggestions range from bold, top‑of‑page warnings to extra friction/pop‑ups.
  • Others argue pop‑ups won’t help: many users don’t read anything and just click through.
  • Tension noted: you can’t aggressively warn “this is structurally unreliable” while also selling it as a replacement for human knowledge work.

Technical Limits & “Hallucinations”

  • Repeated emphasis that LLMs are language models, not knowledge models: they’re optimized to produce plausible text, not truth.
  • Some push back on mystifying terms like “hallucination,” preferring plain “wrong answer” or “confabulation.”
  • Debate over acceptable error rates: at what point is “accurate enough” for non‑critical domains, versus inherently unsafe for anything high‑stakes?

Learn touch typing – it's worth it

How common is touch typing? Generational and cultural gaps

  • Some participants claim almost all younger white-collar workers touch type; others strongly disagree, citing many colleagues who hunt-and-peck or use poor techniques.
  • Reported averages (e.g., ~40 wpm) are used as evidence that many do not truly touch type.
  • There are notable cultural differences: in some countries it was historically taught (often as a “secretarial” elective) and later dropped; in others, it was never institutionalized.
  • A coming cohort raised mainly on touchscreens often struggles with physical keyboards, modifiers, and symbols, relying heavily on phone-like habits and autocomplete.

Should touch typing be taught in schools?

  • Many argue it should be a basic school skill, given how many jobs require extensive keyboard use for decades.
  • Others note schools often no longer offer typing classes, even when “computer classes” exist.
  • Some contend heavy users will learn organically; others push back that structured teaching accelerates learning and avoids bad habits.

Value vs. skepticism: Is it really “worth it”?

  • Proponents say:
    • Keyboard “disappears,” improving focus and flow.
    • Faster and more accurate long-form writing and communication.
    • Feels like “typing at the speed of thought,” especially combined with editor shortcuts (e.g., Vim).
  • Skeptics counter:
    • Thinking, not typing speed, is usually the bottleneck in programming.
    • With code completion and AI assistants, raw typing matters less.
    • Some already type >100 wpm using idiosyncratic methods without pain and see little marginal benefit.

Layouts, technique, and ergonomics

  • Several describe switching to Dvorak, Colemak, or variants (including language-specific layouts) as a way to:
    • Reduce strain and pain.
    • Force a “fresh start” and correct bad habits.
  • Others report successfully retraining on QWERTY, often using blank or unlabeled keycaps to break visual dependence.
  • There’s debate over strict home-row technique vs. “natural” evolved styles; high-speed typists sometimes diverge from textbook fingering.
  • Ergonomic and split keyboards, thumb keys, and custom layers are repeatedly cited as major RSI mitigations and comfort improvements.

Learning strategies and tools

  • People mention formal classes, chat (AIM/IRC/MUDs), games (Typing of the Dead), and modern trainers (Monkeytype, TypeRacer, Keybr, KTouch, Typelit, TypeQuicker).
  • Common advice: prioritize accuracy over speed, avoid looking at the keyboard, practice problem symbols/rows separately, and accept a temporary productivity hit when retraining.

Valkey Turns One: Community fork of Redis

Packaging, Distros, and CI

  • Some want Valkey in default distro repos to avoid adding custom keys/repos in CI (e.g., GitHub Actions).
  • Others note Valkey already exists in many distros (Debian, Ubuntu, Fedora, Arch, RHEL 10, etc.), though often in “universe”/community repos that may not be enabled or fully maintained.
  • Debate:
    • One side prefers distro-maintained packages for stability and security backports, especially for core daemons.
    • The other side argues fast‑moving projects are better served via vendor PPAs/custom repos to avoid being stuck supporting ancient LTS versions.
  • GitHub Actions’ limited base images (older Ubuntu LTS) are seen as its problem; suggestion: use custom Docker images if you need newer Valkey.

Reliability and Managed Services

  • One user reports serious outages with AWS’s managed Valkey: connections accepted but commands never executed, restarts hung, and AWS couldn’t diagnose it; replacing with Redis fixed the issue.
  • Others suspect an AWS operational/network issue rather than Valkey itself, citing similar opaque failures with RDS.
  • Managed cache pricing is disputed: some claim ~10x EC2 cost, others see ~1.4–1.7x overhead.

Corporate Backing and Ecosystem

  • Question raised why Valkey hasn’t had an OpenTofu‑scale “moment.”
  • Explanations: Terraform’s value was more tightly bound to its provider/module ecosystem and registry policies, so license changes felt more threatening.
  • Multiple commenters clarify Valkey is in fact heavily backed (AWS, Google Cloud, Oracle, others) and under the Linux Foundation.

Licensing, Trust, and Forks (Redis vs Valkey)

  • Strong, divided views on Redis’s license changes:
    • Some argue permissive licensing enabled hyperscalers to profit while original authors didn’t, and recommend “fair source”/anti‑cloud clauses.
    • Others see relicensing and CLAs as a “rug pull” on users and contributors, undermining trust and effectively privatizing community work.
  • Redis’s move to add AGPL is seen by many as “too little, too late”; Valkey (BSD) is now the default choice, especially for large cloud users who avoid AGPL.
  • Some argue AGPL is the right long‑term answer to stop free-riding; others emphasize that its incompatibility with major clouds practically guarantees Valkey’s continued momentum.
  • Several note Redis still uses a CLA, so another license change remains possible; this is a key reason some won’t “trust Redis again.”

Technical Evolution and I/O Threading

  • The original Redis author joins to correct the article’s framing: I/O threading was added to Redis in 2020 by him, already respecting the “shared‑nothing” philosophy.
    • He explains the design: parallelize the slow read/write syscalls when there is no contention, then immediately return to single‑threaded execution (a toy sketch follows this list); Valkey later improved this and deserves credit.
    • He disagrees with claims that early I/O threads “did not offer drastic improvement,” pointing to existing benchmarks and calling such marketing-driven “journalism” misleading.
  • He notes:
    • I/O threading mainly matters at hyperscaler‑level loads; many large real‑world Redis deployments never needed it because CPU wasn’t the bottleneck.
    • His stance on threads is pragmatic, not ideological: they are also used for modules and new vector-set queries, where the data structures have high constant factors and threading pays off.
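
A toy sketch of that shared-nothing pattern (illustrative Python, not Redis/Valkey code): several threads absorb the slow socket/parse work, while exactly one thread ever touches the keyspace, so the data structures need no locks.

    import queue
    import threading

    commands = queue.Queue()  # parsed requests, consumed by ONE executor
    replies = queue.Queue()

    def io_reader(lines):
        # Many of these may run in parallel: syscalls and parsing dominate here.
        for line in lines:
            commands.put(line.split())

    def executor(store):
        # The single consumer: command execution stays single-threaded.
        while True:
            cmd, *args = commands.get()
            if cmd == "SET":
                store[args[0]] = args[1]
                replies.put("OK")
            elif cmd == "GET":
                replies.put(store.get(args[0]))

    threading.Thread(target=executor, args=({},), daemon=True).start()
    io_reader(["SET answer 42", "GET answer"])
    print(replies.get(), replies.get())  # OK 42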

Valkey vs Redis Going Forward

  • Some believe most end‑users don’t care about the politics and will keep using “Redis” by name; others insist that serious companies do formal license reviews and are actively moving to Valkey.
  • Distro behavior is seen as pivotal: in some cases redis packages already install Valkey under the hood, echoing the MariaDB/MySQL precedent.
  • The idea of re-merging Redis and Valkey is brought up; replies say it’s unrealistic given:
    • Redis did not return to a permissive license (AGPL plus proprietary options instead).
    • Valkey now has strong independent backing and a rapidly growing contributor base.

Clients and Tooling

  • Users on GCP complain about poor official Redis cluster+TLS client support in C, relying on an unofficial hiredis‑cluster.
  • Response: Valkey provides libvalkey, a maintained fork that unifies hiredis and hiredis‑cluster and targets exactly this use case.

Silicon Valley finally has a big electronics retailer again: Micro Center opens

Return of a big-box electronics store to Silicon Valley

  • Many are surprised it took years after Fry’s closed for Silicon Valley to get another large electronics retailer.
  • The new Micro Center is seen as a “middle ground” between Best Buy and small shops like Central Computer.
  • Some note Fry’s had effectively declined long before closure: fewer parts/tools, more generic gadgets, inventory issues, and a failed consignment model.

Who actually needs local hardware?

  • One thread questions whether cloud-centric startups and laptop-heavy workplaces justify such a store.
  • Responses mention: home gaming rigs, local LLM experimentation, Linux boxes, and hobbyists as key customers.
  • Some argue cloud and game streaming are cheaper than owning powerful rigs; others accept higher cost for control and ownership.

Enthusiast and maker appeal

  • Micro Center is praised for:
    • PC components and small-business machines.
    • Large 3D printing section, custom water-cooling aisle, and maker boards (Arduino, ESP8266, Adafruit/SparkFun).
    • A modest but valued aisle of components, tools, soldering gear, and test equipment.
  • Critics say it’s mostly “plug-and-play” consumer hardware, not a true electronics-parts destination like surplus/parts stores (Anchor, Sayal, etc.).

Comparisons: Fry’s, Radio Shack, Newegg, Amazon, others

  • Many reminisce about Fry’s, WeirdStuff, Haltek, and other surplus stores; note the cultural void they left.
  • Micro Center is framed as what Radio Shack “should have become.”
  • Newegg is widely seen as having deteriorated into a messy marketplace; Amazon is convenient but distrusted for counterfeits, mixed inventory, and “new” open-box parts.
  • B&H and other specialty retailers are mentioned as online alternatives, with mixed views on ethics and service.

Service, pricing, and retail economics

  • Micro Center staff are described as numerous, knowledgeable, and commission-driven; some report genuine money-saving advice, others are annoyed by aggressive warranties and scripted interactions.
  • The chain price-matches Amazon, which surprises some given brick-and-mortar overhead; others point out they recoup margin on other items and that few customers actually request matches.
  • Several comments dig into thin retail net margins vs higher gross margins, sales tax arbitrage, and why large, inventory-heavy PC stores remain rare.

Geography and scarcity

  • People in Seattle, LA proper, New England, and elsewhere lament the lack of nearby stores, while longtime customers in Ohio, Virginia, Chicago, and Massachusetts share decades of positive experiences.
  • There’s speculation that high real estate costs and niche demand limit broader expansion, even in tech hubs.

Surprisingly fast AI-generated kernels we didn't mean to publish yet

Fixed-size kernels and PyTorch as a baseline

  • Some note the experiment seems to assume fixed input sizes; others explain PyTorch already uses multiple specialized kernels and tiling, but not for every possible shape.
  • A few suspect the speedups may reflect PyTorch choosing a suboptimal kernel for that exact shape, not fundamental superiority of the AI-generated code.
  • Others point out that beating generic framework kernels on a single fixed configuration has long been feasible.

Numerical precision, correctness, and evaluation

  • Several comments focus on the 1e-2 FP32 tolerance, arguing it effectively allows FP16-like behavior and makes FP32 comparisons misleading (see the NumPy check after this list).
  • One user reports large mean squared error (~0.056) and slower performance than PyTorch on their RTX 3060M, suggesting results are hardware- and workload-dependent.
  • There is concern that random-input testing, rather than formal verification, yields kernels that are “empirically correct” on sampled inputs but wrong in general; this is contrasted with work that proves algebraic correctness.
  • Some kernels were found initially non–numerically stable (e.g., LayerNorm) and later regenerated.
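
The tolerance critique is easy to reproduce with NumPy: FP16 rounding error on values of order 1 is about 1e-3, comfortably inside a 1e-2 acceptance threshold, so such a test cannot distinguish FP16-grade arithmetic from genuine FP32.

    import numpy as np

    vals = np.linspace(0.1, 4.0, 5, dtype=np.float32)
    roundtrip = vals.astype(np.float16).astype(np.float32)
    print(np.abs(roundtrip - vals).max())  # ~1e-3: FP16 quantization error alone

    print(np.finfo(np.float32).eps)  # ~1.2e-07
    print(np.finfo(np.float16).eps)  # ~9.8e-04 -> both far below a 1e-2 tolerance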

Novelty vs existing optimization techniques

  • Multiple commenters argue there is nothing obviously novel in the example kernels; similar gains have been achieved for years via ML-guided scheduling (e.g., Halide, TVM) and vendor libraries.
  • Others emphasize that NVIDIA/PyTorch FP32 kernels are relatively neglected and that AI may just be porting known FP16/BF16 tricks.
  • Skeptics stress that “beating heavily optimized libraries” often ignores kernel-selection heuristics and real-world constraints (alignment, stability, accuracy).

Hardware, microarchitecture, and optimality

  • Discussion on NVIDIA’s poorly documented microarchitecture: this may make AI-guided exploratory search particularly effective.
  • Counterpoint: even with perfect documentation, global optimal scheduling/register allocation is combinatorially hard; compilers don’t attempt fully optimal code due to time constraints.
  • Some note that certain operations (e.g., matrix multiply on tensor cores) are already near hardware limits, leaving limited headroom.

Implications for AI capabilities and “self-improvement”

  • One camp sees this, AlphaEvolve, and o3-based bug-finding as evidence that recent models plus automated search cross a new capability threshold.
  • Others say it’s closer to genetic programming with a strong mutation operator and a clear objective; not direct evidence of broad recursive self-improvement.

Agent methodology and parallel LLM usage

  • Commenters highlight the interesting use of many short-lived “agents” in parallel, each exploring variants with an explicit reasoning step rather than pure low-level hill climbing.
  • This is contrasted with typical “one long-lived agent” patterns; some see fan-out/fan-in task graphs as a more natural fit for LLMs, though merging results is costly and lossy.

LLMs, reasoning, and “understanding” (meta-discussion)

  • Extended debate over whether LLMs “reason” or “understand,” or merely approximate patterns well enough to pass tests.
  • Some argue behaviorally they meet practical notions of understanding; others insist that anthropomorphic language obscures real limits, especially under novel conditions or strict logical demands.

Cap: Lightweight, modern open-source CAPTCHA alternative using proof-of-work

Concept & Background

  • Cap uses client-side proof-of-work (PoW) as a “CAPTCHA alternative,” but many commenters stress it’s really a rate limiter, not a human/bot discriminator.
  • The idea predates cryptocurrencies (Hashcash is cited) and inspired Bitcoin; this is seen as a return to the original PoW-for-abuse-control concept (sketched below).
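
For readers unfamiliar with the mechanism, a generic hashcash-style round looks like the sketch below (the general idea only, not Cap's actual protocol): solving costs the client about 2**bits hash evaluations, while verification costs the server one.

    import hashlib
    import itertools
    import os

    def make_challenge(bits=16):
        return os.urandom(8).hex(), bits  # server issues a nonce plus difficulty

    def solve(nonce, bits):
        # Client grinds for a counter whose hash has `bits` leading zero bits.
        for counter in itertools.count():
            digest = hashlib.sha256(f"{nonce}:{counter}".encode()).digest()
            if int.from_bytes(digest, "big") >> (256 - bits) == 0:
                return counter

    def verify(nonce, bits, counter):
        digest = hashlib.sha256(f"{nonce}:{counter}".encode()).digest()
        return int.from_bytes(digest, "big") >> (256 - bits) == 0

    nonce, bits = make_challenge()
    print(verify(nonce, bits, solve(nonce, bits)))  # True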

Threat Model & Effectiveness

  • Intended benefit: add small per-request cost that’s negligible for humans but ruinous at scale for large crawlers, spam, or bot farms.
  • Supporters note that even small extra costs or delays can kill the economics of large scraping operations.
  • Critics argue:
    • It doesn’t stop targeted attacks or low-volume bots; it only hurts generic large-scale abuse.
    • The real cost per challenge is likely tiny (a fraction of a cent), so many categories of abuse remain profitable.
    • Attackers can use GPUs/ASICs/FPGAs to solve SHA-256 PoW far faster and cheaper than user devices, repeating crypto’s hardware-inequality problems.

PoW vs Traditional CAPTCHA

  • Several comments stress that PoW doesn’t determine “human vs bot,” so branding it as a CAPTCHA is seen as misleading.
  • For protecting single endpoints (e.g., “curl to CreatePost”), critics say this “lets all traffic through, just slower,” unlike CAPTCHAs that can outright block.
  • Some suggest simple delays or standard rate limiting might address similar abuse without any client CPU work (a generic token-bucket sketch follows this list).
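
For comparison, the “standard rate limiting” commenters have in mind is a few lines of server-side code and costs the client nothing; a generic token-bucket sketch (unrelated to Cap’s codebase):

```python
import time

class TokenBucket:
    # Classic token bucket: allow `rate` requests/sec on average, with
    # bursts up to `capacity`. Runs entirely server-side (e.g., per IP).
    def __init__(self, rate: float, capacity: float):
        self.rate, self.capacity = rate, capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)   # e.g., per-IP: 5 req/s, burst of 10
print([bucket.allow() for _ in range(12)])  # first 10 True, then throttled
```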

Energy, UX, and Accessibility

  • Concerns raised about battery drain, CO₂ impact, and “invisible mode” feeling like covert cryptomining.
  • Others argue the per-challenge energy is extremely small, dwarfed by normal browsing.
  • Accessibility critique: requires JavaScript; no provision for JS-disabled users, unlike some alternatives.

Law-Enforcement / Password-Cracking Paper Controversy

  • A linked white paper describes using PoW CAPTCHA-like systems to harness web users’ CPUs for law-enforcement password cracking.
  • Commenters find this “botnet for the feds” angle disturbing; some assume association with Cap, given the link from its site.
  • The project’s author responds that Cap does not send hashes anywhere, isn’t cracking passwords, and the paper was shared only as background; a clarification note was added.
  • Some remain uneasy, citing bundled WASM binaries and optics (logo, lack of initial disclosure), others accept the open-source code as sufficient reassurance.

Alternatives & Ecosystem

  • Other PoW CAPTCHA tools (Altcha, Anubis, Checkpoint) are discussed; some prefer them, especially for privacy or no-script support.
  • General frustration with Cloudflare CAPTCHAs motivates interest in PoW approaches.
  • Broader ideas include account systems tied to phone numbers or hardware attestation to solve Sybil problems, but these raise major privacy and usability concerns.

When will M&S take online orders again?

E‑commerce as core competency vs something to outsource

  • Some argue pre‑internet retailers shouldn’t run their own tech stacks; they should focus on merchandising and customer experience, and outsource websites, logistics software, payroll, etc.
  • Others counter that for a retailer with large online revenue (M&S, Walmart‑scale), e‑commerce is core and should be built and deeply understood in‑house, provided it’s properly staffed and funded.
  • There’s recognition that “build it ourselves with 10 engineers” is often hubris: platforms like Shopify concentrate enormous engineering and SRE effort that most retailers cannot match.

Amazon, Shopify, and white‑label platforms

  • Past experiments with Amazon‑run storefronts (M&S, Borders, Target, Waterstones) are cited as cautionary: partnering with a direct competitor proved strategically bad.
  • Shopify is seen by some as the cleaner model (no direct retail conflict), but others question whether Shopify scales to multi‑billion‑pound, highly customized operations.
  • A common lament: executives underestimate the complexity of large‑scale e‑commerce (“it’s not a garage sale”).

Outsourcing to Tata and the India debate

  • Thread notes M&S’s major IT outsourcing to Tata Consultancy Services (TCS) and speculates (unproven) that a third‑party helpdesk was the breach vector.
  • One side claims outsourcing to low‑cost providers inherently trades away quality and continuity; another calls this xenophobic and argues quality vs cost is about process and management, not nationality.
  • Counterexamples of both successful and failed Tata businesses are raised; overall impact on this incident remains unclear.

Why recovery can take months

  • Many are surprised a big retailer can’t stand up at least a minimal site in weeks (even via Shopify), but others describe:
    • Highly interconnected legacy systems (warehousing, inventory, accounting, logistics, payments, loyalty, banking products).
    • Need for full forensics and hardening; you can’t just redeploy untrusted code and data.
    • Possible ransomware scenarios where repos, backups, and failover copies are compromised.
    • Loss of institutional knowledge and chronic under‑investment in DR, automation, and tested backups.
  • Example given: the British Library, still not fully recovered a year after its own attack.

Leadership, incentives, and AI

  • Several comments blame executive short‑termism: aggressive IT cost‑cutting, heavy outsourcing, and weak attention to resilience until disaster hits.
  • Some contrast hype about “AI replacing developers/CEOs” with very basic organizational failures (backups, DR plans, staffing), arguing most AI talk is stock‑price theater rather than operational reality.

Broader context: UK tech capability

  • Some see this as part of a wider UK pattern: reliance on cheap consultancies, underpaying high‑end engineers, and rewarding financial “grift” over technical robustness.
  • Others note that parts of UK government digital services are exemplars of well‑run, accessible infrastructure, so national capacity clearly exists but is unevenly applied.

What's working for YC companies since the AI boom

YC’s AI Focus & Batch Composition

  • The notable absence of consumer products is seen by some as YC being too narrow; others say it simply reflects the macroeconomy and AI’s current stage.
  • Several commenters argue YC is heavily skewing toward AI, shaping who applies and gets in, rather than “just picking the best founders.”
  • There’s concern YC has become insular (B2B, often “B2YC”), optimizing for selling to other YC/Valley companies rather than the broader economy.

Consumer vs B2B AI

  • Multiple explanations for “0 consumer”:
    • Easier/cheaper for incumbents to bolt AI onto existing consumer products than for a new entrant to build brand + pay inference costs.
    • Consumer AI often needs huge capital to subsidize usage (like free ChatGPT), which early startups can’t match.
    • B2C norms of “free” push startups toward ad-supported or shady models, or lottery-style “hit” dynamics.
  • Others counter that consumer AI is already vibrant (e.g., search, music, multimedia apps) and may even be healthier than enterprise AI, where many projects don’t justify their spend.

AI Startup Viability & Moats

  • Pattern described: “ChatGPT but for X” gets funded, then the platform providers ship a better built-in version, erasing the startup’s wedge.
  • View 1: “AI startups” as a category are fragile; general models and incumbents quickly absorb successful ideas.
  • View 2: Moats live in vertical UX, integration, data, and deterministic workflows with AI as an assistant rather than the control loop. Document understanding / intelligent document processing (IDP) is cited as a large, enduring space where specialized players can thrive.

Metrics: Series A vs Real Traction

  • Many argue Series A count is a poor proxy for “what’s working”:
    • Post-ZIRP, more startups push for early revenue and even cash-flow positivity, delaying or skipping A rounds.
    • Some companies reportedly have multi‑million ARR on just seed money.
    • Better metrics suggested: non‑YC customer growth and churn.

Tooling, Evaluation & Infra

  • Absence of LLM evaluation/observability/tooling in the Series‑A list is seen as natural: patterns are immature and it’s hard to pick winners.
  • Confusion over what “tooling” means (infra like local model runners vs dev tools vs runtime monitoring).

Hardware & VC

  • Zero hardware in the Series‑A data resonates with hardware engineers who say traditional VC timelines and expectations don’t fit long, capital‑intensive hardware cycles.
  • Some see this as healthy: bootstrapping, strategic customers, and slower growth may be better aligned than mainstream VC.

AI Hype vs Reality

  • One camp claims YC is going all‑in on AI despite unproven business value, partly due to its stake in foundation‑model players.
  • Another counters that seed capital is supposed to underwrite exactly this kind of technology/market risk; lack of quick Series As doesn’t imply lack of long‑term economic impact.