Hacker News, Distilled

AI-powered summaries for selected HN discussions.


ClickHouse raises $350M Series C

Database business & funding round

  • Being a profitable database vendor is described as very hard: long sales cycles, big upfront investment, and a need to lock in large customers during the hype window.
  • Some see the $350M “Series C” as more like a late-stage (D/E) round; commentary that “money isn’t real anymore” reflects perceptions of inflated valuations and oversized rounds.
  • Others note that very large Series B/C rounds exist and that raising despite potential profitability can be rational to attack a huge market faster.

Open source, mission, and commercialization

  • A few commenters feel some “ClickHouse, Inc.” decisions run counter to the original project spirit and have hurt the broader OLAP ecosystem.
  • Others push back: the open-source core is still improving quickly (e.g., usability, new features) and commercializing managed storage / “build-it-for-you” pieces is seen as necessary to sustain development beyond the original Yandex use case.
  • Holding back advanced automation (like fully automatic sharding and shared storage) in the OSS version is viewed by some as a sales funnel, by others as a fair business tradeoff.

Self‑hosting vs managed ClickHouse

  • Several people report years of stable self-hosted clusters, including 50+ node setups; overall it’s considered one of the easier DBs to operate, but with important pitfalls (defaults, SSL, manual sharding).
  • Cloud offering adds closed-source SharedMergeTree over S3 with compute/storage separation and automatic scaling; attractive to teams that don’t want ops overhead.
  • Debate on cost: some argue a colo rack is cheaper than managed cloud after a year; others emphasize enterprises pay for reduced hassle.

What ClickHouse is (OLAP focus)

  • Clarified repeatedly: ClickHouse is an OLAP, columnar analytics database, not an OLTP/Postgres drop-in.
  • Best for large-scale aggregations on append-heavy data (logs, telemetry, royalties, analytics dashboards) with “online” query responses on the order of a second.
  • Internals like MergeTree, bulk inserts, heavy DELETEs, and ordering keys are central; performance tuning often depends on dataset layout, partitioning, and avoiding nullable fields.
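The performance intuition behind the columnar design can be sketched in plain Python (a toy illustration, not ClickHouse itself): an aggregation over a column-oriented layout touches only one field's values, while a row-oriented scan drags every whole row through the loop.

```python
import random
import time

# Toy illustration of columnar vs. row-oriented scans (illustrative only;
# real columnar engines add compression, vectorization, and ordering keys).
random.seed(0)
N = 200_000
rows = [{"user": i % 1000, "bytes": random.randint(1, 9999)} for i in range(N)]
col_bytes = [r["bytes"] for r in rows]          # columnar copy of one field

t0 = time.perf_counter()
row_sum = sum(r["bytes"] for r in rows)         # row-oriented scan
t1 = time.perf_counter()
col_sum = sum(col_bytes)                        # column-oriented scan
t2 = time.perf_counter()

assert row_sum == col_sum                       # same answer either way
# The column scan is typically several times faster; exact speedups vary.
```

The gap widens as rows grow more fields, which is why wide analytics tables favor columnar storage.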

Performance, joins, and memory behavior

  • Many users praise it as “insanely fast” and a night-and-day improvement over systems like TimescaleDB for large analytics workloads.
  • Others recount frequent out-of-memory issues, especially around joins and large inserts; one user reports OOMs in ClickHouse, DuckDB, and Polars on modest hardware.
  • Some describe ClickHouse as a “non-linear memory hog” that really wants ≥32GB RAM, though the memory tracker usually aborts queries rather than crashing.
  • Joins are a recurring pain point: several say naive joins can OOM even on powerful machines, and emphasize it’s a columnar analytics engine, not a general relational workhorse.
  • Counterpoints say join performance has improved significantly; with careful schema design, join ordering, and techniques like IN queries, incremental materialized views, and projections, complex workloads with many joins can succeed at scale.
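The incremental-materialized-view idea mentioned above can be sketched as maintaining a per-key aggregate at insert time, so reads never rescan the raw rows. This is a minimal Python sketch of the concept; the class and method names are illustrative, not ClickHouse API.

```python
from collections import defaultdict

# Toy incremental materialized view: update per-key totals on each insert
# instead of recomputing a GROUP BY over all rows at query time.
class SumView:
    def __init__(self):
        self.totals = defaultdict(int)

    def on_insert(self, key, value):
        # Called once per inserted row; reads then hit `totals` directly.
        self.totals[key] += value

view = SumView()
for key, value in [("a", 3), ("b", 5), ("a", 4)]:
    view.on_insert(key, value)

assert dict(view.totals) == {"a": 7, "b": 5}
```

Pre-aggregating like this is one of the techniques commenters credit for keeping join-heavy workloads within memory budgets.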

Adoption, use cases, and pricing

  • Multiple commenters report long-term, high-volume production usage (including a linked public blog from a large CDN provider), but stress that “someone must tend to it” at scale.
  • Some say you must understand internals for cost/performance; others argue that’s true for any serious DB.
  • A concern about “only 2k users” of ClickHouse Cloud is rebutted: many companies self-host, and cloud customers likely include large enterprise contracts.
  • Mention that data warehouse ACVs are often far above a few hundred dollars per month; one user cites a $450/month small cloud cluster and others note Snowflake-scale contracts as a reference point.

Sampling, correctness, and analytics philosophy

  • Debate on whether storing 900B+ analytics rows is worthwhile: some advocate sampling or Monte Carlo approximations, others argue certain use cases (e.g. payments, rare-event analytics) require full fidelity.
  • ClickHouse’s native sampling support is highlighted as a way to balance accuracy and performance when exact answers aren’t mandatory.
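The accuracy/performance tradeoff behind sampled aggregation can be shown with a toy sketch: estimate a mean from a 1% sample and compare it to the exact full-scan answer (illustrative only, not ClickHouse's SAMPLE implementation).

```python
import random
import statistics

# Toy sketch of sampled aggregation: a 1% sample gives a close estimate
# at a fraction of the scan cost; exact error depends on variance and
# sample size.
random.seed(0)
data = [random.gauss(100, 15) for _ in range(100_000)]

full_mean = statistics.fmean(data)              # exact, scans everything
sample = random.sample(data, len(data) // 100)  # 1% sample
approx_mean = statistics.fmean(sample)          # fast, approximate

assert abs(approx_mean - full_mean) < 3.0       # close, but not exact
```

For payments or rare-event analytics the residual error is disqualifying, which is the other side of the debate above.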

UX, learning curve, and frustration

  • Several users love ClickHouse’s documentation, performance focus, and low-friction replication from OLTP systems.
  • Others find the SQL dialect, operational model, and tools (e.g., ZooKeeper in some setups) unintuitive or filled with “footguns,” especially if approached with a pure Postgres/MySQL mindset.
  • One commenter, stuck with ClickHouse in production, would prefer Postgres for their scale but cannot justify a migration to prove it.

Miscellaneous

  • A lighthearted subthread critiques the wrinkled shirts in a team photo; someone involved explains they’d just pulled new swag out of a shipping box for a spontaneous shoot, chalked up to “startup life.”

Show HN: I wrote a modern Command Line Handbook

Website & Landing Page UX

  • Multiple people report the landing page is broken on mobile (text cut off, large title overflowing) across several Android browsers.
  • A contributor offers concrete CSS tweaks to fix font scaling and image overflow.
  • Several ask for a table of contents and clear sample pages; many only discovered them via comments or Gumroad, not the homepage.
  • Copy nitpicks include idioms (“hot off the press”, “in 120 pages”) and general English polishing.

Format, Distribution & Pricing

  • Some are eager to pay but dislike the PDF-only format, preferring epub for Kindles or small devices.
  • Others suggest converting via Calibre or relying on Kindle’s “Send to Kindle” / reflow.
  • There’s interest in a physical print-on-demand version; Amazon KDP is suggested as straightforward and compatible with PDF.
  • The author uses a “pay what you want” model primarily to share the work rather than maximize revenue; expectations for income are modest.

Content Accuracy & Typesetting

  • Readers report minor technical issues (e.g., regex wording, behavior of Ctrl-D, incomplete PATH example, diff-with-ls example).
  • Some debate whether certain examples are “best practice” versus good demonstrations of concepts like process substitution.
  • Typesetting issues in the PDF (examples split across pages, awkward page breaks, multi-page footers) are seen as breaking reading flow, especially on screens.

Target Audience & Pedagogical Use

  • Several educators plan to recommend or use the book for teaching basic CLI skills to beginners, especially interactive usage (history, job control) rather than just scripting.
  • Readers want clearer positioning on the homepage: is it for total beginners or intermediate bash users?
  • Suggestions include adding more explicit real-world scenarios and short notes tying examples to practical use.

Shell Philosophy & Alternatives

  • Debate over bash’s role: some argue anything beyond short scripts should move to higher-level languages (Python, Node), citing maintainability, lack of data structures, poor testing/debugging.
  • Others emphasize the need to understand shell regardless, due to legacy scripts and CI usage.
  • There’s discussion of focusing on standard tools (find, grep, make) versus newer utilities (fd, fzf, rg, Just); the book explicitly prioritizes tools available by default for portability.

Favorite CLI Concepts & Tools

  • Users share “aha” commands and concepts:
    • Core: find, grep, xargs, awk, sed, regular expressions, parameter expansion, job control.
    • Shell quality-of-life: Ctrl-R, set -o vi, set -o xtrace, lsof, process substitution.
    • Desktop tweaks: aliasing xdg-open to open, using notify-send for completion alerts.
    • Modern helpers: bat, zoxide, tig, atuin, choose, direnv, fd, fzf, gh, ripgrep.

Related Resources & Supplementary Material

  • Commenters share complementary learning resources: interactive tutorials, Linux learning sites, TUI-based practice apps, and other shell-related zines/handbooks.
  • There’s interest in exercises or small projects aligned to sections of the book to motivate less-engaged learners.

Learning C3

Nullability, References, and Contracts

  • Strong desire for null-restricted / non-null types; several commenters see plain nullable pointers plus comment-based contracts as a step backward vs modern approaches.
  • The “contracts in comments” design is controversial: some find it weird or non-idiomatic, others note it’s similar to SPARK/ACSL and like that it’s incrementally adoptable and keeps signatures clean.
  • There’s an extended debate on how non-null types interact with zero-initialization (ZII), bulk allocation, and lack of constructors. Some argue you can enforce non-null by requiring initialization at declaration; others say that breaks common C-style patterns (arrays, vectors, bulk allocs).
  • One view: strict non-nullability is more appropriate for “from scratch” languages than for a C-evolution like C3.

Loops, “foreach”, and Low-Level Transparency

  • Concern that foreach is “not C-like” and hides iteration mechanics (step size, pointer vs index).
  • Clarification: foreach is intentionally limited to simple element-wise iteration, caching length for performance; for custom strides or mutations, for remains available.
  • Some argue if you’re not processing every element, you shouldn’t use foreach anyway. Others worry any higher-level loop clashes with C3’s low-level ethos.

Positioning vs Rust, Zig, Hare, D, etc.

  • Many see Rust as more complex and more like a C++ competitor; C3 aims to be “C, but better” with lower complexity and C ABI compatibility.
  • C3 is contrasted with Zig/Odin/Hare/C2 as another C-adjacent option; Hare’s Unix-only focus and QBE backend are discussed as limiting factors.
  • Some argue D is the “best” C/C++ evolution feature-wise, but attribute its limited adoption to size/complexity, GC history, half-finished features, and “has-been” perception.

Performance, Backend, and Tooling

  • C3’s LLVM IR is designed to closely mirror Clang’s, so runtime performance is expected to match C; any difference is labeled a bug.
  • A C backend is planned to help target unusual platforms and act as an escape hatch.
  • Comparison with QBE: smaller and more elegant, but slower in the end-to-end pipeline and historically lacking debug info.

Error Handling and ? Types

  • One commenter prefers classic exceptions (try/catch/finally) over pervasive Optional/Either-style error threading.
  • C3’s error model combines try/catch-like composability with T? result types and a catch binding that implicitly unwraps on success, pitched as more ergonomic than plain result types while remaining explicit.

Switch/Case and Control Flow

  • Debate over stacked case labels vs. a combined case X, Y: syntax. Concerns about readability with many labels; C3 stays close to C but adds case ranges.
  • C3 also supports an expression-style switch that lowers to if-else, which some find convenient and others consider too high-level/confusing in a low-level language.

Macros and Generics

  • Skepticism about macros in general; concern about DSL abuse.
  • C3’s macros are described as hygienic (no leaking variables into caller scope) and closer to polymorphic static inline functions than to C preprocessor tricks.

Ecosystem, Community, and Sponsorship

  • Some question advantages over Rust given Rust’s larger ecosystem; others counter that C3’s instant access to C libraries is a strong form of “community support.”
  • Sponsorship by a web company is seen as straightforward marketing-oriented OSS support, with no deeper entanglement mentioned.

Google is using AI to censor independent websites like mine

Organic Traffic, “Rights,” and Business Models

  • Many argue organic Google traffic is not a right; publishers chose to build on a private platform that can change rules at will.
  • Others counter that Google explicitly encouraged “people-first content” and built an ecosystem where traffic substituted for payment; changing the deal after publishers invested is seen as a bait-and-switch.
  • Some say this is the classic platform-risk story: if you want a business, you must own your audience (email lists, memberships, direct visits), not depend on Google.

AI vs Human Travel Advice

  • Experiences diverge: some find LLMs already excellent for trip brainstorming and “off the beaten path” suggestions; others say they still rely on niche blogs for detailed hikes, GPX files, dynamic local info.
  • A common workflow: use AI for fast candidate lists, then use traditional search and known blogs to fact-check and deepen.
  • Several highlight serious hallucinations and wrong links from search-integrated AI, saying they no longer trust it for anything critical.

Training Data, Copyright, and Privacy

  • Strong disagreement over whether AI training on public web content is “stealing” or a legal/inevitable use of public information.
  • Some predict high‑quality open content will move behind paywalls, small magazines, or closed communities as a rational response.
  • Others note AI can pivot to alternative streams: YouTube transcripts, social platforms, private communications mediated by AI assistants, raising GDPR and privacy concerns.

Censorship, Shadowbanning, and Fairness

  • Heated debate over language: critics call Google’s behaviour “censorship” and “shadowbanning”; opponents say it’s just downranking, not blocking access.
  • One side argues consciously tuning algorithms to systematically suppress small sites is effectively censorship; the other insists censorship requires viewpoint-based suppression, not commercial ranking changes.

Google’s Incentives, Power, and Search Quality

  • Many see AI overviews as Google “closing the loop”: keeping users on Google, cutting publishers out of traffic and ad revenue.
  • Others question the logic of preferring big partners: large sites have leverage; countless indie sites don’t.
  • Broad consensus that search quality has degraded: more ads, SEO/LLM slop, weaker long‑tail results. Some report even big sites’ pages are now hard to find.
  • Suggestions range from antitrust breakup and treating search like infrastructure to open‑source, federated search projects and a cultural return to link aggregators, bookmarks, and word‑of‑mouth discovery.

A Song of “Full Self-Driving”

How Hard Is FSD & Where Are the Moats?

  • One framing: if FSD is “hard,” Waymo’s years of work and sensor stack give it a big lead; if it’s “easy,” Tesla’s approach is easily copied and offers little moat.
  • Pro‑Tesla view: the moat is a “data flywheel” – millions of cars collecting real‑world video, plus custom HW/SW and manufacturing scale.
  • Skeptical view: other automakers also have cameras and connectivity; labeling and training are nontrivial; Tesla is late and has no clear sustainable edge.

Data vs. Sensors vs. Compute

  • Some argue “the bitter lesson”: scalable compute and massive data dominate clever algorithms; Tesla’s global fleet and companies like Google (via YouTube/Street View) have key “world model” training data.
  • Others reply that data volume won’t linearly solve FSD; current techniques may hit a wall regardless of data, and richer multi‑sensor inputs (lidar, radar, HD maps) may matter more than raw video.
  • There’s pushback that unlabeled, low‑quality dashcam video is not analogous to the high‑quality text corpora used for LLMs.

Cameras‑Only vs. Lidar/Radar

  • One camp claims Tesla’s camera‑only stack is fundamentally limited, citing: repeated hardware refreshes, removed sensors, well‑publicized failures (e.g., “Looney Tunes wall”, crashes into trucks/firetrucks).
  • Another camp says this overstates things: FSD has significantly improved in the last few years; newer HW (HW4, upcoming HW5) has better vision and inference; specific failure cases have been mitigated.
  • Multiple users point out competing EVs with radar arrays and higher‑resolution cameras that deliver robust ADAS (blind spot, cross‑traffic, intersection warnings) without claiming FSD.
  • One detailed comment argues Tesla camera resolution equates to sub‑legal “visual acuity” at many distances, even on newer hardware.

Safety, Reliability & Autonomy Levels

  • First‑hand FSD users report it can handle complex highway and urban maneuvers and is very useful—but still regularly: misreads lights, ignores signs/zones, chooses bad lanes, blocks merges, or pulls into traffic.
  • Many treat it like advanced cruise control and explicitly say they would not trust it unsupervised.
  • Recent crashes prompt debates over whether FSD “randomly” drives off road vs. driver error/override; no consensus.
  • Waymo is generally acknowledged as SAE Level 4 (within geofences, with occasional tele‑operator assistance), while Tesla FSD remains Level 2 driver assist.

Musk, Hype, and Trustworthiness

  • Several comments highlight lawsuits where Musk’s lawyers argued his FSD promises were mere “puffery” no reasonable investor should rely on; this is cited as a reason not to trust his timelines or claims.
  • Others think criticism of Musk and the article’s political framing overshadow real technical progress visible in extensive user videos.
  • There’s debate over how much credit Musk deserves versus execution by engineering teams, and whether his insistence on cameras‑only has harmed Tesla’s trajectory.

Competition, Scaling & China

  • Some argue Waymo’s tech lead is offset by weak manufacturing and unclear path to millions of vehicles; they may end up a software/licensing provider.
  • Chinese automakers are mentioned as already fielding lidar‑equipped FSD‑like systems at low cost and operating at substantial scale, though others think their tech still lags Waymo and relies heavily on human oversight.
  • Ride‑hailing drivers and low‑cost human labor are seen as another competitive pressure on robotaxi economics.

Miscellaneous Technical & Factual Points

  • Multiple corrections note that lane‑departure, adaptive cruise, and emergency braking often use cameras/radar today; lidar is not yet ubiquitous in mainstream ADAS, contrary to a line in the article.
  • Some discussion touches on Google Street View: use of lidar/photogrammetry, prior Wi‑Fi data collection lawsuits, and the likelihood that Google captures extensive environmental metadata.

Gurus of 90s Web Design: Zeldman, Siegel, Nielsen

Nostalgia and Early Learning Culture

  • Many commenters began their careers with these books and sites, learning by “View Source” and experimenting on Geocities, personal sites, and early blogs.
  • Other contemporaneous influences mentioned: “Web Pages That Suck,” Flash books, CSS design galleries, and early design blogs/communities.
  • There’s strong nostalgia for a time when the web felt experimental, personal, and fun, even when sites were ugly or hard to use.

Print Aesthetics vs Native Web Design

  • Some argue early “gurus” largely transplanted print and desktop-publishing (DTP) thinking onto the web, overloading pages with dense layouts, colors, and information.
  • Others counter that these designers were pushing the limits of a very constrained medium and toolset, and that experimentation was necessary before standards and better tools existed.
  • Game UI and demoscene design are cited as better models for simple, screen-native interaction.

Usability, Minimalism, and the Usability Expert’s Legacy

  • The usability advocate in the trio is widely credited with popularizing empirical user testing, discount usability, and concepts like Fitts’ Law, personas, and small-sample testing.
  • Supporters say this focus on speed, clarity, and minimalism helped kill Flash intro pages and mystery-meat navigation.
  • Critics describe his work as rigid, aesthetically indifferent, and sometimes ironically unusable (book/page layouts), but still useful as ammunition against bad client demands.
  • Several note that many of his “best practices” clash with today’s conversion-driven dark patterns and ad-heavy layouts.

Flash, Creativity, and Lost Possibilities

  • Flash is remembered fondly as a uniquely approachable, powerful environment for animation and interaction; many careers started there.
  • Its strengths (consistent cross-platform visuals, vector graphics, simple scripting) are seen as still unmatched in ease of authoring, even though it was overused for ads and killed by mobile constraints and platform decisions.

Evolution of the Web: From Tables to CSS to Homogenization

  • Commenters recall invisible tables, spacer GIFs, browser-specific CSS hacks, and “web-safe” color palettes as everyday survival techniques.
  • Google’s radically simple homepage is seen as a pivotal moment showing the power of minimalism.
  • Some feel modern CSS/HTML are vastly better yet underused creatively; the visual web is now standardized and “sterile,” with far fewer surprising designs.

Climate-Change Detour

  • One of the 90s authors’ current climate-skeptic website is briefly discussed; commenters deride the content while noting the irony of its poor design.

Run a C# file directly using dotnet run app.cs

Motivation & Developer Experience

  • Many see this as overdue but very welcome: it lowers friction for quick utilities, experimentation, CI/CD scripts, and “one-off” tools without scaffolding a project.
  • Top-level statements are widely viewed as having been added largely to enable this scripting-like workflow.
  • Some argue this is mainly beneficial inside existing .NET shops (replacing PowerShell, bash, or small Python tools), less as a way to win over non-.NET ecosystems.

Relation to Existing Tools & History

  • Commenters list a long history of similar approaches: CSI, .NET Interactive, csx/dotnet-script, LINQPad, cs-script, Mono’s interpreter, and third-party script runners for other languages (JBang for Java, Kotlin scripts, Rust/C/Go wrappers).
  • Several feel Microsoft’s announcement downplays prior community efforts; the blog was later updated to acknowledge them.
  • LINQPad is still valued for its UI and data visualization; this feature is seen as complementary rather than a full replacement.

Shebang, Directives, and Language Design

  • Shebang support is appreciated for making C# behave like a “real” shell scripting option (e.g., #!/usr/bin/env -S dotnet run).
  • New #:package/directive syntax for dependencies sparks debate: some want reuse of the existing #r style from F#/.NET Interactive; maintainers argue for clearer, non-opaque directives and no new “dialect.”
  • There is tension around .NET feeling like “C# Runtime” rather than a neutral CLR, and around perceived neglect of F# and VB.

Startup Performance & Implementation Concerns

  • Multiple measurements show dotnet run app.cs has noticeable startup overhead (hundreds of ms to >1s), even with caching; some say this makes it unsuitable for short-lived CLI tools.
  • Others note this is an early preview, with explicit plans to optimize, and that running the compiled binary directly is already much faster.
  • Comparisons are made with Python, Swift, Ruby, Perl, and Crystal; supporters say performance can be addressed via NativeAOT or precompilation, skeptics cite longstanding cold-start issues in .NET.

Ecosystem, Tooling, and Adoption

  • Some dislike csproj/MSBuild complexity and target framework confusion, though others argue modern SDK-style projects are much simpler.
  • There’s discussion of using C# scripting as a PowerShell replacement, but also strong defenses of PowerShell’s strengths and ecosystem.
  • Broader adoption is seen as constrained more by Microsoft’s ecosystem decisions and “stigma” than by scripting ergonomics alone.

US Trade Court finds Trump tariffs illegal

Ruling and Legal Basis

  • The Court of International Trade held that the “Liberation Day” global tariffs imposed under the International Emergency Economic Powers Act (IEEPA) exceed the powers Congress delegated.
  • Judges emphasized IEEPA requires a genuine “unusual and extraordinary threat” and that powers “may not be exercised for any other purpose”; a long‑running trade deficit or generic “imbalance” doesn’t qualify.
  • The court also leaned on non‑delegation and “major questions” reasoning: Congress cannot hand the president essentially unlimited taxing power via vague emergency language.
  • Other statutes (e.g., Section 122 of the 1974 Trade Act) already give narrowly capped, time‑limited tariff tools for balance‑of‑payments issues, implying Congress did not intend broad emergency tariff authority here.

Executive Power, War Powers, and Emergencies

  • Commenters debate parallels to war powers: legally Congress declares war, but in practice presidents start conflicts and rely on the War Powers Resolution to act first, seek approval later.
  • Many see the tariff move as part of a broader “unitary executive” project: using emergencies to bypass Congress on taxes, trade, even court orders.
  • Some argue this is how checks and balances should work; others fear the president will simply ignore rulings, pardoning or protecting subordinates who comply.

Congress, Partisanship, and Delegated Trade Authority

  • Tariff power is constitutionally assigned to Congress, but over decades it has delegated large slices to the executive (IEEPA, Section 232, Section 301, Tariff Act mechanisms).
  • Posters note Republicans control both chambers but largely avoid voting explicit tariff schedules: they fear internal splits, local economic damage, and electoral blowback, preferring to let the White House “own” the policy.
  • A House bill provision limiting enforcement of injunctions is flagged as an attempted two‑branch “coup” against the judiciary, though some doubt its practical impact.

Economic and Practical Effects

  • Many small and mid‑sized importers have paid steep duties (examples: electronics, 3D printers, wedding dresses), sometimes over 100%, forcing price hikes, margin compression, or inventory stalling.
  • Questions arise about refunds if tariffs are ultimately ruled unlawful; responses mention customs protest procedures, Court of International Trade jurisdiction, and limited but real avenues to claw back illegal exactions.
  • Even if tariffs end, prices are “sticky”: existing high‑cost inventory must clear, and firms may not quickly roll back consumer prices—especially amid ongoing policy uncertainty.

Policy Merits and Broader Democratic Concerns

  • Supporters see tariffs as a necessary correction for offshoring, strategic dependence, and hollowed‑out middle‑class jobs, even if blunt.
  • Critics call the measures regressive consumption taxes, poorly targeted, WTO‑provocative, and often untethered from any coherent foreign‑policy objective.
  • Thread-wide anxiety centers on whether court constraints still matter if the executive simply disregards them, with some arguing the U.S. is drifting toward de facto autocracy and others insisting the judiciary remains a crucial (if slow) backstop.

Long live American Science and Surplus

Nostalgia and Personal Impact

  • Many commenters describe AS&S as a formative childhood influence: browsing the catalog, visiting stores in Milwaukee/Chicago/Geneva, and using parts for science fair projects that later led to technical careers.
  • Staff are remembered as unusually patient and encouraging with kids, helping size motors, explain safety, and refine project ideas.
  • The in-store experience (bins of parts, weird surplus, jokey hand‑written labels, sodium/potassium on display) is portrayed as a “candy store for tinkerers” and a key gateway to DIY and hacker culture.

Similar Stores and a Shrinking Ecosystem

  • Numerous analogs are cited: Ax-Man (MN), Skycraft (FL), Scrap Exchange (NC), Reuseum (ID), Electronic Parts Outlet (TX), Jameco (CA), various surplus and electronics shops in Toronto, Utah, SoCal, etc.
  • Many have already disappeared (Weird Stuff, Halted, HSC, Edmund Scientific, Active Surplus’s original location, Fair Radio, AllElectronics), reinforcing a sense of loss.

Why Surplus and Electronics Stores Are Dying

  • Online marketplaces and overseas manufacturing make parts dramatically cheaper; local stores can’t compete on price or breadth of SKUs.
  • Inventory often becomes obsolete (e.g., thumbwheels, tube sockets, BASIC Stamps), tying up capital.
  • Real‑estate pressures, suburbanization, and the offshoring of manufacturing reduce both surplus supply and viable locations.
  • Changes in tax rules and surplus channels (moving from specialty dealers to Amazon/eBay) further cut off their traditional sources.

Debate Over AS&S’s Value and GoFundMe

  • Many happily donate or plan “post‑fire purchases,” arguing the store sustains curiosity, STEM interest, and a unique weird/whimsical culture.
  • Some dislike GoFundMe for a for‑profit business, suggesting share sales or community ownership instead.
  • A minority sees current inventory as mostly novelty “store‑to‑landfill junk” not worth “saving”; others counter that even oddball items and decor have educational and cultural value.

Changing Nature of Surplus and Access

  • Several note AS&S feels less like hardcore surplus and more like kitschy toys plus a shrinking electronics section, likely due to reduced industrial surplus supply and market shifts.
  • International fans lament lack of visible overseas shipping; one suggests contacting the store directly, citing typical small‑business constraints on integrating shipping APIs.

What does “Undecidable” mean, anyway

Practical value of theory and formalism

  • Several commenters say studying automata, grammars, Turing machines, and type theory significantly improved their software engineering, especially for:
    • Seeing through abstractions (regex, parsers, CPUs) as simple underlying mechanisms.
    • Designing domain‑specific languages (DSLs) with explicit axioms and rules.
    • Doing program analysis (compilers, security), where undecidability appears routinely.
  • Others report the opposite: theory of computation never felt relevant in day‑to‑day SWE, whereas concrete CPU walk‑throughs were helpful.

DSLs, DDD, and types

  • One thread connects formalism to Domain‑Driven Design: define a precise glossary, find bounded contexts, ensure internal consistency, and mirror business change costs in code.
  • DSLs are suggested as a way to encode domain axioms and relations for clarity and onboarding.
  • Type theory is framed as turning “what data is” into structured classes and relations so programs themselves become data for a type system to reason about.

Turing machines vs real computers

  • Long back‑and‑forth on whether Turing machines are “theoretical mumbo jumbo” or essential foundations.
  • Some argue modern CPUs (or RAM machines, C abstract machine) are closer pedagogical models and easier to understand.
  • Others insist TM, lambda calculus, FSMs, CFGs, etc. are still the right foundation for understanding what computation fundamentally is, even if not for teaching low‑level programming.

Clarifying undecidability and the halting problem

  • Multiple commenters stress: undecidable ≠ “no algorithm for any instance”; it means no single algorithm correctly decides all instances.
  • Emphasis that undecidability is about guaranteed termination on all inputs, not mere difficulty or huge runtimes.
  • Several point out common confusions:
    • Finite‑state or finite‑memory systems are, in principle, decidable; halting undecidability requires unbounded memory.
    • A halting oracle could be used to build general theorem provers or classify proofs for many formal systems, but not for every conceivable theory.
    • Mixing undecidability (CS) with “really hard but decidable” problems like bcrypt cracking or chess is misleading.
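The "no single algorithm decides all instances" point is the classic diagonal argument, which can be sketched in code. The `halts` stub below is an arbitrary stand-in, not a real decider; the point is that no total, always-correct version can exist.

```python
# Sketch of the halting-problem diagonal argument. Assume a total,
# always-correct halts() existed; then build a program it must answer
# wrongly about. (The stub's return value is arbitrary.)
def halts(f):
    # Hypothetical decider for "does f() halt?" — stubbed, not real.
    return True

def paradox():
    # Loops forever exactly when halts() claims it halts.
    if halts(paradox):
        while True:
            pass

# Whichever answer halts(paradox) gives, it is wrong about paradox():
#   True  -> paradox() loops forever, so "halts" was wrong;
#   False -> paradox() returns immediately, so "doesn't halt" was wrong.
# This rules out one decider for all programs; deciders for restricted
# classes (finite-state systems, bounded loops) can still exist.
```

Note the argument needs unbounded memory in the machine model, matching the point above about finite-memory systems being decidable in principle.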

Logic, constructivism, and independence

  • A detailed subthread distinguishes:
    • CS “undecidable” properties of strings/programs.
    • Logical “undecidable/independent” propositions (e.g., continuum hypothesis, axiom of choice) relative to a theory like ZFC.
  • Constructive vs classical mathematics: in constructive settings, decidability has real force; in classical math, excluded middle effectively treats every proposition as decidable “in principle,” even when no computation exists.
  • Some note that many undergrads never see this logical background, making decidability feel like “black magic.”

Uncountability, diagonalization, and function space

  • Several comments relate undecidability to Cantor’s diagonalization:
    • There are countably many programs but uncountably many functions string → bool, so “most” such functions are uncomputable.
    • Halting predicates and similar objects live in this larger, non‑program‑representable space.
  • Busy beaver and small universal/independent Turing machines are mentioned as striking examples at this boundary.
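The counting argument above can be sketched directly: given any enumeration of programs viewed as functions string → bool, diagonalization constructs a function that disagrees with the i‑th program on the i‑th input, so it cannot appear anywhere in the enumeration. A toy Python illustration over a finite prefix (names are illustrative):

```python
def diagonal(programs, inputs):
    """Tabulate the function that flips program i's answer on input i.

    By construction the result differs from every enumerated program,
    so it lies outside the enumeration -- the same move shows the
    halting predicate is computed by no program.
    """
    return {x: not p(x) for p, x in zip(programs, inputs)}

progs = [lambda s: True, lambda s: s.startswith("a"), lambda s: len(s) > 2]
ins = ["p0", "p1", "p2"]
d = diagonal(progs, ins)
assert all(d[x] != p(x) for p, x in zip(progs, ins))  # disagrees on the whole diagonal
```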

Intelligence, AI, and limits

  • Some speculate whether “intelligence” might eventually get a formal stratification similar to computation (complexity classes, hierarchies).
  • Discussion on whether there’s an upper bound on intelligence and whether self‑improving AI singularity ideas might run into limits analogous to incompleteness or undecidability.
  • Others caution that “intelligence” likely isn’t totally ordered and is multidimensional (different abilities, approximate reasoning).

Pedagogy and culture

  • Observations that many CS students treat theory courses as hurdles rather than tools, then discard them, leading to poor intuition about what’s possible or impossible.
  • A few instructors ask how to better motivate interest in theory; some answers point back to showing its role in requirements negotiation (recognizing inherently undecidable specs) and in building robust abstractions.

Prohibition and ice cream in the US Navy

Alcohol, Prohibition, and the Turn to Sweets

  • Several comments argue that when alcohol was restricted in the US, many people shifted their “hedonic drive” to sweets, helping fuel the explosion of processed candy and “junk” confections.
  • Others tie this to the idea that humans (and even pets) are wired to consume scarce high-calorie rewards immediately, leading to modern overconsumption in an environment of abundance.
  • Declines in smoking are linked in the discussion to rising caloric intake and obesity, with nicotine’s appetite-suppressant role repeatedly noted.
  • A referenced framework describes pleasures as substitutable, refinable, and blendable over time (e.g., from fermented fruits to cocktails and sugary mixed drinks).
  • One commenter wonders if modern weed use in college towns is now displacing bar culture.

Ice Cream as Naval Morale and Alcohol Substitute

  • Ice cream is widely praised as an ideal shipboard treat, especially in hot, non–air-conditioned environments.
  • The Lexington “eat all the ice cream before abandoning ship” story is highlighted as an emblem of its morale value.
  • Some note the contemporary US Navy no longer maintains the WWII-style ice cream culture; availability is spotty and often limited to ship stores.
  • Others argue ice cream would have spread in the Navy regardless of alcohol bans simply because it is popular and cooling.

Health Debates Around Ice Cream and Sugar

  • One thread claims ice cream’s health profile may be closer to yogurt than commonly assumed; others push back, stressing its high sugar and calorie content.
  • A cited meta-analysis suggests sugar in beverages is particularly associated with type 2 diabetes risk, while sugar in solid foods appears less harmful, though not necessarily “healthy.”
  • The role of sugar form (eaten vs drunk), digestion speed, fat “matrix,” and fiber (in fruit) is debated.
  • Some commenters with diabetes or lactose intolerance describe avoiding or heavily modifying ice cream consumption; others openly eat it for pleasure, not health.

Prohibition, Breweries, and Logistics

  • Prohibition is blamed for wiping out many immigrant-founded breweries and smaller wine producers, paving the way for postwar consolidation into a few industrial giants and a polarized market (mass-market vs microbrew).
  • Others argue consolidation would have occurred under capitalism anyway, citing distribution laws, later M&A waves, and similar structures in countries without Prohibition.
  • Historical logistics: farmers distilled surplus grain into spirits for easier, more stable transport; naval “torpedo juice” and medicinal-alcohol exceptions are noted as ways alcohol persisted.

Modern Military Drinking Cultures and Policy

  • Official US Navy policy allows limited “beer days” (two beers under strict conditions after extended time at sea), but reports vary on how often this happens.
  • Comparisons: the Royal Navy historically had rum rations and, more recently, daily beer allocations; some accounts describe widespread past use of both alcohol and tranquilizers.
  • An Australian Navy perspective notes alcohol availability aboard and a pay structure that varies more strongly by job role than in the US, prompting a long subthread on US rank-based pay plus selective bonuses.

Alcoholism, Discipline, and Culture Change

  • One detailed account from early-2000s US Navy service describes pervasive extreme drinking, including drunk nuclear operators, tolerated DUIs, and senior leaders enabling heavy consumption to keep it “contained.”
  • Another sailor describes a far stricter environment: no one arriving at work drunk, heavy punishment for alcohol incidents, and long liberty restrictions after serious mishaps.
  • A submarine veteran says the earlier alcohol-heavy culture was real but claims later force-reduction boards and harsher career consequences for alcohol issues largely stamped it out, at the cost of severe manning shortfalls, especially in forward-deployed fleets.

Ice Cream, Logistics, and Perceived Power

  • A possibly apocryphal story recounts Axis officers recognizing inevitable defeat upon seeing US Navy ice cream barges while their own troops starved.
  • Similar anecdotes mention POWs recognizing Allied strength through plentiful food and small luxuries.
  • Commenters connect these stories to the idea that battles are fundamentally logistics operations: when one side can deliver not just bullets but also treats, its underlying capacity is overwhelming.

DeepSeek R1-0528

Running R1-0528 Locally: Hardware & Performance

  • Full 671B/685B model is widely seen as impractical for “average” users.
  • Rough home-level setups discussed:
    • ~768 GB DDR4/DDR5 RAM dual-socket server, CPU-only or mixed CPU+GPU, achieving ~1–1.5 tokens/s on 4-bit quantizations.
    • Mac M3 Ultra with 512 GB RAM or multi-GPU rigs totaling ~500 GB VRAM for higher-speed inference.
    • Some note that with huge swap you can technically run it on almost any PC, but at “one token every 10 minutes”.
  • Quantized/distilled variants (4-bit, 1.58-bit dynamic) can run on high-end consumer GPUs or large-RAM desktops, with users reporting 1–3 tokens/s but very strong reasoning.

Cloud Access, Cost, and Privacy

  • Many suggest using hosted versions (OpenRouter, EC2, vast.ai, Bedrock) instead of buying $5k–$10k hardware.
  • A single H100 is insufficient for full-precision R1; estimates range from 6–8 GPUs to large multi-node setups.
  • Debate over “free” access via OpenRouter/Bittensor:
    • One side: prompts and usage data are valuable and likely monetized or re-sold.
    • Other side: for non-sensitive tasks (e.g., summarizing public content), the tradeoff is acceptable.

Model Quality, Info, and Benchmarks

  • Frustration that there’s no detailed model card, training details, or official benchmarks yet.
  • Some like the low-drama “quiet drop” style; others compare it to earlier Mistral torrent-era releases.
  • Early third-party signals (LiveCodeBench, Reddit tables) suggest parity with OpenAI’s o1/o4-mini–class models, but details and context are unclear.
  • Broader debate about benchmarks:
    • Many think popular leaderboards are increasingly “overfitted” and unreliable.
    • Preference expressed for live, contamination-resistant, or human-arena-style evaluations, plus “vibe checks.”

Open Weights vs Open Source

  • Strong argument that this is “open weights”, not open source:
    • Weights are downloadable and MIT-licensed, but training data and full pipeline are not provided.
    • Several analogies: weights as binaries, datasets/pipelines as true “source”.
  • Some argue for a multi-dimensional “openness score” (code, data, weights, license, etc.) instead of a binary label.
  • Training-data disclosure is seen as legally and practically difficult, especially given likely copyrighted and scraped content.

Platforms, Quantization & Ecosystem

  • OpenRouter already serves R1-0528 through multiple providers; many note cost roughly half of certain OpenAI offerings for similar capability.
  • Groq is discussed: extremely fast but limited model selection; hosting R1-size models would require thousands of their chips.
  • Community tools:
    • Dynamic 1–1.6 bit quantizations reduce footprint from ~700 GB to ~185 GB, with tricks to offload MoE layers to CPU RAM while keeping core on <24 GB VRAM.
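The footprint numbers above can be sanity‑checked with a uniform‑bit‑width approximation (a deliberate simplification: dynamic quants keep sensitive layers at higher precision, which is why the ~1.6‑bit build lands near 185 GB rather than at the uniform lower bound):

```python
def weight_footprint_gb(params, bits):
    """Approximate storage for `params` weights at a uniform bit width."""
    return params * bits / 8 / 1e9

P = 671e9  # R1 parameter count
print(round(weight_footprint_gb(P, 8)))     # native FP8: 671 GB
print(round(weight_footprint_gb(P, 4)))     # uniform 4-bit: 336 GB
print(round(weight_footprint_gb(P, 1.58)))  # uniform 1.58-bit floor: 133 GB
```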

Motivations for Local LLMs & Use Cases

  • Reasons to run locally despite pain:
    • Data privacy and regulatory needs (law, medical, finance, internal docs).
    • Very cheap high-volume or always-on workloads vs API billing.
    • Latency-sensitive coding autocomplete.
  • Concrete examples shared:
    • Trading-volume signal analyzer summarizing news locally.
    • Document management (auto titling/tagging) and structured extraction.
    • Coding assistants using smaller DeepSeek/Qwen-based models for completion.

Market & Narrative

  • Some speculate about timing with Nvidia earnings and hedge-fund backing; others question whether release date materially affects markets.
  • Discussion that DeepSeek both relies on Nvidia hardware and may simultaneously reduce perceived need for massive, ultra-expensive GPU clusters, shifting procurement strategies and geopolitics (e.g., interest in Huawei GPUs).

Show HN: Every problem and solution in Beyond Cracking the Coding Interview

Why the problems are free / goals of the project

  • Creators say the main goals are:
    • Get people to read the book (large portions are free).
    • Drive usage of interviewing.io.
  • They argue practice problems themselves aren’t a competitive advantage; there are already many free ones.
  • They dislike paywalls and see high‑quality free content as the best marketing to engineers.

Perceived value of the book and platform

  • Several commenters praise the book for:
    • Practical guidance on resumes, outreach, and breaking into companies.
    • A more structured way of thinking about problems (e.g., “boundary thinking,” triggers).
  • The AI interviewer is viewed as useful for simulating real interviews, though some users would prefer an option to just submit code and see if it passes.

Debate: usefulness and fairness of LeetCode-style interviews

  • Pro‑side:
    • Coding tests are seen as essential to filter out candidates who simply cannot code, even with strong‑looking resumes.
    • “Easy/medium” questions are defended as checking basic competence and foundational CS knowledge (arrays vs linked lists, complexity, etc.).
    • Some claim these interviews are effective in aggregate, citing the success of large tech firms and arguing they have low false‑positive rates.
    • Others say LeetCode‑type skills apply more than critics admit, especially beyond simple CRUD work.
  • Critical side:
    • Many argue real jobs rarely need “fancy” algorithms; production‑quality code, design, and communication matter far more.
    • They see an arms race: as candidates train, companies raise difficulty, pushing interviews toward memorization and grind.
    • Strong concern about live‑coding anxiety and performance under observation, especially for senior/principal roles where architecture and leadership are more relevant.
    • Suggestions include simpler tasks, collaborative problem‑solving, pair programming, PR/code review, or small take‑homes, with candidates choosing the format.

Are LeetCode-style interviews dying?

  • One view: signal is degrading due to AI and cheating; these puzzles may fade.
  • Counterview (including from the author): companies are conservative; DS&A interviews aren’t going away, though verbatim LeetCode questions should. The focus should shift toward teaching and evaluating how candidates think.

Software vs other disciplines, credentials, and “grind”

  • Some compare software unfavorably to other engineering fields that have licensing bodies and standardized credentials, arguing this forces companies to over‑screen.
  • Others note that most engineering disciplines can also be self‑taught; what differs is tooling cost and mentoring pathways.
  • There is disagreement over whether willingness to “grind” LeetCode is an important, job‑relevant signal or an arbitrary hoop.

Interview as performance; analogies to other fields

  • Commenters compare live coding to:
    • Auditions for actors, “staging” for chefs, hands‑on tests for trades, and case studies for managers/analysts.
  • Others argue the best test of engineering is real work, not high‑pressure performance in an artificial setting; the NFL combine is used as an analogy for imperfect proxies.

Meta: Show HN etiquette and tone

  • A substantial subthread debates whether harsh criticism of the premise (technical interview prep itself) is appropriate in a Show HN.
  • Participants cite HN guidelines: avoid fulmination, shallow dismissals, and generic tangents; be substantive, measured, and civil when critiquing someone’s work.

Other points

  • Privacy: the site uses Clearbit to enrich emails with names, but interviews are anonymous unless both sides opt in.
  • A user in India hits a country restriction; the team calls this unintended and routes them to support.
  • One commenter notes that “interesting, contained” problems are a finite personal resource: once you’ve seen a solution, you can’t unsee it, so you lose the chance to solve it fresh in the future.

Revenge of the Chickenized Reverse-Centaurs

Chickenization and Monopsony Power

  • Commenters focus on the “chickenization” model: nominally independent farmers locked into a single buyer that dictates inputs, processes, and standards while unilaterally setting pay at near-subsistence levels.
  • This is widely described as a monopsony, sometimes “borderline slavery,” especially because farmers invest in highly specific assets (coops, equipment) that have no alternative use.

Capitalism, Regulation, and Market Design

  • Some see this as the natural outcome of under‑regulated capitalism; others refine that to “consumer‑only regulation” (strict food safety rules → few processors, but little regulation of how they treat suppliers).
  • There’s a debate over whether capitalism is inherently exploitative (surplus labor theory) versus claims that surplus labor theory is “unscientific bunk.”
  • Others argue these structures require regulatory capture and antitrust failure; a nationwide monopsony in such a simple product is seen by some as implausible without state-enabled barriers.

Why Farmers Don’t Just Exit or Compete

  • “Why don’t they do something else?” gets answered with: lack of alternatives in rural areas, debt overhang, sunk investment, need to keep income flowing, and government program constraints.
  • Suggestions like “just build your own processing and sell direct” run into food-safety regulation costs, capital requirements, distribution power, big buyers’ willingness to dump prices, and blackballing by dominant processors.
  • Co-ops and artisanal/local butchery exist but only work at small, high-margin scales.

Unions, Law, and Collective Action

  • Several comments argue US law structurally cripples union power (bans on sectoral bargaining, secondary boycotts/strikes, broad strike replacement, cooling-off periods).
  • Others note unions historically gained rights by defying even harsher laws and repression.
  • There’s disagreement over whether weakened unions are mainly due to law, to offshoring/global competition, or to past union excess.

Consumer Welfare vs Worker Welfare

  • A “customers will vote with their wallets” defense of non-union models is attacked as naïve: markets get stuck in bad equilibria, and people and firms are not fully rational.
  • Some argue ethics and citizenship, not just customer surplus, must shape labor rules.

AI, Gig Work, and the Future of Labor

  • Many see gig platforms and algorithmic management as a direct continuation of chickenization, potentially leading to “techno-feudal” lock-in.
  • Others note that in some sectors (post‑COVID restaurants, trades) labor scarcity has raised wages, suggesting dynamics are uneven.
  • There’s a strong pushback on the term “unskilled labor”: physical and service work is often highly skilled but oversupplied and undervalued.

Comparisons and References

  • EU is assumed to be better on farmer/animal welfare, but one comment notes EU poultry dumping has harmed African farmers.
  • Marshall Brain’s novella Manna is cited as an eerily prescient portrayal of algorithmic labor control with an implausibly optimistic ending.

Show HN: I rewrote my Mac Electron app in Rust

Motivation and Technology Choices

  • Original Electron app worked but was ~1 GB and heavy to maintain/optimize.
  • Rewrite uses Tauri + Rust backend with a web UI (Angular/React), chosen for:
    • Cross‑platform ambitions (Windows, later Linux) rather than Mac‑only Swift/SwiftUI.
    • Familiarity with web UI libraries.
    • Smaller bundles and better performance than Electron.
  • Some commenters argue the headline overemphasizes “Rust” since UI is still HTML/JS; the main binary size win comes from using system webviews instead of bundling Chromium.

Tauri vs Electron Experiences

  • Tauri praised for:
    • Much smaller binaries (tens of MB vs hundreds).
    • Good Rust integration and a pleasant backend dev experience.
  • Major complaints:
    • System webviews (Safari/WKWebView, Edge/WebView2, WebKitGTK) behave inconsistently; complex apps hit serious rendering and API differences, particularly on Linux (WebKitGTK bugs, missing WebRTC, performance issues).
    • OS/browser updates can break UIs without app updates.
    • Migration from Tauri 1→2 described as “nightmare” by some: multi‑repo, Linux crashes, poor docs.
  • Electron defended for:
    • Single, locked Chromium version → consistent rendering and WebGPU/Web APIs across platforms.
    • Mature ecosystem (Electron Forge, update tooling).
    • For many use cases, extra 100–200 MB disk usage is seen as acceptable; RAM usage and multiple Electron apps remain concerns.

ML Inference and Indexing Stack

  • CLIP runs via ONNX Runtime using the ort crate; the main hurdle was bundling and code signing.
  • Indexing speedups attributed to:
    • Rust implementation.
    • Scene detection to reduce frames.
    • ffmpeg GPU flags and batching embeddings.
  • Other Rust ML stacks discussed: Candle, Burn, tch, rust‑bert; tradeoffs in performance, abstraction level, and portability.

Vector Search and Storage

  • Initial choice: embedded Redis with vector search modules; gave good similarity results but caused bundling/packaging pain.
  • SQLite + vector extensions (early VSS) initially produced worse results; unclear if due to configuration.
  • Community recommends lancedb, usearch, simsimd, newer SQLite extensions (e.g. sqlite‑vec). OP later reports successfully replacing Redis with sqlite‑vec.

Product and UX Feedback

  • No trial: the “Try” CTA leads straight to Stripe checkout; many request a time‑limited or feature‑limited demo, not just a video.
  • Price ($99, one year of updates) seen by some as studio/creator‑tier rather than mass‑consumer.
  • Current version indexes/searches images and videos; text/PDF search and RAW support are planned.
  • Some criticism of marketing (“Trusted by professionals”, stock‑looking testimonials), refund policy (no refunds), and VAT handling.

Japan Post launches 'digital address' system

Why Japan Is Seen as a Good Fit

  • Many commenters note that Japanese physical addressing is unusually hard for humans and software:
    • Areas (chome → block → building) numbered by build order, non-contiguous, often no street names.
    • Multiple buildings can share similar numeric sub-addresses; building names matter and are messy.
    • Historically, people relied on local knowledge, paper maps, and later GPS; delivery drivers still struggle.
  • Online forms are especially painful: inconsistent fields, full‑width vs half‑width characters, kanji vs kana vs romaji, varying expectations for dashes and symbols. Browser autocomplete often fails badly.
  • Comparisons: Bulgaria (block-based), Ireland’s Eircode, US ZIP+4(+2), Netherlands/UK postcodes, plus codes, what3words; Japan is described as an especially strong candidate for an abstraction layer.
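One concrete fix for the full‑width vs half‑width mess mentioned above: Unicode NFKC normalization folds full‑width Latin letters and digits to ASCII, and half‑width katakana to full‑width. It covers the most common form mismatches, though not every dash variant (e.g. U+2212 MINUS SIGN is left untouched):

```python
import unicodedata

def normalize_width(s: str) -> str:
    # NFKC maps full-width Latin letters/digits to ASCII and
    # half-width katakana to their full-width forms.
    return unicodedata.normalize("NFKC", s)

assert normalize_width("１０３－００２７") == "103-0027"  # full-width postal code
assert normalize_width("ﾄｳｷｮｳ") == "トウキョウ"           # half-width kana folded
```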

What the Digital Address Actually Is

  • It’s generally interpreted as a short, stable, alphanumeric identifier that expands to a full physical address on participating websites — more like a DNS name or URL shortener than a new postal code.
  • Current design:
    • Users register via Japan Post; get a 7‑character code (Latin alphanumerics).
    • Entering the code in an e‑commerce form fetches and displays the full address for confirmation.
    • The code can stay the same across moves if the user updates their address with Japan Post.
    • Codes can be deleted and reissued; system rate-limits lookups; address data is separated from other personal data.
  • Commenters emphasize this mostly simplifies input and address changes, not the physical routing process (which still uses the resolved address).

Convenience vs. Privacy and Security

  • Enthusiastic views:
    • Dramatically easier address entry in Japan’s chaotic form ecosystem.
    • Single point to update after a move instead of changing dozens of merchant records.
    • Potential to reduce errors, misdeliveries, and to enable “follow-me” deliveries for long lead-time items.
  • Privacy concerns:
    • A stable identifier that follows a person/household can become a stalking or tracking vector if widely recorded and resolvable.
    • Japan Post itself notes that anyone who learns a code may be able to determine the address, and random guessing might reveal some addresses.
  • Some propose alternative designs:
    • True intermediary model where merchants never see the address, only the code; carriers resolve it at dispatch time.
    • OAuth-style API where users grant and revoke address access per service.
    • Throwaway or multiple codes (home, work, temporary) to combat spam and limit linkage.
  • Skeptics argue the current model pushes complexity and risk onto everyone (sites must handle revocations; users must remember to rotate), while not fully solving spam or privacy.
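The guessing risk can be roughly quantified. Assuming a case‑insensitive 36‑character alphabet (an assumption; the exact alphabet isn't specified here) and a hypothetical 10 million issued codes:

```python
code_space = 36 ** 7       # 7 alphanumeric chars: ~78.4 billion codes
registered = 10_000_000    # hypothetical number of issued codes
hit_rate = registered / code_space
print(f"space: {code_space:,}")               # space: 78,364,164,096
print(f"odds: 1 in {round(1 / hit_rate):,}")  # odds: 1 in 7,836
```

Under these assumptions a bulk enumerator would still stumble on a valid code roughly every 8,000 guesses, which is why the per‑client rate limiting mentioned above carries most of the defensive weight.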

Relations to IDs, National Systems, and Analogies

  • Several see this as akin to:
    • A “mail DNS” or URN for people/households.
    • Sweden’s SPAR (central person/address registry) or US ZIP+4+2 extended addressing.
  • In Japan-specific context, commenters link it to the MyNumber digital ID ecosystem, noting plans to tie address changes across systems and raising classic “public SSN-like identifier” worries.
  • Overall sentiment mixes:
    • Strong positive reactions from people dealing daily with Japanese addresses.
    • Cautious optimism from those who like the indirection pattern.
    • Ongoing skepticism about long-term privacy, enumeration risks, and cultural acceptance of another semi-permanent personal identifier.

Compiler Explorer and the promise of URLs that last forever

Free vs. paid services and longevity

  • Some argue free third‑party services are inherently fragile because there’s no revenue to sustain them.
  • Others counter that plenty of paid services (including from large companies) are killed with short migration windows, while many free projects and open-source software (e.g., Linux) endure.
  • Conclusion in thread: business model alone doesn’t predict durability; both free and paid services vanish.

Google, goo.gl, and trust

  • Many see shutting down read-only goo.gl as gratuitous: redirects are simple, storage is cheap, and Google previously promised existing links would keep working.
  • Speculated reasons: outdated dependencies, legal risk, internal maintenance burden, or just management wanting to reduce “distractions.”
  • Google’s pattern of sunsetting products is seen as damaging to trust; some are baffled they don’t treat this reputation more seriously.

Compiler Explorer, URL shorteners, and recovery

  • Using goo.gl for long stateful URLs is framed by some as “abusing” a shortener; others say shortening long URLs is the intended use.
  • Critical voices say that if “links last forever” is a core principle, outsourcing to a third-party shortener was self‑defeating; others note they were also trying not to store user data themselves.
  • People encourage cooperation with ArchiveTeam/Internet Archive, which have captured billions of goo.gl links.
  • Discussion recognizes that even Compiler Explorer itself can’t last forever, though current funding and possible foundation plans mitigate that.

Link rot, personal archiving, and tools

  • Many describe disillusionment with bookmarks as URLs rot, leading them to: save PDFs, copy text to files (Markdown/RTF), use reader views, SingleFile, WARC/WebRecorder, Zotero, Pinboard, or self‑hosted archives.
  • Some automate archiving every visited page or send pages to external services for search, FTS, embeddings, or LLM tagging.
  • Caveats: domains can be removed from the Internet Archive; even IA and archive.is are seen as ephemeral; self‑hosted archives lack external verifiability.
  • Various timestamping and hashing schemes (including blockchain/GPG ideas) are debated, with no clear consensus on a robust, practical proof-of-authenticity system.
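A minimal version of the hashing idea, assuming a simple local archive (field names are illustrative): store a SHA‑256 digest with each capture so later corruption or tampering of the local copy is detectable. Proving *when* a page was captured still requires an external timestamping service, which is where the debate above remains unresolved.

```python
import hashlib
import time

def archive_record(url, content: bytes):
    """Save-time fingerprint for a captured page: proves integrity of
    the local copy, not when or from where it was fetched."""
    return {
        "url": url,
        "sha256": hashlib.sha256(content).hexdigest(),
        "saved_at": int(time.time()),
    }

rec = archive_record("https://example.com", b"<html>hello</html>")
# Re-hashing the stored bytes later must reproduce the digest:
assert rec["sha256"] == hashlib.sha256(b"<html>hello</html>").hexdigest()
```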

URLs, URIs, and content addressing

  • Several explain the URL/URI/URN distinctions; others dismiss it as largely pedantic in practice.
  • Content-addressed URIs (e.g., IPFS) are proposed as the only “forever” references, but critics note they don’t guarantee availability—someone must still host the content and maintain name‑to‑hash mappings.

Cool URIs, file extensions, and design

  • The classic “Cool URIs don’t change” guidance is cited.
  • Debate over whether URLs should include .html (clear, maps 1:1 to files) versus extensionless paths (hide implementation, allow multiple representations).
  • Some advocate canonical extensionless URLs with optional format-specific variants; others see extensions as useful and human‑readable.

Ephemerality vs preservation

  • Some suggest URL death may be healthy “garbage collection,” preserving only what people work to keep.
  • Others emphasize historians’ desire for mundane records, warning that we can’t predict what future scholars will value.
  • Multiple comments stress designing systems under the assumption that infrastructure and institutions are not permanent; nothing truly lasts forever.

LLM usage disclaimers

  • Noted trend of authors disclosing that text is human-written but LLM‑assisted (links, grammar).
  • Some welcome transparency, especially for “serious” writing; others see such labels as unnecessary if content quality is clear on its own.

Getting a Cease and Desist from Waffle House

Overall reaction to Waffle House’s response

  • Many see this as a big missed marketing/PR opportunity: free goodwill, viral attention, and a chance to “lean into” the Waffle House Index mythos at essentially no cost.
  • Others argue the response is exactly what a large brand will do “every single time” when someone uses their marks and appears official.
  • Several commenters say this story is a reminder that Waffle House is a corporation and should be treated as such: don’t expect “cool” behavior over legal caution.

Trademark, branding, and control

  • Strong consensus that using the logo, brand colors, and “Waffle House” in the domain made the site look official and is classic trademark infringement territory.
  • Multiple people say US law effectively forces active enforcement or risk dilution; a C&D is seen as the standard tool, even for benign uses.
  • Some push back that licensing or a “used under permission” arrangement was possible; others counter that this creates ongoing overhead and risk, so the cheapest option is to shut it down.
  • Debate over how absolute the “must enforce or lose it” narrative really is; some lawyers in the thread call that an oversimplification.

Liability, disaster optics, and “disaster brand” concerns

  • Several commenters note potential tort risk: if the site appears semi-official and is wrong, people might rely on it for safety decisions and sue.
  • Others highlight economic risk: incorrect “closed” labels could directly cost stores revenue or create employee-management conflicts.
  • Some argue Waffle House likely doesn’t want to be tightly tied to “national disasters” as a core brand message, despite the positive story of resiliency.

Scraping and data use

  • People distinguish between (a) trademark issues and (b) scraping status data. Most think the C&D was about marks, but note the data source was later patched anyway.
  • There’s discussion of ToS, scraping case law, and big scrapers (AI, adtech) as context; consensus is that scraping alone would have been a murkier, slower fight.

Alternatives and what could have been done

  • Common suggestions:
    • Remove logo and WH-styled branding; use a generic name (“Waffle Index”, “disaster index”) and a clear “unofficial” disclaimer.
    • Aggregate multiple chains to dilute brand-specific issues.
  • Some argue an individual developer is rational to comply fully: C&Ds are scary, lawyers are expensive, and low-probability IP lawsuits can still be ruinous.

xAI to pay Telegram $300M to integrate Grok into the chat app

Privacy, Encryption, and Data Use

  • Many assume the real goal is using Telegram chats as training data for Grok; several say they will leave the platform over this.
  • Long subthread on Telegram’s security: not end‑to‑end encrypted by default; messages are encrypted in transit but plaintext on Telegram’s servers. Secret chats exist but are rarely used.
  • Others note E2EE doesn’t help if an AI “data harvester” is integrated into the client, or if endpoints/OS are compromised.
  • Strong concern that this enables large‑scale surveillance and intelligence gathering, especially given Telegram’s heavy use in conflict zones and censored countries.
  • Some argue “if it’s free, you are the product” and use implies consent; others flatly reject that as a notion of consent.

Business Logic and Comparisons

  • Many compare this to Google paying Apple/Mozilla to be default search or OpenAI’s various distribution deals.
  • One camp: this is normal distribution/marketing—Telegram has ~1B users, xAI is paying for default status, mindshare, and future upsell (premium AI tiers, SuperGrok, etc.).
  • Another camp: if you must pay platforms and users don’t clamor for you, your product/brand is weak; this is “AI being shoved down throats” to prop up inflated valuations.
  • Some see the main asset as exclusive, high‑resolution conversational data in many regions; $300M is viewed as cheap relative to AI valuations and data scarcity.
  • A few note xAI’s low external traction and argue this is partly about juicing user metrics and “being in the race,” not just direct ROI.

Grok’s Bias and Telegram’s Reputation

  • Multiple comments call Grok “toxic” or “racist slop,” citing its unsolicited “white genocide in South Africa” outputs as evidence of political skew/mismanagement.
  • Others say Grok 3 is technically one of the better general models but acknowledge many will never touch it because of its owner’s politics.
  • Telegram is described as “shady” or a hub for scams, malware, piracy, and extremist content, though some argue that’s true of any powerful communications tool.
  • Concern that scammers and malware authors will integrate Grok into their Telegram‑based tooling.

User Experience, Consent, and Migration

  • Unclear how intrusive the integration will be: optional bot vs woven into chats vs default assistant; several say their reaction depends entirely on that.
  • Some long‑time Telegram fans (including premium users) say they’ll quit if Grok is “forced” into the app; others expect it to be mostly ignorable, like other advanced features.
  • Reports that certain communities (notably furries and other privacy‑sensitive groups) are already spinning up backup rooms and accelerating moves to Signal, Matrix, or self‑hosted options.
  • Others are resigned: Telegram has already “sold its soul” via ads, spam, and limited moderation; this is seen as another step in enshittification.

Power, Politics, and Surveillance

  • Several see this as deepening a surveillance and influence stack: rockets, satellites, social network, and now a major chat app’s AI layer.
  • Some speculate that training on Telegram’s global, often politicized content will further tilt Grok toward the owner’s worldview.
  • Others link this to a broader pattern: AI integrations used as cover for data extraction and political or commercial manipulation rather than genuine user needs.

Deal Status and Trust

  • Later in the thread, users note public statements that “no deal has been signed” yet, followed by clarification that an agreement in principle exists but formalities are pending.
  • This mismatch in public messaging is seen by some as emblematic of a business culture where big announcements precede finalized agreements and where “total shamelessness” is an asset.

Mullvad Leta

What Leta Is and How It Works

  • Leta is described as a front end/proxy to Google and Brave Search APIs, not its own index.
  • It strips tracking elements from results, offers no ads, and lets users pick the backend engine.
  • Several commenters find it very fast, clean, and more relevant/less “crap” than direct Google, and are adopting it as their default search.
  • Others are confused: the landing page doesn’t clearly explain what it is, and some only understood via the thread.

Privacy Model, Trust, and Limitations

  • Core value: Google/Brave see only Mullvad’s servers, not end‑user IPs or browser fingerprints; Mullvad sees the search and the user.
  • Some argue this is “trust shifting,” not true privacy, and criticize the lack of end‑to‑end client encryption as “theater.”
  • Others respond that some party must see the plaintext query; the main improvement is who that is, and Mullvad’s track record and jurisdiction are seen by many as acceptable.
  • Confusion over an FAQ line saying Leta is “useless” if you already perfectly block tracking; commenters interpret this as “then you gain nothing more.”

Caching, Infrastructure, and Freshness

  • Leta caches search results for 30 days in an in‑memory Redis store on diskless, RAM‑only servers booted via stboot (the same model as Mullvad’s VPN).
  • This pooling of identical queries reduces API cost and arguably improves privacy by mixing users.
  • Concerns:
    • Cached results may be stale; some users already see multi‑day‑old caches.
    • RAM‑only cache is lost on restarts; FAQ admits upgrades flush the cache.
    • Questions about whether caching is compatible with the Google/Brave API terms.
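The pooling described above can be sketched in a few lines. This is a hypothetical illustration, not Leta’s actual code: Leta reportedly uses an in‑memory Redis store, so a plain dict stands in for it here to keep the sketch self‑contained; the `PooledSearchCache` class and `fetch_upstream` callback are invented names.

```python
import hashlib
import time

CACHE_TTL = 30 * 24 * 3600  # 30 days, per the FAQ

class PooledSearchCache:
    """Identical queries from different users share one cache entry,
    so the upstream API sees each distinct query at most once per TTL."""

    def __init__(self, fetch_upstream, ttl=CACHE_TTL):
        self._fetch = fetch_upstream   # calls the real search API
        self._ttl = ttl
        self._store = {}               # key -> (expires_at, results); RAM only
        self.upstream_calls = 0        # for observing pooling behavior

    def _key(self, query, engine):
        # Normalize so "Foo Bar" and "foo  bar" pool into one entry.
        norm = " ".join(query.lower().split())
        return hashlib.sha256(f"{engine}:{norm}".encode()).hexdigest()

    def search(self, query, engine="google"):
        key = self._key(query, engine)
        hit = self._store.get(key)
        if hit and hit[0] > time.time():
            return hit[1]              # shared, possibly days-old result
        self.upstream_calls += 1
        results = self._fetch(query, engine)
        self._store[key] = (time.time() + self._ttl, results)
        return results
```

Because `_store` lives only in process memory, a restart discards everything, which mirrors the FAQ’s admission that upgrades flush the cache; the staleness concern above follows directly from serving any entry younger than the 30‑day TTL.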

Business Model, API Costs, and Sustainability

  • Multiple commenters doubt long‑term viability: Google/Brave APIs are said to be expensive and Leta has no visible revenue.
  • Hypotheses:
    • It’s a marketing/brand‑building cost center to drive VPN/browser subscriptions.
    • Prior versions were VPN‑subscriber‑only; opening it may be a growth play.
  • Some see it as a “publicity stunt” that might be shut down once costs or marketing priorities change; others note it has already been running ~2 years.

Advertising, Growth, and Brand Perception

  • Users report heavy Mullvad advertising (billboards, buses, subway) in London, SF, NYC, airports, etc.
  • Company comments say there’s no outside investment or “lottery win”; growth over years funds these campaigns, which are cheaper than people assume.
  • They prefer broad outdoor ads over tracking-heavy online ads or affiliates, to align with their privacy stance.
  • Reactions are mixed:
    • Some appreciate the consistency (non‑targeted ads for a privacy product).
    • Others feel mass‑market advertising for a “privacy” brand erodes perceived ideological purity and increases fear of state pressure as they grow.

Comparison to Other Search and Tools

  • Compared to Startpage/DDG: Leta is another Google proxy but not owned by an ad company; behavior is similar in concept.
  • Question about “How is this different from DDG !g?” → answer: !g just redirects to Google, Leta proxies and caches.
  • Some users plan to move from Startpage; others stick with DDG, Kagi, or LLMs.
  • Debate on “search vs LLMs”: a few say they rarely use search now; others find LLMs unreliable or hallucinatory and still rely heavily on search and Stack Overflow.

Mullvad VPN Reputation and Ecosystem

  • Mixed feedback on the VPN itself:
    • Long‑term users praise stability (especially WireGuard on mobile) and privacy ethos.
    • Others report worsening usability: CAPTCHAs everywhere, frequent disconnects, laggy DNS, and heavier IP blacklisting than smaller VPNs.
    • Some mitigations mentioned (e.g., disabling obfuscation/“quantum” tunnels).
  • Mullvad staff reiterate their mission is to fight both mass surveillance and censorship, with better censorship‑circumvention tooling “on the roadmap.”
  • Leta integrates into Mullvad Browser, framed as part of a broader privacy ecosystem.

Workplace Blocking and Practicalities

  • Many workplaces block mullvad.net as “VPN/proxy avoidance,” making Leta unusable at work, unlike DDG.
  • Discussion of coarse‑grained corporate filters that block whole categories (VPN, adult themes, “AI”, some TLDs), which also hurt developers.

Naming and Positioning

  • Some think “Mullvad/Leta” branding is confusing or hard to remember in English‑speaking markets.
  • Others like the Swedish names (“mole” / “search”) and compare the strategy to IKEA’s non‑English naming, pushing back against Anglocentrism.