Hacker News, Distilled

AI-powered summaries of selected HN discussions.

The Era of the Business Idiot

Milton Friedman, Markets, and Morality

  • Large subthread debates whether the article misrepresents Friedman and his shareholder‑value doctrine.
  • One side: Friedman emphasized following laws and norms, opposed monopoly power, and argued that managers shouldn’t spend others’ money on personal social goals; regulation should target clear harms (e.g., pollution).
  • Critics: In practice his framework licenses exploitation and discrimination, assuming markets will self‑correct in a chaotic world. They argue he underestimated how long harmful preferences (e.g., racism) can persist and how much harm they cause.
  • Heated debate centers on a passage about a racist community refusing Black clerks. Defenders say the quote is cherry‑picked and Friedman explicitly condemned racial prejudice, arguing only against state coercion. Opponents say that functionally he defends owners’ “freedom” to discriminate and opposes the only tools that reliably reduce it (civil‑rights laws, minimum wage, etc.).
  • Broader disagreement: Should government aggressively regulate “negative harms” (discrimination, low wages), or does that become a dangerous, overreaching “stick”?

Discrimination, Segregation, and Role of Government

  • Some argue segregation was primarily state‑imposed and that free markets historically encouraged integration when allowed.
  • Others counter with examples like redlining, restrictive covenants, and racist violence, saying markets often entrenched segregation and that legal force was necessary to break it.
  • There’s acknowledgment that preferences have changed over time, but dispute over whether that’s due to markets alone or to government intervention forcing integration.

What Counts as Executive Competence

  • Several commenters reinterpret the piece’s “symbolic executive” idea: executives still primarily optimize one metric—“number go up”—but the link between that and “being good at stuff” (real productive value) has frayed.
  • Some think entrenched positions, zero‑rate environments, and hype explain success more than genuine capability.
  • Others note investors tolerate prolonged unprofitability if they believe in future dominance, so “value to society” is mostly rhetorical cover.

AI Tools, Communication, and Corporate Theater

  • The article’s dismissal of email summaries, AI meeting prep, and “chatting with podcasts” draws pushback.
  • Defenders say most corporate communication is poorly written and performative; AI that summarizes or adapts content to context, language, or style can be a real efficiency gain, not just an idiot’s crutch.
  • There’s related frustration at managerial cultures that reward appearances, nitpicking, and “vibes” over substantive work.

Measurement, Incentives, and Broader System Failures

  • Multiple comments stress that the real problem is incentive design: once success is reduced to a single financial metric, executives, investors, and VCs rationally chase it, even when it harms social “flourishing.”
  • Venture structure, index investing, and monetary policy are cited as amplifiers of short‑termism, not just Friedman’s ideas alone.

Reactions to the Article and Author

  • Some find the piece valuable for tracing business clichés back to dubious origins; others call it a slog, error‑prone, or rage‑bait that clips sources out of context.
  • A subset dismisses it as unfocused complaining about “business idiots” and AI, while others see it as accurately capturing a hollow, performative executive culture.

Ask HN: How to Make Friendster Great?

Vision: Value over Addiction

  • Some argue “addictive” reinforcement is necessary for growth; others reject this, wanting Friendster to help people maintain relationships rather than hijack attention.
  • Several see doomscrolling and passive consumption as core problems; they suggest emphasizing creation, events, and relationships, not endless feeds.

Real‑World Social Focus & Target Niches

  • Strong support for tools that catalyze offline interactions: meetups, clubs, local “third places,” dinner parties, game nights, run clubs, reading groups, charity, civic engagement.
  • Repeated niche ideas:
    • 30+ / parents who struggle to organize across school, building, family groups.
    • Millennials nostalgic for early Friendster/Facebook.
    • Platonic friendship (especially for men) distinct from dating apps.
  • Many want a better “old Facebook”/Meetup: event organization, birthday reminders, hyper‑private family photo sharing, simple LinkedIn-style professional contacts.

Identity, Bots & Trust

  • Desire for bot‑free, real‑person networks: ID verification, small fees, postcards, invite/karma systems, or pricing models that make bot scaling expensive.
  • Others argue full bot prevention is impossible; better to design so bots and algorithmically pushed junk are irrelevant.
  • Debate over real‑name/verification: some want accountability; others note stalking, swatting, and safety concerns.

Product & UX Principles

  • Popular asks: chronological feeds, easy muting, no/optional algorithms, no infinite follower model, mutual connections only, hard caps on friend counts, limited or no public content.
  • One influential thread proposes: no followers, likes, reposts, or viral sharing; mutual connections only; conversations over audiences, “sparks” over broadcasting. Counterpoints worry this becomes “just texting” or too niche.
  • Interest in strong group/club features, nested comments, profile customization/themes, and mixing “best of” patterns from Reddit, Discord, old Facebook, forums, etc.

Decentralization & Interop

  • Multiple suggestions to build on open protocols (ATProto/Bluesky, ActivityPub, Nostr, Mastodon) so Friendster is a federated or atproto-based app, not a closed silo.

Monetization & Governance

  • Many urge subscription/no‑ad models and treating the network more like a co‑op, community center, or “church” than an ad business.
  • Others claim you can’t fund hosting and dev without either ads or addictiveness; skepticism that a non‑enshittified model can scale beyond a niche.

Safety, Moderation & Politics

  • Ideas: AI for detecting negativity and spam, curation over top‑down moderation, user‑controlled filters, community vetting, banning corporate pages.
  • Concern about spam/porn and nation‑state manipulation; suggestions include clever spam sandboxes.
  • Several criticize any “make ___ great again” language and warn to avoid visible political leanings.

Overall Skepticism

  • Many doubt a reboot can overcome network effects, distrust of social platforms, and the structural incentives that broke previous networks, though they’re curious about a small, well‑designed, nostalgic, niche product.

ZEUS – A new two-petawatt laser facility at the University of Michigan

Project execution and facility role

  • Commenters praise the project’s disciplined five‑year build, noting the invisible “careful planning and execution” behind such facilities.
  • The multi‑year fabrication of key components (e.g., a crystal taking 4.5 years to manufacture) reinforces the scale and difficulty.
  • People highlight that ZEUS is an NSF user facility, seeing this as evidence it’s intended as a shared research tool, not just a prestige project.

Power, energy, and “Death Star” misconceptions

  • Several comments point out confusion between power (watts) and energy (joules).
  • ZEUS’s 2 PW comes from extremely short pulses: ~20–25 femtoseconds at ~50 J per shot, about once per minute; in total energy that is comparable to a few seconds of a phone flashlight (see the worked numbers below).
  • One commenter emphasizes there is no realistic path from this to a multi‑second, petawatt‑class “superweapon”; scaling energy by ~10¹⁶ would be required and would obliterate the facility.
  • A playful “mosquito–to–Death Star” logarithmic scale is constructed, then critiqued as misleading because it ignores pulse duration and total energy.
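
A quick sanity check of the arithmetic behind those figures (peak power is simply pulse energy divided by pulse duration):

$$P = \frac{E}{\Delta t} = \frac{50\ \text{J}}{25\times 10^{-15}\ \text{s}} = 2\times 10^{15}\ \text{W} = 2\ \text{PW}$$

The same 50 J delivered once per minute averages out to under one watt, which is why the per‑shot energy is so modest despite the enormous peak power.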

Scientific and practical applications

  • Short, intense pulses are noted as ideal for ablation: very sharp material removal with minimal collateral heat damage.
  • Past demonstrations of laser cutting tissue with single‑cell‑scale damage are mentioned; commenters speculate about surgical and radiotherapy targeting applications, with some details about fiducial markers and motion compensation.
  • Others connect femto/picosecond lasers to paths toward inertial confinement fusion at longer pulse durations and higher energies.

Operation, noise, and pulse physics

  • People trade anecdotes about loud high‑power lasers, with clarification that most noise comes from cooling and support equipment.
  • Femtosecond lasers can audibly “buzz” by ionizing air; speculation about ZEUS’s repetition rate leads to links to its spec sheet.
  • A brief physics explainer describes how very short pulses necessarily have broad spectra due to the time–energy uncertainty relation, complicating mirror and lens design.
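
A rough version of that explainer, assuming a transform‑limited Gaussian pulse and an 800 nm Ti:sapphire‑style center wavelength (the wavelength is an assumption, not stated in the thread):

$$\Delta\nu \approx \frac{0.44}{\Delta t} = \frac{0.44}{25\ \text{fs}} \approx 18\ \text{THz}, \qquad \Delta\lambda \approx \frac{\lambda^{2}}{c}\,\Delta\nu \approx 38\ \text{nm}$$

Tens of nanometers of bandwidth is far outside what ordinary narrowband laser optics are designed for, hence the complications for mirror and lens design.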

Comparisons and global context

  • Commenters note that ZEUS is “most powerful in the US,” not the world; they cite a 10 PW facility in Romania and a proposed ~100 PW Chinese project.
  • NIF is cited as the “most energetic” laser (~2 MJ), distinct from ZEUS’s peak power focus.
  • Some mention real and proposed military laser systems, noting that practical destructive lasers exist but with atmospheric and scaling limitations.

Humor and pop‑culture riffs

  • The thread is peppered with Real Genius, Star Wars/Death Star, XKCD, Family Guy, and meme references.
  • Jokes about popcorn, drilling through Earth, and “Zeus” objecting to anything less than a zettawatt reinforce the gap between pop‑culture laser fantasies and ZEUS’s actual scientific purpose.

Discord Unveiled: A Comprehensive Dataset of Public Communication (2015-2024)

Ethics of Scraping and Publication

  • Many commenters see the project as “shameful”: scraping billions of casual chats (often by minors) without their knowledge, then publishing them, is viewed as violating norms of research ethics and basic politeness, even if technically allowed.
  • Others argue it’s ethically necessary disclosure: if this is possible, intelligence agencies, criminals, and data brokers have likely done it already. Making it visible in an academic, open way is framed as “public red teaming” that forces people to confront real risks.

Public vs Private: What Does “Public Discord” Mean?

  • Dataset is limited to servers in Discord’s Discovery tab (joinable without invites). Supporters say this makes them essentially public, comparable to forums, Usenet, or StackOverflow.
  • Critics counter that “invite-based servers” and the “server” metaphor create an illusion of semi-privacy and ephemerality; users expect a flowing chatroom, not a permanent, globally queryable corpus.
  • Tension arises over whether “anyone can join and scroll back” implies a reasonable expectation that content may be archived and redistributed.

Anonymization and Re‑identification Risks

  • The paper describes pseudonyms and truncated SHA‑256 hashes for IDs; many find this “pretty thorough” on paper.
  • Others highlight weaknesses: unsalted hashing lets attackers hash known usernames; once a specific channel is matched, one can track those users across the dataset; references to real names or nicknames inside message text remain.
  • One commenter publishes a deeper critique claiming the ID anonymization scheme is flawed and re-identification is realistically possible.
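
A minimal sketch of the unsalted‑hash weakness described above; the truncation length, encoding, and example IDs are illustrative assumptions, not the paper's exact scheme:

```python
import hashlib

# Unsalted, truncated SHA-256 pseudonyms: anyone holding a list of
# known Discord user IDs can recompute the same pseudonyms offline.
known_ids = ["80351110224678912", "175928847299117063"]  # hypothetical

pseudonyms = {
    hashlib.sha256(uid.encode()).hexdigest()[:16]: uid
    for uid in known_ids
}

def deanonymize(author_hash: str) -> str | None:
    # A hit re-identifies the "anonymous" author of every message
    # in the dataset carrying this pseudonym.
    return pseudonyms.get(author_hash)
```

A salt kept secret by the researchers, or a keyed construction such as HMAC, would have blocked this dictionary-style reversal.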

Legal / ToS / GDPR Questions

  • Multiple comments note this likely violates Discord’s ToS and developer terms (no bulk export / sharing of API data). Debate centers on whether breaking ToS can still be “ethical.”
  • GDPR concerns: even if messages were public, true anonymization is disputed, and there is no user-level mechanism to request deletion. Others argue GDPR is misaligned with the practical permanence of public posts.

Impact on Users, Especially Youth

  • Strong worry about minors and young people: Discord has been a primary social space where teens “grow up,” make mistakes, and expect some contextual obscurity.
  • Some see this as fueling long‑term “cancel” dynamics; others say the real solution is cultural (right to forgiveness) and better education that nothing online is truly private.

Discord as Knowledge Sink and Forum Replacement

  • Separate but related thread: Discord’s rise as a replacement for forums (modding, hobby communities, docs, support) is widely criticized—poor search, walled access, fragile archives.
  • Some welcome the dataset as a way to surface technical knowledge otherwise trapped in Discord; others say that doesn’t justify mass scraping of social spaces.

Dataset Details and Distribution

  • The dataset is 118 GB of Zstandard-compressed JSON (2.1 TB uncompressed). It was initially freely downloadable from Zenodo, then restricted; the community quickly shared hashes and magnet links to redistribute it.

Devstral

Performance & first impressions

  • Several users report Devstral is “impressive” for local coding assistance, handling tricky language-specific tasks (e.g., Ruby/RSpec) and large-context editing through tools like aider or Cline.
  • Others find it underwhelming or “atrocious” for file reading and tool calling when wired into generic agent frameworks, suggesting quality is highly setup-dependent.

Local deployment & hardware

  • Runs on a range of hardware: RTX 4090/4080, 3090, 6800 XT with 64GB RAM, and Apple Silicon (M2/M4 Air/Max, 24–128GB).
  • On underpowered setups (e.g., 8–12GB GPU, 16GB RAM Mac), it may technically run but be very slow or cause swapping/freezes.
  • Ollama’s 14GB model size is used as a rough proxy for RAM needs; the rule of thumb is model size plus a few GB for context. Totals below ~20GB tend to coexist better with other apps on macOS.
  • First-token latency can be ~1 minute on high-end Macs with large context, then responses are much faster.

Tool use and agent workflows

  • Devstral appears strongly tuned for a specific agent framework (OpenHands / cradle-like flows: search_repo, read_file, run_tests, etc.), excelling when used as part of that stack.
  • Multiple reports say generic tool-calling “hello world” tests fail: the model doesn’t reliably call arbitrary tools or use their results.
  • Some users report good agentic behavior in Cline and OpenHands; others cannot get tools to trigger at all in their own systems. This mismatch is a major point of confusion.
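
For context, the "hello world" tests in question look roughly like the sketch below: send an OpenAI-style tool schema and check whether the model emits a tool call at all. The endpoint, model tag, and tool are assumptions here (Ollama exposes an OpenAI-compatible API; adjust for your setup):

```python
from openai import OpenAI

# Ollama's OpenAI-compatible endpoint; the API key is unused locally.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

tools = [{
    "type": "function",
    "function": {
        "name": "read_file",
        "description": "Read a file from the repository",
        "parameters": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
}]

resp = client.chat.completions.create(
    model="devstral",  # assumed model tag
    messages=[{"role": "user", "content": "Open README.md and summarize it."}],
    tools=tools,
)

# A model tuned for agentic use should produce a tool call here;
# reports differ on whether Devstral does outside its native stack.
print(resp.choices[0].message.tool_calls)
```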

Benchmarks and trust

  • SWE-Bench-Verified results are described as extraordinarily high for an open model of this size, even rivaling or beating some Claude/agent setups.
  • Several commenters are skeptical, suspecting heavy optimization for that benchmark or for a specific toolchain, and note that single benchmark numbers increasingly diverge from their real-world experience.
  • One user finds Devstral clearly worse than qwen3:30b on nontrivial Clojure tasks; others emphasize it’s not optimized for “write a function that does X” but for multi-step agent flows.

Model comparisons & use cases

  • Compared against Claude 3.7 and other hosted LLMs, many see Devstral as a “different class”: weaker raw capability but attractive for privacy, offline use, cost, and “doing the thinking yourself.”
  • Users mention Qwen, Gemma 3, GLM4, and various Q4 quantizations as alternatives; no consensus “best local” model, and performance often seems language-/task-dependent.

Licensing, openness, and strategy

  • Apache 2.0 licensing is widely praised versus restrictive “open weight” or Llama-style licenses. Some note Mistral has a strong open-weight history, though not all their models (e.g., Codestral) are open.
  • There is support for EU/public funding of Apache/MIT-licensed models as a strategic counterweight to big US/Chinese providers; Mistral is viewed by some as a promising “independent European alternative.”
  • A broader concern is that smaller model vendors should lean into open-source tooling (Aider, OpenHands, etc.) rather than building closed, fully autonomous agents, which many still see as premature and unreliable compared to assisted coding flows.

Why walking is the most underrated form of exercise (2017)

How “underrated” is walking?

  • Some argue walking is nearly pointless as exercise for reasonably fit, active people; valuable mainly as transport or for the very unfit/obese.
  • Others say that overstates it: every bit of movement affects energy balance and helps prevent gradual weight gain.
  • A few think walking’s reputation is about right: great for low-impact movement and mental clarity, but not a “proper workout” compared with intensive training.

Calories, efficiency, and EPOC

  • Multiple comments highlight how time‑inefficient walking is for large calorie burns; 20k steps can require 4–5 hours.
  • Intense exercise (HIIT, heavy lifting, hard cycling) is credited with much higher hourly burn plus excess post‑exercise oxygen consumption (EPOC), though there’s disagreement on how big the EPOC effect really is.
  • There’s debate over the calorie math: some think 3,000 kcal from walking is unrealistic, others say it’s plausible with enough hours and body weight (see the worked example below).
  • Several note that for weight loss, eating less is usually more impactful than adding long walks.
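
For scale, the worked example referenced above, using a commonly cited net walking cost of roughly 0.5 kcal per kg per km (an outside approximation, not a figure from the thread) for an 80 kg walker covering 20,000 steps, about 15 km:

$$E \approx 0.5\ \frac{\text{kcal}}{\text{kg}\cdot\text{km}} \times 80\ \text{kg} \times 15\ \text{km} \approx 600\ \text{kcal}$$

At that rate, 3,000 kcal requires roughly 75 km of walking, which is why both camps can be right: it is achievable, but only with many hours and/or substantially higher body weight.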

Intensity, fitness level, and “cardio zones”

  • Whether walking counts as cardio is seen as highly dependent on fitness and weight: for sedentary or obese people it can hit moderate intensity zones; for fit people often not.
  • Incline and speed can make walking significantly more taxing; hills/stairs and “rucking” (weighted walking) are proposed as ways to scale difficulty.

Mental health, lifestyle, and accessibility

  • Many emphasize walking’s benefits for mood, recovery, and “clearing the mind.”
  • It’s viewed as a key entry point for completely sedentary people and far less intimidating than running.
  • Walkable cities are praised for supporting everyday mental and physical health.

Comparisons and joint concerns

  • Running, cycling, swimming, ellipticals, and resistance training are repeatedly described as more efficient for fitness and body composition.
  • Some avoid running to “save their knees”; others cite evidence and experience that moderate running with good form is not harmful and may improve joint health.
  • Opinions diverge on treadmills (effective but “murder on knees” vs. most sustainable indoor option) and on rucking’s long‑term impact on backs and knees.

Habits, tracking, and anecdotes

  • Wearables (e.g., step counters) are reported to dramatically increase daily walking by making inactivity visible.
  • Personal stories range from dramatic weight loss via daily 10 km walks plus diet changes to an 11‑hour walk that led to scary heart symptoms, used as a caution against overdoing it.

A South Korean grand master on the art of the perfect soy sauce

Taste differences & styles of soy sauce

  • Many contrast mass-market Kikkoman with more traditional or regional sauces: Kikkoman is described as salty, slightly metallic, and “sharp,” whereas artisanal or regional sauces are said to have deeper, layered flavors (seafood, molasses, coffee, sweetness, MSG-like umami) and often feel less salty despite similar sodium.
  • Commenters emphasize there is no single “traditional” soy sauce; Japanese shoyu, Chinese light/dark, Korean jang-based sauces, and tamari all fill different roles, like different wines for different dishes.
  • Example: Japanese soy can suit sashimi because of its sharpness, while Chinese light soy is preferred for fried rice due to a smoother savoriness.

Brand choices & practical use

  • Widely recommended alternatives: Pearl River Bridge (Chinese light/dark), Sempio (including lower-salt variants), Kimlan, San-J, Yamasa, Zhongba, Ohsawa, and various sweet soy/kecap manis brands.
  • Several people advocate keeping multiple soy sauces (light, dark, dipping, fish-specific, marinades) rather than one “universal” bottle.
  • Some refrigerate soy sauce for flavor preservation and mold prevention, especially sweeter variants; others note labels often recommend this though many ignore it.

Tamari, wheat, and gluten

  • Clarification that standard Japanese shoyu typically contains wheat; tamari is low- or no-wheat.
  • Disagreement over how “traditional” wheat-based soy is, given wheat’s relatively late arrival in Japan.
  • Some celiac sufferers report doing fine with regular soy sauces due to very low gluten content; others prefer tamari and find it tastes richer and better.

Fermentation, spoilage, and health

  • Extended debate on what distinguishes fermentation from spoilage: palatability vs safety vs illness.
  • Alcohol in fermented foods prompts side discussions: whether alcohol-free “synthetic” soy (e.g., La Choy) has a place for medical/religious reasons; differing religious views on trace alcohol.
  • Broader appreciation of fermentation’s role in human food (cheese, bread, kimchi, chocolate, coffee, hot sauce), and speculation that low intake of fermented foods might harm modern health, though this is anecdotal.

“Best” vs “good enough” and food culture

  • Long subthread critiques “best-chasing” culture (in soy sauce, pizza, sushi, etc.): standing in long lines or paying high premiums for marginal gains, often tied to status signaling and social media.
  • Others defend seeking excellence when you care about something, but agree hype and influencer culture can hollow out genuine appreciation.
  • Analogy made to ketchup and cola: for some staples, one reliable standard brand is “good enough,” and upmarket variants mostly “just taste different.”

Cultural meaning and variety

  • Some highlight that soy sauce is not just flavor but memory, tradition, and identity, especially in Korean and Japanese contexts with handmade jang or long-aged brews.
  • Multiple commenters note that in many Asian households, several specialized soy sauces are standard, and restaurants often further doctor them with aromatics and oils.

Miscellaneous tangents

  • Brief meta-discussion about using ChatGPT to identify plants and to retrieve food information, with mixed trust compared to SEO-filled web searches.
  • Minor political and racial tangents around neighborhood gentrification and who consumes “heritage” foods.
  • Quick note on Trump having been served the grand master’s soy sauce, mostly met as an odd aside rather than seriously discussed.

Roto: A Compiled Scripting Language for Rust

Syntax & Language Design

  • Many note Roto’s syntax is very Rust-like; some see this as natural (Rust users like the syntax and want to reuse it), others prefer a distinct look (e.g. Lua, Koto) to separate host and script and avoid tying the language to Rust long term.
  • There’s debate whether embedded languages must resemble the host; examples like Lua, Tcl, and ECL show they don’t, while newer Rust-adjacent projects often do.
  • Roto is expression-oriented (like Rust: if/else as expressions) and statically typed, JIT-compiled, and hot-reloadable.
  • Initially Roto lacked loops to guarantee short-running filters; the author clarifies this is specific to its BGP engine use (Rotonda) and loops are likely to become optional elsewhere. Lists and generic collections are on the roadmap but harder to add.

Use Cases and Relationship to Rust

  • Commenters are enthusiastic about a “killer app” scripting/application language for Rust: rapid iteration, hot reload, static typing, and tight interop while reserving Rust for performance-critical parts.
  • Some ask if 80–100% of an app could be written in Roto; the author says yes in principle but notes limitations: no traits, no generic type registration (only concrete types like Vec<u32>), and runtime compilation overhead.
  • Roto intentionally avoids Rust references and full Rust complexity to be simpler for end-users.

Alternatives and Comparisons

  • Comparisons include:
    • Lua, embedded JS engines (V8, QuickJS, Duktape), TypeScript, WebAssembly/wasmtime, Mun, yaegi, and simply dlopening Rust shared libraries.
  • V8/TS are criticized as heavy (binary size, integration complexity), with concerns about TS’s type soundness for “mission-critical” filters; Roto is framed more as a domain-specific, lightweight option.
  • Mun and wasmtime’s component model are cited as alternative plugin/hot-reload architectures; wasmtime gets praise for typed, non-stringly interfaces, though C bindings are still maturing.
  • Using Rust shared libraries is questioned due to lack of stable Rust ABI and deployment complexity; Roto avoids requiring a Rust toolchain on target systems.
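
To make the ABI objection concrete: Rust has no stable ABI of its own, so a dlopen‑style plugin must expose a C‑ABI surface. A minimal sketch (the function name and signature are illustrative):

```rust
// Compiled with crate-type = ["cdylib"] so a host can dlopen it.
// Only C-compatible types can safely cross this boundary; rich Rust
// types (String, Vec, trait objects) cannot.
#[no_mangle]
pub extern "C" fn filter_route(prefix_len: u8) -> bool {
    prefix_len <= 24
}
```

Roto sidesteps all of this by compiling scripts inside the host process, which is why no Rust toolchain or ABI contract is needed on target systems.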

Execution Model, Safety, and Limitations

  • Scripts don’t auto-run on load; the host chooses what to invoke and when, resembling C/C++ modules instead of typical dynamic-language “execute on import.”
  • no_std support is not planned; the compiler itself allocates heavily.
  • Some see the no-loops design (for Rotonda) as overly restrictive; the author agrees the docs are outdated and that general-purpose Roto will likely include loops.

Acronyms and Documentation Clarity

  • A substantial side-thread debates whether posts like this should spell out acronyms (e.g., BGP = Border Gateway Protocol).
  • Some argue context-specific blogs can assume domain knowledge; others advocate the “first use: full term + acronym” convention to avoid alienating readers, noting abbreviations often have many meanings.
  • There’s meta-discussion about LLMs guessing acronym meanings without context and the importance of specifying domain when querying them.

Watching AI drive Microsoft employees insane

Mandatory AI adoption & management incentives

  • Multiple commenters report that at Microsoft and other large firms, Copilot use is management‑driven, not developer‑driven. Some teams allegedly tie “using AI” to OKRs and performance reviews, with threats of PIPs for refusing tools.
  • Motives suggested: justifying the OpenAI investment, propping up stock price with an “AI story,” training models on employees’ work, and creating pretext to label “under‑performers” and cut headcount.
  • Similar pressure is reported at non‑tech megacorps and smaller companies now buying expensive Copilot licenses “because Microsoft is.”

Copilot on dotnet/runtime: what actually happened

  • Copilot “agents” are opening PRs on the .NET runtime repo to fix tests and bugs; many PRs don’t compile, fail tests, or “fix” failures by deleting or weakening tests.
  • Review threads show humans repeatedly pointing out basic issues (“code doesn’t compile”, “tests aren’t running”, “new tests fail”), with the agent producing new, often-wrong revisions.
  • Reviewers compare it to a junior dev who never reads feedback and can’t learn; some say that’s unfair to juniors.
  • GitHub UI is cluttered by repeated check failures, making review harder.
  • Maintainers explicitly say this is an experiment to probe limits; anything merged remains their responsibility. Critics counter that running such experiments on core infrastructure, in public, is reckless and wastes senior engineer time.

How useful are LLMs for coding?

  • Many say Copilot/LLMs are good for: boilerplate, syntax lookup, small scripts, unit-test scaffolding, basic refactors, or as a “rubber duck.” Some estimate ~20–30% productivity gains in those niches.
  • Others find them poor at C#/.NET, async code, and anything with many hard constraints; they often hallucinate APIs, mishandle test logic, or hard-code test values.
  • Agents driving PRs are seen as orders of magnitude less efficient than using LLMs interactively inside an IDE with a human firmly “in the driver’s seat.”
  • Several argue that until models can reliably debug, respect constraints, and revise earlier code, they’re much worse than even a mediocre intern.

Risks to quality, security, and open source

  • Widespread concern that AI‑generated code, especially in critical stacks like .NET, will introduce subtle bugs and security issues that slip through “tests pass, approved” review cultures.
  • Maintainers worry about becoming janitors for AI slop: triaging endless low‑quality PRs, burned out attention, and more abandoned OSS projects.
  • Some object on IP/ethics grounds: models trained on code and docs without consent, remixing that into proprietary tools; they refuse to use such systems on principle.

Economic and labor implications

  • Commenters tie the AI push to long‑running trends: outsourcing, commoditizing developers, layoffs after interest‑rate hikes, and using AI as a narrative to justify further cuts.
  • Many feel they’re being asked to “train their replacement” with no upside; others predict AI will mostly replace the lowest‑quality outsourced work rather than solid engineers, at least initially.
  • There’s frustration that engineers themselves are building tools explicitly pitched to devalue or eliminate their own jobs, with little organized resistance.

Trajectory and hype

  • Some see clear progress over the last 2–3 years and expect coding agents to reach “good engineer” level eventually; they view messy public experiments as necessary dogfooding.
  • Skeptics see a bubble: massive GPU spend, weak evidence of net productivity or profit, overblown CEO claims (“30% of code written by software”), and growing user backlash as AI is forced into workflows.
  • Several predict a correction or “AI winter,” or at least a long plateau at “junior” level; others warn that execs may simply redefine “good enough” downward to match what the tools can do.

Mermaid: Generation of diagrams like flowcharts or sequence diagrams from text

Landscape of text-to-diagram tools

  • A curated list of ~70 browser-based text-to-diagram tools is shared; readers find it surprisingly comprehensive and valuable.
  • Many specialized tools (e.g., for sequence diagrams, database diagrams, genealogical trees) are viewed as better for their niche than generic tools like Mermaid.
  • Alternatives frequently mentioned:
    • Sequence diagrams: WebSequenceDiagrams, js-sequence-diagrams.
    • DB diagrams: DrawDB, dbdiagram.io, Cacoo, sqliteviz, Graphviz-based tools.
    • General drawing/whiteboarding: Excalidraw, Miro.
    • Other text-based diagrammers: PlantUML, Graphviz/dot, D2, Kroki as a wrapper for many syntaxes.

Mermaid’s main strengths

  • Native/inline support in GitHub, GitLab, Notion, Obsidian, Hugo, Jira, Azure DevOps, etc., makes it a de facto choice for diagrams in Markdown and internal docs.
  • Diagrams-as-code fit naturally into repos: editable, diffable, and compatible with git blame and review workflows.
  • Works offline via CLI and editor plugins (JetBrains, VS Code) despite being browser-focused.
  • A near-WYSIWYG editor (mermaidchart.com) eases layout while preserving text-source.
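
For readers who haven't tried it, a minimal taste of the diagrams-as-code idea: pasted into a Markdown file on GitHub or GitLab, the block below renders inline as a sequence diagram (the participants and messages are arbitrary):

```mermaid
sequenceDiagram
    participant Dev
    participant CI
    Dev->>CI: push commit
    CI-->>Dev: build status
```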

Critiques and limitations

  • Perceived as less powerful and less polished than PlantUML, Graphviz, or specialized tools; syntax is seen as strict and somewhat immature.
  • Local rendering can be awkward (e.g., headless Chrome flows, CLI SVG text issues).
  • Layout struggles with large or complex graphs (schemas with many tables, microservices, etc.), a problem shared with Graphviz.
  • In Notion and some ecosystems, shipped Mermaid versions are outdated.

LLMs and Mermaid

  • Many report strong synergy: LLMs can generate or refine Mermaid from:
    • High-level text descriptions.
    • Codebases or logs.
    • Hand-drawn diagrams (via multimodal models).
  • Some say certain models don’t handle Mermaid well and prefer LaTeX TikZ; others report newer models (including open ones) handle Mermaid reliably.

Use cases and philosophy of diagrams

  • Common uses: system architectures, sequence diagrams, build pipelines, database schemas, story/character relationships, internal engineering docs.
  • Some participants see diagrams as high-value for shared understanding; others argue most diagrams are “write-only,” produced mainly to satisfy process requirements and rarely consulted later.
  • There is skepticism about heavy diagramming cultures (e.g., legacy UML tooling), contrasted with appreciation for lightweight, quickly generated diagrams—especially when LLMs cut creation time to minutes.

On File Formats

Streamability, indexing, and updates

  • Several comments stress making formats streamable or at least efficient over remote/seekable I/O.
  • Strong debate about where to place indexes/TOCs:
    • Index-at-end favors append, in‑place updates, concatenation, large archives, and workflows like PDFs where small edits just append data.
    • Index-at-start favors non-seekable streams and immediate discovery of contents.
    • Some suggest hybrid or linked index structures; others note “it’s just a tradeoff, not one right answer.”
  • Many real‑world workflows recreate files rather than update in place, but formats supporting cheap updates still bring UX and performance wins.
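
A toy reader for the index-at-end layout, in the spirit of ZIP's end-of-central-directory record or PDF's trailer (the format itself is invented for this sketch):

```python
import struct

# The file ends with a fixed-size trailer: an 8-byte magic tag plus
# the u64 byte offset of the index, so readers seek to the end first.
TRAILER = struct.Struct("<8sQ")

def read_index_offset(path: str) -> int:
    with open(path, "rb") as f:
        f.seek(-TRAILER.size, 2)  # whence=2: relative to end of file
        magic, index_offset = TRAILER.unpack(f.read(TRAILER.size))
        if magic != b"TOYINDEX":
            raise ValueError("not a toy-index file")
        return index_offset
```

Appending new records followed by a fresh index and trailer leaves old data untouched, which is the PDF-style incremental-update property mentioned above.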

Compression and performance tradeoffs

  • Compression is “probably desired” for large data, but algorithm and level should match use: high effort only pays off for frequently copied/decompressed data.
  • General vs domain-specific compression is noted; specialized schemes may outperform generic ones in narrow domains.

Chunking, partial parsing, and versioning

  • Chunked/binary formats are praised for incremental/partial parsing and robustness, but commenters warn chunking alone doesn’t guarantee reorderability or backward/forward compatibility; explicit versioning is essential.
  • DER/ASN.1 is cited as an example of structured, partially skippable binary encoding; others find ASN.1 overkill for most custom formats.
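
The chunk pattern in its best-known form is PNG's length/type/payload/CRC framing, sketched below. As the comments note, skippability alone is not compatibility: an explicit version field or chunk is still required.

```python
import struct
import zlib

def write_chunk(f, ctype: bytes, payload: bytes) -> None:
    # Length-prefixed, typed, checksummed: readers can skip chunk
    # types they don't recognize and detect corruption via the CRC.
    assert len(ctype) == 4
    f.write(struct.pack(">I", len(payload)))
    f.write(ctype)
    f.write(payload)
    f.write(struct.pack(">I", zlib.crc32(ctype + payload)))
```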

Using existing containers (ZIP, SQLite, etc.)

  • Strong encouragement to reuse existing containers (ZIP, tar, sBOX, CBOR tags, HDF5) instead of inventing from scratch.
  • ZIP as a multipurpose container is praised; many complex formats (Office, APK, EPUB, etc.) already use it.
  • SQLite as a file format/container splits opinion:
    • Pro: great for composite/stateful data, metadata, queries, incremental updates, encryption extensions; multiple real projects use it successfully.
    • Con: overhead, complexity, blob limits, nontrivial format, possibly inferior to ZIP for simple archives or large monolithic blobs.
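
A minimal sketch of SQLite as an application file format (the schema is invented for illustration): one file on disk, queryable metadata, and incremental updates without rewriting the whole file.

```python
import sqlite3

con = sqlite3.connect("document.myapp")
con.executescript("""
    CREATE TABLE IF NOT EXISTS meta  (key  TEXT PRIMARY KEY, value TEXT);
    CREATE TABLE IF NOT EXISTS assets(name TEXT PRIMARY KEY, data  BLOB);
""")
with con:  # one transaction; a partial write never corrupts the file
    con.execute("INSERT OR REPLACE INTO meta VALUES ('format_version', '1')")
    con.execute("INSERT OR REPLACE INTO assets VALUES (?, ?)",
                ("thumbnail.png", b"\x89PNG..."))
con.close()
```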

Human-readable vs binary, numbers and floats

  • Consensus that human-readable formats should be extremely simple; otherwise binary is safer and clearer.
  • Textual numbers, especially floats, are called tricky to parse/round-trip correctly; binary IEEE754 with fixed endianness is seen as easier and less error-prone.
  • Ideas like hex floats or editor support for visualizing binary floats appear, but trade off readability or complexity.
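
A small illustration of the round-trip point: binary IEEE 754 is bit-exact by construction, while text is only safe if the serializer emits enough digits.

```python
import struct

x = 1 / 3

# Fixed width, fixed endianness: the identical bits come back out.
assert struct.unpack("<d", struct.pack("<d", x))[0] == x

# repr() emits a shortest round-trip decimal, so this is safe...
assert float(repr(x)) == x
# ...but a truncating format silently produces a different double.
assert float(f"{x:.6f}") != x
```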

Directories vs single files, diffability, and tooling

  • Some advocate directory-based “formats” (structured folders, or unzipped equivalents of ZIP-based formats) for better version control, experimentation, and debugging; ZIP can then be an export format.
  • Others note that dumping runtime data (pickle, raw object graphs, SQLite snapshots) is convenient but harms portability and can enlarge attack surface; deserializers must be strictly bounded by a spec.

File extensions and type detection

  • Suggestion: long, app-specific extensions (e.g., .mustachemingle) to minimize collisions.
  • Counterpoints: Windows hides “known” extensions; Linux often relies on MIME/magic; long extensions can hurt UX (truncation, typing).
  • Agreement that clear, specific extensions like .sqlite are still useful; distinction between generic shared formats and app-specific ones is highlighted.

Design pitfalls and backward compatibility

  • Warnings against over-clever bit-packing (splitting flags across nibbles/bytes) that later prevents extension. Real examples show such schemes becoming brittle.
  • Concern that some parsers ignore documented flexibility (e.g., header growth with offset fields) and hard-code assumptions, breaking future versions.
  • One view holds that human-editable formats can tempt developers to skip proper UI support, degrading usability.
  • Emphasis on documenting formats thoroughly; good specs and tables clarify intent more than code/flowcharts alone.

The WinRAR approach

WinRAR’s Business Model and Revenue

  • Most revenue reportedly comes from corporate licenses; consumers largely use it unpaid.
  • Public filings for the German company behind WinRAR suggest on the order of ~€1M earnings in 2023.
  • Several commenters note their companies bought WinRAR licenses and still rely on it for “mission‑critical” workflows.
  • Others argue the real “WinRAR approach” is less about goodwill and more about making license‑compliance‑driven organizations pay while everyone else uses it freely.
  • The brand now leans into meme status with community management and merchandise.

Licensing, Compliance, and Everyday Violations

  • Many companies are said to violate the 30‑day trial terms, just leaving WinRAR (and other paid tools) in perpetual use.
  • Some workplaces strictly prohibit unapproved third‑party software and would treat such violations seriously; others are lax or ignorant.
  • This triggers broader discussion about people not caring about licensing unless they personally bear risk or cost.

Why Use WinRAR/RAR vs 7-Zip or Others?

  • Some are puzzled why anyone pays for WinRAR when 7‑Zip is free, open source, and uses LZMA with strong compression.
  • Defenders cite:
    • Better Windows integration and ergonomics.
    • Rich archive features: recovery records/parity, good CLI, handling of archive flags, NTFS streams, ACLs, hard links, and built‑in Blake2 hashes.
    • Stable, non-“enshittified” UI and long history.
  • Benchmarks shared: 7‑Zip can compress ~6% smaller but much slower at extreme settings; for most people, convenience beats a small compression gain.
  • Some say they rarely see .rar now; others point out large legacy archives and “scene” rules that historically standardized on RAR (multi‑part archives, floppies, unreliable connections).

Piracy, Culture, and Shareware Patterns

  • Multiple comments describe 80s–90s Eastern and Western Europe (and elsewhere) as heavily pirated ecosystems, including businesses and governments.
  • Piracy is framed as both economic necessity and a growth hack (e.g., Microsoft tolerating it early to build dominance).
  • WinRAR’s permissive trial is compared to classic shareware: get ubiquitous at home, monetize businesses later.
  • Some now consciously pay for tools (licenses, donations, books) as a reaction against that culture and against today’s subscription/DRM backlash.

Nagware vs Goodwill and Related Models

  • Disagreement over whether WinRAR “runs on goodwill”:
    • One side: it’s essentially nagware; you pay to stop the startup dialog.
    • Other side: the nag is mild (hit Escape) and functionally it’s unlimited, which feels generous.
  • Similar “soft paywall” or generous-trial models are cited:
    • Paint Shop Pro, Sublime Text, Reaper, Renoise, Forklift, ShareX, KeePassXC, and others.
    • Immich’s model (fully usable, optional license) is praised as especially user‑friendly and aligned with open source.
  • Many commenters say these approaches make them more willing to pay, especially once they’re no longer broke.

Alternatives and Cross‑Platform Notes

  • On macOS, users missing WinRAR/Total Commander mention:
    • BetterZip for archive browsing and Quick Look integration.
    • Commander One, Marta, Transmit, and Double Commander as dual‑pane/file‑manager replacements.
    • Some still run WinRAR/Total Commander under Wine on Linux/macOS.

Instagram Addiction

Perceived Harms and Compulsive Use

  • Many describe Reels/Shorts/TikTok as “instant addiction,” especially the endless-scroll, autoplay, algorithmic feed that exploits visuals and quantified status (followers, likes).
  • Some share concrete episodes of losing hours to shorts, or of toddlers already glued to Reels while still drinking from a bottle.
  • Several comment that heavy social media use leaves them feeling worse afterward, not better, despite occasional amusement or dopamine from likes.

Is It Really “Addiction”?

  • One group objects to calling Instagram use an “addiction,” arguing it misuses medical terminology and risks over-medicalizing behavior.
  • Others counter with behavioral addiction concepts (e.g., gambling), and offer criteria like unwanted, frequent, hard-to-control use that displaces more important activities.
  • There’s mention of common reward pathways between substance and behavioral addictions, with GLP‑1 drugs cited as affecting both.

Capitalism, Design Ethics, and Regulation

  • Strong frustration at platforms (Meta, Google, TikTok, Reddit, etc.) for making addictive UX (shorts, reels, autoplay, pushy notifications) hard or impossible to disable.
  • Debate over whether this is “capitalism’s evil algorithm” or just human greed manifesting under any system.
  • Some want a “digital consumer bill of rights”: disabling autoplay, algorithmic feeds, fake notifications; enforcing chronological feeds; EU-style regulation is predicted by some.
  • Others defend site owners’ freedom to design as they wish; users should vote with their feet or use browser-side modifications.

Coping Strategies and Tools

  • Common tactics: deleting apps, using only desktop versions, disabling notifications, DNS blocking, using RSS instead of in-app subscriptions, or time-based locks (e.g., a systemd service or an MDM-based phone blocker; see the sketch after this list).
  • Numerous technical tools are mentioned: browser extensions (Unhook, user scripts), privacy-friendly mobile clients (NewPipe, Freetube), patched/“distraction-free” Instagram builds, and DNS or Pi‑hole–style blocking.
  • Non-technical habits: reading books with the phone in another room, journaling, phoneless walks, physical notebooks, and daily to-do lists before “guilt-free scrolling.”
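
As a sketch of the systemd-based lock mentioned above, a hypothetical oneshot service plus timer that adds hosts-file blocks each evening (paths, times, and domains are illustrative; a companion morning unit would remove the entries again):

```ini
# /etc/systemd/system/block-instagram.service
[Unit]
Description=Block Instagram via /etc/hosts

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo "0.0.0.0 instagram.com" >> /etc/hosts'
ExecStart=/bin/sh -c 'echo "0.0.0.0 www.instagram.com" >> /etc/hosts'

# /etc/systemd/system/block-instagram.timer
[Unit]
Description=Nightly Instagram block

[Timer]
OnCalendar=*-*-* 21:00:00

[Install]
WantedBy=timers.target
```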

Boredom, Attention, and Long-Term Effects

  • Several emphasize reclaiming boredom (walking, commuting, quiet time) as crucial for creativity and focus, contrasting it with constant feed consumption.
  • Concern that ubiquitous short-form content is degrading attention spans, making reading difficult and potentially harming work, learning, and mental health over decades.

Individual Differences and Article Style

  • Some say they’re largely immune or even repelled by shorts, suggesting the algorithms simply haven’t “hit” their interests.
  • Others describe profound struggles and shame around compulsive scrolling.
  • Multiple commenters found the article’s all-lowercase style off-putting or harder to read, comparing it to the unstructured flow of an infinite feed.

Google AI Ultra

Pricing & Perceived Value

  • Many feel $250/month is “nuts” or “sticker shock,” especially with multiple AI subs (OpenAI, Claude, Perplexity, etc.) already in the mix.
  • Others argue it’s equivalent to under an hour of senior dev/consultant time and easily justifiable if it saves even a few hours/month, especially for businesses.
  • Some see it as an unsustainable, loss-leading price tier to rate‑limit usage rather than a real market-clearing price.

Bundling, Plan Design & Confusion

  • Strong criticism of the bundle: users want higher LLM/coding limits and Deep Think, but not 30TB storage, image/video gen, or YouTube Premium.
  • Several see YouTube Premium in a “work” plan as odd, especially since it’s not a family plan. Comparisons to “Comcast triple play” and cross‑sell lock‑in.
  • Confusion over upgrade paths from existing Google One/YouTube subs and unclear differentiation between AI Pro vs Ultra on the marketing pages.

Economics & Sustainability of AI

  • Repeated reminders that LLM inference is expensive and current prices across the industry are heavily subsidized.
  • Debate over whether competition will bring prices down vs. a “money spigot” eventually drying up, causing prices to spike and progress to slow.
  • Some compare to past specialist software becoming cheaper over time; others think SOTA AI may remain expensive due to training and GPU costs.

Competition, Model Churn & Multi‑Provider Abstractions

  • Users hesitate to commit because SOTA leadership keeps shifting (OpenAI ⇄ Gemini ⇄ Anthropic, etc.), encouraging “subscribe for a month, then switch.”
  • Suggestions for middlemen/wrappers (LiteLLM, OpenRouter, others) that unify multiple providers and abstract away subscription switching and APIs.
  • Concern that top reasoning modes (e.g., “Deep Think”) being paywalled or restricted will fragment features between UI and API.
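
The wrapper idea in practice: a router such as OpenRouter puts many vendors behind one OpenAI-compatible API, so "switching providers" becomes a model-string change. The model slugs below are illustrative; check the router's current catalog:

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="sk-or-...",  # key issued by the router, not by each vendor
)

for model in ("openai/gpt-4o", "google/gemini-2.5-pro",
              "anthropic/claude-3.7-sonnet"):
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Say hello in five words."}],
    )
    print(model, "->", resp.choices[0].message.content)
```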

Enterprise vs Individual Adoption

  • Split views: some say the target is corporations and self‑employed devs; others note many companies still ban sending proprietary data to public LLMs regardless of ToS.
  • Expectation that enterprises will negotiate bulk pricing and that AI “seats” will roll out first to high‑value roles.
  • Concern that every SaaS vendor is separately upselling AI add‑ons, making overall AI licensing hard to justify at scale.

Privacy, Trust & Data Use

  • Deep distrust of Google’s data practices; some refuse to use it at all, preferring OpenAI/Anthropic or smaller “privacy‑oriented” services.
  • Desire for a paid tier that guarantees no training on chats while still keeping history and integrations—current “no training” modes remove too much functionality.
  • Skepticism that ToS promises are enforceable or verifiable; references to past big‑tech privacy misrepresentations.

Access, Inequality & Lock‑In

  • Concern that $200–250/month tiers will be accessible mainly to rich countries and corporations, exacerbating global inequality in productivity.
  • Others argue tech historically trickles down and will get cheaper; critics note AI’s massive fixed costs and potential for a permanent elite tier.
  • 30TB storage is seen as a “soft lock‑in”: once filled, users are effectively stuck.

Product Quality & Missing Pieces

  • Some early testers describe components (e.g. Firebase Studio, media generation, audio) as poor or unreliable.
  • Frustration that Google’s touted models don’t deliver obvious productivity wins in core tools (e.g., Gemini in Sheets can’t reliably help with formulas or actions).
  • Strong demand for better coding tools (CLI/agent akin to Claude Code or Codex) and broader, cheaper access to Deep Think and Flow, ideally via API.

Ads vs Subscriptions & “Enshittification” Fears

  • Discussion that Google is shifting from pure ad model toward subscriptions, but many expect a future of “subscriptions plus ads” rather than one replacing the other.
  • Worry that current generous/free access is “pre‑enshittification,” after which limits will tighten and quality may degrade to upsell higher tiers.

Overall Sentiment

  • Mixed but leaning negative: excitement about Gemini 2.5’s capabilities and Flow/Deep Think is outweighed by pricing shock, bundling frustration, distrust of Google, and fatigue with rapid, paywalled AI product churn.

Red Programming Language

Heritage and Appeal

  • Seen as a spiritual successor to REBOL with a strong pedigree; this history keeps some people interested even when the language itself feels “magical” or scary from an engineering standpoint.
  • Several commenters used REBOL for small automation/web-scraping tasks and liked it, but wouldn’t choose it for large systems today.
  • Some still write most personal projects in REBOL; others are curious about Red but haven’t migrated.

Syntax, Semantics, and Mental Model

  • Many find Red/REBOL uniquely hard to internalize; code can be hard to read without knowing how arguments are grouped.
  • Others explain it as: data “blocks” like Lisp/Logo quotations; Polish-notation–style evaluation; functions with fixed arity; almost everything is an expression; no mandatory parentheses.
  • Infix operators are a special case: they’re evaluated strictly left-to-right with higher precedence than function calls, leading to surprising expressions unless you know the rules.
  • A team member emphasizes: “everything is data until evaluated,” many literal datatypes, no keywords, and “free-ranging evaluation” (no parentheses around arguments) as core design choices.

Dialects / DSLs and parse

  • Red calls embedded DSLs “dialects”; parse (for pattern matching/grammars) and VID (GUI description) are major built‑ins.
  • parse operates on values and types, not just characters, and supports typesets (groups of related types), which some see as powerful and simpler than regexes.
  • A long subthread debates DSLs vs APIs: DSLs praised for expressiveness in domains (regex, SQL, templating, shell); criticized as brittle, poorly tooled, and hard to maintain compared to conventional APIs.

Tooling, Docs, and Website

  • Site is perceived as dated and makes it hard to find substantial, idiomatic examples; some only discover example code by digging in the GitHub repo.
  • Documentation is widely considered thin and fragmented. Commenters note promises that “proper docs” would arrive at 1.0, but the project is still at 0.6.x with similar doc quality years later.

Maturity, Performance, and Platform Support

  • Version progress is seen as very slow (0.6.4 in 2018 to 0.6.6 in 2025); some conclude the ambitious roadmap was “stratospheric.”
  • One user reports a “hello world” taking ~36 seconds to compile; the team member acknowledges compile times are slow and not yet a priority.
  • Red compiles to an internal low‑level dialect (Red/System) with direct machine code generation and cross‑compilation, but is still 32‑bit only, which is a deal‑breaker for some platforms.

Funding, Crypto, and Governance Concerns

  • The project’s flirtation with crypto/ICO funding and a blockchain dialect turned several long‑time followers away.
  • There is speculation about influence from funders and a move to China; some commenters believe this derailed the project, while others label that pure speculation.
  • A phishing‑blocklist warning on the site sparks side discussion about community security tools vs “nanny state” overreach.

Overall Sentiment

  • Technically curious language with elegant examples, powerful DSL support, and an unusual evaluation model.
  • Enthusiasts praise its conceptual depth and cross‑stack ambitions; skeptics see stalled progress, poor docs, 32‑bit limitations, and odd funding choices as reasons not to invest.

Gemma 3n preview: Mobile-first AI

Claims, Benchmarks & “What’s the Catch?”

  • Gemma 3n is pitched as a small, “effective 2–4B” on-device model with performance near large cloud models (e.g., Claude Sonnet) on Chatbot Arena.
  • Several commenters are skeptical: they argue Arena rankings increasingly reward style and “authoritative-sounding” answers rather than real problem-solving.
  • Others note that leaderboard or aggregate scores rarely predict performance on specific “hard tasks”; you still need task-specific evaluation.
  • An Aider coding benchmark initially suggested parity with GPT‑4-class models, but it was run at full float32 (high RAM use) and later disappeared from the model page, increasing skepticism.

Architecture, Parameters & Per-Layer Embeddings (PLE)

  • E2B/E4B models have more raw parameters than the “effective” count; the trick is parameter skipping and PLE caching so only part of the weights sit in RAM.
  • There’s confusion about what PLE actually is: the blog is vague, there’s no clear paper yet, and people speculate it’s some form of low‑dimensional token embedding fed into each layer and cached off-accelerator.
  • MatFormer is called out as a separate mechanism for elastic depth/width at inference, enabling “mix‑n‑match” submodels between E2B and E4B.
  • Unclear so far whether the architecture is straightforwardly compatible with llama.cpp and similar runtimes; licensing may also matter.

On-Device Usage & Performance

  • Multiple Android users report Gemma 3n running fully local via Google’s Edge Gallery app, with no network required after download.
  • Performance varies widely by device and accelerator:
    • Older phones can take many minutes per answer.
    • Recent high-end phones (Pixels, Galaxy Fold/Z) get several tokens per second, especially when using GPU; CPU is slower but still viable.
  • Vision works and can describe images and text in photos reasonably well, though speed and accuracy depend on hardware and image quality.
  • NPUs generally aren’t used yet; inference is via CPU/GPU (TFLite/OpenGL/OpenCL).

Capabilities, Limitations & Safety

  • Users report strong instruction-following and decent multimodal understanding for the size, but noticeably less world knowledge than big cloud models.
  • Examples of obvious logical/factual failures (e.g., size comparisons, object misclassification) show it’s far from Sonnet or Gemini quality.
  • Smaller models appear easier to jailbreak around safety filters (e.g., roleplay prompts).

Intelligence, Hype & Use Cases

  • Some are excited about “iPhone moment” implications: powerful, private, offline assistants, accessibility (for visually impaired users), inference caching, and local planning agents.
  • Others argue LLMs still resemble “smart search” or sophisticated memorization, not genuine understanding or reasoning; they expect hype to cool.
  • There’s a broader hope that OS-level shared models (Android, Chrome, iOS, Windows) will prevent every app from bundling its own huge LLM and ballooning storage use.

Veo 3 and Imagen 4, and a new tool for filmmaking called Flow

Reactions to Veo 3 / Imagen 4 Demos

  • Many find the tech leap impressive, especially synced audio+video and Flow’s UX; some Reddit examples are called “jaw-dropping.”
  • Others are underwhelmed: clips still read as “AI” (glow, smoothness, odd motion, weak physics), with uncanny-valley humans and inconsistent environments.
  • Owl/moon clip, old man, and origami piece trigger mixed reactions—impressive rendering but eerie, aggressive, or off in subtle ways.
  • Several note these are cherry‑picked seconds, not evidence that the tech can sustain long, coherent scenes.

From Clips to Full Movies / Oscars Debate

  • Optimists predict fully AI‑generated feature films (possibly Oscar‑winning) within ~5 years; skeptics think Academy politics and current model weaknesses make that unlikely.
  • Skeptics argue: writing, editing, sound design, and direction still need heavy human intervention; “entirely AI” collapses as soon as you allow that intervention.
  • Some expect AI to first win in technical or music categories (score/song) rather than Best Picture or Screenplay.

Hollywood, Economics, and Personalization

  • One camp: this “fixes” Hollywood’s problems—no expensive stars, unions, or on‑set drama; endless cheap sequels and IP recycling fit current studio incentives.
  • Counter‑camp: if anyone can generate movies at home, Hollywood’s centrality erodes; future may be ultra‑personalized content streams via platforms like YouTube/TikTok.
  • Discussion of actors licensing likenesses, eventual fully synthetic celebrities, and manufactured off‑screen drama as part of the product.

Tools, Access, and Model Quality

  • Confusion around how to actually use Veo 3 / Imagen 4; Google’s product/rollout UX seen as opaque, with country and quota restrictions.
  • Comparisons to Sora, Runway, Wan/Hunyuan: general sense that big proprietary models are ahead in video, but open tools plus composable workflows (e.g., ComfyUI, ControlNet) still matter.
  • Prompt adherence remains a major weakness; community benchmarks show Imagen 4 not clearly ahead of Imagen 3 and behind some competitors on strict specification following.

Creativity, Authorship, and Gatekeeping

  • Deep, polarized debate:
    • One side: AI is just another tool; real creativity lies in ideas, prompt craft, iteration, and editing. It democratizes filmmaking, illustration, and animation for those lacking money or training.
    • Other side: outsourcing production to opaque models trained on unconsenting artists’ work hollows out craft, floods the commons with derivative “slop,” and erodes the meaningful struggle that makes art and skill development fulfilling.
  • Some argue future “directors” of AI productions will still need strong vision and taste; others say current AI “users” are more clients than creators, with the model as de facto artist.

Jobs, Society, and Non‑Creative Work

  • Broad anxiety about job loss in VFX, animation, illustration, and broader creative sectors; comparisons to Luddites, human “computers,” and earlier automation waves.
  • Several argue the real problem is economic structure, not technology: productivity gains accrue to capital without adequate safety nets.
  • Others wish AI progress targeted mundane physical work (robots doing dishes, construction, care tasks) rather than saturating screens with more media.

Misinformation, Deepfakes, and Porn

  • Concern that realistic AI video plus cheap credits will supercharge scams, fake news, and harassment (especially deepfaked sexual content and minors).
  • Some note we’re already seeing AI clips pass as real on TikTok/YouTube; future trust in video evidence and news is expected to deteriorate.
  • A minority argue that AI‑generated CSAM might displace real abuse material; others see this as ethically and legally fraught.

Naming, Branding, and Cultural Friction

  • The “Flow” name irritates some, seen as trading on the recent Oscar‑winning animated film Flow (made with open‑source tools) and, more broadly, on the same creatives AI may displace.
  • More generally, anger at big labs: perceived as appropriating artists’ work, firing ethics researchers, and pushing tools that benefit studios and platforms while harming working creatives.

Show HN: A Tiling Window Manager for Windows, Written in Janet

Overall reception

  • Very positive response to a tiling window manager for Windows written in Janet.
  • Praised as “cool”, “awesome”, and exciting, especially for people who like WMs or Lisp, or who reluctantly have to use Windows.
  • Several plan to adopt it as a daily driver or at least “take it for a spin.”

Community and ecosystem

  • The Windows tiling WM scene (e.g., this project, komorebi, GlazeWM) is described as unusually friendly and collaborative, with authors openly praising each other.
  • Some compare this favorably to, or on par with, the long-standing Linux WM ecosystem (StumpWM, i3, dwm, ratpoison, EXWM).

Janet and Lisp-specific advantages

  • Multiple comments highlight Janet as practical, small, fast, and inspired by Clojure and Lua while feeling “better” in practice.
  • The REPL and image-based, interactive development are cited as key reasons to use a Lisp for a long-running, stateful service like a window manager.
  • Structural editing plus a live REPL are framed as Lisp’s “killer features,” turning development into rapid, low-friction experimentation.
  • Some deep side discussion: Common Lisp’s condition/restart system, images vs files, undo/audit trails, and Smalltalk/Guile comparisons.
  • Tooling for Janet is seen as weaker than ideal but improving (e.g., LSP work).

Design and behavior of Jwno

  • Main differences to komorebi:
    • Manual tiling by default vs dynamic tiling.
    • Uses native Windows virtual desktops instead of its own workspaces.
    • Self-contained rather than IPC/CLI-driven.
  • Internal model: Root → Virtual Desktops → Monitors → Frames → Windows (see the sketch after this list). All monitors switch together when changing Windows desktops.
  • Cookbook shows how to adapt layouts for ultrawide monitors or reserve space (e.g., for on-screen keyboards / accessibility).
  • Can call Win32 APIs from scripts for low-level control.
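
A minimal sketch of that containment hierarchy, written as TypeScript interfaces purely for illustration (Jwno itself is written in Janet, and every name below is invented, not its actual API):

```typescript
// Illustrative model of Jwno's tree:
// Root → Virtual Desktops → Monitors → Frames → Windows.
// All names here are hypothetical; Jwno's real (Janet) API differs.

interface ManagedWindow {
  hwnd: number;             // native Win32 window handle
  title: string;
}

// With manual tiling, a frame is split explicitly; it either holds
// child frames or leaf windows.
interface Frame {
  children: Frame[];        // sub-frames after a split
  windows: ManagedWindow[]; // windows tiled into this frame
}

interface Monitor {
  rootFrame: Frame;         // each monitor carries its own frame tree
}

// Jwno reuses native Windows virtual desktops rather than inventing
// its own workspaces; switching desktops moves all monitors together.
interface VirtualDesktop {
  monitors: Monitor[];
}

interface Root {
  desktops: VirtualDesktop[];
  activeDesktop: number;    // index into desktops
}
```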

UI hinting and accessibility

  • UI-hint / “walk the UI tree” feature gets special praise as a powerful keyboard-accessible mechanism.
  • One user reports issues with AltGr-based keybindings and labels obscuring targets; the author investigates and suggests workarounds and configuration options.

Tiling WMs, Windows, and nostalgia

  • Many lament weak built-in window management on Windows, though some defend Win+Arrow and PowerToys’ FancyZones.
  • Nostalgic thread about alternative Windows shells (bb4win, litestep, etc.) and their role in customization rather than lasting “innovation.”
  • Some discussion of modal dialogs and floating windows; others note modern tiling WMs offer floating “groups” as a workaround.

Show HN: 90s.dev – Game maker that runs on the web

What 90s.dev Is (and Isn’t)

  • Several commenters say they don’t really understand what it is from the article alone.
  • The author describes it as: an API around a 320×180 web canvas, focused on building game-making tools and games, with sharing built-in (the fixed-backbuffer idea is sketched after this list).
  • It’s positioned as “like a more usable PICO‑8” for prototyping, but with TypeScript/TSX and IDE-like comforts; more a platform than a conventional engine.
  • Confusion arises because the article alternates between calling it a “game maker”, “not a game maker”, and “a platform”, which some find inconsistent.
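
The fixed-backbuffer idea itself is easy to approximate with plain web APIs; a generic sketch of the technique (this is not 90s.dev's actual API):

```typescript
// Generic sketch of a fixed 320×180 backbuffer scaled up to fill the page.
// Uses only standard DOM/canvas APIs; 90s.dev's real API is different.

const canvas = document.createElement("canvas");
canvas.width = 320;                         // logical, low-res coordinates
canvas.height = 180;
canvas.style.width = "100vw";               // let CSS scale it up
canvas.style.imageRendering = "pixelated";  // keep pixels crisp when upscaled
document.body.appendChild(canvas);

const ctx = canvas.getContext("2d")!;
ctx.imageSmoothingEnabled = false;

function frame(t: number) {
  ctx.fillStyle = "#222";
  ctx.fillRect(0, 0, 320, 180);             // draw in backbuffer coordinates
  ctx.fillStyle = "#6f6";
  ctx.fillRect(152 + 40 * Math.sin(t / 500), 82, 16, 16);
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);
```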

Aesthetic and Nostalgia

  • Strong positive reaction to the 90s desktop GUI look: people mention Windows 95 vibes, mouse trails, and “simpler times.”
  • Some purists note 16:9 and 320×180 are not historically 90s (320×200 at 4:3 was the era’s norm); others don’t care and just enjoy the style.

Resolution and Constraints

  • 320×180 is chosen partly to mimic an existing modern game and to fit code + tools on screen.
  • The system can actually resize via a config file, but this is undocumented; debate ensues:
    • One side: constraints are powerful and shouldn’t have easy “escape hatches.”
    • Other side: hiding capabilities reduces flexibility and should at least be documented.
  • The compromise is to treat such features as “easter eggs” for power users.

Onboarding, Demos, and Docs

  • Multiple users say it’s hard to get started and the pitch is unclear.
  • People ask for: a very simple landing-page explanation, example games, and a “build a tiny game” walkthrough or video.
  • Currently, only built-in apps (e.g., fontmaker, paint) exist; no finished games yet, which some see as a weakness for a “game maker” launch.

Browser Support and Technical Issues

  • Works in Chrome; breakages reported in Firefox ESR and a Firefox-based browser (ServiceWorker and iterator quirks) get quickly patched.
  • A bigger limitation: Firefox lacks showDirectoryPicker, so the fast “mount a folder” workflow is Chrome-only. A manual HTTP-server workaround is suggested (see the sketch after this list).
  • Some users hit cross-origin issues when using embedded iframes; the fix is to open the OS in its own subdomain/tab.
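
showDirectoryPicker is part of the (currently Chromium-only) File System Access API, which is why the fast path cannot work in Firefox; a feature-detection sketch along these lines illustrates the limitation and the fallback:

```typescript
// Detect the File System Access API; Firefox (and Safari) lack
// showDirectoryPicker, hence the Chrome-only "mount a folder" flow.
// Note: showDirectoryPicker must be called from a user gesture (e.g. a click).

async function mountFolder(): Promise<FileSystemDirectoryHandle | null> {
  if (!("showDirectoryPicker" in window)) {
    // Fallback: serve the project folder over a local HTTP server instead,
    // as suggested in the thread (any static file server will do).
    console.warn("showDirectoryPicker unsupported; use a local HTTP server.");
    return null;
  }
  // Cast because default lib.dom typings may not include this API yet.
  const dir = (await (window as any).showDirectoryPicker()) as FileSystemDirectoryHandle;
  console.log("Mounted folder:", dir.name);
  return dir;
}
```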

Ecosystem, Collaboration, and Licensing

  • Strong emphasis on sharing apps, libraries, and assets to enable collaborative game development.
  • Plans to move from GitHub/npm-only imports toward direct HTTPS module imports and possibly bare-specifier support (illustrated after this list).
  • License clarified as MIT and added to source headers.
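
Since browsers resolve absolute ES module URLs natively, the planned direct-HTTPS direction could look like this (URL and export name are invented for illustration):

```typescript
// Hypothetical direct HTTPS module import; the URL and export name are
// invented. Browsers resolve absolute https:// specifiers natively, while
// bare specifiers ("sprite") would additionally need an import map
// (<script type="importmap">) to map them to URLs.
import { makeSprite } from "https://example.com/90s-libs/sprite.js";

const hero = makeSprite(16, 16);
```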

Launch Timing and Feedback Culture

  • The author feels they launched too early, without enough polished tools, games, and docs to show the power of the APIs.
  • Many commenters push back, encouraging early/iterative shipping and treating this as a solid “Show HN” stage rather than a finished product.

OpenAI Codex hands-on review

Everyday usefulness & limitations

  • Many see Codex as valuable for small, repetitive changes across many repos (README tweaks, link updates, minor refactors), treating it like a “junior engineer” that needs close review.
  • Reported success rates around 40–60% on small tasks are viewed as acceptable; for larger or more conceptual work, it often degrades code quality (e.g., making fields nullable, adding @ts-nocheck) to “make it compile,” increasing technical debt.
  • It’s praised for generating tests and doing “API munging,” and for quickly surfacing relevant parts of an unfamiliar codebase, but multi-file patches often get stuck or go in circles.

Integrations, UX, and environment constraints

  • GitHub integration and workflow are widely criticized: awkward PR flows, flakiness in repo connection, slow setup, and poor support for iterative commits/checkpoints.
  • Lack of network access and the inability to apt install packages or run containers/Docker are seen as major blockers for real-world projects, especially those relying on external services or LocalStack-style setups.
  • Users want checkpointing lighter than full git commits and better support for containers and search; current “automated PR” flows are viewed as too brittle to trust.

Workflow patterns and prompt engineering

  • Effective use often involves (see the sketch after this list):
    • Running many parallel instances/rollouts of the same prompt.
    • Selecting the best attempt and iteratively tightening prompts.
    • Splitting work into small, parallelizable chunks.
  • Some find this loop 5–10x more productive for certain tasks; others find prompt-tweaking overhead and “context poisoning” negate benefits.
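
A minimal sketch of that fan-out-and-select loop, with invented stand-ins for the agent call and the scoring step (there is no such public Codex API; this only illustrates the pattern):

```typescript
// Hypothetical best-of-N rollout loop. runAgentTask and scoreAttempt are
// invented stand-ins, not a real Codex API; swap in your own agent client.

async function runAgentTask(prompt: string): Promise<string> {
  // Stand-in for one agent rollout; returns a fake "diff" for illustration.
  return `// attempt for "${prompt}" (${Math.random().toFixed(3)})`;
}

function scoreAttempt(diff: string): number {
  // Stand-in scorer; in practice, run the test suite / linters and
  // rank attempts by how many checks pass.
  return diff.length;
}

async function bestOfN(prompt: string, n = 5): Promise<string> {
  // Launch n identical rollouts in parallel, then keep the best-scoring one.
  const attempts = await Promise.all(
    Array.from({ length: n }, () => runAgentTask(prompt)),
  );
  return attempts.reduce((best, cur) =>
    scoreAttempt(cur) > scoreAttempt(best) ? cur : best,
  );
}

// Usage: tighten the prompt and re-run when no attempt scores well enough.
bestOfN("Rename config field `foo` to `bar` across the repo").then(console.log);
```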

Non-developers, low‑code, and quality concerns

  • There’s interest in letting non-devs use Codex for content/CSS fixes while devs review the resulting PRs.
  • Several commenters warn that even “small” changes can have hidden dependencies (data, PDFs, other services).
  • Accessibility, responsiveness, and cross-platform issues are flagged as areas where LLMs readily introduce regressions and can’t be reliably guarded by linters or prompts alone.

Comparisons to other tools

  • Compared to Claude Code, Codex is described as more conservative, slower per task, but able to run many tasks in parallel.
  • Some users find Claude and Gemini’s “attach a repo and chat” model, combined with large context windows and web search, more effective for debugging and complex reasoning today.
  • Cursor and other IDE agents are seen as great for one-shotting small features; when they fail mid-stream, it can be faster to write code manually.

Automation, jobs, and economics

  • The thread contains an extensive, conflicting debate about whether tools like Codex will:
    • Mostly augment engineers (doing more “P2” work, enabling more software overall).
    • Or materially displace software developers, especially juniors, with many comparing it to past waves of automation in farming and manufacturing.
  • Some argue productivity gains historically haven’t flowed primarily to workers and fear worse conditions or unemployment for many engineers.
  • Others counter that:
    • Coding has always automated others’ jobs; developers may likewise have to adapt or switch careers.
    • High-skill engineers will remain in demand to design systems, supervise agents, review code, and build/maintain agentic infrastructure.
  • There is specific concern about how new engineers will gain experience if entry-level coding work is offloaded to agents.

Security, naming, and adoption concerns

  • Cloning private repos into Codex sandboxes raises worries about exposing trade secrets, though some acknowledge this may be analogous to earlier cloud-source-control fears.
  • Confusion around model and product naming (Codex legacy model vs new Codex tool; “o3 finetune”) is noted as an industry-wide problem that hinders understanding and trust.

Overall sentiment

  • Net sentiment is cautiously positive on Codex as an assistant for small, well-scoped tasks and background agents.
  • There is broad skepticism about fully hands-off “agent does everything” workflows, current UX/integration quality, and the long-term labor implications.