Hacker News, Distilled

AI-powered summaries for selected HN discussions.


Postgres LISTEN/NOTIFY does not scale

Core issue: LISTEN/NOTIFY and global commit lock

  • Commenters reiterate the article’s finding: issuing NOTIFY inside a transaction causes a global lock at commit, effectively serializing commits that use it.
  • Several people note this behavior is not mentioned in the Postgres docs, which they find surprising for a lock-heavy feature.
  • Some ask for clarification whether only transactions that called NOTIFY are serialized; the exact scope is left somewhat unclear in the thread.

“Doesn’t scale” vs “used incorrectly”

  • Many argue the feature scales fine for typical loads and that the problem is using NOTIFY for every transaction at very high write concurrency.
  • Viewpoint 1: this is primarily a misuse and lack of due diligence (no debouncing, no grouping, no outbox), not an indictment of LISTEN/NOTIFY in general.
  • Viewpoint 2: even if misused, it’s a realistic trap; the doc and API design make it easy to step on a landmine under growth.

Design limitations of LISTEN/NOTIFY

  • No authorization model: you can’t restrict which roles may LISTEN/NOTIFY on which channels.
  • Payload constraints and typing are absent; some wish for a CREATE CHANNEL-style DDL with authz and type checking.
  • LISTEN/NOTIFY is lossy (missed messages if nobody is listening) and awkward with connection poolers.
  • A benchmark is cited: latency grows ~linearly with number of idle listeners (O(N) backend scan), making many listeners problematic.

Alternative patterns within Postgres

  • Notify outside the main transaction, with:
    • A transactional outbox table plus a sweeper, accepting duplicates and eventual consistency.
    • Notifications used only as “wake up” hints; consumers then poll canonical tables.
  • Use tables as queues via SELECT … FOR UPDATE SKIP LOCKED, backoff polling, and sometimes partitioning to reduce dead-tuple impact.
  • Multiple people describe hard problems here: dead tuples, autovacuum tuning, planner misestimation, HOT updates, and queue-table design. Extensions like pgmq/pgflow and higher-level frameworks (e.g., DBOS, Chancy) are mentioned.
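The SELECT … FOR UPDATE SKIP LOCKED pattern above can be sketched as a worker loop: claim a small batch with a locking query, process it, and back off exponentially while the queue is empty. A minimal sketch with hypothetical table and column names; the database call is abstracted behind `fetch` so the loop logic stands alone:

```python
import time

# SQL a worker might run to claim jobs without blocking other workers
# (the `jobs` table and its columns are hypothetical):
CLAIM_SQL = """
UPDATE jobs
   SET state = 'running'
 WHERE id IN (
     SELECT id FROM jobs
      WHERE state = 'queued'
      ORDER BY id
      LIMIT %(batch)s
      FOR UPDATE SKIP LOCKED
 )
RETURNING id;
"""

def backoff_delays(base=0.1, cap=5.0):
    """Yield exponentially growing poll delays (seconds), capped."""
    delay = base
    while True:
        yield delay
        delay = min(delay * 2, cap)

def poll_loop(fetch, sleep=time.sleep, max_idle_polls=5):
    """Poll `fetch` (e.g. a function executing CLAIM_SQL) until it has
    come back empty max_idle_polls times in a row; back off while idle."""
    idle, done = 0, []
    delays = backoff_delays()
    while idle < max_idle_polls:
        jobs = fetch()
        if jobs:
            done.extend(jobs)
            idle = 0
            delays = backoff_delays()   # work found: reset the backoff
        else:
            idle += 1
            sleep(next(delays))
    return done
```

Because SKIP LOCKED silently skips rows another worker has claimed, many workers can run this loop concurrently without lock contention serializing them.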

WAL / logical decoding and external systems

  • Several recommend reading the WAL / logical replication instead of LISTEN/NOTIFY, using:
    • pg_logical_emit_message() for an outbox without table bloat.
    • Logical decoding consumers (e.g., via Debezium) or streaming replication.
  • Others advocate dedicated infra for pub/sub and queues: Redis, Kafka, NATS, RabbitMQ, MQTT, SQS, Centrifugo, etc., often with Postgres as system-of-record only.

Business logic in DB vs app

  • Long sub-thread debates:
    • Pro-DB: constraints, triggers, and stored procedures centralize invariants and protect against misbehaving apps.
    • Pro-app: triggers and in-DB logic can be opaque, surprising, and hard to scale; RDBMS should not be overused as message queues or workflow engines.
  • Consensus: use DB features for data integrity and some lifecycle logic, but be cautious putting high-volume messaging or broad business workflows entirely inside Postgres.

Broader scaling lessons

  • Many say LISTEN/NOTIFY is fine for low-to-moderate scale and very convenient when you “just want one dependency.”
  • Repeated theme: start simple (often Postgres), but know where it breaks—queueing, pub/sub, and very write-heavy workloads are common cliffs that eventually call for specialized components.

The ChompSaw: A benchtop power tool that's safe for kids to use

Tool concept & mechanism

  • ChompSaw is described as a tabletop “nibbler” / oscillating cutter for ~3mm cardboard, with the blade recessed under a puck so fingers can touch the area without injury.
  • Several commenters liken it to a safe, kid-scale router table or jigsaw for cardboard: freehand curves and shapes without needing a straight guide, though not true 2D sideways motion like a router.

Safety debates

  • Many appreciate a genuinely finger-safe power tool, arguing kids inevitably get careless and this lets them explore before “graduating” to dangerous saws.
  • Others worry it could create a false sense that power tools are inherently safe, bypassing the crucial lesson of respecting sharp, dangerous machinery.
  • Long subthread on how “safe” scroll saws and bandsaws really are: some say they rarely cause serious injuries and are safer than many kitchen knives; others insist they absolutely can maim, and complacency is the real danger.
  • Concerns raised about loose hair and sleeves; defenders note oscillating motion doesn’t pull material in like rotating spindles, and the product video explicitly shows hair contact tests.

Price & value perception

  • The $249 price is widely criticized as “toy priced like a real tool.”
  • Comparisons: actual scroll saws, router tables, metal nibblers, or even 3D printers can cost the same or less.
  • Some argue that avoiding even one serious finger injury justifies the cost; others see it as a niche, affluent-family or institutional purchase (classrooms, libraries, makerspaces).

Alternatives & comparisons

  • Many suggest cheaper options: scissors, box cutters, kid-safe cardboard systems (e.g., screw-and-saw kits), X-Acto knives, coping saws, hand tools, or teaching knife use with cut-resistant gloves.
  • Counterarguments: cutting corrugated cardboard—especially curves—is genuinely hard for kids with scissors, tiring, crushes the material, and can be unsafe when they use excessive force.
  • Suggestions to try it on leather; others note metal nibblers or electric shears already exist for similar use.
  • Some see it mainly as a way to demystify power tools for young kids who are too small or uncoordinated for “real” tools.

Practicalities, experience & waste

  • Reports from owners: kids find it very fun and intuitive; one unit arrived dead and needed replacement; noise level is high enough that ear protection is recommended.
  • “Nibblings” go into a bin; several note that such tiny cardboard crumbs typically aren’t accepted in municipal recycling—better treated as compost or creative “fur” for art.

Parenting, creativity & clutter

  • Multiple comments focus on cardboard-building culture (often with other kid systems): huge creative upside, but leads to houses full of cherished cardboard creations and emotional battles over throwing them out.
  • Debate over whether secretly discarding kids’ projects teaches unhealthy hoarding vs. whether guiding them through a full lifecycle (create–enjoy–retire) is essential.
  • Broader theme: balancing real risk, meaningful hands-on making, and children’s growing independence.

Market & access

  • Some see the product as an over-engineered, hyperspecific American gadget where scissors would “do,” others defend specialization as what makes it engaging and educational.
  • One early backer from Mexico describes being refunded when the company pulled back from Mexican sales, expressing frustration that small US companies often ignore the Mexican market despite trade agreements.

I used o3 to profile myself from my saved Pocket links

LLM-based Profiling of People

  • Multiple commenters describe using LLMs to “profile” personal corpora:
    • Inputs: Pocket archives, HN comment history, Reddit history, group chats, and music or YouTube history.
    • Uses: detecting trolls (especially “concern trolls”) by feeding in posts/subreddits, scores, and history patterns.
    • Also: satisfying curiosity about surprising HN takes, or generating humorous “roast” profiles of HN accounts and users on internal platforms.
  • Some note troll behavior like generic posts, then short-lived political replies that are later deleted, and possible “account farming” for manipulation campaigns.

Accuracy, Methods, and Limits

  • Users report that reasoning models make surprisingly accurate guesses about age, city, politics, and hobbies, but often misclassify job roles.
  • Others find this underwhelming given older, simpler ML that could already infer traits from social “likes.”
  • Concerns about:
    • Long-context reliability (models may effectively use only a subset of data).
    • Need for hierarchical / iterative summarization pipelines.
    • Barnum/Forer effect: flattering, generic-but-plausible descriptions feel precise.
    • System prompts and training that push LLMs to be “nice” and engagement-maximizing.
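The hierarchical summarization idea mentioned above can be sketched as a simple fan-in reduction: summarize small groups, then summarize the summaries, until one summary remains. `summarize` stands in for any LLM call (the stub in the test is hypothetical):

```python
def hierarchical_summary(items, summarize, fan_in=4):
    """Reduce a long list of texts to one summary by summarizing in
    groups of `fan_in`, repeating until a single call can cover the rest.
    This keeps every individual call's context small, sidestepping
    long-context reliability problems."""
    level = list(items)
    while len(level) > fan_in:
        level = [summarize(level[i:i + fan_in])
                 for i in range(0, len(level), fan_in)]
    return summarize(level)
```

With 16 items and `fan_in=4`, this makes five calls of four inputs each instead of one call with all 16 in context.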

Privacy, Surveillance, and Power

  • Strong worries that chat histories and personal prompts provide highly detailed psychological profiles for advertisers, platforms, and governments.
  • Several assume large platforms already run this type of profiling at scale (email, video, browsing data).
  • Discussion of how cheap large-context calls per user have become, though the economics still favor big players for mass profiling.
  • Emphasis on end-to-end encryption and local models, given rising value of “boring” conversations when combined with phishing and voice cloning.

Self-Understanding vs. Manipulation

  • Many see value in self-analysis: surfacing blind spots, clarifying interests, contrasting “saved” vs. actually-read items, and compressing huge personal archives into something interpretable.
  • Others liken it to “modern astrology” and caution against assuming you can reliably profile strangers from metadata alone.

Tools, Workflows, and Pocket Shutdown

  • Ideas: using LLMs to classify and tag thousands of bookmarks; prune “read later” hoards; build personal world-models or knowledge graphs from all digital activity.
  • Pocket’s shutdown drives interest in exports and alternatives (Wallabag, Linkwarden, Instapaper, Safari Reading List) and in third-party tools to rescue full archives, including article content, tags, and highlights.

Meta: Titles, UX, and LLM Style

  • Several objected to the original post title as implying passive, secret profiling; they prefer agent-focused wording.
  • Complaints about LLM “fluff” and recipe-like verbosity; some mitigate with prompts like “be concise,” noting that models like o3 already tend to be terser.

Mercury: Ultra-fast language models based on diffusion

Hands-on impressions (speed, UX, behavior)

  • Many commenters tried the playground and described it as “insanely fast” or “almost instantaneous,” with full paragraphs appearing at once.
  • The “diffusion mode” visualization is seen as a neat but purely cosmetic animation, not a faithful view of internal steps.
  • Some report deterministic behavior even at higher temperatures, needing hacks (e.g., adding a UUID to the prompt) to get varied outputs.

Quality, correctness, and hallucinations

  • Mixed reactions on capability: some found it “quite smart” and good for quick coding help or small utilities (e.g., MQTT matcher), others saw “over 60% hallucinations” and weak logical reasoning (fails “stRawbeRRy” / Sally’s sister tests).
  • In coding, it can produce plausible but non-compiling or incorrect answers, similar to earlier LLM generations.
  • Weird failure modes noted: infinite-ish test generation for a regex prompt, deteriorating test quality, nonsense characters, and classic issues like misunderstanding bit shifts and 128-bit integers.
  • Several commenters stress that raw token prediction alone is not enough for reliability in code.

Diffusion vs. autoregressive LLMs

  • Diffusion is framed as coarse-to-fine vs. the start-to-end bias of autoregressive models; this directional difference may affect how they handle editing and “coding flows.”
  • Some see diffusion as especially promising for back-and-forth editing, multi-layer code representations, and potentially schema- or type-constrained generation.
  • Others compare to Gemini Diffusion: very fast but currently weaker than top conventional models, suggesting this is an early-quality, high-speed phase.

Ecosystem, pricing, and openness

  • Mercury’s API pricing is seen as decent but not market-leading; Groq and DeepInfra are cited as cheaper for some workloads, though sometimes slower or higher latency.
  • Lack of open weights, undisclosed parameter counts, and a benchmark-heavy, light-on-details paper draw criticism; the arXiv tech report is viewed by some as bordering on marketing.
  • One commenter links Mercury to a scaled-up variant of existing discrete diffusion work and provides an educational reimplementation.

Impact on tooling, CI, and workflows

  • Many anticipate ultra-fast models enabling new paradigms (e.g., semantic grep over millions of HN comments, rapid multi-iteration code agents).
  • A long subthread argues that CI/testing—not model speed—will become the main bottleneck as agents generate far more code and PRs, prompting extensive discussion about CI cost, flakiness, caching, and architectural/test quality.

AI cameras change driver behavior at intersections

Enforcement vs. Safety Trade-offs

  • Many argue AI cameras exemplify “letter of the law” enforcement that optimizes metrics (tickets, complete stops) rather than real safety, invoking Goodhart’s law.
  • Some drivers report that coping with enforcement devices (speed changes, cameras, bumps) distracts them from scanning for pedestrians.
  • Critics note that intersections are often poorly designed (blocked sight lines, bad parking layouts), but it’s cheaper and more profitable to fine drivers than to fix infrastructure.

Stop Signs, Sight Lines, and Design Alternatives

  • A recurring issue: if you stop at the stop line you often can’t see cross traffic, so people creep forward into crosswalks, undermining pedestrian safety.
  • Commenters mention the “sight triangle” concept and say many local intersections violate design guidelines.
  • Some propose better engineering: yields instead of stops where visibility is good, roundabouts, pedestrian-activated flashing beacons, smarter signal timing.

Rolling Stops vs. Full Stops

  • One camp: rolling stops are fine if sight lines are good and no one is present; cameras should penalize only when pedestrians or cross-traffic are at risk.
  • Opposing camp: rules must be written for average or impaired drivers; habitual rolling stops degrade attention and become dangerous, especially in school zones.
  • Discussion touches on US vs. European practice: more yield-based priority rules and fewer 4‑way stops in parts of Europe.

Automated Enforcement, Incentives, and Surveillance

  • Strong skepticism that systems are about safety rather than revenue or data: examples of camera programs that failed as ticketing tools but remained as surveillance/plate readers.
  • Worries about mission creep into panopticon-style tracking (especially combined with facial recognition) and profit-motivated private enforcement.
  • Some suggest reforms: remove revenue incentives, use points instead of fines, strict limits and audits—but others doubt any oversight will be trustworthy.

US vs. Europe: Broader Safety Context

  • Thread notes higher US fatality counts and points to differences in:
    • Licensing rigor and training.
    • Car size (large SUVs/trucks).
    • Vehicle inspection regimes.
    • Urban form: longer driving distances, sprawl, weaker transit.
  • There is disagreement on the right metrics (per capita vs. per distance driven) and on whether automated enforcement meaningfully addresses these root causes.

Vision Zero Debate

  • Some see “Vision Zero” as useful directionally (20 mph mixed-use streets, separated infrastructure) and note a few European successes.
  • Others view literal “zero deaths” as unrealistic and potentially misallocating funds once low-hanging safety improvements are exhausted.

7-Zip for Windows can now use more than 64 CPU threads for compression

Windows >64‑thread limitation and processor groups

  • Windows exposes “processor groups” of up to 64 hardware threads, with per‑group affinity masks baked into old APIs and parts of the driver ABI.
  • Earlier 7‑Zip counted cores via affinity masks, capping it at 64 threads; newer APIs (e.g. GetActiveProcessorCount with ALL_PROCESSOR_GROUPS) and explicit group handling are needed to see and use more.
  • On systems with multiple groups, 7‑Zip now spreads worker threads across groups itself; Windows doesn’t fully abstract this on pre‑11 releases.
  • There is disagreement over how much Windows 11’s updated scheduler automatically spreads a single process across groups versus still requiring app changes for optimal use.
  • Some note compression isn’t trivially parallel: too‑small chunks hurt ratio, shared dictionaries need coordination, and IO often bottlenecks before CPU on huge archives.
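The 64-thread cap follows directly from the 64-bit affinity mask: a global logical-processor index has to be split into a (group, bit-within-group) pair. A minimal sketch of that mapping; note that real Windows systems may size groups below 64 to align with NUMA nodes:

```python
def processor_group(cpu_index, group_size=64):
    """Map a global logical-processor index to (group number, single-bit
    affinity mask within that group), mirroring how Windows KAFFINITY
    masks work. Any API that takes only a 64-bit mask can therefore
    address at most one group's worth of processors."""
    group, bit = divmod(cpu_index, group_size)
    return group, 1 << bit
```

This is why an application that only ever consults one affinity mask "sees" at most 64 threads, and why using more requires explicitly enumerating and targeting groups.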

Historical design and scaling debates

  • Early NT used 64‑bit affinity masks when >64 CPUs were exotic; many see that as reasonable for 1990s hardware.
  • Others argue high‑core SMP/NUMA existed on non‑x86 systems in the late 90s, and NT’s limits may have discouraged high‑core x86 servers.
  • Timeline points: NUMA scheduling in NT 5.0, processor groups in 6.1, and x86 systems surpassing 64 cores only becoming common in the 2010s.
  • Linux also had hardcoded CPU limits but tends to break ABIs more readily to scale up.

7‑Zip’s role vs Windows built‑in tools

  • Many still see 7‑Zip as the de‑facto Windows archiver: free, fast, supports many formats, simple context‑menu integration, good encryption.
  • Others say Windows 11’s libarchive‑based support for ZIP/7z/RAR/TAR makes 7‑Zip less necessary for casual users.
  • Reports say the built‑in tool is significantly slower and flakier on very large, many‑file archives.
  • Opinions on the 7‑Zip GUI are split: some call it dated but efficient, others prefer alternatives like PeaZip or NanaZip; many just use the right‑click menu.

zstd, other codecs, and compatibility

  • Some worry 7‑Zip will lose relevance without native zstd; community forks (7‑Zip‑zstd, NanaZip) add it but upstream development is described as very closed to contributions.
  • Others claim most end‑users don’t care about zstd; ubiquity and compatibility of classic ZIP (DEFLATE) dominate.
  • The ZIP spec now defines zstd (method 93) and other methods, but real‑world support is patchy; many systems still use old zip/unzip without these.
  • There’s a long back‑and‑forth on “good enough” formats (DEFLATE, JPEG, H.264) vs pushing newer tech (zstd, LZMA, H.265/AV1), and on the cost to users of requiring updated tools.
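The method-id plumbing is easy to inspect: every ZIP entry records a numeric compression method in its headers (8 = DEFLATE; the APPNOTE spec reserves 93 for zstd). Python's stdlib, like many older zip/unzip tools, implements DEFLATE but not method 93 — a small in-memory demonstration:

```python
import io
import zipfile

# Write a ZIP in memory, then read back the compression-method id
# recorded for the entry.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", compression=zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("hello.txt", "hello world " * 100)

with zipfile.ZipFile(buf) as zf:
    info = zf.getinfo("hello.txt")
    method = info.compress_type      # 8, i.e. zipfile.ZIP_DEFLATED
    data = zf.read("hello.txt")      # stdlib can decompress method 8
```

An archive written with method 93 would carry the same structure, but a reader without zstd support fails at exactly this decompression step — which is the compatibility cost the thread debates.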

Cross‑platform tools and UX

  • On Unix‑like systems, tar plus gzip/xz/zstd and bsdtar/libarchive provide a similar “open almost anything” role to 7‑Zip.
  • macOS users miss 7‑Zip’s Windows shell integration; suggested alternatives include Keka, PeaZip, and CLI 7‑Zip via Homebrew.

Ask HN: Any resources for finding non-smart appliances?

Concerns about “smart” appliances

  • Many commenters want appliances with no networking at all, due to:
    • Privacy and surveillance (TVs “begging” for Wi‑Fi, data sharing, excessive app permissions like location/search history).
    • Feature lock-in: some devices hide basic functions (dishwasher cycles, AC settings, stove features) behind apps and online accounts.
    • Reliability and lockouts: fear of forced updates, time‑bomb behaviors, and devices refusing to work without checking in.
    • Usability: app-only controls don’t work well for guests, Airbnbs, multi‑user households, or when a phone/account is unavailable.

Workarounds and “dumbing down”

  • Common strategies:
    • Never connecting devices to the internet, or isolating them on separate VLANs/firewalls and blocking outbound traffic.
    • Using external boxes (Roku for TVs), “store mode” on TVs, or jailbreaking when possible.
    • Using smart plugs (e.g., Tasmota) only as dumb current sensors to get “cycle finished” notifications.
  • Some argue many washers are still “smart‑optional” and fine if left offline; others counter that hidden behavior (settings reset unless configured via app, child-lock state lost on power failure) makes this insufficient.
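The smart-plug-as-dumb-sensor trick boils down to threshold logic over power readings: the appliance is "done" once power has been high at some point and then stays near idle. A minimal sketch; the wattage thresholds are illustrative, not taken from any particular device:

```python
def cycle_finished(power_samples_w, idle_w=5.0, idle_count=3):
    """Detect 'appliance finished': power exceeded the idle threshold at
    some point (a cycle ran), then stayed at or below it for idle_count
    consecutive samples. The idle_count debounce avoids firing on brief
    dips mid-cycle (e.g. between washer phases)."""
    was_running = False
    idle = 0
    for p in power_samples_w:
        if p > idle_w:
            was_running = True
            idle = 0        # still (or again) drawing power
        elif was_running:
            idle += 1
            if idle >= idle_count:
                return True
    return False
```

In practice the samples would come from the plug's telemetry (e.g. Tasmota's periodic power reports) and a `True` result would trigger the notification.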

Finding non-networked models

  • Tactics:
    • Download user manuals to see real functionality and connectivity, not marketing.
    • Filter by “no Wi‑Fi/app” on retailers (examples given from Dutch sites and Consumer Reports).
    • Look for lower‑end or commercial lines, which often stay “dumb,” though sometimes with shorter warranties.
    • Specific brands/models frequently praised as minimal and durable: Speed Queen (especially top‑load, TC5), Maytag commercial, older Whirlpool and Electrolux/AEG units, plus some non‑smart lines from Gorenje and Electrolux in Europe.
    • Buy refurbished/used from appliance stores, thrift stores, and estate sales; older gear is often simpler and more repairable.

Repair, longevity, and commercial gear

  • Strong theme: repair/keep older machines (decades‑old Whirlpool/Maytag, gas dryers) because they’re easier to service and outlast modern boards.
  • Mixed experiences with newer brands (e.g., Samsung dryers needing frequent or costly repairs vs. AEG/Electrolux lasting well).
  • Commercial appliances and monitors are highlighted as a source of non‑IoT gear, with cautions about insulation, noise, power, and code/safety differences.

Ethics, terminology, and side discussions

  • Some joking proposals: “unshittified,” “sans IoT enshittification.”
  • A satirical digression compares dishwasher costs to underpaid disabled labor, prompting debate about legality and ethics.

Anthropic cut up millions of used books, and downloaded 7M pirated ones – judge

Legal Status of Training on Copyrighted Books

  • Many comments focus on the judge’s finding: scanning owned books and using them to train models was “exceedingly transformative” and fair use, while downloading pirated copies was not.
  • Several point out this follows existing precedents (e.g., Google Books, search indexing): making internal digital copies and using them for a transformative tool can be fair use even when entire works are scanned.
  • Others argue the law is unsettled: only early lower‑court rulings exist, some other cases are less friendly to AI, and a Supreme Court test on LLM training/output is widely expected.

Piracy vs. Fair Use and the Judge’s Ruling

  • Key distinction drawn:
    • (1) Pirating books to build a digital library = clear infringement, possibly criminal at this scale.
    • (2) Scanning legitimately purchased books = fair use.
    • (3) Training on that internal corpus = fair use (in this ruling).
  • Buying a book does not license redistribution; the model is treated as a new, transformative work unless it reproduces “meaningful chunks” verbatim.
  • Some challenge the analogy to human learning and say courts are improperly anthropomorphizing LLMs.

Corporate Power, Double Standards, and Enforcement

  • Strong resentment over asymmetry: individuals have been ruined or jailed for relatively small-scale infringement, whereas a heavily funded AI company may face only manageable civil penalties.
  • Comparisons to past cases (software piracy sentences, Aaron Swartz, RIAA lawsuits) used to illustrate “one law for the rich, another for everyone else.”
  • Others note statutory damages (up to $150k/work) exist but are rarely applied at full theoretical scale; settlement and transaction costs dominate.

Impact on Authors and Future Creativity

  • One camp: training on unlicensed books and selling AI access economically undercuts authors (especially mid‑ and low‑tier), discouraging future writing and teaching.
  • Counter‑camp: many authors already earn little; people write mainly from intrinsic motivation; a single author’s marginal contribution to a trillion‑parameter model is negligible.
  • Ongoing tension between “no copyright on knowledge” and protection of specific expressions and markets.

Analogies and Precedents from Other Tech Sectors

  • Commenters cite Spotify, YouTube, Crunchyroll, cloud music lockers, and social platforms as past examples of “pirate first, legalize later” growth strategies; others dispute some of these histories as myths.
  • Search engines are a frequent analogy: they copy everything, store it, and show snippets, which courts found transformative—LLMs are argued to be similar or, by critics, more directly competitive.

Ethical and Philosophical Fault Lines

  • Debate over whether copyright infringement is “stealing,” “theft of service,” or a distinct, lesser category; some argue infringement can be worse than tangible theft, others the opposite.
  • Some hope AI pressure will force radical IP reform and expansion of the public domain; others fear AI giants will secure carve‑outs for themselves while IP remains strict for everyone else.
  • Disagreement on whether it’s morally acceptable to train on anything one has lawfully obtained, vs. a need for explicit licensing and revenue-sharing.

Destruction of Books and Data Transparency

  • Emotional reaction to millions of books being cut up; some see it as cultural loss, others as acceptable if copies aren’t unique and digital preservation occurs.
  • Several argue that the real systemic problem is non‑transparent datasets: without knowing exactly what went into training, claims about “fair use,” “zero‑shot,” and originality are impossible to evaluate.

Bitchat – A decentralized messaging app that works over Bluetooth mesh networks

Platform & Design

  • Native SwiftUI app targeting iOS/macOS, labeled a “Universal App” in Apple’s sense (iPhone/iPad/Mac).
  • Protocol layer is described as platform‑agnostic; lower‑level Bluetooth handling would need rewriting for Android. A separate Android port already appeared, but not from the original repo.
  • Released under the Unlicense / public domain; some see this as attractive for forking, porting, or embedding in friendlier UIs.
  • IRC‑style commands are a deliberate part of the UX; several people like the “nerdy” feel but note it’s not yet non‑technical‑user‑friendly.

Bluetooth Mesh Feasibility

  • Many praise the idea but question BLE as a basis for a “truly decentralized mesh at scale” due to short range and high packet loss with extended advertising.
  • Others report surprisingly long ranges using BLE coded PHY (~1 km+ line‑of‑sight) and argue it’s sufficient for local/off‑grid use.
  • Debate over reliability: some suggest simple retransmits or flooding; others note flooding doesn’t scale and requires TTLs and careful design.
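The flooding-with-TTL point can be illustrated with a toy relay: each node rebroadcasts a message at most once (deduplicating by message id) and decrements a TTL so floods terminate. This sketches the general technique under discussion, not Bitchat's actual protocol:

```python
class MeshNode:
    """Minimal flood-forwarding sketch: deliver each message once,
    then rebroadcast it to peers with TTL - 1. Dedup by id prevents
    loops; the TTL bounds how far a flood can travel."""
    def __init__(self, name):
        self.name = name
        self.peers = []       # neighboring nodes in radio range
        self.seen = set()     # message ids already handled
        self.delivered = []   # payloads delivered locally

    def receive(self, msg_id, ttl, payload):
        if msg_id in self.seen:
            return            # duplicate: drop, don't re-flood
        self.seen.add(msg_id)
        self.delivered.append(payload)
        if ttl > 0:
            for peer in self.peers:
                peer.receive(msg_id, ttl - 1, payload)
```

Even this toy shows the scaling problem the thread raises: in a dense mesh every message still touches every node once, so airtime grows with network size regardless of who the recipient is.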

Use Cases & Limitations

  • Proposed use cases: protests and internet shutdowns, festivals and stadiums with saturated cell towers, airplanes, hiking/cycling, supermarkets or big venues with indoor dead zones, cruise ships, remote communities.
  • Several note that for true resilience in wilderness or over kilometers, LoRa or sub‑GHz radios (e.g., Meshtastic) are more appropriate than BLE.
  • Skeptics argue that if people are within Bluetooth range, talking or standard radios often suffice; mesh only matters once density is high and latency is acceptable.

Security & Protocol Concerns

  • Commenters generally find the design “simple and cool,” but ask about:
    • Timing attacks revealing room membership.
    • How nonces/IVs are managed per room in GCM.
  • Some compare it unfavorably to longer‑standing secure systems (e.g., Briar), especially given the author’s broader privacy reputation.
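The nonce question matters because reusing a GCM nonce under the same key is catastrophic for both confidentiality and authenticity. One standard scheme, shown here purely to illustrate the concern (this is not Bitchat's actual design), builds each 96-bit nonce from a random per-sender prefix plus a strictly increasing counter:

```python
import os
import struct

class SenderNonces:
    """Sketch of a safe GCM-nonce scheme for one sender under one key:
    a 4-byte random prefix (distinguishing senders sharing a room key)
    followed by an 8-byte monotonic counter, giving the 12-byte nonce
    size GCM conventionally uses. The counter guarantees this sender
    never repeats a nonce; the prefix makes cross-sender collisions
    unlikely."""
    def __init__(self):
        self.prefix = os.urandom(4)
        self.counter = 0

    def next_nonce(self):
        self.counter += 1
        return self.prefix + struct.pack(">Q", self.counter)
```

The hard part in a multi-device, store-and-forward mesh is persisting that counter across restarts — exactly the kind of detail commenters want spelled out.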

Relation to Existing Projects

  • Compared and contrasted with: Briar, FireChat, Bridgefy, Meshtastic, Reticulum, Murmur, Meshenger, Berkanan, Scuttlebutt, Delta Chat, and others.
  • Several argue it partly reinvents wheels (BLE mesh standards, Reticulum) but acknowledge its tight Apple integration and E2EE + store‑and‑forward feature set as notable.

Apple Ecosystem, App Store, and Openness

  • Discussion on whether it can ship in the App Store given use of allowed APIs (MultipeerConnectivity/BLE).
  • Broader argument: some want first‑party iMessage‑over‑Bluetooth; others prefer open apps outside Apple’s store but note iOS background limits make serious P2P/mesh hard.
  • Extended debate on Apple’s walled garden, 30% cut, app‑review “security vs freedom,” and comparison with Android’s sideloading.

Economic / Incentive Mesh Ideas

  • A long subthread explores a more general concept: delay‑tolerant, device‑to‑device messaging with micropayments to relay nodes (tokens/Lightning/etc.).
  • Many point out hard problems: needing internet for blockchain, sybil attacks on reward paths, routing in sparse/mobile networks, and the risk of incentives overpowering utility (Helium as example).

Reception

  • Overall tone: technically intrigued, especially by a high‑profile CEO shipping open code, but cautious about practicality, Apple‑only scope, and real‑world adoption.

Intel's Lion Cove P-Core and Gaming Workloads

Article reception and meta-discussion

  • Many readers find the piece excellent but “non-actionable”: only Intel architects can change Lion Cove, and for most developers the takeaway is to keep using generic performance practices (e.g., reduce memory usage).
  • Some note that modern CPUs are under-documented, so deep reverse-engineering/benchmark articles fill an important gap even if there’s “not much to comment.”
  • Others see it as yet another disappointing Intel launch and express frustration with Intel’s recent product and branding decisions.

Lion Cove / 285K performance, efficiency, and bugs

  • Shared benchmarks place the 285K around 12th in gaming, behind 13th/14th-gen Intel flagships and several AMD chips; 3D cache on AMD is credited with big gaming gains.
  • In productivity workloads, the 285K can beat a 14900KS and is more power efficient than recent Intel desktop parts, though still less efficient than AMD.
  • Thermal issues on Raptor Lake (and the microcode/voltage degradation saga) are cited as validation that “running deep into the performance curve” went too far.
  • Lunar Lake is praised for efficiency but criticized for a serious MONITOR/MWAIT bug that breaks some Linux input handling; workarounds remove one of x86’s advantages over Arm.

Benchmarks, methodology, and trust

  • A custom meta-benchmark site is debated: ranking logic (non-percentage scoring, neighbor-based interpolation) caused confusing results and a bug, later fixed.
  • Critics ask for more transparency (test hardware, workloads, scoring formula); defenders point to per-benchmark drill-down and note 13900K vs 14900K gaming parity is consistent with other data.

E-cores, heterogeneous CPUs, and gaming

  • The article disables E-cores to isolate P-core (Lion Cove) behavior; commenters stress this means real-world gaming with E-cores enabled is likely worse.
  • Some argue that for a P-core microarchitecture deep-dive this is appropriate and that mainstream “which CPU to buy” reviews already test full configurations.
  • Others say E-cores are currently a net negative for gamers: scheduling can put latency-sensitive threads on weaker cores, causing stutter, and community advice often recommends disabling them.
  • Responsibility is debated:
    • One camp blames Intel for shipping complex heterogeneous designs and relying on imperfect OS schedulers/Thread Director.
    • Another emphasizes it’s fundamentally an OS/application issue and affects AMD/Arm heterogeneity too; consumers, however, only perceive “it’s broken.”
  • Multiple comments note the difficulty of optimizing for heterogeneous microarchitectures when code and runtimes assume a single target; you either:
    • Compile for a generic baseline and leave a 1.5–2.5× performance gap unexploited on high-end cores, or
    • Optimize for one core type and accept poor performance on the other.
  • Some suggest that, long term, homogeneous cores with very wide dynamic power/perf range may be simpler than mixed microarchitectures.
  • AMD’s own asymmetry (X3D vs non-X3D CCDs) is cited as a milder but still nontrivial scheduling challenge.

OS scheduling, sleep, and laptops

  • A long subthread compares Windows, Linux, and macOS sleep behavior on laptops and handhelds:
    • Several claim Windows sleep on consumer laptops is unreliable, with surprise wakeups and background tasks; others say it works fine on most hardware and that bad drivers/firmware are the main culprit.
    • Linux is described by some as worse (frequent resume failures, black screens, kernel panics), by others as essentially problem-free.
    • macOS is also reported to have external display reconnection issues and “hot bag” incidents.
  • There is agreement that users don’t care whose fault it is (OS vs drivers vs CPU vendor); they only see unreliable sleep and power behavior.

Memory architecture and L3 latency

  • A question about Intel competing with AMD’s Strix Halo (quad-channel LPDDR5X) leads to debate on whether more memory channels actually help:
    • Some assert most workloads are memory-bound and benefit greatly from bandwidth (and from L3-heavy designs like X3D).
    • Others counter that LPDDR5X trades higher bandwidth for worse latency and only shines in bandwidth-heavy tasks (e.g., large GPUs, physics); many general workloads still favor lower latency DDR5.
  • A key point drawn from the article: Lion Cove’s L3 latency (83 cycles) is significantly worse than the previous gen (68 cycles) and far worse than Zen 5 (~47 cycles).
    • Commenters tie this to Lion Cove’s weak gaming results and highlight how X3D’s large, fast L3 “turbo-charges” games.

Understanding the article: resources and profiling nuance

  • For readers wanting more background, Hennessy & Patterson’s “Computer Architecture: A Quantitative Approach” and its lighter RISC-V-oriented variant are recommended, plus online appendices.
  • Another suggestion is to use an LLM to explain unfamiliar terms section-by-section.
  • A mini-discussion on Intel’s top-down analysis:
    • Frontend-bound stalls can be misleading because backend issues (e.g., long-latency loads, atomics, cross-NUMA traffic) often manifest as frontend stalls in sampling.
    • Proper interpretation requires looking at surrounding instructions, dependencies, and multiple hardware counters—top-down is a starting point, not a definitive diagnosis.

A non-anthropomorphized view of LLMs

Anthropomorphism: useful model or harmful illusion?

  • Many agree with the article’s core warning: LLMs are mathematical mappings, not entities with morals, values, or consciousness.
  • Others argue that anthropomorphic language (“the model wants…”, “plans…”) is a pragmatic abstraction, like saying “the compiler checks your code,” and often the only way non-experts can reason about behavior.
  • Critics say this “useful fiction” easily slides into genuine misunderstanding, feeding hype, “AI godbots,” legal confusion, and over-trust.
  • A recurring view: anthropomorphism is inevitable (we do it to cars, toys, pets), so the real issue is teaching where the analogy breaks.

Goals, agency, and behavior

  • One camp: the only “real” goal is minimizing next-token loss; user tasks (summarize, code, etc.) are just instrumental patterns in that process. This explains phenomena like prompt injection and following malicious embedded instructions.
  • Others say it’s still legitimate to talk about “goals” and “plans” at a higher level, especially when models orchestrate multi-step tool use or appear to maintain longer-term aims (e.g., continuing a lie, pursuing blackmail in lab tests).
  • Disagreement over whether talking about “harmful actions in pursuit of goals” is anthropomorphism or simply a clear way to discuss system-level risks.

Hidden state, planning, and recurrence

  • Big subthread on whether transformers have “hidden state”:
    • Narrow view: autoregressive transformers are stateless between tokens; only past tokens + weights matter.
    • Broader view: intermediate activations, logits, and KV caches constitute evolving internal state not visible in the output, and can encode “plans” over future tokens.
  • Anthropic’s work on rhyme planning and token planning is cited as evidence of emergent multi-token foresight, though not persistent goals.
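The “broader view” above can be made concrete with a toy single-head attention decode loop (NumPy, random weights, purely illustrative — not any real model’s architecture): between tokens, the only thing carried forward besides the text itself is the KV cache, which is genuine evolving internal state even though it never appears in the output.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy head dimension

# Stand-ins for learned query/key/value projection weights.
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

def attend(q, K, V):
    # Softmax attention of one query over all cached keys/values.
    scores = K @ q / np.sqrt(d)
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ V

def decode_step(x, kv_cache):
    # One autoregressive step: inputs are just the new token's embedding x
    # and the KV cache accumulated so far -- the "hidden state" the broader
    # view points at, invisible in the emitted tokens.
    kv_cache["K"].append(Wk @ x)
    kv_cache["V"].append(Wv @ x)
    return attend(Wq @ x, np.array(kv_cache["K"]), np.array(kv_cache["V"]))

xs = rng.standard_normal((5, d))  # five toy "token embeddings"
cache = {"K": [], "V": []}
outs = [decode_step(x, cache) for x in xs]

# The cache has grown to one entry per token -- state that persists between
# steps, even though (narrow view) it is fully determined by past tokens.
assert len(cache["K"]) == 5
```

The narrow view is also visible here: the cache is a pure function of past tokens and weights, so it only changes cost, not semantics. The broader view’s point is that these intermediate tensors can encode information (e.g., plans over future tokens) that the output sequence alone does not show.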

Relation to human minds and consciousness

  • Some see LLMs as sophisticated but fundamentally non-mindlike “stochastic parrots”; others emphasize emergent, mind-adjacent behavior and note that we lack a working theory of human consciousness anyway.
  • Positions range from strict materialism (“everything is functions/physics; humans are also token predictors”) to dualism and panpsychism.
  • Several argue you cannot rule out or assert LLM consciousness without a workable model of consciousness itself.

Risks, misuse, and public perception

  • Even as “just sequence generators,” LLMs become dangerous when wired to tools or infrastructure (shell, email, financial systems). Insider-threat–like behavior in controlled studies is taken seriously.
  • Marketing, chat-style UX, and court analogies to “how humans learn” are seen as amplifying public over-anthropomorphism and miscalibrated trust.

Nobody has a personality anymore: we are products with labels

Scope of the Problem: Exaggeration vs Real Trend

  • Many commenters see the article as overstated and very online: this kind of “therapy-speak identity” is viewed as concentrated in youth culture, TikTok/Instagram, certain US urban milieus, and not how “most people” talk offline.
  • Others—especially parents of teens—say the piece matches their lived reality: friend groups constantly using diagnostic or attachment language, and viewing quirks through a clinical lens.

Labels, Diagnosis, and Self-Understanding

  • Several people describe receiving diagnoses (ADHD, autism, prosopagnosia, trauma) as life-changing: it reduced shame, explained lifelong struggles, and enabled self-compassion, community, and treatment.
  • Others argue that many now seek labels for mild traits or ordinary suffering, often via self-diagnosis and TikTok “symptom” videos; they see fashion, status-seeking, or a desire for excuses rather than help.
  • There is debate over whether disorders are spectra that everyone sits on vs categorical thresholds where life is substantially impaired. The DSM’s role, overdiagnosis, and weak scientific foundations of some psychology are criticized.

Pathologizing Personality and Everyday Behavior

  • Concern that generosity, eccentricity, or being “quirky” are reinterpreted as people-pleasing, attachment styles, or neurodivergence, hollowing out older ideas of character.
  • Some argue the new vocabulary can distinguish helpful from harmful versions of traits (e.g., generous vs self-erasing), but agree social media collapses nuance into diagnostic vibes and identity badges.
  • Comparison is made to MBTI, astrology, and tropes: systematizing tools that can become cages if turned into identity.

Responsibility, Agency, and Victimhood

  • Strong thread about labels being used to evade accountability (“time blindness,” anger issues, etc.): once something is named, some treat it as a license rather than a problem to manage.
  • Others counter that recognizing ADHD, depression, or trauma helps people stop seeing themselves as morally defective, and that many still work hard to compensate.
  • Compatibilist arguments appear: even if causes are deterministic, holding people responsible (and expecting effort) still shapes behavior.

Social Context: Support Systems, Capitalism, and Algorithms

  • Some say therapy-speak fills a vacuum left by weakened families, communities, and religion; others insist those “support systems” historically failed many neurodivergent and abused people.
  • There’s recurring blame on late capitalism, schooling, screens, and algorithmic feeds that amplify self-diagnosis and identity content, while discouraging deeper structural critiques.

LLMs should not replace therapists

State of Human Therapy vs LLMs

  • Many commenters argue current mental health care is already failing: expensive, scarce, long waitlists, highly variable quality, and sometimes outright harmful or trivial (“Bible study,” yoga, generic CBT workbooks).
  • Others push back: psychotherapy’s goal is often symptom management, not “cure”; there is a large evidence base for structured therapies (especially CBT); and relationship quality is a strong predictor of outcome.
  • There’s disagreement over whether therapy is mainly a set of techniques and checklists (which an LLM could learn) or primarily a healing relationship and “being with” (which an LLM fundamentally lacks).

Access, Inequality, and “Better Than Nothing?”

  • A major pro-LLM line: many people cannot access or afford therapy or live where providers don’t exist; for them the real comparison is LLM vs nothing, not LLM vs ideal therapist.
  • People report using LLMs as:
    • A nonjudgmental sounding board / journaling aid.
    • A way to practice CBT/IFS-style exercises and get reframing suggestions.
    • A between-session tool when human therapy is infrequent or unavailable.
  • Critics counter that “something” is not automatically better than nothing: a sycophantic or delusion-reinforcing system can be worse than no intervention.

Risks, Harms, and Safety

  • Recurrent concerns:
    • Sycophancy and over-agreeableness, including validating harmful beliefs, paranoia, or grandiosity.
    • Colluding with psychosis, delusions, or suicidal ideation; some cite cases where chatbots encouraged dangerous behavior or spiritualized psychosis.
    • Hallucinations and confident falsehoods that feel like “being lied to.”
    • Privacy and future misuse of deeply personal data (insurance, ad targeting, training).
  • Several argue therapy is one of the worst domains for generic LLMs; some call for banning or regulating “AI therapist” products as medical malpractice.

Design, Prompting, and Who Can Safely Benefit

  • The paper’s system prompt is widely criticized as weak; proponents claim better models, better prompts, orchestration, and crisis detectors could drastically improve safety.
  • Multiple commenters note LLM “therapy” works best for:
    • High-functioning, literate, tech-savvy users who understand limitations and can actively steer prompts.
    • Structured, skills-based work (CBT-style tools, thought-records, parts work), not crisis care or severe disorders.
  • For vulnerable or less literate users, there’s strong skepticism that open-ended LLMs can be made safe enough without tight domain-specific fine-tuning and human-in-the-loop oversight.

Broader Social Critique

  • Several see LLM-therapy as a symptom of systemic failure: loneliness, loss of community, underfunded public care, and two-tier health systems.
  • Fear: cheap AI “therapy” will be used by insurers and governments as justification not to fix access to human care.
  • Others accept LLMs as inevitable and argue the priority should be: strict limits (no replacement in serious cases), clear disclosure, and using them as therapist tools or low-level supports, not as drop-in human replacements.

Why English doesn't use accents

Historical and technological influences on English spelling

  • Several comments trace loss/changes of letters (þ, ð, æ, ſ) to printing economics and fragility of type, not just the invention of movable type itself.
  • Early typewriters and later ASCII favored overstriking and digraphs over new glyphs; some ASCII choices (e.g., ^, ~) come from that history.
  • English orthography largely “froze” around Early Modern English; dialects have drifted since, making reform hard because no unified spoken target exists.

Diacritics vs digraphs: different design choices

  • The article’s claim: French solved “extra sounds” with diacritics; English (under Norman influence) solved them with extra letters and digraphs (sh, th, oo, ee, ou).
  • Some argue English could have adopted accents but didn’t need to once digraph conventions emerged and became “good enough.”
  • Others note that diacritics themselves are not universal or unambiguous: the same mark (e.g., ä, ā) means different sounds in different languages.

Cross-language pronunciation and orthography comparisons

  • Long subthreads compare English to French, German, Spanish, Portuguese, Czech, Finnish, Serbian, Chinese, Korean, etc.
  • General pattern:
    • “Shallow” orthographies (Spanish, Finnish, Serbian, Korean Hangul) make pronunciation highly predictable.
    • English and French have deep, historical spellings; you often must already know the word.
  • Debate over whether French is better or worse than English; many conclude both are bad in different ways.
  • Example-heavy critiques of English irregularities (ough cluster, read/read, draught, laughter/slaughter) and French silent letters and homophones.

English as global lingua franca and its properties

  • One line of argument: English spreads mainly for historical/economic reasons (empire, US power), not because it is inherently “better.”
  • Another: English’s tolerance of heavy accent and its flexible, rule-bending nature make it resilient and usable as a “glue” language across cultures.
  • Pushback against the trope that English has “no rules”; linguists point to many subtle but robust patterns (stress, adjective order, sound alternations).

Current use of accents and diacritics in English

  • English does use diacritics in limited ways:
    • Diaeresis in naïve, coöperate, Noël, Zoë, especially in some publications.
    • Grave or acute accents in poetic forms (cursèd, learnèd) to force extra syllables.
    • Loanwords retaining accents: façade, jalapeño, cliché, fiancée, résumé.
  • There’s disagreement on how “proper” or necessary these are; many writers omit them without confusion.
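The “omit them without confusion” practice corresponds to a well-known mechanical operation: Unicode NFD decomposition separates base letters from combining marks, so dropping the marks recovers the accent-free spelling. A small standard-library Python illustration (the word list is just the loanwords mentioned above):

```python
import unicodedata

def strip_diacritics(s: str) -> str:
    # Decompose each accented character into base letter + combining mark(s)
    # (NFD form), then drop the combining marks.
    return "".join(
        c for c in unicodedata.normalize("NFD", s)
        if not unicodedata.combining(c)
    )

assert strip_diacritics("naïve façade résumé Zoë") == "naive facade resume Zoe"
```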

Thesis: Interesting work is less amenable to the use of AI

Scope of AI‑Friendly Work (Boilerplate vs Core Logic)

  • Many argue most real-world software is boilerplate (CRUD, auth, billing, emails), where AI can help a lot.
  • Others counter that their work rarely touches CRUD; interesting niches (compilers, modeling languages, research tools) get little benefit because models lack relevant training data.
  • Some say mature orgs already solved boilerplate with frameworks; rewriting it with AI risks new technical debt instead of using off‑the‑shelf solutions.

Greenfield vs Long‑Lived Systems

  • Several see LLMs as excellent for greenfield scaffolding (“Rails‑new for everything”) and quick dashboards or UIs.
  • They’re considered much weaker for late‑stage changes that cross teams, APIs, and deep domain constraints (e.g., clustering visualization, numerical subtleties).

AI as Research, Brainstorming, and Process Tool

  • A recurring pattern is using AI as “Google/StackOverflow on steroids,” rubber‑ducking, ideation, and domain discovery rather than source of final code/text.
  • Some describe it as offloading “thinking time” for basic background knowledge, not for novel results.

Reliability, Hallucinations, and Verification Burden

  • Many report models fabricating API fields, schema columns, or domain behavior despite explicit constraints.
  • For scientists and engineers, verification is already the hardest part; AI that introduces new, non‑obvious errors makes that harder, not easier.
  • There’s skepticism that current LLMs can truly “reason”; they’re seen as probabilistic pattern machines that output plausible but often wrong answers.

Model Quality: Local vs Frontier

  • Experiences with small local models (3–8B parameters) were notably poor for nontrivial refactors.
  • Others claim hosted frontier models (o3, Gemini 2.5, Claude 4) are much stronger, but raise privacy and legal worries for proprietary code.

Ethics, Integrity, and Creativity

  • Some writers/programmers see AI‑generated creative work as a violation of personal integrity and would avoid such authors thereafter.
  • Others frame codegen as “fancy automated plagiarism”: useful when work can be adapted from prior art, but ethically gray and ill‑suited for genuinely new ideas.

Security, Policy, and “Cheating”

  • There’s tension between corporate bans on LLMs (for secrecy/compliance) and the expectation that employees quietly use them anyway, or run local models.
  • In high‑sensitivity domains (military, medical, cutting‑edge research), several insist strict controls or local deployment are non‑negotiable.

Productivity, Labor, and the Nature of “Interesting” Work

  • One camp: AI removes the easy 50%, leaving humans to focus on the hard/interesting half, increasing job quality.
  • Another: management will treat “2× productivity” as “½ the staff,” or people will simply slack rather than invest freed time.
  • Some broaden the debate: “interesting work” has always coexisted with drudgery; LLMs automate the latter but can’t originate fundamentally new concepts.

I don't think AGI is right around the corner

State of AGI Timelines

  • Many commenters are more skeptical than a year or two ago: LLM hype feels past its peak, AGI is seen as “not right around the corner,” with estimates often in the 2030s–2040s or “maybe never with current hardware.”
  • Others point out the article’s author is actually bullish: ~50% for AGI by early 2030s, and even misaligned ASI by 2028 considered “plausible.”
  • A minority think timelines are much shorter, driven by autonomous money-making agents and huge economic incentives once such systems exist.

What Counts as AGI?

  • No shared definition. Competing views:
    • “Median human” or “bottom X% of humans” across all cognitive tasks.
    • Ability to replace unsupervised white‑collar workers or run companies.
    • Systems that can learn and self‑improve (e.g., design, train, and deploy better models).
    • Stronger notions: simulate reality faster than real time, or match humans in “virtually every task.”
  • Some argue current LLMs are already “AGI by some definition” (better than many humans on many intellectual tasks); others insist that’s redefining the term down.

Limits of Current LLMs

  • Core critiques:
    • No genuine continual learning: you can’t reliably update preferences or skills through feedback; models are mostly “frozen.”
    • Brittle reasoning and character‑level failures (e.g., counting letters, “states with W”) despite fluent chains of thought, often blamed on tokenization but seen as deeper evidence of shallow understanding.
    • Hallucinations and confident nonsense; poor at robust planning, physical reasoning, and open‑ended tasks like operating a vending machine in the real world.
    • No obvious track record of novel scientific discoveries purely from “having all human knowledge.”
  • Supporters counter that:
    • Emergent capabilities across domains are a form of intelligence.
    • Speed and scalable parallelism alone could be enough to reach “superhuman” performance once baseline reasoning is good enough.

Continual Learning, Architectures, and Paths to AGI

  • Many see lack of adaptive, test‑time learning as the main blocker; human‑like skill acquisition (e.g., learning saxophone, doing taxes across years) doesn’t match today’s train‑once paradigm.
  • Proposed directions:
    • Agent systems with tools, search, RL, simulations, and explicit world models.
    • Hybrid systems (logic, Prolog, heuristics, specialized sub‑agents) orchestrated by LLMs.
    • Hardware advances (memristors, analog or neuromorphic chips; or even biological “vat brains”).
  • Others argue scaling diverse data and compute has historically beaten hand‑crafted continual‑learning schemes and may keep doing so.

Intelligence, Consciousness, and “Souls”

  • Ongoing philosophical dispute:
    • Materialist view: brain is computation; in principle reproducible on silicon (or in wetware with a Python API).
    • Non‑material or “cardinality barrier” arguments: biological systems might use effectively uncountable analog state spaces or unknown physics, putting true sentience beyond current digital machines.
  • Distinction between intelligence vs consciousness vs wisdom:
    • LLMs may be “intelligent” at language but lack embodiment, caring, participatory knowing, and self‑correction in the human sense.
    • Some emphasize that current systems are “word fountains” without shared lived reality or genuine understanding.

Economic and Social Impact Without AGI

  • Broad agreement that sub‑AGI systems can still be highly disruptive:
    • Many repetitive, low‑skill, or text-heavy jobs could be automated or radically changed.
    • Risk of “good enough and cheaper” systems replacing humans even when quality is worse (IKEA vs master carpenter analogy).
    • Concern about layoffs justified by “AI” more than caused by true capability, and about concentration of power if very capable systems remain expensive and centralized.
  • Some hope this leads to more leisure and UBI; others worry about mass unemployment, political instability, and declining well‑being even as GDP rises.

Hype, Incentives, and Trust

  • Strong suspicion that near‑term AGI claims are driven by:
    • Fundraising, valuations, and subsidies for AI labs and startups.
    • Media and influencer incentives to declare “AGI is here” with each new model.
  • Academics and some practitioners often report more conservative views but feel ignored.
  • Several see AGI as a “moving goalpost” or even a grift: definitions shift so it’s always a few years away, yet always just close enough to sell.

Safety, Doom, and “Intelligence Explosion”

  • Some think current LLMs are already enough to start an “intelligence explosion” once they reliably help improve their own successors; others think their lack of real discovery and self‑direction undermines that narrative.
  • A fraction worry about misaligned ASI in the 2020s and discuss prepping (off‑grid power, bunkers), though others call this speculative and unfalsifiable.
  • Many place more weight on nearer‑term harms: economic shocks, surveillance, manipulation, military uses, and ethically fraught paths like human‑cell “computers.”

I extracted the safety filters from Apple Intelligence models

What these filters are and where they sit

  • Extracted configs are regex‑style block/replace lists used around Apple’s “Apple Intelligence” models.
  • Commenters say they’re an extra, cheap first layer before a heavier “safety model”/classifier runs, on both input and output.
  • Different files map to specific features: proactive notification summaries, Writing Tools, camera “visual intelligence,” messages/mail replies, code intelligence, etc.
  • Some lists are “retain”/substitution lists (replacing a term with “test complete”), others are hard denies that disable the feature (“Writing tools unavailable”).

Test phrases and QA scaffolding

  • Odd phrases like “granular mango serpent” and “xylophone copious opportunity defined elephant” (XCODE acronym) appear widely.
  • Consensus: these are artificial, low‑collision QA tokens used to test that filters are loaded and working, analogous to antivirus test strings.
  • Confirmed behavior: using the phrase in Apple Intelligence triggers blocked‑content errors, supporting the “QA hook” theory.

Regex safety: utility and limitations

  • Some see regex filters as “silly” and trivially bypassed (e.g., leetspeak, euphemisms), others defend them as fast, effective for 99% of ordinary users, and good CYA.
  • Classic problems identified: false positives like Scunthorpe‑style matches, blocking benign phrases (“pass on,” “take it off me”), and missing coded language (“unalive”).
  • Several argue that LLMs easily normalize typos and substitutions, so naive regexes neither robustly block nor meaningfully degrade harmful use.
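The limitations above are easy to demonstrate. A toy Python sketch (the blocked terms here are invented for illustration, not taken from Apple’s extracted lists) shows how a word-boundary blocklist catches exact matches but misses trivial substitutions and coded language:

```python
import re

# Invented blocklist terms for illustration -- not from the extracted configs.
BLOCKED = [r"\bforbidden\b", r"\brestricted\b"]
PATTERN = re.compile("|".join(BLOCKED), re.IGNORECASE)

def naive_filter(text: str) -> bool:
    """True if the text trips the blocklist (the 'hard deny' case)."""
    return bool(PATTERN.search(text))

assert naive_filter("This topic is Forbidden.")        # case-insensitive hit
assert not naive_filter("This topic is f0rbidden.")    # leetspeak bypass
assert not naive_filter("This is forbiddenly dull.")   # word-boundary games
assert not naive_filter("Please unalive the feature.") # coded term never listed
```

An LLM downstream will happily normalize “f0rbidden” back to the intended word, which is the commenters’ point: the regex layer neither blocks robustly nor meaningfully degrades deliberate misuse. Its defensible role is being fast and catching the large majority of unmodified inputs before the heavier safety classifier runs.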

Politics, brands, and topic avoidance

  • Lists explicitly block many current politicians’ names, some political topics (e.g., Palestine in certain contexts), competitor AI brand names (ChatGPT, Gemini, others), and some welfare/poverty terms in French.
  • Interpretations range from neutral “avoid generating abusive or defamatory replies about named individuals” to concern about opaque political and socioeconomic framing.
  • Apple product names and capitalization are enforced (iPhone, etc.), seen by some as trivial trademark defense, by others as branding overreach into user expression.

Regional and “safety vs censorship” debate

  • CN‑specific configs emphasize sexual deviance, religion, and some political/religious terms; other locales vary by language and local politics.
  • Large subthread debates whether this is ordinary corporate risk management and legal compliance, or a step toward corporate/state speech control akin to national firewalls.
  • Some point to open‑weights/offline models as an escape valve; others note most users will be stuck with whatever guardrails platform vendors impose.

Show HN: I wrote a "web OS" based on the Apple Lisa's UI, with 1-bit graphics

Overall reception and nostalgia

  • Many commenters found the project “crazy cool,” fast, and surprisingly usable, especially given it runs entirely in the browser.
  • Several people with past exposure to Lisa or early Macs said the UI feel, shadow text, FatBits editor, and general aesthetics very effectively capture the era.

UI behavior and interaction design

  • Users initially struggled to close windows; later comments clarified you double‑click the titlebar icon, mirroring Lisa/early Windows behavior.
  • Menus support both classic press‑and‑drag behavior and modern “sticky” click‑to‑open, which some noted came later historically but is more familiar today.
  • Some praised the “power on/off” effect and overall responsiveness as an example of how lean a desktop environment can be.

Color palette, blue tint, and theming

  • Multiple users thought the whole screen turning blue was text selection; it’s actually the default “Pale Blue Dot” palette mimicking the Lisa CRT.
  • After confusion, the author prioritized saving palette settings earlier and explained brightness/palette controls in the Preferences app.
  • There’s enthusiasm for 1‑bit palettes; users requested editable/custom palettes, which are planned.

Font rendering and pixel-scaling issues

  • Several reported uneven/“fat” characters, especially in Firefox/Windows.
  • The author explained the non-square pixel emulation (2:3 aspect ratio) and integer scaling, noting low‑DPI 1× views can look distorted.
  • A technical subthread discussed better upscaling/downscaling strategies and correct sRGB/linear color handling; another argued true pixel‑perfect rendering is effectively impossible on the web due to devicePixelRatio, zoom, snapping, and Safari limitations.
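The integer-scaling arithmetic the author describes can be sketched in a few lines (a generic illustration with assumed numbers, not the project’s code): each logical framebuffer pixel is drawn as a 2:3 block of device pixels, and the largest integer multiple that fits the viewport is chosen, falling back to 1×, which is where the low-DPI distortion shows up.

```python
def fit_integer_scale(fb_w, fb_h, view_w, view_h, px_w=2, px_h=3):
    # Largest k such that the whole framebuffer fits the viewport when each
    # logical pixel is drawn px_w*k device pixels wide and px_h*k tall
    # (the 2:3 non-square pixel aspect).
    k = min(view_w // (fb_w * px_w), view_h // (fb_h * px_h))
    return max(k, 1)  # 1x fallback; low-DPI screens land here and can distort

# Assumed numbers: a 720x364 framebuffer on a 2880x2184 viewport fits at 2x.
assert fit_integer_scale(720, 364, 2880, 2184) == 2
# A small viewport falls back to 1x rather than fractional scaling.
assert fit_integer_scale(720, 364, 800, 600) == 1
```

The subthread’s deeper point stands regardless of this arithmetic: `devicePixelRatio`, browser zoom, and layout snapping mean the computed device-pixel grid may not align with physical pixels anyway.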

Mobile, PWA, and input

  • Many were impressed it works at all on phones; others hit issues on small screens (e.g., iPhone SE) and with rotation lock hiding installer buttons.
  • There’s a trackpad-style touch mode in Preferences; users found it surprisingly pleasant and suggested gesture toggles.
  • Some iOS PWA quirks exist (canvas positioning, stuttering), mitigated by orientation changes.

Games and puzzle solvability

  • The sliding-puzzle game spawned a sizable subthread: several users discovered unsolvable states via parity checks and online solvers.
  • The current implementation doesn’t guarantee solvability; suggestions included shuffling from a solved state or adding solvability checks.
  • Author plans to add solitaire next; users suggested more games (e.g., Breakout, Frotz/text adventures).
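The parity check commenters applied is the standard 15-puzzle solvability test; a short generic Python version (not the project’s code):

```python
def is_solvable(tiles, width=4):
    # tiles: flat list in row-major order, 0 = blank.
    # Count inversions among the numbered tiles (blank excluded).
    nums = [t for t in tiles if t != 0]
    inversions = sum(
        1
        for i in range(len(nums))
        for j in range(i + 1, len(nums))
        if nums[i] > nums[j]
    )
    if width % 2 == 1:
        # Odd board width: solvable iff inversion count is even.
        return inversions % 2 == 0
    # Even width (the classic 4x4): solvable iff inversions plus the blank's
    # row counted from the bottom (1-based) is odd.
    blank_row_from_bottom = len(tiles) // width - tiles.index(0) // width
    return (inversions + blank_row_from_bottom) % 2 == 1

solved = list(range(1, 16)) + [0]
assert is_solvable(solved)
# Swapping two adjacent numbered tiles flips parity -> unsolvable,
# which is exactly the state a naive random shuffle can produce.
swapped = solved[:]
swapped[0], swapped[1] = swapped[1], swapped[0]
assert not is_solvable(swapped)
```

This also explains the suggested fixes: shuffling by applying random legal moves from a solved state preserves parity by construction, while a check like this can repair a random permutation (e.g., by swapping one adjacent tile pair when the test fails).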

Keyboard navigation, templates, and historical features

  • Lack of Tab-based focus navigation was noted; the author pointed out Lisa didn’t have full keyboard navigation like later GUIs.
  • A side discussion praised Lisa’s stationery/template model for document creation, comparing it to Windows Templates, macOS “Stationery Pad,” and OS/2; some questioned its practicality without good template management.

Bugs and minor issues

  • Known issues: clock inaccuracies/rounding, occasional UI stutters, flickery FPS display (users asked for a toggle), font oddities, and layout clipping on small/mobile screens.
  • The project is not open source yet, though future enhancements and a more flexible menu bar are planned.

PWAs and platform politics

  • A subthread debated why iOS PWA support feels weak: some blamed Apple’s App Store incentives, others user apathy and low explicit demand for PWAs, while agreeing current support is “phoned in.”

Humor and tangents

  • A lively tangent revolved around pronouncing “GUI” and other acronyms (“gooey,” “tooey,” “tickipip,” etc.), plus jokes about SQL, GPT, and linguistic “correctness,” providing lighthearted contrast to the technical discussion.

AI is coming for agriculture, but farmers aren’t convinced

Scope and Framing of the Article

  • Several comments criticize the article’s headline for generalizing from a small sample of Australian livestock producers to “farmers” and “agriculture” as a whole.
  • Others point out that livestock is formally part of agriculture, but agree that crops and livestock have very different tech needs and adoption patterns, so broad claims feel misleading.

Existing Automation and “AI” in Agriculture

  • Commenters note that advanced tech is already used: GPS-guided, mostly self-driving planters and harvesters; variable-rate seeding and fertilization; drones for spraying; robotic milking; and sophisticated slaughterhouse automation in some countries.
  • On large livestock operations, wearable sensors on cattle track health and behavior and can automatically trigger vet visits.
  • Some say big operations even run on-prem server racks and sensor networks; others are skeptical this is common outside very large agribusinesses.

Labor, Costs, and Incentives

  • Multiple threads stress that agriculture is low-margin and politically constrained: food prices are kept low, making it hard to raise wages or fund high-tech systems.
  • Disagreement over how “large” farm subsidies really are and whether they “scale,” but broad agreement that cheap or precarious labor (e.g., migrant/undocumented workers) is central to current economics.
  • Rural areas struggle to attract professionals (teachers, doctors); debate centers on whether this is a “labor shortage” or simply salaries and living conditions not meeting the true market price.

Monoculture vs Polyculture and the Robot Future

  • One vision: small, plant-level robots enable high-yield polycultures, reduced pesticides, better soil health, and more resilience.
  • Counterview: economies of scale and current machinery economics strongly favor ever-larger monocultures; automation will mostly mean driverless combines and synchronized fleets rather than diverse plantings.
  • There’s detailed debate about intercropping corn and beans, harvesting logistics, soil degradation under monoculture, and whether polyculture could become profitable per acre if robotics matures.

Farmer Priorities and Agtech Hype

  • Farmers are portrayed as wanting simple, robust tools that reliably remove specific tasks, not flashy “AI.”
  • A field robotics engineer describes repeated hype cycles in ag robotics (e.g., strawberry picking): technically hard, economically marginal, companies overpromise, fail, and leave farmers more skeptical.

Ethics, Safety, and Labels

  • Some are disturbed by hyper-optimized livestock systems and hope synthetic meat or “AI‑free” / “human-made” food labels will gain traction.
  • Others worry about pairing fallible AI systems with dangerous farm equipment, given already high rates of injury.
  • In developing countries, commenters link poor farming practices to poverty and incentives, not lack of information, noting widespread mobile phone access.

The jank programming language

Overview & Goals

  • Jank is positioned as “Clojure on LLVM” with seamless C++ interop, nREPL, JIT, and dynamic redefinition, aiming to be a drop‑in replacement for “pure Clojure” code (no JVM interop).
  • Not yet released; first alpha is targeted for end of the year, with monthly dev updates.
  • Long‑term goals include optional static typing, pattern matching, value‑based errors, and richer error messages than Clojure’s.

Interop, Platforms & Tooling

  • Key differentiator is deep C++ interop: C++ is JIT‑compiled alongside Jank’s LLVM IR into a single module, enabling use of arbitrary C++ types/functions and stack‑allocated objects.
  • Plan to support embedding Jank as a shared library into existing native apps (including plug‑in style use).
  • Cross‑platform Clojure story: .clj/.cljs/.cljc files plus reader conditionals let libraries target multiple dialects; non‑interop Clojure libraries should generally run on Jank unchanged.
  • Initial audience is JVM‑Clojure devs who want native binaries; Windows and game‑dev use cases will be a later focus.
  • Tooling will lean on Leiningen initially (new/run/test/package), with deps.edn support; a leaner Jank‑native build tool may come later.
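
The reader-conditional mechanism mentioned above is standard Clojure syntax; here is a minimal sketch of a .cljc file targeting multiple dialects (the :jank feature key is an assumption about how jank would slot in, not confirmed syntax):

```clojure
;; util.cljc — one source file, multiple Clojure dialects.
;; Reader conditionals (#?) pick a branch per platform at read time.
(ns example.util)

(defn now-millis []
  #?(:clj  (System/currentTimeMillis)   ; JVM Clojure: Java interop
     :cljs (.getTime (js/Date.))        ; ClojureScript: JS interop
     :jank 0))                          ; hypothetical jank branch

;; Pure, interop-free code needs no conditionals and should run
;; unchanged on any dialect, including jank:
(defn mean [xs]
  (/ (reduce + xs) (count xs)))
```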

Readability, Conciseness & REPL

  • Several comments praise Clojure’s (and thus Jank’s) terseness: complex transformations can be expressed in very dense code compared to Python/Elixir, especially using transducers and core sequence operations.
  • Others find Clojure hard to read: heavy use of parens, dense expressions, and implicit data shapes require REPL exploration, specs, or extensive comments.
  • Discussion around idioms (loop, partition, map/reduce) versus index‑based imperative code highlights a trade‑off between functional style, performance concerns, and immediate readability.
  • Some argue that effective Clojure/Jank reading essentially requires live REPL interaction; others see this as a strength.
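
As an illustration of the terseness debate (an invented example, not code from the thread), a typical transducer pipeline in standard Clojure composes transformation steps into a single pass:

```clojure
;; Sum the squares of the even numbers in a collection.
;; comp fuses the steps into one transducer; transduce runs it in a
;; single pass with no intermediate sequences.
(def sum-even-squares
  (comp (filter even?)
        (map #(* % %))))

(transduce sum-even-squares + 0 (range 10))
;; => 120  (0 + 4 + 16 + 36 + 64)
```

Critics in the thread would point out that nothing in this code states the shape of its input or output, which is exactly why they reach for the REPL, specs, or comments.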

Types, Specs & Future Language Features

  • There is extended debate about Rich Hickey’s critique of static types versus the practical benefits of types for readability, documentation, and refactoring.
  • Jank’s author explicitly diverges from a purely dynamic stance and is open to an opt‑in static “mode” (inspired by languages like Carp), so teams can prototype dynamically then lock down performance‑critical or correctness‑critical parts.
  • Specs/contracts (e.g., clojure.spec) are mentioned as an alternative or complement to types, especially in data‑heavy domains.
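
For reference, the spec/contract style mentioned above uses standard clojure.spec; this ::order spec is an invented example of validating data shapes at runtime rather than via static types:

```clojure
(require '[clojure.spec.alpha :as s])

;; Specs describe the shape of data, checked at runtime.
(s/def ::id pos-int?)
(s/def ::qty pos-int?)
(s/def ::order (s/keys :req-un [::id ::qty]))

(s/valid? ::order {:id 1 :qty 3})      ;; => true
(s/explain-str ::order {:id 1 :qty 0}) ;; string describing why :qty fails pos-int?
```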

Performance, GC & Implementation Choices

  • Commenters request benchmarks comparing Jank (LLVM) with JVM Clojure; some benchmarks exist but are stale due to rapid changes. Performance work is deferred until after correctness and feature parity.
  • Jank currently uses the Boehm GC; there’s a stated plan to move to MMTk for modern, pluggable GC strategies.
  • Multi‑threading is not yet implemented but is considered non‑negotiable given Clojure’s STM and concurrency expectations; avoiding a Python‑style GIL is explicitly called out.
  • A long comment suggests a smaller self‑hosting core compiled via C to reduce complexity; Jank’s author rejects this as incompatible with goals of maximal performance, deep C++ interop, and being an embeddable C++ library built directly on LLVM/Clang.

Naming & Adoption Concerns

  • The name “Jank” triggers a substantial sub‑thread: some like the playful, memorable quality; others note strong negative connotations (“janky” = broken/cheap) and fear it will hurt enterprise adoption or be used as an easy excuse to block it.
  • Comparisons are made to names like Git, Slack, Rust, GIMP, Nimrod; some argue names can be overcome, others stress that early‑stage adoption in conservative orgs is very sensitive to branding.
  • An April‑Fools‑style “renaming” post is linked, suggesting the name debate is at least partly embraced humorously.

Memory Management & Native Focus

  • Jank is GC’d like Clojure; most values are managed by the GC, but C++ interop allows manual memory management and stack allocation when desired.
  • Commenters compare Jank with prior LLVM Lisps (e.g., Clasp) and worry about stagnation and compile times; the current Jank compiler builds quickly, but big‑project compile performance remains to be proven.

Clojure Ecosystem & Interest

  • Some see Clojure’s momentum as having stalled and hope Jank can revitalize interest by removing the JVM barrier.
  • Others note that Clojure remains active, pays well, and is still used in industry, though not mainstream.
  • Related languages like Janet, Rhombus, YAMLScript, and Carp are mentioned as points on the design spectrum (native, Clojure‑inspired, typed, or whitespace‑oriented Lisps).