Hacker News, Distilled

AI-powered summaries for selected HN discussions.


Intel's $475M error: the silicon behind the Pentium division bug

FPU architecture and performance tricks

  • Discussion of the Pentium FPU as a “stack machine” at the ISA level that is really eight registers with renaming-like tricks underneath.
  • fxch acts like a cheap rename: it issues in the secondary (V) pipe in a single cycle, enabling dense scheduling of fadd/fmul.
  • Constraints: fmul can issue only every other cycle, and one operand must be the top of stack (TOS), leading to complex fxch patterns, especially across loop boundaries.
  • Compilers of the time varied: some did good stack scheduling; others spilled or overused fxch.

Reverse engineering and microcode

  • Microcode/ROM contents can be extracted from high-quality die photos using automated tools, but delayering and clarity are hard.
  • The real challenge is understanding the encoded micro-ops; early CPUs are better documented than later ones.
  • Reverse-engineering work is being done with a metallurgical microscope at home; optical resolution is nearing its limits for Pentium-scale geometries.

FDIV implementation, bug, and table design

  • Many readers appreciated the detailed explanation of how floating-point division is built from repeated integer-like steps and lookup tables.
  • Several comments focus on why unused lookup entries weren’t simply filled with 2 from the start.
  • Explanations offered: “zero” was treated as a normal value rather than a “don’t care”; table generation and PLA optimization were likely split across teams; once the PLA was “small enough” optimization may have stopped.
  • The later fix (filling all undefined entries with 2) both removed edge cases and simplified the hardware.
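The “repeated integer-like steps” idea can be sketched in miniature. The toy below is plain restoring division, producing one quotient bit per step; the actual Pentium used radix-4 SRT, choosing quotient digits from −2..2 via a lookup table indexed by leading bits of the partial remainder and divisor — the table whose five missing entries caused the bug. A minimal illustrative sketch, not the hardware algorithm:

```python
def restoring_divide(dividend: int, divisor: int, bits: int = 16):
    """Toy restoring division: one quotient bit per iteration.

    Each step shifts one dividend bit into the partial remainder and
    subtracts the divisor when it fits -- the repeated integer-like
    steps that hardware dividers pipeline. Radix-4 SRT instead picks
    a quotient digit per cycle from a (partial remainder, divisor)
    lookup table, retiring two bits at a time.
    """
    quotient, remainder = 0, 0
    for i in range(bits - 1, -1, -1):
        remainder = (remainder << 1) | ((dividend >> i) & 1)
        if remainder >= divisor:      # trial subtraction succeeds
            remainder -= divisor
            quotient |= 1 << i        # record a 1 quotient bit
    return quotient, remainder

print(restoring_divide(100, 7))  # -> (14, 2), i.e. (100 // 7, 100 % 7)
```

Restoring division never mispredicts a digit, so it has no table to get wrong; SRT trades that simplicity for speed, which is exactly where the undefined table entries bit.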

Error rates, real-world impact, and user perception

  • Intel’s claim: astronomically rare per-user error rate, comparable to DRAM bit flips.
  • IBM’s analysis: for a heavily used spreadsheet, an individual user might hit it every few weeks.
  • Some argue IBM’s scenario is unrealistic because spreadsheets often recompute the same stable values; others think IBM’s framing was misleading marketing.
  • Broader point: “1 in a billion” can be frequent at scale (large systems / many users), and averages can hide that a few users are hit constantly.
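The “rare per user, frequent at scale” point is plain arithmetic. The per-division rates below are the two figures the debate centered on; the workload numbers are illustrative assumptions, not from the thread:

```python
# Two dueling estimates (workload figures are assumed for illustration):
intel_rate = 1 / 9e9   # Intel: odds per random division
ibm_rate   = 1 / 1e8   # IBM: odds for operand patterns IBM argued were common

divs_per_day = 4_000_000  # assumed heavy spreadsheet recalculation

def days_between_errors(rate: float, divs: float) -> float:
    return 1 / (rate * divs)

intel_days = days_between_errors(intel_rate, divs_per_day)  # ~2,250 days
ibm_days   = days_between_errors(ibm_rate, divs_per_day)    # ~25 days

# Scale effect: a million light users (1,000 divs/day each) at Intel's
# own rate still produce an error somewhere roughly every nine days.
fleet_errors_per_day = 1_000_000 * 1_000 * intel_rate       # ~0.11/day
```

The same rate thus yields “once in decades” for one user and “constantly, somewhere” for a fleet — and an average hides that a few heavy users absorb most of the hits.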

Intel’s response, QA, and trust

  • Commenters view the initial “doesn’t even qualify as errata” stance as wild for a CPU doing incorrect arithmetic, regardless of rarity.
  • The incident is seen as both a PR disaster and, paradoxically, a long-term brand amplifier.
  • Several recall having to ship workarounds or detection code, pushing Intel’s problem onto developers.
  • Discussion that Intel later invested heavily in verification, then allegedly cut verification staff to move faster, with some linking this to more recent reliability issues.

Comparisons to other companies and support models

  • Comparisons with Amazon and Apple quietly replacing defective devices highlight how strong support infrastructure can contain reputational damage.
  • Others note that for high-value infrastructure products (like CPUs in corporate fleets), quiet consumer-style replacement isn’t as simple; vendor contracts and large IT deployments complicate responses.

Broader tangents: strategy, GPUs, and mobile

  • Some argue Intel’s truly huge errors were strategic: neglecting GPUs and missing mobile/SoC opportunities (e.g., selling off XScale, declining early smartphone chips).
  • Debate over whether ISA (x86 vs ARM) or business focus and culture are the main reasons Intel lagged in low-power markets.
  • Mixed views on Intel iGPUs: praised as “good enough and solid” for everyday Linux use by some; others report frequent GPU hangs and see decades of underinvestment.

Attitudes toward numerical correctness

  • Several comments stress that users rarely check results; even visibly wrong outputs can go unnoticed without domain intuition.
  • Nevertheless, in finance, science, and engineering, silent arithmetic errors are considered unacceptable, regardless of how infrequently they occur.

Family of OpenAI whistleblower Suchir Balaji demand FBI investigate death

Scope of Concern and Calls for Investigation

  • Many argue that an independent or federal investigation (e.g., FBI) is warranted, given whistleblower status, large financial stakes, and possible wider implications.
  • Others stress that law enforcement needs concrete evidence or specific suspicions; they see “investigate because he criticized OpenAI and then died” as too weak a premise.

Suicide vs Foul Play

  • Some find the reported rapid “suicide” determination (≈40 seconds) suspicious and question how it could be reached without clarifying firearm ownership or performing a full forensic workup.
  • Others note this timing comes from a grieving parent and may not reflect the actual medical examiner process.
  • There is debate over how much scrutiny apparent suicides get, especially in under-resourced departments, and whether toxicology or deeper investigation is standard or affordable.

Policing, Resources, and Inequality

  • Thread disputes whether big-city police (e.g., SF, NYC) are “underfunded,” pointing to billion‑dollar budgets vs. competing demands and staffing ratios.
  • Several comments highlight perceived disparities: high‑profile victims or CEOs get exhaustive investigations, while murders of less notable or marginalized victims are seen as under‑prioritized.

OpenAI’s Role and Working Conditions

  • Some want OpenAI’s workplace culture investigated, arguing stress, ostracism, or retaliation could have contributed.
  • Others say there is no concrete evidence tying working conditions or corporate actions to the death, especially given the person had already left the company.
  • Pay at OpenAI is debated: top researchers are described as extremely well‑compensated, while others argue many staff (and data labelers) are less so, especially relative to Bay Area costs.

Whistleblowers, Mental Health, and Risk

  • One side emphasizes the plausibility that whistleblowing, career damage, and social ostracism can trigger or worsen mental health crises and suicide risk.
  • Another side objects to implying whistleblowers are more likely to have pre‑existing mental health problems, calling that prejudicial.

Corporate Power, Retaliation, and Conspiracy Claims

  • A number of commenters raise examples of aggressive corporate harassment (e.g., the e‑commerce stalking scandal) to argue that high‑level executives can behave abusively, even criminally.
  • Some extrapolate further, asserting that large tech firms and billionaires likely have access to hitmen or exotic “chemical” methods to induce suicide‑like states; they see a pattern in multiple whistleblower deaths (e.g., Boeing cases).
  • Others push back hard, calling murder theories illogical, lacking motive (the whistleblower’s claims were already public and not unique), and unsupported by evidence; they argue extraordinary claims require extraordinary proof.

Government, National Security, and Foreign Actors

  • Several note that frontier AI work likely attracts attention from domestic and foreign intelligence services; they speculate the whistleblower could have been approached for sensitive information.
  • Some mention revolving doors between intelligence agencies and AI firms, viewing OpenAI as strategically important and intertwined with the “military‑industrial complex.”

Gun Ownership and Specific Unanswered Questions

  • Commenters question how a young SF tech worker came to possess the firearm used, and why public reporting has not clarified who legally owned or purchased it.
  • These gaps in public detail fuel calls for a more thorough and transparent investigation.

Why it's hard to trust software, but you mostly have to anyway

Software complexity and opaque updates

  • Several comments criticize large, frequent updates (e.g., 100MB weekly desktop client updates) with no visible UI change.
  • Electron and full-package reinstall (no incremental updates) are cited as primary causes, but others argue these are still design choices that expand the trusted codebase.
  • There is concern that huge dependency trees (e.g., npm ecosystem) and “big balls of mud” like compilers make meaningful review impractical, though some note specific tools (like a certain compiler) now have zero dependencies.

Trust, supply chain, and “open source in name only”

  • Building from source is praised, but people question how realistic it is to read or fully understand all code.
  • Examples are given of projects where binaries on release pages may not match the visible source.
  • Reproducible builds and systems like Guix are highlighted as concrete ways to verify that a binary matches its source, assuming deterministic builds.
  • Some emphasize that even if you write and compile everything yourself, you still depend on compilers, libraries, hardware, and environments (echoing “trusting trust” concerns).
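Reproducible builds reduce “does this binary match the source?” to a byte-for-byte comparison: if independent parties build the same source deterministically, the artifacts must hash identically. A minimal sketch of the comparison step (the artifact contents here are hypothetical stand-ins):

```python
import hashlib

def artifact_digest(data: bytes) -> str:
    """SHA-256 of a build artifact's bytes."""
    return hashlib.sha256(data).hexdigest()

# Stand-ins for two independently produced binaries (hypothetical):
my_rebuild     = b"\x7fELF...deterministic build output"
vendor_release = b"\x7fELF...deterministic build output"

# Under a deterministic build, any mismatch means the released binary
# was not produced from the published source -- or the build leaks
# nondeterminism (timestamps, absolute paths, parallelism ordering).
if artifact_digest(my_rebuild) == artifact_digest(vendor_release):
    verdict = "binary matches local rebuild"
else:
    verdict = "mismatch: investigate before trusting"
print(verdict)
```

This is the check systems like Guix automate; the hard part is not the hash but making the build deterministic enough for honest rebuilds to collide.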

Verifiable hardware, VMs, and formal methods

  • Work on verifiable hardware platforms and open FPGA flows is praised, with comparisons among projects on how “open” and verifiable they are.
  • Verifiable VMs and zkSNARK-based systems that can prove execution of compiled code (e.g., Rust→RISC-V) are seen as promising for proving what ran, though others note that “provable” does not equal “secure.”
  • Some hope for a future of standardized, formally verified, narrowly scoped software stacks, though others point out likely resistance from state actors and high costs.

Security models and capability systems

  • Multiple comments argue that capability-based security, least privilege, and stronger sandboxing are more realistic ways to mitigate untrusted software than trying to fully trust code.
  • Web and Electron apps are criticized as especially dangerous because they can load fresh, obfuscated code each run and often have broad local access.

Liability, warranties, and regulation

  • There is debate over whether software should carry warranties or engineer liability similar to civil engineering or medicine.
  • Critics say full liability is infeasible given massive legacy code, complex interactions, and global competition; others argue that high-stakes domains still need stronger accountability and processes.

Cultural and meta reactions

  • Several note a shift from earlier, relatively user-aligned desktop utilities to today’s “extract value” culture, locked-down platforms, and user-hostile defaults.
  • App stores are viewed both as improving safety for non-technical users and as restrictive “golden cages.”
  • Some dismiss the article for using a generative-AI header image; others reference classic essays and fiction about trusting compilers and hidden code.

EU law mandating universal chargers for devices comes into force

Scope and Technical Details of the EU Rule

  • Law targets “radio equipment” under 100W. Many commenters note USB Power Delivery (USB‑PD) is explicitly required for devices that do wired “fast charging” (>5V, >3A, >15W).
  • Proprietary fast‑charge protocols are allowed, but they must support full USB‑PD functionality at least as well as the proprietary mode.
  • The baseline 5V/slow charging remains allowed; fast‑charge behavior is constrained by PD rules.
  • Separate EU “Ecodesign” regulation is expected to deal with charger-side behavior (efficiency, PoE, etc.), not just device ports.

Innovation vs Regulation

  • Critics worry mandating USB‑C will freeze connector innovation and create regulatory capture: only big firms will have the lobbying power to update the law when a better standard appears.
  • They point to slow or awkward evolution of other EU rules (e.g., cookie banners, data protection, self‑driving constraints) as evidence that “laws don’t get updated quickly.”
  • Supporters counter that:
    • Previous EU pressure already pushed micro‑USB, then allowed migration to USB‑C.
    • Laws can reference evolving standards (e.g., newer EN/IEC versions).
    • Market “left alone” did not converge; only Apple resisted USB‑C.

USB‑C / USB‑PD Practicalities

  • Many users want the mandate extended to more DC devices (routers, switches, small appliances) to eliminate barrel‑jack “wall warts.”
  • Discussion of PD quirks:
    • Fixed‑voltage PD‑to‑barrel “trigger” cables often fail if a charger doesn’t support certain voltages (e.g., 12V).
    • PD 3.0’s Programmable Power Supply (PPS) can request 3.3–21V in fine steps, which partially solves this, but support varies in the wild.
    • Mis-implemented USB‑C (e.g., missing CC pull-downs) leads to devices that only charge via A‑to‑C cables.
  • Proprietary fast‑charge systems (e.g., SuperVOOC) are debated: some praise thermal behavior; others argue PD PPS can match them and that vendor lock‑in is the real motive.

Batteries and E‑Waste

  • Many believe chargers are a smaller e‑waste issue than non‑replaceable batteries.
  • There is strong support for upcoming EU rules requiring “readily removable” portable batteries by ~2027, though “readily” and water resistance details are debated.
  • Arguments:
    • Pro: dramatically extends device life; aligns with circular‑economy goals.
    • Con: complicates waterproofing, adds cost/volume, increases risk from bad third‑party cells.

User Experience, Apple, and Cables

  • Mixed views on Lightning vs USB‑C: Lightning seen as mechanically robust and well‑controlled; USB‑C praised for power/data capabilities but criticized for fragile/loose ports and confusing cable capabilities.
  • Anecdotes about Apple devices refusing to fast‑charge with some third‑party cables raise suspicions of “malicious compliance,” though others report no issues and point to wattage or signaling differences.

After a 24-second test of its engines, the New Glenn rocket is ready to fly

New Glenn vs. SpaceX

  • New Glenn is years late versus earlier promises; many see Blue Origin as far behind SpaceX’s Falcon 9/Heavy and Starship.
  • Some argue New Glenn was implicitly framed as a “Falcon 9/Heavy competitor,” with Kuiper as its anchor customer and broader commercial/government payloads filling capacity.
  • Others question claims that Starship will be “fully operational” before New Glenn, noting Starship is still in test mode and New Glenn’s first launch and landing attempt are imminent, with up to ~12 missions targeted in 2025.

Launch Costs and Economics

  • Heated debate over Starship’s per‑launch cost: cited numbers range from ~$90–100M for current fully expendable test vehicles to Musk’s aspirational $10M or even $2M.
  • Some criticize “Elon GAAP” for ignoring fixed infrastructure, labor, and amortized R&D, arguing those dominate total program cost.
  • Others counter that mass production, stainless-steel structures, cheap methane, and full reusability will make Starship the lowest cost per kg.
  • New Glenn’s often-quoted ~$68M/launch is also characterized as aspirational; posters stress the need for apples-to-apples comparisons (similar maturity, reuse, and accounting assumptions).

Mission Profiles and Market Segmentation

  • One camp claims a fully reusable Starship would dominate essentially all mission profiles, including smallsats via rideshare.
  • Others argue:
    • Small, dedicated payloads and unusual orbits can favor smaller vehicles (Electron, Neutron, New Glenn) on a “dedicated ride” rather than per‑kg basis.
    • Some beyond‑LEO or non‑orbital missions may prefer simpler expendable upper stages rather than carrying Starship’s landing hardware.
    • Insurance and risk tolerance can steer customers away from the cheapest option or from very large, complex vehicles.

Starship Technical Progress and Risks

  • Supporters highlight Starship’s achievements: clustered methalox engines, catching boosters, belly‑flop reentry, and methane engines reaching space.
  • Skeptics note ongoing issues with heat shields, incomplete payload capability, and comparisons to historical programs (N1, X‑33), questioning schedule optimism and rapid-turnaround claims.

Telemetry and Static-Fire Data

  • Consensus: data from a 24‑second engine test is likely in the MB–GB range, far below LHC scales.
  • Most rocket sensors run at tens–hundreds of Hz, with limited high‑speed channels; test video often dominates raw data volume.
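The MB–GB estimate follows from channel-count arithmetic. The sensor counts and rates below are illustrative assumptions, not figures from the thread:

```python
# Assumed instrumentation for a static fire (illustrative):
channels   = 2_000   # pressure, temperature, strain, vibration...
rate_hz    = 200     # typical low-speed channel sampling rate
sample_b   = 4       # bytes per sample (32-bit value)
duration_s = 24      # length of the engine test

telemetry_bytes = channels * rate_hz * sample_b * duration_s
print(f"{telemetry_bytes / 1e6:.1f} MB of raw sensor telemetry")
# -> 38.4 MB; even adding a few kHz-rate channels and HD test video,
#    the total stays in the GB range, nowhere near LHC-scale volumes.
```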

Blue Origin Culture and Competition

  • Some describe Blue Origin’s hiring and org structure as “Amazon‑like” and insular, with puzzling rejections of experienced candidates.
  • Overall sentiment favors Blue Origin and other providers succeeding to keep prices down, avoid a SpaceX monopoly, and enable more ambitious missions (Kuiper, lunar landings, space habitats).

Apple Photos phones home on iOS 18 and macOS 15

What the new Photos feature does

  • iOS 18 / macOS 15 add “Enhanced Visual Search” in Photos.
  • When enabled, the device creates feature vectors for suspected landmarks in photos, adds differential-privacy noise, encrypts them, and sends them via an OHTTP relay to Apple’s servers.
  • Servers perform homomorphic-encrypted nearest‑neighbor search against a global landmark index, return encrypted results, and the device tags photos locally.
  • Several users report the setting was enabled by default after upgrade; others say it was off or depends on other settings, so behavior is unclear.
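The pre-encryption steps of that pipeline can be sketched in plaintext. This toy version perturbs a feature vector with Laplace noise and does a cosine nearest-neighbor lookup; in the real system the query is homomorphically encrypted and the server matches it without ever seeing the vector. Landmark names, dimensions, and parameters here are made up:

```python
import math
import random

def add_laplace_noise(vec, scale=0.05):
    """Laplace-mechanism-style perturbation: a Laplace(0, scale)
    sample is a random sign times an Exponential(1/scale) draw."""
    return [v + random.choice([-1, 1]) * random.expovariate(1 / scale)
            for v in vec]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def nearest_landmark(query, index):
    """Server-side analogue: nearest neighbor by cosine similarity."""
    return max(index, key=lambda name: cosine(query, index[name]))

# Hypothetical 4-D embeddings (real ones are far higher-dimensional):
index = {
    "golden_gate": [0.9, 0.1, 0.0, 0.2],
    "eiffel":      [0.1, 0.8, 0.3, 0.0],
    "colosseum":   [0.0, 0.2, 0.9, 0.4],
}

noisy_query = add_laplace_noise(index["eiffel"])
print(nearest_landmark(noisy_query, index))
```

With small noise scales the match almost always survives the perturbation; the privacy argument is that the noisy, encrypted vector reveals little about the photo even if intercepted.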

Is this “sending your photos”?

  • One side: this is not uploading photos or readable metadata but non‑reversible, noisy, homomorphically encrypted vectors that Apple cannot decrypt, and relays hide IPs.
  • The other side: any derived data tied only to one’s photos is “my data”; exfiltrating it without explicit consent is a privacy violation regardless of cryptography.
  • Some argue even if the design is sound, bugs, implementation mistakes, or future changes could leak real data.

Consent, defaults, and user agency

  • Strong theme: anything leaving the device should require explicit, up‑front opt‑in; “privacy by math” does not replace informed consent.
  • Others counter that most users won’t understand HE/DP and already suffer from consent fatigue; for features believed to be mathematically safe, default‑on may be acceptable.
  • Many see default‑on as violating Apple’s own “what happens on your iPhone stays on your iPhone” marketing, and as a trust‑eroding pattern alongside other telemetry (e.g., “Help Improve Search”).

Trust, closed implementations, and threat models

  • Pro‑Apple commenters emphasize multiple privacy layers, open‑sourced HE libraries, and Apple’s comparatively strong stance vs. Google/Meta/Microsoft.
  • Skeptics point out:
    • The OS and service code are closed; users cannot verify what actually runs.
    • Cryptography can be misused or quietly repurposed (e.g., revived CSAM‑style scanning).
    • Powerful actors or future Apple policies could weaken protections or exploit metadata over time.

Broader reactions and alternatives

  • Some view the outrage as overblown “rage‑bait”; others see it as justified pushback against creeping client‑side scanning.
  • A minority argue that owning a modern smartphone is already a fundamental privacy failure (cell towers, apps, clouds), so this is marginal.
  • Coping strategies discussed:
    • Turning the feature off where possible.
    • Using self‑hosted photo solutions (Immich, LibrePhotos, PhotoPrism, Ente).
    • Moving to privacy‑focused Android variants (e.g., GrapheneOS) or Linux, with firewalls and no cloud backups.

Debugging memory corruption: who the hell writes "2" into my stack? (2016)

Nature of the Bug

  • Thread agrees the core issue is: kernel writes asynchronously to a user-provided buffer that was on the stack, after that stack frame was unwound by an exception → use-after-return.
  • Several commenters stress this is primarily undefined behavior (throwing through C frames), not a classic in-process buffer overrun.
  • Confusion over whether this is “memory corruption” vs “UB”; consensus: UB at the language/ABI boundary, manifesting as stack corruption.

Debugging Approaches

  • Hardware breakpoints suggested, but others note they don’t trigger on kernel writes in user space.
  • Time-travel / reverse debuggers (rr, Windows TTD) discussed:
    • Could help if they record kernel side effects or interpose async writes.
    • But handling async syscalls is hard and often not implemented.
  • Perf_event on Linux mentioned as a way to set global hardware breakpoints.
  • Valgrind, ASAN/MSAN/UBSAN praised, but multiple people note they wouldn’t catch this specific bug.

Exceptions, C ABI, and APCs

  • Strong skepticism about C++ exceptions in systems code; multiple comments advocate avoiding them entirely, or tightly scoping them.
  • Key rule repeated: never throw exceptions across C frames, callbacks, or OS callbacks (APCs, signals, qsort, etc.).
  • Discussion around noexcept and C ABI:
    • Idea: treat C and extern "C" functions as implicitly noexcept.
    • Proposals for compilers to warn when non-noexcept function pointers are passed to C/noexcept APIs.
    • Rust 1.81 change (aborting on unwinding through extern "C") cited as a mitigation.

Memory-Safe Languages and FFI

  • Disagreement on whether Rust/memory-safe languages “would have prevented” this:
    • One side: safe Rust can encode lifetimes and forbid passing stack buffers with too-short lifetimes.
    • Other side: once you cross into syscalls/FFI, the language can’t fully enforce kernel contracts; unsafe FFI remains a risk.
  • General consensus: safe wrappers help, but correctness hinges on accurately modeling OS API requirements.

Win32 / OS API Design & Patterns

  • APC / alertable waits described as a powerful but dangerous mechanism, analogous to but safer than Unix signals.
  • Criticism that documentation under-emphasizes “don’t throw / don’t unwind” from APCs.
  • Contrast drawn with Unix I/O:
    • Windows completion-style APIs (OVERLAPPED, IOCP) can hold user pointers across time.
    • Traditional Unix syscalls rarely keep user pointers asynchronously; newer AIO/io_uring do.
  • Self-pipe / loopback-socket trick recognized as a standard, safer pattern for interrupting select().
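The self-pipe trick mentioned above is easy to show concretely: rather than interrupting a blocked select() with a signal or an async callback, you add the read end of a pipe to the read set and wake the waiter by writing one byte. A minimal sketch (POSIX; on Windows a loopback socket pair plays the same role):

```python
import os
import select
import threading
import time

def wait_until_woken(timeout=5.0):
    """Block in select() until another thread writes to the wake pipe."""
    rfd, wfd = os.pipe()
    try:
        def waker():
            time.sleep(0.1)
            os.write(wfd, b"\x00")   # any byte wakes the select() below

        threading.Thread(target=waker, daemon=True).start()
        readable, _, _ = select.select([rfd], [], [], timeout)
        if rfd in readable:
            os.read(rfd, 1)          # drain the wake byte
            return True
        return False                 # timed out without being woken
    finally:
        os.close(rfd)
        os.close(wfd)

print(wait_until_woken())  # -> True
```

The wake-up is an ordinary synchronous read, so nothing unwinds through foreign frames and no user pointer outlives its stack frame — exactly the hazards the APC approach in the article ran into.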

Lessons and Broader Takeaways

  • Don’t throw exceptions out of async callbacks or while inside syscalls.
  • Avoid stack-backed buffers for operations where the kernel may complete later or re-enter your code unexpectedly.
  • Design cancellation with explicit, synchronous semantics where possible.
  • Several anecdotes reinforce how small “clever” ideas can cause hugely expensive debugging sessions.

Google's Results Are Infested, Open AI Is Using Their Playbook from the 2000s

Perceived decline of Google Search

  • Many see modern Google Search as “enshittified”: more ads, SEO sludge, and UI clutter vs early-2000s fast, relevant, link‑oriented results.
  • Complaints include: AI overviews pushed on users, difficulty forcing literal queries, poor handling of code identifiers, and infested “best X” affiliate listicles.
  • Some argue the web’s underlying content degraded; others say Google’s ad incentives and product decisions actively caused that degradation.

AI Overviews and LLM Search: Helpfulness vs. Risk

  • A minority likes Google’s AI Overview as a way to skip ads and junk and get quick summaries (e.g., Bluetooth pairing steps).
  • Many report frequent, confident wrong answers, especially on technical, medical, and numerical topics (e.g., child vomiting diet, LD50 of caffeine, calorie recommendations, fictional movie sequels, passport rules).
  • Core tension: AI is often “good enough” for trivial queries but dangerously opaque and fallible for high‑stakes ones.

Trust, Hallucinations, and Verification Cost

  • Users accept “bullshit” from standalone chatbots more readily than from Google’s top results; Google is held to a higher standard.
  • Verifying AI answers can take as long as solving the problem directly, undermining the supposed time savings.
  • Some note that web-search-enabled LLMs can provide sources, but citations are sometimes fabricated or misrepresent the linked page.

Impact on the Web Ecosystem and Creators

  • Content creators describe AI summaries as “plundering” their work: extracting answers, stripping traffic, and weakening incentives to publish high‑quality guides.
  • Affiliate‑funded sites are seeing traffic drops as AI overviews answer queries without clicks, threatening business models that relied on organic search.

SEO, Advertising, and “Dark Google”

  • SEO is framed as a coordinated “dark” ecosystem gaming search and now aiming to poison LLM training data to bias brand mentions.
  • Several argue Google profits from low‑quality, ad‑heavy SEO sites via its ad network, so it has weak incentives to truly fix spam.
  • Widespread expectation that AI search (from Google or OpenAI) will eventually be monetized with embedded or blended ads, repeating search’s trajectory.

Alternatives and Coping Strategies

  • Many report partial migration to Kagi, Perplexity, Brave Search, DuckDuckGo, or local/open‑source LLMs; none are seen as perfect.
  • Workarounds include: appending “reddit”/“wikipedia” to queries, using separate tools for “search vs answer,” uBlock filters to hide AI, or using LLMs only for brainstorming.
  • LLMs are praised for “fuzzy recall” tasks (finding half‑remembered quotes, books, films, games) but also shown to fail badly on similar prompts.

U.S. homelessness jumps to record high amid affordable housing shortage

Causes of rising homelessness

  • Many argue homelessness is primarily a housing-supply/price problem: where rents are high, homelessness is high.
  • Others stress stagnant or depressed wages interacting with housing costs.
  • Some say it’s “a bit of everything”: building codes, immigration, remote-work policies, corporate ownership, and macro trade dynamics.
  • Several posters point to evidence that poor U.S. states with cheap housing have relatively low homelessness, suggesting supply and price matter more than income levels.

Immigration and asylum seekers

  • Some see recent immigration and asylum flows as a major additional demand shock on an inelastic housing supply.
  • Others note that “border crossings” overstate net new residents, and that total unauthorized immigrants haven’t exploded.
  • Local finance issue: benefits of immigrant labor may accrue in one region (e.g., corporate HQ) while service burdens fall on another, driving local backlash.
  • Disagreement over how much immigrants are visible among the homeless; some report most visible homeless are long-term locals, not recent migrants.

Drugs, mental health, and visibility

  • One camp: drugs are overemphasized; addiction is common but not the primary structural cause, which is lack of cheap housing.
  • Another: chronic addiction cases consume disproportionate resources and complicate “just build housing” approaches.
  • Some differentiate “temporarily unhoused” (often invisible, car-sleeping) from chronically street-homeless, who are more likely to have severe addiction/mental illness.

Vacancy, speculation, and taxes

  • National rental vacancy ~6.9% is seen by some as low but not necessarily inefficient; some vacancy is necessary for mobility.
  • Others highlight local derelict or intentionally vacant buildings as evidence housing is treated as an investment asset.
  • Proposed fixes: land value tax, heavy taxes on empty/secondary homes, speculation/“vacancy” taxes, but some warn taxes can be passed to renters or hurt mobility.

Zoning, NIMBYism, and construction

  • Broad agreement that restrictive zoning and NIMBY politics block needed supply, especially in high-demand metros.
  • Suggested solutions: upzone cities, federalize zoning (Japan-style), incentivize dense construction, allow more ADUs, and enable remote work to spread demand.
  • Some emphasize that current owners benefit financially from shortages and resist value-lowering reforms.

Public and non-market housing models

  • Advocates point to mixed-income public or non-market housing (examples cited abroad) as stabilizing rents and de-stigmatizing “projects.”
  • Critics highlight long waits, lease structures, and difficulty moving near jobs in some models.
  • Veteran homelessness is noted as a “bright spot” where focused subsidies and services have measurably reduced homelessness, prompting debate over scalability.

Ask HN: Are you unable to find employment?

Overall state of the tech job market

  • Many posters across the US, Europe, Australia, and India report the toughest market they’ve seen in years, especially since late 2022.
  • Common patterns: hundreds of applications, very few interviews, multi‑round processes ending in rejection, and long unemployment spells (6–12+ months).
  • Juniors and mid‑levels report near-total lockout; seniors say they still get interviews but face lower offers, more hoops, and more competition.
  • Some respondents, especially with strong networks or niche skills (ML, embedded, SDET, finance/low‑latency), report landing jobs relatively quickly and question how “bad” the market really is.

Drivers: macroeconomy, AI, and company behavior

  • End of zero/low interest rates and post‑COVID correction: less speculative hiring, more focus on cost and profitability.
  • Many firms cut large portions of staff, froze or reduced hiring, and reallocated budgets into AI initiatives.
  • LLMs are seen as boosting productivity of existing engineers, especially juniors, reducing demand for new hires in some shops.
  • Several describe “fake” or placeholder job postings, very slow response times, and automated resume filters eliminating candidates early.

Offshoring, H‑1B, and pay

  • Strong perception that companies are shifting hiring to lower‑cost regions (India, Poland, Brazil, Mexico, etc.) and to H‑1B workers, often at much lower salaries.
  • Some argue this is straightforward cost optimization and has been happening for years; others frame it as systemic suppression of domestic wages and mobility.
  • Several describe US and UK teams being replaced or “restructured” into foreign offices.

Age, race, and DEI

  • Many older engineers report ageism: resumes filtered for “too much” experience, visible gray hair or long careers seen as a liability, advice to hide graduation years and early roles.
  • Debate around whether white men are now disadvantaged:
    • Some cite DEI policies, diversity targets, and anecdotes of being passed over.
    • Others counter that URM hiring is still limited, DEI peaked around 2020–21, and the primary issue is oversupply and cost, not race.

Education, oversupply, and skills

  • CS enrollment keeps rising; bootcamps and weak university programs are blamed for a glut of mediocre candidates.
  • Complaints that many grads can’t code without heavy library use; others say the real issue is employers demanding narrow, “perfect-fit” experience.

Coping strategies

  • Recurrent advice: network and get referrals; cold applications are seen as low‑yield.
  • Some pivot to self‑employment, consulting, game dev, or non‑tech work; others double down on open source to signal skill.

I automated my job application process

Overall reaction to automated job applications

  • Many see automating applications with LLMs as technically clever but socially harmful.
  • Frequent framing: tragedy-of-the-commons / prisoner’s-dilemma. If others spray-and-pray, individuals feel forced to do the same, even though it worsens the system for everyone.
  • Some call it narcissistic or spammy; others argue it’s a rational response to opaque ATS filters, ghost jobs, and instant automated rejections.

Impact on hiring managers and companies

  • Hiring managers report hundreds to thousands of applications per role, often 90%+ clearly unqualified or obviously AI-generated.
  • Complaints include: duplicated resumes, identical cover letters with minor edits, fake identities, and even organized fraud rings using fabricated CVs and remote-work scams.
  • Noise pushes many to:
    • Ignore early applicants.
    • Rely more heavily on referrals and private channels.
    • Stop public postings entirely and hire through networks or agencies.
    • Consider on-site or in-person-only interviews even for remote jobs.

Job seeker experience and incentives

  • Job seekers describe:
    • Sending hundreds or even thousands of applications to land a single interview.
    • Ghosting at all stages, including after multi-round interviews.
    • “Ghost jobs” and roles posted without real intent to hire.
  • This drives a numbers-game mentality and makes deeply researching and tailoring each application feel irrational.
  • Some insist targeted, high-effort applications and networking still work; others say that advice is out of date for mid-level/senior engineers today.

Cover letters, resumes, and LLM use

  • Mixed views on cover letters:
    • Some hiring managers like them when they add specific, non-generic detail or personal context.
    • Many see LLM-generated letters as long, formal, and content-free; they confer no advantage and can be a negative signal.
  • Resume optimization via LLMs appears to increase callback rates in some anecdotal tests, but is often detected as templated.

Cheating and “fake candidates”

  • Reports of candidates:
    • Using LLMs live during remote interviews.
    • Having others do the work after hire (offshore substitution).
  • Leads to moves toward:
    • In-person or proctored interviews.
    • “Humanity checks” (possibility of in-person rounds, surprise calls, local meetups).

Proposed fixes and structural ideas

  • Suggested mitigations:
    • Small manual tasks or instructions in postings to filter bots.
    • More in-person events and career fairs.
    • Greater reliance on referrals, networking, and internal candidate pipelines.
  • More structural proposals:
    • Professional licensing or standardized competency exams.
    • Union/guild-like bodies or centralized reputation/credential systems.
    • Regulation of job boards and ATS behavior, though feasibility is debated.

3D-printed neighborhood nears completion in Texas

Economics and Market Dynamics

  • Many commenters see little current cost advantage: sale prices ($450–600k) are similar to or higher than comparable conventional homes in the same area.
  • Repeated point: “market-rate housing sells at market rates.” Lower build costs, if any, tend to increase developer margins rather than lower prices.
  • Some argue this is a demonstration project; if there were significant savings, they’d likely highlight them.
  • Others note that builders care about needing fewer, less-skilled workers even if buyers don’t see price cuts.
  • Speculation that risk premiums from a new method currently raise costs; potential savings might emerge once the tech matures.

Construction Process, Prefab, and Alternatives

  • Several note that superstructure/framing is only a minority of total cost; sitework, finishes, MEP (mechanical/electrical/plumbing) dominate.
  • There’s extensive debate on prefab and panelized walls: challenges with joints, transport, storage, code compliance, on‑site fitting, and responsibility when parts don’t fit.
  • US single-family is already highly standardized/prefab at the component level; fully prefab systems often struggle with economics and flexibility.
  • Some think 3D printing is mostly a marketing gimmick; others see it as an early, immature technology with room to improve, analogous to early 3D printers or smartphones.

Performance, Materials, and Modifiability

  • Pros cited: potential for better insulation and energy efficiency (especially in Texas heat), greater storm, flood, and fire resistance vs timber, and more consistent quality.
  • Cons: high concrete CO2 footprint, harder repairs and renovations, blocked Wi‑Fi/cellular, and more difficult retrofits for plumbing/electrical.
  • On modifications, reports from site visits say new openings require masonry cutting and patching—more work than drywall, similar to or slightly worse than cinder block.

Design, Urban Form, and Aesthetics

  • Some dislike the “suburban box” aesthetics and car-centric layout, noting small lots and minimal yards; others argue preferences vary and the location (Georgetown) is a full city, not just an Austin suburb.
  • Commenters are disappointed that the technology is mostly used to reproduce conventional ranch houses rather than exploit new forms (curves, ornament, unique geometries).

Future Outlook

  • Split views: some call it a dead-end gimmick; others believe it has crossed proof‑of‑concept and will steadily improve, possibly enabling more custom, resilient housing or even off-world construction.

EmacsConf 2024 Notes

EmacsConf 2024 experience & format

  • Many commenters enjoyed the “cozy” feel and smooth organization of EmacsConf 2024.
  • Talks are available as video, audio, slides, captions, and transcripts; some initially thought it was “video only” but were corrected.
  • Automation and Emacs Lisp tooling are heavily used to schedule, stream, caption, and publish talks; small incremental improvements accumulate year to year.
  • The conference runs on a very small hosting budget (under US$200/year), but setup time and technical complexity (e.g., BigBlueButton) are noted as the real cost.

Comparison with NeovimConf and other editor events

  • EmacsConf is much smaller than NeovimConf in viewer numbers, which participants think makes self‑hosting and careful scheduling manageable.
  • NeovimConf is seen as more stream/YouTube/twitch‑oriented, with some complaints (elsewhere) about ads, schedule slippage, and meme‑heavy chat.
  • Commenters like cross‑pollination of ideas between Emacs, Neovim, Helix, and others; several explicitly browse other editor conferences for workflow inspiration.

Emacs vs VS Code and other editors

  • Some long‑time Emacs users moved to VS Code, often citing:
    • Better “out of the box” UX and packaging.
    • GitHub Copilot integration.
    • Easier onboarding for teams and new grads, especially around shared extensions, linters, and remote editing.
  • Others tried VS Code and returned to Emacs, mainly due to:
    • Deep custom Elisp workflows they can’t easily replicate.
    • Strong preference for full customizability and keyboard‑driven workflows.
    • Concerns about Microsoft’s telemetry, cloud tie‑ins, and long‑term “enshittification.”
  • Some use both: Emacs for notes/org/magit and VS Code for language tooling or remote work.

Lisp, Elisp, and alternative runtimes

  • Strong defense of Lisp as the core of Emacs’ power; skepticism toward “rewrite Emacs in Lua” ideas, though some argue Lua is essentially Lisp‑like with a different syntax.
  • Discussion of Guile‑powered Emacs and desire for a “proper runtime” for Elisp; some would even accept Elisp-on-Guile only.
  • Fennel (Lisp-to-Lua) is praised as a nicer way to script Lua ecosystems (Neovim, Hammerspoon, etc.).

Alternative Emacs-like editors

  • Lem (Common Lisp–based, Emacs‑ish editor) attracts interest:
    • Praised for responsiveness, threading, and potential to surpass Emacs.
    • Criticized for missing features (multiple frames, theming issues, config/docs) and a very small ecosystem compared to Emacs.
  • Talks on Guile-Emacs, an Emacs core in Rust (Rune), and other emacsen are highlighted as exciting experiments.

Use cases, community, and longevity

  • Many emphasize Emacs as a “lifetime editor” and general computing environment, not just for programming: email, writing, note‑taking, document authoring, and more.
  • Non‑programmer or “writer” use is surprisingly common; people list thesaurus, spellcheck, translation, dictionaries, reading‑ease analysis, and LLM integration as examples.
  • Some argue that even if many developer‑only users move to VS Code, core usage and development are driven by people who use Emacs for much more than coding, so the ecosystem remains vibrant.

Performance, concurrency, and remote work

  • Emacs’ single‑threaded nature and global mutable state are widely seen as a structural limit, causing UI freezes on heavy tasks (e.g., large diffs, big LSP operations).
  • External processes and tools (e.g., LSP servers, rsync-based sync, Emacs LSP booster, native compilation) mitigate some issues but don’t solve core concurrency problems.
  • VS Code’s remote development is often described as far more reliable and polished than TRAMP; others counter that VS Code requires non‑free server‑side components and more resources, so comparisons are not strictly fair.

So you want to write Java in Neovim

Neovim for Java: Appeal vs. Reality

  • Many are intrigued by recent Neovim+Java setups, especially as an escape from heavyweight IDE “black magic.”
  • Several people tried and bounced off: configuring LSP, debugging, and plugins was a large cognitive load compared to a Java IDE that “just works.”
  • Common advice: unless Neovim is already your main editor, start with a dedicated Java IDE.

Navigation, Project Structure, and Fuzzy Finding

  • Deep Java package paths and Maven-style layouts worry some; they find raw Vim directory navigation painful.
  • Others say a fuzzy file finder (Ctrl-P, Telescope, fzf.vim, etc.) or file managers (yazi, oil.nvim, filepicker.vim) solve this, similar to JetBrains “search everywhere.”
  • LSP-powered symbol navigation (gd, gr) in Neovim can approximate IDE-style “go to definition/references.”

LSPs, JDTLS, and Alternatives

  • Java on Neovim is typically powered by JDTLS; several complain it’s oddly complex compared to other language servers.
  • A NetBeans-based Java LSP exists but isn’t commonly used here; status with Neovim is unclear.
  • Some note that Java is a “special snowflake” in LSP setup, unlike languages where you just install the server and point the editor at it.

IDE vs Text Editor: Capabilities and Philosophy

  • Ongoing debate: at what point does a plugin-heavy Neovim become an IDE in practice.
  • Some emphasize minimal setups (tags + terminals, no LSP, sometimes no syntax highlighting); others insist autocomplete, refactoring, and instant feedback are too valuable to give up.
  • Terminal-centric workflows are praised for speed, composability with CLI tools, and stability over time.

Java Tooling Exceptionalism (Especially IntelliJ)

  • Strong consensus that Java IDE tooling, especially IntelliJ, is far ahead of LSP-based setups: powerful refactorings, inspections, debugging, code transformations, and ecosystem-aware features.
  • Some argue this leads teams to structure code and builds around the IDE, which can hurt portability and non-IDE workflows. Others see it as simply using the best tools available.

AI IDEs, Cloud IDEs, and Future Directions

  • AI-focused IDEs (Cursor, Zed) are seen as competitive with VSCode/Neovim, but not yet with language-specific Java IDEs.
  • Cloud IDE experiments show high uptake only once IntelliJ is offered; VSCode-only environments mainly attract juniors.
  • JetBrains’ AI features are viewed as decent and deeply integrated, but not universally loved; some find aggressive multi-line AI completions distracting.

Spotify Shuts Down ‘Unwrapped’ Artist Royalty Calculator with Legal Threats

Streaming royalties and business model

  • Most major music streamers reportedly pay ~70% of revenue to rightsholders; Bandcamp/SoundCloud somewhat higher but not directly comparable because they’re more purchase‑oriented.
  • Commenters argue the main problem isn’t Spotify’s headline cut, but:
    • Labels own the recordings, take most of the payout, and then split leftovers among performers, songwriters, producers, etc.
    • Low subscription prices (~$10/month for “all music”) devalue music and leave a small pie to split.
  • Some artists note thousands of streams don’t even cover basic costs like rehearsal space; others emphasize that small differences in platform payout matter less than label contracts.
  • Several point out that Spotify’s pro‑rata model sends part of each user’s fee to top artists regardless of what they personally play; Apple’s actual allocation model is debated/unclear.
  • Free tiers and ad‑supported listening are seen as a major driver of low per‑stream rates.
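The pro‑rata point can be made concrete with a toy calculation. The sketch below uses invented numbers (not Spotify's actual rates or mechanics) to contrast pro‑rata pooling with a user‑centric split:

```python
# Toy model of pro-rata vs user-centric royalty allocation; all numbers
# are invented for illustration, not any platform's real figures.
FEE = 10.0  # monthly fee per subscriber

# Streams per user: one heavy listener of a top artist, one light
# listener who only plays a small indie band.
users = {
    "heavy_listener": {"top_artist": 1000},
    "light_listener": {"indie_band": 10},
}

def pro_rata(users, fee=FEE):
    """Pool all subscription revenue, split by share of total platform streams."""
    pool = fee * len(users)
    totals = {}
    for plays in users.values():
        for artist, n in plays.items():
            totals[artist] = totals.get(artist, 0) + n
    all_streams = sum(totals.values())
    return {a: pool * n / all_streams for a, n in totals.items()}

def user_centric(users, fee=FEE):
    """Split each user's own fee across only what that user played."""
    payouts = {}
    for plays in users.values():
        total = sum(plays.values())
        for artist, n in plays.items():
            payouts[artist] = payouts.get(artist, 0) + fee * n / total
    return payouts

print(pro_rata(users))     # indie_band gets only ~$0.20 of the $20 pool
print(user_centric(users)) # indie_band gets the light listener's full $10
```

In this toy case the light listener never plays top_artist, yet under pro‑rata nearly all of their fee flows there, because payouts track platform‑wide stream share rather than each subscriber's own listening.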

Spotify pricing, UX, discovery

  • Some users would tolerate a price increase; others would cancel over even a small hike, especially if discovery feels weak or repetitive.
  • Complaints include poor recommendations (e.g., cover‑song spam, homogenized music), aggressive playlist focus, and sponsored or house‑owned content being pushed.
  • Others praise Spotify’s discovery, playlists, and near‑universal catalog as its main value, even with UX frustrations.

Alternatives: Apple Music, Tidal, YouTube Music, others

  • Apple Music: praised for sound quality, human‑curated shows, and integration; criticized for buggy/odd UIs (especially playlists, desktop/tvOS) and deleting user data after unsubscribing.
  • Tidal/Deezer/Qobuz: mentioned for better discovery, lossless audio, or album‑centric use, but with catalog/UX gaps.
  • YouTube Music: strongly polarizing. Some find its algorithm and catalog (including user uploads, bootlegs, niche content) vastly superior; others call the apps unstable, UI “garbage,” audio quality inconsistent, and distrust Google’s product longevity.
  • Bandcamp and local/college radio are recommended for true discovery and direct artist support.

Artist economics and “value of music”

  • Multiple comments compare music to games or open‑source software: huge oversupply, many creators willing to work for little or nothing, so most income comes from touring, merch, or side jobs.
  • Some argue that for most everyday “background” listening, music has become a low‑value commodity; others worry about AI‑generated “filler” displacing human work but expect live/local scenes to persist.

Legal and “Unwrapped” shutdown

  • Several see the takedown threat as likely based on trademark/brand confusion rather than defamation.
  • Others emphasize that facing corporate legal departments, small projects will almost always fold, regardless of actual merits.

Where can you go in Europe by train in 8h?

Booking & Ticketing Fragmentation

  • Many commenters lament how hard it is to book multi-country journeys on a single ticket; cross-border pricing is inconsistent and often opaque.
  • National operators are said to resist a unified European ticketing system to avoid transparent, comparable pricing.
  • Legislative efforts for multimodal digital mobility services exist but are expected to be slow and heavily lobbied against.

Tools, Meta-Search & “Google Flights for Trains”

  • Mentioned tools: Trainline, RailEurope, Nightjet, All Aboard, direkt.bahn.guru, seat61, national sites like DB/ÖBB/Trenitalia, and travel-time/isochrone sites (Traveltime, Mapnificent, etc.).
  • Trainline is broadly praised but has fees and gaps (e.g., Eurostar integration, some regional tickets).
  • Some argue a full “Google Flights for trains” is unnecessary for timetables (data is mostly shared) but clearly missing for unified booking.

Chronotrains App: Usefulness & UX

  • People like the idea for discoverability (where you can get in 8h) but criticize:
    • UI confusion (heatmap vs point-to-point mode, hard-to-reset selection).
    • Performance issues/overload.
    • Missing or outdated data (e.g., Sweden–continent links, Iberia, new Spanish lines, Vilnius–Riga, mislabeled cities like Enschede).
    • Strict 8h cutoff ignores sleeper trains and “close calls” (e.g., 3h02 treated as 4h).
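The "close calls" complaint suggests journey times are bucketed by rounding up to whole hours before the map is colored. The app's actual rule isn't stated in the thread, so the following is only a plausible reconstruction:

```python
import math

# Hypothetical hour-bucketing consistent with the complaint that a
# 3h02 journey lands in the "4 h" band: durations are rounded up to
# whole hours, with a hard cutoff at 8 h. This is a guess at the
# behavior, not Chronotrains' documented algorithm.
def hour_band(minutes, cutoff_hours=8):
    """Isochrone band in hours, or None past the hard cutoff."""
    band = math.ceil(minutes / 60)
    return band if band <= cutoff_hours else None

print(hour_band(182))  # 3h02 -> lands in the 4 h band
print(hour_band(180))  # exactly 3h00 -> 3 h band
print(hour_band(485))  # 8h05 -> beyond the 8 h cutoff -> None
```

Under this reading, a two-minute overshoot costs a full band, which is exactly the distortion commenters object to.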

Trains vs Planes: Time, Access & Experience

  • Pro-train points: city-center stations, minimal security overhead, boarding within minutes of departure, easier luggage, less stress, possibility of overnight travel turning “dead time” into sleep.
  • Skeptical points: door-to-door time can approach or exceed flying, especially with poor local rail or sparse schedules; missed connections can still be painful.
  • Debate over how much buffer time is reasonable; some routinely aim to arrive 5–15 minutes before departure, others find this too risky when connections are tight.

Reliability & National Differences

  • Frequent criticism of German rail (DB) for chronic delays, cancellations, and “replacement bus” chaos; argument over whether this stems from privatization logic vs underinvestment.
  • Norway and Denmark also cited as suffering from maintenance underfunding and slow, infrequent services.
  • Dutch NS, French TGV, Swiss rail are often described as comparatively good, though far from perfect.

Night Trains & Comfort

  • Strong nostalgia and enthusiasm for European night trains; also many negative anecdotes (noise, police checks, motion, cramped couchettes, cost of sleepers).
  • New night-train startups (e.g., European Sleeper, Luna Rail) try to fix economics and rolling-stock issues; consensus that rolling-stock finance is a major bottleneck.
  • Argument over environmental efficiency: night trains are less dense than daytime trains but still much lower emissions than aviation.

Geographic & Political Gaps

  • Notable weak spots: Iberian Peninsula (Lisbon–Madrid, Spain–Portugal generally), southeastern Europe, parts of Scandinavia, Ireland.
  • Complaints that many EU-wide apps under-represent long but feasible international routes (e.g., Sweden–Germany–Netherlands).

Comparisons with US/Canada & Russia/Japan

  • Long subthread on why US/Canada lack comparable passenger rail: competing explanations include density/geography, car culture, lobbying (oil/auto), political dysfunction, and land acquisition issues.
  • Some point to Russia’s Trans-Siberian and Japanese/Chinese HSR as proof large countries can build effective rail; others note differing population patterns and governance.
  • Agreement that some dense US corridors (Northeast, parts of California, Texas triangle, Midwest cluster) could support much better rail than they currently have.

Casual Viewing – Why Netflix looks like that

Netflix’s “Casual Viewing” Strategy

  • Many focus on the reported note that Netflix asks writers to have characters explicitly “announce what they’re doing” so distracted viewers can follow along.
  • Several see this as converging toward radio plays, audiobooks, or podcasts with video attached.
  • Some argue it’s targeted at people who half-watch while doing chores, driving, or looking at their phones, and that this use case is now central to Netflix’s strategy.

Audience Behavior & Background Viewing

  • Multiple commenters admit they or their partners routinely “watch” Netflix while on their phones or working.
  • Others find this distressing and say if you can’t focus, you should use audio formats or simply turn the TV off.
  • There’s disagreement over whether this behavior reflects ADHD, modern distraction patterns, or just normal multitasking.

Quality, “Enshittification,” and Business Incentives

  • Many see this as part of broader “enshittification” or “quality fade”: more filler content, fewer enduring films, and aggressive cancellation of promising series.
  • Some emphasize subscription economics: incentives shift from making great individual works to maximizing “time on platform” and retention.
  • Counterpoint: Netflix is still profitable and supplying what mass audiences demonstrably watch, even if cinephiles dislike it.

Storytelling, “Show Don’t Tell,” and Artistic Concerns

  • Commenters lament exposition-heavy dialogue (e.g., in The Mandalorian, certain Netflix movies), calling it “Tide Pod cinema” or “slop.”
  • Others note that “show, don’t tell” is a guideline, not an absolute; some genres (soap operas, certain anime, Turkish series) have always leaned hard into explicit narration.
  • A minority say they actually enjoy heavily explained stories or use synopses to avoid wasting time on full viewings.

Comparisons, Accessibility, and Alternatives

  • Historical parallels: broadcast TV written for dishwashing viewers, opera and singspiel with explicit lyrics, TV formulas like Star Trek: TNG.
  • Some suggest that such narration could benefit visually impaired viewers if offered as an optional track, not baked into scripts.
  • Many report canceling or planning to cancel Netflix and gravitating to services perceived as more curated or “prestige” (HBO/Max, Apple TV+, Mubi, Criterion) or to physical cinema and pay-per-title models.

AI, Data, and Future Fears

  • A few speculate, half-seriously, that overt narration conveniently creates labeled training data for AI or foreshadows AI-generated filler content.
  • Others think this is overblown and limited to specific lowbrow genres, not all Netflix output.

Ada's dependent types, and its types as a whole

Memory model and “two stacks”

  • Ada implementations commonly use a secondary stack for dynamically sized data, separate from the call stack.
  • This solves many “short-lived allocation” problems that arenas/scratch buffers address in C/C++ and games.
  • Language-level support lets the compiler statically track secondary-stack usage, free earlier, and maintain safety; manual arenas can’t reclaim mid-stack allocations without resetting everything.
  • Some note that similar behavior could be done with the main stack and copying, but the dedicated second stack is simpler and ABI-friendly.

Ada vs. Rust and other languages

  • Several commenters argue Ada is underrated compared to Rust, especially for embedded/MCU firmware that avoids dynamic allocation.
  • They stress Ada’s strengths: ranged types, representation clauses for bitfields and serialization, decimal types, strong spec/reference manual, and SPARK for formal verification.
  • Rust is praised for memory safety and ecosystem (Cargo, libraries) and for attracting new kernel contributors, but criticized for focusing mostly on memory safety rather than functional correctness.
  • Some wish Linux had Ada-based drivers, citing SPARK and representation clauses; others doubt maintainers would accept it.

Dependent types and SPARK

  • Multiple posts challenge the article’s framing of Ada as “dependently typed” in the type-theory sense.
  • Classic dependent types (as in Coq/Agda/Lean) allow expressing and proving properties like “sorting always returns a sorted list” at the type level.
  • Ada’s subtype predicates and discriminated records are powerful and can encode runtime-checked invariants; static predicates exist but are limited.
  • SPARK adds a separate verification layer: Ada code plus contracts is translated to Why3/SMT provers; examples include fully proved sorting.
  • Consensus: Ada/SPARK gives strong, practical verification, but is not a full Martin-Löf–style dependently typed language.
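The contrast can be made concrete with a sketch of what "sorted by construction" looks like in a genuinely dependently typed setting (Lean 4 here; the names are illustrative, not from a particular library):

```lean
-- Sketch of "correct by type" in a dependently typed language.
-- A proposition stating that a list of naturals is sorted:
def Sorted : List Nat → Prop
  | [] => True
  | [_] => True
  | a :: b :: rest => a ≤ b ∧ Sorted (b :: rest)

-- A value of this type is a list *paired with a proof* it is sorted.
def SortedList := { l : List Nat // Sorted l }

-- Any `sort` with this signature cannot return an unsorted list: the
-- proof obligation is part of the return type and is discharged at
-- compile time, unlike Ada subtype predicates, which are checked at
-- run time (SPARK can then prove such checks never fail).
-- def sort (l : List Nat) : SortedList := ...
```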

Tooling, ecosystem, and licensing

  • Complaints: historically expensive/proprietary compilers, confusing licensing, and weaker general-purpose libraries (e.g., TLS/crypto, codecs, Android NDK story).
  • Responses: GNAT (Ada in GCC) is FOSS; multiple commercial vendors exist; Alire and the “getada” installer now make setup easy on major platforms.
  • Some worry about GPLv3 implications; others counter that compiler/runtime licenses don’t infect user code.
  • Lack of a rich, modern standard ecosystem is seen as a major barrier compared to languages like Rust, Go, Python.

Syntax, readability, and IDE support

  • Ada’s verbose, English-like syntax and end Name; blocks are seen by some as hard to skim compared to brace-heavy C/Rust.
  • Others find Ada’s explicit structure easier to read and less ambiguous, especially in large, nested code, and note that end-names are optional.
  • Several argue modern editors (block selection, sticky scroll, jumping to matching delimiters) mitigate most syntax readability issues regardless of language.

Adoption, domains, and jobs

  • Historically strong in defense, aerospace, high-integrity embedded systems; also influenced VHDL and PL/SQL (and PostgreSQL’s PL/pgSQL).
  • Reported non-military uses include warehouse management software. Some suggest more jobs now exist in VHDL than in Ada itself.
  • Ada’s adoption was hurt by early compiler pricing, lack of free tools during key growth periods, and today by fashion/mindshare favoring Rust and C-like syntaxes.

Trying Ada

  • Recommended entry points: FOSS GNAT via package managers, Alire as a package manager/toolchain orchestrator, the “getada” curl-based installer, and browser-based tutorials and playgrounds.

PCIe trouble with 4TB Crucial T500 NVMe SSD for >1 power cycle on MSI PRO X670-P

Root cause in the OP’s MSI + Crucial NVMe case

  • HDMI from a monitor is backfeeding power into the motherboard when the PC is “off.”
  • This leaves the 3.3 V rail at ~1.9 V instead of 0 V, so the NVMe SSD never sees a true power loss.
  • The SSD’s controller likely requires proper power-rail sequencing and a full drop to 0 V to exit its shutdown/brownout state; instead it “latches up” and is not detected on the next boot.
  • Some argue this behavior is understandable given out-of-spec power; others say the SSD should still recover once normal power is restored.

Power-leak measurements and mitigation attempts

  • Measuring with a multimeter showed substantial leakage: a 48 Ω load only dropped the rail from 1.9 V to 1.8 V.
  • Progressively lower resistances were tested; only ~6 Ω pulled the rail to 0 V, implying ~2 W continuous dissipation and significant leakage somewhere on the board.
  • Suggested “proper” fixes: use a FET to actively pull 3.3 V to ground in standby, or identify the offending chip via thermal imaging.
  • There is debate whether the motherboard is out-of-spec for not pulling 3.3 V to 0 V, or the SSD is at fault for not resetting when 3.3 V drops out of regulation.
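A back-of-the-envelope check on the reported numbers shows why the fault is interesting: modeled as a simple Thévenin (resistive) source, the leak measured at 48 Ω should not be pullable to 0 V by 6 Ω, which suggests the backfeed path current-limits rather than behaving resistively. A sketch using the figures from the summary (treated as assumptions, not measurements from schematics):

```python
# Thevenin estimate of the parasitic 3.3 V rail supply, from the two
# data points reported in the thread (assumed values).
V_OC = 1.9    # rail voltage with no load, PC "off" (volts)
R_LOAD = 48   # first test load (ohms)
V_LOAD = 1.8  # rail voltage under that 48-ohm load (volts)

# From the divider relation V_load = V_oc * R_load / (R_load + R_s):
r_s = R_LOAD * (V_OC / V_LOAD - 1)  # ~2.7 ohm source resistance

def rail_voltage(r_load):
    """Rail voltage predicted by a purely resistive (linear) leak."""
    return V_OC * r_load / (r_load + r_s)

print(f"Thevenin source resistance: {r_s:.2f} ohm")
# A linear leak this stiff would still hold ~1.3 V across 6 ohm, yet
# the real rail reportedly dropped to ~0 V -- evidence the backfeed
# current-limits rather than acting as a fixed resistance.
print(f"Predicted 6-ohm voltage: {rail_voltage(6):.2f} V")
```

This is consistent with the suggestion to use an active pull-down (a FET) rather than a bleed resistor: a fixed resistor sized for standby would waste power, and the nonlinear source makes its required value hard to predict.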

HDMI / DisplayPort power behavior

  • HDMI traditionally supplies 5 V at very low current (for EDID and small devices), but real hardware often exceeds spec.
  • Newer HDMI 2.1 “cable power” can supply up to 5 V / 300 mA with special cables.
  • Conflicting anecdotes about early Chromecast sticks being powered solely via HDMI vs always requiring USB power.
  • DisplayPort’s DP_PWR pin is normally unused in standard cables, which may explain why DP connections don’t trigger the issue.

Broader NVMe / PCIe and sleep issues

  • Multiple reports of Crucial and other NVMe drives failing to wake from sleep, disappearing under load, or training to reduced PCIe link widths.
  • Linux NVMe drivers contain extensive “quirk” tables to handle vendor bugs, but they are incomplete.
  • Some platforms have had SSDs bricked or made unstable by S2 sleep or Gen5 PCIe, sometimes fixed by moving to Gen4 slots or different chipsets.

Troubleshooting, quality, and power anomalies

  • Several anecdotes highlight bizarre faults traceable to PSUs, RAM, ground loops, house wiring, or power conditioners.
  • Opinions split on whether modern consumer electronics are “disposable garbage” or generally reliable despite heavy cost-cutting.

Carlsen quits World Rapid and Blitz championship after dress code disagreement

Dress code rules and enforcement

  • FIDE’s 2024 regulations for this event explicitly ban jeans, t‑shirts, shorts, sneakers, etc., and prescribe “dark-coloured pants” with jacket and proper shoes.
  • A presentation before the event reportedly showed jeans under a big “Not Approved” slide; organizers say players were briefed and no one objected.
  • Penalties shared with players: first infringement → monetary fine, allowed to play that round; further infringements → excluded from next round’s pairings, each round in violation counts separately.

Sequence of events (per thread)

  • Carlsen wore jeans after a break (reportedly coming from a sponsor appearance).
  • He was informed after round 7, fined, and asked to change before round 8 or at least before round 9; the hotel was said to be a few minutes away.
  • He declined to change that day “as a matter of principle,” was unpaired for round 9 (a forfeit), and then chose to withdraw from both Rapid (mid-event) and Blitz (pre‑event).
  • Other players have been fined and required to change at this and past events, including a top player wearing sports shoes; some allegedly got away with jean‑like chinos, fueling “double standard” complaints.

Was he disqualified or did he withdraw?

  • Some argue he was effectively disqualified from the Rapid by being unpaired until he changed.
  • Others stress he could have continued the next day in compliant attire and that withdrawal was his choice.

Motives and context

  • Many see this as a proxy battle in longer‑running tensions between Carlsen and FIDE, including disputes over Freestyle/Chess960 and streaming/camera rights.
  • Some think his poor standing in the event and general frustration with FIDE made this a convenient exit and publicity move; others see genuine civil‑disobedience against a petty rule.

Debate over the rule itself

  • One camp: rules were clear, agreed in advance, and must be enforced equally, especially for stars; organizers showed integrity by not bending for the #1 player.
  • Opposing camp: the ban on neat jeans is outdated, arbitrary, selectively enforced, and bad for the sport; enforcing it to the point of sidelining the main draw is seen as self‑sabotaging bureaucracy.

Broader reflections

  • Long subthreads debate dress codes in sports and workplaces, class signaling, “professionalism,” generational shifts in norms, and whether sponsors truly require strict formality versus simply non‑shabby attire.