Hacker News, Distilled

AI powered summaries for selected HN discussions.

When will we fix the tools that run the world?

Legacy vs “sleek” modern software

  • Many argue that “sleek” modern UIs often hide fragile, complex stacks (Angular/React/SPAs) that are slower and less reliable than older tools.
  • Old-looking systems can be fast, stable, and well-tuned to workflows; appearance is a poor proxy for quality.
  • Some see modern redesigns as prioritizing aesthetics and “novice friendliness” over speed and power for expert users.

Expert, keyboard-driven interfaces

  • Multiple comments praise 80s–90s TUIs and dense GUIs: full keyboard control, instant response, high information density, and input buffering (typing ahead while screens load).
  • Modern web apps often discard buffered input, lack shortcuts, have unpredictable focus, and rely on tooltips/search instead of discoverable accelerators.
  • There’s interest in new frameworks explicitly designed for expert use, with queued input and systematic shortcut discoverability.

Healthcare and EMR case studies

  • Several examples (Norway’s “Health Platform,” Swedish regional systems, VA’s Cerner rollout) are cited as disastrous replacements of old health software, allegedly degrading care and even causing harm.
  • Others note that leading EMR vendors employ many domain experts and extensive QA; many issues stem from local misconfiguration and attempts to preserve idiosyncratic processes.
  • There is concern that health IT optimizes for billing, monitoring, and management metrics rather than clinical workflows.

Economics, incentives, and replacement risk

  • A recurring theme: organizations won’t fund major rewrites unless they clearly generate revenue (e.g., an MRI machine beats IT upgrades in hospital budgets).
  • Legacy systems encode vast domain knowledge; rewriting them is risky, costly, and often attempted by teams without that expertise.
  • Managers frequently undervalue UX and productivity gains for frontline staff, constraining efforts to “fix” tools.

Paper-like flexibility vs rigid digital systems

  • Digital forms/databases bring searchability, backups, and automation, but often remove paper’s flexibility (leaving fields blank, writing in margins, attaching arbitrary documents).
  • Many enterprise UIs mirror rigid database schemas and reporting needs instead of real-world workflows.
  • Some argue we could design systems that preserve paper’s adaptability while leveraging digital strengths.

“When will we fix it?” and systemic constraints

  • Several comments challenge the framing of “we”: there is no unified actor, only many agents with misaligned incentives.
  • Fixing foundational tools requires long timelines, substantial resources, and the ability to “change engines in flight,” which few organizations possess.

Servo Revival: 2023-2024

Project status & funding

  • Some interpret the “revival” framing as implying Servo is effectively “default dead” without substantial funding.
  • Others argue “default dead” is a startup concept and less applicable to an OSS engine with no direct customers.
  • Multiple comments hope for institutional backing (EU, governments, nonprofits, or many small donors) rather than reliance on a single billionaire donor, to avoid undue influence.

Igalia and the browser ecosystem

  • Igalia is described as a major, highly trusted consultancy in browser and OSS work, including large contributions to Chromium and maintaining Linux WebKit ports.
  • Several note Igalia’s depth of talent and even speculate it could take technical leadership of Chromium if Chrome is ever divested.
  • Companies often fund Igalia to optimize or implement specific browser features they rely on.

Technical direction: rendering and backends

  • Questions arise about moving WebRender from OpenGL to Vulkan or wgpu; some doubt WebGPU-based APIs can fully exploit modern GPUs.
  • Servo contributors mention exploration of pluggable backends and using the wgpu-based Vello library for Canvas2D.
  • Separate Rust DOM/render projects (e.g., Blitz) are noted but aren’t direct Servo/WebRender drop-ins.

Mozilla’s cancellation of Servo

  • Many are surprised Mozilla cut the Servo team given its role as a Rust showcase and potential future engine.
  • Others defend Mozilla: Gecko already exists, large-scale rewrites are risky (Netscape cited), and resources were better spent incrementally modernizing Gecko.
  • Parts of Servo (Stylo, WebRender) successfully shipped in Firefox; remaining components, especially layout, were immature and mid-rewrite when funding ended.
  • Debate centers on whether long-term competitiveness needs a new Rust-based engine or continued Gecko refactoring.

Rust, rewrites, and safety

  • Long subthread debates full rewrites vs incremental Rust adoption for memory safety.
  • One side stresses that only a near-total Rust rewrite meaningfully eliminates C/C++-style memory bugs; the other cites classic “never rewrite from scratch” arguments and Google’s success adding new safe code without full rewrites.
  • Rust’s model (ownership, lifetimes, more stack allocation, fewer leaks) is praised, but commenters note Rust can still leak memory (e.g., reference cycles).

Funding and sponsorship options

  • Servo now accepts donations via Open Collective and GitHub Sponsors.
  • Discussion compares fees and centralization risks: GitHub yields slightly higher net funds, but some prefer Open Collective’s independence and transparency despite higher fees, especially outside the US.

Other Rust and “rewrite in Rust” projects

  • Several Rust projects (OSes, desktops, tools, data libraries, browser-like engines) are cited as promising.
  • Commenters differentiate between productive rewrites and unconstructive calls for “someone else” to rewrite mature C projects in Rust.

User access / testing

  • A commenter asks if there is a browser build where Servo can be tested; no clear answer is provided in the thread.

Laid off for the first time in my career, and twice in one year

Job market, layoffs, and macro context

  • Many posters describe the current tech market as one of the worst in decades, worse than dot-com in some geographies, with far fewer recruiter contacts and slower hiring.
  • Several argue massive capital misallocation and “bullshit jobs” in tech (management layers, overstaffed teams) are now being unwound.
  • Others note workloads for remaining staff have increased, giving management cover to claim “efficiency gains.”
  • News narratives about “massive engineering/AI shortages” are widely viewed as misleading, driven by lobbying for cheaper labor (H‑1B, offshoring) and hype.

Interviews, Leetcode, and hiring practices

  • Strong dislike of Leetcode-style interviews; many senior devs avoid such companies, especially outside top-compensation roles.
  • Some see Leetcode as testing conformity, determination, and willingness to grind more than real job skills.
  • Others say DS&A knowledge remains useful and advise younger devs to practice Leetcode to access top-paying roles.
  • IQ/cognitive tests are discussed: some big firms use them; legality and value are contested.

Recruiters, networking, and job search

  • Experiences with recruiters are polarized: some got most jobs through them; others get spam, ghosting, or zero value.
  • Networking and referrals are repeatedly emphasized as crucial, especially for bypassing Leetcode-heavy funnels and ATS noise.
  • Several recommend always keeping options open, updating materials regularly, and viewing employer relationships as transactional, not loyal.

Resumes and ATS systems

  • Many report ATS parsing issues with modern, multi-column or heavily formatted resumes, PDFs with ligatures, or designer tools.
  • Simple, single‑column, text-heavy resumes (often LaTeX/Markdown → PDF) parse more reliably and yield more callbacks.
  • Some think ATS “myths” are overblown; most failures are edge cases or user formatting errors, but others show concrete parsing mistakes.

Compensation, FAANG vs. niche companies

  • One camp: only BigTech/FAANG‑adjacent reliably pay top of market; grinding Leetcode is rational if you want $200k–$400k+ trajectories.
  • Counter‑camp: real “top money” can come from becoming uniquely valuable at smaller, stable, niche companies over many years, without Leetcode churn.
  • Broad agreement that everyone is ultimately expendable, but some argue you can become “hard to replace” and negotiate accordingly.

Side businesses and career resilience

  • Strong advocacy from some for learning sales/marketing and building side businesses or small consultancies as income insurance.
  • Others push back that most apps/side ventures fail, margins get competed away, and time may be better spent on interview prep and traditional career moves.
  • Work–life–family tradeoffs and sleep deprivation concerns surface around “hustle” advice.

Interview feedback and candidate treatment

  • Many want honest rejection feedback; hiring managers counter that detailed feedback often triggers defensiveness, harassment, or legal risk.
  • As a result, most companies stick to generic rejection messages despite candidates’ desire to improve.

Magic/tragic email links: don't make them the only option

Overall sentiment

  • Strongly mixed but skewed negative.
  • Many find email magic links frustrating or unsafe as the only login method, especially compared to passwords + managers or passkeys.
  • Some defend them as simpler for non-technical or infrequent users and useful in specific contexts.

Usability and UX of magic links

  • Multi‑device pain: email often isn’t available on work laptops, gaming PCs, TVs, or shared/corporate devices; users end up forwarding links, copying long URLs, or creating QR codes.
  • Slower and fragile: SMTP delays, spam filters, graylisting, corporate firewalls, and .zip TLD blocking frequently interrupt flows.
  • Distraction: being forced into an inbox breaks task flow and leads to users getting “lost” in email.
  • In‑app browsers: email/RSS/Slack clients open links in embedded WebViews or wrong browsers, stranding the session.
  • Frequent re‑logins plus magic links (e.g., some AI tools, banking apps) push users to abandon products.

Security considerations

  • Email as single factor: if email is compromised, accounts are too; many point out this is already true for password resets.
  • Phishing risk: training users to click login links in email normalizes behavior phishers exploit; QR login has similar abuses.
  • Anti‑phishing scanners and link previewers auto‑click links, consuming one‑time tokens or auto‑confirming actions.
  • GET‑based logins violate the “GET must not change state” norm and interact badly with prefetchers and security tooling.
  • Some implementations log in the requesting device when the link is clicked elsewhere, which can be a “phisher’s dream.”

Alternatives and mitigations

  • Email/SMS OTP codes are widely seen as a better middle ground: work cross‑device, avoid link auto‑clicks, but remain phishable.
  • Recommended mitigations:
    • Link → landing page → explicit “Confirm login” POST.
    • Couple link to a browser cookie or IP; if mismatch, require extra confirmation or OTP.
    • Include device, location, and browser info on confirmation screens.
    • Provide backup codes or allow passkeys/passwords alongside magic links.
  • Some advocate QR‑based cross‑device flows, but note they are heavily phished in practice.
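The “link → landing page → explicit confirm POST” mitigation can be sketched in a few lines. This is a minimal illustration, not any particular product’s implementation: the storage dict, URL, and function names are all hypothetical, and a real system would use a database, a web framework, and rate limiting.

```python
# Minimal sketch of a "confirm before consuming" magic-link flow.
# All names here (issue_magic_link, handle_get, handle_post, the URL)
# are illustrative assumptions, not from the discussion.
import hashlib
import secrets
import time

TOKEN_TTL = 15 * 60  # seconds a link stays valid

# token_hash -> [email, expires_at, used]; stand-in for a database table
_pending: dict[str, list] = {}

def issue_magic_link(email: str) -> str:
    """Create a one-time token; only its hash is stored server-side."""
    token = secrets.token_urlsafe(32)
    token_hash = hashlib.sha256(token.encode()).hexdigest()
    _pending[token_hash] = [email, time.time() + TOKEN_TTL, False]
    return f"https://example.com/login/confirm?token={token}"

def _lookup(token: str):
    """Return the pending entry if the token is unused and unexpired."""
    entry = _pending.get(hashlib.sha256(token.encode()).hexdigest())
    if not entry or entry[2] or entry[1] < time.time():
        return None
    return entry

def handle_get(token: str) -> str:
    """GET handler: render a confirmation page only. It does NOT consume
    the token, so anti-phishing scanners that auto-fetch links change nothing."""
    return "confirm-page" if _lookup(token) else "invalid-or-expired"

def handle_post(token: str) -> str:
    """POST handler: the explicit 'Confirm login' click consumes the token."""
    entry = _lookup(token)
    if entry is None:
        return "invalid-or-expired"
    entry[2] = True  # single use
    return f"logged-in:{entry[0]}"
```

The key property is that the GET is idempotent (scanners and previewers can fetch it harmlessly), while only the deliberate POST changes state and burns the token; the confirmation page is also where device/location details would be shown.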

Passkeys and other auth methods

  • Passkeys praised for strong security and quick UX on existing devices; suggested as primary, with email OTP/magic links as recovery.
  • Concerns:
    • Recovery and vendor lock‑in (loss of device, platform bans, sync failures).
    • Poor fit for untrusted or shared devices where users don’t want their whole passkey store.
    • Legal issues around biometrics vs passwords in some jurisdictions.
  • Many still prefer traditional username/password + password manager + TOTP for control, speed, and predictable cross‑device use.

When magic links are seen as acceptable

  • Low‑risk, infrequently accessed tools, simple enterprise utilities, guest checkouts, unsubscribe links, and “grandma‑friendly” apps.
  • Several argue they can be “good” in such niches, but “not the best,” and should rarely be the only option.

Mistakes engineers make in large established codebases

Consistency vs Evolution

  • Many strongly agree that inconsistency is the main pain in large codebases: “When in Rome” / “be a chameleon” makes code predictable and reduces mental overhead.
  • Counterpoint: rigid consistency norms can freeze bad patterns, block improvements, and turn “consistency” into the tie‑breaker that keeps the status quo forever.
  • Several suggest: only introduce a new pattern if you commit to rolling it out systematically (or rolling it back), with a concrete migration plan and buy‑in.

Working in Bad or Inconsistent Codebases

  • Some engineers focus first on making even “bad” codebases internally consistent, reporting big reliability gains.
  • Others ask how to handle codebases that are consistently bad or already fragmented; advice includes:
    • Stay locally consistent in touched areas.
    • Isolate new “better” code in well‑documented, reusable modules.
    • Get agreement on “best of the existing” patterns rather than inventing a sixth way.

Refactoring vs Rewriting

  • Strong skepticism toward full rewrites, especially by new teams unfamiliar with the monolith; several anecdotes of multi‑year failed migrations (auth, permissions, monolith→microservices).
  • Successful stories emphasize:
    • Deep prior fluency in the old system.
    • Strangler‑fig or parallel‑implementation approaches.
    • Delivering production value early and incrementally.

Testing Strategies in Large Systems

  • Heavy emphasis on tests to de‑risk refactors: unit tests plus integration/end‑to‑end tests.
  • Debate over DB‑hitting “unit” tests:
    • Some insist on real databases for fidelity.
    • Others prefer mockable interfaces to enable fast, thorough unit tests of logic and failure modes.
  • Consensus: integration tests are crucial in large, stateful, cross‑cutting systems.
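The “mockable interfaces” side of the debate can be sketched as: business logic depends on a small repository protocol, so unit tests exercise logic and failure modes against an in-memory fake, while integration tests swap in the real database. The names below (UserRepo, deactivate_dormant) are invented for illustration.

```python
# Sketch of testing business logic through a narrow interface,
# assuming hypothetical names; not any specific codebase's design.
from typing import Protocol

class UserRepo(Protocol):
    """The seam: the only thing the logic knows about storage."""
    def dormant_user_ids(self, days: int) -> list[int]: ...
    def deactivate(self, user_id: int) -> None: ...

def deactivate_dormant(repo: UserRepo, days: int = 90) -> int:
    """The logic under test: pure orchestration against the interface."""
    ids = repo.dormant_user_ids(days)
    for user_id in ids:
        repo.deactivate(user_id)
    return len(ids)

class FakeUserRepo:
    """In-memory fake: fast, deterministic, trivial to seed with edge cases."""
    def __init__(self, dormant: list[int]):
        self._dormant = dormant
        self.deactivated: list[int] = []

    def dormant_user_ids(self, days: int) -> list[int]:
        return list(self._dormant)

    def deactivate(self, user_id: int) -> None:
        self.deactivated.append(user_id)

# Unit test against the fake; an integration test would pass a real
# database-backed implementation of UserRepo instead.
fake = FakeUserRepo(dormant=[3, 7])
assert deactivate_dormant(fake) == 2
assert fake.deactivated == [3, 7]
```

The “real database” camp would point out that the fake cannot catch schema drift or query bugs, which is exactly why the thread converges on also keeping integration tests.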

Tooling, Style, and Automation

  • Automatic formatting and linters (e.g., Go‑style, .NET/clang tools) are praised for eliminating style debates but seen as orthogonal to deeper behavioral consistency issues.
  • Massive mechanical refactors can be enabled by consistent patterns plus code‑mod tooling.
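A toy code-mod shows why consistency enables mechanical refactors: once every call site follows one pattern, an AST rewrite can migrate them all at once. `old_api`/`new_api` are hypothetical names; real migrations typically use dedicated tooling, but the principle is the same.

```python
# Toy code-mod: mechanically rename old_api(...) calls to new_api(...).
# Assumes Python 3.9+ for ast.unparse; names are illustrative only.
import ast

class RenameCall(ast.NodeTransformer):
    """Rewrite every direct call to old_api into a call to new_api."""
    def visit_Call(self, node: ast.Call) -> ast.Call:
        self.generic_visit(node)  # recurse into nested calls first
        if isinstance(node.func, ast.Name) and node.func.id == "old_api":
            node.func = ast.Name(id="new_api", ctx=ast.Load())
        return node

def migrate(source: str) -> str:
    """Parse a module, apply the rewrite, and emit the migrated source."""
    tree = RenameCall().visit(ast.parse(source))
    return ast.unparse(ast.fix_missing_locations(tree))
```

This only works safely when call sites are uniform; the five-ways-to-do-one-thing codebases described above are exactly the ones where such sweeps break down.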

Org Culture, Politics, and Incentives

  • Many note that real blockers are organizational: risk‑averse management, “sticky” seniors guarding old patterns, PRs rejected for “inconsistency,” and lack of time for refactoring.
  • Some argue that “glue work” and cleanup are undervalued; others caution against random “boyscout” changes that derail feature work without clear benefit.

Microsoft disguises Bing as Google to fool inattentive searchers

Bing’s Google-Like Results Page

  • When users search “google” (and some other engines like “yandex”) on Bing, they see a special above-the-fold pane: blank white background, centered search box, and a colorful doodle, visually reminiscent of Google’s homepage.
  • The page auto-scrolls so the main Bing header and logo are hidden; focus goes to the spoofed search box.
  • There is a small “Promoted by Microsoft” label and an “X”, but some note these are easy to miss, especially as some ad blockers hide the element.
  • Behavior varies: some report it only in certain browsers, when logged out, or not in Edge / when signed in.

Is It Deception or Improvement?

  • Critics call it intentional deception: changing layout only when searching for a competitor, hiding Bing branding, and hijacking the user’s clear intent to go to Google.
  • Defenders frame it as harmless or even clever: many nontechnical users equate “Google” with “search” and just want results; this removes a step and keeps them on a functionally similar engine.
  • Others see it as ethically nefarious even if effective, likening it to serving a different brand than requested without saying so.

Search Quality: Bing vs Google vs Others

  • Opinions on relevance are split:
    • Some say Bing and Google have largely converged; others insist Bing remains worse, especially for nuanced or structured queries.
    • Several argue Google has degraded (“enshittified”) and is more overrun by SEO spam and low-value/AI content.
    • Niche patterns: Bing praised for image/adult search and for fewer scammy ads; Google still seen as better for local queries and some tech topics.
  • Many mention using DDG (partly Bing-backed), Kagi, Yandex, and Brave as alternatives, often switching among them depending on query type.

Privacy, Ads, and Captchas

  • Bing is preferred by some because it works under strict privacy setups and VPNs without CAPTCHA “hell,” unlike Google.
  • Others emphasize that all major engines track users and that ad blockers are essential; a few choose paid or ad-free engines to avoid this entirely.

Microsoft’s Broader Behavior & Brand

  • The stunt is viewed in context of other Microsoft dark patterns: aggressive Edge promotion, Start Menu search tying into Bing, and mobile banners nudging app installs.
  • Some see this as typical of a historically aggressive, sometimes anti-competitive company; others argue Google plays similarly dirty tricks with search and Chrome.
  • The “Bing” brand itself is widely seen as weak or mocked, though some say the service has quietly become “fine” or even better than Google in recent years.

UI Mimicry and Legal/Brand Issues

  • Several note that copying successful UI (including Google’s) is common; Microsoft’s ads platform already closely mirrors Google Ads to ease switching.
  • There is debate over whether Google’s doodles and layout are protectable trademarks; some suggest any legal case would hinge on proving deception rather than pure design copying.

Anthropic raising funding valuing it at $60B

Valuation and Funding Mechanics

  • Many see the $60B valuation as driven by private-market dynamics: tiny equity slices sold at the highest price one investor will pay, plus complex preference stacks.
  • Some argue private valuations are no more extreme than certain public tech stocks.
  • Strategic investors (e.g., big clouds) are seen as “bartering” equity for guaranteed AI spend and reporting gains on both cloud revenue and investment marks.
  • A subset frames AI startup investing as a “zero or infinity” AGI lottery rather than a normal growth bet.

Anthropic vs OpenAI: Products, Structure, and Trust

  • Several commenters are bullish on Anthropic: perceived better downstream products, strong team offering, and a cleaner corporate structure than OpenAI.
  • Others strongly prefer OpenAI (especially o1/o3 and the ChatGPT app), see it as more reliable for coding, and note Anthropic’s app UX as weaker.
  • Some businesses reportedly choose Anthropic partly because they view it as more trustworthy than OpenAI.
  • Market-share discussion: Anthropic has far less chat usage but more comparable API revenue and is growing fast.

AI Progress, Futures, and Commoditization

  • One recurring debate: two futures (near-AGI with commodity AI vs ASI “lottery ticket”) vs a more likely incremental-improvement path.
  • Some expect AI to become a low-margin commodity where compute, chips, and energy providers capture most value; others argue quality and integration can sustain moats, analogous to office software.
  • Several stress how quickly models have improved; others say progress is already slowing without new techniques or data.

Economic and Societal Impact

  • Strong disagreement over whether AGI/ASI would collapse the economy (infinite cheap labor) or resemble past technological shifts (tractors, industrialization) with painful but survivable transitions.
  • Concerns raised about climate costs of ever-larger models and finite energy resources.

Usage Patterns, Quality, and Limits

  • Mixed reports on recent Claude quality: some say it degraded; others praise the latest Sonnet, attributing issues to capacity throttling.
  • Some users heavily rely on LLMs daily; others still default to search engines.
  • A side thread explores LLMs as “therapists” or companions: some find them helpful and low-friction; others see false empathy, privacy risks, and corporate data mining as disturbing.

You can't optimize your way to being a good person

Optimization vs Moral Goodness

  • Some argue you can “optimize” toward being better, via therapy or deliberate practice, but good therapy often reduces compulsive optimization rather than feeding it.
  • Others say optimization is inherently about chasing maxima and is opposed to balance; attempting to “optimize” emotions or morality risks anxiety, OCD‑like behavior, and over‑attention to distant issues at the cost of local life.
  • Several comments suggest that overthinking morality leads to analysis paralysis; being decent often requires less abstraction and more action.

Limits of Moral Systems

  • One line of discussion applies incompleteness ideas: any formal moral system is likely incomplete or inconsistent, so “moral optimization” on a single framework is doomed.
  • This leads to “epistemic modesty”: accept that no system captures all moral truth, so using grand theories to justify current suffering is dangerous.
  • Others stress that “good” and “evil” themselves are constructed within systems; attempts to anchor them in physics or entropy are challenged.

Kindness, Empathy, and Everyday Practice

  • Many emphasize small, concrete habits: kindness to strangers, compliments, parking farther away to leave close spots for those with mobility issues, “do no harm.”
  • Empathy is widely valued but debated: some distinguish emotional vs cognitive empathy and warn that excess emotional empathy can bias decisions or cause burnout.
  • Several stories illustrate choosing leniency (not pressing charges, not demanding someone be fired) out of empathy, with commenters split on whether this is wise or naive.

Consequences, Justice, and “Toxic Empathy”

  • A substantial subthread disputes whether refusing to prosecute a non‑lethal crime is compassionate or “toxic empathy” that ignores future victims and deterrence.
  • There is also discussion of the US prison system: some see incarceration as mainly producing worse offenders; others argue that consequences and structured intervention can stop criminal escalation.

Effective Altruism and Optimization Culture

  • Some see effective altruism as exactly the kind of moral optimization the article criticizes; others defend it as simply “if you care about X, use methods that best achieve X.”
  • Skeptics point to association with high‑profile fraud and label EA a cultish scam; defenders reply that one bad actor doesn’t invalidate the idea.

Religious and Philosophical Frames

  • Religious commenters frame goodness as love of God and neighbor, emphasize effort over perfection, and sometimes explicitly reject trolley‑problem tradeoffs.
  • Others argue utilitarianism mischaracterizes morality; being a “good specimen of human nature” and living in accord with our nature is offered as an alternative criterion.

Reactions to the Article

  • Several find the piece patronizing, vague, or saying little beyond “don’t obsess”; others think the core warning—against moral perfectionism and self‑optimization spirals—is valid.

Type 2 Diabetes and cardiovascular disease attributable to sugar beverages

Health impacts of sugar-sweetened beverages (SSBs)

  • Commenters largely treat the SSB–T2D/CVD link as established; this paper is seen as important mainly for quantifying global burden by country.
  • Several emphasize that liquid sugar is metabolically distinct: rapid ingestion, low satiety, strong glucose/insulin spikes, promotion of visceral and liver fat.
  • Some argue the metabolic-syndrome constellation (T2D, CVD, fatty liver, etc.) is driven primarily by diet and lifestyle, with SSBs near the top of the list.

Environment vs personal responsibility

  • Many say blaming individuals is unfair when cheap sugar drinks are omnipresent and healthy, high‑protein or low‑sugar options are scarce or costly.
  • Others push back that people can and do change via willpower and education, though this is countered with “willpower hasn’t worked at a societal level.”
  • Cultural norms (e.g., soda/juice instead of water in Latin America or the US South) are seen as powerful drivers.

Regulation, universal healthcare, and industry influence

  • One thread claims the lack of US universal healthcare is partly due to processed‑food industries fearing later regulation of unhealthy products; others demand evidence and note that countries with universal healthcare still sell soda and tobacco.
  • Sugar taxes are heavily debated:
    • Evidence from places like the UK, Berkeley, and several US cities suggests reduced sugary‑drink purchases and reformulation toward low/zero‑sugar products.
    • Critics say taxes are easily circumvented (buy in neighboring areas), regressive, and “window dressing” compared to fixing subsidies and the food system.

Diet composition: sugar, fat, carbs, seed oils

  • Strong disagreement over whether saturated fat or sugar is the primary villain; some cite mainstream cardiology positions against saturated fat, others doubt studies and argue refined carbs and seed oils are worse.
  • Debate over “carbohydrate poisoning”: some see excess carbs as central to modern disease; others say the issue is ultra‑processed, hyperpalatable foods, not carbs per se.

Artificial sweeteners and alternatives

  • Mixed views: some see diet sodas as clearly better than sugar; others worry about microbiome effects, possible long‑term risks, and maintained “sweetness addiction.”
  • People report success switching to water, tea, coffee (often black), flavored sparkling water, or occasional “real” treats instead of daily sugar drinks.

GLP‑1 drugs and behavior change

  • One camp is enthusiastic about GLP‑1 agonists as scalable tools that dampen food reward signals.
  • Another warns this shifts dependence from food to pharma and should be a last resort after fixing diet and systems.
  • Disagreement becomes sharp around whether “you can’t beat brain chemistry” vs. lifestyle change being sufficient for many.

Show HN: Tramway SDK – An unholy union between Half-Life and Morrowind engines

Overall Reception & Aesthetic

  • Many are enthusiastic about the project and especially the retro website; the tone, jokes, and “90s/00s” presentation resonate strongly.
  • Some complain about the narrow fixed-width layout and tiny column on large screens; others argue it’s “period correct” or that ultra-wide text is uncomfortable.
  • A hidden “Enterprise Mode” and “Design Patterns scoreboard” are widely praised as hilarious and on-theme.

Licensing & Openness

  • Commenters note the initial absence of a license; it is quickly clarified that the project is MIT-licensed and a LICENSE file is added.

Performance, “Turbobloat,” and Hardware Targets

  • Many agree modern engines and games feel bloated and slow despite far more powerful hardware.
  • “Turbobloat” is interpreted as humorous shorthand for unnecessary CPU-hungry features and heavyweight tooling, not a formal term.
  • Several praise targeting older hardware and small, fast builds; some question whether requiring OpenGL 4 for HL1-ish visuals is still a form of bloat.
  • There is debate over whether upgrading hardware can be environmentally justified versus the embodied energy of manufacturing new machines.

Resolution Limits & Rendering

  • The site’s mention of 320x200–800x600 and 24-bit color triggers concern; later clarified in the thread as a joke, not a hard cap.
  • Default renderer intentionally emulates fixed-function-era pipelines; more modern renderers are planned.
  • Discussion branches into how far you can go with prebaked lighting, lightmaps, and good art direction vs modern dynamic GI and ray tracing.

Architecture: Entities vs Nodes / Engines vs Libraries

  • The engine’s “Entities, not nodes” stance resonates with some frustrated by complex node-based editors (Unity/Godot).
  • Others argue node/graph/ECS systems are powerful for composition, reuse, and quick iteration; fear that pure subclassing will hurt scalability mid-project.
  • Multiple people contrast monolithic editors (Unity, Unreal, even Godot) with lightweight library-style tools (raylib, libgdx), praising faster debug loops.

Tooling, Ecosystem, and Use Cases

  • Some see Tramway as a promising middle ground between minimal libraries and huge engines, but worry it might be too opinionated for some and not high-level enough for others.
  • Wasm builds work but are currently ~20MB and unoptimized.
  • There are calls for tutorials, demos, and even a game jam, though the author considers it early.

Tone & Humor

  • The project’s self-aware humor (turbobloat, “disrupting the wheel industry,” “Yeet” lifecycle, Rust rewrite threat) is a major part of what people enjoy.

Is XYplorer really written in VB6?

XYplorer’s Tech Stack and twinBASIC

  • XYplorer’s long-time 32‑bit codebase is VB6; a 64‑bit version now uses twinBASIC, largely by importing the original VB6 code and forms.
  • twinBASIC aims for near‑100% VB6/VBA compatibility, adds 64‑bit, modern language features (generics, attributes, type inference, etc.), an optimizing compiler, and future cross‑platform ambitions.
  • Some see twinBASIC as “heroic niche work” worth paying for; others dislike the subscription model but note there is a free community edition and periodic discounts, plus competing options like B4X and RemObjects Mercury.

VB6: Dead End or Still Viable?

  • Critics call VB6 a frozen, 32‑bit, Windows‑only language with awkward Unicode support, poor error handling (On Error Goto), and no built‑in multithreading, arguing it’s a bad choice for new projects.
  • Defenders stress VB6 as a “platform” (IDE + COM + RAD UI) rather than just a language, claiming it still excels for rapid CRUD and business apps and that Unicode and modern APIs can be handled via libraries and Win32/COM.
  • Some point out ad‑hoc threading via Win32 CreateThread and mutexes, while others note this is unofficial and fragile.
  • Security-wise, the VB6 IDE is unsupported, but Microsoft still ships compatibility/security updates for the runtime.

Developer Experience, RAD, and COM

  • Many reminisce about VB’s drag‑and‑drop GUI builder and COM components (telephony, embedded browser, database, Excel, etc.) as uniquely productive, despite COM’s complexity and footguns.
  • There is broad nostalgia for similar RAD experiences (VB, Delphi, early web tools like FrontPage) and frustration that modern stacks often replace them with boilerplate and fragile dependency graphs.

Alternatives and Successors

  • Suggested successors/relatives include Gambas, Xojo, Lazarus/FreePascal/Delphi, WinForms, Qt Creator, and GTK designers.
  • Delphi/FreePascal are praised for decades of language evolution and cross‑platform support, though Delphi’s pricing is criticized. Lazarus is seen as powerful but documentation and learning materials are considered weak.

XYplorer as a Product

  • Users report using XYplorer for many years, praising speed, dual‑pane mode, scripting, customization, and stability versus Windows Explorer and Linux file managers.
  • Some share extensive script workflows (image pipelines, backups, renaming, VirusTotal integration).
  • Concerns are raised about the single‑developer “bus factor,” but responsiveness to bug reports is praised.
  • Lack of a native Linux/macOS port is attributed to the VB6/Windows focus; Wine compatibility is queried but not clearly resolved.

Broader Language and Tooling Themes

  • Multiple comments argue that “dead” or unfashionable languages (VB6, Delphi, PHP, ColdFusion) still generate plenty of work, and that outcomes depend more on developers and practices than on language fashion.
  • Others counter that in some ecosystems (e.g., PHP, VB6) pay and new‑project opportunities can be limited and largely maintenance‑oriented.

Why is the American diet so deadly?

Engineered, Hyper-Palatable Food

  • Many argue U.S. food is effectively “concentrated pleasure”: like turning coca leaves into cocaine via extraction and refinement.
  • Mass-market products are optimized for cost and “addictiveness” using sugar, fat, salt and flavor layering (“hedonistic ratchet”), encouraging overconsumption.
  • Prepared and restaurant foods are described as almost universally heavy on fat, sugar, salt, and large portions, from chains to “nice” restaurants.

Ultra-Processed Food: Definition & Limits

  • “Ultra-processed food” (UPF) is widely blamed, but several commenters say the category is too broad and heterogeneous to be scientifically precise.
  • Disputes over where pasta, bagged bread, or factory noodles fall show definitional fuzziness and “I know it when I see it” application.
  • Some think the term is still a useful starting point; others insist we must identify specific harmful processes/ingredients instead.

Macros, Micronutrients & Specific Culprits

  • Proposed main problems: too much sugar, high-glycemic carbs, calorie density, low fiber, and excess cheap calories.
  • Disagreement on carbs: some say “it’s the carbs,” others say the real issue is inactivity and total calories, with carbs just easy to overeat.
  • Fiber’s benefits are debated: several cite strong epidemiological links to lower mortality; skeptics point to confounding and weak mechanistic clarity.
  • Seed oils, preservatives, additives, contaminants and pesticide residues are suspected by some; evidence in the thread is mixed or “unclear.”

Culture, Environment & Portions

  • American norms: huge portions, constant snacking, soda ubiquity, car-dependent sedentary life, and limited availability of cheap, prepared whole-food meals.
  • Comparisons to Japan, Korea, Denmark, and parts of Europe highlight: smaller portions, more walking, more vegetables and seafood, fewer sugary foods, and different restaurant/cafeteria norms.
  • Habits from childhood and “comfort foods” strongly shape adult choices, even for people who understand nutrition.

Dieting, Obesity & Metabolism

  • Multiple commenters say the core problem is overeating, but note that simple “dieting” often fails long term.
  • Cited explanations include metabolic adaptation (reduced resting energy expenditure), difficulty sustaining behavior change, and selection bias in diet studies.
  • Personal anecdotes show success with calorie counting, low-carb/keto, fasting, or changing beverage habits (e.g., switching from sugary to diet soda).

Policy, Industry & Responsibility

  • Some blame subsidies, profit-driven regulation, historical sugar-industry influence, and misleading dietary guidance.
  • There is discussion of lawsuits against food companies, analogized to tobacco, but outcomes are still uncertain.

Doomsday Book (2006) [pdf]

Document purpose and scope

  • Described as an internal NY Fed legal playbook for financial emergencies, compiling decades of opinions on the legal limits of Fed action in crises.
  • Intended mainly for Fed lawyers to advise quickly under stress; earlier versions were shared more broadly, now restricted to legal staff because non‑lawyers found it confusing.
  • The released PDF appears to be an introduction and table of contents; sample agreements and much substantive content are missing.

Name and “Doomsday” framing

  • “Doomsday Book” is portrayed as a crisis‑management runbook, not literal end‑of‑the‑world planning.
  • Several note the naming may be dramatic but fitting for scenarios where major sectors or the financial system face collapse.
  • Some confusion and jokes around “Doomsday” vs the medieval “Domesday Book” (land/ownership register); others argue the resemblance is intentional but mainly rhetorical.

Comic Sans and presentation

  • Many react strongly to Comic Sans on the cover/early pages, calling it inappropriate or unserious for such a document.
  • Some speculate it was chosen in a 2006 “Great Moderation” mindset, when severe crises seemed unlikely, or to downplay the document’s gravity “in plain sight.”

FOIA, structure, and accountability

  • NY Fed states it is not subject to FOIA, though it claims to follow the “spirit” of it.
  • Debate over whether this is acceptable:
    • One side: independence from direct political control is essential for sound monetary policy; Congress created and oversees the Fed and could extend FOIA.
    • Other side: exemption plus quasi‑private structure makes a powerful institution insufficiently accountable to the public.
  • Clarifications: Board of Governors is a federal agency and FOIA‑able (with exemptions); regional Feds like NY are structured as corporations with member‑bank “stock,” dividends, and surplus remittances to Treasury, but are tightly constrained and not profit‑maximizing.

Money creation and central bank independence

  • Extended argument over how money is created (“out of thin air” via bank lending vs more nuanced mechanisms including fiscal spending and Fed operations).
  • Some see the system as an exploitative “con” benefitting banks and elites, advocating hard‑capped assets such as gold or bitcoin.
  • Others counter this is standard modern monetary economics, not conspiracy; inequality stems from many policies beyond monetary design.
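
The “out of thin air” claim about bank lending can be made concrete with a toy double-entry sketch (an illustration of the mechanism debated in the thread, not a model of real banking: reserve requirements, capital rules, and settlement are all omitted). When a commercial bank extends a loan, it books the loan as an asset and simultaneously credits the borrower's deposit account as a liability, so broad money grows without any prior deposit being moved:

```python
class Bank:
    def __init__(self):
        self.assets = {"loans": 0}          # claims on borrowers
        self.liabilities = {"deposits": 0}  # money the public can spend

    def make_loan(self, amount):
        # Both sides of the balance sheet grow by the same amount:
        # the new deposit *is* new money, created by the act of lending.
        self.assets["loans"] += amount
        self.liabilities["deposits"] += amount

bank = Bank()
bank.make_loan(100_000)
print(bank.assets["loans"])          # 100000
print(bank.liabilities["deposits"])  # 100000 -- the books still balance
```

This is the sense in which both camps in the thread can be right: the bookkeeping really does create deposits from nothing, while the constraints that make it “more nuanced” live outside this sketch.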

Crisis planning: prudent vs worrisome

  • Several liken the Doomsday Book to disaster recovery or FEMA plans: necessary preparation for low‑probability, high‑impact events.
  • Others find its existence “somewhat concerning,” as it reveals the Fed expects recurring, possibly systemic crises and needs pre‑authorized extraordinary measures.
  • Majority sentiment: better to have detailed legal and operational playbooks than improvise in the next “2008‑style” event.

Why we built Vade Studio in Clojure

Language choice, hiring, and productivity

  • Several argue you should “build with what you’re comfortable with,” but also what you can hire for and scale with.
  • One camp claims niche languages (Clojure, Rust, Elixir, etc.) attract unusually strong developers and enable small, very productive teams.
  • Others counter that the best engineers are generally language‑agnostic, and that language evangelism correlates with low productivity and rewrite churn.
  • A more conservative view emphasizes mainstream languages (Python/Java/TS/C#/Go) for recruiting, tooling, AI assistance, and long‑term maintainability.

Why Clojure / Lisps appeal

  • Supporters highlight: simplicity, immutable persistent data structures, REPL‑driven development, and strong data orientation (EDN, maps, sequences).
  • Some report Clojure rekindled their interest in programming and improved how they reason about abstractions and other languages.
  • Emphasis on modeling systems as transformations of simple data; Pathom + Malli mentioned for graph-like domain modeling and generated resolvers.
  • Others argue similar functional/immutable styles are viable in TypeScript, Elixir, etc., and that Clojure isn’t uniquely capable.

Dynamic vs static typing

  • Static‑type proponents worry about large Clojure codebases, unclear data shapes, and painful refactors; prefer Rust/TS‑like “hover to see types.”
  • Clojure defenders point to REPL introspection, specs/Malli, clj‑kondo, and argue that dynamic + REPL can be extremely productive, especially for smaller teams.
  • There is debate over Typed Clojure: technically available but seen as immature and rarely used; many prefer runtime spec-based approaches.
  • Several note that all approaches involve trade‑offs; static typing is helpful but not a silver bullet.

REPLs and development experience

  • Strong praise for Lisp-style REPL‑driven development: interactive, “video‑game‑like” coding, including connecting to running systems (even in production).
  • Some Elixir users prefer IEx; others find CIDER/nREPL superior. General agreement that most non‑Lisps have weaker REPL integration.

Complexity, abstractions, and adoption

  • Multiple comments stress that managing complexity, not adding technology, is the real challenge; preference for “boring” tech (e.g., Postgres) emerges with experience.
  • Debate over whether powerful abstractions and exotic languages scale in large teams vs. simple OO/Java‑style stacks.
  • Historical side threads discuss why Lisp/Smalltalk didn’t dominate, windows of opportunity, and how community and ecosystem shape adoption.

Vade Studio specifics & product feedback

  • Some impressed that a three‑developer team built a complex system in ~1.5–2 years; others question how much credit belongs to Clojure versus abstractions and team skill.
  • Users report GitHub login loops; maintainer acknowledges and investigates. Google/GitHub auth chosen for frictionless signup; email login requested and promised.
  • Requests for clear pricing, email auth, mobile app generation, and better explanation of “data as first-class citizens” and the conflict‑resolution model.

Mark Zuckerberg: Fact-checking on Meta is too "politically biased"

Nature of Facts, Truth, and Belief

  • Several comments distinguish between:
    • “Facts” as immutable aspects of reality, in principle verifiable.
    • “Truths” or beliefs as subjective, personal, and often tribal.
  • Disagreement over whether societies ever had a widely shared “set of facts”:
    • Some say the last 10–15 years mark a new “post-truth” era.
    • Others argue propaganda, disputed facts, and manufactured consent (e.g., Iraq WMDs, Gulf of Tonkin) have always existed; the internet mainly exposes this more clearly.

Role and Limits of Fact-Checking

  • Strong criticism: “fact checkers” are viewed by some as a de facto “Ministry of Truth,” enforcing a dominant narrative and political biases.
  • Counterpoint: fact-checkers are just domain experts verifying claims labeled as “facts,” though they are fallible and biased like anyone else.
  • Practical argument: individuals cannot verify everything themselves; functional illiteracy and time constraints make trusted intermediaries necessary.
  • Concern that fact-checking is selectively applied and often aligned with powerful interests or governments.

Social Media Dynamics and Community Notes

  • Many see online life and algorithmic feeds as amplifying bubbles, tribalism, and performative outrage, weakening respect for evidence.
  • Community Notes on X/Twitter are widely viewed as:
    • Often useful, contextual, and preferable to outright removal.
    • Limited on highly polarized topics and big polarizing accounts, where loyal followers downvote corrections.
    • Too slow relative to the speed of misinformation spread.
    • Underused or ineffective in regions with fewer users.
  • Some argue any such system must be transparent/open and cannot be a single arbiter of truth.

Platform Power, Politics, and Decentralization

  • Strong suspicion that Meta’s shift away from fact-checking is:
    • Cost-saving.
    • An adaptation to new political power (especially in the US), aligning moderation with the preferences of incoming leadership.
  • Concerns about systemic political censorship (e.g., on certain conflicts) persisting despite rhetoric about “free speech.”
  • Some argue centralized platforms let a handful of wealthy actors shape global opinion; decentralization or more “private” feeds could reduce political leverage, while others question whether that would simply harden self-selected bubbles.

It's time to get back to our roots around free expression

Policy Shift Overview

  • Meta is loosening restrictions on political and social topics (e.g., immigration, gender), and replacing NGO-style “fact checking” with community-driven notes.
  • Content moderation and “trust & safety” work will be moved from California to Texas.
  • Supporters see this as a return to more open discourse; critics see it as a rebranding of reduced moderation.

Motivations and Timing

  • Many commenters suspect the timing is tied to the incoming Trump administration and is meant to avoid regulatory or political retaliation.
  • Others point to cost-cutting: paid moderators and fact-checking NGOs are expensive; community-based systems are cheap or free.
  • Some frame it as corporate pragmatism: platforms ultimately serve profit, not public-interest ideals.

Free Expression vs. Harmful Content

  • Pro–free-expression voices argue:
    • Opinions and lies should be countered by more speech, not bans.
    • Heavy-handed fact-checking can turn bad actors into perceived “martyrs” or “truth seekers.”
  • Critics respond:
    • Social media’s scale and algorithms make unfiltered lies and bigotry far more harmful than in offline discourse.
    • Disinformation (e.g., vaccine skepticism) has already shown real-world damage.
    • “Free speech” without limits ignores bots, fake personas, and coordinated manipulation.

Community Fact-Checking / “Community Notes”

  • Some praise the X/Twitter-style system:
    • Algorithm seeks cross-spectrum agreement.
    • Focuses on scams, miscontextualized media, and concise, sourced corrections.
    • Seen as less ideologically captured than NGO fact-checkers.
  • Others say it is easily gamed:
    • Trolls and coordinated groups can upvote misleading notes.
    • Examples given of nitpicky, partisan, or outright wrong notes being elevated.
    • Non-English or smaller-language communities are described as especially vulnerable to brigading.
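
The “cross-spectrum agreement” idea can be sketched with a heavily simplified toy version of the matrix-factorization approach the open-source Community Notes scorer is based on (this is an assumption-laden, 1-dimensional illustration; the production algorithm is far more elaborate). Each rating is modeled as a global mean plus user and note intercepts plus a user-factor × note-factor interaction; a note counts as “helpful” when its *intercept* is high, i.e. when its positive ratings cannot be explained away by one ideological faction's factor term:

```python
import random

# rating(u, n) ~ mu + user_bias[u] + note_bias[n] + user_factor[u] * note_factor[n]
random.seed(0)
ratings = []  # (user, note, rating)
for u in range(4):
    ratings.append((u, 0, 1.0))                      # note 0: helpful to everyone
    ratings.append((u, 1, 1.0 if u < 2 else 0.0))    # note 1: helpful to one "side" only

mu = 0.0
ub = [0.0] * 4; nb = [0.0] * 2
uf = [random.uniform(-0.1, 0.1) for _ in range(4)]
nf = [random.uniform(-0.1, 0.1) for _ in range(2)]
lr, reg = 0.05, 0.03
for _ in range(2000):  # plain SGD with L2 regularization
    for u, n, r in ratings:
        e = r - (mu + ub[u] + nb[n] + uf[u] * nf[n])
        mu    += lr * e
        ub[u] += lr * (e - reg * ub[u])
        nb[n] += lr * (e - reg * nb[n])
        uf[u], nf[n] = (uf[u] + lr * (e * nf[n] - reg * uf[u]),
                        nf[n] + lr * (e * uf[u] - reg * nf[n]))

# The partisan note's appeal is absorbed by the factor term, so its
# intercept (its "helpfulness") ends up lower than the bridging note's.
print(nb[0] > nb[1])  # True
```

The same structure explains the failure modes commenters describe: with few raters or one dominant faction, there is no “other side” for the factor term to separate out.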

Bias, Location, and Trust

  • Moving moderation to Texas is viewed skeptically:
    • Seen as symbolic pandering to conservatives rather than reducing bias.
    • Debate over Meta’s claim of “less concern about bias” vs. actual bias.
  • Some argue any moderation team will be biased; the issue is how transparent and accountable it is.

Platform Design and Power

  • Several note Facebook’s aging user base and lament the loss of a simple chronological friends-only feed.
  • There is concern that algorithmic amplification, blue-check prioritization, and billionaire influence now dominate what speech is actually seen, even if “allowed.”
  • A side thread worries about US platforms working with the US government against foreign regulation, raising EU self-determination and geopolitical power asymmetry.

Learning Synths

Overall reaction to Ableton “Learning Synths”

  • Many find the site a clear and intuitive introduction to synths, especially the visual oscillator “dot” in the playground that makes parameter changes tangible.
  • Some note this resource has been posted to HN multiple times over the years.
  • A few criticize the pedagogical ordering, starting with amplitude rather than following the signal chain (oscillators → filters → amplitude), as visually slick but conceptually suboptimal.

Tools and tutorials for learning synthesis

  • Recommended interactive tools:
    • Ableton’s Learning Synths and related learning-music content.
    • Syntorial (and a related “building blocks” tool) for ear-based subtractive synthesis training.
    • Glicol (browser/livecoding language) and its quick tour.
    • Lambda Musika, Sonic Pi, Nyquist (via Audacity), and a broader “awesome-livecoding” list for code-based sound work.
  • Several suggest VCV Rack (and the Cardinal plugin fork) as a strong way to understand subtractive and other synthesis methods by explicitly patching oscillators, filters, envelopes, and exploring FM, additive, physical modeling, etc.
  • Others strongly object: VCV is seen as overwhelming “assembly language”; they recommend starting with simple all‑in‑one synths (e.g., Surge, Vital, Helm, simple analog-style plugins or cheap iOS synths).
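
The oscillator → filter → amplifier chain these tutorials teach can be shown in a few dozen lines (a minimal subtractive-synthesis sketch in pure Python; sample rate, cutoff, and envelope lengths are arbitrary choices, not values from any of the tools above):

```python
import math

SR = 44100  # sample rate in Hz

def saw(freq, n):
    """Naive sawtooth oscillator: the harmonically rich raw material."""
    return [2.0 * ((i * freq / SR) % 1.0) - 1.0 for i in range(n)]

def lowpass(x, cutoff):
    """One-pole low-pass filter: 'subtracts' the upper harmonics."""
    a = 1.0 - math.exp(-2.0 * math.pi * cutoff / SR)
    y, out = 0.0, []
    for s in x:
        y += a * (s - y)
        out.append(y)
    return out

def envelope(x, attack, release):
    """Linear attack/release gain: the 'amplifier' stage shapes loudness."""
    n = len(x)
    out = []
    for i, s in enumerate(x):
        if i < attack:
            g = i / attack
        elif i > n - release:
            g = max(0.0, (n - i) / release)
        else:
            g = 1.0
        out.append(s * g)
    return out

# One second of A2: rich source -> darker timbre -> shaped amplitude.
note = envelope(lowpass(saw(110.0, SR), cutoff=800.0), attack=2000, release=8000)
print(len(note))  # 44100 samples, ready to write to a WAV file
```

Every patch in a simple subtractive synth, hardware or software, is some configuration of exactly these three stages (plus modulation of their parameters), which is the intuition both the modular and fixed-architecture camps below are arguing about how best to teach.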

Modular vs fixed‑architecture debate

  • Pro‑modular: patch cables and explicit signal flow rapidly build deep intuition; tutorials are short and reproducible; works well for people more interested in sound design than composition.
  • Pro‑fixed synth: simpler subtractive synths (hardware or software) reduce option overload and focus on making usable patches/music rather than architecture.

Conceptual debates about synthesis

  • One commenter argues “subtractive synthesis isn’t synthesis but transformation,” triggering:
    • Pushback that synthesis broadly means building sound from parts, and subtractive architectures still qualify.
    • Discussion that almost all audio processing can be framed as filtering, which makes relabeling unhelpful.
    • Side debate on whether delays “are” filters at a DSP level vs in musical practice.
  • Terms “East Coast” (subtractive, Moog-style) and “West Coast” (waveshaping/FM, Buchla-style) are discussed; some consider them niche, others say they’re well-established in synth circles.

Learning to play vs sound design

  • A tangent asks whether one can learn to play piano “like a professional” using a computer keyboard.
    • Strong consensus: this is a dead end for actual piano technique due to key size/layout, lack of velocity, limited polyphony, and latency; a cheap MIDI keyboard is heavily recommended.
    • Some note constraints can be creative, but most say it teaches a different, less expressive “instrument.”
    • Broader advice: take lessons, practice daily, learn scales/chords, and choose music you actually enjoy.

Alternative interfaces and sci‑fi vibes

  • Some prefer gestural/hand‑tracking or movement‑based synth control (theremin-like, old sci‑fi feel) and mention tools that map hand tracking to MIDI.
  • Ondes Martenot and theremin‑style portamento are referenced as inspirations.

Technical / browser notes

  • Several notice the browser’s MIDI permission prompt:
    • Some appreciate Web MIDI and WASM use cases.
    • Others criticize requesting MIDI access before showing any content as poor UX and security‑anxiety‑inducing.

Other resources and meta

  • Additional recommendations: Allen Strange’s Electronic Music: Systems, Techniques, and Controls reissue; older touch‑synth apps; an AI‑driven preset generator for a popular softsynth.
  • One comment jokes that such tools “distract you from SuperCollider,” implying deeper environments exist for advanced users.

Building Ultra Long Range Toslink

DIY optical audio hacks (lasers, mirrors, robustness)

  • People link a video where TOSLINK LEDs are replaced with lasers for wireless surround.
  • Concerns are raised about line-of-sight links breaking from vibration (subwoofers, walls, cars with loud bass). Others note beam divergence, and joke that the setup is “self-correcting”: when bass drops the link, the audio, and thus the vibration, stops.
  • Critique of Manchester-encoded amplitude-modulated TOSLINK in free space: vulnerable to ambient light; suggestion to modulate onto a higher-frequency carrier like old IR headphones for robustness.
  • Some argue consumer IR (38 kHz carrier) is insufficient for good audio; others counter that IR headphones prove it works in practice.
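
The Manchester coding under discussion is simple to sketch (assuming the IEEE 802.3 convention, 0 → high/low and 1 → low/high; S/PDIF's actual line code is the closely related biphase-mark code). Every bit yields a mid-bit transition, which keeps the signal DC-balanced and lets the receiver recover the clock, but at audio bit rates the result still looks like slow on/off keying, which is why commenters suggest moving it onto a higher-frequency carrier for free-space links:

```python
def manchester(bits):
    """Encode each bit as two half-bit symbols with a guaranteed transition."""
    out = []
    for b in bits:
        out += [1, 0] if b == 0 else [0, 1]
    return out

symbols = manchester([1, 0, 1, 1, 0])
print(symbols)  # [0, 1, 1, 0, 0, 1, 0, 1, 1, 0]

# DC balance: every bit contributes exactly one high and one low half-symbol.
print(sum(symbols) * 2 == len(symbols))  # True
```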

SFP modules and low‑bitrate signals over high‑speed optics

  • Key observation: with SFPs the project is really “S/PDIF over SFP fiber” rather than extending classic plastic-fiber TOSLINK.
  • Discussion of AC coupling and DC wander: 10G optics expect high-rate, scrambled data; slow Manchester-coded S/PDIF looks almost DC, stressing coupling caps and retimers.
  • Participants note this explains why links only work above ~100–150 kHz effective transition rates.
  • Mention that SDI and AV-specific optics handle “pathological patterns” better; normal Ethernet optics assume pre-scrambled, line-coded signals.

S/PDIF, TOSLINK, HDMI, and formats/DRM

  • Some lament S/PDIF’s ≈1.5 Mbps cap limiting it to compressed 5.1 (DTS, Dolby), pushing people to HDMI for uncompressed surround.
  • Others contest the strict 1.5 Mbps limit, citing 24‑bit/96 kHz stereo specs (~5 Mbps). This remains unresolved in the thread.
  • Lack of bidirectional signaling and robust DRM is cited as a reason TOSLINK wasn’t extended for richer formats; HDMI won due to HDCP.
  • TOSLINK is seen as “boringly reliable” and still widely used (TV → amp, legacy CD/DVD), despite being old.
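
The two bitrate figures in dispute are both recoverable from standard S/PDIF framing (assuming 32-bit subframes, one per channel per sample, with biphase-mark coding doubling the symbol rate on the wire):

```python
def spdif_rates(sample_rate, bit_depth, channels=2):
    """Return (audio payload bits/s, on-the-wire symbol rate)."""
    payload = sample_rate * bit_depth * channels  # actual audio data
    line = sample_rate * 32 * channels * 2        # 32-bit subframes, biphase-mark
    return payload, line

payload, line = spdif_rates(96_000, 24)
print(payload / 1e6)  # 4.608 -> the "~5 Mbps" 24-bit/96 kHz figure
print(line / 1e6)     # 12.288 -> Mbps on the wire

payload48, _ = spdif_rates(48_000, 16)
print(payload48 / 1e6)  # 1.536 -> the "~1.5 Mbps" figure for compressed 5.1
```

So both camps are citing real numbers: ~1.5 Mbps is the 16-bit/48 kHz stereo payload (the budget compressed 5.1 streams must fit in), while ~5 Mbps is the 24-bit/96 kHz stereo payload.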

Audio over Ethernet and live‑sound latency

  • Live‑sound folks compare: AES50 (layer‑1, synchronous, ~62 µs per link) vs Dante/Audio-over-IP (1–10 ms typical, can be lower in some modes).
  • Very low latency over long fiber (≈11 µs) is seen as valuable for digital live audio paths, though venue speaker arrays still deliberately add delay for alignment.
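
The latency figures are easy to sanity-check (assuming a group index of roughly 1.5 for silica fiber, i.e. propagation at about c/1.5 ≈ 200 km/ms; the 2.2 km length below is a hypothetical link, chosen only to show what ~11 µs corresponds to):

```python
C = 299_792_458       # speed of light in vacuum, m/s
V_FIBER = C / 1.5     # ~2e8 m/s in silica fiber

def fiber_delay_us(km):
    """One-way propagation delay over `km` of fiber, in microseconds."""
    return km * 1000 / V_FIBER * 1e6

print(round(fiber_delay_us(2.2), 1))  # ~11 us, i.e. roughly 2.2 km of fiber
print(round(fiber_delay_us(100)))     # ~500 us for a 100 km run
```

At ~5 µs/km, even long fiber runs stay far below the 1–10 ms typical of packetized audio-over-IP, which is why the synchronous layer-1 approaches look attractive for live paths.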

Fiber physics, tools, and oddities

  • Clarification that light in fiber travels at ~c/1.5, slower than in vacuum, which matters for latency budgets and financial trading links.
  • Mention of OTDR launch fiber spools (100–200 km) as an easier way to test ultra-long optical paths.
  • Fiber tech anecdotes: talk sets, non‑intrusive fiber clamps that detect modulation, and the difficulty of mid‑run tapping without splicing.

Ending our third party fact-checking program and moving to Community Notes model

Fact-checkers vs. Community Notes

  • Many see third‑party fact-checking as biased, error‑prone, and easily weaponized, citing Covid, Hunter Biden, and masking examples; they like Community Notes as more pluralistic and transparent.
  • Others counter that professional fact-checkers are rarely wrong in big ways, have documented processes, and that there’s little evidence Community Notes produces higher‑quality information.
  • Disagreement over what “fact” means is a recurring theme: some say facts are clear and checkable; others stress framing, selective emphasis, and statistics as inherently contestable.

Free Speech vs. Censorship

  • One camp views corporate and government‑nudged moderation as authoritarian “arbiters of truth” that drive people into radicalized silos and fuel backlash (e.g., antivax, Covid debates).
  • Another camp sees fact‑checks and bans as necessary guardrails against harmful disinformation and hate, pointing to research on subreddit bans reducing hate speech and to historical limits on speech in emergencies.
  • Some stress that labeling and downranking is suppression in practice, not just “more speech.”

Political Context and Motives

  • Many see Meta’s move as aligning with the incoming US administration and Trump‑aligned figures (board appointments, leadership changes, donations, Texas move).
  • Some frame it as a reaction to shifting political pressure: platforms previously bent toward one party, now toward the other.
  • Others argue tech firms mainly seek to avoid regulation and legal risk, currying favor with whichever side holds power.

Moderation of Harmful but Legal Content

  • Strong concern about Meta reducing proactive removal of suicide, self‑harm, and eating‑disorder content, given past teen suicides and research on contagion effects.
  • Counterpoints emphasize over‑censorship harming discussion of suicide recovery or reporting, and argue parents, not platforms, should gate kids’ exposure.

Business Incentives and Scale

  • Commenters note fact‑checking is expensive, low‑ROI, and politically thankless; Community Notes is cheaper and boosts engagement through conflict.
  • Some see Meta betting that algorithmic feeds plus lighter moderation and more politics maximize time‑on‑site, even if that worsens polarization.

Effectiveness, Extremism, and Echo Chambers

  • One side claims heavy moderation and “forbidden knowledge” effects worsened extremism by pushing people into uncensored silos.
  • Others say pushing extremists off large platforms and breaking echo chambers reduces overall harm and can soften user behavior.
  • There is broad agreement that “town square at global scale” is structurally hard: noise, recruitment, and coordinated state propaganda are persistent problems, and no approach is clearly winning.

Getty Images and Shutterstock to Merge

Overall reaction to the merger

  • Seen largely as consolidation in a shrinking, threatened market rather than growth.
  • Many interpret it as a defensive move against AI image generation, stagnant revenue, and falling valuations.
  • Expectation of cost-cutting and layoffs; some see it as classic financial engineering to “make the line go up” rather than improve products.

AI, stock photography, and the future of the market

  • Both companies already offer AI image generators; some argue this is mainly a way to get paid if their libraries are scraped anyway.
  • Several commenters say they’ve stopped buying stock since modern models (Stable Diffusion, Flux, etc.) became good enough for generic web/marketing uses.
  • Others find AI images uncanny and view them as a signal of low-effort, cheap branding; they still prefer real photos, especially where authenticity matters.
  • Broad consensus that:
    • Generic conceptual stock (“diverse people smiling in an office”) is highly vulnerable to AI.
    • Event/news photography and “record of reality” images remain hard to replace.

Pricing, access, and impact on users

  • Many complain stock licenses are extremely expensive for light or one-off users; subscriptions only make sense at scale.
  • Contributors report earning pennies per download despite high retail prices.
  • Free/“freemium” sites (Unsplash, Pexels) are praised, with concern that acquisitions and mergers lead to paywalls and “enshittification.”
  • Expectation from several participants: post-merger quality down, prices up, fewer options for end users.

Antitrust and regulation

  • Some argue this merger creates a highly concentrated market (Getty + Shutterstock vs Adobe Stock).
  • Others respond that, in practice, US regulators require clear evidence of price or consumer harm, making a challenge unlikely.
  • General cynicism that mergers often degrade products and services even when they pass legal review.

Business model, innovation, and contributors

  • One view: the real money is in editorial and exclusive contracts, plus licensing and enforcement, not the generic stock catalog.
  • Former-insider descriptions portray both firms as mature, low-innovation businesses focused on acquisitions, tech stack churn, AI licensing deals, and layoffs.
  • Ideas surface for alternative models: decentralized indexes, direct payments to photographers, simpler microtransactions—but recognized as hard given current payment friction.

Copyright enforcement and ethics

  • Getty is described as aggressive in pursuing unlicensed use, including high settlement demands.
  • Some see this as fair deterrence; others label certain tactics (e.g., allegedly bumping list prices once infringement is found) as “shady,” though details are contested.