Our newsroom AI policy
AI, Original Content, and Incentives
- Several comments worry AI will “poison its own well”: models depend on human-created content, but free content is being paywalled or degraded as ad revenue and traffic drop (Wikipedia, review sites, etc.).
- Some suggest LLMs could underpin a long-dreamed micropayment system where usage-based fees are shared with content creators; others counter this risks incentivizing spam and is functionally similar to taxation funding public goods.
Micropayments, Spotify, and Creator Pay
- A recurring analogy is “pay out like Spotify,” but many argue streaming models mainly favor major players and encourage fraud/bot activity.
- Some note Spotify’s overall payout percentage is not terrible, yet per-creator income is low because subscriptions are cheap and revenue pools are unevenly distributed.
- Alternatives like Bandcamp and copyright-alternative schemes are mentioned as more creator-friendly reference points.
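The pro-rata argument above can be made concrete with a toy calculation. This is a minimal sketch with made-up numbers (not real Spotify figures): a pool-based split can pay out a respectable overall percentage while a long-tail creator still sees almost nothing of any one subscriber's fee.

```python
# Illustrative sketch (hypothetical numbers, not real Spotify figures) of why
# a pro-rata pool pays a healthy overall percentage yet little per creator:
# the pool is split by share of total streams, so a dominant act absorbs
# most of a cheap subscription's revenue.

def pro_rata_payouts(pool: float, streams: dict[str, int]) -> dict[str, float]:
    """Split a revenue pool proportionally to each creator's stream count."""
    total = sum(streams.values())
    return {name: pool * count / total for name, count in streams.items()}

# Hypothetical month: $7.00 of a $10 subscription enters the pool (a 70%
# payout rate), but one major act accounts for 95% of the listener's streams.
streams = {"major_label_act": 950, "indie_artist": 50}
payouts = pro_rata_payouts(pool=7.00, streams=streams)
# The indie artist's share of this listener's fee is 7.00 * 50/1000 = $0.35.
```

The overall payout rate (70%) looks fine, which is the point some commenters make; the uneven distribution inside the pool is what keeps per-creator income low.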
AI, Capitalism, and Inequality
- Multiple comments argue AI under current economic structures will deepen inequality: verified information stays behind paywalls while the masses get low-quality “slop.”
- Others stress the core problem is wealth concentration and incentives under capitalism, not the technology itself, and call for non-wealth-based incentives for progress.
Ars AI Policy, Accountability, and Ethics
- Many see the policy as a response to Ars’ previous incident with fabricated AI quotes and a fired reporter.
- Critics call the policy self-contradictory: it permits AI for research and summarization yet makes reporters fully responsible for the output; they argue that verification thorough enough to catch every hallucination would erase any efficiency gain.
- Supporters say this is no different from using Wikipedia, search engines, or human sources: tools can assist, but journalists must verify and not treat AI as authoritative.
Practical Use of LLMs in Journalism
- Advocates propose LLMs as “metal detectors” over large document dumps (e.g., leaks, datasets) that surface leads to be manually checked, not as oracles.
- Skeptics argue LLM output is optimized for plausibility, not truth, making it unsuitable where accuracy is paramount.
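The "metal detector" workflow advocates describe can be sketched in a few lines. This is a hedged illustration, not any newsroom's actual pipeline: `ask_model` is a hypothetical stand-in for whatever LLM call is used, and every hit is only a pointer back into the source documents for a human to verify, never a finished claim.

```python
# Minimal sketch of the "metal detector" workflow: run a model over chunks
# of a document dump to surface *candidate* leads, then route every hit to
# a human for manual verification. `ask_model` is a hypothetical stand-in
# for a real LLM call; nothing here treats its output as authoritative.

from typing import Callable, Iterable

def surface_leads(
    documents: Iterable[str],
    ask_model: Callable[[str], bool],
    chunk_size: int = 2000,
) -> list[tuple[int, str]]:
    """Return (doc_index, chunk) pairs the model flags for human review."""
    hits = []
    for i, doc in enumerate(documents):
        for start in range(0, len(doc), chunk_size):
            chunk = doc[start:start + chunk_size]
            if ask_model(chunk):          # model says "worth a look"
                hits.append((i, chunk))   # queued for manual verification
    return hits

# Toy stand-in: flag chunks containing a keyword. A real model would be
# prompted to spot anomalies, names, dates, or contradictions instead.
docs = ["routine memo", "payment to offshore account flagged"]
leads = surface_leads(docs, lambda c: "offshore" in c)
```

The design choice matching the advocates' framing is that the model only filters; the false-positive and false-negative costs both land on the manual-check step, which is why skeptics counter that the verification burden may cancel the savings.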
Visuals, “Slop,” and Reader Backlash
- The policy’s allowance for AI-assisted visuals but human-led “creative direction” is seen by some as vague and more about vibes than enforceable limits.
- Some commenters now treat Ars as an “AI slop factory,” plan to avoid it, or even use scripts to blacklist the domain, while others are more pragmatic and will continue using whatever source (AI or human) yields the most useful news.