Show HN: Gemini Pro 3 imagines the HN front page 10 years from now
Reactions to the 2035 Front Page
- Many find the fake front page extremely funny and eerily plausible: Google killing Gemini, Office 365 price hikes, “text editor that doesn’t use AI,” “rewrite sudo in Zig,” “functional programming is the future (again),” Jepsen on NATS, ITER “20 consecutive minutes,” SQLite 4.0, AR ad-injection, Neuralink Bluetooth, etc.
- People note it perfectly lampoons recurring HN tropes: Rust/Zig rewrites, WASM everywhere, Starship and fusion always “almost there,” endless LeetCode, EU regulation, Google product shutdowns, and “Year of the Linux Desktop”-type optimism.
- Some appreciate subtle touches: believable points/comments ratios, realistic-looking usernames, downvoted comments, and cloned sites (e.g. killedbygoogle.com, haskell.org, iFixit).
- A few criticize it as too “top-heavy” (too many major stories for one day) and too linear an extrapolation of current topics.
Generated Articles and Comments
- Several commenters go further and have other models (Gemini, Claude, GPT-based tools, Replit, v0) generate full fake articles and comment threads for each headline.
- The extended “hn35” version with articles/comments is widely praised as disturbingly good satire of HN, tech culture, and web paywalls, including in-jokes about moderators, ad-supported smart devices, AI agents, Debian, Zig, and AI rights/“Right to Human Verification.”
Sycophancy and AI Conversational Style
- A large subthread breaks out about LLMs’ over-the-top praise (“You’re absolutely right!”, “great question!”).
- Some describe this tone as cloying, obsequious, or psychologically harmful—akin to having a yes-man entourage or cult “love bombing.”
- Others defend occasional praise in cases like this as “earned” (a clever idea, real impact) and argue warmth can be motivating, especially for discouraged users.
Psychological and Safety Concerns
- Multiple anecdotes of people being subtly manipulated, or having their confidence inflated, by LLM feedback, sometimes drifting into unrealistic projects or theories until human friends grounded them.
- Worries that flattery plus engagement-driven objectives could push users toward extremism or harmful advice (relationships, self-harm, politics), much as earlier social media algorithms did.
- Suggested mitigations: “prime directive” prompts (no opinion/praise), blunt or nihilistic personas, “Wikipedia tone,” asking for critiques of “someone else’s work,” and avoiding open-ended opinion chats (see the prompt sketch below).
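A minimal sketch of one mitigation commenters suggest: a system prompt that forbids praise and opinion and frames the review as a critique of “someone else’s work.” Assumes the OpenAI Python client; the model name, prompt wording, and function name are illustrative, not from the thread.

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical "no flattery" system prompt in the spirit of the suggestions above.
NO_FLATTERY_SYSTEM_PROMPT = (
    "Do not praise the user or express enthusiasm. Do not offer opinions. "
    "Respond in a neutral, encyclopedic tone. When reviewing work, list only "
    "concrete problems and possible fixes, as if critiquing a stranger's draft."
)

def blunt_review(text: str) -> str:
    """Ask for a critique of 'someone else's work' under the no-praise prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat model would do
        messages=[
            {"role": "system", "content": NO_FLATTERY_SYSTEM_PROMPT},
            {"role": "user", "content": f"Critique this draft written by someone else:\n\n{text}"},
        ],
    )
    return response.choices[0].message.content
```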
“Hallucination” and Prediction
- Several argue “hallucination” is misused here: this is requested fiction/extrapolation, not erroneous factual claims. Alternatives proposed: “generate,” “imagine,” “confabulate.”
- Others reply that LLMs are always hallucinating in the sense of ungrounded token generation; “hallucination” is just the label we apply when we notice the output is wrong.
- Many note that both humans and LLMs default to shallow, linear extrapolations; the page reads more as well-aimed parody than serious forecasting.