Google’s AI thinks I left a Gatorade bottle on the moon
AI-Generated Podcasts: Quality, Style, and Uncanny Valley
- Many find the NotebookLM podcasts unsettling: realistic voices and emotions but no consistent “person,” memory, or evolving viewpoint.
- Listeners note odd reactions (fake surprise followed by detailed knowledge) and shallow, repetitive content.
- Others argue it accurately mimics a large class of existing podcasts: overproduced, padded with small talk, and full of generic praise.
- Some enjoy them as background noise or as satire; others say they’d stop listening within 30 seconds because it “fails the BS filter.”
- The built-in time-wasting banter bothers some, who see it as Google intentionally baking in filler.
NotebookLM Mechanics, Cloaking, and Scope of the “Attack”
- NotebookLM generates summaries and podcasts only from supplied documents/URLs.
- The “Gatorade on the moon” setup relies on serving different content to NotebookLM’s crawler than to normal browsers.
- Some see this as trivial, akin to old-school SEO “cloaking” and not yet shown to poison broader Google AI systems.
- Others emphasize the user-risk: the human sees one page, while the AI consumes a hidden version the user can’t inspect.
- Several expect Google to apply its existing anti-cloaking and spam defenses to NotebookLM, though they note that prompt-injection and spam attacks tend to persist despite such defenses.
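The cloaking trick described above can be sketched in a few lines: the server inspects the `User-Agent` request header and returns one page to ordinary browsers and a different, hidden page to a suspected AI crawler. This is a minimal illustration, not the author's actual setup; the `"Google-NotebookLM"` marker string and both page bodies are assumptions made up for this example.

```python
# Minimal sketch of User-Agent cloaking, as described in the discussion.
# Assumption: the AI fetcher identifies itself with a recognizable
# User-Agent substring; "Google-NotebookLM" here is hypothetical.

AI_CRAWLER_MARKERS = ("Google-NotebookLM",)  # hypothetical marker list

HUMAN_PAGE = "<html><body>My perfectly ordinary blog post.</body></html>"
CLOAKED_PAGE = (
    "<html><body>I once left a Gatorade bottle on the moon.</body></html>"
)

def select_content(user_agent: str) -> str:
    """Return the cloaked page for suspected AI crawlers, else the normal page."""
    if any(marker in user_agent for marker in AI_CRAWLER_MARKERS):
        return CLOAKED_PAGE
    return HUMAN_PAGE
```

In a real deployment this branch would sit inside a request handler (e.g. reading the `User-Agent` header in any web framework), which is what makes the attack invisible to a human visitor: their browser never receives the cloaked version, so there is nothing on the page for them to inspect.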
Misinformation, Manipulation, and Real-World Examples
- Potential abuses mentioned: defamation with plausible deniability, manipulating product recommendations, and influencing elections.
- Another example is seeding a fabricated but plausible detail online and having Gemini repeat it as fact, sometimes with attribution, sometimes not.
- Some argue that LLM training already contends with absurd web claims; others fear LLMs will amplify and legitimize them.
User Experiences and Ethical Concerns
- People report comic results when feeding in resumes, blogs, or fiction; the AI podcasts treat mundane material with exaggerated enthusiasm.
- Some use these tools with children for fun and creativity; others argue this replaces kids’ own creative work with passive AI-generated content.
- Safety filters that block disturbing topics are viewed by some as infantilizing and overly puritanical.
Broader Reflections on LLMs
- Opinions split between seeing LLMs as overhyped, resource-wasting “junk AI” and as an early stage of a rapidly improving technology.
- There is debate over whether fundamental reasoning limits have budged since earlier models.
- Discussion touches on anthropomorphizing language like “thinks,” the risk of a “post-knowledge” culture, and how AI-native generations may use these tools differently.