Gemini figured out my nephew’s name
Mobile layout and web UX
- Many readers report the blog is broken on mobile (cut-off sides, unusable in portrait).
- Workarounds include reader mode, landscape orientation, zooming out, or desktop view.
- This triggers a broader gripe: modern sites, and even AI vendors’ docs (ChatGPT, Anthropic), often render tables and code unreadably on both mobile and desktop.
- Some see this as a symptom of HTML/CSS being used for pixel-perfect layout instead of device-driven presentation.
Giving LLMs access to email vs local models
- Several commenters are uneasy about handing email to hosted LLMs, even via “read-only” tools.
- Some argue this is moot for people already on Gmail; others still avoid plaintext email for private topics, preferring E2E messengers (WhatsApp, Signal) or calls.
- Others note that current local models (Gemma, Qwen, Mistral) can already do tool use and summarization, so similar setups could run entirely on-device, given strong enough hardware (a minimal sketch follows).
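A minimal sketch of what such an on-device setup could look like, assuming a local OpenAI-compatible endpoint (as exposed by e.g. Ollama or llama.cpp’s server). The URL, the model name, and the search_email stub are illustrative assumptions, not any commenter’s actual setup:

```python
# Tool-calling against a local, OpenAI-compatible endpoint. Everything
# here is a sketch: the base_url, model name, and search_email stub are
# placeholders for whatever local server and mail store you actually run.
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

def search_email(query: str) -> str:
    """Stand-in for a read-only search over a local mail store (e.g. Maildir)."""
    fake_hits = ["Subject: Welcome Monty! | From: brother@example.com"]
    return json.dumps(fake_hits)

tools = [{
    "type": "function",
    "function": {
        "name": "search_email",
        "description": "Search locally stored email; returns matching headers/snippets.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

messages = [{"role": "user", "content": "What is my newest nephew's name?"}]
resp = client.chat.completions.create(model="qwen2.5:7b", messages=messages, tools=tools)

# If the model asked to call the tool, run it locally and feed the result back.
msg = resp.choices[0].message
if msg.tool_calls:
    call = msg.tool_calls[0]
    result = search_email(**json.loads(call.function.arguments))
    messages += [msg, {"role": "tool", "tool_call_id": call.id, "content": result}]
    final = client.chat.completions.create(model="qwen2.5:7b", messages=messages, tools=tools)
    print(final.choices[0].message.content)
```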
Privacy, deanonymization, and future misuse
- A major thread discusses how AI plus large-scale training data will pierce online pseudonymity.
- Stylometry and writing-style fingerprinting can already link alt accounts, and AI will make this easier and more accurate (a toy similarity sketch follows this list).
- People recount being doxed or “history-mined” over petty disputes; targeted ads and data brokers are cited as proof that large-scale harvesting is already happening.
- Some update their “threat model” to assume any shared data could be recombined in surprising ways years later.
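To make the stylometry point concrete, here is a toy sketch of style similarity using character n-gram TF-IDF and cosine similarity. The snippets are invented, and real stylometric attacks rely on far richer features and far more text:

```python
# Toy stylometry sketch: character n-gram TF-IDF + cosine similarity.
# The sample texts are made up purely to show the mechanics.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

known = "Honestly, I reckon the whole thing is overblown -- nobody audits these pipelines."
alt_a = "honestly i reckon this is overblown, nobody audits anything anymore"
alt_b = "The committee has reviewed the proposal and finds it satisfactory."

vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
X = vec.fit_transform([known, alt_a, alt_b])

sims = cosine_similarity(X[0], X[1:]).ravel()
print(f"known vs alt_a: {sims[0]:.2f}")  # higher: similar wording and register
print(f"known vs alt_b: {sims[1]:.2f}")  # lower: different register
```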
LLM memory and hidden data retention
- One commenter claims ChatGPT retains information even after visible memory and chats are deleted, implying some hidden, unmanaged memory.
- Others are skeptical and ask for proof, arguing it may be hallucination or misunderstanding; they note potential legal implications if it were true.
- There’s general cynicism that tech companies may keep more data than they admit; “soft deletion” (records flagged as deleted rather than actually erased) is suspected.
How impressive is the “nephew’s name” trick?
- Some view Gemini’s deduction as a neat but minor demo: essentially email search plus a plausible inference from subject/content (“Monty”) to “likely a son.”
- Critics say a human assistant would be expected to do at least as well, perhaps adding validation (e.g., searching that name explicitly).
- Others argue the value is offloading the tedious scanning and that this resembles what a human secretary would do.
Everyday uses and “parlor tricks”
- Examples include using LLMs to:
  - Scan photo libraries for event flyers and extract the details.
  - Connect to email/Redmine via MCP for contextual coding help.
  - Extrapolate a weight-loss trend and then, given only the bare numbers, infer what the underlying task was.
- Some call these “parlor tricks”; others say the speed and flexibility are genuinely useful, even if the underlying operations (search, summarize, regress) are conceptually simple (see the regression sketch below).
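On the “conceptually simple” point: the weight-loss extrapolation above is, mechanically, a least-squares line fit over dated measurements. A minimal sketch with made-up numbers:

```python
# The "weight-loss extrapolation" as a plain linear fit. The measurements
# are invented; an LLM doing this is effectively running the same
# regression, just driven by a conversational prompt.
import numpy as np

days = np.array([0, 7, 14, 21, 28])                 # days since first weigh-in
weights = np.array([92.0, 91.2, 90.6, 89.9, 89.1])  # kg

slope, intercept = np.polyfit(days, weights, deg=1)  # least-squares line
target = 85.0
days_to_target = (target - intercept) / slope

print(f"trend: {slope * 7:+.2f} kg/week")
print(f"~{days_to_target:.0f} days from start to reach {target} kg")
```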
Tool use control and safety
- A few stress that “discuss before using tools” must be strictly enforced; preferences about style can be loose, but tool invocation must not be.
- There’s consensus that robust enforcement belongs in the client (or orchestration layer), not just in the model prompt, though this is nontrivial to implement (a client-side gating sketch follows this list).
- One user limits the LLM’s email access to a few threads and keeps sending as a separate, user-approved step.
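A minimal sketch of what client-side enforcement could look like: the model may propose tool calls, but the orchestration layer applies the policy, and side-effecting actions (like sending email) are gated behind an explicit user-approval step. The tool names and the confirm() flow are illustrative, not any particular framework:

```python
# Client-side tool gating: the model can *propose* tool calls, but the
# orchestration layer decides whether they actually run.
READ_ONLY_TOOLS = {"search_email", "read_thread"}   # may run automatically
CONFIRM_TOOLS = {"send_email", "delete_message"}    # always need approval

def confirm(name: str, args: dict) -> bool:
    """Separate, user-approved step for side-effecting actions."""
    answer = input(f"Model wants to call {name}({args}). Allow? [y/N] ")
    return answer.strip().lower() == "y"

def dispatch(tool_call: dict, registry: dict):
    """Enforce the policy in the client, not in the model prompt."""
    name, args = tool_call["name"], tool_call["arguments"]
    if name in READ_ONLY_TOOLS:
        return registry[name](**args)
    if name in CONFIRM_TOOLS and confirm(name, args):
        return registry[name](**args)
    return {"error": f"tool call '{name}' was blocked by client policy"}
```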
Broader anxieties and humor
- Commenters joke about AI predicting crimes or votes, invoking sci-fi (Minority Report, 2001: A Space Odyssey) to express concern about loss of control.
- Some mock the blog title as clickbait (“your son’s name,” trivial inference, or just “call your brother instead”).
- There’s light humor about bizarre names and prompt-injection-style names that would smuggle instructions to AIs.