Search tool that only returns content created before ChatGPT's public release
How the extension works and its limits
- The tool just uses Google’s API with a date cutoff (Nov 30, 2022). Several commenters test it and confirm it’s ultimately driven by Google’s notion of “publication date.”
- Multiple examples show Google often infers dates from page content or metadata, not crawl/index timestamps. Pages and domains can be backdated years or decades, so post‑ChatGPT pages can appear as “pre‑2022.”
- Because sites can alter metadata or rewrite old pages, people argue the filter can be gamed and cannot reliably guarantee “human‑only” content.
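The mechanism described above can be sketched with Google's Custom Search JSON API, whose documented `sort=date:r:<start>:<end>` parameter restricts results by Google's inferred publication date. The API key, engine ID, and query below are placeholders, and the exact cutoff the extension uses is an assumption taken from the Nov 30, 2022 release date; this is a minimal illustration, not the extension's actual code.

```python
from urllib.parse import urlencode

# ChatGPT's public release date, used as the upper bound of the range.
CUTOFF = "20221130"

def pre_chatgpt_search_url(query, api_key="API_KEY", engine_id="ENGINE_ID"):
    """Build a Custom Search URL restricted to pre-ChatGPT dates.

    `sort=date:r:<start>:<end>` is Google's documented date-range
    restrict. Crucially, it filters on Google's *inferred* publication
    date, not crawl time -- which is exactly why backdated pages can
    slip through the filter.
    """
    params = {
        "key": api_key,          # placeholder credential
        "cx": engine_id,         # placeholder search-engine ID
        "q": query,
        "sort": f"date:r:19900101:{CUTOFF}",
    }
    return "https://www.googleapis.com/customsearch/v1?" + urlencode(params)

url = pre_chatgpt_search_url("low-background steel")
```

Because the range restrict trusts whatever date Google assigned to a page, rewriting a page's metadata is enough to move it inside the window.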
Search dates, SEO, and enshittification
- Many note that search results had already degraded years before ChatGPT due to SEO spam, listicles, and ad‑driven UI changes.
- Some argue Google effectively “gave up” on fighting SEO because worse results and more engagement with ads are more profitable.
- Others say pre‑2022 already had lots of auto‑generated SEO text and early GPT‑2/3‑based slop, so the cutoff is somewhat arbitrary.
AI slop vs human slop
- A recurring theme is “slop”: low‑effort, mass‑produced content. Commenters connect today’s LLM slop to longstanding marketing/SEO slop.
- Some think AI is mostly replacing existing content‑farm rubbish; careful searchers can still avoid it.
- Others argue AI radically worsens signal‑to‑noise: AI text is cheap, overconfident, and can flood every niche query, making careful search much harder.
Trust, quality, and “low‑background tokens”
- Several liken pre‑LLM text to “low‑background steel” or “low‑background tokens”: a finite, relatively uncontaminated corpus valuable for training and research.
- There’s disagreement over whether human‑origin text is inherently higher signal, but many value that humans show uncertainty and must expend effort, so pure spam is rarer.
- Creators resent that carefully written human work will be ingested into future models and recycled as slop.
Alternatives and coping strategies
- People mention using `before:` filters directly, Kagi (with AI/“slopstop” toggles), Mojeek, DuckDuckGo, or non‑Google engines; others fall back to books, in‑person advice, and private communities.
- Some want tools that detect and strip AI content (even “AI to fight AI”); others call the extension imperfect but still “better than nothing.”
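The DIY alternative mentioned first needs no extension at all: Google's documented `before:YYYY-MM-DD` operator can be appended to any query. A minimal sketch (the example query is invented):

```python
from urllib.parse import quote_plus

def before_cutoff_url(query, cutoff="2022-11-30"):
    """Build a plain Google search URL with a `before:` operator.

    Like the extension, `before:` filters on Google's inferred
    publication date, so it shares the same backdating weakness.
    """
    return ("https://www.google.com/search?q="
            + quote_plus(f"{query} before:{cutoff}"))

url = before_cutoff_url("sourdough starter troubleshooting")
```

This makes the thread's point concrete: the extension is a convenience wrapper around a filter anyone can type by hand, and both inherit Google's date inference.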
Bigger-picture concerns
- Commenters worry about AI‑generated pages training future models (“slop training slop”) and feedback loops where hallucinations get “confirmed” by AI‑written sources.
- There is speculation about human‑only networks, web‑of‑trust gating, and the broader cultural impact of an internet saturated with indistinguishable machine text.