France targets Australia-style social media ban for children next year

Perceived harms and rationale for a ban

  • Many see mainstream social platforms as addictive, manipulative systems comparable to harmful substances, especially for teens.
  • Concerns cited: AI‑generated “slop”, gore and disturbing content, grooming and private messaging by adults, self‑harm material, and long‑term attention/learning issues.
  • Some argue kids enjoy curated, moderated content (cartoons, kids’ shows, older video games) and don’t need algorithmic feeds at all.
  • Others expect bans to reduce teen mental‑health problems and suicides, likening them to existing limits on alcohol or tobacco.

Surveillance, ID, and deanonymization worries

  • A major thread: “ban for children = ID verification for everyone.” You can’t exclude minors without authenticating all users.
  • Australia’s model (facial age estimation, behavioral signals, optional government ID) is criticized as mass surveillance; some clarify the law discourages mandatory ID but still pushes data‑heavy methods.
  • EU/French approach: “double‑anonymous” age checks and an EU Digital Identity Wallet using zero‑knowledge proofs are described; others distrust EU privacy promises and foresee mission creep.
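The “double‑anonymous” split can be sketched as follows. This is a minimal illustration, not the actual EUDI wallet protocol: the attester and site names are hypothetical, and a simple HMAC round‑trip stands in for a real zero‑knowledge proof (in a genuine ZKP scheme the site would verify the claim offline, without contacting the issuer at all). The point of the structure is that the attester learns the identity but not the destination site, while the site learns only a bare “over the threshold” bit.

```python
import hashlib
import hmac
import secrets


class AgeAttester:
    """Hypothetical attester (e.g. a bank or eID provider) that knows the
    user's identity and issues unlinkable 'over the threshold' tokens."""

    def __init__(self):
        self._key = secrets.token_bytes(32)  # attester's secret MAC key

    def issue_token(self, user_is_over_15):
        # The attester checks identity/age here but never learns which
        # site the token will later be presented to.
        if not user_is_over_15:
            return None
        nonce = secrets.token_hex(16)  # fresh per token, so tokens don't link
        mac = hmac.new(self._key, nonce.encode(), hashlib.sha256).hexdigest()
        return f"{nonce}.{mac}"

    def verify_token(self, token):
        # Called by the site with no identity attached; reveals only
        # whether the token is valid, not who it was issued to.
        nonce, mac = token.split(".")
        expected = hmac.new(self._key, nonce.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(mac, expected)


class Site:
    """The platform sees only the token, never the user's identity."""

    def __init__(self, attester):
        self._attester = attester

    def admit(self, token):
        return token is not None and self._attester.verify_token(token)


attester = AgeAttester()
site = Site(attester)
print(site.admit(attester.issue_token(True)))   # admitted
print(site.admit(attester.issue_token(False)))  # refused: no token issued
```

Critics’ mission‑creep worry maps directly onto this sketch: whoever runs the attester controls which claims can be attested, and nothing in the protocol itself stops that list from growing beyond age.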
  • Many fear a broader political project to de‑anonymize the web and expand state and corporate tracking under a “protect the children” banner.

What counts as “social media”?

  • Debate over whether forums like HN/Reddit/Discord are “social media” and thus in scope.
  • Suggested distinctions: personalized addictive feeds, engagement‑driven recommendation, data‑harvesting ad models, and ease of publishing self‑incriminating content.
  • Others note regulators can and do target platforms selectively and politically, not by clean technical definitions.

Alternative solutions and age‑verification schemes

  • Proposals include:
    • Device‑ or account‑level “child mode” with OS‑enforced content ratings.
    • An HTTP request header (or a dedicated child‑safe TLD) signaling a minor’s device; schools and parents restrict devices to sites that honor it.
    • Scratch‑off “age tokens” or bank/eID‑based zero‑knowledge proofs.
  • Critics highlight black‑market resale, complexity, and the risk of building “oppression tech” that will later be repurposed for broader censorship.
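The header‑based proposal above can be sketched server‑side in a few lines. Everything here is an assumption for illustration: there is no standard age‑bracket request header today, and the hostnames are invented. The scheme only works if an OS‑enforced child mode reliably sets the header on children’s devices; an absent header has to be treated as “adult,” which is exactly where critics locate the enforcement gap.

```python
# Hypothetical header name and hostnames; no such standard exists today.
AGE_HEADER = "X-Age-Bracket"

# Sites a child-mode device should be blocked from (illustrative).
RESTRICTED_FOR_MINORS = {"feed.example", "shorts.example"}


def gate_request(host, headers):
    """Return True if the request may proceed, False if it must be blocked.

    A missing header defaults to 'adult': the server cannot distinguish
    an adult's device from a child's device that strips the header.
    """
    bracket = headers.get(AGE_HEADER, "adult")
    if bracket == "child" and host in RESTRICTED_FOR_MINORS:
        return False
    return True


print(gate_request("feed.example", {"X-Age-Bracket": "child"}))  # blocked
print(gate_request("feed.example", {}))                          # allowed
```

Note how the design choice pushes all real enforcement onto the device (and onto parents and schools configuring it), which is why proponents pair it with OS‑level controls rather than server‑side ID checks.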

Politics, control, and responsibility

  • Some blame social media for the rise of (especially right‑wing) populism and see regulation as a way to limit extremist spread; others call that open political censorship.
  • Split between those who see this as necessary public‑health regulation and those who see nanny‑state overreach better handled by parents and existing parental‑control tools.
  • Many doubt enforceability (VPNs, proxies, adults willing to pass checks on a minor’s behalf) and view the measures as symbolic, though supporters argue even partial friction can break harmful network effects.