Spain to ban social media access for under-16s, PM Sanchez says

Perceived harms and support for bans

  • Many see social media as highly addictive, “drug-like,” and particularly toxic for children’s mental health, attention, and susceptibility to manipulation.
  • Some argue the harms now clearly outweigh benefits, likening regulation to controls on alcohol, cigarettes, or prescription drugs.
  • Several commenters would go further: raise the age limit to 18, regulate it like a controlled substance, or even ban algorithmic social feeds altogether.
  • Others note that teens themselves often feel unable to control their usage; legal friction could help them.

Privacy, deanonymisation, and digital ID

  • A major concern: banning under‑16s implies age verification, which many see as de facto mass deanonymisation and a new surveillance vector.
  • Critics point to repeated ID leaks (e.g. Discord) and distrust promises that IDs will be deleted.
  • There is particular worry about tying social media logins to government tax/ID systems, enabling cross‑database tracking by tax authorities, police, and possibly private firms.
  • Some say “we already show ID for SIMs, alcohol, etc., so this is minor”; others counter that online ID checks are persistent, copyable, and leak‑prone in a way offline checks are not.

“Zero trust” age verification: theory vs practice

  • Several argue that privacy‑preserving systems are technically possible:
    • Government or third‑party identity providers revealing only “over 16: yes/no” plus a per‑service pseudonymous ID.
    • Systems that don’t log which site is being accessed.
  • Skeptics respond that:
    • Real deployments rarely match the theory; existing schemes aren’t truly zero‑trust.
    • Remote attestation, closed clients, and centralized auth services still let states or vendors see where you log in.
    • Political track records make “preemptive cynicism” rational.
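The "over 16: yes/no plus a per‑service pseudonymous ID" design mentioned above can be sketched in a few lines. This is a hypothetical illustration, not any deployed scheme: the provider key, token format, and function names are all assumptions. A pairwise pseudonym is derived per service so two sites cannot correlate the same user, and the token carries nothing but the boolean.

```python
import hmac, hashlib, json

# Sketch of a minimal age attestation: the identity provider verifies the
# user's age once, then issues per-service tokens carrying only the boolean
# plus a pairwise pseudonym, so services cannot link one user across sites.

IDP_SIGNING_KEY = b"idp-secret"  # assumed: the provider's token-signing key

def issue_token(user_secret: bytes, service_id: str, over_16: bool) -> dict:
    """Identity provider issues a minimal attestation bound to one service."""
    # Pairwise pseudonym: stable per (user, service), unlinkable across services.
    pseudonym = hmac.new(user_secret, service_id.encode(), hashlib.sha256).hexdigest()
    payload = {"service": service_id, "over_16": over_16, "pseudonym": pseudonym}
    body = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(IDP_SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def verify_token(token: dict, service_id: str) -> bool:
    """Service checks the signature and that the token is bound to it."""
    body = json.dumps(token["payload"], sort_keys=True).encode()
    expected = hmac.new(IDP_SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, token["sig"])
            and token["payload"]["service"] == service_id
            and token["payload"]["over_16"])

user = b"user-device-secret"
t = issue_token(user, "example.social", over_16=True)
print(verify_token(t, "example.social"))  # True: valid, bound, over 16
print(verify_token(t, "other.site"))      # False: token is service-bound
```

Note that this sketch uses a symmetric HMAC, meaning the verifier would share the issuer's key; real proposals use asymmetric signatures or blind signatures precisely to avoid that centralization, which is the gap the skeptics above point at.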

Impact on vulnerable youth and communities

  • Some worry bans will disproportionately hurt young people who rely on online spaces for community and for learning tech skills, especially autistic or socially isolated teens.
  • Others argue that today’s large platforms are now dominated by bots, troll farms, and predatory content, making them especially dangerous for such groups.

Democracy, control, and geopolitical angles

  • One view: restricting children’s exposure to algorithmic misinformation could help protect democracies from foreign influence and domestic radicalization.
  • Another view: “protecting the children” is a pretext to expand speech control, classify disfavored views as “hate,” and increase monitoring of citizens.
  • Some note growing distrust of U.S.-based platforms and intelligence access; others suspect governments mainly fear losing narrative control.

Definition and implementation challenges

  • Recurrent question: what exactly counts as “social media”?
    • Is a forum like HN, a phpBB board, GitHub, or Mastodon included?
    • Are all sites with comments in scope, including news sites?
  • One proposed line: ban systems with personalized, engagement‑optimizing algorithmic feeds; exclude chronological, user‑controlled feeds and classic forums.
  • Concern that compliance burdens will crush small communities and favor large platforms that can implement complex age‑verification.

Alternative or complementary regulatory ideas

  • Frequently suggested instead of (or alongside) age bans:
    • Ban “addictive dark patterns” and engagement‑maximizing algorithms for all ages.
    • Mandate chronological feeds or severely weaken recommendation engines.
    • Prohibit user‑targeted ads in favor of contextual ads.
    • Enforce Do Not Track and existing privacy laws more rigorously.
  • Some suggest anonymous, offline‑purchased age tokens (like buying cigarettes) as a less intrusive way to gate access.

Parental choice vs state intervention

  • A number of commenters think decisions about kids’ online use should remain primarily with parents, combined with active involvement and open communication.
  • Others counter that platforms are so optimized for addiction and manipulation that individual parenting cannot realistically counter systemic harms.