TikTok is harming children at an industrial scale
Scope: TikTok vs “all social media”
- Many argue TikTok is not uniquely harmful: Instagram Reels, YouTube Shorts, Snapchat, Facebook, X, Reddit, Discord, etc. are seen as running the same engagement-maximizing playbook.
- Others dismiss this as whataboutism, noting that the article's author has long criticized social media broadly and that targeting TikTok doesn't imply US platforms are benign.
- Some think singling out TikTok is politically driven; others say it makes sense to start with the platform that dominates youth attention.
Mechanisms of harm and what’s new
- Core problem identified: infinite scroll + short-form video + algorithmic feeds tuned for engagement, not wellbeing.
- Reported effects: diminished attention span, inability to tolerate boredom, difficulty with long-form reading, mental health issues (anxiety, depression, body dysmorphia), and compulsive use in both kids and adults.
- Several see this as qualitatively different from TV, video games, or music: personalized feeds, ubiquity on phones, and dense dark patterns make it more like gambling or drugs.
- Others say every generation has a “media panic” (TV, rock, D&D, games) and warn against overreacting, though some concede that this time the data on teen distress post‑2010 look more worrying.
Parental responsibility vs regulation
- One camp: parents bear primary responsibility; giving a child unsupervised access to TikTok/YouTube is likened to neglect, akin to leaving them alone with alcohol.
- Counterpoint: individual parenting can’t solve peer effects and industrial-scale manipulation; “you can’t raise kids in a silo.”
- Policy ideas floated: higher minimum ages for social media accounts, strict age verification, limits or taxes on algorithmic feeds, bans or throttling of infinite scroll, stronger privacy laws, and competition policy to weaken network-effect monopolies.
- Others fear heavy-handed regulation, First Amendment conflicts, or “government as parent.”
Geopolitics and motives
- Some see anti‑TikTok rhetoric as driven by national security: potential for Chinese state influence, data access, and covert coordination of crowds.
- Others think the main driver is TikTok's refusal (relative to US platforms) to align with certain foreign-policy narratives (e.g., Palestine/Israel), or see the campaign against it more broadly as a trade-war tool.
- Several emphasize that US‑owned platforms already cause comparable social and political damage at home.
Coping tactics and alternative norms
- Many parents in the thread describe strict regimes: no TikTok, banning YouTube entirely or just its Shorts feed, whitelisting content via YouTube Kids, downloading videos and serving them offline (see the sketch after this list), or using curated games and long-form shows instead of algorithmic feeds.
- Practical tricks: disabling YouTube watch history to kill Shorts, using alternative frontends (Invidious, Jellyfin + yt‑dl), screen‑time rotation (e.g., only every second day), or limiting kids to specific non‑“dark pattern” games.
- Several note that adults also struggle badly, deleting apps, installing blockers, or relying on Screen Time codes managed by partners.
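A minimal sketch of the "download and serve offline" tactic mentioned above, assuming the yt-dlp Python package (the maintained youtube-dl fork) and Python's built-in HTTP server; the whitelist URLs, folder name, and port are placeholders, not anything prescribed in the thread:

```python
"""Sketch: fetch a hand-picked list of videos with yt-dlp, then serve the
folder over the local network so kids watch from a fixed library instead of
an algorithmic feed. URLs and paths are placeholders."""

import functools
import os
from http.server import HTTPServer, SimpleHTTPRequestHandler

import yt_dlp  # pip install yt-dlp

# Hand-curated whitelist -- replace with your own links.
WHITELIST = [
    "https://www.youtube.com/watch?v=EXAMPLE1",
    "https://www.youtube.com/watch?v=EXAMPLE2",
]

LIBRARY_DIR = "kids_library"


def download_whitelist() -> None:
    """Download each whitelisted video into the local library folder."""
    os.makedirs(LIBRARY_DIR, exist_ok=True)
    options = {
        "outtmpl": f"{LIBRARY_DIR}/%(title)s.%(ext)s",  # filename template
        "format": "best[ext=mp4]/best",  # prefer a single mp4 file
        "noplaylist": True,              # ignore playlist links
    }
    with yt_dlp.YoutubeDL(options) as ydl:
        ydl.download(WHITELIST)


def serve_library(port: int = 8000) -> None:
    """Serve the downloaded files over plain HTTP on the local network."""
    handler = functools.partial(SimpleHTTPRequestHandler, directory=LIBRARY_DIR)
    print(f"Serving {LIBRARY_DIR}/ at http://localhost:{port}")
    HTTPServer(("", port), handler).serve_forever()


if __name__ == "__main__":
    download_whitelist()
    serve_library()
```

The same library folder could instead be pointed at a Jellyfin or similar media server, as some commenters do; the point is that the catalogue is fixed by a parent rather than generated by a recommendation feed.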
Debate over evidence and “moral panic”
- Some commenters think the social-science case against phones/social media is strong and that we’d be irresponsible to ignore it.
- Others criticize the underlying research as methodologically weak or overconfident, and see a pattern of “boy who cried wolf” moral panics.
- A recurring meta-critique: focusing solely on TikTok misses the structural issue—the ad-funded attention economy optimized to exploit human psychology at scale.