We’re receiving about 3,000 reports/hour

Moderation Load and CSAM/Abuse Concerns

  • Users note an apparent surge of reports (including CSAM) as Bluesky’s user base explodes, and see this as the classic “UGC at scale” problem.
  • People discuss nuances of CSAM terminology and law: the term is meant to emphasize abuse and exploitation, but real‑world laws sometimes criminalize even “innocent” or simulated material.
  • Some stress that even teen‑to‑teen sharing can be abusive (e.g., bullying with leaked nudes), not just adult involvement.
  • Others mention how CSAM can be weaponized by hostile actors (e.g., planting it on free‑speech sites to trigger legal takedowns).
  • There are references to automated detection tools (PhotoDNA, Project Arachnid) and bad experiences on other platforms (e.g., Threads flooded with CSA‑adjacent content).
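
As context for the detection tools mentioned above: systems like PhotoDNA and Project Arachnid compare fingerprints of uploaded images against databases of known material, tolerating small changes from re-encoding or cropping. The sketch below shows only the general shape under stated assumptions; the real hash functions and databases are proprietary and access‑restricted, so it starts from a precomputed 64‑bit perceptual fingerprint rather than computing one.

```
// General shape of known-image matching via perceptual hashes.
// Assumption: images have already been reduced to 64-bit fingerprints
// (bigint); real systems use proprietary hashes and vetted databases.

function hammingDistance(a: bigint, b: bigint): number {
  let x = a ^ b;
  let bits = 0;
  while (x > 0n) {
    bits += Number(x & 1n);
    x >>= 1n;
  }
  return bits;
}

// A match tolerates a few flipped bits so re-encodes and minor edits
// still hit; exact-hash matching would be trivially evaded.
function matchesKnownHash(
  imageHash: bigint,
  knownHashes: bigint[],
  maxDistance = 5
): boolean {
  return knownHashes.some((k) => hammingDistance(imageHash, k) <= maxDistance);
}
```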

Bots, Spam, and Identity Friction

  • Many see bot/spam resistance as the decisive challenge; easy signup is believed to attract the same bot/propaganda operations seen on Twitter.
  • Suggested mitigations:
    • Limit reach of new accounts; invite/vouch systems; groupchat‑style smaller networks.
    • Paid signups (one‑time or recurring, fiat or crypto) to add friction; prior examples (Metafilter, WhatsApp, SomethingAwful) cited.
    • Proof‑of‑work / hashcash (see the sketch after this list), hardware fingerprints (TPM), KYC/ID checks.
    • Reputation/karma systems that weight reports and throttle abusers.
  • Counterpoints:
    • Determined spammers and “bad actors” already pay for Twitter verification; $1 fees or KYC won’t stop serious operations, while still harming privacy and anonymous access.
    • Hardware IDs and KYC are seen as privacy‑toxic, linkable across services, and exclusionary to people without IDs or bank access.
    • Some believe the problem is fundamentally unsolved at scale.
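
On the proof‑of‑work suggestion above: the hashcash idea is that signup requires finding a nonce whose hash meets a difficulty target, so each account costs CPU time while verification stays cheap. A minimal sketch, assuming SHA‑256 and a hex‑prefix difficulty target:

```
import { createHash } from "node:crypto";

// Hashcash-style proof of work: find a nonce such that
// sha256(challenge + ":" + nonce) starts with `difficulty` hex zeros.
function solveChallenge(challenge: string, difficulty: number): number {
  const target = "0".repeat(difficulty);
  for (let nonce = 0; ; nonce++) {
    const digest = createHash("sha256")
      .update(`${challenge}:${nonce}`)
      .digest("hex");
    if (digest.startsWith(target)) return nonce;
  }
}

// The server verifies with a single hash, so issuing and checking
// challenges stays cheap even when solving them is not.
function verifyChallenge(
  challenge: string,
  nonce: number,
  difficulty: number
): boolean {
  return createHash("sha256")
    .update(`${challenge}:${nonce}`)
    .digest("hex")
    .startsWith("0".repeat(difficulty));
}
```

Each extra hex digit of difficulty multiplies the expected client work by 16, which is exactly the knob the counterpoints attack: a cost high enough to deter casual bots is still negligible to funded operations.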

Blocklists, Labelers, and Echo Chambers

  • Bluesky’s culture of “block and move on,” shared blocklists, and community labelers is praised as effective and user‑controlled.
  • Others warn that centralized or popular blocklists can be abused for personal crusades, leading to opaque blacklisting and “samethink” communities with little dissent.
  • Suggestions include:
    • Conditional or threshold‑based blocklists, e.g., auto‑block only if multiple subscribed lists include the same account (see the sketch after this list).
    • Making block reasons explicit and visible, to aid user judgment.
  • Skeptics argue this simply shifts power from corporate moderators to list maintainers, potentially re‑creating the same issues.
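
A minimal sketch of the threshold idea above, assuming accounts are identified by DID and blocklists are simple membership sets (the shapes are illustrative, not Bluesky’s actual data model):

```
// Threshold rule over subscribed blocklists: auto-block only when an
// account appears on at least `threshold` of the lists a user follows,
// so a single maintainer's personal crusade cannot block on its own.

type Blocklist = { name: string; members: Set<string> };

function shouldAutoBlock(
  accountDid: string, // accounts identified by DID in the AT Protocol
  subscribed: Blocklist[],
  threshold = 2
): boolean {
  const hits = subscribed.filter((l) => l.members.has(accountDid)).length;
  return hits >= threshold;
}
```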

Centralization, Federation, and Web‑of‑Trust Ideas

  • Some argue Bluesky’s central moderation is inherently brittle and expensive compared to federated systems like Mastodon or more P2P models (e.g., Scuttlebutt, friend‑to‑friend networks).
  • AT Protocol supporters point to a planned ecosystem of third‑party labelers, feeds, and community classifiers that lets users compose their own moderation and ranking (see the sketch after this list).
  • Alternative visions include:
    • Badge‑ or web‑of‑trust–based visibility, where groups grant membership and trust and feeds are filtered via those relationships.
    • Smaller, interest‑based communities and group chats as the “best” social network model.
  • Others counter that “unmoderated” spaces quickly become unusable due to spam and harassment; some level of moderation and curation is seen as unavoidable.
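
A rough sketch of what “composing your own moderation” could look like, per the AT Protocol bullet above: several independent labelers each emit labels for a post, and the user’s own preferences decide the resulting action, with the strictest verdict winning. The types and label‑to‑action mapping are illustrative assumptions, not the actual atproto API.

```
// Composing moderation from independent labelers: each labeler emits
// string labels for a post, user preferences map labels to an action,
// and the strictest action wins.

type Action = "show" | "warn" | "hide";
type Labeler = (postUri: string) => string[];

const severity: Record<Action, number> = { show: 0, warn: 1, hide: 2 };

function moderate(
  postUri: string,
  labelers: Labeler[],
  prefs: Record<string, Action>
): Action {
  let result: Action = "show";
  for (const labeler of labelers) {
    for (const label of labeler(postUri)) {
      const action = prefs[label] ?? "show"; // unknown labels are ignored
      if (severity[action] > severity[result]) result = action;
    }
  }
  return result;
}
```

The design choice this illustrates is the thread’s core argument for the model: the platform ships the plumbing, while which labelers to trust, and what each label means, stays a per‑user decision.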

Volunteer vs Automated Moderation

  • Several commenters note that the wider internet has long run on unpaid volunteer moderators, but also that “moderation does not scale gracefully.”
  • There is skepticism that community labelers can match the reliability or resourcing of big‑tech moderation teams.
  • Some think LLMs are close to being able to enforce a platform’s “tone,” while others doubt automation can fully replace human review, especially for edge cases and serious content like snuff or CSAM.
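
The middle ground most commenters land on is a hybrid pipeline: automation decides high‑confidence, low‑stakes cases, and anything uncertain or in a high‑severity category escalates to human review. A minimal sketch, where the classifier is passed in as a function because the point is the pipeline, not any particular model or LLM API:

```
// Hybrid triage: automated verdicts for clear-cut cases, human review
// for edge cases and serious categories. `classify` is a hypothetical
// stand-in for whatever ML model or LLM call a platform plugs in.

type Verdict = { label: "ok" | "spam" | "abuse" | "csam"; confidence: number };

function triage(
  text: string,
  classify: (text: string) => Verdict
): "allow" | "remove" | "human-review" {
  const v = classify(text);
  // Serious categories always get human eyes, whatever the model says.
  if (v.label === "csam" || v.label === "abuse") return "human-review";
  if (v.confidence < 0.9) return "human-review"; // edge cases escalate
  return v.label === "ok" ? "allow" : "remove";
}
```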

Business Model and Sustainability

  • People question how Bluesky will fund growing trust‑and‑safety workloads while remaining ad‑free.
  • A linked company statement mentions a planned premium subscription (e.g., higher‑quality video, profile customization) but promises no algorithmic boost for paying users.
  • Some are cautiously optimistic this could work; others suspect that, as with Twitter, political or VC motives may subsidize the platform for influence even if it is never profitable.