California bill would require bots to disclose that they are bots

Scope of the bill and what “bot” covers

  • Commenters note the bill broadens the existing 2019 “BOT Act,” which applied only to bots attempting to influence commerce or voting; this amendment would cover any online communication with Californians unless the bot discloses itself.
  • Some argue this should clearly include AI-generated email, sales outreach, LinkedIn messages, and social media DMs.
  • Others stress nuance: it should apply only to interacting bots, not passive crawlers or non-interactive automation.
  • There is confusion about the 10M-users clause: one reading is that it merely defines which “online platforms” must provide bot-identification mechanisms, and does not exempt smaller bot operators from the disclosure requirement.

Political texting and semi-automated systems

  • Several describe current political SMS systems in which humans manually click “send” on prepopulated messages, exploiting a loophole in bans on fully automated texts.
  • Recipients often assume these are bots and express hostility; volunteers confirm that real humans usually handle responses.
  • Some see mandating that a human “press send” as a reasonable safeguard; others call it a loophole that makes the regulation feel tokenistic.

Startup vs. big platform burden

  • Debate over the 10M threshold:
    • One side thinks smaller outfits should also comply; otherwise large actors can game the system by splitting into multiple sub-10M entities.
    • Another side argues small businesses can’t track a growing patchwork of AI rules; requiring compliance too early would crush small sites, echoing GDPR’s burden on tiny projects.
    • Some see large-firm-friendly regulation as intentional gatekeeping; others call the user-threshold “horse trading” needed to get any bill passed.

Enforcement and efficacy concerns

  • Many doubt enforceability: bad actors and foreign entities will simply ignore the law, leaving only “good” bots labeled and giving users false confidence in unlabeled accounts.
  • Questions arise: How will authorities detect undisclosed bots? How to handle human-in-the-loop operations or human “click farms” fronting for AI?
  • Some suggest the real problem is behavior (resource abuse, scams), not whether the actor is human or artificial.

User preferences, ethics, and legal worries

  • Some users actually prefer clearly labeled bots for simple, fast tasks, analogizing to self-service kiosks.
  • Others want reciprocal honesty: bots must admit they’re bots, and humans should have to admit they’re humans; concerns include privacy and trust if humans masquerade as bots.
  • Comparisons are made to psychics, Santa, and entertainment contexts to question where mandated truthfulness should stop.
  • A few raise First Amendment and compelled-speech issues, suggesting broader “must-disclose” rules might be unconstitutional outside narrow commercial contexts.

Comparisons to other regulations and existing bot law

  • Prop 65 is invoked as a cautionary tale: it began as a pollution-control measure but turned into ubiquitous, largely ignored warnings and a lawsuit industry. Some fear bot disclosures could devolve similarly into meaningless boilerplate.
  • Others argue Prop 65 did meaningfully change disposal practices and that such rules can still have real environmental effects despite over-warning.
  • Commenters link the original BOT Act text and note this bill simply amends that framework, expanding the definition of bots (to include generative-AI content) and removing the need to prove “intent to mislead.”

Miscellaneous and meta points

  • Some see the Veeto site itself (with its chatbot explaining the bill) as an ironic direct target of the proposed law.
  • A few suspect the thread is undisclosed promotion for that site and prefer using official legislature links.
  • Several express broader appetite for stronger disclosure laws: not just for bots, but also for paid human shills and meme-origin transparency.