Human study on AI spear phishing campaigns

Perceived severity and human vulnerability

  • Commenters report frequent, often crude “CEO” and invoice scams via email and SMS, and note that non‑technical and technical people alike fall for them.
  • Many argue that anyone can be socially engineered in the right context; “grandma” is no longer a special case.
  • Some see the everyday inbox as an active “warzone” where a small mistake can cause major harm.

AI’s role in spear phishing

  • Many view AI‑generated spear phishing as a terrifying but likely already‑active threat: it automates personalization and drastically lowers cost.
  • Others note this commoditization might, paradoxically, reduce long‑term risk by flooding channels with so much junk that they become unusable or force systemic changes.
  • There is speculation about an “agent vs agent” future: attackers’ AI vs defenders’ AI triaging email and information.
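The defender-side half of that "agent vs agent" picture can be sketched as a crude heuristic triage pass over incoming mail. The signals, weights, and domains below are purely illustrative assumptions, not anyone's actual product:

```python
import re

# Illustrative social-engineering signals; a real triage agent would use
# far richer features (sender reputation, conversation history, ML models).
URGENCY_WORDS = {"urgent", "immediately", "invoice", "wire transfer"}

def triage_score(sender: str, subject: str, body: str) -> int:
    """Return a crude risk score for an email; higher = more suspicious."""
    score = 0
    text = f"{subject} {body}".lower()
    # Urgency language is a classic spear-phishing tell.
    score += sum(2 for word in URGENCY_WORDS if word in text)
    # Links whose host lies outside the claimed sender's domain.
    sender_domain = sender.rsplit("@", 1)[-1].lower()
    for host in re.findall(r"https?://([^/\s\"']+)", body.lower()):
        if not host.endswith(sender_domain):
            score += 3
    return score

# A "CEO invoice" lure scores well above routine mail from the same org.
lure = triage_score("ceo@corp.example",
                    "Urgent: pay this invoice immediately",
                    "Send it today: https://corp-payments.example/inv")
routine = triage_score("news@corp.example",
                       "Weekly update",
                       "Read more: https://corp.example/blog")
```

The attacker-side agent, of course, can optimize against exactly these heuristics, which is what makes the arms-race framing apt.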

Study design, metrics, and ethics

  • The study used data scraped from public profiles to personalize messages and framed them as “targeted marketing emails,” with IRB approval.
  • Several argue that defining “success” as merely clicking a link is a weak metric: security‑savvy users may click in sandboxes or out of curiosity without being truly compromised.
  • Some worry about bias if participants knew they were in a study; others question whether any non‑consensual design would be ethical.

Email, OS security, and structural issues

  • Strong frustration that email security still relies on individuals not clicking links.
  • Critiques of deny‑list models (spam filters) vs trust/allow‑list models, though some argue strict whitelisting would kill the open internet.
  • Debates about desktop sandboxing, running as admin, and whether operating systems and email clients should better isolate risky actions.
  • Divided views on HTML email: some see it as unnecessary attack surface driven by marketing; others note plain text cannot prevent social engineering.
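The deny-list vs allow-list critique in the bullets above boils down to default-open versus default-closed admission. A toy sketch, with all domain names invented for illustration:

```python
# Invented example domains; real filters consult live reputation feeds.
DENY_LIST = {"spam.example", "known-bad.example"}    # known-bad senders
ALLOW_LIST = {"bank.example", "employer.example"}    # known-good senders

def deny_list_accepts(sender_domain: str) -> bool:
    # Default-open: anything not yet reported as bad gets delivered,
    # so a freshly registered phishing domain always wins round one.
    return sender_domain not in DENY_LIST

def allow_list_accepts(sender_domain: str) -> bool:
    # Default-closed: unknown senders are rejected outright -- safer,
    # but it breaks the "anyone can email anyone" open-internet model.
    return sender_domain in ALLOW_LIST
```

A brand-new lure domain sails through the first model and is stopped cold by the second, which is exactly the trade-off commenters debated.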

Legitimate services mimicking scams (“scamicry”)

  • Numerous anecdotes where banks, big tech, ecommerce sites, and even governments send messages indistinguishable from phishing: odd domains, third‑party verifiers, cryptic links, SMS notices about fines, and call‑us‑back flows.
  • This erodes trust and blurs the line between legitimate and malicious contact; users end up ignoring or auto‑filtering even important internal or security emails.

Ideas for defenses and future directions

  • Suggestions include stronger trust networks, email aliases, removing HTML, clearer “we will only contact you from X addresses” policies, and more realistic internal phishing tests (measuring more than clicks).
  • Some expect social media exposure and OSINT to keep raising risk and foresee growing reliance on AI and other tools just to assess what is real online.
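The earlier suggestion to measure more than clicks in internal phishing tests could be tallied along these lines; the event names and sample log are made up for illustration:

```python
from collections import Counter

# Hypothetical event log from a simulated internal phishing campaign.
events = [
    ("alice", "clicked"), ("alice", "reported"),
    ("bob", "clicked"), ("bob", "submitted_credentials"),
    ("carol", "reported"),
]

def summarize(events):
    """Separate curiosity clicks from actual compromise and healthy reporting."""
    tally = Counter(action for _, action in events)
    return {
        "clicked": tally["clicked"],                    # weak signal on its own
        "compromised": tally["submitted_credentials"],  # the outcome that matters
        "reported": tally["reported"],                  # the behavior to reward
    }

summary = summarize(events)
```

Under this framing, Alice's click followed by a report is arguably a success story, while only Bob's credential submission counts as a real compromise.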