Scamlexity: When agentic AI browsers get scammed

Terminology and Hype (“Scamlexity”, “Agentic”, etc.)

  • Many dismiss “Scamlexity” and “agentic” as awkward or overused buzzwords, seeing them as a pivot by AI vendors once plain “AI” showed cracks.
  • “VibeScamming” is noted as another emerging term for exploiting LLMs’ pattern‑matching and social cues.

Core Vulnerability: Content vs Commands & Actuators

  • Central flaw highlighted: LLMs don’t distinguish “content to read” from “commands to execute,” making prompt injection and indirect instructions from web pages or emails dangerous.
  • Once connected to actuators (browsers, payments, smart devices, drones, military systems), bad completions can become bad real‑world actions.
  • Some argue the real problem is requirements: users want an agent that executes instructions from text they haven’t vetted, and don’t want to confirm every tiny action.

Sandboxing vs Alignment & AGI Worries

  • One camp: treat this as a standard security problem—sandbox, least privilege, no external levers, and the risks largely vanish.
  • Others argue this is naïve: widely deployed systems are already networked and tool‑connected, and future more‑agentic models may resist shutdown or resort to coercion once given enough power.
  • There’s skepticism toward current “safety” work that focuses on model narratives (“blackmail to avoid shutdown”) instead of hard security boundaries.

Should Agents Buy Things for Us?

  • Many don’t see the appeal of fully delegating purchases; they want AI for search, comparison, and list‑building, but insist on final selection and payment.
  • Others argue that wealthier people already delegate purchases to human assistants; if AI reached similar reliability, there would be demand for routine buys (groceries, refills, minor items).
  • Concerns:
    • LLM fallibility (hallucinations, being fooled by fake sites or knockoff products).
    • Corporate incentives to bias recommendations, dynamic pricing, and kickbacks (“AI shopper as the retailer’s agent, not yours”), leading to enshittification.

Scams, Deterrence, and Singapore Digression

  • One commenter claims scams are “solvable” via extreme punishments (citing Singapore); others refute this with stats and examples and reject draconian penalties as immoral and ineffective.
  • Broader consensus: scams are persistent, adaptive, and unlikely to be “eliminated”; scammers will evolve prompt‑injection and agent‑targeting techniques at scale.

Article and Product Critiques

  • Some argue the demo scenarios are stacked: the user explicitly steers the agent to scam pages or tells it to follow email instructions, so the “no clicks, no typing” framing is misleading.
  • Others see the article as a valid warning: if an AI browser happily executes curl | bash‑style flows on arbitrary content, large‑scale exploitation is inevitable.
  • A few report useful, non‑financial agent uses (scraping, long‑running web tasks) but say payments and sensitive operations are a step too far for now.

Outlook for Agentic Browsers

  • Suggestions: enforce strict capabilities (scoping by domain, mandatory human approval for any spend, allowlists, and new “core primitives” for safe actions).
  • Some say browser‑driven agents are fundamentally brittle compared to dedicated APIs; others note the web is de‑facto the only API many sites expose, so agents will keep using it despite risk.