California issues fine over lawyer's ChatGPT fabrications
Human accountability and “accountability sinks”
- Several comments argue there are roles (pilots, lawyers, doctors) where society demands a clearly responsible human, even if much work is automated.
- Others counter that this is “linear” thinking: AI will be used heavily even in those roles, with fewer humans each assuming more of the liability.
- The idea of an “accountability sink” is raised: complex systems (including AI) make it harder to pin responsibility on any one person, eroding recourse and quality.
AI already embedded in law and lawmaking
- Lawyers, judges, and even legislators are said to be using AI for drafting; some MPs reportedly deliver AI‑written speeches.
- Multiple comments note that much legislation is already written or copy‑pasted from lobbyist “model bills,” so AI authorship may be a marginal shift rather than a revolution.
Fine size, deterrence, and sanctions
- Many see the $10k fine as a slap on the wrist, both relative to lawyers’ billing rates (at several hundred dollars an hour, $10k is on the order of a few dozen billable hours) and relative to other California fines for minor offenses (e.g., watering lawns, fireworks, littering).
- Others stress that for an individual attorney this is unusually high and “historic” mainly as a precedent for AI misuse, not for the raw dollar amount.
- Opinions on appropriate punishment range from modest fines and a “warning shot” to suspension or disbarment; a minority argue for jail time, which others call disproportionate.
Professional duty vs AI use
- Strong consensus: the core problem is not using AI but submitting unverified output. Lawyers are always responsible for what they sign, just as if a junior associate or paralegal had drafted it.
- Many emphasize that the attorney’s unapologetic framing (“there will be some victims”) undermines trust and suggests the sanction was too light.
Tools, hallucinations, and verification
- Commenters note that legal research systems already let lawyers quickly retrieve and validate citations, and that checking cites has long been standard practice (e.g., “Shepardizing”).
- Newer AI‑augmented tools with grounding and linked citations are mentioned (see the sketch after this list); some predict fake‑citation scandals will fade as these become common.
- Others argue LLMs are fundamentally poor tools for authoritative search/citation, and that their improving but still nonzero hallucination rate may actually increase complacency.
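To make the “grounding” idea concrete, here is a minimal sketch of the verification step such tools add: the draft may only cite cases that were actually retrieved from a trusted database, and anything else is flagged for human review before filing. Everything here (`TRUSTED_INDEX`, `check_citations`, and the citations themselves) is hypothetical, not any real product’s API.

```python
# Minimal sketch of citation grounding, for illustration only.
# TRUSTED_INDEX, check_citations, and all citations below are hypothetical;
# real tools query commercial case-law databases instead.
import re

# Hypothetical trusted index: normalized citation -> full opinion text
# fetched from an authoritative source.
TRUSTED_INDEX: dict[str, str] = {
    "smith v. jones, 123 f.3d 456 (9th cir. 1997)": "...opinion text...",
}

def normalize(cite: str) -> str:
    """Collapse whitespace and case so equivalent cites compare equal."""
    return re.sub(r"\s+", " ", cite).strip().lower()

def check_citations(draft_cites: list[str]) -> tuple[list[str], list[str]]:
    """Split a draft's citations into grounded ones (present in the
    trusted index) and unverified ones (must be checked by hand)."""
    grounded, unverified = [], []
    for cite in draft_cites:
        (grounded if normalize(cite) in TRUSTED_INDEX else unverified).append(cite)
    return grounded, unverified

grounded, unverified = check_citations([
    "Smith v. Jones, 123 F.3d 456 (9th Cir. 1997)",
    "Doe v. Roe, 999 F.4th 1 (13th Cir. 2030)",  # made-up cite: no 13th Circuit exists
])
if unverified:
    print("Do not file as-is; verify by hand:", unverified)
```

The point is not the lookup itself (lawyers have done this for decades) but that exact‑match grounding turns “the model sounds confident” into a binary, auditable check.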
Access to justice and defeatism vs optimism
- Some see AI legal tools as a potential boon for people who otherwise couldn’t afford a lawyer; others counter that unreliable “cheap law” may be worse than no representation.
- A recurring theme: the legal system is not an API to be spammed with “AI slop”; credentials and sanctions exist precisely to prevent that, and this case is viewed as straightforward malpractice rather than a technological inevitability.