DoNotPay has to pay $193K for falsely touting untested AI lawyer, FTC says
Perception of the Fine and Enforcement
- Many see the $193K FTC fine as a “slap on the wrist,” especially relative to subscriber and revenue estimates.
- Others argue fines often serve as a first formal warning; repeat violations could trigger much harsher penalties.
- Some want penalties tied to ill‑gotten gains or total profits, plus personal liability (even jail) for executives or shareholders.
Nature of DoNotPay’s Product and Conduct
- Early versions were described as narrow “mad-libs” style form generators that helped with simple tasks (e.g., parking tickets, landlord letters), and some users report genuine value.
- Over time the product made broader claims, marketing itself as a “robot lawyer” powered by ChatGPT, without attorney oversight or rigorous testing.
- Commenters highlight deceptive marketing, dark patterns (difficult cancellation), and exaggerated AI claims as the core issues, not automation per se.
AI, Law, and Regulation
- Strong consensus that unverified LLM output is unacceptable for high‑stakes legal work; hallucinated case law is seen as worse than even a bad human attorney.
- FTC action is framed as about false advertising, not banning AI in legal services; commissioners explicitly said AI in law is acceptable in principle if honestly represented.
- Some argue that because “lawyer” is a regulated title carrying professional duties and liability, an automated tool cannot be marketed as a lawyer without meeting those standards.
Access to Justice vs Consumer Protection
- Many sympathize with the idea of cheap tools for ordinary people to fight corporations, predatory landlords, and abusive parking enforcement.
- Commenters see tension between “fighting fire with fire” against systemic legal abuse and becoming yet another exploitative, misleading business.
- Some note that large firms already automate legal actions against individuals; the system tolerates that more than automation that empowers the public.
Views on AI Quality and Hype
- Numerous comments are deeply skeptical of current AI quality: errors, shallow reasoning, generic prose, and unreliable code.
- Others see real productivity gains in low‑stakes, templated tasks and expect lawyers will increasingly rely on LLMs for routine drafting, though not full replacement.
Legal Complexity and Gatekeeping
- Several describe law as “magic incantations”: exact wording matters, which both justifies professional expertise and enables exclusion and abuse.
- Debate over whether legal complexity is mainly due to genuine edge cases or political and economic interests protecting the status quo.