Mike Lindell's lawyers used AI to write brief – judge finds nearly 30 mistakes

Legal ethics, sanctions, and court burden

  • Many commenters support the judge’s move to consider sanctions and disciplinary referrals, arguing courts must come down hard on AI-fabricated citations to deter others.
  • Several note that judges’ and clerks’ time is scarce; forcing the court to clean up a lawyer’s AI-generated mess is seen as disrespectful and wasteful.
  • Some point out lawyers already have a duty (e.g., under Rule 11) to verify their filings; failing to confirm that cases even exist is viewed as basic professional negligence.

AI hallucinations and citation checking

  • Repeated theme: LLMs routinely produce plausible but incorrect references (wrong URLs, non-existent cases, near-miss resources), so relying on them without verification is reckless.
  • Commenters contrast search engines (which at least point to existing documents) with LLMs that can fabricate entirely fictional but convincing sources.
  • One lawyer says they now use “reasoning” models plus cross-checking across multiple AI tools, then still manually verify every citation, claiming large time savings; a minimal sketch of that final verification pass follows this list. Others call the approach overcomplicated and inefficient compared with traditional legal research tools.
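
To make the “still manually verify” step concrete, the sketch below checks every citation found in a draft brief against an independent case-law index and flags anything it cannot find. It is an illustration of the idea, not any commenter’s actual workflow: the endpoint URL, the `cite` query parameter, and the response shape are placeholders for whatever real service (CourtListener, Westlaw, etc.) a firm would actually use, and the citation regex is deliberately rough.

```python
import re
import sys
import requests

# Hypothetical case-law search endpoint; substitute a real service's URL,
# query parameters, and authentication before using anything like this.
SEARCH_URL = "https://example-caselaw-search.test/api/search"

# Rough reporter-citation pattern, e.g. "410 U.S. 113" or "598 F.3d 1115".
# It will miss many real citation formats; it only exists to drive the demo.
CITATION_RE = re.compile(r"\b\d{1,4}\s+[A-Z][\w.]*\s+\d{1,5}\b")

def extract_citations(brief_text: str) -> list[str]:
    """Pull candidate reporter citations out of a draft brief."""
    return sorted(set(CITATION_RE.findall(brief_text)))

def citation_exists(citation: str) -> bool:
    """Ask an independent case-law index whether the citation resolves to at
    least one real opinion. Any error or empty result counts as unverified."""
    try:
        resp = requests.get(SEARCH_URL, params={"cite": citation}, timeout=10)
        resp.raise_for_status()
        # Assumed response shape: {"results": [...]}; adjust for the real API.
        return len(resp.json().get("results", [])) > 0
    except requests.RequestException:
        return False

def main(path: str) -> None:
    with open(path, encoding="utf-8") as f:
        text = f.read()
    for cite in extract_citations(text):
        status = "FOUND" if citation_exists(cite) else "UNVERIFIED - check by hand"
        print(f"{cite}: {status}")

if __name__ == "__main__":
    main(sys.argv[1])
```

The design point is the same one the commenters make: the LLM’s output is treated as untrusted draft text, and existence of each cited case is confirmed against a source that cannot hallucinate, with a human reading anything the check cannot resolve.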

Use of AI answers in everyday discussion

  • Many dislike people pasting unvetted ChatGPT answers into forums, especially on nontrivial questions, seeing it as lazy and unhelpful.
  • Some defend it as akin to “let me Google that for you” or as a signal that information is easily obtainable, but others emphasize that LLMs are often confidently wrong, making the analogy dangerous.

Competence, politics, and lawyer selection

  • There’s debate about why certain high-profile right-wing figures end up with poor counsel:
    • Explanations include clients’ refusal to follow sound advice, nonpayment, extreme reputational risk, and ideological litmus tests that filter out competent, ethical lawyers.
    • Others argue incompetence exists “on both sides,” but several push back that the specific pattern here is distinctive.

Understanding hallucinations and LLM limits

  • Long subthread debates whether hallucinations stem from “missing knowledge” versus being an inherent artifact of probabilistic token prediction with no internal notion of truth (a toy illustration of the latter view follows this list).
  • Consensus in that subthread: regardless of mechanism, all LLM output requires independent fact-checking; they are powerful drafting and brainstorming tools, not authoritative sources.
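
As a toy illustration of the “probabilistic prediction with no notion of truth” argument, the sketch below samples citation-shaped strings from a tiny hand-written table of fragment-to-fragment probabilities. It is a Markov-chain caricature, not how any production LLM is implemented, and every party name and reporter number in it is made up; the point is only that a process which models what text tends to follow what text can emit fluent, realistic-looking citations without ever consulting anything that knows whether the case exists.

```python
import random

# Toy "model": probabilities over the next fragment, capturing only the
# surface pattern of citations. Nothing here consults a database of real
# cases, so fabricated-but-plausible output is the expected behavior.
NEXT_FRAGMENT = {
    "<start>": [("Smith v.", 0.4), ("United States v.", 0.35), ("In re", 0.25)],
    "Smith v.": [("Jones,", 0.6), ("Colorado,", 0.4)],
    "United States v.": [("Harrison,", 0.5), ("Alvarez,", 0.5)],
    "In re": [("Doe,", 1.0)],
    "Jones,": [("598 F.3d 1115", 0.5), ("412 U.S. 218", 0.5)],
    "Colorado,": [("598 F.3d 1115", 1.0)],
    "Harrison,": [("412 U.S. 218", 1.0)],
    "Alvarez,": [("598 F.3d 1115", 1.0)],
    "Doe,": [("412 U.S. 218", 1.0)],
}

def sample_citation(rng: random.Random) -> str:
    """Sample fragments by probability until the chain ends. Every output
    is citation-shaped; none is ever checked against reality."""
    token, out = "<start>", []
    while token in NEXT_FRAGMENT:
        fragments, weights = zip(*NEXT_FRAGMENT[token])
        token = rng.choices(fragments, weights=weights, k=1)[0]
        out.append(token)
    return " ".join(out)

if __name__ == "__main__":
    rng = random.Random(0)
    for _ in range(3):
        print(sample_citation(rng))  # e.g. "United States v. Alvarez, 598 F.3d 1115"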

Broader implications

  • Some worry sensational stories of AI misuse will convince non-technical people that AI can never be used in high-stakes settings.
  • Others respond that these failures are exactly why society must anticipate widespread, uncritical AI use—and build norms and safeguards accordingly.