Providing ChatGPT to the U.S. federal workforce

Pricing, Lock-In, and “Trojan Horse” Strategy

  • Many see the $1 price for the entire federal government as a classic bait-and-switch: get deeply embedded for a year, then raise prices once workflows depend on it.
  • Some predict OpenAI becomes “too big to fail,” similar to Microsoft/Boeing/Intel: once the state relies on it, policy and bailouts will protect it.
  • Others counter that AI markets are competitive, margins will be squeezed by open models and alternative hardware, and there’s no strong long-term moat.

Hallucinations, Reliability, and Government Power

  • A central worry: LLM hallucinations plus the authority of the U.S. government could normalize wrong answers as de facto reality.
  • Fears include opaque “computer says no” decisions, unintelligible bureaucratic outputs, and citizens forced to comply with AI-generated errors.
  • Some are opposed in principle (“please don’t”); others say broad rollout is acceptable only with serious training and human-in-the-loop safeguards.

Security, Confidentiality, and Data Use

  • Strong concern that an “official” AI tool will encourage uploading sensitive or even classified information, creating a massive target for hacking, insider abuse, or data poisoning.
  • Skepticism toward claims that Enterprise data is excluded from training; some assume anonymized or indirect use of government data is inevitable.
  • One commenter outlines U.S. impact-level / FedRAMP practices and segregated classified networks, arguing OpenAI shouldn’t see classified data—but acknowledges non-classified PII could still leak.

Usefulness for Federal Work vs. Skepticism

  • Supporters cite large text and data workloads: summarizing regulations, cross-referencing spreadsheets with maps, RMF (Risk Management Framework) paperwork, legal/technical search, and general “thought organization.”
  • Critics emphasize low AI literacy and the cost of verifying outputs; they argue real productivity gains often come from skipping verification, which is exactly what you shouldn’t do.
  • Some doubt any tool can raise productivity without incentives; others say with 2.2M workers, there are clearly many legitimate use cases.

Competition, Procurement, and Anticompetitive Concerns

  • Questions about how this was approved: Was there a tender? Is it exclusive? Who bears liability for errors?
  • $1 pricing is viewed by some as below-cost dumping and anticompetitive, comparable to other big-tech “grow at all costs” tactics.
  • Calls for FOIA requests and lawsuits to uncover contract details and protections against future price hikes.

Broader AI Economics, Ads, and Influence

  • Debate over future AI costs: some expect steep price increases and ad-supported models, including covert ad-like language woven into answers; others think open models and hardware competition will push prices down.
  • Commenters share examples showing current models can already weave subtle, on-theme persuasion into answers, raising fears about future political or commercial manipulation.
  • Some worry about “alignment” being used to steer government outcomes (e.g., benefits decisions, foreign policy narratives).