DOGE Developed Error-Prone AI Tool to "Munch" Veterans Affairs Contracts

Misuse of AI and VA Contract “Munching” Tool

  • Many see the AI contract‑scanning tool as fundamentally unfit for deciding which VA contracts to cut, especially medical ones affecting veterans’ care.
  • Strong criticism that its author openly admits he wouldn’t trust his own code, yet it was allowed to influence real decisions.
  • Several note the prompts assume LLMs have deep institutional knowledge (e.g., what can be insourced), which they clearly do not.
  • Some defend the concept of AI as a triage aid for human reviewers (see the sketch after this list), but others argue that in practice it became a de‑facto decision tool without rigorous testing or metrics.
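The distinction commenters draw between a "triage aid" and a "de‑facto decision tool" comes down to what the pipeline outputs: a queue for human analysts, or a cut list. Below is a minimal sketch of the triage‑only version. Everything in it is hypothetical and illustrative (the `Contract` class, the `llm_flag` stub, the keyword heuristic); it is not the DOGE tool's actual code or prompts.

```python
"""Sketch: AI as a triage aid, not a decider. All names are hypothetical."""
from dataclasses import dataclass


@dataclass
class Contract:
    contract_id: str
    description: str


def llm_flag(contract: Contract) -> tuple[bool, str]:
    """Placeholder for an LLM call returning (flagged, rationale).

    A real prompt would need institutional context (e.g., whether the work
    could be insourced), which is exactly what commenters say an LLM lacks.
    This stub keyword-matches so the example runs offline.
    """
    flagged = "consulting" in contract.description.lower()
    return flagged, "keyword heuristic standing in for model output"


def triage(contracts: list[Contract]) -> list[dict]:
    """Produce a review queue for human analysts, never a cancellation list."""
    queue = []
    for c in contracts:
        flagged, rationale = llm_flag(c)
        if flagged:
            queue.append({
                "contract_id": c.contract_id,
                "rationale": rationale,
                "decision": "PENDING_HUMAN_REVIEW",  # the model never decides
            })
    return queue


if __name__ == "__main__":
    sample = [
        Contract("VA-001", "On-site nursing services for a VA medical center"),
        Contract("VA-002", "Management consulting support for HQ reporting"),
    ]
    for item in triage(sample):
        print(item)
```

The commenters' complaint is that, under deadline pressure, the "PENDING_HUMAN_REVIEW" step effectively collapses into rubber-stamping, which is how a triage aid becomes a decision tool.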

Ethics and Professional Responsibility

  • Many argue participation in DOGE, especially in building tools that affect benefits and healthcare, should be a serious black mark on a résumé.
  • Suggested interview questions: why they joined, why they stayed after seeing the risks, and whether they tried to understand how outputs were used.
  • Counterpoint: the job market is tough and many workers are “cogs” with limited choice, though others challenge this, pointing to reports that some DOGE roles were unpaid or voluntary.

DOGE Staffing, Culture, and Intent

  • Widespread view that DOGE was staffed with very young, inexperienced, ideologically aligned tech people who “axe first, ask questions later.”
  • Examples cited include the recruitment of college dropouts and self‑congratulatory blog posts about “saving government” after only a few weeks on the job.
  • Some see this as deliberate: people without domain knowledge or empathy are more willing to make drastic cuts.
  • Others suspect the real goals were political/ideological purges (e.g., using AI to flag DEI/WHO‑related content) and broader data access, not efficiency.

Government vs Startup Mentality

  • Strong pushback against applying “move fast and break things” to veterans’ healthcare and other critical services; this is “not Tinder.”
  • Commenters note that reviewing 90k contracts is entirely feasible for teams of lawyers and analysts on a realistic timeline; the 30‑day deadline is seen as an artificial justification for reckless shortcuts.
  • Long subthread compares DOGE to Musk’s Twitter layoffs, debating whether aggressive cost‑cutting is sound business practice or destructive short‑termism.

Broader AI-in-Government Concerns

  • Some cautiously support AI for preliminary filtering if humans remain firmly in the loop and accuracy is continuously audited (a minimal audit sketch follows this list).
  • Others fear a predictable pattern: unproven AI adopted for scale and cost reasons, then gradually allowed to replace human judgment, with harms difficult to unwind.
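The "continuously audited" condition in the first bullet has a concrete shape: periodically sample the model's flags, collect independent human labels, and track how often the two agree, so a drop in accuracy is noticed before the tool quietly replaces human judgment. The sketch below is an assumption-laden illustration of that idea; the function names, sample sizes, and data are all hypothetical.

```python
"""Sketch: auditing an AI filter's flags against human review. Hypothetical throughout."""
import random


def audit_sample(model_flags: dict[str, bool],
                 human_labels: dict[str, bool],
                 sample_size: int = 50,
                 seed: int = 0) -> dict[str, float]:
    """Compare model flags against human labels on a random sample of contracts."""
    ids = sorted(set(model_flags) & set(human_labels))
    rng = random.Random(seed)
    sample = rng.sample(ids, min(sample_size, len(ids)))

    true_pos = sum(1 for i in sample if model_flags[i] and human_labels[i])
    flagged = sum(1 for i in sample if model_flags[i])
    agree = sum(1 for i in sample if model_flags[i] == human_labels[i])

    return {
        # Of the contracts the model flagged, how many did humans also flag?
        "precision_of_flags": true_pos / flagged if flagged else 0.0,
        # Overall agreement between model and human reviewers on the sample.
        "agreement_rate": agree / len(sample) if sample else 0.0,
    }


if __name__ == "__main__":
    # Hypothetical audit round: the model flagged 3 of 5 contracts; humans agreed on 2.
    model = {"VA-001": True, "VA-002": True, "VA-003": False,
             "VA-004": True, "VA-005": False}
    human = {"VA-001": True, "VA-002": False, "VA-003": False,
             "VA-004": True, "VA-005": False}
    print(audit_sample(model, human, sample_size=5))
```

The fear in the second bullet is precisely that this kind of check is skipped once the tool is "good enough" for the budget line, and the human labels stop being collected at all.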