Two kinds of AI users are emerging

AI in Finance and Modeling Risk

  • Many find it alarming that non-experts are using AI to convert complex, 30‑sheet financial Excel models into Python and then layering simulations and dashboards on top.
  • Critics argue that equivalence to the original spreadsheet is hard to prove, that edge cases will be missed, and that the original itself is often a buggy, untested artifact.
  • Others counter that Excel models are already fragile and under‑tested; an AI port that is regression‑tested against the sheet (a minimal sketch follows this list) may be no worse, and in some finance shops spreadsheets are already versioned and tested like code.
  • Broader concern: business and policy decisions are routinely driven by flawed quantitative work (examples cited include national statistics errors and the Reinhart–Rogoff Excel debacle), and AI may just accelerate this existing problem.
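
A minimal sketch of that regression-testing idea, assuming the workbook's calculated values are readable via openpyxl; `model.xlsx`, the `Summary` sheet, and `port_model` are illustrative placeholders, not anything from the discussion:

```python
# Regression-test sketch: pin a hypothetical Python port to the values
# the original workbook actually computes. "model.xlsx", the "Summary"
# sheet, and port_model are all illustrative placeholders.
import openpyxl
import pytest

def port_model(revenue: float, cost: float) -> float:
    """Stand-in for one formula in the AI-generated port."""
    return revenue - cost

@pytest.fixture(scope="module")
def sheet():
    # data_only=True returns the values Excel last calculated,
    # not the formula strings.
    wb = openpyxl.load_workbook("model.xlsx", data_only=True)
    return wb["Summary"]

def test_port_matches_workbook(sheet):
    for revenue, cost, expected in sheet.iter_rows(min_row=2, max_col=3,
                                                   values_only=True):
        if expected is None:
            continue  # skip blank rows
        assert port_model(revenue, cost) == pytest.approx(expected, rel=1e-9)
```

Reading with data_only=True compares the port against the sheet's actual outputs, bugs included, which is exactly the equivalence critics say is hard to establish by inspection.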

Copilot, Claude, and “Shadow AI”

  • Multiple comments report Microsoft 365 Copilot, especially in Excel, as poorly integrated and often unable to even read the open workbook, in stark contrast to more capable external tools like Claude Code.
  • This fuels a split: enterprises locked into sanctioned but weak tools vs. individuals quietly installing terminals, local models, or browser add‑ons (“Shadow AI”) to actually get work done.
  • Some note the irony that large vendors’ own employees are reportedly favoring competitors’ tools, suggesting internal recognition that their official offerings lag.

Agentic Coding, Productivity, and Tech Debt

  • Strong enthusiasm for agentic coding on greenfield projects: people report 2–5x (and sometimes much higher) productivity when bootstrapping new tools, CLIs, servers, or small apps.
  • On mature, messy codebases, gains shrink (~10–30%) and supervision overhead rises. Models struggle with long‑lived complexity, implicit knowledge, and legacy quirks.
  • Several describe “vibe‑coded” AI projects: impressive prototypes that quickly become unmaintainable, with huge functions, scattered queries, and explosive tech debt.
  • A recurring theme: AI is powerful when guided like a junior engineer within clear architecture and tests (a test‑first sketch follows this list); letting it run unsupervised produces brittle systems.
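
A sketch of that test‑first guidance, with a hypothetical `parse_invoice` contract and invoice format; the human writes the tests, then asks the agent to implement until they pass:

```python
# Sketch of the "junior engineer" workflow: the human pins down the
# contract as tests first, then asks the agent to implement until they
# pass. parse_invoice and the line format are hypothetical.
from dataclasses import dataclass
import pytest

@dataclass
class Invoice:
    number: str
    amount: float
    currency: str

def parse_invoice(line: str) -> Invoice:
    """What the agent eventually supplies; included so the sketch runs.
    Format: 'INV-0042 | 2024-03-01 | 1,250.00 EUR'."""
    try:
        number, _date, money = (p.strip() for p in line.split("|"))
        amount_str, currency = money.rsplit(" ", 1)
        return Invoice(number, float(amount_str.replace(",", "")), currency)
    except ValueError as exc:
        raise ValueError(f"malformed invoice line: {line!r}") from exc

# --- the contract, written before the implementation exists ---
def test_parses_amount_and_currency():
    inv = parse_invoice("INV-0042 | 2024-03-01 | 1,250.00 EUR")
    assert inv.amount == pytest.approx(1250.00)
    assert inv.currency == "EUR"

def test_rejects_malformed_lines():
    with pytest.raises(ValueError):
        parse_invoice("not an invoice")
```

The point is that the tests, not the prompt, define "done".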

Verification, Testing, and Rigor

  • Many stress that the bottleneck isn’t code generation but verification: building test suites, sanity‑checking outputs, and resisting the temptation to accept plausible‑looking graphs or numbers.
  • Stories include LLMs silently reordering time axes (a cheap guard against this is sketched after this list), hallucinating test passes, and business cultures that penalize rigorous checking as “too slow.”
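
A sketch of such a sanity check in pandas; the `date` column name is illustrative:

```python
# Cheap sanity check of the kind commenters recommend: refuse to plot
# or model a dataset whose time axis has been silently shuffled.
# The "date" column name is illustrative.
import pandas as pd

def check_time_axis(df: pd.DataFrame, time_col: str = "date") -> pd.DataFrame:
    ts = pd.to_datetime(df[time_col])
    if not ts.is_monotonic_increasing:
        raise ValueError(f"{time_col!r} is not sorted ascending; downstream "
                         "charts and models would be silently wrong")
    if ts.duplicated().any():
        raise ValueError(f"{time_col!r} contains duplicate timestamps")
    return df

# Example: check_time_axis(pd.read_csv("llm_output.csv"))
```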

Types of AI Users and Use Patterns

  • Several alternative taxonomies are proposed:
    • People who treat AI as a tool/intern vs. those who outsource entire skillsets and critical thinking to it.
    • People solving new problems vs. those maintaining old systems.
    • Coders vs. “on‑demand learners” using AI primarily as a personalized tutor or explainer.
  • Many admit they straddle categories depending on task: careful in production work, carefree for side projects or learning.

Small vs. Large Organizations

  • Commenters largely agree that small teams gain disproportionately: they can pair AI with minimal bureaucracy to ship quickly.
  • In big companies, process, risk, and hidden dependencies dominate; faster code generation doesn’t fix organizational drag or opaque legacy systems.

Security, Governance, and Confidential Data

  • There’s broad concern about non‑technical users running agents with high privileges, pasting sensitive data into consumer chat UIs, and generally recreating “Shadow IT” at much higher stakes.
  • Some argue sandboxing via separate accounts or containers is straightforward (a container‑based sketch follows); others note that enterprises still prioritize speed and hype over rigorous security design.
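
A sketch of the container approach, launching an agent process in a locked‑down Docker container; `agent-image:latest` and the command are placeholders, while the docker flags themselves are standard:

```python
# Sketch of the container approach: run an agent command inside a
# locked-down Docker container. "agent-image:latest" and the command
# are placeholders; the docker flags themselves are standard.
import subprocess

def run_sandboxed(cmd: list[str], workdir: str) -> int:
    docker = [
        "docker", "run", "--rm",
        "--network=none",           # no egress at all (see note below)
        "--read-only",              # immutable root filesystem
        "--cap-drop=ALL",           # drop all Linux capabilities
        "--pids-limit=256",         # bound the process count
        "--memory=1g",              # bound memory
        "-v", f"{workdir}:/work",   # only the project dir is writable
        "-w", "/work",
        "agent-image:latest",
    ] + cmd
    return subprocess.run(docker).returncode

# Example: run_sandboxed(["python", "agent.py"], "/tmp/project")
```

In practice an agent needs some egress to reach its model API, so --network=none usually gives way to an egress‑filtered network; the point is that the blast radius becomes an explicit decision rather than whatever privileges the user happens to hold.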