Using LLMs at Oxide

Overall Reaction to Oxide’s LLM Policy

  • Many see the RFD as measured and thoughtful: LLMs are encouraged but tightly bounded by values like responsibility, rigor, and trust.
  • Others see a tension: the caveats are so strong that they question whether LLMs should touch anything near production, or note that much is left to “personal responsibility” without concrete rules.
  • Some also think it underplays public perception issues around “stolen training data.”

LLMs for Coding: Scope, Quality, and Ownership

  • Strong consensus that LLMs are useful for: boilerplate, simple refactors, tests, scaffolding, and pattern-matching tasks where correctness can be mechanically checked.
  • Several describe workflows: write a spec/plan first, have the LLM implement, then thoroughly self‑review before peer review. The careful line‑by‑line review step is often the most time‑consuming part.
  • Others prefer fine‑grained autocomplete over big diffs: it keeps context and scope small and feels 20–30% faster overall without a heavy review burden.
  • Many stress “you must own the code”: LLM output is acceptable only if the human understands and stands behind every line.
  • Skeptics report LLMs failing badly on complex or cross‑language tasks and see “amazingly good at writing code” as overstated.

Impact on Juniors, Learning, and Craft

  • Debate over juniors: some worry LLMs will stunt deep understanding, creating developers who can’t debug or design; others compare this to earlier resistance to Google, IDEs, and autocomplete.
  • Concern that organizations now penalize “not using AI enough,” pushing juniors toward shallow, LLM-heavy workflows.
  • Broader craft vs pragmatism theme: some want meticulous, hand‑tooled code; others argue that for many projects, “getting it done” with messy internals is economically rational.

Use for Writing, Editing, and Reading

  • Oxide’s hard line against LLM‑written prose resonates with many: it’s seen as breaking a social contract of effort and authenticity; readers “would rather read the prompt.”
  • Counterpoint: for non‑fiction, writing is “data transmission,” and using tools to increase clarity is respectful of the reader; the process shouldn’t matter if the result is accurate and clear.
  • LLMs as editors get mixed reviews: they can improve structure and grammar but risk erasing voice or producing verbose, generic text.
  • Claims that LLMs are “superlative at reading comprehension” are disputed; people report hallucinated summaries and misleading “translations” of documents.

Trust, Detection, and Hiring

  • Oxide reports widespread LLM‑authored application materials and uses LLMs itself as an aid in spotting such writing, especially when human reviewers are already suspicious.
  • Commenters question how reliable this is without measured false-positive/false-negative rates and worry about unfairly rejecting genuine writing.
  • Applicants share experiences of heavy writing effort, long delays, generic rejections, and uncertainty about whether they were misclassified as LLM-generated.

Legal, Ethical, and Policy Gaps

  • Some are surprised the RFD barely mentions copyright: risks of verbatim code reproduction, copyleft implications, and unsettled law around LLM-generated artifacts.
  • Others argue these concerns may be implicitly covered by the general “you are responsible for what you ship” stance, but agree this area is still unclear.