AI Usage Policy

Use of AI Transcripts and Prompts

  • Some want full AI session transcripts attached to work (PRs, tickets) to show how code was produced, what alternatives were considered, and to help reviewers target scrutiny and learn prompting.
  • Others see little value: prompts are non-deterministic, not true “source,” and add more to read; they worry about exposing messy thought processes or feeling pressure to “polish” transcripts.
  • A middle view: transcripts are personal project artifacts, like notes or issues, useful for self-audit and improving one’s prompting, but not always necessary to share.

Perceptions of the Ghostty AI Policy

  • Many call it balanced and foresee it becoming a template for OSS and internal company policies: AI is welcomed as a tool, but humans must think, test, and own the result.
  • Some plan to adopt similar rules to combat low-quality contractor or drive‑by AI code.
  • The requirement to disclose AI tools divides opinion: supporters cite maintenance risk and transparency; critics argue maintainers should only care about code quality, not how it was written, and see it as an unjustified intrusion.
  • The project’s exemption of its own maintainers from the strictest parts strikes some as an unfair double standard.

Quality, Verification, and “AI Slop”

  • Wide agreement that human verification of AI-assisted code should be mandatory; several note that LLMs can “game” or disable tests.
  • Concern that some contributors trust AI outputs blindly, then claim to have “reviewed” them; maintainers report a surge of plausible-looking but broken PRs and even AI-generated screenshots.
  • Some say AI helps them cut fewer corners by offloading boilerplate; others say it just makes producing garbage much cheaper than reviewing it.

Trust, Shame, and Contributor Incentives

  • AI is seen as eroding trust in unknown contributors and new repos; maintainers become more defensive and reputation-focused.
  • Many lament a lack of shame among people spamming low-effort AI PRs for résumé points, course requirements, or vanity; others attribute this to inexperience, economic pressure, or cultural differences around shame.
  • The policy’s threat of public ridicule is polarizing: some see shaming as a necessary deterrent and reputational counterweight; others see it as medieval, ineffective for shameless actors, or a waste of maintainer energy, preferring simple bans or “ghosting.”

Legal and Copyright Concerns

  • A minority raises unsettled law around AI-generated code, worrying future rulings could retroactively affect project licensing or copyright status.
  • Others argue this only matters if someone sues, and note that widespread use means law will likely adapt to entrenched practice rather than unwind it.

Media vs. Code Distinction

  • The explicit ban on AI-generated media but allowance for AI text/code is questioned; several argue code and prose are trained from copyrighted corpora just like images and audio.
  • One view is pragmatic: project owners feel more moral authority to set norms around code (their own domain) than around digital art, where they’d be benefiting from consent-less training of other people’s work.

Enforcement and Future Norms

  • Several note that “good” AI-assisted PRs are indistinguishable from human-only ones, making strict AI-specific policies partly unenforceable; experienced review of the diff remains the real filter.
  • Others stress metadata and disclosure will still matter as an early signal of which contributions are worth the costly verification effort.
  • There is broad expectation that AI use disclosure will become boilerplate, personal track records and trust networks will matter more, and the old “code speaks for itself” ethos will be harder to sustain.