Jellyfin LLM/"AI" Development Policy

Overall reception of Jellyfin’s policy

  • Many commenters see the policy as reasonable and overdue, especially the insistence that contributors understand their code and explain it clearly.
  • Some think most of it simply restates existing good contribution practices; others argue that LLMs change the situation by massively increasing the volume and surface plausibility of bad PRs.

LLMs in communication (“no AI prose”)

  • Strong support for banning LLM‑generated direct communication (issues, PR descriptions, comments).
  • People dislike the recognizable “chatbot tone” (overlong, corporate, emoji-laden) and feel it disrespects readers by offloading the work of thinking onto them.
  • Several note that anyone able to prompt an LLM could just as easily have sent the shorter, human-written original; the LLM output is often just a lossy re-encoding of it.
  • Some are surprised it’s even necessary to spell out “you must write your own words and understand your code.”

Translation, grammar, and non-native English

  • Many like the explicit carve‑out for LLM-assisted translation/grammar as an accessibility win, especially for making open source more global.
  • Others strongly prefer honest, imperfect English over polished text whose author may not understand it.
  • Debate over tools: some recommend traditional machine translation (e.g., Google Translate) to avoid “ChatGPT slop” and fluff; others argue modern LLM-based translation is extremely good.
  • A recurring concern: if you don’t know the language, you can’t reliably judge whether the LLM changed your meaning.

LLM-generated code and PRs

  • Maintainers describe being swamped with large, “vibe-coded” LLM PRs, especially after a major Jellyfin release: multiple unrelated fixes mashed into one, unclear intent, and a huge review burden.
  • Commenters emphasize that code authors must be able to explain, justify, and test their changes; “LLM code” is acceptable only if the human really understands it.
  • Some argue code is code regardless of origin; others counter that a key variable is whether the submitter grasps the intent, not just the diff.

Enforcement, standards, and open source health

  • Suggested enforcement ranges from instant permabans to more lenient “repeat-offender” handling, with skepticism that bans stop determined users on alt accounts.
  • Several propose standardized “Agent Policy” / “Agents.md” documents to guide LLM tools, akin to licenses or contribution guidelines.
  • There is concern that sustained LLM slop could push projects away from open PRs toward more closed, trusted‑contributor models, and even be abused as a smokescreen for malicious changes.
  • A nuanced critique holds that the true boundary should be verification and accountability, not whether an LLM was involved at all.