Learnings from paying artists royalties for AI-generated art

Artist Adoption and Business Viability

  • Many see the core failure as lack of demand: few customers actually want to pay for specific named-artist styles versus generic “looks” (e.g., 1970s film grain, consistent characters).
  • Only 21 artists joined from ~325 cold emails; commenters view that ~6.5% signup rate as the real signal: many artists don’t want “passive AI income”; they want AI out of their market.
  • Some liked the transparency and postmortem, but questioned framing the failure as “timing wasn’t right” rather than “the idea/product was fundamentally unattractive.”
  • Marketing/distribution also criticized: a product few people had even heard of is unlikely to succeed, especially against better-known, higher-quality tools.

Model Quality, UX, and Ethics

  • Users who tried Tess reported worse output quality and ergonomics than OpenAI, Flux, and other rivals, often needing many attempts per usable image.
  • Several say they’d pay extra for ethically sourced models, but only if quality and workflow match top competitors; ethics alone won’t beat “pirate-quality” tools.
  • Some argue most artists now distrust any AI offering, even “ethical” ones, because they see AI as inherently threatening their livelihoods.

Legal and IP Debates

  • Large subthread on whether training on copyrighted works is fair use:
    • One side: training is transformative, akin to reading/learning; outputs aren’t reproductions, and copyright shouldn’t expand to “style.”
    • Other side: using art in training without consent should require licenses; some even advocate criminal penalties.
  • Disagreement over how the fair use factors apply: purpose (commercial), amount used (entire works), and market harm (models competing with the originals).
  • Many note there is no settled legal precedent on AI training and fair use; claims that it’s “clearly fair use” are challenged.

Compensation Models and Attribution

  • Ideas floated include ASCAP/BMI-style royalty systems, licensing entire training sets, and global artist payouts, though commenters were skeptical about feasibility and economic scale.
  • Some argue per-output attribution is computationally intractable or prohibitively expensive; others counter that big AI companies simply lack incentive, not capability.
  • Concern that any “style-compensation” regime could chill ordinary artistic borrowing, which has always been part of art practice.
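One way to picture the ASCAP/BMI-style idea mentioned above is a pro-rata pool split: a fixed revenue pool divided by per-artist usage counts. The function name, usage counts, and pool size below are hypothetical, illustrating only the arithmetic such a scheme would involve, not any actual product’s payout logic.

```python
def split_royalty_pool(pool_cents: int, usage_counts: dict[str, int]) -> dict[str, int]:
    """Split a revenue pool pro rata by per-artist usage counts.

    Largest-remainder rounding keeps the integer payouts summing
    exactly to the pool, so no cents are created or lost.
    """
    total = sum(usage_counts.values())
    if total == 0:
        return {artist: 0 for artist in usage_counts}
    # Exact fractional shares, then floor to whole cents.
    exact = {a: pool_cents * n / total for a, n in usage_counts.items()}
    floors = {a: int(v) for a, v in exact.items()}
    remainder = pool_cents - sum(floors.values())
    # Hand the leftover cents to the artists with the largest fractional parts.
    by_fraction = sorted(exact, key=lambda a: exact[a] - floors[a], reverse=True)
    for a in by_fraction[:remainder]:
        floors[a] += 1
    return floors

# Hypothetical month: a $100.00 pool split across three artists' usage.
payouts = split_royalty_pool(10_000, {"artist_a": 50, "artist_b": 30, "artist_c": 20})
```

Even this toy version sidesteps the hard part commenters raised: producing the `usage_counts` per output, which is exactly the attribution step argued to be intractable or merely unincentivized.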

Base Model and “Ethical” Claims

  • Multiple commenters say the product’s core promise (“every image traceable to a consenting artist”) was undercut by fine-tuning on a Stable Diffusion base model trained on unlicensed internet scrapes.
  • This is seen as a thin ethical “veneer” over fundamentally non-consensual training data, undermining the moral positioning and legal clarity.

Broader Reactions and Side Points

  • Some appreciate the startup-level honesty, including noting an engineer’s burnout, and discuss shared responsibility between leadership and individuals.
  • Others note that many consumers say they want artists paid but are less willing to pay or support strict IP enforcement.
  • A brief tangent critiques the corporate buzzword “learnings” vs. “lessons.”