Eleven Music
Existing “reverse” music AI (listening → notes)
- Several commenters note tools already exist to transcribe audio to notation/MIDI or isolate instruments: AnthemScore, ScoreCloud, Melody Scanner, Spleeter, CREPE, Moises, Google’s AudioLM, Spotify’s BasicPitch.
- Some think these use “older” ML and don’t reach expert-musician quality; others report surprisingly good results (e.g., generating decent guitar tabs via ChatGPT).
- People want deeper “understanding” tools: extracting chords/tabs, interactive idea exploration, or isolating instruments for practice.
Perceived quality and limitations of Eleven Music & peers
- Many compare Eleven to Suno and Udio; the consensus is that Eleven’s v1 lags behind them: timing/pacing issues, robotic vocals, audible artifacts, low apparent bitrate, a narrow context window, and a buggy UI.
- Suno and Udio are seen as more musical, with better stereo, stems, and editing, though still generic and occasionally “off.”
- Specific failures include mis‑generating Argentine tango (defaulting to ballroom “tango”) and awkward blues/rock solos that feel random and unnatural.
Use cases: from muzak to prototyping
- Widely seen as ideal for low-stakes, background uses: podcast/YouTube intros, generic corporate or marketing music, and placeholder game audio.
- Some musicians see value as a prototyping tool: quickly generating drones, grooves, or bass/drum ideas to refine in a DAW; or as an “infinite sample library.”
- Others want more collaborative, stem-level, iterative tools (e.g., “add drums to this demo”) rather than one-shot song generators.
Impact on musicians’ livelihoods
- Strong worry that every use case AI can serve removes another “entry-level” or middle‑tier income stream: library music, ads, TV/film cues, session work.
- Several argue this “eats the seed corn”: fewer paid apprenticeships → fewer future professionals and innovators.
- Counterpoint: music was already heavily industrialized and generic; AI is an accelerant, not the root cause.
Art, originality, and “soul”
- Many describe AI output as lifeless, aggressively mediocre, “McMusic” optimized for average palatability, good for “muzak” but not boundary-pushing art.
- Some argue that curation, prompting, and editing can themselves be art, analogous to photography or collage; others counter that this is merely selecting from the model’s whims, not expressing genuine intent.
- Ongoing debate over whether art must be difficult to produce, must “challenge,” and whether distinguishing art from entertainment is meaningful.
Ethics, copyright, and business models
- Serious concern that models are trained on music without consent, then sold back into the same market, threatening the original creators’ income—even if legally defensible as “fair use.”
- Eleven claims collaboration with labels/publishers, but commenters find details unclear and remain skeptical.
- Subscription licensing (paying a platform indefinitely to use a generated track) is seen as exploitative; some argue users should own full rights to outputs or be able to self‑host open models.
- Frustration that major players keep weights closed, slowing community experimentation and open tooling.
Automation, capitalism, and cultural worries
- Several connect this to a broader pattern: automation under capitalism increasing drudgery and precarity rather than freeing people for creative work; comparisons to the Industrial Revolution and Luddites.
- Fear that cheap, infinite AI “slop” plus platform economics (e.g., Spotify) will further crowd out distinctive human work and deepen cultural malaise.
- A minority predict a counter‑movement: renewed demand for “organic” live music, weird and experimental human art that AI can’t easily imitate.
Musicians’ emotional responses
- Hobbyists and semi‑pros express real demoralization: after years of practice, being outclassed in seconds by a model feels worse than competing with other humans.
- Others reaffirm that the real reward is the process, community, and live performance—things AI can’t replace—and expect human-made art to become more valued, if smaller in market share.