
ElevenLabs – a company known for its AI voice generation – has just announced a new service, Eleven Music. The tool lets users generate their own music from a simple natural-language description. You don't need to know anything about composition – just type something like: “create smooth jazz with a 60s vibe, strong lyrics, and an atmosphere perfect for a Friday afternoon” – and the AI will produce a track with vocals and instrumentation in a few minutes. For content creators, filmmakers, app owners, and small businesses, this could be a major change: lower costs, less red tape, more creative control.
Laying Its Cards on the Table with the Labels
To avoid the problems its competitors have run into, ElevenLabs has chosen the path of licensing and collaboration. The company has already signed agreements with Merlin Network, which represents independent labels, and with Kobalt Music Group, a well-known player in copyright management and music publishing. The parties are still determining which catalogs will be used to train the model, but the mere fact that these discussions are underway sends a signal: ElevenLabs wants to operate above board. Importantly, the company emphasizes that it does not use music from the major labels (UMG, Sony, Warner), though it hopes to negotiate terms of cooperation with them soon. This matters, because without permission to use the data, training AI on other people's recordings is a legal gamble.
Safety Comes First
None of this means ElevenLabs is blind to the risks. The company has implemented safeguards from the start to block the generation of songs that name specific artists, quote existing recordings, or contain lyrics that are illegal, offensive, or incite violence. The goal is to avoid cases like "deepfake hits" built on Drake's voice, or unauthorized songs masquerading as the work of real performers. ElevenLabs wants a tool that gives users freedom without crossing ethical and legal lines.
Who Is This All For?
The new service targets creators who need music for advertisements, apps, films, or games – in other words, places where you would normally pay for a license or hire a composer. According to ElevenLabs, 20 companies and creators are already testing the system. Exactly who hasn't been disclosed, but example uses include television productions, games, meditation apps, and even fitness apps. Industries such as automotive and telecommunications, along with creative agencies, could quickly jump on the bandwagon too – especially since AI can replace expensive stock music for "a fraction of the cost."
Creators vs. Technology
Not everything looks rosy, however. Organizations that protect creators' rights – such as ASCAP – warn that AI can be a threat if it does not operate fairly. Training models on other people's works without permission risks not only lawsuits but also the destruction of entire professions' incomes. As Elizabeth Matthews, the head of ASCAP, put it, AI can be an innovation, but only if it respects the rights of the people who have been making music for years. It's hard to disagree: however fast the technology races ahead, creativity should remain rooted in respect for human labor.
Customer Delight or Audience Anger?
There's also the question of reputation. Even if AI-generated music is legal and works flawlessly, it can still be a problem for many listeners and clients. Analysts like Mike Proulx of Forrester warn of a potential backlash: public support for "real artists" and concern over human jobs are powerful emotions today. Companies that replace human creators with AI at scale may take a serious hit – reputationally, emotionally with their audiences, and in some cases financially.