Spotify is testing AI music tools that require artist opt-in, clearer labels, and stronger detection to reduce fake tracks and voice impersonations.
In short: Spotify is testing an artist-first approach to AI music tools, with opt-in access and clearer labels to reduce fake tracks being linked to real artists.
Spotify is rolling out new rules and tools for AI-made music on its platform. The company says the goal is to protect artists, especially from voice cloning and impersonation, where someone makes a track that sounds like an artist and uploads it under that artist’s name.
Spotify says it is working with big music companies, including Sony Music, Universal Music Group, and Warner Music, plus independent groups like Merlin and Believe. Together, they plan to build AI products that require artist consent and aim to ensure fair compensation.
Spotify is also creating a generative AI research lab and a product team focused on music. “Generative AI” is the kind of AI that can create new content, such as a new song or a computer-made remix of an old one. Artists would be able to opt in if they want their catalogs used for things like AI-generated covers or remixes.
Fake and spammy uploads have become a real problem on streaming services. Spotify says it removed more than 75 million spam tracks in the past year. For listeners, better labeling could make it easier to understand how a track was made. For artists, opt-in controls and better detection are meant to reduce the risk of having low-quality or misleading tracks associated with their name.
Source: TechCrunch AI