Spotify deletes 75M ‘slop’ songs in massive AI crackdown

AS Photography/Pexels

Spotify has quietly carried out one of the largest content purges in modern tech, stripping more than 75 million low‑quality and fake tracks from its catalog as part of a sweeping crackdown on AI‑generated “slop.” The company is pitching the move as a defense of artists, songwriters, and listeners, and as a way to stop industrial‑scale spam from gaming its royalty system. I see it as something else too: an early test of whether the streaming era can survive the next wave of synthetic content without collapsing into noise.

Behind the headline number is a deeper shift in how Spotify wants AI to function on its platform, from invisible background tool to tightly policed creative partner. The company is not banning machine assistance outright, but it is drawing a hard line around impersonation, royalty fraud, and what it calls “bad actors” flooding the service with near‑identical tracks. The result is a new set of rules that will shape how music is made, distributed, and paid for in the age of generative models.

What 75 million deleted tracks really means

When Spotify says it has removed more than 75 million “spammy” tracks, it is not just trimming the edges of its catalog, it is ripping out an entire shadow economy of AI‑assisted uploads. The company has described using a proprietary detection system to identify vast clusters of low‑effort songs that were designed less to be heard than to be clicked, often uploaded in bulk, in order to siphon off royalties from legitimate listening. In its own framing, the deletions are part of a broader effort to clean up the experience for users and rights holders.

Other reporting has put the scale in starker terms, noting that the 75 million removed tracks and the crackdown on impersonators came in a single coordinated push. That figure rivals the entire catalogs of some competing services, and it underscores how aggressively AI tools have been used to churn out filler content. I read that number less as a one‑time cleanup and more as a signal that Spotify expects this to be an ongoing battle, with automated upload farms constantly probing for new ways to slip through.

Inside Spotify’s new AI rulebook

To make sense of the purge, it helps to look at the rulebook that now sits behind it. Spotify has rolled out strengthened AI protections that explicitly frame harmful synthetic content as a threat to both user experience and the economics of streaming. The company has warned that this kind of material degrades listening and often attempts to divert royalties to bad actors, and it has tied the issue directly to how total music payouts are distributed to artists and songwriters on the platform: in its own policy language, those payouts are at risk when spammy uploads flood the system.

At the same time, Spotify is trying to draw a distinction between creative use of AI and what it calls “AI slop.” The company has acknowledged that AI tools can help artists write, produce, or master tracks, but it is targeting the industrialized use of those tools to generate thousands of near‑identical songs or to mimic recognizable voices without consent. That is where the company’s promise of future transparency becomes important, because it signals that Spotify is not only deleting content but also preparing to tell users more clearly when AI is involved in what they are hearing.

Deepfakes, impersonation, and the “Heart on My Sleeve” problem

Underneath the spam fight sits a more emotionally charged issue: musical deepfakes. One of the most notorious examples broke through in 2023: a song called “Heart on My Sleeve,” which used AI‑generated vocals purporting to be those of major stars and forced the industry to confront what happens when anyone can clone a famous voice. Spotify’s new policies are clearly shaped by that moment, treating unauthorized vocal cloning as a form of digital identity theft rather than just a copyright puzzle.

To tackle that, Spotify is implementing an impersonation policy that allows artists to file complaints when their voices or likenesses are used without permission. The company has described this as a way to help tackle what amounts to digital identity theft, giving performers a formal channel to challenge deepfakes and other uploads that use their voices without consent. In practice, that means a singer who finds their cloned voice on a track can now point to Spotify’s own rules and demand removal, rather than relying solely on labels or legal threats.

Follow the money: how AI spam warps royalties

Behind the rhetoric about quality and authenticity is a blunt financial reality: AI spam is a way to steal money. Spotify has acknowledged that its existing measures for combating spam uploads have become easier to exploit as AI tools have improved, and that AI‑generated spam uploads are specifically designed to capture royalties based on play count. In other words, the flood of synthetic tracks is not an accident, it is a business model built on pumping low‑effort songs into playlists, background listening sessions, or even bot networks.

Spotify has been explicit that this kind of harmful AI content dilutes payments to legitimate artists, because total payouts are shared across all streams, including those captured by bad actors. When the company says that kind of content degrades the user experience and often attempts to divert royalties to bad actors, it is acknowledging that every fake stream is effectively a transfer of income away from human creators. That is why the company is pairing its deletions with new enforcement tools, including improved detection of the schemes that try to game the system.

Artists, platforms, and the next AI frontiers

For working musicians, the crackdown is both a relief and a warning. On one hand, many artists have been vocal about how AI‑generated spam and deepfakes threaten their livelihoods and identities, and they are likely to welcome a system that promises stronger protections and more transparent labeling. On the other, the same generative tools that power spam are also being marketed to creators as productivity boosters, and some are already using them to co‑write lyrics, generate stems, or simulate instruments. I see a tension here between Spotify’s promise that choice stays in artists’ hands and the reality that platforms and tech giants, including Google, are training AI tools on massive troves of creative work that musicians did not explicitly license for that purpose.

Spotify is not operating in a vacuum. The same week that it was touting its purge of fake songs, industry voices were sharing posts about how AI slop is invading other kinds of work, from code to corporate documents, and how professionals are scrambling to adapt. One widely shared update highlighted how Spotify had just deleted 75 million fake songs and framed it as a sign of how quickly generative tools are reshaping creative industries, with professionals from software architecture to enterprise order management weighing in as they tried to map what this means for their own fields. That kind of cross‑sector anxiety suggests that Spotify’s move is being watched far beyond music.

More From The Daily Overview