凤凰科技 (Phoenix Tech) 2026-03-21

US musician admits AI-driven streaming fraud that siphoned millions from major platforms

The scheme

Michael Smith, a 54-year-old North Carolina musician, has pleaded guilty to conspiracy to commit wire fraud after a years-long scheme that used AI-generated music and automated bots to inflate stream counts on major platforms. According to reports, Smith bought hundreds of thousands of AI-created tracks, bulk-uploaded them to services including Spotify, Apple Music, Amazon Music and YouTube Music, and used bots routed through virtual private networks (VPNs) to generate billions of fraudulent plays. Court filings say he ran more than 1,000 bot accounts across 52 cloud service accounts — about 20 bots per cloud account — producing an estimated 660,000 plays per day at the scheme's peak.

Legal fallout and broader implications

Smith reportedly agreed to forfeit about $8.09 million and faces up to five years in prison; prosecutors say the operation illegally netted more than $10 million in royalties. Internal messages reportedly boasted that the AI catalog had accumulated over 4 billion streams and roughly $12 million in royalties since 2019. How did the scheme evade detection for so long? The answer lies partly in scale and disguise: distributed cloud infrastructure, VPNs and a mass-produced AI catalog spread plays thinly across many titles and accounts, keeping each one below the thresholds that trigger platform anti-fraud systems.
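The reported figures make the "spread thinly" strategy concrete. A back-of-the-envelope sketch (the 500,000-track catalog size is an illustrative assumption standing in for the reported "hundreds of thousands"; all other numbers come from the court filings and messages cited above):

```python
# Back-of-the-envelope arithmetic from the figures reported in this story.
# TRACK_COUNT is an illustrative assumption ("hundreds of thousands" of
# tracks in the reports); the other constants come from court filings.

DAILY_PLAYS = 660_000            # estimated plays per day at peak
BOT_ACCOUNTS = 1_000             # bot accounts per court filings
CLOUD_ACCOUNTS = 52              # cloud service accounts
TOTAL_STREAMS = 4_000_000_000    # streams boasted in internal messages
TRACK_COUNT = 500_000            # assumed catalog size

plays_per_bot_per_day = DAILY_PLAYS / BOT_ACCOUNTS       # 660 plays per bot
bots_per_cloud_account = BOT_ACCOUNTS / CLOUD_ACCOUNTS   # ~19.2, i.e. "about 20"
streams_per_track_total = TOTAL_STREAMS / TRACK_COUNT    # 8,000 per track overall
# Spread over roughly five years (2019 onward), each track averages
# only a handful of plays per day -- far too few to stand out:
streams_per_track_per_day = streams_per_track_total / (5 * 365)

print(plays_per_bot_per_day, round(bots_per_cloud_account, 1),
      streams_per_track_total, round(streams_per_track_per_day, 1))
```

Under these assumptions, no single track or account ever looks busy: each bot generates a few hundred plays a day, and each track averages only a few, which is why per-title anomaly checks had little to latch onto.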

What this means for platforms and cloud providers

Beyond the criminal case, the episode highlights two fast-moving tensions: the weaponization of generative AI to create disposable content, and the use of cloud infrastructure to automate abuse at scale. Streaming platforms now face renewed pressure to harden fraud detection and to work with cloud providers to spot abusive access patterns. Regulators are also watching. In an era of cross-border data flows and competing tech regulations, enforcement in the U.S. sends a clear message: using AI and cloud services to monetize fake engagement can trigger both criminal penalties and reputational damage for the companies involved.
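One way to picture the platform–cloud cooperation described above: because thinly spread plays defeat simple per-account velocity checks, detection has to look for correlated activity behind shared infrastructure. The sketch below is a minimal illustration, not any platform's real pipeline; the thresholds, data shape, and function name are assumptions.

```python
from collections import defaultdict

# Illustrative sketch only: flag source-IP prefixes where many distinct
# accounts each generate a similar, sustained volume of plays -- the kind
# of correlated, cloud-hosted pattern described in this story. The
# thresholds (min_accounts, min_avg_plays) are illustrative assumptions.

def flag_abusive_prefixes(events, min_accounts=20, min_avg_plays=500):
    """events: iterable of (ip_prefix, account_id, plays_today) tuples.

    Returns the IP prefixes whose account population and average play
    volume both look machine-like rather than human.
    """
    by_prefix = defaultdict(dict)
    for prefix, account, plays in events:
        by_prefix[prefix][account] = by_prefix[prefix].get(account, 0) + plays

    flagged = []
    for prefix, accounts in by_prefix.items():
        if len(accounts) >= min_accounts:
            avg = sum(accounts.values()) / len(accounts)
            if avg >= min_avg_plays:
                flagged.append(prefix)
    return flagged

# ~20 bot accounts behind one cloud prefix, each playing ~660 tracks/day,
# versus 30 ordinary listeners behind another prefix:
bots = [("203.0.113", f"bot_{i}", 660) for i in range(20)]
humans = [("198.51.100", f"user_{i}", 40) for i in range(30)]
print(flag_abusive_prefixes(bots + humans))  # ['203.0.113']
```

The point of the sketch is that each bot individually stays under any plausible per-account ceiling, yet the cluster of near-identical accounts behind one cloud prefix is conspicuous — which is exactly the access pattern cloud providers are in a position to help surface.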
