Sep 22, 2023

X’s most toxic users are trying to game its Ads Revenue Sharing program

“It’s convoluted and there are no clear rules”

Some X users are testing the guardrails on a new system designed to share advertising revenue with them.

The program, known as Ads Revenue Sharing, comes with vague brand safety guidelines and a lot of upside. Get the depravity just right, and you’ll pay rent for the month. Go too far outside the company’s brand safety standards, and you may not get advertiser cash — but you can get up and try again.

“It’s very convoluted and no clear rules,” X user AlphaFox78 told Check My Ads over direct message about the platform’s revenue-sharing arrangement.

Some of AlphaFox78’s content with the most views is about celebrating “white pride,” transphobia, being “anti-woke,” burning the Pride flag, and driving through protesters.

AlphaFox78 said they didn’t “want to share my info on amounts” as it “makes people jealous,” but claimed that 10 million impressions earn them roughly $2,500 as a member of X’s Creator Ads Revenue Sharing system.

The user said they constantly tweak their posting strategy to keep abreast of the platform’s shifting rules and keep the money flowing from advertisers.

“I have had to modify my content and approach to not get ads removed,” said AlphaFox78, who said they were a 44-year-old IT worker from Massachusetts.

That means not reposting or commenting on “certain things” and ignoring sensitive content to avoid being “deboosted,” or having their reach decreased on X. It’s “a royal pain,” they added.

Even in this “Mad Max” world, there are some types of content so beyond the pale that they don’t qualify for ad revenue.

X is betting big on broken brand safety technology

According to X’s Creator Monetization Standards, users cannot monetize “content relating to tragedy, conflict or mass violence,” a category that includes natural disasters.

The X rules also state that users “may not deceptively share synthetic or manipulated media that are likely to cause harm,” which includes “media likely to result in widespread confusion on public issues (or) impact public safety.”

Recently appointed X “CEO” Linda Yaccarino — sorry, but if you have to tell the press you have autonomy, then you don’t — told CNBC in August that the company had rolled out new content moderation tools “that have never existed before at this company” and policies that would protect “freedom of speech” while ensuring “brand safety.”

“If you’re going to post something that is lawful but is awful, you get labeled… you get de-amplified… and it is certainly de-monetized,” said Yaccarino. “Brands are protected from the risk of being next to that content.”

Earlier this year, X announced partnerships with DoubleVerify and Integral Ad Science to protect advertisers.

The new tools use technologies like machine learning and keyword detection to try to prevent ads from appearing next to controversial content.

In a blog post, the company said it was also launching “an automated industry-standard blocklist,” along with “sensitivity settings” for brands to fine-tune what kind of content they’re comfortable with.

That is, brands get to choose things like whether they’re okay with being next to “gratuitous gore.”

But as we've said before, there's a lot of evidence that Integral Ad Science doesn't work.

We don't know how many brands are employing X's Sensitivity Settings. But we do know that ads were still served up under posts spewing lies about the cause of the Maui wildfires — including an ad from a Canadian government-owned cannabis store.

Is there a “lawful but awful” list?

Despite Yaccarino’s promises to protect brands from toxic content, X’s fuzzy rules have created an incentive structure for users to post lies in the hopes that money will come out the other end.

“The Maui, Lahaina, Hawaii Fires Were A Planned Attack!!” user WallStreetApes wrote in a post that received nearly 600,000 views. But the post didn’t appear to translate into revenue.

“I’m on the ‘lawful but awful’ list….” wrote user WallStreetApes on August 19th. “So this is what you get paid for close to 100 million impressions… $54 bucks.”

“The fires in Maui were caused by a direct energy weapon to turn the island into a 15 minute city, all electric, and AI governed island,” AlphaFox78 wrote on August 12. They told Check My Ads that the post made them “a few bucks.”

Others see a potentially lucrative future on X. “I NEVER in a million years imagined that a content creator like myself, putting out the kind of information I do & speaking FREELY while doing so would EVER yield a cent in ad revenue,” wrote hateful and harmful conspiracy spreader and X user The Patriot Voice.

Neither did we.

Check My Ads Institute is a non-profit 501(c)3 organization.