First, thank you. Thank you for being part of our community. Thank you for using your voice to fight disinformation. Thank you for working to bring transparency to adtech and, in doing so, defending democracy.
2023 was a huge year. We took the battle for adtech transparency to Google; met with policymakers in Canada, the US, and the EU; tackled the scourge of Fox News; kept the pressure on Elon Musk; and basically reduced Tim Pool to selling coffee.
Here are the big bones we picked in 2023:
Google runs the biggest digital ad exchange in the world. The $1.7 trillion company earns 80 percent of its revenue from ads, but the marketplace it set up is murky and ripe for grift. Look no further than Breitbart to see how sites get around universal blocklists with Google’s help.
For years, Google’s line has been: if you don’t want your ads on Breitbart, block it yourself. But it’s not that easy. We exposed how, through its Google Search Partner network, Google put major brands’ search ads on unsafe websites, including Breitbart, where advertisers had specifically asked not to appear.
After our report, Google stopped serving search ads on two of the sites we flagged, but kept them on Breitbart. Then we found even more ads on unsafe sites, and Google took action only after we called them out again.
That wasn’t the only time Google surprised advertisers with where it dumped their ads this year. We broke down how YouTube leads marketers to think they’re putting ads on YouTube when it often sticks them on third-party websites and mobile apps called Google Video Partners. Too bad GVP is a toxic landscape of inventory that runs the gamut: hate sites, made-for-advertising clickbait, and even Russian state media.
And Google’s still trying to keep the billions it took from advertisers with this scam. Oh yeah, and then it spent a week furiously denying a report about how it has secretly displayed violent ads meant for adults on kids’ YouTube channels, and then harvested children’s data without parental consent. Just another day when you’re a monopoly.
This November, Check My Ads celebrated our second birthday the same day as the first anniversary of Elon Musk buying Twitter. We launched two years ago with a promise to rip the beating heart out of the disinformation economy, and spent a lot of this year taking on Musk.
In May we dove into the fact that X, formerly Twitter, hasn’t magically become brand safe again just because the CEO of WPP — one of the world’s biggest advertising agency groups — said that “Twitter has become a lot more stable over the last few months.”
By August, X was showing how truly radioactive it is for advertisers under Musk’s watch, exploding with racism, LGBTQ hatred, and conspiracies. Unsurprisingly, advertisers fled and didn’t want to return. Musk responded by suing the handful of people who are fighting to bring transparency to the platform, like our friends at the Center for Countering Digital Hate.
To wallpaper over Musk’s destruction, X partnered with Integral Ad Science to convince advertisers they can protect their brands from the platform’s toxicity. There are just two problems: X has never been more toxic, and Integral Ad Science doesn’t actually work.
The other fig leaf X uses to pretend it’s brand safe, Trustworthy Accountability Group certification, was quietly renewed at the same time. We immediately filed a formal complaint.
Under increasing scrutiny, Musk attacked the ADL to distract from his own incompetence, claiming that the Anti-Defamation League is behind X’s declining ad revenue. Then his own company dropped a new X ad that unintentionally dunked on itself.
In the middle of all this, X started a program to share ad revenue with its verified users — who immediately began gaming the system to make a buck off outrage and misinformation. As the trash fire escalated, Musk inadvertently made the case for why X isn’t brand safe in a Daily Wire interview.
He made that case again as the year wrapped up, telling advertisers who were already halfway out the door to “go fuck themselves.” And as big-name advertisers stay away, ads for nonconsensual AI porn, semen stealing, and scams flooded in.
But at least those ads are labeled, unlike in the hundreds of examples you all sent in that we compiled into an FTC complaint.
This year we continued our fight against Fox News with some big wins. We exposed Tubi as a Fox Corp. streaming company, and we revealed that NewsGuard, a media reliability ratings company, is gaming the system for Fox News.
We pressured Digital Content Next to drop Fox News. DCN is an influential trade association that lobbies for the interests of “premium” digital publishers. Its members include some of the most trusted and well-respected media brands. But DCN also represents Fox News and its new streaming service. Through that partnership, DCN helps secure ad dollars for the biggest voices in disinformation. Fox Corp. earned $4 billion last year, yet it pays DCN just $79,000 a year to peddle Fox News as a legitimate news organization alongside PBS, the Associated Press, and USA Today. Is our democracy that cheap?
Oh yeah, and Tucker Carlson was dropped by Fox. (whistles innocently)
After Russell Brand was demonetized on YouTube, Rumble said it would continue to run ads on Brand’s channel; it didn’t matter that he was facing multiple assault and grooming accusations. It was an example of how the YouTube alternative monetizes toxic, unsafe content. It’s no coincidence that after its vocal support of Brand, advertisers started fleeing. And ahead of Rumble becoming the official streaming partner of the GOP debates, we shared eight things about the platform that you should know.
OpenWeb has built a niche for itself in the advertising business: “saving online conversations” from hate, lies, and disinformation. The ad exchange has gone on the record saying it would never work with Breitbart, but we exposed OpenWeb’s secret relationship with Steve Bannon.
In September we showed how Jordan Peterson, a prolific climate denier, is being funded by YouTube, and we launched multiple email action campaigns targeting the ad networks helping fund Townhall, a top climate change disinformation outlet: Criteo, Teads, Index Exchange, Xandr, and Google. Thanks to your action, Teads dropped Townhall from its inventory! Thank you for using your voice to defund climate disinformation!
Building on our Townhall research, we published a report with the Climate Action Against Disinformation ahead of the UN’s large climate conference, COP28. We showed how over 150 ad exchanges enable the monetization of climate mis- and disinformation on 15 key websites, including Breitbart, Newsmax, and Townhall.
It was a big year for meeting with changemakers. Claire joined a panel at the European Parliament to discuss defunding disinformation in the Balkans. And she didn’t stop there: she later visited Canada’s Parliament Hill, where she met with lawmakers and committee members and broke down how adtech’s opacity intersects with the problems plaguing online discourse. At the Ars Electronica festival in Austria, Nandini explained how misinformation spreads and what you can do about it. Our policy director Sarah traveled to Washington and met with lawmakers, and we can’t wait to show you what we’re working on 🤫
Generative AI tools such as ChatGPT and Midjourney can produce text, images, and more from user prompts. They can be handy for creating outlines or testing design ideas. But they also bring new and substantial risks: it’s never been easier to fake an image, rip off an artist, or spin up sites whose sole function is to drain money from advertisers. In 2024, we intend to push ad exchanges, advertisers, and publishers to prioritize the public interest, artists’ rights, transparency, and authenticity as they adopt AI tools.
AI’s risks don’t stop at ad placement; they reach down to strangle the roots of democracy by making it easy to profit from outrage-driven engagement built on fake images, text, and even video. The AI revolution comes just as X begins allowing political parties and candidates to advertise for the first time since 2019, and as Meta accepts money to run ads questioning the results of the 2020 election.
Artificial intelligence is not the biggest risk to election security in 2024. Disinformation and attacks on election security come from human beings, in the form of lawsuits, media manipulation (including AI), physical intimidation, and chaos agents. We are facing intense realities in the year ahead.
There is no way we could do this work without you. The research, strategy, psychological mobility, and resilience this work requires rely on the many kinds of support our Checkmates and friends provide. Please know that you matter very much to us and to this work.
Look at everything our community accomplished in 2023! Thank you for using your voice to make a difference. If you’re not already, consider becoming a Checkmate and supporting us with a recurring donation.
This next year, it’s on all of us to stay vigilant, call out bullshit when we see it, let advertisers know what they’re funding, and pull the plug on hate. We’ve got big things lined up, and we’re going after Google, grifters, and anyone who threatens democracy to make a buck. We can’t wait to work with you.
Best New Year wishes,
Claire & Nandini