Welcome back to BRANDED, the newsletter exploring how marketers broke society (and how we can fix it).
Here’s what’s new with us:
For a brief and glorious window of time last week, we received a rare glimpse into the secret algorithm that determines who on the internet gets to receive advertising dollars and who gets blocked.
Every year, brand safety technology companies block about $3 billion from reaching news organizations based on a single myth: that placing your ads on negative news stories is unsafe for your brand.
There is not a shred of evidence that this is true, but that hasn’t stopped them from selling at least two half-baked ways to do it: keyword blocking and sentiment analysis.
Thanks to the war on “negative” news, the sentiment rating has become one of the most critical factors behind whether a news article gets monetized. It’s also one of the shadiest, because no one knows how the machine works.
But last week, Integral Ad Science launched its “Context Control” demo, inviting users to test a tool that allegedly reads “like a human” and spits out a real-time verdict on whether a page makes readers feel good (positive) or bad (negative) overall.
We took the tool for a spin, testing it on (what else?) hate outlets. The next day, the demo had vanished.
We can guess why. The demo reveals a machine that is alarmingly, dangerously mixed up. It doesn’t know North from South or up from down. It is a college student’s computer science project, the kind she turns in to the professor at the end of the semester saying, “Good thing no one’s ever going to use this in real life, haha.”
Except IAS does use this machine in real life, at scale, to control billions of impressions and, ultimately, the fate of newsrooms around the world.
Here’s what we found:
First, we looked up a handful of extremist and extremist-adjacent sites. The sentiment machine didn’t seem to catch on:
Then we tested an article about “lesbian bed death.” We’ve long hypothesized that this socially accepted topic would be unfairly dinged because it contains two “bad words” (guess which ones?). It turns out we were right:
We also looked at Kenosha’s local newspaper covering Jacob Blake’s story:
Then we looked at coverage of Jacob Blake’s story from The Root, which provides an unflinching analysis of issues in the Black community:
IAS’s demo was removed before we could investigate further. But we saw enough to confirm that if you’re taking IAS’s advice to avoid negative content and using their technology to do it, you’re actually keeping your brand dollars away from some of the most responsible media coverage out there today. And probably still funding white nationalism.
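The pattern above is exactly what naive keyword matching produces. To be clear, IAS’s actual model is a black box, so this is our own toy sketch in Python, with an invented blocklist, showing how counting “bad words” demonetizes a benign article while waving through coded extremist copy:

```python
# Toy keyword-based "sentiment" scorer. This is an invented illustration of
# the failure mode, NOT IAS's actual (undisclosed) algorithm.

NEGATIVE_WORDS = {"death", "shooting", "virus", "sex", "war"}  # hypothetical blocklist

def sentiment_score(text: str) -> str:
    """Label text 'negative' if it contains any blocklisted word."""
    words = {w.strip(".,!?\u201c\u201d\"").lower() for w in text.split()}
    return "negative" if words & NEGATIVE_WORDS else "positive"

# A responsible health article gets dinged for a single surface-level word...
print(sentiment_score("Researchers study lesbian bed death in long-term couples"))
# -> negative

# ...while coded hate rhetoric sails through, because it avoids the list.
print(sentiment_score("They will not replace us. Defend our heritage."))
# -> positive
```

Real classifiers are more sophisticated than this, but as long as “negative” is inferred from surface features rather than meaning, the failure mode is the same.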
Why are we measuring negative sentiment to begin with? If you’re not in the adtech bubble, let’s catch you up. For years, the adtech industry has coalesced around the myth that placing ads on negative news could harm your brand.
It’s a fabrication that is as bold as it is bonkers. The fact is, no brand has ever faced a brand safety crisis for placing its ads on the news. Meanwhile, any brand that takes this advice is forfeiting its spot on the most highly trafficked, most reputable domains in the world.
It’s ridiculous and it’s coming from the top. At the start of the global pandemic, the CEO of IAS urged clients to use their technology to target “positive hero-related content.” What she didn’t mention was that there is precious little good news to go around in 2020, and that following this advice means withholding revenues from news organizations that are producing the essential coronavirus reporting we all depend on.
The anti-bad-news campaign worked, too. BuzzFeed News reported in March that one major brand blocked 2.2 million ads from appearing next to “coronavirus-related keywords,” blocking up to 56% of its impressions on The Washington Post, The New York Times, and BuzzFeed News itself.
Do publishers still receive the revenue if the ad is blocked? No one can say for sure. Adtech folks will tell you that blocking the news happens so quickly, on a page-by-page basis, that it often has to occur after the “bid” has already taken place on the ad exchange, after the budget has already been spent. This is somehow meant to defend the practice, as if to say, “it’s harmless anyway, so why complain?” Here are some reasons:
If the block happens pre-bid, it’s bad for publishers. If it happens post-bid, it’s bad for marketers. Folks, this would all be a lot easier if we just didn’t block the news.
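To make those reasons concrete, here is a simplified model of the two timings. This is our own sketch, not any exchange’s real API; real auctions are messier, and as noted above, whether publishers actually get paid on a post-bid block is genuinely unclear:

```python
from dataclasses import dataclass

# Simplified sketch of pre-bid vs post-bid blocking economics.
# An invented model for illustration; real ad exchanges are far more complex.

@dataclass
class Outcome:
    publisher_paid: bool    # did the newsroom earn revenue?
    marketer_charged: bool  # did the brand spend budget?
    ad_rendered: bool       # did the ad actually appear?

def serve_ad(page_blocked: bool, block_timing: str) -> Outcome:
    if page_blocked and block_timing == "pre-bid":
        # The buyer never bids: no spend, but the publisher earns nothing.
        return Outcome(publisher_paid=False, marketer_charged=False, ad_rendered=False)
    if page_blocked and block_timing == "post-bid":
        # The bid already won and budget is spent, but the creative is
        # suppressed. (Whether the publisher keeps the money is murky in
        # practice; it is modeled here as paid.)
        return Outcome(publisher_paid=True, marketer_charged=True, ad_rendered=False)
    # No block: everyone gets what they expected.
    return Outcome(publisher_paid=True, marketer_charged=True, ad_rendered=True)

print(serve_ad(page_blocked=True, block_timing="pre-bid"))   # publisher loses revenue
print(serve_ad(page_blocked=True, block_timing="post-bid"))  # marketer pays for nothing
```

Either way, someone loses: the newsroom pre-bid, the brand post-bid. Which is why not blocking the news at all is the only outcome with no loser.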
Our Twitter thread made its way to the adops subreddit, where one user had a message for Nandini (“leftist psycho”) and all the other idiots:
“If you think any ‘machine learning’ or other bullshit is going to fit into your subjective interpretation of all pages, think again.”
They nailed it. How is a machine supposed to be subjective?
Interpreting what we read is a human thing. We decide how we feel about an article based on our knowledge, values, cultural identity, and the sum of our experiences as sentient creatures. The only thing that matters in brand safety is what humans think.
Machines have none of that. They only know what the humans who built them taught them. And that’s where we hit a wall: these algorithms are built by teams of people that Integral Ad Science (and DoubleVerify and Oracle) prefer to keep under wraps.
We don’t know who made these algorithms. We don’t know their backgrounds, their cultural experiences, or whether they are a diverse group. We do not know whether the developers understand that racism and white nationalism are bad, or whether they have a baseline understanding of how to identify hate speech and disinformation.
And that means we are handing over one of our most critical brand safety decisions — where our brand appears — to a group of unknown people and the black box they built. Both publishers and brands are left in the dark.
If you’re uncomfortable with censorship, what about these advertising decisions we never even see? You couldn’t find a more dystopian way to kill the free press.
We’ve covered this before, but let’s reiterate: Brand safety is not about a page-by-page analysis. No social media crisis will come from an awkward ad placement. What will create a brand safety crisis, though, is funding organizations that peddle dangerous rhetoric (hate speech, conspiracy theories, dangerous disinformation).
Brand safety is about ensuring that your ad spend aligns with your brand values. When the technology doesn’t fit that goal, it does more harm than good. You don’t need to scan every page of The Boston Globe. It should simply be on your inclusion list.
If you use brand safety technology, here’s what you should do:
Thanks for reading!
Nandini and Claire
Did you like this issue? Then why not share! Please send tips, compliments and complaints to @nandoodles and @catthekin.
Check My Ads Institute is a nonprofit 501(c)(3) organization.