Welcome back to BRANDED, the newsletter exploring how marketers broke society (and how we can fix it).
Every day, a handful of tech companies decide how billions of advertising dollars will be spent on the web. We don’t see these decisions take place, but brand safety algorithms scan every page and every piece of content we look at to decide whether it’s “safe” before serving an ad.
These millions of little verdicts add up. They determine who on the web gets monetized — and who gets blocked.
It’s a big responsibility, and one that, it appears, they do not take seriously. While brand safety tech companies have been extremely secretive about how it all works, it turns out they have also been unwittingly sharing their own proprietary data all this time.
Dr. Krzysztof Franaszek of Adalytics contacted us with a startling discovery last month: he was able to see how brand safety companies classify every individual news article he reads by right-clicking on “inspect” in Google Chrome. He could see exactly which brands were blocking which articles, because the keyword blocklists of global brands were also sitting out there. In other words, there’s a leak. A pretty big one.
Three brand safety companies — Oracle (Grapeshot & Moat), Integral Ad Science and Comscore — forgot to encrypt their data, giving us our first real look into how they categorize, block and move your advertising dollars across the web.
This issue is the first in a series of posts that will explore Krzysztof’s findings.
While digging around, he realized he was seeing the real-time ratings behind every article he was reading on the web, from the New York Times to Vice. In other words, he could see the automated signals they use to decide whether to allow or block an ad on a webpage, all within a fraction of a second and in total secrecy.
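As a rough sketch of what that kind of inspection surfaces: vendor verdicts can ride along as query parameters on the ad calls visible in the browser's Network tab. The "gv_safe" key below comes from the leaked Grapeshot data described in this piece; the URL and the other parameter names are illustrative placeholders, not confirmed vendor fields.

```python
from urllib.parse import urlparse, parse_qsl

# Hypothetical ad-call URL of the kind visible via right-click > Inspect.
# "gv_safe" appears in the leaked Grapeshot data; "risk" and "categories"
# are illustrative placeholders, not confirmed vendor parameter names.
ad_call = (
    "http://ads.example-vendor.com/bid?"
    "page=https%3A%2F%2Fnews.example.com%2Fcovid-article&"
    "gv_safe=false&risk=high&categories=death_injury,military_conflict"
)

def extract_verdicts(url, keys=("gv_safe", "risk", "categories")):
    """Pull brand-safety fields out of an ad request's query string."""
    params = dict(parse_qsl(urlparse(url).query))
    return {k: params[k] for k in keys if k in params}

print(extract_verdicts(ad_call))
# → {'gv_safe': 'false', 'risk': 'high', 'categories': 'death_injury,military_conflict'}
```

Because these requests went out unencrypted, anyone watching the page load could read the same verdicts.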
👉 NOTE: Krzysztof has published his research and methodology in full here.
With the help of URLScan.io and Internet Archive, he was able to retrieve the following page-level values assigned to national news publishers...
- Brand safety floor categories. Commonly known as “the dirty dozen,” these are the categories that both the 4As and IAB Tech Lab urge advertisers to mark as “never appropriate”: adult content, arms, crime, death or injury, online piracy, hate speech, military conflict, obscenity, illegal drugs, spam, terrorism, and tobacco.
- “Low,” “medium” or “high risk” content ratings. This data appears to map to IAB Tech Lab’s Content Taxonomy.
- “gv_safe” flags. Grapeshot had explicitly marked some articles as “gv_safe.” It appears that Grapeshot has an option for brands to only advertise on pages they mark as “safe.”
- Client keyword blocklists. Integral Ad Science has been leaking several of its clients’ keyword blocklists, including Fortune 500 companies in the financial, telecom and consumer goods sectors. We will talk about this in our next issue of BRANDED.
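To give a feel for the retrieval step: the Internet Archive exposes a CDX API that lists archived snapshots of any URL, and a snapshot's HTML can then be scanned for the values above. This is a hedged sketch of only the Wayback side of the lookup (Krzysztof's actual pipeline also used URLScan.io); the article URL and response data here are made-up examples.

```python
import json
from urllib.parse import urlencode

# Build a query against the Internet Archive's CDX snapshot index.
def cdx_query(page_url, limit=5):
    base = "http://web.archive.org/cdx/search/cdx"
    return base + "?" + urlencode(
        {"url": page_url, "output": "json", "limit": limit}
    )

# A trimmed, made-up example of the JSON the endpoint returns:
# the first row is a header, subsequent rows are snapshots.
sample_response = json.loads("""[
  ["urlkey", "timestamp", "original", "mimetype", "statuscode", "digest", "length"],
  ["com,example-paper)/2020/article", "20201201120000",
   "https://example-paper.com/2020/article",
   "text/html", "200", "ABC123", "54321"]
]""")

header, *rows = sample_response
snapshots = [dict(zip(header, row)) for row in rows]
print(snapshots[0]["timestamp"])  # → 20201201120000
```

Each snapshot's archived page source can then be searched for the vendor tags and ratings it was served with at the time.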
This is significant. We know that brand safety technology blocked ~$3 billion globally from the news industry last year. We know how one major company's COVID-19 ad keyword blocking forced trusted news sources to forfeit up to 55% of ad placements on their pandemic-related news. And occasionally, we can see with our own eyes when an ad has been blocked from the news.
But no one, not even news publishers themselves, knows the extent to which their articles are being blocked from monetization. Today, we’re looking at the brand safety machine whirring in real-time — and it does not look good.
Brand safety companies have just one job: to keep their clients off hateful and extremist content.
Much of the work has already been done for them. In September, the Global Alliance for Responsible Media adopted a framework that nails down pretty clear definitions of what hate speech and extremism are.
Before that, the American Association of Advertising Agencies (or the 4As) came up with the Brand Safety Floor Framework, which is made up of the aforementioned dealbreakers that most brands have been clear they don’t want to be on.
All they have to do is implement those requirements. Here’s what they’re doing instead:
It appears that brand safety technology cannot tell the difference between actual offensive content and journalists reporting on the issues to inform the public. Krzysztof’s data shows us that Oracle has aggressively blocked the news:
Even at the newspaper of record, certain beats appear to be entirely unsafe. At the New York Times, nearly every article written by these top reporters was marked unsafe:
If they’re this good at catching reporters, they must be amazing at catching actual hate and extremism, right? Not exactly.
Having taken Integral Ad Science’s “Context Control” tool for a spin a few months ago, we knew that brand safety tools appear to be unable to filter for extremist & white supremacist websites. Krzysztof’s study confirms that it gets worse.
Both Moat and Grapeshot are allowing brands to advertise alongside conspiracy theories, COVID disinformation, election disinformation, and much more.
Crucially, Krzysztof did not find a disinformation parameter in Grapeshot, IAS, Comscore, or Moat. They don’t appear to be filtering for the one thing that does cause brand safety crises.
The Brand Safety Floor Framework offers a consistent set of definitions for brand safety companies to implement.
At a minimum, they should all be catching the same “bad” content. But we’re not even seeing that. On the Wall Street Journal, Moat and Comscore’s algorithms only agreed on what is brand unsafe 59% of the time. Which raises the question: what exactly is this technology good for?
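The agreement figure above is simple to compute: given two vendors' per-article verdicts, percent agreement is the share of articles on which they match. The verdict lists below are made up for illustration, not drawn from the actual dataset.

```python
# Percent agreement between two vendors' per-article verdicts.
# The sample verdicts are illustrative, not real Moat/Comscore data.
def percent_agreement(a, b):
    assert len(a) == len(b), "verdict lists must cover the same articles"
    return sum(x == y for x, y in zip(a, b)) / len(a)

vendor_a = ["unsafe", "safe", "unsafe", "safe", "unsafe"]
vendor_b = ["unsafe", "unsafe", "safe", "safe", "unsafe"]

print(f"{percent_agreement(vendor_a, vendor_b):.0%}")  # → 60%
```

Worth noting: with only two labels, two coin-flipping classifiers whose verdicts were roughly balanced would agree about 50% of the time by chance alone, which makes 59% look even less impressive.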
The implications of this research are breathtakingly anti-democratic. Ads are the currency of the digital economy, and brand safety technology companies have been acting with impunity because no one has the information to challenge them.
We don’t know how many media outlets have been run out of existence because of brand safety technology, nor how many media outlets will never be able to monetize critical news coverage because the issues important to their communities are marked as “unsafe.”
Brand safety is a product looking for a problem to solve. As they pivot from keyword blocking (which always sucked) to their fully opaque “contextual intelligence” solutions, there is little evidence that any of it works.
The CMO of a multibillion dollar company recently told AdAge:
“Nobody knows anything...There's a lot of tricks these companies can do to make their products look like they're working, and they work—until they don't.”
We get it. It’s convenient to not know how your budget is being spent. The internet is big, and it’s easy to hand over these decisions to someone else. It helps us avoid uncomfortable, politicized conversations about what media environments are appropriate for our brands.
But brand safety requires us to do two things: 1.) keep our ads away from hate speech and 2.) fund our news ecosystem. When you know your vendors are failing at both, can you afford to look away?
In our next issue of BRANDED, we will examine keyword blocklists of the Fortune 500. Some of these blocklists are so long, they have effectively cut off the entire news industry.
Until then, there are three ways to act:
Krzysztof’s research is available in AirTable throughout his piece, so you can play around with the data yourself. Let us know if you see anything interesting!
Krzysztof’s browser extension lets us analyze the advertisers, for once. You’ll be able to see which brands are targeting you the most, figure out how many (ir)relevant ads you’re seeing per day, and estimate how much companies are paying for your attention.
In the future, you will be able to share your ad data for money, and help marketers understand if their ads are actually being shown to the intended audiences.
We work with global brands to develop brand safety playbooks, so you can draw your own line and operationalize your brand values in your media spend. We are booking into March 2021.
With that, we’ll close out 2020 with a hearty thank you to you, our readers. This has been a wild year, and we’re grateful to be in your inbox.
See you in the new year!