2) "I think we should have … degrees of advertiser safety."
Musk went on to argue that advertisers should be open to appearing next to “not safe for work stuff,” even “something pretty spicy,” because “it will cost less per impression.” But most brands don’t want to be associated with “spicy” content, no matter how cheap the placement. In fact, it’s rare enough that it can make headlines when advertisements for major brands appear on X-rated videos. That Musk misses this basic point shows how little he grasps advertisers’ concerns.
3) "There's a list, I think Chaya Raichik was pointing out, like testing the system. ... that list [of brand-unsafe keywords] needs to be trimmed. That's the sort of not safe for advertising list."
Chaya Raichik, the woman behind LibsofTikTok, has made a career out of attacking LGBTQ folks. She’s been credited with inspiring bomb threats to children’s hospitals. Her “test” involved replacing parts of words with symbols to see if she could get ads to run alongside brand-unsafe content. Her effort prompted Musk to openly muse that ad restrictions were too tight, even as he openly admitted that he doesn’t understand what brands might consider a “bad word.” And, instead of trying to learn, Musk called for the not-safe-for-advertising list to be trimmed down. One of the words Raichik flagged from this list was “groomer,” which accounts (including hers) use as a slur to falsely associate the LGBTQ community with pedophilia. And it comes as no surprise that one of X's most toxic users is looking for ways to game the ad system.
4) "One person's hate speech,” he said, “is another person's free speech."
Let’s be clear — responsible content moderation is necessary for brand safety. The current policy has turned X into a minefield for advertisers, who are just one ad campaign away from their brand funding white nationalist accounts. Which happens a lot. After all, advertisers have the right to their own speech to decide which content to support, and it’s clear X's built-in brand safety technology with Integral Ad Science isn’t working, which is no surprise to us. We previously tested it out and found it declared pro-eugenics content brand safe.
5) "Russell Brand is not a bad guy" and "we're just sort of in the witch burning phase here."
YouTube demonetized Brand after he was accused of sexually assaulting and grooming women and girls as young as 16 years old. But Musk thinks Brand should have been able to collect ad dollars until he is found guilty — even as he is being investigated by UK police. Innocent until proven guilty is the standard for criminal law, not brand safety. Musk doesn’t seem to understand that.
6) "I think we're running out of conspiracy theories that didn't turn out to be true."
7) "Our current approach is […] you can say things that are hateful, hateful, but legal on the platform, but we're not going to recommend that to others. That's the current approach that we have."
Musk says X won’t recommend hateful content to others, but we know this isn’t true. The Washington Post found hate speech, including posts from a “self-proclaimed Nazi,” being boosted to the “For You” timeline. X is also still putting ads under posts promoting things like climate denialism, white pride and antisemitism. And with X's revenue sharing program for verified accounts, ad dollars have the potential to go straight from advertisers to the site's most toxic users.
Musk said, “I think we're doing a pretty good job.” The advertisers fleeing X seem to indicate otherwise: Despite X CEO Linda Yaccarino's recent claims, the Financial Times reported that the company's ad revenue in the U.S. is still down 60 percent.
And with Musk’s own words showing he either doesn’t understand advertisers’ brand safety concerns or doesn’t care, why would they want to come back?