As of now, Starbucks, Coca-Cola, Ben and Jerry’s, Unilever, PlayStation and close to 1000 other brands have joined the #StopHateForProfit boycott. The movement is gaining traction, but are calls to stop advertising on Facebook a noble act or a knee-jerk reaction from brands wanting to be seen doing good? Here are the facts.
How hard do Facebook’s algorithms work?
Facebook’s automated tools work harder at detecting hate speech than those of any other social media platform, and recent figures show they’re improving over time.
In spring this year, AI accounted for 88% of the hate speech removed from Facebook, according to its quarterly report on harmful content. The same report put the number of harmful posts removed at 9.6 million, which means Facebook’s automated tools alone removed roughly 8.5 million posts that breached platform guidelines in Q1 2020 – an 86% increase on Q4 2019.
Go back even further and reports from the EU’s code of conduct show Facebook has been making improvements since as far back as 2018, with the EU putting the proportion of flagged hateful posts Facebook removed at 82% that year – up from 28% in 2016.
In more recent years, the EU’s data shows Facebook also reacts faster to hate speech complaints. 95.7% of complaints were assessed within a day in the latter months of 2019. By comparison, Twitter only responded to 76.6% of complaints within a day. These figures also put Facebook above YouTube (81.5% within a day).
While this article is by no means a defence of Facebook, it’s true that you rarely hear advertisers hit out at Twitter, which you could argue faces the same issues in terms of racism, Islamophobia, LGBTQ+ discrimination, sexism and other hate. That’s because Facebook is an easy target. Not only is it the biggest social media platform, its size often prevents it from acting quickly on issues like President Trump’s posts.
Facebook also has to toe a fine line between policing hate speech and protecting freedom of speech. At times, it’s a victim of its own success. By its very design, hate speech is often amplified, because divisive and emotionally-charged content is naturally more engaging. If a pro-Trump account posts extreme views, there’s going to be a bigger debate for and against than if someone posts an inoffensive opinion.
But tech aside, should we be asking more of society, brands and even politicians?
The role society plays in hate speech
For all its past scandals – and there have been many – Facebook is not the Internet’s parent. It can’t prevent individuals from holding hateful views; likewise, hate also plagues Internet forums, Reddit pages and live gaming platforms like Twitch. To ask where these views stem from is to ask a chicken-and-egg question.
While hate speech seen online may influence others to post hateful views, the fact that some of these posts are reshared or retweeted by divisive figures like Trump only normalises this behaviour. Politics aside, brands also stand accused of being complicit, often without knowing it. It’s common knowledge that various FMCG brands have peddled racial stereotypes for years, including Aunt Jemima pancake mix and Uncle Ben’s – brands which have been around for over 80 years. Many have rightly been called into question as a result of Black Lives Matter.
White supremacy aside – the example most often used when we speak about hate on social media – Black Lives Matter has shone a greater light on the casual racism millions experience every day. You could go as far as to argue that Aunt Jemima and Uncle Ben’s, which are based on age-old American stereotypes, have long played into this. The problem is, we’re only now starting to talk about these systemic issues.
At what point do we expect too much of tech?
AI and algorithms can identify a Nazi swastika, but they can’t yet reliably tell when it’s used in a hateful context rather than a historical one. While it’s common to say technology is moving ahead of humans, in some respects it’s playing catch-up too. If society cannot get the tone right on hate speech, how do we expect tech to fare better?
Social media is a mirror: it reflects what we put into it. If world events make the world more extreme and less hospitable, that will often be reflected on social media. That’s not to say there aren’t systemic issues in the general design of social media which need to be taken into account. To borrow what Mariame Kaba has said in The New York Times on defunding the police (a complex matter in itself), “We need to change our demands.” In this case, that means brands.
What if brands asked more of Facebook, with advertisers working with the platform to redefine what success looks like? As marketers, we can all agree there is an obsession with vanity metrics such as likes and comments, when these aren’t always the basis of a successful campaign. On Facebook’s part, this may mean looking at how its algorithms promote the most bizarre, controversial and emotionally-charged content. Instagram has already addressed part of this by hiding likes in certain territories – including New Zealand – to quell fears over mental health and curb the obsession with vanity metrics. What if Facebook found a way to limit the reach of hateful or emotionally-charged content, as it has with clickbait and engagement bait, except in the case of advertising from approved accounts?
Brands are within their rights to apply pressure to Facebook, but that doesn’t mean a boycott is necessarily the correct approach. Reactionary action has its place, but in a world where tech seems to be the answer to everything, we have to remember that AI isn’t a panacea for all of society’s ills. While Facebook clearly has a problem with hate speech – which breaches its platform guidelines – the fact it’s being posted there at all is a societal problem. Hate is a complex issue that we can’t solve overnight with a boycott, especially when most observers – including Mark Zuckerberg – believe that many of the brands boycotting Facebook will return when it’s more brand-safe to do so. Surely it makes more sense to work together instead.
Kunal Pattany is a public speaker, technology commentator and the founder and CEO of Digital Human. With 15 years’ experience in marketing for leading companies like Kantar, a WPP data and insights company, he has turned his attention to the impact of digital and AI on humans and society’s response to innovation. To find out more about Digital Human, click here. To talk with Kunal about speaking opportunities, email firstname.lastname@example.org 👋