Facebook reforms not good enough
7 July 2020
Opinion: Matt Bartlett explains why Facebook should come to see political advertising and its challenges in the same way as Twitter.
Even before Covid-19 prevented most of us from leaving our homes, social media platforms like Facebook and Twitter had become society’s main public square. We gather online in Facebook and WhatsApp chats and average two and a half hours a day on social media, trawling through never-ending feeds of updates and information.
This has changed political life as well as the social fabric. Even civil activism, such as the mass support for the Black Lives Matter movement, is now (at least in part) conducted online.
With these technology platforms becoming increasingly central, the way these online spaces have been designed by their overlords in Silicon Valley has come under heightened scrutiny. Facebook’s annus horribilis in 2016, from Cambridge Analytica to the invasion of Russian troll bots, has driven the company into taking steps to prevent the degradation of its platform. After initially trying to bluster through the criticisms, Mark Zuckerberg eventually conceded “we made mistakes, there’s more to do, and we need to step up and do it”.
This is the context in which Facebook’s recent announcements about political advertising in New Zealand should be understood. As part of a stated effort to “fight the spread of misinformation”, Facebook is:
Banning foreign political advertising — as of next month, only New Zealanders will be able to post ads that make references to political figures, parties, social issues or the country’s election.
Making a previously voluntary transparency tool mandatory — anyone wanting to run a political ad will need to confirm their identity, publicly disclose who was responsible for the ad, and provide public contact information so they can be held accountable for the ads.
Make no mistake, these are welcome changes. It’s pretty tough to see how New Zealand’s democracy benefits from the intervention of foreign actors with money. Similarly, holding political parties to different transparency standards was always a strange notion, and it is good to see Facebook take some sort of action on this.
But let’s also be clear about how incremental and minor these changes are. For all of Facebook’s fighting talk about a campaign against fake news and misinformation, the company has ruled out policing the veracity of political ads. In response to Twitter fact-checking two notable lies from the tweeter-in-chief himself, Zuckerberg pointedly said that Facebook does not want to be the “arbiter of truth”.
This seemingly benign claim to neutrality was also central to Facebook’s announcement in New Zealand. While Facebook is introducing a third-party fact-checking system to review the accuracy of some posts, Mia Garlick (Facebook’s director of policy for Australia and New Zealand) explicitly noted that politicians and political parties would be exempt from any fact-checking:
“We don’t think it’s the role of a US-based private company to be interfering in the political discourse of elected New Zealand officials or candidates in New Zealand and so we think that should just be part of the public debate and that the transparency that arises from the debate surrounding those comments can be part of sunlight that holds people accountable”.
Zuckerberg and Garlick set out a sunny view of the world where Facebook is nothing more than a neutral vehicle for the public conversation, allowing for a free and frank exchange of views. However, this carefully constructed piece of PR does not reflect how the platform works in the real world. Facebook’s decisions, omissions and algorithms shape political debate in a number of important ways, no matter how much the Facebook CEO would prefer to ignore them.
For a start, take the Facebook content algorithm, designed to promote engaging content. “Facebook’s profits depend on people coming back, clicking and sharing things,” said Alex Howard of the Sunlight Foundation, which advocates for transparency in political advertising. “It’s not based on, ‘Did we arrive at a resonated discourse on this policy proposal?’ or ‘Did the best questions get asked at this town hall?’”
By prioritising “engaging” content, Facebook is shaping the discourse. Put another way, Facebook systematically deprioritises content which is “unengaging” — think tax plans and policy proposals — no matter their value to the political conversation.
Political advertising amplifies this distortion. As this academic study indicates, Facebook’s system for political advertising is practically designed to polarise. Because Facebook has so much data about each of us, political advertising is “microtargeted” at the people who are most likely to agree with the ad. Believe it or not, Facebook’s advertising algorithm actually makes it more expensive (sometimes twice as much) to deliver ads to people unlikely to agree with a political ad, compared to those who would support it.
In this way, Facebook’s advertising structure encourages political parties to focus on their own echo chamber and support base, rather than on messages that might reach across the aisle. It also encourages the demonisation of rival politicians, particularly given that outright lies and false smears are given a free pass by Zuckerberg. The Trump campaign even tried its luck with ads featuring literal Nazi symbols, which is where Facebook finally drew the line. Weaponised racism and xenophobia are okay, apparently, so long as you don’t hark back to the Führer.
There are a variety of other well-documented issues with political advertising on social media, including the preponderance of targeted misinformation designed to whip up fear and prejudice. In short, Facebook’s PR around “not interfering in the public discourse” obscures and deflects from the uncomfortable fact that Facebook’s choices and omissions have an enormous amount of influence on how the public conversation unfolds.
So while greater regulation of foreign political ads and more transparency can only be good for the digital public square, Facebook’s reforms do not go nearly far enough. Twitter’s decision in 2019 to ban political ads altogether represents a model of what real change in this area would look like, with CEO Jack Dorsey noting that “while internet advertising is incredibly powerful ... that power brings significant risks to politics”.
I am hopeful that Facebook and its CEO will, eventually, come to see political advertising and its challenges in the same way as Dorsey. As one commentator astutely observed about Twitter’s decision: “When faced with a choice between ad dollars and the integrity of our democracy, it is encouraging that, for once, revenue did not win out.”
Matt Bartlett is a professional teaching fellow in the Law School.
This article reflects the opinion of the author and not necessarily the views of the University of Auckland.
Used with permission from Newsroom: “Facebook reforms not good enough”, 7 July 2020.