Principles over promises: Responding to the Christchurch terrorism

Almost exactly a year ago, Facebook was in the news in New Zealand over a row with Privacy Commissioner John Edwards. The heated public exchange between Edwards and the company took place in the context of the Cambridge Analytica scandal, in which the private information of millions of Facebook users was harvested, illicitly, for deeply divisive, partisan and political ends. Edwards accused the company of breaching New Zealand’s Privacy Act. The company responded that it hadn’t and that the Privacy Commissioner had made an overbroad request which could not be serviced. Edwards proceeded to delete his account and warned others in New Zealand that continued use of Facebook could impact their right to privacy under domestic law. Just a few months prior, the COO of Facebook, Sheryl Sandberg, was pictured on Facebook’s official country page with New Zealand PM Jacinda Ardern. The caption of the photo, which captured the two women in an embrace after a formal meeting, flagged efforts the company was making to keep children safe. It is not surprising that Sandberg also wrote the paean to Ardern in last year’s Time 100 list of the most influential people.


The violence on the 15th of March in Christchurch dramatically changed this relationship. In response to the act of terrorism, Facebook announced, for the first time, a ban on “praise, support and representation of white nationalism and separatism on Facebook and Instagram”. Two weeks after the killings in Christchurch, a message from Sandberg appeared at the top of Instagram feeds in the country and was carried in local media. The message noted that Facebook was “exploring restrictions on who can go Live depending on factors such as prior Community Standard violations” and that the company was “also investing in research to build better technology to quickly identify edited versions of violent videos and images and prevent people from re-sharing these versions.” Additionally, the company was removing content from, and all praise or support of, several hate groups in the country, as well as in Australia. Sandberg’s message called the terrorism in Christchurch “an act of pure evil”, echoing verbatim David Coleman, Australia's immigration minister, in a statement he made after denying entry to far-right commentator Milo Yiannopoulos, who after the attack had referred to Muslims as “barbaric” and Islam as an “alien religious culture”. Last week, New Zealand’s Chief Censor, David Shanks, declared the document released by the killer ‘objectionable’, which now makes it an offence to share or even possess it. Following up, authorities also made the possession and distribution of the killer’s live stream video an offence. Facebook, Twitter and Microsoft have all been to New Zealand in the past fortnight, issuing statements, making promises and expressing solidarity. Silicon Valley-based technology companies are in the spotlight, but I wonder: why now? What has changed?

Facebook Live debuted in 2015. By June 2017, a report by BuzzFeed News had flagged that at least 45 instances of grisly violence, including shootings, rapes, murders, child abuse and attempted suicides, had been broadcast on the service. That number will be higher now, even without counting Christchurch. In May 2017, the Founder and CEO of Facebook, Mark Zuckerberg, promised that 3,000 more moderators, in addition to the 4,500 already working, would be hired to review live and native video content. Zuckerberg and Sandberg are very good at making promises to do more, or better, in the aftermath of the increasingly frequent and major privacy, ethics, violence and governance scandals Facebook finds itself in the middle of. Far less apparent and forthcoming, over time, is what the company really does, invests in and builds.

There are also inconsistencies in the company’s responses to platform abuses. In 2017, a half-hour live video on Facebook of a man bound, gagged and repeatedly cut with a knife was viewed by 16,000 users. By the time it was taken down, it had spread to YouTube. A company spokesperson at the time was quoted as saying that “in many instances… when people share this type of content, they are doing so to condemn violence or raise awareness about it. In that case, the video would be allowed.” Revealingly, the same claim was not made about the Christchurch killer’s production.

The flipside to this is the use of Facebook’s tools to bear witness to human rights abuses. In 2016, the killing of a young black American, Philando Castile, by police in Minnesota was live-streamed on Facebook by his girlfriend, Diamond Reynolds, who was with him in the car. The video went viral and helped document police brutality. There is also clear, documented evidence that violence captured from a Palestinian perspective, as well as content on potential war crimes, is at greater risk of removal from social media platforms. In fact, more than 70 civil rights groups wrote to Facebook in 2016, flagging this problem of unilateral removals based on orders generated by repressive regimes, which gives perpetrators greater impunity and murderers stronger immunity.

It is axiomatic that deleting videos, banning pages, blocking groups, algorithmic tagging and faster human moderation do not erase the root causes of violent extremism. The use of WhatsApp in India to seed and spread violence is a cautionary tale in how the deletion of content on Facebook’s public platforms may only drive it further underground. The answer is not to weaken or ban encryption. As New Zealand shows us, it is to investigate ways through which democratic values can address, concretely and meaningfully, the existential concerns of citizens and communities. This is hard work, and beyond the lifespan of any one government. Nor can it be replaced by greater regulation of technology companies and social media: the two go hand in hand, and one is not a substitute for the other. It is here that governments, as well as technology companies, stumble, by responding to violent incidents in ways that don’t fully consider how disparate social media platforms and ideologues corrosively influence and inform each other. Content produced in one region or country can, over time, inspire action and reflection in a very different country or community.

Take an Australian Senator’s response, on Twitter, to the Christchurch terrorism. Though the Australian PM condemned it, the very act of referring to the Senator and what he posted on Twitter promoted the content to different audiences, both nationally and globally. The Twitter account and Facebook page of the Senator in question produce and promote an ideology essentially indistinguishable from the Christchurch killer’s avowed motivations. This is the normalisation of extremism under the guise of outrage and selective condemnation. What should the response be?

In Myanmar, an independent human rights impact assessment of Facebook’s role in the country, conducted last year, resulted in the company updating its policies to “remove misinformation that has the potential to contribute to imminent violence or physical harm”. And yet it is unclear how what may now be operational in Myanmar is applied in other contexts, including in First World countries at risk of right-wing extremism.

I wonder: does it take grief and violence on the scale of Christchurch to jolt politicians and technology companies into acting on what has been evident for much longer? And in seeking to capitalise on the international media exposure and attention around an incident in a First World country, are decisions made in, or because of, New Zealand risking norms around content production, access and archival globally, on social media platforms that are now part of the socio-political, economic and cultural DNA of entire regions? Precisely at a time when any opposition to, or critical questioning of, decisions taken on behalf of victims and those at risk of violence can generate hostility or pushback, we need to safeguard against good-faith measures that inadvertently risk the very fibre of liberal democracy that politicians in New Zealand and technology companies seek to secure. An emphasis on nuance, context, culture and intent must endure.

So must meaningful investment, beyond vacuous promises. In 2016, Zuckerberg called live video “personal and emotional and raw and visceral”. After the Christchurch video’s visceral virality, it is unclear if Sandberg pushed this same line with PM Ardern. In fact, Facebook astonishingly allowed an Islamophobic ad featuring PM Ardern wearing a hijab, which was taken down only after a domestic website’s intervention. Clearly, challenges persist. Social media companies can and must do more, including changing the very business models that have allowed major platforms to grow to a point where they are, essentially, ungovernable.

Grieving, we seek out easy answers. Banning weapons and blocking extremist content help contain and address immediate concerns. Ideas, though, are incredibly resilient, and always find a way to new audiences. The longer-term will of the government to address hate groups, violent extremism in all its forms and the normalisation of othering, from Māori to Muslim, requires sober reflection and more careful policymaking. What happens in New Zealand is already a template for the world. We must help PM Ardern and technology companies live up to this great responsibility.

Sanjana Hattotuwa is a PhD student at the National Centre for Peace and Conflict Studies (NCPACS), University of Otago.



© Scoop Media
