New Zealand’s Sovereign Fund Reckons With a Massacre

Facebook, Twitter, and Google will be held to account, writes NZ Super CEO Matt Whineray.

People gather to honor the victims of the Christchurch terrorist attack outside the Kilbirnie Mosque in Wellington, New Zealand. (Mark Coote/Bloomberg)

Following the devastating terrorist attacks in Christchurch, New Zealand is undergoing a period of reflection on issues including hate speech, racism, gun ownership, and the role of social media.

As the country’s sovereign wealth fund, with investments in some of the world’s largest social media companies, we considered the harm caused by these platforms in live-streaming and distributing the attack.

We believe that to survive over the long term, companies must maintain their social license to operate. They must be alert to, and address, the negative social impacts of their businesses.

As a long-term institutional investor, the NZ$41 billion (US$27 billion) New Zealand Superannuation Fund has the ability to ask questions of businesses in a way that can help shape their future strategic direction. That means making sure the companies we put our money into are flexible enough to adapt when they need to and strong enough to make the tough decisions when required.

We recognise the positive and important role played by social media companies in society. However, in transmitting the Christchurch terrorist attacks on their platforms, these companies betrayed their users and society. This severely damaged their social license to operate.


Facebook livestreamed the murder of fifty New Zealanders for 17 horrifying minutes. Facebook, Alphabet (Google), and Twitter all carried links to video footage of the attack, and struggled to stamp out repeat uploads of the footage. The video reached millions of viewers.

Facebook says it did not initially receive any warnings from users as it streamed the Facebook Live video, but several users have since claimed that they did try to complain. Similarly, those who reported the video on YouTube and Twitter were initially told that it did not breach community standards. Once it became apparent that the scenes were real, all three companies worked to remove the footage, but that took time and required human intervention rather than the algorithms each company relies on for such moderation.

That this horrific video could be distributed and shared in the manner it was shows the safeguards in place are inadequate. Around the world, governments are already pursuing different regulatory responses to the attack and to how the social media companies handled it. These responses will affect the operations of the platforms.

However, as New Zealand Prime Minister Jacinda Ardern stated, social media companies must do more because, “ultimately, we can all promote good rules locally, but these platforms are global.” In Paris next month, she and French President Emmanuel Macron will co-chair a meeting of world leaders and tech company representatives aimed at ending the ability to organize and promote terrorism and violent extremism over social media platforms.

For our part, NZ Super has joined a collaborative international initiative of major investors representing more than US$1 trillion under management. Nearly 50 participants have already signed on, including 27 based in New Zealand and 20 organizations from around the globe, such as pension funds from Sweden, the U.K., and Australia. We are working with our peers to encourage social media companies to review their operational safeguards around user-generated content.

Our intention is to open a dialogue with these social media companies. As we see it, there must be a way forward that will enable these companies to continue operating successfully, but in a manner that precludes the streaming and sharing of objectionable and harmful content.

Specifically, we want to understand how they control objectionable content, how they plan to handle it in future, and whether they are investing in the right tools, resources, and people needed to quickly assess and manage violent, racist, or harmful footage and prevent its transmission. Are they developing algorithms that quickly identify real-life objectionable material? Do they spend enough on hiring and training content moderators? Do those moderators have the right tools and support? Can companies assure us that they will act more proactively in future to ensure this kind of content is not live-streamed and distributed?

By engaging with Facebook, Twitter, and Alphabet, together we can work towards ensuring they are fulfilling their duty of care to prevent harm to users and society. To ensure their long-term sustainability, these companies must take action to preserve their social license to operate and prevent further damage to their brands.

We do not claim to have the technical skills to provide the solutions to the grave issues raised by this horrible event. But we are invested in these businesses’ ultimate success. By raising our voice as shareholders, we are stating that the situation as it stands is not acceptable. They must take more responsibility for what is published on their platforms.

The collaborative engagement is being led by the New Zealand Crown-owned investors. If you would like to join the initiative, please contact: