New Zealand is being advised to do a better job of social media reform than Australia has.
Steps towards global regulation of social media companies to rein in harmful content look likely, with the Government set to take a lead role in a global initiative, the Herald has learned.
The will of governments to work together to tackle the potentially harmful impacts of social media will only have grown stronger in the wake of the terror attacks in Sri Lanka, where Facebook and Instagram were temporarily shut down to stop the spread of false news reports.
Following the Christchurch terror attack, Prime Minister Jacinda Ardern has been working towards a global co-ordinated response that would make the likes of Facebook, YouTube and Twitter more responsible for the content they host.
The Government has been talking to global partners and the Herald understands an announcement is due soon.
Dr Belinda Barnet, a social media lecturer at Melbourne's Swinburne University, told Heather du Plessis-Allan that this is the right move.
"I think she's doing the right thing, this does need to be a global approach. If you have a small country such as Australia or New Zealand, that content could be hosted in other countries."
Barnet says that Facebook has committed to legislation, and they do have the ability to solve this issue.
"Facebook has come up with a solution for a difficult problem in the past, they have learned how to do it. It wouldn't be a single algorithm, it would be a combination of tweaking the existing algorithms that detect violent content alongside community moderator tools."
A spokeswoman for the Prime Minister would not comment on the matter last night.
Currently, multinational social media companies have to comply with New Zealand law, but they also have an out-clause - the safe harbour provisions - that means they may not be legally liable for what users publish on their sites. Those provisions were not invoked in relation to the livestream video of the massacre in Christchurch.
Other countries, including Australia, are taking a more hardline approach that puts more onus on these companies to block harmful content, but the Government has decided a global response would be more effective, given the companies' global reach.
However, Barnet says that Australia's laws were "slapped together" and are not very well thought through.
"Ostensibly, it limits the amount of time that violent, abhorrent material can be hosted on a platform, and executives have to take the content down as soon as it's noticed."
However, the bill does not specify a time-frame, leaving a lot up to assumption.
Barnet says that the law has not yet been tested in court, and until it is, its requirements are unlikely to be made more specific.
"We do need laws like this. I'm fully supportive of a legislative structure, but it's just that this one was not terribly well thought through."
Facebook has faced a barrage of criticism for what many see as its failure to immediately take down the livestream and minimise its spread, although the company removed 1.5 million videos of the attack within 24 hours.
Facebook took down the video 12 minutes after the livestream had ended following a notice from police, not from Facebook users or its own algorithms, which reportedly failed because the footage did not have enough gore.
However, New York-based researcher Eric Feinberg reported he had found another 12 copies across Facebook (which was hosting five copies), Facebook-owned Instagram (six) and Google-owned YouTube (four). All were live as of Monday NZT.
One of the versions on YouTube - two minutes and 17 seconds of "highlights" - had clocked 773,009 views according to a screen grab taken by Feinberg on Monday.
"It was posted on March 15, 2019 and still up on April 21, 2019 at 5.10pm EDT," Feinberg told the Herald.
The hate content researcher has been continually locating copies of the clip - which has been banned by NZ's Chief Censor - since March 15.