Under proposed legislation from the U.K. government, social media companies and tech firms will be legally required to protect users and will face tough penalties if they do not comply.

Possible punitive measures include substantial fines, blocking users’ access to sites if firms fail to respond swiftly to complaints about content, and potentially making individual senior managers directly liable for non-compliance.

Holding social media firms to account

Social media and tech firms like to argue that the current system of “self-governance” for removing inappropriate content works well enough and that results are improving.

For example, video-sharing site YouTube (owned by Google), which has some 10,000 people globally monitoring and removing harmful content, has said that 7.8 million videos were taken down between July and September 2018, with 81 percent of them automatically removed by machines (and three-quarters of those clips never even receiving a single view).

Meanwhile, Facebook removed 15.4 million pieces of violent content between October and December 2018—nearly double the amount it deleted in the previous quarter.

In the case of terrorist propaganda, Facebook says 99.5 percent of all the material taken down between July and September 2018 was removed by its “detection technology.”

The company, however, failed to prevent the live-streaming of the New Zealand shootings, which resulted in 50 deaths. Consequently, on 5 April, Australia passed the Sharing of Abhorrent Violent Material Act, which introduces criminal penalties for social media companies, possible jail sentences of up to three years for tech executives (as well as fines of up to AUS $2.1 million/U.S. $1.5 million), and financial penalties worth up to 10 percent of a company’s global turnover.

Several governments around the world believe that tech firms should be easier to hold to account. The European Union, for example, is considering ways to make social media companies directly liable for harmful content, possibly through fines. In 2018, Germany introduced its “NetzDG” law, which compels companies with more than two million users to set up procedures to review complaints about content they host and to remove anything clearly illegal within 24 hours. Non-compliance can result in fines of up to €5 million (U.S. $5.6 million) for individuals and up to €50 million (U.S. $56.4 million) for companies. So far, however, no fines have been issued.

—Neil Hodge

Under a joint proposal from the Home Office and the Department for Digital, Culture, Media and Sport (DCMS), companies including Facebook, Twitter, and Google will be subject to a new statutory “duty of care” intended to make them take more responsibility for the safety of their users and to tackle harm caused by content or activity on their services.

There will also be further stringent requirements on tech companies to ensure child abuse and terrorist content is not disseminated online and to respond quickly to users’ complaints.

Other measures outlined in the Online Harms White Paper include appointing a regulator—possibly by creating a new body or by expanding the remit of an existing watchdog—with the power to force social media platforms and others to publish annual transparency reports on the amount of harmful content on their services and what they are doing to address it.

The regulator will also have powers to require additional information, including on the impact of algorithms in selecting content for users, and to ensure that companies proactively report on both emerging and known harms.

The government also wants to create a new “Safety by Design” framework to help companies incorporate online safety features as they develop new apps and platforms, as well as launch a public awareness campaign so that people can recognise and deal with deceptive and malicious behaviours online, such as grooming, extremism, and “catfishing.”

And in a probable dig at Facebook, the government wants to set up codes of practice which could force social media companies to implement more effective measures—such as using dedicated fact-checkers—to minimise the spread of misleading and harmful disinformation, particularly during election periods.

The new proposed laws will apply to any company that allows users to share or discover user-generated content or interact with each other online. This means a wide range of companies of all sizes are in scope, including social media platforms, file hosting sites, public discussion forums, messaging services, and search engines.

The government’s consultation will run until 1 July. In a statement, Digital Secretary Jeremy Wright said: “The era of self-regulation for online companies is over. Voluntary actions from industry to tackle online harms have not been applied consistently or gone far enough.”

Home Secretary Sajid Javid has taken an equally tough line. “The tech giants and social media companies have a moral duty to protect the young people they profit from,” he said in a statement. “Despite our repeated calls to action, harmful and illegal content—including child abuse and terrorism—is still too readily available online. That is why we are forcing these firms to clean up their act once and for all.”