On Thursday, one day after executives from Facebook and Twitter testified before congressional panels, social media stocks suffered an unsurprising sector-wide drop.

The decline was a likely shareholder reaction to the idea that the world’s biggest social media companies were on a collision course with increased regulation.

During her testimony before the Senate Intelligence Committee, Facebook COO Sheryl Sandberg conceded as much. The question, she said, may no longer be “if,” but “what” rules make the most sense.

That core discussion, however, was somewhat obscured by the many sideshows surrounding the day of testimony.

Sen. Marco Rubio (R-Fla.) and conspiracy-monger Alex Jones nearly came to blows in a hallway. Conservative protesters, amplifying questionable claims by President Trump, accused social media companies and their algorithms of censoring and obscuring right-wing commentary. Alphabet, Google’s parent company, refused to send its CEO.

As expected, the specter of Russian bot farms, election meddling, and “fake news” also loomed large.

Setting aside the chaos, controversy, and constitutional debate over First Amendment protections, what might a realistic regulatory approach look like? Sen. Mark Warner (D-Va.), vice chairman of the Senate Intelligence Committee, had some ideas.

In his view, given the global ubiquity of the top tech companies, social media problems will continue evolving, attracting bad actors and amplifying risks.

“Imagine the damage to the markets if forged communications from the Fed chairman were leaked online,” he said. “Or consider the price of a Fortune 500 company’s stock if a dishonest short-seller was able to spread false information about that company’s CEO—or the effects of its products—rapidly online?”

It made sense that Warner would dominate the hearing. In July and August, a white paper he wrote on the issue, “Potential Policy Proposals for Regulation of Social Media and Technology Firms,” began leaking out. This week’s hearings touched upon many of its ideas and proposals.

“Social media and wider digital communications technologies have changed our world in innumerable ways,” Warner wrote in the white paper. “The American companies behind these products and services—Facebook, Google, Twitter, Amazon, and Apple, among others—have been some of the most successful and innovative in the world. … As their collective influence has grown, however, these tech giants now also deserve increased scrutiny.”

“The speed with which these products have grown and come to dominate nearly every aspect of our social, political and economic lives has in many ways obscured the shortcomings of their creators in anticipating the harmful effects of their use,” he continued. “Government has failed to adapt and has been incapable or unwilling to adequately address the impacts of these trends on privacy, competition, and public discourse. It is time to begin to address these issues and work to adapt our regulations and laws.”

Among Warner’s proposals is a duty to clearly and conspicuously label “bots” masquerading as human users.

“Bot-enabled amplification and dissemination have also been utilized for promoting scams and financial frauds,” he wrote. “New technologies, such as Google Assistant’s AI-enabled Duplex, will increasingly make bots indistinguishable from humans (even in voice interfaces). To protect consumers, and to inhibit the use of bots for amplification of both disinformation and misinformation, platforms should be under an obligation to label bots.”

Such an effort is already underway, at least at the state level. Earlier this year, California State Sen. Bob Hertzberg introduced the Bolstering Online Transparency (B.O.T.) Act of 2018, dubbed the “Blade Runner law” after the 1982 film, to “address the growing occurrence of automated bots masquerading as individuals and being weaponized to spread fake and misleading news with a goal of lending false credibility and reshaping political debates.”


The legislative goal is to require that bots be identified as automated accounts online.
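Neither Warner’s white paper nor the California bill prescribes how a label should be implemented. As a purely hypothetical sketch (the Account type, field names, and disclosure text below are invented for illustration), a platform could satisfy such a duty by carrying an automation flag on every account and rendering it conspicuously wherever the account appears:

```python
# Hypothetical sketch of a bot-labeling duty; not drawn from any
# actual platform API or from the statutory text itself.
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    is_automated: bool  # declared at registration or set by platform detection

def display_name(account: Account) -> str:
    """Render a handle with a clear, conspicuous automation disclosure."""
    if account.is_automated:
        return f"{account.handle} [AUTOMATED ACCOUNT]"
    return account.handle

print(display_name(Account("@newsfeed_bot", True)))   # @newsfeed_bot [AUTOMATED ACCOUNT]
print(display_name(Account("@jane_doe", False)))      # @jane_doe
```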

A federal law could be crafted, Warner wrote, that imposes “an affirmative, ongoing duty on platforms to identify and curtail inauthentic accounts,” along with a Securities and Exchange Commission reporting duty to disclose to the public (and advertisers) the number of identified inauthentic accounts and the percentage of the platform’s user base they represent.

Similar legislation might also direct the Federal Trade Commission to investigate lapses in addressing inauthentic accounts under its authority.

Failure to appropriately address inauthentic account activity, or misrepresentation of the extent of the problem, could be considered a violation of SEC disclosure rules, Section 5 of the FTC Act, or both.

Under Section 230 of the Communications Decency Act, internet intermediaries like social media platforms are immunized from state tort and criminal liability for content posted by their users. Warner suggests changing that.

“Currently the onus is on victims to exhaustively search for, and report, this content to platforms,” he wrote. “Many victims describe a ‘whack-a-mole’ situation. Even if a victim has successfully secured a judgment against the user who created the offending content, the content in question in many cases will be re-uploaded by other users.”

With a revision to Section 230, platforms could be held “liable in instances where they did not prevent the content in question from being re-uploaded in the future.”

Warner credits Yale law professor Jack Balkin with the concept of “information fiduciaries”: service providers who, because of the nature of their relationship with users, assume special duties to respect and protect the information they obtain in the course of that relationship.

Balkin has proposed that certain types of online service providers (including search engines, social networks, ISPs, and cloud computing providers) be deemed information fiduciaries because of the extent of user dependence on them, as well as the extent to which they are entrusted with sensitive information.

A fiduciary duty would stipulate not only “that providers had to zealously protect user data, but also pledge not to utilize or manipulate the data for the benefit of the platform or third parties (rather than the user),” Warner wrote. “This duty could be established statutorily, with defined functions and services qualifying for classification as an information fiduciary.”

Warner also suggests enhancing privacy rulemaking authority at the FTC.

“Many attribute the FTC’s failure to adequately police data protection and unfair competition in digital markets to its lack of genuine rulemaking authority, which it has lacked since 1980,” Warner added. “Efforts to endow the FTC with rulemaking authority, most recently in the context of Dodd-Frank, have been defeated. If the FTC had genuine rulemaking authority, many claim, it would be able to respond to changes in technology and business practices.”

The United States, Warner said, could also consider rules mirroring the European Union’s General Data Protection Regulation, with key features like data portability, the right to be forgotten, 72-hour data breach notification, first-party consent, and other major data protections.

Under a regime similar to the GDPR, no personal data could be processed unless the processing rests on a lawful basis specified by the regulation or the data controller has obtained unambiguous, individualized consent from the data subject.

In addition, data subjects have the right to request a portable copy of the data collected about them and the right to have their data erased. Businesses must report a data breach within 72 hours if it has an adverse effect on user privacy.

A data portability bill could be predicated on a legal recognition that data supplied by (or generated from) users (or user activity) is the users’ and not the service provider’s. In other words, users would be endowed with property rights to their data.

Under the GDPR, service providers must provide data, free of charge, in a structured, commonly used, machine-readable format. Warner conceded, however, that a robust data ownership proposal might garner pushback in the United States and pose a number of cybersecurity risks if not implemented correctly.

“Specifically, it increases attack surface by enlarging the number of sources for attackers to siphon user data; further, if the mechanism by which data is ported is not implemented correctly, unauthorized parties could use it to access data under the guise of portability requests,” he wrote.
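For illustration only: neither the GDPR nor Warner’s white paper prescribes a specific layout, but a portability export in a “structured, commonly used, machine-readable format” might look something like the minimal JSON sketch below. Every field name and value here is invented, and the split between user-supplied and activity-generated data follows Warner’s framing of whose data it is:

```python
# Hypothetical sketch of a GDPR-style data portability export.
# The schema is invented; the regulation only requires a structured,
# commonly used, machine-readable format.
import json
from datetime import datetime, timezone

def build_portability_export(user_id: str, profile: dict, posts: list) -> str:
    """Bundle a user's supplied and generated data into portable JSON."""
    export = {
        "export_version": "1.0",
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "subject": user_id,
        "data_supplied_by_user": profile,        # e.g., name, email, settings
        "data_generated_from_activity": posts,   # e.g., posts, likes
    }
    return json.dumps(export, indent=2)

print(build_portability_export(
    "user-123",
    {"display_name": "Example User", "email": "user@example.com"},
    [{"posted_at": "2018-09-05T14:00:00Z", "text": "Hello, world"}],
))
```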

Sandberg’s testimony Wednesday brought to mind approaches that would be familiar in the compliance department of a major financial institution.

“We’re investing heavily in people and technology to keep our community safe and keep our service secure,” she said. “This includes using artificial intelligence to help find bad content and locate bad actors. … We’re getting better at anticipating risks and taking a broader view of our responsibilities.”

“We review reports in over 50 languages, 24 hours a day. Better machine learning technology and artificial intelligence have also enabled us to be much more proactive in identifying abuse,” she added. “We use both automated and manual review to detect and deactivate fake accounts, and we are taking steps to strengthen both.

“These systems analyze distinctive account characteristics and prioritize signals that are more difficult for bad actors to disguise. By using technology like machine learning, artificial intelligence, and computer vision, we can proactively detect more bad actors and take action more quickly.”

Sandberg also discussed Facebook’s maintenance of compliance controls.

“We’ve created a strong program to ensure compliance with our legal obligations and support our efforts to prevent foreign interference and support election integrity,” she said. “Facebook’s compliance team maintains a Political Activities and Lobbying Policy that is available to all employees. This policy is covered in our Code of Conduct training for all employees and includes guidelines to ensure compliance with the Federal Election Campaign Act.”

This process, in another parallel to financial industry regulation, includes Suspicious Activity Reporting.

“We have processes designed to identify inauthentic and suspicious activity, and we maintain a sanctions compliance program to screen advertisers, partners, vendors, and others using our payment products,” Sandberg said. “Our payments subsidiaries file Suspicious Activity Reports on developers of certain apps and take other steps as appropriate, including denying such apps access to our platforms.”

Data protection efforts include “measures to protect people who are likely to be targeted in times of heightened cyber-activity, including elections, periods of conflict or political turmoil, and other high-profile events,” Sandberg said. The strategy includes “building AI systems to detect and stop attempts to send malicious content.”

Twitter CEO Jack Dorsey, sporting a nose ring, was less polished than Sandberg but also stressed pre-regulation improvements to his site in rooting out bots and fake news.

“Twitter recently developed and launched more than 30 policy and product changes designed to foster information integrity and protect the people who use our service from abuse and malicious automation,” he said.

Twitter, he said, “continues to develop the detection tools and systems needed to combat malicious automation on our platform” and has “refined its detection systems.”

“Due to technology and process improvements, we are now removing 214 percent more accounts year-over-year for violating our platform manipulation policies,” Dorsey said.