From its humble beginnings in a Harvard University dorm room, Facebook has grown to become a dominating force in social media and an international addiction. More than 2 billion people use Facebook every month; 1.4 billion people use it every day, more than the population of any country on earth except China.
The company’s business model, one aped by many of its competitors, has become an increasingly problematic double-edged sword. The conflict is inherent: Facebook must leverage user data to turn a profit and keep the service free, yet it risks alienating those same users with data collection practices that go beyond their comfort level.
Facebook’s problems are perfectly illustrated by a scandal the company is still struggling to get past. A quiz app used by 300,000 people led to information about 87 million Facebook users being obtained by the international political consultancy Cambridge Analytica. The firm allegedly made improper use of psychological profiling, made possible by the data, in its well-compensated quest to influence election results around the globe, most notably on behalf of Donald Trump in his successful 2016 U.S. presidential bid.
Facebook Founder and CEO Mark Zuckerberg, who once famously declared that privacy is no longer a “social norm,” has retreated from that sentiment as scrutiny of his company heats up. He recently announced that his company was simplifying and improving customer privacy settings, allowing more control over what data is, or isn’t, shared.
But are these, and other steps, enough?
We took a look at Zuckerberg’s comments during two days of Congressional hearings on Capitol Hill last week, and other recent statements, to uncover regulatory and compliance issues and lessons that may be worth the consideration of other companies.
A bipartisan scolding
Over two days and nearly seven hours of testimony before Congressional committees, Zuckerberg faced an audience of table-pounding, though frequently technology-befuddled, legislators.
“One reason that so many people are worried about this incident is what it says about how Facebook works,” said Sen. John Thune (R-S.D.). “The idea that for every person who decided to try an app, information about nearly 300 other people was scraped from your services is, to put it mildly, disturbing. The fact that those 87 million people may have technically consented to making their data available doesn’t make most people feel any better.”
He fretted about recent reports that malicious actors were able to use Facebook’s default privacy settings to match e-mail addresses and phone numbers found on the so-called “dark Web” to public Facebook profiles, “potentially affecting all users.”
“Most of us understand that whether you're using Facebook or Google, or some other online services, that we are trading certain information about ourselves for free or low-cost services,” he added, “but for this model to persist, both sides of the bargain need to know the stakes that are involved. Right now, I’m not convinced that Facebook’s users have the information that they need to make meaningful choices.”
“There’s more we can do here to limit the information developers can access and put more safeguards in place to prevent abuse,” Zuckerberg wrote in a post-scandal Facebook post. Among the immediate changes he outlined: removing developers’ access to a user’s data if that user hasn’t used the app in three months; and requiring developers not only to get approval but also to sign a contract that imposes strict requirements for access to posts or other private data. From now on, every advertiser who wants to run political or issue ads will need to be authorized, by confirming his or her identity and location.
A call for regulation, but how and what?
After years of what amounted to self-regulation within the social media sector, Zuckerberg admitted that the times may call for new rules. The question, one legislators were also asking at the recent hearings, is what that regime might look like.
Zuckerberg, after suggesting his support would come on a rule-by-rule basis, was asked by Sen. Lindsey Graham (R-S.C.) to “submit some proposed regulations.” He agreed to do so.
New rules are needed to arm consumers with information that enables them to decide for themselves whether they should trust a company with their personal information, said Craig Newman, head of the data privacy practice at Patterson Belknap Webb & Tyler. “Current measures to assess cyber-preparedness are either not compulsory or too complex, meaning even if their principles are solid, their adoption is hindered by either a lack of public understanding, or a company’s unwillingness to participate.”
“When we go to dinner in major cities, restaurants display letter grades from the health inspector in the window,” Newman added. “There is no equivalent standard by which a consumer may judge the data security practices of a business, but there certainly ought to be. The simple grading system used by restaurant regulators can, and should, be a model to inform the public about the digital security of businesses that store sensitive consumer data.”
One piece of proposed legislation Zuckerberg said he would support is the Honest Ads Act.
Sens. Mark Warner (D-Va.), Amy Klobuchar (D-Minn.), and John McCain (R-Ariz.) in October introduced the legislation to help prevent foreign interference in future elections and improve the transparency of online political advertisements.
Digital platforms with at least 50 million monthly visitors would be required to maintain a public file of all electioneering communications purchased by a person or group that spends more than $500 total on ads published on their platform.
“This is an important issue for the whole industry to move on,” Zuckerberg told Congress.
Sen. Roy Blunt (R-Mo.) said the Federal Trade Commission has “flagged cross-device tracking as one of their concerns” and that too could be in a regulatory wish list.
Rules may also target data protection measures for minors and their access to social media sites, said Sen. Dick Durbin (D-Ill.).
Facebook recently announced Messenger Kids, an app allowing children between the ages of 6 and 12 to send video and text messages as an extension of a parent’s account.
“The app collects a minimum amount of information that is necessary to operate the service,” Zuckerberg said. “In general, that data is not going to be shared with third parties. It is not connected to the broader Facebook.” He added that any service offered on the site is intended to be compliant with the Children’s Online Privacy Protection Rule, as enforced by the FTC.
From Europe to Farmville
On May 25, 2018, the EU’s General Data Protection Regulation (GDPR) takes effect, creating a harmonized set of rules across the European Union.
The law has global reach: any company, regardless of geographic location, is covered, and liable, if it offers services to individuals in the European Union or monitors their behavior. Its requirements include greater transparency about data collection, clear consent to that collection, and a “right to be forgotten” that consumers can invoke to have collected or posted information removed.
During a conference call with reporters, Zuckerberg clarified his stance on adopting GDPR rules for Facebook’s U.S. consumers.
“Overall, I think regulations like the GDPR are very positive,” he said. “We intend to make all the same controls and settings available everywhere, not just in Europe. Is it going to be exactly the same format? Probably not. We need to figure out what makes sense in different markets with the different laws and different places. But—let me repeat this—we’ll make all controls and settings the same everywhere, not just in Europe.”
His testimony “should act as the biggest wake-up call there is to the impact GDPR has on American businesses,” says Michael Hiskey, chief strategy officer of Semarchy, a provider of AI-driven data solutions. “Data privacy is a global issue, and one that cannot be itemized by a specific geographic region. What is playing out with Facebook could easily repeat for any company, regardless of size, which does not treat its customer data in a properly managed way, which puts user privacy in the forefront.”
“The trouble Zuckerberg finds himself in today is just as much a data organization issue as it is a privacy issue,” Hiskey added. “As companies such as Facebook stockpile larger and larger quantities of user data, the public should expect clear answers as to where their data is being kept and who can access it.”
The Transatlantic Consumer Dialogue, a forum of U.S. and EU consumer organizations, is also urging a broader adherence to GDPR.
“There is simply no reason for your company to provide less than the best legal standards currently available to protect the privacy of Facebook users,” the organization wrote.
Machines are taking over
New technology has the world of banks and financial services firms buzzing. Blockchain promises safe and secure ledger technology. RegTech can take the pain out of the regulatory onslaught they struggle with. Artificial Intelligence and machine learning can help ferret out money laundering and sanctions violations.
Facebook’s use of AI was among the topics Zuckerberg elaborated on during both his Congressional testimony and the earlier conference call with members of the press.
He explained that the company has roughly 15,000 people working on security and content review. In the coming weeks, it plans to have more than 20,000 assigned to the tasks. Their efforts will be supplemented by continuing investments in artificial intelligence.
“There have been a number of important elections since [the discovery of Russian created fake accounts] where these new tools have been successfully deployed,” Zuckerberg said. In the U.S. Senate Alabama special election last year, Facebook deployed new AI tools that “proactively detected and removed fake accounts from Macedonia trying to spread misinformation.”
“We have disabled thousands of accounts tied to organized, financially motivated fake news spammers. These investigations have been used to improve our automated systems that find fake accounts,” Zuckerberg said.
The drawback to technology as a screening tool is that there are false positives. At a bank, that just means additional review by a compliance team. For a social media company, a false positive can, and does, raise issues of censorship.
“Some problems lend themselves more easily to AI solutions than others,” Zuckerberg said. “Hate speech is one of the hardest because determining if something is hate speech is very linguistically nuanced. You need to understand what is a slur and whether something is hateful, not just in English, but in different languages across the world.”
Other types of content yield greater success rates. Efforts to find terrorist propaganda have achieved a 99 percent success rate. “Our AI systems flag it before any human sees it,” he testified.
Like many companies, Facebook struggles with third-party risk; indeed, it is a poster child for it, with external apps the basis for the lion’s share of its data woes.
In response to recent scrutiny, Facebook has committed to an audit of all third-party apps using the service.
“We’re not going to be able to go out and necessarily find every single bad use of data,” Zuckerberg said. “What we can do is make it a lot harder for folks to do that going forward: change the calculus on anyone who is considering doing something sketchy going forward. I actually do think that we’ll be able to uncover a large amount of bad activity, of what exists, and we will be able to go in and do audits and ensure people go get rid of bad data.”