A U.K. Parliamentary Committee last week released 250 pages of e-mails that show that Facebook’s Mark Zuckerberg and other senior executives at the social media giant gave certain app developers special access to user data.
Members of Parliament also said the files showed that Facebook had deliberately made it “as hard as possible” for users of its Android app to be aware of privacy changes.
The e-mails were disclosed by the Digital, Culture, Media and Sport (DCMS) Committee as part of its inquiry into disinformation and “fake news.” A California court order had placed the documents under seal, but the DCMS published them under U.K. parliamentary privilege, believing them to be in the public interest.
On Twitter on 5 December, Damian Collins MP, the chair of the committee, wrote: “I believe there is considerable public interest in releasing these documents. They raise important questions about how Facebook treats users’ data, their policies for working with app developers, and how they exercise their dominant position in the social media market.”
The e-mails, some of which are marked “highly confidential”, were obtained from Ted Kramer, the head of Six4Three, which is suing Facebook. Six4Three’s business—specifically, an app named Pikinis that let users search for bikini pictures posted by their Facebook contacts—was decimated when the social network tightened its privacy policies in 2015 and cut off the developer’s access to its data.
In a written summary, Collins highlighted six “key issues” that the e-mails show evidence of:
1. Facebook “clearly” entered into “whitelisting” agreements with certain companies (including Netflix and Airbnb) which gave them full access to friends’ data after the company introduced new privacy policies in 2014-2015. Collins says that “it is not clear that there was any user consent for this, nor how Facebook decided which companies should be whitelisted or not.”
2. One of the “key drivers” behind the Platform 3.0 changes at Facebook was to increase revenues from major app developers. (The Facebook Platform is the set of services, tools, and products Facebook provides for third-party developers to build applications and services that access Facebook data.) Collins says that linking access to friends’ data to the financial value of a developer’s relationship with Facebook is a “recurring feature” of the documents.
3. In fact, data reciprocity between Facebook and app developers was a “central feature” in the discussions about the launch of Facebook’s platform.
4. Facebook knew that the changes to its policies on the Android mobile phone system, which enabled the Facebook app to collect a record of calls and texts sent by the user, would be controversial. To mitigate any bad PR, Facebook planned to make it as hard as possible for users to know that this was one of the underlying features of the app upgrade.
5. Apparently without seeking informed consent from customers, Facebook used the analytics capabilities of Onavo, the Israeli mobile app developer it acquired in 2013, to conduct global surveys of how many apps customers had downloaded and how often they used them. Facebook also leveraged this information and Onavo’s analytics platform to monitor the performance of its competitors, target companies for acquisition, and make other business decisions. In August 2018, Facebook was forced to remove Onavo Protect from Apple’s App Store over concerns that it was tantamount to spyware.
6. The files also show evidence of Facebook “taking aggressive positions against apps,” even to the extent of forcing them out of business. In an e-mail exchange between Facebook vice president Justin Osofsky and Zuckerberg dated 24 January 2013, Zuckerberg personally approved a decision to deny access to Facebook data for the now-defunct Twitter video-looping app, Vine, on the day of its launch.
Shortly after the DCMS published the e-mails, Zuckerberg posted on Facebook to provide “context” and to reassure the public that the company never sold anyone’s data; instead, developers could access all or most of it if they used Facebook’s platform, agreed to its terms and conditions, and bought ads if they wanted to.
In his post, Zuckerberg even welcomed the extra regulatory attention. “I understand there is a lot of scrutiny on how we run our systems. That’s healthy given the vast number of people who use our services around the world, and it is right that we are constantly asked to explain what we do,” wrote Zuckerberg. “But it’s also important that the coverage of what we do—including the explanation of these internal documents—doesn’t misrepresent our actions or motives.”
In a separate post, Facebook accused Six4Three of “cherrypicking” the documents “to publish some, but not all, of the internal discussions at Facebook at the time of our platform changes. But the facts are clear: we’ve never sold people’s data.”
Facebook also defended its use of “whitelisting,” saying that it only allowed developers access to users’ lists of friends (names and profile pictures) rather than to friends’ private information, and added that “white lists are common practice when testing new features and functionality” and that “it’s common to help partners transition their apps during platform changes to prevent their apps from crashing or causing disruptive experiences for users.”
The social media company also took issue with concerns over data reciprocity, saying that users always had the power to opt out of having the information collected by app developers fed back to Facebook, and that users were given legal notices about what information Onavo would collect and how it would use it.
Facebook has come under almost perpetual criticism in the United Kingdom and the European Union this year over concerns about how it runs its business and its failure to address regulators’ worries that data is being misused, particularly in the wake of the Cambridge Analytica scandal.
In October, the U.K.’s data regulator slapped Facebook with a maximum £500,000 (U.S. $637,000) fine for serious breaches of data protection rules after users’ data was “unlawfully processed” and subsequently used to guide political advertising and campaigns. The Information Commissioner’s Office (ICO) said Facebook “failed to make suitable checks on apps and developers using its platform.”
Last week, Italian authorities fined the company €10 million (U.S. $11.4 million) for misleading users over its data practices. The company was specifically criticised for the default setting of the Facebook Platform services, which in the words of the regulator, “prepares the transmission of user data to individual websites/apps without express consent” from users. Although users can disable the platform, the regulator found that its opt-out nature did not provide a fully free choice.
Two days after the DCMS released its cache of e-mails, Robert Hannigan, the former head of the U.K.’s intelligence agency GCHQ, said that Facebook could become a threat to democracy without tougher regulation, adding that the social media giant was more interested in profiting from user data than “protecting your privacy.”
Zuckerberg’s repeated reluctance to attend any U.K. parliamentary hearings to give an account has also angered MPs, so much so that the DCMS—as well as a group of international politicians who convened under the banner of an “international grand committee” to discuss Facebook’s data policies and practices—deliberately empty-chaired him at meetings in November.