Don’t be evil? 

Google executives might want to think about updating their old motto to: “Don’t be dumb.”

For a tech giant that attracts the best and brightest of Silicon Valley, there has been a pile-up of bad decisions—not the least of which was concealing a March data breach from regulators, a poorly rationalized move that may ultimately have implications that reach far beyond the company’s Mountain View campus.

There is plenty of talk, especially in the halls of Congress, that a national law governing consumer privacy protections is needed and overdue. Thanks to Google’s recent headlines, lawmakers are surely even more emboldened to consider what Silicon Valley fears most: a regime that moves ever closer to Europe’s strict General Data Protection Regulation.

The hidden breach

Were it not for the media attention that accompanied it, one might have interpreted an Oct. 8 blog post by Ben Smith, a Google vice president of engineering, as a positive development for customers concerned about their entrusted data. 

Google, we were told, created a data security task force, one with a cool and catchy code name: Project Strobe. The effort was described as “a root-and-branch review of third-party developer access to Google account and Android device data and of our philosophy around apps’ data access.” 

In reviewing privacy controls, concerns arose that some developers “may have been granted overly broad access,” Smith wrote. The exercise also discovered, in March, “a bug” with the Google+ product. 

“We made Google+ with privacy in mind and therefore keep this API’s log data for only two weeks,” he wrote. “That means we cannot confirm which users were impacted by this bug.” A best estimate: profiles of up to 500,000 Google+ accounts were potentially affected. 

In response, Google is “shutting down Google+ for consumers.”

In fairness, it is hard to argue that the Google+ breach would satisfy any materiality threshold that might serve as a yardstick for disclosure. 

There may have been nearly half a million affected members, but to call them “users” would be a stretch. According to Google, the social media platform “has not achieved broad consumer or developer adoption” and “has low usage and engagement.” Ninety percent of Google+ user sessions last less than five seconds. That user base is also a tiny fraction of the company’s full traffic and user engagement.

The plot, however, thickens. According to reporting in the Wall Street Journal, based on a leaked internal memo, executives delayed announcing the breach for fear of “immediate regulatory interest” and comparisons to Facebook’s nefarious Cambridge Analytica data imbroglio. Among the concerns was protecting CEO Sundar Pichai from having to submit to a Congressional grilling. 

Here, as the cliché suggests, the cover-up is worse than the crime. Had Google announced the breach upon discovery, there would be, at worst, a Day One headline. Now, the intentional obfuscation is akin to dumping gasoline on a raging dumpster fire.

It would probably be unfair to claim that Google doesn’t take data protection initiatives seriously (although its recent refusals to send top executives to Congressional hearings attended by its tech peers might suggest either neglect or hubris). Nevertheless, the Google+ bug and the leaked memo illustrate where its failures lie.

First off, the notification decision was flawed. In his blog post, Smith attempted to explain the delay: “Our Privacy & Data Protection Office reviewed this issue, looking at the type of data involved, whether we could accurately identify the users to inform, whether there was any evidence of misuse, and whether there were any actions a developer or user could take in response. None of these thresholds were met in this instance.”

That uncertainty does complicate matters, but not so much that a quick press release or Securities and Exchange Commission filing wouldn’t have at least demonstrated good faith.

Then there is the revelation, via the WSJ’s reporting, that the company’s investigation was hampered by a lack of “audit rights” over developers. Just as audit rights must be negotiated into any supply chain a company relies upon, the wide world of third parties in a tech company’s orbit demands an extra level of scrutiny. There is no excuse for failing to secure those rights. 

What repercussions will Google and its parent company, Alphabet, face? The March breach predated the May enactment of GDPR, so it remains to be seen how the European Union can or will respond. 

In the United States, the Federal Trade Commission is likely to step in, especially as Google remains covered by a 20-year consent decree related to “false and misleading” data policies regarding Buzz, another failed social media venture, and how Gmail users were targeted to populate the service, often against their wishes. The FTC can issue fines when companies violate terms of a consent decree.

The consent order prohibited Google from engaging, without prior “express affirmative consent,” in any “new or additional sharing” of previously collected personal information “with any third party” resulting from “any change, addition, or enhancement” to any Google product or service. It also prohibited the company from future privacy misrepresentations, required it to implement a comprehensive privacy program, and called for regular, independent privacy audits for the next 20 years.

As for the likelihood of federal legislation, Google certainly hasn’t helped its own cause, nor is it alone in tone-deafness. Amid its many troubles, for example, Facebook last week announced yet another data breach, this one affecting upwards of 50 million users, the largest attack in the company’s 14-year history. A code exploit allowed hackers to potentially take control of targeted user accounts.

A week later, however, Facebook still thought it wise to announce a new in-home video-calling device, a launch that should raise the hackles of anyone following its security woes or fearing for the safety of personal data.

So long as tech companies like Google and Facebook keep focusing more on product launches than on shoring up the data security flaws that already exist in their ecosystems, and then either play dumb when bad things happen or play “the victim card,” legislators will push back with new laws. 

By hiding its breach from those elected officials, Google may have sealed its own fate, as well as that of its peers and competitors. Worries about “immediate regulatory interest”? It is now all but assured.