Last week, nearly 10 percent of the world's population lost electricity.

A two-day blackout that left more than 700 million people in India without power has been blamed on the thousands of homes that steal power from the grid through illegal hook-ups. Initially, there were fears that the cause might be the work of cyber-terrorists who hacked into control systems to disable them.

The latter scenario, although not ultimately to blame for the India blackout, is one many fear could happen here in the United States, even as legislators haggle unsuccessfully over how to prevent it. A central debate comes down to the availability of data: should government regulations require the breach disclosures needed to diagnose an organized threat, or should that effort remain voluntary?

Repeated attempts to pass legislation requiring new disclosures about digital security breaches have mostly failed. The latest came last week, when Republicans blocked the Senate's Cybersecurity Act of 2012 on a cloture vote of 52-to-46, short of the 60 votes needed to move the legislation forward.

Among the divisive issues, partly due to ambiguity in the bill's language, was whether the legislation represented the start of a voluntary initiative to develop policies for the disclosure and handling of breach reports, or whether it would open the door to a mandatory, government-controlled regulatory regime. Critics, including the U.S. Chamber of Commerce, argue that the sharing of cyber-threat information should not be made mandatory.

An underlying concern for public companies is whether regulators like the Securities and Exchange Commission will initiate disclosure mandates.

The SEC, for its part, issued “guidance” last October on best practices for companies on the question of when a cyber-intrusion triggers an obligation to disclose it to investors under the federal securities laws. Among the cyber-security legislation that has made the rounds in Washington is a bill introduced by Senator Jay Rockefeller (D-W.Va.) that would require the SEC to formalize its guidance into regulation. That would force companies to review, on an ongoing basis, the adequacy of their disclosures on cyber-security risks and cyber-incidents.

Regardless of whether such legislation passes, Adam Lurie, a partner with law firm Cadwalader, Wickersham & Taft, says there are several factors public companies need to consider when it comes to disclosure of cyber-threats. “Companies with well-developed policies and procedures for defending against, stopping, and addressing cyber-incidents will be more attractive to shareholders and the market, and may have an obligation to disclose cyber-intrusions under existing federal securities laws,” he says.

To assess whether any cyber intrusion or threat constitutes an event subject to potential disclosure, says Lurie, a company should consider the size and scope of the incident; whether it exposed a significant weakness in a company's security system; the type of data affected; the potential impact of the intrusion on the company's reputation; and whether the company's internal controls identified the intrusion. He stresses that companies need to have in place, at minimum, an incident response team, a plan of action to identify causes of breaches and correct them, and a communications plan.

Patrick Coyle, a Quality Assurance Manager at Louisiana-based Monolyte Labs, isn't convinced that companies will be as focused on cyber-security disclosures on their own as they would be with added regulations. “Right now, since we have never had a real demonstrated attack on a control system in the United States, everybody in management who is away from the control systems is kind of thinking, ‘well, it hasn't happened yet, so why should I be wasting my valuable resources?'” he says.

Dale Peterson, CEO of Digital Bond, a security consultancy specializing in industrial control systems, is among those who say that voluntary disclosure doesn't work. “We should not expect reporting and disclosure unless there is a regulatory requirement,” Peterson says. “The idea of having this tied into SEC reporting requirements seems like the best overall approach.” But if disclosure “is simply to raise awareness,” there are other techniques that could be as effective, he says.

“If something bad happens, regulation will come regardless of industry efforts. Maybe a more important question is why regulation [already in place for some industries] has been so unsuccessful in reducing risk.”

—Dale Peterson, CEO, Digital Bond

Some cyber-security experts say that it would be better to adopt level-headed disclosure requirements now, rather than waiting for a large incident to occur and a resulting knee-jerk legislative response to take place. “If you don't establish some minimum standards and policies and there are, at some stage, severe threats, the danger you face is that the legislative answer will be much more stringent and much more complicated,” says Sergio Thompson-Flores, CEO of Modulo, a provider of governance, risk and compliance management solutions. “There is merit in having some level of common standards, even if it is as a protection from the over-regulation that would come [otherwise].”

Even though the SEC referred to its rundown as “guidance,” it still, apparently, reserves the right to flex its regulatory muscles. In January, a security breach hit online footwear retailer Zappos, exposing the personal data of an estimated 24 million customers to hackers. Its parent company, Amazon, found itself in an argument with the Commission over whether the breach needed to be addressed in a public filing (specifically, its annual report). Amazon, although disagreeing with the incident being characterized as having “material” impact, ultimately gave in to pressure and amended the filing. In other words, the online giant found itself “volunteered.”

Other high-profile cyber attacks on public companies have included the hotel chain Wyndham Worldwide, which has had payment data for thousands of customers stolen on three occasions since 2008. Unlike Amazon, its troubles were never specifically documented in public securities filings.

In June, social media site LinkedIn revealed the theft of 6.5 million user passwords. A month later, an Eastern European hacker collective was suspected of snatching and disseminating upwards of 453,000 unencrypted passwords stored in a Yahoo database. Neither company has subsequently made a cyber-specific SEC filing.

Third-Party Data Collectors

Some are suggesting the use of a middleman that could collect data from companies without the punitive response that government regulators might bring. Dr. Gregory E. Shannon, Chief Scientist for the CERT Program at the Software Engineering Institute at Carnegie Mellon University, is an advocate for a voluntary partnership between government and industries, facilitated by a third party, to collect data on cyber-threats.

RISK FACTORS

Below is an excerpt from the SEC's Division of Corporation Finance guidance on cyber-security, discussing the disclosure obligations related to cyber-security risks and cyber incidents.

Registrants should disclose the risk of cyber incidents if these issues are among the most significant factors that make an investment in the company speculative or risky. In determining whether risk factor disclosure is required, we expect registrants to evaluate their cyber-security risks and take into account all available relevant information, including prior cyber incidents and the severity and frequency of those incidents. As part of this evaluation, registrants should consider the probability of cyber incidents occurring and the quantitative and qualitative magnitude of those risks, including the potential costs and other consequences resulting from misappropriation of assets or sensitive information, corruption of data or operational disruption. In evaluating whether risk factor disclosure should be provided, registrants should also consider the adequacy of preventative actions taken to reduce cyber-security risks in the context of the industry in which they operate and risks to that security, including threatened attacks of which they are aware.

Consistent with the Regulation S-K Item 503(c) requirements for risk factor disclosures generally, cyber-security risk disclosure provided must adequately describe the nature of the material risks and specify how each risk affects the registrant. Registrants should not present risks that could apply to any issuer or any offering and should avoid generic risk factor disclosure. Depending on the registrant's particular facts and circumstances, and to the extent material, appropriate disclosures may include:

Discussion of aspects of the registrant's business or operations that give rise to material cyber-security risks and the potential costs and consequences;

To the extent the registrant outsources functions that have material cyber-security risks, description of those functions and how the registrant addresses those risks;

Description of cyber incidents experienced by the registrant that are individually, or in the aggregate, material, including a description of the costs and other consequences;

Risks related to cyber incidents that may remain undetected for an extended period; and

Description of relevant insurance coverage.

A registrant may need to disclose known or threatened cyber incidents to place the discussion of cyber-security risks in context. For example, if a registrant experienced a material cyber attack in which malware was embedded in its systems and customer data was compromised, it likely would not be sufficient for the registrant to disclose that there is a risk that such an attack may occur. Instead, as part of a broader discussion of malware or other similar attacks that pose a particular risk, the registrant may need to discuss the occurrence of the specific attack and its known and potential costs and other consequences.

While registrants should provide disclosure tailored to their particular circumstances and avoid generic “boilerplate” disclosure, we reiterate that the federal securities laws do not require disclosure that itself would compromise a registrant's cyber-security. Instead, registrants should provide sufficient disclosure to allow investors to appreciate the nature of the risks faced by the particular registrant in a manner that would not have that consequence.

Source: SEC

Shannon says the Institute has urged policy makers to study past and existing programs for data sharing in order to “identify how much is needed, and what sort of timeliness and level of detail is needed.”

“We've also been encouraging them to offer adequate safe harbors for those who do provide the information,” he says. “Legitimately, any private organization isn't going to provide information to anyone, whether it is the government or otherwise, that can come back to harm them.”

CERT has shown that a third-party aggregator can effectively “work on behalf of the government, but not as part of it,” Shannon says. As an example, he says, the Centers for Disease Control and Prevention (CDC) holds a similar responsibility when it comes to health information in order to respond to epidemics and other health threats.

“If you don't collect any data you are going to be pretty blind,” he says. “You won't know the impact of various policies. Imagine if the Commerce Department couldn't collect payroll information from companies. How would they know about the economic health of the country? We are in a position today where government is essentially blind and can't assess the impact of security policies that they may encourage.”

Another example of an intermediary agent facilitating the sharing of such data is the Health Information Trust Alliance (HITRUST).

CEO Daniel Nutkis says the consortium of nearly 150 companies in the healthcare sector was formed five years ago to establish a “harmonized” information security framework.

“What became apparent was that nobody wanted to share information,” Nutkis says. “What companies didn't want was if they disclosed policies to a competitor, that they might tell everybody how bad their policies are, or what those policies were. So, we had to back up and establish an effective legal structure that allowed everybody to comfortably and confidentially share information.”

HITRUST essentially serves as a middleman that “sanitizes” data shared between government agencies and industry participants. Stripped of confidential data or identifiers, incident data can be analyzed without presenting a hardship to the “victim company.”

Last month, HITRUST launched a new initiative, the Cyber Threat Analysis Service, a collaborative platform for cyber defense specific to the healthcare industry. The goal, Nutkis says, is tracking vulnerabilities in electronic health record (EHR) systems and medical devices, which might not be possible without a shared willingness to provide data.