Access controls used to be easy: just lock the door, the filing cabinet, or the safe. Anyone who wanted to break in needed to steal the key, pick the lock, or force his way in.

Those days are gone, and anyone trying to recreate a control environment with the simplicity that existed before the Internet is deluding himself.

“In today’s open world, it’s very easy to attack people and systems,” says Andrew Morrison, principal in Deloitte’s cyber-risk practice. “The Internet created the ability to share information, and trying to lock it down is kind of at odds with its goal.”

A common fallacy, Morrison says, is to believe you can design a control for every type of hack. Cyber-controls are less like securing an office or filing cabinet, he says, and more like protecting against a flood: water is all around, searching for the path of least resistance. He cites three areas worth a compliance or audit executive’s attention:

Designing preventative controls to keep your data as secure as possible;

Knowing what “normal” looks like in your systems, and staying vigilant against anomalies;

Building a system resilient enough to restore operations quickly after an attack happens.
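Morrison’s second area, knowing what “normal” looks like, can be made concrete with even simple statistics. The sketch below (in Python, with hypothetical login counts) flags a day whose login volume strays far from a user’s historical baseline:

```python
from statistics import mean, stdev

def is_anomalous(history, todays_count, threshold=3.0):
    """Compare today's login count against the user's historical
    baseline; flag it when it sits more than `threshold` standard
    deviations from the mean."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return todays_count != mu
    return abs(todays_count - mu) / sigma > threshold

# A user who normally logs in a handful of times a day
# (hypothetical counts), then suddenly logs in 60 times:
history = [5, 7, 6, 5, 8, 6, 7]
print(is_anomalous(history, 8))    # False: within the normal band
print(is_anomalous(history, 60))   # True: worth a look
```

Real monitoring tools are far more sophisticated, but the principle is the same: a baseline of normal behavior is what turns raw logs into detectable anomalies.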

Morrison recommends cyber-drills, in which companies run through an attack that tests how someone can get in, how to detect the intrusion, and how to respond—“just as you would operate for an earthquake at corporate headquarters.”

One of the biggest problems in practice, Morrison says, is underweighting internal threats versus external threats. “There’s been a lot of focus around keeping the bad guys from coming in, but it’s almost been done at the expense of understanding and monitoring who is in the organization already,” he says. “The failure in all these breaches typically comes down to a human with compromised credentials, or working in collusion with others.”

A Balancing Act

Granted, the concept of access controls has been floating around the compliance and audit community—and in guidance such as the COSO framework for internal control, among others—for decades. What’s changed, says Johnny Lee, forensic, investigative, and dispute services practice leader at Grant Thornton, is a shift away from individual access to network access.

And while best practices to protect network access might be clear, companies sometimes decide their budgets don’t allow for best practice anyway. The result is the need for a balancing act in responding to risk, and judgment comes into play.

“Unless we have a discussion about the specific residual risks we allow to survive, such as how long a time period between reviews for breaches,” Lee cautions, “if someone wreaks havoc on your network, that may become the balancing act between risk and performance.”

Each model for access control (say, decentralized or single sign-on to IT systems) has its own challenges and monitoring issues, Lee says. First is the sheer volume of access logs to review. Second, context is crucial.


Below, Brian Barnier of ISACA outlines nine assumptions, the links in his “chain of fitness,” that must hold for access controls to work.

The control is used as intended

The control is maintained as implemented

The control is implemented as designed

The control is designed from the appropriate template

The control is appropriate for the process class and problem

The control is located properly in the process flow

The control is based on the location of useful warning signs

The control is based on robust, real-world “What if?” scenario analysis

The control is based on scenario analysis conducted properly based on a thorough “know the business” understanding of environment and capabilities
Though still challenging, these assumptions are easier to meet when applied to retrospective financial reporting, where reporting systems are stable and a materiality threshold (a percentage of revenue or income) can be applied. They are harder to meet when a prospective view is needed of a dynamic, operational world, where a tiny issue can turn into a huge problem.

The second cause of controls churn and confusion is when the auditor or compliance person fails to bark because all looks well—because he or she does not understand the chain of fitness and other assumptions. There is a false sense of security.

Why do some auditors miss these problems? Show-of-hands surveys at ISACA programs around the world suggest it has much to do with when a person began working in audit: in particular, whether that experience predates the Sarbanes-Oxley Act of 2002, when IT audit began focusing on a narrow financial-reporting notion of “IT General Controls” (ITGC).

The modern, skilled IT pro has a clear operational view of a control as something that senses and responds, whether dumb like a light switch or intelligent like server load balancing.

ISACA’s COBIT 5 offers help in the shift from “controls” (too often understood mostly as ITGC) to business-objective-oriented management practices. More broadly, consider ISACA’s tagline: “Trust in, and value from, information systems.” Value creation in Val IT (now incorporated in COBIT 5) goes well beyond controls that struggle just to protect value.

I suggest taking action: host a “Cut Controls Churn and Confusion Day” at your chapter or for your team at work. Invite a panel of people with managerial accounting, operational process improvement, and IT process improvement experience to discuss why improved oversight, management practice, and core business processes are more effective than controls for any operational situation.

Source: Brian Barnier, Risk Advisor, ISACA.

“If you see an access that in isolation seems sinister or inappropriate, and you don’t have the benefit of [knowing] that user’s access to other systems, you might spend a lot of time chasing false positives,” Lee says. The auditor needs to understand how systems interact with each other.
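Lee’s point about context can be sketched in code: rather than flagging every access that looks odd in isolation, correlate it with what another system says the user is entitled to. In the sketch below (hypothetical users, roles, and resource names), an event is flagged only when no cross-system entitlement explains it:

```python
# Hypothetical entitlements from a separate system (e.g., an HR
# role feed): role -> resources that role may legitimately touch.
ENTITLEMENTS = {
    "payroll_clerk": {"payroll_db"},
    "dba": {"payroll_db", "orders_db"},
}

def suspicious_events(events, user_roles):
    """Keep only events the user's cross-system role does not explain,
    reducing the false positives that burn up an auditor's time."""
    flagged = []
    for user, resource in events:
        role = user_roles.get(user)
        if resource not in ENTITLEMENTS.get(role, set()):
            flagged.append((user, resource))
    return flagged

events = [("alice", "payroll_db"), ("bob", "payroll_db")]
roles = {"alice": "payroll_clerk", "bob": "intern"}
print(suspicious_events(events, roles))  # only bob's access remains
```

The design choice is the one Lee describes: the review is only as good as the auditor’s visibility into how the systems interact.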

Lee also warns about another risk not always disclosed to auditors that he politely calls the “care and feeding of server logging,” which can be so resource-intensive that IT departments neglect it.

The problem is symbolic of many in IT controls. Users call the IT department to complain about system performance (“Why does it take four minutes to log onto e-mail?” Lee quips), so IT workers disable logging rather than let it degrade server performance.

An IT security no-no? Absolutely, but it happens. Lee encourages regular and candid discussions among internal audit, IT, and compliance to identify those problems with IT infrastructure and management. He also recommends that IT audits avoid a “gotcha” approach, which will probably cause the IT department to stop returning your calls.
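Where performance is the underlying complaint, disabling logging is not the only way out. A minimal sketch, assuming Python’s standard logging module: hand each record to a background thread via a queue, so the request path never waits on disk I/O (the logger and file names here are illustrative):

```python
import logging
import logging.handlers
import queue

# The slow sink (disk) lives behind a queue serviced by a
# background thread, so callers return immediately.
log_queue = queue.Queue(-1)                       # unbounded buffer
file_handler = logging.FileHandler("access.log")  # the slow sink
listener = logging.handlers.QueueListener(log_queue, file_handler)
listener.start()

logger = logging.getLogger("access")
logger.addHandler(logging.handlers.QueueHandler(log_queue))
logger.setLevel(logging.INFO)

logger.info("user=%s action=%s", "alice", "login")  # non-blocking
listener.stop()  # flush the queue on shutdown
```

The trade-off is a small window in which buffered records can be lost on a crash; that is usually a far better risk posture than having no log at all.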

Another common mistake, Lee says, is to assume that the log itself is the control. The control is the review of the log, and everyone involved should understand exactly what the log captures (name, date, time, device used, and so forth).
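That distinction can be enforced mechanically. A minimal sketch (hypothetical CSV log, field names taken from Lee’s list) that refuses to treat a log as reviewable when it lacks the fields the reviewer depends on:

```python
import csv
import io

REQUIRED = {"name", "date", "time", "device"}  # fields the reviewer relies on

def review_log(raw_csv):
    """A log is only a control if someone reviews it; this sketch
    rejects a log that does not capture the required fields."""
    reader = csv.DictReader(io.StringIO(raw_csv))
    missing = REQUIRED - set(reader.fieldnames or [])
    if missing:
        raise ValueError(f"log does not capture: {sorted(missing)}")
    return list(reader)

sample = "name,date,time,device\nalice,2024-01-02,09:15,laptop-7\n"
entries = review_log(sample)
print(entries[0]["name"])  # alice
```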

One of the biggest problems, says Lee, is revoking access rights that are no longer appropriate, such as when someone leaves the company or moves to a different position or department. The more senior the executive, the more rights and access he or she has likely accumulated. That makes prompt revocation, along with periodic reviews of who has access to what, all the more urgent.
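The periodic review Lee describes often boils down to a set difference: compare who can log in against who, according to an HR feed, still should be able to. A sketch with hypothetical names:

```python
def orphaned_accounts(system_accounts, active_employees):
    """Accounts that survive after their owner left, or that no
    current employee claims -- candidates for revocation."""
    return sorted(set(system_accounts) - set(active_employees))

accounts = {"alice", "bob", "carol"}   # who can log in to the system
employees = {"alice", "carol"}         # who still works here (HR feed)
print(orphaned_accounts(accounts, employees))  # ['bob']
```

In practice the comparison runs per system and per role, but the core control is exactly this reconciliation, performed on a schedule.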

Understanding Business Is Key

Brian Barnier, a risk adviser with ISACA and principal at Value Bridge Advisers, says access control is like a “chain of fitness,” with steps ranging from a control being used as intended to a control passing robust stress tests. If one of the links fails, the access control gate can be disabled.

As discussed in Barnier’s ISACA Now blog post, “Why Didn’t the Dog Bark?”, another cause of control failure is when “the auditor or compliance person fails to bark because all looks well—because he or she does not understand the chain of fitness and other assumptions. There is a false sense of security.”

“If they don’t understand how the business works, all bets are off,” Barnier adds. “You can build a tool, but the tool can fail, and if you just add new tools, you can be replacing one failure with another failure.”

Barnier, who authored ISACA’s Operational Risk Handbook for Financial Companies, has reviewed FBI files on financial crimes and sees a classic security problem: someone with knowledge of front-, middle-, and back-office operations is able to cover his tracks. That is a case where certain access should have been turned off and more segregation of duties should have been in place.

Morrison adds that behavioral considerations are critical in determining what access, even when properly authenticated, is appropriate. For example, a doctor reviewing the files of patients he is treating would be normal; paging through patient files in sequence would not. “Typical controls don’t do that type of behavior monitoring, which is becoming more and more important,” Morrison says. Such controls would include pattern analysis, behavioral analysis, and running correlations.
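The sequential-file pattern Morrison describes is straightforward to detect once access logs record the IDs touched. A sketch with hypothetical record IDs:

```python
def sequential_runs(record_ids, min_run=5):
    """Flag runs of consecutively numbered record IDs in a user's
    access history -- the 'paging through charts in order' pattern."""
    runs, start = [], 0
    for i in range(1, len(record_ids) + 1):
        if i == len(record_ids) or record_ids[i] != record_ids[i - 1] + 1:
            if i - start >= min_run:
                runs.append(record_ids[start:i])
            start = i
    return runs

# A clinician jumping between their own patients vs. someone
# browsing charts in ID order (hypothetical IDs):
normal = [1041, 2207, 1885, 3030]
browsing = [500, 501, 502, 503, 504, 505]
print(sequential_runs(normal))    # []
print(sequential_runs(browsing))  # [[500, 501, 502, 503, 504, 505]]
```

This is a single behavioral signal; production monitoring would correlate many such signals per user before raising an alert.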

“Are hackers going to get better? Yes,” Barnier says. The longstanding literature, infused with new knowledge, points to the same conclusion: effective access controls rely heavily on communication and education.