The Optus data breach should serve as a reminder for all organizations that cybersecurity incidents are serious business risks that are costly to make right.

Although the person allegedly responsible for the attack is now in police custody, the saga of repairing the damage has only just begun.

With 9.8 million customer records exposed, internal remediation could cost Optus billions of Australian dollars, one expert suggested to an Australian news outlet. Under pressure from Australian Prime Minister Anthony Albanese, the mobile phone network provider is also set to pay at least some of the costs incurred by federal and state governments for renewing exposed passports, at AUD$193 (U.S. $122) each, and perhaps for driver’s licenses as well.

There’s also the threat of regulatory investigations, including coordinated probes announced Tuesday by the Australian Communications and Media Authority and the Office of the Australian Information Commissioner.

Experts warned Optus is likely to see massive customer migration and will need to spend millions trying to attract users back. In short, it’s a disaster apparently of the company’s own making.

Media coverage suggests a testing environment containing the application programming interface (API) that pulls customer data was accidentally exposed to the internet. APIs designed for internal testing only often have no authentication or authorization associated with them because such security measures get in the way of functional testing, according to Nick Rago, field chief technology officer at cyber risk firm Salt Security. As a result, he said, the exposed testing environment put customer data at risk.

Rago said the Optus breach should be a stark warning that any API testing requires basic but strong controls.

“An API in a testing phase and testing environment should not be using real customer data. Dummy or test data should be in use to mitigate the chances of any wrongful data exposure during testing cycles,” said Rago. “If an API in test is externally exposed, strict authentication, authorization, and access rules should be in place to ensure only authorized entities can gain access to the API. Missing the mark on both requirements, especially when dealing with an API handling sensitive data, is a recipe for disaster.”
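
To make the point concrete, here is a minimal sketch in Python, using the Flask framework, of the discipline Rago describes: the test service holds only synthetic records and rejects any caller without a recognized credential. The token list and customer fields are illustrative placeholders, not details of Optus’ systems.

    from flask import Flask, abort, jsonify, request

    app = Flask(__name__)

    # Illustrative token store; a real deployment would sit behind an
    # identity provider or API gateway rather than a hard-coded set.
    VALID_TEST_TOKENS = {"qa-team-token"}

    # Synthetic records only -- no real customer data in the test environment.
    FAKE_CUSTOMERS = {
        "1001": {"name": "Test User", "email": "test.user@example.com"},
    }

    @app.before_request
    def require_auth():
        # Reject any request without a recognized bearer token, even though
        # this is "only" a test deployment.
        token = request.headers.get("Authorization", "").removeprefix("Bearer ").strip()
        if token not in VALID_TEST_TOKENS:
            abort(401)

    @app.route("/api/customers/<customer_id>")
    def get_customer(customer_id):
        customer = FAKE_CUSTOMERS.get(customer_id)
        if customer is None:
            abort(404)
        return jsonify(customer)

Even a check this simple means an accidentally internet-facing test host returns authentication errors rather than customer records.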

To identify and stop data leakage, as well as to quickly spot governance and control failures, organizations need API runtime security, Rago added.

“This provides full API visibility in real time,” he said. “Having these insights would have helped Optus by providing full visibility into all their APIs, rather than having visibility come in the form of a demand for ransom.”
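
Runtime visibility of the kind Rago describes can start with something as basic as an audit trail of every API call, which monitoring can then scan for anomalies such as bulk enumeration of customer records. A minimal sketch, again in Python with Flask and purely illustrative:

    import logging

    from flask import Flask, request

    app = Flask(__name__)
    logging.basicConfig(level=logging.INFO)
    audit_log = logging.getLogger("api-audit")

    @app.after_request
    def audit(response):
        # Record who called which endpoint, from where, and with what result,
        # so unusual access patterns surface in monitoring rather than in a
        # ransom note.
        audit_log.info(
            "method=%s path=%s remote=%s status=%s",
            request.method,
            request.path,
            request.remote_addr,
            response.status_code,
        )
        return response

Commercial API security platforms layer inventory discovery and anomaly detection on top of this kind of telemetry, but the telemetry has to exist first.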

Optus received a ransom demand, reported to be AUD$1 million (U.S. $632,000) in cryptocurrency. The company said it did not pay.

“No one should ever pay a ransom demand,” said Thomas Eeles, incident response team manager at cybersecurity firm Orange Cyberdefense. “Ransomware is above all an extortion scam and should be handled as such.”

Instead of paying ransom demands, Eeles advised organizations “to think of the amount they would be willing to pay to save their business, then halve it and invest that amount in doing the very basics of IT security as well as possible.”

Ross Brewer, general manager at cybersecurity firm AttackIQ, said, “While it might be tempting to pay ransomware demands and hope the problem goes away, on average only 8 percent of organizations manage to recover all data from a ransomware attack. It is nigh on impossible to guarantee hackers will delete stolen data. It may well be leaked down the line.”

Brewer said chief information security officers need to be able to evaluate performance in relation to specific ransomware threats. Boards that want to ensure compliance, he said, need to receive briefings from their security teams on the readiness and resilience of their key data so they understand how well it is defended and how it fits into the organization’s overall security program.

Cliff Martin, head of cyber incident response at consultancy GRCI Law, said organizations should always have a cyber incident response plan that is kept relevant, up to date, and resourced so an attacker can be detected before ransomware is deployed.

When preparing the plan, he said, organizations should consider what IT assets they have, determine where sensitive and personal information is stored, and check that it is protected and backed up. If a third party is involved in protecting the data or initiating the response plan, checks should be carried out to establish exactly what the scope of its response will be.
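
Martin’s checklist lends itself to being kept as a living inventory rather than a static document. The sketch below, in Python, shows one hypothetical way to record each data asset and flag gaps; the field names and the example entry are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class DataAsset:
        # Illustrative fields: the aim is that every store of sensitive or
        # personal information is known, protected, and recoverable.
        name: str
        contains_personal_data: bool
        encrypted_at_rest: bool
        backed_up: bool
        backup_tested: bool
        responder: str       # internal team or third party that acts on an incident
        response_scope: str  # what that party has agreed, in writing, to do

    inventory = [
        DataAsset(
            name="customer-db",
            contains_personal_data=True,
            encrypted_at_rest=True,
            backed_up=True,
            backup_tested=False,
            responder="example third-party provider",
            response_scope="containment and forensics only; notification handled internally",
        ),
    ]

    # Flag assets that would leave the response plan under-resourced.
    gaps = [asset.name for asset in inventory
            if asset.contains_personal_data
            and not (asset.encrypted_at_rest and asset.backed_up and asset.backup_tested)]
    print("Assets needing attention:", gaps)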

The Optus breach prompted the Australian government to consider overhauling the country’s Privacy Act to impose stronger penalties on companies with lax cybersecurity controls and procedures and to limit the type and amount of data companies hold and how long they hold it. Other countries are already in the process of beefing up rules to make companies more accountable for breaches and their boards directly liable.

For example, the U.S. Securities and Exchange Commission (SEC) in March issued a proposal for companies to disclose board oversight of cybersecurity risk, including a requirement to report annually on the level of directors’ cybersecurity expertise.