Firms attempting to meet regulatory requirements for monitoring employee use of off-channel communications for business purposes face a daunting array of obstacles.
Many ethics and compliance programs have refocused their efforts away from bribery and corruption and onto data security and privacy, complex government regulations, artificial intelligence security, and other contemporary challenges, a survey from LRN found.
When organizations move their data or operations to the cloud, the compliance team has its work cut out for it and then some, experts said at CW’s Cyber Risk & Data Privacy Summit.
Learn more about how and when artificial intelligence tools can be used to increase value in your governance, risk, and compliance program.
The French data regulator’s fine against Amazon’s warehouse management arm for violating employees’ right to privacy in the workplace once again raises questions about what constitutes an overzealous approach to employee monitoring and why companies fail to recognize the signs.
Forward-thinking third-party risk management teams are changing the way they approach vendor assessments.
The decision by France’s data regulator to fine Amazon’s warehouse management arm for breaching the General Data Protection Regulation in the way it monitored employee productivity raises questions about the reach data protection authorities have over corporate conduct.
Differing deadlines associated with the 13 U.S. state privacy laws currently on the books, including grace periods and enforcement dates, have proven challenging for compliance, experts said at CW’s Cyber Risk & Data Privacy Summit.
Just 25 percent of corporate leaders felt their organizations were highly prepared to handle the governance and risk issues posed by generative artificial intelligence, a Deloitte survey found.
Generative artificial intelligence providers often promote their technologies as revolutionary tools that can enhance anti-money laundering processes. But the issues and shortcomings associated with the technologies must also be acknowledged.
The Commodity Futures Trading Commission would like to learn more about how regulated entities might be using artificial intelligence in their compliance efforts, along with other applications.
The recent furor in the United Kingdom over the Post Office’s wrongful prosecutions of sub-postmasters for alleged fraud has put the government’s relationship with private contractors under the spotlight and raised questions about how companies could be held more accountable in the future.
The U.K. Information Commissioner’s Office is seeking input from developers, users, and those interested in generative artificial intelligence to help inform policy and guidance regarding the technology.
Experts say expected improvements in the U.K. Financial Conduct Authority’s use of data and reporting mean a heightened obligation for timely compliance by companies subject to its remit.
Amazon’s warehouse management arm in France was assessed a penalty of €32 million (U.S. $35 million) for violating the General Data Protection Regulation by excessively tracking the productivity of employees.
Wells Fargo Securities agreed to pay a $425,000 penalty as part of a settlement with the Financial Industry Regulatory Authority addressing allegations of disclosure lapses affecting millions of trade confirmations and related supervisory failures.
A survey of financial crime professionals found that while three of every four companies added more anti-money laundering employees in 2023, nearly all respondents said growing their department’s headcount alone won’t keep up with emerging risks.
Join us for an in-depth discussion on proven strategies, advanced technologies, and industry best practices that form the bedrock of operational resilience.
Andrew McBride, chief risk officer of Albemarle Corp., and Tapan Debnath, head of integrity, regulatory affairs and data privacy at ABB, discussed how and why their respective organizations use data analytics to conduct business as part of a recent webcast.
Cyberattacks on artificial intelligence systems are increasing, so it’s important users know the systems’ vulnerabilities and can mitigate the damage when attacks occur, according to a new report by the National Institute of Standards and Technology.
Deepfakes have emerged in the digital world as a silent pandemic, not only threatening digital integrity but also posing a major risk to anti-money laundering efforts.
Fraud remains by a large margin the leading form of identity-related suspicious activity cited in Bank Secrecy Act reports, while new technologies are heightening overall exploitation risks, according to new research from the Financial Crimes Enforcement Network.
Canada’s financial intelligence agency aims to increase its use of artificial intelligence technology to improve its capabilities to analyze data and detect suspicious activity, according to an agency official.
The Federal Trade Commission was clear in its recent enforcement action against Rite Aid regarding its expectations for companies using facial recognition technology or any biometric security or surveillance systems.
The lack of clear regulations and guidelines for the ethical use of facial recognition technology further exacerbates concerns of discriminatory practices and potential infringements on human rights.
The Commodity Futures Trading Commission announced two appointments to its Division of Data designed to help the agency leverage analytics and emerging technologies to improve its policymaking and oversight efforts.
Sandeep Sacheti, executive vice president for customer information management and operational excellence at Wolters Kluwer, shares with Compliance Week how the company’s financial and corporate compliance division uses artificial intelligence to manage and analyze data.
With the European Union’s AI Act set to take full effect in 2026, experts are concerned other key jurisdictions might introduce divergent legislation that treats artificial intelligence use differently, making it difficult for companies to ensure compliance.
Retail pharmacy chain Rite Aid agreed to a five-year ban on its use of facial recognition technology for surveillance purposes as part of a settlement with the Federal Trade Commission.
Businesses can prepare for a bumpy ride as the 2024 global landscape of data privacy and other related laws and regulations begins to take shape.
As if creating policies and procedures to handle employee use of off-channel communications were not difficult enough, ephemeral messaging platforms further complicate the recovery of messages in transit.
Managing directors at KPMG share how firms are making strides in building robust third-party risk management programs for the future.
Artificial intelligence was highlighted as an emerging risk to the federal banking system as part of the Office of the Comptroller of the Currency’s latest semiannual risk perspective.
The Department of Justice is upping its game regarding its use of data analytics to identify potential misconduct, and it expects companies to be doing the same.
Finance executives and general counsel alike say their firms and leadership teams are unprepared for the risks and benefits of artificial intelligence, according to two recent surveys.
Emerging technologies like automation and generative AI are on the radar as difference-makers for businesses serious about keeping pace with increasing regulatory scrutiny toward third-party due diligence and sanctions compliance, a survey conducted by Compliance Week and Certa found.
Learn how regulated businesses can govern artificial intelligence tools like ChatGPT to effectively remediate their risks.
The California Privacy Protection Agency drafted rules to apply the rights afforded to residents under the California Consumer Privacy Act to automated decision-making technology used by businesses.
Insurers embracing artificial intelligence-based technologies might pose serious risks to companies buying insurance if the risk data used to price their premiums is used to train AI algorithms or shared with widely used chatbots like ChatGPT.
Addressing employee use of off-channel communications for conducting business was clearly on the minds of compliance officers who responded to our “Inside the Mind of the CCO” survey, but their confidence in their related policies and procedures was surprisingly weak.
Most compliance professionals say their teams are not using artificial intelligence to assist with compliance obligations, according to our “Inside the Mind of the CCO” survey.