The U.K. Information Commissioner’s Office (ICO) is seeking input from developers, users, and those interested in generative artificial intelligence (AI) to help inform policy and guidance regarding the technology.

On Jan. 15, the ICO announced the launch of a multipart consultation series on generative AI and data protection. The agency’s first consultation is open until March 1.

“The impact of generative AI can be transformative for society if it’s developed and deployed responsibly,” said Stephen Almond, executive director for regulatory risk at the ICO. “This call for views will help the ICO provide industry with certainty regarding its obligations and safeguard people’s information rights and freedoms.”

In the series’ first section, on the lawful basis for web scraping to train generative AI models, the ICO advises developers on how to avoid running afoul of the U.K. General Data Protection Regulation by applying three core tests.

Purpose test: When scraping personal data to create a generative AI model, developers must be able to “evidence the model’s specific purpose and use,” the ICO said.

Without a specific purpose, there is no way to properly vet generative AI models or ensure their “downstream use will respect data protection,” the agency explained.

Necessity test: This test is a “factual assessment that asks whether the processing is necessary to achieve the interest identified in the purpose test,” the agency said. In the ICO’s view, large-scale web scraping is currently the only way to gather the volume of data needed to train generative AI models.

Balancing test: This test weighs the identified interest against individuals’ rights and freedoms. The ICO considers web scraping a high-risk activity because of the nature of its “invisible processing,” in which people are typically unaware their personal data has been collected.

A data protection impact assessment can help reduce these risks, according to agency guidance.

The ICO last updated its guidance on AI and data protection in March 2023.