Companies developing novel artificial intelligence (AI) tools might want to think carefully before laying off ethics personnel, an attorney with the Federal Trade Commission (FTC) warned.

The agency is closely tracking companies' use of AI tools for possible rule violations involving deception, discrimination, excessive manipulation, or unfairness, wrote Michael Atleson, an attorney in the FTC’s Division of Advertising Practices, in a blog post Monday.

Companies are now using chatbots and generative AI tools to influence people’s behavior, beliefs, and emotions. The FTC is likely to get involved if a chatbot directs people “unfairly or deceptively” into harmful decisions regarding finances, health, education, housing, and/or employment, Atleson said.

For starters, “People should know if they’re communicating with a real person or a machine,” Atleson said. Users should be informed of commercial relationships between an AI product and any websites or services a chatbot steers them to, he said.

Generative AI used for targeted advertising will also draw the agency’s attention if elements of an ad appear designed to trick people into making harmful choices, said Atleson, who cited recent FTC enforcement actions to drive that point home for businesses.

“If we haven’t made it obvious yet, FTC staff is focusing intensely on how companies may choose to use AI technology, including new generative AI tools, in ways that can have actual and substantial impact on consumers,” Atleson said.

The FTC might also consider it manipulative if an ad is placed within a generative AI feature, Atleson said.

“[I]t should always be clear that an ad is an ad,” he said.

In a previous blog post, Atleson warned that the FTC might go after companies that make unsubstantiated claims about what their AI products can do.

“Given these many concerns about the use of new AI tools, it’s perhaps not the best time for firms building or deploying them to remove or fire personnel devoted to ethics and responsibility for AI and engineering,” Atleson wrote in Monday’s blog. “If the FTC comes calling and you want to convince us that you adequately assessed risks and mitigated harms, these reductions might not be a good look.”

A better approach, he said, is to beef up training for staff and contractors on the risks of foreseeable downstream uses of AI tools, and to monitor and address the impacts of those tools once they are launched.