The Biden administration intends to broaden the way it addresses privacy concerns for marginalized and disadvantaged communities, moving beyond bias in lending decisions to other potential areas such as healthcare, education, crime, and workplace issues.
The National Telecommunications and Information Administration (NTIA), a division of the Commerce Department that serves as President Joe Biden’s principal adviser on telecommunications and information policy, announced it will host three virtual listening sessions “about issues and potential solutions at the intersection of privacy, equity, and civil rights.”
“The sessions will help to provide the data for a report on the ways in which commercial data flows of personal information can lead to disparate impact and outcomes for marginalized or disadvantaged communities,” according to an NTIA notice published in the Federal Register on Tuesday. The notice cited studies that show individuals in marginalized and disadvantaged communities “are often at increased risk of suffering harm from losses of privacy or misuse of collected data.”
Up to this point, the Biden administration’s focus on privacy concerns has zeroed in on allegations that some financial institutions use biased algorithms that make it more difficult for underserved communities to gain access to credit, mortgages, and loans at reasonable rates. Federal banking regulators have indicated they want to understand how these algorithms make lending decisions, how actively financial institutions monitor those decisions for potential biases, and how aggressively they seek to wring biases out of their automated decision-making.
The Consumer Financial Protection Bureau (CFPB), under new director Rohit Chopra, has signaled one of its examination and enforcement priorities will be analyzing how financial institutions use automation to churn out decisions on whether to grant loans or offer other kinds of financial products. The CFPB also noted it wants to learn more about how Big Tech firms collect and monetize data.
The NTIA wants to hear from the public about a wider range of privacy issues that could have an outsized effect on marginalized and disadvantaged communities, including:
- Digital advertising systems using targeted criteria like race, gender, disability, and other characteristics to perpetuate historical discrimination against those groups.
- Data brokers, health insurance companies, and their subsidiaries using information like neighborhood safety, bankruptcies, gun ownership, inferred hobbies, and other information “to determine coverage for people they deem more likely to require more expensive care.”
- Universities utilizing predictive software that has “used race as a strong predictor for poor performance” to identify students likely to struggle academically, disproportionately flagging Black students as at risk of dropping out of science and math classes.
“In light of these and many more examples, it is critical for policymakers to understand how information policy can reduce data-driven discrimination and disparate treatment,” the notice said.
When and where: The first listening session will be held Dec. 14, on the intersection of civil rights law and privacy. The second will be held Dec. 15 and focus on “the way in which the collection, use, and processing of personal and personally sensitive data affects structural inequities.” The final listening session on Dec. 16 will examine solutions to the gaps and problems identified in the first two sessions.
The meetings will be held virtually, with slide-sharing and dial-in information to be posted on the NTIA’s website.