Regulators take aim at AI to protect consumers and workers

NEW YORK (AP) — As concerns grow about increasingly powerful artificial intelligence systems like ChatGPT, the country’s financial watchdog says it’s working to make sure companies follow the law when they use AI.
Already, automated systems and algorithms help determine credit scores, loan terms, bank account fees, and other aspects of our financial lives. AI also affects hiring, housing and working conditions.
Ben Winters, senior counsel for the Electronic Privacy Information Center, said a joint statement on enforcement released by federal agencies last month was a positive first step.
“There’s this narrative that AI is entirely unregulated, which isn’t really true,” he said. “They’re saying, ‘Just because you use AI to make a decision, that doesn’t mean you’re exempt from responsibility for the impacts of that decision. This is our opinion on this. We’re watching.’”
Over the past year, the Consumer Financial Protection Bureau said it has fined banks over mismanaged automated systems that led to wrongful home foreclosures, car repossessions and lost benefit payments, after the institutions relied on new technology and faulty algorithms.
There will be no “AI exemptions” to consumer protections, regulators say, citing these enforcement actions as examples.
Consumer Financial Protection Bureau Director Rohit Chopra said the agency has “already started working to continue to build muscle internally when it comes to engaging data scientists, technologists and others to ensure that we can meet these challenges,” and that the agency continues to identify potentially illegal activity.
Representatives from the Federal Trade Commission, the Equal Employment Opportunity Commission and the Department of Justice, as well as the CFPB, all say they are directing resources and personnel to target new technologies and identify negative ways in which they could affect the lives of consumers.
“One of the things we’re trying to clarify is that if companies don’t even understand how their AI makes decisions, they can’t really use it,” Chopra said. “In other cases, we’re looking at how our fair lending laws are being followed with respect to the use of all that data.”
Under the Fair Credit Reporting Act and the Equal Credit Opportunity Act, for example, financial service providers have a legal duty to explain any adverse credit decisions. These regulations also apply to housing and employment decisions. Where AI makes decisions too opaque to explain, regulators say algorithms shouldn’t be used.
“I think there was a feeling that, ‘Oh, let’s just give it to the robots and there will be no more discrimination,’” Chopra said. “I think the learning is that it’s actually not true at all. In some ways, the bias is built into the data.”
EEOC Chair Charlotte Burrows said there will be enforcement against AI hiring technology that screens out job applicants with disabilities, for example, as well as so-called “bossware” that illegally monitors workers.
Burrows also described how algorithms could dictate how and when employees can work in ways that would violate existing law.
“If you need a break because you have a disability or are pregnant, you need a break,” she said. “The algorithm doesn’t necessarily take that accommodation into account. These are things we are looking at closely. … I want to be clear that while we recognize that technology is changing, the underlying message here is that the laws still apply and we have the tools to enforce them.”
OpenAI’s top lawyer, at a conference this month, suggested an industry-led approach to regulation.
“I think it starts with trying to achieve some sort of standard first,” Jason Kwon, general counsel for OpenAI, said at a technology summit in Washington, D.C., hosted by the software industry group BSA. “Those could start with industry standards and some sort of coalescing around that. And the decisions of whether or not to make them mandatory, and then what the process is for updating them, those things are probably fertile ground for more conversation.”
Sam Altman, the head of OpenAI, which makes ChatGPT, has said government intervention “will be essential to mitigate the risks of increasingly powerful AI systems,” suggesting the formation of a U.S. or global agency to license and regulate the technology.
While there’s no immediate sign that Congress will craft sweeping new rules on AI, as European lawmakers are doing, societal concerns brought Altman and other tech CEOs to the White House this month to answer tough questions about the implications of these tools.
Winters, of the Electronic Privacy Information Center, said agencies could do more to study and publish information about the relevant AI markets, how the industry works, who the major players are, and how the information collected is used, the way regulators have done in the past with new consumer credit products and technologies.
“The CFPB has done a very good job on this with the ‘buy now, pay later’ companies,” he said. “There are so many parts of the AI ecosystem that are still so unknown. Publishing this information would go a long way.”
Technology journalist Matt O’Brien contributed to this report.