Ethicists hit back at ‘AI Pause’ letter which they say ‘ignores real damage’
A group of well-known AI ethicists has written a counterpoint to this week’s controversial letter calling for a six-month “pause” on AI development, criticizing it for focusing on hypothetical future threats while the real harm caused by misuse of the technology today goes unaddressed.
Thousands of people, including household names such as Steve Wozniak and Elon Musk, signed the Future of Life Institute’s open letter earlier this week, proposing that development of AI models like GPT-4 be suspended in order to avoid “loss of control of our civilization,” among other threats.
Timnit Gebru, Emily M. Bender, Angelina McMillan-Major and Margaret Mitchell are all leading figures in AI and ethics, known (in addition to their research) for being pushed out of Google over a paper criticizing the capabilities of AI. They now work together at the DAIR Institute, a research outfit aimed at studying, exposing and preventing the harms associated with AI.
But they were not among the signatories, and they have now published a rebuttal calling out the letter’s failure to engage with the existing problems caused by the technology.
“Those hypothetical risks are the focus of a dangerous ideology called longtermism that ignores the actual harms resulting from the deployment of AI systems today,” they wrote, citing worker exploitation, data theft, synthetic media that props up existing power structures, and the further concentration of those power structures in fewer hands.
Choosing to worry about a Terminator- or Matrix-style robot apocalypse is a red herring when, at the same moment, we have reports of companies like Clearview AI being used by the police to essentially frame an innocent man. No need for a T-1000 when you have Ring cameras on every front door, accessible through online warrant factories.
While the DAIR team agrees with some of the letter’s aims, such as identifying synthetic media, they emphasize that action must be taken now, on today’s problems, with the remedies already available to us:
What we need is regulation that ensures transparency. Not only should it always be clear when we encounter synthetic media, but organizations building these systems should also be required to document and disclose training data and model architectures. The responsibility for creating tools that are safe to use should rest with the companies that build and deploy generative systems, which means that the builders of those systems should be held accountable for the results produced by their products.
The current race towards ever larger “AI experiments” is not a predetermined path where our only choice is the speed of execution, but rather a set of profit-driven decisions. Corporate actions and choices must be shaped by regulation that protects people’s rights and interests.
It is indeed time to act: but the focus of our concerns should not be imaginary “powerful digital minds”. Instead, we should focus on the very real and very present exploitative practices of the corporations that claim to build them, which rapidly centralize power and increase social inequality.
Incidentally, this letter echoes a sentiment I heard from Uncharted Power founder Jessica Matthews at yesterday’s AfroTech event in Seattle: “You shouldn’t be afraid of AI. You should be afraid of the people building it.” (Her solution: become one of the people building it.)
While it is extremely unlikely that any major company will agree to suspend its research efforts in accordance with the open letter, it is clear from the reception the letter has received that the risks of AI – real and hypothetical – are of great concern across many segments of society. But if the companies won’t slow themselves down, perhaps someone will have to do it for them.