
Artificial intelligence does not exist


No one sells the future more masterfully than the tech industry. According to its proponents, we will all live in the “metaverse,” build our financial infrastructure on “web3,” and power our lives with “artificial intelligence.” All three terms are mirages that have attracted billions of dollars despite colliding with reality. Artificial intelligence in particular evokes the notion of thinking machines. But no machine can think, and no software is truly intelligent. The phrase alone may be one of the most successful marketing terms of all time.

Last week, OpenAI announced GPT-4, a major upgrade to the technology behind ChatGPT. The system sounds even more human than its predecessor, naturally reinforcing notions of intelligence. But GPT-4 and other large language models like it are simply mirroring databases of text—nearly a trillion words for the previous model—whose scale is hard to fathom. Helped along by an army of humans correcting their output, the models string words together based on probability. That’s not intelligence.

These systems are trained to generate text that sounds plausible, yet they are marketed as new oracles of knowledge that can be plugged into search engines. That is foolhardy when GPT-4 continues to make errors, and it was only a few weeks ago that Microsoft and Alphabet’s Google both suffered embarrassing demonstrations in which their new search engines got facts wrong.

It doesn’t help matters that terms like “neural networks” and “deep learning” only reinforce the idea that these programs are humanlike. Neural networks are in no way copies of the human brain; they are only loosely inspired by its workings. Long-running efforts to replicate the human brain, with its approximately 85 billion neurons, have all failed. The closest scientists have come is emulating the brain of a worm, with 302 neurons.

We need a different lexicon that doesn’t propagate magical thinking about computer systems and doesn’t absolve the people who design those systems of responsibility. What is the best alternative? Reasonable technologists have been trying for years to replace “AI” with “machine learning systems,” but that doesn’t roll off the tongue in quite the same way.

Stefano Quintarelli, a former Italian politician and technologist, proposed another alternative, “Systems Approaches to Learning Algorithms and Machine Inferences,” or SALAMI, to highlight the ridiculousness of the questions people are asking about AI: Is SALAMI sentient? Will SALAMI one day have supremacy over humans?

The most desperate attempt at a semantic alternative is probably the most precise: “software.”

“But,” I hear you ask, “what’s wrong with using a little metaphorical shorthand to describe a technology that seems so magical?”

The answer is that attributing intelligence to machines gives them an undeserved independence from humans, and it absolves their creators of responsibility for their impact. If we see ChatGPT as “smart,” then we are less inclined to try to hold San Francisco startup OpenAI, its creator, accountable for its inaccuracies and biases. It also breeds a fatalistic complacency among the people who suffer technology’s harmful effects; yet “AI” will not take your job or plagiarize your artistic creations. Other humans will.

The problem is increasingly pressing now that companies ranging from Meta Platforms to Snap to Morgan Stanley are rushing to plug chatbots and text and image generators into their systems. Encouraged by its new arms race with Google, Microsoft is integrating OpenAI’s still largely untested language model technology into its most popular business applications, including Word, Outlook and Excel. “Copilot will fundamentally change the way people work with AI and the way AI works with people,” Microsoft said of its new feature.

But for customers, the promise of working with intelligent machines is almost misleading. “[AI is] one of those labels that expresses a kind of utopian hope rather than current reality, much like the rise of the phrase ‘smart weapons’ during the first Gulf War implied a bloodless vision of totally precise targeting that still isn’t possible,” says Steven Poole, author of the book Unspeak, about the dangerous power of words and labels.

Margaret Mitchell, a computer scientist who was fired by Google after publishing a paper criticizing the biases of large language models, has reluctantly described her own work as “AI”-based in recent years. “Before… people like me said we were working on ‘machine learning.’ It’s a great way to get people’s attention,” she admitted Friday during a conference panel.

Her former Google colleague and founder of the Distributed Artificial Intelligence Research Institute, Timnit Gebru, said she also only started saying “AI” around 2013: “It became the thing to say.”

“It’s terrible but I do that too,” Mitchell added. “I call everything I touch ‘AI’ because then people will listen to what I say.”

Unfortunately, the term “AI” is so ingrained in our vocabulary that it will be almost impossible to shake off, and the obligatory air quotes are hard to remember. At the very least, we should remember how dependent these systems are on human managers, who should be held accountable for their side effects.

Author Poole says he prefers to call chatbots like ChatGPT and image generators like Midjourney “giant plagiarism machines” because they mostly recombine prose and images originally created by humans. “I’m not sure it will catch on,” he said.

In more ways than one, we are truly stuck with “AI.”

© 2023 Bloomberg LP

