Many fear that artificial intelligence is the end of humanity – here’s the truth according to experts.
Currently, most people around the world are using some sort of AI-enabled device that is integrated into their daily lives.
They use Siri to check the weather or ask Alexa to turn off their smart lights — forms of AI that many people don’t even realize they are using.
However, despite the widespread (and relatively harmless) use of this technology in almost every aspect of our lives, some people still seem to believe that machines could one day wipe out humanity.
This apocalyptic idea has been perpetuated through various texts and films over the years.
Even leading figures in science such as Stephen Hawking and Elon Musk have spoken out about the threat of technology to humanity.
In 2020, Musk told The New York Times that AI would become much smarter than humans and overtake the human race by 2025, adding that things would get “unstable or weird”.
Despite Musk’s prediction, most experts in the field say humanity has nothing to fear when it comes to AI — at least, not yet.
Most AIs are “narrow”
The fear of AI takeover grew out of the idea that machines will somehow gain consciousness and turn against their creators.
For AI to achieve this, it would need not only human-like intelligence but also the ability to predict the future and plan ahead.
As it stands, AI is capable of neither.
When asked the question “Is AI an existential threat to humanity?”, Matthew O’Brien, a robotics engineer from the Georgia Institute of Technology, wrote on Metafact: “The long-sought goal of ‘general AI’ is not on the horizon. We just don’t know how to make adaptable general intelligence, and it’s not clear how much progress is still needed to get to this point.”
The fact is that machines generally operate as they are programmed, and we are a long way from developing the ASI (artificial superintelligence) necessary to make such a “takeover” even feasible.
Currently, most AI technologies used by machines are considered “narrow” or “weak”, meaning that they can only apply their knowledge to one or a few tasks.
“Machine learning and AI systems are a long way from solving the difficult problem of consciousness and being able to generate their own goals contrary to their programming,” George Montanez, a data scientist at Microsoft, wrote in the same Metafact thread.
AI could help us understand ourselves better
Some experts even go so far as to say that not only is AI not a threat to humanity, but it could help us understand ourselves better.
“Thanks to AI and robotics, we are now able to ‘simulate’ in robots and robot colonies the theories related to consciousness, emotions, intelligence, ethics and to compare them on a scientific basis,” said Antonio Chella, professor of robotics at the University of Palermo.
“So we can use AI and robotics to understand each other better. In summary, I think AI is not a threat but an opportunity to become better humans by knowing ourselves better,” he added.
AI has risks
That said, it’s clear that AI (and any technology) could pose a risk to humans.
Some of those risks include over-optimization, militarization and ecological collapse, according to Ben Nye, director of learning sciences at the University of Southern California’s Institute for Creative Technologies (USC-ICT).
“If AI is explicitly designed to kill or destabilize nations…the accidental or experimental release of weaponized viral AI could easily be one of the next big Manhattan Project scenarios,” he said on Metafact.
“We’re already seeing smarter virus attacks by state-sponsored actors, which is definitely how it’s starting,” Nye added.
This story originally appeared on The Sun and has been reproduced here with permission.
New York Post