Machines today can perform a wide variety of tasks and can respond to almost any stimulus a human can. For simple tasks they have already surpassed us, calculating millions of times faster and more consistently than we do.
This is how we got “intelligent” assistants like Siri, Cortana and now Facebook’s M: virtual entities designed to serve humans better. But in this drive to make machines smarter and more appealing, have we gone too far?
Today we live in a world where the concept of the singularity is drawing closer. The term “singularity” refers to a moment of complete change, when a society transforms to the point of being unrecognizable to its predecessors. An AI singularity could create a utopia in which robots and machine learning automate common forms of work while humans relax amid abundant resources. On the other hand, if popular culture is any indication, AI could exterminate humanity as a contender for dominance on Earth. It sounds alarmist, but it’s not just Hollywood saying so.
World-renowned physicist Stephen Hawking told the BBC that “the development of full artificial intelligence could spell the end of the human race,” warning that humans, limited by slow biological evolution, could not compete and would be superseded.
Tesla and SpaceX CEO Elon Musk is no fan of AI either. During a talk at MIT he likened developing artificial intelligence to “summoning the demon” and called it the greatest existential threat to the human race. Musk has also tweeted that AI could be more dangerous than nuclear weapons.
Despite this, we keep working to make our devices smarter. Facebook’s new image-recognition algorithm can track people’s unique characteristics, not just recognize faces. Voice-activated AIs that can interact in natural, intelligent ways are in all of our pockets and are growing increasingly sophisticated.
In simple terms, this means that if something goes wrong inside your computer, it can talk to you and tell you what’s wrong, just as a person would. Thomas Hain, of the Department of Computer Science at the University of Sheffield, added: “Speech technology is clearly becoming mainstream, but the key to its success is human-like performance.”
Then there is Project Adam, an initiative developed by Microsoft that focuses on visual recognition, one of the key capabilities of deep learning needed to develop true AI. The project has been tested on identifying specific breeds of dog through visual recognition.
Each step takes us toward better and better machines. With a self-driving car, for example, you could command the AI to drive to a particular location; if you then tried to get out while traffic was approaching, the car could stop you from opening the door unsafely. That sounds great, but only if the car is right 100% of the time; otherwise you could be inadvertently kidnapped by your own car.
And what about the singularity: a computer program that becomes aware of itself and potentially becomes an intelligent form of “life” unlike any we have encountered before? We humans can barely deal with one another; highly intelligent lifeforms like the octopus are eaten without a second thought; and a machine is certainly not something we have ever regarded as an equal.
We evolve at our slow biological rate, but an inorganic intelligence could overtake us through faster generational changes unconstrained by biological processes. And when that happens, in an age where everything is connected to the internet, from your watch to your phone to your TV to your car and even your home, how safe are we?
We’ve been talking about artificial intelligence for a long time, but mostly in terms of smarter assistants on our phones. It’s important that we take seriously the warnings of people like Hawking and Musk, and tread carefully as we create the “machine brain.”