Why humans can’t trust AI

(The Conversation) – There are alien minds among us. Not the little green men of science fiction, but the alien minds that power your smartphone’s facial recognition, determine your creditworthiness, and write poetry and computer code. These alien minds are artificial intelligence systems, the ghost in the machine that you encounter every day.

But AI systems have an important limitation: many of their inner workings are inscrutable, making them fundamentally unexplainable and unpredictable. On top of that, building AI systems that behave in the ways people expect is a significant challenge.

If you fundamentally don’t understand something as unpredictable as AI, how can you trust it?

Why AI is unpredictable

Trust is based on predictability. It depends on your ability to anticipate the behavior of others. If you trust someone and they don’t do what you expect, your perception of their trustworthiness decreases.

Many AI systems rely on deep learning neural networks, which in some ways mimic the human brain. These networks contain interconnected “neurons” with variables, or “parameters,” that affect the strength of the connections between the neurons. As an untrained network is presented with training data, it “learns” to classify the data by adjusting these parameters. In this way, the AI system learns to classify data it has never seen before. It does not memorize what each data point is; rather, it predicts what a data point is likely to be.
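
To make the idea of “adjusting parameters” concrete, here is a minimal, hypothetical sketch of a single artificial neuron trained to classify points, written in Python. Nothing in it comes from any system described in this article; the data, learning rate and number of training steps are purely illustrative.

```python
# A toy "neuron": two weights and a bias are adjusted on labeled training
# data until the neuron can classify points it has never seen before.
import numpy as np

rng = np.random.default_rng(0)

# Illustrative training data: 2-D points labeled 1 if x + y > 1, else 0.
X = rng.uniform(0, 1, size=(200, 2))
y = (X.sum(axis=1) > 1.0).astype(float)

w = np.zeros(2)   # connection strengths ("parameters")
b = 0.0           # bias term
lr = 0.5          # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(1000):
    p = sigmoid(X @ w + b)            # current predictions
    grad_w = X.T @ (p - y) / len(y)   # gradient of the cross-entropy loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w                  # "learning" = adjusting the parameters
    b -= lr * grad_b

# The trained neuron predicts labels for points it never memorized:
# a high probability for the first point, a low one for the second.
test = np.array([[0.9, 0.8], [0.1, 0.2]])
print(sigmoid(test @ w + b))
```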

Many of the most powerful AI systems contain billions of parameters. As a result, the reasons why AI systems make the decisions they do are often opaque. This is the AI explainability problem – the impenetrable black box of AI decision-making.

Consider a variation of the “trolley problem.” Imagine that you are a passenger in an autonomous vehicle controlled by an AI. A small child runs into the road, and the AI must now decide: run over the child or swerve and crash, potentially injuring its passengers. This choice would be difficult for a human to make, but a human has the advantage of being able to explain their decision. Their rationalization – shaped by ethical standards, perceptions of others, and expected behavior – promotes trust.

On the other hand, an AI cannot rationalize its decision-making. You can’t look under the hood of the autonomous vehicle at its trillions of parameters to explain why it made this decision. AI does not meet the predictive requirement for trust.

AI behavior and human expectations

Trust is based not only on predictability, but also on normative or ethical motivations. You generally expect people to act not only as you assume they will, but also as they should. Human values are influenced by shared experience, and moral reasoning is a dynamic process, shaped by ethical standards and the perceptions of others.

Unlike humans, AI does not adjust its behavior based on how it is perceived by others or by adhering to ethical standards. AI’s internal representation of the world is largely static, defined by its training data. Its decision-making process is based on an unchanging model of the world, insensitive to the dynamic and nuanced social interactions that constantly influence human behavior. Researchers are working on programming AI to include ethics, but this is proving difficult.

The autonomous car scenario illustrates this problem. How can you ensure that the car’s AI makes decisions that match human expectations? For example, the car might decide that hitting the child is the best course of action, something most human drivers would instinctively avoid. This is the AI alignment problem, and it is another source of uncertainty that erects barriers to trust.

(AI expert Stuart Russell explains the AI alignment problem in a video accompanying the original article.)

Critical Systems and Trusted AI

One way to reduce uncertainty and build trust is to ensure that people are involved in the decisions that AI systems make. This is the approach taken by the US Department of Defense, which requires that for all AI decision-making, a human must be either in the loop or on the loop. In the loop means the AI system makes a recommendation, but a human is required to initiate an action. On the loop means that although the AI system can initiate an action on its own, a human monitor can interrupt or modify it.
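
As a rough illustration of the difference between the two arrangements, the hypothetical Python sketch below contrasts them. The function and variable names are invented for this example and do not describe any real Department of Defense system.

```python
# Hypothetical sketch contrasting human "in the loop" and "on the loop"
# oversight. All names here are illustrative.

def ai_recommend(situation: str) -> str:
    """Stand-in for an AI model that proposes an action."""
    return f"proposed response to {situation}"

def human_approves(action: str) -> bool:
    """Stand-in for a human operator reviewing a recommendation."""
    return input(f"Approve '{action}'? [y/n] ").strip().lower() == "y"

def in_the_loop(situation: str) -> None:
    # In the loop: the AI only recommends; no action is taken
    # until a person explicitly initiates it.
    action = ai_recommend(situation)
    if human_approves(action):
        print(f"Executing: {action}")
    else:
        print("No action taken.")

def on_the_loop(situation: str, human_interrupts: bool = False) -> None:
    # On the loop: the AI initiates the action on its own, while a
    # human monitor watches and can interrupt or modify it.
    action = ai_recommend(situation)
    print(f"AI initiated: {action}")
    if human_interrupts:
        print("Human monitor interrupted or modified the action.")
```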

Although involving humans is an important first step, I am not convinced that it will be sustainable in the long term. As businesses and governments continue to adopt AI, the future will likely include nested AI systems, in which rapid decision-making limits people’s opportunities to intervene. It is important to resolve the explainability and alignment problems before reaching the tipping point where human intervention becomes impossible. Beyond that point, there will be no choice but to trust the AI.

Avoiding this threshold is particularly important as AI is increasingly integrated into critical systems, including power grids, the Internet, and military systems. In critical systems, trust is paramount and unwanted behavior could have deadly consequences. As AI integration becomes more complex, it becomes even more important to address issues that limit reliability.

Can people ever trust AI?

AI is alien – an intelligent system into which people have little insight. Humans are largely predictable to other humans because we share the same human experience, but this does not extend to artificial intelligence, even though humans created it.

Given that trustworthiness has inherently predictive and normative elements, AI fundamentally lacks the qualities that would make it worthy of trust. Hopefully, more research in this area will shed light on this issue, ensuring that the AI systems of the future are worthy of our trust.
