EU wants companies to be held accountable for damage caused by AI – POLITICO


The European Commission on Wednesday proposed new rules that would force makers of software and artificial intelligence-based products to compensate those harmed by their creations.

A new AI Liability Directive would make it easier to pursue compensation claims when a person or organization is injured or suffers damage from AI-powered drones and robots, or from software such as the algorithms used in automated hiring.

“The new rules will give victims of damage caused by AI systems an equal opportunity and access to a fair trial and redress,” Justice Commissioner Didier Reynders told reporters ahead of the presentation of the proposals.

The bill is the latest attempt by European officials to regulate AI and set a global standard for controlling the burgeoning technology. It comes as the EU negotiates the AI Act, the world’s first bill to restrict high-risk uses of AI, including facial recognition, “social scoring” systems and AI-powered software for immigration and benefits.

“If we want to have real consumer and user confidence in AI applications, we have to be sure that it is possible to have such access to compensation and to have access to a real decision in court, if necessary, without too many obstacles, such as the opacity of the systems,” Reynders said.

Under the new law, victims could sue a provider, developer or user of AI technology if they suffer harm to their health or property, or if they experience discrimination affecting a fundamental right such as privacy. Until now, it has been difficult and extremely expensive for victims to build a case when they believe they have been harmed by an AI system, because the technology is complex and opaque.

Courts would have more power to open the black boxes of AI companies and demand detailed information about the data used for algorithms, technical specifications and risk control mechanisms.

With this new access to information, victims could prove that the damage came from a technology company that sold an AI system, or that an AI user (for example, a university, an employer or a government agency) failed to comply with obligations under other European laws, such as the AI Act or the directive protecting platform workers. Victims would also have to prove that the damage is linked to the specific AI application.

The European Commission has also presented a revised Product Liability Directive. The 1985 law is not suited to new categories of products such as connected devices, and the revised rules aim to let customers claim compensation when they suffer damage resulting from a software update, upgrade or faulty service. The proposed product liability rules also put online marketplaces in the crosshairs: under the rules, they can be held liable if they fail, on request, to disclose a trader’s name to someone who has suffered damage.

The Commission’s proposal will still have to be approved by national governments in the Council of the EU and by the European Parliament.

Parliament, in particular, could object to the European Commission’s choice of a weaker liability regime than the one the chamber itself previously suggested.

In 2020, the chamber called on the Commission to adopt rules ensuring that victims of harmful AI can obtain compensation, specifically asking that developers, suppliers and users of high-risk autonomous AI be held legally liable even for unintentional damage. But the EU executive opted for a weaker, “pragmatic” approach over this strict liability regime, saying the evidence was “not sufficient to justify” it.

“We chose the lowest level of intervention,” Reynders said. “We need to see if new developments [will] justify stricter rules for the future.”

The Commission will assess whether a tougher regime is needed five years after the directive comes into force, he said.

Pieter Haeck contributed reporting.
