AI “Deadbots” Could Digitally “Haunt” Loved Ones From Beyond the Grave

Cambridge researchers are warning of the psychological dangers of “deadbots” – AI chatbots that mimic deceased individuals – and are urging ethical standards and consent protocols to prevent misuse and ensure respectful interaction.

Artificial intelligence that allows users to hold text and voice conversations with deceased loved ones risks causing psychological harm and even digitally “haunting” those left behind, according to researchers at the University of Cambridge, who are calling for design safety standards.

“Deadbots” or “griefbots” are AI chatbots that simulate the language patterns and personality traits of the dead using the digital footprints they leave behind. Some companies already offer these services, creating a whole new type of “post-mortem presence.”

AI ethicists from Cambridge’s Leverhulme Centre for the Future of Intelligence present three platform design scenarios that could emerge as the “digital afterlife industry” develops, to show the potential consequences of careless design in an area of AI they describe as “high risk.”

Misuse of AI chatbots

The research, published in the journal Philosophy & Technology, highlights the potential for companies to use deadbots to surreptitiously advertise products to users in the manner of a deceased loved one, or to distress children by insisting that a deceased relative is still “with you.”

When the living sign up to be virtually recreated after death, the resulting chatbots could be used by companies to spam their surviving family and friends with unsolicited notifications, reminders, and updates about the services they provide – almost as if the dead were digitally “stalking” the living.

Even those who find initial comfort in a deadbot may be exhausted by daily interactions that become a “crushing emotional weight,” researchers say, yet they may also be powerless to have the AI simulation suspended if their now-deceased loved one signed a lengthy contract with a digital afterlife service.

A visualization of a fictional company called MaNana, one of the design scenarios used in the paper to illustrate potential ethical issues in the emerging digital afterlife industry. Credit: Dr Tomasz Hollanek

“Rapid advances in generative AI mean that almost anyone with internet access and some basic know-how can revive a deceased loved one,” said Dr Katarzyna Nowaczyk-Basińska, co-author of the study and researcher at Cambridge’s Leverhulme Centre for the Future of Intelligence (LCFI). “This area of AI is an ethical minefield. It is important to prioritize the dignity of the deceased and to ensure this is not encroached on by financial motives of digital afterlife services, for example. At the same time, a person may leave an AI simulation as a farewell gift for loved ones who are not ready to process their grief in this way. The rights of both data donors and those who interact with AI afterlife services must be equally protected.”

Existing services and hypothetical scenarios

Platforms offering to recreate the dead with AI for a modest fee already exist, such as “Project December”, which started out using GPT models before developing its own systems, and apps including “HereAfter”. Similar services have also begun to emerge in China. One potential scenario in the new paper is “MaNana”: a conversational AI service allowing people to create a deadbot simulating their deceased grandmother, without the consent of the “data donor” (the deceased grandparent).

In this hypothetical scenario, an adult grandchild who is initially impressed and comforted by the technology begins to receive advertisements once a “premium trial” ends – for example, the chatbot suggests ordering from food delivery services in the voice and style of the deceased. The grandchild feels they have disrespected their grandmother’s memory and wants the deadbot switched off, but in a meaningful way – something the service providers have not considered.

A visualization of a fictional company called Paren’t. Credit: Dr Tomasz Hollanek

“People could develop strong emotional bonds with such simulations, which will make them particularly vulnerable to manipulation,” said co-author Dr Tomasz Hollanek, also of Cambridge’s LCFI. “Methods and even rituals for retiring deadbots in a dignified way should be considered. This could mean a form of digital funeral, for example, or other types of ceremony depending on the social context. We recommend design protocols that prevent deadbots being used in disrespectful ways, such as for advertising or having an active presence on social media.”

While Hollanek and Nowaczyk-Basińska argue that designers of re-creation services should actively seek consent from data donors before they pass away, they contend that a ban on deadbots based on non-consenting donors would be unworkable.

They suggest that design processes should include a series of prompts for those seeking to “resurrect” their loved ones, such as “Have you ever talked with X about how they would like to be remembered?”, so that the dignity of the departed is foregrounded in deadbot development.

Age restrictions and transparency

Another scenario featured in the paper, an imagined company called “Paren’t”, highlights the example of a terminally ill woman leaving a deadbot to assist her eight-year-old son with the grieving process.

While the deadbot initially helps as a therapeutic aid, the AI starts to generate confusing responses as it adapts to the needs of the child, such as depicting an impending in-person encounter.

A visualization of a fictional company called Stay. Credit: Dr Tomasz Hollanek

The researchers recommend age restrictions for deadbots, and also call for “meaningful transparency” to ensure users are consistently aware that they are interacting with an AI. These could be similar to current warnings for content that may cause seizures, for example.

The final scenario explored by the study – a fictional company called “Stay” – shows an older person secretly committing to a deadbot of themselves and paying for a twenty-year subscription, in the hope it will comfort their adult children and allow their grandchildren to know them.

After death, the service kicks in. One adult child does not engage and receives a barrage of emails in the voice of their dead parent. Another does, but ends up emotionally exhausted and racked with guilt over the fate of the deadbot. Yet suspending the deadbot would violate the terms of the contract their parent signed with the service company.

“It is essential that digital afterlife services consider the rights and consent of not only those they recreate, but also those who will need to interact with the simulations,” Hollanek said.

“These services run the risk of causing huge distress to people if they are subjected to unwanted digital hauntings from alarmingly accurate AI recreations of those they have lost. The potential psychological effect, particularly at an already difficult time, could be devastating.”

The researchers call for design teams to prioritize opt-out protocols that allow would-be users to terminate their relationships with deadbots in ways that provide emotional closure.

Nowaczyk-Basińska added: “We need to start thinking now about how we mitigate the social and psychological risks of digital immortality, because the technology is already here.”

Reference: “Griefbots, Deadbots, Postmortem Avatars: on Responsible Applications of Generative AI in the Digital Afterlife Industry” by Tomasz Hollanek and Katarzyna Nowaczyk-Basińska, May 9, 2024, Philosophy & Technology.
DOI: 10.1007/s13347-024-00744-w
