AI ethicists have warned that “deadbots” could cause psychological harm to, and even “haunt”, their creators and users.
- Deadbots are AI-enabled digital representations of deceased loved ones. These chatbots draw on a person's digital footprint, such as emails, social media posts and even voice recordings, to simulate that person's language patterns and personality traits in a conversational AI.
- They may be marketed to parents with terminal illnesses who want to leave something behind for their children to interact with, or sold to still-healthy people who want to catalogue their entire lives and create an interactive legacy.
- Ethicists say: “It’s important to prioritise the dignity of the deceased, and ensure that this isn’t encroached on by financial motives of digital afterlife services, for example.”
- In 2021, Microsoft secured a patent for a deadbot that could ‘resurrect’ the dead. An AI chatbot called Project December uses patent-pending technology to simulate text-based conversations with anyone, including the dead. Such services have taken off in the United States and China.
- Parents who want to help their children deal with the loss of a mother or father may soon turn to deadbots.
- But there is little evidence that such an approach is psychologically helpful, and considerable reason to think it could cause significant harm by short-circuiting the normal mourning process.