Digital 'immortality' is coming and we're not ready for it

Artificial intelligence has come a long way recently, but it's still imperfect.

In the 1990 fantasy drama Truly, Madly, Deeply, lead character Nina (Juliet Stevenson) is grieving the recent death of her boyfriend Jamie (Alan Rickman). Sensing her profound sadness, Jamie returns as a ghost to help her process her loss. If you’ve seen the film, you’ll know that his reappearance forces her to question her memory of him and, in turn, accept that maybe he wasn’t as perfect as she’d remembered. Here in 2023, a new wave of AI-based “grief tech” offers us all the chance to spend time with loved ones after their death — in varying forms. But unlike Jamie (who benevolently misleads Nina), we’re being asked to let artificial intelligence serve up a version of those we survive. What could possibly go wrong?

While generative tools like ChatGPT and Midjourney dominate the AI conversation, we’re broadly ignoring the larger ethical questions around topics like grief and mourning. The Pope in a puffa is cool, after all, but thinking about your loved ones after death? Not so much. And if you believe generative AI avatars for the dead are still a long way off, you’d be wrong. At least one company is offering digital immortality already — and it’s as costly as it is eerie.

Re;memory, for example, is a service from Deepbrain AI, a company whose main business includes “virtual assistant”-style interactive screens and AI news anchors. The Korean firm took its experience marrying chatbots and generative AI video to its ultimate, macabre conclusion. For $10,000 and a few hours in a studio, you can create an avatar of yourself that your family can visit (at additional cost) at an offsite facility. Deepbrain is based in Korea, where mourning traditions include “Jesa,” an annual visit to the departed’s resting place.

Right now, even by the company’s own admission, the service doesn’t claim to replicate a subject’s personality with much depth; the training set really only affords the avatar one “mood.” Michael Jung, Business Development and Strategy Lead at Deepbrain, told Engadget: “If I want to be a very entertaining Michael, then I have to read very hyper voices or entertaining voices for 300 lines. Then every time when I input the text [to the avatar] I'm going to have a very exciting Michael.” Re;memory isn’t currently trying to create a true facsimile of the subject — it’s something you can visit occasionally and have basic interactions with — but one hopes the avatars have a little more character than a virtual hotel receptionist.

While Re;memory has the added benefit of being a video avatar that can respond to your questions, audio-based HereAfter AI tries to capture a little more of your personality with a series of questions. The result is an audio chatbot that friends and family can interact with, receiving verbal answers and even stories and anecdotes from the past. By all accounts, the pre-trained chatbots provide convincing answers in their owners’ voices — until the illusion is unceremoniously broken when one robotically responds “Sorry, I didn’t understand that. You can try asking another way, or move onto another topic” to any query it doesn't have an answer for.

Whether these technologies create a realistic avatar or not isn’t the primary concern — AI is moving at such a clip that it will certainly improve. The trickier questions are these: Who owns this avatar once you’re gone? Are your memories and data safe and secure? And what impact might all this have on those we leave behind?

Joanna Bryson, Professor of Ethics and Technology at the Hertie School of Governance, likens the current wave of grief tech to the era when Facebook was more popular with young people. Back then, it was a common destination for memorializing friends who had passed, and the emotional impact of this was striking. “It was such a new, immediate form of communication, that kids couldn't believe they were gone. And they seriously believe that their dead friends were reading it. And they're like, ‘I know you're seeing this.’”

The inherent extra dimension that AI avatars bring only adds fuel to concerns about the impact these creations might have on our grieving brains. As Bryson put it: “What does it do to your life, that you're spending your time remembering […] maybe it's good to have some time to process it for a while. But it can turn into an unhealthy obsession.”

Bryson also thinks this same technology could start being used in ways it wasn’t originally intended. “What if you’re a teenager or preteen and you spend all your time on the phone with your best friend? And then you figure out you prefer, like, an [AI] synthesis of your best friend and Justin Bieber or something. And you stop talking to your actual best friend,” she said.

Of course, that scenario is beyond current capabilities. Not least because creating an AI version of a living best friend would require so much data that we’d need their participation (and consent) in the process. But this might not be the case for much longer. The recent spate of fake AI songs in the style of famous artists shows what’s already possible, and it won’t be long before you won’t need to be a celebrity for there to be enough publicly available material to feed a generative AI. Microsoft’s VALL-E, for example, can already do a decent job of cloning a voice with just three seconds of source material.

If you have ever had the misfortune of sorting through a dead relative’s possessions, you’ll know that you often learn things about them you never knew. Maybe it was a fondness for a certain type of poetry, revealed by their underlinings in a book. Or maybe something more sinister, like bank statements showing crippling debt. We all have details that make us complex, complete human beings. Details that, often intentionally, remain hidden from our public persona. This throws up another time-honored ethical conundrum.

The internet is awash with stories of parents and loved ones seeking access to a deceased person’s email or messaging accounts to remember them by. For better or worse, we may not feel comfortable telling our immediate family about our sexuality or our politics, or that our spouse was having an affair — all things our private digital messages might reveal. And if we’re not careful, this could be data we inadvertently hand over to an AI for training, only for it to burp that secret out posthumously.

Even with the consent of the person being recreated in AI, there are no assurances that someone else can’t get their hands on the digital version of you and abuse it. Right now, that broadly falls into the same crime bucket as someone stealing your credit card details — at least until they do something public with it, at which point other laws, such as the right to publicity, may apply. But usually, those protections exist only for the living.

Bryson suggests that the logical answer for data protection might be something we’re already familiar with — like the locally stored biometric data we use to unlock our phones. “Apple has never trusted anyone. So they really are very privacy oriented. So I tend to think that that's the kind of organization that will come up with stuff, because they want it themselves.” (The main issue with this approach, as Bryson points out, is that if your house burns down, you risk losing “grandma” forever.)

Data will always be at risk, no matter where or how it’s stored. It’s a peril of modern-day living. And all those concerns about privacy might feel like a tomorrow problem (in the same way we tend to worry about online fraud only once it’s happened to us). The cost, accuracy and just general creepiness of AI and our future digital avatars might be scary, but their arrival also feels like a crushing inevitability. Still, that doesn’t mean our future is doomed to be an ocean of Max Headrooms spouting our innermost secrets to any hacker that will listen.

“It will be a problem in the immediate, there probably is a problem already,” Bryson said. “But I would hope that a good high-quality version would have transparency, and you'd be able to check it. And I'm sure that Bing and Google are working on this now, for being able to verify where chat programs get their ideas from.” Until that time, though, we’re at risk of finding out the hard way.

Bryson is keen to point out that there are some positive takeaways, and they’re available to the living. “If you make it too much about death, you aren't thinking correctly about it,” she said. This technology forces us to confront our mortality in a new, if curious, way, and that can only help us think about the relationships we have right here in the world of the living. An AI version of someone will always be a poor facsimile, so, as Bryson suggests, why not get to know the real person better while you can? “I wish people would rehearse conversations with a chatbot and then talk to a real person and find out what the differences are.”
