China’s annual Ching Ming Festival, the origins of which go back more than two millennia, was observed across the country this week through the traditional practice of maintaining the gravesites of the dead. It’s a holiday that calls for remembering and honoring loved ones who’ve died by doing everything from decorating gravesites and burning incense to bringing food offerings. Mourners young and old kneel and recite prayers. Families also burn “spirit money” as well as paper replicas of material goods that the dead are believed to still need in the afterlife.
These time-honored rites and observances now co-exist with new technology that mourners are increasingly using to achieve the same ends. For as little as the equivalent of $2.76, many Chinese families are reportedly choosing a more novel way to observe the so-called tomb-sweeping festival: paying for hyper-realistic AI avatars of their dead loved ones, part of a growing marketplace that’s widely advertised to Chinese citizens online.
This aerial photo shows families tending graves at the Chai Wan Chinese Cemetery in Hong Kong on April 4, 2024, as people visit cemeteries to honour their ancestors during the annual tomb-sweeping day. Image source: PETER PARKS/AFP via Getty Images
One software developer who posted about the practice on Weibo, for example, said he’s worked with hundreds of families this year alone who’ve been “reunited” with loved ones via the creation of an AI avatar. Using voice samples and imagery to create hyper-realistic replicas of the deceased that look, move, and sound like them, AI companies and developers have turned the demand for digital personas into a big and growing business, one worth nearly $2 billion in 2022 and forecast to grow severalfold through the rest of this year and into 2025.
These AI personas, by the way, are also turning up in settings that don’t involve mourning a loved one directly. Just a few weeks ago, the Chinese AI company SenseTime presented a speech from founder Tang Xiao’ou at its annual general meeting. “Hello everyone, we meet again,” Tang told employees. “Last year was tough for everyone, but I believe difficult things will eventually pass.”
That speech was delivered by an AI clone of the 55-year-old founder, who died in December. According to news reports, some attendees were brought to tears by the speech, in which the late company founder urged them: “Look forward, never look back … Let’s drive forward on the path of artificial intelligence.”
To some people, this kind of AI use case — whereby avatars help people cheat death, to a certain extent — might sound like the stuff of science fiction. But it’s very much a natural extension of recent advancements in AI, including breakthroughs built on sophisticated language-processing models and generative technology that, more than ever before, are blurring the line between what’s real and what’s been digitally rendered.
Contrast that deeply personal use of AI, meanwhile, with another use of the technology that garnered new headlines this week: its military application on the battlefield.
The same technology that can bring a version of someone back from the dead, and help loved ones maintain an emotional connection to that person, is also being used in some cases to decide whom to kill. According to The Guardian, Israel has been using an AI database called “Lavender” that relies on machine learning to identify targets based on their presumed links to Hamas.
At one point, the Lavender system, which was developed by the Israel Defense Forces’ Unit 8200 intelligence division, had flagged a staggering 37,000 potential targets, according to the newspaper’s reporting. And it gets even scarier: the newspaper goes on to report that during the early weeks of the now six-month-old conflict in Gaza, airstrikes were permitted to kill up to 20 civilians in the course of targeting low-ranking militants.
“The machine did it coldly. And that made it easier,” one intelligence officer said, arguing that the AI-based tool was a better mechanism than relying on a grieving soldier who lost someone on October 7 to pick the targets. Where the use of this technology gets especially chilling, though, is in the minimal human effort required to operate it.
Palestinian children inspect the area after an Israeli attack on the Jenin neighborhood in Rafah, Gaza, on April 04, 2024. Image source: Ahmed Zaqout/Anadolu via Getty Images
“I would invest 20 seconds for each target at this stage, and do dozens of them every day,” another user of the database acknowledged, in a report published by the Israeli-Palestinian outlet +972 Magazine and the Hebrew-language publication Local Call. “I had zero added-value as a human, apart from being a stamp of approval. It saved a lot of time.”
A companion AI-based decision tool in use by the Israeli military, dubbed “The Gospel,” does the same thing as Lavender except that it focuses on structures and buildings rather than people.
From keeping loved ones “alive” to helping slaughter people on the battlefield: this is the dichotomy that the rapidly expanding AI industry is arguably forcing humanity to live with. The one constant shared by both applications, perhaps, is that far too little attention is being paid to them. There are certainly ominous repercussions in store when computers automate the decision to kill this or that person on the battlefield.
Meanwhile, what does it do to the human brain and psyche to use AI to maintain some kind of communion with the dead? “People around me think I’ve lost my mind,” Taiwanese singer Bao Xiaobai told Chinese media after he created an AI avatar of his 22-year-old daughter, who died in 2022. Bao experimented with AI technology for more than a year to get the finished product just right, and he shared an example of it online.
By way of explaining himself to reporters, he simply pointed out that he wanted “to hear her voice again.”
We’re sharing our learnings from a small-scale preview of Voice Engine, a model which uses text input and a single 15-second audio sample to generate natural-sounding speech that closely resembles the original speaker. https://t.co/yLsfGaVtrZ
— OpenAI (@OpenAI) March 29, 2024
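OpenAI hasn’t made Voice Engine available beyond that small preview, and it has no public API, but the core capability the tweet describes — generating natural-sounding speech in someone’s voice from a short audio sample — already exists in open-source form. As a rough illustration only (not OpenAI’s model), here is a minimal sketch using the open-source Coqui TTS library and its XTTS v2 voice-cloning model; the file names and sample text are placeholders:

```python
# Illustrative only: this is NOT OpenAI's Voice Engine, which has no public API.
# It uses the open-source Coqui TTS library (pip install TTS) and its XTTS v2
# model, which clones a voice "zero-shot" from a short reference recording.
import torch
from TTS.api import TTS

device = "cuda" if torch.cuda.is_available() else "cpu"

# Download and load the multilingual XTTS v2 voice-cloning model.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2").to(device)

# Synthesize new speech in the voice of the reference speaker.
# "reference.wav" is a placeholder path to a few seconds of the speaker's audio.
tts.tts_to_file(
    text="Hello everyone, we meet again.",
    speaker_wav="reference.wav",  # short sample of the voice to clone
    language="en",
    file_path="cloned_speech.wav",
)
```

That a few seconds of audio plus a line of text is enough input goes some way toward explaining both the $2.76 price point of the avatar services described above and the unease these tools are generating.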
Rather than grappling with the difficult implications of these kinds of use cases, though, the leaders and founders of some of the companies at the forefront of this technology (like OpenAI) often espouse a contradiction. OpenAI CEO Sam Altman, for example, thinks there’s a non-zero chance that advanced AI could end up being perilous for humanity, yet he and others are plowing ahead regardless and building the very thing they believe could destroy us.
An OpenAI blog post from February 2023 explains why none of this is likely to slow down anytime soon. The company argues that artificial general intelligence, or AGI, indeed comes with “serious risk of misuse, drastic accidents, and societal disruption. (But) because the upside of AGI is so great, we do not believe it is possible or desirable for society to stop its development forever; instead, society and the developers of AGI have to figure out how to get it right.”