
Why some celebrities are embracing Artificial Intelligence deepfakes

Taken from www.bbc.co.uk | Author: Nick Marsh | Date: 19 July 2023


Singaporean actress, model and former radio DJ Jamie Yeo has no problem with being deepfaked. In fact, she signed up for it.


"It's a bit like that Black Mirror episode with Salma Hayek," Ms Yeo jokes.


She was speaking to the BBC the day after the release of the new series of Charlie Brooker's Netflix show. In the first episode, actress Salma Hayek, playing a fictionalised version of herself, signs away her image to a production company.


The deal allows the company to use an artificial intelligence (AI) generated deepfake version of the Hollywood A-lister to "star" in its new TV drama. What she says and does in the show is controlled by the computer.


The consequences for Ms Hayek - without spoiling the story - are not good.

Concerns about the impact of AI are partly behind the first Hollywood actors' strike in more than four decades, bringing the US movie and television business to a halt.


It comes after the Screen Actors Guild (SAG-AFTRA) failed to reach an agreement in the US on better protections for its members against the misuse of AI.


The actors' union warned that "artificial intelligence poses an existential threat to creative professions" as it prepared to dig in over the issue.


However, Ms Yeo is not worried. She is one of a growing number of celebrities embracing AI-generated advertising.


The new technology is being met with a mixture of excitement and trepidation.


Ms Yeo has just agreed a deal with financial technology firm Hugosave, which allows it to use a digitally manipulated likeness of her to sell its content.


The process is fairly simple. She spends a couple of hours in front of a green screen to capture her face and movements, then a couple more hours in a recording studio to capture her voice.


An AI programme then synchronises the images with the audio to create a digital alter-ego capable of saying practically anything. The results are uncanny.


"I do understand the concern, but this technology is here to stay," she says. "So even if you don't embrace it because you're scared, there will be other people who will embrace it."


Some already have. As part of his deal with PepsiCo, superstar footballer Lionel Messi allowed the company to use a deepfake version of himself to advertise Lay's crisps.


Not only can online users create personalised video messages from "Lionel Messi", they can have them delivered in English, Spanish, Portuguese and Turkish.


Fellow football superstar David Beckham and Hollywood legend Bruce Willis have also dabbled with deepfake technology - though, unlike Ms Yeo, they have so far stopped short of signing away full image rights.


"I think deepfakes will just become part of normal practice in the advertising industry over the next few years," says Dr Kirk Plangger, a marketing expert at King's College London.


"It opens the door to all kinds of creative options. They're able to micro-target consumers and are often extremely persuasive."


The efficiency of the process also makes it attractive from a commercial point of view.

"You're not doing that much work for the money you're charging," Ms Yeo says.


"It's also good for the client on a budget because they get so much more content than from a normal shoot. So it works for everyone."


The client - in this case Singapore-based Hugosave - agrees.


"Having this technology available means we can literally produce hundreds of videos in a matter of days. Compare that to the months, if not years, that we'd need if we were filming the content in the traditional way," says Braham Djidjelli, Hugosave's co-founder and chief product officer.


"We're able to leverage the benefit of AI while also retaining the human touch of a trusted local face - in this case Jamie's."


But, as analysts such as Dr Plangger point out, there is a "dark side" to the technology.


"It's not something we can put back into the box," he says. "The advertising industry needs to wake up to the risks as well as the possibilities of artificial intelligence. It means stepping back, as a society, and thinking about what is the proper or ethical use of this technology."


One of the things Dr Plangger is referring to is a looming "crisis of trust", in which consumers cannot tell what is real from what is fake. This is already being exploited by vested interests online, and ranges from synthetically manipulated pornography to misinformation and political messaging.

