How to Combat the Complexities of Deepfakes
It started off with pornography.
Seems like that’s the natural starting place for a lot of disruptive tech.
Movie cameras, VHS, Blu-ray/HD DVD, VR, the internet…just a few examples. But the more alarming one is the deployment of ‘deepfakes’.
Now if you don’t know what a ‘deepfake’ is, you’re about to find out.
It’s a video in which a person has been digitally altered to look like someone else. In the pornography world, for example, deepfakes kicked off with celebrities being ‘deepfaked’ into pornographic scenes.
Infamously, Daisy Ridley (of Star Wars fame) and Taylor Swift were among the victims of the early porn deepfakes.
These only really started gaining attention in 2018. And that’s mainly because the applications and the ‘deep learning’ technology behind the fakes were good enough to make them scarily real.
Political deepfakes are popping up
The early iterations of deepfakes were rooted in pornography. But subsequent uses have expanded somewhat. There’s the comical: Steve Buscemi’s face superimposed on Jennifer Lawrence in a Golden Globes interview, and Bill Hader morphing into Al Pacino and Arnold Schwarzenegger mid-impression.
And increasingly, political deepfakes are popping up. A charity recently published a deepfake of President Trump declaring ‘AIDS is over’.
Then there’s Future Advocacy, an organisation that promotes ‘responsible AI’. Yet they too published deepfakes recently, aiming to show just how deceptive these videos can be. They released two very convincing deepfakes of Boris Johnson and Jeremy Corbyn endorsing each other in the UK general election.
If a non-tech-savvy person saw them without any explanation, they could believe one genuinely was endorsing the other. The stunt missed the mark in my view. But it’s an example of the dangerous nature of what deepfakes can achieve.
What’s to stop the proliferation of multiple deepfakes during an election period? Even if they’re found to be deepfakes, often the damage is done from the outset.
But here’s one to really think about…
Let’s say you’re innocently minding your own business. You’re at home after a long day at the office. The commute home was boring, slow, and full of traffic. But you made it in the end.
You’re sitting there watching a bit of Gogglebox Australia when there’s a knock at the door. It’s after dinner so you’re annoyed someone’s at the door.
You open it, grumpily.
It’s the police. They ask your name to confirm your identity. You tell them your name. They then proceed to tell you that you’re under arrest for endangering life with the use of a motor vehicle.
You have no idea what they’re talking about. You tell them you’ve done nothing wrong. They don’t believe you. They haul you off to the police station.
At the station they show you footage of your car recklessly swerving in and out of traffic on your commute home. You’re on your phone the whole time. At one point your swerving causes an accident behind you.
It looks like your car. It looks like you. But there’s no way you did that. Yet how can the video evidence be disputed? It is, after all, evidence.
Sound like something out of a dystopian sci-fi novel? Maybe. But the reality is that manufactured deepfakes aren’t hard to create anymore. Anyone with the right motive can make a video of pretty much anyone they want doing whatever they want them to do.
And at the heart of it all is artificial intelligence (AI). That’s what’s used to make the fabricated footage move and behave the way a real person would. This ‘smart’ technology is, on one hand, a huge opportunity for global development. On the other hand, used in the wrong way, it’s a weapon against society.
Using AI to catch dangerous drivers
In NSW, a mass roll-out of AI cameras to nab drivers using mobile phones is underway.
‘The New South Wales government has started using the first cameras that can automatically detect when drivers are using their phones. The system uses AI to review photos for telltale signs of phone use, with humans reviewing the flagged images to prevent any false positives.’
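The flow the quote describes is a two-stage pipeline: a model scores every photo and flags likely phone use, then a human confirms each flag before any penalty is issued. Here’s a minimal sketch of that idea in Python — the function names, scores, and threshold are all illustrative, not the actual NSW system:

```python
# Hypothetical sketch of the two-stage review pipeline described above.
# Stage 1: an AI model scores each photo; high scores are flagged.
# Stage 2: a human reviews every flag to weed out false positives.

def ai_flag(photos, score_fn, threshold=0.8):
    """Flag any photo whose model score meets the threshold."""
    return [p for p in photos if score_fn(p) >= threshold]

def issue_penalties(flagged, human_confirms):
    """Only flags a human reviewer confirms result in a penalty."""
    return [p for p in flagged if human_confirms(p)]

# Toy data: (plate, model_score, actually_on_phone)
photos = [
    ("ABC123", 0.95, True),   # genuine offence
    ("XYZ789", 0.90, False),  # false positive, e.g. driver holding a wallet
    ("DEF456", 0.30, False),  # correctly ignored by the model
]

flagged = ai_flag(photos, score_fn=lambda p: p[1])
penalised = issue_penalties(flagged, human_confirms=lambda p: p[2])

print([p[0] for p in flagged])    # both high-scoring photos are flagged
print([p[0] for p in penalised])  # only the genuine offence survives review
```

The point of the second stage is exactly the false-positive problem the article raises: the model alone would penalise the driver holding a wallet, but a human check catches it — unless, of course, the image itself has been faked well enough to fool the human too.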
This in theory is reasonable. People shouldn’t be using phones at the wheel of a car. And if you’re legitimately caught doing it, you should be penalised.
Harsh penalties result in drivers thinking about the cost of putting people in danger. It works. End of story.
Now back to the point. Legitimate fines and penalties are all good with me. But what if the video evidence shows something that never really happened at all?
What happens in the event that deepfakes are seeded into the NSW police camera database? The AI might flag the fabricated offence. And the human reviewer? Well, they’d be none the wiser either.
What defence do you have?
‘It’s a deepfake your honour!’
This is the worrying world we’re heading towards with the potential misuse of AI. When we rely too heavily on video evidence like this, we’re at risk of believing everything we see.
But if everything we see isn’t real, then where does that leave us?
I think as we move towards this high-tech future, we’ll end up on the right side of the equation. But it won’t just happen. And I think one of the underrated areas of this future is cyber security and cyber defence.
Protecting the digital future
There’s only one way to combat the complexities of deepfakes. And that’s through even higher-end technology to spot them and eliminate them. I believe there will be an exponential need for better cyber security over the next decade.
It will be needed to stop misuse of tech like deepfakes, AI, quantum computing, and autonomous tech. The upside is there are plenty of small and microcap ASX-listed companies involved in some form with the high-tech defence of our future.
Tesserent Ltd [ASX:TNT], Senetas Corporation Ltd [ASX:SEN], and Adveritas Ltd [ASX:AV1] are three examples of different takes on cyber security plays on the ASX. Tesserent is a pure play in IT security and managed systems. Senetas is a play on data protection across connectivity networks. And Adveritas takes a different tack, using its TrafficGuard product to prevent ad fraud and fake (often bot-generated) traffic from distorting data analysis.
These are just three examples of companies that work to protect the digital future. And as we push towards higher-end tech like AI and quantum computing, we expect these and more to work their way up the ranks, and potentially grow in value as well.
Editor, Money Morning
PS: Download this free report and learn the most exciting AI and automation stocks on the ASX. Click here to download.