
Digging Deep into the Deepfake Appeal

The term ‘deepfake’ is a blend of ‘deep learning’ and ‘fake’. Deep learning, a branch of machine learning, relies on artificial neural networks to process raw data. A deepfake is AI-driven technology used to create or alter video so that it depicts situations that never happened. The term emerged in 2017, when a Reddit user going by ‘deepfakes’ began applying deep learning to swap celebrity faces onto performers in pornographic videos. It now covers face swaps (which can also be done without AI), facial re-enactment or puppetry, and lip-syncing (pairing footage of someone’s face with an audio track).

The bottom line: deepfake videos and images look real but are fake.


Deepfake in Action

What is intimidating about deepfakes? The artificial intelligence (AI) component makes it quite easy to manipulate audio, video, or images, with minimal technical skill required. AI automates the editing process and makes it that much easier to deceive other people, whether or not that was the intention.

The threat posed by deepfakes is two-fold: first, it is easy to create a deepfake for any purpose; second, there is no truly reliable way to verify authenticity. By the time a controversial image or video is exposed as a deepfake, enough time has usually passed for the damage to be irreversible. Who is in the line of fire? Victims can be private individuals, celebrities, politicians, and even entire governments.

In the Grey Area

As with any new technology, it was all fun and games with deepfakes until it suddenly wasn’t. A worrying development for businesses is reports of hackers training AI to imitate high-level executives’ voices. Deepfake audio attacks have already resulted in fake ‘CEOs’ phoning finance heads and directing them to make payments. In the three incidents reported by Symantec, a leading cybersecurity company, each of the private companies involved lost millions of pounds. Businesses are still not prepared to deal with such sophisticated attacks.

According to the Electronic Frontier Foundation’s David Greene, laws already in place can protect deepfake victims. In a recent blog post, Greene noted that where victims are harassed or extorted, existing laws against those acts give them legal recourse. He also pointed to a legal concept called ‘false light’, which covers the misleading use of non-manipulated photos as well as the distortion, alteration, or manipulation of images. Copyright law offers another safeguard, especially when a deepfake video or image is built from copyrighted material. The latest concern? That deepfake videos and images will be used to influence next year’s U.S. presidential election.
