In the world today, Artificial Intelligence (AI) has become a big buzzword, much of it driven by the advent of ChatGPT in late 2022. Since then, similar AI tools have come out, most notably Microsoft Copilot, which integrates with many of the apps in a Microsoft 365 (M365) subscription. Although AI can be used for good and productive purposes, it also has a “Dark Side”: the cyberattacker can use it for nefarious purposes, such as creating malware, and an even newer variant known as the “Deepfake.”

What Is a Deepfake?

A Deepfake can be technically defined as follows:

“Deepfake AI is a type of artificial intelligence used to create convincing images, audio and video hoaxes. The term describes both the technology and the resulting bogus content, and is a portmanteau of deep learning and fake.” (Source 1)

A great example of this is the political ads you see at election time. Using AI, an impersonation of a politician legitimately running for office can be easily created. And given the viral nature of digital media, such a video or image can be sent to millions of people in a matter of seconds. But the damage does not stop there. Deepfakes often take the form of a video, primarily uploaded to YouTube, where the impersonation asks you to donate money to support the candidate’s election efforts. If you do send money, it will most likely end up in a phony offshore account, and once it is sent, it is very difficult to recover. Although federal law enforcement agencies like the Secret Service and the FBI have gotten better at locating these funds, the process can take a very long time.

The AI Algorithms Involved

AI tends to be associated with what is known as the “Black Box Phenomenon.” This simply means that if you submit a query to ChatGPT, you will of course get an answer, but you will never know how it arrived at it. In other words, you do not know which resources the tool used to produce your answer. The same is true of Deepfakes: because one does not know how a particular video was created, it is almost impossible to tell a real video of a politician from a fake one.

To make Deepfakes look so real, an AI algorithm (which uses a combination of Machine Learning and Neural Networks) makes use of a “Generator” and a “Discriminator.” The former creates the actual fake content, and the latter then determines whether it is legitimate. This is an iterative process that repeats many times, often within just a few minutes. Over time, the Discriminator tells the Generator which of the content it has created is still detectably illegitimate, and the Generator refines its output accordingly. Together, these two components form what is known as a “Generative Adversarial Network,” more commonly known as a GAN.

After enough iterations between the Generator and the Discriminator have taken place, the GAN learns on its own how many authentic images it needs in order to create a convincing Deepfake. This is illustrated in the diagram below:
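To make the Generator/Discriminator game concrete, here is a deliberately simplified toy sketch in Python. It is not a real deepfake model: real GANs train two neural networks against each other, while this example shrinks “content” down to a single number, with a hypothetical Discriminator that scores how close a sample is to the “authentic” data and a toy Generator that nudges its output toward whatever still gets rejected. The names (`discriminator_score`, `ToyGenerator`) and the target value are illustrative assumptions, not part of any real library.

```python
import random

# Toy illustration of the iterative GAN loop (NOT a production model).
# "Authentic" content is just numbers near REAL_MEAN; the Generator starts
# far away and, round after round, shifts toward samples the Discriminator
# scores as real.

REAL_MEAN = 4.0  # stands in for the distribution of authentic images

def discriminator_score(sample: float) -> float:
    """Realness score in (0, 1]: 1.0 means indistinguishable from real."""
    return 1.0 / (1.0 + abs(sample - REAL_MEAN))

class ToyGenerator:
    def __init__(self) -> None:
        self.mean = 0.0  # starts out producing obvious fakes

    def sample(self) -> float:
        return random.gauss(self.mean, 0.05)

    def update(self, lr: float = 0.1) -> None:
        # Nudge output toward the region the Discriminator accepts as real.
        self.mean += lr * (REAL_MEAN - self.mean)

random.seed(0)
gen = ToyGenerator()
for _ in range(200):                      # the iterative adversarial game
    fake = gen.sample()
    if discriminator_score(fake) < 0.99:  # Discriminator still rejects it
        gen.update()                      # Generator improves and retries

print(f"final generator mean: {gen.mean:.3f}")
print(f"realness of a fake:   {discriminator_score(gen.sample()):.3f}")
```

After a few dozen rounds, the Generator’s output converges on the authentic data and the Discriminator can no longer tell the difference, which is exactly the equilibrium a trained GAN exploits to produce convincing fakes.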

It is important to note at this point that a Deepfake can be more than just a fake video on YouTube. It can take the form of audio, an image, or even an SMS text message. A good example of an audio-based Deepfake was used in the recent primary elections in New Hampshire: “One distressing headline out of New Hampshire as voters prepared to cast in-person primary ballots was that a fake version of President Joe Biden’s voice had been used in automatically generated robocalls to discourage Democrats from taking part in the primary.” (Source 2)

Up Next: Fighting back against Deepfakes

The next article in this series will cover two critical ways to fight back against Deepfakes: federal and state legislation that gives victims the right to sue culprits who create damaging Deepfakes, and tips on how you can spot a Deepfake.


Source 1:

Source 2:


Keesing Technologies

Keesing Platform forms part of Keesing Technologies
The global market leader in banknote and ID document verification


Ravi Das is a Cybersecurity Consultant and Business Development Specialist. He also does cybersecurity consulting through his private practice, RaviDas Tech, Inc., and holds the Certified in Cybersecurity (CC) certification from ISC2.
