Artificial Intelligence (AI) is making great strides and becoming pervasive in our lives. It continues to create unprecedented outcomes for the global economy by helping people perform their jobs better, increasing business efficiency, helping humanity by taking on dangerous tasks, and more.

In this article I am going to focus on an area that is critical for everyone to understand because it affects us all. I am going to identify the area, the technology, and the implications for the economy, personal privacy, and national and global security, namely

Deepfakes & Weaponization of Deepfakes.

What is the Risk of Deepfakes?

Deepfakes are challenging our perception of the world and our ability to tell fake from real, and the damage is a double-edged sword:

  • Real events are being called out as fakes to cast aspersions, and
  • Fake realities are being created.

The result is an undermining of trust in authorities, the media, and the digital world, with deep and wide repercussions in our physical world.

What are Deepfakes?

“Deepfake” is an AI technique used to fabricate images and videos.

It is manipulated media in which a user takes an existing image or video and replaces a person or object with the likeness of another using artificial neural networks, most often for malicious purposes, but not always.

During the COVID-19 lockdown in August 2020, the streaming site Hulu ran an ad to promote the return of sports to its service, starring NBA player Damian Lillard, WNBA player Skylar Diggins-Smith, and Canadian hockey player Sidney Crosby. The faces of those stars were superimposed onto body doubles using deepfake technology.

TikTok and other apps have a feature called Face Swap that allows users to scan their faces and transfer their image to videos. TikTok's face swap feature requires users to create a detailed multi-angle biometric scan of their faces, which raises serious concerns because TikTok was (in the past) sending data back to servers in China, potentially to be harvested by the Chinese government. This led the US Army and Navy to ban the use of TikTok by service members, treating the app as a national security threat.

A point to note: one third of TikTok users are teens under 16 years of age, creating and posting videos. The security of this content is entirely dependent on the platform's security posture.

Weaponization of Deepfakes

The deepfake phenomenon is growing rapidly online, supported by the growing commodification of tools and services that lower the barrier for non-experts to create deepfakes. These include platforms for hosting deepfakes and open-source code for creating them.

Deepfake Pornography

“Deepfake technology is being weaponized against women by inserting their faces into porn. It is terrifying, embarrassing, demeaning, and silencing. Deepfake sex videos say to individuals that their bodies are not their own and can make it difficult to stay online, get or keep a job, and feel safe.”

Danielle Citron, Professor of Law, Boston University, and author of Hate Crimes in Cyberspace

Noelle Martin describes her devastating experience in detail, starting with her receiving an email informing her that there were deepfakes of her. Looking at the deepfake porn video, she said it looked convincing even to her. You can read her full account here.

Sensity, a visual threat intelligence company whose specialists help defend individuals and organizations against the threats posed by deepfakes, published research on a disturbing key trend: non-consensual deepfake pornography accounted for 96% of the total deepfake videos online.

According to Sensity, the top four websites dedicated to deepfake pornography received more than 134 million views on videos targeting hundreds of female celebrities worldwide. This viewership demonstrates a market for websites creating and hosting deepfake pornography, a trend that will continue to grow unless decisive action is taken.

Technology powering Deepfakes

The Generative Adversarial Network (GAN) is a machine learning technique whose core idea is that, given a large set of data, the GAN can generate brand-new, unique data that is effectively indistinguishable from the original.

GANs are the most popular technique used to generate deepfakes. A GAN pits two neural networks against each other: a generator that fabricates samples and a discriminator that tries to distinguish them from real data; as training alternates between the two, the generator's output becomes increasingly realistic.
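To make the adversarial idea concrete, the toy example below trains a one-dimensional GAN in pure Python. This is a deliberately minimal sketch, not a deepfake pipeline: the "generator" is a single linear function learning to mimic samples from a fixed Gaussian, and the "discriminator" is a logistic classifier; real deepfake systems use deep convolutional networks on images, but the training loop has the same shape.

```python
import math
import random

def sigmoid(x):
    # Clamp to avoid math.exp overflow on extreme inputs.
    x = max(-60.0, min(60.0, x))
    return 1.0 / (1.0 + math.exp(-x))

def train_gan(steps=3000, lr=0.03, seed=0):
    """Toy 1-D GAN: a linear generator g(z) = w*z + b learns to mimic
    samples drawn from a Gaussian with mean 4.0, while a logistic
    discriminator d(x) = sigmoid(u*x + c) tries to tell real from fake."""
    rng = random.Random(seed)
    w, b = 1.0, 0.0   # generator parameters
    u, c = 0.0, 0.0   # discriminator parameters
    for _ in range(steps):
        real = rng.gauss(4.0, 0.5)   # a sample of "real" data
        z = rng.gauss(0.0, 1.0)      # latent noise fed to the generator
        fake = w * z + b
        # Discriminator step: gradient ascent on log d(real) + log(1 - d(fake)).
        d_real = sigmoid(u * real + c)
        d_fake = sigmoid(u * fake + c)
        u += lr * ((1.0 - d_real) * real - d_fake * fake)
        c += lr * ((1.0 - d_real) - d_fake)
        # Generator step: gradient ascent on log d(fake) (non-saturating loss).
        d_fake = sigmoid(u * fake + c)
        grad = (1.0 - d_fake) * u    # derivative of log d(g) w.r.t. g
        w += lr * grad * z
        b += lr * grad
    return w, b

w, b = train_gan()
# Since z has mean 0, b is the generator's mean output; as the two models
# compete, it drifts toward the real data's mean.
```

The key design point is the alternation: each side improves only because the other does, which is why GAN output quality, and with it deepfake realism, keeps rising as training compute grows.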

Detection & Blocking Deepfakes

So, how do we detect deepfakes? Technology is evolving, but there is no established methodology or technology yet. Some approaches we can use currently include:

  • Heuristic practices such as being aware of the timing of certain events and performing reverse internet searches of suspicious images and videos.
  • Analyzing soft biometrics: distinct characteristics that are not fingerprints or iris scans but the ways in which we move and talk. Characterizing these soft biometrics can help flag fabricated videos and imagery of a person.
  • Some research groups, including the Atlantic Council’s Digital Forensics Research Lab and Graphika, have been working with Facebook to identify when manipulated images are being used to lend authenticity to deceptive social media campaigns.
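Reverse image searches of the kind mentioned above generally rely on perceptual hashing to find near-duplicates of an image. The sketch below illustrates the general idea only (it is not the internals of any particular search service) using a simple "average hash": an image is reduced to an 8x8 grayscale grid, each pixel is compared against the grid's mean brightness, and the resulting 64 bits form a compact fingerprint. Two images whose fingerprints differ in only a few bits are likely copies of one another, even after re-encoding or small edits.

```python
def average_hash(pixels):
    """pixels: an 8x8 grid of grayscale values (0-255), i.e. a tiny
    downscaled image. Returns a 64-bit perceptual hash."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        # One bit per pixel: 1 if brighter than the average, else 0.
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming_distance(a, b):
    """Number of differing bits; small values suggest near-duplicates."""
    return bin(a ^ b).count("1")

# A simple gradient image, a lightly edited copy, and its photographic negative.
original = [[row * 8 + col for col in range(8)] for row in range(8)]
edited = [r[:] for r in original]
edited[0][0] = 5            # a small, local edit
negative = [[255 - p for p in r] for r in original]

d_same = hamming_distance(average_hash(original), average_hash(edited))
d_diff = hamming_distance(average_hash(original), average_hash(negative))
# d_same stays small (near-duplicate); d_diff is large (unrelated content).
```

Production systems use far more robust variants, but the principle is the same: compare compact fingerprints rather than raw pixels, so a suspicious image can be matched against earlier copies of the original.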

Google released a collection of deepfake videos to provide researchers a data set for developing synthetic-video detection methodologies. The Wall Street Journal has adopted training for its journalists to aid in identifying deepfakes.

Facebook, Google, and Twitter are all independently conducting pre-emptive research into how to detect and highlight deepfakes to avoid misinterpretation.


Legislation on Deepfakes

In the US, 46 states have some ban on revenge porn, but only Virginia's and California's laws include faked and deepfaked media. California, Virginia, Maryland, and Texas have all produced legislation in the last two years on deepfakes meant to provide victims with avenues for recourse. New York Governor Andrew Cuomo became the latest to sign a deepfake proposal into law in November.

Much of the states’ legislation focuses on pornographic instances of deepfakes, which can cause emotional and psychological harm, violence, harassment, and blackmail.

The National Defense Authorization Act of 2021 directs the Department of Homeland Security to produce assessments on the technology behind deepfakes, their dissemination and the way they cause harm or harassment. The NDAA also directs the Pentagon to conduct assessments on the potential harms caused by deepfakes depicting members of the U.S. military.

In the UK, revenge porn is banned, but the law doesn’t encompass anything that’s been faked.

Beyond that, no other country bans fake non-consensual porn at a national level.

Landmark political case: an attempt to turn real into fake for a political cover-up

In 2019 there was a political scandal in Malaysia, where Mohamed Azmin Ali, Minister of Internal Affairs, was videotaped in a sex tape with a man. Same-sex activity is illegal in Malaysia and can lead to imprisonment. Ali claimed the video was a deepfake created to sabotage his career, even though it could not be proved that the tape was inauthentic.

In conclusion, as a Trusted AI strategist, my focus is on spreading awareness of risks and remediation.

In order for us to realize the gains from AI, we have to pay close attention to how we are building these systems.

It is critical that we identify risks early, as we are creating these systems, to minimize tectonic fallout. We have to build trust in AI and use AI responsibly.

Pamela Gupta is the founder of the group Women in CyberSecurity (WiCyS) AI Trusted Affiliate and AIEthics World Head of Trusted AI. She is a Cybersecurity Strategist for emerging risks and excels at helping clients identify and anticipate risks.