Real or Deepfake? Identifying synthetic media in the age of AI

December 6, 2018 Cara Schwartzkopf

Is what you’re seeing real?

In this era of “fake news,” fighting misinformation is paramount. One of the biggest current threats is deepfakes: synthetic media created with artificial intelligence. These videos can be almost indistinguishable from genuine footage and could easily spread false information. For now, the best defense against deepfakes is learning how to spot them and prevent their dissemination.

How are deepfakes created?

Deepfakes are typically made with a machine learning technique called a “generative adversarial network” (GAN), in which two neural networks are trained against each other: a generator produces synthetic images, while a discriminator tries to tell them apart from real photos of the target. Fed many images of a person from different angles, the system learns what that person looks like and can then produce video of them doing or saying something they’ve never done or said.
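To make the adversarial setup concrete, here is a minimal one-dimensional sketch (illustrative only; real deepfake systems use deep convolutional networks on images). A tiny linear “generator” and a logistic “discriminator” each take one gradient step: the discriminator gets better at separating real data from fakes, and the generator gets better at fooling the updated discriminator.

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data the generator wants to imitate: samples from N(4, 0.5).
x_real = rng.normal(4.0, 0.5, 256)

# Generator G(z) = a*z + b turns random noise z into fake samples.
a, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + c) scores how "real" a sample looks.
w, c = 0.1, 0.0
lr = 0.05

z = rng.normal(0.0, 1.0, 256)
x_fake = a * z + b

# --- Discriminator step: raise D(real), lower D(fake) ---
d_real = sigmoid(w * x_real + c)
d_fake = sigmoid(w * x_fake + c)
# Gradient ascent on E[log D(real)] + E[log(1 - D(fake))]
w2 = w + lr * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
c2 = c + lr * (np.mean(1 - d_real) - np.mean(d_fake))

# The updated discriminator separates real from fake more sharply.
gap_before = np.mean(d_real) - np.mean(d_fake)
gap_after = (np.mean(sigmoid(w2 * x_real + c2))
             - np.mean(sigmoid(w2 * x_fake + c2)))

# --- Generator step: adjust (a, b) so D rates the fakes as more real ---
d_fake2 = sigmoid(w2 * x_fake + c2)
# Gradient ascent on E[log D(G(z))] (the "non-saturating" generator loss)
a2 = a + lr * np.mean((1 - d_fake2) * w2 * z)
b2 = b + lr * np.mean((1 - d_fake2) * w2)

fooled_before = np.mean(d_fake2)
fooled_after = np.mean(sigmoid(w2 * (a2 * z + b2) + c2))

print(gap_after > gap_before, fooled_after > fooled_before)  # → True True
```

Repeating these two steps in alternation is the whole GAN training loop: each network improves against the other until the fakes become hard to distinguish from the real data.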

What threats do deepfakes pose?

The speed of the internet means misinformation can spread like wildfire. Deepfake videos of political and public figures threaten careers and credibility, mislead supporters, and can even corrode international relations. Imagine, for example, a doctored video of the President of the United States confirming alien contact at Area 51. Deepfakes have also fostered a deep-seated distrust of media that extends to otherwise credible sources and threatens the democratization of information.

How can you spot a deepfake?

There are a few tricks to spotting today’s deepfakes. One telltale flaw is a lack of blinking. Because photos of a person with their eyes closed are rarely shared publicly, the training data contains few closed-eye examples, so generators often fail to reproduce the natural human behavior of blinking. However, since deepfake techniques are constantly evolving alongside the methods used to combat them, this isn’t a foolproof tell.
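As a sketch of how the blinking cue can be checked automatically, the snippet below uses the common eye-aspect-ratio (EAR) heuristic: the ratio of an eye’s vertical to horizontal landmark distances collapses when the eye closes, so a video with no EAR dips contains no blinks. The six-point landmark layout and the per-frame EAR values here are synthetic stand-ins; a real pipeline would extract landmarks from video frames with a face-tracking library.

```python
import numpy as np

def eye_aspect_ratio(pts):
    """EAR from six eye landmarks p1..p4 across, p2/p3 on the upper lid,
    p6/p5 on the lower lid. Drops sharply when the eye closes."""
    p1, p2, p3, p4, p5, p6 = pts
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return vertical / (2.0 * horizontal)

def count_blinks(ear_series, threshold=0.2):
    """Count closed-eye episodes: runs of frames where EAR < threshold."""
    blinks, closed = 0, False
    for ear in ear_series:
        if ear < threshold and not closed:
            blinks += 1
            closed = True
        elif ear >= threshold:
            closed = False
    return blinks

# Hypothetical landmark sets: an open eye vs. a nearly shut one.
open_eye = np.array([[0, 0], [2, 1], [4, 1], [6, 0], [4, -1], [2, -1]], float)
closed_eye = np.array([[0, 0], [2, .1], [4, .1], [6, 0], [4, -.1], [2, -.1]], float)

# Synthetic per-frame EAR traces (not from real footage).
human = [0.3, 0.3, 0.1, 0.1, 0.3, 0.3, 0.3, 0.1, 0.3]  # two blinks
suspect = [0.3] * 9                                     # never blinks

print(count_blinks(human), count_blinks(suspect))  # → 2 0
```

A suspiciously low blink count over a long clip is only a signal, not proof: lighting, frame rate, and landmark-tracking errors can all distort the EAR trace, and newer deepfakes increasingly do blink.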

A few other tricks that can be used are:

  • Examining the source and its credibility
  • Checking the metadata of the video with a tool such as InVID
  • Using blockchain-powered tools to authenticate it
  • Using reverse image search engines (like Google Images), since many deepfakes are based on footage already available online


Going forward

Though deepfakes pose a significant threat in modern society, researchers are working to stay one step ahead. Artificial intelligence may enable the development of deepfakes, but it is also the tool that has the power to spot and discredit them. In the meantime, it’s important to stay educated on how to identify credible media, and counteract “fake news.”
