Learn about deepfakes

Deepfakes are a type of disinformation: manipulated information that aims to influence your opinion. To protect yourself, you can use detection tools and keep thinking critically. Learn here what a deepfake is and how to spot one.


Rather than one single technique, there are various types of deepfakes. We can regard deepfakes as a collective name for audiovisual data altered by smart algorithms rather than by manual labour.

We distinguish three types of deepfakes: FaceSwaps, StyleGANs and Deep Puppetry.

Note that deepfakes can be made of any kind of visual data; there are, for example, StyleGANs that generate horses. Here, however, we focus on human faces.

FaceSwaps

The face from the donor file is copied and pasted onto the source file, where it mimics the emotions and motions of the source file.

StyleGANs

Faces generated by a neural network that has learned from a large dataset how to create new, unique faces.

Deep Puppetry

The person in the source file is made to mimic the emotions and motions of the person in the donor file.

How do neural networks make deepfakes?

1 Training

Before a neural network can make deepfakes, it needs to learn about the object it has to manipulate. Therefore, the neural network needs a dataset of this object. A bigger dataset generally means higher quality.

At the moment, pre-trained neural networks can be found and bought online. With these products, even less tech-savvy people can create deepfakes.
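
To make this training step concrete, here is a minimal, hypothetical Python sketch (using PyTorch): a tiny autoencoder that learns to reconstruct faces from a small stand-in dataset. Real deepfake software uses far larger networks and datasets, so treat this only as an illustration of the idea.

    import torch
    from torch import nn, optim

    # Stand-in for a real face dataset; a bigger, real dataset would
    # generally give the network higher-quality results.
    faces = torch.rand(32, 3, 64, 64)

    # A tiny autoencoder that learns to reproduce the faces it is shown.
    encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64 * 3, 256), nn.ReLU())
    decoder = nn.Sequential(nn.Linear(256, 64 * 64 * 3), nn.Sigmoid())
    optimiser = optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

    for epoch in range(10):
        optimiser.zero_grad()
        reconstruction = decoder(encoder(faces)).view_as(faces)
        loss = nn.functional.mse_loss(reconstruction, faces)  # how far off is the network?
        loss.backward()
        optimiser.step()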

2 Alteration

After training, the neural network 'knows' how to alter certain objects. When it is provided with donor footage, the motions in that footage can be copied and mimicked by the source footage.

The software alters the image or video by changing pixels within the source footage. In this altering and adding of pixels, artefacts can occur: traces that we humans can sometimes detect.
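
As a toy illustration of where artefacts come from, the hypothetical Python sketch below (made-up array values, not any real deepfake tool) edits a block of pixels in a frame; exactly those altered pixels are the places where traces can be left behind.

    import numpy as np

    # Stand-in 64x64 RGB frame with values between 0 and 1.
    source_frame = np.random.rand(64, 64, 3)

    # Pretend the deepfake software edits the face region of the frame.
    altered_frame = source_frame.copy()
    altered_frame[20:44, 16:48] += np.random.normal(0, 0.05, size=(24, 32, 3))

    # The altered pixels are where artefacts can appear.
    difference = np.abs(altered_frame - source_frame).mean(axis=2)
    altered_pixels = difference > 0
    print(f"{altered_pixels.mean():.0%} of the frame was touched by the edit")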

3 Disguising

Sometimes the deepfake neural network has a feature to fool detection software by disguising the deepfake artefacts within the image.

These techniques range from quite simple to technically advanced: the full image can be down-sampled, but so-called 'black-box' and 'white-box' attacks can also be used. All of these techniques make it harder for detection software to spot the deepfakes.
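
The down-sampling trick, for instance, can be as simple as the hypothetical Python snippet below (using the Pillow library and a made-up file name): shrinking the frame and scaling it back up smooths away fine pixel-level artefacts, at the cost of some sharpness.

    from PIL import Image

    # Hypothetical file name; any deepfake frame would do.
    frame = Image.open("deepfake_frame.png")

    # Down-sample to a quarter of the original size, then scale back up.
    small = frame.resize((frame.width // 4, frame.height // 4))
    disguised = small.resize(frame.size)

    disguised.save("deepfake_frame_disguised.png")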

How does DuckDuckGoose detect deepfakes?

1 Insightfulness

The DeepDetector pinpoints the pixels in the image that the software used to determine whether the analysed footage is a deepfake or authentic. This analysis is called the 'activation map'. With it, users of the software can understand the conclusion and evaluate the software's decision themselves, maintaining their autonomy.
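
An activation map of this kind is conceptually similar to the widely used Grad-CAM technique. Below is a minimal Grad-CAM-style sketch in Python on a generic image classifier; the network and input are placeholders for illustration, not DeepDetector's actual implementation.

    import torch
    import torch.nn.functional as F
    from torchvision import models

    # A generic CNN stands in for the real detector; class 1 = 'deepfake'.
    model = models.resnet18(weights=None)
    model.fc = torch.nn.Linear(model.fc.in_features, 2)
    model.eval()

    activations, gradients = [], []
    model.layer4.register_forward_hook(lambda m, i, o: activations.append(o))
    model.layer4.register_full_backward_hook(lambda m, gi, go: gradients.append(go[0]))

    frame = torch.randn(1, 3, 224, 224)   # placeholder for the analysed footage
    model(frame)[0, 1].backward()         # gradient of the 'deepfake' score

    # Weight each feature map by its average gradient and keep positive evidence.
    weights = gradients[0].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations[0]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=(224, 224), mode="bilinear", align_corners=False)
    # 'cam' highlights the pixels that contributed most to the decision.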

2 Classifying

The DeepDetector is a neural network. It outputs probabilities that indicate whether the input is deemed a deepfake or not. A higher percentage indicates that the software is more confident in its decision.
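
In practice, such a classifier typically produces raw scores that a softmax function turns into percentages, as in this small Python illustration with made-up numbers:

    import torch

    # Made-up raw scores (logits) for [authentic, deepfake].
    logits = torch.tensor([[-1.2, 2.3]])
    probabilities = torch.softmax(logits, dim=1)

    print(f"deepfake probability: {probabilities[0, 1]:.0%}")  # prints 'deepfake probability: 97%'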

3 Training

The DeepDetector is trained on a dataset of deepfake and authentic images. During training it learns to distinguish authentic images and videos from deepfaked ones. As more deepfake techniques are invented, we will expand the datasets used for training the DeepDetector. This ensures that the DeepDetector always stays up to date with the latest advancements in deepfake generation.
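
A heavily simplified sketch of such a training loop, assuming a generic PyTorch classifier and a tiny made-up batch of labelled images (not DeepDetector's actual code), looks like this:

    import torch
    from torch import nn, optim

    # Tiny CNN classifier: class 0 = authentic, class 1 = deepfake.
    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2),
    )
    loss_fn = nn.CrossEntropyLoss()
    optimiser = optim.Adam(model.parameters(), lr=1e-3)

    # Made-up batch standing in for a real dataset of deepfake and authentic images.
    images = torch.rand(8, 3, 64, 64)
    labels = torch.randint(0, 2, (8,))

    for epoch in range(5):
        optimiser.zero_grad()
        loss = loss_fn(model(images), labels)  # penalise wrong deepfake/authentic calls
        loss.backward()
        optimiser.step()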

Learn more about the DeepDetector here!

How to detect deepfakes

What can you do to protect yourself from becoming a victim of deepfakes or being influenced by them? Deepfakes are getting better, and high-quality deepfakes are becoming easier and cheaper to make. It is consequently becoming harder to spot deepfakes with the naked eye. At the moment, however, you can still look for several typical graphic inconsistencies (called artefacts) in deepfakes. The typical artefacts to look for are listed below.

  1. Colour transition
    In deep puppetry and faceswap deepfakes, the face is manipulated. Sometimes a hard colour transition can be seen at the edges of the area that the deepfake software manipulated.
  2. Unnatural looking eyes
    Eyes are rather hard to deepfake due to their complexity. Check whether the reflections in the eyes have the same angle. Does the person in the video blink? Are the irises of both eyes equally large?
  3. Accessories
    Items like glasses might look convincing at first, but, when looking a bit closer, might actually contain some artefacts.
  4. Blurry and nonsensical context
    StyleGAN deepfakes are good at making faces, but bad at creating context that makes sense. Check the background and clothing, for example, to see if these make any sense.

What else can you do when you are in doubt?

Think critically

Does the footage spark an emotional response in you? Disinformation aims to influence your opinion and frequently does so by sparking negative emotions about specific people. If you react emotionally to the footage, stay calm and check the information more closely. By thinking critically, you can question the footage's authenticity.

Use free online tools

There are several fact-checking websites, like Snopes and Bellingcat, that you can visit for free. For faceswap and deep puppetry deepfakes, a source file is needed to create the deepfake; if in doubt, you can do a reverse image or video search to look for that original footage. And, of course, you can always use our online tool, the DeepDetector!

Do you want to learn more about deepfakes? Check out our blogs!
