About the DeepDetector

What is the DeepDetector?

The DeepDetector is a deep learning network designed and trained to recognize A.I.-manipulated faces, also called deepfakes. It can be seen as an artificial intelligence designed to spot the forgeries of other artificial intelligences. It is able to do this because the DeepDetector has seen thousands of real and deepfaked images. It has trained on these images and has learned to find the subtle differences between a photo taken with a real camera and a photo synthesized or manipulated by an A.I.

What is a deepfake?

These are two pictures of President Obama, but one is real and the other is made up by a computer. Can you see the difference?

Maybe you can, but it is becoming rather difficult to spot these computer forgeries, and it's only going to become more difficult. That's why we developed the DeepDetector: if you are ever in doubt, you can verify your content on this site.

Can I blindly trust the DeepDetector's result?

To put it simply: no. We do aim to reach that kind of reliability, but as of yet the DeepDetector is still a state-of-the-art prototype. If the DeepDetector states that something is a deepfake, we suggest you scrutinize that media more closely.

What can I look for if it might be a deepfake?

The DeepDetector will return a heatmap of the image, like so:

The warmer regions in this map show the regions the DeepDetector used to make its decision. These are the so-called "most fishy" regions of the picture.
In this example the left picture is deemed probably real. The DeepDetector gives it a 6.17% chance of being a deepfake. When we look at the heatmap, which shows us the most suspicious places in the picture, we see that there is almost nothing suspicious here. The white of the eye seems to trigger some reaction from the DeepDetector, but it's extremely weak. So the picture is probably safe.
Now let's take a look at the image on the right. Here the DeepDetector reports a 53.97% chance of it being a deepfake. Those aren't great odds, so let's take a closer look. The eyes, eyebrows and nostrils are deemed suspicious. These are typical areas to light up in a deepfake.

If we zoom in on this location, there are some inconsistencies you might notice. Can you spot them?
The image is compressed, which doesn't make our search easy, but the eyebrows that lit up in the heatmap do look a bit strange. They seem almost painted on: there are no individual hairs in them, just a brownish blur. This is not decisive evidence of a deepfake, since it could also be a result of heavy compression, so let's look further.
Secondly, the two eyes have slightly different colours. This is a typical deepfake mistake and makes the image rather suspicious. It's not impossible for people to have two different eye colours, but the earlier picture of Obama suggests that he is not one of those people. It might be safer not to blindly trust the source of this image.

Detection method

The DeepDetector is a deep neural network that performs binary classification on an image. The network consists of a number of layers through which the image is propagated, ending in a binary classification: real or fake. The weights of the network are learned during training.
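The idea of propagating an image through layers to a binary output can be sketched in a few lines. This is a toy illustration with random weights, not DeepDetector's actual architecture; the layer sizes and names are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Non-linearity applied between layers
    return np.maximum(0.0, x)

def sigmoid(x):
    # Squashes the final score into (0, 1), read as P(fake)
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical layer weights; in the real network these are learned
# during training on thousands of real and deepfaked images.
W1 = rng.normal(scale=0.1, size=(64 * 64, 128))
W2 = rng.normal(scale=0.1, size=128)

def classify(image):
    """Propagate a 64x64 grayscale image through the layers,
    ending with a single probability that the image is fake."""
    h = relu(image.flatten() @ W1)   # hidden layer
    return float(sigmoid(h @ W2))    # binary classification score

image = rng.random((64, 64))
p_fake = classify(image)
print(f"P(fake) = {p_fake:.4f}")  # a value between 0 and 1
```

With random weights the score is meaningless; training adjusts W1 and W2 so that the score separates real from fake images.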

Highlights and explainability

All deepfake generation techniques leave "artefacts" behind in images or videos. Artefacts are what enable us to distinguish real from deepfaked material; they include unrealistic facial features and incoherent transitions between frames. Current deepfake generation techniques are so good that these artefacts are very hard to spot with the unaided eye. The DeepDetector leverages artificial neural networks to make the distinction between real and deepfaked footage, and it highlights the region of the image on which the network bases its classification. This serves as a guide to the user on where to look for artefacts in an image.
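The page does not say which explainability method DeepDetector uses internally, but one simple way to produce such a heatmap is occlusion sensitivity: cover each region of the image in turn and see how much the classification score changes. A minimal sketch, using a toy stand-in for the model's score function:

```python
import numpy as np

def occlusion_heatmap(image, score_fn, patch=8):
    """Occlusion sensitivity: regions whose masking changes the
    score the most are the regions the model relies on, i.e. the
    "warm" areas of the heatmap."""
    base = score_fn(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i+patch, j:j+patch] = image.mean()  # gray patch
            heat[i // patch, j // patch] = abs(base - score_fn(occluded))
    return heat

# Toy stand-in for a detector: this "model" keys entirely on one
# 8x8 "eye" region, so only occluding that region moves the score.
rng = np.random.default_rng(1)
image = rng.random((32, 32))

def toy_score(img):
    return float(img[8:16, 8:16].mean())

heat = occlusion_heatmap(image, toy_score)
hottest = tuple(int(k) for k in np.unravel_index(np.argmax(heat), heat.shape))
print(hottest)  # the block covering the "eye" region lights up
```

A real detector would use its own learned score in place of `toy_score`, yielding a heatmap like the ones shown above.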

Performance

As is customary, the performance of the DeepDetector is measured on a held-out test set after training is completed. The results are provided below:

Accuracy 92.93% Percentage of correct predictions (both real and fake)
Precision 84.37% Percentage of images classified as fake that were actually fake
Recall 92.57% Percentage of actual fakes that were correctly predicted
F1-score 88.28% Harmonic mean of precision and recall (max 100%)
AUC score 0.9842 Area under the receiver operating characteristic curve: the probability that an actual fake receives a higher score than an actual real
False Positive Rate (FPR) 0.0693 Fall-out: actual real images that were classified as fake, relative to all actual real images
False Negative Rate (FNR) 0.0743 Miss rate: actual fake images that were classified as real, relative to all actual fake images
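All of these metrics follow from the four cells of a confusion matrix. The counts below are illustrative (the actual test-set sizes are not given on this page), but the formulas are the standard ones behind the table:

```python
# Illustrative confusion-matrix counts (hypothetical, not the actual
# DeepDetector test set), chosen to show how the metrics relate.
tp, fn = 92, 8     # actual fakes: classified fake / classified real
tn, fp = 183, 17   # actual reals: classified real / classified fake

accuracy  = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)   # fake calls that were actually fake
recall    = tp / (tp + fn)   # actual fakes that were caught
f1        = 2 * precision * recall / (precision + recall)
fpr       = fp / (fp + tn)   # fall-out: reals flagged as fake
fnr       = fn / (fn + tp)   # miss rate: fakes passed as real

print(f"accuracy={accuracy:.4f} precision={precision:.4f} "
      f"recall={recall:.4f} f1={f1:.4f} fpr={fpr:.4f} fnr={fnr:.4f}")
```

Note that FNR is simply 1 minus recall, which is why the table's recall of 92.57% and FNR of 0.0743 agree.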

Click here to download a more scientific version of this website as PDF.
