Deep Fakes: Easy As Pie Creation With Extensive Damage

The range of possible threats and attacks on the World Wide Web is growing significantly as the digital age advances. In addition to malware, ransomware, phishing, distributed denial of service (DDoS) attacks, and botnets, deepfakes will soon be part of cybercriminals’ standard repertoire. However, as with any attack scenario, there are ways and means to protect yourself from deepfakes and recognize them early on.

Deepfake Scenarios: Previously Known Only From Hollywood, Now A Reality

Until now, deepfake scenarios were known only from Hollywood: popular blockbuster films full of action, conspiracy theories, and, among other things, the involvement of secret services. Deepfakes are audio, video, or image material manipulated with deepfake software, which tech-savvy threat actors distribute generously on the Internet to pursue illegal ends.

What Is Behind The Deep Fakes?

The term “deepfake” is a blend of “deep learning” and “fake.” It refers to image and sound recordings that appear entirely authentic but were produced or modified with artificial intelligence, specifically deep learning. Such recordings are mainly used to deceive or discredit. The quality of deepfakes has grown with technological progress, and with the right technical equipment (in particular high-end graphics processors), deepfake video recordings can hardly be distinguished from genuine video content at first glance.

Creating Deepfakes With Autoencoders

Creating so-called deepfake videos requires the necessary know-how and time as well as the technical prerequisites in hardware and software. Originally, autoencoders were primarily used to create deepfakes. An autoencoder is an artificial neural network that learns to extract the essential features from a data set efficiently, thereby performing a dimensionality reduction.

The autoencoder uses a two-stage process to create a deepfake. In the first step, the face and its distinctive features are extracted from the original or source image by the artificial network and encoded into feature vectors, which are built up into a model layer by layer. In the second step, artificial intelligence (AI) takes over the decoding of the vectors, reconstructing a face that is aligned with the target material so that it can replace the original image material.
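The two steps above can be sketched as a minimal encode/decode pipeline. The snippet below is purely illustrative: it uses untrained random weights and a flattened toy “image,” not real deepfake software, only to show how an autoencoder compresses a face into a small feature vector and then reconstructs an image from it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "face image": a flattened 8x8 grayscale patch (64 pixels).
INPUT_DIM, LATENT_DIM = 64, 8

# Randomly initialized weights; a real autoencoder would train these
# so that decode(encode(x)) reproduces x as closely as possible.
W_enc = rng.normal(scale=0.1, size=(INPUT_DIM, LATENT_DIM))
W_dec = rng.normal(scale=0.1, size=(LATENT_DIM, INPUT_DIM))

def encode(image):
    """Step 1: compress the image into a low-dimensional feature vector."""
    return np.tanh(image @ W_enc)

def decode(features):
    """Step 2: reconstruct an image from the feature vector."""
    return features @ W_dec

face = rng.random(INPUT_DIM)       # stand-in for a real face crop
features = encode(face)            # 64 pixels -> 8 features
reconstruction = decode(features)  # 8 features -> 64 pixels

print(features.shape, reconstruction.shape)  # (8,) (64,)
```

In an actual face-swap pipeline, one shared encoder is typically trained together with two decoders, one per person; the swap then consists of encoding person A’s face and decoding it with person B’s decoder.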

When manipulating with an autoencoder, creating a deepfake requires a large number of images or a large amount of video material showing the face from several angles and under the most diverse lighting conditions. If sufficient graphics-processor performance is not available, the process can take several weeks.

Creating Deepfakes With Generative Adversarial Networks (GANs)

In addition to the autoencoder just described, its deepfake results can be further processed and refined via Generative Adversarial Networks (GANs). In contrast to an autoencoder, a GAN couples several neural networks that work on the deepfake simultaneously: a generative network produces fakes whose statistics match the original material, while a discriminative network tries to detect differences between the fake and the original. By competing against each other, both networks improve.

Compared to the autoencoder, generative adversarial networks involve a very time-consuming, iterative process and demand enormous computing power from the graphics processor. GANs are currently used mostly for realistic images of fictional people. The danger is that the rapid development of deep learning hardware will change this in the foreseeable future, and the technology will increasingly be used for purposes beyond those for which it was developed.
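The generator-versus-discriminator competition can be illustrated with a deliberately tiny, self-contained toy in plain NumPy: a one-dimensional “GAN” in which the generator learns to imitate samples from a simple real distribution. All the numbers and names here are invented for illustration; real GANs use deep networks and vastly more compute.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# Real data the generator tries to imitate: samples around 3.0.
def sample_real(n):
    return rng.normal(3.0, 0.5, size=n)

w_g, b_g = 0.1, 0.0  # generator: x = w_g * z + b_g
w_d, b_d = 0.1, 0.0  # discriminator: P(real) = sigmoid(w_d * x + b_d)

lr, batch = 0.05, 64

for step in range(500):
    z = rng.normal(size=batch)
    x_fake = w_g * z + b_g
    x_real = sample_real(batch)

    # Discriminator update: push D(real) -> 1 and D(fake) -> 0.
    d_real = sigmoid(w_d * x_real + b_d)
    d_fake = sigmoid(w_d * x_fake + b_d)
    grad_s_real = d_real - 1.0   # gradient of -log D(real) w.r.t. its logit
    grad_s_fake = d_fake         # gradient of -log(1 - D(fake)) w.r.t. its logit
    w_d -= lr * np.mean(grad_s_real * x_real + grad_s_fake * x_fake)
    b_d -= lr * np.mean(grad_s_real + grad_s_fake)

    # Generator update: push D(fake) -> 1, i.e. fool the discriminator.
    x_fake = w_g * z + b_g
    d_fake = sigmoid(w_d * x_fake + b_d)
    grad_x = (d_fake - 1.0) * w_d  # chain rule through the discriminator
    w_g -= lr * np.mean(grad_x * z)
    b_g -= lr * np.mean(grad_x)

fake_mean = np.mean(w_g * rng.normal(size=1000) + b_g)
print(round(fake_mean, 2))  # drifts toward the real mean of 3.0
```

The same adversarial loop, scaled up to convolutional networks and image data, is what makes GAN-refined deepfakes so computationally expensive.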

Warning And Threat Of Deep Fakes In The Foreseeable Future

The expected spread of deepfake technology suggests that manipulated images and video recordings will soon become a commonplace tool of criminal offenses. Experts’ greatest fear is that the new technology could give rise to forms of cybercrime that cannot currently be foreseen with any precision.

Deep Fakes Are A Danger To Society

With deepfakes, cybercriminals use disinformation campaigns and fake content to mislead the public. Some organizations and specialists see a very high risk in deepfake technology, not least through identity theft, for which deepfakes can also be used. This deepfake spiral could cause significant damage in both the private and social spheres.

“Loss Of Reality” Through Deep Fakes

Deception and false news have existed since the beginning of information dissemination. Sending manipulated messages is nothing new; only the way information is disseminated has changed. According to experts, the trend toward disinformation and fake news is growing within the mass of new information, and actors are increasingly using deepfake technology for their own purposes. It is not only the sheer volume of daily news that users must navigate on their own; the credibility of that news is also increasingly being put to the test.

The news and information disseminated profoundly affect how people perceive the media and the authority of public bodies and institutions. Trust in official authorities and in independent reporting of facts is undermined and questioned. Experts fear the emergence of a “social confusion” about which sources are reliable and which are not, and describe the consequences as an “information apocalypse” and a “loss of reality” for society.

Deep Fakes Crime And Possible Harm Caused To Those Affected

In addition to disinformation and manipulation of the public, deepfakes enable a variety of criminal activities. In particular, deepfakes are used for fraud, online humiliation, blackmail, or the falsification of evidence. In the area of fraud, deepfakes can also undermine visual identity checks, opening the door to a wide range of crimes such as human trafficking, the sale of illegal goods, and terrorism. Deepfakes lend themselves to any criminal activity based on document fraud and identity forgery.

Detecting Deepfakes With And Without Detection Software

Many official authorities and institutes point out that although a deepfake appears credible at first glance, closer inspection can reveal errors and inadequacies. Abnormalities can include:

  1. blurring errors at the edges of the face,
  2. lack of blinking,
  3. false light reflections in the eyes,
  4. incorrect shadow casting,
  5. irregularities in the hair structure, veins, and scars,
  6. discrepancies in the background, subject, sharpness, or depth of field.

If you pay attention to these small mistakes and discrepancies when looking at deepfakes, you can recognize and expose them as fakes. In the future, there will also be software for recognizing deepfake image material, such as Facebook’s Deepfake Detector or Microsoft’s Video Authenticator, to name two programs.

Like the deepfake generators, these programs work with machine learning (“deep learning”) and process the data in a neural network. Even at their current high technical level, however, these detection tools have limitations that attackers can exploit to fool the software. It therefore remains a race between deepfake detection tools and the software for creating fake material.
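As a toy illustration of the “lack of blinking” cue from the list above, the sketch below flags clips whose blink rate is implausibly low. The eye-openness scores are synthetic stand-ins for what a facial-landmark tracker would produce, and the threshold and blink-rate values are invented for the example; real detectors rely on trained neural networks rather than hand-set rules like this.

```python
import numpy as np

# Eye-openness per video frame (1.0 = fully open, ~0.2 = closed), as a
# facial-landmark tracker might report it; these values are synthetic.
real_clip = np.array([0.9, 0.85, 0.9, 0.3, 0.2, 0.35,
                      0.9, 0.88, 0.9, 0.25, 0.3, 0.9])
fake_clip = np.array([0.9, 0.88, 0.9, 0.91, 0.89, 0.9,
                      0.92, 0.9, 0.88, 0.9, 0.91, 0.9])

def blink_count(openness, closed_threshold=0.5):
    """Count blinks: frames where the eye goes from open to closed."""
    closed = openness < closed_threshold
    return int(np.sum(closed[1:] & ~closed[:-1]))

def looks_suspicious(openness, fps=24, min_blinks_per_minute=5):
    """Flag clips whose blink rate is far below a typical human rate."""
    minutes = len(openness) / fps / 60
    return blink_count(openness) / max(minutes, 1e-9) < min_blinks_per_minute

print(blink_count(real_clip), blink_count(fake_clip))  # 2 0
```

A never-blinking face is only one of the listed cues; a practical detector would combine many such signals, which is exactly what the neural-network-based tools mentioned above do at scale.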
