What are GANs, the machine learning systems behind deepfakes?

It is these systems, described by Yann LeCun, Chief AI Scientist at Facebook, as "the most interesting idea of the last 10 years in the field of machine learning," that today allow fakes to resemble the real thing ever more closely. Introduced in 2014 by the American researcher Ian Goodfellow, they have the particularity of being able to create data (for example, images in the case of deepfakes), and to do so without supervised learning.

How? "GANs are made up of two competing neural networks: the generator, which aims to create images that are as realistic as possible, and the discriminator, responsible for recognizing whether or not the images produced by the generator are fake," explains Elisa Fromont, a researcher specializing in machine learning at the Institute for Research in Computer Science and Random Systems (IRISA).
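The two-network structure described above can be sketched in a few lines. This is a minimal toy illustration, not a real GAN: the dimensions, the single linear layer per network, and all variable names are assumptions for the sake of the example (actual GANs use deep convolutional networks).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes: 16-dimensional noise vectors, 64-pixel "images".
NOISE_DIM, IMG_PIXELS = 16, 64

# Generator: maps a random noise vector to an image-sized vector.
# A single linear layer with tanh keeps pixel values in (-1, 1).
G_weights = rng.normal(0, 0.1, (NOISE_DIM, IMG_PIXELS))

def generator(z):
    return np.tanh(z @ G_weights)

# Discriminator: maps an image-sized vector to a probability in (0, 1)
# (close to 1 = judged real, close to 0 = judged fake); here a single
# logistic unit stands in for a full convolutional network.
D_weights = rng.normal(0, 0.1, (IMG_PIXELS,))

def discriminator(x):
    return 1.0 / (1.0 + np.exp(-(x @ D_weights)))

z = rng.normal(size=NOISE_DIM)
fake_image = generator(z)
score = discriminator(fake_image)
```

The point of the sketch is only the division of labor: the generator never sees real images directly; it only ever receives the discriminator's score.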

In practice, the generator is first fed "noise" (random input), from which it tries, by encoding, to extract rules of its own; decoding those rules then lets it produce images. "Initially, the generator will not produce anything very interesting, since its first images will look like the noise it was given as input," says the researcher.
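That starting point is easy to see in code. In this hedged sketch (the sizes and the single random layer are illustrative assumptions), a freshly initialized generator turns input noise into output that is itself just noise, centered near zero with no learned structure:

```python
import numpy as np

rng = np.random.default_rng(42)

# Untrained generator: random weights, nothing learned yet.
# Hypothetical sizes: 16-dim noise in, 64-pixel image out.
W = rng.normal(0, 0.1, (16, 64))
z = rng.normal(size=16)            # the input "noise"
first_image = np.tanh(z @ W)

# The "image" is just more noise: small values scattered around zero,
# statistically indistinguishable from the randomness it was fed.
```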


To make it improve, each new image it creates is submitted, in the middle of a stream of other images, to the scrutiny of the discriminator, a conventional convolutional neural network, which compares it against the catalog of real images at its disposal and assigns it a probability of authenticity between 0 (a fake) and 1 (a real image). This estimate is returned to the generator, which, armed with this new information, produces a new image ever closer to the catalog images that serve as its reference.
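This scoring step can be sketched as follows. The batch layout, sizes, and the simple logistic discriminator are assumptions for illustration; the only point carried over from the text is that real and generated images are mixed into one stream and each receives a probability between 0 and 1:

```python
import numpy as np

rng = np.random.default_rng(1)

def discriminator(x, w):
    # Logistic score in (0, 1): near 1 = "real", near 0 = "fake".
    return 1.0 / (1.0 + np.exp(-(x @ w)))

D = rng.normal(0, 0.1, 64)                    # toy discriminator weights

real_images = rng.normal(0.5, 0.1, (8, 64))   # stand-in "catalog" of real images
fake_images = rng.normal(0.0, 1.0, (4, 64))   # generator output, mixed into the stream

batch = np.vstack([real_images, fake_images])
scores = discriminator(batch, D)              # one authenticity probability per image

# These per-image probabilities are the feedback returned to the generator.
```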

The goal is not to copy the real images literally but to create new ones "in the manner of" them. The two entities carry on this way, each contributing to the improvement of the other: as the generator learns to make increasingly realistic images, the discriminator, for its part, learns to distinguish real images from fakes better and better. "It is quite difficult to train such networks, because the discriminator must be good enough for the generator's images to end up looking like real ones," notes Elisa Fromont. The images are considered realistic when the generator manages to deceive the discriminator. The resulting fake is then supposed to be almost undetectable.
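The whole adversarial back-and-forth can be demonstrated end to end on a deliberately tiny problem. In this sketch (everything here is an illustrative assumption: 1-D "images" drawn from N(4, 0.5), a linear generator, a logistic discriminator, hand-derived gradients), the generator starts by producing pure noise around 0 and, guided only by the discriminator's scores, drifts toward the real data distribution:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# Toy 1-D "real data": samples from N(4, 0.5). The generator must learn to
# turn noise z ~ N(0, 1) into samples "in the manner of" this distribution.
def real_batch(n):
    return rng.normal(4.0, 0.5, n)

a, b = 1.0, 0.0      # generator parameters: G(z) = a*z + b
w, c = 0.0, 0.0      # discriminator parameters: D(x) = sigmoid(w*x + c)
lr, n = 0.02, 32

for step in range(3000):
    # --- discriminator step: push D(real) toward 1, D(fake) toward 0 ---
    x_real = real_batch(n)
    z = rng.normal(size=n)
    x_fake = a * z + b
    s_real = sigmoid(w * x_real + c)
    s_fake = sigmoid(w * x_fake + c)
    w += lr * np.mean((1 - s_real) * x_real - s_fake * x_fake)
    c += lr * np.mean((1 - s_real) - s_fake)

    # --- generator step: push D(fake) toward 1 (non-saturating loss) ---
    z = rng.normal(size=n)
    x_fake = a * z + b
    s_fake = sigmoid(w * x_fake + c)
    grad_x = (1 - s_fake) * w          # d log D(x) / dx
    a += lr * np.mean(grad_x * z)
    b += lr * np.mean(grad_x)

# After training, the fakes should cluster near the real mean of 4 rather
# than near their starting mean of 0 -- without copying any real sample.
fake_mean = float(np.mean(a * rng.normal(size=1000) + b))
```

This also illustrates the fragility Elisa Fromont points to: if the learning rates of the two players are badly balanced, the discriminator either saturates (giving the generator no usable gradient) or lags behind (giving it a meaningless one).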
