What are Deep Generative Models?

A Deep Generative Model (DGM) is a neural network in the subdomain of Deep Learning that follows the generative modelling approach. The opposite of this approach is discriminative modelling, which is designed to identify decision boundaries on the basis of the existing training data and classify the input accordingly.

The generative approach, on the other hand, follows the strategy of learning the data distribution from training data and, true to its name, creating new data points based on the learned or approximated distribution. While discriminative modelling is attributed to supervised learning, generative modelling is usually based on unsupervised learning.

Deep generative models thus ask how data is generated under a probability model, while discriminative models aim to make classifications based on the existing training data. Generative models try to understand the probability distribution of the training data and to generate new or similar data on that basis. For this reason, one area of application of deep generative models is image generation based on sample images, such as in the neural network DALL-E.
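In standard probabilistic notation, this contrast is often summarised as follows: a generative model learns the marginal or joint distribution of the data, while a discriminative model only learns the conditional distribution of the label given the input (a common textbook formulation, not specific to any one architecture):

```latex
\text{generative: } p(x) \;\text{or}\; p(x, y), \qquad \text{discriminative: } p(y \mid x)
```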

What are Flow-based Deep Generative Models?

A flow-based deep generative model is a generative model that explicitly models the probability distribution of the data. This can be illustrated with the help of the so-called "normalising flow".

A normalising flow is a statistical method for estimating the density functions of probability distributions. In contrast to other types of generative models, such as Generative Adversarial Networks (GANs) or Variational Autoencoders (VAEs), flow-based deep generative models construct this "flow" as a sequence of invertible transformations. Because every step is invertible, the likelihood function can be computed exactly, and the true probability distribution can thus be learned directly.
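To make this concrete, the following is a minimal sketch (not taken from any specific library) of the change-of-variables formula that flows rely on. A single hand-picked affine transformation stands in for a learned network; real flow models stack many such invertible layers with trainable parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative, fixed parameters; a trained flow would learn these.
a, b = 2.0, 1.0

def forward(x):
    """Map data x to the latent variable z through the invertible map z = (x - b) / a."""
    return (x - b) / a

def log_prob(x):
    """Exact log-likelihood via the change-of-variables formula:
    log p_x(x) = log p_z(f(x)) + log |det df/dx|."""
    z = forward(x)
    log_base = -0.5 * (z**2 + np.log(2 * np.pi))  # standard normal base density
    log_det = -np.log(abs(a))                     # |df/dx| = 1/a for this affine map
    return log_base + log_det

x = rng.normal(loc=b, scale=a, size=5)
print(log_prob(x))  # exact densities, which GANs cannot provide
```

The key point is the exact log-determinant term: because the transformation is invertible with a tractable Jacobian, the likelihood is computed directly rather than approximated.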

Generative Adversarial Networks, by contrast, consist of a generator and a discriminator, which can be seen as opponents. The generator produces data that the discriminator tries to identify as forgeries (i.e. as not belonging to the given, real distribution). The generator's goal, in turn, is to ensure that its generated data is not identified as a forgery, so that through training the generator's distribution approximates the real one. In a Variational Autoencoder, the distribution is optimised by maximising the ELBO (Evidence Lower Bound).
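For comparison, the standard training objectives of these two model families can be written as follows (the usual formulations from the literature: the GAN minimax game and the VAE's lower bound on the log-likelihood):

```latex
\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]

\log p(x) \;\geq\; \mathrm{ELBO} = \mathbb{E}_{q(z \mid x)}[\log p(x \mid z)] - \mathrm{KL}\big(q(z \mid x)\,\|\,p(z)\big)
```

Neither objective yields the exact likelihood of the data: the GAN never computes a density at all, and the VAE only bounds it from below, which is precisely the gap that flow-based models close.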

Where are these models applied?

Deep Generative Models have extensive applications in the field of Deep Learning.

For example, they are used in image generation: new, artificial faces with human facial features are created from the human faces in the training data. This method can also be used in the film and computer games sector. A special application of generative models are the so-called deepfakes, in which media content is created artificially but gives the appearance of being real.

Genuine-looking handwriting can also be created by means of generative models. Likewise, a photo can be generated on the basis of a textual description.

The achievements of deep generative models can also be used in medicine. For example, the paper "Disease variant prediction with deep generative models of evolutionary data" shows that previously unknown disease variants can be predicted with the help of generative models. Specifically, the article deals with the detection of protein variants in disease-related genes that have the potential to cause disease. The drawback of previous methods (primarily based on supervised learning) was that the models depended on known disease labels and could therefore not predict new variants. Deep Generative Models are intended to improve on this.