In this series of articles, author Ravi Das examines some of the more advanced topics that support Generative AI. In this first article, he describes the Variational Autoencoder, also known as VAE.
The Variational Autoencoder
The Variational Autoencoder (VAE) is a type of Generative AI Model that combines what are known as “Autoencoders” with “Probabilistic Modeling.” Rather than simply memorizing the datasets it has been trained on, a VAE learns a compressed, probabilistic representation of them, and it can then use that representation both to reconstruct the original data and to generate entirely new Outputs in response to the queries that are submitted to the Model. The VAE consists of the following components:
- The Encoder: This takes an individual input from the datasets that the Generative AI Model is being trained on (for example, a single image) and compresses it into a much smaller representation. Unlike other kinds of encoders, it does not represent the data as a single fixed code. Rather, it outputs the parameters of a “Probability Distribution” (typically a mean and a variance), using the concepts of Statistics. This approach allows the Encoder to introduce a controlled level of uncertainty into the Generative AI Model, which is what later allows new Outputs to be generated rather than simply copied.
- The Latent Space: This is the compressed, lower-dimensional space in which the Encoder places its representations. Every input that the Generative AI Model has been trained on corresponds to a region of this space, and points that sit close to one another tend to decode into similar Outputs. It is from this space that new samples are drawn when the Model is later asked to generate new data.
- The Reparameterization “Trick”: Training the Model requires drawing random samples from the Encoder’s Probability Distribution, but a purely random draw cannot be differentiated, which the training process needs. The “trick” is to rewrite each sample as the distribution’s mean plus its standard deviation multiplied by separately generated random noise. This isolates the randomness so that the rest of the Model can still be trained in the usual way (see the code sketch after this list). It should also be noted here that Generative AI itself can be used to create various forms of “Synthetic Data” for other market applications, but that is a separate use case from this internal sampling step.
- The Decoder: This functionality of the Generative AI Model takes a sample from the Latent Space described above and attempts to map it back to the original data. In simpler terms, the Decoder tries to reconstruct the input that the Encoder compressed, and once it has been trained, it can also turn brand-new samples from the Latent Space into brand-new Outputs.
- The Loss Function: This is the yardstick that the VAE is trained against; it measures how well the Model is doing and is made up of two parts (the formula is written out after this list):
- The Reconstruction: This functionality measures how closely the Decoder’s Outputs match the original inputs that were fed into the Encoder.
- The Regularization: This functionality measures how far the Encoder’s Probability Distributions drift away from a simple reference distribution (typically a standard Normal Distribution), using a statistical measure known as the Kullback-Leibler Divergence. The primary reason for doing this is to keep the Latent Space smooth and well organized, so that samples drawn from it decode into Outputs that look like the real thing as much as possible.
- Generation and Interpolation: Once the Generative AI Model has been deemed fully trained, new datasets can be created simply by drawing a point from the Latent Space and sending it through the Decoder. It is also possible to “interpolate,” that is, to blend the representations of two existing inputs and decode the result, which produces Outputs that transition smoothly from one to the other (see the generation example after this list).
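For readers who want to see the Loss Function written out, the standard way to express it is as the “Evidence Lower Bound” (ELBO). The notation below follows the common textbook convention and is not tied to any particular implementation:

$$
\mathcal{L}(\theta, \phi; x) \;=\; \underbrace{\mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big]}_{\text{Reconstruction}} \;-\; \underbrace{D_{\mathrm{KL}}\big(q_\phi(z \mid x)\,\|\,p(z)\big)}_{\text{Regularization}}
$$

Here, $q_\phi(z \mid x)$ is the Encoder’s Probability Distribution over the Latent Space, $p_\theta(x \mid z)$ is the Decoder, and $p(z)$ is a standard Normal prior. Training adjusts the Encoder and the Decoder to make this quantity as large as possible (or, equivalently, to minimize its negative).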
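To make the pieces above concrete, the following is a minimal sketch of a VAE written with PyTorch. It is an illustration rather than a production implementation: the layer sizes, the 784-dimensional input (a flattened 28×28 image), and the class and function names are all assumptions made for this example.

```python
# Minimal VAE sketch (illustrative only): Encoder, Reparameterization Trick,
# Decoder, and the two-part Loss Function described above.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, input_dim=784, hidden_dim=400, latent_dim=20):
        super().__init__()
        # Encoder: compresses the input and outputs the parameters
        # (mean and log-variance) of a Probability Distribution.
        self.enc = nn.Linear(input_dim, hidden_dim)
        self.enc_mu = nn.Linear(hidden_dim, latent_dim)
        self.enc_logvar = nn.Linear(hidden_dim, latent_dim)
        # Decoder: maps a point in the Latent Space back to the data space.
        self.dec_hidden = nn.Linear(latent_dim, hidden_dim)
        self.dec_out = nn.Linear(hidden_dim, input_dim)

    def encode(self, x):
        h = F.relu(self.enc(x))
        return self.enc_mu(h), self.enc_logvar(h)

    def reparameterize(self, mu, logvar):
        # Reparameterization Trick: sample = mean + std * noise, so the
        # random draw no longer blocks gradient-based training.
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return mu + eps * std

    def decode(self, z):
        h = F.relu(self.dec_hidden(z))
        return torch.sigmoid(self.dec_out(h))

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

def vae_loss(recon_x, x, mu, logvar):
    # Reconstruction: how closely the Decoder's output matches the input.
    recon = F.binary_cross_entropy(recon_x, x, reduction="sum")
    # Regularization: KL divergence between the Encoder's distribution and
    # a standard Normal prior, which keeps the Latent Space well organized.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```

In a real training loop, `vae_loss` would be computed on batches of data and minimized with a standard optimizer such as Adam.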
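Once such a model has been trained, Generation and Interpolation reduce to a few lines. The snippet below reuses the hypothetical VAE class from the sketch above and is likewise only an illustration:

```python
# Generation and Interpolation with the (hypothetical) VAE defined above.
model = VAE()  # in practice, trained weights would be loaded here
with torch.no_grad():
    # Generation: decode random points drawn from the standard Normal prior.
    z = torch.randn(8, 20)            # 8 random points in the Latent Space
    new_samples = model.decode(z)     # 8 newly generated data points

    # Interpolation: blend the latent codes of two inputs and decode the result.
    x1, x2 = torch.rand(1, 784), torch.rand(1, 784)   # placeholder inputs
    z1, _ = model.encode(x1)          # use the mean as the latent code
    z2, _ = model.encode(x2)
    blended = model.decode(0.5 * z1 + 0.5 * z2)
```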
It is important to note here that VAEs are typically used by Generative AI Models which serve applications that are much more computing and processing intensive. For example, typical applications include creating images and videos, and even compressing large datasets (typically known as “Big Data”). However, VAEs do have a known trade-off: because the Loss Function rewards reconstructions that are close to the training data on average, their Outputs can look somewhat blurred or “averaged out” when compared with other kinds of Generative AI Models.
Up Next: The Generative Adversarial Network
The next Advanced AI topic this series will tackle is the Generative Adversarial Network (GAN), a Machine Learning Model that creates new datasets resembling what it has previously learned.
Ravi Das is a Cybersecurity Consultant and Business Development Specialist. He also does Cybersecurity Consulting through his private practice, RaviDas Tech, Inc., and holds the Certified in Cybersecurity (CC) certification from ISC2.
Visit his website at mltechnologies.io