Unsupervised learning of global factors in deep generative models

publication date

  • February 2023

start page

  • 1

end page

  • 12

article number

  • 109130

volume

  • 134

International Standard Serial Number (ISSN)

  • 0031-3203

abstract

  • We present a novel deep generative model based on non-i.i.d. variational autoencoders that captures global dependencies among observations in a fully unsupervised fashion. In contrast to recent semi-supervised alternatives for global modeling in deep generative models, our approach combines a mixture model in the local, data-dependent space with a global Gaussian latent variable, which leads to three particular insights. First, the induced global latent space captures interpretable disentangled representations with no user-defined regularization in the evidence lower bound (as in beta-VAE and its generalizations). Second, we show that the model performs domain alignment to find correlations and interpolate between different databases. Finally, we study the ability of the global space to discriminate between groups of observations with a non-trivial underlying structure, such as face images with shared attributes or defined sequences of digit images.
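  The generative structure sketched in the abstract (a global Gaussian latent variable shared across a group of observations, combined with a per-observation mixture assignment in the local latent space) can be illustrated with a minimal numpy sketch. All dimensionalities, the component count, and the linear decoder below are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

K = 3         # mixture components in the local space (assumed)
D_GLOBAL = 2  # dimensionality of the global latent (assumed)
D_LOCAL = 4   # dimensionality of each local latent (assumed)
D_OBS = 5     # observation dimensionality (assumed)

# One mean per mixture component in the local, data-dependent space.
component_means = rng.normal(size=(K, D_LOCAL))

# Illustrative linear decoder mapping [local latent, global latent] -> observation.
W = rng.normal(size=(D_OBS, D_LOCAL + D_GLOBAL))

def sample_group(n_obs: int) -> np.ndarray:
    """Sample one group of observations that share a single global latent."""
    beta = rng.normal(size=D_GLOBAL)  # global Gaussian latent, shared by the group
    xs = []
    for _ in range(n_obs):
        d = rng.integers(K)                                 # local mixture assignment
        z = component_means[d] + rng.normal(size=D_LOCAL)   # local latent sample
        x = W @ np.concatenate([z, beta])                   # decode to observation
        xs.append(x)
    return np.stack(xs)

X = sample_group(10)
print(X.shape)  # (10, 5)
```

  In the actual model, the decoder is a neural network and inference over both the local mixture and the global latent is amortized via the variational autoencoder framework; the sketch only shows how the global variable couples all observations in a group while the mixture varies per observation.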

subjects

  • Telecommunications

keywords

  • vae; deep generative models; global factors; unsupervised learning; disentanglement; representation learning