A Bayesian Nonparametrics View into Deep Representations

In this talk we will present our work on probabilistic models for neural representations. Specifically, we will present nonparametric Bayesian models for neural activations in convolutional neural networks and for latent representations in variational autoencoders. These models allow us to formulate a tractable complexity measure for distributions of neural activations and to explore the global structure of latent spaces learned by autoencoders. We use this machinery to uncover how memorization and regularization influence representational complexity in convolutional networks. Among other results, we demonstrate that networks that can exploit patterns in the data learn vastly less complex representations than networks forced to memorize the training data. Next, we investigate latent representations learned by variational autoencoders under different regularization regimes. We show that in standard variational autoencoders the aggregated posterior quickly collapses to the prior as the regularization strength increases. Autoencoders with a kernel-based regularization term learn more complex posterior distributions, even under strong regularization. However, they do not exhibit independence of latent dimensions.
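For readers unfamiliar with the regularization regimes contrasted above, here is a minimal sketch assuming the standard setups that these descriptions match: a weighted KL term in the variational autoencoder objective, and a kernel-based (maximum mean discrepancy) penalty on the aggregated posterior, as in Wasserstein or MMD autoencoders. The weights \beta and \lambda and the kernel k are illustrative choices, not taken from the talk. The KL-regularized objective is

\mathcal{L}_{\beta}(\theta, \phi) = \mathbb{E}_{q_\phi(z \mid x)}\left[\log p_\theta(x \mid z)\right] - \beta \, \mathrm{KL}\left(q_\phi(z \mid x) \,\|\, p(z)\right),

and the aggregated posterior averages the per-sample posteriors over the data distribution:

q_\phi(z) = \mathbb{E}_{x \sim p_{\mathrm{data}}}\left[q_\phi(z \mid x)\right].

A kernel-based regularizer instead penalizes the discrepancy between this aggregate and the prior:

\mathcal{L}_{\mathrm{MMD}}(\theta, \phi) = \mathbb{E}_{q_\phi(z \mid x)}\left[\log p_\theta(x \mid z)\right] - \lambda \, \mathrm{MMD}_k^2\left(q_\phi(z), p(z)\right),

where

\mathrm{MMD}_k^2(q, p) = \mathbb{E}_{z, z' \sim q}\left[k(z, z')\right] + \mathbb{E}_{z, z' \sim p}\left[k(z, z')\right] - 2 \, \mathbb{E}_{z \sim q, \, z' \sim p}\left[k(z, z')\right].

Under these assumptions, a large \beta pushes every per-sample posterior, and hence their aggregate, toward the prior, whereas the MMD term constrains only the aggregate, leaving individual posteriors free to remain complex; this is consistent with the behaviour described above.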

Marcin Kurdziel holds a PhD in computer science from AGH University of Science and Technology, Krakow, Poland. He is currently an associate professor at the Institute of Computer Science, AGH University of Science and Technology. His research interests focus on machine learning, probabilistic models, and parallel implementations of learning algorithms.

Marcin Kurdziel
Institute of Computer Science
AGH University of Science and Technology

Monday, 28 June 2021, 2:00-3:30 PM (CEST)

Join via Zoom at seminar.sano.science