We show that every d-dimensional probability distribution with bounded support can be generated by deep ReLU networks from a one-dimensional uniform input distribution. Moreover, this is possible without incurring a cost, in terms of approximation error measured in Wasserstein distance, relative to generating the d-dimensional target distribution from d independent random variables.
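As a toy illustration of the one-dimensional building block (my sketch, not the paper's construction, and covering only the 1-D case of the d-dimensional result), the code below realizes a piecewise-linear approximation of the quantile function of an assumed bounded-support target, Beta(2, 5), as a one-hidden-layer ReLU network, pushes uniform samples through it, and measures the empirical Wasserstein-1 distance to direct samples from the target.

```python
# Minimal numeric sketch: a one-hidden-layer ReLU network that transports
# Uniform(0, 1) samples toward a bounded-support target (Beta(2, 5) is an
# assumption chosen for illustration).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Knots of a piecewise-linear approximation of the target's quantile function.
K = 32
u_knots = np.linspace(0.0, 1.0, K + 1)
q_knots = stats.beta(2, 5).ppf(u_knots)

# Express the interpolant as b + w0*u + sum_k v_k * relu(u - c_k).
slopes = np.diff(q_knots) / np.diff(u_knots)
w0 = slopes[0]
v = np.diff(slopes)              # slope changes at the interior knots
c = u_knots[1:-1]                # interior breakpoints
b = q_knots[0]

def relu_net(u):
    """One-hidden-layer ReLU network realizing the piecewise-linear map."""
    return b + w0 * u + np.maximum(u[:, None] - c[None, :], 0.0) @ v

u = rng.uniform(size=100_000)
generated = relu_net(u)
target = stats.beta(2, 5).rvs(size=100_000, random_state=rng)

print("empirical W1(generated, target):",
      stats.wasserstein_distance(generated, target))
```

Increasing the number of knots K shrinks the Wasserstein distance, which is the spirit of the quantitative statement above.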
Understanding the fundamental limits of deep neural network learning is crucial for machine learning applications. We characterized these limits by determining what is achievable when no constraints are imposed on the learning algorithm or on the amount of training data. Concretely, we considered Kolmogorov-optimal approximation through deep neural networks, with the guiding theme being the relation between the complexity of the function (class) to be approximated and the complexity of the approximating network, measured in terms of connectivity and the memory required to store the network topology and the associated quantized weights. The resulting theory reveals remarkable universality properties of deep networks.
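The complexity measures can be made concrete with a toy sketch. The definitions below, nonzero weights as connectivity and bits for each quantized weight plus its position in the topology, are my illustration, not the formal definitions used in the work.

```python
# Toy accounting of network complexity: connectivity = number of nonzero
# weights; storage = bits for each quantized nonzero weight plus bits to
# record its position within the layer's (out, in) weight grid.
import numpy as np

def network_complexity(weights, bits_per_weight=8):
    """weights: list of layer weight matrices of a ReLU network."""
    connectivity = sum(int(np.count_nonzero(W)) for W in weights)
    position_bits = sum(
        int(np.count_nonzero(W)) * int(np.ceil(np.log2(W.size)))
        for W in weights
    )
    weight_bits = connectivity * bits_per_weight
    return connectivity, weight_bits + position_bits

rng = np.random.default_rng(0)
layers = [rng.standard_normal((16, 1)),
          rng.standard_normal((16, 16)),
          rng.standard_normal((1, 16))]
M, bits = network_complexity(layers)
print(f"connectivity M = {M}, storage = {bits} bits")
```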
As part of my diploma thesis, I worked on improving the stochastic coordinate descent algorithm by introducing new adaptive rules for the random selection of the coordinates to update.
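For context, here is a minimal sketch of stochastic coordinate descent with one possible adaptive selection rule, sampling coordinates with probability proportional to the current gradient magnitude, on a least-squares objective. This illustrates the general idea only; it is not one of the rules developed in the thesis.

```python
# Stochastic coordinate descent on f(x) = 0.5*||Ax - b||^2 with an adaptive
# rule: sample coordinate i with probability proportional to |grad_i f(x)|.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 20
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)
L = (A ** 2).sum(axis=0)          # per-coordinate Lipschitz constants

x = np.zeros(d)
residual = A @ x - b
for _ in range(2000):
    # Full gradient recomputed here only for simplicity of illustration;
    # practical adaptive schemes maintain cheaper gradient estimates.
    grad = A.T @ residual
    g_abs = np.abs(grad)
    if g_abs.sum() == 0.0:
        break
    i = rng.choice(d, p=g_abs / g_abs.sum())   # adaptive coordinate choice
    step = grad[i] / L[i]
    residual -= step * A[:, i]                 # keep residual in sync with x
    x[i] -= step

print("objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2)
```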
In this project I applied the nonlinear dimensionality reduction technique t-SNE to the data representations produced by the hidden layers of deep neural networks trained on music datasets, and built a music recommendation system.
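A minimal sketch of this pipeline is below, with a stand-in model and random stand-in data (the original work used trained networks on music datasets): train a small network, extract last-hidden-layer activations, and embed them in 2-D with t-SNE.

```python
# Extract hidden-layer representations from a trained network and embed
# them with t-SNE. Model and data here are placeholders for illustration.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 64))            # stand-in for audio features
y = rng.integers(0, 5, size=500)              # stand-in for genre labels

net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=300,
                    random_state=0).fit(X, y)

# Forward pass up to the last hidden layer using the learned weights
# (MLPClassifier uses ReLU activations by default).
h = X
for W, c in zip(net.coefs_[:-1], net.intercepts_[:-1]):
    h = np.maximum(h @ W + c, 0.0)

embedding = TSNE(n_components=2, perplexity=30,
                 random_state=0).fit_transform(h)
print(embedding.shape)                        # (500, 2)
```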
We proposed two deep neural network architectures for classification of arbitrary-length electrocardiogram (ECG) recordings and achieved top accuracy on the atrial fibrillation (AF) classification dataset provided by the PhysioNet/CinC Challenge 2017.
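One standard way to make a network accept recordings of arbitrary length is global pooling over the time axis; the sketch below shows this with a small 1-D CNN. It is an illustration of the general mechanism, not one of the two architectures proposed in the project.

```python
# A 1-D CNN whose global average pooling collapses any input length to a
# fixed-size feature vector; the challenge distinguished 4 classes
# (normal, AF, other rhythm, noisy).
import torch
import torch.nn as nn

class ECGClassifier(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),               # collapses any input length
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                          # x: (batch, 1, length)
        return self.head(self.features(x).squeeze(-1))

model = ECGClassifier()
for length in (3000, 9000, 18000):                 # varying recording lengths
    logits = model(torch.randn(2, 1, length))
    print(length, tuple(logits.shape))             # -> (2, 4) in every case
```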
Grounded language learning (GLL) is a technique for language acquisition that uses a multimodal set of inputs rather than just sets of words or symbols, e.g., a combination of words and related sounds or visuals. Because GLL resembles the way humans are exposed to language, studying it can potentially yield insights into how humans comprehend language. I supervised multiple semester projects on this topic.
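As a toy sketch of the grounding idea (my illustration, not a specific student project): embed words and paired perceptual inputs into a shared space and train with a contrastive objective so that matching pairs land close together.

```python
# Joint word/image embedding trained with an InfoNCE-style contrastive loss;
# word ids and image features below are random placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, img_dim, emb_dim = 1000, 512, 64
word_enc = nn.Embedding(vocab_size, emb_dim)
img_enc = nn.Linear(img_dim, emb_dim)

words = torch.randint(0, vocab_size, (32,))       # stand-in word ids
images = torch.randn(32, img_dim)                 # stand-in image features

w = F.normalize(word_enc(words), dim=-1)
v = F.normalize(img_enc(images), dim=-1)
logits = w @ v.T / 0.07                           # pairwise similarities
labels = torch.arange(32)                         # i-th word matches i-th image
loss = F.cross_entropy(logits, labels)
loss.backward()
print(float(loss))
```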