[13. December, 11:00] Let's Talk ML

Radek Bartyzal - Dataset Distillation
Model distillation aims to distill the knowledge of a complex model into a simpler one. Dataset distillation keeps the model fixed and instead attempts to distill the knowledge of a large training dataset into a small one. The idea is to synthesize a small number of examples that, when given to the learning algorithm as training data, yield a model approximating the one trained on the original data.
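
To make the idea concrete, here is a minimal sketch of the bilevel optimization behind dataset distillation, using a linear classifier in PyTorch (this is our illustration, not the authors' code; all names are ours): the synthetic examples are optimized so that a freshly initialized model, after a single gradient step on them, performs well on the real data.

import torch
import torch.nn.functional as F

def distill(real_x, real_y, n_synthetic=10, n_classes=10, steps=500, inner_lr=0.1):
    """Learn n_synthetic examples that stand in for (real_x, real_y)."""
    d = real_x.shape[1]
    # The synthetic inputs are the learnable "dataset"; labels are fixed and balanced.
    syn_x = torch.randn(n_synthetic, d, requires_grad=True)
    syn_y = torch.arange(n_synthetic) % n_classes
    outer_opt = torch.optim.Adam([syn_x], lr=0.01)

    for _ in range(steps):
        # Fresh random linear model each iteration, so the distilled data
        # works across initializations instead of overfitting to one of them.
        w = (0.01 * torch.randn(d, n_classes)).requires_grad_(True)

        # Inner step: one gradient step of the model on the synthetic data only.
        inner_loss = F.cross_entropy(syn_x @ w, syn_y)
        (grad_w,) = torch.autograd.grad(inner_loss, w, create_graph=True)
        w_trained = w - inner_lr * grad_w

        # Outer step: the briefly trained model should fit the real data;
        # this gradient flows back into the synthetic examples themselves.
        outer_loss = F.cross_entropy(real_x @ w_trained, real_y)
        outer_opt.zero_grad()
        outer_loss.backward()
        outer_opt.step()

    return syn_x.detach(), syn_y

The paper applies the same construction to neural networks, with multiple inner gradient steps and learned step sizes, but the nested optimization above is the core of the method.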

Václav Ostrožlík - Self-Normalizing Neural Networks
Neural networks have been gaining success in many domains over the last years. However, the main stage seems to belong to convolutional and recurrent networks, while feed-forward neural networks (FNNs) are left behind in the beginner tutorial sections. FNNs that perform well are typically shallow and therefore cannot exploit many levels of abstract representations. The authors of this paper propose Self-Normalizing Neural Networks, which make it possible to train deeper feed-forward networks with a couple of new techniques.
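
The central ingredient of the paper is the scaled exponential linear unit (SELU), whose fixed constants push activations towards zero mean and unit variance as they propagate through the layers. A minimal sketch of the activation follows (our code, not the authors'; PyTorch also ships this as torch.nn.SELU):

import torch

# SELU constants from Klambauer et al. (2017); chosen so that zero-mean,
# unit-variance activations stay close to zero mean and unit variance
# from layer to layer.
ALPHA = 1.6732632423543772
SCALE = 1.0507009873554805

def selu(x):
    # scale * x for positive inputs, scale * alpha * (exp(x) - 1) otherwise
    return SCALE * torch.where(x > 0, x, ALPHA * (torch.exp(x) - 1))

Combined with an appropriate weight initialization and a dropout variant adapted to this activation, this keeps deep FNNs trainable without explicit normalization layers.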
