[7 March, 13:00] Let's Talk ML

Markéta Jůzlová - Hyperband (slides)

Hyperband is a multi-armed bandit strategy proposed for hyper-parameter optimization of learning algorithms. Despite its conceptual simplicity, the authors report results competitive with state-of-the-art hyper-parameter optimization methods such as Bayesian optimization.
I will describe the main principle of the method and its possible extension.
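For a taste of the method, here is a minimal sketch of the successive-halving subroutine at Hyperband's core; the sampled values and the `evaluate` function are hypothetical stand-ins for real hyper-parameter configurations and partial training runs. Hyperband itself runs several such brackets, trading off the number of configurations against the budget given to each.

```python
import random

def evaluate(config, budget):
    """Hypothetical stand-in: 'train' a configuration for `budget` units
    and return a validation loss (noise shrinks as the budget grows)."""
    return config + random.gauss(0, 1.0 / budget)

def successive_halving(n=27, min_budget=1, eta=3):
    """The subroutine Hyperband repeats in each bracket: start n random
    configurations on a small budget, keep the best 1/eta fraction,
    and multiply the budget by eta each round."""
    configs = [random.random() for _ in range(n)]  # hypothetical hyper-parameters
    budget = min_budget
    while len(configs) > 1:
        ranked = sorted((evaluate(c, budget), c) for c in configs)
        configs = [c for _, c in ranked[:max(1, len(configs) // eta)]]
        budget *= eta
    return configs[0]

print(successive_halving())  # the surviving configuration
```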

Ondra Bíža - Visualizing Deep Neural Networks (slides)

Techniques for visualizing deep neural networks have seen significant improvements in the last year. I will explain a novel algorithm for visualizing convolutional filters and use it to analyze a deep residual network.
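As a preview of the general recipe (not the specific algorithm from the talk), the sketch below performs activation maximization: gradient ascent on an input image to find the pattern that most excites a chosen filter. The untrained `Conv2d` layer is a stand-in for a filter from a real trained network.

```python
import torch

# Untrained stand-in for a convolutional layer from a real trained network.
conv = torch.nn.Conv2d(3, 8, kernel_size=3)

# Start from a small random image and ascend the mean activation of filter 0.
img = torch.randn(1, 3, 32, 32, requires_grad=True)
opt = torch.optim.Adam([img], lr=0.1)

for _ in range(200):
    opt.zero_grad()
    activation = conv(img)[0, 0].mean()      # response of the chosen filter
    loss = -activation + 1e-3 * img.norm()   # ascend it; mild L2 keeps pixels bounded
    loss.backward()
    opt.step()

# `img` now contains the input pattern that most excites filter 0.
```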

[21 February, 13:00] Let's Talk ML

Radek Bartyzal - Born Again Neural Networks (slides)

Knowledge distillation is the process of training a compact model (the student) to approximate the outputs of a previously trained, more complex model (the teacher).
The authors of this paper take this idea in a different direction: they train a student of the same complexity as its teacher and find that the student surpasses the teacher in many cases. They also train students whose architecture differs from the teacher's, with interesting results.
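A minimal sketch of the classic Hinton-style distillation objective the paper builds on; the temperature `T` and mixing weight `alpha` are conventional defaults, not values from the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Weighted sum of cross-entropy on the hard labels and KL divergence
    between temperature-softened teacher and student distributions."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * T * T  # rescale gradients by T^2
    return alpha * hard + (1 - alpha) * soft

# Toy usage: a batch of 4 examples, 10 classes, random logits.
s, t = torch.randn(4, 10), torch.randn(4, 10)
y = torch.randint(0, 10, (4,))
print(distillation_loss(s, t, y))
```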

This will be one longer talk (40 min) in which I will also describe the relevant architectures used in the paper (DenseNet, Wide ResNet).

[4 January, 11:00] Let's Talk ML

Martin Čejka - ToyArch: a Czech take on general artificial intelligence

GoodAI is a research company from Prague that runs experiments in general artificial intelligence (GAI). In this presentation, I will talk about their biologically inspired approach, the Toy architecture (ToyArch): how does it work, and what is it capable of so far?

Petr Nevyhoštěný - Non-Linear Semantic Embedding

Non-Linear Semantic Embedding is a technique, presented in a paper on instrument recognition, for learning an efficient mapping of instruments from a time-frequency representation to a low-dimensional space. It combines automatic feature extraction using convolutional neural networks with a learning model that uses extrinsic information about the similarity of training examples.
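The paper's exact objective is not reproduced here, but a generic pairwise embedding loss of this kind illustrates how extrinsic similarity labels can shape an embedding space; the contrastive form and the margin are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, similar, margin=1.0):
    """Pull embeddings of similar pairs together; push dissimilar pairs
    at least `margin` apart."""
    d = F.pairwise_distance(z1, z2)
    return (similar * d ** 2 + (1 - similar) * F.relu(margin - d) ** 2).mean()

# Toy usage: 5 pairs of 16-dim embeddings (as a CNN might produce)
# with binary extrinsic similarity labels.
z1, z2 = torch.randn(5, 16), torch.randn(5, 16)
sim = torch.randint(0, 2, (5,)).float()
print(contrastive_loss(z1, z2, sim))
```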

[14 December, 11:00] Let's Talk ML

Ondra Bíža - Overcoming catastrophic forgetting in neural networks (slides)

J. Kirkpatrick et al. (2017)
Artificial neural networks struggle to learn multiple different tasks because of a phenomenon known as catastrophic forgetting. In my talk, I will introduce catastrophic forgetting, describe a new learning algorithm called EWC (Elastic Weight Consolidation) that mitigates it, and briefly mention the neurobiological principles that inspired EWC.
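The heart of EWC is a quadratic penalty that anchors the parameters that mattered for a previous task. A minimal sketch, assuming `fisher` and `old_params` were captured after training on task A (the all-ones Fisher terms in the toy usage are placeholders):

```python
import torch

def ewc_penalty(model, fisher, old_params, lam=100.0):
    """Quadratic penalty anchoring each parameter to its post-task-A value,
    weighted by a diagonal Fisher-information estimate of its importance."""
    loss = 0.0
    for name, p in model.named_parameters():
        loss = loss + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return (lam / 2.0) * loss

# Toy usage: a linear model with all-ones Fisher terms as placeholders.
model = torch.nn.Linear(3, 2)
old = {n: p.detach().clone() for n, p in model.named_parameters()}
fisher = {n: torch.ones_like(p) for n, p in model.named_parameters()}
print(ewc_penalty(model, fisher, old))  # zero until the parameters move
```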

Ondra Podsztavek - Deep Q-network (slides)

Deep Q-network (DQN) is a deep reinforcement learning (DeepRL) system that combines deep neural networks with reinforcement learning and is able to master a diverse range of Atari 2600 games with only the raw pixels and the score as input. It represents a general-purpose agent that can adapt its behaviour without any human intervention.
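A minimal sketch of the DQN update, with a tiny linear "network" over a 4-dimensional state instead of raw Atari pixels so the example stays self-contained; the frozen target network shown here and experience-replay sampling (not shown) are the two stabilizers the original system relies on.

```python
import torch
import torch.nn.functional as F

q_net = torch.nn.Linear(4, 2)        # stand-in Q-network: 4-dim state, 2 actions
target_net = torch.nn.Linear(4, 2)   # frozen copy, synced periodically
target_net.load_state_dict(q_net.state_dict())
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma = 0.99

def dqn_step(batch):
    """One gradient step on the temporal-difference error, with targets
    computed from the frozen target network."""
    s, a, r, s2, done = batch
    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)   # Q(s, a) for taken actions
    with torch.no_grad():
        target = r + gamma * target_net(s2).max(1).values * (1 - done)
    loss = F.smooth_l1_loss(q, target)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Toy batch of 8 transitions (state, action, reward, next state, done flag).
batch = (torch.randn(8, 4), torch.randint(0, 2, (8,)),
         torch.randn(8), torch.randn(8, 4), torch.zeros(8))
print(dqn_step(batch))
```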

[30 November, 11:00] Let's Talk ML

Filip Paulů - Analog Artificial Neural Networks (slides)

The performance of today's neural networks is starting to fall short. How can we make them faster? Is computing large and complex neural network structures on digital processors and graphics cards still the way to go? Is there another path? We will talk about all of this.

Václav Ostrožlík - Capsule Networks (slides)

Geoffrey Hinton, one of the "fathers of deep learning", recently published two papers introducing a new neural network architecture called Capsule Networks. In the talk, I will show how these networks work, how they can be trained, and what new possibilities they bring.
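One ingredient that fits in a few lines is the "squash" non-linearity from Sabour et al. (2017), which rescales each capsule's output vector so its length behaves like a presence probability; the rest of the architecture (routing-by-agreement, the capsule layers) is left for the talk.

```python
import torch

def squash(s, dim=-1, eps=1e-8):
    """Rescale each capsule's output vector so its length lies in [0, 1)
    and can be read as the probability that the detected entity is present."""
    sq = (s ** 2).sum(dim=dim, keepdim=True)
    return (sq / (1.0 + sq)) * s / torch.sqrt(sq + eps)

print(squash(torch.randn(3, 8)).norm(dim=-1))  # all lengths now below 1
```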
