Summer Camp 2018

No summer plans yet?

Keen to work on interesting projects in the areas of Artificial Intelligence, Machine Learning, Data Mining and Big Data?

Well, this is the right place to be!

DataLab introduces Summer Camp 2018 for students interested in Artificial Intelligence, Data Mining, Machine Learning and Big Data.

Pre-register here.

Important Dates:

29.6.2018 at 10:00 - Summer Camp 2018 kick-off meeting

Read more: Summer Camp 2018

[13. December, 11:00] Let's Talk ML

Radek Bartyzal - Dataset Distillation
Model distillation aims to distill the knowledge of a complex model into a simpler one. Dataset distillation keeps the model fixed and instead attempts to distill the knowledge from a large training dataset into a small one. The idea is to synthesize a small number of examples that will, when given to the learning algorithm as training data, approximate the model trained on the original data.
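To make the idea concrete, here is a minimal sketch of the dataset-distillation loop on a linear classifier (my own illustration, not the paper's code; shapes and hyperparameters are arbitrary). The synthetic examples are trainable tensors: an inner SGD step on them is kept differentiable, so the outer loss on real data can be backpropagated into the synthetic batch.

```python
# Minimal sketch of dataset distillation (assumed setup: linear model, PyTorch).
import torch
import torch.nn.functional as F

def distill_step(W, synth_x, synth_y, real_x, real_y, inner_lr=0.1):
    """One outer step: adjust the synthetic batch so that a single SGD step
    taken on it makes the (linear) model perform well on real data."""
    # Inner step: loss of the current weights on the synthetic batch.
    loss_synth = F.cross_entropy(synth_x @ W, synth_y)
    (grad_W,) = torch.autograd.grad(loss_synth, W, create_graph=True)
    W_updated = W - inner_lr * grad_W          # kept differentiable w.r.t. synth_x
    # Outer loss: evaluate the updated weights on real data, backprop into synth_x.
    loss_real = F.cross_entropy(real_x @ W_updated, real_y)
    loss_real.backward()
    return loss_real.item()

# Usage: synth_x has far fewer rows than the real dataset (here 10 examples).
d, k = 784, 10
W = torch.zeros(d, k, requires_grad=True)
synth_x = torch.randn(10, d, requires_grad=True)   # the distilled "dataset"
synth_y = torch.arange(10)                         # one synthetic example per class
opt = torch.optim.Adam([synth_x], lr=0.01)
real_x, real_y = torch.randn(64, d), torch.randint(0, 10, (64,))
opt.zero_grad(); distill_step(W, synth_x, synth_y, real_x, real_y); opt.step()
```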

Václav Ostrožlík - Self-Normalizing Neural Networks
Neural networks have been gaining success in many domains in recent years. However, it seems like the main stage belongs to convolutional and recurrent networks, while plain feed-forward neural networks (FNNs) are left behind in the beginner tutorial sections. FNNs that perform well are typically shallow and therefore cannot exploit many levels of abstract representations. The authors of this paper propose Self-Normalizing Neural Networks, a couple of new techniques that allow training deeper feed-forward networks.
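The paper's main ingredients are available off the shelf in PyTorch; a brief sketch of how they fit together (layer sizes below are illustrative, not from the talk):

```python
# Self-normalizing FNN block: SELU activation, LeCun-normal init, AlphaDropout.
import torch
import torch.nn as nn

def snn_block(n_in, n_out, dropout=0.05):
    linear = nn.Linear(n_in, n_out)
    # LeCun-normal initialisation keeps activations near zero mean / unit variance,
    # which SELU then preserves from layer to layer (the "self-normalising" property).
    nn.init.normal_(linear.weight, std=(1.0 / n_in) ** 0.5)
    nn.init.zeros_(linear.bias)
    return nn.Sequential(linear, nn.SELU(), nn.AlphaDropout(dropout))

# A deep plain feed-forward net that stays trainable without batch normalisation.
model = nn.Sequential(*[snn_block(128, 128) for _ in range(16)], nn.Linear(128, 10))
x = torch.randn(32, 128)
print(model(x).shape)  # torch.Size([32, 10])
```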

[29. November, 11:00] Let's Talk ML

Matúš Žilinec - BERT: New state of the art in NLP (slides)
Last month, Google caused a sensation by setting a new state of the art on 11 natural language processing tasks with a single model, BERT, a transformer designed to pre-train deep word representations by conditioning on both left and right context in all layers. The pre-trained model can be fine-tuned in a few hours with just one additional layer for a wide range of NLP tasks, such as question answering, language inference or named entity recognition. I will talk about how this works, how it is different and why we should care.
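The "one additional layer" fine-tuning recipe looks roughly like this today with the Hugging Face transformers library (a later convenience wrapper, not Google's original TensorFlow release; the toy sentences and label count are mine):

```python
# Fine-tuning BERT for a 2-class sentence classification task.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# Pre-trained encoder plus a single freshly initialised classification head.
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

batch = tokenizer(["the movie was great", "the movie was terrible"],
                  padding=True, return_tensors="pt")
labels = torch.tensor([1, 0])
loss = model(**batch, labels=labels).loss   # fine-tune the whole stack end to end
loss.backward()
```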

[22. November, 11:00] Let's Talk ML

Pablo Maldonado: If RL is broken, how could we fix it? 

The appeal of the reinforcement learning framework is its full generality, which is unfortunately also its curse: real-life, interesting problems are only solvable through graduate student descent, a high-quality simulator and obscene amounts of computing power. In this talk I will show some of the issues and ideas to tackle them.

Filip Paulů: Online learning (slides)

Things change in nature, in the world, and in life. When such things are analyzed, this has to be taken into account. Nevertheless, this fact is often neglected, which leads to critical errors. We will discuss how to approach a gradually changing environment from the perspective of AI and machine learning, and what the latest methods are.
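A minimal illustration of the setting (my example, not the talk's material): the model is updated one sample at a time with scikit-learn's partial_fit, so it can keep tracking a slowly drifting decision boundary instead of being retrained from scratch.

```python
# Online learning under gradual concept drift with scikit-learn.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier()
classes = np.array([0, 1])
rng = np.random.default_rng(0)

for t in range(1000):
    drift = 0.002 * t                          # the true boundary slowly moves over time
    x = rng.normal(size=(1, 2))
    y = np.array([int(x[0, 0] + x[0, 1] > drift)])
    model.partial_fit(x, y, classes=classes)   # update immediately, no full retraining
```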

[1. November, 11:00] Let's Talk ML

Markéta Jůzlová: Neural Architecture Search with Reinforcement Learning (slides)
The paper uses reinforcement learning to automatically generate the architecture of a neural network for a given task. The architecture is represented as a variable-length string, and a controller network is used to generate such a string. The controller is trained with reinforcement learning to assign higher probability to architectures that achieve better validation accuracy.
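A stripped-down sketch of the controller idea (REINFORCE over a tiny two-decision search space; the paper uses an RNN controller and actually trains each sampled child network, which the placeholder reward function below only stands in for):

```python
# REINFORCE-style architecture search over a toy search space.
import torch
import torch.nn as nn

choices = [16, 32, 64, 128]                            # e.g. hidden sizes for two layers
logits = nn.Parameter(torch.zeros(2, len(choices)))    # "controller": one categorical per decision
opt = torch.optim.Adam([logits], lr=0.05)

def validation_accuracy(arch):
    # Placeholder: in the paper this means training a child network with the
    # sampled architecture and measuring its accuracy on a validation set.
    return float(sum(arch)) / (2 * max(choices))

baseline = 0.0
for step in range(200):
    dist = torch.distributions.Categorical(logits=logits)
    sample = dist.sample()                              # the variable-length string in the paper
    arch = [choices[int(i)] for i in sample]
    reward = validation_accuracy(arch)
    baseline = 0.9 * baseline + 0.1 * reward            # moving-average baseline
    loss = -(reward - baseline) * dist.log_prob(sample).sum()   # REINFORCE update
    opt.zero_grad(); loss.backward(); opt.step()
```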

Petr Nevyhoštěný: Learning to Rank Applied onto Fault Localization (slides)
Debugging is a very time-consuming and tedious task that also makes up a large part of the software development lifecycle. Several techniques already exist that aim to identify the root causes of failures automatically. I will explain some of these techniques and describe how they can be combined using a learning to rank algorithm.
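As a toy sketch of the learning-to-rank framing (the features and numbers below are invented for illustration): each program statement gets a feature vector, for example several suspiciousness scores from different localization techniques, and a pairwise model learns to rank faulty statements above clean ones.

```python
# Pairwise learning to rank for fault localization (toy example).
import torch
import torch.nn as nn

scorer = nn.Linear(3, 1)                        # 3 illustrative features per statement
opt = torch.optim.Adam(scorer.parameters(), lr=0.01)

# features[i] describes statement i; the first row is the known faulty statement.
features = torch.tensor([[0.9, 0.2, 1.0],
                         [0.4, 0.1, 0.0],
                         [0.7, 0.8, 0.0]])
faulty, clean = features[:1], features[1:]

for _ in range(100):
    # Pairwise hinge loss: the faulty statement should outrank every clean one.
    margin = scorer(clean) - scorer(faulty) + 1.0
    loss = torch.clamp(margin, min=0.0).mean()
    opt.zero_grad(); loss.backward(); opt.step()

ranking = scorer(features).squeeze(1).argsort(descending=True)   # inspect top-ranked first
```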
