Summer Camp 2018

No summer plans yet?

Keen to work on interesting projects in the areas of Artificial Intelligence, Machine Learning, Data Mining and Big Data?

Well, this is the right place to be!

DataLab introduces Summer Camp 2018 for students interested in Artificial Intelligence, Data Mining, Machine Learning and Big Data.

Pre-register here.

Important Dates:

29.6.2018 at 10:00 - Summer Camp 2018 kick-off meeting

Read more: Summer Camp 2018

[28. February] Let's Talk ML

Petr Nevyhoštěný - Introduction to Graph Neural Networks (slides)

Deep learning has achieved great success in machine learning tasks ranging from image and video classification to speech recognition and natural language understanding. However, there is an increasing number of applications where the data come from non-Euclidean domains and are represented as graph structures. This talk will be a brief introduction to graph neural networks, which attempt to deal with these problems.
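For a flavour of what such a network computes, here is a minimal sketch of a single graph-convolution step (in the spirit of Kipf & Welling's GCN); the toy graph and sizes are made up for illustration and are not taken from the talk:

```python
import numpy as np

def gcn_layer(adj, features, weight):
    """One message-passing step: normalised neighbour averaging, then a linear map + ReLU."""
    adj_hat = adj + np.eye(adj.shape[0])          # add self-loops
    deg = adj_hat.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))      # symmetric normalisation D^-1/2 (A+I) D^-1/2
    propagated = d_inv_sqrt @ adj_hat @ d_inv_sqrt @ features
    return np.maximum(propagated @ weight, 0.0)

# Tiny toy graph: 4 nodes, 3 input features, 2 output features.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 1],
                [0, 1, 0, 0],
                [0, 1, 0, 0]], dtype=float)
x = np.random.randn(4, 3)
w = np.random.randn(3, 2)
print(gcn_layer(adj, x, w))
```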


Radek Bartyzal - Dropout is a special case of the stochastic delta rule: faster and more accurate deep learning (slides)

This talk will explain how replacing weights with random variables is connected to Dropout and what benefits it may bring.
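As a rough illustration of the connection (my own sketch, not the paper's code): the stochastic delta rule resamples every weight from its own Gaussian on each forward pass, while standard dropout corresponds to a much more constrained Bernoulli choice of that noise:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((32, 100))         # toy batch of layer inputs
w = rng.standard_normal((100, 10)) * 0.1   # mean weights of the layer

# Stochastic delta rule (sketch): each weight is a Gaussian random variable;
# a fresh sample is drawn per forward pass, and the means and standard
# deviations are what actually get trained.
w_std = np.full_like(w, 0.05)
out_sdr = x @ (w + w_std * rng.standard_normal(w.shape))

# Standard (inverted) dropout for comparison: a Bernoulli mask on the inputs,
# i.e. a very particular choice of noise distribution.
keep_prob = 0.5
mask = (rng.random(x.shape) < keep_prob) / keep_prob
out_dropout = (x * mask) @ w
```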

[13. December, 11:00] Let's Talk ML

Radek Bartyzal - Dataset Distillation
Model distillation aims to distill the knowledge of a complex model into a simpler one. Dataset distillation keeps the model fixed and instead attempts to distill the knowledge from a large training dataset into a small one. The idea is to synthesize a small number of examples that will, when given to the learning algorithm as training data, approximate the model trained on the original data.
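A minimal sketch of the resulting bi-level optimisation, assuming a toy linear classifier, random stand-in data and a single inner gradient step (the actual paper works on image data and also learns the inner learning rate):

```python
import torch
import torch.nn.functional as F

real_x = torch.randn(256, 20)                           # stand-in for the large real dataset
real_y = (real_x.sum(dim=1) > 0).long()
distilled_x = torch.randn(10, 20, requires_grad=True)   # 10 synthetic examples to be learned
distilled_y = torch.arange(10) % 2                      # fixed synthetic labels
lr_inner = 0.1
opt_outer = torch.optim.SGD([distilled_x], lr=0.01)

def loss_fn(w, x, y):
    return F.cross_entropy(x @ w, y)

for step in range(100):
    w0 = torch.zeros(20, 2, requires_grad=True)          # fresh "student" initialisation
    # Inner step: one gradient step on the synthetic data only.
    inner_loss = loss_fn(w0, distilled_x, distilled_y)
    grad_w, = torch.autograd.grad(inner_loss, w0, create_graph=True)
    w1 = w0 - lr_inner * grad_w
    # Outer step: the resulting weights should fit the *real* data,
    # so back-propagate that loss into the synthetic examples.
    outer_loss = loss_fn(w1, real_x, real_y)
    opt_outer.zero_grad()
    outer_loss.backward()
    opt_outer.step()
```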

Václav Ostrožlík - Self-Normalizing Neural Networks
Neural networks have been gaining success in many domains in recent years. However, it seems that the main stage belongs to convolutional and recurrent networks, while feed-forward neural networks (FNNs) are left behind in the beginner tutorial sections. FNNs that perform well are typically shallow and therefore cannot exploit many levels of abstract representations. The authors of this paper propose Self-Normalizing Neural Networks, which allow training deeper feed-forward networks with a couple of new techniques.
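The key new ingredient is the SELU activation; here is a minimal sketch with the constants from the paper (the quick numerical check below is my own illustration):

```python
import numpy as np

# SELU constants from Klambauer et al. (2017), chosen so that activations are
# driven towards zero mean and unit variance layer after layer.
ALPHA = 1.6732632423543772
SCALE = 1.0507009873554805

def selu(x):
    return SCALE * np.where(x > 0, x, ALPHA * (np.exp(x) - 1.0))

# Quick check on standard-normal pre-activations: mean stays near 0, variance near 1.
x = np.random.randn(100_000)
print(selu(x).mean(), selu(x).var())
```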

[29. November, 11:00] Let's Talk ML

Matúš Žilinec - BERT: New state of the art in NLP (slides)
Last month, Google caused a sensation by setting a new state of the art on 11 natural language processing tasks with a single model, BERT, a transformer designed to pre-train deep word representations by conditioning on both left and right word context in all layers. The pre-trained model can be fine-tuned in a few hours with just one additional layer for a wide range of NLP tasks, such as question answering, language inference or named entity recognition. I will talk about how this works, how it is different and why we should care.
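To show how little task-specific machinery that fine-tuning needs, here is a hypothetical PyTorch sketch of such a classification head; the `encoder` argument stands in for the pre-trained transformer and the hidden size of 768 (BERT-base) is an assumption, not the actual BERT code:

```python
import torch.nn as nn

class SentenceClassifier(nn.Module):
    """Pre-trained encoder plus one task-specific linear layer."""
    def __init__(self, encoder, hidden_size=768, num_labels=2):
        super().__init__()
        self.encoder = encoder                                 # pre-trained transformer (stand-in)
        self.classifier = nn.Linear(hidden_size, num_labels)   # the single added layer

    def forward(self, token_ids):
        hidden_states = self.encoder(token_ids)    # (batch, seq_len, hidden)
        cls_vector = hidden_states[:, 0]           # representation of the leading [CLS] token
        return self.classifier(cls_vector)         # task logits, trained during fine-tuning
```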

[22. November, 11:00] Let's Talk ML

Pablo Maldonado: If RL is broken, how could we fix it? 

The appeal of the reinforcement learning framework is its full generality, which is unfortunately also its curse: real-life, interesting problems are only solvable through graduate student descent, a high-quality simulator and obscene amounts of computing power. In this talk I will show some of these issues and ideas for tackling them.

Filip Paulů: Online learning (slides)

In nature, in the world, and in life alike, things change. If these things are being analyzed, that change has to be taken into account. Nevertheless, this fact is often neglected, which leads to critical errors. We will discuss how to approach a gradually changing environment from the point of view of AI and machine learning, and what the latest methods are.
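As a tiny illustration of the online setting (my own sketch, not from the talk): a linear model updated one example at a time with SGD, so it can keep tracking a target that slowly drifts:

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(5)      # current model
lr = 0.05

for t in range(10_000):
    true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0]) * (1 + 0.0001 * t)  # slowly drifting target
    x = rng.standard_normal(5)
    y = true_w @ x + 0.1 * rng.standard_normal()   # one incoming labelled example
    pred = w @ x
    w += lr * (y - pred) * x                       # one SGD step per example, no stored dataset
```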
