Summer Camp 2018

No summer plans yet?

Keen to work on interesting projects in Artificial Intelligence, Machine Learning, Data Mining and Big Data?

Well, this is the right place to be!

DataLab introduces Summer Camp 2018 for students interested in Artificial Intelligence, Data Mining, Machine Learning and Big Data.

Pre-register here.

Important Dates:

29.6.2018 at 10:00 - Summer Camp 2018 kick-off meeting

Read more: Summer Camp 2018

[22. November, 11:00] Let's Talk ML

Pablo Maldonado: If RL is broken, how could we fix it? 

The appeal of the reinforcement learning framework is its full generality, which is unfortunately also its curse: real-life, interesting problems are solvable only through graduate student descent, a high-quality simulator and obscene amounts of computing power. In this talk I will present some of these issues and ideas for tackling them.

Filip Paulů: Online learning (slides)

As in nature and the wider world, things in life change. Any analysis of such things must take this change into account; however, this fact is often neglected, which leads to critical errors. We will discuss how to approach a gradually changing environment from the perspective of AI and machine learning, and cover the latest methods.
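As a taste of the topic, here is a minimal sketch (not from the talk itself) of online learning under concept drift: a linear model updated one sample at a time keeps tracking a target whose true weights slowly drift, whereas a model fitted once and frozen would accumulate error. All names and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Online gradient descent on a linear model whose true weights drift over
# time -- a toy illustration of learning in a gradually changing environment.
w_true = np.array([1.0, -2.0])  # the "environment" (unknown to the learner)
w = np.zeros(2)                 # the learner's current estimate
lr = 0.1
errors = []

for t in range(2000):
    w_true += 0.001 * rng.standard_normal(2)  # the environment slowly drifts
    x = rng.standard_normal(2)
    y = w_true @ x                            # observed target
    err = w @ x - y
    errors.append(err ** 2)
    w -= lr * err * x  # single-sample gradient step on the squared loss

# Because the model never stops updating, recent errors stay small even
# though the target has wandered away from its initial value.
recent_mse = float(np.mean(errors[-200:]))
```

The per-sample update is the whole trick: the learning rate trades off noise sensitivity against how fast the model can follow the drift.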

[29. November, 11:00] Let's Talk ML

Matúš Žilinec - BERT: New state of the art in NLP (slides)
Last month, Google caused a sensation by setting a new state of the art on 11 natural language processing tasks with a single model, BERT: a transformer designed to pre-train deep word representations by conditioning on both left and right context in all layers. The pre-trained model can be fine-tuned in a few hours, with just one additional layer, for a wide range of NLP tasks such as question answering, language inference or named entity recognition. I will talk about how this works, how it differs from previous approaches and why we should care.
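The "one additional layer" recipe can be sketched in a few lines. In this toy version a random embedding-table encoder stands in for the real pre-trained BERT (an assumption purely for self-containedness), the encoder stays frozen, and only a linear softmax classifier on top of the sentence representation is trained:

```python
import numpy as np

rng = np.random.default_rng(42)

# Fine-tuning sketch: the pre-trained encoder is kept fixed and a single
# task-specific linear layer is trained on top of its [CLS]-style sentence
# vector. The encoder below is a MOCK (random embeddings + mean pooling),
# standing in for real BERT -- an illustrative assumption only.
HIDDEN, NUM_CLASSES, VOCAB = 32, 2, 1000
EMB = rng.standard_normal((VOCAB, HIDDEN)) * 0.1  # frozen "pre-trained" weights

def encode(token_ids):
    """Mock sentence representation (stands in for BERT's [CLS] vector)."""
    return EMB[token_ids].mean(axis=0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Synthetic classification task over random token sequences.
sentences = [rng.integers(0, VOCAB, size=8) for _ in range(100)]
labels = [int(encode(s)[0] > 0) for s in sentences]  # separable by construction

W = np.zeros((HIDDEN, NUM_CLASSES))  # the one additional layer
b = np.zeros(NUM_CLASSES)

def avg_loss():
    return float(np.mean([-np.log(softmax(encode(s) @ W + b)[y] + 1e-12)
                          for s, y in zip(sentences, labels)]))

loss_before = avg_loss()
lr = 0.5
for _ in range(30):  # a few passes suffice when only one layer is trained
    for s, y in zip(sentences, labels):
        p = softmax(encode(s) @ W + b)
        g = p.copy()
        g[y] -= 1.0                       # gradient of cross-entropy wrt logits
        W -= lr * np.outer(encode(s), g)  # only the new layer is updated
        b -= lr * g
loss_after = avg_loss()
```

In the real setting the encoder weights are usually updated too, just with a small learning rate; the point here is only how little task-specific machinery sits on top of the pre-trained model.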

[1. November, 11:00] Let's Talk ML

Markéta Jůzlová: Neural Architecture Search with Reinforcement Learning (slides)
The paper uses reinforcement learning to automatically generate the architecture of a neural network for a given task. An architecture is represented as a variable-length string, and a controller network is used to generate such strings. The controller is trained to assign higher probability to architectures that achieve better validation accuracy.
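The controller loop can be sketched with a toy REINFORCE setup. Here the "controller network" is just a table of per-position logits, and a made-up reward function stands in for actually training a child network and measuring validation accuracy (both are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy controller in the spirit of the paper: it emits a variable-length
# architecture string token by token, each sampled architecture receives a
# "validation accuracy" reward, and REINFORCE shifts the controller's
# probabilities toward high-reward architectures.
TOKENS = ["conv3", "conv5", "pool", "STOP"]
MAX_LEN = 6
logits = np.zeros((MAX_LEN, len(TOKENS)))  # per-position controller "network"

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def sample_architecture():
    arch, chosen = [], []
    for pos in range(MAX_LEN):
        tok = rng.choice(len(TOKENS), p=softmax(logits[pos]))
        chosen.append((pos, tok))
        if TOKENS[tok] == "STOP":
            break
        arch.append(TOKENS[tok])
    return arch, chosen

def mock_accuracy(arch):
    # Stand-in for training a child network: pretend exactly three conv
    # layers (and no pooling) is best.
    convs = sum(t.startswith("conv") for t in arch)
    return max(0.0, 1.0 - 0.25 * abs(convs - 3) - 0.1 * arch.count("pool"))

lr, baseline = 0.2, 0.0
rewards = []
for step in range(2000):
    arch, chosen = sample_architecture()
    reward = mock_accuracy(arch)
    rewards.append(reward)
    baseline = 0.9 * baseline + 0.1 * reward      # moving-average baseline
    for pos, tok in chosen:                       # REINFORCE update
        p = softmax(logits[pos])
        grad = -p
        grad[tok] += 1.0                          # d log pi / d logits
        logits[pos] += lr * (reward - baseline) * grad
```

The baseline subtraction is the same variance-reduction trick the paper relies on; without it the updates are far noisier.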

Petr Nevyhoštěný: Learning to Rank Applied to Fault Localization (slides)
Debugging is a very time-consuming and tedious task that makes up a large part of the software development lifecycle. Several techniques already exist that aim to identify root causes of failures automatically. I will explain some of these techniques and describe how they can be combined using a learning-to-rank algorithm.
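The combination step can be sketched as pairwise learning to rank. In this toy version (synthetic data, not the talk's actual benchmark) each program element gets a feature vector of suspiciousness scores from several existing techniques, and a linear combination is learned with a RankNet-style logistic pairwise loss so that faulty elements rank above non-faulty ones:

```python
import numpy as np

rng = np.random.default_rng(7)

# Pairwise learning-to-rank sketch for fault localization. Features are
# mocked suspiciousness scores from several hypothetical techniques; the
# faulty element's scores are inflated so a ranking is learnable.
N_FEATURES = 4

def make_bug(n_elements=30):
    """Synthetic 'program': one faulty element with inflated scores."""
    X = rng.random((n_elements, N_FEATURES))
    faulty = rng.integers(n_elements)
    X[faulty] += 0.8  # the underlying techniques score the faulty element higher
    return X, faulty

w = np.zeros(N_FEATURES)
lr = 0.1
train = [make_bug() for _ in range(50)]

for _ in range(20):
    for X, faulty in train:
        for j in range(len(X)):            # pairs (faulty, non-faulty)
            if j == faulty:
                continue
            diff = (X[faulty] - X[j]) @ w  # score margin of the faulty element
            g = -1.0 / (1.0 + np.exp(diff))  # gradient of log(1 + e^(-diff))
            w -= lr * g * (X[faulty] - X[j])

def rank_of_faulty(X, faulty):
    order = np.argsort(-(X @ w))
    return int(np.where(order == faulty)[0][0]) + 1  # 1 = ranked first

test_bugs = [make_bug() for _ in range(30)]
mean_rank = float(np.mean([rank_of_faulty(*bug) for bug in test_bugs]))
```

The output of interest is the rank of the faulty element in the combined suspiciousness ordering; the closer to 1, the less code a developer has to inspect.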

[18. October, 11:00] Let's Talk ML

Radek Bartyzal - HOP-Rec: High-Order Proximity for Implicit Recommendation (pdf) (slides)

Two of the most popular approaches to recommender systems are factorization models and graph-based models. This recent paper introduces a method that combines the two.

Ondra Bíža - Learning Synergies between Pushing and Grasping with Self-supervised Deep Reinforcement Learning (pdf) (slides)

Skilled robotic manipulation benefits from complex synergies between pushing and grasping actions: pushing can help rearrange cluttered objects to make space for arms and fingers; likewise, grasping can help displace objects to make pushing movements more precise and collision-free. This paper presents a policy able to learn pushing motions that enable future grasps, while learning grasps that can leverage past pushes.


Copyright (c) Data Science Laboratory @ FIT CTU 2014–2016. All rights reserved.