Let's Talk ML

Let's Talk ML is a regular meeting of people interested in machine learning and related topics.
We meet at 1pm every even Wednesday in the Datalab at CTU FIT (room A-1347).

The format is usually two short talks followed by discussion. A talk can be about anything machine-learning related: your own research, an interesting ML paper, or an exciting new method.

Sign up to our mailing list to get notifications about new Let's Talk events.

Event dates:

  • [1 November, 11:00] Let's Talk ML

    Markéta Jůzlová: Neural Architecture Search with Reinforcement Learning (slides)
    The paper uses reinforcement learning to automatically generate the architecture of a neural network for a given task. The architecture is represented as a variable-length string, and a controller network is used to generate such strings. The controller is trained with reinforcement learning to assign higher probability to architectures that achieve better validation accuracy.
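    The controller idea can be sketched with a tiny REINFORCE loop. Everything below is a toy illustration: the search space, the target architecture, and `validation_accuracy` (which stands in for actually training and evaluating a child network) are all made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy search space: each slot of the "architecture string"
# picks one option from a small list.
options = [["conv3", "conv5", "pool"], ["relu", "tanh"], ["32", "64", "128"]]
logits = [np.zeros(len(o)) for o in options]  # controller parameters

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def validation_accuracy(arch):
    # Stand-in for training a child network: fraction of slots matching a
    # made-up "best" architecture.
    target = ("conv5", "relu", "64")
    return sum(a == t for a, t in zip(arch, target)) / 3.0

baseline, lr = 0.0, 0.5
for step in range(500):
    probs = [softmax(l) for l in logits]
    choices = [rng.choice(len(p), p=p) for p in probs]
    arch = tuple(opts[c] for opts, c in zip(options, choices))
    reward = validation_accuracy(arch)
    baseline = 0.9 * baseline + 0.1 * reward      # moving-average baseline
    for l, p, c in zip(logits, probs, choices):   # REINFORCE update
        grad = -p
        grad[c] += 1.0        # gradient of log-probability of the choice
        l += lr * (reward - baseline) * grad

best = tuple(opts[int(np.argmax(l))] for opts, l in zip(options, logits))
```

    After training, the controller's most likely choices recover the architecture that scored best under the toy reward.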

    Petr Nevyhoštěný: Learning to Rank Applied onto Fault Localization (slides)
    Debugging is a very time-consuming and tedious task that also makes up a large part of the software development lifecycle. Several techniques already exist that aim to identify the root causes of failures automatically. I will explain some of these techniques and describe how they can be combined using a learning-to-rank algorithm.
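    The combination step can be sketched as pairwise learning to rank over per-statement feature vectors. The data below is entirely synthetic (the feature scores, which stand in for the outputs of several fault localization techniques, and the index of the faulty statement are made up), and the hinge update is just one simple way to learn ranking weights.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: each program statement gets a score from several
# (hypothetical) fault localization techniques. The faulty statement is
# constructed to dominate in every feature so the example is well behaved.
n_statements, n_features = 50, 3
X = rng.uniform(0.0, 1.0, size=(n_statements, n_features))
faulty = 7
X[faulty] = X.max(axis=0) + 0.5

w = np.zeros(n_features)
lr = 0.1
# Pairwise learning to rank with a hinge loss: the faulty statement
# should outrank every other statement by a margin.
for epoch in range(100):
    for j in range(n_statements):
        if j == faulty:
            continue
        if (X[faulty] - X[j]) @ w < 1.0:
            w += lr * (X[faulty] - X[j])

ranking = np.argsort(-(X @ w))  # statements ordered by learned suspiciousness
```

    In a real setting the ranking would be trained across many faulty programs and then applied to rank statements of an unseen one.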

  • [18 October, 11:00] Let's Talk ML

    Radek Bartyzal - HOP-Rec: High-Order Proximity for Implicit Recommendation (pdf) (slides)

    Two of the most popular approaches to recommender systems are based on factorization and graph-based models. This recent paper introduces a method combining both of these approaches.

    Ondra Bíža - Learning Synergies between Pushing and Grasping with Self-supervised Deep Reinforcement Learning (pdf) (slides)

    Skilled robotic manipulation benefits from complex synergies between pushing and grasping actions: pushing can help rearrange cluttered objects to make space for arms and fingers; likewise, grasping can help displace objects to make pushing movements more precise and collision-free. This paper presents a policy able to learn pushing motions that enable future grasps, while learning grasps that can leverage past pushes.

  • [2 May, 13:00] Let's Talk ML

    Petr Nevyhoštěný - Deep Clustering with Convolutional Autoencoders (slides)

    Deep clustering uses deep neural networks to learn feature representations suitable for clustering tasks. This paper proposes a clustering algorithm that uses convolutional autoencoders to learn embedded features, and then incorporates a clustering-oriented loss on the embedded features to jointly perform feature refinement and cluster assignment.

    Václav Ostrožlík - Learning to learn by gradient descent by gradient descent (slides)

    We have seen many significant improvements from replacing hand-designed features with learned ones. Optimization algorithms, however, are still designed by hand. In this work, the authors show that the design of an optimization algorithm can itself be cast as a learning problem, yielding better-performing, specialized optimizers.
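    A toy way to see the "optimizer as a learning problem" idea: parameterize the update rule and choose its parameter by how well it optimizes a family of sampled tasks. The paper learns a much richer LSTM-based update rule by backpropagating through the unrolled optimizer; the quadratic tasks and the single scalar step size below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(2)

# The "optimizer" is the parameterized rule theta <- theta - eta * grad.
# We meta-learn eta by measuring the loss *after* running the optimizer
# on a family of random quadratic tasks.
def unrolled_loss(eta, steps=20, n_tasks=50):
    total = 0.0
    for _ in range(n_tasks):
        a = rng.uniform(0.5, 2.0)       # task: minimize a * theta^2
        theta = rng.normal()
        for _ in range(steps):
            grad = 2 * a * theta
            theta = theta - eta * grad  # the learned update rule
        total += a * theta**2           # final loss after optimization
    return total / n_tasks

# "Meta-training": pick the optimizer parameter that minimizes the
# post-optimization loss across tasks (a simple grid search here).
candidates = np.linspace(0.01, 1.0, 100)
losses = [unrolled_loss(eta) for eta in candidates]
best_eta = candidates[int(np.argmin(losses))]
```

    The learned step size lands in the stable region for the whole task family, rather than being hand-tuned per task.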

  • [18 April, 13:00] Let's Talk ML

    Radek Bartyzal - Adversarial Network Compression (slides)

    Knowledge distillation is a method of training a smaller student model using a previously trained, larger teacher model to achieve better classification accuracy than a normally trained student model.
    This paper presents a new approach to knowledge distillation that leverages recent advances in Generative Adversarial Networks.
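    For reference, the classical distillation objective that this line of work builds on (not the paper's adversarial variant) mixes a hard-label cross-entropy with a KL term between temperature-softened teacher and student outputs. The temperature and mixing weight below are typical but arbitrary choices.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Classic knowledge distillation objective: cross-entropy with the hard
    labels plus KL divergence between temperature-softened teacher and
    student distributions (scaled by T^2 to keep gradient magnitudes)."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    soft = np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student)), axis=-1)
    hard = -np.log(softmax(student_logits)[np.arange(len(labels)), labels])
    return float(np.mean(alpha * T**2 * soft + (1 - alpha) * hard))

# A student that agrees with the teacher incurs a lower loss than one
# that contradicts it.
logits = np.array([[2.0, 0.5, -1.0]])
labels = np.array([0])
agree = distillation_loss(logits, logits.copy(), labels)
disagree = distillation_loss(np.array([[-1.0, 0.5, 2.0]]), logits, labels)
```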

    Ondra Podsztavek - World Models (slides)

    The paper explores building generative neural network models of popular reinforcement learning environments. A world model can be trained quickly, in an unsupervised manner, to learn a compressed spatial and temporal representation of the environment. Using features extracted from the world model as inputs, a very compact and simple policy can be trained to solve the required task. The agent can even be trained entirely inside its own hallucinated dream generated by the world model, and the policy can then be transferred back into the actual environment.

  • [4 April, 13:00] Let's Talk ML

    Ondra Bíža - Learning to grasp objects with convolutional networks (slides)

    Precise grasping of objects is essential in many applications of robotics, such as assisting patients with motor impairments. I will compare two approaches to learning how to grasp: Google's large-scale venture and a much smaller project carried out at Northeastern University, which nevertheless achieved competitive results.

    Václav Ostrožlík - Differentiable Neural Computer (slides)

    The Differentiable Neural Computer is a model based on a neural network controller with external memory that is able to store and navigate complex data on its own. I'll go through its architectural details, compare it with Neural Turing Machines, and show some interesting ways the model can be used.

  • [21 March, 13:00] Let's Talk ML

    Matus Zilinec - Machine text comprehension with BiDAF (slides)

    I will talk about the Bi-Directional Attention Flow network for answering natural language questions about an arbitrary paragraph. BiDAF is a multi-stage process that represents the context at different levels of granularity and uses an attention mechanism to obtain a query-aware context representation without early summarization. The model achieves state-of-the-art results on the Stanford Question Answering Dataset.

    Radek Bartyzal - Objects that sound (slides)

    A simple network architecture trained only on video achieves impressive results in localizing, within a provided frame, the objects that produce a provided sound. The paper builds on earlier work called 'Look, Listen and Learn' and adds support for cross-modal retrieval, meaning it can return an image for a provided sound and vice versa. I will present the new architecture and explain its advantages over the previous one.

  • [7 March, 13:00] Let's Talk ML

    Markéta Jůzlová - Hyperband (slides)

    Hyperband is a multi-armed bandit strategy proposed for hyper-parameter optimization of learning algorithms. Despite its conceptual simplicity, the authors report results competitive with state-of-the-art hyper-parameter optimization methods such as Bayesian optimization.
    I will describe the main principle of the method and a possible extension of it.
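    The core building block, successive halving, is easy to sketch: start many random configurations on a small budget, then repeatedly keep the better half and double their budget. The `evaluate` function below is a made-up stand-in for training with a noisy, budget-dependent loss; Hyperband proper runs several such brackets with different trade-offs between the number of configurations and the budget.

```python
import numpy as np

rng = np.random.default_rng(3)

def evaluate(config, budget):
    # Stand-in for training: loss reflects config quality, with noise that
    # shrinks as the training budget grows. The "best" config 0.7 is made up.
    quality = abs(config - 0.7)
    return quality + rng.normal(scale=0.3 / budget)

def successive_halving(n=32, min_budget=1):
    configs = list(rng.uniform(0, 1, size=n))
    budget = min_budget
    while len(configs) > 1:
        losses = [evaluate(c, budget) for c in configs]
        order = np.argsort(losses)
        configs = [configs[i] for i in order[: len(configs) // 2]]
        budget *= 2                      # survivors get twice the budget
    return configs[0]

best = successive_halving()
```

    Cheap low-budget evaluations weed out clearly bad configurations early, so most of the compute is spent on promising ones.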

    Ondra Bíža - Visualizing Deep Neural Networks (slides)

    Techniques for visualizing deep neural networks have seen significant improvements in the last year. I will explain a novel algorithm for visualizing convolutional filters and use it to analyze a deep residual network.

  • [21 February, 13:00] Let's Talk ML

    Radek Bartyzal - Born Again Neural Networks (slides)

    Knowledge distillation is a process of training a compact model (student) to approximate the results of a previously trained, more complex model (teacher).
    The authors of this paper take this idea further: they train a student of the same complexity as its teacher and find that the student surpasses the teacher in many cases. They also train students whose architecture differs from the teacher's, with interesting results.

    This will be one longer (40 min) talk in which I will also describe the relevant architectures used in the paper (DenseNet, Wide ResNet).

  • [14 December, 11:00] Let's Talk ML

    Ondra Bíža - Overcoming catastrophic forgetting in neural networks (slides)

    J. Kirkpatrick et al. (2017)
    Artificial neural networks struggle to learn multiple different tasks due to a phenomenon known as catastrophic forgetting. In my talk, I will introduce catastrophic forgetting, describe a new learning algorithm called EWC that mitigates it, and briefly mention the neurobiological principles that inspired the creation of EWC.
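    The heart of EWC is a quadratic penalty that anchors parameters important for task A (importance estimated via the Fisher information) while training on task B. The two-parameter example below is entirely illustrative: parameter 0 is important for task A and should stay near its task-A value, while parameter 1 is free to move to task B's optimum.

```python
import numpy as np

# Illustrative values, not from any real model.
theta_a = np.array([1.0, 0.0])   # parameters learned on task A
fisher = np.array([5.0, 0.1])    # per-parameter importance for task A
b = np.array([0.0, 1.0])         # optimum of task B
lam = 1.0                        # strength of the EWC penalty

# Train on task B with the EWC objective:
#   L(theta) = ||theta - b||^2 + (lam / 2) * sum_i F_i * (theta_i - theta_a_i)^2
theta = theta_a.copy()
for _ in range(500):
    grad = 2 * (theta - b) + lam * fisher * (theta - theta_a)
    theta -= 0.01 * grad
```

    Without the penalty both parameters would move all the way to task B's optimum, destroying performance on task A; with it, the important parameter barely moves.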

    Ondra Podsztavek - Deep Q-network (slides)

    Deep Q-network (DQN) is a deep reinforcement learning system that combines deep neural networks with reinforcement learning and is able to master a diverse range of Atari 2600 games with only the raw pixels and score as input. It represents a general-purpose agent able to adapt its behaviour without any human intervention.
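    The underlying update rule is ordinary Q-learning; DQN replaces the table below with a deep network and stabilizes training with experience replay and a separate target network. The chain MDP here is a made-up toy environment.

```python
import numpy as np

rng = np.random.default_rng(4)

# Tabular Q-learning on a toy chain MDP: states 0..4, actions 0 = left,
# 1 = right; reaching the rightmost state ends the episode with reward 1.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
gamma, alpha, eps = 0.9, 0.1, 0.3

for episode in range(300):
    s = 0
    while s != n_states - 1:
        # epsilon-greedy action selection
        if rng.random() < eps:
            a = int(rng.integers(n_actions))
        else:
            a = int(np.argmax(Q[s]))
        s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s2 == n_states - 1 else 0.0
        bootstrap = 0.0 if s2 == n_states - 1 else Q[s2].max()
        # TD update toward r + gamma * max_a' Q(s', a')
        Q[s, a] += alpha * (r + gamma * bootstrap - Q[s, a])
        s = s2

policy = Q.argmax(axis=1)  # greedy policy after training
```

    After training, the greedy policy heads right from every non-terminal state; DQN performs the same bootstrapped update, only on minibatches of replayed transitions.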

  • [30 November, 11:00] Let's Talk ML

    Filip Paulů - Analog Artificial Neural Networks (slides)

    The performance of the neural networks used today is ceasing to be sufficient. How can we make them faster? Is computing large and complex neural network structures on digital processors and graphics cards still the way forward? Is there another path? We'll talk about all of that.

    Václav Ostrožlík - Capsule Networks (slides)

    Geoffrey Hinton, one of the "fathers of deep learning", recently published two papers introducing a new neural network architecture called Capsule Networks. In this talk I will show how these networks work, how they can be trained, and what new possibilities they bring.

  • [16 November, 11:00] Let's Talk ML

    Tomáš Pajurek: Machine Learning infrastructure in Azure (slides)

    The talk will focus on the early stages of the ML pipeline (data ingestion, storage, preprocessing) as well as on cluster computation services. Some topics will be accompanied by hands-on examples.

    Vladimir Ananyev: Unsupervised feature selection for time series clustering (slides)

    I will present how clustering can be performed using extracted features instead of the raw time series, and discuss the problem of selecting relevant feature subsets along with some techniques that can be used for that purpose.

  • [2 November, 11:00] Let's Talk ML

    Markéta Jůzlová: Using Meta-Learning to Support Data Mining (slides)

    The paper explains the concept of meta-learning in data mining and provides an overview of meta-learning techniques. I will then describe in more detail a method that uses meta-learning to rank machine learning models for a given dataset, including an evaluation of its performance. Finally, I will briefly cover some more recent applications of meta-learning.

    Ondra Bíža: AlphaGo Zero (slides)

    The concluding paper of the three-year AlphaGo research project, which produced the world's best player of the board game Go. The authors describe a new agent, AlphaGo Zero, which starts with no knowledge of Go whatsoever and learns purely by playing against itself (self-play). After 36 hours of training, AlphaGo Zero can beat the best professional players and comes up with unique strategies. This sets it apart from previous AlphaGo agents, which learned by observing millions of moves made by professional players and played much like them.

  • [19 October, 11:00] Let's Talk ML

    Václav Ostrožlík: Understanding deep learning requires rethinking generalization (slides)

    Best paper award at ICML 2017. The authors investigate why neural networks generalize. They show on several examples that state-of-the-art networks have enough capacity to learn even a completely random dataset. This somewhat contradicts the classical view that a neural network discovers high-, mid-, and low-level features and captures the dataset through them. They further show that even various regularization methods do not have that much influence on generalization.

    Petr Nevyhoštěný: Unsupervised Audio Segmentation based on Restricted Boltzmann Machines (slides)

    This paper segments audio into homogeneous semantic parts using Conditional Restricted Boltzmann Machines, an extension of RBMs that adds a conditional probability on a vector other than the input, in this case the recent past of the audio recording.

  • [20 April, 11:00] Let's Talk ML

    Radek Bartyzal: Matrix Factorization (slides)
    Ondra Podsztavek: Transfer Learning (slides)

  • [6 April, 11:00] Let's Talk ML

    Ondřej Bíža: Recurrent Convolutional Networks (slides)
    Petr Nevyhoštěný: Restricted Boltzmann Machines (slides)

  • [23 March, 11:00] Let's Talk ML

    Place: TH:A:1347

    Markéta Jůzlová: Attacking machine learning with adversarial examples (slides)
    Veronika Maurerová: Feature extraction using CNN (slides)

  • [9 March, 11:00] Let's Talk ML

    Place: TH:A:1347

    Petr Nevyhoštěný: EU law and machine learning (slides)
    Václav Ostrožlík: Convolutional Neural Networks (slides)

  • [23 February, 11:00] Let's Talk ML

    Place: TH:A:1347

    Radek Bartyzal: Generative Adversarial Networks (slides)
    Veronika Maurerová: A Model-based Approach to Optimizing Ms. Pac-Man Game Strategies in Real Time (slides)

  • [8 December, 13:00] Let's Talk ML

    Place: T9:364

    Radek Bartyzal: t-SNE (slides)
    Václav Ostrožlík: Word2vec: A deeper look

  • [24 November, 13:00] Let's Talk ML

    Place: T9:364

    Václav Ostrožlík: Word2vec (slides)
    Petr Nevyhoštěný: Mood classification from lyrics (slides)

  • [10 November, 13:00] Let's Talk ML

    Place: T9:364

    Markéta Jůzlová: Dimensionality Reduction (slides)
    Petr Nevyhoštěný: Conditional Random Fields (slides)

  • [27 October, 13:00] Let's Talk ML

    Place: T9:364

    Veronika Maurerová: Gradient boosting machines (slides)
    Tomáš Frýda: Gaussian Processes (slides)

  • [21 September, 15:00] Let's Talk ML

    Place: TH:A:1242

    Václav Ostrožlík: WaveNet (slides)
    Veronika Maurerová: Crime Prediction (slides)

  • [31 August, 14:00] Let's Talk ML

    Place: TH:A:1242

    Radek Bartyzal: Neural Machine Translation (slides)

  • [24 August, 14:00] Let's Talk ML

    Place: TH:A:1242

    Václav Ostrožlík: Dropout (slides)
    Tomáš Frýda: Introduction to Bayesian optimization (slides, Jupyter notebook)

  • [17 August, 14:00] Let's Talk ML

    Place: TH:A:1242

    Radek Bartyzal: Why ReLU? (slides)
    Tomáš Pajurek: Event streaming and storing in Azure (slides)

  • [10 August, 13:00] Let's Talk ML

    Place: TH:A:1242

    Radek Bartyzal: Online Optimization Algorithm (slides)
    Václav Ostrožlík: Neural Style (slides)


Follow Us

Copyright (c) Data Science Laboratory @ FIT CTU 2014–2016. All rights reserved.